The checklist below provides detailed steps for partners who are setting up an integration with CaptionSync. You can see an overview of the flow for existing integrations, and then return to this checklist for detailed integration steps.
NOTE: At the end of this article you can find media samples that you can use with our caption outputs. Please open a Support ticket to request specific samples of caption outputs.
1. To develop and test an integration, you'll need to use your CaptionSync developer account. This article assumes you have already created your account with us. If not, check out our article on creating a developer account (first paragraph).
2. Log into your CaptionSync account, and enable SSH (AST-Link) by clicking Settings -> Account Features and selecting Enable my account for AST-Link.
3. The next step is to create a public/private SSH key pair on the system that you will be using to communicate with CaptionSync. We have articles that describe how to create an SSH key on Windows or on Mac/Linux/Unix. Other SSH key generation tools on Windows include Chilkat (a Windows commercial component library) and SSH.Net (a Windows .NET library). You need to generate keys of type DSA or RSA, with no passphrase.
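For example, on Mac/Linux/Unix you can generate a suitable key pair with ssh-keygen; the key type and empty passphrase below match the requirements above, and the file name is only an illustration:
# Generate a 2048-bit RSA key pair with no passphrase (-N "")
# ~/.ssh/captionsync_key is an example file name; use whatever fits your setup
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/captionsync_key
# The public key to add to your CaptionSync account is ~/.ssh/captionsync_key.pub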
4. Next, add the public key to your CaptionSync account under Settings -> SSH Keys. Wait about 5 minutes until the key shows a status of Ready.
5. An SFTP connection must be established between your system and the CaptionSync server for each transaction, using the SSH keys and no password. Once the keys are generated and installed on both systems, you can test the authentication using sftp (Linux, Mac) or a third-party SFTP application such as PuTTY, Coda, WinSCP, or one of the libraries listed above.
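As a minimal sketch, you can verify key-based authentication from Mac/Linux with the command-line sftp client; the host and account names below are placeholders for the values provided with your developer account:
# Replace <your-account> and <captionsync-host> with the values for your developer account
sftp -i ~/.ssh/captionsync_key <your-account>@<captionsync-host>
# A successful connection drops you at an "sftp>" prompt without prompting for a password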
6. AST and the integration partner should agree on which of two protocols to use: .sub (requires SFTP upload of all media files, followed by a .sub file as the manifest) or .lst (SFTP upload of a list of URLs for the media files, which CaptionSync then downloads for ingest). At this point, please open a Support ticket and indicate the XML format you will use (.sub or .lst).
7. AST and the integration partner should agree on a value for the <Source> tag, which you should then use for all submissions. This value is used on the CaptionSync system to track submissions from your platform and to check that the preferred output formats for your platform have been generated. The tag should indicate the name of your company or product, such as AlphaVideoServer. Please add this value as a comment on your existing integration support ticket. *** This value needs to be approved by AST before it can be used ***
8. AST and the integration partner should agree on standard input format(s) for the media files. While AST handles a variety of input formats, if the platform transcodes video it is generally best to agree on one or two codecs and container formats that will be ingested by AST for all customer captioning requests. A typical example would be a low-bitrate MP4 for video. If transcoding or editing is performed on the file after the original customer video is uploaded or recorded, please ensure that the file provided to AST has the same duration and timecodes as the video(s) that will ultimately be streamed to viewers. If customers have the ability to edit videos after upload/recording, the best practice is to prevent further editing after a caption request has been submitted. You can note these formats on your integration ticket as well.
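One quick way to confirm that a transcoded file still matches the streamed version is to compare durations; the ffprobe check below is only an illustration and is not part of the CaptionSync workflow:
# Print the duration (in seconds); run this on the file sent to AST and on the file
# that will be streamed to viewers, and confirm the two values match
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp4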
9. AST and the integration partner should agree on a preferred caption output format. We support several output formats, and new ones are constantly being developed and added. The most popular formats at the moment are DFXP with begin/end tags (.DFXP.XML) and WebVTT. AST has traditionally recommended DFXP when it is one of the options for the platform's players, as DFXP has more robust styling and character encoding options than SRT or WebVTT. But WebVTT is becoming the preferred caption format for captions on mobile devices, so that is also a good option. Please note your preferred output format on your integration ticket. You can have more than one output per submission (e.g. .DFXP.XML and .CLEAN.TXT). If you are using the callback method you may optionally specify a MIME type for each output type, or a macro in the URL to differentiate between the different outputs.
10. At this point you can perform a test submission, using one of our test video files below. If you are using the .lst submission format, make the video file available on your platform, include the full URL in a test .lst file and upload the .lst file to the /incoming directory of your developer account, using SFTP. If you are using the .sub protocol, upload the video file first, followed by the .sub file.
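A rough sketch of what each upload could look like from the command line is below; the host, account, and file names are placeholders, and the manifest contents themselves are described in the .sub and .lst layout articles linked at the end of this article:
# .lst protocol: upload only the manifest; CaptionSync downloads the media from the URL inside it
sftp -b - -i ~/.ssh/captionsync_key <your-account>@<captionsync-host> <<'EOF'
cd /incoming
put test_request.lst
EOF
# .sub protocol: upload the media file first, then the .sub manifest
sftp -b - -i ~/.ssh/captionsync_key <your-account>@<captionsync-host> <<'EOF'
cd /incoming
put test_video.mp4
put test_request.sub
EOF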
11. Note that there is currently some manual intervention required on our side when submitting test requests, so the output captions will not be available immediately, especially if the submission is made outside of normal business hours (Pacific time zone). However, the test files mentioned in the previous step are already transcribed, so the response should be fairly quick during business hours.
12. When output files (caption files, and optionally transcription files) are ready, you can either download them from the CaptionSync account using SFTP from the outgoing directory (using a polling method), or CaptionSync can POST the output files to a callback URL designated in the .sub or .lst file. Automatic posting using callbacks is generally preferred in recent integrations. Note that if you use the .lst protocol, we do not, by default, put the caption files in the outgoing directory.
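If you poll instead of using callbacks, a download pass might look like the sketch below; actual output file names depend on your submissions and chosen output formats, so the names here are placeholders:
# List the outgoing directory and download finished DFXP caption files
sftp -b - -i ~/.ssh/captionsync_key <your-account>@<captionsync-host> <<'EOF'
cd /outgoing
ls -l
get *.dfxp.xml
EOF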
13. If you are using the callback post method for receiving caption files, you can test your callback handler independently using the DFXP sample file, attached at the end of this article. Note that CaptionSync will hit the callback URL with a single-part "raw" POST for each output type, with the output in the body (content) of the POST. Note also that your callback handler should be able to handle subsequent updates to the caption file for a particular request. This happens when a customer requests an update to the transcript or caption file using the CaptionSync redo feature (see notes below). Below is sample code in PHP for a callback handler:
<?php
// Read the raw POST body; CaptionSync sends the caption output as the body of the POST
$postdata = file_get_contents("php://input");
// Check for a valid id value, or do a db dip to get the mapping from id to filename
$mypath = "/var/captions";  // example storage directory; adjust for your environment
$cap_fn = $mypath . "/" . basename($_GET['id']) . ".srt";  // basename() guards against path traversal
if (!$handle = fopen($cap_fn, 'w')) {
    trigger_error("Internal fopen error! Could not open file " . $cap_fn . ".");
} else {
    if (fwrite($handle, $postdata) === false)
        trigger_error("Internal fwrite error! Could not write file " . $cap_fn . ".");
    fclose($handle);
}
You can also test your callback handler and simulate AST callbacks using cURL, with something like:
curl --raw -L -X POST -H 'Content-Type: text/xml' --data-binary @CaptionSync_Sample.dfxp.xml https://yourcallbackURL.com
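In this command, --data-binary @CaptionSync_Sample.dfxp.xml sends the file's contents unmodified as the body of the POST, which mirrors the single-part raw POST that CaptionSync makes to your callback URL for each output type.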
Those are the setup steps for testing most integrations. Other points to consider include:
- It is best practice to include a "Notes to transcriber" field in your caption request submission UI. This allows the customer to include notes about the particular video that will be useful to the transcriber, for example technical terms used in the lecture, spellings of names mentioned, etc. It is also possible for customers to set up a "persistent transcriber note" for any account via the CaptionSync web interface (these notes are routed to transcribers with every submission on the account), but in many cases there are notes specific to a lecture or video that should be included at the time of submission.
- Be sure to incorporate cross-checks that prevent users from accidentally submitting duplicate caption requests. For example, before sending a file to CaptionSync, check whether there are already captions associated with the video, or whether a previous request has been submitted and you are awaiting results. There are cases where a user may need to re-submit a video, so you will want a way to reset the status to "uncaptioned", but in general you should have mechanisms in place to prevent accidental re-submission of a video that has already been sent to CaptionSync. Similarly, if you allow users to copy or clone recordings, the status of captions and caption requests should be copied to the new video as well.
- If you are using callbacks to receive caption files, your caption callback handler should be able to receive updates to the caption file for a particular request. There are several reasons for this: a) CaptionSync users may do free "redos", which allow them to update the transcription and the subsequent caption files, and b) if the first caption file is incomplete or corrupt, a replacement can be requested and delivered via a second POST of the caption file to your callback URL for that request.
- While not strictly required, it is best practice to include the <app> tag in your .sub or .lst manifests. This tag indicates whether the submission is for captioning (you want to receive a closed caption file with caption timing data back), transcription (you only need a .txt transcript), or production transcripts. If this tag is not included, CaptionSync will use the default settings on the user's CaptionSync account.
- Consider setting up a status callback handler, as described in the .sub and .lst documents. This allows you to supplement state information that you store in your database based on your own workflow actions with additional status updates provided by AST.
- While most integrations do not offer this, it is possible to allow customers to upload a transcript that corresponds to the video and then request only the synchronized caption output, rather than full transcription and captioning services. We can provide specifics on how to implement this feature if desired.
Another feature that is not typically included in integrations, but which is possible to implement via an XML integration, is a "redo" feature. We allow users to download a raw text transcript of the transcribed video or audio, make any desired corrections (for example changing spelling of proper nouns or names) and then submit the updated transcript for a free "redo" of the request. Most partners choose to have customers submit redos using the CaptionSync web interface, but it is possible to provide your own UI for this process.
Detailed information on developing the integration is available in the following articles:
- CaptionSync SFTP/XML Integration Overview
- XML .sub file Layout for SFTP Processing
- XML .lst file Layout for SFTP List Processing
Test Files:
Use the sample media files attached to this article for test submissions, and the CaptionSync_Sample.dfxp.xml file for testing your callback handler.