Bitmovin provides a cloud-based API for encoding, analytics and HTML5 player for MPEG-DASH and HLS. The premise behind the encoding service is parallel chunk-based transcoding which can scale to meet your needs.
The Bitmovin encoder is a great solution for very fast encoding of media formats suitable for web delivery. If you have, or are considering, a Vidispine API-based MAM, this blog shows how to configure the Vidispine API to use the Bitmovin encoding API.
The aim of this integration was to add Bitmovin encoding to a Vidispine MAM, so that the MAM could use it as an external transcoder.
For this integration we used several Amazon Web Services (AWS) offerings to connect the Vidispine API with the Bitmovin encoding API; similar results can be achieved with different architectures (see the 'Integration Architecture' diagram below).
This integration consists of three parts:

1. Setting up the AWS services: creating two S3 buckets, an SQS queue and a Lambda function.
2. Configuring the Bitmovin encoder: configuring storage and creating encoding profiles.
3. Adding an external transcoder to the Vidispine MAM.
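The AWS setup in part one can be sketched with boto3. The bucket and queue names below are illustrative, not the ones we used; the detail most worth noting is the queue policy that allows S3 to publish messages to SQS:

```python
import json

SOURCE_BUCKET = "vidispine-source-in"      # hypothetical name
OUTPUT_BUCKET = "vidispine-transcode-out"  # hypothetical name
QUEUE_NAME = "bitmovin-encode-jobs"        # hypothetical name

def queue_policy(queue_arn: str, bucket_name: str) -> dict:
    """Policy document allowing S3 events from the source bucket to send
    messages to the job queue."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {
                "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{bucket_name}"}
            },
        }],
    }

def create_resources():
    import boto3
    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")
    # Note: outside us-east-1, create_bucket also needs a
    # CreateBucketConfiguration with a LocationConstraint.
    s3.create_bucket(Bucket=SOURCE_BUCKET)
    s3.create_bucket(Bucket=OUTPUT_BUCKET)
    queue_url = sqs.create_queue(QueueName=QUEUE_NAME)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": json.dumps(queue_policy(queue_arn, SOURCE_BUCKET))},
    )
    return queue_arn
```

The Lambda function itself is covered in the walkthrough below.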
Vidispine Integration with Bitmovin as an External Transcoder
The diagram above ('Vidispine Integration with Bitmovin as an External Transcoder') outlines the happy path of this workflow.
1. A user triggers a job.
2. The Vidispine MAM starts a new transcode job and uploads the item's source file to the Source In Bucket.
3. Once the file has finished uploading, an S3 event is triggered and adds a message to the SQS queue.
4. The SQS message is picked up by a Lambda function, which creates the Bitmovin encoding resource, adds the media streams, sets the muxing type and starts the encoding job, all via API calls.
5. The Bitmovin encoder creates the new file and moves it to our Transcode Out Bucket.
6. The Vidispine API has been polling the Transcode Out Bucket, waiting for a file matching the regex pattern we configured.
7. Once the file from the Bitmovin encoder arrives, the Vidispine API cleans up after itself, deleting the source file from the Source In Bucket.
8. The Vidispine API then completes the transcode job, setting its status to complete.
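The S3-to-SQS hand-off described above means the Lambda function receives an SQS message whose body is the S3 event JSON. A small helper like the following (a sketch, with a hypothetical function name) pulls the uploaded object's bucket and key out of that envelope:

```python
import json

def extract_s3_objects(sqs_message_body: str) -> list:
    """Parse (bucket, key) pairs out of the S3 event JSON carried in the
    body of an SQS message."""
    event = json.loads(sqs_message_body)
    return [
        (record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
        for record in event.get("Records", [])
    ]
```

The resulting key is what the Lambda function uses to tell Bitmovin which source file to encode.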
This diagram highlights how AWS services form the core of the integration; here is a breakdown of how each service is used.
We use two S3 buckets: the first provides the Bitmovin encoder with source files, and the second receives the transcoded files from the Bitmovin encoder.
The S3 bucket used for source files also plays an important secondary role: once a file has been successfully uploaded, an S3 event is triggered. This bucket has been configured to send a message to our SQS queue.
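Wiring the source bucket to the queue amounts to one notification configuration on the bucket. A sketch with boto3 (bucket name, queue ARN and the optional suffix filter are all illustrative):

```python
def s3_to_sqs_notification(queue_arn: str, suffix: str = "") -> dict:
    """Notification configuration telling S3 to send object-created
    events for this bucket to an SQS queue."""
    queue_config = {
        "QueueArn": queue_arn,
        "Events": ["s3:ObjectCreated:*"],
    }
    if suffix:
        # Optionally restrict events to certain file extensions.
        queue_config["Filter"] = {
            "Key": {"FilterRules": [{"Name": "suffix", "Value": suffix}]}
        }
    return {"QueueConfigurations": [queue_config]}

def attach_notification(bucket: str, queue_arn: str) -> None:
    import boto3
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=s3_to_sqs_notification(queue_arn),
    )
```

Note that the queue's access policy must allow S3 to send messages, or the call to attach the notification will be rejected.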
We use an SQS queue as our job queue: whenever a file is uploaded to the source-file S3 bucket, a message is added to this queue and is consumed by a Lambda function that polls the queue for new jobs.
Our Lambda function polls the SQS queue for new messages. When it picks one up, it uses the data in that message to make the API calls required to create and start a Bitmovin encoding job.
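The shape of that Lambda function is sketched below. The endpoint path and response shape follow the Bitmovin v1 REST API, but should be verified against the current Bitmovin documentation; the payload fields and handler structure here are assumptions for illustration:

```python
import json
import os
import urllib.request

BITMOVIN_API = "https://api.bitmovin.com/v1"

def encoding_payload(source_key: str) -> dict:
    """Request body for creating an encoding resource.
    Field names are assumptions; check the Bitmovin API reference."""
    return {"name": f"vidispine-{source_key}", "cloudRegion": "AWS_EU_WEST_1"}

def bitmovin_post(path: str, payload: dict, api_key: str) -> dict:
    """POST a JSON payload to the Bitmovin API and return the parsed response."""
    req = urllib.request.Request(
        BITMOVIN_API + path,
        data=json.dumps(payload).encode(),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def handler(event, context):
    """Lambda entry point, triggered by the SQS queue."""
    api_key = os.environ["BITMOVIN_API_KEY"]
    for record in event["Records"]:                 # one SQS message per record
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            key = s3_record["s3"]["object"]["key"]
            # 1. Create the encoding resource.
            bitmovin_post("/encoding/encodings", encoding_payload(key), api_key)
            # 2. Add the media streams, set the muxing type and start the
            #    encoding job with further POSTs (payloads elided; see the
            #    Bitmovin API reference).
            ...
```

In a production version you would also handle partial failures so that SQS can redeliver messages whose encoding job was never started.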
For this integration we need to configure the Bitmovin encoder to use the two S3 storages we have created in AWS. This can be done via the Bitmovin UI or via API requests. But first we need to create an account and get a Bitmovin API key.
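If you take the API route, registering each bucket is a single request. The endpoint paths below follow the Bitmovin v1 REST API (`/encoding/inputs/s3` for the source bucket, `/encoding/outputs/s3` for the output bucket), but the field names should be double-checked against the Bitmovin documentation:

```python
import json
import urllib.request

def s3_storage_payload(bucket: str, access_key: str, secret_key: str) -> dict:
    """Request body for a Bitmovin S3 input or output resource.
    Field names are assumptions; verify against the Bitmovin API reference."""
    return {"bucketName": bucket, "accessKey": access_key, "secretKey": secret_key}

def register_storage(api_key: str, bucket: str, access_key: str,
                     secret_key: str, kind: str = "inputs") -> dict:
    """Register an S3 bucket with Bitmovin.
    kind is 'inputs' for the source bucket, 'outputs' for the transcode-out bucket."""
    req = urllib.request.Request(
        f"https://api.bitmovin.com/v1/encoding/{kind}/s3",
        data=json.dumps(s3_storage_payload(bucket, access_key, secret_key)).encode(),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The AWS credentials supplied here should belong to an IAM user scoped to just these two buckets.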
We can now trigger a transcode from the Vidispine MAM, and the Bitmovin encoder will process the file and deliver the result to our S3 bucket.
By integrating the Vidispine MAM with Bitmovin's encoder you can make use of a powerful transcoding engine without the overheads that traditionally come with running a transcoding farm. You also gain the flexibility to use the Bitmovin encoder to transcode items that have been ingested into the Vidispine MAM.
Contact us for more information or to see how we can help your Media Asset Management.