Running Bitmovin encoding on GCP

People who run video streaming platforms typically have four major concerns:

  1. How do we get content online faster?
  2. How do we reduce video transcoding costs while retaining quality?
  3. How do we reduce CDN egress bills?
  4. How do we simplify workflows and integrations?

Now, the problem

Traditionally, transcoding solutions have required a lot of compute, memory and storage. Even then, most encoding services run far slower than real time. For example, in a recent client engagement we found that their existing encoding service takes more than an hour to transcode an hour-long video. Encoding time can't really be reduced without adding more CPU and memory, and with the current setup it's becoming increasingly hard to strike that balance.

Why Bitmovin?

Bitmovin is a cloud-based encoding solution with a plethora of video encoding features, and it's battle-tested to take us to this promised land. It takes a video, splits it into chunks, and encodes the segments simultaneously across multiple instances. With this approach, a video can be encoded at speeds of up to 100x real time. It also offers features such as per-title encoding and a highly customisable encoder (for example, you don't need to re-transcode the entire video just to add one more audio track or quality profile).
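As a concrete illustration of the per-title point above, here is a rough sketch of how a per-title H.264 ladder might be requested with the Bitmovin Python SDK. The model names (StartEncodingRequest, PerTitle, H264PerTitleConfiguration, AutoRepresentation) reflect my reading of the SDK, and the API key and encoding ID are placeholders; treat it as a sketch rather than a verified recipe.

```python
# Rough sketch: asking Bitmovin to build a per-title H.264 ladder.
# Model/class names are based on my reading of the Bitmovin Python SDK
# (bitmovin-api-sdk); verify against the current SDK docs before use.
from bitmovin_api_sdk import (
    BitmovinApi,
    StartEncodingRequest,
    PerTitle,
    H264PerTitleConfiguration,
    AutoRepresentation,
)

api = BitmovinApi(api_key="YOUR_BITMOVIN_API_KEY")  # placeholder

# Let the encoder pick the rendition ladder per asset instead of a fixed ladder.
start_request = StartEncodingRequest(
    per_title=PerTitle(
        h264_configuration=H264PerTitleConfiguration(
            auto_representations=AutoRepresentation()
        )
    )
)

# 'encoding_id' refers to an encoding whose streams were created as
# per-title templates (stream setup not shown here).
api.encoding.encodings.start(
    encoding_id="YOUR_ENCODING_ID",
    start_encoding_request=start_request,
)
```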

Three flavours

  1. Bitmovin SaaS offering — run your encode jobs using the Bitmovin APIs, with the infrastructure and platform elements completely abstracted from the end user. You just need to key in the input and output bucket locations.
  2. Bitmovin Infrastructure for on-prem workloads — Bitmovin makes use of your on-prem infrastructure, deploying its SDKs and running jobs inside containers on top of Kubernetes.
  3. Bitmovin Infrastructure for Public Cloud — this is what we will cover in detail in this post. Bitmovin deploys its SDK in the customer's GCP environment and utilises the cloud's elasticity and pay-for-what-you-use constructs.

In all three aforementioned approaches, the core Bitmovin APIs remain the same; it is the infrastructure, the 'where to run', that differs. In this post, we're going to create Bitmovin Infrastructure on Google Cloud Platform.
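For the public-cloud flavour specifically, the GCP project is registered with Bitmovin once as encoding infrastructure, and each encoding then points at it instead of the managed SaaS pool. A rough sketch with the Python SDK might look like the following; the GceAccount fields, the InfrastructureSettings model and the region enum are my assumptions from the SDK reference, and all credentials are placeholders.

```python
# Rough sketch: registering your own GCP project as Bitmovin encoding
# infrastructure and pointing an encoding at it. Class/field names follow
# my reading of the Bitmovin Python SDK; double-check against the SDK docs.
from bitmovin_api_sdk import (
    BitmovinApi,
    GceAccount,
    Encoding,
    InfrastructureSettings,
    CloudRegion,
)

api = BitmovinApi(api_key="YOUR_BITMOVIN_API_KEY")  # placeholder

# One-time: hand Bitmovin a service account that may create VMs in your project.
gce = api.encoding.infrastructure.gce.create(
    gce_account=GceAccount(
        name="gce-encoding-infra",
        service_account_email="bitmovin@your-project.iam.gserviceaccount.com",
        private_key="-----BEGIN PRIVATE KEY-----\n...",  # placeholder
        project_id="your-gcp-project-id",
    )
)

# Per job: reference the registered infrastructure instead of Bitmovin's SaaS pool.
encoding = api.encoding.encodings.create(
    encoding=Encoding(
        name="vod-encoding-on-own-gcp",
        infrastructure=InfrastructureSettings(
            infrastructure_id=gce.id,
            cloud_region=CloudRegion.GOOGLE_EUROPE_WEST_1,  # assumed region enum
        ),
    )
)
```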

Why GCP?

  • Bidding-less Preemptible VMs: Bitmovin leverages the elasticity and on-demand nature of the public cloud to provide speed and cost efficiency. With GCP's preemptible VMs (PVMs), instances can last up to 24 hours but cost up to 80% less. More importantly, there's no bidding involved, so you don't have to worry about what you'll end up paying: the prices are fixed (see the sketch after this list).
  • Fast VM provisioning time: VM start-up time plays a key role in the transcoding workflow. Typically there are several jobs waiting in the queue, and the overall encoding speed is tightly coupled to how quickly VMs can be provisioned. One of GCP's biggest differentiators is its VM provisioning time.
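Bitmovin's SDK provisions these PVMs for you in this setup, but to make the bidding-less point concrete, here is a minimal sketch (using the google-cloud-compute Python client, with placeholder project, zone and image values) of what requesting a preemptible n1-standard-8 looks like: preemptibility is a single scheduling flag, and there is no bid price to supply anywhere.

```python
# Minimal sketch: a preemptible n1-standard-8 via the google-cloud-compute
# client. Note there is no bid price anywhere; preemptible is just a flag.
from google.cloud import compute_v1

PROJECT = "your-gcp-project-id"   # placeholder
ZONE = "europe-west1-b"           # placeholder

instance = compute_v1.Instance(
    name="pvm-demo",
    machine_type=f"zones/{ZONE}/machineTypes/n1-standard-8",
    scheduling=compute_v1.Scheduling(preemptible=True),  # fixed discount, no bidding
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
            ),
        )
    ],
    network_interfaces=[
        compute_v1.NetworkInterface(network="global/networks/default")
    ],
)

operation = compute_v1.InstancesClient().insert(
    project=PROJECT, zone=ZONE, instance_resource=instance
)
operation.result()  # wait for the VM to be provisioned
```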

Test Case

We’re going to ingest an 80-minute video (.mov) into the Bitmovin-on-GCP workflow and measure how long it takes to encode. We’re also going to stick with the following VOD preset configurations and see how long the encoding takes under these conditions.
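To make the test setup concrete, here is a rough sketch of how the input/output buckets and one rung of a VOD-preset H.264 ladder might be declared with the Bitmovin Python SDK. The bucket names, HMAC keys and bitrate are placeholders, and the class and enum names (GcsInput, GcsOutput, H264VideoConfiguration, PresetConfiguration.VOD_STANDARD) reflect my reading of the SDK; the configuration actually used in the test lives in the repository linked below.

```python
# Rough sketch: GCS input/output plus one H.264 rendition using a VOD preset.
# Class and enum names are based on my reading of the Bitmovin Python SDK;
# bucket names, keys and bitrates are placeholders.
from bitmovin_api_sdk import (
    BitmovinApi,
    GcsInput,
    GcsOutput,
    H264VideoConfiguration,
    PresetConfiguration,
)

api = BitmovinApi(api_key="YOUR_BITMOVIN_API_KEY")  # placeholder

gcs_input = api.encoding.inputs.gcs.create(
    gcs_input=GcsInput(
        name="source-bucket",
        bucket_name="my-source-bucket",        # holds the 80-minute .mov
        access_key="GCS_HMAC_ACCESS_KEY",      # placeholder
        secret_key="GCS_HMAC_SECRET_KEY",      # placeholder
    )
)

gcs_output = api.encoding.outputs.gcs.create(
    gcs_output=GcsOutput(
        name="destination-bucket",
        bucket_name="my-destination-bucket",
        access_key="GCS_HMAC_ACCESS_KEY",
        secret_key="GCS_HMAC_SECRET_KEY",
    )
)

# One rung of the rendition ladder, tuned with the VOD preset.
video_config = api.encoding.configurations.video.h264.create(
    h264_video_configuration=H264VideoConfiguration(
        name="h264-1080p",
        height=1080,
        bitrate=4_500_000,  # illustrative value
        preset_configuration=PresetConfiguration.VOD_STANDARD,
    )
)
```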

Project Repository on GitHub for running Bitmovin on GCP (for the impatient!)

This repository on GitHub contains the sample code depicted in the diagram below. There is also a companion tutorial in the README.md covering the prerequisites and the steps for creating this workflow, along with architecture diagrams.

Results

  • Bitmovin improved the encoding speed to roughly 4x real time: the 80-minute (1 hour 20 min) video was encoded in 20 mins
  • One n1-standard-8 VM and 15 n1-standard-8 PVMs were spun up to complete this job
  • The encoding speed was 1382 fps
  • The total output size was 14.65 GB

Bitmovin Dashboard

Bitmovin offers a ready-made dashboard to track and monitor encoding jobs. It’s a comprehensive dashboard with a lot of useful information, updated in near real time.

The four main stages of an encoding job (Queue, Download, Analysis and Encoding) can also be viewed for every job from the dashboard.
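The same lifecycle information can also be pulled programmatically. Below is a minimal polling sketch, assuming the SDK's encodings.status() call returns a Task with status and progress fields; the API key and encoding ID are placeholders.

```python
# Minimal sketch: polling an encoding job instead of watching the dashboard.
# Assumes the Bitmovin Python SDK exposes encodings.status() returning a Task
# with 'status' and 'progress' fields; verify against the current SDK docs.
import time

from bitmovin_api_sdk import BitmovinApi, Status

api = BitmovinApi(api_key="YOUR_BITMOVIN_API_KEY")  # placeholder
encoding_id = "YOUR_ENCODING_ID"                    # placeholder

while True:
    task = api.encoding.encodings.status(encoding_id=encoding_id)
    print(f"status={task.status}, progress={task.progress}")
    if task.status in (Status.FINISHED, Status.ERROR):
        break
    time.sleep(30)  # check again in half a minute
```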

In the next blog post, we will see how to set up a workflow automation architecture using a serverless watch-folder method on GCP.