Decoding The Incredible Scalability Of Disney+Hotstar App - TechAhead


Decoding The Incredible Scalability Of Disney+Hotstar App: System Structure, Concurrency & More

Jordan Smith

August 29, 2022   |   4980 Views


On August 28, 2022, when India played Pakistan in the Asia Cup T20 championship in Dubai, more than 1.3 crore (13 million) people were concurrently watching the match on the Disney+Hotstar OTT app worldwide.

Impressive as 10 million+ concurrent viewers on a single mobile app with a global audience is, it is not even the record. That stands at 25.3 million concurrent viewers on the Disney+Hotstar app, set in 2019 during the India vs New Zealand World Cup semi-final.

A world record, because active viewership on a single mobile app at this scale and magnitude has seldom happened.

How did Disney+Hotstar manage this feat? 

In this blog, we will discuss how Disney+Hotstar ensures this incredible scalability of the app by understanding and decoding its system architecture, concurrency, scalability models and more. 

But first, a brief introduction to the world’s second-biggest, and India’s #1 OTT platform: Disney+Hotstar.

Disney+Hotstar: An introduction

The journey started with the launch of the Hotstar app, in 2015, which was developed by Star India. The 2015 Cricket World Cup was about to start, along with the 2015 IPL tournament, and Star network wanted to fully capitalize on the insane viewership.

Hotstar generated a massive 345 million views for the World Cup and 200 million views for the IPL tournament.

This was before the launch of Jio in 2016, when watching TV series and matches on mobile was still at a nascent stage. The foundation was set.

The introduction of Reliance Jio’s telecom network changed Internet usage in India, and this changed everything for Hotstar.

By 2017, Hotstar had 300 million downloads, making it the world's second-biggest OTT app, behind only Netflix.

In 2019, Hotstar was acquired by Disney, as part of their 21st Century Fox acquisition, and the app was rebranded to Disney+Hotstar.

As of now, Disney+Hotstar has 400 million+ downloads, with a whopping user base of 300 million monthly active users and 100 million daily active users. Almost 1 billion minutes of video are watched on the app daily.

The 2019 IPL tournament was watched by 267 million Disney+Hotstar users, and in 2020, a record 400 billion minutes of content was viewed during the IPL matches.

In India, Disney+Hotstar has a very intense focus on regional content, as more than 60% of the content is viewed in local languages. This is why they support 8 Indian languages, with plans to expand this number. The same strategy is visible in other countries as well, with a deep focus on regional content alongside regular English content.

They have 100,000+ hours of content for viewers, and India accounts for approximately 40% of their overall user base.

As of now, Disney+Hotstar is available in India, the US, the UK, Indonesia, Malaysia, and Thailand; a Vietnam launch is planned for 2023.

Decoding the scalability of the Disney+Hotstar app: powerful system architecture

We will now walk through the architecture of the Disney+Hotstar app and decode how it achieves such powerful scalability on a consistent basis.

Backend of Disney+Hotstar

The team behind Disney+Hotstar has ensured a powerful backend by choosing Amazon Web Services or AWS for their hosting, while their CDN partner is Akamai.

Almost 100% of their traffic is served by EC2 instances, while S3 is deployed as the object store.

At the same time, they use a mixture of on-demand and spot instances to keep costs under control. For spot instances, they use machine learning and data analytics algorithms, which drastically reduce the overall expense of managing the backend.

AWS EMR clusters are the service they use to process double-digit terabytes of data on a daily basis. Note that AWS EMR is a managed Hadoop framework for processing massive datasets across EC2 instances.

In some cases, they also use frameworks such as Apache Spark, Presto, and HBase alongside AWS EMR.
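To make the EMR setup above concrete, here is a minimal sketch of how such a cluster could be defined with boto3. The cluster name, instance types, node counts, and roles are all illustrative assumptions, not Disney+Hotstar's actual configuration; the request is only built here, not submitted.

```python
# Hypothetical sketch of defining an EMR cluster (managed Hadoop/Spark)
# with boto3. All names, instance types, and counts are assumptions.
import json

def build_emr_request(name: str, core_nodes: int) -> dict:
    """Build a boto3 EMR run_job_flow request for a batch-processing cluster."""
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.9.0",
        "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}, {"Name": "Presto"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "c4.4xlarge",
                 "InstanceCount": 1},
                # Core nodes on spot instances to keep processing costs down,
                # mirroring the on-demand/spot mix described above.
                {"InstanceRole": "CORE", "InstanceType": "c4.8xlarge",
                 "InstanceCount": core_nodes, "Market": "SPOT"},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,  # tear down after the job
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

request = build_emr_request("daily-analytics", core_nodes=20)
# The real submission would be: boto3.client("emr").run_job_flow(**request)
print(json.dumps(request["Instances"]["InstanceGroups"][1], indent=2))
```

The spot-market core group is where the cost savings described above would come from; only the master node stays on-demand.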

The core of scalability: infrastructure setup

Here are some interesting details about their infrastructure setup for load testing, just before an important event such as IPL matches.

They have 500+ AWS compute instances, c4.4xlarge or c4.8xlarge, running at about 75% utilization.

A c4.4xlarge instance typically has 30 GB of RAM; a c4.8xlarge, 60 GB.

The entire Disney+Hotstar infrastructure setup has 16 TB of RAM and 8,000 CPU cores, with a peak data-transfer speed of 32 Gbps. This is the scale of operations that lets millions of users concurrently access live streaming on the app.

Note that C4 instances are compute-optimized, built for CPU-intensive workloads at a low price-per-compute ratio. With C4 instances, the app gets high networking performance and optimized storage performance at no additional cost.
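The fleet figures above can be sanity-checked with some back-of-the-envelope arithmetic. Per AWS specs, a c4.4xlarge has 16 vCPUs and 30 GB of RAM, and a c4.8xlarge has 36 vCPUs and 60 GB; the all-c4.4xlarge mix below is a simplifying assumption.

```python
# Back-of-the-envelope check of the fleet figures, assuming
# (hypothetically) an all-c4.4xlarge fleet of 500 instances.
INSTANCE_SPECS = {
    "c4.4xlarge": {"vcpus": 16, "ram_gb": 30},
    "c4.8xlarge": {"vcpus": 36, "ram_gb": 60},
}

def fleet_capacity(counts: dict) -> tuple:
    """Total (vCPU cores, RAM in GB) for a fleet given per-type counts."""
    cores = sum(INSTANCE_SPECS[t]["vcpus"] * n for t, n in counts.items())
    ram = sum(INSTANCE_SPECS[t]["ram_gb"] * n for t, n in counts.items())
    return cores, ram

cores, ram_gb = fleet_capacity({"c4.4xlarge": 500})
print(cores, ram_gb)  # 8000 15000
```

500 such instances yield exactly the quoted 8,000 cores and roughly 15 TB of RAM, in line with the ~16 TB figure once a share of c4.8xlarge nodes is mixed in.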

Disney+Hotstar uses these Android components to build a powerful client infrastructure (and to keep the design loosely coupled for more flexibility):

  • ViewModel: For communicating with the network layer and filling the final result in LiveData.
  • Room
  • LifeCycleObserver
  • RxJava 2
  • Dagger 2 and Dagger Android
  • AutoValue
  • Glide 4
  • Gson
  • Retrofit 2 + okhttp 3
  • Chuck Interceptor: For swift and easy debugging of all network requests directly on the device, even when it is not connected to a debugging machine.

How does Disney+Hotstar ensure seamless scalability?

There are essentially two models for ensuring seamless scalability: traffic-based and ladder-based.

In traffic-based scaling, the tech team simply adds new servers and infrastructure to the pool as the number of requests processed by the system grows.

Ladder-based scaling is chosen when the details and nature of the incoming load are not clear in advance. For such cases, Disney+Hotstar's tech team has pre-defined infrastructure ladders per million concurrent users.

As concurrency crosses each threshold, the next ladder of infrastructure is added.

As of now, the Disney+Hotstar app keeps a concurrency buffer of 2 million concurrent users, which is fully utilized during peak events such as World Cup matches or IPL tournaments.

If the number of users goes beyond this concurrency level, it takes 90 seconds to add new infrastructure to the pool, plus another 74 seconds for the container and the application to start.

To handle this time lag, the team maintains a pre-provisioned buffer rather than relying on reactive auto-scaling, which has proven to be the better option at this scale.
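A quick calculation shows why the pre-provisioned buffer matters: the lag figures (90 s + 74 s) come from the text, while the arrival rate of new viewers is an illustrative assumption.

```python
# Why pre-provisioning beats reactive auto-scaling here: new capacity
# takes ~90 s to join the pool plus ~74 s for container/app startup.
# This estimates how many users could arrive during that window.
PROVISION_S = 90   # seconds to add infrastructure to the pool (from text)
STARTUP_S = 74     # seconds for container + application start (from text)

def users_during_lag(ramp_per_sec: int) -> int:
    """Users arriving while reactive scaling is still catching up."""
    return (PROVISION_S + STARTUP_S) * ramp_per_sec

# e.g. if a dramatic wicket pulls in 10,000 new viewers per second:
print(users_during_lag(10_000))  # 1640000
```

At an assumed 10,000 new viewers per second, about 1.64 million users would arrive before reactive capacity came online, which is roughly the size of the 2 million-user buffer kept warm in advance.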

The team also has an in-built dashboard called Infradashboard, which helps them make smart decisions based on concurrency levels and prediction models of new users during an important event.

By using Fragments, the team behind Disney+Hotstar has taken modularity to the next level.

Here are some of the features that a typical page holds:

  • Player
  • Vertically and horizontally scrolling lists that display other content; the type of data shown and the UI of these lists vary with the content type.
  • Watch and Play, Emojis.
  • Heatmap and Key Moments.
  • Different player controllers: Live, Ads, VoD (episodes, movies, etc.)
  • Different ad formats
  • Nudge asking the user to log in
  • Nudge asking the user to pay for All Live Sports
  • Chromecast
  • Content Description
  • Error View and more

Deploying intelligent client for seamless performance

When response latency rises for the application client and the backend is overwhelmed with new requests, established protocols absorb the sudden surge.

For instance, in such cases the intelligent client deliberately increases the time interval between subsequent requests, giving the backend some respite.

For end-users, caching and intelligent protocols ensure that this intentional time lag is imperceptible, so the user experience is not hampered.
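The intelligent-client behavior described above can be sketched as exponential backoff with jitter, a standard technique for spreading out retries. The intervals and caps below are illustrative assumptions, not Hotstar's actual values.

```python
# Sketch of the "intelligent client": when the backend signals pressure,
# the client stretches its polling interval using exponential backoff
# with jitter, so millions of clients don't retry in lockstep.
import random

BASE_INTERVAL_S = 2.0    # assumed normal polling interval
MAX_INTERVAL_S = 60.0    # assumed cap; never back off beyond this

def next_interval(consecutive_slow_responses: int) -> float:
    """Polling interval after N consecutive slow/overloaded responses."""
    if consecutive_slow_responses == 0:
        return BASE_INTERVAL_S
    backoff = min(MAX_INTERVAL_S,
                  BASE_INTERVAL_S * 2 ** consecutive_slow_responses)
    # Jitter spreads clients out so they don't stampede the backend together.
    return random.uniform(BASE_INTERVAL_S, backoff)

for n in range(5):
    print(f"after {n} slow responses: poll again in {next_interval(n):.1f}s")
```

Because the extra seconds between polls are hidden behind cached responses, the user never notices the client quietly easing off the backend.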

Besides, the Infradashboard continuously observes and reports every severe error and fatal exception occurring across millions of devices, which are either rectified in real time or handled by a retry mechanism to ensure seamless performance.

This was just the tip of the iceberg!

If you wish to know more about how Disney+Hotstar operates (its system architecture, database architecture, network protocols, and more) and wish to launch a similar app, you can connect with our team and explore the possibilities.

With more than 13 years of experience in accelerating business agility & stimulating digital transformation for startups, enterprises, and SMEs, TechAhead is a pioneer in this space.

Book an appointment with our team, and find out why some of the biggest and best-known global brands have chosen us for their digital and mobile transformation.

Written by

Jordan Smith

Jordan Smith is a Marketing Specialist and Technical Content writer at TechAhead, a leading mobile app development company. TechAhead specializes in building both native and cross-platform mobile apps and provides your company with the best solution that drives success.
