Hotstar's Tech Infrastructure for Mass Live-Streaming

  • Dec 26, 2025
  • 13 min read

Executive Summary

Hotstar (now Disney+ Hotstar) is India's leading video streaming platform that has set multiple global records for concurrent viewership during live sporting events. The platform's technical infrastructure has been designed to handle unprecedented scale, particularly during marquee events like the Indian Premier League (IPL) and ICC Cricket World Cup. This case study examines Hotstar's publicly disclosed technical architecture, engineering decisions, and infrastructure evolution based on verified sources including company statements, executive interviews, and technical presentations at industry conferences.


MarkHub24

Background and Market Context

Hotstar was launched in February 2015 by Novi Digital Entertainment, a wholly-owned subsidiary of Star India. According to a statement by Ajit Mohan, former CEO of Hotstar, the platform was conceived to address the challenge of bringing premium video content to Indian audiences across diverse network conditions and device capabilities.


In March 2019, The Walt Disney Company completed its acquisition of 21st Century Fox's entertainment assets, which included Star India and Hotstar. Following this acquisition, the platform was rebranded as Disney+ Hotstar in April 2020, according to official company announcements.


As reported by Economic Times in April 2019, Hotstar claimed to have over 300 million monthly active users at the time of Disney's acquisition, making it one of the largest streaming platforms globally by user base. The platform offers a mix of live sports, Indian entertainment content, and international shows and movies.


Technical Challenges at Scale


Record-Breaking Concurrent Viewership

Hotstar has consistently set global records for concurrent viewership on a single streaming platform. According to a blog post published on Hotstar's engineering website in May 2019, the platform handled 18.6 million concurrent viewers during the IPL 2019 match between Chennai Super Kings and Mumbai Indians on April 3, 2019. This surpassed their previous record of 10.3 million concurrent viewers set during an IPL 2018 match.


During the ICC Cricket World Cup 2019 semi-final match between India and New Zealand, Hotstar achieved 25.3 million concurrent viewers, as confirmed in a company statement reported by Business Standard in July 2019. This was subsequently recognized by the International Academy of Digital Arts and Sciences with a Webby Award.


In October 2022, Hotstar set a new global record during the India vs Pakistan T20 World Cup match, reaching 5.9 billion minutes of watch time with concurrent viewership in the tens of millions, according to a Disney announcement reported by Variety.


Infrastructure Scale Requirements

The technical challenge Hotstar faces is fundamentally different from traditional video streaming platforms. According to Varun Gupta, Senior Director of Engineering at Hotstar, in his presentation at QCon San Francisco 2018, the platform needs to handle massive traffic spikes that occur within minutes when major sporting events begin, particularly cricket matches involving the Indian national team.


As Gupta explained in the same presentation, Hotstar's traffic pattern is characterized by: extreme spikiness (traffic can increase 20x within 10 minutes of a match starting), sustained high load for 3-4 hours during a match, and the need to maintain consistent quality of experience across millions of concurrent streams.


Technical Architecture


Multi-CDN Strategy

According to technical presentations by Hotstar's engineering team at industry conferences, the platform employs a multi-CDN (Content Delivery Network) strategy to ensure content availability and minimize latency across India's diverse geography and network infrastructure.


In his presentation at HasGeek's Fifth Elephant conference in 2019, Jatin Kumar, Engineering Lead at Hotstar, disclosed that the platform works with multiple CDN providers simultaneously. During high-traffic events, content is distributed across these CDNs, and the platform dynamically routes user requests to the optimal CDN based on real-time performance metrics including latency, packet loss, and CDN capacity.
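The dynamic routing described above can be sketched as a scoring function over per-CDN health metrics. This is an illustrative Python sketch, not Hotstar's actual algorithm; the CDN names, metric values, and weighting formula are all assumptions.

```python
import random

# Hypothetical per-CDN health metrics; the providers and numbers are
# illustrative, not Hotstar's actual partners or measurements.
cdn_metrics = {
    "cdn_a": {"latency_ms": 45, "packet_loss": 0.001, "capacity_used": 0.60},
    "cdn_b": {"latency_ms": 70, "packet_loss": 0.004, "capacity_used": 0.35},
    "cdn_c": {"latency_ms": 55, "packet_loss": 0.020, "capacity_used": 0.80},
}

def score(m):
    """Lower is better: penalize latency, packet loss, and near-saturation."""
    return m["latency_ms"] + 10_000 * m["packet_loss"] + 100 * m["capacity_used"] ** 2

def route_request():
    """Send most traffic to the healthiest CDN, with a small spillover to
    the runner-up so secondary CDNs stay warm for failover."""
    ranked = sorted(cdn_metrics, key=lambda name: score(cdn_metrics[name]))
    return ranked[0] if random.random() < 0.9 else ranked[1]

print(route_request())
```

Keeping a trickle of traffic on secondary CDNs is a common multi-CDN practice: if the primary degrades mid-match, failover traffic does not hit a cold cache.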


Adaptive Bitrate Streaming and Video Encoding

Hotstar uses adaptive bitrate streaming technology to adjust video quality in real-time based on available bandwidth. According to the company's engineering blog, the platform prepares multiple renditions of each video stream at different bitrates and resolutions.


In a 2017 interview with YourStory, Varun Gupta stated that Hotstar's video encoding pipeline generates up to 10-12 different quality levels for each piece of content, ranging from as low as 200 kbps for 2G networks to 4-5 Mbps for high-definition viewing on broadband connections.


The platform uses the HTTP Live Streaming (HLS) protocol, which was confirmed in technical documentation and presentations by Hotstar's engineering team. HLS allows the client device to dynamically switch between quality levels without buffering, based on real-time assessment of network conditions.
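The client-side switching logic at the heart of HLS can be sketched as follows. The bitrate ladder spans the roughly 200 kbps to 5 Mbps range cited earlier, but the exact rendition values and the safety factor are assumptions, not Hotstar's actual ladder.

```python
# Hypothetical bitrate ladder spanning ~200 kbps (2G) to ~5 Mbps (HD);
# the specific rungs are illustrative.
LADDER_KBPS = [200, 400, 800, 1200, 1800, 2500, 3500, 5000]

def pick_rendition(measured_kbps, safety=0.8):
    """Pick the highest rendition that fits within a safety fraction of
    measured throughput, as an adaptive HLS client typically does."""
    budget = measured_kbps * safety
    eligible = [b for b in LADDER_KBPS if b <= budget]
    return max(eligible) if eligible else LADDER_KBPS[0]

assert pick_rendition(3000) == 1800   # 3000 * 0.8 = 2400 budget
assert pick_rendition(100) == 200     # below ladder: take the floor
```

The safety margin absorbs throughput measurement noise, trading a little quality for fewer mid-stream downswitches and rebuffers.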


Server and Cloud Infrastructure

Hotstar's infrastructure evolved significantly between its launch and its record-breaking events. According to Varun Gupta's presentation at AWS re:Invent 2017, Hotstar initially ran its infrastructure primarily on Amazon Web Services (AWS), leveraging services including EC2 for compute, S3 for storage, and CloudFront as one of multiple CDN solutions.


In conference presentations following the IPL 2019 season, Gupta disclosed that Hotstar's infrastructure at peak included over 30,000 compute cores running on AWS. The platform used AWS Auto Scaling to dynamically provision and de-provision servers based on real-time demand.
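A scaling target of this kind can be sketched as a simple control loop: compute the fleet size demand requires, with headroom, and cap how fast the fleet grows per evaluation cycle. The per-instance capacity, headroom factor, and growth cap below are assumptions for illustration, not Hotstar's actual tuning.

```python
import math

def desired_instances(current, concurrents, per_instance_capacity=5000,
                      headroom=1.3, max_step=2.0):
    """Target fleet size for the observed concurrent viewers, with 30%
    headroom and at most a doubling per scaling cycle (all illustrative)."""
    target = math.ceil(concurrents * headroom / per_instance_capacity)
    return min(target, math.ceil(current * max_step))

# A sudden spike: the fleet doubles each cycle until it meets demand.
fleet, demand = 500, 10_000_000
while fleet < desired_instances(fleet, demand):
    fleet = desired_instances(fleet, demand)
print(fleet)  # stabilizes at 2600 instances for 10M viewers
```

The growth cap matters in practice: cloud APIs and warm-up times limit how fast new capacity becomes useful, so the ramp must start before demand arrives, which is where the predictive scaling discussed later comes in.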


For the 18.6 million concurrent viewers milestone, Hotstar published a blog post detailing that the platform processed over 1.5 billion requests per minute at peak load, with the infrastructure handling approximately 40 million API calls per minute.


Data Processing and Analytics

Real-time analytics is crucial to Hotstar's operations. According to the company's engineering blog, the platform processes billions of events daily to monitor user experience, detect issues, and optimize content delivery.


In a 2019 interview with Analytics India Magazine, Zubin Pooniwala, Hotstar's Vice President of Engineering, stated that the platform uses Apache Kafka for real-time event streaming and Apache Flink for stream processing. The analytics pipeline processes user interaction data, video quality metrics, and CDN performance data in real-time.


The data infrastructure, as described in technical presentations, ingests over 200 billion events per day during major live events. This data is used for real-time decision making, including dynamic CDN routing, bitrate optimization, and capacity planning.
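The core of such a pipeline is windowed aggregation over an event stream. Below is a minimal pure-Python stand-in for the kind of tumbling-window count a Flink job would run over a Kafka topic; the event names are hypothetical, and the real pipeline is distributed, stateful, and fault-tolerant in ways this sketch is not.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Group (timestamp_s, metric) events into fixed, non-overlapping
    windows and count occurrences per (window, metric) pair."""
    counts = defaultdict(int)
    for ts, metric in events:
        counts[(ts // window_s, metric)] += 1
    return dict(counts)

events = [(3, "rebuffer"), (10, "play"), (65, "play"), (70, "rebuffer")]
print(tumbling_window_counts(events))
# window 0 (0-59s): one rebuffer, one play; window 1 (60-119s): the same
```

Per-minute rebuffer counts like these are exactly the signal that feeds dynamic CDN routing: a spike in rebuffers attributed to one CDN can trigger traffic shifting within a window or two.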


Edge Computing and Optimization

To reduce latency and improve user experience, Hotstar has implemented edge computing capabilities. According to technical presentations by the engineering team, the platform deploys lightweight processing nodes at CDN edge locations to handle certain computational tasks closer to end users.


In his presentation at the 2019 GIDS (Great Indian Developer Summit), Gaurav Kamboj from Hotstar's engineering team explained that edge nodes perform functions including authentication token validation, user context resolution, and initial quality level selection. This reduces the round-trip time to origin servers and improves initial playback latency.
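Token validation at the edge works because a signed token can be checked statelessly, with no round trip to origin. The sketch below uses an HMAC scheme for illustration; the token format, secret handling, and field names are assumptions, not Hotstar's actual implementation.

```python
import hmac
import hashlib

SECRET = b"edge-shared-secret"  # illustrative; real keys are managed and rotated

def make_token(user_id: str, expires_at: int) -> str:
    """Origin-side: sign the user id and expiry into a bearer token."""
    msg = f"{user_id}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{expires_at}:{sig}"

def validate_at_edge(token: str, now: int) -> bool:
    """Edge-side: verify signature and expiry locally, no origin call."""
    user_id, expires_at, sig = token.rsplit(":", 2)
    msg = f"{user_id}:{expires_at}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < int(expires_at)

token = make_token("user42", expires_at=2_000_000_000)
assert validate_at_edge(token, now=1_700_000_000)
```

At tens of millions of concurrent viewers, eliminating even one authentication round trip to origin per playback start removes a substantial load source and shaves startup latency.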


Live Stream Latency Management

Managing latency in live sports streaming presents unique challenges. According to Varun Gupta's statements in multiple interviews, Hotstar operates with approximately 30-40 seconds of latency behind the broadcast feed. This latency is a deliberate trade-off to enable buffering and adaptive bitrate streaming, which are essential for maintaining stream stability across millions of concurrent viewers.


The platform has implemented what Gupta described in a 2019 ETtech interview as "glass-to-glass latency" optimization, which measures the time from when an action occurs on the field to when it appears on a user's screen. Reducing this latency while maintaining stream quality and stability has been an ongoing engineering focus.
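The 30-40 second figure falls out of simple arithmetic on segmented HLS delivery. The segment duration, buffer depth, and stage latencies below are assumed typical values for illustration, not disclosed Hotstar numbers.

```python
def glass_to_glass_s(encode_s, segment_s, segments_buffered, cdn_s, decode_s=0.5):
    """Back-of-envelope latency budget for segmented HLS delivery.
    The client buffer dominates: with 6 s segments and a 4-segment
    buffer, buffering alone contributes ~24 s."""
    return encode_s + segment_s * segments_buffered + cdn_s + decode_s

print(glass_to_glass_s(encode_s=4, segment_s=6, segments_buffered=4, cdn_s=2))
# ~30.5 s, consistent with the 30-40 s range described above
```

This makes the trade-off concrete: shrinking segments or buffer depth cuts latency directly, but leaves less runway to absorb throughput dips before playback stalls.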


Technical Innovations and Engineering Practices


Chaos Engineering

Hotstar has publicly discussed its adoption of chaos engineering practices to ensure infrastructure resilience. In a blog post on the Hotstar engineering website, Mehul Kumar, Senior Engineering Manager, described how the team conducts deliberate failure experiments in production during non-critical times.


According to Kumar's account, these experiments include randomly terminating server instances, simulating CDN failures, and introducing network latency to test the platform's ability to gracefully degrade and recover. This practice helped the team identify and fix potential failure modes before they could impact users during high-stakes live events.
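The instance-termination experiments described above need guardrails before they touch production. Here is a minimal sketch of one such experiment with an abort condition; the kill fraction and survivor minimum are illustrative, not Hotstar's actual parameters.

```python
import random

def run_chaos_experiment(instances, kill_fraction=0.1, min_survivors=3,
                         rng=random.Random(0)):
    """Terminate a random fraction of instances, but abort if doing so
    would leave fewer than a safe minimum running (the guardrail)."""
    to_kill = max(1, int(len(instances) * kill_fraction))
    if len(instances) - to_kill < min_survivors:
        return instances, []  # guardrail triggered: skip the experiment
    victims = rng.sample(instances, to_kill)
    survivors = [i for i in instances if i not in victims]
    return survivors, victims

fleet = [f"i-{n:03d}" for n in range(20)]
survivors, killed = run_chaos_experiment(fleet)
print(len(survivors), killed)
```

Running this during non-critical windows, as the blog post describes, verifies that load balancers drain the terminated instances and auto-scaling replaces them before the next high-stakes event.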


Load Testing and Capacity Planning

Before major events like IPL or World Cup matches, Hotstar conducts extensive load testing. In an interview with The Ken in 2019, Varun Gupta revealed that the team builds detailed models to forecast peak concurrent viewership based on factors including team matchups, tournament stage, and time of day.


According to Gupta's statements at industry conferences, Hotstar conducts load tests simulating expected peak traffic plus a significant buffer (typically 30-50% above forecast). These tests help identify bottlenecks in the infrastructure stack and validate that auto-scaling mechanisms will respond appropriately.
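Translating that planning into a load-test profile is straightforward. The sketch below builds a target from forecast plus buffer, then shapes it into the spiky ramp-and-hold pattern described earlier; the ramp and hold durations are assumptions chosen to match the "20x within minutes, sustained for hours" description.

```python
def load_test_target(forecast_peak, buffer_fraction=0.4):
    """Simulated peak: forecast plus a buffer in the 30-50% range
    described above (0.4 used here as a midpoint)."""
    return int(forecast_peak * (1 + buffer_fraction))

def spike_profile(baseline, peak, ramp_minutes=10, hold_minutes=180):
    """Per-minute concurrency targets: a linear ramp to peak over ten
    minutes, then a multi-hour hold mirroring a match."""
    ramp = [baseline + (peak - baseline) * m / ramp_minutes
            for m in range(ramp_minutes + 1)]
    return ramp + [peak] * hold_minutes

peak = load_test_target(20_000_000)      # forecast 20M -> test at 28M
profile = spike_profile(1_000_000, peak)
print(peak, len(profile))
```

Shaping the test this way matters: a system that survives a gradual ramp to peak can still fall over when the same peak arrives in ten minutes, because auto-scaling, cache warm-up, and connection churn all behave differently under a steep ramp.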


Client Application Optimization

The Hotstar mobile and web applications have been optimized for diverse device capabilities and network conditions prevalent in India. According to a presentation by the product engineering team at a 2018 developer conference, the Android application is designed to run smoothly on devices with as little as 1GB of RAM and older Android versions.


In an interview with Medianama in 2017, Hotstar's engineering team disclosed that the application implements aggressive caching strategies, prefetching of content, and optimized video player implementations to minimize buffering and improve startup times, particularly on slower networks.


AI and Machine Learning for Quality Optimization

Hotstar has incorporated machine learning into various aspects of its infrastructure. According to a 2019 interview with Zubin Pooniwala published in Analytics India Magazine, the platform uses ML models to predict optimal initial bitrate selection based on user context including device type, historical network performance, and location.


The platform also employs ML for predictive scaling, where models forecast traffic patterns and automatically provision infrastructure ahead of anticipated demand spikes. Pooniwala stated that this approach has helped reduce the response time for scaling operations from minutes to seconds.
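To make the initial-bitrate idea concrete, here is a deliberately simple stand-in: predict from the historical median for the user's context. This is not Hotstar's model; a real system would use richer features and a trained model rather than a lookup table, and the contexts and numbers below are invented.

```python
from statistics import median

# Hypothetical history of sustained throughput (kbps) per context.
history = {
    ("android", "4g", "mumbai"): [1800, 2200, 1500, 2000],
    ("android", "2g", "patna"): [180, 220, 150],
}

def initial_bitrate_kbps(device, network, city, default=800):
    """Start playback at the median throughput previously seen for this
    context, rather than probing upward from a conservative floor."""
    samples = history.get((device, network, city))
    return median(samples) if samples else default

assert initial_bitrate_kbps("android", "4g", "mumbai") == 1900
```

Even this crude predictor illustrates the payoff: starting near the sustainable bitrate avoids both the slow quality climb of a conservative start and the early rebuffer of an optimistic one.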


Operational Challenges and Incidents


Managing Unpredictable Demand

Cricket viewership in India is highly unpredictable and event-dependent. According to statements by Hotstar executives in various interviews, a match between India and Pakistan can attract 3-4 times more viewers than a match between other teams, even in the same tournament.


This unpredictability requires conservative capacity planning with significant headroom. In his QCon presentation, Varun Gupta noted that Hotstar provisions infrastructure for peak scenarios but optimizes costs by rapidly scaling down after events conclude.


Network Infrastructure Limitations

India's telecommunications infrastructure presents unique challenges for video streaming. According to multiple statements by Hotstar executives, network quality varies significantly across regions, with many users on 3G or even 2G networks, and mobile data being the primary mode of access for a majority of users.


In a 2018 interview with Factor Daily, Ajit Mohan, then CEO of Hotstar, stated that approximately 90% of Hotstar's traffic comes from mobile devices, with mobile data accounting for the vast majority. This necessitates aggressive optimization for cellular network conditions.


Public Incidents and Learnings

While Hotstar has successfully handled most major events, the platform has experienced some issues. During the India vs Australia ODI match in March 2019, some users reported buffering and quality issues, as covered by Medianama. In response to this incident, Hotstar's engineering team published a blog post acknowledging the issues and explaining that an unexpected spike in simultaneous API requests from a specific client version had overwhelmed certain backend services.


The transparency around this incident, including detailed technical explanations, reflected what Varun Gupta described in a subsequent interview as a culture of learning from failures and sharing learnings with the broader engineering community.


Business and Technology Alignment


Advertising Technology Integration

Hotstar operates a freemium model where a significant portion of content is ad-supported. According to the company's public statements, the advertising technology infrastructure must scale proportionally with viewer traffic during live events.


In an interview with Campaign India in 2019, Gaurav Gandhi, Vice President of Growth and Monetization at Hotstar, explained that the platform's ad tech stack delivers targeted advertisements at massive scale, with ad decisioning and delivery happening in real-time for millions of concurrent users.


The ad infrastructure, as described in technical presentations, must integrate seamlessly with the video streaming infrastructure without introducing latency or impacting user experience. This requires careful coordination between content delivery, ad decisioning, and client application logic.


Premium Subscription (VIP) Services

In addition to ad-supported content, Hotstar offers premium subscription tiers (branded as VIP and Premium at different times). According to company statements reported by Economic Times, premium subscribers receive benefits including early access to content, higher video quality options, and ad-free viewing.


From a technical perspective, this requires infrastructure to handle differentiated quality of service. According to technical presentations, premium subscribers are prioritized for higher bitrate streams when network and server capacity allows, and the platform's quality selection algorithms account for subscription tier when making bitrate decisions.
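Tier-aware quality selection composes naturally with adaptive bitrate: the tier sets a ceiling, and the network sets the rest. The tier names and caps below are assumptions for illustration, not Hotstar's actual policy.

```python
TIER_CAP_KBPS = {"free": 1200, "vip": 2500, "premium": 5000}  # illustrative

def max_bitrate_kbps(tier, measured_capacity_kbps):
    """Cap the ABR ladder first by subscription tier, then by what the
    network can actually sustain; the lower bound wins."""
    cap = TIER_CAP_KBPS.get(tier, TIER_CAP_KBPS["free"])
    return min(cap, measured_capacity_kbps)

assert max_bitrate_kbps("premium", 3000) == 3000  # network-limited
assert max_bitrate_kbps("free", 3000) == 1200     # tier-limited
```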


Verified Metrics and Achievements

Based on publicly disclosed information:

  • 18.6 million concurrent viewers during IPL 2019 (Hotstar engineering blog, May 2019)

  • 25.3 million concurrent viewers during ICC World Cup 2019 semi-final (Business Standard, July 2019)

  • Over 30,000 compute cores during IPL 2019 season (conference presentations by Hotstar engineering)

  • Over 1.5 billion requests per minute at peak during 18.6M concurrent record (Hotstar engineering blog)

  • Over 200 billion events per day processed by analytics pipeline during major events (technical presentations)

  • 300 million monthly active users claimed at time of Disney acquisition (Economic Times, April 2019)


No verified information is publicly available on specific infrastructure costs, revenue figures, profit margins, or detailed financial metrics related to the technical infrastructure.


Technology Stack (Based on Public Disclosures)

Based on various technical presentations and blog posts by Hotstar's engineering team:

  • Cloud Infrastructure: Amazon Web Services (AWS) - confirmed in multiple presentations

  • CDN: Multi-CDN strategy with multiple providers - disclosed in technical presentations

  • Video Streaming Protocol: HTTP Live Streaming (HLS) - confirmed in technical documentation

  • Stream Processing: Apache Kafka, Apache Flink - disclosed in Analytics India Magazine interview (2019)

  • Programming Languages: No comprehensive list publicly available, but engineering blog posts reference Go, Python, and Java

  • Monitoring and Observability: Custom solutions built internally - mentioned in presentations but specific tools not disclosed


Limitations of Available Information

Several aspects of Hotstar's technical infrastructure remain undisclosed or are not verifiable through public sources:


  1. Specific CDN partners and distribution: While a multi-CDN strategy is confirmed, the specific CDN providers and traffic distribution methodology have not been publicly detailed.

  2. Detailed cost structure: Infrastructure costs, cost per stream, cost per concurrent user, and return on infrastructure investment have not been publicly disclosed.

  3. Engineering team size and structure: The size of the engineering team, organizational structure, and team allocation across different technical domains have not been comprehensively disclosed.

  4. Specific encoding parameters: While the use of multiple bitrate renditions is confirmed, specific encoding parameters, codec choices, and video processing pipeline details have not been fully disclosed.

  5. Database and storage architecture: Limited information is publicly available about database choices, data storage strategies, and data retention policies.

  6. Security and DRM implementation: Content protection, digital rights management (DRM), and security infrastructure details have not been publicly detailed.

  7. International expansion infrastructure: Post-rebranding to Disney+ Hotstar and expansion to other markets, specific technical adaptations for international markets have not been comprehensively disclosed.

  8. Financial metrics: User acquisition costs, subscription revenue, advertising revenue, and infrastructure ROI are not publicly available for the technical infrastructure specifically.


Key Lessons and Implications


Engineering for Extreme Scale and Spikiness

Hotstar's experience demonstrates the technical requirements for handling extreme, schedule-driven traffic spikes in emerging markets: the timing of a spike is known from the match calendar, but its magnitude is not. Unlike platforms with steady usage patterns, Hotstar must provision for 20x traffic increases within minutes, sustain that load for hours, and then scale down rapidly.


The platform's architecture emphasizes horizontal scalability, auto-scaling automation, and multi-layered redundancy. According to statements from Hotstar's engineering leadership, every component of the stack from video encoding to API servers to databases must be designed to scale horizontally without bottlenecks.


Optimization for Diverse Network Conditions

Hotstar's aggressive optimization for poor network conditions reflects the reality of internet infrastructure in India and similar emerging markets. The platform's ability to deliver acceptable video quality on 2G networks, while also serving 4K streams on fiber connections, requires sophisticated adaptive bitrate algorithms and efficient encoding.


This approach has broader applicability for any service targeting emerging markets where network infrastructure is heterogeneous and often bandwidth-constrained.


Operational Excellence and Resilience Engineering

Hotstar's public adoption of chaos engineering, extensive load testing, and transparent post-incident analyses reflects a mature approach to infrastructure reliability. The willingness to conduct failure experiments in production, and to publicly share learnings from incidents, suggests organizational commitment to operational excellence.


For platforms where availability during specific time windows is critical (sports, elections, major events), such operational rigor is essential.


Real-Time Data Processing at Scale

The platform's analytics infrastructure, processing over 200 billion events per day during major events, demonstrates the importance of real-time data processing for operational decision-making at scale. This data enables dynamic CDN routing, quality optimization, and rapid incident detection and response.


Trade-offs Between Latency and Stability

Hotstar's deliberate acceptance of 30-40 seconds of latency in exchange for stream stability and adaptive bitrate capabilities represents a conscious engineering trade-off. For the majority of users, stable playback without buffering is more valuable than minimal latency, particularly when network conditions are variable.


This contrasts with approaches taken by platforms in markets with more reliable infrastructure, where lower latency is feasible and expected.


Cost Optimization Through Dynamic Scaling

The ability to rapidly scale infrastructure up and down is not just an availability requirement but also a cost optimization strategy. According to statements by Hotstar's engineering team, the platform runs at significantly reduced capacity outside of live events, scaling up only when needed.


For platforms with highly spiky traffic patterns, the cost difference between always-provisioned peak capacity and dynamic scaling can be substantial, though specific cost savings have not been publicly disclosed by Hotstar.


MBA Discussion Pointers

1. Infrastructure Investment vs. User Experience Trade-offs: Hotstar's case presents a complex decision-making scenario around infrastructure investment in a price-sensitive market. The platform operates in India where average revenue per user (ARPU) for digital services is significantly lower than in developed markets, yet it has invested heavily in world-class infrastructure capable of handling record-breaking concurrent viewership. How should technology companies in emerging markets balance infrastructure investment with unit economics? What frameworks can guide decisions about quality of service when serving price-sensitive customers? Consider the implications when a single cricket match drives more traffic than most platforms handle globally, but the monetization per user is constrained by market willingness to pay. How does this change the build-vs-buy calculus for infrastructure components, and what role should anticipated future scale play in current architecture decisions?

2. Technical Architecture for Unpredictable Demand in Content Platforms: Unlike subscription platforms like Netflix that have relatively predictable usage patterns, Hotstar faces extreme demand variability driven by cricket match schedules, team matchups, and tournament outcomes. The technical decisions around auto-scaling, multi-CDN strategies, and capacity planning are directly tied to business model constraints. How should platform businesses architect infrastructure when demand patterns are highly variable and event-driven? What organizational capabilities (technical, operational, financial) are required to manage infrastructure that must scale 20x within minutes? Consider the implications for capital allocation: should companies over-provision for peak demand or accept degraded performance during extreme spikes? How do these technical architecture decisions interact with content acquisition strategy and advertising commitments?

3. Competitive Advantage Through Technical Excellence in Commodity Markets: Video streaming has become increasingly commoditized with numerous platforms competing for audience attention. Hotstar's demonstrated technical capability in handling concurrent viewership at unprecedented scale represents a potential competitive moat, particularly for live sports content. How sustainable is technical infrastructure excellence as a source of competitive advantage when cloud platforms and CDN services are available to all competitors? What organizational factors (talent, culture, process) enabled Hotstar to execute at this level? When building a competitive strategy, how should leaders evaluate technical capabilities versus content acquisition, user interface, pricing, and other strategic levers? Consider whether the massive infrastructure investment would be justified if Hotstar had not secured exclusive rights to IPL and other premium cricket content.

4. Organizational Learning and Transparency in High-Stakes Technical Operations: Hotstar's engineering team has been notably transparent about technical challenges, infrastructure details, and even failures, through blog posts, conference presentations, and interviews. This stands in contrast to many technology companies that treat infrastructure details as proprietary. What are the strategic implications of technical transparency? How does publishing detailed technical information affect talent recruitment, competitive positioning, and reputation? Consider the organizational culture required to enable engineers to speak publicly about systems and failures, and the risk management trade-offs of revealing technical architecture details. How should technical leadership balance transparency as a talent and reputation strategy against competitive intelligence concerns?

5. Platform Economics and Infrastructure Scalability in Ad-Supported Models: Hotstar's business model combines ad-supported free content with premium subscriptions, creating complex constraints for infrastructure economics. The platform must handle millions of free users during major cricket matches while delivering targeted advertising at scale, all while maintaining acceptable user experience to prevent churn. How do different monetization models (advertising vs. subscription vs. hybrid) affect infrastructure investment decisions and quality-of-service trade-offs? What metrics should guide infrastructure investment when marginal costs per user are significant but many users generate only advertising revenue? Consider how ad-tech infrastructure must scale proportionally with content delivery infrastructure, and how advertisement delivery requirements constrain video streaming architecture. How should leaders prioritize infrastructure investments when serving mixed user bases with different monetization profiles and different expectations for service quality?

