Why inflight connectivity is repeating the internet’s most important lesson
In the late 1990s, the commercial internet faced a fundamental constraint: bandwidth was scarce, latency was high, and reliability was fragile. Pages loaded slowly. Video barely worked. A single traffic spike could overwhelm an origin server.
The breakthrough wasn’t simply adding more bandwidth. It was changing where and how data was delivered.
Caching, and the rise of Content Delivery Networks, moved data closer to users, reduced latency, and eliminated the inefficiency of repeatedly pulling the same content across constrained backbone links. What’s often missed in hindsight is this:
Caching didn’t disappear once bandwidth improved. It became indispensable.
That lesson did not belong to the internet alone. Today, commercial aviation connectivity is replaying the same evolution, compressed into a far shorter timeframe and unfolding at 35,000 feet.
From internet scarcity to abundance—and more caching than ever
In 1998, global internet traffic was measured in terabits per second. According to early estimates from Cisco and the ITU, total global IP traffic at the turn of the millennium was well under 100 petabytes per month. Consumer connections were typically dial-up at 56 kbps, with early broadband struggling to reach a few megabits per second.
Fast forward to today:
- Global IP traffic exceeds 400 exabytes per month (per Cisco VNI estimates)
- Average fixed broadband speeds globally exceed 80–100 Mbps (per Ookla), with many regions far beyond that
- Backbone and hyperscaler networks routinely operate at terabits per second per link
Bandwidth increased by orders of magnitude.
Yet caching didn’t become less relevant; it became foundational. Today, an estimated 70–80% of all internet traffic is delivered via CDNs and edge caches, including platforms operated by Akamai, Cloudflare, Fastly, and hyperscalers.
Streaming video, software updates, social media feeds, and AI inference (think YouTube, Netflix, Meta, and Amazon CloudFront) all depend on pre-positioned and cached data.
Why? Because demand always scales faster than raw capacity. More bandwidth enables richer experiences across video, personalization, and real-time interaction, which in turn makes efficient delivery more critical, not less.
More bandwidth didn’t eliminate the need for caching.
It exposed how impossible it is to scale purely from the origin.
Aviation connectivity is entering the same phase
Aviation connectivity began from an even more constrained baseline than the early internet. Early passenger connectivity systems delivered limited throughput and high latency, making anything beyond basic messaging impractical.
The shift to Ku- and Ka-band systems, followed by LEO constellations, has dramatically changed that equation. Aircraft now have access to tens or hundreds of megabits per second with latency low enough to support modern applications.
This progress is often framed as an endpoint: “Now we finally have enough bandwidth onboard.”
But history suggests otherwise.
As bandwidth improves, airlines unlock new classes of demand: streaming video, richer portals, personalized experiences, operational applications, and an expectation that onboard connectivity behaves like the ground internet.
Passenger usage responds immediately. Video becomes dominant. The same content is requested repeatedly across a shared, variable, and expensive link.
Just as it was on the early internet.
Why caching becomes more important—not less
There’s a paradox at the heart of connectivity: the better the pipe becomes, the more inefficient it is to use it naïvely.
Satellite bandwidth, including LEO, is still:
- Finite
- Shared
- Variable by geography, weather, and beam contention
- Among the most expensive transport per delivered bit
For example, a LEO service providing ~200 Mb/s to an aircraft with 200 seats yields only 1 Mb/s per seat. In real-world conditions, that is not enough to support concurrent streaming of modern video services at acceptable quality.
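The arithmetic behind that example can be sketched as a quick back-of-envelope check. The link rate and seat count come from the text; the per-stream requirement is an illustrative assumption for HD adaptive-bitrate video:

```python
# Back-of-envelope: per-seat share of a shared satellite link.
# Figures follow the example in the text; real links vary with beam
# contention, weather, and how many passengers are active at once.
link_mbps = 200     # LEO capacity delivered to the aircraft (from the text)
seats = 200         # narrow-body seat count (from the text)
stream_mbps = 5     # assumed requirement for one acceptable HD stream

per_seat = link_mbps / seats
print(f"Per-seat share: {per_seat:.1f} Mb/s")

# How many concurrent HD streams can the raw link carry with no caching?
concurrent_streams = link_mbps // stream_mbps
print(f"Concurrent HD streams without caching: {concurrent_streams}")
```

On these assumptions the link supports roughly 40 concurrent HD streams, far short of a full cabin streaming at once.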
Without onboard caching:
- Popular content is downloaded again and again
- Latency-sensitive applications degrade during peak demand
- Costs scale linearly with usage instead of intelligently with demand
With caching and edge intelligence:
- High-demand content is served locally and instantly
- Satellite links are preserved for truly dynamic or long-tail data
- Passenger experience becomes consistent, not just fast
For example, caching allows a single 5–7 Mb/s stream to be delivered to the aircraft once and then served to many passengers simultaneously, at no extra satellite bandwidth cost. This is particularly important for live streaming events like sports or news.
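A minimal sketch of the savings for a shared live stream, using the mid-range of the 5–7 Mb/s figure from the text and a hypothetical viewer count:

```python
# Satellite bandwidth for a live event: naive unicast vs. onboard cache.
stream_mbps = 6    # mid-range of the 5-7 Mb/s figure in the text
viewers = 80       # hypothetical passengers watching the same event

naive_mbps = stream_mbps * viewers   # each viewer pulls their own copy
cached_mbps = stream_mbps            # one copy crosses the satellite link

print(f"Naive unicast:  {naive_mbps} Mb/s over the satellite")
print(f"With caching:   {cached_mbps} Mb/s over the satellite")
print(f"Savings factor: {naive_mbps // cached_mbps}x")
```

Under these assumptions, the uncached case would demand 480 Mb/s of satellite capacity, more than the entire link in the earlier example, while the cached case uses a single stream's worth regardless of audience size.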
This is exactly how the terrestrial internet scaled. Edge architectures didn’t compete with bandwidth growth; they made it usable.
The forward-looking takeaway
Aircraft are becoming flying edge data centers.
As connectivity continues to improve, the airlines that lead in passenger experience won’t be those chasing headline bandwidth numbers. They’ll be the ones that architect intelligently by combining satellite capacity with onboard caching, content pre-positioning, and edge processing.
The lesson from the internet is clear and well proven:
Caching isn’t a workaround for limited bandwidth.
It’s a strategy for scaling experience.
In aviation, as on the ground, the future lives at the edge. Building this correctly requires years of streaming, network, and edge experience. It is not a bolt-on.
This is what Siden was built for. We are streaming experts who have built a streaming-native platform for aviation.
Jim
CEO
