Kindle Notes & Highlights
by Matthew Ball
Read between July 19 and July 27, 2022
The challenge with BGP is that it was designed for the internet’s original use case of sharing static, asynchronous files. It does not know, let alone understand, what data it’s transmitting (be it an email, a live presentation, or a set of inputs intended to dodge virtual gunfire in a real-time rendered virtual simulation), nor does it account for that data’s direction (inbound or outbound), the impact of encountering network congestion, and so on.
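As a rough illustration of that blindness, here is a toy model of BGP-style route selection (a sketch, not the actual protocol): candidate routes are ranked by attributes such as AS-path length, and neither the link’s latency nor anything about the payload enters the decision. All names and numbers below are invented for illustration.

```python
# A toy model of BGP-style route selection (a minimal sketch, not real BGP).
# BGP prefers routes by policy and AS-path length, not by latency or by what
# the payload is -- which is the limitation the passage describes.

routes = [
    {"path": ["AS100", "AS200"], "latency_ms": 180},          # short path, slow link
    {"path": ["AS100", "AS300", "AS400"], "latency_ms": 40},  # longer path, fast link
]

def bgp_best_route(candidates):
    # Shortest AS path wins; the latency field is never consulted, and
    # nothing about the payload (email vs. game input) is even visible here.
    return min(candidates, key=lambda r: len(r["path"]))

print(bgp_best_route(routes))  # picks the 180 ms route: its AS path is shorter
```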
BGP is managed by the Internet Engineering Task Force and can be revised. However, the viability of any changes depends on opt-in from thousands of different internet service providers, private networks, router manufacturers, content delivery networks, and more. Even a substantial update is likely to be insufficient for a globally scaled Metaverse—at least in the near future.
The Metaverse will only become “the Metaverse” if it can support a large number of users experiencing the same event, at the same time, and in the same place, without making substantial concessions in user functionality, world interactivity, persistence, rendering quality, and so on. Just imagine how different—and limited—society would be today if only 50 to 150 people could attend any given sporting match, concert, political rally, museum, school, or mall.
So now we understand my definition of the Metaverse: “A massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”
This definition, as well as its sub-descriptions, is missing the terms “decentralization,” “Web3,” and “blockchain.” There is good reason for this.
Web3 refers to a somewhat vaguely defined future version of the internet built around independent developers and users, rather than lumbering aggregator platforms such as Google, Apple, Microsoft, Amazon, and Facebook. It is a more decentralized version of today’s internet that many believe is best enabled by (or at least most likely through) blockchains.
Both the Metaverse and Web3 are “successor states” to the internet as we know it today, but their definitions are quite different. Web3 does not directly require any 3D, real-time rendered, or synchronous experiences, while the Metaverse does not require decentralization, distributed databases, blockchains, or a relative shift of online power or value from platforms to users. To mix the two together is a bit like conflating the rise of democratic republics with industrialization or electrification—one is about societal formation and governance, the other is about technology and its …
The Metaverse and Web3 may nevertheless a...
The Metaverse will require the development of new standards and creation of new infrastructure, potentially require overhauls to the long-standing Internet Protocol Suite, involve the adoption of novel devices and hardware, and might even alter the balance of power between technology giants, independent developers, and end users.
As shrewd business leaders know well, every time a new computing and networking platform emerges, the world and the companies that lead it are forever changed.
We can already see precursors to the Metaverse. In platforms and operating systems, the most talked about contenders are virtual world platforms like Roblox and Minecraft, and real-time rendering engines such as Epic Games’ Unreal Engine and Unity Technologies’ eponymous engine.
Discord, meanwhile, operates the largest communications platform and social network focused on video gaming and virtual worlds.
The Metaverse will not replace or fundamentally alter the internet’s underlying architecture or protocol suite. Instead, it will build on top of them in a way that will nevertheless feel distinctive.
Think about the “current state” of the internet. We refer to it as the mobile internet era, yet most internet traffic is still transmitted via fixed-line cables—even for data sent from and to mobile devices—and mostly runs on standards, protocols, and formats designed decades ago (though they’ve evolved since).
We also recognize that the internet is a bundle of many different “things.” To interact with the internet, the average person typically uses a web browser or app (software), which they access through a device that can itself connect to “the internet” using various chipsets, all of which communicate using various standards and common protocols, which are transmitted through physical networks. Together, these areas enable internet experiences. No one company could drive end-to-end improvements in the internet—even if it operated the entire Internet Protocol Suite.
The internet originated in government research labs and universities. Later, it expanded into enterprise, then small-to-medium businesses, and later still, consumers.
Given their complexity, it should be obvious that real-time rendered 3D virtual worlds and simulations were even more constrained by the early decades of the personal computer and internet than almost all other types of software and programs.
The companies that typically focused on powering video game consoles and PCs are now some of the most powerful technology companies in human history. The best example is computing and system-on-a-chip giant Nvidia, which is far from a household name yet ranks alongside consumer-facing tech platforms Google, Apple, Facebook, Amazon, and Microsoft as one of the ten largest public companies in the world. Nvidia’s CEO, Jensen Huang, didn’t start his company with the intention of it becoming a gaming giant. In fact, he founded it based on the belief that eventually graphics-based computing would be …
3D imaging graphics hardware was outrageously expensive,
What I didn’t anticipate, what actually came along to drive down the cost of 3D graphics hardware, was games. And so the virtual reality that we all talked about and that we all imagined 20 years ago didn’t happen in the way that we predicted. It happened instead in the form of video games.”
For similar reasons, the software solutions that are best at real-time 3D rendering come from gaming, too. The most notable examples are Epic Games’ Unreal Engine, as well as Unity Technologies’ eponymous engine, but there are dozens of video game developers and publishers with highly capable proprietary real-time rendering solutions.
What was once a physical alarm clock on a nightstand is now an application inside the smartphone on a nightstand, or just data stored on a smart speaker nearby.
In Part II, I explain what it will take to power and build the Metaverse, starting with networking and computing capabilities, and then moving on to the game engines and platforms that operate its many virtual worlds, the standards which are needed to unite them, the devices through which they’re accessed, and the payment rails that underpin their economies.
For the typical online game, what actually comes from online multiplayer servers? Not much. Fortnite’s PC and console game files are roughly 30 GB in size, but online play involves only 20–50 MB (or 0.02–0.05 GB) of downloaded data per hour. This information tells the player’s device what to do with the data it already has. For example, if you’re playing an online game of Mario Kart, Nintendo’s servers will tell your Nintendo Switch which avatars your opponents are using and should therefore be loaded. During the match, your continuous connection to this server enables it to send a constant …
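A minimal sketch of why hourly traffic stays so small (a hypothetical message format, not Fortnite’s or Nintendo’s actual protocol): the server sends compact state updates that reference assets the device already has on disk, so the multi-gigabyte models and textures never travel over the wire during play.

```python
import json

# Hypothetical per-tick update: the server names assets by ID and sends only
# state deltas; the gigabytes of models and textures already live on disk.
tick_update = {
    "tick": 48213,
    "players": [
        {"id": 7, "avatar_id": 1042, "pos": [112.4, 8.0, 93.1], "action": "drift"},
        {"id": 9, "avatar_id": 2211, "pos": [110.9, 8.0, 95.6], "action": "item"},
    ],
}

payload = json.dumps(tick_update).encode()
print(len(payload), "bytes per tick")  # on the order of a few hundred bytes

# At an assumed 30 ticks per second, an hour of play is roughly:
per_hour_mb = len(payload) * 30 * 3600 / 1e6
print(f"~{per_hour_mb:.0f} MB/hour")  # same order as the 20-50 MB cited above
```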
Overall, Roblox’s data usage is much greater than that of Fortnite—roughly 100–300 MB per hour, rather than 30–50 MB—but still manageable.
At its target settings, Microsoft Flight Simulator (MSFS) needs nearly 25 times as much hourly bandwidth as Fortnite and five times as much as Roblox.
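Converting those hourly figures into sustained connection speeds puts them in familiar terms. The MSFS range below is inferred from the “25 times Fortnite” multiple in the text, not a quoted figure.

```python
# Convert MB/hour into sustained megabits per second.
MB_PER_HOUR = {
    "Fortnite": (20, 50),
    "Roblox": (100, 300),
    "MSFS (approx.)": (500, 1250),  # ~25x Fortnite's range, per the text
}

for game, (low, high) in MB_PER_HOUR.items():
    # MB/hour -> Mbps: x8 bits per byte, /3600 seconds per hour
    lo_mbps, hi_mbps = low * 8 / 3600, high * 8 / 3600
    print(f"{game}: {lo_mbps:.2f}-{hi_mbps:.2f} Mbps sustained")
```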
Bandwidth and latency are often conflated, and the mistake is understandable: they both impact how much data can be sent or received per unit of time. The classic way to differentiate the two is by likening your internet connection to a highway. You can think of “bandwidth” as the number of lanes on the highway, and “latency” as the speed limit. If a highway has more lanes, it can carry more cars and trucks without congestion. But if the highway’s speed limit is low—perhaps due to too many curves or because it’s laid in gravel not pavement—then the flow of traffic is slow even if there’s spare …
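The highway analogy can be made concrete with a first-order delivery model: total time is one-way latency plus serialization time (payload size divided by bandwidth). The packet size is an assumption for illustration, but the lesson holds: for the small, urgent packets games send, more lanes cannot buy back a low speed limit.

```python
# First-order model: total delivery time = latency + size / bandwidth.
def delivery_ms(payload_bytes, bandwidth_mbps, latency_ms):
    serialization_ms = payload_bytes * 8 / (bandwidth_mbps * 1000)
    return latency_ms + serialization_ms

packet = 500  # a small game-state packet, in bytes (assumed)
for bw in (10, 100, 1000):  # Mbps
    print(f"{bw:>4} Mbps: {delivery_ms(packet, bw, latency_ms=80):.2f} ms")
# All three land near 80 ms: for tiny packets, latency dominates entirely.
```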
In Part I, I explained that few online services today need ultra-low latency. It doesn’t matter if there’s a delay of 100 milliseconds, 200 milliseconds, or even two seconds between sending a WhatsApp message and receiving a read receipt. It also doesn’t matter if it takes 20 ms or 150 ms or 300 ms after a user clicks YouTube’s pause button until the video stops—and most users probably don’t register the difference between 20 ms and 50 ms. When you’re watching Netflix, it’s more important that the video plays reliably rather than immediately. And while latency in a Zoom video call is annoying, …
In games such as Fortnite, Roblox, or Grand Theft Auto, avid gamers become frustrated after 50 ms of latency (most game publishers hope for 20 ms). Even casual gamers begin to feel that input delay, rather than their inexperience, is to blame at 110 ms.3 At 150 ms, games that require a quick response are simply unplayable.
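The thresholds above can be read as a simple lookup. The labels are paraphrases of the text, not industry-standard terms.

```python
# The latency thresholds quoted above, as a simple classifier.
def competitive_feel(rtt_ms: float) -> str:
    if rtt_ms <= 20:
        return "publisher target"   # what most game publishers hope for
    if rtt_ms <= 50:
        return "acceptable"         # avid gamers grow frustrated beyond this
    if rtt_ms <= 110:
        return "noticeable"         # even casual players start blaming lag
    if rtt_ms <= 150:
        return "frustrating"
    return "unplayable"             # for quick-response games

for rtt in (15, 45, 90, 140, 200):
    print(rtt, "ms ->", competitive_feel(rtt))
```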
Again, networks aren’t reliable. To manage latency, the online gaming industry has developed a number of partial solutions and workarounds. For example, most high-fidelity multiplayer gaming is “match made” around server regions. By limiting the player roster to those who live in the northeastern United States, or Western Europe, or Southeast Asia, game publishers can minimize latency within each region. Because gaming is a leisure activity and typically played with one to three friends, this clustering works well enough. You’re unlikely to want to play with a specific person several time zones …
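In its simplest form, region-based matchmaking is just grouping players by server region before building a lobby. The sketch below uses hypothetical data; real matchmakers also weigh skill, queue time, party size, and more.

```python
# A minimal sketch of region-based matchmaking (hypothetical player data).
from collections import defaultdict

players = [
    {"name": "ana", "region": "us-northeast"},
    {"name": "bea", "region": "western-europe"},
    {"name": "carl", "region": "us-northeast"},
    {"name": "dev", "region": "southeast-asia"},
    {"name": "eli", "region": "western-europe"},
]

# Group players by server region so intra-match latency stays low.
lobbies = defaultdict(list)
for p in players:
    lobbies[p["region"]].append(p["name"])

for region, names in lobbies.items():
    print(region, "->", names)
```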
Subspace, a real-time bandwidth technology company, has found that an average 10 ms increase or decrease in latency reduces or increases weekly play time by 6%.
Almost no other type of business faces such sensitivity, and because gaming is an engagement-based business, the revenue implications are considerable.
It can take 35 ms to send data from the US Northeast to the US Southeast. It takes even longer to travel between continents. Median delivery times from the US Northeast to Northeast Asia are as much as 350 or 400 ms—and even longer from user to user (as much as 700 ms to a full second).
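Part of why those figures are so stubborn is physics: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, so distance alone sets a hard floor beneath any routing improvement. The distances below are great-circle approximations for illustration; real cable routes are longer and add switching and routing delay on top.

```python
# Distance sets a hard floor on latency: light in fiber moves at ~2/3 c.
C_FIBER_KM_S = 200_000  # approximate speed of light in optical fiber, km/s

routes_km = {  # rough great-circle distances (assumed, for illustration)
    "NYC -> Miami": 1_760,
    "NYC -> Tokyo": 10_850,
    "NYC -> Mumbai": 12_540,
}

for route, km in routes_km.items():
    one_way_ms = km / C_FIBER_KM_S * 1000
    print(f"{route}: >= {one_way_ms:.0f} ms one-way, >= {2 * one_way_ms:.0f} ms round trip")
# Observed medians (e.g., 350-400 ms to Northeast Asia) sit well above these
# floors because traffic detours through many cables, exchanges, and routers.
```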
Every single additional user in a virtual world only compounds its synchronization challenges.
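One way to see the compounding: in a naive broadcast model, every player’s state must reach every other player, so per-tick message counts grow roughly with the square of the player count. This is a simplified model; real servers batch updates, cull by relevance, and compress.

```python
# Naive broadcast model: each of n players must hear about the other n - 1.
for n in (10, 100, 1_000, 100_000):
    messages_per_tick = n * (n - 1)
    print(f"{n:>7} players -> {messages_per_tick:,} updates per tick")
```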
Latency is the greatest networking obstacle on the way to the Metaverse. Part of the issue is that few services and applications need ultra-low-latency delivery today, which in turn makes it harder for any network operator or technology company to justify focusing on real-time delivery. The good news here is that as the Metaverse grows, investment in lower-latency internet infrastructure will increase.
The average latency of a packet sent from Amazon’s northeastern US data center (which serves NYC) to its Asia Pacific data centers (Mumbai and Tokyo) is 230 ms.
NYC has a direct undersea cable to France, but not to Portugal. Traffic from the United States can go directly to Tokyo, but reaching India requires jumping from one undersea cable to another on the Asian or Oceanian continent. A single cable could be laid from the United States to India, but it would need to navigate through or around Thailand—adding hundreds or even thousands of miles—and that only solves shore-to-shore transmission.
Perhaps surprisingly, it’s harder to improve domestic internet infrastructure than international internet infrastructure.
Laying a cable over a seamount in international waters is simple compared to laying a cable over a mountain range that crosses private and public land.
It’s much easier to upgrade wireless infrastructure. 5G networks are primarily billed as offering wireless users “ultra-low latency,” with the potential of 1 ms and a more realistic 20 ms expected. This represents 20–40 ms in savings versus today’s 4G networks. However, this only helps the last few hundred meters of data transmission. Once a wireless user’s data hits the tower, it moves to fixed-line backbones.
Starlink, SpaceX’s satellite internet company, promises to provide high-bandwidth, low-latency internet service across the United States, and eventually the rest of the world. However, satellite internet doesn’t achieve ultra-low latency, especially at great distances. As of 2021, Starlink averages 18–55-ms travel time from your house to the satellite and back, but this time frame extends when the data has to go from New York to Los Angeles and back, as this involves traveling across multiple satellites or traditional terrestrial networks.
SENDING ENOUGH DATA AND IN A TIMELY FASHION is just one part of the process of operating a synchronized virtual world. The data must also be understood, code must be run, inputs assessed, logic performed, environments rendered, and so on. This is the job of central processing units (CPUs) and graphics processing units (GPUs), broadly described as “compute.”
It was only by the mid-2010s that millions of consumer-grade devices could manage a game like Fortnite—one with dozens of richly animated avatars in a single match, each one capable of a wide range of actions, and interacting in a vivid and tangible world, rather than the cold vastness of space. It was around this same time that enough affordable servers were available that could manage and synchronize the inputs coming from so many devices.
The Metaverse will involve hundreds of thousands of users participating in a shared simulation, each with as many custom virtual items as they like; full motion capture; the ability to richly modify a virtual world (rather than pick from a dozen or so options) with full persistence; and rendering that world not just in 1080p (typically considered “high definition”), but 4K or even 8K. Even the most powerful devices on earth struggle to do this in real time because every single asset, texture, and resolution increase or added frame and player means an additional draw on scarce computing resources.
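The cost of each resolution step is easy to quantify: pixels per frame grow quadratically with resolution, and every one of them must be shaded every frame.

```python
# Pixels per frame at the resolutions named above.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} pixels/frame ({px / base:.0f}x 1080p)")
# At 60 frames per second, 8K means shading roughly 2 billion pixels a second.
```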
Nvidia’s founder and CEO, Jensen Huang, imagines the next step for immersive simulations as taking us far beyond more realistic-looking explosions or a more animated avatar. Instead, he envisions the application of the “laws of particle physics, of gravity, of electromagnetism, of electromagnetic waves, [including] light and radio waves . . . of pressure and sound.”
Computing power is always scarce specifically because additional computing capabilities lead to important advances.
Shifting as much processing and rendering as possible to industrial-grade data centers seems both more efficient and essential to building the Metaverse.
There are already companies and services pointing in this direction. Google Stadia and Amazon Luna, for example, process all video gameplay in remote data centers, then push the entire rendered experience to a user’s device as a video stream. The only thing a client device needs to do is play this video and send inputs (move left, press X, and so on)—similar to watching Netflix.
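The thin-client model described here can be sketched as two queues: inputs flow upstream, encoded video frames flow downstream. This is a toy illustration of the architecture, not Google’s or Amazon’s actual APIs, and the interfaces are invented.

```python
# A minimal sketch of the thin-client streaming model (hypothetical interfaces).
import queue

input_queue: "queue.Queue[str]" = queue.Queue()    # client -> data center
frame_queue: "queue.Queue[bytes]" = queue.Queue()  # data center -> client

def client_send_input(button: str) -> None:
    input_queue.put(button)            # e.g., "move_left", "press_X"

def server_tick() -> None:
    # In the data center: consume inputs, run the game, render, encode.
    while not input_queue.empty():
        _ = input_queue.get()          # apply to the game simulation (elided)
    frame_queue.put(b"<encoded-video-frame>")  # push the rendered frame as video

def client_present() -> None:
    frame = frame_queue.get()          # decode and display, like a Netflix stream
    print(f"displaying {len(frame)}-byte frame")

client_send_input("move_left")
server_tick()
client_present()
```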
Proponents of this approach often highlight the logic of powering our homes via power grids and industrial power...