My Grand Theory of Technological Progress
Lately I’ve been luxuriating in the ocean of articles detailing the troubles faced by Meta, née Facebook. So far, the pivot from being a company that offers you Minions memes in exchange for every kernel of data that can be gleaned from your digital life to a company that provides a complete virtual reality alternative to the material world is not going well. Meta is dumping billions of dollars into this plan, with no real clarity about when it will come to fruition.
It’s not — definitely not! — schadenfreude at the thought that a billionaire architect of our digital dystopia might be brought low that has me so raptly following this story. I’m far too high-minded for that, and besides, the thought of Mark Zuckerberg’s net worth dropping massively, perhaps even into the single-digit billions, doesn’t feel like a very satisfying comeuppance for undermining the foundations of society. No, the reason I like this story so much is that it validates both rules of the Garth Brown Theory of Technological Change.
Rule One: Material Changes are Harder than Digital
First, I’ll dispense with the pedantic objection you might raise to this formulation. Of course the digital cannot be cleanly separated from the material. Everything digital relies on hardware, and very few novel technologies don’t have some digital component. I’m describing a gradient, not a binary.
But it’s important to recognize how weird the hardware that supports the digital world is. For comparison, take a look at improvements in auto safety. In the past sixty years deaths per hundred million miles driven have dropped from roughly five to just over one. Impressive! Or look at the sixfold increase in corn yields since the mid twentieth century. Amazing!
Now try to wrap your head around the growth in computing power. Here’s an animation that might help. Yes, it’s interesting that Moore’s Law seems to have broken, but don’t let that distract you from the increase from 2,300 transistors on the first commercially available chip in 1971 to 23 billion in 2021. I can think of no other area of technology that has shown such sustained exponential growth. What this means is that in the world of computing there really is a reasonable expectation that capacities will be radically better in the very near future.
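To put rough numbers on the contrast, here’s a quick back-of-the-envelope calculation using the figures above (the corn timeline is approximated as seventy years, an assumption on my part):

```python
import math

# Transistor counts from the text: 2,300 (1971) -> 23 billion (2021).
start, end, years = 2_300, 23_000_000_000, 50

doublings = math.log2(end / start)           # how many times the count doubled
doubling_time = years / doublings            # years per doubling
transistor_cagr = (end / start) ** (1 / years) - 1  # compound annual growth

# Corn yields: roughly sixfold since the mid twentieth century (~70 years).
corn_cagr = 6 ** (1 / 70) - 1

print(f"doublings: {doublings:.1f}")                          # ~23.3
print(f"doubling time: {doubling_time:.1f} years")            # ~2.2
print(f"transistor growth: {transistor_cagr:.0%} per year")   # ~38%
print(f"corn yield growth: {corn_cagr:.1%} per year")         # ~2.6%
```

A sixfold improvement over seventy years works out to under three percent a year; ten million fold works out to nearly forty. That gap, compounding year after year, is the whole point.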
Basically everywhere else incremental improvement or even stagnation is the norm, which brings me back to my first rule: the more an idea involves radical change in the material world, the harder it will be to execute.
At first glance, it would seem like Facebook becoming Meta would be a shift entirely within the digital space, and thus would not violate rule one, but this is not the case. Despite the name, virtual reality is contingent on very material developments. Specifically, virtual reality gear — the physical apparatus a person might use to access the virtual realm — must be as easy and fun to use as scrolling on a smartphone. It can’t be an awkward, disorienting, nausea-inducing novelty. The way the goggles interact with human eyes, the way the display changes in response to movement, the way sound is integrated, all must work with human physiology to create a near universally positive experience. This is currently not the case.
While there’s lots of debate over exactly what the potential negative effects of VR might be and how prevalent they are, my evidence that it is still an unproven technology rests partly on anecdote (the people I know who own virtual reality headsets use them very rarely) but even more on sales. VR gear simply shows no signs of achieving widespread adoption. Sure, a projected 14 million units sold globally per year by 2024 is nothing to sneeze at, but it’s a drop in the bucket compared to the roughly 1.5 billion smartphones that sell each year.
Meta is hitching its future to the idea that an unproven mode of information consumption, namely virtual reality gear, will substantially replace a proven mode of information consumption, namely the screens of our phones, tablets, and computers. And this leads me to the second rule.
Rule Two: A Technology Must Be Better than What It Seeks to Replace
I am not using the word “better” in any high-minded way. Quite the opposite. I mean “better” in the most immediate sense. Obviously, a new technology does not need to promote long term flourishing to achieve widespread adoption, but people have to like using it, and when it’s seeking to replace something that already exists, they have to like using it more.
Screens are a great example of this. Over the past century they have gone from a novelty to an entertainment to a standard household item to an interactive portal that we rely on for everything from news to work to ordering groceries to finding a potential spouse.
It turns out we really, really like looking at them. Most of us will do it for hours a day, if given the choice. Sure, many of us must use screens for work, but most of us also compulsively check the small screen we carry in our pocket dozens of times each day, then watch a screen to relax when we get home at night. They’ve diminished the time we spend reading books and newspapers, talking to people, and most of all just being bored.
It is possible that at some point virtual reality gear will advance to a point where it can serve many commonplace uses dramatically better than screens. There might be a way to make it as intuitive to use as a smartphone or as pleasurable to look at as a modern television, but I am skeptical. Screens might simply prove to be the most effective interface for human bodies.
But Meta is betting on technological innovations that make virtual reality gear interact with material human bodies much more seamlessly than it currently can, and further, that this seamless experience will be better in an obvious way than the staggeringly good screens that already compete for our attention.
Perhaps this all sounds a bit vague or overly broad, so let’s look at another specific example that has a bit more of a track record than Meta.
Rules in the Wild
A few years ago there was a lot of buzz about driverless cars. In fact, Tesla, Uber, and other startups were confident we’d have hundreds of thousands of autonomous cars on the road by this point. The theory was straightforward: improvements in computers, and particularly machine learning, would be able to take in inputs — maps of the world, the data from arrays of cameras and other sensors, millions of miles driven under controlled conditions — and use them to not just replicate the abilities of human drivers, but to become far safer than humans.
But it turns out the real world is frustratingly random. Road work means there are no painted lines. Weather is capricious. Most of all, human drivers are incredibly unpredictable. So far, this wildness has proven insurmountable. While work continues on driverless cars, and while I still expect increasing degrees of autonomy to proliferate in the coming years, progress is coming in fits and starts. Perhaps driverless cars will start operating in certain cities or certain areas of cities, or perhaps on highways in good conditions, but it seems overwhelmingly likely that it will be decades before fully autonomous vehicles begin to replace a significant number of drivers.
Compare this to Uber. For Uber and other ride sharing platforms to proliferate, no advances in the material world needed to take place. Existing cars that people already owned could do the job, and basically everyone already had smartphones. Creating an app to facilitate the connection of riders to drivers was a purely digital proposition.
Refining the Idea
My rules most clearly apply to a specific scenario. Namely, be skeptical when the Silicon Valley idea of disruption and rapid advancement seeks to remake the material world. Theranos would be perhaps the most famous example of this dynamic, with its attempt to revolutionize blood tests. But, since it’s one of my hobby horses, let’s look at what my theory tells us about meat alternatives.
Like driverless cars, lab grown meat was meant to be commonplace by now. Besides a few gimmicks in which companies will sell a limited amount of cultured tissue at a loss, this is not the case. You cannot buy lab grown ground beef, let alone a lab grown salmon steak, and despite credulous coverage of a new lab grown meat start up every six months or so, this shows no sign of changing anytime soon.
I’m far less sanguine about the prospects of lab grown meat than about those of driverless cars. With driverless cars there is at least a clear way increases in computing power and machine learning might continue to improve capacities. Lab grown meat, on the other hand, has a series of massive technical problems that must be solved before it will be commercially viable. It is not clear to me that better computers will help all that much in this endeavor.
Unlike the endlessly malleable digital world, in which scale and capacity improve rapidly, there are hard limits in the physical world. While it is technically feasible to grow meat cells in laboratory conditions, there is no particularly good reason to believe it will ever be possible to do so at scale and cheaply enough to compete with growing meat cells inside a chicken.
If I was in the business of developing an alternative to meat, I would be focusing on a plant based product. While the Impossible Burger and the offerings of Beyond Meat have seen stagnating sales, they are at least actual products that you can actually buy at actual stores.
It’s still an incredibly challenging problem. Meat is as material as it gets, and meat is also very good at being meat. But from tofu to the current heavily engineered pea protein products, there are plenty of established ways to make a meat-like product that at least some people will happily eat, though they generally do so for reasons of ethics or perceived healthfulness.
To my mind the limiting factor on the proliferation of plant based meat is taste. Remember, my second rule says a new offering must be better than what it seeks to replace. When it comes to mass marketed food this means taste and little else. Big marketing budgets and the aura of healthfulness that surrounds the term plant-based have encouraged a lot of people to try meat alternatives, but relatively few have become regular, repeat customers. The obvious reason for this is that it doesn’t taste good enough.
Whether it’s possible to make a plant based meat alternative that tastes better than real meat is an open question, but it strikes me as a far more plausible bet than synthesizing meat cells at massive scale and low cost.
On the one hand, I find this dynamic both useful and hopeful: useful in that it is a good reminder to moderate credulity when viewing certain types of projected advances, hopeful in that I believe it to be salutary to recognize and live within limits. On the other hand, these consolations feel pretty flimsy when set against the extent to which the digital world has already supplanted the physical.
As for Meta, I hope it continues to dump billions of dollars into virtual reality to no avail, that virtual reality remains a niche product, a novelty reserved for games and little else.