By Garth Brown
Note: My post last week discussed a new genetic editing technique that will likely soon be coming to a farm animal near you. I do intend to continue that discussion, but a viewing of the documentary The Social Dilemma has me thinking about the broader technological context. I’ll write about that here, and next week I’ll tie it to gene editing and agriculture. - Garth
The trouble with the end of the world is its uncertainty. Though there’s a ubiquitous feeling that it’s approaching, there’s little agreement about the form it will take. Old standbys like nuclear war and climate change remain, and while I don’t wish to detract from either of them, ours is truly a golden age of catastrophe. There are obvious reasons to imagine the likely effects of a more virulent and lethal pandemic than the one currently ravaging the world. Read up on the non-existence of basic international efforts to monitor the use of the tools that could engineer such a virus and you’ll be shocked it hasn’t happened already! If you’re a debt hawk you can look with horror at the balance sheet of basically every country in the world. Peak oil seems to have lost its momentum as a driver of societal collapse, but have you heard about peak phosphorus? And there’s always the background of flashy, fictional disasters - zombies and meteors and soon, I assume, zombies riding on meteors.
But my sense is that the apocalypse most of us are imagining these days has to do with some combination of extremism and polarization. Of course, the fact that this anxiety is ascendant at the moment doesn’t mean it’s actually the greatest threat. But for my part I can’t spend more than a few minutes reading the news or looking at Twitter without a dreadful feeling flooding my chest that society is already about halfway unravelled. The Social Dilemma offers a compelling explanation for why this should be so.
The broad thrust of the argument is that the internet broadly, and social media especially, works primarily as a vehicle of behavior modification. In a limited sense media has always relied on this model; the whole point of advertisements is to encourage you to purchase something you otherwise wouldn’t. What’s different about novel technologies is their ability to algorithmically tailor themselves to each of us. A car advertisement might appear on one page in a newspaper, but if Google suspects you’re in the market a fleet of sedans will follow you across the entire internet.
But targeted ads are benign compared to the means Facebook and Twitter employ to capture attention. This too comes back to revenue, in that the more time you spend using a service the more ads you’ll see. The unfortunate part is that most of us seem to find extreme arguments more engaging than nuanced discussion. In their quest to keep your eyeballs from straying, social media companies employ algorithms that seek out the particular topic that revs you up, whether it’s politics, sports, or cooking pancakes, and then deliver a firehose of increasingly crazy information about it. Before you know what’s happened you might find yourself nodding along to some maniac telling you to put cottage cheese and cheerios in your flapjacks, and becoming a pancake radical is the absolute best case scenario.
Here’s another way of thinking about it. The only way Facebook makes money off you is when you are looking at it. The more time you spend scrolling the more ads you will see, and thus the more valuable you are. On the other side of the screen a program is working - looking at where you spend your time on the internet, what posts have caught your eye in the past and how long they’ve engaged you, which people or topics routinely get a reaction out of you - and then trying out a variety of content to see what most successfully captures and retains your attention. The more time you spend looking, the harder it gets to look away.
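To make that loop concrete, here is a toy sketch of my own - not code from the documentary or from any real platform - of an engagement-maximizing recommender. It uses a simple epsilon-greedy strategy: mostly show the topic that has held attention longest so far, occasionally try something else, and update its estimates from how long the simulated user lingers. The topic names and dwell times are invented for illustration.

```python
import random

def pick_topic(engagement, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the topic with the best
    estimated dwell time, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(engagement))   # explore
    return max(engagement, key=engagement.get)   # exploit

def simulate(true_pull, rounds=2000, seed=0):
    """Toy feed: each round the 'algorithm' picks a topic, observes a
    noisy attention signal, and updates a running average per topic."""
    random.seed(seed)
    engagement = {t: 0.0 for t in true_pull}  # estimated dwell (seconds)
    counts = {t: 0 for t in true_pull}        # times each topic was shown
    for _ in range(rounds):
        topic = pick_topic(engagement)
        dwell = random.gauss(true_pull[topic], 1.0)  # noisy lingering time
        counts[topic] += 1
        # incremental running average of observed dwell time
        engagement[topic] += (dwell - engagement[topic]) / counts[topic]
    return counts

# A user who lingers longest on outrage ends up fed mostly outrage.
counts = simulate({"pancakes": 3.0, "sports": 4.0, "outrage": 8.0})
```

The point of the sketch is that nothing in the loop knows or cares what the topics mean; it only measures which one holds attention, and within a few dozen rounds it settles on feeding that one almost exclusively.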
If you ever decrease your time on a social media platform you’ll be flooded with emails telling you what you’ve missed. It’s not because Facebook is filtering out the very most interesting things and magnanimously telling you about the one or two that are genuinely important. It’s because Facebook is trying to get you back to scrolling your feed. This is also why basically every company on the internet is trying to send you notifications. And all this effort is geared to delivering whatever content best gets your attention, regardless of its merit or even its truth.
This is obviously bad, and it’s likely the dynamic that is driving at least some of the increasing polarization that characterizes the current moment. But a question I find even more interesting is what it does to us at a human level to engage for hours (and hours and hours) every day with increasingly clever machines that have no goal beyond capturing and holding attention. The Social Dilemma brings up some important issues having to do with self-image and social isolation, but I’m thinking of something more ephemeral but every bit as important. What kind of people are we, both individually and together? What sorts of experiences are we having, and on what experiences are we foreclosing? That’s all a bit vague, and I’m going to leave it so in the hopes that I’ll have time to return to it in a future post.
The weakest part of the documentary is the solutions offered at its end, which consist of hoping for regulation and turning off notifications. After ninety minutes of argument that these technologies are literally an existential threat, both to each of us as individuals and to society at large, such meager suggestions seem completely inadequate. There’s nothing wrong with telling your phone you don’t need to hear about it every time cheesegraterreviews.com posts a new cheese grater review, but it’s hard to believe that doing so can counter the exponentially growing power of the algorithm.
And here I see some overlap with broader questions of how to consider the intersection of technology with the way we live. Too often arguments on the topic are confined to narrow parameters: whether smartphones are bad for mental health, whether social media spreads truth or misinformation, whether genetically modified corn is nutritionally identical to unmodified corn, or how the warming potential of biogenic methane should compare to methane emissions caused by fossil fuel extraction. These questions are obviously important, and it’s good to have answers to them. But they do not provide a broad enough framework.
Taking one individual tool or piece of software at a time and trying to figure out the specific ways it may be good, bad, or neutral cannot be the foundation for a coherent understanding of the role technology should play in our lives and society. To its credit, The Social Dilemma does give the topic a larger framing. But what’s absent is a coherent alternative. What are the foundational goods, both individually and collectively, that might be powerful and clear enough to guide us towards a healthier way of integrating technology into our lives, even if that means limiting or eliminating certain conveniences and distractions? What sort of world do we want to build? I hope you will keep these questions in mind until next week, when I promise to return to the question of gene editing in a way that will start to answer them.