
Live Wires

Garth Brown

If you haven’t been paying attention to what’s happening in artificial intelligence, now would be a good time to start. While I remain a skeptic of the idea that we will have a computer program capable of thinking like a human anytime soon, I am convinced that AIs built for more discrete tasks, from generating images to writing prose to diagnosing illnesses, are going to become ubiquitous in the near term, with radical implications. I try not to be alarmist, but I think alarm is probably warranted in this case.

For example, here’s a (fake) news article written by AI. Here’s one discussing the fraught terrain of AI image generators, and how they should relate to human artists. Here’s a story about a faked video of Elon Musk that has been credulously viewed millions of times.

If you’re like me, you find the above examples unsettling and impressive, but not quite up to the standard of what a skilled human could produce, deepfakes of billionaires aside. AI might not yet be ready to write for The Hedgehog Review, but given how much more facility young people generally have with novel technologies than with the English language, existing chatbots are already good enough to necessitate a radical rethinking of the very concept of essays and papers as standard academic work.

One line of argument is that there will, of course, be growing pains, but that these technologies will be beneficial on the whole. (And did anyone really enjoy churning out five-paragraph essays?) We humans are short-sighted, so what seems like a wild upheaval in the moment may prove tremendously salutary with a few decades of hindsight. Take the printing press, the advent of which, viewed a certain way, roiled Europe for centuries, even as it democratized information and laid the groundwork for the Enlightenment.

Compare that timeline to something like smartphones, the first modern examples of which arrived around 2007 and which didn’t become widespread for another five years or so. Give us a couple hundred years and maybe we’ll have figured out how to organize society to manage the novel ways they distribute information.

I suppose it’s possible, but I’m not too optimistic. We don’t have centuries or even decades to build healthy new social norms around smartphones, assuming such a thing is possible, because phones are not one static thing; the hardware changes steadily, but it is the software that really matters. If smartphones had been frozen with their original capacities, namely calling, texting, and emailing, we would likely have already reached a workable accommodation. Instead, we have a rapidly evolving market of apps in which the single biggest pressure is to monopolize a user’s attention as completely as possible.

And so it will be with AI, I’m afraid. If their capabilities stopped developing at this moment, they would be a scourge upon Lit 101 professors and a novelty to the rest of us. But according to various people who know far more about this than I do, the opposite is more likely: we should be anticipating a period of rapid improvement, not stagnation. Even if we could adjust to the capacities of current AIs, like those discussed in the articles linked above, by next month they will be significantly more capable.

A related line of AI apologetics relates specifically to art and creative work. It argues that AIs are tools, and like all tools, they may come with downsides, but also open new vistas. In this view, AI image (and video and audio and text) generators simply supercharge the creative process, allowing an artist to rapidly realize a creation that would previously have taken far longer. Instead of laboriously sketching a cow in a field, the artist can enter the prompt “cow in a field” and then sift through results, or refine the prompt to “cow standing in a dark green field with the sun rising” and repeat until the algorithm spits out the right image.

This has an element of creativity, but it falls into a different category than what we understand as art. A painting of an aurochs on a cave wall and a painting of a bull on canvas are wildly different, but both involve the manipulation of physical matter. Painting well requires practice and patience and a sustained effort at realizing a singular vision. Even drawing a digital image with a cursor or stylus has at least an echo of the craft inherent in more traditional forms of image making.

You might argue that there’s skill and an aesthetic sensibility required to coax a truly exceptional image from an AI, to which I say, talk to me in five years. At that point there will likely be an AI image generator that has enough data to tailor its results to what it knows you like. My bet is it will eventually be as easy as searching in Google. Indeed, why should Google try to match pictures it culls from the web with your search intent if it becomes possible to make superior images from scratch? Just stick a little “Google Diffusion” watermark in the corner and it’s good to go.

Before trying to figure out what this has to do with farming (we’re heading there, I promise!) I want to get a bit more specific about the differences between these activities. I’m going to stick with ways of producing images, but the basic principles translate to music and even to writing, to an extent, which is troubling, given how much more time I spend in Docs than writing longhand.

An obvious difference between drawing a picture of a cow and asking an AI to make one is time; even the fastest artist can’t compete with an algorithm, especially when it comes to generating extremely realistic images. A drawing or painting takes time to execute, and it takes vastly more time — a lifetime, even — to cultivate the ability to translate a scene from eye to brain to hand to canvas.

A physically created image also relies on the available materials. A pencil sketch on a legal pad is one thing, a small watercolor another, and an oil painting a third. The chosen medium sets certain limitations on what form a picture can possibly take. Graphite on cheap, lined yellow paper will always be graphite on cheap, lined yellow paper. By contrast, an AI can be completely agnostic about style. On demand it can make a sketchy image or a photorealistic one. It can mimic Rembrandt or Ansel Adams.

Superficially, at least, this is to say material images are constrained in a way AI generated images simply are not. The medium used, knowledge of the tradition, and especially the degree to which a skill has been developed are hard limits on what any one person can produce.

But in a less obvious way, a person producing images with AI operates under even more stringent restrictions. Though it would be possible to print an AI-generated image, and even possible in principle to use a printer capable of working with brushes and oil paints to achieve a more analogue visual effect, for practical purposes AI images are digital and viewed on screens, with no physical instantiation.

More significantly, with AI the only input the human involved provides is textual. Type a prompt, sift through a smorgasbord of imagery, and iterate. Yes, there is a degree of discernment in the process, but it is primarily creation as a side effect of consumption. It is further bounded by the working of the AI involved and the dataset on which it has been trained.

I believe the frustrating, weird process of physically creating a drawing is a fundamentally more human activity than interacting with an image-generating computer program. Further, it is obvious to me that the sorts of pictures created will necessarily be different. The computer might land upon something very much like what you would have drawn with pen and paper, but it will never be identical, and this is without even getting into the ways in which a physical object, like a sheet of paper or a canvas, differs from an arrangement of glowing pixels.

* * *

For years I’ve had a persistent feeling that the way our devices shape us individually and collectively is not so great. Our lives are bifurcated between the digital and the physical, and the digital is not-so-slowly supplanting the physical. I know plenty of people who aren’t concerned about this trend, but I know more who are, though few are quite as worried as I am. Still, it seems like there are enough of us to do more than wring our hands.

Farming, defined as capaciously as possible, strikes me as a particularly potent alternative to digital life. The process of raising food, even something as simple as a head of lettuce, is physical. It is quite literally grounding. It is bound to the seasons and the weather, warmth, water, fertility, the presence or absence of hungry insects or rabbits. It’s cyclical, more or less the same from one year to the next, even as it is endlessly variable and impossible to completely master. Raise an animal for meat and there are additional layers of complexity, both practical and moral, as messy things like manure and death demand an accounting.

But we shouldn’t all be farmers. There’s no going back to the agrarian society of the 19th century, nor would I want to. Privations like high rates of infant mortality, malnutrition, and the underdevelopment of human potential are an inherent part of subsistence farming. It is in large part thanks to the efficiency of the modern economy, including the modern food system, that I have the freedom and the capacity to worry about the rise of Artificial Intelligence rather than the failure of my potato crop.

Still, I see a middle ground, a way in which small and mid-sized farms might produce good, healthy food while also being points of connection for people in the broader community. Not everyone should be a farmer, but maybe everyone, especially kids and teenagers, should have an opportunity to spend time working on a farm.

When thinking through contrasting methods of making images, as I attempt to do above, the fundamental point is that the process matters more than the result, though the two cannot be disentangled. This is even more the case when it comes to food. A carrot that you have planted, grown, and harvested is different from one you buy in the grocery in large part because of what you’ve put into it.

I haven’t thought through the details of how incorporating more participation into farms would resemble currently existing agritourism enterprises and how it would differ, save that actually doing something, rather than just seeing the sights and listening to a farmer yammer on, would have to be a part of it, and that long-term relationships between individual people and individual farms would be an important goal. Despite this lack of clarity, I’ve come to think it’s a more promising path forward for alternative agriculture than trying to remake the food system broadly, though hopefully, in the long term, it would do just that.

A revitalization of agriculture is one potential benefit, and I’m particularly taken with it for obvious reasons, but it’s not critical to the larger point, which is cultivating a recognition that a healthy human life must have a foundation of engagement with the physical world in various forms: not just growing food or drawing, but conversation, walks with dogs, reading books, playing music, singing, and so on.

Earlier I expressed skepticism that we will collectively figure out a healthy way to assimilate these technologies into our lives, and here I am envisioning a future in which we’ve managed to do just that. The reason is that my skepticism has one significant exception. The only path forward I can see worth walking down is one in which first individuals and then groups of individuals and then communities start cordoning off the electronic realm, limiting the areas of life and especially the amount of time it can consume.

In other words, I think we should spend less energy worrying about whether to use or allow our children to use TikTok or Reddit (or an AI image generator) and more energy putting strict limits on the amount of time any of us spend using electronic devices at all, particularly in our leisure hours. This neatly avoids many of the perils of engaging the endlessly proliferating technologies that are in a brutal, amoral, Darwinian competition for our attention; no matter what advances arrive in the coming months or years or decades, if we only let them tinker with our minds for, say, half an hour per day, their effect will be limited.

The idea of machines supplanting humans becomes less concerning when much of life is set aside as an exclusively human domain. It would not entirely negate the influence that novel technologies like AI might have on our minds, and I haven’t even touched on their profound economic implications, but it would at least leave us plenty of space to think, breathe, and maybe even plant a garden.
