Huge breakthroughs, tiny changes: the next decade of artificial intelligence

Recent developments like DALLE-2 and LaMDA are impressive and seem ready for impact. Is AI ready to change the world?

July 1, 2022

Whether you love, fear, or have mixed feelings about the future of artificial intelligence, the cultural fixation on the subject over the past decade has made it feel like the technology’s meteoric impact is just around the corner. The problem is that it is always just around the corner, yet never seems to arrive. Many hype-filled years have passed us by since the releases of Ex Machina (2014) and Westworld (2016), but it feels like we are still waiting on AI’s big splash. However, a handful of recent developments—specifically, OpenAI’s unveiling of GPT-3 and DALLE-2, and Google’s LaMDA controversy—have unleashed a new wave of excitement—and terror—around the possibility that AI’s game-changing moment is finally here.

There are several reasons why it feels like it has taken so long for AI projects to bear fruit. One is that pop culture seems almost exclusively focused on the possible endgames of the technology, rather than its broader journey. This isn’t much of a surprise. When we stream the latest sci-fi movie or binge Black Mirror episodes, we want to see killer robots and computer chip brain implants. No one is buying a ticket to see a movie about the slow, incremental rollout of existing technology—not unless it mutates and starts killing within the first 30 minutes. But while AI’s more futuristic forms are naturally the most entertaining, and provide an endless source of material for screenwriters, anyone who based their expectations for AI on Blade Runner has got to be feeling disappointed by now.

Another issue is that, until recently, the developments in AI that did manage to punch through into mainstream discourse were underwhelming—at least to those on the lookout for truly revolutionary tech. DeepMind's AlphaGo defeating the world’s top Go player in 2016; human-shaped robots from Boston Dynamics doing backflips; convincing celebrity deepfakes; a handful of scientific breakthroughs within computer-friendly domains—while impressive as individual feats, these did not convince anyone that AI was one step away from changing the way we all live. Rather, they felt like cute roadside attractions on the path to something bigger.

But the feeling that AI has taken too long to live up to its hype is not just an illusion caused by our obsession with cinema-worthy breakthroughs; the data supports this sentiment as well. Many techno-optimists are surprised to learn that there is no evidence that AI has made us more productive or meaningfully impacted our economy in any way—at least not yet. But where, exactly, do we look for such evidence? Before the killer robot dogs show up, how will we know, empirically, that AI’s broad impact has arrived?

While AI’s hype-beasts and their more pessimistic counterparts disagree on many things, they at least seem able to agree on the best ways to monitor AI’s aggregate progress. Perhaps the most popular of these benchmarks is labor productivity. This seems intuitive and fair. Since AI’s proponents largely bill it as technology that will free us from tedious and repetitive work, catalyze innovation, and automate many jobs altogether—all while growing the larger economic pie—it makes sense that individual worker productivity, defined in terms of output and time, serves as an agreeable measuring stick.

It is certainly adequate for the famous economist Robert Gordon and the director of Stanford’s Digital Economy Lab, Erik Brynjolfsson. The two have taken opposite sides in a public bet on whether AI will move the productivity needle meaningfully by the end of this decade. With $400 at stake, they are wagering reputations more than wallets—but so far, Gordon, the notorious pessimist, seems the favorite to win.

At the outset, Brynjolfsson’s predictions appear to rest on bolder assumptions than Gordon’s. Annual labor productivity growth averaged less than 1% for the 2011-2019 period, and their bet is whether or not this statistic can nearly double and reach 1.8% for the 2020-2029 period. While Gordon is ultimately betting on more of the same, a win for Brynjolfsson would require nothing short of a productivity revolution. With less than eight years to go on their bet, could AI still deliver one?
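To put rough numbers on the gap between the two sides, here is a minimal sketch of the decade-long stakes. It assumes, for simplicity, that each side's annual average compounds over the full 2020-2029 window, and it uses 1.0% as a stand-in for the sub-1% baseline:

```python
# Rough comparison of the two sides of the bet, compounded over a
# decade. The 1.0% baseline is an assumption standing in for the
# sub-1% 2011-2019 average.
baseline_rate = 0.010   # Gordon: more of the same
bet_rate = 0.018        # Brynjolfsson: the 1.8% average he needs
years = 10              # the 2020-2029 window

baseline_cumulative = (1 + baseline_rate) ** years - 1
bet_cumulative = (1 + bet_rate) ** years - 1

print(f"Cumulative growth at 1.0%/yr: {baseline_cumulative:.1%}")  # ~10.5%
print(f"Cumulative growth at 1.8%/yr: {bet_cumulative:.1%}")       # ~19.5%
```

Compounded over ten years, the difference is nearly a doubling of the total productivity gain—which is why "revolution" is not an overstatement.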

Game changers
In April, Silicon Valley’s darling AI research organization, OpenAI, released a paper announcing DALLE-2, the latest iteration of their AI-powered text-to-image tool. It was an instant sensation. It spawned a tidal wave of tweets, blog posts, and AI-related magazine covers. And rightfully so—the technology is impressive, to put it mildly. If you still haven’t taken a look at it, you should. The ability of DALLE-2 to produce high-quality images from specific instructions is more than enough to make any graphic designer sweat.

Luckily for the many artists of the world, OpenAI is at least giving them a little time: they are not unleashing DALLE-2 for anyone to use right away. In an effort to go about AI entrepreneurship responsibly, they are only granting a batch of handpicked startups access to DALLE-2 before they make any decisions about making it more widely available. But will they ever make it widely available? Should they?

These are tough questions that give rise to as many interesting and complex philosophical considerations as economic ones. But a key difference between breakthroughs like DALLE-2 and many of AI’s prior achievements is that DALLE-2 seems ready—from a utilitarian standpoint—for immediate and widespread commercial impact. Looking at the examples OpenAI shared to show off what DALLE-2 is capable of, it is easy to imagine many use cases that would replace countless hours of human labor: book illustrations; video game design; infographics; concept cover art for articles like this one—the list goes on. While large companies that have the resources will probably still prefer the human touch, it is hard to imagine a DALLE-2-enabled world where the massive, multi-billion dollar market for “eh, good enough” corporate art does not lean heavily into AI-generated work.

Many have also pointed out that DALLE-2 and similar projects don’t need to automate a single job to make an impact. At least in the near term, these tools will likely not destroy jobs entirely but serve as assistants to human workers who use them to boost their own productivity. Game designers in search of inspiration can ask their AI assistant to spin up ten prototypes of a boss character for a new level they’re working on; a local artist tasked with creating the poster art for the upcoming EDM festival can get more than a few ideas in the same way.

Additionally, startups will be able to integrate these models into existing technologies. Going back to gaming, the video game industry has been using AI to assist in areas like the generation of enormous virtual landscapes ("procedural generation") and NPC behavior long before DALLE-2 came along. But game developers will soon be able to build these foundational models into their existing infrastructures, enabling richer and more dynamic features.
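To make that concrete, here is a minimal sketch of what such an integration might look like in a game-asset pipeline. Everything here is hypothetical—the endpoint URL, payload fields, and response shape are placeholders, since OpenAI has not published a public DALLE-2 API—but the shape of the workflow is the point:

```python
import requests

# Hypothetical sketch of a game-asset pipeline calling a DALLE-2-style
# text-to-image service. The URL, payload fields, and response format
# below are placeholders, not a real API.
IMAGE_API_URL = "https://api.example.com/v1/images/generate"

def generate_concept_art(prompt: str, n: int = 10) -> list[bytes]:
    """Ask the (hypothetical) image service for n concept-art drafts."""
    response = requests.post(
        IMAGE_API_URL,
        json={"prompt": prompt, "n": n, "size": "1024x1024"},
        timeout=60,
    )
    response.raise_for_status()
    # Assume the service returns a JSON list of image URLs to download.
    return [requests.get(url, timeout=60).content
            for url in response.json()["image_urls"]]

# A designer-in-the-loop workflow: generate drafts, then hand-pick.
drafts = generate_concept_art(
    "armored swamp troll boss, concept art, moody lighting", n=10
)
for i, image_bytes in enumerate(drafts):
    with open(f"boss_draft_{i}.png", "wb") as f:
        f.write(image_bytes)
```

Note that the human never leaves the loop here: the model churns out drafts, and the designer curates—exactly the assistant role described above.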

Okay, so DALLE-2 is ready to disrupt at least a few industries. But even if it were fully unleashed, would anyone outside of the corporate art world and its adjacent communities notice? If this technology destroys tens of thousands of jobs, will its impact on the larger economy at least be visible to the naked eye? And how much closer would it bring Brynjolfsson to winning his bet against Gordon?

A case study in disruption
To get a sense of how impactful a fully unchained DALLE-2 could be from an economic perspective, let’s do some back-of-the-envelope math on one of its more dramatic scenarios. According to IBISWorld, there are about 175,000 graphic designers employed in the United States. Let’s pretend DALLE-2 puts all of them out of work. This would never happen, but it’s perhaps not a terrible proxy for DALLE-2’s impact on the broader labor market in the extreme situation we’re imagining.

In 2021, US GDP was $23 trillion. According to data from the Federal Reserve and the BLS, there were about 161 million people in the US labor force in 2021, and each of these workers clocked an average of about 35 hours of work per week. This means the average American worker produces about $78.49 of GDP per hour—something you can mention next time you ask for a raise.

Now, let’s adjust these numbers to mimic a complete DALLE-2 takeover of the graphic design industry by holding 2021 GDP constant and subtracting its 175,000 jobs from the labor force, thereby reducing this group’s combined annual effort from 320 million hours worked to zero. In this scenario, the average American worker—those who didn’t lose their jobs to DALLE-2—now produces $78.58 of GDP per hour—not even 10 cents more than before. In percentage terms, this comes out to a productivity increase of about one tenth of one percent.
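For anyone who wants to check the envelope, here is the same arithmetic in a few lines of Python (the 52-week work year is my simplifying assumption):

```python
# Back-of-the-envelope productivity math from the scenario above.
gdp = 23e12               # 2021 US GDP, dollars
labor_force = 161e6       # 2021 US labor force, workers
hours_per_week = 35       # average hours worked per week
weeks_per_year = 52       # simplifying assumption: no weeks off

total_hours = labor_force * hours_per_week * weeks_per_year
gdp_per_hour = gdp / total_hours
print(f"GDP per hour worked: ${gdp_per_hour:.2f}")    # ~$78.49

# Now erase the graphic design industry while holding GDP constant.
designers = 175_000
lost_hours = designers * hours_per_week * weeks_per_year
print(f"Hours removed: {lost_hours/1e6:.0f} million") # ~319M ("320 million" above)

new_gdp_per_hour = gdp / (total_hours - lost_hours)
print(f"New GDP per hour: ${new_gdp_per_hour:.2f}")   # ~$78.58

gain = new_gdp_per_hour / gdp_per_hour - 1
print(f"Productivity gain: {gain:.2%}")               # ~0.11%
```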

When we are dealing with enormous, incomprehensible numbers in the trillions and billions, we of course have to keep in mind that even small changes in percentage terms—like 1 or 2 percent, which are tiny in most contexts—are amplified in significance and often mark transformative events when measured on scales as large as the US economy. An increase in total US labor productivity of one tenth of one percent is no small feat. That said, it does feel a little underwhelming considering the gravity of the thought experiment just carried out: 175,000 jobs that make up a roughly 15-billion-dollar industry completely destroyed by a few algorithms.

And if we go back to the bet between Gordon and Brynjolfsson, this heavy scenario, if it happened, would get Brynjolfsson only about one seventh of the way to winning: a one-time bump of roughly 0.11% set against the nearly 0.8-percentage-point jump in annual growth he needs. He would still need about six more DALLE-2-sized projects to win the $400 and bragging rights. And that’s assuming these projects are fully unleashed by their creators. If OpenAI’s recent licensing agreement with Microsoft is any indication, the future of AI looks a lot more like selective relationships that limit access to a handful of already powerful tech companies rather than fast, widespread release.

Unknown unknowns
Google’s LaMDA project was recently mired in controversy when one of the company’s engineers claimed the chatbot he was interacting with had become sentient. This sparked all kinds of conversations and debates about what it even means for something to be sentient, and about the nature of consciousness. We may never reach a satisfying consensus on these kinds of questions. But as far as how these new technologies will actually change the way we live—changes that will, for better or worse, happen much sooner than changes to the way we think—whether or not something is sentient is far less important than whether it is useful. And if you read the transcripts between the engineer and LaMDA, it is terrifyingly clear that, much like with DALLE-2, the point of usefulness has arrived.

But, if it’s not already obvious, I would take Gordon’s side of the bet if forced to choose. Though I appreciate the incredible power of human innovation—its evidence is all around us—I, like many, have grown weary of the bombastic claims that flow constantly from Silicon Valley futurists.

But I'm also ready to be wrong. The past few years have certainly taught me to expect the unexpected. Economists like Gordon who are predisposed to pessimism have made entire careers out of acting as a sobering counterbalance to—and refuge from—all of the in-your-face tech bro hype. But you still have to give the bold and crazy ones their due: every now and then they do pull it off.

And perhaps AI does not need explosive, game-changing new projects to hit the lofty goals and timelines set forth by its proponents. Maybe just massaging the already impressive lineup of foundational models into the plumbing of thousands of existing products is sufficient to make a sizable dent over the next several years. Little things do add up.

However, despite AI’s recent developments, I’m not holding my breath waiting for the next technological revolution. For the next decade, at least, I’m afraid freedom from tedium will still have to come from within.

