😨 AI is Risky Business, Pt 1

A reader request for some both-sides-ism

Hello, and thanks for reading One AI Thing. Get smarter about artificial intelligence, one thing at a time.

👀 Today’s Thing: AI is Risky Business, Pt 1

📖 One Big Thing

I’m playing with a new format today. This post will be more of a traditional essay than the broken-out, scannable format I’ve used so far. I’ve been thinking about mixing in the occasional longform essay to get deeper into a topic than short blurbs allow for. That’s the idea, anyway. Let me know what you think!

The Societal Risks of Advanced AI

A reader named Nic replied to my call for AI Things you’d like to know more about:

i could use more of “both sides” in the AI newsletter - is there a specific reason you don’t want to spend too much time on risks?

There’s definitely not a reason! At least not on a conscious level. I think I’ve actually been trying to be more optimistic over the past several months, generally, so maybe that’s crept into the newsletter. Also, I’ve been geeking out on AI “from afar” for years. Now that so many people can and are touching the tech for themselves, these are incredibly exciting times. So maybe it’s just more fun to write about that than the scary stuff, like how there’s an argument to be made (quite easily, in fact) that we’re riding the razor’s edge of dystopia, racing blindly and with fingers crossed towards an unknowable savior, and hoping we don’t collectively outsmart ourselves to death before we get there. There could be some of that, the “have fun while writing about — and during! — very uncertain times” factor. 🤷‍♂️

If anything, I’ve been trying to balance pros and cons to some degree in each post. Maybe I’ll run all of my posts to date through a sentiment checker to see how I’m actually doing …
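For what it’s worth, a quick-and-dirty sentiment check like that wouldn’t take much. Below is a minimal sketch in Python using NLTK’s off-the-shelf VADER analyzer. The posts/ folder and the .txt filenames are stand-ins I made up for illustration, not how the newsletter is actually stored, and you’d need to run nltk.download("vader_lexicon") once before using it.

```python
# Minimal sketch: average sentiment across newsletter posts with NLTK's VADER.
# Assumes each post is saved as a .txt file in a local "posts/" folder
# (a hypothetical layout) and that the vader_lexicon has been downloaded.
from pathlib import Path

from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

scores = {}
for post in sorted(Path("posts").glob("*.txt")):
    text = post.read_text(encoding="utf-8")
    # "compound" runs from -1 (very negative) to +1 (very positive)
    scores[post.name] = sia.polarity_scores(text)["compound"]

for name, score in scores.items():
    print(f"{name}: {score:+.2f}")

if scores:
    avg = sum(scores.values()) / len(scores)
    print(f"Average across {len(scores)} posts: {avg:+.2f}")
```

VADER is tuned for short, social-media-style text, so treat the per-post numbers as a rough vibe check rather than a verdict.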

Right up top I’ll say this about the risks of advanced AI: From my point of view, the risks fall into two big buckets. The first is human risks. What are we reasonably afraid humans might do with this technology? The second is technological risks. What are we afraid might happen if the technology “gets out of control”?

The second bucket is important. So important, in fact, that I’ll address it with one or maybe two sentences and then leave it for another post: To paraphrase Geoffrey Hinton, it’s entirely possible that humanity is just a passing phase in the evolution of superintelligence. We humans may exist merely to invent digital intelligence, and once that intelligence has sufficiently advanced, it will move on without us. I’ll leave it at that; you can scare yourself half to death trying to figure out what that might mean, or, y’know, put a pin in it and read on.

The first bucket is scarier to me in the short term, perhaps because it’s easier to imagine. Humans have shown we’re capable of being all kinds of awful to one another. (We can also be excellent, don’t ever forget!) Give a human a new super technology, and you gotta take the good with the bad. Gunpowder, electricity, nuclear fission, social media … all the big technologies come with risks and rewards. So, again, if advanced AI goes where the experts think it’s going to go, we need to be ready for some bad actors acting badly.

I’ll also say right here up top that I’m not a techno-utopian, but I do believe advanced AI has the potential to really, really help humanity out. But our current society feels much like a barbell with two increasingly large weights on either end of a rapidly shrinking bar. The haves keep having, the have-nots keep getting less, and pretty soon there ain’t gonna be any middle anymore. AI, like any sufficiently useful technology, can be used to exacerbate inequality or to smooth it out. It’s up to us humans.

ā˜ ļø How Humans Might Use AI to Destroy Other Humans

That said, I wrote back to Nic for some clarification on what sorts of risks he was thinking about. I sent this top-of-my-head list:

  • data bias

  • mis-alignment (of AI goals w/human goals)

  • energy consumption/carbon emission (big AI = big data centers = big power)

  • bad actors weaponizing AI

  • deploying AI-written code with security holes in it

  • consolidation of power

  • using the black box nature of AI as an excuse for bad actions

  • job displacement/economic upheaval

The last two items struck a chord with Nic, as did the phrase “AI Positivity,” as seen in this reddit thread:

I think we can lump GPT’s possible pro-AI bias in with those two items from my list. And I’ll add one more, as I think these three concerns go together:

  • consolidation of power

  • using the black box nature of AI as an excuse for bad actions

  • job displacement/economic upheaval

Thinking about the first and third items is pretty straightforward in my head. (And the whole “We didn’t expect the AI model to do that! Not knowing exactly how it works is part of the deal 🤷‍♂️” excuse is pretty tailor-made for prodigal supervillains and their Comms teams.) If AI continues to advance rapidly, it’ll be capable of doing more and more people’s jobs on the cheap. It’ll look less like “the entire team got laid off” and more like, “they replaced 5 writers and 5 art directors with one gal who’s really good at Writer.ai and one guy who’s boss on Adobe Firefly.” 2 people doing the work of ten, or twenty, etc. At least at first, until the boss man can just click a button and print cash, no employees required. The they in this scenario, of course, are the titans of industry who understand that two salaries + two AI apps = a lot more profit than keeping ten salaries on the books.
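If you want to see why that math is so tempting, here’s the back-of-envelope version in Python. Every number below is invented purely for illustration; I have no idea what real salaries or AI tool licenses cost at any particular shop.

```python
# Back-of-envelope payroll math for the scenario above.
# All figures are made up for illustration only.
AVG_SALARY = 90_000        # hypothetical fully loaded annual cost per employee
AI_TOOL_SEAT = 12_000      # hypothetical annual cost of one AI app license

before = 10 * AVG_SALARY                    # the original ten-person team
after = 2 * AVG_SALARY + 2 * AI_TOOL_SEAT   # two people plus two AI tools

print(f"Before: ${before:,}/year")   # $900,000/year
print(f"After:  ${after:,}/year")    # $204,000/year
print(f"Saved:  ${before - after:,}/year")
```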

I’m no Econ major, but there’s something about labor, capital, and owning the means of production that’s rattling around in my head right now. If advanced AI turns out to be really, really good at writing web content, writing code, writing educational materials, writing legal documents, and doing all the other knowledge work stuff humans get paid to do today, where does that leave us? For starters, it consolidates power in the hands of business owners, investors, public officials, et al. who can choose to pay human workers, pay generative AI companies, or spend a little on each. The big societal risk here, of course, is that the owners decide, “Yeah this AI stuff is great!” cut as many human workers loose as possible, and reap the rewards of bigger profit margins on the no-salary-havin’ backs of machines while the masses go broke, go hungry, and revolt and/or die.

How risky that makes you feel probably depends on a few factors, including where you live. Italy just earmarked $33 million to “shield workers from AI replacement threat.” Universal Basic Income seems to come up in conversation a little more often these days, for whatever that’s worth.

Optimists I’ve talked to about this tend to say the same thing: AI will wind up creating more jobs/prosperity/wealth, not taking it away. We may well have to go through a crisis before we get to the other side, but that’s what always happens during periods of economic revolution. See also: Cotton Gin, Assembly Line, All the Dot Coms, etc.

Another person told me Gen AI for white-collar jobs will probably be like when the car factories got robots and people learned how to work the robots. That was bad for Detroit in particular:

Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.

“Different industries have different footprints in different places in the U.S.,” Acemoglu observes. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

In commuting zones where robots were added to the workforce, each robot replaces about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country — by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.

MIT News, May 2020 (emphasis mine)

Like I said, I’ve been trying to be more optimistic in recent times. But I also watch Succession. And I work in an industry making headlines for mass layoffs. An industry with big-name CEOs talking about holding off on hiring in favor of more AI. Worrying less about our collective economic future would be easier if I and everyone I know weren’t endlessly doomscrolling the same LinkedIn want ads offering low pay to … yup, train AI models.

So what do we do now? Pulling the plug on AI ain’t gonna happen. But I also don’t know that it should. To take a very crude example I don’t know a lot about, do you really think we have a better shot at, say, solving climate change without AI than with it? Yeah, me neither. AI could be a real help in getting a hold on pandemics, and maybe even in curing cancer, too.

AI could legitimately also help us get to a place where there’s plenty of clean water, food, shelter, and medicine for everyone on the planet. I mean, if we’re already using AI to design better computer chips for AI, who says we’re not actually close to an AI-designed nanofabricator that can whip up nutritionally balanced ice cream sundaes on the cheap?

Thing is, we’ve been producing enough food to feed everyone on the planet for a long time now. And yet, people still go hungry. AI or no AI, human behavior matters.

Lest I end this post in a fuzzy haze of vague nihilism, let’s get down to brass tacks. Some concrete ways in which advanced AI could accelerate the three risks mentioned above:

• Business owners replace workers with AI. Costs go down, profits go up, but only the 1% reap the rewards. The rest of us wind up jobless, or in dead end jobs laboring so the 1% can enjoy the fruits of our work.

• Politicians and other global leaders use AI to generate and disseminate propaganda on a scale even greater than what we see today. Facts go further by the wayside as AI-generated text, imagery, audio and video grows even more convincing and manipulative. Citizens are persuaded to give even more power over to a ruling class that, increasingly, rules only to further its own selfish ends.

• The digital divide grows into an uncrossable chasm. Tech companies and other public and private institutions control the closed-source AI models that run the world. Schools exist to train workers to feed the models, not to help students become thoughtful citizens capable of critical thinking. Advanced immersive entertainment placates the labor force with an increasingly potent opiate for the masses.

• Decision-makers increasingly rely on AI models to help make decisions without really understanding how the models work, or whether they’re even any good at deciding. Job interview bots (I’m using “bots” loosely here) are a prime early example of this. Sifting through resumes and conducting first-round interviews is the kind of repetitive, time-intensive task AI could conceivably help with! But what if the AI isn’t any good at it? Employers wind up offering jobs to candidates based on entirely invalid reasoning, apparently.

So yeah, there’s a lot to be concerned about. It’s reasonable to think that Sam Altman wants regulation because he wants OpenAI’s competitors regulated out of business. But it’s also reasonable to think he wants regulation because this AI stuff is fast becoming potent as hell. Maybe one of you reading this who knows more about regulation than I do can reply back and shed some light on what regulated AI might actually look like!

In the meantime, my personal feelings about things like social safety nets and other, cooler uses for AI than making money remain more or less the same. My heart says a world in which technology frees us all up to pursue the top of Maslow’s pyramid is possible. My head says it’s not that easy, never has been, never will be. I’m trying to figure out how to push for the former, perhaps by writing a newsletter (Have a better idea? I’m open to suggestions!). Meantime, the black box, self-learning nature of AI brings up a whole host of other risks in that second bucket I mentioned earlier. I’ll dive in and swim around in that stuff in Part 2 of this post, soon enough.

But I think the next installment of the newsletter might focus on a fun AI Thing. Gotta take the good with the bad, right?

šŸ•µļø Need More?

Searching for a certain kind of AI thing? Reply to this email and let me know what you'd like to see more of.

Until the next thing,

- Noah

p.s. Want to sign up for the One AI Thing newsletter or share it with a friend? You can find me here.