🚨 AI is Risky Business, Pt 1
A reader request for some both-sides-ism
Hello, and thanks for reading One AI Thing. Get smarter about artificial intelligence, one thing at a time.
Today's Thing: AI is Risky Business, Pt 1
One Big Thing
I'm playing with a new format today. This post will be more of a traditional essay than the broken-out/scannable format I've used so far. I've been thinking about mixing in the occasional longform essay, to get deeper into a topic than short blurbs allow for. That's the idea, anyway. Let me know what you think!
The Societal Risks of Advanced AI
A reader named Nic replied to my call for AI Things you'd like to know more about:
i could use more of "both sides" in the AI newsletter - is there a specific reason you don't want to spend too much time on risks?
There's definitely not a reason! At least not on a conscious level. I think I've actually been trying to be more optimistic over the past several months, generally, so maybe that's crept into the newsletter. Also, I've been geeking out on AI "from afar" for years. Now that so many people can and are touching the tech for themselves, these are incredibly exciting times. So maybe it's just more fun to write about that than the scary stuff, like how there's an argument to be made (quite easily, in fact) that we're riding the razor's edge of dystopia, racing blindly and with fingers crossed towards an unknowable savior, and hoping we don't collectively outsmart ourselves to death before we get there. There could be some of that, the "have fun while writing about (and during!) very uncertain times" factor. 🤷‍♂️
If anything, I've been trying to balance pros and cons to some degree in each post. Maybe I'll run all of my posts to date through a sentiment checker to see how I'm actually doing ...
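For the curious: even a toy script would give a rough read on how my posts are trending. Here's a minimal sketch in Python, assuming nothing fancier than a couple of illustrative word lists (a real pass would use a proper sentiment model; the word lists and function name here are just made up for the example):

```python
# Toy sentiment check: count upbeat vs. gloomy words to score a post.
# The word lists are illustrative only, not a real sentiment lexicon.
UPBEAT = {"exciting", "fun", "helpful", "optimistic", "great", "rewards"}
GLOOMY = {"risky", "scary", "dystopia", "layoffs", "broke", "revolt"}

def optimism_score(text: str) -> float:
    """Return (upbeat - gloomy) / total sentiment words, in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    up = sum(w in UPBEAT for w in words)
    down = sum(w in GLOOMY for w in words)
    return 0.0 if up + down == 0 else (up - down) / (up + down)

print(optimism_score("AI is exciting and fun, but also risky and scary."))  # → 0.0
```

A score near +1 would mean I'm all sunshine, near -1 all doom; the sentence above splits the difference, which honestly sounds about right for this newsletter.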
Right up top I'll say this about the risks of advanced AI: From my point of view, the risks fall into two big buckets. The first is human risks. What are we reasonably afraid humans might do with this technology? The second is technological risks. What are we afraid might happen if the technology "gets out of control"?
The second bucket is important. So important, in fact, that I'll address it with one or maybe two sentences and then leave it for another post: To paraphrase Geoffrey Hinton, it's entirely possible that humanity is just a passing phase in the evolution of superintelligence. We humans may exist merely to invent digital intelligence, and once that intelligence has sufficiently advanced, it will move on without us. I'll leave it at that; you can scare yourself half to death trying to figure out what that might mean, or, y'know, put a pin in it and read on.
The first bucket is scarier to me in the short term, perhaps because it's easier to imagine. Humans have shown we're capable of being all kinds of awful to one another. (We can also be excellent, don't ever forget!) Give a human a new super technology, and you gotta take the good with the bad. Gunpowder, electricity, nuclear fission, social media ... all the big technologies come with risks and rewards. So, again, if advanced AI goes where the experts think it's going to go, we need to be ready for some bad actors acting badly.
I'll also say right here up top that I'm not a techno-utopian, but I do believe advanced AI has the potential to really, really help humanity out. But our current society feels much like a barbell with two increasingly large weights on either end of a rapidly shrinking bar. The haves keep having, the have-nots keep getting less, and pretty soon there ain't gonna be any middle anymore. AI, like any sufficiently useful technology, can be used to exacerbate inequality or to smooth it out. It's up to us humans.
⚠️ How Humans Might Use AI to Destroy Other Humans
That said, I wrote back to Nic for some clarification on what sorts of risks he was thinking about. I sent this top-of-my-head list:
data bias
mis-alignment (of AI goals w/human goals)
energy consumption/carbon emission (big AI = big data centers = big power)
bad actors weaponizing AI
deploying AI-written code with security holes in it
consolidation of power
using the black box nature of AI as an excuse for bad actions
job displacement/economic upheaval
The last two items struck a chord with Nic, as did the phrase "AI Positivity," as seen in this reddit thread:

I think we can lump GPT's possible pro-AI bias in with those two items from my list. And I'll add one more, as I think these three concerns go together:
consolidation of power
using the black box nature of AI as an excuse for bad actions
job displacement/economic upheaval
Thinking about the first and third items is pretty straightforward in my head. (And the whole "We didn't expect the AI model to do that! Not knowing exactly how it works is part of the deal 🤷‍♂️" excuse is pretty tailor-made for prodigal supervillains and their Comms teams.) If AI continues to advance rapidly, it'll be capable of doing more and more people's jobs on the cheap. It'll look less like "the entire team got laid off" and more like "they replaced 5 writers and 5 art directors with one gal who's really good at Writer.ai and one guy who's boss on Adobe Firefly." Two people doing the work of ten, or twenty, etc. At least at first, until the boss man can just click a button and print cash, no employees required. The "they" in this scenario, of course, are the titans of industry who understand that two salaries + two AI apps = a lot more profit than keeping ten salaries on the books.
I'm no Econ major, but there's something about labor, capital, and owning the means of production that's rattling around in my head right now. If advanced AI turns out to be really, really good at writing web content, writing code, writing educational materials, writing legal documents, and doing all the other knowledge work humans get paid for today, where does that leave us? For starters, it consolidates power in the hands of business owners, investors, public officials, et al., who can choose to pay human workers, pay generative AI companies, or spend a little on each. The big societal risk here, of course, is that the owners decide, "Yeah, this AI stuff is great!", cut as many human workers loose as possible, and reap the rewards of bigger profit margins on the no-salary-havin' backs of machines while the masses go broke, go hungry, and revolt and/or die.
How risky that makes you feel probably depends on a few factors, including where you live. Italy just earmarked $33 million to "shield workers from AI replacement threat." Universal Basic Income seems to come up in conversation a little more often these days, for whatever that's worth.
Optimists I've talked to about this tend to say the same thing: AI will wind up creating more jobs/prosperity/wealth, not taking it away. We may well have to go through a crisis before we get to the other side, but that's what always happens during periods of economic revolution. See also: Cotton Gin, Assembly Line, All the Dot Coms, etc.
Another person told me Gen AI for white collar jobs will probably be like when the car factories got robots and people learned how to work the robots. That was bad for Detroit, in particular:
Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.
"Different industries have different footprints in different places in the U.S.," Acemoglu observes. "The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere]."
In commuting zones where robots were added to the workforce, each robot replaces about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country, by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.
Like I said, I've been trying to be more optimistic in recent times. But I also watch Succession. And I work in an industry making headlines for mass layoffs. An industry with big-name CEOs talking about holding off on hiring in favor of more AI. Worrying less about our collective economic future would be easier if I and everyone I know weren't endlessly doomscrolling the same LinkedIn want ads offering low pay to ... yup, train AI models.
So what do we do now? Pulling the plug on AI ain't gonna happen. But I also don't know that it should. To take a very crude example I don't know a lot about, do you really think we have a better shot at, say, solving climate change without AI than with it? Yeah, me neither. AI's great for getting a hold on pandemics, and maybe even curing cancer, too!
AI could legitimately also help us get to a place where there's plenty of clean water, food, shelter, and medicine for everyone on the planet. I mean, if we're already using AI to design better computer chips for AI, who says we're not actually close to an AI-designed nanofabricator that can whip up nutritionally balanced ice cream sundaes on the cheap?
Thing is, we've long been producing enough food to feed everyone on the planet. And yet, people still go hungry. AI or no AI, human behavior matters.
Lest I end this post in a fuzzy haze of vague nihilism, let's get down to brass tacks. Here are some concrete ways advanced AI could accelerate the three risks mentioned above:
• Business owners replace workers with AI. Costs go down, profits go up, but only the 1% reap the rewards. The rest of us wind up jobless, or in dead-end jobs laboring so the 1% can enjoy the fruits of our work.
• Politicians and other global leaders use AI to generate and disseminate propaganda on a scale even greater than what we see today. Facts fall further by the wayside as AI-generated text, imagery, audio, and video grow even more convincing and manipulative. Citizens are persuaded to hand even more power over to a ruling class that, increasingly, rules only to further its own selfish ends.
• The digital divide grows into an uncrossable chasm. Tech companies and other public and private institutions control the closed-source AI models that run the world. Schools exist to train workers to feed the models, not to produce thoughtful citizens capable of critical thinking. Advanced immersive entertainment placates the labor force, an increasingly potent opiate for the masses.
• Decision-makers increasingly rely on AI models to help make decisions without really understanding how the models work, or whether they're even any good at deciding. Job interview bots (I'm using "bots" loosely here) are a prime early example of this. Sifting through resumes and conducting first-round interviews is the kind of repetitive, time-intensive task AI could conceivably help with! But what if the AI isn't any good at it? Employers wind up offering jobs to candidates based on entirely invalid reasoning.
• As above, but re: AI exacerbating racial bias in lending decisions, racial bias in healthcare, racial bias in criminal justice, racial bias in education, and ... well, you get the picture.
So yeah, there's a lot to be concerned about. It's reasonable to think that Sam Altman wants regulation because he wants OpenAI's competitors regulated out of business. But it's also reasonable to think he wants regulation because this AI stuff is fast becoming potent as hell. Maybe one of you reading this who knows more about regulation than I do can reply back and shed some light on what regulated AI might actually look like!
In the meantime, my personal feelings about things like social safety nets, and other, cooler uses for AI than making money, remain more or less the same. My heart says a world in which technology frees us all up to pursue the top of Maslow's pyramid is possible. My head says it's not that easy, never has been, never will be. I'm trying to figure out how to push for the former, perhaps by writing a newsletter (Have a better idea? I'm open to suggestions!). Meantime, the black box, self-learning nature of AI brings up a whole host of other risks in that second bucket I mentioned earlier. I'll dive in and swim around in that stuff in Part 2 of this post, soon enough.
But I think the next installment of the newsletter might focus on a fun AI Thing. Gotta take the good with the bad, right?
🕵️ Need More?
Searching for a certain kind of AI thing? Reply to this email and let me know what you'd like to see more of.
Until the next thing,
- Noah
p.s. Want to sign up for the One AI Thing newsletter or share it with a friend? You can find me here.