🙈 AI Bias is a Problem

Garbage in, garbage out

Hello, and thanks for reading One AI Thing. Understand artificial intelligence, one thing at a time.

Note: It’s been a few weeks since the last AI Thing. If you noticed the gap, let alone missed the newsletter, I’m sorry! I got busy with a new consulting gig on top of a marathon of family events. I’m aiming to ramp the newsletter back up to a regular cadence, and to figure out exactly what that cadence is soon! And speaking of consulting gigs, feel free to reach out if you’re working on something interesting and need an extra brain. I’m always up for keeping work interesting…

👀 Today’s Thing: AI Bias is a Problem

(l) Me (r) Runway ML’s transformed image of … me?

🤖 AI bias is when an AI model exhibits systematic bias — most often, bias along racial and/or gender lines — in its outputs. AI bias can be easy to spot, as I found out while playing with the image generation app Runway recently (more on that in a moment), or it can be trickier to identify, as in the case of healthcare algorithms recommending far more care for White patients than for Black patients. It shouldn’t be surprising that AI models exhibit bias, given that they’re trained on heaps of human-generated data, and humanity has a long-standing problem with systemic bias. But, surprising or not, AI bias is real, and we need to deal with it before the self-reinforcing nature of machine learning (and of how humans use ML) makes things even worse.

🎧 How is AI doing good in healthcare? Check out my podcast episode on how AI is advancing vaccine distribution.

📖 Backstory

☞ Recently, I was complaining about some pro sports thing or another while texting with a friend. My friend teased me — and rightfully so — about being “so Kravvy” (i.e., so me in my complaining). I copied and pasted his text into ChatGPT, adding some directions to turn it into an ’80s novelty-rap-style song about a guy who’s always so Kravvy all the time. The resulting lyrics were pretty decent for a first draft (GPT excels at generating new text in the style of a specific kind of old text). But before sending them back to my friend, I had an idea. And maybe I should have known better, but the idea kinda went off the rails once Gen AI got ahold of it.

☞ I copied a snippet of the lyrics, found a recent photo of myself, and fired up Runway’s Image-to-Image transformer tool. The tool asks for two inputs: an image and a text prompt. I gave it this image:

Yes, it’s an attempt at a passport photo: No glasses, and no smiling!

And I gave it this text prompt:

Dressed in the 90s, he's a vintage soul,

Rocking the block with the old school role.

Backward cap, baggy jeans, bright white kicks,

Out there hustling his fly magic tricks.

☞ Runway did its thing, and returned this image as output:

Screencap via RunwayML

To be very clear, I didn’t specify anything more than what’s in the screencap above. I loaded the first image (my photo), pasted the above lyrics into the Prompt field, and hit Generate. Nothing about race, skin tone, etc., appears in either the photo or the lyrics. The model seemed to pick up specifically on the words “backward cap” and “jeans” — the person in the new image has a typically AI-weird garment on his head that at least references the adjustable strap found on the back of most baseball caps, and his typically AI-weird hoodie/poncho top looks to be denim. There’s also some resemblance between my facial features, expression, and shoulders/torso and those of the AI-generated person.

But, elephant in the room time … the model decided that my photo plus those lyrics equaled a Black man. The opaque nature of AI models like this makes it impossible to know for sure, but my educated guess is something along the lines of “rap lyrics about bright white kicks and fly magic tricks = Black person.” Which, uh, kinda seems like bias, no?
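For the technically curious: Runway doesn’t publish its internals, but here’s a minimal sketch of what an image-to-image generation call looks like in code, using the open-source diffusers library as a stand-in for Runway’s hosted tool. The model name, strength, and guidance values here are my illustrative assumptions, not Runway’s actual settings.

```python
# A minimal sketch of an image-to-image transform, assuming the
# open-source diffusers library as a stand-in for Runway's hosted tool.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Illustrative model choice; not necessarily what Runway actually runs.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The two inputs the tool asks for: an image and a text prompt.
init_image = Image.open("passport_photo.jpg").convert("RGB").resize((512, 512))

prompt = (
    "Dressed in the 90s, he's a vintage soul, "
    "Rocking the block with the old school role. "
    "Backward cap, baggy jeans, bright white kicks, "
    "Out there hustling his fly magic tricks."
)

# strength controls how far the output may drift from the input image.
# Note: nothing in the prompt or the photo specifies race or skin tone.
result = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5)
result.images[0].save("transformed.png")
```

The key point: everything the prompt leaves unspecified (race, skin tone, setting) gets filled in from the model’s training-data priors, and that’s exactly where bias sneaks in.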

🔑 Keys to Understanding

🥇 My experience with Runway is far from the first time Generative AI has shown bias. Bloomberg’s recent deep-dive into Stable Diffusion’s bias problems opens with this banger of a scene-setter:

The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.

The Bloomberg piece is very well done (great data visualizations!) and well worth your time; it leverages the authors’ own research to show how Stable Diffusion’s outputs devalue women and people of color. Just as importantly, the authors, Leonardo Nicoletti and Dina Bass, get into some of the myriad ways Generative AI bias fuels very real harms:

Perpetuating stereotypes and misrepresentations through imagery can pose significant educational and professional barriers for Black and Brown women and girls, said Heather Hiles, chair of Black Girls Code.

“People learn from seeing or not seeing themselves that maybe they don’t belong,” Hiles said. “These things are reinforced through images.”

Black women have been systematically discriminated against by tech and AI systems like commercial facial-recognition products and search algorithms.

HUMANS ARE BIASED. GENERATIVE AI IS EVEN WORSE - Bloomberg Tech

🥈 The healthcare AI study I linked to at the top was cited in this NPR article about the risk/reward dilemma AI poses to medicine. AI has already shown its prowess in aiding with medical tasks ranging from drug discovery to helping doctors improve their bedside manner. The technology isn’t going away, but the stakes for managing and mitigating AI bias are as high as they get. As a data scientist quoted in the piece put it, “If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system.”

🥉 Bloomberg: “As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what’s real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society.”
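That snowball effect is easy to see in a toy simulation. To be clear, this is a hypothetical sketch, not anyone’s production pipeline: assume a generator that slightly over-produces whichever group is over-represented in its training data, and assume its outputs get recycled as the next generation’s training set.

```python
# Toy simulation of bias compounding when model outputs are recycled
# as training data. All numbers are hypothetical, for illustration only.

def train_and_generate(share_of_group_a: float, amplification: float = 1.1) -> float:
    """Model a generator that over-represents the majority group.

    If group A is 60% of the training data, the model emits group A
    slightly more than 60% of the time (capped at 100%).
    """
    return min(1.0, share_of_group_a * amplification)

share = 0.55  # start with a mild real-world skew: 55% group A
for generation in range(10):
    share = train_and_generate(share)
    print(f"generation {generation + 1}: group A is {share:.0%} of the data")

# A 5-point initial skew snowballs toward 100% within a handful of
# generations once outputs are fed back in as inputs.
```

Real models are vastly more complicated, but the dynamic (small skew in, bigger skew out, outputs recycled as inputs) is exactly the compounding the Bloomberg piece warns about.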

🕵️ Need More?

Searching for a certain kind of AI thing? Reply to this email and let me know what you'd like to see more of.

Until the next thing,

- Noah

p.s. Want to sign up for the One AI Thing newsletter or share it with a friend? You can find me here.