The End of the Beginning of AI
We've had some time to get to know AI. Now it's time to get smart about it.
Is it just me, or are we facing down a fresh apocalypse approximately every year? Wasn’t it 10 years under Obama? Is this related to inflation somehow?
Maybe after years of living with impending cataclysm, we’re not so much exhausted from it as we are addicted to it.
That would explain the AI apocalypse — not a natural phenomenon, but our chosen one. It’s the ending that comes when the computers take all of our jobs, then become sentient, and destroy the world. Or something.
The likelihood of real destruction by powerful yet unpredictable computer software is very low. The chances are much greater that we’ll destroy ourselves in a familiar nostalgic way, by killing each other. That’s an easy prediction to make, since it’s come true here in the USA.
The immediate threat from AI is real but a bit less deadly. Although rarely contemplated in our nihilistic fantasies, a real and gathering menace has emerged. People and companies are eager to prey on our AI hopes or fears, perhaps both. We can probably manage the risk to our pocketbooks, but it chips away at our understanding of what AI is and isn’t, which is a real danger on its own.
Product management, software development, and the creative industries are all flooded with misleading and fraudulent claims about the power of AI. Every new innovation is “powered by AI.” Every product has a ChatGPT plugin, even when there’s no clear reason for it.
In the midst of this confusion, the Federal Government has published an extraordinarily thoughtful, helpful, and sincere document that is both informative and entertaining.
They want you to get smart about AI. They’re putting out a warning about hucksterism and misinformation. They’re letting us know that they’ve totally got our backs on this whole AI thing.
In their wonderful post called “Keep your AI claims in check” the agency is putting companies on written notice that they’re being watched. The tone of the article manages to be stern, slightly jovial, and mildly condescending all at the same time. It’s like the substitute teacher for AP calculus just arrived, found the classroom in complete chaos, and is threatening us all with detention. Color me impressed!
Here’s an excerpt to help you understand what the FTC wants you to know. It is, essentially, a formal invitation from the Feds to go ahead and fuck around, but only if what you’d really like to do is find out.
Check this out:
If you think you can get away with baseless claims that your product is AI-enabled, think again…
[W]e’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions…
You don’t need a machine to predict what the FTC might do when those claims are unsupported.
Awesome, right? I thought FTC stood for Federal Trade Commission but from now on I’m expanding the acronyms to “Fuck These Clowns.”
Give it a read. You’ll learn something about what AI is and isn’t. It’s rare to read anything interesting on this topic from somebody who has nothing to sell you.
Great work, Federal Trade Commission!
When Artificial Intelligence Isn’t
I was talking with a colleague¹ about AI a few weeks ago. They pointed out that when we are talking about the amazing output of an AI, we should always ask the question “Compared to what?”
It’s easy to get carried away. We’re all still wrestling, after all, with our understanding of what AI is good for. In the earlier piece I mentioned, I discussed the fable of the talking horse. Overwhelmed by the discovery that a horse is able to talk, we are disinclined to pay attention to exactly what it says.
That’s not good enough, though, if the horse is widely expected to cause us all to get fired. It’s increasingly plain to see and increasingly urgent that we put our critical thinking skills to work on this.
Our imaginations overwhelmed, we’re in danger of falling short on skepticism. My friend’s suggestion is wise: Critically compare the output to the alternatives, or risk gravely misunderstanding what’s really happening around you.
Credulous optimism also leads us to underestimate or even ignore the real beauty and power that human creativity brings to the world. It’s okay to be excited. It’s not okay to minimize the value or meaning of what real people contribute to our world and our culture.
Comparing Generative AI
Generative AI has made great leaps forward, without question. It is indisputably amazing that computers are able to make attractive, appealing creative works on their own.
Hold on, though. When we say the computer’s work is amazing — amazing compared to what? Compared to another computer? To a talented human being? To an unskilled novice?
When I heard that a publication in the Journal of the American Medical Association had taken up this very approach, one eyebrow shot up so high that my hat fell off.
The study just screams out “tweet me!” In it, health questions asked by patients were supplied to a panel of real physicians, as well as to ChatGPT. A team of licensed healthcare professionals evaluated the answers and scored them for “quality” and “empathy.”
ChatGPT outperformed the human doctors, because of course it did. Otherwise we wouldn’t be reading about this study, and it wouldn’t be ripping across the socials, and I’d still have my hat.
If you actually read the study, things get pretty weird. I learned that the “real physicians” that ChatGPT was compared against were… I almost don’t want to tell you. I advise you to clear a forehead-shaped space on your desk before you read this excerpt from the study:
we collected public and patient questions and physician responses posted to an online social media forum, Reddit’s r/AskDocs.
That’s right, the “real physicians” in the JAMA study were “verified” users of a social networking site.
The answer to “compared to what” in this case was, essentially, the blue checkmark crowd. The fact that they were beaten by ChatGPT says more about the verified doctors than it does about the AI, if you ask me.
So, ChatGPT is a better doctor than the checkmarked people on Reddit. I can live with that. I think I am handling it really well.
This example of Midjourney output for the prompt “Pichachu Robot” is undeniably compelling. The AI envisions the omnipresent, electrically charged Pokémon with convincing accuracy, even though the human creator spelled its name wrong.
The irony is delicious and yet so frustrating. AI is accelerating like the fighters in the Battlestar Galactica credits. But spell check and autocorrect are standing perfectly still, frozen in time. It’s almost like they’re holding their breath in hopes we don’t notice they’re still around.
The AI-generated image is attractive, even though Pikachu looks to be made of PowerBait. If I assigned this task to a competently trained designer, or a talented effects artist, would I be thrilled with this result? I’m not sure. I might hope for something… different.
Joshua Dunlop is a London-based illustrator and concept artist who is clearly insanely talented. Take a look at this image of Sandshrew that he created, and tell me how you think it compares:
I’m not a professional, so I won’t insult Joshua by artlessly pointing with my sausage fingers at the details I think put his work in a different category from generative output. Outstanding craftspeople like Joshua Dunlop should know there are people out here who see the difference, and we care.
Here’s a reminder, brought to you by the FTC. Watch out for these two types of people who aren’t interested in critically evaluating the output of AI, and asking “What’s this good for?”
Those who don’t know better. They don’t understand the technology, and they don’t have the experience or taste to know good from bad.
Those who are selling something.
It’s tough to believe that the authors of the JAMA study fall into the former category. But I forgive them, if they did. Like the rest of us, they’re probably overwhelmed with misinformation about what’s really happening in AI.
Or maybe this is exactly the kind of thing the FTC is warning us about.
A 3D Analogy
In my household, we’ve gotten pretty handy at 3D printing. One of my kids (10 years old) made this adorable octopus all by himself. Isn’t it cute?
The giant ampersand was made by a talentless, color-blind, unremarkable middle-aged man. Amazing, right?
Take a look at some other octopus examples and you’ll see that our kid’s octopus is actually quite crude. That is, if you compare it to others made by people who know what they’re doing.
When we show the octopus to people who aren’t familiar with the technology, they’re amazed. They imagine 3D-printed jewelry, flatware, furniture, or (my favorite) credit cards. All good ideas!
In fact, 3D printing technology is hard at work in the hands of professionals in most of those industries. If we lay people let ourselves get carried away with our own misunderstandings of the reality of technology, we risk squandering its real potential.
At-home 3D printing delivers an amazing capability compared to what we could make out of clay or balsa wood. Compared to Lego, it puts a new world of possibilities in the hands of a child, so they can create things from their imagination much more vividly than would have been possible before.
To a child, it’s a toy. In the hands of professionals, 3D printing provides the capability to rapidly design and produce complex, high-fidelity prototypes much earlier in the production process. This aids in the creation of better products, accelerates production time, and lowers production cost.
AI Where and When
The real potential of generative AI is to accelerate, empower, embolden, and stimulate the imagination of talented content creators. My former boss, Adobe’s Scott Belsky, got this exactly right in this post which imagines how AI could speed up the work of a designer in Photoshop.
What professionals in almost every field need most from AI is the same. We all need repetitive tasks automated. We need smarter defaults, and software that will encourage us to make choices that are more likely to be successful. This will free up our time, which we’ll use to make truly amazing things happen.
In developing the idea of a computer helping you quickly make smart choices, my brilliant colleague and friend Tim Brown described the idea as “snap to beautiful.” It means that the computer makes it easier for you to do something right than something random, unhelpful, or harmful to your goals.
This is perfect work for AI that’s been shown to know something about what success looks like. If we expect that same AI to complete the task from start to finish, we’ll find in every case there’s a point at which the computer cannot find its ass with its own two hands. So to speak.
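The “snap to beautiful” idea can be made concrete with a toy sketch. This is a hypothetical illustration of the concept, not anything Tim Brown or Adobe actually built; the typographic scale and function names are my own invention.

```python
# Hypothetical "snap to beautiful" sketch: nudge a user's chosen font
# size to the nearest value on a curated typographic scale, so the
# easy choice is also a good one. The scale here is illustrative.

TYPE_SCALE = [12, 14, 16, 20, 24, 32, 48, 64]

def snap_to_scale(requested_size: int) -> int:
    """Return the scale value closest to what the user asked for."""
    return min(TYPE_SCALE, key=lambda size: abs(size - requested_size))

print(snap_to_scale(23))   # 24 -- close to what you asked, on the scale
print(snap_to_scale(100))  # 64 -- the scale's sensible ceiling
```

The point isn’t the arithmetic; it’s the interaction design. The software quietly steers you toward choices that are known to work, without taking the pen out of your hand.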
The role of all technology, since we began creating it, is to accelerate and empower our work. It’s most effective when encouraging our best creative choices, and minimizing the harm caused by the bad ones.
The best example of this that I’m aware of is AI code completion, like GitHub Copilot. It recognizes when you’re trying to make something that’s been made before. It tries to sort of auto-complete your code with existing code that’s known to work. It’s not quite a “snap to functional” experience for developers, but it’s close.
Another example that I worked on in a previous role is accounting software that guesses at the appropriate categorization of expenses, to save you the effort of entering a category for each one. Even if it’s wrong 20% of the time, it’s saving you 80% of the work. Can it do the entire expense report from start to finish, fulfilling the job loss fantasies of credulous pundits? Not even close.
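To show how modest this kind of labor-saving feature can be while still being useful, here’s a toy sketch of expense auto-categorization. This is my own illustrative example, not the actual product’s implementation; real systems would use a trained model rather than a keyword table.

```python
# Toy sketch of expense auto-categorization: suggest a category from
# the memo text, and let the human confirm or override. The keywords
# and categories below are hypothetical.

KEYWORD_CATEGORIES = {
    "uber": "Travel",
    "lyft": "Travel",
    "marriott": "Lodging",
    "starbucks": "Meals",
}

def suggest_category(memo: str, default: str = "Uncategorized") -> str:
    """Return the first category whose keyword appears in the memo."""
    memo_lower = memo.lower()
    for keyword, category in KEYWORD_CATEGORIES.items():
        if keyword in memo_lower:
            return category
    return default  # the user fills this one in by hand

print(suggest_category("UBER TRIP 1234"))    # Travel
print(suggest_category("Joe's Crab Shack"))  # Uncategorized
```

The software proposes; the human disposes. Even this crude version saves the typing for every expense it gets right, and a wrong guess costs the user nothing more than the correction they would have typed anyway.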
Labor-saving solutions are compelling, valuable experiences for users. They’re also the antithesis of the science fiction that many expect from AI, based on what they’re reading in the news.
It’s true of every field I can think of — from medicine to visual effects to writing to advertising to type design. All would benefit most from the approach of using AI to save labor, streamline repetitive work, double-check decisions and point out unintended or unexpected consequences.
These capabilities would free up our time, enabling us to live happier, healthier, fuller lives. There’s no need for the success of AI to threaten the livelihoods of talented, hard-working, creative people, or the way they earn their living.
That means that the AI apocalypse previously planned for 2023 is hereby canceled. We’ll have to find another way to destroy ourselves this year.
¹ I can’t remember who! Speak up, colleague, so I can correct this.