DEEPTI’S RESEARCH DISPATCH
Categorizing the Risks and Concerns around Generative AI
I’ve been following the news and opinion coverage of generative AI since it first came on the scene.
Writers have raised a variety of concerns over potential risks of widespread use of text and image generators like ChatGPT, DALL-E, Midjourney, and the many applications being developed on them.
I find it useful to put the risks into categories, to better understand the range and scope of potential issues.
Though it may make for an alarming read, I believe all these problems to be solvable. Skynet-like dystopian prophecies are the flip side of the tech industry’s self-mythologizing hype, and just as baseless. The actual risks are more mundane, but still worrying.
Understanding the risks is the first step to figuring out how to address them through regulation or other means.
This roundup covers the first four categories of risks; I’ll cover the remaining categories in Part 2.
Since this is a research roundup, it is heavy on links and excerpts from other articles, and it may make for a dense read. I encourage skimming, jumping to the sections that interest you, and following the links to aid your own research.
The risk categories are:
1. People trusting false AI answers
Generative AI frequently produces false information, commonly called “hallucinations.”
[…], they don’t understand the physical world or how to use logic, are terrible at math, and, most germane to searching the internet, can’t fact-check themselves. Just yesterday, ChatGPT told me there are six letters in its name. These language programs do write some “new” things — they’re called “hallucinations,” but they could also be described as lies. Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs. The new Bing reportedly said that 2022 comes after 2023, and then stated that the current year…