DEEPTI’S RESEARCH DISPATCH

Categorizing the Risks and Concerns around Generative AI — Part 2

A roundup of the coverage: paranoia edition

Deepti Kannapan
8 min read · Aug 3, 2023



This is part 2 of my research roundup on the concerns surrounding generative AI. I find it useful to sort the risks into categories, to better understand the range and scope of potential issues and to think about potential solutions (especially regulation).

I give more background and context in Part 1, so I’d recommend starting there if you are new to the topic.

Since this is a research roundup, it is heavy on links and excerpts from other articles, which can make for a dense read. I encourage skimming, jumping to the sections you are interested in, and following the linked sources to aid your own research.

In Part 1, I covered the following categories of risks and concerns:

  1. People trusting false AI answers
  2. Misinformation and disinformation
  3. Perpetuating bias
  4. Manipulation and mental health concerns

Now, continuing on to the new categories:

5. Inappropriately sourced and mishandled training data
