The internet has been flooded recently with AI-related doom and gloom. AI is coming for your jobs, your art, and your writing. Nothing is real; the internet is dead space filled with bots. Our time as the top species is rapidly coming to an end and we will all soon be replaced with AI.
But the more I research what is driving the news, the more underwhelmed I am. AI isn’t creating great art, it’s creating mediocrity. It’s not taking your job, but predatory companies would sure like to use it to take your job. It’s not creating new problems so much as augmenting existing ones and allowing bad-faith actors to massively increase their output of fake news, spam, and phishing attacks.
The internet buzz around ChatGPT grew to a deafening roar when ChatGPT passed a law exam.*
*A big caveat here: someone passed a mock exam using answers from ChatGPT. ChatGPT is a long way from being able to spontaneously take an exam, and it cannot practice law, nor can it be used to practice law.
That said, it passed with a mediocre C+ average. And the first attempt to use ChatGPT in the wild to practice law ended, well, badly. Like really, really badly. This video breaks it down well, so I won’t go into detail. But the short version is that ChatGPT tends to make things up.
Insert video
Just how amazing is this?
Not very, unfortunately. ChatGPT is good at aggregating information from many sources, kind of like Wikipedia. In fact, I bet a student copying and pasting from Wikipedia could have passed most of the exams that ChatGPT has passed.
Can ChatGPT now be used to practice law?
No, absolutely not. The problems addressed in the video notwithstanding, every legal motion must be signed off by a human being with a law license, who then takes responsibility that the motion is in accordance with legitimate legal practice.
The same can be said for medicine. Passing the boards using ChatGPT doesn’t grant you the right to practice medicine. Every medical decision still needs to be made by a human being.
Is it useful?
If the problem of ChatGPT making things up (experts call it “hallucinating”) can be solved, then definitely. The information would still need to be read and interpreted by an expert, but ChatGPT could:
Use a set of symptoms to suggest possible tests to run and diagnoses to make, greatly increasing a doctor’s efficiency.
Create the legal framework, with the necessary citations, for a lawyer to build a case around.
Fans of ChatGPT like to point out that it could drastically decrease routine paperwork by writing drafts of legal contracts, reducing medical charting, etc. The problem with this is that we already have, and have had for a long time, tools for this. Lawyers use boilerplate contracts and then modify them to fit the needs of the client. They even use autofill forms to avoid having to add things like the names of the various parties into a form over and over. Medical charting is mostly done on the computer these days, and is mostly click-through charting with drop-down menus of standard options.
What about creative writing?
Authors are terrified that ChatGPT will flood the market with AI-generated content that will drown out already beleaguered authors and reduce our already meager earnings to zero. There are two reasons why this probably won’t happen.
The first has to do with how AI works in the first place, and why it only got a C+ on its legal exam. ChatGPT is a writing “aggregator.” It collects thousands of examples of writing and then constructs a similar-sounding text, putting one word after another in a way that resembles natural text. There are a couple of shortcomings to this approach.
AI has no idea what it is doing or why. It’s simply copying other texts. This is why it regularly “hallucinates,” or makes up information. It’s stringing text together in a statistically plausible way, but not one that is grounded in any reality. So if it’s writing a legal document, it will produce realistic-sounding citations, but it can’t tell the difference between real and fake citations. That’s why it’s untrustworthy.
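You can see the mechanism in a toy version. The sketch below is a simple bigram model, a vastly simpler cousin of what ChatGPT actually does, and the mini-corpus and function names are invented for illustration. The point is the same: every next word is chosen because it plausibly follows the previous one, with no check on whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# A tiny stand-in for the web-scale text a real model trains on.
corpus = (
    "the court finds the motion persuasive . "
    "the court denies the motion . "
    "the court grants the motion in part ."
).split()

# Record which words follow each word (a "bigram" model).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    """Emit text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Every run produces grammatical-sounding legalese like “the court grants the motion persuasive,” because the model only knows what tends to follow what, not what any of it means. Scale that up by billions of parameters and you get fluent paragraphs with confident, fabricated citations.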
But the second aspect of this is why I’m not afraid of AI-generated content. AI-generated text is, almost by definition, average. That’s the whole point of aggregation: to learn what is average and apply it.
I like to think my writing is above average. Maybe I’m fooling myself, but I am going to stick to that belief. And AI-generated content is never likely to be better than average. If anything, it aspires to be more average. This tendency toward average-sounding text is likely to make the majority of AI-generated content boring and repetitive.
AI as it exists now is not designed to create great, or even good, writing. It’s designed to create realistic and believable but, on the whole, average writing.
The real danger of AI writing: unexamined bias
There are, in my opinion, two real dangers to AI writing, its tendency to make up things that aren’t true but sound plausible, and our own biases. Because AI aggregates the writing of millions of people from across the internet, it is also aggregating our biases.
When AI “hallucinates” completely new information, that can present a danger if the information is taken at face value as fact. Typically, though, people quickly recognize it as false. The real trouble is that many of AI’s fabrications aren’t so obvious, and those do in fact get taken as fact.
Because we have a history of racism, sexism, and homophobia, AI tends to aggregate these biases and incorporate them into its output in ways that aren’t always directly obvious. And this presents a huge danger of those biases spreading and becoming increasingly accepted as facts.
AI art and the dead internet
What about AI art?
AI art faces many of the same problems as AI writing. For example, AI art also hallucinates, frequently producing unreal images in response to prompts. But then again, in art, the question might be what is real and what is unreal.
There is also evidence that AI art is getting worse in large part because of AI art. AI creates art the same way it creates text, by scraping the internet for large numbers of examples of art and then creating something that appears similar. But as more and more people create and share AI-generated art, a greater portion of the art being scraped is AI generated, and the problems with it get amplified.
Is the internet dead?
There is a wild new theory called the dead internet theory. What I love about this theory is that if it isn’t true, it likely will be soon enough.
The theory goes like this. Humans create bots to interact with others on the internet. Studies already suggest that the majority of interactions we have on the internet are not with people but with bots. We go to a website and immediately get offered assistance, but these aren’t call center employees, as we might think, but bots trained to act like call center employees. Dating sites often use bots to pretend to be women, to make men think there are more women on the site than there really are. And of course, plenty of people create Twitter bots designed to drive engagement to posts from a certain account or on a certain topic.
So along comes another person who creates a bot to scrape commercial sites for certain information by interacting with the call center bots. Others create fake Twitter accounts that act in a way to attract Twitter bots to boost their account. And now we have bots interacting with bots.
This is the dead internet. There are few, if any, actual people involved in it. We go to commercial sites and see products recommended by bots, not people. We go on social media and see trends that are driven by bots, rather than humans. Blogs are written by AI. Pictures are created by AI. Both are shared by automation. They receive likes and engagement, but much of it comes from bots, either bought by promoters or received because the content matches some criteria that other bot manufacturers value.
And it’s all driven by algorithmic averages.
Let’s imagine the AI-generated influencer of the future.
A seedy entrepreneur wants to make money by creating an artificial “influencer” that will travel the world and sell content. So he goes to an AI art generator and starts to load in prompts like “beautiful woman.” And let’s be honest, it generates a white woman because racism.
And because our seedy entrepreneur isn’t the only such entrepreneur out there, a fair amount of the “art” scraped to create this ideal “beautiful woman” is also AI-generated. So beauty standards become exaggerated. Thin becomes ridiculously thin. Big breasts become even bigger. The proportions are unrealistic for most women.
But something else happens at the same time: these AI-generated beauties become more average as well. Distinct forms of beauty are erased. Features that make someone stand out don’t work in this new artificial world of beauty.
Where does the entrepreneur send his new AI influencer? He scrapes that from top-ranking keywords like “best tourist destination.” And so she only goes to the top tourist destinations and is only pictured (artificially, of course, through more AI-generated images) at the best beaches and the most famous sites.
And it’s accompanied by AI-generated text, stuffed with SEO keywords.
Once he’s created his new influencer, it is time to make her famous. To do that, he creates a small bot army to follow and like her posts. He might also steal some bots from other entrepreneurs’ bot armies by figuring out what keywords, images, and other identifying traits they are using to program their bots.
After a while, he gets some impressive numbers and uses them to get companies to pay him to pair his AI influencer with their brands. She’s pictured wearing the latest fashion and visiting restaurants that pay top dollar for the mention.
You might be thinking that surely companies will get wise to this and stop spending money on fake likes. But it’s just as likely that commercial sites like Amazon will see the massive “engagement” these brands have and move them up in their own algorithms, putting them on the landing page and artificially boosting sales. All of this with no evidence that any actual human being was ever involved.
It’s bad enough that so many young women have problems with their self-image because they are constantly bombarded with unrealistic images of beautiful women online. What’s worse is that these beauty standards are increasingly “average,” and despite hundreds of “likes,” not one actual human being has looked at that picture and said, “Wow, I like that.” That is the dead internet at its worst.
What do you think? Do you trust what you see on the internet? Is it real? What should we do about it? Will AI be the end of humanity? Or maybe just the end of the internet? I would love to hear other ideas in the comments.