ChatGPT Is Dumber Than You Think

As a technology critic, I must say that I am not enthusiastic about ChatGPT, a large language model developed by OpenAI. While the idea of relying on a machine to conduct conversations and generate responses may be technically impressive, it raises serious concerns.

First, ChatGPT lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words. This means that any responses it generates are likely to be shallow and lacking in depth and insight.

Additionally, relying on ChatGPT to conduct conversations raises ethical concerns. When people begin to rely on a machine to converse for them, it can lead to a loss of genuine human connection. The ability to communicate with others through conversation is a fundamental aspect of being human, and outsourcing that ability to a machine can have detrimental effects on our society.

But hold on. I, Ian Bogost, did not write the previous three paragraphs. A friend sent me screenshots of his session with ChatGPT, a program released last week by OpenAI that interacts with users via a chat window. It is an application of a large language model (or LLM), a type of deep-learning program that can generate new text after being trained on massive amounts of existing written material. My friend's prompt was: "Create a review of ChatGPT in the style of Ian Bogost."

ChatGPT wrote more, but I spared you the rest because it was so boring. The AI wrote another paragraph about accountability ("If ChatGPT says or does something wrong, who is to blame?"), and then a concluding paragraph that restated the rest (it even began, "In conclusion …"). In short, it wrote a basic, high-school-style five-paragraph essay.

That fact might comfort or frighten you, depending on your disposition. When OpenAI opened ChatGPT to the public last week, the first and most common reaction I saw was fear that it would upend education. "You can't assign homework anymore," the University of Toronto professor Kevin Bryan wrote on Twitter. "I think chat.openai.com may really mean the end of homework," wrote Samuel Bagg, a political scientist at the University of South Carolina. Scary stuff.

But you can rest easy, at least somewhat: The bot's output, while fluent and coherent as text, is consistently uninteresting as prose. It is formulaic in structure, style, and content. John Warner, the author of Why They Can't Write, has been railing against the five-paragraph essay for years, and he wrote a Twitter thread about how ChatGPT reflects this rote standard of writing: "Students were essentially trained to produce imitations of writing," he tweeted. The AI can produce convincing writing, but only because writing, and our expectations for it, have become so unaspiring.

Even pretending to fool the reader by passing off an AI's copy as one's own, as I did above, has become a tired trope: an expected turn in an overlong Twitter thread about the future of generative AI, rather than a startling revelation about its capacities. On the one hand, yes, ChatGPT is capable of producing prose that sounds convincing. On the other hand, what it means to sound convincing depends on context. The kind of prose that might seem surprisingly persuasive in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner's comments suggest, the writing you deem persuasive as a teacher (or marketer, lawyer, journalist, or whatever else) may owe more to its position than to its meaning: The essay was compact and competent; the memo arrived in your inbox on time; the newspaper article communicated apparent facts that you could accept or reject.

Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons: to get away with something. The initial response to ChatGPT assumes that it is, above all, a tool for such ends, a machine to help people fake student essays, short stories, or whatever else. That's an easy conclusion for those who assume that AI is meant to replace human creativity rather than augment it.

The internet, and the whole technology sector that floats on it, feels like a giant organ for bullshittery: for scaling up human access to speech and for magnifying lies. People use the internet to swindle, scam, and flame one another. Deep-learning AI makes all of this worse by obscuring how programs such as LLMs operate, such that nobody, not even their creators, can explain what they do and why. OpenAI presents its work as experimental and context-free, without committed use cases. It says it released ChatGPT only "to get users' feedback and learn about its strengths and weaknesses." No wonder the first and most obvious assumption about ChatGPT is that it is a threat: to something, to everything.

But ChatGPT is not a step on the path to an artificial general intelligence that understands all human knowledge and text; it is merely an instrument for playing with all that knowledge and all that text. Play just involves working with raw materials in order to see what they can do. You play a game, or an instrument, to avail yourself of familiar materials in unexpected ways. LLMs will not replace colleges or magazines or middle managers. But they do offer those and other domains a new instrument (that really is the right word) with which to play with an unfathomable quantity of textual material.

To test this theory, I chatted with ChatGPT about a variety of topics, including poetry, behavioral psychology, and computer programming. Right now, ChatGPT is just a tech demo, a research experiment. How it might be used is less clear, apart from the dire predictions about the industries its technology might upend. In almost every case, the AI appeared to possess both knowledge and the means to express it. But when pressed (and the chat interface makes it easy to press), the bot almost always had to concede that it was just making things up.

Sometimes it simply fell back on formulas. When I asked ChatGPT to generate a cover letter for a university job, it complied competently but soullessly. It produced essentially the same letter for a magazine-editor position and for a job as a cannabis innovator in the Web3 space ("I have a deep understanding of the Web3 space and the unique challenges and opportunities it presents"). Fair enough: Most cover letters are terrible, after all, repeating the same trite lines and abstract gestures.

In other cases, the bot produced creative output that was surprisingly evocative, but also flawed. When I asked it to write an imagist poem about hamburgers in the style of Ezra Pound or William Carlos Williams, it obliged with this short composition:

Juicy patties, sizzling hot
Toasted buns, crisp lettuce and tomato
A symphony of flavors, a feast for the senses
Delicious, satisfying, a true
American classic.

Not bad, I guess, especially for a computer. But the poem is also no example of imagism's attempt to conjure a specific image through deliberate use of language (something like "so much depends upon a hamburger"). So I called ChatGPT out: "That's not an imagist poem." Its reply is worth quoting in full:

You're right, the poem I created is not an imagist poem. An imagist poem uses specific, vivid imagery to convey a particular idea or emotion, and focuses on a single image or moment. The poem I created uses a more narrative and descriptive style, and does not focus on a single image. It describes the ingredients and flavors of a hamburger, but does not use specific, vivid imagery to convey a particular idea or emotion. I apologize if my previous response did not meet your expectations, and I am happy to help with any other questions or requests you may have.

I also asked the AI to compose a lai (a medieval narrative poem) in the style of Marie de France about the popular Texas fast-food chain Whataburger. It did indeed read a lot like a lai, and it began like this:

In the land of Whataburger, where burgers and fries are king,
A stranger came to visit, and with him brought a ring.
He wore a velvet coat and a hat upon his head,
He smiled and waved as he approached the counter, where the burgers were served hot and fresh, and the fries were crisp and golden.

Not bad, except for that last line, which is a metrical mess. The AI couldn't (or wouldn't) evaluate its own work ("I am a large language model trained by OpenAI, and I do not have the ability to judge literary quality or make aesthetic assessments"), but it could tell me about the typical structure of a lai: a narrative poem of a particular structure and form that arose in 12th-century France, usually written in octosyllabic couplets. The lai's metrical errors stem from its failure to conform to that form.

When I asked it about this, it again conceded the point, admitting that the poem it wrote was not organized into octosyllabic couplets and explaining that it had "created a more modern and playful interpretation" of the traditional form. I began to feel like I was negotiating with a student who had come to office hours to complain about a grade.

This happened again and again. I asked for the source code of a cat-themed Atari game, and the AI sent me valid programming instructions (it understood the task) but only disconnected fragments of actual code, under the heading "This program creates a simple game in which the player controls a scoop that they must use to pick up their cat's litter and throw it in the trash." The output was more an emblem of the answer I was looking for than the answer itself.

Next, I asked for a set of diagnostic criteria for an invented psychological condition that I named "inherited truculence." The AI obliged immediately:

Diagnostic criteria for inherited truculence:

  1. A tendency to experience and express aggressive or hostile thoughts and feelings, particularly in response to perceived threats or challenges.
  2. A history of engaging in aggressive or hostile behavior, particularly in interpersonal or social situations.
  3. A family history of aggressive or hostile behavior, suggesting a possible hereditary or genetic component to the condition.
  4. Impairment in social, occupational, or other important areas of functioning as a result of aggressive or hostile behavior.
  5. The symptoms are not better explained by another mental disorder or by the effects of a substance (such as drugs or medication).

But hold on: Truculence doesn't really amount to "aggressive or hostile thoughts and feelings," does it? It's closer to surly defiance. I put this to the AI, and it agreed, eventually revising its diagnostic criteria at my request ("a tendency to experience and express defiant or confrontational thoughts and feelings," and so on).

Then I invited the bot to give an epistemological answer to the question "When does bread become toast?" It offered a generically philosophical reply, claiming in part that the bread becomes toast "when we can identify it as toast by its appearance, texture, and other sensory characteristics."

But the philosophy of toast happens to be my specialty, and I could tell that this supposedly epistemological argument was really about perception: It was phenomenological, not epistemological. An epistemological argument concerns how we know or understand something; a phenomenological one concerns our experience or awareness of it.

At this point, chatting with ChatGPT began to feel like every other interaction one has on the internet, where some guy (it's always a guy) tries to pass off the skim of a Wikipedia article as permanent expertise. Except ChatGPT was always willing to admit that it was wrong, instantly and without protest. And in each case, the bot also knew, with reasonable accuracy, why it was wrong. That sounds good but is actually terrible: If it takes expertise to identify the problems with LLM-generated text, but the purpose of LLM-generated text is to obviate the need for such expertise, then we already have a problem. Maybe those dire predictions deserve a hearing after all.

But that isn't ChatGPT's purpose. It doesn't make specific arguments or express creativity; it produces textual material in a form that matches the requester's explicit or implicit intent, and that might, under certain circumstances, also contain truth. That, alas, is an accurate description of textual material of all sorts: online, in books, on Wikipedia, and beyond.

Proponents of LLM creativity may dismiss these concerns. Some do so in embarrassing ways, glorifying the supposedly obvious and fully realized genius of GPT, a posture I can only gesture at rather than repeat. Others, with more measure but no less enchantment, might argue that the technology is only a few years old and still in its "early days," even as it churns out 12th-century lais about Whataburger. But it is exactly this tech-besotted sentiment that has done the most damage to computational and networked life, which is to say, to life itself. OpenAI assumes that its work will evolve into an artificial general intelligence, a machine that can do anything. Let us instead posit a less ambitious but more plausible goal for ChatGPT and its successors: to offer an interface to the textual infinity of digitized life, an otherwise impenetrable space that few people can use effectively today.

To explain what I mean, let me show you a rather different exchange I had with ChatGPT, one in which I used it to help me find my way through the thicket of text rather than to dazzle me with its facility as a wordsmith.

"I'm looking for a specific kind of window covering, but I don't know what it's called," I told the bot. "It's a kind of blind, I think. What kinds are there?" ChatGPT responded with a list of window-covering types, and it was a good list. I told it that I had in mind something like a blind but made of fabric. "Based on your description, it sounds like you may be thinking of a Roman shade," it replied, offering more detail about this particular window technology along with a little sales pitch.

Reader, I know what a Roman shade is. But lacking that knowledge and then deploying it to make sense of the world is precisely the kind of task that is surprisingly hard to accomplish with computers today. Getting something done in the world often amounts to fitting raw material into an expected linguistic form. That's the case on Google or Amazon, where searches for window coverings (or anything else) now fail as often as they succeed, demanding a painstaking process of refinement until the machine finally nudges you toward the general vicinity of an answer. But it's also the case for student essays, thank-you notes, cover letters, marketing reports, and perhaps even the occasional medieval lai (insofar as anyone aspires to compose one). We do all of it with words. We are drowning in an ocean of content, desperate for the lifeboat of form.

ChatGPT offers that form. But, and this is where the bot partially redeemed itself, it does so without any pretense of knowing. The AI doesn't understand or even compose text. It offers a way to probe text, to play with it, to mold and shape an infinity of prose drawn from a huge variety of domains, including literature and scholarship and shitposts, into structures in which further questions can be asked and, on occasion, answered.

GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose keys sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text, almost all the text there is, like an instrument.

That outcome could be revelatory! But a big obstacle stands in the way: people, who don't know what to make of LLMs, ChatGPT, and all the other generative AI systems that have appeared. Their creators haven't helped, perhaps because they don't know what these things are for either. OpenAI offers no framing for ChatGPT, presenting it as an experiment to help "make AI systems more natural to interact with," a valuable but very ambitious goal. Absent other structure, it's no surprise that ChatGPT's users frame their creations as either existential threats or consummate achievements. Neither response is accurate, and both are equally boring. Imagine fretting over the fate of the take-home exam, a dreary format that everyone hates but nobody has the nerve to kill. But by the same token, imagine nitpicking with a computer that just composed something reminiscent of a medieval poem about hamburgers because its lines don't all scan! Sure, you can use this occasion to cheat on your schoolwork, or to pretend to do your job. That's what a boring person would do. That's what the computer would expect.

Computers have never been instruments of reason that can resolve matters of human concern; they're just apparatuses that structure human experience through a very particular, extremely powerful method of symbol manipulation. That makes them aesthetic objects as much as functional ones. GPT and its cousins offer an opportunity to take up that invitation: to use computers not to carry out tasks but to mess around with the world they have created. Or better: to destroy it.
