A conversation about and with AI

As the use of artificial intelligence (AI) chatbots like ChatGPT becomes more prevalent in education, there are concerns that these chatbots could be used by students as a way to avoid writing their own essays. While ChatGPT and other AI chatbots are designed to assist students in their learning by providing personalized feedback and guidance, there is a risk that they could be used as a shortcut to completing assignments.
One concern is that students may use ChatGPT and other AI chatbots to generate entire essays or papers without putting in the effort to do the research and writing themselves. While these chatbots are able to produce coherent and well-written responses, they do not have the same level of depth and understanding as a human-written essay. By relying on ChatGPT and other AI chatbots to complete their assignments, students may not be developing the critical thinking and writing skills that are essential for success in higher education and beyond.
Another concern is that the use of ChatGPT and other AI chatbots could lead to a lack of accountability for students’ work. In a traditional classroom setting, teachers are able to identify when a student has not put in the effort to complete an assignment or has copied the work of another student. With the use of ChatGPT and other AI chatbots, it may be more difficult for teachers to identify when a student has not done their own work. This could lead to a decrease in the overall quality of students’ work and a lack of accountability for the effort they put in.
As the use of ChatGPT and other AI chatbots in education continues to grow, it is important to consider the potential impact they could have on student motivation and the desire to do one’s own work. While these chatbots have the potential to revolutionize the way we approach education, it is important to ensure that they are used in a way that maximizes their potential while minimizing any negative impacts.

…is what ChatGPT said when I asked it to “write the beginning of an article about ChatGPT being used by students to avoid writing their own essays.” Ironic, I know.

The world of Artificial Intelligence (AI) is uncharted territory. Over the past year, AI has exploded into public focus through the products created by OpenAI: Dall-E 2, a text-to-art generator, and ChatGPT, a language bot trained to act in a conversational manner.
While different versions of OpenAI’s image generator have existed since January 2021, it was released to the public as Dall-E 2 in November of 2022. While initially met with excitement from the general public, the existence of “AI Art” has raised concerns and questions from many human artists.
“Where are these images coming from?” asked Colie Wertz, a concept artist, in an interview with The Urban Legend. “The AI that’s gotten trained, has been trained on something. So the question is, well, what has it been trained on?”
According to OpenAI, Dall-E 2 has been trained on hundreds of millions of captioned images from the internet. The computer is trained to find connections between words in the captions and patterns in the images. It is then able to work backward from random noise (an image similar to TV static), manipulating the image based on the text prompt it was given. While OpenAI has openly shared its process for filtering which kinds of images are used, it has not been as transparent about where those images come from.
“All the AI companies that I know of right now, your Midjourneys and your Stable Diffusions, [two leading AI art generators,] they’ve scraped everything,” said Wertz. “And no one is really able to opt-out. And no one really opted in.”
With major companies taking images from all over the internet, anybody’s artwork could be in the dataset and they would never know.
“One of the big problems is most artists on the internet don’t have proper copyright protection,” said Alex Khosrowshahi ‘23.
But what’s so bad about having your art train an AI? What makes this different from a human artist taking inspiration from pieces that they see?
“An AI model doesn’t work like a human,” said Khosrowshahi. “A human can use other art pieces as a reference but then can twist them in ways that go away from [the original]. AI models are using them as sets of data that go directly into the process of creation.”
However, this does not mean that the AI is creating a collage. AI art generators work through a process called diffusion, which involves repeatedly removing layers of noise until the AI arrives at a final product. When trained on such a massive dataset, this results in patterns that resemble creativity. However, it will never spit out an exact copy of the artwork that was fed into the dataset; this would actually be considered a failure of the model. The goal of these art generators is always to create something new. You could perfectly describe the Mona Lisa, but the generator will never reproduce the Mona Lisa exactly.
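The denoising loop described above can be sketched in a few lines of toy Python. This is only an illustration of the idea, not how Dall-E 2 is actually implemented: the `prompt_target` pattern and the fixed blending factor stand in for the neural network that, in a real diffusion model, predicts the noise to remove at each step.

```python
import random

def toy_denoise(prompt_target, steps=50, size=8, seed=0):
    """Toy sketch of diffusion: start from pure noise and repeatedly
    remove a little of it, nudging each pixel toward a pattern implied
    by the prompt. A real model uses a trained network to predict the
    noise; the fixed 90/10 blend here is an assumption for illustration.
    """
    rng = random.Random(seed)
    # begin with random noise, like TV static
    image = [rng.random() for _ in range(size)]
    for _ in range(steps):
        # each step keeps most of the current image and strips away
        # a little "noise" by blending toward the prompt's pattern
        image = [0.9 * px + 0.1 * tgt
                 for px, tgt in zip(image, prompt_target)]
    return image
```

Because the process starts from fresh random noise, the result lands close to the target pattern without ever being an exact copy of it, which mirrors why a diffusion model reproducing its training data counts as a failure.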
ChatGPT, OpenAI’s conversation bot, has also been trained to a point that it can produce patterns that seem like original thoughts. While it may feel like you are having a conversation just like you would with another person, AI does not function like a human brain. These language bots have been trained to predict the next word in the sentence of a piece of sample text, and can now form their own sentences by “predicting” what they should say next to answer the question they were given.
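The “predict the next word” idea can be illustrated with a deliberately simple sketch. ChatGPT uses a neural network trained on vast amounts of text; the word-counting below is only a stand-in for that, showing how a system with no understanding can still guess a plausible next word from patterns in its training text.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Toy sketch of next-word prediction: count which word follows
    which in a sample text. Real language models replace this counting
    with a neural network, but the prediction framing is the same."""
    follows = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    # pick the word most often seen after `word` in the training text
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None
```

Feeding the sketch the sentence “the cat sat on the mat and the cat slept” and asking what follows “the” yields “cat,” simply because that pairing appeared most often, not because anything was understood.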
“[The AI] is going to cobble together [a response] based on essays that it’s read or its sense [of] what an essay is and what you look for in it and how arguments are constructed,” said Riley Maddox, math and computer science teacher, and grade dean. “It’s going to be able to bring different sets of skills than [a person] will.”
As demonstrated through the beginning of this article and all the artwork on these pages, although they are not creative in the way humans are, ChatGPT and Dall-E 2 can produce what could pass as a final product.
So how or when should we be using these bots? Should I be submitting art pieces generated from my prompts to my art teacher? I mean, it was my prompt that resulted in the art that was generated right? Does that make it my art?
Well, I did turn in an art piece created using Dall-E 2 right after it was released to the public this fall while taking Making Media Matter with Urban art teacher Kelli Yon. I recently interviewed her and asked her what she thought about it from an art teacher’s perspective.
“For me, that was about a concept. If you turned it in and said, ‘Look at this beautiful thing I’ve made.’ I would have dealt with it very differently than if you turned it in and said, ‘I had this idea and this is what this computer made,’” said Yon. “What we look at in our department are ideas and processes [rather] than products.”
Khosrowshahi values the ideas used to create AI as well but differentiates this skill from that of an artist.
“I would consider the prompter not really the artist, I would consider them the prompter. I think they can be credited as the prompter and say that’s a cool prompt,” said Khosrowshahi. “I actually think that the person who created the [AI] model itself is [the artist].”
Yon justifies her distinction by explaining what she believes to be the most important difference between human artists and AI generators.
“[Unlike the AI generator, an] artist is able to put their experience, their sensory experience, their artistry [into what they create],” said Yon.
But these art bots can still be used in somebody’s artistic process. The artwork generated by Dall-E 2 does not hold the same kind of emotional value Yon spoke of, but a human artist can put it there.
“The other question is, well, once you have the image that you like, what are you going to do with it … how are you going to develop it?” said Wertz. “AI is great, and it’s cool. But you know, it just gets me in the ballpark.”
Dall-E 2 has now given everybody an easy way to bring their ideas to life, something that had previously required years of experience to achieve.
“[For] all [the] things I had to learn … in my career in visual effects, I had to know what it was I was faking. This whole AI thing erases so much of everyone asking ‘well why is that there?’” said Wertz. But Wertz’s concerns regarding AI are not just about artists losing the need to learn technical skills. “The process is kind of everything. That’s where it’s fun for me. I would hate to see that erode.”
“It sort of, honestly gives me a stomachache to think about,” said Yon. “I don’t believe there will ever be a time [when] skipping over a process is going to make a piece better.”
Yon views the new accessibility of art generation as similar to the current state of photography.
“Everyone having a camera on their bodies changed the world of photography because anyone can always take a photo. Camera phones have gotten so good in terms of their lens quality, the way that they can alter depth of field and the way that you can edit images on a phone, but that still hasn’t changed the quality of a good photograph.”
But this loss of experience is not only a product of the art generators; with the release of ChatGPT, educators and students alike are being faced with similar questions.
“Much of the implication is around teachers, if not more than [it is] around students,” said Maddox. “Teachers and students have always been in [a] contract with one another right? … [AI] is going to invite us to revisit that contract and reconsider what we [are] here for [and] what are we trying to get [out of this].”
While using ChatGPT to write an entire essay obviously violates academic integrity (unless you do so in a high-quality article found in The Yeti magazine as a rhetorical technique to draw in the reader), what about bouncing ideas off it in a back-and-forth conversation? Is that any different from finding inspiration in a classroom discussion or talking through your ideas with a friend?
“I am not sure that [finding inspiration from AI-generated art] is any different than looking at images in books or on the web and finding imagery around an idea,” said Yon. “I’m not worried about it becoming an issue with pirating, I feel like it currently is an idea generator.”
While Yon is currently unconcerned about the issue of pirating work, teachers in other departments may not feel the same.
“I think [these questions are] healthy,” said Maddox. “I think it’s good for teachers to get to reflect on, ‘How am I assessing mastery of content?’ ‘How am I assessing student process?’ and ‘What do I value in terms of outcomes?’ … [and] as students, ‘What skills am I trying to develop?’ ‘What knowledge am I trying to acquire?’ And those have always been questions in education, and it’s something that I think Urban has been especially thoughtful around in the past.”
Maddox says that students come to classes expecting to gain something, whether intrinsic or extrinsic, and that this will change with easy access to AI.
“The fact that [AI] has made the cost of going to another resource basically zero will more readily prompt that question of ‘what’s the value of this marginal assignment to me?’” said Maddox.
“I think that the ethics of AI are something that young students are going to want to focus on more in the coming time,” said Khosrowshahi. “As [AI] increases in power I see it going multiple ways. I don’t see it as all bad or all good in any way. But I do think that we do have to keep in mind the ethics of how we train [it] and how we exist with it.”