Body

Rice faculty share three perspectives on ChatGPT

Professors in computer science and communication address concerns about conversational AI-generated text


Photo: Rice experts in communication, neural networks, and technology and ethics (from left to right): David Messmer, Vicente Ordóñez-Román, and Rodrigo Ferreira 
 

In the middle of Rice University’s spring recess, 43 faculty and staff members returned to campus to fill the Center for Teaching Excellence (CTE). They wanted to hear three of their colleagues — experts in their fields — discuss a new artificial intelligence (AI) model that can respond to natural (human) language questions with conversational-style text to create paragraphs and papers. The panel, co-hosted by the CTE and the Program in Writing and Communication (PWC), was created to open a dialogue about what ChatGPT might mean for students’ writing assignments. 

Vicente Ordóñez-Román, an associate professor in the Department of Computer Science, opened the discussion. His research interests lie at the intersection of computer vision, natural language processing and machine learning (ML), and his remarks began with a simplified explanation of how this type of text prediction modeling has advanced over time.  

“Essentially, we’ve been developing neural networks or machine learning models to predict the next word in a sentence or phrase for years. Many of the advancements were made before 2013. We saw another breakthrough in 2017. OpenAI has gotten a lot of press for their GPT models, but Google and other companies have been developing similar chatbots and models. The recent announcements about ChatGPT and other similar chatbots are simply the latest iterations.” 

He said techniques developed for machine translation can now be scaled to much larger models that predict the next word with enough sophistication to generate long passages of text. But the compute power needed to train such models requires supercomputers that are available only to the largest organizations. 
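To make the idea of next-word prediction concrete, here is a deliberately tiny sketch in Python: a bigram model that counts which word follows which in a small sample of text, then generates new text by sampling likely next words. Models like GPT replace these counts with large neural networks trained on vast corpora, so this illustrates the principle rather than ChatGPT itself.

import random
from collections import Counter, defaultdict

# A tiny corpus; real language models train on billions of words.
corpus = "the model predicts the next word and the next word after that".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break  # no observed continuation for this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))

Scaling this idea from counted word pairs to neural networks with billions of parameters is, in rough terms, the trajectory Ordóñez-Román described.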

“For someone like me, who works in the ML field, I see ChatGPT from a different perspective,” said Ordóñez-Román. “This did not just ‘come out of nowhere.’ It has been in the pipeline. What is new is the training that incorporates human feedback. And now, the input is in the form of a human asking a specific question. As you can guess — in academia, we are still working on and discussing the validity of these models. How does the model try to understand the question or sets of questions? 

“As it turns out, we’ve developed models that have become really good at chain-of-thought. With instruction training, the models improve further, and we can now create better models with less data. But it is very hard to know from this current state just how much more capable the models will become. In some cases, being right 90% of the time is adequate, but in most instances that is just not good enough. Remember the prediction that everyone would be in self-driving cars by now? We are just not there yet – not with self-driving cars and not with computer-generated answers to everyday questions – and it is hard to predict how much longer it will take to get these systems really close to 100%.” 
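Chain-of-thought refers to prompting a model to produce intermediate reasoning steps rather than jumping straight to an answer. The short Python snippet below, with made-up example questions rather than anything shown at the panel, contrasts a plain prompt with a chain-of-thought prompt; actually sending either string to a model would require a separate API call, which is not shown.

# Two prompt styles for the same question. These are plain strings;
# sending them to a model would require an API call, not shown here.

plain_prompt = (
    "Q: A train travels 60 miles in 1.5 hours. What is its speed?\n"
    "A:"
)

# Chain-of-thought prompting: include a worked example whose answer
# spells out its reasoning, so the model imitates that step-by-step style.
chain_of_thought_prompt = (
    "Q: A cyclist rides 45 miles in 3 hours. What is her speed?\n"
    "A: Speed is distance divided by time. 45 miles / 3 hours = 15 miles "
    "per hour. The answer is 15 mph.\n"
    "\n"
    "Q: A train travels 60 miles in 1.5 hours. What is its speed?\n"
    "A:"
)

print(plain_prompt)
print(chain_of_thought_prompt)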

Rodrigo Ferreira, an assistant teaching professor of computer science whose Rice courses focus on technology and ethics, also expressed skepticism at popular media stories claiming that ChatGPT represents a critical threat to higher education. 

“When we – as instructors – feel the learning outcomes for our students are threatened by new technologies, we tend to respond in three ways: we retreat, we challenge them, or we try to outsmart them. But this is not the first time there has been pushback against new technologies,” said Ferreira. 

“Going back to around 350 BC, in Plato’s Phaedrus we find a critique of writing as a technological invention. In this dialogue, Socrates expresses concern that written text, in contrast to oral speech, can easily ‘escape’ from the author’s intended audience or can even unintentionally obscure the author’s identity. For these reasons, Socrates viewed writing as deficient; it prevented teachers from effectively reaching students, or as Socrates put it, addressing their ‘soul.’ It is in the context of these concerns that Plato, when he spoke of the Academy, envisioned a place where students and teachers could properly have the time to contemplate, to learn, and to critically reflect on social life away from the pressures and demands of traditional vocations.” 

Ferreira contrasted Plato’s vision of the Academy with the disciplinary boundaries of most universities today, which create silos between the humanities, the sciences, and business schools. He also pointed to current scholarship criticizing the ‘uberfication’ of higher education in the United States, where students increasingly view the experience as transactional and universities are increasingly preoccupied with rankings and with student and faculty metrics. 

“The real challenge with ChatGPT,” Ferreira said, “is not that of a new technology that may help students cheat, but for us to re-think our educational model, where complex social problems are sometimes framed in ways that lead students to think that they can be exhaustively addressed through a simple technological solution. We can’t blame students for wanting to cut corners to achieve an individually desired outcome, when so much of their learning environment is grounded on precisely that same technologically solutionistic and individualist mindset.” 

Ferreira then returned to the divided opinions about how faculty should address student use of technologies like ChatGPT. He said, “What if we instead found ways to collaborate and co-create with ChatGPT? Think about encouraging students to bring in their ChatGPT-generated responses to examine and critique together. Rather than pressuring them to perform mastery of the topic in their first draft, the process could be more focused on iterative practice and peer collaboration. Another way to incorporate ChatGPT and other generative AI models is to help develop speculative images and texts, particularly when imagining a different future. This is something that artists, scholars, and activists have been doing for a long time. As an example, for educators aiming to help students communicate the impact of climate change, AI-generated texts and images have been an excellent narrative tool.” 

David Messmer has been teaching a variety of communication courses at Rice since 2009 and was named director of the First-Year Writing Intensive Seminar (FWIS) program in 2018. Stepping to the microphone, Messmer joked about having asked ChatGPT what he should say in a presentation about ChatGPT. When he displayed ChatGPT’s responses, the limitations of the model’s current iteration quickly became apparent, drawing another round of laughter. 

“What I came away with – after asking it what ChatGPT can and can’t do – was a lot less concern than I had going in,” he said. “Students have always had opportunities to cheat; this is nothing new. Remember Cliffs Notes? And Wikipedia is now older than our students. Grammarly and Google – these are all examples of the tools students have turned to and teachers have been encountering for centuries.  

“The vast majority of students are willing to comply with our guidelines, as long as we are clear about our expectations and why we expect those behaviors. Just because OpenAI has released ChatGPT doesn’t mean that honest students will begin to cheat.” 

Messmer said one of the challenges instructors face is the blurred boundary between what is and isn’t a student’s original work. With ChatGPT, plagiarism is not limited to copying and pasting something students found in a Google search. If a student poses a question to ChatGPT and a dialogue ensues, what is actually the student’s material and what belongs to the chatbot? He stressed the importance of doing more coaching up front when teaching students critical thinking and writing skills so that they can recognize the difference. 

“I gave ChatGPT another test, this time to analyze a sonnet by Shakespeare,” said Messmer. “Now the model’s limitations become very apparent. Every paragraph used exactly two quotes, and even some of the same phrases. If I were an instructor reading this submission by a student, I could immediately tell something is wrong. But if I am a student struggling to understand Shakespeare, this ChatGPT paper looks pretty good. This is where we need to coach students on the dangers involved in using technology to do their work. 

“But where do we start? Rodrigo made the point about critiquing ChatGPT responses and I’ll take that a step further. Have the students write an essay on their own and then write the essay using ChatGPT and compare them. This exercise has a lot of value, but also danger because a less experienced student might look at the ChatGPT essay and think it is superior. The students who need to practice writing most are the same ones that a ‘better’ ChatGPT essay does not serve well.” 

Rather than having each student compare only their own essay with their ChatGPT essay, Messmer suggested a group critique in which members look at several student-written essays and the corresponding ChatGPT versions. Examining and discussing several papers should reveal that the papers the students wrote are all different, while the papers generated by AI will be more or less the same. 

“When students see – and teachers emphasize – the value of unique perspectives and the validity of original thought, then the students will come to see how ChatGPT has its limitations,” said Messmer. 

The principle underlying this example of group comparison? “If we want the students to practice and improve, then we must value their perspectives and originality. We have to give them assignments and feedback that support and value their work and their responses.” 

During the question-and-answer session, the speakers gave a few more examples of why students should do their own work. Ordóñez-Román said he would like to learn German; he could cheat on his homework by using a translator, but then he would never really learn the language. Similarly, a medical student will have to recall the material they are learning in a real situation. 

Messmer said that when a child asked why they needed to learn multiplication when they could just use a calculator, he first tried to explain that it was a building block for algebra and future classes. Then he stopped and gave a more visual example. 

“If I want to run a marathon, I have to start by running one or two miles over and over again. I can’t do those miles in my car. As long as instructors are clear about what we are trying to accomplish, then our students shouldn’t have to ask why we are doing this.” 

Rice community members who missed the talk can access the video from the CTE ChatGPT event page. Talks like these are coordinated by the CTE staff throughout the year; watch their website for announcements.  

 

Carlyn Chatfield, contributing writer