“So when people get lazy and [say], ‘Hey, write this thing for me,’ and then take it and use it, there could be errors in it,” said Schneider. This makes it a valuable tool for generating ideas and writing rough drafts, but a risky option for final assignments. Students who decide to use ChatGPT will likely need to double-check that the information it provides is correct, either by already knowing the material or by confirming it with other dependable sources.
ChatGPT can support teachers, not replace them
For some educators, ChatGPT also raises alarms that the widespread adoption of AI could lead to job losses, particularly in areas such as tutoring and language teaching. Schneider said that’s unlikely. “I can’t imagine a school system that has no teachers in it,” he said. Numerous studies show a correlation between strong student-teacher connections and increased student engagement, attendance and academic performance.
As people explore how AI will support teaching and learning, teachers’ roles may change as these tech tools become more widely used. “Teachers are going to have to evolve and figure out how to harness the power of this tool to improve instruction,” said Schneider. For example, the AI Institute for Transforming Education for Children with Speech and Language Processing Challenges, which was awarded $20 million in funding from IES and the National Science Foundation, is exploring how ChatGPT can support speech pathologists. According to a recent survey by the American Speech-Language-Hearing Association, the median number of students served by one speech pathologist is 48. “There are simply not enough pathologists in schools,” said Schneider. ChatGPT has the potential to help speech pathologists complete paperwork, which takes up almost six hours each week, and build personalized treatment plans for students with cognitive disabilities, such as dyslexia.
“We need to rethink what we can do to free up teachers to do the work that they are really good at and how to help them individualize their interventions and provide instruction and support,” said Schneider.
When you use ChatGPT, your data is not secure
ChatGPT is convincing because it references a massive amount of data and identifies patterns to generate text that seems like it is written by a human. It can even mimic the writing style and tone of the person who uses it. “The more data they have, the better the model,” said Schneider, referring to ChatGPT’s ability to generate responses. “And there’s tons of data floating around.”
The information that users put into ChatGPT to make it generate a response – also known as the input – can take the form of a question, a statement or even a partial text that the user wants ChatGPT to complete. But when students use ChatGPT, they may be putting their data at risk.
Schneider acknowledged that privacy is a major concern if ChatGPT is to be used to support teaching and learning. “We are developing much better methods for preserving privacy than we have in the past,” he said. “We have to remember it’s a bit of a cost analysis. Using all this data has many benefits. It also has some risks. We have to balance those.” He added that ChatGPT is similar to wearing an Apple Watch or talking to an Amazon Alexa, because those tools also rely on data from users.
Banning ChatGPT isn’t a long-term solution
Because students can input original prompts into ChatGPT and get unique answers, it raises the question: Is using ChatGPT plagiarism? And how much does AI-generated text need to be edited before it is considered a student’s own work? Rather than answer these questions, some schools, including districts in Los Angeles, New York City and Seattle, have opted to ban use of ChatGPT outright.