While I have explored artificial intelligence, I have never used it for any of my writing. This past academic year I chaired the AI Task Force in the College of Education and served on the AI Community of Practice committee at Piedmont University. It is fair to say that my colleagues and I share concerns about artificial intelligence and are working diligently to mitigate them.
If you ask any K-12 teacher or college professor who requires students to write extensively, you will hear concerns about students using AI inappropriately. While plagiarism has been around for as long as anyone can remember, the use of AI seems to be taking an ominous and rapid turn. There is software to identify plagiarism, but there is no robust software to identify the use of artificial intelligence.
Our College of Education is trying to be responsive to local school districts and educators who have similar concerns. At present, AI instructional resources for teachers, and for their students, are ubiquitous. Many companies offer a wide breadth of support that teachers can use to guide their instructional preparation and even to assess their students' progress.
Likewise, there are models on the internet that will write students' papers for them.
I want that last sentence to stand by itself. Artificial intelligence programs can write papers for students. It’s extremely difficult for teachers to identify such artificial plagiarism. A couple of critical problems are apparent. Teachers and professors across the nation are telling us their concerns, if we care to listen.
First, the quality and veracity of these artificially created reports may be dubious. AI models are trained on whatever text is out in the ether of the internet, and their algorithms generate new essays from that material, without judgment of authenticity or accuracy. AI also has a tendency to hallucinate — to make stuff up. The algorithms cannot capture all the variables, and it shows.
Second, using AI to write your essay will result in lazy or flabby human intelligence. While AI can help lead a student to resources or new ideas, it does not guarantee that any thinking is being done. Our fear is that students aren't thinking, let alone thinking critically or creatively. This is where parents and educators must work together to mitigate these concerns.
I would like to share some insights I have gained from my own classroom experience, from these committees, and from professional journal articles by Arthur Perret and by Aras Bozkurt and dozens of his colleagues.
Perret, for example, explains that programs such as ChatGPT are human conversation "simulators," not actual information systems or knowledge bases. Such a program is no substitute for original or primary sources or for research — it doesn't do research. It doesn't know the difference between fact and fiction.
Further, Perret says "it can only upgrade your writing to middling quality, or downgrade it to that same level." It cannot summarize key concepts, only shorten them. It doesn't help you question what you are reading or writing; it doesn't, for example, use the Socratic method to press students on their rationale or their critical thinking. In other words, the user misses out on intellectual engagement.
Perret concludes with three issues related to the use of AI for writing. First is ethics. He notes that most of the models were built on others' data. Second is cognition. Using it makes you more dependent and less smart. And third is the environment. He states, "The energy costs of generative AI are an order of magnitude greater than pre-existing technology."
He made me laugh with a quote from comedian George Carlin: “Think of how stupid the average person is, and realize half of them are stupider than that.”
Let me now turn to Bozkurt and his four dozen colleagues for their insights and concerns, along with some strategies for mitigation. First, my own thought: Artificial intelligence doesn’t make value judgments; all research and opinions are treated as equal, but they simply are not.
Bozkurt goes further when he writes that Generative AI "is not ideologically and culturally neutral. Instead, it reflects [majority] worldviews that can reinforce existing biases and marginalize diverse voices ... it risks eroding essential human elements — creativity, critical thinking and empathy."
But more importantly, to the first point, AI's algorithms give weight to the articles or words that appear most often on the internet. They do not differentiate as to quality. So the more often something appears on the internet, the more likely it is to end up in the paper created for the student. This emphasizes and reinforces the majority voice, an intellectual or opinion echo chamber, and disregards dissenting views.
AI can certainly save a writer time and provide efficiencies. However, it can in no way guarantee that the student, or potential researcher, has learned anything or done any thinking, let alone practiced critical thinking. For teachers, though, AI tools can help provide customized or differentiated instruction matched to students' cognitive levels, and they can help provide prompt feedback.
Finally, Bozkurt warns us again that customized AI instructional tools may "intentionally reinforce bias and limit diverse perspectives." Teachers need to use these tools to challenge students to think critically and to explore new ideas. AI feedback tools "may overemphasize objective correctness, neglecting the nuanced insights provided by human educators."
I've run out of space and time for this column. It was not generated with artificial intelligence. But you cannot be certain of that, and that is both sad and unfortunate.
Perry Rettig is a distinguished professor and former vice president at Piedmont University. He has spent 42 years as an educator, including stints as a public schoolteacher and principal. This is the first of three columns by Rettig on AI. His next piece will focus on ways to mitigate concerns educators have about AI.