AI in Universities: Is It a Friend or Foe to Academic Integrity?

Have you ever been a student at a university or college? Imagine the day before submission, and, for whatever reason, you’re faced with a blank page.

You know what you want to say, but you’re struggling to convert your thoughts into well-crafted academic prose. The anxiety feels overwhelming. If only there were a tool that could help with the mental block, maybe even produce some words — now, that would be nice.

For many students, the launch of ChatGPT at the close of 2022 provided an opportunity to take some sizeable shortcuts with their assignments. The problem is that ChatGPT wasn’t designed to be an AI chatbot for higher education. It is not obliged to uphold the rigorous standards of academia, and so students should use it cautiously.

A 2023 Best Colleges survey showed that 43% of students admitted to using generative AI tools, and half of those respondents said they had used ChatGPT or a similar application to complete assignments or exams. Another 2023 study revealed that 51% of students would disregard any prohibition of generative AI by their universities.

[Chart: Student Likelihood to Use AI Writing Tools, Even if Prohibited]

Understandably, such findings generate concern around the policing of artificial intelligence in higher education and contribute towards a consensus that it threatens academic integrity.

AI threats should be taken seriously, but now we’re well into 2024, and universities have had over a year to come to grips with the technology’s strengths and flaws, as well as consider how they will tackle its use.


Has AI become a friend or foe to academic integrity?

Key Takeaways

  • The conversation about AI in higher education has shifted from policing its use to ethically integrating the technology.
  • Unsupported AI-human interactions will erode core academic skills such as critical thinking, problem-solving, and writing.
  • As universities shape the professionals of the future, they need to teach students to think independently whilst being aware of the tools that can aid them.  
  • Clear AI policies, training on responsible use, and a re-evaluation of student assessment will help prevent violations of academic integrity.
  • The dangers of artificial intelligence are genuine, but the future of college AI is hopeful.

The 2024 Landscape: Cheating, Laziness & Lack of Discernment?

In a recent podcast that discussed AI tools for higher education, Professor Noah Giansiracusa of Bentley University (USA) advocated for a shift away from policing the use of AI towards encouraging students to engage with it responsibly. The mission should be to “minimize the harm and maximize the opportunity.”

Giansiracusa believes that ChatGPT is here to stay, and if its exploration and integration are not handled wisely, it could lead to catastrophic mistakes. In short, students could become increasingly lazy and undiscerning. 

Undoubtedly, the biggest violation of academic integrity is using generative AI to produce answers to written assignments.

The effects of cheating, however, are more destructive than simply robbing students of learning how to formulate arguments and put words down on a page.

Indiscriminate reliance on chatbots does not merely undermine the cherished discipline of critical thinking; it demolishes it. In an article for AACSB, Anthony Hié and Claire Thouary have stated that AI models will likely become the primary way that we access information.

While this might be the case, if universities aren't supporting AI-human interactions, then ChatGPT's infamous hallucinations and distortions of the informational landscape could increasingly find their way into students' essays.

It is important to recognize that not all students will trust ChatGPT wholeheartedly.

When asked about his thoughts on student use of AI, Professor Richard Harvey, School of Computer Sciences, University of East Anglia (UK), shared his belief that ChatGPT can encourage deeper criticism and reflection.

Interestingly, Harvey's students are less likely to trust AI-generated code, subjecting it to more rigorous testing than code they have written themselves, which they are much more likely to assume to be correct. Harvey added:

“This makes for interesting discussions in bench demos,” where students are encouraged to say how they reached their conclusions and walk lecturers through their processes.

The Benefits of AI for Education: Enhanced Accessibility & Better Preparedness for Real Life

When used well, AI tools can collate and clarify vast amounts of information, help a student brainstorm and produce plausible counterarguments, and, in some ways, help with the preparation of papers.

AI also creates a level playing field by getting everyone up to a certain threshold. This will be particularly important for students who have additional educational needs and struggle with multiple accessibility issues.

Senior education analyst Mark J. Drozdowski touched upon this in an article for Best Colleges. He wrote:

“Students with learning difficulties such as dyslexia, ADHD, and autism can benefit from AI tools that identify patterns students might exhibit that are consistent with specific learning challenges. Universities can then make assignments and exams more tailored and accessible.”

In terms of the benefits for university staff, AI has the potential to perform more mundane and repetitive administrative duties, such as syllabus writing and email tasks.

Freeing up lecturers’ time to focus on faculty-student relationships will help universities reclaim what has been slowly slipping through their fingers over the past few decades: a sense of mentorship.

Prospective students are becoming increasingly skeptical of higher education. In many cases, it seems to promise more than it can deliver.

The rise of AI has forced higher education to reflect on its goals. Whether in the humanities or the sciences, academics should prepare students not only for employment but also for well-being and personal growth.

ChatGPT might have provided a much-needed re-evaluation of how universities are preparing students for the world that exists outside the campus.

How Can Universities Stop Breaches of Integrity?

Educause's recent AI Landscape Study polled 910 university staff members. Almost three-quarters (72%) of respondents said that their academic integrity policies had been impacted by AI.

What was once a tangential issue has become one of the most debated topics within academic spheres, but what solutions are being put in place to stop violations of integrity?

Assessment is one of the most impacted areas, and because the technology that detects the use of AI is not particularly reliable, a more preventative approach is required.

  • The first port of call among most institutions has been the distribution of guidance that clarifies what constitutes the use and misuse of generative AI, recognizing that each department has its own unique requirements.
  • Secondly, academics have begun to gain a working knowledge of AI tools so that they can help students better navigate their use.
  • Effective prompt engineering is key to getting the most out of AI, and teachers should be spearheading this process. “Only then,” according to Hié and Thouary, “will [students] be able to use AI to deepen their knowledge of complex ideas, find viable solutions, and explore new areas of knowledge.”
  • Finally, a return to authentic assessment, where students are asked to demonstrate their skills and knowledge in meaningful contexts, is being considered. This will demand a level of self-reflection and authenticity that AI will struggle to imitate.

The Bottom Line

The conversation around AI for higher education has certainly shifted. The reality that AI is shaping the future has sunk in. Universities are now realizing, some more quickly than others, that they are at the forefront of ensuring the technology is embraced responsibly.

AI can be an ally and an enemy; it can lead and mislead. Being a student in the era of chatbots is both exciting and precarious. Diligence is required as universities build the infrastructure to help them grow alongside AI.

Even as technology advances, there will always be some educators who will remain skeptical. While AI poses a low-level threat to the development of critical thinking skills, it does not need to militate against high standards of academic integrity. AI can work well for education; it only requires healthy integration.


John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He has a degree in creative writing and a PhD in English Literature. His interests lie in AI and he writes fun and authoritative articles on the latest trends and technological advances. When he’s not thinking about LLMs, he likes to run, read, and write songs.