2024 EDUCAUSE AI Landscape Study

The Future of AI in Higher Education

The higher education community is still looking for common ground on how AI should and should not be used for learning and work. Respondents were asked to describe both appropriate and inappropriate uses of AI in higher education. In general, respondents emphasized the ethical and transparent use of AI, no matter the specific application. One respondent summarized, "AI should be embraced as an emerging technology and should have a place in coursework with focus on implementation, adoption, research, utilization, and ethical and legal considerations."

Appropriate Uses

  • Personalized student support: tutoring, translating, academic/career advising, easing administrative processes, brainstorming, editing, accessibility tools and assistive technology
  • Teaching assistant: course design, grading, providing feedback to students, providing feedback to faculty, improving accessibility of course materials
  • Research assistant: finding and summarizing literature, sorting and analyzing data, predictive modeling, creating data visualizations
  • Administrative assistant: automating tasks, drafting/revising communications (e.g., email), transcribing audio
  • Learning analytics: analyzing and visualizing student success data, providing insights for student recruitment and retention
  • Digital literacy education: preparing students to be part of the digital workforce and society

Inappropriate Uses

  • Trusting generative AI outputs without human oversight
  • Simulating human judgment (e.g., grading student work, peer reviewing academic articles, writing recommendation letters)
  • Representing AI-generated work as one's own (e.g., using AI to write papers, take exams)
  • Not citing AI as a resource for generated content
  • Making high-stakes decisions (e.g., student admissions) without human oversight
  • Conducting invasive data collection or surveillance
  • Relying on AI tools in place of human thought and creativity
  • Giving tools unauthorized access to sensitive data (e.g., PII) or intellectual property

Though these lists reflect areas of general agreement among respondents, it is clear that the higher education community has not reached widespread agreement on the use of AI. For example, the use of AI for grading student work was described as "appropriate" by some respondents and "inappropriate" by others. Similarly, some respondents asserted that most or all uses of AI in the academy are inappropriate. One respondent stated, "Any use at all outside of CS and data courses [is inappropriate]—use of generative AI is defined as academic misconduct on campus and treated the same as plagiarism."

AI tools bring great opportunity for higher education, but not without risk. In open-ended survey responses, respondents described the following opportunities and risks associated with using AI technologies to improve higher education:

Opportunities

  • Teaching and learning: meeting students' needs in real time, individualizing learning experiences, improving student recruitment and retention, creating better learning tools, improving access and accessibility (e.g., multilingual support, assistive technology), creating new tools for student services (e.g., academic and career advising), aiding course design and development, transforming assessment, supporting student engagement, preparing students for the workforce and our digital society
  • Research and analytics: generating insights for data-informed decision-making, analyzing large datasets (including qualitative data), democratizing data and data insights, accelerating research progress, improving data governance, developing new and better models, discovering data, real-time data analytics and visualization
  • Administration: offloading administrative burdens and mundane tasks, automating repetitive processes

Risks

  • Ethics: plagiarism, violations of copyright and intellectual property, marginalizing individuality, reinforcing normality and bias, widening the digital divide, increased misinformation and disinformation
  • Privacy and security: insufficient data protection, intrusive data collection or surveillance, use of data without consent, violation of privacy and security laws and policies, loss of trust among stakeholders (especially students), outsourced and automated hacking
  • AI literacy: lack of leadership and professional development, inability to evaluate AI-generated content, missing opportunities to teach students about AI, falsely accusing students of unapproved AI use
  • Creativity and critical thinking: loss of fundamental skills requiring independent thought, diminishing personal relationships in learning environments, less personalized work and learning outputs, proliferation of "surface level" work and learning

The higher education community is cautiously optimistic about the future of AI. Survey respondents were asked to indicate the direction they believe higher education will lean with respect to AI over the next two years (see figure 20). Most respondents envision a future in which AI tools are used more for learning analytics (69%). Respondents also believe that AI tools will improve accessibility for students (68%) and for faculty and staff (66%). Still, in another closed-ended survey item, just under half of respondents (49%) disagreed or strongly disagreed that their institution has adequate resources and knowledge to effectively support students with disabilities in using AI tools, and half (50%) disagreed or strongly disagreed that their institution can do the same for faculty and staff with disabilities. These data indicate that there is still work to be done in supporting the accessibility of AI tools.

Figure 20. Respondents' Predictions about the Impacts of AI on Higher Education by 2026
Chart showing which way respondents believe higher education will lean over the next two years on a set of paired statements (percentage leaning toward the first statement | neutral | percentage leaning toward the second statement):

  • AI tools are used less for learning analytics. | AI tools are used more for learning analytics. (7% | 24% | 69%)
  • AI tools impede accessibility for students with disabilities. | AI tools improve accessibility for students with disabilities. (9% | 24% | 68%)
  • AI tools impede accessibility for faculty and staff with disabilities. | AI tools improve accessibility for faculty and staff with disabilities. (8% | 26% | 66%)
  • Academic dishonesty has decreased. | Academic dishonesty has increased. (7% | 30% | 64%)
  • Students think critically about AI tools. | Students trust AI tools too much. (22% | 19% | 60%)
  • AI tools narrow access to higher education. | AI tools broaden access to higher education. (11% | 34% | 55%)
  • AI tools reduce workloads. | AI tools increase workloads. (53% | 27% | 20%)
  • Faculty and staff think critically about AI tools. | Faculty and staff trust AI tools too much. (46% | 25% | 29%)
  • Assessments are less meaningful. | Assessments are more meaningful. (23% | 36% | 41%)
  • AI outputs are less biased. | AI outputs are more biased. (23% | 41% | 36%)

In alignment with other survey results, respondents also believe that academic dishonesty will increase (64%) and that students will trust AI tools too much (60%). Respondents were equivocal about whether AI outputs will become more or less biased. Taken together, these results suggest that respondents are optimistic about the potential for AI to positively impact the future of higher education while recognizing several areas of concern.