2025 EDUCAUSE AI Landscape Study: Into the Digital AI Divide

Strategy and Leadership

Strategic Planning and Goals

The current status of institutional AI strategy shows both evidence of progress and a need for more work. More than half of respondents to this year's survey (57%) agreed or strongly agreed that "we view AI as a strategic priority," up from 49% in last year's study. Institutional leaders' attitudes toward AI appear to be warming as well (see figure 1), with respondents' impressions of those attitudes shifting slightly toward optimism.

Figure 1. Impressions of Institutional Leaders' General Attitudes toward AI
Stacked bar chart showing that from 2024 to 2025, the percentage of respondents who said institutional leaders were either enthusiastic or very enthusiastic about AI rose from 19% to 25%.

Just 11% of respondents reported that their institution has no AI-related strategy at all, an encouraging sign. Yet only about a fifth of respondents (22%) reported having an institution-wide approach to AI-related strategy, implying that progress and experiences with AI on campus remain uneven (see figure 2). The majority of respondents (55%) indicated that AI strategy is currently happening in pockets around the institution, suggesting opportunities for many institutions to develop a more holistic, unified approach to AI.

Figure 2. Institutions' Approaches to AI-Related Strategy

The current focus of AI strategy is on the people. With the steady proliferation of AI tools that institutional stakeholders now have at their fingertips, many institutional strategies focus on supporting and managing the human side of AI implementation and use. The most popular elements of institutions' AI-related strategies center on training and access to AI tools (see figure 3). Comparatively fewer respondents reported that their strategy is focused on the technologies themselves, with fewer than a third saying their institution is focused on elements such as internal IT support and AI data and model development.

Figure 3. Elements of AI-Related Strategy
Bar chart showing that, across 23 strategic areas, the top seven, ranging from 63% to 41% of respondents, concerned either AI literacy or policies, guidelines, and processes.

Similarly, institutions' motivations for engaging in AI-related strategic planning and their goals for such planning (see figures 4 and 5) place the student front and center as the focus of concerns and hopes for these technologies. Whether focusing on guiding students' use of AI (and mitigating inappropriate use) or improving students' learning experiences and outcomes, institutions appear to be prioritizing the people at the heart of the institution's mission and goals. These findings are fairly similar to those in 2024, perhaps due to our community's enduring commitment to student-centeredness. Also similar to 2024 results, concerns about falling behind ranked among the top three motivators for AI-related strategic planning.

Figure 4. Primary Motivators for AI-Related Strategic Planning
Bar chart showing that the primary motivators for AI included the rise of student use of AI (73%), risks of inappropriate use (66%), and concern about falling behind technologically (63%).
Figure 5. Primary Goals of AI-Related Strategic Planning
Bar chart showing that, among 12 goals for AI, the two most important were to prepare students for the future workforce (68%) and to explore new methods of teaching and learning (66%). Improving higher education generally was reported by 41% of respondents, and no other goal was identified by more than 33% of respondents.

Making AI-related plans is one thing; finding the funds to support those plans is quite another. Asked how their institution is accommodating new AI-related costs, a plurality of respondents (41%) said they don't know how costs are being accommodated, while another 30% reported that their institution simply has no accommodations for new AI-related costs (see figure 6). The most common form of cost accommodation for AI is reallocation of existing budget (16%), with a mere 2% of respondents reporting that these accommodations are primarily from new funding sources. In open-ended comments, respondents identified a wide range of funding sources:

  • Flexible technology spending budgets and discretionary funds
  • Recovered budget from unused subscriptions or duplicative software
  • Donations, endowments, and grants
  • Planning and piloting budgets
  • Allocation of new revenue
  • Reducing workforce budget (e.g., not backfilling positions, reducing new hires)
  • Reserves and carryforward funds
  • State funding
Figure 6. How Institutions Are Accommodating New AI-Related Costs

The share of institutions partnering to share AI investment costs has increased modestly. In 2024, 63% of executive leaders said they were not working with anyone for this purpose; in 2025, that number dropped to 56% of executive respondents. Still, the vast majority of respondents reported either that they don't know about external sources of funding (52%) or that they are not partnering with an external source (28%) (see figure 7). Among the remaining respondents, government funding agencies in particular were noted as a source for shared investment costs (12%).

Figure 7. Third-Party Partners for Sharing AI Investment Costs

Regardless of where the financial support is coming from, accurately estimating how much financial support is needed may be a challenge for some institutions. A plurality of executive leaders (34%) said that their institution has tended to underestimate AI-related costs, and only 21% said they have tended to accurately estimate AI-related costs. Notably, nearly a quarter (23%) of executive leaders said that their institution has not had any AI-related costs, and none reported that they tend to overestimate AI-related costs.

Risks and Opportunities

Most respondents are concerned about many AI risks. Of the 22 potential areas of concern related to AI that we asked about in our survey, more than three-quarters of respondents said they are concerned "somewhat" or "to a great extent" about 18 of them (see figure 8). In particular, concerns around the ethical misuses of AI (misinformation, disinformation, and copyright/IP violations) and concerns around security and privacy (use of data without consent, insufficient data protection) seemed to be especially important to respondents. The more "human" elements of AI use—diminished personal relationships, the loss of personalized work and learning—seemed to be slightly less worrying, though still of concern to the majority of respondents.

Figure 8. Concern about AI-Related Risks

In open-ended comments, respondents described other AI-related risks they are concerned about:

  • Accessibility of AI tools
  • Widening the socioeconomic divide
  • Widening the divide between higher education institutions based on resources
  • Deterioration of research outputs
  • Faculty and staff burnout
  • Environmental impacts
  • Division among faculty and staff due to differing AI-related beliefs
  • Governmental involvement in higher education through AI-related regulation and funding
  • Higher education falling behind societal and workforce shifts
  • Increasing influence of for-profit technology companies on higher education

Of these risks, division among faculty and staff is a notably new finding in EDUCAUSE's AI-related research. At a time when other research points to a global context of increasing social and political division, respondents' comments in this vein are particularly concerning:

"Fracturing of relationships between staff members on either side of any of these debates. How to manage challenges to deeply held beliefs about our purpose in [higher education] and role of AI in our activities."

"Hostility directed at faculty who are experimenting with AI and trying to teach students how to use the tools in creative, ethical ways."

"There is a lack of collegiality between the two sides of faculty. Our faculty are in two camps, either they love it or they hate it. When they meet, it is often not a positive interaction because of the personal biases toward AI use."

Most are also optimistic about many AI benefits. As concerned as respondents are with many of the potential risks of AI, they also see many of the potential benefits of these technologies (see figure 9). Among the top benefits highlighted by respondents, AI holds the promise of improved data-related practices at the institution, particularly for analyzing large datasets, accessing real-time data and visuals, and generating insights for data-informed decision-making. It is encouraging, then, to see that more respondents this year than last year reported that their institution has begun preparing its data to be AI-ready (42% vs. 29%; see figure 10).

Figure 9. Optimism about AI-Related Opportunities
Bar chart showing that, across 21 potential benefits from AI, majorities of respondents, ranging from 68% to 94%, said they considered each item to be somewhat or greatly beneficial.
Figure 10. Extent to Which Institutions Are Preparing Data to Be AI-Ready

In open-ended comments, respondents described other opportunities associated with using AI in higher education:

  • Advancing curriculum reform
  • Improving overall cybersecurity
  • Educating the general public about AI
  • Supporting a culture of personalized and self-guided learning
  • Enabling all individuals to code without programming skills
  • Expanding creative and critical thinking
  • Advancing the work of professional fields (e.g., medicine, finance, architecture)
  • Improving understanding of authentic learning
  • Improving communication
  • Reducing operating costs
  • Increasing awareness and importance of digital literacy

Notably, several respondents indicated that their optimism for using AI in higher education is "longer term," describing the current state of AI technologies as too "immature and imperfect" to be of value.

Beyond focusing on opportunities for the future, respondents also provided specific examples of how their institutions are currently using AI. With regard to tools, respondents indicated that they are using chatbots, customized software, and AI capabilities recently added to software they were already using (e.g., Zoom, Freshservice, Microsoft Office 365, Adobe Firefly, Copilot, Panopto). Some of the capabilities they described include the following:

  • Note taking
  • Automating business processes
  • Managing and analyzing data
  • Strengthening collaboration between stakeholder groups
  • Training faculty
  • Developing courses (e.g., making course assignments more engaging, writing better exam questions)
  • Knowledge extraction from documents
  • Pattern and anomaly detection
  • Teaching students how to use AI tools and think critically about AI outputs
  • Writing assistance
  • Supporting the help desk

Finally, when asked about the potential impacts of AI on higher education in the next two years, respondents on the whole leaned more toward an optimistic view of AI's future than a pessimistic one (see figure 11). In particular, respondents expressed optimism about the potential for AI to benefit learning analytics and to improve accessibility for students, faculty, and staff with disabilities. The few areas where respondents expressed more pessimism than optimism relate to personal misuse of the technology (students trust AI too much, and academic dishonesty has increased) and to the ethical implications of AI (the digital divide has widened, and outputs are more biased).

Figure 11. Respondents' Predictions about the Impacts of AI on Higher Education by 2027

Respondents' predictions about the future of AI in higher education are similar to their predictions a year ago. Two notable differences indicate that our community might be more optimistic now than it was a year ago about how AI will impact assessments. Specifically, 55% of respondents predict that academic dishonesty will increase in the future (a decrease of 9 percentage points from 2024), and 49% predict that assessments will be more meaningful in the future (an increase of 8 percentage points from 2024).