2024 EDUCAUSE Action Plan: AI Policies and Guidelines
EDUCAUSE's 2024 AI Landscape Study, based on a survey of more than 900 higher education technology professionals, made clear the current gaps in higher education's AI-related policies and guidelines:
"Only 23% of respondents indicated that their institution has any AI-related acceptable use policies already in place, and nearly half (48%) of respondents disagreed or strongly disagreed that their institution has appropriate policies and guidelines in place to enable ethical and effective decision-making about AI use."
More than a year after the "AI spring" suddenly upended notions of what could be possible both inside and outside the classroom, most institutions are still racing to catch up and establish policies and guidelines that can help their leaders, staff, faculty, and students effectively and safely use these exciting and powerful new technologies and practices.
Thankfully, institutions need not start from scratch in developing their AI policies and guidelines. Through the work of Cecilia Ka Yuk Chan and WCET, institutions have a foundation to build on, a policy framework that spans institutional governance, operations, and pedagogy. Built around these three pillars, this framework helps ensure that institutional AI-related policies and guidelines comprehensively address critical aspects of institutional life and functioning:
- Governance encompasses data governance, evaluation of AI use across the institution, promoting and monitoring faculty and staff usage of AI (including research), inclusive and equitable access, intellectual property, and AI use for promotion and tenure and reappointment practices.
- Operations encompasses professional development (training and support), developing and maintaining infrastructure for AI, and reviewing and recommending AI implementation to improve operational practices.
- Pedagogy encompasses academic integrity, assessment practices, clear communication to students regarding AI expectations, developing student AI competencies and skills for workforce preparation, understanding algorithmic bias, regular and substantive interaction, and learner accessibility.
With this framework as a backdrop, we developed this action plan by convening a panel of higher education technology professionals to brainstorm potential actions institutions might take over the next two years to begin establishing effective AI policies and guidelines. We asked panelists to review content from the 2023 EDUCAUSE Horizon Action Plan: Generative AI, reflecting specifically on the "preferred future" described in that report, and list the potential impacts of AI policies and guidelines on individuals, units, institutions, and cross-institution collaborations. We then asked panelists to consider the actions that could help higher education stakeholders navigate both existing and new AI policies and guidelines.
Relative to the ten-year timeline we typically use for foresight work, the two-year window allowed our panelists to reflect on challenges and opportunities far more immediate to present-day planning and decision-making. AI capabilities and needs are evolving rapidly, and higher education leaders need to take action now. By outlining actions that can either leverage existing policies and guidelines or help create new ones, the panel has begun to lay a path forward for AI practice in higher education that we hope you'll find insightful, relevant, and actionable.
EDUCAUSE and the WICHE Cooperative for Educational Technologies (WCET) collaborated on the planning and data collection for this action plan.
Leveraging Existing Policies and Guidelines
Many institutions already have policies, guidelines, and processes in place that, while not specific to AI, could be related or applicable to AI tools and practices. Rather than "reinventing the wheel" by creating an entirely new set of policies and guidelines, leaders at these institutions might instead focus on understanding how best to leverage the existing resources and supports they have at their disposal. The following recommended actions may help institutional stakeholders build on what already exists:
Individual Actions
- Become familiar with common AI tools. By setting aside time to explore and test the AI tools already available to them, individuals can learn both the opportunities and the risks of potential new AI uses and grow more comfortable with these tools. Guidance from leadership on which existing tools to explore, and for which uses, can help ensure individuals' exploration and testing is appropriate and safe.
- Be a good steward of institutional data and existing guidelines. Becoming more familiar with the institution's existing policies on data use, data sharing, and academic integrity, as well as ethical concerns and existing regulations on data privacy and AI, can help individuals and their institutions mitigate risk and avoid legal pitfalls from potential misuses of AI.
- Develop individual AI literacy. Individuals have a shared responsibility for keeping abreast of the rapidly evolving technologies and trends impacting their work and their institution, and much of this information is readily available through online news outlets, think tanks, and research. By maintaining an ongoing understanding of technology and policy developments in AI, as well as the impacts of AI on higher education, individuals can be more informed and effective decision-makers in their own adoption and use of AI.
Departmental or Unit Actions
- Clarify which existing departmental or unit policies apply to AI and which do not. Individual departments or units often have their own policies and guidelines directing their work, and these teams should make space in their ongoing meeting agendas and communications to review and discuss existing policies and identify which, if any, are relevant to the use of AI and/or may need to be updated to better accommodate emerging AI-related needs.
- Develop standard language for multiple types of AI use in courses. With buy-in from academic leadership, as well as direct input from faculty, departments or units can review existing syllabus templates, curricula, and course design materials to ensure standardization and consistency of AI use in their courses. As faculty determine the best and most appropriate uses of AI in their courses, consistent language across courses can give students greater clarity about what is expected of them.
- Create collaborative opportunities to figure out what works and what doesn't. One of the most valuable resources departments or units have at their disposal is one another. Opportunities to share with one another and collaborate on AI-related issues can be abundant, ranging from informal team communications—using group email or chat channels to share new technologies or ideas for new use cases—to formal, team-wide actions such as establishing department- or unit-level working groups or communities of practice to tackle more challenging AI-related questions.
Institution Actions
- Form cross-functional committees and communities of practice to evaluate and improve AI practices. Many institutions have established models for convening cross-functional groups to take on important institution-wide projects. Communities of practice across key areas such as teaching and learning, information technology, and productivity can help foster learning and sharing of AI best practices across the campus. And oversight committees focused on the review of ethical applications of AI can provide consistency in practice, much like an institutional review board (IRB).
- Build on existing professional development to include AI literacy training for all faculty, staff, and students. Proactive AI training can help key stakeholder groups better understand the pitfalls to watch out for and help avoid potential feelings of alarm across campus. Streamlining these trainings and aligning them with existing training can keep stakeholders from feeling overwhelmed by "yet another" training requirement.
- Map AI policies and guidelines to the institution's existing mission, values, and strategies. Key stakeholders are more likely to buy in to and support the institution's AI policies and guidelines when they know those policies and guidelines are consonant with and in service to the institution's larger purpose and in line with the institution's values.
Multi-Institution Actions
- Collaborate to create a common understanding of the potential implications of state/federal regulations. Through an existing convening group or organization with state, regional, or national reach, collections of institutions can gather with subject-matter experts to interpret and better understand the implications of state and federal regulations for institutional practice.
- Develop vetting criteria that could be added to existing resources. Stakeholders from across institutions and other higher education–focused organizations can develop new AI-related criteria to be shared with solution providers and held up as standards for AI tool procurement decisions within or alongside existing tools. One specific example of such existing tools is the Higher Education Community Vendor Assessment Toolkit (HECVAT), a questionnaire framework designed for institutions to measure vendor risk.
- Collaborate with peer institutions to review and compare AI policies and procedures. Many institutions will likely encounter similar questions, challenges, and opportunities as they explore the use of AI. Lessons and ideas shared across institutions will help advance AI-related practices throughout higher education and limit the number of repeated mistakes and costly lessons learned.
Creating New Policies and Guidelines
Although existing resources and supports may help with some AI-related needs, nascent AI technologies and practices can certainly present institutions with novel challenges that existing policies and guidelines simply fail to address. In these instances, institutions may need to create new policies and guidelines and establish new structures and supports to help stakeholders navigate these uncharted waters. The following recommended actions may help institutional stakeholders know where to start when there's a blank slate in front of them:
Individual Actions
- Surface student voices and perspectives when developing new policies and guidelines. Making the space and time for student connection and input into the development of new AI policies and guidelines can help ensure student buy-in and compliance with the institution's AI policies and practices. Student inclusion will also enrich the institution's policies and guidelines, making them more relevant and helpful to students.
- Recommend scenarios or issues for which additional guidance on AI is needed. A mechanism for gathering individual stakeholder feedback on AI-related scenarios and use cases can enrich the collective understanding of AI-related needs and help ensure campus-wide support, understanding, and buy-in to new AI policies and guidelines that may be needed.
- Be transparent and open about AI usage to generate conversation, including documentation of new and emerging use cases. Open dialogue about common areas of AI use among faculty, staff, and students can help generate examples of use cases and templates that will ensure standardized and consistent use of AI across the institution. Documentation of new and emerging use cases will help the institution identify potential areas for further exploration and additional support.
Departmental or Unit Actions
- Regularly review academic programs and courses to determine where AI usage and/or literacy should be enhanced and supported. Marketplace scans can help departmental or unit leadership identify emerging AI skills or competencies that should be included as degree program outcomes. Department- or unit-level AI policies should be established based on current research and best practices for their respective discipline or field.
- Develop new collaborations to break down silos. Though some department- or unit-level AI practices and policies will be discipline- or field-specific, other practices and policies may be more generally applicable or helpful to other departments, units, and institutions. Be intentional about communicating your department or unit's AI-related activities to external stakeholders, such as other departments within and outside of your institution.
- Establish clear policies and procedures for AI-related tenure and promotion issues. Departments or units may discover benefits to using AI tools in drafting tenure and promotion materials, analyzing and evaluating tenure and promotion criteria, and streamlining administrative tasks and needs surrounding tenure and promotion. Some departments or units may even consider updating tenure and promotion criteria to incentivize staff and faculty participation in and support of new AI policies and guidelines.
Institution Actions
- Ensure equitable access to AI tools across the campus. Departments across the institution have varying budgets and staff capabilities for implementing and supporting AI tools, and institutions will need to consider balanced and equitable approaches to allocating additional AI-dedicated funds and staff support. Equitable access across departments will also help ensure that students and faculty within each of those departments have equitable access to these tools.
- Hire leadership and/or staff specifically tasked with leading AI for the institution. Whether focused on developing new training opportunities and raising campus awareness of AI or establishing institution-wide policies, governance, and guidelines for the use of AI, dedicated leadership and staff capacity will be critical for the success of AI initiatives.
- Create a high-level AI governance structure (outside IT), including regular audits. Having a dedicated AI governance structure in place will better position institutions to establish an institutional culture and values around the ethical and responsible use of AI and to manage the institution's overall risk in implementing AI practices and tools. Institution-wide auditing of AI uses and risks will help leadership, staff, and faculty decide, in good conscience and in full compliance with ethical and legal standards, whether to move forward with AI adoption.
Multi-Institution Actions
- Build shared frameworks for evaluating internal and solution-provider AI products and models. As institutions become increasingly willing to share their experiences with and evaluations of AI tools and practices, higher education will be able to keep lower-quality and biased AI tools from getting into the hands of individual users and will be able to improve and evolve the tools they have available to them.
- Leverage the higher education community and common compliance standards to push AI solution providers to meet the needs of higher education. Through collective bargaining power and the influence of key leaders and institutions, colleges and universities can work together to limit the resources needed by each institution to negotiate contracts and procure solutions, ensure a wider selection of quality AI tools, and reduce potentially predatory sales practices targeted at vulnerable institutions.
- Launch cross-institutional initiatives for advancing AI policies and standards. Regional, national, and/or global collectives of institutions can work together—functioning like "accreditation" bodies for AI—to develop standards for institutions in areas of AI practice such as data use, academic research, and testing and scoring AI bias. These collectives might also be able to proactively influence legislation impacting the use of AI in higher education.
Action Plans
EDUCAUSE Action Plans are developed through expert panel recommendations about how to achieve a preferred future state within a specified period of time. The process is based on components of the Institute for the Future (IFTF) foresight methodology, and the expert panel includes practitioners and thought leaders who represent the higher education technology community. Panel members are selected for their particular viewpoints, as well as their contributions and leadership within their domain, and every effort is made to ensure those voices are diverse and that each can uniquely enrich the group's work. For information about research standards, see the EDUCAUSE Research Policy.
Expert Panel Roster
We would like to acknowledge and express our deepest gratitude to the panel of experts listed below, who were responsible for generating all the big ideas summarized throughout this resource. Their brilliant thinking and rich discussions were the foundation of this work, and this resource would not exist had it not been for their dedication to this project and their passion for serving higher education.
Heather Brown
Instructional Designer
Tidewater Community College
Lance Eaton
Director of Faculty Development & Innovation
College Unbound
Jake Harwood
Senior Information Security Manager
University of California Berkeley
James Hutson
Lead XR Disruptor, Professor, Department Head
Lindenwood University
Tracy A. Mendolia-Moore
Manager of Educational Technology Innovation
Western University of Health Sciences
Kate Miffitt
Senior Director for Innovation, Digital Experience, & Accessibility
California State University
John Opper
Executive Director for Distance Learning and Student Services
Florida Virtual Campus
Sunay Palsole
Assistant Vice Chancellor for Engineering Remote Education
Texas A&M University
Pegah Parsi
Chief Privacy Officer
University of California San Diego
Elizabeth Reilley
Executive Director, AI Acceleration
Arizona State University
Dave Weil
Chief Information Officer
Ithaca College
Jim Wilgenbusch
Director of Research Computing
University of Minnesota
Van Davis
Chief Strategy Officer
WCET
Mark McCormack
Senior Director, Research & Insights
EDUCAUSE
Kathe Pelletier
Director, Teaching and Learning Program
EDUCAUSE
Jenay Robert
Senior Researcher
EDUCAUSE
Keturah Young
Program Manager, Communities
EDUCAUSE
© 2024 EDUCAUSE. The content of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.
Jenay Robert and Mark McCormack. 2024 EDUCAUSE Action Plan: AI Policies and Guidelines. Boulder, CO: EDUCAUSE, May 2024.