Policies and Procedures
Institutional policies are being revised and created to address AI-related issues. Only 23% of respondents indicated that their institution already has AI-related acceptable use policies in place, and nearly half (48%) disagreed or strongly disagreed that their institution has appropriate policies and guidelines to enable ethical and effective decision-making about AI use. However, more than half of respondents (58%) indicated that AI is having an impact on their institution's policies, either through the revision of existing policies, the creation of new policies, or both (see figure 12).
Restrictive AI-related policies are not as common as permissive or neutral policies. Nearly three-quarters of respondents (73%) characterized the general orientation of their AI-related policies as extremely permissive, somewhat permissive, or neutral (see figure 13). In open-ended responses, respondents described extremely permissive policies as those that give individuals full autonomy to use AI as they wish, particularly in the context of teaching and learning. For example, one respondent explained, "Permitting use of AI is contextual and best left up to course facilitators and designers." In contrast, respondents described extremely restrictive policies as those that ban student use, prohibit faculty from asking students to use AI tools, and lack guidance for the responsible use of AI.
AI is having a major impact on policies for teaching and learning, technology, and cybersecurity and data privacy. In alignment with our findings related to strategic responsibility, respondents indicated that the top three functional areas whose policies are already or will soon be impacted by AI are teaching and learning, technology, and cybersecurity and data privacy (95%, 79%, and 72%, respectively; see figure 14). The 11% of respondents who said that other areas not listed were already or would soon be impacted by AI were asked to list those areas; responses included professional development for faculty and staff, accessibility, intellectual property and content ownership, sexual harassment, and student support services.
Academic integrity is the most impacted element of teaching and learning. Given that academic integrity concerns have dominated discussions of AI in higher education over the past year, it was no surprise that a majority of respondents (72%) identified academic integrity as a teaching and learning element that has been at least somewhat impacted (see figure 15). Respondents also identified coursework and assessment as impacted areas (62% and 47%, respectively).
Are institutions preparing their data to be AI-ready? It depends on whom you ask. More than half of respondents (54%) disagreed or strongly disagreed with the statement "We have an effective, established mechanism in place for AI governance (responsible for policy, quality, etc.)." Only 30% of respondents indicated that their institution has at least begun preparing data to be AI-ready (see figure 16). A large proportion of respondents (37%) indicated that they did not know whether their institution is preparing data to be AI-ready. Given the demographics of our sample, it is reasonable to expect that many stakeholders may not be aware of such work if it is happening. In contrast to the general sample of respondents, only 14% of executive leaders indicated that they did not know whether their institution is preparing data to be AI-ready; in fact, 47% of them indicated that their institution had at least begun preparing data. Similarly, 55% of respondents who work in a data and analytics role indicated that their institution had at least begun preparing data. Taken together, these results suggest that about half of respondents' institutions are preparing their data to be AI-ready but that this work is not widely communicated.
Institutions likely have at least some privacy and security policies that adequately address AI, but there is work to be done. As with the findings related to data preparation, the extent to which respondents believe their institutions' cybersecurity and privacy policies are adequate for addressing AI-specific risks varies significantly by professional role. Overall, 41% of respondents indicated that their policies are at least somewhat adequate (see figure 17). This proportion rises to 68% among executive respondents and 59% among respondents who work in cybersecurity or data privacy. These results suggest an opportunity for cybersecurity and data privacy professionals to communicate with other institutional stakeholders about how existing policies relate to AI use cases.
Data security is a top concern for all stakeholders. Only 18% of respondents agreed or strongly agreed that their institution has the appropriate technology in place to ensure the privacy and security of data used for AI. Data security was the most frequently selected AI-related cybersecurity and privacy concern being discussed at respondents' institutions (55%; see figure 18), and among cybersecurity and data privacy professionals, who have specific expertise in this area, a full 82% identified data security as a primary concern. Other concerns selected by more than half of cybersecurity and data privacy professionals were compliance with federal regulations (74%), ethical data governance (56%), compliance with local regulations (56%), and the impacts of biases in data (52%).