Accountability, Demands for Information, and the Role of the Campus IT Organization

Brian L. Hawkins

Higher education is encountering unprecedented pressure for accountability from both internal and external constituencies. Frank Rhodes, the former president of Cornell University, has stated: “Accountability…is the newest buzzword for all institutions. It is an important—indeed, a vital—obligation, but it means very different things to different people.”1 These constituencies include legislators, the families of prospective students, accreditors, trustees, current students, faculty, and administrators—each wanting something quite different from the institution and each wanting the information for varying reasons and purposes. This pressure for accountability in higher education is actually nothing new; it has been a top concern for nearly 15 years. Today, however, the rising price of tuition is exacerbating the call for colleges and universities to demonstrate their effectiveness and to become more transparent about how resources are used.

Higher education, meanwhile, has been extremely reluctant to step up to the challenge of measuring the outcomes of its teaching, learning, and research. Ironically, researchers can measure the movement of subatomic particles and the radiation and other effects of unseen nebulae, but when it comes to measuring and assessing the impact and effectiveness of teaching, learning, and research on campus, we all too often hear that such an effort is too difficult. Whether the difficulty is because we have not yet learned how to do this effectively or because we have merely avoided the task is irrelevant. Society is becoming increasingly intolerant of such responses, and political pressures are mounting for campuses to deal with these issues or to have the government do it for—and undoubtedly to—higher education.

Demands for Information: The Spellings Commission

In 2005–2006, a national committee was established by Secretary of Education Margaret Spellings to carefully examine and make recommendations about higher education in the United States. The committee report, A Test of Leadership: Charting the Future of U.S. Higher Education, came with a wide variety of recommendations, largely focusing on accountability, affordability, and access. Many of these recommendations directly related to the information needs of higher education. The report stated:

Our complex, decentralized postsecondary education system has no comprehensive strategy, particularly for undergraduate programs, to provide either adequate internal accountability systems or effective public information. Too many decisions about higher education—from those made by policy makers to those made by students and families—rely heavily on reputation and rankings derived to a large extent from inputs such as financial resources rather than outcomes...Parents and students have no solid evidence, comparable across institutions, of how much students learn in colleges or whether they learn more at one college than another. Similarly, policy makers need more comprehensive data to help them decide whether the national investment in higher education is paying off and how taxpayer dollars could be used more effectively.2

One of the foci of the Department of Education for the past several years has been on developing a unit record system that will track every student, at every higher education institution, for every quarter or semester—in order to develop a longitudinal data collection of students’ progress over the years, at all institutions attended. The purpose of this extraordinarily complex (as well as expensive and intrusive) system is to provide information to policy makers and consumers alike about two primary variables: graduation rates and the net price of higher education.

Although the Spellings Commission report calls only for a pilot of this system, the intent is to create more information and to focus on accountability and also on transparency within higher education. Yet accountability and transparency are different concepts calling for different information elements: accountability serves policy makers, while transparency serves consumers. Viewing the two as sides of the same coin has led the Department of Education to the mistaken conclusion that the unit record system will provide the data for transparency and hence the basis for accountability. While the rhetoric speaks of accountability (in terms of the assignment of federal aid and funding of higher education), the focus narrows to transparency, which is really about consumer information rather than the information that drives informed policy. The metrics of graduation rates and net price may have policy implications, but the clear focus of the proposed unit record system is on providing consumers with information that will allow them to compare and contrast colleges and universities, as if they were purchasing an automobile.

Higher education must avoid falling into this one-size-fits-all mentality. Although this approach is officially disclaimed by Secretary Spellings and her colleagues, there is a strong push to have “comparability across institutions.” Higher education leaders all live, breathe, and believe the mantra that the greatness of U.S. higher education lies in its diversity and that it is up to them to provide a means of demonstrating accountability so that this diversity can emerge and be seen. The notion of transparency highlights differences as well as what we have in common—a point that the Spellings report completely misses. All institutions desire and strive to be different in some way, segmenting themselves in a competitive environment. Education is not a commodity that can be bought like an automobile or a box of tissues. Across institutions, there are certainly common aspects that may be compared, but there are also factors that are intangible, hard to measure, or pertinent only to some segments of the community and to some institutional missions. Institutions must continue to focus on their uniqueness and their noncomparability. The higher education community could develop a list of data elements that are appropriate and desirable for consumer information, but it must also consider ways to demonstrate and highlight institutions’ unique aspects.

There is a substantial difference between the kinds of metrics and indicators that are meant to measure students’ and consumers’ information needs and those that are meant to measure institutional accountability and public policy needs. When we talk about net price and graduation rates, we are not focusing on the more deeply rooted issues of learning and outcomes and how they affect our society, communities, workforce, economy, and quality of life. We need to segment the discussion. Although some of the policy-related metrics might be of consumer interest, the inverse is not necessarily true—except to supply uninformed sound bites in what should be a serious discussion of higher education and its directions.

This distinction between information for consumers and information for policy makers is vitally important, because what we measure will shape behavior and define results. For example, focusing on “time to degree” suggests that it is a good thing to get a degree faster—perhaps at the expense of more important accomplishments such as learning, developing self-confidence, attaining knowledge, and finding what makes one tick. Asking for something like graduation rates by institution, as a measure of accountability, will reveal and drive behavior across the educational system in ways we cannot fully know or predict at this point. If higher graduation rates are the goal, then students—following an institution’s counseling—might take more coursework to complete their associate degree to show degree attainment, and yet the transfer process won’t recognize their additional credits, thus creating a greater total price for a student’s degree. Alternatively, schools might simply change the type of students they admit, and this too could have significant social impact. Although our institutions wouldn’t deliberately lead students down these paths, we should be concerned about the unintended consequences that might result from adopting arbitrarily defined metrics.

Another example of a deceptive measure is net price. As most higher education insiders know, the net price an institution charges a student depends on the student’s family income, the number of siblings simultaneously enrolled in college, and other factors. Just as virtually all passengers on an airplane pay different prices for their tickets, so students at a given institution pay different prices for tuition. How, then, can we provide reasonably accurate price information to the public? Not by simply calculating the sticker price less the various sources of aid, since that figure will match the actual price neither for the most economically disadvantaged students nor for the wealthiest. Such an average will only add more confusion.
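To see why a single published figure misleads, consider a minimal sketch with entirely hypothetical numbers (no institution’s actual data): when aid varies widely with family circumstances, the average of sticker price minus aid is a price that almost no individual student actually pays.

```python
# Hypothetical figures only: one sticker price, five students whose
# need-based aid varies with family income.
sticker_price = 40_000
aid_awards = [32_000, 24_000, 12_000, 4_000, 0]  # illustrative values

# Each student's actual net price, and the single "average net price"
# an institution might be asked to publish.
net_prices = [sticker_price - aid for aid in aid_awards]
average_net_price = sum(net_prices) / len(net_prices)

print(net_prices)         # [8000, 16000, 28000, 36000, 40000]
print(average_net_price)  # 25600.0 -- a price no student in this example pays
```

In this toy distribution the published average sits far above what the neediest students pay and well below what full-pay students pay, which is precisely the confusion the paragraph above describes.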

Institutions need to work on a national basis to determine what data are really needed, to define the level of granularity for such data, and to provide a template for these variables. This effort could best be led by the key higher education associations in Washington, D.C. It requires coordination across all segments of higher education; it is not something that can be dealt with effectively on a campus-by-campus basis. However, that leads to the logical question: What role can and should the campus information technology leader play in this new world of accountability?

The Role of Information Technology and the Campus CIO

Clearly, the public demand for transparency of information, accountability, and outcomes has implications for the role of the CIO (chief information officer). In an EDUCAUSE Review article, Casey Green highlights a new role for information technologists—a role brought to the fore by the Spellings Commission report. He argues that information technology (IT) has a unique opportunity to help institutions address the increasing demand for more and better institutional data:

Information technology now offers viable methodologies to address the mandates for outcomes assessment.

The question here no longer concerns if information technology has a role to play in the campus conversations and public discussions about assessment and outcomes. Rather, the issue before us in the wake of the Spellings Commission report concerns when college and university IT leaders will assume an active role, a leadership role, in these discussions, bringing their IT resources and expertise—bringing data, information, and insight—to the critical planning and policy discussions about institutional assessment and outcomes that affect all sectors of U.S. higher education.3

Today, we have information of all kinds: student information, financial information, research data, transactional records, donor records, medical images, climatological data, and so on. And to manage information technology on our campuses, we have CIOs. Yet though the term “CIO” has been adopted to describe the head of a campus technology group, CIOs now wrestle with more than technology challenges. They increasingly wrestle with how to make the information that they have at their disposal useful to the campus in addressing these accountability issues. Even though this is a responsibility of all senior campus administrators, the CIO may lead the discussion, explaining the value of information technology, identifying useful data sources, and clarifying why it is important that IT be a partner with other units in addressing these needs.

The CIO has a great deal of data that can be combined and used to understand the accomplishments of the students at the college or university. Data from learning management systems and student record systems could be combined and mined to help the institution better understand the dynamics that lead to—or prevent—various learning outcomes. Using the powerful technological and statistical tools that are available and drawing on new sources of information are the elements of the emerging field of analytics in higher education. As stated by Mark Milliron in a recent article:

Technology is neither good nor bad. It is our use of technology tools—within our contexts and toward specific ends—that can make a difference. This idea is the foundation on which today’s insight initiatives are built. Insight initiatives are known by other names in other sectors: business intelligence, in the corporate world; evidence-based medicine, in healthcare. These efforts, which combine explorations of information from the past (hindsight) with looks to the future (foresight), come together in a moment of insight to power decisions that make a positive difference. These initiatives leverage technology, planning, research, strategy, and a host of other key elements to truly realize the treasure of student and institutional data at our fingertips.4

We do not currently have meaningful outcome data with which to refute the various attacks on higher education. Analytics potentially provides a useful methodology for exploring available data and for developing significant models that could serve this purpose. What factors lead to greater time to complete a degree? How can the institution mitigate these factors? What costs can be reduced or eliminated by business process evaluation? These are the kinds of questions institutions must address in the years to come. To help institutions in this effort, the EDUCAUSE “Grand Challenges” initiatives—originally conceived in 2005—include a major focus on analytics.
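The kind of exploratory analytics described above can be sketched in a few lines. This is a toy illustration, not any institution’s method: the student IDs, field names, engagement threshold, and figures are all hypothetical, standing in for a join of registrar records with learning-management-system activity to probe one factor behind time to degree.

```python
# Hypothetical registrar records (terms taken to complete a degree)
# and LMS activity (average logins per term), keyed by student ID.
registrar = {
    "s1": {"terms_to_degree": 8},
    "s2": {"terms_to_degree": 12},
    "s3": {"terms_to_degree": 9},
    "s4": {"terms_to_degree": 14},
}
lms_logins_per_term = {"s1": 45, "s2": 12, "s3": 38, "s4": 9}

# Join the two sources on student ID, then compare average time to
# degree for high- vs. low-engagement students (threshold is arbitrary).
combined = [
    (lms_logins_per_term[sid], rec["terms_to_degree"])
    for sid, rec in registrar.items()
]
high = [terms for logins, terms in combined if logins >= 30]
low = [terms for logins, terms in combined if logins < 30]

print(sum(high) / len(high))  # 8.5 terms for the more engaged students
print(sum(low) / len(low))    # 13.0 terms for the less engaged students
```

Real insight initiatives would of course use far richer data and proper statistical modeling, but the basic move is the same: combine previously separate data sources and look for patterns that institutional leaders can act on.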


The pressure for greater accountability has been coming from both Republicans and Democrats, from corporate America, from accreditors, from trustees, and from other stakeholders. This is not a partisan issue, and it will not be going away. Trying to identify learning outcomes and to measure whether a college or university is fulfilling its mission is not easy, but that difficulty is not an excuse to avoid the issue, as higher education has tended to do for all too long.

Higher education has never before encountered the current level of pressure to change and to modify its methods. The effective use of information within our systems is a change method that should be explored and capitalized on, but the impetus for change must come from the leadership of our institutions. Campuses cannot continue to use the slow, “bottom-up” change methods of the past. The pressures are too great. In addition, for the most part those at “the bottom”—the faculty and the participants within the institutions—are not directly feeling the pressures. And those who are aware have been resistant to change.

The times are different today. Bold leadership from our presidents and chancellors is called for. Higher education needs to act directly and quickly to prevent Draconian solutions from being imposed by legislators and others who are demanding greater accountability and transparency on campus.


  1. Frank H. T. Rhodes, The Creation of the Future: The Role of the American University (Ithaca, NY: Cornell University Press, 2001), 242.
  2. U.S. Department of Education, A Test of Leadership: Charting the Future of U.S. Higher Education, a Report of the Commission Appointed by Secretary of Education Margaret Spellings (Washington, D.C.: U.S. Department of Education, 2006), 14.
  3. Kenneth C. Green, “Bring Data: A New Role for Information Technology after the Spellings Commission,” EDUCAUSE Review (November/December 2006): 46.
  4. Mark Milliron, “Insight Initiatives,” EDUCAUSE Review (March/April 2007): 68.


Green, Kenneth C. “Bring Data: A New Role for Information Technology after the Spellings Commission.” EDUCAUSE Review (November/December 2006): 30–47.

Milliron, Mark. “Insight Initiatives.” EDUCAUSE Review (March/April 2007): 68–69.

Rhodes, Frank H. T. The Creation of the Future: The Role of the American University. Ithaca, NY: Cornell University Press, 2001.

U.S. Department of Education. A Test of Leadership: Charting the Future of U.S. Higher Education, a Report of the Commission Appointed by Secretary of Education Margaret Spellings. Washington, D.C.: U.S. Department of Education, 2006.