The Academic Culture and the IT Culture: Their Effect on Teaching and Scholarship
© 2004 Edward L. Ayers
EDUCAUSE Review, vol. 39, no. 6 (November/December 2004): 48–62.
A year ago, my colleague Charles Grisham and I wrote an EDUCAUSE Review article entitled "Why IT Has Not Paid Off As We Hoped (Yet)." In short, we argued that information technology has not yet transformed higher education because the areas of teaching and scholarship, the "heart" of colleges and universities, have remained relatively untouched by the new technologies. In this article, I’d like to continue the discussion and also go further, exploring not only why these two areas continue to be, for the most part, resistant to the changes but also how technology can successfully address these core missions of higher education.1
The Invisible Success of IT
Those of us who have been involved for a while in the long courtship between higher education and information technology can recall many ups and downs in the last thirty years or so.2 We remember when we first saw Mosaic, Netscape, and the World Wide Web. At each step along the way, some of the more impressionable among us thought that one innovation or another would push us over the top, that we would have finally gained the critical mass that would channel the undeniable power of information technology into higher education. We watched as commerce was transformed, as entertainment was transformed, as personal communication was transformed, and we kept waiting for the moment when higher education would be transformed in the same way.
In particular, we waited for the time when the very heart of education—the classroom and the scholarship taught in that classroom—would be transformed. Yet despite the tremendous investment that all institutions of higher education have made in information technology, despite the number of classrooms wired and the number of laptops mandated, the vast majority of classes proceed as they have for generations—isolated, even insulated, from the powerful technologies we use in the rest of our lives. Moreover, the form in which scholarship appears has barely changed. Across almost every field, researchers, no matter how sophisticated the technology they use in discovery, translate their discoveries into simple word-processed documents. Sure, they sometimes add JPEG images and other illustrations; and in the sciences, pre-prints rush around the world long before print journals would be able to publish the articles. But producing scholarly discourse in HTML and PDF formats has not changed scholarship in any significant manner. The nature of argument has remained remarkably resistant to innovation in rhetoric or form in every field of scholarly endeavor.
Very real technological accomplishments have tended to become invisible because they have been so successful. If you had told people a decade ago that card catalogs would virtually disappear within ten years and would be replaced by our current information-management systems, they would not have believed you. Librarians have been the real heroes of the digital revolution in higher education. They are the ones who have seen the farthest, done the most, accepted the hardest challenges, and demonstrated most clearly the benefits of digital information. In the process, they have turned their own field upside down and have revolutionized their professional training. It is testimony to their success that we take their achievement—and their information-management systems—for granted.
Similarly, college and university IT professionals have done more than anyone has asked them to do. The speed with which they have built networks and infrastructure, trained people, and created new student-registration and fiscal-management systems has been remarkable. And again, their success is taken for granted, with IT becoming almost as invisible as the electricity on which it runs. In a cruel irony, few faculty think "Ah, I will now use technology" whenever they check to see whether a book is in the library, or whether a student is enrolled, or whether their paycheck has been posted. And yet many do think: "I don’t want to use technology, or I can’t use technology, to teach in the classroom or to disseminate my scholarship." Those faculty who have ignored all the excitement up to this point have decided that they can withstand whatever else is put before them until the end of their careers. They go to their professional scholarly meetings and see only a few workshops and talks on the new technologies; they read the job ads and see that the jobs require exactly the same credentials as were required a quarter century ago.
The bottom line is that despite all the work and successes of IT professionals, teaching and scholarship at leading institutions of higher education remain relatively resistant to the possibilities of information technology.
The Academic and IT Cultures
From the viewpoint of a dean who would love to see the transformation of higher education accelerated, and from the viewpoint of a long-time laborer in the technology vineyard who would love to see some of the fruit come to harvest, I’m struck by many faculty members’ resistance to the obvious benefits of the maturing technologies. From the viewpoint of a professor, however, I understand some of the more obvious reasons for this resistance: shortages of time, money, and energy. In addition, I see more systemic reasons, ones that we might call "cultural": deeply patterned, deeply entrenched habits of thought and behavior. The problem is that the academic culture and the IT culture simply do not mix together well.
Nobody seems to like the word academic. "That’s merely academic" is used as a dismissive description of something irrelevant to real life, something as pointless as counting angels on the head of a pin or writing an English composition paper on Beowulf. Any mention of the word academic in a book review is a kiss of death. In a particularly cruel twist, even when a nonacademic praises a book by a professor, the reviewer often dismisses the academy in the process: "Not the boring, self-indulgent, impenetrable, dithering book we always expect from an academic, this book is almost as good as one written by someone who knows a lot less about the subject."
When asked to identify ourselves, almost no professors choose "academic" as their first choice. "College teacher" can sometimes sound good, with its shades of the movie Dead Poets Society. "Professor" can be OK on occasion, bringing to mind John Houseman in the movie The Paper Chase. Saying that you work "at the college" or "over at the university" can usually get you through a casual conversation without too much loss of status at the tire store or supermarket.
But being more specific can often cause problems. When I’m on an airplane and tell someone that I teach history, all too often the response is: "Boy, I always hated history—all those names and dates." I got some notion of this when I started to work on the subject of the Civil War, and my mother-in-law, a very sweet woman, introduced me to one of her friends as a "Civil War buff." I carefully tried to explain the difference between a historian and a buff, with the main difference seeming to be that I don’t have another job from which the Civil War is merely a hobby.
As problematic as disciplinary nomenclature can be, adding "academic" makes it even more toxic. The title of "dean" sounds imposing, if faintly scary (satisfyingly enough), since so few people, including deans, know exactly what a dean does. But even I cringe when I think about defining myself as what I actually am during most of my waking hours: an "academic administrator." It’s hard to think of many job descriptions (for legally paying work) that have more negative connotations than that. The title conjures up all the mustiness of "academic" along with all the bureaucratic, paper-pushing, rubber stamp–wielding, red tape–entangling connotations of "administration."
On the other hand, as someone who has served on IT committees dominated by IT staff, I know how IT people speak about academics. I’ve seen the eye-rolling and heard the chuckling at some of the more clueless of my academic colleagues who can’t figure out how to empty the trashbin on their desktop computer. Still, my friends in information technology have their own struggles. You know the stereotypes. You’ve heard the whispers: "geek." As for me, I represent the worst of all worlds: I’m both a lifelong academic and a longtime IT geek. But perhaps this does give me the credentials to delve into the nomenclature of both the academic culture and the IT culture.
For a definition of geek, I turn to a very convenient authority, the dictionary function of Microsoft Word:
1. somebody who is considered unattractive and socially awkward (insult)
2. a carnival performer whose act consists of outrageous feats such as biting the heads off live animals
3. somebody who enjoys or takes pride in using computers or other technology, often to what others consider an excessive degree (informal disapproving)
Leaving aside "biting the heads off live animals"—an activity that, in my experience, is indulged in by only a few academic administrators, and usually in private—I rest my case. When your own computer program tells you that by using that very program to "an excessive degree," you are becoming increasingly "unattractive and socially awkward," you might suspect that you’re in trouble. If you brush that warning aside to finish writing an article with that same program, you really are a geek.
As is often the case with oppressed groups, the disdain faced by those in the IT arena and those in the academic arena has not always brought the two together in a shared bond. The two cultures have so much to offer one another, so much to teach one another, if they would only look past the tweed and elbow patches on the one hand and the pocket protectors on the other. The IT industry and the academy share some obvious and important characteristics. Both deal with intangibles, especially ideas. Both are focused on networks and on the information those networks carry. Both are dedicated to innovation and competition. Both are extensible structures: build something once, and you can apply it everywhere.
But taking a clear-eyed view reveals that there’s more to the story. As shown in Table 1, information technology and the academy display competing characteristics.
Table 1. Competing Characteristics

Information Technology                  | The Academy
everywhere and nowhere                  | strongly identified with a very specific location
brash young industry                    | a self-consciously ancient institution
highly unstable                         | the most stable institution across the world
new competitors continually emerge      | impossible to break into top ranks
possibility of great profits            | no possibility of profit at all
work performed by anonymous teams       | centered on scholarly stars
obsolescence built in                   | designed to deny obsolescence
virtually instant results necessary     | patience a central virtue
designed to be transparent              | opaque and labyrinthine
Since information technology has infiltrated every nook and cranny of other parts of life, it seems to me that it must be the academy that resists. That is because several basic paradoxes lie at the heart of the modern American university—basic conflicts that make the academy a fascinating place to live and a hard place to administer:
1. People in universities are supposed to be both communal and profoundly autonomous.
- Our fragmented institutions are unified mainly by people’s common willingness to allow others to pursue their own, often incomprehensible, expertise.
- Peer review by disinterested experts, preferably strangers, determines who succeeds within close-knit communities.
- Higher leadership is generally transitory, amateurish, and constrained but is the only force providing any coordination or direction to many otherwise disconnected scholars, departments, and disciplines.
2. The university is built to be both a protected ivory tower and a fearless creator of the future.
- Universities are ancient and unchanging institutions built to generate change.
- Most academics welcome change in society and hate any change in their immediate environments.
3. University students are customers, colleagues, a labor force, and fictive kin—all at once.
4. Reputation is the only measurable index of success, and everyone acts as if rankings, whether generated by research councils or popular magazines, are real, though no one really believes them. The most powerful ranking—in U.S. News & World Report—is the one people claim to believe in the least.
- Awards, prizes, and titles often replace money as indexes of success; other than the military, this is the only American institution in which this is so.
5. Each university is profoundly unique and also profoundly like every other university.
- All institutions sell the same things, often with interchangeable pictures and slogans: for the humanities, professors and students in tree-filled quads; for the sciences, people in lab coats with beakers of colored fluid.
- All differences are fetishized because all the constituent parts are fungible.
No wonder those in the IT areas have a hard time living in such an environment! How can information technology be adapted to live in this paradoxical atmosphere?
Teaching, Scholarship, and IT
The early academic adopters of information technology appear to have already been recruited. Meanwhile, more junior faculty are receiving mixed signals: "Yes, experiment with technology, but not too much. Don’t count on tenure or recognition in your discipline as a result. Use information technology in your research but not in its presentation."
So what should be done? Those in information technology should keep on doing what they’re doing: building the standards, techniques, tools, and vision of the infrastructure that ties everything together. But colleges and universities need to think about ways to get that infrastructure more directly involved in the mysterious exchange between student and teacher and in the intimate relationship between researcher and scholarship.
As a dean, I sometimes find myself taken aback as I walk down the hall in a classroom building. Within the space of a few minutes, I hear Mandarin being spoken in one classroom, economics taught in the next, and poetry read in another. I look at the faces of the students and the faculty, and they’re in their own world. They’re oblivious to their setting, to the weather. They’re suspended in time.
All of that is fragile. I think of it as a flame, intense but vulnerable. I think of my job as dean as protecting that flame. Universities have to build massive shells to shelter those flames. The flames can be snuffed out by many things, from bad teachers, to the wrong classrooms, to budget cuts, to the failure to capitalize on opportunities.
To fit into the complex environment of the academy, the manifestations of information technology need to be calibrated more precisely to the particular purpose at hand. We’ve tended to build big things in the hopes of capturing as many uses as possible. But maybe now we need to build lighter, smaller things. We might build simpler ways to use our vast collections. We might build expressly for the devices that we will increasingly use, devices that are far more portable, wireless, and ubiquitous than those to which we’ve become accustomed. These devices will be the first that can really work within a classroom, that can create the most local networks of all—temporary networks of that class in that time and place.
Those networks can be spontaneous, recursive, unpredictable, quick. They build on face-to-face contact as well as on the power of networks; they add to, rather than displace fully, human contact. They allow instant testing so that the instructor can immediately see what students know, what they think they know, and what they don’t know. The networks allow the classroom to connect with the world outside, even during class itself. They can shift the emphasis from note-taking to deeper kinds of engagement.
We need, in short, to scale down as well as scale up, to build more personal networks at the same time that we build more robust networks.
We need to tailor new technologies more carefully to scholarship as well. I’ve struggled with this myself. In the 1990s, I began Valley of the Shadow: Two Communities in the American Civil War (http://valley.vcdh.virginia.edu/). The idea for the project was straightforward: put every piece of information about every person in a Northern community and a Southern community in the era of the Civil War into a digital context so that students and scholars would have an unprecedented command over those millions of pieces of evidence. The project was conceived before the World Wide Web appeared, but having been built in SGML, it was able to move quickly to the Web and HTML. The project makes no scholarly argument of its own, however, and it puts forward no thesis to be tested. It does not provide a narrative of events against which students can test their own interpretations, and it does not engage the immense scholarly literature on the Civil War.
As a result, Will Thomas, a colleague and longtime member of the team behind the Valley project, and I decided to try to close the circle.3 Commissioned by the American Historical Review (AHR), the leading journal in the discipline in the United States, we undertook to make an argument—along the lines of traditional historical scholarship—based on the Valley project. But since Will and I could not address all of the many issues that the Valley project allowed us to explore, we focused on a key question for which most of the evidence had already been gathered: How different was the antebellum South from the antebellum North on the eve of the Civil War? We hoped that by measuring as precisely as possible the various facets of life in Franklin County and Augusta County—demography, land use, agriculture, industry, literacy, religion, gender, and race relations, among others—we could come up with a clearer answer to a question that had generated enormous discussion for many decades.
We struggled to imagine how we might possibly contain and convey so much information. We knew that we did not want to use the computer merely as a distribution device; we wanted to rethink the ways that text could be presented. This meant that we would need to write in discrete modules of prose, with each module making one point clearly and connecting directly to the relevant evidence and scholarly literature. We had to reinvent the most basic elements of scholarship. There could be no fixed page numbers, for example, since in a digital article, people could start from many different places and follow many different paths. There could be no traditional footnotes or index. In other words, we were devising an argument for a journal article at the same time that we were envisioning a new kind of journal article. Could our colleagues be convinced this was a worthwhile activity?
Fortunately, the single most important criterion for historians is a plausible fit between evidence and argument. Since the digital archive we had built emphasized evidence, our fellow historians could see that we were working within the empirical tradition of the profession even if we were experimenting with presentation. Furthermore, Will and I were by no means enemies of traditional forms; we had both written monographs and scholarly articles and planned to write more. Finally, by the time we began our article in 2000, historians had already adapted to some aspects of the electronic environment. In 1999, the president of the American Historical Association, Robert Darnton, had issued a call for using digital media:
"Anyone who has done long stints of original research knows the feeling: If only my reader could have a look inside this dossier, at all the letters in it, not just the lines from the letter I am quoting. If only I could pursue that trail through the archives, despite the detour from my central argument. If only I could show how themes interweave through diverse bodies of documents, even though the patterns extend beyond the bounds of my narrative." To bring that feeling into reality, Darnton envisioned an electronic book in layers: "The top layer could be a concise account of the subject, available perhaps in paperback. The next layer could contain expanded versions of different aspects of the argument, not arranged sequentially as in a narrative, but as self-contained units that feed into the topmost story. The third layer could be composed of documentation. . . . A fourth layer might be historiographical. . . . A fifth layer could be pedagogic. . . . And a sixth layer could contain readers’ reports, exchanges between author and editor, and letters from readers."4
Darnton’s vision, though exciting, was more archival than hypertextual. It elaborated on the traditional book but did not change the central narrative. Will and I were thinking of something a bit different for our article. We thus arrayed our writing around an elaborate mechanism and sent the Web address to the AHR to distribute to anonymous reviewers, the litmus test for scholarly publication. But when the reviews came back, it was clear that the readers could not find our argument in the hypertextual jungle. The article made demands that violated an unspoken contract. Partly because of its length and partly because of its hypertextual form, our article frustrated readers’ expectations for a scholarly article laid out in a certain way.
We went back to the drawing board. The next version of the article took a much-simplified form. Will and I returned to first principles. We decided that we would build the article around the three elements of all professional historical writing—argument or narrative, evidence, and the scholarly literature on the subject (also called historiography)—and that we would link them together in what we thought would be a useful way. We envisioned the article as a prism: each module refracted evidence, argument, and historiography in a different way, shining different angles of light on the same complicated problem. This time, the peer reviewers thought the digital article worked about as well as such a strange form of scholarship might work. They encouraged the AHR to publish the article, and it was subsequently featured on the cover of the print version of the journal, which contained a summary and an introduction to the article. The real article, of course, could live only online.5
In the article, we talk about a tool we are developing to make it easier for other authors to write digital articles without having to build them from the ground up. That tool is called CHART, for Comprehensive Historical Analysis and Research Tool. Our goal is to provide this as an XML template, freely available to all who would like to use it. As with teaching, our tools for scholarship need to be lighter, more nimble, cheaper, and quicker. CHART is designed to be a step in that direction.
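To make the idea of modular, linked scholarship concrete, here is a hypothetical sketch of what one module in such an XML template might look like. The element names, attributes, and references below are illustrative assumptions on my part, not the actual CHART schema:

```xml
<!-- Hypothetical sketch only: these element names are not the real CHART schema. -->
<!-- One self-contained module, refracting the three elements of historical writing. -->
<module id="agriculture">
  <argument>
    <p>Measured by farm values and output, Augusta and Franklin counties
       were more alike than the sectional stereotypes would suggest.</p>
  </argument>
  <evidence>
    <!-- pointers into the digital archive rather than fixed page citations -->
    <source ref="valley:census-1860-agriculture"/>
    <table ref="valley:farm-values-comparison"/>
  </evidence>
  <historiography>
    <citation ref="lit:antebellum-southern-economy"/>
  </historiography>
  <links>
    <!-- links replace sequential page order: readers may arrive from many paths -->
    <related module="land-use"/>
    <related module="slavery"/>
  </links>
</module>
```

The design choice the sketch tries to capture is the "prism" described above: each module binds one point of argument directly to its evidence and to the relevant scholarly literature, and cross-module links, rather than page numbers or a fixed narrative sequence, carry the reader through the article.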
Information technology has not made the impact on higher education—or at least on the core missions of higher education—that it has made on many other aspects of society. We’ve built a great infrastructure that has transformed many social and business aspects of our work and our libraries, but teaching and scholarship have been relatively little touched. I think we’re ready for the next stage: building tools that can be carried into the heart of the academic enterprise. For teaching, we need tools that anyone can pick up, that can be customized, that are quick and adaptable, and that are less expensive in money, time, and commitment. For scholarship, we need to craft forms of scholarly presentation that take advantage of the power of the new media we now possess. For both teaching and scholarship, therefore, we need IT people and academic people to work together more closely than ever before.
As someone who believes that the rapid development of information technologies is perhaps the most significant long-term social change of our time, and as someone who believes that the academy is among the most important of human institutions, I think we simply must find ways to get the two cultures to work together more effectively. The academic culture has already changed radically before. It has adapted to a vastly larger and more diverse student body, to entirely new disciplines, and to a reliance on philanthropy and research dollars. It can change again, to a time when the ancient joys of face-to-face education can be made even greater, through the pioneering work being done within the IT culture and through the wise integration of new technological tools.
1. The basis for this article was my Plenary Address at the Coalition for Networked Information (CNI) Spring 2004 Task Force Meeting in Alexandria, Virginia, April 15, 2004.
2. The ideas in this section are drawn from the earlier article: Edward L. Ayers and Charles M. Grisham, "Why IT Has Not Paid Off As We Hoped (Yet)," EDUCAUSE Review, vol. 38, no. 6 (November/December 2003): 40–51, http://www.educause.edu/pub/er/erm03/erm0361.asp. They are repeated here with the permission of the co-author, Charles Grisham.
3. Thomas is the co-founder and the executive director of the Virginia Center for Digital History: http://www.vcdh.virginia.edu/.
4. Robert Darnton, "A Program for Reviving the Monograph," Perspectives, vol. 37, no. 3 (March 1999), http://www.historians.org/perspectives/issues/1999/9903/9903PRE.CFM.
5. William G. Thomas III and Edward L. Ayers, "An Overview: The Differences Slavery Made: A Close Analysis of Two American Communities," American Historical Review, vol. 108, no. 5 (December 2003), http://www.historycooperative.org/journals/ahr/108.5/. For the full online version, see http://www.vcdh.virginia.edu/AHR/.