Adopting Digital Technologies in the Classroom: 10 Assessment Questions
Technology has long been a part of the classroom space. It is incorrect to say that "technology in education" is a new or radical phenomenon; blackboards, books, paper, and pencils are all technologies, to say nothing of written language or mathematics. Sometime in the 1990s, the word technology was co-opted to refer only to digital tools. "Technology in the classroom" or "technology stocks" or "the dangers posed by technology" came to refer only to digital technology rather than to technology as a whole. As such, much of the discussion surrounding those new tools—pro and con—has been far too narrow in scope and consequence. In order to thoughtfully approach digital technology acquisition and use in classrooms, I propose that we look at technology inclusively; that is, view digital technologies as part of the larger "information ecology" of the classroom, which has long housed technologies of many varieties (see the sidebar).
In this article I propose 10 questions, the answers to which will help guide faculty in adopting digital technology for classroom use. These questions especially target those late adopters who have largely refrained from employing digital technology in the classroom.
10 Assessment Questions
The guidelines built around the answers to these 10 questions present what I believe to be a thoughtful approach to digital technology, one that does not assume that teachers should automatically adopt the latest tool for fear of appearing behind the curve. Nor does this approach automatically reject new technology acquisition in a reactionary belief that such tools have no proper place in education. Digital technology will not aid in education—and in fact can have harmful, unintended consequences—if not used wisely. In an educational setting, that wisdom derives from pedagogical concerns and from the teaching practices and philosophies of educators who use the technologies.
1. What impact does the technology have on the ergonomics of the classroom space?
The spatial arrangement of people and technologies is often an explicit statement of pedagogical practice. Think of a traditional classroom space, with rows of desks bolted to the floor facing a lectern at the front. Such a physical arrangement favors a lecture-based pedagogy, one in which a professor "professes" to an audience of students who listen and absorb that information. I prefer a classroom with easily moveable chairs that can be arranged in a circle or distributed around the room, since I practice a pedagogy based on both general class discussion and small-group work. Similarly, the physical arrangement of the technologies in the classroom space should reflect the instructor’s pedagogical strategies.
I once visited a digital classroom where computer terminals were placed upon rows of bolted desks, suggesting a continuation of the lecture mode. My preference would be for a room of laptops (wireless would be even better) that students could carry around, facilitating the small-group or decentralized discussion mode.
The ergonomics of technological choice must match the professor’s pedagogical intentions as closely as possible. Any technology that unnecessarily disrupts this arrangement or whose arrangement runs counter to the teacher’s pedagogical style should not be adopted.
2. How does the technology expand the dimensions of the classroom space?
Not all instruction occurs within the classroom. Engagement with students also occurs outside the classroom, as during office hours. I have taught at colleges where students think nothing of sitting down next to me during my lunch break or in the quad to ask a question or to have a discussion. Professors ask students to complete homework; lead them on field trips; and require their attendance at concerts, performances, and other out-of-class functions. At the same time, faculty—even faculty who do not employ technology in the classroom—use digital tools to produce syllabi, to write up term paper assignments, and to calculate grades. Each of these activities is an extension of classroom activities.
Digital technologies can often legitimately expand the information ecology of the classroom space. Many students define the Internet as a library or archive. Research suggests that students often use the Internet as a "virtual study group."1 The chatroom functions of many course management systems allow faculty to hold virtual office hours, and I have found fewer and fewer students coming to my office hours. At the same time, an increasing number of students prefer to send e-mail when they need help.
Asynchronous threaded discussions are another way in which classroom activities can extend beyond the physical space. I have managed many independent study and senior research projects through regular e-mail contact with students, most of whom I saw face-to-face only once every few weeks.
"Technology in the classroom" need not refer to the tools that physically occupy the traditional classroom space. Indeed, technology can be used to effectively expand that space.
3. Why is this technology here?
Kathryn Henderson described the impact of computer graphics tools on the design engineering profession. The practices and habits of design engineers were forged in a "paper world" with its own drafting conventions. Adding computer graphics to this paper world proved a difficult transition for many of the engineers Henderson observed, and many of them simply avoided using the new tools. This was not because they were not capable of using the technology but because the drafting conventions enforced by computer-aided design were different from the conventions these engineers learned in their paper-world practices. In addition to having to learn how to use the new tool, the engineers had to learn a whole new domain of tacit knowledge. The engineers, noted Henderson, rarely chose to use digital graphics technology: someone else decided that these tools belonged in their workplaces.2
Henderson chronicled a common complaint heard in businesses, organizations, and higher education in the 1990s: the sudden appearance of new or upgraded technologies on a user’s desk, with or without the user’s consent or input, typically as the result of a decision made by the administration or the IT department.
In an unhealthy information ecology, little communication and feedback occur among the administrators who determine university technology strategy, those who decide on the purchase and service of new technologies, the professors who are asked to employ them, and the students who are required to use them. Indeed, each group may have competing reasons to employ technology and might even work at cross purposes. There are indications, for example, that students use the Internet in ways not envisioned by their teachers, such as for virtual study sessions or to "store" their work as they move from home to school. Many of the students surveyed in the Pew Internet and American Life research project said they wished that their teachers could use the technology in class in the same way they (the students) did at home.3
Universities should foster frequent feedback and communication among students, professors, administrators, and technical staff on technology acquisition and use. One way might be through a technology audit, a campus-wide assessment survey that would focus on a specific technology.4 The audit would ask all campus constituencies questions such as
- What is the primary purpose of this technology?
- How was this purpose agreed upon?
- How is this technology actually used, and how well does this use match the purpose?
- What are the benefits and risks of using this technology?
It makes little economic sense for the IT department to purchase new tools if professors do not use them (or under-use them). Neither is it practical for a few early adopters to request such tools to suit their own idiosyncratic needs. Rather, technology choices should be based on a broad consensus reached after discussion among all constituencies.
4. Does the technology add some demonstrable pedagogical value?
Replacing an old technology with a newer digital technology simply because it is new and digital is no longer a sufficient rationale—if it ever was—for universities struggling with tight technology budgets. Instead, the adopter must be able to articulate a legitimate pedagogical advantage to the new tool.
I have long used transparencies and an overhead projector to present visual evidence in my history classes. There is pedagogical value in students looking at and thinking about visual evidence, and the technology of the overhead projector enables that pedagogical strategy. Ideally, the students should examine the original images. Because this is rarely an option, students must look at reproductions. Digitized images displayed with PowerPoint or on a Web site offer much higher resolution than transparencies or Xeroxed paper copies. Replacing these lower-resolution methods with a computer and projector is thus not merely a technological "upgrade" decision but a pedagogical choice.
For example, in teaching a class on "visual thinking," I rely on online images to communicate the concepts. If my classroom has no digital technology, I cannot use the kinds of images I want and must settle instead for lower-resolution paper handouts. Display technologies, whether overhead or digital projectors, also allow me to point to specific areas of the image so that we might analyze it as a class. Substituting paper handouts for digital images lowers the quality of the experience and makes it harder—although not impossible—to achieve the pedagogical goals of the class.
It is perfectly acceptable not to use a new technology when an old technology works just as well. Face-to-face discussion is an excellent pedagogical technique, especially useful in discerning facial expressions and other nonverbal cues. If a professor’s pedagogy relies on such nonverbal communication, she may see little value in carrying out an asynchronous discussion through a course management system and may therefore choose not to adopt Blackboard or WebCT for this purpose. On the other hand, a professor might conclude that technologies that allow for asynchronous online discussion could prove very helpful to students who might not otherwise participate in a large group discussion.
One could make the argument that participation in online chatrooms is commonplace among students and therefore should be incorporated as part of classroom instruction. However, ubiquity of technology is an insufficient rationale for inclusion in a classroom. Telephones are a widely used technology, but this does not mean that professors are required to incorporate them into teaching (unless, of course, a compelling pedagogical reason can be articulated). Pedagogy, rather than technology, should drive the process of adoption.
5. Does the technology encourage authentic pedagogy?
Authentic pedagogy means that the activities professors ask students to engage in are similar to the activities carried out by practitioners in a field. Effective educational technology is authentic and appropriate for instruction when it facilitates authentic activities—when students use tools in the same way master practitioners use them. Students of biology should work appropriately with the technologies found in the laboratory, while theater majors should work with the technologies involved with stagecraft.
Digital tools can often serve authentic ends. For example, in my history classes I ask students to access primary source collections online, such as the digitized documents located at the Library of Congress, and analyze these documents in order to compose written histories. Accessing and analyzing such records is an activity all historians engage in, and having digital access to such records is an authentic activity for undergraduate history students. If master practitioners use digital technologies in the practice of their disciplines, those technologies should also be used in educating students.5
6. Does the technology promote "augmented" education?
Users of technology all too frequently employ it to disengage from other people and from the surrounding real world. We are all familiar with teenagers wearing headphones attached to portable CD players, drivers distracted by their cell phone conversations, and business people hunched over laptops in crowded restaurants, focused on their devices and oblivious to the people around them. In contrast, researchers have recently begun to explore the idea of "augmented reality," in which users of technology employ it to engage their physical surroundings and interact with other people.
We might think of the CD listener or cell phone user as engaged in a virtual world; that is, physically present in the real world but disengaged, being cognitively "present" in a virtual world of digital interfaces and information. Henry Jenkins observed that augmented reality "[heightens] our awareness of the real world by annotating it with information conveyed by mobile technologies."6
Augmented reality serves as a useful metaphor for thinking about technology in the classroom: the classroom should be viewed as a mixture of virtual elements and real physical elements. A completely virtual educational experience leaves out a great deal that is valuable to education. Although virtual education is here to stay, and most certainly fills a vital niche, face-to-face interactions in physical classrooms will likely remain the norm for education. Augmented reality can thus serve as a model for the thoughtful use of technology in the classroom.
Jenkins described a schoolchildren’s mystery game sponsored by Boston’s Museum of Science. The children and their parents were given "location aware" handheld devices, tools that deliver digital information depending on the user’s physical location. The teams used these devices and walkie-talkies to locate clues, piece together evidence, and solve the "mystery" of the location of a missing artifact. The children learned through play, engaging with their team members and the physical artifacts of the museum, in addition to the information served through the technology. "The handheld," observed Jenkins, "delivers a media-rich experience, enabling you to access photographs, sound files, and moving images that complement what you are seeing with your own eyes."7 Applications exist for self-guided tourism and, clearly, for classroom education.
Hybrid classes (those that combine classroom and online experiences) or traditional physical classroom spaces augmented with digital technology should reflect a balance among the physical world of people, the natural environment, and the virtual world of rich digital information. In a healthy classroom space, the virtual complements the real.
7. Will professors use the technology to aid students in the acquisition of knowledge, not just information?
John Seely Brown and Paul Duguid distinguished knowledge from information.8 Information is data, and digital technologies are certainly capable of distributing information. Knowledge, on the other hand, is usually associated with a specific human being, is very difficult to bundle and transport, and is typically associated with the knower’s appreciation of the meaning of the information. Two people can access the same information, but differences in their levels of knowledge shape the way they use and interpret that information. While digital technologies can certainly deliver content and information, only master practitioners can use technology to teach the skills and habits of mind of a discipline.
Luddite is a term often used indiscriminately to describe a technophobe or anyone who questions the value of technology. But the original Luddites were not simply anti-technology; they were weavers resisting the elimination of their jobs. Although Luddites are known for their physical attacks on technology, it was not the technology itself the Luddites rejected. Rather, the mechanical loom was a symbol of a new industrial system that made them redundant. Some technophobes in higher education see digital technology—all digital technologies—in a similar fashion. Digital technology seemingly can deliver instructional materials to any location at any time, making education an "always on" experience for students, one that some fear will one day no longer require the presence of teachers. Some professors have expressed the fear that once digital tools are permitted to overrun and eclipse the classroom space, the instructor will become redundant.9
Removing human beings from the classroom—allowing technology to overwhelm the space—is a recipe for an unhealthy information ecology. Nardi and O’Day contended that maintaining a healthy information ecology requires skilled people to support the use of technology. Even in the digital age, libraries will continue to require librarians—knowledgeable people who can help users navigate the labyrinth of information. Similarly, students will still require teachers—skillful people who can impart knowledge.
My history students are perfectly capable of accessing historical information themselves; my role as a history professor is to teach them how historians build knowledge from that information. As students access online primary sources, I walk around the room and help them with their reading and analysis. Occasionally I will sit down with them and work through a document, as in a master-apprentice relationship. Imparting information to students is instruction; helping them to develop knowledge is education. Technology is simply a tool that can support either function.
8. Does the technology appeal to different learning styles, allowing students to produce (not just consume) knowledge and information?
Twenty years ago, Howard Gardner made us all aware that human intelligence cannot be measured as a single quantity; instead, each human mind is a collection of multiple intelligences, each present to a varying degree in each individual.10 Educational theorists have since argued that in an effective educational setting, teachers must appeal to a wide variety of intelligences: some students are visual learners; others prefer more tactile experiences; still others need to hear information presented orally. All teachers should assume a "cognitive diversity" in any class they teach and present information accordingly.
Because digital technologies can provide access to sounds, images, moving pictures, colors, and text, they appear to be ideal tools for the cognitively diverse classroom. This is only one side of the equation, however. Advocates of the multiple intelligences approach often focus on the types of information students consume. That is, a thoughtful teacher should present multiple types of information in class in order to appeal to all of these learners.
Teachers also need to be aware of how technology enables diverse student performances—that is, the types of information students produce. It makes little sense to provide students with a wide variety of information, only to have them produce one kind of performance: a written essay or test. Effective educational technologies enable all students to produce visual, kinetic, and logical-mathematical performances. In addition to submitting written essays, for example, students could be asked to create concept maps or to prepare speeches delivered via videoconferencing, exercising their oral and aural skills. When thoughtfully used by faculty, digital technologies present many opportunities for students to produce information and knowledge that exercise all of their multiple intelligences.
9. Does the technology promote play or merely entertainment?
That student learning should be active and not passive has become a commonplace refrain among educators today. Some pedagogical strategies, however, are more active than others. Active learning is not just having students doing something other than listening to a lecture. Although it might appear that they are "active," students who simply look at interesting graphics or hear digitized sound—while perhaps being entertained—are not engaged in an active pedagogy. Having students generate those graphics or experiment with that sound, on the other hand, would be an example of technology promoting active play.
Digital technologies are often sold as beneficial for education because they "make learning fun." Although this argument is most often made about elementary and secondary education, we are beginning to hear it with regard to higher education as well. "Edutainment" is frequently defended as the chief advantage of technology in the classroom.
I am certainly not in favor of education being dull and stultifying, but neither am I a proponent of the "education should be entertaining" line of reasoning. We need to carefully distinguish between entertainment and play. Entertainment is largely a passive experience: watching others at play for our amusement. Thus, watching a video might appear on the surface to be active, but unless professors ask students to view and critique the video as they would a text, such an experience is in fact a form of passive entertainment. Play, on the other hand, is an active experience: it is a hands-on activity, involving creativity, participation, and experimentation. Watching a sporting event is entertainment; participating in a sport is play. Watching a video is entertainment; producing a video is a form of play. Technology should be used in a classroom space to encourage creativity and experimentation and not to keep students entertained for 50 minutes. The students at the museum enjoying their mystery game are learning through play. Adhering to authentic practices—as in having students "play at" being an historian or economist—is another way to ensure that students use technology for play, not just edutainment.
"Game-based learning" mediated by technology is an example of education as play.11 Simulations have long been a staple of business education, and such scenarios have become increasingly technologically mediated. "Some scholars have suggested," observed Don Marinelli and Randy Pausch, "that simulation-based scientific inquiry be considered a ‘third paradigm,’ alongside theoretical and empirical approaches."12 With game-based simulations, students can enact real-world decisions in a "safer" environment—their decisions have consequences only within the simulated world. Such games allow students to play at decisions they might face in the real world.
Simulations are authentic in that they can mirror the activities of the discipline in question; think how moot court allows law students to learn the requirements and habits of their discipline by "playing" at law. An increasing number of three-dimensional, virtual, computer-based simulation games ask students to behave and act as if they were in a real world. Think of SimCity-type games, where a player must make decisions about resource allocation, political alliances, and social legislation. When used in the classroom, such games allow students to learn about the complexities of urban life while engaged in play. "Rote learning of facts and figures," concluded Joel Foreman, "is less valuable than learning how to do things in the human world that students … must live in."13
10. Is it any good?
I recall clearly my first experience with inkjet printers. I was a teaching associate in 1987 at a time when most student essays were still composed on typewriters and mistakes fixed with correction fluid. A student submitted an essay printed with one of the first-generation inkjet printers. Unlike the other essays in my stack, this one looked like a published book—dark black ink and a provocative font on substantial paper. I was certain I was holding an A submission, so mesmerized was I by the technology that produced it. Certain, that is, until I actually read the essay. Like many of the others, it was poorly composed, lacking in detailed research, and not rising above the C grade I finally gave it.14
Had this student been in a design class, she might well have received an A, since the disciplinary requirements for an authentic performance in design are different from those in history. That is, the same product of technological use might result in different assessments. In history, the assessment criteria do not include the shape of the letters or the aesthetic qualities of the font: historians are much more concerned with the content of the words and how those words are used to shape an argument or to construct a narrative.
I have attended many technology and education conferences where the same question arises: "Is it any good?" That is, how do we know that the results of a student’s use of technology are any good? I cannot provide a single rule of thumb that governs the assessment of student uses of technology; I can only point to the specific requirements of each discipline.
Students do not use technology in a vacuum: except in some limited circumstances, simply demonstrating technical proficiency with a technology is rarely sufficient. For example, some of my students have submitted written papers with clip art additions that were irrelevant (and distracting) and never integrated into the written text. Other students (using the same techniques) have written papers on the history of art, adding scanned images that were analyzed in the body of the text. The latter use of the technology was much better than the former—better, at least, in the context of the discipline of history.
Each discipline establishes its own criteria for authentic performance, and technology use in education is likewise context specific: each discipline sets its own assessment criteria. This is what Nardi and O’Day meant when they described healthy information ecologies in terms of their "locality"—each discipline defines how "good" a student’s use of technology is.
Does this mean that students should use the tools only to the extent that they replicate traditional forms of performance? Take my history example: what if a student wanted to use technology and software to produce something other than a written paper (the standard performance for historians)? I had one student who used technology to produce a Web-based oral history and another who used PowerPoint to produce a museum-like display of images, analyzed with richly worded captions. Both were legitimate disciplinary performances, though they fell outside the traditional assignments typically given to history students. In fact, these students created something authentic that they might not have been able to do were the technology not available. A "good" use of technology derives from a disciplinary context, not from technical proficiency. If teachers find it difficult to have students use technology to produce "good" work, then the technology should not be adopted.
Technology and the Pedagogical Infrastructure
Digital technologies serve a variety of infrastructural functions in the modern university, from administration to communications to recreation. That digital technology is part of the space of the university cannot be denied. Nonetheless, universities do not attract students because of the presence of digital technology. In today’s environment, digital tools have become as indispensable and as invisible as indoor plumbing or electricity. Digital technology cannot be viewed as a value-added product in and of itself, but its absence—like the absence of electricity—could well discourage prospective students. Given the rise of ubiquitous computing, how and when should such technologies be placed within the physical and conceptual space of the classroom? What are the best strategies for making digital technology a part of a university’s pedagogical infrastructure?
As the assessment criteria in the 10 questions stress, education is a relationship between teacher and student. Technologies used effectively in education mediate this relationship. Any assessment of technology in the classroom must consider how these tools enhance, extend, and enable that core relationship between teacher and student.