The New Economy, Technology, and Learning Outcomes Assessment

Viewpoint
Answering calls for change entails meaningful assessments of technology-enabled learning

According to Claudia Goldin, the "new economy" at the beginning of the 20th century was driven by such phenomena as greater use of science by industry; the proliferation of academic disciplines; the diffusion of a series of critical inventions (including small electric motors, the internal combustion engine, the airplane, and chemical processes); the rise of big business; and the growth of retailing.1 Progress for industrial nations depended on educating more people at secondary and postsecondary levels. The United States established an education system that produced more educated citizens and workers, enabled geographic and economic mobility, resulted in large decreases in inequality of economic outcomes, and may have increased technological change and productivity (though that is harder to prove, she wrote). It was largely a decentralized, forgiving education system that—in the context of the day—was highly successful. Today, however, more than one hundred years later, economic and social drivers are quite different, calling into question some of the assumptions that underlie our institutions of higher education.

The "new economy" of the 21st century is driven in large measure by unprecedented advances in transportation and in computing, information, and communications technologies. To be competitive, industrialized and developing nations alike are driven by needs such as greater use of science and new technologies by average citizens; more interdisciplinary work; greater understanding of highly complex, interacting systems; new and renewed efforts at building community and solving local challenges in the face of globalization and massification; and a substantial rethinking of retailing, services, and business in general as a result of changing tools, physical possibilities, and financial opportunities.

In The Singularity Is Near, Ray Kurzweil proposed that the exponential rates of technological change in modern times offer possibilities for gestalt shifts in the way we approach many challenges.2 For such shifts to occur in today's new economy, time-honored content and emerging ideas will be joined in innovative ways with old and new technologies to benefit modern society's needs. In fresh approaches to teaching and learning, deciding what students need to know and should be able to do—in the context of a changing panoply of computing, information, and communications technologies—is a critical first step.

Next come rigorous assessments that demonstrate the manner and degree to which learning takes place. More important, these assessments must evaluate information literacies, technology fluencies, and content competencies together, not as separate remnants of last century's economic and social imperatives.

Public Discourse

Not surprisingly, reports continue to call for increased accountability in higher education. Outcomes are a major theme. Using phrases that will likely resonate with various audiences, the recommendations often challenge higher education to "measure up" and "make the grade" because "what gets measured gets our attention, gets funding, and gets taught."3 But what do such familiar phrases really mean? What ought we measure? What are good measures, and what might they help us accomplish over time? Many observers describe the 21st century as a complex age with new demands for education and new requirements for accountability in teaching and learning to meet society's needs in a new, global economy. At the same time, innovations in teaching and learning and proposals for measuring them often seem disconnected from public and political discourse.

Good assessment of learning outcomes is not a simple task. Asking the right questions is critical to measuring goals in the short term, not to mention assessing outcomes that will continue to develop over a lifetime.4 Modern, rapidly changing technologies and their relationships to contemporary learning imperatives pose such wide-ranging possibilities for assessment that we can responsibly do no less than explore seriously and systematically what we think students need to know in this millennium's technology-enabled learning environments.

Proposing Directions

In 1999, the National Research Council (NRC) published results of a two-year study of information technology literacy. The National Science Foundation (NSF) had requested the study because the ubiquity of computing, information, and communications technologies in modern life called for better articulation of what everyone needs to know to be productive citizens. Entitled "Being Fluent with Information Technology," the report acknowledged tendencies to focus on skills when approaching technology literacy.5 The report explained that literacy today requires a complement of knowledge and related abilities to be fluent in information technology (FIT). According to the report, FITness is a long-term process of self-expression, reformulation, and synthesis of knowledge in three realms:

  • "Contemporary skills, the ability to use today's computer applications, enable people to apply information technology immediately...are an essential component of job readiness...[and] provide...practical experience on which to build new competence.
  • Foundational concepts, the principles and ideas of computers, networks, and information, underpin the technology...explain the how and why of information technology...give insight into its limitations and opportunities...[and] are the raw material for understanding new information technology as it evolves.
  • Intellectual capabilities, the ability to apply information technology in complex and sustained situations, encapsulate higher-level thinking in the context of information technology...empowers people to manipulate media to their advantage and to handle unintended and unexpected problems when they arise...[and] foster more abstract thinking about information and its manipulation."6

The report offers an intellectual framework that can help distinguish between achievements (those of a particular time) and learning outcomes (results over time) when assessing what competencies students need to have. The proposed framework might also help differentiate among research (of teaching and learning theories), evaluation (of learning programs and processes), and assessment (of learning outcomes) as scholars and their audiences seek to show who and what measure up or make the grade. Although the specific skills for each area will change with the technology, the concepts are rooted in the basic information and abilities required to function in technology-enabled environments.

In the early days of information technologies, assessments of technology fluency tended to focus on contemporary skills, testing students' ability to use applications ranging from word processing to spreadsheets to search engines. Creating Web sites and PowerPoint slides figured in many early training and skills assessment efforts; using podcasts, wikis, and blogs surfaced more recently.

As time passed, more studies focused on what technology could and could not do. Arguably, these later evaluation strains have attempted to sort through the arena of foundational concepts, identifying important principles and ideas and explaining assets and liabilities associated with using various technologies. For example, it is important to know what is gained and lost when using digital technology to display images—knowledge that is critical to presenting, storing, and retrieving high-quality images. It is also useful to know when and how using simulations, games, podcasts, wikis, and blogs might benefit learning and when they might not. Such studies might describe the strengths and weaknesses of a technology used in instruction, evaluate whether a technology or system does what it is designed to do, and determine the efficiency and effectiveness of the system.

Other studies compare learning outcomes of students in courses with and without technology. Obviously, trying to compare assessments of learning outcomes in an online class, for example, to those from a face-to-face lecture-style class raises questions that go beyond describing limitations and opportunities associated with specific technologies in particular settings. Although it might seem that such studies can determine which circumstances produce better learning, in fact they do not. This misconception is perhaps at the root of legislation before the U.S. Congress to compare distance learning to classroom-based instruction.7

Teaching and learning online are technically different from teaching and learning face-to-face. Each approach involves distinctly different delivery systems, and, according to Lockee, Moore, and Burton, "comparing assessment scores from different learning systems is a serious, but common, error."8 A better approach might be to investigate effective teaching strategies, a category of knowledge that falls more into the foundational concepts arena. Studies might be designed to give fuller descriptions of the liabilities and assets of different technologies and, by extension, approaches to teaching and learning with and without technology to identify effective approaches.

The NRC report suggests new goals for instruction today that involve the educated use of modern information technology. It places intellectual capabilities at the top of a list of what students need to be FIT. The report says that students should be able to "engage in sustained reasoning; manage complexity; test a solution; manage problems in faulty solutions; organize and navigate information structures and evaluate information; collaborate; communicate to other audiences; expect the unexpected; anticipate changing technologies; and think about information technology abstractly."9

Many of the broad goals for intellectual capabilities related to information technology fluency apply across other content domains. To use domain-specific digital information in beneficial ways, students must simultaneously demonstrate FITness and information literacy related to domain competencies. To determine whether students have acquired the intellectual capabilities for FITness in the context of other technology-enabled domains, one needs, for example, to ask what achievements in sustained reasoning look like while considering what kinds of technological fluency might be brought to bear to demonstrate sustained reasoning in that domain. In this interdisciplinary iteration of FITness, content-specific information and technology tools are obviously joined. They come together as interacting variables in the same teaching and learning plane, and students must have information literacy in a domain and be FIT to use information technology effectively.10

In turn, one must set goals aimed at such achievements and design assessments to measure student success in realizing these goals. Demonstrating fluency in information technology and domain competence at the same time will be no small task for higher education. Assessment of student learning outcomes across content domains that ties directly to predetermined goals—with or without technology—is not common practice in the academy today.

Offering Frameworks for Action

In its proposed requirements for FITness, the NRC report may also have provided a useful framework for separating assessments of technology-enabled teaching and learning according to the three general categories identified. Using "contemporary skills," "foundational concepts," and "intellectual capabilities" as broad rubrics may help differentiate types of studies and may help sort through the myths and realities—both political and educational—of technology-enabled teaching and learning efforts. It might also lead to a recognized, potentially ordered accumulation of differentiated practices that benefit learning outcomes in technology-enabled environments over time.

In the face of so many calls for accountability for learning outcomes, much of which involves appropriately using domain-specific content that resides in digital media and that uses digital media as an explanatory tool, higher education will be asked to provide evidence of student achievement that is more direct than grades or seat time, for example. A critical starting point for such measurement is acknowledging the need to link questions of technology fluency with questions of domain competence.

The public discourse suggests that the time has come to show—more transparently than ever before—that our approaches to teaching result in the kinds of learning we have identified as important for students today. Aligning our learning outcomes assessments with the multitude of creative, technology-enabled faculty and student activities already under way is a significant step toward understanding the progress we have made in higher education to this point, and it might also provide useful pointers to progress in a new economy.

Endnotes

1. C. Goldin, "The Human Capital Century and American Leadership: Virtues of the Past," The Journal of Economic History, Vol. 61, No. 2, 2001, <http://kuznets.fas.harvard.edu/~goldin/papers/humancap.pdf> (accessed April 17, 2007), pp. 263–292.
2. R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Press, 2005), <http://singularity.com/>.
3. The National Center for Public Policy and Higher Education, "Measuring Up 2006: The National Report Card on Higher Education," September 2006, <http://measuringup.highereducation.org/> (accessed April 17, 2007); U.S. Department of Education, "A Test of Leadership: Charting the Future of U.S. Higher Education," September 2006, <http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf> (accessed May 22, 2007); The National Center for Public Policy and Higher Education, "Setting a Public Agenda for Higher Education in the States: Lessons Learned from the National Collaborative for Higher Education Policy," December 2006, <http://www.highereducation.org/reports/public_agenda/> (accessed April 17, 2007); P. T. Ewell, Making the Grade: How Boards Can Ensure Academic Quality (Washington, D.C.: Association of Governing Boards, 2006); the entire issue of Change, Vol. 39, No. 1, 2007, is concerned with the discourse on assessment.
4. R. S. Shavelson, "Assessing Student Learning Responsibly: From History to an Audacious Proposal," Change, Vol. 39, No. 1, 2007, pp. 26–33.
5. National Research Council, Being Fluent with Information Technology (Washington, D.C.: National Academy Press, 1999).
6. Ibid., pp. 1–5.
7. "Legislation Offered To Study Distance Learning," Chronicle of Higher Education, January 16, 2007, <http://chronicle.com/wiredcampus/article/1811/legislation-offered-to-study-distance-education/> (accessed April 17, 2007).
8. B. Lockee, M. Moore, and J. Burton, "Measuring Success: Evaluation Strategies for Distance Education," EDUCAUSE Quarterly, Vol. 25, No. 1, 2002, pp. 20–26, <http://www.educause.edu/ir/library/pdf/eqm0213.pdf>.
9. National Research Council, op. cit., p. 4.
10. Ibid., pp. 48–49.
Anne H. Moore ([email protected]) is Associate Vice President, Learning Technologies, at Virginia Tech in Blacksburg, Virginia.