An Electronic Calendar for Organizing Assessments in a Large Faculty

Key Takeaways

  • Using data from an electronic unit of study outline system, the University of Sydney's Faculty of Science built an assessment calendar to organize assessment schedules and help first-year students transition to university life.
  • The calendar helps students organize and understand their assessments using their preferred computers or mobile calendar applications.
  • The calendar also helps staff in a large faculty compare and rationalize assessment dates and ensure that a variety of assessment modes are used, irrespective of a student's course choices.

Students starting out at university must rapidly adjust to a new independent learning environment in which they are expected to be largely self-directed and self-reliant. In addition to often being in classes that are much larger than they've previously experienced, students have less direct contact with their teachers. To access help or advice, they might have to negotiate multiple student-support mechanisms rather than ask a single contact. Today's students also come from diverse backgrounds and have greater variations in preparedness and motivations for study than ever before. For some, the ability to be a self-directed and independent student can make the difference between success and failure while juggling university studies and part-time employment.1 All of these issues can be intensified in generalist degrees, where students have a wide variety of course choices and must shift from one subject to another, all while physically navigating their sometimes sprawling campuses throughout the day.

At the University of Sydney, our academic year begins with an orientation program for new students that introduces them to the university, the campus, and each other. This socialization (or transition) of students into the university setting2 also gives us the opportunity to help students understand university terminology and responsibilities and organize their studies. In the March 2013 issue of the Teaching@Sydney staff bulletin, a postgraduate student in physics reflected on his memories of the first few weeks of university, including the importance of building this socialization into class:

"One of the best methods a lecturer can employ to help a student ... is to encourage social interaction between their students. This point may be overlooked, as it seems that many academics believe the most productive thing for students to do in lectures is listen. Solitary study can lead to high dropout rates of first-year students since it does not provide them with an opportunity to develop a social network that will allow them to feel comfortable in the learning environment."

Several researchers have outlined and demonstrated the efficacy of "transition pedagogy" for cross-institutional integration, coordination, and coherence of first-year experience policy and practice.3 Curricular and co-curricular activities must be effectively managed for students to minimize overlap and maximize program coherence.4 Previously, we presented a first-year roadmap5 as a device for organizing the transition of students to the first year at university. Here, we present an electronic assessment calendar as a tool for helping staff and students organize and rationalize assessment timing and style in a large university. Our tool can also assist in aligning assessments with the goals of both individual units of study and the institution as a whole, as well as give students an easy-to-follow resource to use and personalize on their own devices and applications. We use our assessment calendar for face-to-face courses, but it would also be useful for distance or blended courses that require students to complete tasks according to a faculty-defined timeline.

Context of the Problem

Assessing student learning makes it possible to document and measure their knowledge, skills, and abilities. At our university, we use

  • formative assessment during each course to aid and scaffold learning, and
  • summative assessment at the end of a course or project.

Our formative assessments often lead to course grades — which motivates students to complete them — but they're primarily designed to provide feedback. In many universities, early assessment is recommended to give students prompt feedback on their performance and to identify those who might be at risk of not completing their degrees. Early assessments are often part of "institutional early warning systems"6 to target support in the first semester. Although such assessments do not count heavily towards the students' final grades, they do require (and are often designed to ensure) that students are organized and on-task soon after the socialization process ends.

In terms of learning activities, the style, timing, and emphasis placed on assessment probably have the most influence on how students approach their learning.7 How we assess students in the first semester, in particular, has the power to affect whether they will adopt a surface learning style — focusing on memorizing only the parts of the course on which they are likely to be assessed — or a deep learning style, in which they actively search for understanding.8 The types and timing of assessments in the first semester also influence students' organizational approach. At our university, we've put substantial effort into constructively aligning assessments with the course aims and objectives (learning outcomes), and thereby have purposefully strengthened the learning–assessment connection.

The University of Sydney's Faculty of Science administers five generalist degrees and enrolls approximately 1,000 students each year. The faculty is made up of eight schools and units and several research institutes, which, in the Australian context, are themselves large organizations and often have their own teaching approaches. Students from many other faculties, including a wide range of professional and generalist degree programs, enroll in our units. In the first year, students choose units in the enabling sciences, including mathematics, and can also take elective units from outside the faculty. In later years, more specialized science units are available in a wide variety of traditional, medical, and cross-disciplinary sciences. These arrangements are quite typical of Australia's research-intensive universities.

In our degrees, students take four distinct, concurrent units of study each semester. In a generalist degree, these units might cover complementary or distinct material and be taught and assessed in different ways. Typically, the units are taught by various semi-autonomous disciplines. Conversations among disciplines about curricula and student outcomes are often limited to administrative matters, with little attention to the degree's holistic shape or the student experience. Poorly timed and planned assessments, for example, can lead to wide variations in student load during the semester, with knock-on effects for achievement, stress, and even cases of plagiarism.

In addition to assessing the achievement of discipline-specific goals, universities have a clear role in equipping their students for the future. Most universities define a set of target attributes or qualities that their students will achieve by graduation. These overarching threshold outcomes include knowledge, skills, and qualities that are both discipline specific and generic; their content characterizes the individual institution's goals. In many countries, these outcomes are now being defined and measured externally. In Australia, the Tertiary Education Quality and Standards Agency (TEQSA) will require evidence of how institutions assess each of the nationally defined threshold learning outcomes. It is thus important that the learning outcomes for each unit of study align with and contribute to the degree-level statements and that each assessment builds into a portfolio of graduate attributes. It is also essential to assess the full range of attributes to ensure that, for example, communication skills are not continually assessed through isolated writing tasks that fail to build on each other.

Because students have a wide range of proficiencies when they begin university, institutions often emphasize the development and assessment of key academic skills in the first year. At our university, there are no core or introductory units of study in our main science degrees, so it is difficult to ensure adequate preparation for study at a higher level. Students can choose units of study from a range of fundamental sciences and mathematical topics as well as liberal arts electives. Unfortunately, research and information skills are often seen as "someone else's business."9 To address this, discipline-based interventions have been introduced that are individually effective but isolated. If students choose not to take these, however, the result can be gaps in their skills and ongoing issues for both staff and students in future years. This approach also makes duplication possible, leading to increased workload and frustration for students.

Generating Data

At the University of Sydney, the unit of study outline is the primary document that details a course's learning outcomes. It contains assessment descriptions, including the assessment type, marks available, and due date. The outline is the unit coordinator's responsibility and, according to the university's assessment policy, must be complete and made available on the first day of semester. Changes to assessments are allowed after this point only in exceptional circumstances.

To assist unit coordinators in constructing their unit outlines, the authors developed an electronic template.10 In addition to providing a common interface for students and ensuring compliance with institutional and government policies, the template ensures that each unit captures six elements required by a new institutional assessment policy (together, these elements form the simple record structure sketched after the list):

  • How assessments align with the unit's learning outcomes
  • How assessments align with the graduate attributes of the Faculty of Science and the university
  • Assessment descriptions
  • Assessment dates
  • Mark breakdowns
  • How students are being assessed
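
Taken together, these six elements amount to one small record per assessment item. The sketch below shows one way such a record might be modeled in Python; the field names and types are our illustration, not the actual system's schema, which is stored in Access database tables and is not published in this article.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    """One assessment entry in a unit of study outline.

    A hypothetical model of the six policy elements; names and types
    here are assumptions, not the university system's real schema.
    """
    title: str                # short assessment description
    description: str          # fuller description of the task
    assessment_type: str      # how students are assessed, e.g., "Quiz"
    marks: int                # mark breakdown for this item
    week: int                 # due date: week of semester
    day: str                  # due date: day of the week, e.g., "Sunday"
    learning_outcomes: list[int] = field(default_factory=list)    # unit outcomes assessed
    graduate_attributes: list[str] = field(default_factory=list)  # faculty/university attributes
```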

Our unit of study outline system makes compliance with all of these requirements as straightforward as possible. This standards-based policy is outcomes based, containing elements of criterion referencing and competence-based assessment11 for academic courses. The system measures student progress on the basis of demonstrated achievement of outcomes, which are clearly specified and transparent for students, faculty, and others (such as future employers). In addition to specifying how assessments align with the academic needs of the disciplines, the policy requires that assessments must be authentic experiences for students and aligned with the overarching goals of the faculty and university. The assessment policy is seen as a way to catalyze curriculum renewal; we developed our assessment calendar as a tool to inform and energize this renewal in the Faculty of Science.

The system is preprogrammed with the university and Faculty of Science graduate attributes; to use it across the entire campus would require only that the relevant faculty attributes be entered into the system. The system requires a web server and associated software, including Adobe ColdFusion and Microsoft Access. Depending on how much of the above information is already available, generating a new unit outline requires two to four hours of initial work by the coordinator or an assistant. Once set up, only a brief review (less than one hour) is required each time the unit is rerun.

One of us (Bridgeman) developed the system itself; although unit of study outline systems exist at other institutions, Bridgeman developed a new system to meet local requirements and reduce costs. The system consists of a series of forms that the coordinator completes to provide the required information. To the best of our knowledge, the assessment calendar component makes this information available to staff and students in this way for the first time. The forms in figure 1 summarize the data required for the assessment calendar. A simpler system could generate this information alone; ours also collects the additional information required by our unit outlines.

As we detail below, unit- and program-level coordinators and students can use the database generated from this information to map assessments across a single course or over any combination of units, such as those making up a semester or a major. This mapping could be useful, for example, for addressing government policies and accreditation requirements.

figure 1a

figure 1b
Figure 1. Template for unit coordinators to compile (a) assessment deadlines and alignments and (b) assessment descriptions and types

Figure 1 shows screenshots of the part of the unit outline template system used to input assessment information. In figure 1a, the coordinator inputs a title for the assessment item, the number of marks available, the time and date that the assessment is due, and the unit learning outcomes being assessed. Elsewhere in the template, these unit learning outcomes are mapped against the Faculty of Science graduate attributes so that the latter are also automatically linked to the relevant assessment items. Figure 1b shows the form that coordinators use to describe assessments in more detail, including the type of assessment (for example, a multiple-choice quiz, a formal examination, or an online assessment).

The coordinator uses a series of web forms such as these to complete the template. The data generated are stored in database tables for each unit of study. The resulting unit outline is embedded in the learning management system (LMS) or on web pages. At the University of Sydney, the LMS is Blackboard; our students use this system extensively and don't typically look for academic information outside of it. The open web version, however, is useful for providing information to prospective students and to former students seeking credit transfer information. (In our Blackboard implementation, we embed the unit of study outlines and calendars using iframes. The web forms that coordinators use to complete their unit of study outlines use a ColdFusion front end to populate Access databases.) Figure 2 shows a screenshot of the assessment tab of the unit outline corresponding to the information entered via the forms in figure 1 (see a live version on our site).

figure 2
Figure 2. Part of the assessment tab for the CHEM1102 unit outline

The assessment tab displays, as succinctly as possible, the key information that students need, including the assessment title, the date and time of the deadline, and the learning outcomes assessed. Clicking the links in the Learning Outcomes column opens an information window with a description of the learning outcome and how it aligns with the faculty graduate attributes. The tab also contains a brief description of the assessment and a link to the university assessment policy. (In our system, the information windows are Ajax lightboxes rather than pop-up windows, as lightboxes are cleaner and less likely to be accidentally blocked.)

As figure 1 shows, the coordinator can enter each assessment's due date as a single deadline or as a recurring event for activities, such as laboratory work, that occur repeatedly over the semester. The due dates are given in terms of their position in the semester rather than as definite dates. For example, the deadline for the "spectroscopy problem solving assignment" in figure 1 is week seven ("11.30pm on Sunday of the university week 7"). The software is preprogrammed with the start and end dates of the semester, so it can calculate the actual date of any assessment from just the day of the week and the week of the semester; the result is shown in figure 2 ("Sunday, 16 September 2012 at 11.30pm"). The actual date will obviously vary each year and semester, but the semester week for each assessment is usually the same. The next semester's unit outline thus needs minimal updating; the dates are simply recalculated.
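
Because "Sunday of university week 7" resolves mechanically against the semester start date, the calculation itself is a one-liner. A minimal sketch in Python (the real system is written in ColdFusion; the start date below is an assumption, chosen so the output reproduces the article's example):

```python
from datetime import date, timedelta

# Assumed Monday of university week 1 for semester 2, 2012 (hypothetical)
SEMESTER_START = date(2012, 7, 30)

DAY_OFFSET = {"Monday": 0, "Tuesday": 1, "Wednesday": 2, "Thursday": 3,
              "Friday": 4, "Saturday": 5, "Sunday": 6}

def assessment_date(week: int, day: str) -> date:
    """Resolve a deadline like 'Sunday of university week 7' to a calendar date."""
    return SEMESTER_START + timedelta(weeks=week - 1, days=DAY_OFFSET[day])

print(assessment_date(7, "Sunday"))  # 2012-09-16, as shown in figure 2
```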

Before publishing the unit outline, the coordinator must check the assessment information and sign off on it. This process seeks to ensure that the dates on the live unit outline (and the associated assessment calendar described below) are correct.

The Assessment Calendar

The data generated in this process can be used in several ways, including to map learning outcomes and graduate attributes across collections of units (see an example). Such applications probably have limited appeal for most students. However, students can use the assessment data to generate a calendar, which has proven extremely popular and has uses beyond simple convenience.

We embedded the assessment calendar application in an e-community site in the LMS used by all students enrolled in at least one Faculty of Science unit.12 The application can also be viewed and tested on the open web. This page includes a link to a step-by-step tutorial showing students how to use the calendar. The tutorial is also available on YouTube. If you follow the link to the calendar and watch the video, you can produce your own assessment calendar.

Figure 3 shows a screenshot listing the available first-year Faculty of Science units that a student might take during semester 1. Students can choose almost any mixture of these subjects and must select their particular combination in the application. As we describe later, staff use the same site when planning and rationalizing assessments and can likewise choose any combination of units.

figure 3
Figure 3. Menu for choosing units to populate assessment calendar

Figure 4a shows part of the assessment calendar generated for this combination of units. Each selection generates a unique URL, which users can bookmark if they wish. For the combination of units chosen in figure 3, the accompanying web page provides links to embed the assessment calendar in an iCal-compatible calendar. Initially, we also included a "print to PDF" option, but the documents generated were too long to be practical, and so we removed this feature. The calendar gives a week-by-week and day-by-day list of each assessment for the particular combination chosen. Figure 4b shows the information window that appears when users click one of the links. This window contains the assessment description and unit- and program-level learning outcomes information from the unit outline.
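
One way the unique, bookmarkable URL mentioned above could be constructed is to encode the unit selection as query parameters, so a given combination always yields the same link. A minimal sketch (the base URL, parameter name, and unit codes other than CHEM1102 are hypothetical, not the live system's):

```python
from urllib.parse import urlencode

# A hypothetical selection of first-year units
units = ["CHEM1102", "MATH1003", "BIOL1002", "PHYS1002"]

# Sorting makes the URL deterministic for a given combination
query = urlencode({"units": ",".join(sorted(units))})
calendar_url = f"https://example.edu/assessment-calendar?{query}"
print(calendar_url)
```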

figure 4a

figure 4b
Figure 4. (a) Assessment calendar for week 7 for the combination of units chosen in figure 3, and (b) the information window for one of the assessments

In addition to the calendar's web version in figure 4, the application generates an iCalendar-format file. This format is supported by many products, including mobile apps such as Google Calendar and Apple Calendar. Users can download the file in .ics format by clicking a URL; for the combination of units in figure 3, the iCalendar file is available on the web. The application also generates a Quick Response (QR) code using the Google chart tools service, so the link can be input via a QR reader on a tablet or smartphone. Figure 5 shows the QR code for the unit combination in figure 3.
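
Neither output requires specialized tooling: an iCalendar file is plain text organized into BEGIN/END blocks, and the Google chart tools service of the time returned a QR image for a URL-encoded link passed as a query parameter. A minimal sketch of both (the UID, event description, and calendar URL are hypothetical; the article does not show the application's actual code):

```python
from datetime import datetime
from urllib.parse import quote

def make_ics(events):
    """Build a minimal iCalendar body from (uid, title, due, description) tuples."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0",
             "PRODID:-//Faculty of Science//Assessment Calendar//EN"]
    for uid, title, due, desc in events:
        stamp = due.strftime("%Y%m%dT%H%M%S")
        lines += ["BEGIN:VEVENT",
                  f"UID:{uid}",
                  f"DTSTAMP:{stamp}",
                  f"DTSTART:{stamp}",
                  f"SUMMARY:{title}",
                  f"DESCRIPTION:{desc}",
                  "END:VEVENT"]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines) + "\r\n"  # iCalendar requires CRLF line endings

ics = make_ics([("chem1102-spectroscopy@example.edu",  # hypothetical UID
                 "Spectroscopy problem solving assignment",
                 datetime(2012, 9, 16, 23, 30),
                 "Assesses the unit learning outcomes for spectroscopy.")])
with open("assessments.ics", "w", newline="") as f:
    f.write(ics)

# QR code for a calendar link via the (since-deprecated) Google chart tools API
calendar_url = "https://example.edu/assessment-calendar?units=CHEM1102"  # hypothetical
qr_url = ("https://chart.googleapis.com/chart?cht=qr&chs=200x200&chl="
          + quote(calendar_url, safe=""))
print(qr_url)
```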

figure 5
Figure 5. QR code for the calendar generated using the unit selection in figure 3

Figure 6a shows an extract from the week 7 Google Calendar generated using this selection. The iCalendar file contains the same assessment description housed in the unit outline; figure 6b shows the information revealed by clicking on the calendar's link.

figure 6a

figure 6b
Figure 6. (a) Google Calendar for week 7 using the combination of units in figure 3, and (b) information stored within the calendar's link for one of the assessments

Examples and Experiences

We designed the calendar primarily to assist students in planning. However, it soon became apparent that the tool was also useful for giving faculty a holistic view of the program and for driving reviews of the curriculum, in particular the timing and nature of assessments. Given our degree program's complexity and flexibility, this data-driven approach enables a transparent, student-centered view of the semester, possibly for the first time.

Figure 7 shows an example of the assessment summary, which highlights two issues:

  • First, it becomes clear straightaway which weeks of the semester are crowded with assessments, throwing light on student complaints and stress points.
  • Second, it is also immediately obvious just how reliant our degree is on end-of-semester examinations.

Of course, such observations do not solve the issues; it is up to the disciplines to use the data and, perhaps, to compromise on the timing and weighting of their assessments to enhance the overall student experience. This is beginning to happen, at least for the first-year units. For example, the calendar has been used to shift assessments in two large semester 1 units to earlier weeks.

figure 7a

figure 7b
Figure 7. Summary of the assessments by (a) weeks and (b) types for the combination of units in figure 3. In (a), the different shades of gray correspond to the units of study chosen by the user.

The assessment type summaries, such as the one in figure 7b, have also focused our attention on previously suspected issues, including a reliance on multiple-choice quizzes (delivered online or in class), heavy summative examinations in our first-year courses, and an overemphasis on formal reports in later years. Again, it is useful for the faculty's learning and teaching leaders to have such evidence for their curriculum reviews. Currently, for example, faculty are reviewing the use of multiple-choice quizzes in many first-year courses and, in one first-year unit, the timing and length of essays. In faculty-level discussions, graphs such as those presented in figure 7 are now being used to highlight these issues.
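
Summaries like those in figure 7 reduce to counting assessment records by week and by type across the chosen units. A minimal sketch over hypothetical records (the live system queries its database; the records here are illustrative only):

```python
from collections import Counter

# Hypothetical records in the shape the unit outline database might export
assessments = [
    {"unit": "CHEM1102", "week": 7,  "type": "Assignment"},
    {"unit": "CHEM1102", "week": 13, "type": "Exam"},
    {"unit": "MATH1003", "week": 7,  "type": "Quiz"},
    {"unit": "BIOL1002", "week": 13, "type": "Exam"},
]

by_week = Counter(a["week"] for a in assessments)  # crowded weeks (figure 7a)
by_type = Counter(a["type"] for a in assessments)  # reliance on particular modes (figure 7b)

print(by_week.most_common())  # e.g., [(7, 2), (13, 2)]
print(by_type.most_common())  # e.g., [('Exam', 2), ('Assignment', 1), ('Quiz', 1)]
```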

Calendar Use

Figure 8 shows week-by-week assessment calendar use for semester 2 in 2012. (Unfortunately, semester 1 data could not be extracted from the LMS.) Overall, 730 hits were recorded on the site in semester 2, with approximately 600 of these being unique page views. This represents approximately one-third of the students enrolled in the e-community site. Compared to semester 1, where the assessment calendar was advertised in workshops and lectures, relatively little publicity was used to support engagement in semester 2. As might be expected, the most hits were recorded during orientation ("O-week") and week 1. After this period, the number of hits declined except for peaks in the mid-semester break, in week 11 (when the examination timetable was released), and at the start of the formal examination period.

figure 8
Figure 8. Number of hits on the assessment calendar per week for semester 2, 2012

Strategic Implications

Using any combination of units in our assessment calendar system, students can construct a personal diary of their assessments that includes descriptions of their nature and tags that indicate the learning outcomes and graduate attributes being developed. The calendar can be stored as a web link or downloaded as an iCalendar file for loading into a compatible calendar app on a computer or mobile device.

Although primarily conceived as a student-centered tool, the application lets faculty and their staff compare assessment information, including learning outcomes, dates, and types, directly, easily, and accurately. In large institutions and in generalist degrees, the objectives of each discipline can easily become siloed. Breaking down these silos requires a willingness to share information and make compromises; the shared information can in turn guide decisions and strategic directions for assessment reviews (and hence curricula) and help deliver faculty and institutional outcomes and goals. Aligning assessments with the curriculum ensures that knowledge of the former can inform and drive change in the latter.

At our institution, we can now, for the first time, easily obtain data on assessment types and dates and map this information across a wide combination of units and programs. There are many ways in which this information might inform debate and, perhaps, catalyze renewal at the program rather than the discipline level. Most straightforwardly, it can be used to rationalize assessment dates to avoid crunch points when possible, and to assist in delivering objectives such as the timetabling of early assessments for all students in a generalist program. We have already begun work on both of these issues, with assessments in first-year units being moved to meet program-level, rather than solely discipline-level, objectives and preferences. One of the first-year coordinators in the faculty, commenting on an "Assessments by Week" chart similar to that in figure 7a, said,

"This makes it so clear what we are doing to students. A student will look at this and think 'I don't need to do any work until the exam period.' This should shake us up to act. We are encouraging very poor study skills in our students."

Faculty can also use the tool to ensure that students are assessed in appropriate ways and using a variety of modes. As the graphs in figure 7 reveal, our first-year degree programs typically rely on many "traditional" end-of-semester examinations. These exams assess disciplinary knowledge and might not reflect or reward the development of the attributes that we desire in our graduates. Further, the system can provide information on the ways in which generic skills are developed. For example, being able to communicate scientific results in various ways is very important for a science graduate. However, formal written reports might be the norm in many units, leaving other forms of communication undeveloped. Similarly, the system lets faculty identify overlap and repetition in skill development. In our first-year units, for example, we have identified academic honesty and the ability to identify scholarly resources as key academic skills that each student must develop.13 Using the mapping system, we were able to see that each student is assessed on these skills, but only once.

One of our key faculty goals is to develop graduate attributes and to assess these in capstone experiences in the final year of study. In order to do this, these attributes must be developed throughout the degree program irrespective of the pathway that a student takes. The academic skills development (outlined above) in the first year is the beginning of our attempts to do this. With a model for shared responsibility across different disciplines in place, we now plan to widen the number and types of skills developed in the first year and beyond. In our initial steps, we have sought to identify and remove overlap and repetition of the development of basic skills. It will be interesting to see how this model changes when the required skill level increases and whether we will seek instead to ensure repetition in different contexts.

Notes
  1. Karen J. Nelson, Sally M. Kift, Julia K. Humphreys, and Wendy E. Harper, "A Blueprint for Enhanced Transition: Taking an Holistic Approach to Managing Student Transition into a Large University," Proceedings of the First Year in Higher Education Conference, Queensland University of Technology, 2006; Geoffrey Crisp et al., "First Year Student Expectations: Results from a University-Wide Student Survey," Journal of University Teaching and Learning Practice, vol. 6, no. 1, 2009, pp. 11–26; and Kerri-Lee Krause, Robyn Hartley, Richard James, and Craig McInnis, "The First-Year Experience in Australian Universities: Findings from a Decade of National Studies," DEST, Canberra, Australia, 2005.
  2. Nelson et al., "A Blueprint for Enhanced Transition."
  3. Sally M. Kift, "Organising First Year Engagement around Learning: Formal and Informal Curriculum Intervention," Proceedings of the 8th Pacific Rim First Year in Higher Education Conference, Melbourne, Australia, July 2004; Sally M. Kift, "The Next, Great First Year Challenge: Sustaining, Coordinating and Embedding Coherent Institution-Wide Approaches to Enact the FYE as 'Everybody's Business,'" keynote address, 11th Pacific Rim First Year in Higher Education Conference, Hobart, Australia, 2008; Sally M. Kift, "A Transition Pedagogy for First Year Curriculum Design and Renewal," Proceedings of the FYE Curriculum Design Symposium, Queensland University of Technology, Brisbane, Australia, 2009; and Sally M. Kift, Karen Nelson, and John Clarke, "Transition Pedagogy: A Third Generation Approach to FYE—A Case Study of Policy and Practice for the Higher Education Sector," FYE International Journal, vol. 1, no. 1, 2010, pp. 1–20.
  4. Kift, Nelson, and Clarke, "Transition Pedagogy."
  5. Michael Arndell, Adam J. Bridgeman, Rebecca Goldsworthy, Charlotte E. Taylor, and Vicky Tzioumis, "Code for Success: A Roadmap as an Organising Device for the Transition of First Year Science Students and the Development of Academic Skills," Proceedings of the Australian Conference on Science and Mathematics Education, University of Sydney, 2012, pp. 79–86.
  6. Vincent Tinto, Student Success and the Building of Involving Educational Communities, Higher Education Monograph Series, Syracuse University, 2005.
  7. John Biggs, Aligning Teaching and Assessment to Curriculum Objectives, Imaginative Curriculum Project, LTSN Generic Centre, 2003; Noel Entwistle, "Contrasting Perspectives on Learning," The Experience of Learning: Implications for Teaching and Studying in Higher Education, 3rd ed., University of Edinburgh, Centre for Teaching, Learning and Assessment, 1984, pp. 3–22; and Paul Ramsden, Learning to Teach in Higher Education, Routledge, 1992.
  8. Ramsden, Learning to Teach in Higher Education.
  9. Michael Arndell, Adam J. Bridgeman, Rebecca Goldsworthy, Charlotte E. Taylor, and Vicky Tzioumis, "First Year Science: When Information Skills Are Someone Else's Business," Proceedings of the ALIA Biennial Conference, 2012.
  10. We will discuss the template in detail in a separate paper (to be published).
  11. Catherine A. Palomba and Trudy W. Banta, Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education, Jossey-Bass, 1999; Lion F. Gardiner, Caitlin Anderson, and Barbara L. Cambridge, Learning Through Assessment: A Resource Guide for Higher Education, American Association for Higher Education Assessment Forum, 1997; and Alison Wolf, "Competence-Based Assessment," Competence in the Learning Society, Peter Lang, 2001, pp. 453–466.
  12. Arndell et al., "Code for Success."
  13. Arndell et al., "First Year Science."