
Peg,

 

We use the IDEA system for all end-of-course student evaluations, and the IDEA Center has done research on student ratings in traditional vs. online courses. It is specific to the questions on the IDEA student evaluation, but they did find that student ratings for online courses are similar to those for traditional face-to-face courses. The research report is here, and it’s worth reading even if your institution doesn’t use IDEA: http://www.theideacenter.org/sites/default/files/Research%20Report%207.pdf.

 

Elizabeth Smith

=======================

Associate Director, Center for Faculty Development

University of St. Thomas, Minnesota

Comments

Hi

I'd recommend reviewing the work of Professor Sid Nair.  http://www.uwa.edu.au/people/sid.nair

With a strong culture of student engagement and commitment to improvement, online surveys are highly effective. Students must see that evaluations make a difference to practice!

Cheers

Angela

James Cook University 
Australia 



On 05/06/2013, at 11:29 PM, "Wherry, Peg" <margaret.wherry@MONTANA.EDU> wrote:

Thanks for your reply. I’m familiar with the problem of response rate, because we recently adopted Class Climate as our end-of-course survey tool. And I know a low response rate may just catch the extremes. But my question really is, setting aside response RATE, this department believes that the actual overall score or rating the instructor receives is lower for online courses than for f2f courses, BECAUSE the course is online. You can see how this can feed a cycle of faculty perception that teaching online is difficult, ineffective and unrewarding. But my gut says their perception is not universally true. So I guess I’m revealing more about my reason for asking the question, which may in turn color the responses from the listserv, but I do want to be clear about the nature of my question. Thanks! - peg

 

From: The EDUCAUSE Blended and Online Learning Constituent Group Listserv [mailto:BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU] On Behalf Of Carol Sabbar
Sent: Wednesday, June 05, 2013 7:23 AM
To: BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU
Subject: Re: [BLEND-ONLINE] an online course question

 

Peg,

Is the comparison in your question really a comparison of the COURSE delivery method (on-line vs. in the classroom), or is it really a comparison of the EVALUATION delivery method?  Here is my theory, though it is based on logic and only partially on experience:

 

If the course is taught face-to-face, we know that paper evaluations have a much higher return rate than do on-line evaluations because, simply, you have direct access to everyone and can make them fill out the paper.  More on that below.

 

If the course is taught on-line, it would make sense that on-line evaluations should have a HIGHER return rate than in a face-to-face class because the paradigm of the evaluation matches the delivery of the course.  That is, the students are accustomed to being on-line for their on-line course, so they are more likely to do the evaluation as a regular and usual part of the course - like "one last assignment."  Students in a face-to-face course may never need to go on-line for their course, so doing so to fill out the evaluation becomes an unfamiliar obstacle.  Does that make sense?

 

We at Carthage implemented on-line course evaluations about 2.5 years ago, and the return rate for those is much lower than when the students are forced to sit and fill out the paper in class.  That said, there is strong evidence that the instructor has the ability to achieve ANY level of return rate that he/she would like.  I know instructors who get a return rate of 95% or more every semester, and they very purposely make that happen.  Others admit that they don't give any value to evaluations and, not surprisingly, they get return rates of 20% or less.

 

Our last observation is that, when you have a return rate of 40% or less, you only get the extremes.  Only the students who are extremely happy or extremely unhappy will take the time to express their opinion.

 

I hope that helps, and let me know if I'm answering the right question.

 

******

Register NOW for Carthage's virtual "course-ference" Forecasting Next Generation Libraries.  It goes on-line on July 1 and represents a whole new paradigm that combines on-line learning and virtual conferencing.  Nationally recognized experts, collegial discussion, and only $25 for a team of two people.  http://nextgenlibraries.org 


Thank you, Scott. This is exactly the kind of thing I was looking for! It’s very valuable. - peg

 

From: The EDUCAUSE Blended and Online Learning Constituent Group Listserv [mailto:BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU] On Behalf Of Scott Robison
Sent: Wednesday, June 05, 2013 7:57 AM
To: BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU
Subject: Re: [BLEND-ONLINE] an online course question

 

Here’s a bit of interesting research…

 

 

From: CESNET-L is a unmoderated listserv concerning counselor ed. & supervision [mailto:CESNET-L@LISTSERV.KENT.EDU] On Behalf Of Russell Sabella
Sent: Monday, April 15, 2013 1:52 PM
To: CESNET-L@LISTSERV.KENT.EDU
Subject: Answers to Faculty Concerns About Online Versus In-class Administration of Student Ratings of Instruction (SRI)

 

Source: http://derekbruff.org/blogs/tomprof/2013/04/11/tp-msg-1245-universities-hire-rankings-pros/

 

The posting below compares online student ratings of instructors with in-class ratings. It is from Chapter 7: Online Ratings, in the book Student Ratings of Instruction: A Practical Approach to Designing, Operating, and Reporting, by Nira Hativa, foreword by Michael Theall and Jennifer Franklin. Information about the book and pricing is at https://www.createspace.com/4065544 or at amazon.com, ISBN 978-1481054331. Copyright © by Oron Publications. All rights reserved. Reprinted with permission.

 

Answers to Faculty Concerns About Online Versus In-class Administration of Student Ratings of Instruction (SRI)

 

Many faculty members express reservations about online SRIs. To increase their motivation and cooperation, it is essential to understand the underlying reasons for their resistance and to provide them with good answers that counter their reservations and defuse their concerns. The following are research-based answers to four major faculty concerns about online SRIs.

 

Concern 1: The online method leads to a lower response rate [which may have some negative consequences for faculty].

 

Participation in online ratings is voluntary and requires students to be motivated to invest time and effort in completing the forms. Faculty are concerned that these conditions will produce a lower response rate, which may reduce the reliability and validity of the ratings and may have negative consequences for them.

 

The majority of studies on this issue found that, indeed, online ratings produce a lower response rate than in-class ratings (Avery, Bryant, Mathios, Kang, & Bell, 2006; Benton, Webster, Gross, & Pallett, 2010; IDEA, 2011; Nulty, 2008). Explanations are that in-class surveys are based on a captive audience; moreover, students in class are encouraged to participate by the mere presence of the instructor, his/her expressed pressure to respond, and peer pressure. In contrast, in online ratings, students lack motivation or compulsion to complete the forms, or they may experience inconvenience and technical problems (Sorenson & Johnson, 2003).

 

Concern 2: Dissatisfied/less successful students participate in the online method at a higher rate than other students.

 

Faculty are concerned that students who are unsuccessful, dissatisfied, or disengaged may be particularly motivated to participate in online ratings in order to rate their teachers low, blaming them for their own failure, disengagement, or dissatisfaction. Consequently, students with low opinions about the instructor will participate in online ratings at a substantially higher rate than more satisfied students.

 

If this concern is correct, then the majority of respondents in online surveys will rate the instructor and the course low, and consequently, the rating distribution will be skewed towards the lower end of the rating scale. However, there is robust research evidence to the contrary (for both methods, on paper and online); that is, the distribution of student ratings on the Overall Teaching item is strongly skewed towards the higher end of the scale.

 

Online score distributions have the same shape as the paper distributions: a long tail at the low end of the scale and a peak at the high end. In other words, unhappy students do not appear to be more likely to complete the online ratings than they were to complete paper ratings (Linse, 2012).

 

The strong evidence that the majority of instructors are rated above the mean of the rating scale indicates that the majority of participants in online ratings are the more satisfied students, refuting faculty concerns about a negative response bias. Indeed, substantial research evidence shows that the better students, those with higher cumulative GPAs or higher SAT scores, are more likely to complete online SRI forms than less successful students (Adams & Umbach, 2012; Avery et al., 2006; Layne, DeCristoforo, & McGinty, 1999; Porter & Umbach, 2006; Sorenson & Reiner, 2003).

 

The author examined this issue at her university for all undergraduate courses in two large schools: Engineering and Humanities (Hativa, Many, & Dayagi, 2010). The number of participating courses was 110 and 230, respectively, for the two schools. At the beginning of the semester, all students in each of the schools were sorted into four GPA levels. The lowest 20% of GPAs in a school formed the Poor group, and the highest 20% the Excellent group. The two intermediate GPA levels formed, respectively, the Fair and Good groups, with 30% of the students in each. The response rates for the Poor, Fair, Good, and Excellent groups were, respectively, 35%, 43%, 43%, and 50% in the School of Humanities and 48%, 60%, 66%, and 72% in the School of Engineering.
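
For anyone who wants to run the same kind of breakdown on their own evaluation data, here is a minimal sketch of the grouping logic described above (my illustration, not code from the study; the column names "gpa" and "responded" and the made-up data are assumptions):

# Illustrative sketch only (not from the study): sort students into GPA groups --
# Poor (bottom 20%), Fair and Good (middle 30% each), Excellent (top 20%) --
# and compute the SRI response rate within each group.
import numpy as np
import pandas as pd

def response_rate_by_gpa_group(students: pd.DataFrame) -> pd.Series:
    """students needs a numeric 'gpa' column and a boolean 'responded' column."""
    groups = pd.qcut(
        students["gpa"],
        q=[0.0, 0.20, 0.50, 0.80, 1.0],
        labels=["Poor", "Fair", "Good", "Excellent"],
    )
    # The mean of a boolean column is the fraction of students who responded.
    return students.groupby(groups, observed=True)["responded"].mean()

# Example with made-up data, assuming (as the study reports) that
# higher-GPA students respond more often:
rng = np.random.default_rng(0)
gpa = rng.uniform(2.0, 4.0, size=1000)
responded = rng.random(1000) < (0.3 + 0.2 * (gpa - 2.0))
print(response_rate_by_gpa_group(pd.DataFrame({"gpa": gpa, "responded": responded})))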

 

In sum, this faculty concern is refuted and even reversed: the higher the GPA, the higher the response rate in the online method, so the least successful students appear to participate in online ratings at a lower rate than better students.

 

Concern 3: The lower response rate (as in Concern 1) and the higher participation rate of dissatisfied students in online administration (as in Concern 2) will result in lower instructor ratings, as compared with in-class administration.

 

Faculty members are concerned that if the response rate is low (e.g., less than 40%, as happens frequently in online ratings), the majority of respondents may be students with a low opinion of the course and the teacher, lowering the "true" mean rating of the instructor.

 

Research findings on differences in average rating scores between the two methods of survey delivery are inconsistent. Several studies found no significant differences (Avery et al., 2006; Benton et al., 2010; IDEA, 2011; Linse, 2010; Venette, Sellnow, & McIntyre, 2010). Other studies found that ratings were consistently lower online than on paper, but that the size of the difference was either small and not statistically significant (Kulik, 2005) or large and statistically significant (Chang, 2004).

 

The conflicting findings among the different studies can be explained by differences in the size of the population examined (from dozens to several thousand courses), the different instruments used (some of which may be of lower quality), and the different research methods. Nonetheless, the main source of variance between the findings of the different studies is probably whether participation in SRI is mandatory or selective. If not all courses participate in the rating procedure but only those selected by the department or self-selected by the instructor, then the selected courses and their mean ratings may not be representative of the full course population and should not be used as a valid basis for comparison.

 

The author examined this issue in two studies that compared mean instructor ratings in paper and online SRI administration based on her university's data, with mandatory course participation. The results of both studies are presented graphically and reveal a strong decrease in annual mean and median ratings from paper to online administration. The lower online ratings cannot be explained by a negative response bias, that is, by a higher participation rate of dissatisfied students, because, as shown above, many more good students than poor students participate in online ratings. A reasonable explanation is that online ratings are more sincere, honest, and free of teacher influence and social desirability bias than in-class ratings.

 

The main implication is that comparisons of course/teacher ratings can take place only within the same method of measurement, either on paper or online. In no way should ratings from the two methods be compared. The best way to avoid improper comparisons is to use a single method of rating throughout all courses in an institution, or at least in a particular school or department.

 

Concern 4: The lower response rate and the higher participation rate of dissatisfied students in online administration will result in fewer and mostly negative written comments.

 

Faculty members are concerned that because the majority of expected respondents are dissatisfied students, the majority of written comments will be negative (Sorenson & Reiner, 2003). An additional concern is that because of the smaller rate of respondents in online surveys, the total number of written comments will be significantly reduced compared to in-class ratings. The fewer the comments written by students, the lower the quality of feedback received by teachers as a resource for improvement.

 

There is a consensus among researchers that although mean online response rates are lower than in paper administration, more respondents write comments online than on paper. Johnson (2003) found that while 63% of online rating forms included written student comments, fewer than 10% of in-class forms did. Altogether, the overall number of online comments appears to be larger than in paper surveys.

 

In support:

 

On average, classes evaluated online had more than five times as much written commentary as the classes evaluated on paper, despite the slightly lower overall response rates for the classes evaluated online (Hardy, 2003, p. 35).

 

In addition, comments written online were found to be longer, to present more information, and to pose fewer socially desirable responses than in the paper method (Alhija & Fresko, 2009). Altogether, the larger number of written comments and their increased length and detail in the online method provide instructors with more beneficial information, and thus the quality of online written responses is better than that of in-class survey comments.

 

The following are four possible explanations for the larger number of online comments and for their better quality:

 

• No time constraints: During an online response session, students are not constrained by time and can write as many comments, at any length, as they wish.

• Preference for typing over handwriting: Students seem to prefer typing (in online ratings) to handwriting comments.

• Increased confidentiality: Some students are concerned that the instructor will identify their handwriting if the comments are written on paper.

• Prevention of instructor influence: Students feel more secure and free to write honest and candid responses online.

 

Regarding the favorability of the comments, students were found to submit positive, negative, and mixed written comments in both methods of rating delivery, with no predominance of negative comments in online ratings (Hardy, 2003). Indeed, for low-rated teachers (those perceived by students as poor teachers), written comments appear to be predominantly negative. In contrast, high-rated teachers receive only a few negative comments and predominantly positive ones.

 

In sum, faculty beliefs about written comments are refuted: students write more comments online, of better quality, and the comments are not mostly negative but rather reflect the overall quality of the instructor as perceived by students.

 

References

 

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576-591.

Alhija, F. N. A., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students' written comments? Studies in Educational Evaluation, 35(1), 37-44.

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? The Journal of Economic Education, 37(1), 21-37.

Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. H. (2010). An analysis of IDEA student ratings of instruction using paper versus online survey methods, 2002-2008 data (IDEA Technical Report No. 16). The IDEA Center.

Chang, T. S. (2004). The results of student ratings: Paper vs. online. Journal of Taiwan Normal University, 49(1), 171-186.

Hardy, N. (2003). Online ratings: Fact and fiction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 31-38). San Francisco: Jossey-Bass.

Hativa, N., Many, A., & Dayagi, R. (2010). The whys and wherefores of teacher evaluation by their students [Hebrew]. Al Hagova, 9, 30-37.

IDEA. (2011). Paper versus online survey delivery (IDEA Research Notes No. 4). The IDEA Center.

Johnson, T. D. (2003). Online student ratings: Will students respond? In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 49-59). San Francisco: Jossey-Bass.

Kulik, J. A. (2005). Online collection of student evaluations of teaching. Retrieved April 2012, from http://www.umich.edu/~eande/tq/OnLineTQExp.pdf

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221-232.

Linse, A. R. (2010, Feb. 22). [Building in-house online course eval system]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.

Linse, A. R. (2012, April 27). [Early release of the final course grade for students who have completed the SRI form for that course]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment and Evaluation in Higher Education, 33, 301-314.

Porter, S. R., & Umbach, P. D. (2006). Student survey response rates across institutions: Why do they vary? Research in Higher Education, 47(2), 229-247.

Sorenson, D. L., & Johnson, T. D. (Eds.). (2003). Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96). San Francisco: Jossey-Bass.

Sorenson, D. L., & Reiner, C. (2003). Charting the uncharted seas of online student ratings of instruction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 1-24). San Francisco: Jossey-Bass.

Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 97-111.

 

CONTACT

 

Nira Hativa: nira@post.tau.ac.il

 

------------------------------------------------------------------------------------------------------------------

Nira Hativa, Ph.D.

Prof. emeritus of Teaching in Higher Education

Former chair of the Department for Curriculum and Instruction, School of Education

Former director of the Center for the Advancement of Teaching

Former director of the online system for student ratings of instruction, Tel Aviv University

------------------------------------------------------------------------------------------------------------------

 

 

 


Hi all,

Attached is some data from JHSPH which may be of interest. It is a couple of years old (I've recently requested last year's data, but the person who put together this report has moved on, so I'm not sure what exists yet). Although the evaluation return rate is slightly lower for online vs. f2f (page 3), and there are five times as many f2f courses as online ones, there are some interesting data points which may be useful. In particular, page 5 shows data for student evaluations of courses and instructors that ran simultaneously f2f and online during AY 10-11 (38 courses total), and the online sections actually received higher marks.

The addition of "meh" to the Likert scale may increase validity there, Mr. Ketcham! 


Best,
Clark


Clark Shah-Nelson
Sr. Instructional Designer, Center for Teaching and Learning
Johns Hopkins Bloomberg School of Public Health
111 Market Pl. Ste. 830 Baltimore, MD 21202
voice/SMS: +1-410-929-0070 --- IM, Skype, Twitter: clarkshahnelson
http://clarkshahnelson.com



 

From: <Wherry>, Peg <margaret.wherry@MONTANA.EDU>
Reply-To: The EDUCAUSE Blended and Online Learning Constituent Group Listserv <BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU>
Date: Wednesday, June 5, 2013 12:10 AM
To: "BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU" <BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU>
Subject: [BLEND-ONLINE] an online course question

 

I was in a meeting here the other day in which a department head stated that student evaluations of instructors are always lower for online than for on-ground classes taught by the same instructor. I think the department head’s statement may be true for that particular department, but I don’t think it’s universal or even representative. In fact, I know of at least one instance at my own institution for which the student end-of-course reviews of the instructor’s performance were higher in the online section. But it seems that in one department, at least, it is taken for granted that online reviews are more negative.

 

This is a different twist on the age-old (and I mean going back to using radio for distance learning in the 1920s) issue of whether student learning outcomes are the same regardless of delivery format. For my purposes here, it is not the students’ performance that is being compared but the students’ perceptions of the instructor’s performance as reflected in end-of-course surveys. I don’t recall having ever heard this come up in our professional discourse, so I will be very interested in any insights (or better yet, data) that any of you may have. Thank you!

 

Peg Wherry

Director of Online and Distance Learning

Extended University, Montana State University

128 EPS Building, P. O. Box 173860

Bozeman, MT 59717-3860

Tel (406) 994-6685

Fax (406) 994-7856

margaret.wherry@montana.edu

http://eu.montana.edu



At Suffolk County Community College we have results similar to those at Greg's institution, with the exception of courses where the instructor has included blind formative assessments throughout the course.  In those cases, where the professor has established a ‘culture’ of student feedback/engagement with the instructor, we find the response is very high.  So our experience tells us that evaluation of a course depends on the same dynamic that most course activity does:  positive engagement by the faculty member with the students creates quality results.

 

From: The EDUCAUSE Blended and Online Learning Constituent Group Listserv [mailto:BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU] On Behalf Of Gregory Ketcham
Sent: Wednesday, June 05, 2013 9:40 AM
To: BLEND-ONLINE@LISTSERV.EDUCAUSE.EDU
Subject: Re: [BLEND-ONLINE] an online course question

 

I would add to Linda's comments that I generally interpret student feedback on online courses as a "U" shaped distribution (think horseshoe). Yes, the very dissatisfied provide feedback, but so do the extremely satisfied. The middle majority, whose reaction might be unscientifically summarized as "meh", don't feel compelled either way to contribute feedback.

 

Greg

 

Clark, this is great! I enjoy the idea of adding “meh” to the Likert scale. Thank you very much! - peg

 

On a related note:

How does your institution handle low enrollment courses where only 2 or 3 students end up submitting evaluations, which then may jeopardize their anonymity?

 

Message from epderi@gmail.com

Dear Peg,

Just a few references addressing online course evaluation, touching on students’ perceptions and evaluation results…
As I recall, most cases (pilot or campus-wide implementations) report positive student perceptions, longer responses (longer than 3 words!), and results that are perceived as more accurate (vs. marking just one choice throughout the evaluation), despite the myth about negative responses from students.
Lower participation rates are also commonly reported, though many institutions encourage students to complete the evaluation in various ways.

Nice compilation of literature on Online Evaluation (Columbia College Chicago)
Presentations
NERCOMP 2009 (Tufts & Brandeis)

EDUCAUSE Mid-Atlantic 2009 (Philadelphia University)

Best regards,

Enoch Park
Director of Distance Learning
Pfeiffer University

