
Here's a straw poll question, directed primarily at those whose responsibility includes the care, feeding, and promotion of the campus learning management system (LMS): what, currently, is your primary measure of a successful LMS project? Is it the percentage of faculty using it? I know there are probably a variety of ways you might calculate the success of your LMS. But my question is: what is the **primary** or most valuable way or metric you use? If you could use only one such measure, what would it be?

Thanks,
Malcolm

----------
Malcolm Brown
Director, EDUCAUSE Learning Initiative
email: mbrown@educause.edu
IM: fnchron (AIM)
Voice: 575-448-1313

Comments

Good question and very timely... and some follow-on questions (not a hijack!)

  1. How do you measure the "success" of your LMS, or what metric do you use?
  2. Do you also measure "uptime" or "availability"?
  3. Do you run an "in-house" or a "hosted" LMS?
  4. What was your primary deciding factor in #3? What made you decide to run in-house, or what made you decide to run "hosted"?

Thanks,

M
We measure success based on the number of courses and enrollments.

Erica
Fascinating that one would select statistics such as percentage of faculty adoption or raw numbers of courses and enrollments to measure the success of the LMS. Shouldn't we be looking for metrics of improvements in teaching and learning? I understand such metrics are more difficult to develop, but the investment in a CMS or any other technology tool could only be predicated on an improved learning outcome.

Joel Backon
Director of Academic Technology/History
Choate Rosemary Hall
333 Christian St.
Wallingford, CT 06492
203-697-2514
Sent from iPad2
Joel:

What about a case where adoption and use of the LMS clearly increases communication between faculty and students? Is increased communication in and of itself an indicator of improved learning outcomes? Probably not; but could it be a related factor? 


regards,
Greg

Perhaps. If one identifies improved communication between faculty and students as an improved learning outcome, then yes.

Joel

-- 
Joel Backon
Director of Academic Technology / History
Choate Rosemary Hall
333 Christian St.
Wallingford, CT  06492
203-697-2514

This depends entirely on how you define "success." If you think success is about being able to demonstrate adoption (and by doing so, suggest that the investment of money was worth it purely because people are choosing to use the system), then approaching this purely from the standpoint of how many people use the system might make sense.

If you're talking about "success" in terms of successfully demonstrating learning, then relying purely on numbers of adoptees would be inappropriate. 

I can't speak to our LMS specifically, but we think about the success of the blogging system we run (UMW Blogs) from a number of perspectives:

--we tout the number of faculty and student users and number of courses, in part because it is a grassroots initiative that is wholly opt-in (courses and accounts are not created for our faculty or students by default). We're proud of the way the system has been adopted by the community and we think it provides *some* evidence of its value and success. 

--we work closely with faculty to help them understand the ways in which the system might be used to promote and demonstrate student success with regard to learning outcomes.

--we talk a lot about anecdotal evidence of student learning, engagement, and the impact of publication on the ways students write and present themselves. We "watch" the system pretty closely for these anecdotes, and we are able to do so because the vast majority of the courses in the system are taught in the open. 

I think having a broader conversation about what constitutes success is vital to understanding the impact any system is having.

Martha Burtis


Message from trentbatson@me.com

Malcolm:  you posed such an open-ended question but maybe that's the secret to a good discussion.

LMS's such as Epsilen and Desire2Learn also include an electronic portfolio component. Odd matching, in a way: the LMS harkening back to teacher-centered instruction, the eportfolio system anticipating a time of active student learning with evidence as the basis for credentialing.

It is the eportfolio functionality that supports tracking cohort progress toward learning goals; it is what many of these systems are designed for.  LMS's are not really designed for that function.  There are about 20 vital and viable electronic portfolio systems (open source and proprietary) in the world.  

I can see many ways to measure the success of electronic portfolio components of LMS's -- for example, the degree to which they are used on behalf of high-impact practices (Kuh, 2008, AAC&U). But I find it hard to find metrics to measure success in the broader sense for LMS's. With eportfolios, you can use standard measures of student engagement because students own and keep their eportfolios over time, whereas LMS's are "owned" by faculty and are limited to the time of the course.

I would think one measure of a good LMS today would be to what extent the functionality in the LMS can be used also within the eportfolio module.  

From a service standpoint, we always thought the best aspects of an LMS were how easy it was for faculty to create their course and to modify it.  Or, to what extent they had easy access to previous course content.  

The LMS market is huge and will be around for a long time, but it is certainly not the future, at least as the dominant teaching-learning communication space as the learning paradigm re-structures itself.

Great discussion, Malcolm.
Best
trent

Hi Malcolm,

The percentage of faculty using it is probably the most important. BTW, this includes adjuncts. It's also the one measure that Administration always asks about. This does not mean that we accept faculty usage as the only measure. We do look at student usage and the actual features that are used. We call these Bb Metrics and look at whether the class is a basic user (syllabus upload, announcements), a substantive user (content uploaded, external links, etc.), or a power user (discussion forums, grade center, blogs, journals, etc.). This gives us an idea of the depth of usage as well as the amount of usage.

Hope this helps,
Sandie

Sandra L. Miller, Ed.D.
Director of Instruction & Research Technology
William Paterson University
300 Pompton Road
Wayne, NJ 07470
973.720.2530
millers@wpunj.edu
Think before you print
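
[Editor's note: Sandra's tiered "depth of usage" measure can be computed directly from an activity export, independent of any particular LMS. Below is a minimal sketch; the tool names, input format, and tier rules are illustrative assumptions based on her description, not Blackboard's actual reporting schema.]

```python
# Hypothetical sketch: classify each course's depth of LMS use into the
# basic / substantive / power tiers described above. Tool names and the
# input format are illustrative, not an actual Blackboard export.
from collections import Counter

BASIC_TOOLS = {"syllabus", "announcements"}
SUBSTANTIVE_TOOLS = {"content", "external_links"}
POWER_TOOLS = {"discussion_forums", "grade_center", "blogs", "journals"}

def classify_course(tools_used):
    """Return 'power', 'substantive', 'basic', or 'inactive' for one course."""
    tools = set(tools_used)
    if tools & POWER_TOOLS:
        return "power"
    if tools & SUBSTANTIVE_TOOLS:
        return "substantive"
    if tools & BASIC_TOOLS:
        return "basic"
    return "inactive"

courses = {
    "HIST101": ["syllabus", "announcements"],
    "BIO210": ["syllabus", "content", "external_links"],
    "ENG305": ["content", "discussion_forums", "grade_center"],
}

print(Counter(classify_course(t) for t in courses.values()))
# Counter({'basic': 1, 'substantive': 1, 'power': 1})
```
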
1. The percentage of the student body using the system in any one semester is primary. (Most recently at about 95%.) The percentage of faculty is secondary. (Most recently about 85%.)

2. We haven't ever calculated or publicized "uptime", but it is almost never down during an academic period.

3. We run "in-house".

4. The decision was made several years ago, when internet connectivity to the middle of nowhere, upstate New York was less reliable. We initially tried to share servers between ourselves and a neighboring institution, but even that proved too unreliable for faculty acceptance. These days, however, the primary motivator is cost. We can meet our own needs much more economically than the vendor can.

Nikki Reynolds
Hamilton College

We can get the statistics on faculty and student participation in the system. We can't get the faculty to agree to let us even try to measure learning outcome differences.

In fact, I always question the push to measure the learning outcomes correlated to the use of technology in a course. One cannot really claim a change unless one has some accurate information on learning outcomes before incorporating the technology into the course. That is one measure our faculty, at least, do not wish for us to take. Then there is the problem of trying to claim that any difference in learning outcomes for any particular semester is "caused" by the use of the technology. 

The teaching and learning situation is much too complex to claim more than a correlation between the use of technology and the learning outcomes, whatever they may be. There are so many other variables: Is the instructor having a good year, or a bad one? Was he or she too focused on research, or an upcoming tenure review? Was the course scheduled at a bad time for the professor? Did he or she bother to consider changes in pedagogy related to the technology, or was the technology just "slapped onto" the course? How well prepared were the students in this particular course section for this material? Was the difference just due to the novelty of the technology? Was there adequate support for this technology application, or were the students (and possibly the faculty member) left to stumble through the process on their own?

It seems to me that if we want to make significant claims about the benefits of technology to learning outcomes, we need a baseline. Even with a baseline, we really need to take the same sorts of metrics over a period of several semesters of the same course to be able to "average out" so many of the variables that we can never control.

BTW - the foregoing was NOT an attempt to argue against measuring learning outcomes. I think we should do a great deal more of that, preferably across the curriculum, regardless of the technology application, or lack thereof. We should be trying to examine pedagogies, learning spaces, teaching and learning styles or modes, etc. We should be examining just as many of the variables we *can* control as possible, over time, searching for trends that might give a clue to cause and effect.

Nikki Reynolds
Hamilton College

Greetings All!

Indeed, a good discussion!

At UF we have relied heavily on adoption to assert success, i.e. courses and sections. That, and how many instructors are howling for my head on a pike ;-)

However, as has already been observed, simple adoption is a quite narrow definition of success, especially when some units may mandate use while others may not. As a result, I am currently playing with the idea of trying to tease out differences between Return on Investment and Return on Value. ROI is, I think, appropriately addressed by adoption stats; but ROV strikes more deeply at less easily measured characteristics such as increased communication and improved student learning (both of which have rightly been mentioned).

Our first pass at measuring ROV is focusing on user satisfaction by means of student and instructor surveys (to launch later this Spring). Based on ideas that emerged in this thread, part of these surveys will seek anecdotal evidence of improved communication and improved learning outcomes [Do you believe use of the LMS has improved your overall learning? – or some such question].

I agree with the posts identifying e-portfolios as a more effective tool to gauge learning outcomes, though tying those outcomes to the LMS becomes problematic. But cycling back to anecdotal evidence of LMS “success”:

What other measures or questions can you/we come up with that might add insight into success?

Peace,

Doug

-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Douglas F. Johnson, Ph.D.
Assistant Director for Learning Services
Office of Information Technology | University of Florida
Hub 132 | 352.392.4357, opt 3.

Between the idea and the reality … Between the conception and the creation … Falls the Shadow.
– T.S. Eliot, The Hollow Men

A great topic. Given that Granite State College serves over 50% of courses online, the choice of LMS is critical. To answer the question, however, is difficult because it's like asking what would be the most significant criterion in evaluating the plumbing system of a 4 BR house. It has to respond positively and consistently to the demands it was intended to serve. Turn the valve(s) and it works. As residents don't necessarily "buy into" plumbing and choose to use it, our instructional system is what it is. This is our LMS. Here is your training, support system, teaching community, etc.

To some extent, our view of the LMS we chose (Moodle 2.x, as a migration away from Bb) was that the major LMSs are mostly commodities in terms of communication features and access. The value came in the long-term cost, tech support, adaptability to emerging tools/methods/media, and ease of integration with Banner. Some of these values pay off immediately, some later on down the road in ways we can't know for sure. The same applies to a plumbing system, perhaps: using PEX instead of copper, choosing to "T" a connection in anticipation of adding another bathroom, etc.

To answer the question, I'd say the most valuable metric would be usability. Do we spend an inordinate amount of time explaining to instructors and students how to use basic functions? Does it play nice with most browsers? Is the open-source development community behind it committed and responsive to user needs and demands?

- Steve Covello

--
Steve Covello
Rich Media Specialist
Granite State College
8 Old Suncook Road
Concord, NH 03301
603-513-1346
Skype: steve.granitestate
Scheduling: tungle.me/steve.granitestate
Nikki, thanks for the thoughtful response. You're right that isolating the appropriate metrics is a real challenge, particularly since the LMS is really a delivery system. For those delivering online courses, the LMS has inherent value as a delivery system. For F2F courses, there are other reasons for the LMS. Metrics are difficult because of the many ways teachers use the LMS, ranging from posting documents to interactive learning. How do I measure the success of our LMS when some of the 77% of teachers using it are posting syllabi and assignment sheets while others are actively using blogs, wikis, assessments, and multimedia content? At the end of the day, it's all about the teacher, not the tools.

Joel


Joel Backon 
Director of Academic Technology/History
Choate Rosemary Hall
333 Christian St.
Wallingford, CT 06492
203-697-2514
Sent from iPad2


Message from trentbatson@me.com

Hi, all -- just a clarification about "learning outcomes."  Learning outcomes have been created by most institutions -- a set of 6 or 8 or so of what students should have achieved by the time they graduate.  Based on those stated outcomes, on some campuses, rubrics have been created within colleges at a university, within major programs, within gen ed programs, and on down to the course level.  The rubric creates standards for what it means to reach a learning goal.  The disciplines decided on those standards.

For each level -- first year, second year, etc., those standards will have criteria that become more challenging as the students develop.  

To show that students do satisfy the criteria, the students collect evidence of their work -- in a first-year experience, in undergraduate research, in an internship, in field studies, or in other high-impact practices and in regular courses. As faculty evaluate the evidence each semester, they can then indicate in the institutional eportfolio backend that the student is making progress. The learning outcomes process, then, is providing criteria-satisfying evidence of work that fits the standards and definitions in the rubric.

Learning outcomes are being tracked constantly, in other words, and the institution can run reports for accreditation based on the progress of cohorts of students at any one time toward fulfilling all the criteria in the rubrics.

The problem with LMSs is that they don't perform this task. Electronic portfolios do. If you want strong assessment, you need an electronic portfolio, not an LMS. This is not learning because of the technology; it is a measure of learning, by whatever means, using technology. (Eportfolios can improve learning, but that's a separate topic.)

I know this is a bit off the original question -- or maybe it really is at the heart of the question?

Best
Trent 
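
[Editor's note: Trent's description of outcomes tracking implies a simple data model: rubric criteria at each level, student evidence attached to criteria, and cohort-level progress reports. Below is a minimal sketch of that model; all names and fields are illustrative assumptions, not any vendor's eportfolio schema.]

```python
# Hypothetical sketch of the outcomes-tracking model described above: rubric
# criteria per level, evidence a student attaches to a criterion, and a report
# of how much of the rubric each student has satisfied. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    outcome: str          # e.g. "written communication"
    level: int            # 1 = first year, 2 = second year, ...
    description: str

@dataclass
class Evidence:
    student: str
    criterion: Criterion
    artifact_url: str
    satisfied: bool = False   # set when faculty evaluate the evidence

def cohort_progress(evidence_records, all_criteria):
    """Fraction of the rubric's criteria each student has satisfied so far."""
    per_student = {}
    for ev in evidence_records:
        if ev.satisfied:
            per_student.setdefault(ev.student, set()).add(ev.criterion)
    return {s: len(done) / len(all_criteria) for s, done in per_student.items()}

crit1 = Criterion("written communication", 1, "organizes ideas clearly")
crit2 = Criterion("quantitative literacy", 1, "interprets basic data")
records = [
    Evidence("pat", crit1, "https://example.edu/p1", satisfied=True),
    Evidence("pat", crit2, "https://example.edu/p2"),
    Evidence("sam", crit2, "https://example.edu/p3", satisfied=True),
]
print(cohort_progress(records, [crit1, crit2]))  # {'pat': 0.5, 'sam': 0.5}
```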

Trent's response gets to the heart of what LMSs are designed to do. As already mentioned (and discussed in the literature), LMS success is measured by adoption and usage, particularly in terms of the affordances offered by the various communication and collaboration tools (wikis, blogs, web conferencing, etc.) available in most of the latest LMS versions. If faculty use the LMS reporting functions for each of their courses, they can also set up an early warning system to flag and follow up with individual students with below-average "hits". However, no LMS can measure learning outcomes. E-portfolios and other learning products can.

Shahron Williams van Rooij

Connected by DROID on Verizon Wireless
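
[Editor's note: Shahron's early-warning idea can be prototyped with nothing more than a per-student count of activity ("hits") pulled from the reporting tool. A minimal sketch follows; the data shape and the below-half-of-average cutoff are illustrative assumptions, not a feature of any particular LMS.]

```python
# Hypothetical sketch: flag students whose activity ("hits") falls well below
# the course average so an instructor can follow up. The data and the
# 50%-of-average threshold are illustrative assumptions.

def flag_low_activity(hits_by_student, threshold_ratio=0.5):
    """Return students whose hit count is below threshold_ratio * course mean."""
    if not hits_by_student:
        return []
    mean_hits = sum(hits_by_student.values()) / len(hits_by_student)
    cutoff = threshold_ratio * mean_hits
    return sorted(s for s, hits in hits_by_student.items() if hits < cutoff)

course_hits = {"alice": 120, "ben": 95, "carla": 15, "devon": 4}
print(flag_low_activity(course_hits))  # ['carla', 'devon']
```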



Malcolm posits a great question - one that I've attempted to reconcile for years. For reasons that have never been entirely clear to me, the LMS always seems to be under 'extra scrutiny' to justify its place, expense, and usage on campus. Never mind that the cumbersome ERPs would often be happily sidestepped if it were not for the fact that many campuses 'force' students through the portal in order to access the services they really want. Akin to a toll booth company telling everyone that their usage is very high - when the drivers simply want to get to their destination.

To help answer those questions, we conduct a student tech survey each year (based loosely on the ECAR annual survey) - and are starting a faculty survey this year as well. We've collected very useful qualitative information specific to the LMS (satisfaction, features most utilized, mobile access, etc.). We've been pleasantly surprised by the student responses. The LMS is often the #1 most accessed and utilized instructional tool - even ahead of library databases. (Granted, this may be biased because faculty are requiring that students access resources available only through the LMS. However, most students seem satisfied with the tool, nonetheless.)

The quantitative metrics pulled from the LMS itself (I'll not mention the brand) are useless. That a faculty member posts a syllabus should in no way be equated with a faculty member who relies on the tool exclusively. Yet, that's what the metrics often reflect. ("Shells opened and accessed.") A new LMS we're considering should allow us to extract more useful data from the system. (Fingers crossed.)

A sidebar note from something I heard at the ELI Annual Meeting that I found intriguing. A panelist from edX indicated that she envisions a not-so-distant future of customizable learning environments based entirely on Big Data. If it comes to fruition, it will turn the LMS world inside out. Imagine a system that automatically adapts to, say, high-risk students or returning adult learners. Talk about a game-changer. When that product comes out, I'm buying lots of stock...

RG




And, Rob, the irony of using data that are (for example) tied to "page views" as an indicator of course interactivity or student engagement is that a poorly-designed course can end up looking great because students spend so much time searching for materials or having to drill down unnecessarily to reach the relevant tools or resources.

Susan


Susan M. Zvacek, Ph.D.
Senior Director, Teaching Excellence, Learning Technologies, and Faculty Development
Fort Hays State University
Hays, KS 67601
785-628-4194
smzvacek@fhsu.edu




"Perceived student learning" is as far as we've taken this question. However, we measure it with a single Likert question, so it has many shortcomings (e.g., reliability, face validity). About 2/3 of our students consistently agreed with the statement "ANGEL has improved my learning". We stopped asking that question several years ago, but our data is still available at: http://www.kumc.edu/ir/moe/ANGELLearningResponse.aspx We also measure Number of ANGEL Course and Group Logins: http://www.kumc.edu/ir/moe/AngelLearning.aspx Dave ********** Participation and subscription information for this EDUCAUSE Constituent Group discussion list can be found at http://www.educause.edu/groups/.

True, students don’t need the instructors present in the LMS to enjoy its benefits. But do they expect them to be? And if they do, how can those expectations be managed effectively?

(This is an area of fairly intensive debate on our campus.)

With kind regards,

Marianne Schroeder | Senior Manager, Teaching & Learning Technologies
The University of British Columbia | Centre for Teaching, Learning & Technology
102 – 1961 East Mall, Vancouver BC V6T 1Z1
Ph: 604.822.0255 | Fax: 604.822.2157 | mailto:marianne.schroeder@ubc.ca

Message from longpd@mit.edu

Malcolm: to answer your 'success metric' question for the LMS, sadly at my institution the answer is as you posited - faculty adoption rates. It has nothing to do with how they use it. And indeed it's a bogus measure, because it's mandated that all academics must have a Bb course shell. There are exceptions permitted if the academic argues for it and if the associate dean of T&L is open-minded, but the cultural norm is that the LMS is the corporate choice and thou shalt use it. Success, circularly, is the nearly 100% adoption of this mandate… Do you smell a problem here? Sigh…

phil

(apologies for the delayed reply - it seems that for some reason this list no longer accepts posts from my UQ account, only my MIT account - I don’t have the energy to correct this at the moment)

:: :: :: :: :: :: :: :: :: :: :: ::
Professor Phillip Long :: Ofc of the Deputy Vice Chancellor Academic :: The University of Queensland :: Brisbane, QLD 4072  Australia
ITEE 

Director: Centre for Educational Innovation & Technology






"Perceived student learning" is as far as we've taken this question. However, we measure it with a single Likert question, so it has many shortcomings (e.g., reliability, face validity). About 2/3 of our students consistently agreed with the statement "ANGEL has improved my learning". We stopped asking that question several years ago, but our data is still available at: http://www.kumc.edu/ir/moe/ANGELLearningResponse.aspx We also measure Number of ANGEL Course and Group Logins: http://www.kumc.edu/ir/moe/AngelLearning.aspx Dave ********** Participation and subscription information for this EDUCAUSE Constituent Group discussion list can be found at http://www.educause.edu/groups/.
Close
Close


EDUCAUSE Connect
View dates and locations

Events for all Levels and Interests

Whether you're looking for a conference to attend face-to-face to connect with peers, or for an online event for team professional development, see what's upcoming.

Close

EDUCAUSE Institute
Leadership/Management Programs
Explore More

Career Center


Leadership and Management Programs

EDUCAUSE Institute
Project Management

 

 

Jump Start Your Career Growth

Explore EDUCAUSE professional development opportunities that match your career aspirations and desired level of time investment through our interactive online guide.

 

Close
EDUCAUSE organizes its efforts around three IT Focus Areas

 

 

Join These Programs If Your Focus Is

Close

Get on the Higher Ed IT Map

Employees of EDUCAUSE member institutions and organizations are invited to create individual profiles.
 

 

Close

2014 Strategic Priorities

  • Building the Profession
  • IT as a Game Changer
  • Foundations


Learn More >

Uncommon Thinking for the Common Good™

EDUCAUSE is the foremost community of higher education IT leaders and professionals.