Showing posts with label Evaluation.

Sunday, July 11, 2010

8 Vital Pieces of Everyday Data for DL Administrators

I always have these on hand since this information is pure gold. Some of these I review every day, and others I review weekly or monthly. But each is critical to ongoing online program evaluation and continuous improvement.

  1. Student Retention Rates (Course Completion)
  2. Student Grade Performance, particularly as it compares to face-to-face counterpart courses.
  3. Enrollment
  4. Demographic Trends (is the population getting older, younger, etc.?)
  5. Student Usage and Satisfaction with Support Services
  6. Faculty Usage and Satisfaction with Faculty Support Services
  7. Graduation and Year-to-Year Retention (do students who take some or all courses online graduate sooner?)
  8. Student Withdrawals (why do some students not persist? is any of this related to the course, the instructor, or support services? are any of the causative factors within our control?)
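For anyone tracking items 1 and 2 by hand, they take almost no code to compute. Here's a minimal Python sketch, assuming a hypothetical record format (the field names `status`, `modality`, and `grade` are made up for illustration; adapt them to whatever your student information system exports):

```python
# Sketch of computing two of these metrics (completion rate and average
# grade by modality) from enrollment records. Field names are hypothetical.

def completion_rate(records):
    """Percentage of enrollments that completed the course (not W/I)."""
    finished = [r for r in records if r["status"] == "completed"]
    return 100.0 * len(finished) / len(records) if records else 0.0

def mean_grade_by_modality(records):
    """Average final grade (0-4 scale) for online vs. face-to-face sections."""
    sums, counts = {}, {}
    for r in records:
        if r["status"] != "completed":
            continue  # grade comparisons only make sense for completers
        m = r["modality"]
        sums[m] = sums.get(m, 0.0) + r["grade"]
        counts[m] = counts.get(m, 0) + 1
    return {m: sums[m] / counts[m] for m in sums}

# Example with made-up data:
records = [
    {"status": "completed", "modality": "online", "grade": 3.0},
    {"status": "completed", "modality": "online", "grade": 2.0},
    {"status": "withdrew",  "modality": "online", "grade": None},
    {"status": "completed", "modality": "f2f",    "grade": 3.5},
]

print(completion_rate(records))         # 75.0
print(mean_grade_by_modality(records))  # {'online': 2.5, 'f2f': 3.5}
```

Run against a term's worth of records, the second function gives you the online versus face-to-face grade comparison in item 2 directly.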

Wednesday, December 31, 2008

My DLA Forecast for 2009


As we say goodbye to an unforgettable year, here's what I think is in store for 2009, at least in the world of online learning:



  • A greater emphasis on the financial benefits of distance learning. Higher ed administrators who were previously lukewarm about DL will crunch the numbers to see exactly how online learning lowers the instructional cost per student.


  • Organizational structures will continue to evolve with DL departments increasingly aligned with academics rather than IT groups.


  • eLearning will make its formal entrance into the mainstream, appearing in the mission statements of more and more traditional institutions.


  • Dramatically increased use of social networking tools (like Facebook) in online learning - for both instruction and support. Facebook will "age" a bit as more and more faculty and over-30 folks hop on for DL purposes and then get a little addicted themselves.


  • Less one-on-one support for faculty as resources get further stretched (more courses but little or no new staff). The bad news is we'll see more group training sessions, but savvy administrators will develop better, easy-to-use (and easy-to-locate) online tutorials as well as mentoring programs.


  • Big emphasis on green. Although gas prices are lower, the Summer of '08 is not forgotten. Online learning provides the obvious answer here. We'll slowly see more telecommuting of DL faculty and staff as well. Lots and lots and lots more online meetings.


  • More streamlined approaches to quality, evaluation, and retention. More software offerings, such as that from Starfish Retention Solutions, will help us become accountable in a systematic way.


  • Increased development and marketing of online courses and programs to the Hispanic community as well as those who support this population, such as teachers and healthcare providers.

Monday, December 15, 2008

The Most Important DL Course Evaluation


The most important DL course evaluation - the one that ultimately affects many of the others - is the formative evaluation within the course. Yet it's the one that so many people skip or just don't take very seriously. End-of-course, summative evaluations are swell for proving to department heads and accreditors how good or not-so-good an instructor or course is. But what good do they do for the students in the course?

Even in an established course that's been working well, I like to do a very simple formative evaluation a week or so before the course midpoint. Usually it's just an email or discussion board posting asking students to list three things they like most about the course, three things they like least, and what the instructor could do to improve. I get some valuable information from these simple questions - but I follow up with probing questions if any key responses are too vague. Every group of students is different. Students who liked group projects the term before hate them the next. (Okay, I admit most students don't like groupwork at all.)

After I get the results, I post them for the students to see and let them know what course changes I am going to make as a result of their feedback. You can bet that making a few changes, no matter how minor, will raise your end-of-course evaluations. Not doing a formative evaluation is like serving soup that you didn't taste and season during the cooking (credit Bob Stake's metaphor). Where I need to improve is better record-keeping or logging of these informal, formative assessments.

Monday, December 1, 2008

Four Ways to Get Students to Complete Online Course Evaluations

The primary downfall of online course evaluations is the low completion rate. In the traditional environment, students are more or less held captive while they rate their learning and their instructor on a piece of paper. To raise online completion rates to a comparable level, try these four ideas.
  1. Give students 1 point extra credit for completing the evaluation. Almost all online course evaluation software allows one to see which students completed the survey, while maintaining the anonymity of their responses. This is probably the most important tip, and the one not to skip. I use a grading scale of 1000 (900 is an A) for my courses, so I actually offer 10 extra points, which sounds better but is really equal to 1 point on a 100-point scale.
  2. Discuss the importance of completing the evaluation early on in the course - the first week is not too soon. Get it in their minds how important this is to you.
  3. Get your institution to agree to delay availability of grades if an evaluation is not completed. The University of Oregon has a policy that withholds grades and transcripts until the Friday after the grading deadline ends for those students who don't complete the evaluations (other students get them as soon as they are turned in).
  4. Send lots of reminders in various formats. Use email, the discussion board, and course announcements.

Wednesday, October 29, 2008

Test Proctoring: Time to Ditch the Paper and the Pencils


Amazingly, a very significant portion of proctored testing of online students is still conducted via paper and pencil. Not only is this a less secure way to deliver tests, but it is also enormously expensive in terms of mailing and labor to and from the testing sites and instructors. For years, we've had built-in testing tools in our CMSs, including Blackboard and WebCT. These tools also allow us to lock the test so that only the proctor can open it with a provided password.

I think there are two main reasons why some of us are stuck. First, many university testing centers simply don't have enough computers to test dozens of students at one time (such as during midterms). The other reason is plain old resistance to change. The entire process of testing and mailing is historically so cumbersome that once we have a system underway (even a greatly flawed one), any attempt to suggest obvious and available improvements is met with fear and panic. For those still wondering if the change is worth the initial investment, let's review the benefits of using online testing in the proctored environment:


  • no mailing costs

  • significantly lower labor costs

  • no printing costs

  • instant grading (for objective tests)

  • earth-friendly

  • ability to randomize questions through a test bank

Sunday, October 26, 2008

Evaluation: Why Student Satisfaction Matters


I was recently part of a discussion regarding the importance, or lack thereof, of student satisfaction as a measure of the success of a DL course or program. Clearly, it is student "learning" where we focus the brunt of our evaluation efforts - and we must, lest we face the woeful proposition of being admonished by our accreditors. But student satisfaction measures are not fluff. They can be clear indicators of whether or not an online class needs a major fix, and whether or not our training programs for online instructors need to be realigned. Through student satisfaction measures we can learn much - such as whether or not an online instructor is communicating in a timely manner, whether or not appropriate support systems are in place for online students, and whether or not we may hope to retain the student as a consumer of our online programs. Student satisfaction has a dramatic impact on our marketing, our enrollment, our retention, and even our course quality - if we utilize the data to thoughtfully make course, program, and administrative improvements.

Sunday, September 14, 2008

Five Essential Pieces of DL Admin Data

I have to admit - collecting data or even analyzing data is not my favorite part of my job. I have our data collection split up among various staff members, and in theory, we try to keep these updated weekly so that we don't have to scramble when asked for information. If you don't keep up with anything else, there are five pieces of data you really need to have for a DL program. These will help you with strategic planning, accreditation reports, justifying new resources, and more.
  1. Number of students enrolled in distance and online courses per semester. We define distance courses as those offered more than 50 percent online, and online courses as those offered more than 95 percent online.
  2. Student retention in distance and online courses. What overall percentage completed these courses? How does this compare to your traditional courses? Are you improving in this area? Related to this is data comparing pass and fail rates.
  3. Student satisfaction with online courses. Usually obtained from course surveys. We also use annual telephone surveys and focus groups.
  4. Faculty and student satisfaction with support services. We track all email, telephone, and f2f requests for assistance in Remedy software. Each caller is sent, by email, a brief evaluation of the services they received.
  5. Faculty course improvements based on evaluation results. Each faculty member completes a brief form summarizing their evaluation results and how they will use this information to make course improvements. We keep the individual ones on file and compile a summary, which can also be used to make adjustments to our training programs. This makes accreditors happy.