Student surveys are dead. Whether an online form emailed after a course or a piece of paper thrust in front of students at the second-to-last tutorial or seminar, they were once the mainstay of gauging student satisfaction in tertiary education, administered just as the pressure of end-of-semester exams and essays loomed.
The average response rate for these surveys is less than 10%, and research shows response rates for student evaluation surveys in higher education continue to fall.
Can you blame the students?
Satisfaction surveys taken at the end of a course offer no benefit to the student who has just completed the unit. The aim of any student survey is to understand how students experienced the course: whether it was well structured, whether the curriculum was delivered to a satisfactory standard, whether they encountered difficulties, and how the course could be improved.
By the time the data is collated and analysed, sometimes two or three years later, the student has moved on entirely. Any change that comes along, if it comes along at all, is meaningless to a student who is by then either on the cusp of graduation or already in the workforce.
Surveys are stuck in the past. Continuous feedback solves problems now.
Surveys are plagued by low response rates, little perceived benefit to participating, and often ambiguous or poorly worded questions. In some instances, teachers must also overcome a language barrier before getting meaningful responses. Many students are reluctant to participate because they are asked to identify themselves; some fear retribution if they criticise a lecturer or tutor they are likely to encounter again.
Real-time continuous feedback is the way of the future. It helps teachers and students create the ideal environment to enhance learning outcomes as the course is delivered.
Real-time continuous feedback systems such as Loop empower students to respond to questions in their own time and on their preferred device. Questions can be tailored to different learning styles and response requirements: they can be as simple as "did you understand today's lesson?" with a yes or no answer, or teachers can ask multiple-choice or rank-order questions. The crucial difference is that these systems are anonymous, scalable across different campuses, and can be analysed in real time.
Anonymity helps institutions increase their sample sizes. That means more meaningful feedback, which students can see being acted on over the course of their unit. It gives them a sense of control over their learning and increases engagement.
Students who are struggling with the course or curriculum can express their frustrations to their teachers without embarrassment, because all responses can be anonymous.
SuniTAFE implemented Loop in 2017. It has improved outcomes for students as well as given the institute a platform for developing a strategy of continuous improvement in training and assessment.
You can read more about SuniTAFE’s experience with continuous feedback in our case study.