This article explores how action research can provide valuable feedback for the teacher that, when responded to, will propel students forward.
Monitoring and documenting learning
As teachers, we can attest that feedback is the primary agent propelling students forward; indeed, feedback has been shown to double the rate of student learning (Wiliam, 2011; Hattie, 2012). “What makes any assessment in education formative is…that the performer has opportunities…to reshape the performance to better achieve the goal” (Wiggins, 2012, p. 15). This improvement occurs, however, only if the teacher turns monitored learning into effective feedback and the learner gets a chance to use that feedback to improve.
So how do you know whether the feedback you are giving your students will result in positive change? This is the question we asked ourselves after using a year-long math ‘problem-of-the-week’ (POW) program with our grade 5 students. The program involved placing our students into small, mixed-ability groups and giving them authentic, challenging problems to solve. Our goal was to have them confidently apply our adapted version of Polya’s four steps to problem solving (Polya, 1945).
For the first year, feedback was provided verbally, with individual scaffolding as needed. We supported students as they collaborated and shared their responses with each other, then ended with a whole-group discussion in which we compared strategies that were effective with those that were less so. We wondered whether students were improving because of this approach, or whether more personalized feedback was needed. In other words, were we helping our students move forward?
Our action research
To answer that, we decided to do some inquiry ourselves. First, we inquired into what effective feedback looks like, reviewing existing research, primarily from Brookhart (2008), Wiliam (2011) and Wiggins (2012). Armed with this knowledge, we were ready to create effective feedback tools that would result in improved student learning; however, what would work best with our students? With that question in mind, we designed a study with different treatment groups. One group’s learning would be assessed with a rubric, complete with headings and scores; the other’s with written feedback without rubric formatting. The feedback language would be identical for both groups; only the formatting would change. We would then evaluate the effectiveness of each assessment by having the students write a pre-test at the start of the study and a post-test after four weeks of learning; we also tracked their performance on each problem.
Our results supported the evidence we had already gathered through our reading: 95% of the students improved after receiving some form of descriptive feedback. The surprise was that the most successful group (the rubric group) was not the one we had originally expected (the feedback group). This runs counter to research suggesting that written feedback alone should produce the greatest change (Wiliam, 2011).
Our team reflected on why this might have happened. We are a rubric-heavy school, so perhaps students are more familiar with that format. We also thought that rubric scores (1, 2, 3, 4) might not be perceived in the same way as letter grades or percentages, which have been shown to negate the effect of descriptive feedback (Wiliam, 2018). Lastly, we hypothesized that our written feedback (without the rubric) might not have been personalized or effective enough to propel students forward. Regardless of the reason, we were now able to act. As Dylan Wiliam (2018) put it: “If you collect evidence, use it!”
Becoming a responsive teacher
It is not only the student who has a responsibility to respond to feedback; the teacher also has a professional duty to use assessment data to adapt their teaching to student needs. At Mulgrave, we now have evidence that assessing students with a rubric after each problem accelerates their learning and leads to greater achievement. So, our first change was to implement rubrics after each POW.
In addition to the quantitative data we collected, we conducted a student survey to see how students felt about the process. After reviewing their responses, we decided to make further changes to the program. The students requested more opportunities to collaborate and more time to solve each problem, so we will modify the program to allow for both. And to enhance student agency, we will use last year’s student work as exemplars for this year’s students.
By implementing action research, we are engaging in evidence-based education and ultimately, becoming more responsive teachers. What evidence could you collect this year to inform your teaching? It doesn’t have to be big; it can be as simple as a quick check-in during a lesson. The important thing is that you respond to what you find.
Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. 2004. Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, Vol. 86, 1, 8-21.
Brookhart, S. 2008. How to give effective feedback to your students. Alexandria, VA: Association for Supervision and Curriculum Development (ASCD).
Hattie, J. 2012. Know thy impact. Educational Leadership, September 2012, 18-23.
Polya, G. 1945. How to solve it: A new aspect of mathematical method. Princeton, NJ: Princeton University Press.
Wiggins, G. 2012. 7 Keys to effective feedback. Educational Leadership, September 2012, 11-16.
Wiliam, D. 2011. Embedded formative assessment. Bloomington, IN: Solution Tree Press.
Wiliam, D. 2018. Embedding formative assessment. Lecture conducted for the BC School Superintendents Association in Vancouver, BC, Canada.
Ann Walters (B.Sc., B.Ed., MA) has spent the last three years as the PYP Mathematics Coordinator at Mulgrave School, Canada. Her goal has always been to improve the teaching and learning of mathematics by teaching through inquiry and including authentic, transdisciplinary, problem-based activities. In her free time, she writes for her own blog. You can follow her on Twitter @seeanngo33.