In much of the discourse on third-level education, it seems to be assumed that the dominant form of formal teaching is the lecture. But anyone who embarks on a science or engineering degree will spend just as much time in laboratories. Personally, I find laboratory modules to be a fantastic way not only to teach students but also to understand them. For example, when you observe a group of students taking 40 minutes to do a calculation that should have taken them less than five, you get a good sense of the level of their computational skills.
In management theory, there is a concept called ‘management by walking around’ (MBWA). It came to prominence in 1982 with the publication of the famous book In Search of Excellence. I’m no management guru, so I don’t know what the current thinking is on the MBWA concept, but there is no doubt that TBWA, i.e., ‘teaching by walking around’, can be a very effective way to teach.
I thought about this idea again recently because I’m trying to put together a paper (on some laboratory experiments I’ve designed) for an engineering education journal. Having taught on a multidisciplinary programme for many years, I have a backlog of teaching-related material that is genuinely novel (I think!), and some of it might be worth sharing with the community. But one of the hurdles you encounter when trying to get education-related papers published is that you need to present some sort of evaluation of whatever novel approach you are describing.

This, I believe, presents a real difficulty, not just for me as someone trying to get work published, but for the teaching innovation field in general. Many education-related papers I read incorporate an evaluation process that involves little more than a student survey. Inevitably, it is found that students ‘engage’ more and report improvements in their understanding of the material. On occasion, grades are shown to improve. But most of these evaluations raise as many questions as they answer, and while I am an amateur in this kind of research (as are most of us who dabble in education research), I always get the sense that there is an element of confirmation bias at work. There is so much that is uncontrolled in studies like these that I tend to see them as describing new approaches I might try out rather than as methods that are scientifically proven to work. When you are dealing with groups of human beings, it is very difficult to be scientific, and as educators we have to rely a lot on our experience and our instinct.
My approach to evaluation is simply to be reflective about what I’m doing. And when it comes to laboratory-based initiatives, I like to evaluate by walking around (EBWA). I watch the students, listen to them and interact with them throughout the day, getting an intuitive sense of whether the new experiment is working or not. If I think it’s not, I get rid of it and come up with another. Whether that will be good enough for the reviewers of my next paper remains to be seen.