On Repeatability and Educational Research

Approximate Reading Time: 4 minutes

[Image: “New Cloning Machine” by KepowOb]

Came across this a while back (OK, it was nearly a year ago), which sparked a rather strong reaction from a fellow educationalist. I don’t want to make it personal, so I’m just going to relay the exchange, as I think it is representative of the views of a lot of education faculty.

Almost no education research is replicated, new article shows @insidehighered

The word “replication” has, of late, set many a psychologist’s teeth on edge. Experimental psychology is weathering a credibility crisis, with a flurry of fraud allegations and retracted papers. Marc Hauser, an evolutionary psychologist at Harvard University, left academe amid charges of scientific misconduct. Daniel Kahneman, a Nobel-Prize-winning psychologist at Princeton University, entered the fray in 2012 with a sharply worded email to his colleagues studying social priming. He warned of a “train wreck looming” that researchers would avoid only if they focused more diligently on replicating findings. And the journal Social Psychology devoted its most recent issue to replication – and failed to replicate a number of high-profile findings in social psychology.
Yet psychologists are not the worst offenders when it comes to replication, it turns out. That distinction might belong to education researchers, according to an article published today in the journal Educational Researcher.

I said that I found it disturbing to think that most of what we claim is known in education comes from studies that are never replicated.

The response was that this shows I have only a surface-level understanding of the purposes and methodologies of educational research. Scientific research is, I was told, too limited. Further, research in education has different purposes and goals. She then went on to list quite a number of methodologies that are commonly employed in educational research.

I admit I’m somewhat offended at the accusation of a “surface-level understanding”, but OK, I’ll bite. You described the classic scientific study quite well, but of course only a very small part of science research actually comes from studies conducted using this classic model. A great deal of research in science is also done using what you call applied research. Those studies have to be corroborated as well, and my take on the article was that this is what they are comparing.

Her position was that applying natural science research methodologies and purposes to social science disciplines is a mismatch: the goal of educational research is improved practice and sometimes also contributions to theory.

The goal of improved practice is an important and worthy one, but it is nearly impossible to measure, so almost anyone can claim “improved practice” if they know how to write it up. I have often seen conclusions drawn from a handful of case studies, or “statistics” calculated from studies involving 10 or 20 responses, which then get used as justification for change.
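
To make this concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, not data from any actual study) showing just how wide the uncertainty is around a percentage estimated from 15 responses:

```python
# A minimal sketch, using hypothetical numbers, of why "statistics" from
# 10 or 20 responses support very little: the 95% confidence interval
# around a proportion is enormous at that sample size.
import math

def approx_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

for n in (15, 100, 1000):
    # Suppose 60% of respondents report "improved practice" at each size.
    lo, hi = approx_ci(round(0.6 * n), n)
    print(f"n={n:4d}: 60% positive -> 95% CI roughly {lo:.0%} to {hi:.0%}")
```

At n = 15 the interval runs from roughly 35% to 85%, which is consistent with almost any conclusion; it takes hundreds of responses before the error bars get usefully narrow.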

Having been in various research worlds – including pure and natural science as well as social science – I am well aware of the differences.

The article states that “education journals routinely prize studies that yield novel and exciting results over studies that corroborate – or disconfirm – previous findings.” Conducting replications, the researchers write, “is largely viewed in the social science research community as lacking prestige, originality, or excitement.” The article is well worth a read, and it should prompt a long-overdue conversation. I think dismissing it as a failure to understand the difference between science research and educational research only perpetuates the problem.

A fellow scientist said: Science has issues with complexity – it can’t deal with it very well. Social science is not the same as science, nor are the humanities or the arts, but some of the methods used in science can be applied in some cases. So, while holding all variables but one constant in an experiment (ceteris paribus) is a nice idea in theory, it cannot always be done.

However, what *can* be done is to repeat experiments and studies to ensure that the effect being observed is not an artifact of the researcher, the location, the selection of subjects, or some other spurious variable. This is essential. 
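
To see why, here is a toy simulation of my own (not from the article): a thousand small two-group studies of an effect that does not exist. Roughly one in twenty comes out “significant” purely by chance, and a single replication attempt is usually enough to expose the flukes:

```python
# A toy simulation (my own illustration): 1,000 small two-group studies
# of a non-existent effect. About 5% look "significant" by chance, and
# those spurious hits rarely survive one replication attempt.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 15  # the kind of sample size discussed above

def one_study():
    a = rng.normal(0.0, 1.0, n_per_group)  # control group scores
    b = rng.normal(0.0, 1.0, n_per_group)  # "treatment" group, same distribution
    return stats.ttest_ind(a, b).pvalue

first_round = np.array([one_study() for _ in range(1000)])
hits = first_round < 0.05  # the "novel and exciting" findings, all spurious
replications = np.array([one_study() for _ in range(int(hits.sum()))])

print(f"significant in the first study:   {hits.mean():.1%}")
print(f"still significant on replication: {(replications < 0.05).mean():.1%}")
```

The exact numbers don’t matter; the point is that a fluke result replicates only about one time in twenty, so repeating a study is what separates real effects from artifacts.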

Another colleague also chimed in: In my experience as an associate editor of an education journal (and as a program chair for education conferences), reviewers definitely tend to downgrade their rankings of the rare papers that replicate or adapt a previous study or survey instrument.

I’m certainly not trying to suggest that educational research needs to adhere to the “gold standard” of scientific experimentation. That would be silly. It would also be impossible in a social context. However, educational research could really use some additional rigor. While working on my upcoming book, I have looked at quite a lot of educational research on the use of games for learning. According to a 2011 review of the state of game-based learning, many early studies were flawed and of limited use, though more recent studies pay close attention to their design as well as to the kinds of games they choose to study (Felicia & Egenfeldt-Nielsen, 2011), and that’s only going back 10 years. Games for learning is an area of educational research that receives quite a lot of scrutiny; I can only imagine what is happening in fields that don’t get this kind of attention.

I would also add that educational research is not the only field where it is hard to publish a replication study; it just happens to be the field under the microscope in the article that started all this.

—–

Felicia, P., & Egenfeldt-Nielsen, S. (2011). Game-based learning: A review of the state of the art. In S. Egenfeldt-Nielsen, B. Meyer, & B. H. Sørensen (Eds.), Serious games in education: A global perspective (pp. 21–46). Aarhus, Denmark: Aarhus University Press.
