Why Education Research Is Failing Us: Begley – Sharon Begley – Newsweek.com

Approximate Reading Time: 3 minutes

Synopsis:

This article reports on a meta-study comparing inquiry-based methods against a more traditional approach. What it found:

“There is a dearth of carefully crafted, quantitative studies on what works,” says William Cobern of Western Michigan University. “It’s a crazy situation.”

I tend to agree with the conclusions, but I am concerned that we are not going to make any progress by simply redoubling our efforts. I posted this on my Facebook page, and this is part of the conversation that resulted. I found it interesting. Maybe you will too.

Michael Barbour suggested that:

This article is a crock – as it continues the myth of the double-blind, quasi-experimental model as the gold standard. Unfortunately educational research has often been driven by what will be funded or, in the case of unfunded research, what is easy to accomplish. In both instances this has resulted in poor research – and as long as the method of medical research is used as the measure of what we consider good or what we consider as working (as evidenced by the “What Works Clearinghouse” – another laughable initiative), educational research will get no better.

What folks won’t tell you is that the double-blind, quasi-experimental model isn’t truly blind. Real medications have side effects; sugar pills don’t. Real medications often have scents or textures that placebos lack, to the point that in most instances those administering the treatments know whether a patient is getting the medication or the placebo.

Let’s also not forget that most medications work with the body, and under randomization most differences between bodies wash out. This is not the case with educational research: random assignment gives the treatment and control groups an equal chance of ending up with the higher percentage of free or reduced-lunch students, but it does not guarantee that the groups are balanced. And any noticeable difference in the percentage of this population between your two groups will yield widely differing results, regardless of the instructional intervention.
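The balance point above is easy to check with a quick simulation. This sketch is mine, not from the conversation, and the class size (30) and free/reduced-lunch rate (40%) are illustrative assumptions: it re-randomizes the same class many times and counts how often the two groups end up noticeably imbalanced purely by chance.

```python
import random

random.seed(42)

def assign_groups(n_students=30, frl_rate=0.4):
    """Randomly split a class into treatment and control, then report
    the share of free/reduced-lunch (FRL) students in each half.
    Both n_students and frl_rate are illustrative assumptions."""
    students = [random.random() < frl_rate for _ in range(n_students)]
    random.shuffle(students)
    half = n_students // 2
    treatment, control = students[:half], students[half:]
    return sum(treatment) / len(treatment), sum(control) / len(control)

# Over many re-randomizations, see how often the FRL shares of the two
# groups differ substantially even though assignment was purely random.
imbalances = [abs(t - c) for t, c in (assign_groups() for _ in range(1000))]
big_gaps = sum(gap >= 0.20 for gap in imbalances)  # gaps of 20+ points
print(f"{big_gaps / 1000:.0%} of random splits differ by 20+ percentage points")
```

With groups this small, a sizeable fraction of random splits leaves one group with a markedly higher FRL share than the other, which is exactly the kind of imbalance that can swamp an instructional effect.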

This is why many folks have begun to argue that design-based research (also called developmental research) is the direction we should be heading. The problem is that no one will fund a study that is designed to address local situations, and not designed to be generalizable.

My thoughts:

There really is no “science” in educational research, nor should there be. To do scientific studies we need to be able to actually control the variables, and we can’t in education. There are just too many of them. Also, statistics require volume. There are few studies where N is large enough to warrant statistical analysis, but they do them anyway and the results get used as though they have some validity.
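The small-N worry can be made concrete with another quick simulation of my own (the group size of 15 and the true effect of 0.3 standard deviations are illustrative assumptions): replicate the same small two-group study many times and watch how unstable the observed effect is.

```python
import random
import statistics

random.seed(1)

def one_study(n_per_group=15, true_effect=0.3):
    """Simulate one small study: control scores are drawn from N(0, 1),
    treatment scores from N(true_effect, 1); return the observed mean
    difference. Group size and effect size are illustrative assumptions."""
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n_per_group)]
    return statistics.mean(treatment) - statistics.mean(control)

# Replicate the same small study many times: the observed "effect"
# swings widely around the true 0.3, and often has the wrong sign.
diffs = [one_study() for _ in range(2000)]
wrong_sign = sum(d < 0 for d in diffs) / len(diffs)
print(f"observed effects range roughly {min(diffs):.2f} to {max(diffs):.2f}")
print(f"{wrong_sign:.0%} of studies show the effect pointing the wrong way")
```

Any single study of this size can easily report an effect twice the true one, or an effect in the opposite direction, which is why small-N results treated as definitive are so misleading.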

Academia and formal education have subscribed to the notion that anything “scientific” is better than anything that isn’t, so we bend and stretch the notion of the scientific process to the point that it becomes meaningless. Adding the word “Science” to something doesn’t make it so. Very few things are actually sciences. Social Science is NOT a science. Nor is computer science (or math), for that matter.

So long as we keep pretending that we are doing ‘science’ in Ed Research, we will not make any real progress.

To which Michael replied:

But is the double-blind, quasi-experimental model really even as good as everyone claims it to be? I specifically try to avoid saying the science model, as I see no problem with the science model (i.e., funded research into undirected avenues, hoping that something might lead to a breakthrough about something else). What is being valued here is the medical model (i.e., double-blind, quasi-experimental) and that model is fundamentally flawed.

My thoughts on that:

No model is perfect but it is a reasonable model for certain kinds of studies, including medical ones. The problem with this kind of study lies with the people administering it. Drug tests are run by big drug companies with a vested interest in a particular outcome.

Educational research shares many of the same problems as media-effects research: assumptions are made about which factors influence the outcome, and how they are related, long before the study even begins. This lets researchers ignore things they don’t find interesting and find the results they want.

Most of the researchers have no formal training as scientists. Having come to formal education for my terminal degree (my first two are in CS), I have noticed that there is often a lack of respect running in both directions.

People formally trained in education realize that learning to teach takes work, while many academics outside of Education have little respect for the discipline. Similarly, people formally trained in science realize that learning to do science takes work, yet many academics outside of Science believe they can do science on anything. Many educational researchers fall into this category.

This is Cargo Cult Science, and a lot of people do it, including lots of scientists. That doesn’t make it right, or useful. (Here is an excerpt from Feynman’s speech: http://tinyurl.com/3yh92pn)

If we’re going to do science, then let’s do REAL science. (I’m not sure that’s possible in social contexts, but I’m willing to be proven wrong.)
