The following study concluded that “the online format had a significantly negative relationship with both course persistence and course grade, indicating that the typical student had difficulty adapting to online courses.”

It further found that “males, Black students, and students with lower levels of academic preparation experienced significantly stronger negative coefficients for online learning compared with their counterparts, in terms of both course persistence and course grade.”

And, if that weren't enough, it also found that "the relative effects of online learning varied across academic subject areas ... two academic subject areas appeared intrinsically more difficult for students in the online context: the social sciences (which include anthropology, philosophy, and psychology) and the applied professions (which include business, law, and nursing)."

Social sciences are "intrinsically" more difficult! Oh, but there is one small detail: this study was based on, get this, "traditional online courses," which typically have about 25 students and are run by professors who often have little interaction with students. Ah, professors have little interaction with students. That changes everything. Everything. Why, oh why, would anybody base a study on such classes? What is the point? It's like specifically focusing a study on bad face to face classes just to see what outcomes we get! I just don't see the point. What do you think, +Laura Gibbs and +Meg Tufano? What is your take on this, +Larry press?
 
Columbia University study slams "traditional online classes" -- we need to move beyond traditional
http://bit.ly/URnHKA

I posted a review of a Columbia University study of the efficacy of online classes.  The study concluded that “the online format had a significantly negative relationship with both course persistence and course grade, indicating that the typical student had difficulty adapting to online courses.”

Online results were poor across the board, and poorer for some students and courses than others.

They also suggest policies for coping with these problems.
 
Well, you know I do NOT agree with the idea that there is no point in paying attention to how to do smaller online classes right, and I certainly do NOT think that the only thing worth our attention is how to design large classes (500, as Larry suggests here) or massive ones.
IT IS AFFORDABLE if we are teaching 25 students in a class. We need excellent small classes, excellent medium classes, and excellent large classes - they all have a role to play. Details here:
https://plus.google.com/111474406259561102151/posts/creJvU3WXrc
 
+Vahid Masrour I worry about any sweeping comparisons of any sort - it's like saying "Chinese food is as good as French food" (or better than or worse than) ... as if there were a single thing called "Chinese food" that you could compare to a single thing called "French food" with everyone agreeing what "good" would be...
 
+Laura Gibbs since I am (for the most part) French, I must agree with you :D

However, because I expect great things to come out of the competency-based approach (at least for the most technical skills), I think that some measurements can and should be made. I'm not naive enough to think that online and F2F will always be equivalent, but since we (educators) want to achieve learning (we'll discuss the meaning of that word some other day), we should try to figure out whether both approaches can be equivalent, when, and for what types of achievement.
 
OH YES, absolutely - I am a big fan of competency-based measures. That will clarify the discussion enormously! I fear that is a long way off, though, because that is going to be a real challenge (even a threat) to current classroom-based teaching, far more so than online learning has been. I expect a lot of opposition.
 
To carry on the food metaphor: we need nutrition information for ALL our classes.
My courses have lots of storytelling and grammar nutrients. And technology nutrients too! :-)
 
+Laura Gibbs thanks for pointing me to the discussion you initiated about the 10k degree. The possibilities of personalizing higher education, bringing down expenses, and raising academic quality are truly fantastic. One thing is for sure, though: any alternative program like the ones mentioned in that other discussion will rely on online courses. If we believe the results of the study I commented on above, this whole scheme falls apart. So, quality online classes will be front and center. If that can be done massively, as you propose certain courses can be, that is fine. Studies like these surely make people who are less familiar with online learning run away from it at top speed, and push back the discussion about how to move higher education forward.

+Vahid Masrour thanks for sharing that talk by +George Siemens, very interesting, and much more believable than the results of this study!
 
+Justin Scoggin I don't want to generalize about "online courses" as if they were a monolithic entity (a "thing" you can study - like I said about Chinese food - there are all kinds of Chinese food, good, bad, and everything in between)... but one generalization I will make is that online learning is still something very new in the history of teaching and learning, and EXPERIENCE, both on the part of students and on the part of faculty, is often severely lacking, which can result in setbacks for the teacher and for the student. But instead of blaming the online medium, I think we just need to be honest about our lack of experience and do everything we can to learn from our mistakes so that we are continually improving what we do.

Since it is something new, I am guessing we have a LOT of room for improvement, and when I compare the quality of the online courses I teach now to what I was doing 10 years ago, I can see a HUGE improvement. I would contend that my online courses even of 10 years ago were as good as a classroom class (I would not have had the incentive to carry on if they had not been as good)... but now, not only are they as good as a classroom class, they are SUPERIOR to many classroom classes. Why? Because I am very aware of the weakness of classroom classes when it comes to teaching in my discipline (it was dissatisfaction with the severe limitations of the classroom that led me to go online to begin with), AND because I am a resolute experimenter in my work online, constantly trying new things, keeping only the best and setting aside what does not work.

I am honestly not that interested in the online courses that are being taught right now by people who have no experience - I guess that is a harsh thing to say, but that's how I feel. All these Coursera instructors who have never taught an online course before: OMG, that seems laughable to me. I want to hear from people who have been doing this for four or five or six years, so that they have had time to really improve the quality of their classes. There is still lots I know I can learn, and I am glad to keep on learning and keep on experimenting, and I am very glad to have the online space as a place to share what I have learned with others.
 
+Laura Gibbs 
The study concludes that we are not teaching excellent small, online classes at the JC level -- they found that face-face classes were better than online classes across the board, and the gap grew for certain students and subjects.  That is not to say you and many others are not teaching excellent online classes, only to say that they found the typical online class to be less effective than the typical face-face class.  We should not compare an excellent online class to an average face-face class.

I didn't mean to say there was no point in paying attention to how to do smaller classes online, but that MOOCs are giving us an opportunity to develop new technologies and practices rather than trying to get by with a minimal re-purposing of what we have been doing for years in face-face classes.  
 
It's that phrase "across the board" which simply cannot be true: there is no way you can generalize for ALL teachers, ALL students, ALL subjects - if you want to say "on average," well I guess you can do that, but even there I am not sure how helpful that is - "Chinese food is, on average, better than French food...?" Moreover, education is not composed of replicating averages but of every teacher and every student making choices every day. Whether they make good or bad choices is the product of many factors, and is certainly not just determined by the medium of instruction.
 
+Laura Gibbs 
The study did not conclude that every online class performs worse than every face-face class, but that for every student category they studied and for every topic they studied, the average FF grade and completion rate were better than those of the average online class.

They found differences among different subgroups -- bad students, old students, black students, social science classes, business classes, etc. -- but for every subgroup, FF did better than online.

As I said in my comment, there are excellent, small online classes, just as there are excellent, small FF classes.  There are also crappy online and FF classes.  Can we agree on that?
 
+Larry press Sure, we can agree on that - but we don't need a study to tell us that there is variety. We knew that already. But in your blog post, you provide this summary of the study: "Here's the bottom line -- the study concluded that 'the online format had a significantly negative relationship with both course persistence and course grade, indicating that the typical student had difficulty adapting to online courses.'"
Essentializing "THE online format" and "online courses" like that does not do anyone any good at all in my opinion (and I have my doubts about "the typical student" too of course). That is my opinion, based on the reasons I have explained above, and yes, I understand that you do not agree, since that is the pull quote you chose for your blog post.
There was nothing qualitative about the study - all they looked at were course titles and grades/completion, right? Have I misunderstood the study? They analyzed GRADE TRANSCRIPTS. They actually know NOTHING about what went on in a given class. Without a qualitative dimension involving actual assessment of the course design and content, this study says nothing of any use to me as an online instructor.
 
+Laura Gibbs 
I tried to convey what they found using their methodology for the data they looked at. 

As you say, they did not concern themselves with what went on in a given class -- they were doing large sample research, not case studies.  

An interesting follow up would be for them (or you) to look at the instances in which online did much better than FF and drill down on those cases -- try to see what those folks were doing differently.
 
+Larry press To me, the complete lack of a qualitative dimension (they could have at least reviewed the syllabi of courses for some basic parameters) means that the statistical results are meaningless. By adding in some qualitative dimension, they could have truly had something useful to say - but without that, it seems to me these results are totally useless. How would you know whether to expect similar results for any actual online class, since you have zero information on which to assess the contribution made by the class itself...?
They might as well have compared the efficacy of classes offered on MWF as opposed to TTh classes. Or classes taught by men as opposed to classes taught by women. Or classes sorted based on the cost of the textbook.
Sure, you would get data. You might even get what look like significant differences.
But the data would provide you no real help of any kind in designing a course or designing a curriculum.
 
+Larry press and +Laura Gibbs they CERTAINLY concerned themselves with what went on in class by pre-selecting online classes with low levels of tutor-student interaction! For this to be a valid study, they needed to compare those with classroom classes (as Laura calls them) with similarly low levels of tutor-student interaction. That makes sense. Otherwise they are comparing apples with oranges.

Why, I ask, didn't they look at all types of online and classroom classes and then break down the data into classes with lower levels of interaction and others with higher levels of interaction, and compare those with their counterparts? How do online classes with lots of quality interaction compare with classroom classes with lots of quality interaction? That seems to be the question, and I bet the results would be substantially different, as Laura points out.
 
If someone does a study about online education, and the G+ online education community starts saying the study is useless, we are doing ourselves a disservice. We should make our opinions heard, and make recommendations on how to design further studies, but we should not discredit a study altogether just because we don't agree with some of the premises. No study is perfect. Until there's a better study, well, that's the best body of information we have.
On the other hand, some policymakers may use this study to attack online education in general. The limitations of the study should be clear at that moment.
Notwithstanding, I think the study has some valuable information. It is important to know that some groups are doing better than others on these online courses.
 
Furthermore (now I am nearly hoppin' on one foot), why specifically pre-select courses with low levels of tutor-student interaction? My experience shows that it is precisely the level of meaningful interaction that determines a whole variety of factors useful in determining student performance, including grades. I pored over data from my online classes given over 5 years and invented what I call a "Consistency Index" that measured student activity on the platform, and then I correlated that to grades. Here is what I used, and I quote from the paper I wrote about this, but, alas, never got published:

https://docs.google.com/document/d/1ySDaknB7oRN-3C5DbqlQ9p0hmMLZ1uEJJ4_b5EDzQ6U/edit?usp=sharing

For this study, the following formula was devised using data taken from the Moodle log file to determine on-line presence.

(h/w)x(d/td) = Consistency Index
In which:
h = total hits (logs, clicks) during the course
w = total weeks in the course
d = total days trainee logged in during the course
td = total days in the course

Here are the results: The Pearson product-moment correlation coefficient comparing the Consistency Index with module grades is r = 0.44. The following scatter diagram further illustrates the relationship between these two variables.

Which means the following: Figure 1 indicates how a high consistency index corresponds to high grades, while a low consistency index corresponds to low grades, in equal proportion.

This all means that the more active students were on the platform, the better their grades were. Why were students active on the platform? To interact. With the teacher and with their peers.

So, excluding this factor seems very damaging to the study. Further, why not look at online tutor experience? How about class size? How about quality of instructional design? How about assessment methods? There is no reason to exclude these factors from a study, since they are all recorded on the platform. No need to send observers into classrooms.
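
To make that concrete, here is a minimal sketch of how the Consistency Index above and its correlation with grades can be computed from a per-student export of Moodle log data. The column names and toy numbers are illustrative assumptions, not the data from my study.

import pandas as pd
from scipy.stats import pearsonr

# One row per student for a single course offering (toy numbers)
students = pd.DataFrame({
    "hits":        [420, 150, 610, 80, 300],   # h: total hits (logs, clicks) during the course
    "weeks":       [16, 16, 16, 16, 16],       # w: total weeks in the course
    "days_active": [70, 25, 90, 12, 55],       # d: total days the student logged in
    "days_total":  [112, 112, 112, 112, 112],  # td: total days in the course
    "grade":       [88, 64, 93, 51, 78],       # final module grade
})

# Consistency Index = (h / w) * (d / td)
students["consistency_index"] = (
    (students["hits"] / students["weeks"])
    * (students["days_active"] / students["days_total"])
)

# Pearson product-moment correlation between the index and grades
r, p = pearsonr(students["consistency_index"], students["grade"])
print(f"r = {r:.2f} (p = {p:.3f})")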
 
+Bernardo Trejos Some did better, some did worse - but can you learn anything from the patterns they demonstrate? Tell me what you, as a teacher, can get out of this study and/or what you as a (hypothetical) administrator can get out of this study. In their recommendations, they actually say that schools should re-allocate resources based on the results of this study. Really? I think that would be negligent in the extreme; these findings are not a basis on which to reallocate resources surely. So, please tell me what you got that is useful out of the study. I am ready to listen (it might redeem the time I spent reading the darn thing to find out if ANYWHERE in there they have something useful to me... I found nothing).
 
+Justin Scoggin Agreed - I think they did what they did with the data available, which was... transcripts. If that is the only data you work with, well, I just don't see how you can really expect to get anything useful out of it - but you can spend a lot of time cranking through the data, because transcripts are full of data. But it is not sufficient data from which to extract conclusions that are useful either to students or teachers or administrators.
 
+Laura Gibbs Yes, there are a lot of useless research papers. I agree. This is not one of them.
First, they did take the time to build a large dataset. Many studies are criticized for just the opposite, that is, being case studies on small populations.
Second, it's done on technical and community colleges, not Ivy League universities or MOOCs. That should give us information we didn't have before. It gives us a better picture of struggling college students.
Third, now we know who struggles more with these courses, and we can focus our attention on helping them. I'm not sure it can help your specific courses, but it should help a technical college make more informed decisions. Should a community college dive headlong into online education? Maybe not. There are still many things to sort out for that to be feasible and recommended.
 
+Bernardo Trejos Again, just because it is data does not mean that it would help you to make those decisions because it tells us NOTHING about those online courses - literally, nothing. If you are going to dive headlong into online education, you realize the first question is what KIND of online courses... and the study tells you nothing about that. Which renders the data useless. It is a big data set - but all they did was get permission from the colleges to use the transcripts and to link up data from the transcripts with some student demographic data. They actually collected nothing (if I understand the study correctly) but instead did what sleight-of-hand they could with the existing data.
So, I am still waiting to find out how you, as a hypothetical community college administrator, would use this data to make decisions about an individual course, or about a departmental program, or about a software investment, or about a faculty hire, or about student admissions. ANYTHING. But I can think of nothing you can actually use this data for.
 
+Laura Gibbs What kind of online courses? The ones that have been taught. Subjects are on Table 2.

Decisions on individual courses
General data should be used to make general decisions, while specific data should be used for making decisions on specific courses. Why would you want to use this study to make decisions about an individual course?

Decisions about departmental programs
I think many departments will soon feel the pressure to move a substantial part of their programs online. Is it recommended? Maybe not. At least not yet. Evidence? Look at this study.

Decisions about software investment
The study is not about software. You need another study for that.

Decisions about faculty hire
The study is not about this.

Decisions about student admissions
The study is not about this.
 
+Bernardo Trejos I have an informed decision for a technical college: design your online classes well and hire experienced, trained, and caring online teachers. I guarantee that completion rates, grades, and satisfaction rates among students will increase this way, perhaps significantly.

With this condition, a community college definitely should dive headlong into online education. Otherwise, as the study indicates, why bother? Maybe this is something useful in this study, +Laura Gibbs: online classes with low levels of teacher-student interaction will generate low grades, especially among certain groups. I am not sure we needed a study to tell us that, but now that we have one, all the better.
 
+Bernardo Trejos There is no evidence here about whether or not departments should adopt online learning. The only thing the study tells you is that CAUTION is required. Uh... we needed a study for that?
So, I am still waiting for you to tell me one single decision that is better informed by looking at the results of these studies.
There are lots of questions to ask about online - the list I provided is just a few of the most obvious questions that every single school is asking. This study provides no help in answering those questions, nor in any other questions - unless you can say what those other questions are...?
 
+Justin Scoggin Great point. The truth is that we don't need that evidence. We need the evidence to prove our point in front of other people.
 
+Laura Gibbs  Yes, we need evidence that caution is required. This is important when online educational product salespeople come knocking on community college doors. Decision makers at these colleges may not know as much about online education as you do.
 
+Bernardo Trejos that is exactly my problem with this study. If it had studied all kinds of online and classroom courses and then compared the highly interactive ones from both delivery methods, and the less interactive ones from both delivery methods, we would have gotten just such evidence for all the world to see.

So, why, oh why, didn't the authors choose to do that?
 
+Bernardo Trejos Caution is required in LIFE. And if EVERYBODY doesn't know that already, a tedious study like this is not going to make a difference. Moreover, it does not do anything to help administrators sort out the specific claims of specific salespeople in any way, shape or form - not the people selling online, not the textbook salesmen, not the people who want to sell you classroom equipment. So, I am still waiting to find out what is useful here...? It does NOT help to sort out the claims of salespeople.
 
+Laura Gibbs I bet the study was done in a short time, so it was not tedious to make. Did you mean tedious to read? It's not fine literature, but that's not the objective. It's 26 pages long, and you can scan through it pretty easily to find what you need.
I think it will be useful when the MOOC-cum-textbook providers arrive, but at this point it may be better to agree to disagree and move on.
 
+Bernardo Trejos the study's authors surely had access to the data from the platform on which these courses were delivered. Using a consistency index built from elements like total hits (logs, clicks) during the course, total weeks in the course, total days students logged in during the course, and total days in the course to measure student activity would be helpful. Although they are all "traditional online courses," I imagine that some were more traditional than others. So, classifying them into "more" and "less" traditional would provide some sort of distinction between the two regarding grades, satisfaction rates, and completion rates. I can't guarantee these results would be any more interesting or useful than the original, but it might be worth a try.
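
A rough sketch of the kind of re-analysis I mean, assuming per-course activity data could be exported from the platform; the file name, column names, and the median split are all hypothetical.

import pandas as pd

courses = pd.read_csv("course_level_data.csv")
# expected (hypothetical) columns: course_id, delivery ("online" / "f2f"),
# mean_consistency_index, mean_grade, completion_rate

# Split courses into higher- and lower-interaction groups within each delivery mode
courses["interaction_level"] = (
    courses.groupby("delivery")["mean_consistency_index"]
           .transform(lambda s: s >= s.median())
           .map({True: "higher", False: "lower"})
)

# Compare average outcomes across the four groups
summary = (
    courses.groupby(["delivery", "interaction_level"])[["mean_grade", "completion_rate"]]
           .mean()
)
print(summary)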
 
I think that's the point, +Bernardo Trejos - they started with abundant but superficial data, so they reached sweeping but superficial conclusions. They used transcripts and student demographic data.
 
+Bernardo Trejos let's say some salespeople visit a decision maker who has read this study and knows to exercise caution. Ok, that is good. But, then what? What should the decision maker do at that point to gather the necessary elements to make an informed decision about an online program? This study provides no useful further steps.

What a decision maker would have to do at that point is to call somebody who is more informed than him/her about the matter: consult an expert or a few. So, I think Laura's point is why not use a good study to give these elements to decision makers? Why make them take the further step to look for an expert when a study should take into account what the experts have to say about this matter?
 
+Bernardo Trejos I don't see how it will be useful in any way when asking questions of MOOC providers or textbook salesmen. If you think of any useful questions in future, you can let me know. I've read your comments carefully and have not seen mention of any single actual question that you think this study helps to answer.
+Justin Scoggin I'm not even sure they had access to that much information (no information was provided re: platforms that I recall - and at least at my school, the data analytics for our CMS are surprisingly poor because we have not bought their additional "analytics" service, which is not part of the standard package) - although I was surprised that they did not include any evaluation data. Schools usually do some kind of student evaluation for courses which could have been folded into the study. I'm not sure how useful that would have been, but it might have helped to try to tease out how much instructor quality (as perceived by students) was an important factor - but if it's anything like at my school, the evaluation questions are designed with face-to-face classes in mind, and some of the questions don't even make sense when applied to online classes, but we use the same form for all classes.
 
+Bernardo Trejos By tedious, I mean I read the whole thing and I took nothing away from it. It answers no meaningful questions; it makes no useful recommendations.
 
+Laura Gibbs I agree, they used the wrong data sources. If you are going to publish an article about online course effectiveness, well then, buy the additional CMS data analytics service! For my TEFL study, I used the meager information available on the Moodle and student evaluations that I designed specifically for these courses, so they did give me lots of useful information. I took the time to mine that data, and I think the authors of this study should have as well.
 
I'm looking forward to reading your paper, +Justin Scoggin ! Every day I try to pick out something inspiring from the day before to read with my coffee. :-)
 
+Justin Scoggin 
What should the decision maker do at that point to gather the necessary elements to make an informed decision about an online program?

A community college should not fire a bunch of face-to-face instructors and use xMOOC-type textbook/online course material providers for courses taken by at-risk students. These materials work best for students that are not struggling. That's all. The study is far from perfect. 

Why not use a good study to give these elements to decision makers?

No study is perfect. Better studies are possible, but are they available?

Why make them take the further step to look for an expert when a study should take into account what the experts have to say about this matter?

Studies are only for reference. The study will not tell you what you should do.
 
+Bernardo Trejos Why do you think this study even has anything to do with xMOOC-type courses...? The courses in the study were ones that students enrolled in from 2004-2009.
From my experience, online courses CAN be a better choice for students who are struggling ... for example, my courses are a great choice for students who are struggling with their writing because they receive more individual attention online than would be possible in a face-to-face course. All you have to do is decide that this is the kind of online course you want, or not.
 
+Laura Gibbs The study has nothing to do with xMOOC-type courses.
Yes, online courses with a caring instructor can help a lot. By contrast, evidence indicates that canned online courses do not help struggling students.
 
+Bernardo Trejos Unfortunately, this study provides no evidence about any of those important distinctions. If anyone uses this study to justify the use of anything, I don't see on what basis they do so - there is no way of knowing what resemblance, if any, someone else's "online course(s)" would bear to the ultra-generic, unidentifiable (and therefore meaningless) "online courses" included in this study.
Were the courses in this study "canned" as you call them? Who knows? The study does not say.
Were the instructors caring, as you put it, or not?
Who knows? The study does not say.
 
+Laura Gibbs  I don't think the courses in the study were "canned". That's what the future holds.

A caring instructor should give extra help to disadvantaged students. Nonetheless, we know from the study that disadvantaged students are affected negatively by current online technical college education. This is a general tendency that does not include your courses. 
 
+Bernardo Trejos We have now completely left the study behind - the ramifications that you are drawing from the study (xMOOCs, canned courses, caring instructors) actually have no basis in the study itself. I agree that discussion of online learning is important; the study provides no useful contribution to that discussion that I can find, which was my original point.
 
Yes, we drifted from a scientific discussion to a political discussion. But that's because of the questions asked and the challenges raised.
 
2004-2009... online courses... First of all, when did we get true broadband proliferation? Which means the demographic of the students they studied may be those with less access than others, perhaps having to go to the library with minimal time to engage... tools were less available... and instructor know-how??? Again, as I said on +George Station 's post, what the heck do they even mean by an online class? How can you make such a generalization? Every one looks and feels so different: podcasts, video, mindmaps, text, adaptive learning, blogs, wikis, tools, collaboration, Diigo, what?? Apples to apples! And the assessments? A grade, a subjective grade? Or a number based on what? Not exactly dissertation quality here. Courses online should be just like f2f courses: it's about pedagogy and content, and every one is different, never mind online or f2f. Except until recently, everyone but +Laura Gibbs (just about) thought an online course was read some text and post on message boards. Also - realize - NYC schools had a joint effort during those years (it ended just shy of 2009) - I am blanking on the name - lost tens of millions. Bad taste in everyone's mouth. I'm sort of ashamed of this study, but it didn't come from Teachers College.
 
+Bernardo Trejos No, there is nothing political about pointing out the weaknesses of a scientific study. Left, right, whatever - my complaints against the study are scientific, not political. It is ill-conceived in terms of scientific method, regardless of its motives (about which I have no interest in speculating). The reason we drifted is that the article is itself without substance, offering nothing to really discuss.
 
+Donna Murdoch What's really staggering to me is that in their list of caveats about their findings, suggestions for future study, etc., they don't even seem to realize that there is anything flawed about this empty "online course" criterion.
 
BTW - I even take issue with the phrase "typical student" any more.  Check your demographics folks.  Rapidly changing.  What is a "typical student" anyway?  Grrrr.
 
+Bernardo Trejos Well, then I will repeat my starting point: the study is an example of useless science.
 
+Bernardo Trejos I agree that studies are only for reference, and that is precisely my suggestion: that the study provide more reference for decision makers. If the study had concluded, for example, that the greater the level of interaction a course fosters, the more effective it will be, well then that would provide some very clear reference for decision makers. It would provide a great starting point for all administrators to define a minimum conceptual framework upon which to base any online program. That is useful. The way it is now, it is simply not useful.
 
+Donna Murdoch you point to the reason I started this discussion, the fact that they pre-selected "traditional online classes" for the study. To do this is a poor design choice, but if it is clearly defined and justified, well then at least we know what the limitations of the study are. Otherwise, as you say, we just don't know where we stand with this study. Clearly defined parameters are the hallmark of effective research, and this one doesn't get there.
 
I went back to +Larry press 's blog post which prompted this discussion just to see where we stood. I noticed that in the blog post he repeatedly refers to "traditional online courses" - especially in the rather inflammatory title of the post: Columbia University study slams traditional online classes -- we need to move beyond traditional - but the study never gives any information AT ALL about the online courses in question, so I got curious about that label which Larry used, "traditional online courses." I checked: it is not used anywhere in the study at all.
The only information provided about the specific online courses under consideration is that they "limited the sample to Washington residents enrolled in academic transfer track and to courses offering both online and face-to-face sections" (that's in a footnote). That, to me, is already a warning bell; at my school, the best online courses are the ones that were designed to be online courses - not courses that exist as online offerings of the "same" course offered face to face.
Anyway, aside from that, I could find zero information that characterized the online courses in any way at all. Please correct me if I am wrong! The reason the authors selected the courses this way is a requirement of their statistical modeling; they want to use statistical methods to compare the "same" (i.e. statistically speaking the same) student taking a face-to-face course v. the "same" course online. Such statistical modeling seems to me misguided, but that's a separate discussion (this is the same kind of modeling being used to do the "value added" types of teacher evaluations, for example).
Based on my reading of the article, I'm not sure where Larry's label of "traditional" courses comes from, nor what such a label means - does everybody agree on what a "traditional" online course is? I personally have no idea what that would mean. Larry says: "The study was based on traditional online courses, which typically have about 25 students and are run by professors who often have little interaction with students." I am not sure where that comes from; I did not find it in the study. Usually smaller courses like that do feature a lot of interaction with students - but that varies, of course. If we are talking about a part-time adjunct faculty member who is juggling three different jobs, even in a small class, yes, there might be limited faculty interaction. But if we are talking about full-time instructors with reasonable teaching loads, classes that size are ideal for student interaction. The study, as I've said above, reports no data that helps us assess that either directly (levels of interaction) or indirectly (faculty employment status, faculty course loads, etc.).
The authors of the study would in fact have enrollment data (all they do is count things), but I could find no information about class size in the report anywhere. (Again, correct me if I am wrong).
So, in every way I can see, this study seems to be badly conceived. Even when they had available data (class size), they did not report it. Of course, size is just the first question to ask about an online course - but there are many other questions to ask as well. The failure of the authors of the study even to report the available data on class size suggests to me that the variability among online courses, and the way those variables affect student outcomes, did not even cross their minds. I wonder: have they ever taught an online course...? Have they ever taught at all?
I did a quick check on the authors.
Di Xu - http://ccrc.tc.columbia.edu/person/di-xu.html - appears to be research staff only, not teaching faculty. She has a Ph.D. from Columbia Teachers College.
Jaggars - http://ccrc.tc.columbia.edu/person/shanna-smith-jaggars.html - has a PhD from UT Austin in Human Development and Family Science and she appears to be an administrator, managing "a suite of studies funded under the Bill & Melinda Gates Foundation" as part of the CCRC Community College Research Center at Columbia Teachers College - my guess is that she also does not teach.
And,  +Donna Murdoch , it turns out this is a 100% Teachers College production after all. :-)
 
+Justin Scoggin  It is not that they pre-selected traditional online classes -- the classes were traditional in that they were taught in fall 2004.  Here is their description of the sample:

"Primary analyses were performed on a dataset containing 51,017 degree-seeking students who initially enrolled in one of Washington State’s 34 community or technical colleges during the fall term of 2004. These first-time college students were tracked through the spring of 2009 for 19 quarters of enrollment, or approximately five years."

A valid criticism of the study might be that online classes have improved overall since 2004 and that they are now achieving better results.  In the absence of a new study, the faculty at any school could ask themselves whether they have changed online delivery since the time of the Columbia study.

Maybe there has been general improvement, but <Anecdote alert>at my school, the biggest change since 2004 has been the spread of online classes from the masters level to the undergraduate level.  The delivery mechanism seems to be the same -- use a traditional textbook with its ancillary material and tests and discuss the material in a Blackboard forum.  Class sizes have also risen during this time</Anecdote alert>.

I think we need to move beyond this approach to online teaching.  Re-purposing old material and methods is like putting old wine in new bottles.  We need to look for new techniques and technology -- perhaps experimentation with MOOCs will push us to invent and test this sort of innovation.

I am responsible for some of the heat and misunderstanding in this discussion in my choice of wording. I just went back to the title and italicized the word traditional -- we need to move beyond the traditional.
 
+Larry press I still don't see how you can reach conclusions of any kind about what was going on in the online courses analyzed in the study. The only thing is that they had course number labels on students' transcripts that indicated these were the online versions of face-to-face classes. That's it. I see no other data of any kind. What am I missing?
The fact that something existed in fall 2004 means only that it existed in fall 2004. I was teaching what you would consider non-traditional classes in the fall of 2004... just because something was offered in 2004 doesn't mean it was "traditional" - and I'm still not even sure what you mean by traditional.
You mention traditional materials here - what does that mean? And traditional methods - what does that mean? More importantly, how do you know what materials they were using or what methods? Or how those materials and methods changed from 2004 to 2009? (That is the period covered by the study - the incoming cohort of students dates to 2004, but they analyzed courses that those students took over that five-year period, and students were increasingly likely to take online courses later in their careers rather than earlier; I'm not sure if you can tease out just how many online courses were included year by year, but I had the impression that it was an increasingly large number as the study moved from 2004 to 2009.)
 
+Laura Gibbs Laura, by traditional I meant what I saw and still see on my campus -- using the same textbook and ancillary materials to teach the course as they would use in a FF class, but holding threaded discussions on Blackboard instead of holding meetings in a classroom.

Again, I am sure you and many others do a way better and more conscientious job of teaching online than the folks I see or the folks who were teaching in Washington in 2004.

I agree they could do a follow up study to see if things have changed.  Maybe they are.  But, absent that, any school could look at what they themselves are doing online today and ask whether it has changed over time -- in other words do a self-study of their online techniques, technology and results. 

(If this study encourages some institutional introspection, it might not be as worthless as you think it is).

I also agree that there could be (have been) many other sorts of research, like case studies or surveys of changing practice. (Have you published any research on the efficacy of the changes you have made to your teaching during this time?)

Anyhow, this discussion seems to be in a loop -- you might talk with the folks at Columbia and see what they are working on today.
 
+Larry press I don't think you can justify taking what you see on your campus, labeling that "traditional," and then assuming that the same holds true for the courses in the system being studied. I have no interest in talking to the people at Columbia - why would I? I think they have done a shoddy and time-wasting study which lends itself exactly to the misapplication of its results, as in your blog post and other reporting I have seen about it. The only reason I even participated in this discussion was because of the conclusion you put forward about rejecting "traditional" online courses on the basis of the evidence in this study - and I still don't understand exactly what you mean by "traditional" online courses much less your basis for rejecting them. I am still waiting to understand why you think the study supports the recommendations you are advancing in your blog post, and I am also dubious about the recommendations advanced in the study itself because of the shoddy nature of the study's design. I'm also perfectly willing to abandon this discussion - I learned nothing from the study of use to me at all, although the more I look at their actual data and methods the more convinced I am that it has zero usable implications for future action. It might - might - have implications for future research if additional data (such as data needed to control for "same" faculty as they control for "same" student) were available.
 
Forgive me for not reading through all the comments here first, but one thing I noticed is that the courses analyzed are from 2004 - before YouTube, Facebook, Twitter, etc.  There have been several meta-analyses comparing online and face to face courses over the past 15 years.  In the late 90s, online courses on average showed slightly worse learning than face to face, then later meta-analyses found no difference, and by 2008-2009, multiple meta-analyses had found a slight advantage of online over face to face (and blended beat both by a large margin).  I haven't seen a more recent meta-analysis, but there seems to be a trend that instructors are slowly getting better at teaching online, but of course there are so many possible confounding effects, too.  Instead of looking at which is best, I'm more interested in how we can make all education better, and why some courses are better than others.
 
+Doug Holton Sadly, the article is not a useful addition to the literature in any way whatsoever. And I agree with you - we need to ask about why courses succeed, when, for whom, why, how, etc. This article helps with none of that at all.
 
+Doug Holton I agree -- hopefully we are improving.  I am starting to feel like a defense attorney for the folks at Columbia, but they also found variance among student categories and course categories and suggested a few policy changes that took those effects into account.  Those category differences might be of use to a school thinking about which courses to put online.

Have you got links to some of the metastudies you mentioned?  You might also be interested in http://www.nosignificantdifference.org/ which has been tracking efficacy studies for all sorts of technology for some time.
 
This white paper summarizes several meta-analyses (as well as other articles related to online education): http://academicpartnerships.com/docs/default-document-library/white-paper-final-9-22-2011-(1).pdf?sfvrsn=0
I guess I misremembered: in the 90s there was no significant difference, and then it improved starting from around 1999-2000.  Maybe in the 80s it was inferior.

To tell you the truth though, I'd pretty much ignore stuff from before about 2006 or so.  So much has changed.  Most of the tools in the Top 100 Tools for Learning survey Jane Hart does (http://c4lpt.co.uk/top100tools/) weren't even around back then, except for stuff like Powerpoint and Word.  Most people still used desktops, had slow Internet (okay, we still have slow Internet in this country :), etc.  And attitudes about the Internet have changed a lot, although there still is a common belief (among some instructors and students) that online learning is inherently inferior to face to face, and even that belief (epistemology) may negatively impact learning.
 
+Doug Holton Agreed! The study out of Columbia that prompted the discussion provides no descriptive features of the face to face classes OR the online classes. The complete lack of information about the actual modes of instruction means the paper can be summarized as follows:
Adaptability to Zorgnat Learning: Differences Across Types of Students and Academic Subject Areas. Using a dataset containing nearly 500,000 courses taken by over 40,000 community and technical college students in Washington State, this study examines how well students adapt to the zorgnat environment in terms of their ability to persist and earn strong grades in zorgnat courses relative to their ability to do so in hixrum courses. While all types of students in the study suffered decrements in performance in zorgnat courses, some struggled more than others to adapt: males, younger students, Black students, and students with lower grade point averages. In particular, students struggled in subject areas such as English and social science, which was due in part to negative peer effects in these zorgnat courses.
 
Well, +Doug Holton, thanks for saving the day! The information you provide is quite interesting. We got all up in arms here, and it turns out they did take into account all of the online classes offered at the colleges analyzed. Thanks +Laura Gibbs for clearing this up. Obviously, +Larry press, we don't really know if the online classes given were "traditional" or not, because I was giving fully interactive online classes in 2006, and I imagine that at least some of the teachers in the survey were too. I should have read the entire study before posting this, before getting all huffed up!

So, +Bernardo Trejos, to answer your earlier question, they do have the data to perform the analysis I suggested above that compared higher and lower levels of interaction in both delivery methods. They can still save this study, that is if they want to go back over the data and explore any new conclusions that come of it.
 
Maybe we can wait a couple of days until the conversation finally settles, and e-mail them the link.
 
I figured this discussion had run its course, but I came across a link to another review of the study at:

http://www.insidehighered.com/news/2013/02/25/study-finds-some-groups-fare-worse-others-online-courses

The reviewer acknowledges that courses taught online today are hopefully of higher quality than those at the time of the survey, but focuses on the paper's analysis of the variance among groups of students and courses.  (Most of the paper is devoted to that ANOVA).  
 
+Larry press Even assuming people are fans of this type of statistical modeling (personally I am not, but it is obviously a huge focus of Gates Foundation efforts right now), until the researchers make some effort to control for "same teacher" as they do for "same student," even these limited results are vitiated by the failure of the model. They have completely ignored teacher effects and treated the learning experience as a product of "student" factors and a single "course delivery" factor, which is itself very poorly theorized as a variable, as we discussed at length - they realize "student" is composed of many factors, but they act as if "course delivery" is a single-factor variable - ridiculous, as we all know full well. Even if the media do gobble it up. Recklessly.
 
+Laura Gibbs As a non-fan of multivariate data analysis, you eliminate a lot of social science, pedagogical research, experimental physics, etc.  In this case, I suspect the study authors would point to the large sample size as washing out the effects of teacher quality, though that can also be argued with as there may be systematic differences between online and FF teachers.

A study where the same teacher was online and FF would also be fraught with methodological difficulty.

To try to be more constructive, is there any sort of research study that the folks reading this thread would accept as shedding light on the question of the relative efficacy of FF and online education for different groups of students or different groups of courses?
 
In general the data available for education does not lend itself to this type of analysis because the methods used for control are far too reductive - that is certainly the case here, with the way they have defined the student variable in terms of very limited factors, and have defined delivery mode with NO factors (burden is ON THEM to do that... they failed), and they have left out all teacher factors.
I am not a fan of poorly conceived multivariate analysis - and I have yet to see a value-added study in education that was well conceived. Here it is value-added for delivery method, but it is subject to the same failings as the value-added studies for teacher value. Which makes it all the more ironic that they left out teacher effects ENTIRELY from this study.
Posing the question as FF v. online will never result in truly useful data because of the variety of FF and the variety of online. As I said way back at the beginning of this conversation, that is like trying to research whether "Chinese food" is healthier or "French food" is healthier. No denying that some food items are healthier than others. No denying that we need research into which foods are healthier (just as we need to know about effective teaching practices and learning environments). Labels like "Chinese food" and "French food" (like "face to face" and "online") are not adequate for research modeling.
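
To make the objection concrete, here is a minimal sketch of the kind of model at issue - not the study's actual specification, which is not given here - with grade regressed on an online indicator plus "same student" and "same course" controls, and then with the instructor effect that is missing. The file name and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

enrollments = pd.read_csv("transcript_data.csv")
# expected (hypothetical) columns: student_id, course_id, instructor_id,
# online (0/1), grade

# Roughly the kind of model criticized above: delivery mode as a single factor,
# controlling for the "same" student and "same" course but not the "same" teacher.
without_teacher = smf.ols(
    "grade ~ online + C(student_id) + C(course_id)", data=enrollments
).fit()

# What controlling for teacher effects would add: an instructor fixed effect,
# so the online coefficient is not confounded with who happened to teach which section.
with_teacher = smf.ols(
    "grade ~ online + C(student_id) + C(course_id) + C(instructor_id)",
    data=enrollments,
).fit()

print(without_teacher.params["online"], with_teacher.params["online"])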