
Feed aggregator

 

Michael Feldstein: Head in the Oven, Feet in the Freezer

Planet Sakai - Mon, 04/14/2014 - 7:19am

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.” Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

  • Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
  • Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was “best practice” until 2012 when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women to have annual mammograms, not at age 50 as recommended for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos. These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop. Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion. Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.1 More on this point later. Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full. My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education. For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment. To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Lies, Damned Lies, and…

The heart of the post is a study by John Hattie, a Professor at the University of Auckland (NZ). He has done meta-analyses of an enormous number of education studies, looking at effect sizes, where an effect of roughly 0.1 is negligible and an effect of 1.0 means the intervention group improved by a full standard deviation.
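For readers who want the arithmetic behind these numbers: the effect sizes Hattie aggregates are standardized mean differences (Cohen's d). As a generic statement of the formula, not drawn from Hattie's own data:

\[
d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{comparison}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

So an effect size of 0.4, the “typical” threshold in the summary that follows, means the intervention group scored 0.4 pooled standard deviations above the comparison group.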

He found that the “typical” effect size of an innovation was 0.4. To compare how different classroom approaches shaped student learning, Hattie used the “typical” effect size (0.4) to mean that a practice reached the threshold of influence on student learning (p. 5). From his meta-analyses, he then found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). Again and again, he found that teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32). What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).

The conclusion is that changing a classroom practice can often produce a significant effect size while adding a technology rarely does. But as my father likes to say, if you stick your head in the oven and your feet in the freezer, on average you’ll be comfortable. Let’s think about introducing clickers to a classroom, for example. What class are you using them in? How often do you use them? When do you use them? What do you use them for? Clickers in and of themselves change nothing. No intervention is going to be educationally effective unless it gets students to perceive, act, and think differently. There are lots of ways to use clickers in the classroom that have no such effect. My guess is that, most of the time, they are used for formative assessments. Those can be helpful or not, but when done in this way they are generally more about informing the teacher than about directly helping the student. But there are other uses of clicker technologies. For example, University of Michigan professor Perry Samson recently blogged about using clickers to compare students’ sense of their physical and emotional well-being with their test performance:


FIGURE 2. EXAMPLE OF RESULTS FROM A STUDENT WELLNESS QUESTION FOR A SPECIFIC CLASS DAY. NOTE THE GENERAL COLLINEARITY OF PHYSICAL AND EMOTIONAL WELLNESS.

I have observed over the last few years that a majority of the students who were withdrawing from my course in mid-semester commented on a crisis in health or emotion in their lives.  On a lark this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested their self assessment of their current physical and emotional state. Clearly there is a wide variation in students’ perceptions of their physical and emotional state.  To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.


FIGURE 3. TRENDS IN STUDENTS’ SELF REPORTED EMOTIONAL STATE PRIOR TO THE FIRST EXAM IN CLASS ARE CLUSTERED INTO SIX CATEGORIES. THE AVERAGE EMOTIONAL STATE FOR EACH CLUSTER APPEARS TO BE PREDICTIVE OF MEDIAN FIRST EXAM SCORES.

Perhaps not surprisingly Figure 3 shows that student outcomes on the first exam were very much related to the students’ self assessment of their emotional state prior to the exam.  This result is hard evidence for the intuitive, that students perform better when they are in a better emotional state.
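Samson’s post does not include his analysis code, but the general kind of cluster analysis he describes, grouping students by the temporal trend of their self-reported emotional state and then comparing each cluster’s exam performance, can be sketched in a few lines of Python. Everything below is an assumption made for illustration (synthetic data, a 1-10 rating scale, six clusters as in his Figure 3), not his actual pipeline:

    # Hypothetical sketch of the kind of clustering described above.
    # The data are synthetic; in practice each row would come from a
    # student's clicker/LectureTools responses across class meetings.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_students, n_class_days = 120, 10

    # Each row: one student's self-reported emotional state (1-10) per class day.
    emotional_state = rng.integers(1, 11, size=(n_students, n_class_days)).astype(float)

    # Synthetic first-exam scores, one per student.
    exam_scores = rng.normal(75, 12, size=n_students)

    # Cluster the temporal trends into six groups, as in the post.
    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
    cluster_labels = kmeans.fit_predict(emotional_state)

    # Compare each cluster's average emotional state with its median exam score.
    for c in range(6):
        members = cluster_labels == c
        print(f"cluster {c}: mean emotional state = {emotional_state[members].mean():.1f}, "
              f"median exam score = {np.median(exam_scores[members]):.1f}")

With real data, the interesting question is whether the cluster averages line up with median exam scores the way Figure 3 suggests.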

I don’t know what Perry will end up doing with this information in terms of a classroom intervention. Nor do I know whether any such intervention will be effective. But it seems like common sense not to lump his work in with a million billion professors asking quiz questions on their clickers and aggregate it all into an average of how effective clickers are. To be fair, that’s not Larry’s reason for quoting the Hattie study. He’s arguing against the reductionist argument that technology fixes everything—an argument which seems obviously absurd to everybody except, sadly, the people who seem to have the power to make decisions. But my point is that it is equally absurd to use this study as evidence that technology is generally not helpful. What I think it suggests is that it makes little sense to study the efficacy of educational technologies or products outside the context of the efficacy of the practices that they enable. More importantly, it’s a good example of how we all need to get much more sophisticated about reading the studies so we can judge for ourselves what they do and do not prove.

Of Back Mice and Men

I have had moderate to severe back pain for the past seven years. I have been to see orthopedists, pain specialists, rheumatologists, urologists, chiropractors, physical therapists, acupuncturists, and massage therapists. In many cases, I have seen more than one in any given category. I had X-rays, CAT scans, MRIs, and electrical probes inserted into my abdomen and legs. I had many needles of widely varying gauges stuck in me, grown humans walking on my back, and gallons of steroids injected into me. I had the protective sheaths of my nerves fried with electricity. If you’ve ever had chronic pain, you know that you would probably go to a voodoo priest and drink goat urine if you thought it might help. (Sadly, there are apparently no voodoo priests in my area of Massachusetts—or at least none who have a web page.) Nobody I went to could help me.

Not too long ago, I had cause to visit my primary care physician, who is a good old country doctor. No specialist certificates, no Ivy League medical school degrees. Just a solid GP with some horse sense. In a state of despair, I explained my situation to him. He said, “Can I try something? Does it hurt when I touch you here?” OUCH!!!!

It turns out that I have a condition called “back mice,” also called “episacral lipomas” on the rare occasions when it is referred to in the medical literature. I won’t go into the details of what they are, because that’s not important to the story. What’s important is what the doctor said next. “There’s hardly anything on them in the literature,” he said. “The thing is, they don’t show up on any scans. They’re impossible to diagnose unless you actually touch the patient’s back.”

I thought back to all the specialists I had seen over the years. None of the doctors ever once touched my back. Not one. My massage therapist actually found the back mice, but she didn’t know what they were, and neither of us knew that they were significant.

It turns out that once my GP discovered that these things exist, he started finding them everywhere. He told me a story of an eighty-year-old woman who had been hospitalized for “non-specific back pain.” They doped her up with opiates and the poor thing couldn’t stand up without falling over. He gave her a couple of shots in the right place, and a week later she was fine. He has changed my life as well. I am not yet all better—we just started treatment two weeks ago—but I am already dramatically better.

The thing is, my doctor is an empiricist. In fact, he is one of the best diagnosticians I know. (And I have now met many.) He knew about back mice in the first place because he reads the literature avidly. But believing in the value of evidence and research is not the same thing as believing that only that which has been tested, measured, and statistically verified has value. Evidence should be a tool in the service of judgment, not a substitute for it. Isn’t that what we try to teach our students?

  1. But I’m not bitter.

The post Head in the Oven, Feet in the Freezer appeared first on e-Literate.

Categories: Planet Sakai

Becoming One 2014/04/13

Sakai Feeds - Sun, 04/13/2014 - 8:44am

April 13, 2014. Series: Encore Church, part 3: “Becoming One.” Messenger: Pastor Hide (大窪秀幸牧師). Message notes: http://www.lighthousechurch.jp/message.html Sunday worship service: 11:00-12:15. ライトハウスキリスト教会 (Lighthouse Church), 大阪府堺市堺区砂道町3-6-19 (Sakai-ku, Sakai City, Osaka). http://www.lighthousechurch.jp

Michael Feldstein: AAC&U GEMs: Exemplar Practice

Planet Sakai - Sat, 04/12/2014 - 8:04am

A while back, I wrote about my early experiences as a member of the Digital Working Group for the AAC&U General Education Maps and Markers (GEMs) initiative and promised that I would do my homework for the group in public. Today I will make good on that promise. The homework is to write up an exemplar practice of how digital tools and practices can help support students in their journeys through GenEd.

As I said in my original post, I think this is an important initiative. I invite all of you to write up your own exemplars, either in the comments thread here or in your own blogs or other digital spaces.

The template for the exemplar is as follows:

Evocative Examples of Digital Resources and Strategies that can Improve General Education: What are these cases a case of?

Brief Description of practice:

  • In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know? (If you can tie the practice to any of the outcomes in the DQP and/or the LEAP Essential Learning Outcomes, that would be great.)
  • How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?
  • What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?
  • How is it applicable to gen ed (if example doesn’t come from gen ed)?
  • Are there references or literature to which you can point that is relevant to the practice?

I decided to base my exemplar on the MSU psychology class that I’ve written about recently.

Flipped and Blended Class with Homework Platform Support

In this practice, every effort is made to move both direct instruction and formative assessment outside of class time. The “flipped classroom” (or “flipped learning”) approach is employed, providing students with instructional videos and other supplemental content. In addition, a digital homework platform is employed, enabling students to get regular formative assessments. In order to give students more time for these activities, the amount of in-class time is reduced, making the course effectively a blended or hybrid course. In-class time is devoted to class discussion, which is supported by the instructor’s knowledge of the students’ performance on the regular formative assessments, and to group work.

In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know?

This is a particular subset of a practice that the National Center for Academic Transformation (NCAT) calls “the replacement model”, and they have a variety of course redesign projects that demonstrated improved outcomes relative to the control. For example, a redesign of a psychology Gen Ed course at Missouri State University produced the following results:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • Over a three-semester period following the redesign, the course DFW rate improved from 24.6% to 18.4% (most of which was because of a significant drop in the withdrawal rate).
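A note on reading the “percent improvement” figures above: they are pre-test-to-post-test gains within each group rather than differences between the groups (the second bullet says so explicitly, and the first appears to follow the same convention). As a purely hypothetical illustration with made-up numbers, not figures from the MSU study:

\[
\text{improvement} = \frac{\text{post-test mean} - \text{pre-test mean}}{\text{pre-test mean}} \times 100\%,
\qquad
\text{e.g. } \frac{13.8 - 7.5}{7.5} \times 100\% = 84\%.
\]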

One of the investigators of the project, who also was a course instructor, indicated that the quality of class discussion improved significantly as well.

Possible reasons why the practice is effective include the following:

  • Teacher/student contact time is maximized for interactivity.
  • Regular formative assessments with instant feedback help students to be better prepared to maximize discussion time with the teacher and with peers.
  • Feedback from the homework system enabled the instructor to walk into class knowing where students need the most help.
  • Reduced number of physical class meetings reduces the chances that a student will withdraw due to grade-damaging absences.

How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?

Students are used to getting their information online. They are also often very effective at “time slicing,” in which they use small increments of time (e.g., when they are on a bus or waiting for an appointment) to get things done. This exemplar practice enables students to do that with the portions of academic work that are suited to it while preserving and actually expanding room for long and deep academic discussion.

What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?

The redesign effort is significant and, because the creation of significant digital resources is involved, is often best done by a team (although that is not strictly necessary). For the purposes of this design, the homework platform need not be cutting-edge adaptive, as long as it provides formative assessments that are consistent with the summative assessments and provides both students and instructors with good, regular feedback. That said, implementing the technology is often not seamless and may take several semesters to work the kinks out. The shift to a flipped classroom also puts new demands on students and may take several semesters for the campus culture to adjust to the new approach.

How is it applicable to gen ed (if example doesn’t come from gen ed)?

This model is often used in Gen Ed. It is particularly appropriate for larger classes where the DFW rate is high and where a significant percentage of the subject matter—at least the foundational knowledge on the lower rungs of Bloom’s taxonomy—can be assessed through software.

Are there references or literature to which you can point that is relevant to the practice?

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom/

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom-part-ii/

http://www.thencat.org/PlanRes/R2R_Model_Rep.htm

http://www.thencat.org/PCR/R3/TCC/TCC_Overview.htm

http://www.flippedlearning.org/

The post AAC&U GEMs: Exemplar Practice appeared first on e-Literate.

Categories: Planet Sakai

Apereo OAE: Apereo OAE is now responsive!

Planet Sakai - Wed, 04/09/2014 - 2:24pm

The Apereo Open Academic Environment (OAE) project team is extremely proud to announce the next major release of the Apereo Open Academic Environment: OAE Emperor Penguin, or OAE 6.

OAE Emperor Penguin brings a fully responsive UI, ensuring that OAE works seamlessly on mobile and tablet devices. OAE Emperor Penguin also adds a range of usability improvements and a full Brazilian Portuguese translation.

Changelog

Responsive UI

An increasing number of people expect to be able to use applications on mobile and tablet devices, and Apereo OAE is not an exception. Usage statistics already show that many of our users are accessing OAE through these devices.

Apereo OAE uses Twitter Bootstrap as its CSS framework. When they released their latest version, introducing support for responsive applications, it seemed like an appropriate time to make the OAE UI fully responsive. Despite using this responsive CSS framework and the fact that OAE has been designed tablet first, making the UI fully responsive has been a massive undertaking that has ended up touching most of the application.

However, we are extremely pleased with the end result and OAE now works well on a wide range of mobile and tablet devices. Whilst all OAE functionality works seamlessly on these devices, it is especially pleasant to keep track of your user and group activity feeds.

 

Brazilian Portuguese translation

A complete Brazilian Portuguese translation is now available for the OAE UI. Many thanks to César Goudouris for providing this translation!

Try it out

OAE Emperor Penguin can be experienced on the project's QA server at http://oae.oae-qa0.oaeproject.org. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 6.0.0 and can be downloaded from the following repositories:

Back-end: https://github.com/oaeproject/Hilary/tree/6.0.0
Front-end: https://github.com/oaeproject/3akai-ux/tree/6.0.0

Documentation on how to install the system can be found at https://github.com/oaeproject/Hilary/blob/6.0.0/README.md.

Instructions on how to upgrade an OAE installation from version 5.0 to version 6.0 can be found at https://github.com/oaeproject/Hilary/wiki/OAE-Upgrade-Guide.

The repository containing all deployment scripts can be found at https://github.com/oaeproject/puppet-hilary.

Get in touch

The project website can be found at http://www.oaeproject.org. The project blog will be updated with the latest project news from time to time, and can be found at http://www.oaeproject.org/blog.

The mailing list used for Apereo OAE is oae@apereo.org. You can subscribe to this by sending an email to oae+subscribe@apereo.org.

Bugs and other issues can be reported in our issue tracker at https://github.com/oaeproject/3akai-ux/issues.

Categories: Planet Sakai

Michael Feldstein: Efficacy, Adaptive Learning, and the Flipped Classroom, Part II

Planet Sakai - Wed, 04/09/2014 - 12:45pm

In my last post, I described positive but mixed results of an effort by MSU’s psychology department to flip and blend their classroom:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course….

It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline of enrollment by 126 students, which impacted their ability to deliver decreased costs to the institution.

Those numbers were based on the NCAT report that was written up after the first semester of the redesigned course. But that wasn’t the whole story. It turns out that, after several semesters of offering the course, MSU was able to improve their DFW numbers after all:

That’s a fairly substantial reduction. In addition, their enrollment numbers have returned to roughly what they were pre-redesign (although they haven’t yet achieved the enrollment increases they originally hoped for).

When I asked Danae Hudson, one of the leads on the project, why she thought it took time to see these results, here’s what she had to say:

I do think there is a period of time (about a full year) where students (and other faculty) are getting used to a redesigned course. In that first year, there are a few things going on 1) students/and other faculty are hearing about “a fancy new course” – this makes some people skeptical, especially if that message is coming from administration; 2) students realize that there are now a much higher set of expectations and requirements, and have all of their friends saying “I didn’t have to do any of that!” — this makes them bitter; 3) during that first year, you are still working out some technological glitches and fine tuning the course. We have always been very open with our students about the process of redesign and letting them know we value their feedback. There is a risk to that approach though, in that it gives students a license to really complain, with the assumption that the faculty team “doesn’t know what they are doing”. So, we dealt with that, and I would probably do it again, because I do really value the input from students.

I feel that we have now reached a point (2 years in) where most students at MSU don’t remember the course taught any other way and now the conversations are more about “what a cool course it is etc”.

Finally, one other thought regarding the slight drop in enrollment we had. While I certainly think a “new blended course” may have scared some students away that first year, the other thing that happened was there were some scheduling issues that I didn’t initially think about. For example, in the Fall of 2012 we had 5 sections and in an attempt to make them very consistent and minimize missed classes due to holidays, we scheduled all sections on either a Tuesday or a Wednesday. I didn’t think about how that lack of flexibility could impact enrollment (which I think it did). So now, we are careful to offer sections (Monday through Thursday) and in morning and afternoon.

To sum up, she thinks there were three main factors: (1) it took time to get the design right and the technology working optimally; (2) there was a shift in cultural expectations on campus that took several semesters; and (3) there was some noise in the data due to scheduling glitches.

There are a number of lessons one could draw from this story, but from the perspective of educational efficacy, I think it underlines how little the headlines (or advertisements) we get really tell us, particularly about components of a larger educational intervention. We could have read, “Pearson’s MyPsychLabs Course Substantially Increased Students’ Knowledge, Study Shows.” That would have been true, but we have little idea how much improvement there would have been had the course not been fairly radically redesigned at the same time. We also could have read, “Pearson’s MyPsychLabs Course Did Not Improve Pass and Completion Rates, Study Shows.” That would have been true, but it would have told us nothing about the substantial gains over the semesters following the study. We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

The post Efficacy, Adaptive Learning, and the Flipped Classroom, Part II appeared first on e-Literate.

Categories: Planet Sakai

Adam Marshall: WebLearn unavailable on Tuesday 15 April 2014 from 7-9am

Planet Sakai - Wed, 04/09/2014 - 10:53am

It is planned to upgrade WebLearn to version 2.8-ox10 on Tuesday 15 April 2014 7-9am. There will be no service during this period.

We apologise for any inconvenience that this essential work may cause.

Categories: Planet Sakai

The Myths of Technology Series – “Technology equals engagement” | The Principal of Change

What I'm reading on the web (via Diigo) - Wed, 04/09/2014 - 6:19am

Great post by @gcouros on the difference between engaging and empowering students: http://t.co/ok941THKes

Tags:

Categories: Ian

Dr. Chuck: Dear Google, Like You, I Just Don’t Care…

Planet Sakai - Tue, 04/08/2014 - 9:18am

As many know, when Google App Engine came out I became immediately enamored with it. I saw it as a way to democratize access to server-hosted code. It meant everyone in the world could have server space at no charge and I hoped it would unleash creativity. I wrote the first book on App Engine (released through O’Reilly and Associates). I switched the course I was teaching from Ruby on Rails to use App Engine and it was taught that way for over 3 years. I started doing more and more server side development in Python. I started moving some of my production stuff from PHP to App Engine to show the faith.

But my enthusiasm and rush to embrace all things App Engine was not to last. I could write a book on what went wrong with App Engine but here are a few of the high points. (1) They never would help you with performance problems unless your name is “Sal Khan” – they just were the Honey Badger. (2) Once they used us early adopters to beta test their code by building free applications – the “free” resource levels went down to force more folks to the pay version. (3) They just decided to break working code as they went to Python 2.7 – no need to support legacy (say like Microsoft does) – again the cries of “foul” fell on the deaf Honey Badger ears.
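For context on point (3): moving from the original Python runtime to the python27 runtime required, among other things, replacing CGI-style handlers with module-level WSGI applications (and webapp with webapp2). The before/after sketch below is a generic, from-memory illustration of the shape of that change, not code from any of the applications mentioned above, and it only runs inside the App Engine SDK:

    # Before: original Python runtime, CGI-style entry point using webapp.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            self.response.out.write('Hello, App Engine')

    def main():
        run_wsgi_app(webapp.WSGIApplication([('/', MainPage)]))

    if __name__ == '__main__':
        main()

    # After: python27 runtime, threadsafe, module-level WSGI app using webapp2.
    # app.yaml points at the application object itself (e.g. "script: main.app").
    import webapp2

    class MainPage27(webapp2.RequestHandler):
        def get(self):
            self.response.write('Hello, App Engine')

    app = webapp2.WSGIApplication([('/', MainPage27)])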

That is not to say that App Engine had zero value. It motivated companies like Amazon to create truly useful services like EC2 that actually met user’s needs and let users do what they wanted and let users log in to diagnose why their code was running slowly. So App Engine was not quite a Google Wave that never took off. More like a Google Reader that told Yahoo that there was demand for such a service.

Google has done a lot of forcing innovation upon us – like the AJAX revolution through Google Maps and GMail. And I love Chrome (I am using it to write this post).

So I have been recently getting mail that my App Engine and Apps stuff that I built during my post-Google I/O (2008, 2009) high is not active and so will be deleted. I have many opportunities to simply “click this button” to extend the life of these things. But this time, it is my turn to be the Honey Badger. Because you see I simply do not care. I just cannot depend on Google for anything other than AdSense, Search, Maps and Mail. You are good at supporting your applications and terrible at supporting my applications. Your developer stuff is so freakishly proprietary and you have no commitment to continuity. I like Amazon a lot – they “get” me – Amazon is my partner – they want me to succeed and get a cut of my success. Google makes me feel like a side of beef – an asset to be managed.

Here is the mail I just got. I am putting it into this blog post so I can gleefully delete it and ignore it – like the Honey Badger would.

Hello,

There hasn’t been any activity on your Google Apps account for the domain cloudcollab.com since we sent your termination notice 30 days ago.

Your cloudcollab.com Google Apps account has been closed.

You can still check or save your data. Just sign in to admin.google.com as drchuck@cloudcollab.com in the next 30 days and export your data. If you forgot your username or password, click the “Need help?” link, and we’ll help you access your account.

Your account will be automatically terminated on May 8th 2014. Once your account is terminated, you can no longer access any Google Apps services with this domain name. All of your account data, such as your Gmail messages and contacts, will be permanently deleted to protect your privacy. No one will be able to access your old data by creating a new Google Apps account with this domain name.

Visit the Google Apps Help Center to learn more about closing inactive accounts.

We hope you’ve enjoyed using Google Apps. If you would like to continue using these services, we invite you to create a new Google Apps for Business account.

Sincerely,
The Google Apps Team

Categories: Planet Sakai

Chris Coppola: Managing Climate Change in Higher Education

Planet Sakai - Mon, 04/07/2014 - 4:10pm

It seems that Arizona has been the beneficiary of climate change this winter. Friends and colleagues from across the eastern United States have experienced severe storms and frigid temperatures while our weather here in the desert southwest has been unusually pleasant. Don’t worry friends, we’ll get what’s coming to us when it’s 130 degrees this summer and we run out of water. But for now, I’m enjoying it and trying to be responsible about my water and energy use. In fact, every time I walk by one of my Nest thermostats it reminds me with a friendly little leaf icon that I’m contributing to the 1,774,469,650 kWh that nest users around the world have saved.

There’s another, mostly unrelated, climate change going on in higher education today and it’s causing clouds of a different sort. Colleges and universities, like many of their industry counterparts, are moving systems off campus into aggregated above campus services at an accelerated pace. These above campus or cloud services take advantage of great economies of scale so that computing capacity and application services like email, learning platforms, and even ERP systems can be quickly and easily scaled up and down to meet business demand.

I’m seeing evidence of the trend from several angles:

  • The Educause Core Data Survey (2013) found more than half of all institutions had at least one core information system in the cloud, half of those had two, and twenty-five percent had three.

  • The Campus Computing Project Survey (2013) reported that more than half of all institutions consider it strategic to move their ERP to the cloud. The survey also predicts thousands of instances of mission-critical applications like research administration, HR, student services, and financials will move to the cloud by 2018.

  • Eight of the Top Ten IT issues highlighted in the Educause Review for 2014 include some cloud angle.

Personally, I’m finding it increasingly common to hear that an institution’s strategy is to first look for new application services in the cloud. And, to only consider introducing new services in the campus data center after cloud options have been ruled out.

At rSmart, we’re both a provider and a consumer of cloud services. Strategically we look for services that increase our effectiveness as a team without taking focus away from delivering on the company’s mission. We want as much of our energy focused on helping colleges and universities keep their money in their mission, and as little as possible running our email, marketing, and finance systems. The same seems to be true for an increasing number of higher education institutions.

There are enormous benefits to treating computing and application services like an elastic commodity that can scale and adapt with an organization’s needs. There are also material risks that need to be thoughtfully addressed and managed. Mission-critical applications that are highly configured to the organization’s business processes, and that have hundreds or thousands of campus users, deserve particular consideration.

Brad Wheeler, CIO at Indiana University and Kelley Business School professor, recently recorded a guest lecture at Stellenbosch University in South Africa on the economics of open source. The full presentation is a good listen, but if you only have a few minutes, Brad touches on an important aspect of the trend toward cloud computing at about 28 minutes into the lecture.

Rights and Provisioning Matrix

His focus is on two dimensions: Ownership (Y-axis) and Location (X-axis). Brad makes it clear that giving up control on both dimensions dramatically increases the risk. If you don’t own it and it’s off-site, you better hope that the vendor’s values and direction stay aligned with yours.

I use a lot of cloud-based services so I understand this risk well and have occasionally experienced the impact. One recent example happened when Beats Music acquired MOG. Many years ago I decided that there was no need to own physical media for music anymore. I could get the music I like streamed to me at home, in the car, at work, and on the road. My service of choice has been MOG. It is integrated with my Sonos at home and set up on all of my devices. I have all of my playlists there, etc. When they announced the acquisition I hoped that Beats would leverage MOG’s great platform and that I’d eventually move to Beats. Well, as it often happens, it didn’t work out that way and now I’m starting over with Spotify. So it goes.

Music playlists and personal entertainment are trivial examples compared to the disruption that occurs when mission-critical enterprise systems are used by hundreds or thousands of people in the organization. When the application is running in the cloud and the vendor owns the software, the vendor holds all the cards. And in a situation where the vendor goes in a direction that’s not aligned with your organization, you just don’t have many options.

Fortunately, there is another option. Kuali (the “K” in Brad’s slide) is a global collaboration of more than seventy colleges, universities, and companies working together to create an option that is “owned” by higher education. rSmart, a co-founder of Kuali, is one of those organizations and our unique role is to provide colleges and universities with a trusted cloud option for these mission-critical systems. With rSmart and Kuali in the cloud, institutions can leverage the economies of scale of the cloud and retain peace of mind. That’s because control of the software’s direction lies firmly in the hands of higher education.

Climate change in higher education is inevitable. As consumers of higher education continue to face soaring tuition costs, their expectations are rising accordingly. Now is the time for institutions to find ways to be more agile and leverage the benefits and economies of scale that come with the cloud.

With rSmart and Kuali, we can help you make sure your cloud resembles the kind that accents a beautiful blue-sky day. And, you won’t be put in a position to have to weather an untimely storm with ominous looking thunderheads.

 


Tagged: cloud, education, erp, kuali, open source, rSmart, SaaS
Categories: Planet Sakai

The Three Hearts of Encore Church 2014/04/06

Sakai Feeds - Mon, 04/07/2014 - 9:55am

April 6, 2014. Series: Encore Church, part 2: “The Three Hearts of Encore Church.” Messenger: Pastor Hide (大窪秀幸牧師). Message notes: http://www.lighthousechurch.jp/message.html Sunday worship service: 11:00-12:15. ライトハウスキリスト教会 (Lighthouse Church), 大阪府堺市堺区砂道町3-6-19 (Sakai-ku, Sakai City, Osaka). http://www.lighthousechurch.jp

for #OpenAnnotation the @w3c is community driven and I have all faith

What I'm reading on the web (via Diigo) - Sat, 04/05/2014 - 6:16am

for #OpenAnnotation the @w3c is community driven and I have all faith that standards effort will be expeditious and true #ianno14

Tags:

Categories: Ian

next.iu.edu

Sakai Feeds - Thu, 04/03/2014 - 5:15pm