
Feed aggregator

 

Adam Marshall: WebLearn unavailable on Tuesday 15 April 2014 from 7-9am

Planet Sakai - Wed, 04/09/2014 - 10:53am

It is planned to upgrade WebLearn to version 2.8-ox10 on Tuesday 15 April 2014, from 7am to 9am. There will be no service during this period.

We apologise for any inconvenience that this essential work may cause.

Categories: Planet Sakai

The Myths of Technology Series – “Technology equals engagement” | The Principal of Change

What I'm reading on the web (via Diigo) - Wed, 04/09/2014 - 6:19am

Great post by @gcouros on the difference between engaging and empowering students: http://t.co/ok941THKes


Categories: Ian

Dr. Chuck: Dear Google, Like You, I Just Don’t Care…

Planet Sakai - Tue, 04/08/2014 - 9:18am

As many know, when Google App Engine came out I became immediately enamored with it. I saw it as a way to democratize access to server-hosted code. It meant everyone in the world could have server space at no charge and I hoped it would unleash creativity. I wrote the first book on App Engine (released through O’Reilly and Associates). I switched the course I was teaching from Ruby on Rails to use App Engine and it was taught that way for over 3 years. I started doing more and more server side development in Python. I started moving some of my production stuff from PHP to App Engine to show the faith.

But my enthusiasm and rush to embrace all things App Engine was not to last. I could write a book on what went wrong with App Engine but here are a few of the high points. (1) They never would help you with performance problems unless your name is “Sal Khan” – they just were the Honey Badger. (2) Once they had used us early adopters to beta test their code by building free applications, the “free” resource levels went down to force more folks to the pay version. (3) They just decided to break working code as they went to Python 2.7 – no need to support legacy (say, like Microsoft does) – again the cries of “foul” fell on deaf Honey Badger ears.

That is not to say that App Engine had zero value. It motivated companies like Amazon to create truly useful services like EC2 that actually met users’ needs, let users do what they wanted, and let users log in to diagnose why their code was running slowly. So App Engine was not quite a Google Wave that never took off. More like a Google Reader that told Yahoo that there was demand for such a service.

Google has done a lot of forcing innovation upon us – like the AJAX revolution through Google Maps and GMail. And I love Chrome (I am using it to write this post).

So I have recently been getting mail saying that the App Engine and Apps stuff I built during my post-Google I/O (2008, 2009) high is not active and will be deleted. I have many opportunities to simply “click this button” to extend the life of these things. But this time, it is my turn to be the Honey Badger. Because, you see, I simply do not care. I just cannot depend on Google for anything other than AdSense, Search, Maps and Mail. You are good at supporting your applications and terrible at supporting my applications. Your developer stuff is so freakishly proprietary and you have no commitment to continuity. I like Amazon a lot – they “get” me – Amazon is my partner – they want me to succeed and get a cut of my success. Google makes me feel like a side of beef – an asset to be managed.

Here is the mail I just got. I am putting it into this blog post so I can gleefully delete it and ignore it – like the Honey Badger would.

Hello,

There hasn’t been any activity on your Google Apps account for the domain cloudcollab.com since we sent your termination notice 30 days ago.

Your cloudcollab.com Google Apps account has been closed.

You can still check or save your data. Just sign in to admin.google.com as drchuck@cloudcollab.com in the next 30 days and export your data. If you forgot your username or password, click the “Need help?” link, and we’ll help you access your account.

Your account will be automatically terminated on May 8th 2014. Once your account is terminated, you can no longer access any Google Apps services with this domain name. All of your account data, such as your Gmail messages and contacts, will be permanently deleted to protect your privacy. No one will be able to access your old data by creating a new Google Apps account with this domain name.

Visit the Google Apps Help Center to learn more about closing inactive accounts.

We hope you’ve enjoyed using Google Apps. If you would like to continue using these services, we invite you to create a new Google Apps for Business account.

Sincerely,
The Google Apps Team

Categories: Planet Sakai

Chris Coppola: Managing Climate Change in Higher Education

Planet Sakai - Mon, 04/07/2014 - 4:10pm

It seems that Arizona has been the beneficiary of climate change this winter. Friends and colleagues from across the eastern United States have experienced severe storms and frigid temperatures while our weather here in the desert southwest has been unusually pleasant. Don’t worry, friends, we’ll get what’s coming to us when it’s 130 degrees this summer and we run out of water. But for now, I’m enjoying it and trying to be responsible about my water and energy use. In fact, every time I walk by one of my Nest thermostats it reminds me with a friendly little leaf icon that I’m contributing to the 1,774,469,650 kWh that Nest users around the world have saved.

There’s another, mostly unrelated, climate change going on in higher education today and it’s causing clouds of a different sort. Colleges and universities, like many of their industry counterparts, are moving systems off campus into aggregated “above campus” services at an accelerated pace. These above-campus, or cloud, services take advantage of great economies of scale so that computing capacity and application services like email, learning platforms, and even ERP systems can be quickly and easily scaled up and down to meet business demand.

I’m seeing evidence of the trend from several angles:

  • The Educause Core Data Survey (2013) found more than half of all institutions had at least one core information system in the cloud, half of those had two, and twenty-five percent had three.

  • The Campus Computing Project Survey (2013) reported that more than half of all institutions consider it strategic to move their ERP to the cloud. The survey also predicts thousands of instances of mission-critical applications like research administration, HR, student services, and financials will move to the cloud by 2018.

  • Eight of the Top Ten IT issues highlighted in the Educause Review for 2014 include some cloud angle.

Personally, I’m finding it increasingly common to hear that an institution’s strategy is to first look for new application services in the cloud, and to consider introducing new services in the campus data center only after cloud options have been ruled out.

At rSmart, we’re both a provider and a consumer of cloud services. Strategically we look for services that increase our effectiveness as a team without taking focus away from delivering on the company’s mission. We want as much of our energy focused on helping colleges and universities keep their money in their mission, and as little as possible running our email, marketing, and finance systems. The same seems to be true for an increasing number of higher education institutions.

There are enormous benefits to treating computing and application services like an elastic commodity that can scale and adapt with an organization’s needs. There are also material risks that need to be thoughtfully addressed and managed. Mission-critical applications that are highly configured to the organization’s business processes, and have hundreds or thousands of campus users, deserve particular consideration.

Brad Wheeler, CIO at Indiana University and Kelley Business School professor, recently recorded a guest lecture at Stellenbosch University in South Africa on the economics of open source. The full presentation is a good listen, but if you only have a few minutes, Brad touches on an important aspect of the trend toward cloud computing at about 28 minutes into the lecture.

Rights and Provisioning Matrix

His focus is on two dimensions: Ownership (Y-axis) and Location (X-axis). Brad makes it clear that giving up control on both dimensions dramatically increases the risk. If you don’t own it and it’s off-site, you better hope that the vendor’s values and direction stay aligned with yours.

I use a lot of cloud-based services so I understand this risk well and have occasionally experienced the impact. One recent example happened when Beats Music acquired MOG. Many years ago I decided that there was no need to own physical media for music anymore. I could get the music I like streamed to me at home, in the car, at work, and on the road. My service of choice has been MOG. It is integrated with my Sonos at home and set up on all of my devices. I have all of my playlists there, etc. When they announced the acquisition I hoped that Beats would leverage MOG’s great platform and that I’d eventually move to Beats. Well, as often happens, it didn’t work out that way and now I’m starting over with Spotify. So it goes.

Music playlists and personal entertainment are trivial examples compared to the disruption that occurs when mission-critical enterprise systems are used by hundreds or thousands of people in the organization. When the application is running in the cloud and the vendor owns the software, the vendor holds all the cards. And in a situation where the vendor goes in a direction that’s not aligned with your organization, you just don’t have many options.

Fortunately, there is another option. Kuali (the “K” in Brad’s slide) is a global collaboration of more than seventy colleges, universities, and companies working together to create an option that is “owned” by higher education. rSmart, a co-founder of Kuali, is one of those organizations and our unique role is to provide colleges and universities with a trusted cloud option for these mission-critical systems. With rSmart and Kuali in the cloud, institutions can leverage the economies of scale of the cloud and retain peace of mind. That’s because control of the software’s direction lies firmly in the hands of higher education.

Climate change in higher education is inevitable. As consumers of higher education continue to face soaring tuition costs, their expectations are rising accordingly. Now is the time for institutions to find ways to be more agile and leverage the benefits and economies of scale that come with the cloud.

With rSmart and Kuali, we can help you make sure your cloud resembles the kind that accents a beautiful blue-sky day. And you won’t be put in a position to have to weather an untimely storm with ominous-looking thunderheads.

Tagged: cloud, education, erp, kuali, open source, rSmart, SaaS
Categories: Planet Sakai

The Three Hearts of the Encore Church 2014/04/06

Sakai Feeds - Mon, 04/07/2014 - 9:55am

April 6, 2014. Series: Encore Church, Part 2, “The Three Hearts of the Encore Church.” Messenger: Pastor Hideyuki Okubo (大窪秀幸牧師) / Pastor Hide. Message notes: http://www.lighthousechurch.jp/message.html Sunday worship service: 11:00–12:15. Lighthouse Christian Church (ライトハウスキリスト教会), 大阪府堺市堺区砂道町3-6-19. http://www.lighthousechurch.jp

for #OpenAnnotation the @w3c is community driven and I have all faith

What I'm reading on the web (via Diigo) - Sat, 04/05/2014 - 6:16am

for #OpenAnnotation the @w3c is community driven and I have all faith that standards effort will be expeditious and true #ianno14


Categories: Ian

next.iu.edu

Sakai Feeds - Thu, 04/03/2014 - 5:15pm

Michael Feldstein: Efficacy, Adaptive Learning, and the Flipped Classroom

Planet Sakai - Wed, 04/02/2014 - 2:10pm

Speaking of efficacy and the complexity of measuring it, I had an interesting conversation the other day with Danae Hudson, a professor of Psychology at Missouri State University, about a course redesign effort that she participated in. The initial contact came from a P.R. firm hired by Pearson. Phil and I get a lot of these and turn down most of them. This one interested me for several reasons. First, it was an opportunity to talk directly to a faculty member who was involved in the project. (A tip to all you P.R. folks out there: You will have a better chance of getting our attention when the focus of the call is to put us in direct contact with your customers about what they are achieving.) Second, the project was facilitated by The National Center for Academic Transformation (NCAT). I am a big fan of NCAT’s work, despite the fact that they seem to have an almost pathological urge to sabotage efforts to bring their work the attention it deserves. Pearson’s interest in the call was that MyPsychLab was an integral part of the course redesign. My interest was to see what I could learn about the interaction between educational technology products and educational practices in delivering educational efficacy.

What I heard tended to confirm my suspicions (and common sense): Educational technology products can produce significant learning gains, but they often do so by supporting changes in classroom practices.

The Goals

Like all NCAT redesign projects, this one has a complete write-up on the NCAT site. The document summarizes the redesign context and goals as follows:

Introductory Psychology is a semester-long, general education course at Missouri State University (MSU). The course falls within the self understanding/social behavioral perspective area of general education and is by far, the most popular choice for students within that area. Each academic year, at least 18 traditional face-to-face sections are offered with a total enrollment of 2,500-2,700 students. The course is lecture-based and typically taught by 65% full-time faculty and 35% adjunct instructors. While there are common general education goals across all sections, each instructor makes individual choices of content and delivery.

Despite being a popular choice among students, Introductory Psychology has traditionally experienced a high DFW rate (approximately 25%). The department wants to find ways to develop a more engaging course that will result in improved student learning outcomes and student satisfaction. Due to the large enrollment and numerous sections offered throughout the year, a significant number of adjunct instructors teach the course, which has contributed to some course drift and grade inflation. Currently, each section of 153 students is taught by one instructor, which significantly limits the type of activities that can be assigned and graded. The vast majority of the final course grade is derived from a series of multiple-choice exams. The goal is to redesign the course to be much more engaging and interactive, with an emphasis on true mastery of the course material.

To sum up: We have a popular Gen Ed course with a high failure and withdrawal rate. Danae also told me that the psychology department had long delivered a formative exam at the beginning of that class, and that they were unhappy with the level of improvement students were showing between the formative and summative exams. The faculty wanted to improve those numbers by making the course “more engaging and interactive, with an emphasis on the true mastery of the course material.”

This is typically where we start hearing that teaching effectively is expensive. But NCAT has a strong track record of proving that to be false. It turns out that ineffective teaching methods are often also inefficient. Let’s pause and think about the formulation of that last sentence for a moment. It’s not always the case that effective teaching measures are cost-efficient. Of course we know that good seminars with low teacher/student ratios can be very effective but, to adopt the current parlance, “don’t scale.” In that situation, there is a tension between effectiveness and efficiency.

But despite appearances, some traditional styles of teaching—most notably the classic ginormous lecture class—are both ineffective and inefficient. Why is that so? For several reasons. First, both the physical plant and the labor structure of the large lecture class limit its ability to scale. If you run out of lecture hall seats, or you run out of TAs, you have exhausted your ability to increase the number of students taught with the faculty that you have. The central innovation of video-based xMOOCs is that they remove this limitation without changing the underlying pedagogical model of the large lecture.

But the central problem is that cost and effectiveness is a two-way street in education. In my last post, I discussed David Wiley’s argument that the cost of curricular materials impacts effectiveness insofar as cost limits student access to those materials. But it goes the other way too. There is a cost for every student who fails or withdraws from a class and therefore has to retake it. The direct cost is in the tuition paid for two classes rather than one—a cost paid by financial aid providers in addition to the student—but indirect costs include increased chances that the student might have to stay an extra semester or drop out altogether as well as the knock-on effect of the student blocking the seat for another student in an enrollment-capped but graduation-required course. NCAT typically doesn’t even look at these indirect costs and is often able to find significant direct cost savings by restructuring courses away from ineffective pedagogical approaches toward more effective pedagogical approaches that also happen to be more scalable. In MSU’s case, they projected that they would be able to lower the direct cost of the course by 17.8% while still achieving the primary goal of increasing effectiveness. The NCAT report notes,

The cost savings will remain in the psychology department and will be used to provide support for the redesigned course in the future, faculty wishing to take on additional course redesign projects and faculty travel to present at conferences related to the scholarship of teaching and learning.

But How?

MSU decided to redesign its course around what NCAT calls “the Replacement Model,” which can be thought of as a combination of flipped and hybrid. At this point most people have at least a basic idea of what “hybrid” means, but “flipped” is a lot less clear. The Chronicle of Higher Education recently published a column by Robert Talbert highlighting a group that is trying to establish a definition and best practices around what they call “flipped learning,” which they describe as follows:

Flipped Learning is a pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter.

That’s it in a nutshell: Move direct instruction (i.e., lectures) out of class time so that there can be more direct student interaction time. Which sounds great, but it leads to a frequently asked question. If students have to do all the homework they were doing before plus watching all the lecture videos at home, isn’t that going to dramatically increase the amount of time they have to spend on the class? How can they do all of that work? NCAT’s answer is that you give them back some of that time by making the class “hybrid” in the sense that you reduce their in-class seat time by 50%. That’s why it’s called the “Replacement Model.”

While Danae never used the term “flipped learning”, she did talk about the flipped classroom and made it very clear that she meant using it to increase the amount of class time spent interacting with students and focusing on their particular needs. But the MSU plan called for decreasing class time by 50% while doubling the number of students per class from an average of 153 to 300. How was that supposed to work?
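Before getting to the pedagogy, a quick aside on the capacity arithmetic (these are my own back-of-the-envelope numbers, not figures from the NCAT report): the roughly 18 sections of 153 students become about 9 sections of 300, and halving seat time on top of that brings total instructor classroom hours down to roughly a quarter of the original while serving the same overall enrollment. That is where the room for the projected cost savings comes from.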

Part of the answer lies in using traditional techniques like group projects, but a lot of it is in using data to provide students with more feedback and fine tune the classroom experience. This is where Pearson comes in. I wrote a while back that the promise of adaptive learning programs is to transform the economics of tutoring:

The simplest way to think about adaptive learning products in their current state is as tutors. Tutors, in the American usage of the word, provide supplemental instruction and coaching to students on a one-on-one basis. They are not expected to know everything that the instructor knows, but they are good at helping to ensure that the students get the basics right. They might quiz students and give them tips to help them remember key concepts. They might help a student get unstuck on a particular step that he hasn’t quite understood.  And above all, they help each student to figure out exactly where she is doing well and where she still needs help.

Adaptive learning technologies are potentially transformative in that they may be able to change the economics of tutoring. Imagine if every student in your class could have a private tutor, available to them at any time for as long as they need. Imagine further that these tutors work together to give you a daily report of your whole class—who is doing well, who is struggling on which concepts, and what areas are most difficult for the class as a whole. How could such a capability change the way that you teach? What would it enable you to spend less of your class time doing, and what else would it enable you to spend more of your class time doing? How might it impact your students’ preparedness and change the kinds of conversations you could have with them? The answers to these questions are certainly different for every discipline and possibly even for every class. The point is that these technologies can open up a world of new possibilities.

This is exactly how MSU is using MyPsychLab. One of the biggest benefits that Danae cited was being able to walk into a class knowing what students were doing well with and what they were struggling with. This enables her and her colleagues to focus on the topics that those particular students need the most help with in class while simultaneously communicating to the students that their teacher is aware of how they are doing and what they need. Likewise, she said that the students are coming to class more engaged with better questions. MSU also uses clickers in class to augment the feedback loop that they are getting from the homework platform. This certainly was a critical enabler at a class size of 300 and would be useful in a significantly smaller lecture class as well.

Did it work? The results are overall very positive but mixed:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course.

Intuitions and Open Questions

The study of the course redesign was intended to measure the overall impact of the effort rather than to research the components of efficacy, which means that we don’t have good data from which we can draw strong conclusions on the most interesting questions in this regard. But I’m not afraid to make some guesses, and I asked Danae to do the same with me. To be clear, her first answer to any of the questions I’m going to bring up in this section of the post was consistently along the lines of, “I don’t have data that speaks to that question.” Which is the right answer. I want to be clear that wherever I reference her opinions here, it was in this context and she was appropriately tentative.

First of all, what role did MyPsychLab have in the improvements? Here we have at least one hard number:

A significant portion of the redesigned course utilized publisher-customized digital learning technology. A correlation was calculated between the students’ online total score of assigned material and the total of five exam scores. This correlation was .68, p < .001 suggesting a strong relationship between the completion of online learning activities and exam performance.

But why? Obviously, practice is part of the equation. Students who do the homework tend to do better in classes in general. I asked Danae what she thought the big drivers were beyond that. She cited the feedback to faculty and student engagement. The product seemed to succeed in getting students engaged, from her perspective. When pressed about the degree to which the adaptive component of the product made a difference, she guessed that it wasn’t as big a factor. “My gut tells me that it is less about the personalization,” she said. But then she added that the personalization may have helped to drive student engagement by making the students feel like the content was tailored to their needs. “I think personalization is the part that appeals to the students.” This raises the question of the degree to which any gains we see from an adaptive product may be because of a digital…er…analog to teaching presence, as opposed to the software’s real ability to adapt to individual student needs and capabilities.

Second, I asked Danae to characterize how much she thinks adopting MyPsychLab would have driven improvements had it been added to the original class before the redesign. Her instinct was not nearly as much, which is my instinct too. We don’t have numbers to separate the impact of the practice from the impact of the tailored instruction that resulted from having the student data in the product. Nor do we know how much student engagement with the product was impacted by the fact that it was integrated into the whole course redesign. These would be important questions to answer before we can have a clear and nuanced answer to the question of the product’s “efficacy.” Efficacious under what circumstances?

Finally, I’d like to return to David Wiley’s question about cost as a proxy for access and its impact on efficacy. Danae was traveling and didn’t have access to the course materials cost information when I reached her by email today, but she was confident that the cost had not gone up significantly and thought it might have actually gone down post-redesign. (And no, Pearson did not subsidize the cost of MyPsychLab to the students.) So we have no before/after data from which we can make inferences regarding the impact of cost on student outcomes. But it’s possible that MSU could have had a more significant impact on its DFW rate had the total cost to the students been lower. It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline in enrollment of 126 students, which impacted their ability to deliver decreased costs to the institution. Would they have seen different enrollments had the curricular materials been less expensive? Or free? We don’t know. But this raises the point that efficacy cannot be reduced to one aggregate number. Improving aggregate student test scores and reducing the number of students who fail or withdraw are two different goals which certainly need to be measured differently and probably need different sorts of interventions to achieve.

Postscript

After this post went live, Danae shared some data with me from the semesters after the NCAT report was published. As it turns out, the course did see a significant reduction in its DFW rates, and enrollments bounced back over the course of several semesters. You can read about the details, as well as possible explanations, here.

The post Efficacy, Adaptive Learning, and the Flipped Classroom appeared first on e-Literate.

Categories: Planet Sakai

Dr. Chuck: Altering a UNIQUE Constraint in a MySQL Table

Planet Sakai - Tue, 04/01/2014 - 8:57pm

It took me a while to figure out how to drop and recreate a UNIQUE constraint on one of my tables. So I figured I would record the slick little sequence of commands here to help my memory and save me time next time:

SHOW CREATE TABLE t_lti_link;
 
CREATE TABLE `t_lti_link` (
`link_id` int(11) NOT NULL AUTO_INCREMENT,
`link_sha256` char(64) NOT NULL,
`link_key` varchar(4096) NOT NULL,
`context_id` int(11) NOT NULL,
`title` varchar(2048) DEFAULT NULL,
`json` text,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`link_id`),
UNIQUE KEY `link_sha256` (`link_sha256`),
KEY `t_lti_link_ibfk_1` (`context_id`),
CONSTRAINT `t_lti_link_ibfk_1` FOREIGN KEY (`context_id`) REFERENCES `t_lti_context` (`context_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=74 DEFAULT CHARSET=utf8
 
ALTER TABLE t_lti_link DROP INDEX link_sha256;
 
SHOW INDEX FROM t_lti_link;
 
ALTER TABLE t_lti_link ADD UNIQUE(link_sha256, context_id);
 
SHOW INDEX FROM t_lti_link;
 
SHOW CREATE TABLE t_lti_link;
 
CREATE TABLE `t_lti_link` (
`link_id` int(11) NOT NULL AUTO_INCREMENT,
`link_sha256` char(64) NOT NULL,
`link_key` varchar(4096) NOT NULL,
`context_id` int(11) NOT NULL,
`title` varchar(2048) DEFAULT NULL,
`json` text,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`link_id`),
UNIQUE KEY `link_sha256` (`link_sha256`,`context_id`),
KEY `t_lti_link_ibfk_1` (`context_id`),
CONSTRAINT `t_lti_link_ibfk_1` FOREIGN KEY (`context_id`) REFERENCES `t_lti_context` (`context_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=74 DEFAULT CHARSET=utf8

Sweet. I am loving that MySQL magic.
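
As a quick check of what the new composite key allows, here is a minimal sketch with made-up values; it assumes that contexts with context_id 1 and 2 already exist in t_lti_context, and the hash and key strings are placeholders rather than real values:

INSERT INTO t_lti_link (link_sha256, link_key, context_id, created_at, updated_at)
VALUES ('hash-aaa', 'key-1', 1, NOW(), NOW());  -- succeeds
 
INSERT INTO t_lti_link (link_sha256, link_key, context_id, created_at, updated_at)
VALUES ('hash-aaa', 'key-1', 2, NOW(), NOW());  -- also succeeds: same hash, different context
 
INSERT INTO t_lti_link (link_sha256, link_key, context_id, created_at, updated_at)
VALUES ('hash-aaa', 'key-1', 1, NOW(), NOW());  -- rejected: duplicate (link_sha256, context_id) pair

With the old single-column key, the second insert would also have failed, since the same link_sha256 could appear only once in the table.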

Categories: Planet Sakai

Michael Feldstein: Efficacy Math is Hard

Planet Sakai - Tue, 04/01/2014 - 12:12pm

David Wiley has a great post up on efficacy and OER in response to my original post about Pearson’s efficacy plan. He opens the piece by writing about Benjamin Bloom’s famous “2 sigma” problem:

The problem isn’t that we don’t know how to drastically increase learning. The two-part problem is that we don’t know how to drastically increase learning while holding cost constant. Many people have sought to create and publish “grand challenges” in education, but to my mind none will ever be more elegant than Bloom’s from 30 years ago:

“If the research on the 2 sigma problem yields practical methods – which the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction – it would be an educational contribution of the greatest magnitude.” (p. 6; emphasis in original)

So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter. And cost matters greatly.

David then launches into a discussion of what he calls his “golden ratio,” or standard deviations per dollar. I have long been a fan of this formulation and quote it frequently. I’m not going to try to summarize his explication of it in his post; you really should go read it. But I would like to tease out a few implications here.

Cost/Effectiveness Analysis

By expressing cost and educational impact in a ratio, David is engaging in something called cost/effectiveness analysis. You may be more familiar with the closely related term “cost/benefit analysis.” The main difference between these two is that in the latter benefit is expressed in financial terms while in the former it is expressed in non-financial terms (such as learning gains, in this case). This is a powerful tool which is unfortunately misapplied more often than not. When people invoke cost/benefit, what they often mean to invoke is cost, as in, “Do you really think this is worth it?” It is used to selectively question an expenditure that somebody doesn’t like. (Note that I am not accusing David of making this error; I’m just talking about common usage.) In Congress, cost/benefit is often a requirement tacked on to a bill to decrease the likelihood that the thing the amendment author doesn’t like will actually get funding. Likewise in education, cost/benefit or cost/effectiveness is loosely invoked for things that the invokers don’t think are worth the money up front, whether it’s textbooks, LMSs, or teacher salaries.

But the better way to apply the tool is comparatively across the range of possible investment decisions. “Given X amount of money, do we get more standard deviations for our dollars by investing in A or B?” This moves us away from a focus on preventing spending on things we don’t like and toward a focus on maximizing utility, which is what David is after. And this is where it gets complicated. A good part of David’s post is about the complexities of measuring and impacting the numerator in standard deviations per dollar. Unfortunately, we have a lot of trouble tracking the denominator as well. Even the institutional costs can be complex, as Phil’s recent response to Chris Newfield regarding the true cost of the UF/Pearson deal illustrates.

It gets a lot more complicated when we start asking, “Cost to whom?” The controversy around the UF deal centers around the cost to the institution and ultimately to the state. Textbooks are paid for by students. Mostly. Sort of. Except when they spend university scholarship money on them. Or state or Federal financial aid on them. None of this argues against the framework that David is presenting. It just makes the practical application of it more challenging.
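
To make the “A or B” comparison concrete before getting into those complications, here is a purely illustrative calculation; the numbers are invented for this example and are not drawn from Wiley’s post or from any study. If intervention A produces a 0.4 standard deviation learning gain at a cost of $120 per student, that works out to roughly 0.0033 standard deviations per dollar. If intervention B produces a 0.25 standard deviation gain at $30 per student, that is roughly 0.0083 standard deviations per dollar. A delivers the larger absolute gain, but B delivers about two and a half times as much gain per dollar, so for a fixed budget the ratio favors B.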

But It’s Worse Than That

So far, we’ve been talking about the ratio as if “efficacy” is represented in the numerator. David reinforces this impression when he writes,

So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter.

But that’s not really his argument. His argument is that cost impacts access, which in turn impacts efficacy. If students fail to use the prescribed product because they cannot afford to buy it, and they therefore do poorly in class, then the cost of the product is inextricable from the measure of its efficacy. This is an excellent example of what Mike Caulfield meant when he referred to the “last mile” problem. An educational product, technique, or intervention can only be said to be “effective” when it has an effect. It can only have an effect if it is actually used—and often only if it is actually used in the way it was designed to be used. Of course, if students can’t afford to buy the product, then they won’t use it and it therefore is not effective for them.

So maybe the entire ratio, including numerator and denominator, collectively expresses a measure of effectiveness, right? Not so fast. There are two colleges that are fairly close to where I live. One, Berkshire Community College, has a total non-residential cost of $5,850 per year for Massachusetts residents taking 15 credits per semester. The other, Simon’s Rock College, has a total residential cost of $60,000 per year. A cost of $100 for curricular materials could have a dramatic impact on access (and therefore efficacy) in the former environment but a negligible one in the latter. Standard deviations per dollar does not capture this difference. We could instead express the denominator in terms of percentage of total cost, which would help somewhat for this particular purpose. But what we really need is empirical data quantifying the impact of cost on student access under different conditions. Doing so would enable us to separate the numerator and the denominator once again. If the impact of cost for a particular educational population is already factored into the numerator, then we can get back to a discussion of bang for the buck. We also can make more nuanced evaluations. It may be that, because of the access issue, a commercial product is more effective for Simon’s Rock students than it is for BCC students. Further, we could (theoretically) perform a calculation to determine its effectiveness for University of Massachusetts students, which would presumably be different from either of the other two.
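
To put rough numbers on that example, using the figures above: $100 is about 1.7% of the $5,850 annual cost at Berkshire Community College but only about 0.17% of the $60,000 annual cost at Simon’s Rock, roughly a tenfold difference in relative burden, even though a denominator expressed in raw dollars treats the two cases identically.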

I guess what I’m trying to say is that efficacy is complicated. It’s a great goal, but teasing out what it means and how to measure it in authentic and useful ways is going to be very difficult.

The post Efficacy Math is Hard appeared first on e-Literate.

Categories: Planet Sakai

Clarifications on UF Online Payments to Pearson Embanet

Sakai Feeds - Mon, 03/31/2014 - 5:35pm
I wrote a post over the weekend that included information from the Gainesville Sun about the University of Florida Online (UF Online) program and its expected payments to Pearson Embanet. Chris Newfield from Remaking the University also wrote on the subject today. Chris raises some very important issues in his post, including his point:

Universities may have a cost disease, but they now have a privatization disease that is even worse.

In the article, however, there seems to be a misunderstanding of how the revenue sharing agreement works. Given the importance of the questions that Chris raises, I think it is worth understanding the payment model used by most Online Service Providers (OSPs), such as the one in place at UF Online. The part of the blog post that is mistaken, in my understanding, is this [emphasis added]:

UF did admit that it had to pay Pearson cash up front: it just wouldn’t say how much. A week later, Mr. Schweers reported that through various documents he’d been able to show that UF would pay Pearson Embanet $186 million over the 11-year life of the contract. The business plan sounds much like the Udacity-Georgia Tech deal. It involved very large growth projections to 13,000 students paying full in-state or non-resident tuition for an all-online program by 2018, with Pearson getting, in addition to its fee, $19 million of $43 million in projected revenues. 13,000 is the size of UF’s first year class.

The revenue estimates are worth pondering. Even if Pearson fails, it will effectively pocket all of the state funding that was given to UF for online, and some internal UF money besides. Pearson is owed $186 million over time for getting involved, and the state provided $35 million. Pearson will contractually absorb all of the state money and then be entitled to another $151 million of UF’s internal funds. (UF Associate Provost Andy McDonough says that Pearson will get $9.5 million in the first five years, but it is not clear whether or how this reflects the still partially redacted contract.) If somehow the Pearson dragnet finds thousands of students to pay full tuition for an all-online program with the University of Florida name, UF is slated to gross $24 million in 2019, which is projected to rise to $48 million five years later. In this best possible scenario, UF will get back its initial $151 million around ten years from now. The University will thus be ready to earn its first net dollar in 2025.

The basic idea is that the OSP provides up-front investment, spending far more money in the initial years of an online program than it makes from the school. This is why 2U is growing quickly ($83.1 million revenue on 49% growth) but is still losing big ($27.9 million last year, with unclear prospects for breaking even). Most of 2U’s programs are in the early stages, when they are investing more in the online program than they are making in revenue.

In the UF Online case, UF appears to be paying Pearson Embanet $9.5 million for the first five years as partial compensation for these up-front expenses. I believe that Pearson will internally spend far in excess of $9.5 million over the next five years, running a loss. During that same startup period, however, the Florida legislature will fund UF Online with $35 million. Pearson will only make 27% of this money ($9.5 million of the $35 million) if the Gainesville Sun is correct in its analysis of the contract.

After 2019, all payments shift to revenue from tuition and fees paid by students, as described by the Sun:

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million. By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCullough said.

OSPs typically take a percentage of the tuition revenue based on enrollment targets. What is important here is that the revenue for the OSP depends on enrollment – if UF Online does not hit the enrollment targets, Pearson Embanet will not get $186 million in revenue. They make a percentage of the revenue without guaranteed payments.

In the best possible scenario for UF Online and for Pearson Embanet, the school will start making money from students on day 1. In 2019, if UF Online hits enrollment targets, UF Online will net $24 million ($43 million of revenue, paying $19 million to Pearson Embanet). As enrollment grows (again, assuming that it does), UF Online will make more and more over time, estimated to net $48 million in 2024 ($76 million of revenue, paying $28 million to Pearson Embanet). If UF Online does not hit its targets, both UF Online and Pearson Embanet will make far less than the projections in the article.

As mentioned before, Chris raises some important questions, but this is not a matter of a school paying all revenue to an OSP without seeing a dime of net revenue until 2025 and beyond.

Update (3/31): I found the spreadsheets in the business plan, and these contract numbers are directly derived from the plan. The key is that they label Pearson Embanet (the OSP) as “P3” for Public/Private Partnership (see page 87 for the explanation). As for the mechanism to pay Embanet, they use a moving scale, with different percentages of revenue split per year and per in-state or out-of-state tuition. In 2015 Pearson Embanet makes 40% of the in-state tuition and 60% of the out-of-state tuition, and then in 2022 they make 30% and 42%. The table on page 84 of the business plan also shows the “Additional Fixed Fee” of $9.5 million broken up over the first five years.

On page 82 these numbers are applied to the estimated enrollment, with the resultant fee to Pearson Embanet labeled as “P3 Services”. This combines the tuition sharing with the additional fixed fee. For example, in 2016, ($3.2 m * 0.4) + ($7.2 m * 0.6) + $2.0 m = $7.6 million. If you add up the row labeled “P3 Services” you get the total of $186 million.

What is unknown from this analysis is Pearson Embanet’s internal costs. The document on page 87 includes the following language, which seems to quantify the Embanet “investment” as “direct cost savings realized from these transfers” at $14 million per year:

There are some recognizable cost transfers in the service purchase, “partnership” plan. It is admittedly difficult to capture all of the services that are part of an external package in an internal matrix subject to per unit, per student, or per activity pricing. However, there are recognizable cost transfers in the market assessment, marketing services, recruitment, contact call center, production (on demand), program coordinators (retention), digital content and tutoring. The direct cost savings realized from these transfers is estimated at about $14 million per year.

The present value of the P3 services annualized is approximately $15 million. The University believes the summation of the immediacy of the expertise, the on-request availability, the joint research opportunities, and the expanding innovative digital content represent greater value added than the differential.

Update (2): Note that these are projections that seem to be best-case scenarios.

Full disclosure: Pearson is a client of MindWires Consulting, but not for OSP services. All information here is from public sources.

The post Clarifications on UF Online Payments to Pearson Embanet appeared first on e-Literate.