Aggregation of feeds from different Web 2.0 resources (YouTube, Flickr, Delicious, SlideShare, Twitter, Vimeo) regarding Sakai
As part of the OXCAP project we have had to devise an algorithm to decide when a course should be shown as ‘upcoming’ and when it should be marked as ‘previous’ in WebLearn’s SES tool. I thought it may be useful to outline that algorithm here in case training providers need to reference it.
The process is complex because specifying the start and end date of a course is not mandatory, nor is it mandatory to say when booking opens and closes.
In the following, ‘dtf attribute’ is the date and time of a course specified in ISO 8601 format. The <start> and <end> XCRI-CAP elements represent the course start and end dates respectively. (For training providers supplying data in XLSX (Excel) format these correspond to columns 10 (J) and 29 (AC) of the spreadsheet.) The <applyUntil> and <applyFrom> XCRI-CAP elements correspond to the booking closes and booking opens dates. None of these XCRI-CAP elements is mandatory.
look at the dtf attribute of <end>: if it is in the future, mark the course as ‘upcoming’; if it is in the past, mark the course as ‘previous’
if <end> has no dtf attribute, look at the dtf attribute of <start>: if it is in the future, mark the course as ‘upcoming’; if it is in the past, mark the course as ‘previous’
if <start> has no dtf attribute, look at the dtf attribute of <applyUntil> (booking closes): if it is in the future, mark the course as ‘upcoming’; if it is in the past, mark the course as ‘previous’
if <applyUntil> has no dtf attribute, look at the dtf attribute of <applyFrom> (booking opens): if it is in the future, mark the course as ‘upcoming’
otherwise, for the course to be legal it must have human-readable text between the <start> tags, so it must be ‘upcoming’
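To make the cascade concrete, here is a minimal Java sketch of the logic described above. This is hypothetical illustration, not WebLearn's actual code: the method and parameter names are mine, and it assumes the dtf values have already been parsed out of the XCRI-CAP feed.

    import java.time.Instant;
    import java.util.Optional;

    // Hypothetical sketch of the 'upcoming'/'previous' cascade described
    // above; not the actual WebLearn SES code. Each Optional is empty when
    // the corresponding XCRI-CAP element has no dtf attribute, e.g.
    // <end dtf="2013-06-07T17:00:00Z">June 2013</end>.
    class CourseStatus {
        static String classify(Optional<Instant> end, Optional<Instant> start,
                               Optional<Instant> applyUntil, Optional<Instant> applyFrom) {
            Instant now = Instant.now();
            if (end.isPresent()) {
                return end.get().isAfter(now) ? "upcoming" : "previous";
            }
            if (start.isPresent()) {
                return start.get().isAfter(now) ? "upcoming" : "previous";
            }
            if (applyUntil.isPresent()) {
                return applyUntil.get().isAfter(now) ? "upcoming" : "previous";
            }
            // <applyFrom> in the future means booking has not yet opened.
            if (applyFrom.isPresent() && applyFrom.get().isAfter(now)) {
                return "upcoming";
            }
            // No machine-readable dates at all: a legal course must still
            // carry human-readable text in <start>, so default to 'upcoming'.
            return "upcoming";
        }
    }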
The upshot of this is that it is important to supply course end dates; otherwise, assuming a start date has been specified, a course will be earmarked as ‘previous’ as soon as it has begun.
I will be submitting this abstract to the 2013 Apereo Conference in San Diego, June 2-7, 2013.
The Sakai project is nearly ten years old, having started informally in June 2003 and then been formally funded by the Andrew W. Mellon Foundation in January 2004. There is no question that Sakai has brought tremendous value to the LMS market – even for schools that have never used or installed the product. Sakai has been a force for good and has shown other LMS systems the right path to meet the real needs of their users. Sakai is the only Apache-style open source / open community LMS project in the marketplace. As of November 2012, Sakai represented six percent of the US LMS marketplace. These are all impressive results, and with the release of Sakai 2.9 we have a product that we can be very proud of (and everyone should upgrade to it as soon as possible). But at the same time, we cannot rest on our laurels; we need to think carefully about the kinds of activities we will undertake beyond Sakai 2.9 to maintain and strengthen our place in a marketplace increasingly shaped by standards-oriented, component-based learning systems and by trends toward multi-tenancy, software as a service, MOOCs, and extreme scalability. In this presentation, we will take a look at the past, present, and future of the Sakai Collaborative Learning Environment.
This is just a draft – comments welcome.
The transformation of the higher education LMS market continues, and I expect more changes over the next 2–3 years. However, it seems time to capture a snapshot of a key segment of the market, based on two recent announcements that directly impact large online programs.
UMUC Selects Desire2Learn
Following a multi-year strategic study and product selection, the University of Maryland University College (UMUC) selected Desire2Learn to replace the homegrown WebTycho LMS in use for over a decade. While there has been no press release, there are now official documents publicly available that describe this effort. Per the Faculty Advisory Committee (FAC) Newsletter:
As announced last year, UMUC is planning to replace WebTycho with another learning management system. After a great deal of questions for the vendors, reference checks and tests of functionality, Desire2Learn was chosen. More than 200 faculty, staff and students worldwide were part of the decision, which was reported to have been by an overwhelming majority. The roll out will occur gradually over the next year or two. [snip]
Not only is UMUC moving away from the WebTycho platform to the Desire2Learn (D2L) learning management system, but additional changes are in the works which will transform the online learning experience for our students and faculty.
UMUC is part of the University of Maryland system, and it has a global focus – serving over 90,000 students worldwide.
Mississippi Community Colleges Select Canvas
The Mississippi Virtual Community College (MSVCC) provides the LMS for the system’s online program as well as for all 15 campuses. After a 6-month RFP process, MSVCC will migrate from Blackboard Learn and Desire2Learn to Canvas as the new LMS. From the press release:
The Mississippi Community College Board and the state’s 15 community colleges recently selected Canvas by Instructure as the learning platform for the Mississippi Virtual Community College (MSVCC). This partnership marks the beginning of a five-year engagement with Instructure, an innovative company in eLearning technology. Canvas will provide the critical tools necessary to ensure that the MSVCC accomplishes its strategic goals centered around teaching and learning. Because Canvas runs in “the cloud,” MSVCC will recognize a cost savings of approximately $1.5 million per year. The MSVCC is expected to “go live” on the Canvas platform beginning with the summer 2013 term. [snip]
The MSVCC is a collaborative effort among all 15 community colleges to offer the full array of academic, career, and technical courses necessary to earn an Associate of Arts or an Associate of Applied Sciences online. The MSVCC dates back to fall 2001 and is recognized nationally as among the best eLearning models in the nation. During the initial launch of the MSVCC, a total of 4,781 students were enrolled in 8,281 classes. During the fall 2012 semester, 28,576 students took 60,883 classes.
Disclosure: I have advised on both UMUC and Mississippi projects as well as several additional schools listed below. All information for this post, however, is based on public information unrelated to the consulting projects.
Growing Role of Non-Profits in Online Education
Both of these selections highlight the growing role of non-profit institutions – both public and private – in online education. Throughout the 2000s, for-profit institutions such as the University of Phoenix and DeVry provided the bulk of large online programs, with only a handful of non-profits providing online programs at scale. In the past two years, however, traditional non-profit programs have grown significantly while most for-profit schools have shrunk in enrollment.
In the following graphic, the size of the bubble represents enrollment in online programs or courses. For example, the University of Phoenix has almost 300,000 online students, while UCF has approximately 30,000 online students. Bubble sizes are estimates only, as there is no standardized method for accredited institutions to report online enrollment. The source of enrollment numbers is a combination of IPEDS data, web site research, press releases, and secondary studies such as Parthenon Group’s report.
A few notes:
One major difference between large online programs and the broader higher education market is the strong usage of homegrown systems in online programs. Michael and I have written about the University of Phoenix and their investment in learning platforms, but Rio Salado, WGU and ITT also use homegrown systems.
The online program market is where Pearson eCollege (aka LearningStudio) does most of its business. The commonly-cited number from the Campus Computing Project – that eCollege has a 1% market share – does not include for-profit schools and is based on the number of institutions. By looking at enrollments, it is clear that Pearson is one of the biggest players in this segment of the market.
Like Pearson, LoudCloud Systems has focused on the online market, particularly with for-profit institutions.
Blackboard faced a dangerous situation with the end-of-life of the WebCT and ANGEL product lines (the ANGEL line was subsequently revived). As seen here, Blackboard has successfully moved many of its large online customers onto Blackboard Learn – including UMassOnline and the SUNY Learning Network. Blackboard still has online schools (e.g. Excelsior) facing an LMS decision due to perceived or real end-of-life of LMS products.
Desire2Learn has built up a niche of public online programs, with the UMUC selection added to the Colorado Community Colleges Online program.
While Canvas has based its growth mostly on traditional face-to-face programs, the additions of the University of Central Florida and MSVCC give Canvas its most significant presence in large online programs.
American Public University is the only large online program in the US using Sakai.
The post Snapshot of LMS Market for Large Online Programs in the US appeared first on e-Literate.
via Ian Dolphin, Executive Director, Apereo Foundation:
I am pleased to announce that the following candidates have been elected to the Board of Directors of the Apereo Foundation, subject to ratification by a meeting of the membership:
Alain Mayeur, President, ESUP‐Portail Consortium, IT Director, University of Valenciennes (France)
David Kahle, Vice President for Information Technology and Chief Information Officer, Tufts University (USA)
Dr. Douglas Johnson, Assistant Director for Learning Services, University of Florida (USA)
Professor Shoji Kajita, IT Planning Office / Academic Center for Computing and Media Studies, Kyoto University (Japan)
Sincere thanks to all the candidates willing to serve, and congratulations to those who were elected.
And as a personal note from me, congratulations and I look forward to working with you on the Board!
I had an unexpected opportunity to chat with the Apollo Group’s Rob Wrubel last week. Rob is their Chief Innovation Officer and Executive Vice President. It was a short conversation—only fifteen minutes—but boy, was it dense with information.
For starters, I got more clarity on the $1 billion technology investment that we keep hearing about. That’s not just for their LMS. It’s basically their entire learning- and learner-focused technology portfolio. If you think about the strong shift to online education that’s been happening in the for-profit education sector and the fact that Apollo serves over 300,000 students, it makes sense that they would need to make a massive investment in modernizing their IT infrastructure, including the LMS, but also their registrar software, their student/customer tracking software, their data centers—pretty much everything from soup to nuts. A billion dollars is still an impressive amount of money for a company to invest in just about anything, but to put it into perspective, Apollo makes about $4 billion in annual revenues. So an investment of 25% of that over a few years to upgrade their entire mission-critical IT infrastructure sounds forward-looking but reasonable.
We also talked a little bit about Apollo’s learning analytics. Wrubel identified a number of different layers. The first layer is their persistence and retention early warning system. This is the kind of learning analytics that is most widely adopted and well understood. (I have written about Purdue’s Course Signals as an example of this kind of system.) The second layer is cohort matching. Apollo places students into twelve- to eighteen-person learning cohorts. As you might imagine, getting the right mix of students in terms of their skills and abilities can be critical to the success of the cohort. The company has invested in analytics that help them get the right groups together. (By the way, this is one frontier that I think the MOOCs are eventually going to have to tackle. I suspect that one reason Coursera’s peer review functionality has gotten panned, despite a growing body of literature showing that calibrated peer review can work, is that they have vastly more heterogeneous groups than a traditional university class setting.)
The third layer is tracking student progress across the curriculum, looking for the drop-off points. In a way, this is a complement to the retention early warning analytics, but looking at it from the perspective of finding the rough patches that are likely to cause students trouble and sanding them down, as opposed to finding students who are in rough patches and helping them through. Wrubel put a lot of emphasis on completion, and specifically contrasted that emphasis with the high drop-out rate that we see in MOOCs. He also talked particularly about what he called “foundational learners,” which I suppose is a euphemism for remedial learners. “Foundational learners just carry more risk factors coming into the risk factors,” he said. “We have done a lot in this area.” He talked about providing many small doses of remediation over time, as opposed to pulling a student out for an eight-week block of remediation.
Next, he talked about using the technologies and approaches of Carnegie Learning to improve outcomes. I have written previously about the powerful techniques being developed at Carnegie Mellon University and the University of Pittsburgh to identify skill ladders in learning a particular subject and remediate students as they work their way up those skill ladders. Apollo acquired Carnegie Learning, a commercial spin-off that develops courseware based on this approach. The education giant clearly has big plans for leveraging their acquisition. Wrubel really emphasized the value that Carnegie brings to the table. He scoffed at using Google-like tricks to personalize learning through big data magic (which is very much in line with my recent critique). Instead, he talked about automating the thus-far labor-intensive process of discovering skill maps for different subjects and disciplines. That’s a pretty ambitious and important research project, and I will be curious to see what they are able to accomplish.
Finally, he talked a little bit about that activity stream work (and patent) that Phil wrote about recently. We were at the very end of our time, so I didn’t get as much on this as I would have liked, but the gist is that the innovation is not so much on the idea of an activity stream as it is figuring out which bits of activity stream data are important to which stakeholders. What do the students need to see? How about the teachers? How about the analytics systems? At this point I’m extrapolating (and speculating) from a few remarks, but my sense is that the technology may be more properly thought of as a data bus than as a Facebook-like interface. “Activity stream” should be thought of in this context as the underlying data structures that are being routed, filtered and prioritized, but it’s the verbs in the sentence rather than the nouns that Apollo seems to be emphasizing.
I am hoping to have some follow-up conversations with Rob and other Apollo stakeholders and will let you know what I learn.
The post Apollo Group’s Technology Investments appeared first on e-Literate.
Messenger: Pastor Hideyuki Okubo / Pastor Hide
In a piece for the New Republic that I missed when it came out, Kevin Carey provides a compelling diagnosis of the cost problem in higher education that’s worth quoting at length:
[H]ighly-profitable lower division courses in common subjects like Economics, Calculus, and Psychology have similar curricula at most colleges and rely on many of the same nationally-marketed textbooks. They are often taught by people with no formal training in teaching. These courses are, in the education context, commodities….
Non-profit colleges don’t pay taxes, even when they have billions of dollars in the bank. People can use their publicly-financed college vouchers—and, increasingly, claim lucrative tax credits—for private college tuition. Because nobody really knows which colleges provide the best education, consumers have been trained to think of colleges like a luxury good: The best are the most expensive, by definition.
Non-profit colleges also don’t have shareholders demanding that they maximize the difference between revenues and expenses. Instead, they’re run by administrators and faculty who are most interested in competing for status with other colleges, which is determined by the size, expense, and ornateness of the academic greenhouses in which basic research and scholarship are produced.
For-profit colleges, on the other hand, do have shareholders, and the for-profit sector has expanded rapidly in recent years. But most have made the very rational decision to get in on the subsidy scam….
In other words, everyone currently in the four-year higher education business has a host of strong incentives to raise prices and hardly any incentives to lower them. Unsurprisingly, prices often go up and almost never go down. In the long run, this will badly undermine the legitimacy of higher education and weaken the case for public subsidization. College will become a private good affordable only to the minority subset of the population that can afford it. America’s aggregate level of human capital will suffer and our competitive position relative to other nations will decline. According to the OECD, many other industrialized countries are already increasing their levels of college attainment faster than we are.
Colleges have a strong collective interest in preventing this from happening. But each college has a strong individual interest in mainlining student tuition hikes for as long as they can. After all, if only rich people can afford to attend your college, that means you have a selective-admissions college full of rich people—which is what most colleges want to be. It’s mathematically impossible for all colleges to win this game, but they all think they can be among the winners. And the people running them today are concluding, correctly, that they’ll likely be long gone before the day of reckoning comes.
All of which is to say that college tuition addiction, like any serious dependency, can’t be cured by gentle moral exhortation. College won’t kick the habit of raising prices until the things they care about—money and reputation—are seriously threatened by competitors. Therefore, federal policymakers should help create those competitors by helping establish many brand-new colleges and universities.
I think he goes a little too far in ascribing a particularly selfish and short-sighted mindset to college administration leadership, but that’s really irrelevant to his argument. At its heart, he is claiming that we have a tragedy of the commons situation. As long as you assume that the individuals making the decisions are rational economic actors, then their personal attitudes don’t really matter. And I think he’s right about the fundamental dynamic.
His answer, which he saw echoed (or at least hinted at) a few weeks later in President Obama’s State of the Union address, is to use the power of Federal Pell Grants to do an end run around existing accreditation processes, unbundle the degree, and empower non-traditional providers to compete. He writes in the original article:
Congress and the Obama administration should create a new policy framework under which organizations can become officially recognized providers of higher education. Note, I do not say “officially recognized colleges or universities.” That’s because one of the things that makes college so expensive is that colleges (and the college experience these institutions provide) are expensive and currently people can only receive government-subsidized higher education services from colleges. Under the new system, any provider could receive payment via Pell grants, federal loans, or other current and imagined federal aid systems if they agree to a few baseline conditions.
First, they would be subject to strict price regulation. They would be free to offer courses for less than the maximum allowable amount per credit, but not more. Second, they would have to be extremely transparent about quality. They would be required to provide public information about how much their students learn, and have their access to federal aid rescinded if students are not learning enough [emphasis added].
These new providers would not have to be approved by independent accrediting bodies run by existing colleges and universities, as recipients of federal aid are today. In fact, they wouldn’t have to be colleges at all. InsideHigherEd recently reported that a pair of well-known Stanford professors are currently teaching an Artificial Intelligence course to about 200 Stanford students—and more than twenty thousand students around the world, online. The non-Stanford students won’t receive credits from Stanford, but they will receive official documentation from the professors as to how they scored on course tests and their overall rank. Under this new system, those professors would be free to set up their own business teaching Artificial Intelligence over the Internet, and students would be free to pay them with federal aid. Other providers might take advantage of the fast-growing body of open educational resources—free online courses, videos, lectures, and syllabi—and add value primarily through mentoring, designing course sequences, and assessing learning.
Students, of course, won’t want to pay for these courses if they can’t receive college credit that can be translated into a degree. So as part of the new system, any existing colleges that want to continue receiving federal financial aid will be required to accept any credits granted by participants in the new system in transfer. Because these new providers will have the imprimatur of United States government approval, they will be able to compete for students who want degrees backed by sufficient reputation. And because they will be inexpensive and attached to verifiable data about how much students are learning, they will make a compelling value proposition when competing with traditional colleges that have no such data, charge more money, and are weighed down by legacy expenses and change-resistant cultures [emphasis added].
I like the general idea of opening up opportunities for new entrants and changing the financial dynamic to break the tragedy-of-the-commons vicious cycle, but there’s a bit of magical thinking regarding the verification of quality. How would we do that? What would it look like? Frankly, we’re not very good at verifying the quality of even traditional programs. There’s a certain amount of social capital that these institutions earn through their long histories, reputations and, above all, their alumni. As Carey points out, this system doesn’t work very well. But dismantling it without a clear sense of what would replace it is dangerous, and I see no indication that he (or anybody else) has good ideas about quality verification in an unbundled system. I’m not throwing up my hands and giving up on that question, or saying that it should be an excuse to justify the status quo. But I don’t think we should ignore it either.
The post Kevin Carey Gives the Right Diagnosis; I’m Less Sure About the Prescription appeared first on e-Literate.
I use these bits of technology to record my MOOC lectures:
http://www.techsmith.com/camtasia.html (for Mac)
http://www.amazon.com/Logitech-Widescreen-Calling-Recording-960-000764/dp/B006JH8T3S/ref=dp_ob_title_ce (Logitech 920)
I record on a 4-CPU MacBook Pro 15 with an SSD drive, and even it seems to labor a bit; I tried a recent 2-CPU MacBook 13 and it could not keep up. Camtasia does a great job of compressing the video without loss, but it is a bit CPU-heavy. If you look, the Camtasia files are surprisingly small, so it is easy to archive the original high-quality materials instead of the rendered MP4 files.
The Logitech camera drivers for Mac are kind of weak, so the WebCam Settings tool is very important: it lets me adjust and fix the color balance and turn off auto-focus, which keeps me from looking too blue or randomly changing colors, and stops the focus wandering while I wave my hands.
I have derived some settings for the screen layout and, by experiment, some compression settings for YouTube and for the MP4s that I make. I find that I need to make my files about 2X larger to keep them looking good on YouTube: when I export from Camtasia, I need the quality setting at the 3/4 mark to get good results. For files that will just be played in QuickTime or kept for archive, I export with Camtasia’s quality setting at the 1/2 mark.
For very wide-screen videos with a big version of me on the right-hand side, which I produce for MOOC / distance education, I use a 1280×525 canvas in Camtasia.
For situations where a screencast will be played in a classroom, I make the Camtasia canvas 1024×768 and move the image around, or even remove it, to keep it off the slide content.
Folks have more pixels on their computers than in classroom projectors :)
To help with testing foreign character sets I have created two additional character sets on my Mac that remap the standard British QWERTY keyboard to produce readable but non-standard characters. This allows me to easily enter characters outside the normal ASCII range that are still readable. I created these character maps with Ukelele, which allows me to load the British character map and then remap the keys. Once I've remapped the keys I export the character maps as a bundle, put them in /Library/Keyboard Layouts, and then use System Preferences to add them to the list of available keyboard maps.
So the two character maps are:
British - Foreign : This uses accented characters that are below 0xFFFF in Unicode and so in plane 0. Ḥėṙė ïṡ ȧṅ ėẋȧṁṗĺė ȯḟ ṫẏṗïṅġ ẇïṫḥ ṫḥïṡ ḳėẏḃȯȧṙḋ ṁȧṗ.
British - Mathematical : This uses mathematical versions of the standard Latin characters, which are above 0xFFFF, in this case in plane 1 of Unicode.
So an issue has come to light in our local Sakai deployment with character encodings. Some requests were being incorrectly interpreted as encoded using ISO-8859-1 instead of UTF-8. Before I explain what was going on, here is some background.
The HTML specification has stuff about encodings, and the short of it is that if you are making a request to a web server using a GET then you shouldn't have any foreign characters in the URL; you should just be using ASCII. In practice you can use foreign characters if you UTF-8 encode them, as that is the encoding browsers are commonly assumed to use, but it's not part of the standard. For example, Google Chrome will happily display a URL with UTF-8 encoded characters at the end.
The URL in the referencing page ends 096/%E1%B8%9F%C8%AF%E1%B9%99%C3%AF%C4%97%C4%A1%E1%B9%85.%E1%B9%AB%E1%BA%8B%E1%B9%AB which is the URL encoded version of the UTF-8 characters "096/ḟȯṙïėġṅ.ṫẋṫ".
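As a quick illustration (not from the Sakai codebase), java.net.URLDecoder shows how much the assumed character set matters when decoding that path:

    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;

    public class DecodeDemo {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String raw = "096/%E1%B8%9F%C8%AF%E1%B9%99%C3%AF%C4%97%C4%A1%E1%B9%85.%E1%B9%AB%E1%BA%8B%E1%B9%AB";
            // Decoding with UTF-8 recovers the foreign characters...
            System.out.println(URLDecoder.decode(raw, "UTF-8"));      // 096/ḟȯṙïėġṅ.ṫẋṫ
            // ...while decoding with ISO-8859-1 produces mojibake, which is
            // exactly the class of bug described below.
            System.out.println(URLDecoder.decode(raw, "ISO-8859-1"));
        }
    }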
If you have a browser making POSTs to the server, then you have a choice of two ways of submitting the data: application/x-www-form-urlencoded (which is the default on a <form> tag) or multipart/form-data. If you are using characters outside ASCII in your form then you should use multipart/form-data, as browsers don't typically say what encoding they are using when performing application/x-www-form-urlencoded submissions, although lots of people assume it to be UTF-8.
Servlet request decoding
When a request comes in to Tomcat an HttpServletRequest object is built, and this includes the raw request path as well as a decoded one. Containers such as Tomcat will commonly use ISO-8859-1 to decode the path, although this can be overridden in configuration. If the request is a POST and the content type is application/x-www-form-urlencoded then the container must also make the form data available as parameters, and it will decode any characters using the character set supplied by the browser. However, most browsers don't appear to send a character set when submitting urlencoded POSTs, so the container falls back to its default, which in the case of Tomcat is ISO-8859-1. This can be overridden by calling ServletRequest.setCharacterEncoding(String).
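As an illustration of that override, a minimal filter can pin the encoding before anything reads the parameters. This is a sketch in the spirit of what Sakai does, not the actual RequestFilter source:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Simplified sketch of an encoding filter; not the Sakai source.
    public class Utf8EncodingFilter implements Filter {
        public void init(FilterConfig config) { }
        public void destroy() { }

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            // Only effective if no code has touched the parameters yet:
            // the first call to getParameter() fixes the encoding for good.
            if (request.getCharacterEncoding() == null) {
                request.setCharacterEncoding("UTF-8");
            }
            chain.doFilter(request, response);
        }
    }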
If a request is a POST and the content type is multipart/form-data then the container doesn't do any decoding, and it's up to the application to decode the body of the request and extract any parameters from it. This is one reason why people adopt application/x-www-form-urlencoded forms: it means they don't have to deal with parsing the requests, although there are lots of frameworks that help with this.
Part of the reason for only having the container decode application/x-www-form-urlencoded requests is that multipart/form-data is often used for file uploads, which may be large, in which case you have to be careful about when you consume the upload and where you put the data.
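For completeness, here is roughly what application-side multipart parsing looks like with Apache Commons FileUpload, one of the libraries commonly used for this. An illustrative sketch under that assumption, not Sakai's actual code:

    import java.io.UnsupportedEncodingException;
    import java.util.List;
    import javax.servlet.http.HttpServletRequest;
    import org.apache.commons.fileupload.FileItem;
    import org.apache.commons.fileupload.FileUploadException;
    import org.apache.commons.fileupload.disk.DiskFileItemFactory;
    import org.apache.commons.fileupload.servlet.ServletFileUpload;

    public class MultipartExample {
        // Illustrative sketch: parse a multipart request and decode each
        // form field explicitly, so we never rely on a container default.
        static void handle(HttpServletRequest request)
                throws FileUploadException, UnsupportedEncodingException {
            DiskFileItemFactory factory = new DiskFileItemFactory();
            ServletFileUpload upload = new ServletFileUpload(factory);
            List<FileItem> items = upload.parseRequest(request);
            for (FileItem item : items) {
                if (item.isFormField()) {
                    String value = item.getString("UTF-8"); // decode explicitly
                    System.out.println(item.getFieldName() + " = " + value);
                }
                // Otherwise it is an uploaded file: stream it somewhere
                // sensible via item.getInputStream().
            }
        }
    }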
Sakai and character sets
So Sakai supports Unicode and uses UTF-8 as its default encoding. But it does this by configuring all requests to use UTF-8. In the Tomcat connector configuration the URI encoding is specified to be UTF-8 instead of ISO-8859-1 (the URIEncoding="UTF-8" attribute on the <Connector> element in server.xml), and the Sakai request filter which preprocesses all requests to Sakai (RequestFilter) sets the request encoding (if not already set) to UTF-8 for any URL-encoded form submissions.
This means that you can create a form in Sakai and leave the encoding as application/x-www-form-urlencoded, and because the RequestFilter sets the encoding to UTF-8 everything works. Really this is a bug and the form should be changed to use the correct encoding, but as it generally works nobody notices. The more technically correct solution would be to have the original form submission made using multipart/form-data, as that way you normally get the character encoding used by the browser when submitting the request.
Back to the problem...
We had a filter that was doing some authentication (OAuth) before the standard Sakai request filter. It was all working correctly, but we started seeing bugs when people submitted requests with foreign characters in them. After some investigation it turned out that the cause was the OAuth filter, through its inspection of the request parameters.
The OAuth filter needs to look at the request parameters to extract any authentication information, but in doing so it causes the servlet container to decode all the request parameters. By default (and following the spec) a servlet container will decode URL-encoded parameters using the ISO-8859-1 character set. Once decoded, the parameters remain decoded with that initial character set.
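To make the failure mode concrete, here is a hypothetical sketch of the ordering problem (illustrative names, not our actual OAuth filter):

    import java.io.UnsupportedEncodingException;
    import javax.servlet.http.HttpServletRequest;

    // Hypothetical illustration of the decoding-order bug described above.
    public class DecodingOrderPitfall {
        static void demonstrate(HttpServletRequest request)
                throws UnsupportedEncodingException {
            // An early filter (e.g. OAuth) peeks at the parameters. This
            // forces the container to parse the whole parameter map NOW,
            // using its default of ISO-8859-1, because no encoding has
            // been set yet.
            String token = request.getParameter("oauth_token");
            System.out.println("token = " + token);

            // A later filter (e.g. Sakai's RequestFilter) sets UTF-8, but
            // it is too late: the parameter map is already built and cached,
            // so UTF-8 form values have been mangled through ISO-8859-1.
            request.setCharacterEncoding("UTF-8");
            String title = request.getParameter("title"); // still mojibake
            System.out.println("title = " + title);
        }
    }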