May 19, 2013 – Series: Plug-In, week 2, “Made Alive by the Holy Spirit”. Messenger: 大窪秀幸牧師 / Pastor Hide. Message notes: http://www.lighthousechurch.jp/message.html. Sunday worship times: 11:00–12:15 and 14:30–15:30 (J-on*). *A service centered on youth and young adults – of course, anyone is welcome to attend. Lighthouse Christian Church, 大阪府堺市堺区砂道町3-6-19. http://www.lighthousechurch.jp
May 19, 2013 – Series: Eve and Adam, week 2, “Two Lies Women Have Come to Believe”, part 2. Messenger: 大窪秀幸牧師 / Pastor Hide. Message notes: http://www.lighthousechurch.jp/message.html. Sunday worship times: 11:00–12:15 and 14:30–15:30 (J-on*). *A service centered on youth and young adults – of course, anyone is welcome to attend. Lighthouse Christian Church, 大阪府堺市堺区砂道町3-6-19. http://www.lighthousechurch.jp
We are pleased to announce the publication of our white paper on California’s bottleneck course issue. Many thanks to the paper’s sponsor, the 20 Million Minds Foundation, for giving us the support and freedom to write exactly what we believe. If there is anything that you find wrong or objectionable in the paper, then blame us.
The central idea in the paper is that California should adopt the principle that students have a right to educational access. There is a fundamental difference between saying that we should do whatever we can to give students access and saying that we have an obligation to enable students to exercise their right to access. And that change of frame is critical to solving the problem of bottleneck courses.
The current incarnation of SB 520, which we have written about here repeatedly, has been accused by its detractors of being a potential vehicle for gutting and privatizing California’s public higher education. We believe that concern is legitimate. However, in the context of a larger bill supporting the students’ right to access, it could be not only positive but essential as a path of last resort. As part of supporting every citizen’s right to due process when accused of a crime, the government is required to provide access to a public defender. But few people who have financial means are likely to choose a public defender over a private attorney because private attorneys, by and large, have access to resources (including time for individual attention) that public defenders do not. Likewise, we believe that access to third-party online courses disconnected from a student’s home institution is a poor solution to the student’s access problem. The only worse solution is not to have one at all, which is the current situation. If Californians believe that students should have a right to access, then they must provide a means of last resort for students to exercise that right.
But the best solution would be to eliminate bottleneck courses altogether, which is why much of our proposal centers on providing mechanisms and funding to empower faculty members, campuses, and systems to solve these problems within the California public education system, where students have the benefit of the campus support network and the expertise of local faculty. Even the main funding for the third-party course provisions, which we characterize as the “safety valve” of the plan, would go toward developing infrastructure that would be equally useful for supporting students taking courses from other campuses within the California systems. If faculty and administrators lead an effort to solve the bottleneck course problem organically, with appropriate support from the state, then actual use of the safety valve option by students could become a rarity.
We acknowledge that technology is not the only possible solution to the bottleneck course problem; nor do we assume that the underlying budget challenges should be accepted at face value. We have written about technology as one avenue to solve the problem because educational technology is what we know about and what we were asked to write about. None of what we suggest precludes discussions about allocation of funding in college budgets, levels of state funding support, allocation of faculty time to lower-division courses, or other relevant questions.
We believe strongly that students should have a right to educational access and that technology can be one useful tool in enabling them to exercise that right. We also believe that the educators in California’s public college and university system are still critical enablers of that right and have a central role to play in making that ideal a reality. And we think there is real value in bringing together educators across the state to focus on sensible application of technology to solve a real educational problem. The culture and collaboration, knowledge and infrastructure that could be created to solve the access problem could also be applied to problems such as improving completion rates, improving course quality, and lowering tuition costs.
You can read the white paper here.
Last month, I wrote this narrow defense of automated essay grading, hoping to clear the air on a new and controversial technology. In that post’s prolific comments section, Laura Gibbs made a comment echoing what I’ve heard from every teacher I speak to.
I am waiting for someone to show me a real example of this “useful supplement” provided by the computer that is responding to natural human language use – I understand what you want it to be, but I would contend that natural human language use is so complex (complex for a computer to apprehend) that trying to give writing mechanics feedback on spontaneously generated student writing will lead only to confusion for the students.
When we talk about machine learning being used to automatically grade writing, most people don’t know what that looks like. Because they don’t know the technology, they make it up. As far as I can tell, this is based on a combination of decades-old technology like Microsoft Word’s green grammar squiggles, clever new applications like Apple’s Siri personal assistant, and downright fiction, like Tony Stark’s snarky talking suits. What you get from this cross is a weird and incompetent artificial intelligence pointing out commas and giving students high grades for hiding the word “defenestration” in an essay.
My cofounder at LightSIDE Labs, David Adamson, taught in a high school for six years. If we were endeavoring to build something that was this unhelpful for teachers, he would have walked out a long time ago. In fact, though, David is a researcher in his own right. His Ph.D. research isn’t as focused on machine learning and algorithms as my own; instead, his work brings him into Pittsburgh public schools, talking with students and teachers and putting technology where it can make a difference. In this post, rather than focus on essay evaluation and helping students with writing – which will be the subject of future posts – I’m going to explore the things he’s already doing in classrooms.

Building computers that talk to students
David builds conversational agents. These agents are computer programs that sit in chatrooms for small-group discussion in class projects, looking by all appearances like a moderator or TA logged in elsewhere. They’re not human, however – they’re totally automated. They have a small library of lines that they can inject into the discussion, which can be automatically modified slightly in context. They use language technology, including machine learning as well as simpler techniques, to process what students are saying as they work together. The agent has to decide what to say and when.
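To make that shape concrete, here is a minimal sketch of how such an agent could be structured. Everything below is hedged: the class and function names are my own hypothetical illustrations, not LightSIDE’s actual code, and the keyword matcher merely stands in for a real trained model.

```python
import random

class DiscussionAgent:
    """Minimal sketch of a chatroom facilitation agent (hypothetical names)."""

    def __init__(self, classifier, prompt_library):
        # classifier: anything with a predict(text) -> label method,
        # standing in for a trained language-technology model
        self.classifier = classifier
        # prompt_library: maps predicted labels to pre-scripted lines
        # that can be lightly adapted in context
        self.prompt_library = prompt_library

    def observe(self, speaker, message):
        """Process one chat message; return a reply, or None to stay silent."""
        label = self.classifier.predict(message)
        if label not in self.prompt_library:
            return None  # no scripted move fits here
        template = random.choice(self.prompt_library[label])
        return template.format(speaker=speaker)

class KeywordClassifier:
    """Stand-in for a trained model: flags messages that look like claims."""
    def predict(self, text):
        cues = ("because", "i think", "will go up", "increases")
        return "claim" if any(c in text.lower() for c in cues) else "other"

prompts = {
    "claim": [
        "So, let me get this right. {speaker}, can you say more about that?",
        "Does everyone agree with what {speaker} just said? Why, or why not?",
    ],
}

agent = DiscussionAgent(KeywordClassifier(), prompts)
print(agent.observe("Student A", "I think the boiling point will go up"))
```

The important design point is the None branch: the agent is allowed to say nothing, which will matter later when we talk about failure.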
Those pre-scripted lines aren’t thrown in arbitrarily. In fact, they’re descended from decades of research into education and getting classroom discussion right. This line of research is called Accountable Talk, and in fact there’s an entire course coming up on Coursera about how to use this theory productively. The whole thing is built on fairly basic principles:
First, students should be accountable to each other in a conversation. If you’re only sharing your own ideas and not building off of the ideas of others, then it’s just a bunch of people thinking alone, who happen to be in a chatroom together. You don’t get anything out of the discussion. Next, your thought process should be built off of connecting the dots, making logical conclusions, and reasoning about the connections between facts. Finally, those facts that you’re basing your decision-making on should be explicit. They should come from explicit sources and you should be able to point to them in your argument for why your beliefs are correct.
David’s agents are framed around Accountable Talk, doing what teachers know leads to a good discussion. Instead of giving students instructions or evaluating their answers, the agents merely ask good questions at the right times. They were trained to look for places where students made a productive, substantial claim – the type of jumping-off point that Accountable Talk encourages. David never tried to correct those claims, though, or even to evaluate whether they were right or wrong. He was just looking for the chance to make a difference in the discussion.
He used those automated predictions as a springboard for collaborative discussion. Agents were programmed to try to match student statements to existing facts about a specific chemistry topic. “So, let me get this right. You’re saying…” More often than not, he also programmed the agents to lean on other students for help. “[Student 2], can you repeat what [Student 1] just said, in your own words? Do you agree or disagree? Why?” Automated prompts like this leave the deep thinking to students. Instead of following computer instructions by rote, the students were pushed into deeper discussions. The agents hand authority to the students, asking them to lead rather than taking on the role of a teacher looming over them.
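As a rough illustration, here is one way that matching-and-revoicing logic might look. This is a sketch under stated assumptions: real agents use trained models rather than string similarity, and the fact list, threshold, and function names are mine, not code from David’s system.

```python
import difflib

# Facts the agent knows about the topic under discussion (illustrative only)
FACTS = [
    "boiling point increases as van der Waals forces increase",
    "surface area increases for each successive molecule",
]

def best_fact_match(statement, facts=FACTS, threshold=0.4):
    """Return the known fact closest to the statement, or None if nothing
    is close enough. difflib is a crude stand-in for a trained matcher."""
    score, fact = max(
        (difflib.SequenceMatcher(None, statement.lower(), f).ratio(), f)
        for f in facts
    )
    return fact if score >= threshold else None

def revoice(speaker, statement):
    """Restate a student's claim without judging whether it is correct."""
    fact = best_fact_match(statement)
    if fact is None:
        return None  # nothing matched confidently: stay silent
    return f"So, let me get this right. {speaker}, you're saying {fact}?"

def hand_off(other_student, speaker):
    """Lean on a peer for evaluation instead of acting as the authority."""
    return (f"{other_student}, can you repeat what {speaker} just said, "
            f"in your own words? Do you agree or disagree? Why?")

print(revoice("Student A", "boiling pt goes up as vdW forces go up"))
print(hand_off("Student B", "Student A"))
```

Note that neither function ever marks a claim right or wrong; the only outputs are a restatement and a question.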
Sometimes computers fail

In the real world, intervention to help students requires confidence that you’re giving good advice. If David’s agents always spout unhelpful nonsense, students will learn to ignore them. Worse, if an agent tries to reward students for information it thinks is correct, a wrong judgment means students get literally the opposite of helpful teaching. With all of this potential downside, reliability seems like it would be the top priority. How can you build a system that’s useful for intervening in small groups if it makes big mistakes?
This is mostly accounted for by crafting the right feedback – designing agents that play to the technology’s strengths and avoid its weaknesses. In large part this comes down to avoiding advice that’s so clear-cut that big mistakes are possible. Grammar checking and evaluations of accuracy within a single sentence are doomed almost from the start. If your goal with a machine learning system is to correct every mistake that every student makes, you’re going to need to be very confident, and because this is a statistics game we’re playing, that kind of technology is going to disappoint. Moreover, even when you get it right, what has a student gained by being told to fix a run-on sentence? At best, a small improvement in sentence-level grammar. That is not going to sweep anyone off their feet.
By basing his conversational agents on the tenets of a good discussion, David was able to gain a lot of ground with what is, frankly, pretty run-of-the-mill machine learning. Whiz-bang technology is secondary to technology that does something that helps. When the system works, it skips the grammar lessons. Instead, it jumps into the conversation at just the right time to encourage students to think for themselves.
Sometimes, though, the agent misfires. When using machine learning, this is something you just have to accept. What we care about is that this doesn’t hurt students or start teaching wrong ideas. So let’s think about the cases where an agent can make a wrong decision: first, where the agent could have given feedback but didn’t, and second, where the agent gives the wrong feedback at the wrong time.
First, the easy case. Sometimes a student will say something brilliant and the agent will fail to catch it. Here, the balance of authority between agent and student matters. If students get used to the idea that the agent is a teacher, they’ll be looking for it to tell them they got every answer right. This is a danger zone for us – with statistical machine learning, we don’t want to back ourselves into a corner where the system has to make a judgment every single time, because sometimes we’ll be uncertain. If the students expect a teacher’s authoritative judgment of correctness, the group will flounder when the agent misses a beat, or take the discussion into their own hands and leave the agent in the dust.
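One common way to avoid backing the system into a corner is threshold-based abstention: speak only when the model’s confidence clears a bar, and otherwise stay silent. Here is a minimal sketch using scikit-learn; the label names, the 0.9 threshold, and the toy training data are illustrative assumptions, not values from David’s studies.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def maybe_prompt(model, vectorizer, message, threshold=0.9):
    """Return a facilitation prompt only when the model is highly confident.

    Abstaining on low confidence is deliberate: a missed cue costs little,
    while a wrong intervention teaches the wrong thing.
    """
    probs = model.predict_proba(vectorizer.transform([message]))[0]
    label = model.classes_[int(np.argmax(probs))]
    if label != "good_opening" or probs.max() < threshold:
        return None  # stay silent
    return "Do you all agree with that? Why, or why not?"

# Toy training data just to make the sketch runnable; a real system would
# be trained on annotated classroom transcripts.
texts = [
    "boiling point goes up because the area goes up",
    "im so confused",
    "vdW forces increase with chain length so bp increases",
    "ok whats next on the worksheet",
]
labels = ["good_opening", "other", "good_opening", "other"]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# With this little data the model rarely clears the 0.9 bar, so the agent
# abstains -- which is exactly the intended behavior under uncertainty.
print(maybe_prompt(clf, vec, "area goes up for each one so bp goes up too"))
```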
Let’s see what that danger looks like in practice. Here’s a transcript from an earlier study, when conversational agents weren’t as heavily rooted in the theory of Accountable Talk:
Tutor: You should now move on to discussing Condition C.
Tutor: [Student A], now would be a good time to ask [Student B] to build on what [Student C] is saying.
[Student B]: I’m so confused!
[Student A]: [Student B], would you like to build onto what [Student C] is saying? And me too!
Tutor: When you agree, write down your predictions on your worksheet.
This is dysfunctional; the tutor isn’t asking discussion-generating questions, and it’s pushing forward regardless of context. Focusing on the task single-mindedly doesn’t give students room to think.
By taking on the role of facilitator, though, the agent’s comments aren’t expected every time. We can use the system to chime in with an Accountable Talk question when we’re highly confident that the machine learning system is making the right prediction, but if the system stays silent, the students won’t even notice it’s missing; the agent is purely there to help scaffold learning, rather than judge correctness. When you’re augmenting teaching, rather than replacing humans entirely, you can afford to be cautious. Look at how the agent interacts when we start asking questions that are better suited to what we know we can do:
[Student D]: ok, boiling pt will go up and vdW will go up for all of them consecutively… right?
Tutor: Do you concur with [Student D]? Why, or why not?
[Student E]: hmm not necessarily
[Student F]: area goes up for each
[Student E]: would it?
[Student E]: im not sure
[Student D]: yea for sure area goes up
[Student F]: dipole increases first one
In this excerpt, the tutor didn’t give an instruction or evaluate anything about the first student’s claim. It simply asked a basic question in response to the machine learning system judging that spot to be a good opening. The comments from these new agents use Accountable Talk principles, and they get student groups discussing ideas.
Of course, these systems aren’t perfect. What we’re finding out, though, is that we can frame the discussion right for automated assessment by not trying to make our automated system the perfect arbiter of truth. What I’m describing isn’t a dire portrait of machines taking over the education system. It’s agents contributing meaningfully to learning by cautiously intervening when appropriate, using machine learning for educated guessing about when it’s time to get students to think more deeply. These agents are tireless and can be placed into every discussion in every online small group at all times – something a single teacher in a large class will never be able to do.
The results with these agents were clear: students learned significantly more than students who didn’t get the support. Moreover, when students were singled out by agent questioning, they participated more and led a more engaged, more assertive conversation with the other students. The agent didn’t have to give students remedial grammar instructions to be valuable; the data showed that the students took their own initiative, with the agents merely pushing them in the right direction. Machine learning didn’t have to be perfect. Instead, it figured out the right places to ask questions and worked towards making students think for themselves. This is how machine learning can help students.

For helping students, automated feedback works.
We should be exercising caution with machine learning. Skeptics are right to second-guess interventions from technologists who aren’t working with students. The goal is often to replace teachers, not to help them, especially when tantalizingly quick cost savings are on offer. Yes – if you want to make standardized testing cheaper, machine learning works. I don’t want to dismiss this entirely – we can, in fact, save schools and states a lot of money on existing standardized tests – but if that’s as far as your imagination takes you, you’re missing the point. What’s important isn’t that we can test students more, and more quickly, with less money. Focus on this: we can actually help students.
Not every student is going to get daily one-on-one time with a trained writing tutor. Many will never see a writing tutor individually in their entire education. For these students, machine learning can step in with instant help. These systems aren’t going to make the right decision every time in every sentence. We need to know that, and we need to work with it. Rather than tossing out technology that promises the moon, look carefully at what it can do, and shift expectations as necessary. In David’s case, the shift was about authority. His agents empowered students to take charge of their own education and chimed in only when they saw an opportunity; that positioned the automated system as guide rather than dictator.
This goes way beyond grading, and way beyond grammar checking. Machine learning helps students when teachers aren’t there. Getting automated feedback right leads to students thinking, discussing ideas, and learning more – and that’s what matters. In my next post, I’d like to launch off from here and talk about what these lessons mean not just for discussion, but for writing. Stay tuned.

A last note
The work I described from David is part of an extended series of more than 20 papers and journal articles from my advisor at Carnegie Mellon, Carolyn Rosé, and her students. While I won’t give a bibliography for a decade of research, some of the newest work is published as:
- “Intensification of Group Knowledge Exchange with Academically Productive Talk Agents,” in this year’s CSCL conference.
- “Enhancing Scientific Reasoning and Explanation Skills with Conversational Agents,” submitted to IEEE Transactions on Learning Technologies.
- “Towards Academically Productive Talk Supported by Conversational Agents,” in the 2012 conference on Intelligent Tutoring Systems.
I’ve asked David to watch this post’s comments section, and I’m sure he’ll be happy to directly answer any questions you have.
The following questions emerged at the lunchtime session on the Site Info tool on 14 May 2013:
Q: How do you delete a site?
A: Good question – this option is not part of the Site Info ‘dashboard’. To delete a site, you need to remove it from the hierarchy and then delete it; both operations can be done at the same time from the Hierarchy Manager (small menu on the lower left side > Arrange site). Go to the desired site; click on ‘Arrange site’ in the lower left menu; click on ‘Remove site’; select the box ‘Also delete the site’; and click on ‘Confirm remove site’. If you do not see the Hierarchy Manager (small menu on the lower left side), you will need to ask your Local WebLearn Coordinator to give you the required permissions.
Q: Do site participants receive an email when a site is deleted?
A: No – so it is not necessary to first remove them as site participants before you delete the site.
Q: Do site participants receive an email when creating internal sub-groups?
A: No – so you can create, edit or remove internal sub-groups without any notifications being sent.
Q: External users (without an Oxford single sign-on account) – how do they get access to a WebLearn site, and how do they get their login details?
A: WebLearn allows you, as the site owner, to add either Oxford University participants or anyone else, simply by using their email address. If the new participant is an external user (e.g. a research collaborator at another university, a visiting expert, etc.), they will automatically receive a one-time system-generated email message giving them a link to activate their account and create their password. They then log in to WebLearn by clicking on the ‘Other Users’ login link. You can try this out by adding yourself to a WebLearn site with your external email address (e.g. gmail, yahoo, etc.), giving yourself the ‘access’ role, and then logging on as an external user to see how the site operates from the access point of view. (This gives a more authentic experience than using the ‘Switch to access user’ toggle option.)
Q: When might you want to make someone ‘Inactive’ in your site?
A: You might be testing something new in the site – instead of unpublishing it, you could make the participants temporarily ‘Inactive’, which means it is as if they have been removed from the site. Later you can make them ‘Active’ again. Another scenario is if you have added a bulk ‘Participant Group’ – you cannot remove individuals from the bulk group, but you can make someone Inactive if you know that they have left the group or course.
Jill Fresen has sent me this:
Every member of Oxford University has their own personal ‘cloud’ in WebLearn, called ‘My Workspace’. You can enter your contact details, upload your photo into your profile, and make connections with other WebLearn users. You also have a personal file storage area (max. 100 MB) which you can use to back up files, access them from any other computer, or build a personal web page.
Come along to the Learn at Work day session entitled “WebLearn: An online space for learning and collaboration” at IT Services, 13 Banbury Road on 23 May 2013 from 3:00 to 4:00 pm.
More information and bookings at: http://courses.it.ox.ac.uk/detail/TLW12
This is going to be a more personal blog post than I typically make here at e-Literate.
The open letter from San José State University’s philosophy department in protest of the edX JusticeX course being taught at SJSU is getting a lot of attention, as is the follow-up statement from the SJSU faculty senate. I have some concerns with both of these letters—particularly the one from the philosophy department—but before I get into them, I’d like to emphasize my points of agreement and solidarity with the department:
- As a former philosophy major and a former teacher of philosophy courses to seventh and eighth graders, I strongly believe that a course in social justice is critical to every American’s education.
- I also strongly agree that, in order for such a course to be effective, it must be up-to-date, relevant to the students, and involve in-depth facilitated discussion.
- I agree that there is a bit of a bait-and-switch going on, possibly unintentionally: the rhetoric starts by touting MOOCs as pedagogically superior to lecture classes (which is probably somewhat true) and then moves to swap out discussion classes for MOOCs instead.
- I agree that some MOOC fans (though by no means all of them) have simplistic notions of how MOOCs can make university education cheaper without thinking through the consequences either to the quality of education or the fiscal health of the colleges and universities that still provide tremendous value to our nation and our culture.
- I agree that intellectual diversity is very important, particularly when discussing complex issues that are essential to a functioning democracy, and that the potential for an intellectual monoculture is a concern worth taking very seriously.
- While I have no knowledge of the negotiations between edX and SJSU, I strongly agree that such partnerships should be conceived and implemented with active consultation and collaboration with faculty unless there is exceptionally strong justification to do otherwise.
Despite all this common ground on values that are dear to me, I find aspects of the department’s letter to be deeply problematic.
To begin with, there is this:
Good quality online courses and blended courses (to which we have no objections) do not save money, but pre-packaged ones do, and a lot.
That statement is demonstrably false. Good quality online courses and blended courses can, in fact, save money. How do we know? For starters, the National Center for Academic Transformation has a long list of course redesign projects it has conducted in collaboration with colleges and universities since 1999, many of which have achieved substantial cost savings. Some of them achieved substantial improvement in outcomes while achieving substantial cost savings. Nor is NCAT alone. There is a growing body of empirically backed academic literature showing that we can teach more students more effectively for less money across a variety of subjects. Some subjects are easier to redesign than others. But cost savings in high-quality courses are possible as a general proposition (and do not require open content licensing, by the way). The SJSU philosophy department’s blanket denial of this possibility is not credible.
As a result, the authors of the letter are also less credible when they write,
In addition to providing students with an opportunity to engage with active scholars, expertise in the physical classroom, sensitivity to its diversity, and familiarity with one’s own students is just not available in a one-size-fits-all blended course produced by an outside vendor….When a university such as ours purchases a course from an outside vendor, the faculty cannot control the design or content of the course; therefore we cannot develop and teach content that fits with our overall curriculum and is based on both our own highly developed and continuously renewed competence and our direct experience of our students’ abilities and needs.
There appears to be a significant disconnect here. On the one hand, the department argues (correctly, in my view) that philosophy students gain great benefit from “the opportunity to engage with active scholars.” On the other hand, they assert that the philosophy department has “expertise in the physical classroom” and a “highly developed and continuously renewed competence” despite the overwhelming likelihood that most of the faculty have not had significant opportunities to engage with active scholars in pedagogy-related fields.
They could have made their case just as effectively without foreclosing the possibility of improving on what they already do. As the letter from the SJSU Faculty Association notes in response to the improved completion rates of the edX course,
The pedagogical infrastructure and work that has gone into the preparation of the edX material could easily be replicated if SJSU made a commitment to pedagogy and made training in pedagogy central to all faculty.
This is a defensible argument that the philosophy department could have made. But it didn’t. Instead, it implicitly denied the existence of the scholarship of teaching and explicitly blamed the university’s financial issues on “industry” for “demanding that public universities devote their resources to providing ready-made employees, while at the same time…resisting paying the taxes that support public education.” The collective effect of these rhetorical moves is to absolve the department of all responsibility for addressing the real problems the university is facing.
By ignoring the scholarship of teaching, the department missed an opportunity to engage the MOOC question in a different way. Rather than thinking of MOOCs as products to be bought or rejected, they could have approached them as experiments in teaching methods that can be validated, refuted, or refined through the collective efforts of a scholarly community. Researchers collaborate across university boundaries all the time. The same can be true in the scholarship of teaching. The faculty could have demanded access to the edX data and the freedom to adjust the course design. The letter’s authors seem deeply invested in positioning the edX course as a locked-down product from a third-party commercial vendor. But in reality, the edX course was developed by a faculty member and is provided by a university-based non-profit entity. Perhaps the department felt that this particular course design didn’t offer enough flexibility to make a collaboration worthwhile. But their rhetoric gives no indication that there is any room for such exploration under any circumstances, or indeed that the department has anything to learn about the use of educational technology that could lead to either improved outcomes or lower costs.
Equally disturbing is the tendency in both letters to treat the fiscal crisis as something caused solely by greedy capitalists. It’s worth requoting the earlier referenced comment from the philosophy department letter here:
Industry is demanding that public universities devote their resources to providing ready-made employees, while at the same time they are resisting paying the taxes that support public education.
To begin with, “industry” isn’t alone in demanding that public universities devote their resources to producing employable graduates. Students and their parents are asking for it too, as are individual human taxpayers. On this last point, I am not a Californian, but I understand that individual human taxpayers have an unusually direct say regarding tax rates in the state of California. The purpose of education as a public good is a serious and complicated question that deserves more careful treatment from people who should know better.
Nor are taxes the only issue. While it is true that there has been progressive defunding of public colleges and universities in the United States, it is also true that tuition costs have been rising dramatically across the country in private as well as public schools. And it is true that the public colleges and universities in California in particular are struggling with unanticipated swelling enrollments as they strive to meet the as-yet-unfulfilled moral imperative of universal access to education. Given all of this, it is not a morally defensible position to simply point the finger at the rich guys and say, “It’s their fault. Make them fix it.” To the degree that course redesign can positively impact student access to education, faculty have a moral obligation to be leading the charge. And from a strategic perspective, they are more likely to prevent dumb ideas—such as gutting quality residential education in favor of least-common-denominator, video-driven xMOOCs—from taking hold.
But perhaps the worst aspect of the simplistic finger pointing is the way in which it pollutes the civic discourse. It encourages individual stakeholders to harden into an “us vs. them” position that reduces the likelihood of citizens coming together to solve real, hard problems that are deeply intertwined with issues of social justice. Here’s an example of a comment made on this blog in response to a post about the California SB 520 bill:
Remember that when the Nazis led the people into the gas chamber they told them that it was a refreshing shower after a long train ride. Do not be fooled! This sweet sounding bill is the gas chamber of good education in California. Once we are in the questions will be pointless. As the pellets drop we will realize we should have questioned things sooner.
Setting aside the fact that the only justifiable use of genocide as an analogy is when talking about another genocide, this kind of rhetoric is enormously damaging to the possibility of a productive dialectic regarding how to solve the very real and complicated problems that our system of higher education faces, including both the need to increase access and the complexities of funding that imperative. And, sadly, this comment was written by a member of the SJSU philosophy department.