December 2013 Archives

Learning project 2: What's different

In this post, I’d like to talk about what’s different between this project and other similar things I’ve seen. I haven’t seen everything (by a long shot), and I certainly would love to see more prior-art posts — I’ve now at least glanced at all the ones mentioned in the comments to the last post, though it’s been a while since I looked at Khan Academy, and that’s probably the closest to what I’ve got in mind.

Mo’ money, mo’ problems

A lot of the replies to the previous post asked if this is a commercial project or a hobby/personal one. This is very much a noncommercial project. I wouldn’t mind if it brought me money, fame, and the adoration of the masses, but that’s not a major motivation. I don’t want to teach salespeople for some great faceless company how to sell more protection plans for small electronics. I want to teach everybody whatever they want to know, and it’s the people who have the least money (but the most free time) who can make the best use of something like this. Ideally, the same system will hold a wide variety of knowledges in all sorts of different fields — you shouldn’t have to have 27 accounts at 27 different online academic institutes to get a rounded education.

Content & Uncontent

Of course, any system needs content, which is one of the reasons I’m not focusing on content directly. I don’t want to put lessons in this system, and certainly not courses (more on that later). I want to put links to content in. There are plenty of resources for learning things in the world already; the hard bit is finding ones that are right for you — ones that don’t assume knowledge that you do not have, and don’t teach you things that you already know. Linking to tutorials & lessons means that we don’t have to reinvent those wheels.

Bragging Rights

It’s not hard to find quizzes on the internet. In fact, it’s hard to use the internet without coming across 10 million quizzes, most of which tell you which Disney princess you are, or something similarly useless. A large part of the remainder are useful only for bragging rights. While I’m sure you could use your scores in this system to brag with, that’s not what I want to focus on. Rather, the questions we ask are a way of pinning down what we think you know and don’t know, in order to help direct your experience. The scores we give aren’t there for bragging about, but for informing you and letting us guide you.

Courses vs Lessons

Most e-learning platforms seem to take the classroom experience and either replicate it online, or help it go more smoothly in real life. That is, you are a student, you enroll in a course, you watch lectures, do tests, and at the end, you (hopefully) get a thing that says “congratulations, you passed”. That system was developed to make administration easier and teaching parallelizable. Those aren’t really concerns of ours. When you take a course, especially when it’s not an intermediate one in a linked series of courses (i.e., inside a university), you often end up sitting through many lessons designed to get everybody on the same page before the meat of the course starts. In some cases, you will discover at the end that you already knew almost all the material, and you just wasted your time — and in the case of a real university, quite a lot of your money. If we administer on a much smaller scale — every lesson — then we waste less of your time. Of course, if you are a single teacher in a one-room school-house, you can’t do that — you’d spend all your time keeping the log-books up to date, and trying to teach 30 different kids 30 different lessons all at the same time. Our educational system grew out of that, and hasn’t caught up yet.

Smart Statistics

Ideally, I’d like a large portion of this thing to be self-curating (that’s something that will hopefully get added shortly after the very basics are in, and improve slowly over time). Doing that requires much smarter scoring than the majority of online quizzes, which count all questions as equal. There will need to be a complex network of interactions between questions, knowledges, tutorials, and students, in order to improve upon our initial guesses as to what is harder, what is required for a tutorial to work well, and what it teaches. I’m currently thinking that Item Response Theory (thanks for the link!) is an excellent place to start from — probably the two-parameter form, since it seems a happy medium. That should allow us to be smart, at the least, about knowing what question to ask next.
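For the curious, the two-parameter logistic (2PL) form of Item Response Theory models the probability of a correct answer from a student’s latent ability and two per-question parameters. A minimal sketch in Python — the function and parameter names are mine, purely for illustration:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model.

    theta -- the student's latent ability for a knowledge
    a     -- discrimination: how sharply the question separates
             students just below b from students just above it
    b     -- difficulty: the ability at which P(correct) is 50%
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

A question whose difficulty exactly matches the student’s ability gets answered correctly half the time; raising `a` steepens that transition.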

Learning project: introduction

Past & Prologue

So, one of the great projects that has been rattling around in my head for some time has recently come back onto the front burner of my brain: computer-assisted learning. Jess has been thinking about this more or less constantly for the last decade-plus, from a rather different viewpoint — the writing of documentation and tutorials, especially around DBIx::Class and Catalyst. I think my current ideas are worthy of implementation, and I think that without building an active community around them, the project will falter, and that’d be a shame. So, enough of the past / prologue for the moment…

Synopsis

The basic idea goes like this: first, ask questions to determine what the student does and does not know. Then, point them at something to read or watch which requires only things that they do know, and will teach them things that they do not. Since that basic idea is a bit vague, a more detailed, less formal English sketch of the system follows.

Sketch

1: Knowledges. Knowledges are things like “conjugation of Latin verbs”, “how to spell conjugation”, “SQL data definition language”, “sines that you should remember while doing trig”. Rather specific things that a reasonable human can learn in 15 minutes, if they understand the other knowledges that it is based on.

2: Questions. Questions are how we assess knowledges. Each question has, in addition to the obvious, a set of knowledges that it is related to, each with a strength. When a student gets a particular question right, we become more confident of their knowledge in each of the related categories. Contrariwise, if a student gets a question wrong, we become more confident in their lack of knowledge. That is, for each student and each knowledge, we track not just a score, but a confidence interval. Somewhat sadly, for ease of implementation, we will likely start with multiple-choice questions, though I’d ideally like to have some system which would allow for more free-form answers, be less subject to luck, and lean toward confirm-rather-than-determine behavior.
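One naive way to turn weighted right/wrong evidence into a score plus confidence interval is a Beta-distribution update with a normal approximation for the interval. Everything here — the class name, the data shapes, the 0-to-1 “mastery” scale — is an assumption of mine for illustration, not the design:

```python
import math
from collections import defaultdict

class KnowledgeTracker:
    """Per-student score and confidence interval for each knowledge.
    A deliberately naive Beta-distribution sketch."""

    def __init__(self):
        # knowledge -> [weighted successes, weighted failures],
        # starting from a flat Beta(1, 1) prior
        self.evidence = defaultdict(lambda: [1.0, 1.0])

    def record(self, related, correct):
        """related: {knowledge: strength}; correct: bool."""
        for knowledge, strength in related.items():
            self.evidence[knowledge][0 if correct else 1] += strength

    def estimate(self, knowledge):
        """Return (mean, 95% interval) for mastery of a knowledge."""
        s, f = self.evidence[knowledge]
        mean = s / (s + f)
        sd = math.sqrt(mean * (1 - mean) / (s + f + 1))
        return mean, (max(0.0, mean - 1.96 * sd), min(1.0, mean + 1.96 * sd))
```

Each correct answer nudges every related knowledge up in proportion to its strength, and — importantly — narrows the interval, which is what lets us distinguish “we don’t know yet” from “we’re sure they don’t know”.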

3: Tutorials. These are, from one point of view, the entire point of the system. Each tutorial has a set of prerequisites, and a set of knowledges it is expected to improve. The actual content, I think, wouldn’t be a part of this site, but rather a link to it. That allows the easy use of existing content, a variety of forms of content (interactive applets, youtube videos, blog entries, blocks of text, etc), and allows the content authors to use whatever tools they like to create their content.

4: Students. From another point of view, these are the entire point of the system — it’d be a bit useless without users. Students are, I suppose, already well-described above. We track their score in each knowledge, along with a confidence interval. Once we’re confident that they meet the prerequisites, we start showing them tutorials that we believe may fill in holes in their knowledges.
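Putting the pieces above together, the gating step might look something like this sketch — the data shapes, the threshold value, and the choice to gate on the lower bound of the confidence interval are all my assumptions, not settled design:

```python
def ready_tutorials(tutorials, mastery, threshold=0.75):
    """Pick tutorials whose prerequisites the student confidently
    meets, and which still teach something the student lacks.

    tutorials: list of dicts with 'url', 'prereqs' (set), 'teaches' (set)
    mastery:   {knowledge: lower bound of the confidence interval}
    """
    out = []
    for t in tutorials:
        meets = all(mastery.get(k, 0.0) >= threshold for k in t["prereqs"])
        gaps = [k for k in t["teaches"] if mastery.get(k, 0.0) < threshold]
        if meets and gaps:
            out.append((t["url"], gaps))
    return out
```

Gating on the interval’s lower bound, rather than the raw score, means a student is only waved past a prerequisite once we are actually confident — not merely hopeful — that they have it.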

Solicitation

Now, what I want from you guys… First, holes. Poke holes in my design above. What are we forgetting about?
Secondly, hope. Tell me you’d like to use such a system, to translate it, to write content for it, to create bug reports and to fix them, to style it and make it look pretty… Thirdly, content. This should probably wait a bit, but it’s always good to get a feel for the concrete when you are still working on the abstract layers. Fourthly, algorithms. What’s the best way to derive a level and confidence interval from a series of questions like this? Can we dynamically tweak question difficulties and/or tutorial parameters based upon results? How should we determine which question is best to ask next?
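On that last question, one textbook answer under the two-parameter IRT model mentioned in the previous post is to ask the item with the greatest Fisher information at the student’s current ability estimate. A hypothetical sketch, with made-up item tuples:

```python
import math

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a**2 * p * (1 - p), which peaks when the item's
    difficulty b matches the student's ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_question(theta, items):
    """items: iterable of (question_id, a, b) tuples; return the id
    of the item that tells us the most about this student right now."""
    return max(items, key=lambda it: fisher_information(theta, it[1], it[2]))[0]
```

Intuitively, questions the student will almost certainly get right (or wrong) teach us nearly nothing; the most informative question is the one they have roughly even odds on.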

Looking forward to reading interesting, intriguing, informative, and introspective responses.

About theorbtwo

Adventures in eclectic geekery.