Learning project: introduction
Past & Prologue
So, one of the great projects that have been rattling around in my head for some time has recently come back onto the front burner of my brain: computer-assisted learning. Jess has more or less been thinking about this constantly for the last decade-plus, from a rather different viewpoint — the writing of documentation and tutorials, especially around DBIx::Class and Catalyst. I think my current ideas are worthy of implementation, and I think that without building an active community around them, the project will falter, and that’d be a shame. So, enough of the past / prologue for the moment…
The basic idea goes like this: first, ask questions to determine what the student does and does not know.
Then, point them at something to read or watch which requires only things that they do know, and which will teach them things that they do not know. Since that basic idea is a bit vague, a more detailed (if less formal) English sketch of the system follows.
1: Knowledges. Knowledges are things like “conjugation of Latin verbs”, “how to spell conjugation”, “SQL data definition language”, “sines that you should remember while doing trig”. Rather specific things that a reasonable human can learn in 15 minutes, if they understand the other knowledges that it is based on.
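To make that concrete, here’s a minimal sketch of a knowledge as a plain record — the names and fields are purely illustrative, not a committed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    """One small, specific learnable unit."""
    name: str                                  # e.g. "conjugation of Latin verbs"
    # the other knowledges this one is based on
    prerequisites: list = field(default_factory=list)

# hypothetical example entry
latin_verbs = Knowledge("conjugation of Latin verbs",
                        prerequisites=["Latin pronunciation basics"])
```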
2: Questions. Questions are how we assess knowledges. Each question has, in addition to the obvious, a set of knowledges that it is related to, with strengths. When a student gets a particular question right, we become more confident of their knowledge in each of the related categories. Contrariwise, if a student gets a question wrong, we become more confident in their lack of knowledge. That is, for each student and each knowledge, we track not just a score, but a confidence interval. Somewhat sadly, for ease of implementation, we will likely start with multiple-choice questions, though I’d ideally like some system that would allow more free-form answers, be less subject to luck, and confirm rather than determine what the student knows.
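One simple way this update could work — this is just a hedged sketch, not a chosen algorithm; the rate constant and the interval-shrinking rule are arbitrary placeholders:

```python
def update(score, spread, strength, correct, rate=0.3):
    """Nudge the score toward 1.0 (correct) or 0.0 (wrong),
    weighted by how strongly the question relates to this
    knowledge, and tighten the confidence interval a little."""
    target = 1.0 if correct else 0.0
    step = rate * strength
    new_score = score + step * (target - score)
    new_spread = spread * (1.0 - 0.5 * step)  # more evidence, tighter interval
    return new_score, new_spread

score, spread = 0.5, 0.5   # uninformed prior: middling score, wide interval
score, spread = update(score, spread, strength=0.8, correct=True)
# score moves up toward 1.0; spread narrows
```

Something properly Bayesian (or an item-response-theory model) would be a better fit long-term, but even a crude rule like this gives the score-plus-interval pair described above.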
3: Tutorials. These are, from one point of view, the entire point of the system. Each tutorial has a set of prerequisites, and a set of knowledges it is expected to improve. The actual content, I think, wouldn’t be a part of this site, but rather a link to it. That allows the easy use of existing content, a variety of forms of content (interactive applets, youtube videos, blog entries, blocks of text, etc), and allows the content authors to use whatever tools they like to create their content.
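As a data model, a tutorial might look roughly like this — field names and the example URL are hypothetical, not part of the design:

```python
from dataclasses import dataclass

@dataclass
class Tutorial:
    title: str
    url: str             # the content lives elsewhere; we only link to it
    prerequisites: dict  # knowledge name -> minimum score required
    improves: list       # knowledges this tutorial is expected to improve

# illustrative entry; the link is a placeholder
t = Tutorial(
    title="Intro to SQL DDL",
    url="https://example.com/sql-ddl",
    prerequisites={"basic SQL queries": 0.7},
    improves=["SQL data definition language"],
)
```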
4: Students. From another point of view, these are the entire point of the system — it’d be a bit useless without users. As, I suppose, is true with any such description, students are mostly described already in the sections above. We track their score in each knowledge, along with a confidence interval. Once we’re confident that they meet the prerequisites, we start showing them tutorials that we believe may fill in holes in their knowledges.
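“Once we’re confident” could be cashed out as: the lower edge of the student’s confidence interval clears the required score. A hedged sketch, assuming prerequisites are stored as knowledge-to-minimum-score mappings (an illustrative choice, not settled design):

```python
def meets_prereqs(student_scores, tutorial_prereqs):
    """A student confidently meets a prerequisite when the lower
    edge of their confidence interval clears the required score."""
    for knowledge, required in tutorial_prereqs.items():
        # unknown knowledges get a wide, uninformed default
        score, spread = student_scores.get(knowledge, (0.0, 0.5))
        if score - spread < required:
            return False
    return True

scores = {"basic SQL queries": (0.85, 0.1)}
ready = meets_prereqs(scores, {"basic SQL queries": 0.7})  # 0.75 clears 0.7
```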
Now, what I want from you guys… First, holes. Poke holes in my design above. What are we forgetting about?
Second, hope. Tell me you’d like to use such a system — to translate it, to write content for it, to file bug reports and to fix them, to style it and make it look pretty…

Third, content. This should probably wait a bit, but it’s always good to get a feel for the concrete while you’re still working on the abstract layers.

Fourth, algorithms. What’s the best way to derive a score and confidence interval from a series of question results like this? Can we dynamically tweak question difficulties and/or tutorial parameters based on results? How should we determine which question is best to ask next?
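On that last question, one obvious starting heuristic — offered only as a strawman to poke holes in, with made-up data — is to ask whichever question bears most strongly on the knowledges we’re least certain about:

```python
def pick_next_question(questions, student_spread):
    """Greedy heuristic: prefer the question whose related knowledges
    have the widest confidence intervals, weighted by how strongly
    the question relates to each one."""
    def info_value(q):
        return sum(strength * student_spread.get(k, 0.5)
                   for k, strength in q["knowledges"].items())
    return max(questions, key=info_value)

# illustrative data: we're sure about trig, unsure about SQL DDL
questions = [
    {"id": 1, "knowledges": {"trig sines": 0.9}},
    {"id": 2, "knowledges": {"SQL DDL": 0.9}},
]
spread = {"trig sines": 0.05, "SQL DDL": 0.4}
best = pick_next_question(questions, spread)  # picks the SQL DDL question
```

A real system would presumably also weigh question difficulty against the student’s current score, which this sketch ignores entirely.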
Looking forward to reading interesting, intriguing, informative, and introspective responses.