Arsenal

We use several different activities and methods to support course participants. Some of these are well known and widely used in MOOCs, while others are less common or have been developed by us. Some are fully automatic, some require a human. Some are publicly available (e.g. in courses.cs.ut.ee), while others are restricted to course participants (e.g. in Moodle).

  • Automatic
    • Troubleshooters (public)
    • Self-assessment questions (public)
    • Tests (restricted)
    • Automatic assessment and feedback of programming exercises (restricted)
  • Manual
    • Live programming (restricted)
    • Helpline (restricted)
    • Forums (restricted)
  • Other
    • Weekly video “What’s happening in the course” (restricted)
    • Thonny (Debugger) (public)


Troubleshooters

We provide so-called “troubleshooters” for every programming exercise. Troubleshooters contain answers and clues to questions that can arise when solving a particular exercise. Murelahendaja (“troubleshooter” in Estonian) is our own custom environment that presents solutions to these common problems as an option tree. It can be used for a wide range of problems, from registering for the course to interpreting specific error messages. As a result, almost every programming task is supported by the troubleshooting system. It must be pointed out that, in order to have an educational effect, the troubleshooters cannot give straight solutions to the tasks; instead, they should guide the learner with systematic hints.
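To illustrate the idea, the sketch below represents such an option tree as a nested Python structure. This is only a minimal illustration, not the actual Murelahendaja implementation, and the questions and hints shown are made up.

    # Minimal sketch of a troubleshooter as an option tree (illustration only,
    # not the actual Murelahendaja implementation). Each node is either a hint
    # (a string) or a question with the options the participant can choose from.

    TREE = {
        "question": "What kind of problem are you having?",
        "options": {
            "My program does not start": {
                "question": "What does the error message say?",
                "options": {
                    "SyntaxError": "Check the highlighted line for missing colons or brackets.",
                    "NameError": "A name is used before it is defined; check its spelling.",
                },
            },
            "The output of my program is wrong":
                "Re-read the exercise text and compare your output with the example run.",
        },
    }

    def walk(node):
        """Interactively walk the tree until a hint (leaf string) is reached."""
        while isinstance(node, dict):
            print(node["question"])
            options = list(node["options"])
            for i, text in enumerate(options, start=1):
                print(f"  {i}. {text}")
            choice = int(input("Choose an option: "))
            node = node["options"][options[choice - 1]]
        print("Hint:", node)

    if __name__ == "__main__":
        walk(TREE)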

Self-Assessment Questions

The programming materials of all courses contain self-assessment questions. These questions are designed to help learners evaluate for themselves how well they have understood the essential programming points of the material. Feedback is provided for every answer. Notably, the feedback not only states whether an answer was right or wrong, but also explains for every answer why it is right or wrong. Using these questions effectively can reduce the number of individual questions. Weekly videos and letters encouraged students to study the feedback on every possible answer.
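As a minimal illustration of this idea (not the format used by our actual course platform), each question can be stored together with a separate explanation for every answer option:

    # Illustrative sketch of a self-assessment question in which every answer
    # option carries its own explanation (not the actual course platform format).

    QUESTION = {
        "text": "What does the following program print?\n\nx = 3\nx = x + 1\nprint(x)",
        "options": [
            {"answer": "3", "correct": False,
             "feedback": "The second line assigns a new value to x before it is printed."},
            {"answer": "4", "correct": True,
             "feedback": "x + 1 is evaluated first and the result is stored back in x."},
            {"answer": "x + 1", "correct": False,
             "feedback": "print(x) outputs the value of the variable, not the text of an expression."},
        ],
    }

    def show_feedback(question, chosen_index):
        option = question["options"][chosen_index]
        status = "Right" if option["correct"] else "Not quite"
        print(f"{status}: {option['feedback']}")

    show_feedback(QUESTION, 0)   # explains why the answer "3" is wrong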

Different teaching methods are used when composing the material with self-assessment questions. The learning material is structured in different ways: sometimes a new topic is presented first and the self-assessment questions follow, and sometimes the self-assessment questions come first, followed by the material explaining the new topic. Some self-assessment questions use the teaching method of “learning by mistake”, which is a very powerful method in teaching programming. The questions have to be interesting; every answer, right or wrong, has to have didactic value. Composing such questions is not an easy process and is certainly a challenge for us.

Tests

The main purpose of the weekly tests is to make sure that the student has read and understood the general learning material of the course (programming and additional information). Each weekly quiz has 10 questions about the course learning material. The quiz is passed if 90% of the answers (9 questions out of 10) are correct. The quizzes mainly comprise multiple-choice questions featuring short example programs, to confirm that the learner can understand and analyse programs. The quizzes also include open-ended questions requiring short answers (e.g. a number). Depending on the answer, the student receives positive or negative feedback.

Feedback is given immediately after submission, and the number of submissions is not limited. If an answer is incorrect, the correct answer is not revealed; instead, negative feedback with a helpful hint towards the right answer is given. Unlimited quiz submissions and hints enable students to learn from the feedback and correct their answers. The downside, however, is that learners can simply try every answer until they find the correct one without really thinking about their responses. This is the main difficulty with quizzes in a MOOC.
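A minimal sketch of this grading logic might look as follows; the question, hint and answers are only illustrative and are not taken from an actual quiz.

    # Sketch of quiz grading (illustration only): pass at 9 out of 10 correct,
    # a wrong answer yields a hint instead of the correct answer.

    PASS_THRESHOLD = 9          # 90% of 10 questions

    QUIZ = [
        # hypothetical question; "hint" is shown when the answer is wrong
        {"text": "How many times does the body of 'for i in range(3):' run?",
         "correct": "3",
         "hint": "range(3) produces the values 0, 1 and 2."},
        # ... 9 more questions ...
    ]

    def grade(quiz, answers):
        """Return (score, feedback list); wrong answers get a hint, not the solution."""
        score = 0
        feedback = []
        for question, answer in zip(quiz, answers):
            if answer.strip() == question["correct"]:
                score += 1
                feedback.append("Correct.")
            else:
                feedback.append("Incorrect. Hint: " + question["hint"])
        return score, feedback

    score, feedback = grade(QUIZ, ["4"])      # a wrong answer
    print(feedback[0])                        # hint, not the correct answer
    print("Passed" if score >= PASS_THRESHOLD else "Not passed yet, feel free to retry.")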

Automatic Assessment and Feedback of Programming Exercises

Weekly programming exercises require the creation of a computer program as the outcome. The exercise specifies the input and output data of the program and sometimes also additional requirements or constraints that the program must comply with. Input and output are typically defined as a sequence of strings that must be read from or written to standard input-output streams or text files. Therefore, the correctness of the resulting program is defined by whether or not it outputs the expected data for a given input. This is very convenient, since it allows us to automatically feed predefined input data into the submitted programs and check whether the output is correct for that particular exercise. If a test case fails, automatic feedback can be given about what was expected of the program and how it actually behaved.
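A much simplified sketch of this idea is shown below; it is not our actual test script, and the file name and exercise are hypothetical.

    # Simplified sketch of output-based testing (not our actual test scripts):
    # run the submitted program, feed it predefined input and compare the output.
    import subprocess
    import sys

    def run_submission(path, stdin_text):
        """Run a submitted Python program with the given standard input."""
        result = subprocess.run(
            [sys.executable, path],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=5,           # guard against infinite loops
        )
        return result.stdout

    def check(path, stdin_text, expected):
        actual = run_submission(path, stdin_text)
        if actual.strip() == expected.strip():
            return "Test passed."
        return (f"Test failed.\nInput: {stdin_text!r}\n"
                f"Expected output: {expected!r}\nActual output: {actual!r}")

    print(check("solution.py", "2\n3\n", "5"))   # hypothetical exercise: add two numbers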

The number of submissions is not limited, to allow students to learn from feedback and correct their solution. We currently use the Virtual Programming Lab (VPL) for this kind of automated assessment. It allows us to define, for each exercise, test cases that contain the input and expected output of the program as well as feedback messages for different situations. During the design and creation of the automatic assessment tests we faced three critical requirements: the tests must accurately verify the correctness of the solution, they should give constructive feedback, and they should not impose strict constraints on the output formatting or program architecture. Each of these requirements is discussed in detail below.
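Before turning to these requirements, the sketch below shows the general shape of such test case definitions with per-case feedback messages. VPL has its own configuration format, so this Python version is purely illustrative, and the exercise and file name are made up.

    # Illustration of test cases with per-case feedback (VPL uses its own
    # configuration files; this sketch only shows the general idea).
    import subprocess
    import sys

    TEST_CASES = [
        # hypothetical exercise: read two numbers and print their sum
        {"input": "2\n3\n",  "expected": "5",
         "feedback": "Check that your program adds the numbers instead of concatenating them."},
        {"input": "-1\n1\n", "expected": "0",
         "feedback": "Make sure negative numbers are handled as well."},
    ]

    for case in TEST_CASES:
        result = subprocess.run([sys.executable, "solution.py"],
                                input=case["input"], capture_output=True,
                                text=True, timeout=5)
        if result.stdout.strip() != case["expected"]:
            print("Test failed:", case["feedback"])
            break
    else:
        print("All tests passed.")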

Firstly, the automatic tests must be very accurate, since the submissions are not reviewed by humans. This means that any submission that passes the tests must be correct in terms of the exercise, and any submission that fails at least one test must be incorrect. If submissions were reviewed by staff, the tests could remain imperfect, because a solution could be accepted manually even if it did not pass all tests; no such exception can be made in the case of automatically assessed MOOC exercises with thousands of submissions.

Secondly, the tests must provide constructive comments, because these are the main source of feedback for students whose submissions are not accepted. Assuming that the data used for testing a solution is public, basic feedback such as “Input: aaa. Expected the program to output xxx but got yyy instead” can always be given. Sometimes more advanced feedback can also be very useful, for example: “Function aaa was defined but it should also be called inside the main program.”; “The output of the program was correct but variable bbb was not used. This variable should be used in the output message.”; “This exercise does not require user input, so the ‘input()’ function should not be used.”
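As an illustration, a check like the last one could be sketched with Python’s ast module roughly as follows; this is a simplified example, not one of our production checks.

    # Simplified sketch of an "advanced feedback" check using the ast module
    # (illustration only): warn if a solution that should not read user input
    # calls the built-in input() function.
    import ast

    def uses_input(source_code):
        tree = ast.parse(source_code)
        for node in ast.walk(tree):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == "input"):
                return True
        return False

    submission = 'name = input("Your name: ")\nprint("Hello,", name)'
    if uses_input(submission):
        print("This exercise does not require user input, "
              "so the input() function should not be used.")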

Thirdly, the technical constraints on the solution must be as gentle as possible. Testing program output means determining whether the output contains the expected data and does not contain any unexpected data. This can be done easily if the output format of the data is clearly defined. However, following strict rules for the output format can often be counter-intuitive for students with no background in the exact sciences. For example, a student might use two empty lines between the first and second output messages instead of one, or even use a synonym for some word. Based on student feedback, we have noticed that these types of mistakes can be very discouraging, since essentially correct solutions are reported as faulty. Therefore, we have attempted to design the tests so that such technical constraints are kept to a minimum.
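One simple way to relax such constraints, sketched below purely as an illustration, is to normalise both the expected and the actual output before comparing them:

    # Sketch of a lenient output comparison (illustration only): ignore letter
    # case, extra blank lines and differences in whitespace.
    import re

    def normalize(text):
        """Lower-case the text and collapse all runs of whitespace."""
        return re.sub(r"\s+", " ", text).strip().lower()

    expected = "The sum is 5"
    actual = "the   sum is  5\n\n"                    # extra spaces and blank lines
    print(normalize(actual) == normalize(expected))   # True: accepted anyway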

Catering for all three requirements simultaneously is a challenge, and designing tests very often involves trade-offs between them. For example, accurate constructive feedback, and even suggestions, could be given if the architecture of the solution were fixed. However, there are many different solutions to a given problem, which means that defining a ‘correct’ architecture is not an option: all possible correct solutions must be considered. Extracting the pieces of logic that are invariant over all correct solutions has been one of the most difficult challenges for us when designing both the automatically graded exercises and the tests.

Live Programming

We set up live streams where one of the course organizers answers viewers’ questions with explanations and examples constructed live. All Live Programming sessions are recorded and can be viewed later by anyone.

Helpline

The courses About Programming and Introduction to Programming have a dedicated helpline. If a participant has a question or problem that they cannot solve themselves, they can write to the helpline. We aim to answer questions within a couple of hours. It is important that participants are never given the final solution to an exercise; instead, they are gradually directed by clues and examples to find the answer themselves.

Forums

Students have access to course forums where they can ask or answer questions. All discussions are public and non-anonymous. Posting solutions or parts of solutions to exercises is prohibited. The forums are especially important in Introduction to Programming II where they are one of the main support mechanisms along with Live Programming.

Weekly video “What’s happening in the course”

Every week, or sometimes every two weeks, a “What’s happening in the course” video is released. Its main purpose is to create the feeling of a “real” course that is happening right now, with real people behind it. This feeling of “real teachers, happening now” can very easily be lost in a heavily automated course such as a programming MOOC.

Thonny (Debugger)

For programming, we recommend Thonny, an integrated development environment specifically targeted at beginners. Thonny provides many beginner-friendly features, such as a very detailed but simple-to-use debugger and a pip-based package manager GUI for installing third-party packages. Thonny also saves detailed logs of how a program was created, which can be useful for a number of reasons.