Attempt Tracking in SCORM

Clients often ask us how attempts are tracked in SCORM. The answer is that there is no good answer.

SCORM has a formal notion of an attempt, but it is a technical concept that may or may not correspond to an intuitive understanding of what constitutes an attempt. It is perfectly legitimate for content authors to implement content in a manner where many "intuitive" learner attempts are contained in one "technical" attempt. It is also possible for many "technical" attempts to correspond to only one "intuitive" attempt.

In SCORM, the "technical" notion of an attempt simply indicates that the run-time tracking data associated with an activity has been reset. If an activity was previously completed, had a score of 85%, and had a saved bookmark, all of that data is re-initialized, and to the SCO it appears as if the learner is starting fresh. This behavior can be useful in a sequenced course, and it makes sense that if a learner is starting a new attempt, the run-time data should be refreshed to reflect that fact.
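
To make that reset concrete, here is a minimal sketch of how a SCO can tell whether its data survived, assuming a SCORM 2004 SCO and that the standard run-time API has already been discovered and exposed as API_1484_11 (the discovery code is omitted):

    // Minimal sketch, assuming the SCORM 2004 run-time API has already
    // been discovered and exposed as API_1484_11.
    declare const API_1484_11: {
      Initialize(param: ""): string;
      GetValue(element: string): string;
    };

    API_1484_11.Initialize("");

    // cmi.entry is "resume" when the prior session was suspended within
    // the same attempt, and "ab-initio" when a new attempt has begun.
    if (API_1484_11.GetValue("cmi.entry") === "resume") {
      // Run-time data survived: bookmark, score, and status are intact.
      const bookmark = API_1484_11.GetValue("cmi.location");
      // ...jump the learner back to where they left off using bookmark...
    } else {
      // "ab-initio": the data has been re-initialized; to the SCO, the
      // learner appears to be starting fresh.
    }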

In SCORM 2004, several actions can trigger a new "technical" attempt. Most commonly, if a SCO exits with a cmi.exit value of anything other than "suspend", a new attempt will be started the next time the SCO is launched. Sequencing rules can also lead to activity data being reset; most commonly, when an activity is "retried" as a result of remediation, all of the activity's state data is reset for a new attempt. In SCORM 1.2, the behavior is more ambiguous and varies across LMSs. The only defined trigger for starting a new attempt is the value of "cmi.core.exit": in some LMSs, not setting it to "suspend" will trigger a reset of run-time data; in others, run-time data is never reset. Unfortunately, this behavior is quite limiting, since completely resetting run-time data is the desired behavior in only a few narrowly defined use cases. In the Rustici Engine, we implemented a package property that allows course administrators to precisely control when SCO run-time data should be reset in all of the standards.
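
As a sketch of the run-time calls involved, here is how a SCO might choose its exit value at the end of a session. The endSession helper and its parameter are our own illustration, not part of the standard:

    // Sketch: the exit value set at the end of a session determines
    // whether a new "technical" attempt starts on the next launch.
    declare const API_1484_11: {
      SetValue(element: string, value: string): string;
      Commit(param: ""): string;
      Terminate(param: ""): string;
    };

    function endSession(learnerWillReturn: boolean): void {
      if (learnerWillReturn) {
        // "suspend" preserves run-time data; the next launch resumes
        // the same attempt.
        API_1484_11.SetValue("cmi.exit", "suspend");
      } else {
        // Any other value (e.g. "normal") ends the attempt; a SCORM
        // 2004 LMS will start a fresh attempt on the next launch.
        API_1484_11.SetValue("cmi.exit", "normal");
      }
      API_1484_11.Commit("");
      API_1484_11.Terminate("");
    }

    // In SCORM 1.2 the analogous element is "cmi.core.exit", set via
    // API.LMSSetValue, with the LMS-dependent behavior described above.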

In most cases, if the learner re-attempts a SCO, the SCO wants to remember the data from the previous attempt. Perhaps the SCO wants to remember the previous location so it can let the learner jump directly to the test. Sometimes, the SCO wants to remember the learner's previous responses to test questions so they can be displayed to the learner. Often, the SCO just wants to know that the learner has been there before so it can tell the learner to "try again". For these reasons, and many more, content authors will choose to maintain just one attempt on a course to avoid the resetting of run-time data. This leads to the situation where many "intuitive" attempts are contained within one "technical" attempt.
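
For instance, a SCO that always exits with "suspend" can stash whatever it needs in cmi.suspend_data and restore it on every launch. A rough sketch follows; the shape of the state object is entirely our own invention:

    // Sketch: persisting state across sessions within one "technical"
    // attempt. The ScoState shape is hypothetical.
    declare const API_1484_11: {
      GetValue(element: string): string;
      SetValue(element: string, value: string): string;
    };

    interface ScoState {
      visits: number;       // how many "intuitive" attempts so far
      lastLocation: string; // e.g. the page the learner left on
    }

    function loadState(): ScoState {
      // On a brand-new attempt, suspend_data comes back empty.
      const raw = API_1484_11.GetValue("cmi.suspend_data");
      return raw ? (JSON.parse(raw) as ScoState)
                 : { visits: 0, lastLocation: "" };
    }

    function saveState(state: ScoState): void {
      // suspend_data is a plain string; the guaranteed size varies by
      // version (4,096 characters in SCORM 1.2, 64,000 in 2004 4th Ed).
      API_1484_11.SetValue("cmi.suspend_data", JSON.stringify(state));
    }

    const state = loadState();
    state.visits += 1;
    if (state.visits > 1) {
      // The learner has been here before: offer "try again", or jump
      // straight to the test via state.lastLocation.
    }
    saveState(state);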


Alternatively, in courses that rely heavily on sequencing, the normal flow of a course might very well contain sequencing actions which create new attempts. For example, sequencing rules might repeatedly retry an exercise to let the learner practice before moving on. This would show up as several "technical" attempts, but it wouldn't indicate that the learner had to repeatedly try in order to achieve mastery...it is just part of the normal flow of the course. We now get into the definition of an "intuitive" attempt. Some people would want to see that the learner went through a section several times; others would only want to see new attempts initiated when the learner is really starting from scratch to "try again". And now you start to realize why this article began by saying "there is no good answer" (and we haven't even touched on the distinction between a "launch" and an "attempt").

From a content developer's point of view, there is still the predicament of how best to indicate to the LMS that multiple attempts have occurred. There are several options, each with its own pros and cons. The biggest obstacle, though, is that SCORM doesn't specify any requirements for how LMSs are supposed to report data. Some LMSs will report data in great detail, including a complete history of the learner's interaction with the course. More commonly, LMSs will provide a snapshot of the current state of the course, but not a historical record. And, unfortunately, some LMSs will only display the high-level completion and satisfaction status of a course. As a content author, you can only make your best effort to report meaningful data; beyond that, the matter is out of your hands.

Option 1: Use "technical" attempts to represent intuitive attempts. This option is the most clear-cut way to actually represent attempts, but unfortunately it can limit the flexibility you have within your content, since you can't effectively persist much state data. There are some ways to get around this lack of persistence by using global objectives in SCORM 2004, and in 2004 4th Edition, there is a new concept of shared data that is designed to alleviate this precise concern. In short, there are some limitations, but they can be overcome with some sequencing work.
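
To illustrate the 4th Edition shared data mechanism just mentioned, here is a hedged sketch. The bucket identifier "com.example.progress" is made up, and buckets must also be declared in the package manifest, which is omitted here:

    // Sketch: reading and writing a SCORM 2004 4th Edition shared data
    // bucket. Unlike ordinary run-time data, adl.data buckets survive
    // new "technical" attempts and can be shared between SCOs.
    declare const API_1484_11: {
      GetValue(element: string): string;
      SetValue(element: string, value: string): string;
    };

    // Find the index of a declared bucket by its identifier.
    function findBucketIndex(id: string): number {
      const count = parseInt(API_1484_11.GetValue("adl.data._count"), 10);
      for (let i = 0; i < count; i++) {
        if (API_1484_11.GetValue("adl.data." + i + ".id") === id) {
          return i;
        }
      }
      return -1;
    }

    const idx = findBucketIndex("com.example.progress");
    if (idx >= 0) {
      // This value persists even after the attempt's data is reset.
      const previous = API_1484_11.GetValue("adl.data." + idx + ".store");
      API_1484_11.SetValue("adl.data." + idx + ".store",
                           previous + ";visited");
    }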

Option 2: Use the run-time data model to indicate attempts. If you avoid the use of "technical" attempts, your implementation will have more flexibility and be more straightforward; however, an LMS is less likely to be able to explicitly report on the attempts. You can use the run-time data model to expressively indicate how the learner has progressed through the content. One simple solution is to store an objective for each attempt: give each objective an identifier of "attempt_#" and set its status value to the value the learner attained at the end of that attempt. Also, to give a precise indication of how the learner progressed through the SCO, you can use a journaling scheme for recording interactions (and remember that an interaction isn't necessarily a question result). Fully populate the interactions data model, especially the timestamp, and you can give a very detailed account of what the learner did while in the SCO.
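
Here is a rough sketch of both techniques using standard SCORM 2004 data model elements; the helper names and the "other" interaction type are our own illustrative choices:

    // Sketch: one objective per "intuitive" attempt, plus a journaled
    // interaction with a timestamp.
    declare const API_1484_11: {
      GetValue(element: string): string;
      SetValue(element: string, value: string): string;
    };

    // Record the outcome of an attempt as objective "attempt_#".
    function recordAttempt(attemptNumber: number, passed: boolean): void {
      const n = parseInt(API_1484_11.GetValue("cmi.objectives._count"), 10);
      API_1484_11.SetValue("cmi.objectives." + n + ".id",
                           "attempt_" + attemptNumber);
      API_1484_11.SetValue("cmi.objectives." + n + ".success_status",
                           passed ? "passed" : "failed");
    }

    // Journal an interaction; it need not be a question result.
    function journal(id: string, result: string): void {
      const n = parseInt(API_1484_11.GetValue("cmi.interactions._count"), 10);
      API_1484_11.SetValue("cmi.interactions." + n + ".id", id);
      API_1484_11.SetValue("cmi.interactions." + n + ".type", "other");
      // result should be a defined vocabulary value ("correct",
      // "incorrect", "neutral", ...) or a number.
      API_1484_11.SetValue("cmi.interactions." + n + ".result", result);
      // The timestamp (ISO 8601) is what turns the interaction list
      // into a precise journal of what the learner did and when.
      API_1484_11.SetValue("cmi.interactions." + n + ".timestamp",
                           new Date().toISOString());
    }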


From an LMS developer's point of view, you should strive to provide as much data as possible. In the Rustici Engine, we provide a detailed snapshot of the current run-time data (to accommodate content vendors who use Option #2) and also a detailed record of how the data changed as the learner progressed through the course (to accommodate Option #1).
