Over the last year or so, many people have written and spoken about the new Experience API (also known as Tin Can API).
It’s creating quite a buzz in the learning and development community, with many seeing it as the death of SCORM, and as the means to really understand how our learners behave.
So much so that the question “Does it do Tin Can?” is starting to appear in general conversations about learning systems. This reminds me of when “Are you SCORM compliant?” started to become a standard requirement in the early 2000s – regardless of whether SCORM was actually relevant, useful or necessary for a particular context.
In this article I am going to look at three key questions and examine whether the Experience API really is the answer to all our problems, or whether it might just bring different ones!
I will not be dealing with the technicalities of how the API works, although you can find some useful references below.
At its most basic, the Experience API allows one system to send a message (known as a statement) to another system about something a user has done.
The statement is usually of the form: actor – verb – object, for example “I did this”.
The data sent can be much, much richer than just that simple statement. Within the API specification, there are fine-grained levels of detail available for each component of the statement.
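To make the shape of a statement concrete, here is a minimal sketch in Python. The names, email address and activity URL are made up for illustration; the verb URI shown is one of those registered by ADL.

```python
import json

# A minimal xAPI statement: actor - verb - object, plus an
# optional "result" to show the richer detail available.
# Jane, example.com and the course URL are hypothetical.
statement = {
    "actor": {
        "mbox": "mailto:jane@example.com",
        "name": "Jane Doe",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/courses/fire-safety/module-1",
        "definition": {"name": {"en-US": "Fire Safety, Module 1"}},
    },
    # Optional richer components: result, context, timestamp...
    "result": {"completion": True, "duration": "PT5M"},
}

print(json.dumps(statement, indent=2))
```

Each of those components (actor, verb, object, result, context) can carry much more detail than shown here.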
At the receiving end of these statements is a system known as a Learning Record Store (LRS). There are currently a few standalone LRS applications, and some LMS vendors are also building LRS functionality into their systems.
Given the richness and quantity of the data that can be sent to the LRS, a lot of people are now thinking about how to use analytics techniques (similar to Google Analytics for websites) to pull meaning out of the data. This then involves high-powered statistics to ensure that the meaning that’s derived is valid and accurate.
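The first step of most such analysis is simply counting and grouping. A hedged sketch, using hypothetical statements already fetched from an LRS and simplified to actor/verb pairs (a real query would use the LRS’s statements endpoint and handle paging):

```python
from collections import Counter

# Hypothetical, simplified statements pulled from an LRS.
statements = [
    {"actor": "jane@example.com",   "verb": "completed"},
    {"actor": "jane@example.com",   "verb": "experienced"},
    {"actor": "sanjay@example.com", "verb": "completed"},
    {"actor": "sanjay@example.com", "verb": "completed"},
]

# Count how often each verb occurs per actor - the raw material
# that any statistical analysis would then be applied to.
activity = Counter((s["actor"], s["verb"]) for s in statements)

print(activity[("sanjay@example.com", "completed")])
```

Deriving valid meaning from those counts is where the statistics come in.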
One of the really exciting things about the Experience API is that it isn’t just about learning. In fact, if it was just about learning then we’ve not really gained very much at all.
Imagine an IT support system that recorded which agent resolved each support ticket, and how well the customer rated the resolution.
The system could then pass a statement to the LRS that said something like: “Joe resolved ticket #1234 with a satisfaction rating of 4 out of 5”.
If you then combine that with a system that provides training or support for the help desk agents, which could send statements like: “Joe read the article ‘Dealing with printer jams’”…
You can then start to do some analysis to show whether the people who are solving problems well are the same ones reading the support materials.
You could even do some cause and effect tests, seeing what the resolution scores are like, then releasing some support materials and watching how the scores change.
Of course, that’s a simple example. But I hope it illustrates what’s possible.
As well as sending messages to a Learning Record Store, the Experience API also allows the LRS to act as a repository for any arbitrary data. This works much like the Kindle app on your phone, which stores information about bookmarks and annotations; when you then read via the Kindle app on your tablet, that data gets picked up and used.
For those who know SCORM, this is similar to suspend data, but there is no size limit, and the data can be stored in a structured way rather than as one long string of characters.
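The difference is easy to see side by side. A sketch contrasting a SCORM-style suspend string with a structured document of the kind the Experience API’s state storage can hold (the state keys and the encoding of the suspend string are invented for illustration):

```python
import json

# SCORM suspend_data: one opaque, size-limited string that the
# content itself has to encode and decode.
scorm_suspend = "p3|b12,47|n:revisit-quiz"

# Experience API state storage: an arbitrary JSON document,
# stored and retrieved per activity/learner.
state = {
    "page": 3,
    "bookmarks": [12, 47],
    "notes": ["revisit-quiz"],
}

# The document survives a round trip intact and structured.
body = json.dumps(state)
restored = json.loads(body)
print(restored["bookmarks"])
```

No custom string-packing scheme is needed; any client that retrieves the document gets the structure back.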
Before I answer that question, let’s have a quick reminder of what SCORM is, and what it is for.
The Shareable Content Object Reference Model (SCORM) is a collection of ideas that describes how to put “Content Objects” together in a way that they can be reused (Shared) across multiple, different systems.
These ideas cover:

1. Describing content with metadata
2. Packaging content for delivery
3. Run-time communication between the content and the LMS
4. Sequencing and navigation between content objects

In corporate learning, (1) and (4) are very rarely used – because they’re pretty complicated and very few Learning Management Systems know what to do with the data.
In theory, by using SCORM, you can build a piece of content once and have it run, and be tracked, on any compliant Learning Management System.

In reality though, LMS implementations vary enough that content often needs testing and tweaking for each system, and most organisations track little more than completions and scores.
Outside of corporate learning very few people will have ever heard of SCORM. Yet many people will have used content delivery systems such as YouTube, Scribd, Slideshare and Flickr.
All of these take learning content in a particular packaged format and deliver it in the most efficient way possible to the end user.
But these platforms do so much more than we get from our corporate learning systems: they work on any device, they let users comment, rate, share, embed and subscribe, and they recommend related content.
All of this is for the benefit of the end-user. If it weren’t, the platforms wouldn’t get used, and no-one would publish content to them.
In contrast, how do we use SCORM packages? Typically, we lock them away inside a Learning Management System, behind a login.
For most current corporate elearning, we could easily get by without using SCORM, if our systems allowed it. The ideal, of course, would be for our systems to accept multiple content types (including SCORM) and display them all in a consistent, user-centred way, like Slideshare, YouTube etc.
If you still want to track who has done what, that’s where the Experience API comes in.
You can take a content management system, like WordPress, and add Experience API capability using a plugin such as GrassBlade. Every time someone views a page, a statement is sent to your LRS with information about the user and what they did. GrassBlade even allows you to embed content from compatible sources (like Articulate Storyline), with statements fed from the content via GrassBlade to the LRS.
So, with the Experience API you get all the benefits of using a proper content management system (or whichever system you’d prefer your users to be in), along with the ability to track activities, with rich data, in an independent system.
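Conceptually, what such a plugin does on each page view is build a statement and POST it to the LRS with credentials and the API version header. A hedged sketch of that request construction – the endpoint, credentials, user and page are all hypothetical, and the actual network call is omitted:

```python
import base64
import json

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical
LRS_USER, LRS_PASS = "key", "secret"                      # hypothetical

def page_view_statement(email, page_url, page_title):
    """Build the statement a CMS plugin might emit for a page view."""
    return {
        "actor": {"mbox": f"mailto:{email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        "object": {"id": page_url,
                   "definition": {"name": {"en-US": page_title}}},
    }

def build_request(statement):
    """Return (headers, body) for a POST to the LRS statements endpoint."""
    token = base64.b64encode(f"{LRS_USER}:{LRS_PASS}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.0",  # required by the spec
        "Authorization": f"Basic {token}",
    }
    return headers, json.dumps(statement)

headers, body = build_request(
    page_view_statement("jane@example.com",
                        "https://intranet.example.com/onboarding",
                        "Onboarding guide"))
print(headers["X-Experience-API-Version"])
```

Because the LRS is a separate system, the same pattern works from any content platform, not just an LMS.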
Each implementation of the Experience API will be different. Unlike SCORM, which is very Learning & Development / Training focussed and relatively simple to implement, the Experience API, used properly, can have organisation-wide implications, and will require quite detailed planning.
For the data to be useful to your organisation you really need to be pulling it in from operational systems as well as L&D systems. How you divide up the responsibilities is up to you, but Operations should certainly be involved in the decision-making.
But also, you’ll need to consider the end-user’s perspective. Much of the data may be very personal to them. Should it be transportable to other systems when they move on?
At the moment, the Experience API is almost exclusively being driven by the Learning Technology community. As Analytics capabilities develop, and as Operations start to see what might be possible, that will hopefully change.
In the meantime, you can use the APIs provided by your operational systems vendors to connect to your LRS, translating their data into Experience API statements. Reuben Tozman’s post on Rethinking Design with Tin Can demonstrates the possibilities of connecting Salesforce to your LRS.
This is the biggest question of all. As we know from SCORM, it’s very easy to collect lots of data that actually means very little. The trick is to make the important measurable, rather than the measurable important.
Which brings me onto the verbs. These are the parts of the statement that will have the most impact on what information you’re able to gather.
The Experience API is completely open in that you can choose to use any verbs you like. However, your system vendors (particularly in the learning technology space) may well have preset verbs embedded in their systems or their content. You will need to assess whether these verbs work in your context.
If you are using the Experience API to build custom integrations with other systems, then you can choose your own verbs, and what conditions generate them. That will require careful designing so that you can extract meaningful information from the inevitable noise.
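One simple design that helps is an explicit verb map: only events you have decided are meaningful get translated into statements, and everything else is dropped rather than flooding the LRS. A sketch, with invented event types and a made-up custom verb URI alongside a registered ADL one:

```python
# Hypothetical verb map for a custom integration. The custom
# "resolved" URI is illustrative; the "experienced" URI is a
# verb registered by ADL.
VERB_MAP = {
    "ticket_resolved": "http://example.com/verbs/resolved",
    "article_viewed":  "http://adlnet.gov/expapi/verbs/experienced",
}

def to_statement(event):
    """Translate a raw system event into a statement, or None (noise)."""
    verb_id = VERB_MAP.get(event["type"])
    if verb_id is None:
        return None  # deliberately not recorded
    return {
        "actor": {"mbox": f"mailto:{event['user']}"},
        "verb": {"id": verb_id},
        "object": {"id": event["object"]},
    }

kept = to_statement({"type": "ticket_resolved",
                     "user": "amy@example.com",
                     "object": "http://helpdesk.example.com/tickets/1234"})
dropped = to_statement({"type": "mouse_moved",
                        "user": "amy@example.com",
                        "object": "http://helpdesk.example.com"})
print(kept["verb"]["id"], dropped)
```

Deciding what goes in the map, and which conditions trigger a statement, is exactly the careful design work described above.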
Anecdotal tales of early, large-scale adoptions of the Experience API speak about drowning in data, or systems that are unable to cope with the quantity of data being transferred. That doesn’t mean the API is at fault, but perhaps how it was being used might be.
This question seems to come up at most Experience API workshops. If you think of the Learning Record Store as a data warehouse that is storing information about what individuals have done, how they’ve performed, and the characteristics of that individual, then you might be right to be concerned.
This is the sort of thing that Enterprise systems like SAP and Oracle do, so I can’t see that it’s that big an issue. But you will need to make sure your data protection and information security policies and practices are up to scratch, and that you comply with whatever jurisdictions you’re working within.
Version 1.0 of the API was released in April 2013, so it’s still very early days. My hope is that we’ll see vendors that are not in the traditional learning technology market adopting the API. Once we get statements coming from CRM systems like Salesforce, social media tools like Yammer, and help desk systems like Zendesk, then we’ll know it’s really making an impact on performance, not just learning.
Hosted by ADL, the people behind SCORM, and sponsors of the Experience API
Provided by the people tasked with producing and promoting the API
How the Experience API messages are built up.
A great list of open source tools, applications and documents that will help people to understand and use the Experience API
An opinion piece on the Experience API, but without the usual hyperbole
The Experience API from the point of view of improving performance, rather than just tracking activity
A post that asks key questions about trust and what the data can and cannot show us. The comments, in particular, are very valuable.
Reuben looks at the exciting possibilities of connecting learning systems and operational systems (e.g. Salesforce) with the Experience API
Koreen picks up from her conversation with Eric Fox and widens it out; asking questions on privacy and meaningful analytics, and getting great responses from the wider community.
One of the few available standalone Learning Record Stores, with analytics functions
An LRS service, and also an open-source system to install on your own servers, which is designed to allow learners to take control of their data. It’s from HT2, the same people that brought us Curatr.
An Experience API plugin for WordPress
A content authoring tool which produces Experience API statements.
Posted: 06 September 2013