Monday, 25 November 2013

Catching the stragglers



Originally published to eBridge on Wednesday 19 December 2012

I'm pleased to say that the anti-bribery e-learning has just about run its course: well over 1,000 people have completed it, and the focus is now on gentle reminders to those who have yet to finish. I've also had a good chance to run some live online sessions for staff who were struggling with the assessment, and an actual accessibility problem to deal with - successfully, I might add! Some of the remaining issues concern cross-platform compatibility and occasional failures of the SCORM tracking, which may call for some digging around in the support forums for the authoring tools.
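As an aside on those tracking failures, one common mitigation is simply to retry the commit before giving up. A minimal sketch of that pattern is below - in a real course this runs as JavaScript in the browser against the LMS-provided API object (LMSSetValue/LMSCommit in SCORM 1.2); everything here, including the mock LMS class, is invented for illustration:

```python
# Sketch of a defensive wrapper for intermittent SCORM commit failures.
# FlakyLMS is a hypothetical stand-in for an LMS API object.

class FlakyLMS:
    """Stand-in for an LMS API whose commits occasionally fail."""
    def __init__(self, failures_before_success):
        self.failures_left = failures_before_success
        self.data = {}

    def set_value(self, element, value):   # cf. SCORM 1.2 LMSSetValue
        self.data[element] = value

    def commit(self):                      # cf. SCORM 1.2 LMSCommit
        if self.failures_left > 0:
            self.failures_left -= 1
            return False
        return True


def commit_with_retry(lms, element, value, attempts=3):
    """Set a tracking element, then retry the commit up to `attempts` times."""
    lms.set_value(element, value)
    for _ in range(attempts):
        if lms.commit():
            return True
    return False
```

The point of the sketch is only that a transient failure need not lose a learner's completion status if the commit is retried - it won't help with genuine cross-platform incompatibilities.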



Aside from the negative feedback mentioned above, I'm satisfied that the e-learning has done some good in raising awareness, and it should provide a solid basis for assuring regulatory bodies of our compliance. I'm quite confident that the organisation's charitable ideals are largely mirrored in my colleagues, but with extensive and ongoing change in our operations, and natural staff turnover, it makes sense to have the training in place. However, I do have to ponder how the training could be modified for continuous improvement, particularly in light of the criticisms of compliance training helpfully summarised by Jennings (2012). For genuine behavioural impact, I think that broadening the range of contexts covered, and increasing the game-like qualities, will be essential.

Test design - meet the professionals!


Originally published to eBridge on Friday 16 November 2012



My anti-bribery e-learning has now been live for two full weeks, and the feedback is a mixture of good and bad. The biggest positive is that well over 400 people have completed every part of the e-learning, with a fairly positive reception for some of the more experimental aspects, such as the decision-making scenarios. But...

Working for an organisation that designs assessments has shown me that people have some very rigid ideas about what constitutes a valid test. In particular, one colleague in our research department seemed to expect the questions to conform to the standard knowledge tests designed to discriminate between different abilities at GCSE. Others have attacked the complexity of the wording, and the variation in how questions are to be answered, for instance having to decide how many and which statements are correct from a set of judgements about scenarios. There are some points that I will take on board for the future - perhaps alternating between questions that ask learners to spot true statements and ones that ask them to spot false statements was a step too far!
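To make the 'how many and which statements are correct' item type concrete, here is an illustrative scorer. The rule shown (full marks only for an exact match, no partial credit) is my own assumption for the sketch, not necessarily the rule used in the live assessment:

```python
# Hypothetical scorer for a multi-select item where the learner must
# identify exactly which statements in a scenario are correct.

def score_multi_select(correct, selected):
    """Return True only when the selected statements match the key exactly."""
    return set(selected) == set(correct)
```

Under this all-or-nothing rule, selecting only some of the correct statements, or adding a wrong one, both score zero - which is part of what makes the item type feel harsh to learners expecting conventional single-answer questions.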

However, I'm definitely not alone in thinking that the test should be demanding. After all, it is designed to protect the pupils who take our tests from bribery, fraud and corruption, so a test that's too easy, and doesn't prompt recall later on, offers only a dangerous illusion of assurance. My reasoning behind the complexity of the questions was inspired by an article by Dror (2006), in which he suggests that having learners make judgements about the information encourages better recall later. I'm so passionate about this topic because, having read accounts by Bennett (2001) and Chapman (2002) of how bad corruption in education can be, I'm determined that we take an attitude of 'the buck stops here' rather than doing the minimum to meet our legal obligations. I also have a deeper, more subtle motivation for taking active ownership of this policy: Ronfeldt (1996, p.17) lists corruption as the key risk associated with hierarchical organisations (i.e. governments). With the government tightening its grip on the education system, I must ask the eternal question: 'Who watches the watchers?'


Convergence

Originally published to eBridge on Saturday 10 November 2012


As I start the process of evaluating how far my latest project has progressed towards a game-like model, I came across Quinn's (2005, Ch.2) idea of a convergent model, in which he links the use of decision-making in games to higher levels of cognitive engagement. Whilst multiple-choice and true-or-false questions are technically decisions, we need to move from knowledge testing to knowledge application. I believe that my use of scenario-based questions asking learners to identify dishonest behaviours, even though they are strictly in a multiple-choice format, has helped to bring my design closer to engaging higher-order thinking. I will, of course, be checking the feedback from my learners! My key points for reflection and future development will be how successful this has been in increasing engagement, and how to move to the next level - gaming pun intended!

References


  • Quinn, C. (2005). Engaging Learning: Designing e-Learning Simulation Games. San Francisco: Pfeiffer.

Putting things into practice (a ridiculously long post)


Originally published to eBridge on Saturday 3 November 2012


Yesterday was an interesting day, as my anti-bribery, fraud and corruption e-learning finally went live, a little more than four months after the project's inception. There were a few complaints from people who failed the test first time, including one colleague who was a little abrasive to me on the phone! However, the test was designed to be challenging, and most of the failures seem to have been the result of people rushing in without reading the questions carefully. Some learning experiences have to be painful, I suppose (Wheeler, 2012), although it's probably not advisable to use this as an off-the-cuff response! I made some amendments to the feedback to make sure that people could understand exactly where they had gone wrong, and updated the materials live, so hopefully I can rest easy during my week off! In terms of completion we're off to a fine start, with over 100 of our 1,200 employees finishing the compliance training on the first afternoon, and a similar number having at least made some progress.

I was initially approached for advice about buying something 'off the shelf' back in June, but it quickly became apparent that this kind of product lacked any real context, and so I went to meet our HR stakeholder for some initial action mapping (Moore, 2008). From this I was able to start designing trial scenarios, building on previous forays into branching scenarios (Shepherd, 2011) and aiming to go a little further towards a game-like approach to make the decisions more engaging. After some input from stakeholders and final tweaking I had two good working scenarios, along with scattered ideas for two more that had to be abandoned.
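Under the hood, a branching scenario is just a graph of decision points, where each choice leads to the next node. The sketch below shows one way to model that; the node names, prompts and choices are invented for illustration (the real scenarios were built in an authoring tool, not coded by hand):

```python
# Hypothetical branching scenario modelled as a dictionary graph.
# Each node has a prompt and a mapping from choice text to next node.

SCENARIO = {
    "offer": {
        "prompt": "A supplier offers you hospitality tickets. What do you do?",
        "choices": {
            "accept quietly": "escalation",
            "declare it in the gift register": "safe_end",
        },
    },
    "escalation": {
        "prompt": "The supplier later asks for favourable treatment.",
        "choices": {
            "comply": "bad_end",
            "report the approach": "safe_end",
        },
    },
    "safe_end": {"prompt": "You followed policy.", "choices": {}},
    "bad_end": {"prompt": "You are now compromised.", "choices": {}},
}


def play(scenario, start, decisions):
    """Follow a list of decisions through the scenario; return the nodes visited."""
    path, node = [start], start
    for choice in decisions:
        node = scenario[node]["choices"][choice]
        path.append(node)
    return path
```

Structuring the content this way is what makes the game-like quality possible: the same starting situation can lead to very different endings depending on the learner's decisions.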

Next came the design of the assessment questions, though on reflection I think this should have been done first. Rowntree (1997) encourages us to start with this every time, and after being asked to make some heavy revisions to my initial question set I can see why. I had designed my questions using Dror's (2006) principle of getting learners to make judgements about the material rather than relying on factual recall, and in the quest to produce convincing 'wrong' answers I had strayed into excessive detail, which might have been kept in check if I had started with the assessment and stuck to the desired behaviours identified in the action mapping process. Still, judging by some of the more positive feedback I've had, I think this has been a valuable learning experience in developing effective learning!

I'll be interested to see how things have progressed when I get back from leave...

Gaming technology and reflection

Originally published to eBridge on Wednesday 31 October 2012


Prensky (2007, p.50) takes a moment to address one of the criticisms often directed at the use of digital games as a tool for learning: the lack of reflection. This is particularly apparent when the games we use are 'twitch speed', with no room for error. Race (2010) lists reflection, or sense-making, as one of the key factors for successful learning, and notes that:

     '...we can't make sense of things for our learners - only they can do it. So our job becomes to provide them with the best possible environment...' (p.20)

When we decide to incorporate gaming technologies into a learning programme, we have a duty to ensure that the reflective part of learning is not sacrificed. If we don't give people a chance to ground themselves, they won't develop any self-reliance in their learning, and will flounder when their new skills are no longer relevant - an increasingly rapid and certain occurrence in our evolving society.

References:


  • Prensky, M. (2007). Digital Game-Based Learning. St. Paul: Paragon House.
  • Race, P. (2010). Making Learning Happen: A Guide for Post-Compulsory Education. 2nd ed. London: Sage.

The spread of innovations


Originally published to eBridge on Monday 29 October 2012

I've just been reading an article by Robinson (2009) about how innovations spread, and it reminded me of a few blog posts I have come across this year. Wheeler (2012) draws a humorous analogy about resistance to the introduction of technology in the classroom, which sits somewhat closer to the 20:60:20 approximation that Robinson refers to.

More recently, Radick (2012) raises similar themes when discussing how the new social media tool 'Pinterest' achieved successful adoption by harnessing the enthusiasm of early adopters, rather than trying to encourage adoption through more traditional influencers, and in doing so set desirable standards of behaviour within the medium.
