So why do we at Leo care about measuring learning at all? Because it is a mistake to theorise before one has data: one begins to twist facts to suit theories instead of theories to suit facts. Because if you don't measure the impact of your learning, you are at best beautifully bumbling along, surviving by guessing what works for your students. At worst you are using your in-built bias to force learning upon them that may not be right for them. So what is the value of actually measuring real-world impact, apart from what I've just said there? This is a slightly business-focused point: the rest of the world already does it. Marketing and retail, Amazon being the obvious example, measure the impact of more or less everything they do, and there's no reason learning should be different. Now, whenever learning analytics comes up, people raise the same three objections. First, that the data will never be scientifically rigorous enough to prove anything. Well, Martha Stewart was not convicted by scientifically valid experimental data, she was convicted by a chain of evidence that was good enough to convict her and put her away for a long time. And if that's good enough for a court, it should be good enough for us. Secondly, it's too hard. It is quite hard, I'll give you that.
All I can say to this is that if you get the right people and the right tools involved, it becomes easy. Making a car is hard. I doubt anyone in this room has made a car. You go to the garage, you go to somebody who's built the car, you buy one off the shelf. Someone has designed it, someone has bought or created the tools to build it. They've hired and managed the workforce to build it. They've got the supply chain to deliver it to you. Why should learning analytics be any different? And thirdly, it's too expensive. Again, it's a fair point. But towards the end I'm going to point out some tools that we've used inside Leo that are either cheap or, in quite a few cases, free open-source tools. So at Leo we use a model. It was actually developed by Mike Rustici of Rustici Software, the scorm.com people, who was one of the main drivers behind the xAPI specification. He now runs a learning analytics company called Watershed, which Mark was mentioning earlier on. These seven steps work quite well as a simple breakdown of how to get started. So, first of all, work out what you want to do. A lot of what we've been presenting and hearing about so far are big machine-learning analytics tools, and, as people have said, there's probably not enough data for those; in any case you probably don't want to start there. Start small: maybe consider running just an A/B test between two courses. Have two cohorts do the same learning but through two different courses. Or you can take the big-data approach, throw the data at it and see whether any correlations appear. But the key thing we say with this as well is that it's no good just measuring your learning, your drop-outs and things like that. The topic I was given was measuring real-world impact. You've got to work out what, in business, we call the KPIs. What are the KPIs you're trying to move with your people? You're trying to make them more efficient, sell more doodads, build more whatsits. You're trying to make them stop leaving their laptops on the bus. Think in those terms. And I think in academia we don't necessarily measure that. We measure knowledge retention and test performance rather than actually saying: let's find a real-world outcome, something we can truly measure. Secondly, when you want to do this you've got to get buy-in from your business, and it's no good just getting buy-in from your learning team. You need buy-in from the top and buy-in from the bottom. If you're tracking data about people, people get fearful. They get fearful that you're going to use it as a stick, not a carrot; that you're going to use it to kick them out or to penalise them. Whereas in reality you're doing this because you want to improve the efficacy of your learning, and that's what you've got to communicate at all levels of the business. It also helps later, which we'll talk about more, when you need to actually get the data. Thirdly, I would suggest you design an experiment. As I said, take an A/B approach and work out what you want to compare. Learning data in isolation is no good: you've got to have something quantifiable that you can go and measure if you want to prove the impact of your learning. Whichever method you use, you can do a lot with that.
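To make that concrete, here's a minimal sketch of the kind of A/B comparison I'm describing, assuming you've ended up with simple pass/fail counts for two cohorts. The numbers and the 5% significance threshold are made up for illustration; the point is only that a basic statistical test tells you whether a difference is worth acting on.

```python
# Minimal sketch of an A/B comparison between two course cohorts.
# Assumes simple pass/fail counts per cohort; all numbers are invented.
from scipy.stats import fisher_exact

# Cohort A: existing course. Cohort B: redesigned course. (Hypothetical.)
passed_a, failed_a = 42, 18
passed_b, failed_b = 55, 9

rate_a = passed_a / (passed_a + failed_a)
rate_b = passed_b / (passed_b + failed_b)
print(f"Cohort A pass rate: {rate_a:.0%}, Cohort B pass rate: {rate_b:.0%}")

# Fisher's exact test: is the difference in pass rates likely real,
# or could it easily have happened by chance?
_, p_value = fisher_exact([[passed_a, failed_a], [passed_b, failed_b]])
print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Difference is unlikely to be chance; worth acting on.")
else:
    print("Not enough evidence yet; keep collecting data.")
```

The same shape works for any quantifiable KPI: swap the pass/fail counts for doodads sold, or complaints received, and pick the appropriate test.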
I'm not going to go into the detail of the learning, but we did a big one recently, which Mark mentioned, with our partners Watershed on a CPR course. The experiment was literally: we're going to put people through the course by two different methods. These people do face-to-face only; these people have the app as well as the face-to-face. Then work out who actually performed better. And they simply found that one of the methods was 30% more effective, so they could then focus their learning on that one method. Fourthly, get the data. This is one of the hard ones, and it's why xAPI, Caliper and the like are important. We need portable data between these systems. We need interfaces. We need ways of getting that data across. One thing I would say on this: get expertise. There are people out there; I know Mark mentioned Andrew Downes earlier on, and his role is simply called interoperability consultant. He makes data talk in xAPI format. Get people who can get that data for you into Caliper or into xAPI so that you can actually compare it; I'll show you in a moment roughly what one of those statements looks like. Now, with the data as well, I wouldn't suggest you only collect pass/fail data. Mark mentioned earlier on, as did Sasha, the logstore plug-in. It's quite an interesting tool to use; we use it quite a lot in some of our learning experiments. Not because we can necessarily get provable insights out of it, but because we can pick up the smaller things. There was a good example where, using the logstore plug-in, we saw that people were failing on one specific e-learning file, and it turned out that file had a bug in it that we hadn't spotted. It got through testing, it got through everything else, it was in production. This client worked out that it had a major bug which meant nobody could pass it. They found out about that a lot quicker by having that surrounding data, and they might not have found out at all otherwise, because unfortunately the learners were doctors, and people would have stopped doing the learning and gone: I can't be bothered with this anymore. Fifthly, measure your impact. Take your hypothesis and try to disprove it. Do the experiment, and then actually ask: did we sell more doodads, did we have fewer complaints, did we have more passes, did our net promoter score go up, did our drop-out rates go down? Now, somebody tweeted earlier on that correlation doesn't equal causation. I would argue it's a good place to start. The statistics I was taught at university says no, it doesn't; but often it's a good place to go and start looking at why something is happening. Quite often you may find it's just rubbish, but sometimes you may well find actual insights in it.
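As a small illustration of that "start with correlation" point, here's a sketch. The per-learner numbers are entirely made up, pairing a course score with a business KPI such as units sold; all it shows is how cheaply you can check whether a relationship is even there before designing a proper experiment.

```python
# Sketch: correlation as a starting point, not a conclusion.
# Hypothetical per-learner data joining a course score to a business KPI.
from scipy.stats import pearsonr

course_scores = [55, 60, 62, 70, 71, 78, 80, 85, 90, 95]   # made up
units_sold    = [ 3,  5,  4,  6,  7,  6,  9,  8, 11, 12]   # made up

r, p_value = pearsonr(course_scores, units_sold)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# A strong correlation here is a reason to go and ask *why*, ideally
# with a controlled A/B experiment. It is not proof that the course
# caused the sales.
```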
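And going back to step four, here's roughly what getting a single data point into xAPI looks like. This is a minimal sketch, not production code: the LRS endpoint, credentials and course URL are placeholders, but the statement structure (actor, verb, object), the /statements resource and the version header are what the xAPI specification defines, so any conformant LRS should accept it.

```python
# Minimal sketch of sending one xAPI statement to an LRS over plain HTTP.
# Endpoint, credentials and the course URL are placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder
AUTH = ("lrs_key", "lrs_secret")                # placeholder Basic-auth pair

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/passed",  # standard ADL verb
        "display": {"en-GB": "passed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://courses.example.com/cpr-refresher",  # invented
        "definition": {"name": {"en-GB": "CPR refresher"}},
    },
}

resp = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()
print("Stored statement id:", resp.json()[0])  # LRS returns a list of ids
```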
Number six, you've then got to communicate your findings, and there's been a lot of chat about this over the past few days. Earlier on people were asking whether it's ethical to hold this data and not communicate it. I would argue it probably isn't ethical not to communicate it, but you need to choose your communication method carefully. We in L&D, or the learning department, or whatever you're going to call yourselves, are terrible for giving the CEO a massive spreadsheet with all the learning data and a few graphs. That's no good. When marketing or sales go to their meetings, they bring their one piece of data. They bring the one great thing they've done, the one big thing they want to shout about, and they shout about that. We need to do the same. We need to show that the methods we've used, or the experiments we've done, or the money we've spent, have achieved this one thing, whatever it is. Communicating downwards as well: do you think everyone needs access to a dashboard? Probably not. Some of those basic read-outs are great; a simple "this is your risk status" or "this is where your score is coming in". So be careful how you design the communication of those findings. Number seven, use those findings to improve what you do. That's the whole point of gathering the knowledge: to improve ourselves. If you're measuring something and you're not using those measurements to improve what you do, why are you bothering to measure in the first place? This is the whole point of it. You want to do better. You want to do better for the people who are paying a lot of money for your learning. So actually use it. Now, how you do that is an interesting matter. It really depends what you find out. You may find out, unfortunately, that a lot of what you've been doing is not working, and suddenly you have to throw out a lot of things. Some people in the corporate world would say: you've found out we have this flaw in our training and we're liable for it, and that's a big problem. Well, it is a big problem, because your people haven't learnt. Don't panic about this kind of thing, though. There are plenty of people out there who can help you with it: people who'll give you free information, and companies you can go to. When marketing started doing analytics they didn't do it on their own. They got help in from other companies, from people who are experts, who are data scientists, and they've slowly learnt and built this up as part of their skill set. The CIPD have actually put analytics into their programme recently, so this year there is now a five-week module on analytics; we're beginning to get there. But if you don't have the knowledge or the expertise, go and get help. Now, I promised I'd give you some practical tools for starting to build up the kind of learning ecosystem that can track these analytics. There are some great things up there. Some of them are slightly business focused, as I said, but all of these are tools we at LTG, or Leo, would classify as decent xAPI activity providers. Things like Learning Locker: that's a free LRS. You can run it yourself, and we've scaled it pretty well already. Things like Xapiapps, a company out of Australia: you build little performance-support apps on your phone and it simply sends a statement back to the LRS saying, I saw this person doing this good thing here. A lot of the authoring tools, and things like Kaltura if you've got that as a video platform; you can get great xAPI analytics out of Kaltura already. And, as Sasha mentioned, there are things like Watershed, and then there are what we'd call human capital management tools, which are probably slightly wrong for you folks but could easily be used in that way.
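Once statements are flowing into an LRS, pulling them back out to produce the single headline number I was arguing for is equally mechanical. This sketch queries the standard xAPI statements resource for one verb and one activity; as before, the endpoint, credentials and course URL are placeholders, and a real report would follow the LRS's "more" link to page through large result sets.

```python
# Sketch: pulling statements back out of an LRS to produce one headline
# number for stakeholders, rather than a raw data dump.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder
AUTH = ("lrs_key", "lrs_secret")                # placeholder Basic-auth pair

resp = requests.get(
    f"{LRS_ENDPOINT}/statements",
    params={
        # Standard xAPI filters: one verb, one activity, one time window.
        "verb": "http://adlnet.gov/expapi/verbs/passed",
        "activity": "https://courses.example.com/cpr-refresher",
        "since": "2024-01-01T00:00:00Z",
    },
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()
result = resp.json()

# The response is a StatementResult: a "statements" list, plus a "more"
# URL when the LRS has paged the results (not followed in this sketch).
passes = result["statements"]
print(f"{len(passes)} learners passed the CPR refresher this year.")
```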
Just to sum up, here are my three biggies from the beginning. The reason you need to measure the real-world impact of your learning is, first, that you get better outcomes for your learners. Secondly, your organisation gets a better outcome, because your learners say: this is great learning, we want to do this, we want to carry on. It's good PR for you. And lastly, you get better justification of your budgets. If you can empirically prove that what you're doing works, then you can get more money. Trust me, we've done it a lot recently. I can't say more than that. Alright.