So all of it is in doubt. Well, actually, the intervention, I mean the piece of software, the artefact, the device, the thing that you've created, that's got all these usability, accessibility, emotional, engagement sort of principles captured within it, and it supposedly represents what the users really want because you've done your requirements capture at the start. All of that is in doubt if this doesn't work, especially if the evaluation is designed badly. So that means that generally everything I've taught you up to now, if this isn't done right, isn't worth crap. You can forget all of it; there's no point in doing it. The principles are there to help focus our minds, but the key thing is this: your evaluation. So really, in the most extreme circumstance, you could decide: I'm not going to use any principles, I've forgotten all of those; I'm not going to do any user testing at the start; all I'm going to do is evaluation. And some companies do that. They just do the evaluation, because all they're bothered about is: do people like it? Will people come and look at our work? Will people use the interface properly? Will they have a good experience? That's all they're bothered about. So everything else could, theoretically, go away. The point is that by doing the initial work with users, by encoding the principles, what we do by the very nature of it is reduce the possibility of getting our evaluations wrong. Where it costs money is the number of times you have to do the evaluation. That's the problem. If you have to do the evaluation a lot, it's going to cost more and more money, so what you want is to do the evaluation once and have it turn out that the system is great. But it might not be the case.
So that's what the principles are for. That's what the formative user testing is for: to make sure that we don't have to go through this loop very often. Okay. So, science. Being computer scientists, who knows anything about science? It's a method, the scientific method, that's a good one. Okay, any more? Do we know anything more about it? It's awesome. It is awesome, that's very true. It needs to give some explanation of what we see around us. What we're trying to do is, in some ways, create a model. I think of it, and a lot of people see it differently, as a model of everything, in the end. So, does it matter whether that model is exactly accurate? It doesn't, because think about the way that we model light. It has properties of particles and properties of waves. Does it matter that those two things shouldn't coexist, and yet they do in light? Well, the answer is probably not. It doesn't matter that much, as long as the model allows us to understand the phenomenon better, to predict the phenomenon better. So that's what we're looking at. Now, there are ways that we can do this. This is the main one that you are going to be using for most of this work: inductive reasoning. You evaluate, and then you apply the result to a general population. This is statistical analysis. We evaluate something in a sample and we claim that what is true of the sample holds over the whole population, and the way that we know we're right is statistical significance. Theoretically. However, there are other kinds of reasoning, such as deductive reasoning. Who's done things like description logics, the DL stuff? Okay, for those of you who have, you've seen this deductive reasoning.
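That sample-to-population leap of inductive reasoning can be sketched in a few lines. Everything below is synthetic and purely illustrative: the "population" is generated data we would never have access to in practice, and the sample stands in for the participants we can actually test.

```python
import random

random.seed(0)

# Illustrative sketch: a population we could never test in full,
# and a small sample that we actually can.
population = [random.gauss(50, 10) for _ in range(100_000)]
sample = random.sample(population, 30)

population_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

# Inductive reasoning: we generalise the sample statistic to the
# population. A representative sample lands close to the true value.
print(abs(population_mean - sample_mean) < 10)  # True
```

In real evaluations we only ever see the sample side of this; the statistics are what licence the jump to the population we cannot measure.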
So with deduction we say that certain premises necessitate a conclusion. For example: herbivores eat only plant matter; cows are herbivores; therefore cows eat only plant matter. Yeah, because that's the definition. And since vegetables are plant matter, vegetables are a suitable food source for cows. But the conclusion only holds provided that the premises are true. So if one of those premises is incorrect, everything that follows is possibly incorrect. Yeah? There's an excellent example of this, the mad cow ontology. In that reasoning there's an ontology, and it says, obviously, that cows are herbivores, and herbivores eat no meat. And then, obviously, there's cow feed, and it's got meat in it. So the reasoner comes up saying: we shouldn't be doing this; there's a big error against the axioms, because cows are herbivores, so why feed them meat? Obviously they're not being treated as herbivores if they're eating processed cow feed that's got meat in it, which was the main cause of BSE. So that's the kind of reasoning we get with this. Now, the point is that you'll normally find that inductive reasoning is more about empirical science, the experimental work that you're going to do, and deductive reasoning is more about mathematical reasoning in a theoretical way. So: to be scientific, a method of enquiry must be based on generating observable, empirical and measurable evidence, subject to specific principles of reasoning, which for us is this inductive reasoning. You all know that this is the scientific method, yeah? Good. Because a lot of computer scientists I speak to don't know that, mainly because in computer science we're looking at engineering.
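That chain of premises can be run mechanically. The encoding below is a toy sketch, the dictionaries and the `suitable_food` helper are invented for illustration, but it shows the deductive shape: the conclusion is forced by the premises, and is only as good as they are.

```python
# Premise 1: herbivores eat only plant matter.
diet = {"herbivore": {"plant matter"}}
# Premise 2: cows are herbivores.
kind = {"cow": "herbivore"}
# Premise 3: vegetables are plant matter.
category = {"vegetable": "plant matter"}

def suitable_food(animal, food):
    """Deduce whether the food fits the animal's diet, given the premises."""
    return category[food] in diet[kind[animal]]

# The conclusion necessarily follows -- unless a premise is false,
# as in the mad cow ontology, where the feed premise turned out wrong.
print(suitable_food("cow", "vegetable"))  # True
```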
Really, a lot of it is computer engineering; we're building stuff, not investigating something that was already there. In the old days only natural science was called science, and things like engineering were called art, because art was everything created by man and science was the investigation of everything created by God. Yeah? That's the way it was originally conceived. So, this is the scientific bedrock that we're going to work on, and that all your empirical investigations, and when I say empirical investigations what I actually mean is your evaluations, are going to be centred around. Yeah? The evaluations that you're going to be doing, or that you should be doing, are all based upon this idea. Now, even when those evaluation methodologies have come from other domains like social science or anthropology, the idea, especially with social science, is that there is some scientific rigour to it. That's why it's now not called sociology so much; it's called social science, because they want that scientific rigour. Now, a lot of the time the research method you'll be using for these kinds of activities, the activities you're going to be doing as UXers, is probably not that quantitative. It's quite qualitative: small sets of data, small participant groups. It's not going to be massive. It's also going to involve some bias, because you're asking people questions, you're observing people and they know that you're observing them. So that's also a problem you need to overcome. Most sciences like physics and chemistry are about observing, whereas a lot of social science experiments are about asking people things, or getting them to do tasks, as in psychology, and then measuring those tasks. But you're still there in the middle of the system.
So there are various checks and balances put in place to make sure that we keep the scientific rigour high, even though it's quite difficult. OK. So, generally, the scientific method, and this is a simplistic view of it, looks like this: we start off with hypotheses. Then we design the procedures and experiments that we're going to run, in your case evaluations. So what might a hypothesis be for a UXer? If you've created a system, what might a hypothesis be? Yes: this system is easier to use than the current one. Keep those words in mind. "This system is easier to use than the current one", that's good. OK, any more? Any more? I know it's 5 o'clock. "This system behaves the way users expect"? OK, that's another one. Any more? OK. So then we run the experiment, and from it we get the data, the results. We do the analysis and we draw the conclusions. Now, in some cases, with a method called grounded theory, we don't start off with the hypotheses directly. We have an idea, go straight into procedures and experiments, collect data, and then we let the findings dictate the hypotheses, which a lot of scientists don't like. But it does happen; that's what's called grounded theory. You start with an idea. So that's a hypothesis, isn't it? You're preconceiving something. No, because, and we'll see more about hypotheses in a second, the idea might be: I want to investigate whether going online makes people ill. That's not a hypothesis, it's just a general direction. You don't say that it does make them ill, or that it doesn't, or how it makes them ill. You're not quantifying it in any particular way. That's the general scientific method. For us, good science, or at least hardcore science, goes around this loop. We start off with hypotheses.
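That loop, hypothesis, procedure, data, analysis, conclusion, can be sketched end to end. Everything below is invented for illustration: the quantified hypothesis, the two designs, and the timing data. The analysis step here is a simple permutation test on the difference of mean task-completion times, which is one way, not the only way, of turning "easier to use" into something testable.

```python
import random

random.seed(1)

# Hypothesis (made quantitative): users complete the task faster on design B.
# Procedure: measure task-completion time in seconds for two groups.
design_a = [12.1, 11.4, 13.0, 12.7, 11.9, 12.3]   # data: illustrative only
design_b = [14.2, 13.8, 14.9, 13.5, 14.1, 14.6]

def permutation_p_value(a, b, n_iter=10_000):
    """Estimate how often a mean difference at least this large arises
    purely by chance when the group labels are shuffled at random."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

# Analysis and conclusion: compare the p-value to a threshold
# chosen before running the test.
alpha = 0.05
p = permutation_p_value(design_a, design_b)
print(p < alpha)  # True: the difference is unlikely to be pure chance
```

Note that the threshold is fixed before the test is run; that ordering is exactly the alpha-versus-p distinction that comes up later.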
An inductive investigation first needs a clear hypothesis, one which, in the best case, can be operationalised and interpreted, and which is refutable. So who talked about this refutability? Popper: falsifiability. Now let's think about the two hypotheses that were just proposed. The first one was what? "This system is easier to use." How is that testable? How do you test "easier"? The next one: "this is a system that behaves as you'd expect." How do you test expectation? These are called weak hypotheses, because it's very easy to make them mean whatever you like. Now, if you said, and you'd have to rephrase it, that users will rate this system in a questionnaire, after using both systems, higher by 3%, 4%, 5%, 10%, then you'd have a harder, stronger hypothesis. Hypotheses can be very small, but they must be testable so that they can be refuted. If we can't refute them then we've got no solid foundation for the scientific method, and what's more, if we can't refute them then we don't learn anything, because all we do is confirm what we already wanted to know. So that's what we're thinking about here: black swans. A long time ago people travelled the world and encountered swans, and the swans were white. They were always white. So you say: swans are white. That seemed absolutely certain, because it was observable and we had many, many observations. But a hypothesis must also be open to refutation; only if it survives attempts to refute it can we treat it as supported. So what people did was go to lots of different locations to see what colour the swans were. They went to the Americas, they went to Europe, they went to lots of different continents, and they found that in all cases swans were white.
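The asymmetry being described here, that any number of observations can support a universal claim but a single one refutes it, is easy to sketch (the sighting data is illustrative):

```python
# "All swans are white" as a refutable universal claim.

def refuted(sightings):
    # A single contrary observation is enough to break the claim.
    return any(colour != "white" for colour in sightings)

observations = ["white"] * 10_000   # continent after continent of white swans

# Many confirming observations: the claim is supported, but never proved.
print(refuted(observations))              # False
# One contrary sighting refutes it outright.
print(refuted(observations + ["black"]))  # True
```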
However, swans are black in Australia, or at least a certain type of swan is. This is exactly what we're talking about with the scientific method: it's very important to have something that you can refute. We said swans are white; that's refutable, because we know what a swan is and we know what the colour white is. As soon as a non-white swan occurs, we know the claim has been refuted. What we try to do is make many, many observations, and with those observations we're not trying to prove that swans are white; we're trying to pile up observations that could disprove it. In all science you're trying to break the hypothesis. That's the idea of it.

Okay, there have been many debates about whether we can induct our way to truth. It's very difficult to say, because in this case the amount of knowledge we had about white swans was such that generally we thought: yes, this is true, swans are white. But it wasn't, and not because we lacked observations. What we know with science is that we only need one observation to refute inductive reasoning. We can have as many supporting observations as we like; we only need one to refute. So if we've got a hypothesis that we want to destroy and we can't destroy it, we can't say it's proved, only that it's supported. If we've got a hypothesis which we then disprove, just once, we can say it's been disproved. But we can't test every case; that's very difficult to do in empirical work. So we can make some inductive leaps that are based on good science. That's why we use statistical analysis: so that you can make some good leaps in your evaluations for UX. If you don't do good science, the leaps won't mean anything, and even then they may not be absolutely accurate; as we've said, we're just trying to have a model. In the UX domain, for all of this, we use statistical analysis.

Now, what we're trying to do, and it's very difficult in UX, is to generalise to a population. We have a small sample, because we can't look at the entire population, and we try to generalise from the sample to the wider population. That means the sample we take has to be representative of the population. What we're trying to do here is make the result mathematically generalisable to a population, which is called external validity. External validity is the thing that says the statistical analysis holds beyond the experiment; if we have a bad sample, if we don't have enough people, the work is not externally valid. Then there's internal validity: making sure the experiment itself is coherent, run consistently for all of the participants, so that the differences we see really come from what we manipulated. Have we been certain that our sample is representative of the population? That's a good question; we'll get to that in a minute. So, as we've said, and you'll see there are questions on this, there's no 100% certainty, because statistical analysis doesn't give us that. What we have is confidence. I'm going to get into statistics in more detail in a few weeks. The point for now is that we have a level of confidence, but it might be incorrect, and it certainly might be very incorrect if we haven't chosen a representative sample or if we've got the design wrong. So when somebody says "this is statistically significant", and I'm going to keep telling you this until you believe me, it's not that there is a significance in the everyday sense. When we're looking at statistical significance, what people
often think is this: they visualise some kind of graph with two datasets, one here and one here, and they say, oh yes, that one's significantly different from this one, look at this big difference. Is that what you understand statistical significance to be? The reality is that that's not the case. Significance only says, let me think how best to phrase this, that we are confident that the difference between these samples is not due to random chance. So you could have two sets of samples, and we can say that this set is different from that set, and that the difference is not down to random chance. People will talk about significance at 0.05 or 0.01. Now, once it's enacted, once you've actually run the statistical test, this figure is called the p-value; before the test it's the alpha value. So 0.05 and 0.01 are alpha values, and after we've run the statistical test we get p-values. The p-value might come out at 0.056 when we're aiming for an alpha of 0.05; it might come out at 0.021 when we're looking for an alpha of 0.01. An alpha of 0.01 says we want to be 99% sure, and 0.05 says 95% sure, that these differences haven't occurred due to random chance. It does not mean that one result is "more significant" than another. The thing that does talk about the size of a difference is the effect size, and a significant effect might matter in some contexts and not in others; significance itself is only about random chance. I'll get to that in the stats sessions; if you like stats, it's good.

Now, in statistics there are descriptive and inferential statistics. Inferential statistics are the ones that give us these p-values: the statistical test produces a p-value, and we decide what alpha value we want it to beat. If we say we're looking for an alpha value, chosen before the test, of 0.05, then we're looking for p-values of less than 0.05. We might instead say we're looking for an alpha of 0.06 or 0.07, because we have some reason to. What you'll find is that people blindly follow the two standard figures without understanding the statistical analysis: 0.05 is what's commonly held to be a good figure in social science, 0.01 similarly, and some people will really go crazy if you have anything greater than 0.01. We'll get to that in the statistics lectures. But all of this only works if you've done the design right. If you don't do the design right, and you don't use the correct statistical test for the design, you could have a p-value of 0.001 and it won't make the result any more right. We'll also get into things called Type I and Type II errors: a Type I error is rejecting a null hypothesis that is actually true, a false positive, and a Type II error is the reverse, a false negative, and the alpha value you choose trades one off against the other. So if somebody says it's got to be 0.05, it's got to be 0.01, blah blah blah, you need to start questioning: why has it got to be that? How did you come up with that alpha value? Why are you aiming for it? And it's all based on correct design. Okay, so let's have a look at the kinds of variables that we can expect to see in a test, in an experiment. First, behavioural: the behavioural variable is all
about the user. In user experience, the stimulus is the interface or the computer system; the stimulus might be some task that you've decided on. The observable response is the thing that we're going to observe and measure; in some cases that might be task-completion time, or a spoken response. And then we've got the subject. Now, I'm using this term because it's the term most people use, but it should be "participant"; "subject" is a really terrible word to be using, and we'll see why later on when we get to ethics. The subject factors are things such as age, weight, gender: descriptive facts about the person, collected to see whether there's any reason why a difference might exist. Do they have good eyesight or bad eyesight, for instance? Is that why there's a difference? You can then test whether these things have an effect: does eyesight have an effect on somebody doing a task on a computer screen that's too far away? It's likely to. Remember back in the BBC lecture, where they were looking at the distance TV screens should be viewed from for the different font sizes? They were doing exactly this: is there a rule that works when somebody's got 20/20 vision, or do you have to vary it based on what vision they have? We'd be able to find that out from the subject factors.

How many categories of subject factor should you take? You have to decide, and that's why it can also go wrong. The main ones are things like age, weight, gender; or, say, for reading tests, whether the language of the interface is the participant's first language, because that might slow people down; or where they're from socially. What you can then do is throw all these subject factors into SPSS or one of these statistical packages, and at the analysis stage see whether anything looks like it's to do with social background, or eyesight, or age, or gender, that kind of thing. So collecting these is a really good thing. I was at an ethics committee meeting today, looking at one ethics proposal, and I thought it was pretty reasonable, but the question came up of why the experimenter was taking so many subject variables. There's always a tussle between how many subject variables you take and why you're taking them. I think the more you take the better, because in real science you never know what's going to be a factor; but other people see it negatively, because why should the person tell you all these things? You're infringing on their privacy, their rights, if you like.

Okay. So then we have these two things: independent and dependent variables. The independent variable is the thing we're manipulating, and we don't want many independent variables. The lower the number of independent variables the better, because with lots of them you're not quite sure, afterwards, which independent variable caused what. That's the problem. What's more, for every independent variable you have to have another set of participants. So you'll see studies described as 2x2x2 factorial designs, which generally means that the
number of participants you need to put through just goes up massively. So try to keep the independent variables down. Dependent variables are the things that we measure, and we can measure lots of stuff: we can measure heart rate and galvanic skin response at the same time, we can measure gaze, that kind of thing. Okay, so we're up to this point. We've got more to go, but I'm conscious I'm just talking and there's a lot of material here.

So, how do you measure these dependent variables? We have a number of scales. Just as we have loads of jargon in computer science, we have loads of jargon when it comes to this kind of experimental work, and I'm telling you this because if you work in a UX company or a UX department, they'll be talking to you in this language. They'll say: what are the dependent variables going to be, what are the independent variables, what subject data are we taking, what scale are we using to measure the dependent variables? And that last one matters because it determines the statistical test that you're going to run. The nominal scale is the weakest, because it's just names; it says nothing else. The ordinal scale is a bit better, because it adds magnitude. The interval scale denotes identity and magnitude and has equal intervals. And then the ratio scale has a true zero point, which makes it most like a real number; this is the gold standard, and the others tell us progressively less. What we normally get for a lot of this work is the interval scale: identity, magnitude, equal intervals. For instance, on a thing called a Likert scale you've got one, two, three, four, five, anchored from good to bad. That's why people choose it: it denotes identity, because you know which end good is on; it's got magnitude, because it goes one, two, three, four, five; and they're equal intervals, because we treat the steps between one, two, three, four and five as equal.

Let's look at what we mean by identity here. Take a standard scale: excellent, good, okay, bad, terrible. On its own this is nominal; it just denotes identity. We know that this one is excellent and that one is terrible, but we don't know how far bad is from terrible, or how big the distance is between okay and bad, or okay and good, or okay and excellent. That's the nominal scale. Now, the ordinal scale is better because it adds magnitude. Suppose we number the points 1, 2, 3, 6, 8. We can see this has magnitude, one is smaller than eight, but the intervals between the points are not equal: between 1 and 2 the gap is one, but between 6 and 8 it's two. Without equal intervals, a gap could represent anything; we just don't know. If instead we have one, two, three, four, five, six, then we've got equal intervals. But when would you ever want an unequal interval? Because you might
want to decide that actually more weight goes on the bad ones than on the good ones. But wouldn't you just present the participant with an even scale and then do your own translation afterwards? That's a good point, and actually most of the time that's exactly what we do; we do the translation afterwards. But in some cases, for certain statistical tests, that doesn't work. Also, you don't have to use digits to denote magnitude; you could use words. If you've got "very good", then "good", then "okay", the words denote magnitude, because "very" intensifies, but they don't tell you whether the intervals are the same. I've seen strange ones where the scale isn't polarised, with good on one side and bad on the other of a neutral point, but is just a graduated scale, and that graduated scale might even be logarithmic. Then you've got the bottom one, the ratio scale, where you've got a true zero point. With the ratio scale the numbers can also just stretch: you've got zero, and you can have numbers way beyond one, and minus one, minus two, because there's a fixed zero point. You know that this is exactly zero and that this is exactly two.

Can you mix the kinds of intervals across a scale's endpoints? What you can't mix is the statistical tests you apply to them: if you take a series of data on one scale, the statistical test you use will be different from the test you'd use for the same data on the nominal scale. I'll get you some examples of these, because I've got a whole book of questionnaire instruments. You've got the Likert scale, but then there's the thing called the NASA bipolar scale, used to measure various aspects of flight in NASA's work, where you might need much more information because you've got speed in there, or something like that. The same goes for time on task: if you've got a timed test and you're actually measuring somebody, then obviously you're measuring them in seconds, so you've got a fixed zero point and then seconds above it. So it's not all about questionnaires, even though that's what I'm showing you here, because it makes it easier. If you use timing, that's going to be on the ratio scale, because it's got a fixed zero point. Why would you want the zero if you never see it? You might not want to see it, but what you do want to know is how distant you are from that fixed point. Zero is a proper starting point, and it's the distance from it that you're measuring.

Okay, now, these are important. I think you've seen this: hypothesis testing. You've got two kinds of hypotheses, really: the null hypothesis and the hypothesis. The null hypothesis just says it's the same as normal. Everything's the same as normal, there's no change; this is the same
So we say this sample is the same as the population over there — there's no difference between the two — while the alternative hypothesis says there is a difference. All we start off with is the null hypothesis: we assume no difference, and then we test whether that holds. Okay, strong and weak hypotheses I think we've spoken about already, and, as I think we've also said, nothing is ever proved: hypotheses are supported or disproved, but never proved, in empirical work. You can never prove a hypothesis, because you never know that you've tested every single possible case. So be suspicious when anybody says "oh, I proved this hypothesis" — unless it's some kind of theoretical proof, they haven't; there's no way to prove it empirically. You often see this when people argue science is better than religion because science doesn't rely on belief. It doesn't, but it also can't prove a hypothesis; it can only support one, by throwing test cases at it and failing to break it — and you never know that you've tried every test case.

Do you want to stop here, or just plough on through? Plough, plough — you're just trying to get to the pub, aren't you. I can't this week: I'm running on Saturday, so no beer for me. Okay, let's move on. The design of the experiment has to do with the number of participants, how you've recruited those participants, how you run the test itself, how you design the test, and the results — that's all experimental design, and it also feeds into the analysis: how can you analyse the data afterwards? In statistical analysis we've got parametric and non-parametric tests, chosen based on how likely the data is to follow a normal distribution. All of those things need to go into your understanding of the evaluation design.
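One way to see the null hypothesis in action, without assuming any particular distribution, is a permutation test: under the null ("no difference"), the group labels are interchangeable. The data here is invented for illustration:

```python
import random
from statistics import mean

# Hypothetical time-on-task data (seconds) for two interface variants.
old_ui = [34, 38, 41, 35, 39, 42, 37, 40]
new_ui = [30, 33, 35, 31, 36, 32, 34, 29]

observed = mean(old_ui) - mean(new_ui)   # 5.75 s slower on the old UI

# Under the null hypothesis the labels don't matter, so shuffle the pooled
# data and count how often a difference at least this large appears by chance.
random.seed(0)
pooled = old_ui + new_ui
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(old_ui)]) - mean(pooled[len(old_ui):])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}s, p = {p_value:.4f}")
```

A small p-value lets us reject the null; it never proves the alternative — exactly the point above about support versus proof.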
You need to make sure you've taken these things into account. Now, in some ways this could have been taught at the end of all the other material, because it uses concepts we've not lived through yet, but it will all come together in the end — this is the logical sequence in which you'd actually do things. If you know how to do these tests, of course you do the evaluation design first; and in fact the ethics application you'll probably have to put in, in most organisations, will require you to do the evaluation design before you do much work at all.

Okay, data collection tools — and I hope I'm not just babbling on because I've not got a slide. How are you going to collect this data? There are lots of ways. There's plenty of screen-capture software you can use to collect time-on-task and see what people are doing, and there's plenty of software for qualitative work, such as NVivo for coding, or various tools that let you upload audio or video files and annotate them with what people are doing. We're currently involved with a data set — well, I'm not working on it myself, I'm not able to see it — of people with lung cancer: the team interviewed 31 patients and 17 clinicians and came back with 1,700 pages of transcripts, so there's a lot of data to analyse. What's more, for every hour of video, it will normally take you between 8 and 10 hours to transcribe and code it. That's the standard figure, because you have to keep going back over the recording, listening to what people are saying and thinking about what it means — although it does mean you end up knowing the data inside and out.

So, data analysis. For qualitative work we use analysis tools that are very prevalent in anthropology and sociology — some more diary-based, some more word-based — where you go through and highlight categories. Have you done the exercise where you highlight nouns to extract functional requirements in software engineering? It's like that, but instead of nouns you're looking for concepts, and then you look at how many times those concepts recur — not across other systems, across other participants — so you can see the magnitude, how important each concept is. The problem, of course, is that there's nothing pre-marked to highlight, so you have to go back through the process several times to make sure you give all the emerging terms their chance.

For quantitative work, analysis mostly happens through statistics, as we've said, and we have two kinds: descriptive statistics and inferential statistics. Descriptive statistics let us describe the data set itself — just the data you've collected. Now, that data could be an entire population. If I gave you a questionnaire about this course, I could theoretically say that all of you are the population, so there's nothing to generalise and no inference is required, because my numbers describe the whole population — and then they're not called statistics anymore, they're called parameters. If my sample equals my population, it's not really a sample, and they're not statistics, just parameters. If my set is a subset of the population, then they're statistics, and what you want to do is say: I've done my work here on, say, the 20 people who looked at my product, and these 20 people are a representative sample of everybody else in the world, so everybody else will love the product and find it just as easy to use. You're trying to say that the 20 people you've looked at are an indicator that everybody else who uses the product will have the same user experience.
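The parameters-versus-statistics distinction above shows up directly in how you compute spread. A quick sketch with made-up class scores, using only the standard library:

```python
from statistics import mean, pstdev, stdev

# Invented scores from everyone in a class.
scores = [62, 71, 58, 65, 80, 74, 69, 77, 61, 73]

# If these people ARE the whole population, we describe them with parameters:
print("mean:", mean(scores))
print("population sd:", round(pstdev(scores), 2))  # divides by N

# If they're a sample drawn from a larger population, we use the sample
# standard deviation (divides by N-1) as the basis for inference outwards:
print("sample sd:", round(stdev(scores), 2))       # divides by N-1
```

The sample version always comes out a little larger — the N-1 correction compensates for the fact that a sample tends to underestimate the population's spread.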
They'll have the same user experience as these guys — or they won't, and you'll need to change things — but you can't test everybody. So what we look for is internal validity, which goes with descriptive statistics, and external validity, which goes with inferential statistics. Internal validity asks whether the data — the statistical structure itself — is valid internally; external validity asks whether it can be generalised outwards, which is where inferential statistics come in.

Then we have this other thing called confounding variables. These are very important: you'll hear lots of psychologists and cognitive scientists talk about them, and they'll ask you, "what confounding variables did you get?" The aim is to cut down the number of extraneous variables that could have an impact on your statistical analysis — to reduce the likelihood of them occurring at all. That's partly why we do evaluation design. One way to cut down confounding variables is to minimise the number of independent variables we're testing for; another is to make sure the experimenter doesn't bias the participant or the trial, which is why we might run double-blind or triple-blind trials. I've mentioned these before, so who's going to tell me what a double-blind trial is? Yes — a double-blind trial is where neither the participant nor the tester knows what the desired outcome is, and a triple-blind trial is where the participant, the tester and the analyst all don't know. Very good. That means we can make sure there's no experimenter bias, which removes some of the confounding variables. You might have others, though, which you try to handle by collecting participant data — I keep saying "subject", but it should be "participant". By taking the participants' background information, we can work out whether a confound is lurking in there. Maybe a confounding variable is language; maybe it's height. We don't know, but we'll soon see if something else is going on in the wider set — if, say, people over six foot tall don't exhibit the same pattern as the people in the main set, we can ask why, and whether that's a confounding variable.

Okay, so let's look at the kinds of sampling you might do. You probably won't do most of these in your UX career unless you go into more hardcore, science-based work, or you're at a company with the resources and the desire to establish something scientifically. First: simple random sampling. Don't read the slide — what do we think simple random sampling means? "Pick anyone as they walk around"? That's close, but not quite right. "Like a surveyor in the street with a clipboard, stopping random passers-by in an area"? That's essentially the same answer — and having read the slide, I can see why you'd think it belongs here. The real idea is that the thing doing the choosing has no bias and can't see who the people are: a randomly generated number, something that picks in a way that has nothing to do with the participants themselves. The street examples you gave are really what's called convenience sampling. But why isn't the clipboard-in-the-street approach as rigorous as true random selection, even though it feels random? Yes — some people look unapproachable, so you're biased towards certain people. Exactly: you avoid certain people; if someone's walking down the street looking angry, you don't approach them. And, yes, people who don't want to be in the survey veer away as they see the clipboard, so they're never in it — that's true too, and quota-based sampling, which we'll get to, was partly trying to get around these convenience-sampling problems. It's been shown, through proper observational studies, that if you're a woman out on the street with a clipboard you'll approach men over 50 more often than anyone else, and if you're a man with a clipboard you'll approach women under 30 more than anybody else. And if you're going around houses, you always pick the ones on the lower floors of apartment blocks, never the top floor; you don't pick houses with big fences and barking dogs. These things seem obvious, but they mean your sample isn't random anymore. So think of a true simple random sample as a load of balls in a bingo machine or a tombola: a ball pops up and you grab it. In reality it's a random number generator — that's how we really do it — but the tombola is easier to picture.
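The random-number-generator version of the tombola is one line in most languages. A sketch — the participant IDs here are invented stand-ins for whatever numeric identifier denotes a person:

```python
import random

# A "population" of 100 participant IDs (stand-ins for e.g. NI numbers).
population = list(range(1, 101))

random.seed(42)                          # fixed seed so the run is repeatable
sample = random.sample(population, 10)   # draws WITHOUT replacement
print(sample)
```

Because the generator can't see who the people are, none of the clipboard biases above can creep in — the rigour comes from the selection mechanism, not from how random the process feels.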
Now, there's a technique called bootstrapping. There's some debate about it — most people say it's only good for random samples; others aren't so bothered — and it's for when you've not got enough people in your sample: you've only got, say, 20 people. When you're doing the statistical tests, think of that bingo tombola again: you pick a ball out and take all its data, but then, instead of chucking it away, you throw it back in, so it has another random chance of being picked. That means that with those 20 people you might extrapolate to 10,000 — you're just picking 10,000 times from the same pool — and if the sample was chosen correctly, it kind of works: you get a better statistical analysis because you can treat it like 10,000 draws. That's bootstrap sampling. Some people say it's only good for random samples; others say you can use it on other kinds, like convenience samples. One thing is certain: you won't ever see it in medicine — no way, not in clinical trials. We were doing some medical work today where there was a sample of 800 participants and 200 base-case participants — 1,000 people blended together — and that was a pilot; they're looking to make it bigger. In UX you'd run a pilot with 5 people; 1,000 people on a pilot is crazy by comparison.

So, in terms of applying random sampling to human beings: how would we make that work? In 1930s America they did it by picking telephone numbers at random — but unfortunately only a subset of the population had telephones, so that's a problem. Instead you might choose social security numbers, national insurance numbers, any kind of numeric that denotes a person. The other thing to think about, with all of this, is that you've got a set of non-respondents, and you have to keep track of the number of non-respondents, because the non-respondents might tell you something. If you're doing some work and only 20 people respond when you wanted 100 to respond, why are only 20 responding?
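Going back to bootstrapping for a moment: the pick-it-out-and-throw-it-back idea is just resampling with replacement. A sketch with 20 invented task times:

```python
import random
from statistics import mean

# The small sample we actually have: 20 observed task times (seconds).
observed = [31, 35, 28, 40, 33, 36, 29, 38, 34, 32,
            37, 30, 41, 27, 39, 33, 35, 31, 36, 34]

random.seed(1)
boot_means = []
for _ in range(10_000):
    # Draw WITH replacement: every ball goes back in the tombola.
    resample = random.choices(observed, k=len(observed))
    boot_means.append(mean(resample))

boot_means.sort()
lo, hi = boot_means[250], boot_means[9750]   # rough 95% interval
print(f"sample mean = {mean(observed):.2f}, 95% CI roughly [{lo:.2f}, {hi:.2f}]")
```

Note that no new data is invented — the 10,000 resamples only reuse the 20 values you already have — which is why the quality of the original sample still decides whether the result means anything.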
If it's about medical services, it's likely that the 20 who respond are the people who either really dislike it or really like it. That's why you need to keep your non-responses: if your non-response rate is only 10% or 20%, it looks like most people are responding regardless of their experience of the treatment. So keep that in mind — that's why we collect non-response figures.

A question now — it might be rhetorical — why do we need randomness? Because when we're designing something we have a target audience in mind, and we don't want what we have in mind to bias the results we get. Exactly: we're trying to reduce biases; confounding variables are what we're trying to get rid of. But we have to be realistic about it: often we can't get a random sample of enough people — people just put the phone down on you. Simple random sampling is the best in this set of methods, and then we go down through systematic sampling towards the weaker ones; it's just levels of what we can actually get. Sometimes a good random sample is simply too difficult to obtain.

So then we've got systematic sampling. Nobody read that slide? You read it. What do we think systematic means? Yes — "you put in place a method that you're going to follow to get the sample." Okay, that's one way of putting it. Anyone else? Yes — "on the street, you decide you're going to ask every 10th person that walks past, so as long as you stick to it, it mitigates your own bias." That's what we're talking about. Generally you might use it where, say, we're doing some work on website complexity and we want to draw a sample. Can we look at every single website, and would that even be useful? If we just pick random IP addresses that map to web addresses, it might not be very useful, because we'd get lots of data from the long tail, and maybe we don't want that long-tail data. So instead we look at something like the Alexa top 500. We don't want too much data, we don't want too little, and we don't want it all clustered in one place — if our sample is small, say 10 sites, a random pick could theoretically still cluster everything right at the top. So we do the systematic version: we pick every 10th, starting at 5. You're drawing from a focused population, you only want a few samples, and you know those samples might either scatter or bunch up — so the way to get a thin, even slice through the entire data set is to pick systematically.

Then there's stratified sampling. A stratified sample would be something like University of Manchester students: we want to generalise to the population of the University of Manchester, but we look at a stratum — say, people from computer science. And then there's the multi-stage version, where we say computer science alone isn't good enough, so we take the computer science people, but then we also do English.
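The every-10th-starting-at-5 slice described a moment ago is simple to express. The site names below are placeholders, not real rankings:

```python
# A ranked list standing in for something like a top-500 websites list.
sites = [f"site{rank}" for rank in range(1, 501)]

start, step = 5, 10
sample = sites[start - 1::step]   # ranks 5, 15, 25, ..., 495
print(sample[:3], "...", len(sample), "items")
```

The fixed step is what guarantees the thin, even slice: the picks can't all bunch up at the top of the ranking the way a small random draw theoretically could.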
And then we do physics, which means we've got these strata — the participants aren't picked from the general population, they're picked from groups we've already marked out. "Is there any irony that multi-stage isn't what we do first?" Well, within multi-stage sampling you normally just draw each stage by one of the other methods — random or systematic sampling. And yes — stratified sampling is there to get a more representative sample of the population: if there are, say, 1,000 people doing English and only 100 doing sport science, you take more people from English and fewer from sport science. That's right — you're trying to represent the mix. It's the same with population demographics. Suppose you aren't sure your application has the correct demographic spread for age — and we'll see there are non-probabilistic versions of this too — then you use a stratified sample to make sure you've got the correct representation of people who are over 80: you find out how many people in the population are over 80, and you make sure you take a stratum through them.

"Does multi-stage sampling let you form some kind of hierarchy over the samples you've collected?" Not normally, no, because what you're really doing is saying: I took my sample from one location, but that doesn't generalise, because the result could be a factor of the University of Manchester itself. So we take Birmingham as well, and then we can say the result might generalise to students in England; then we might take another sample from Edinburgh, covering England and Scotland, and we might need a few more — but that's the way it works. Otherwise, all you can really say about University of Manchester students is something about students at the University of Manchester; that's as far as it generalises.
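The more-from-English, fewer-from-sport-science idea is proportional allocation. The strata sizes below are the made-up numbers from the discussion:

```python
# Hypothetical strata sizes within the student population.
strata = {"english": 1000, "sport_science": 100, "computer_science": 400}
total = sum(strata.values())
sample_size = 150

# Proportional allocation: each stratum contributes in proportion to its
# share of the population, so the sample mirrors the real mix.
allocation = {name: round(sample_size * n / total) for name, n in strata.items()}
print(allocation)   # {'english': 100, 'sport_science': 10, 'computer_science': 40}
```

Within each stratum you'd then draw the actual people by simple random or systematic sampling — which is exactly the multi-stage pattern.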
That would be the honest extent of it. Then we move on to quota-based sampling, which is exactly what we touched on earlier: you're trying to replicate your understanding of what the population looks like. For instance, the person out on the street with the clipboard has been told to get so many men and so many women in particular age ranges, because that represents the age and gender demographics of the population — they're given quotas to fill so that the sample mirrors the population. Companies do it this way because it's far more convenient and cost-effective than true random sampling.

Snowball sampling is, really, just like a snowball. I get you to do my test, and then you say "my friend will do your test too", and they suggest somebody else, and so on — the snowball rolls along picking people up. It's a bit like a pyramid scheme for recruitment. There's no rigour to it directly; it's just convenient as a sampling process.

"In some sampling processes, what do we do about the people we're missing? Because whatever method we use, there's the sample we got, but there's potentially a very different group who would behave differently — people who really don't want to be in the survey might actually have something important to say." That's very true, and that's why random sampling sits at the top of all of this: with quota-based sampling, somebody sees your clipboard and says "no, go away", and if you miss them, they might have had valuable, useful information. That's just the way it works; there's not much we can do about it beyond hoping it comes out in the statistical analysis. What we try to do is build the sample so big that those effects wash out — the bigger the sample, the more likely those effects are to disappear.

Convenience sampling is exactly what it sounds like: I hand you my questionnaire because you're right here — you're convenient. If I'm on the street with a clipboard and no quota form telling me who to grab, I just grab the most convenient people who walk by, avoiding the ones who look like they'd bite my head off. That's it.

Then there's judgement sampling, where you just decide "you look like the kind of person I want to sample — I think you'd make a good sample." Sometimes that bias might be alright: if I'm looking for people who are into heavy metal, I'm not going to stand on the street filling a general quota with two-year-old Johnny; I'm going to pick the person in the big death-metal — Iron Maiden, whatever it is, I don't know — t-shirt, because they look like the kind of person I'm after. Or if I'm looking for festival-goers, I'll look for the folks with all the dangly festival wristbands — if I want to know about festivals, they're more likely to help than my dad. That's my judgement. "How does that apply to someone who's on a clipboard as a job, rather than someone with expertise looking for a particular sample?"
Well, most people on a clipboard don't have the expertise, so generally they're told the quota they need to fill — the demographic breakdown of the people they need to get — and they just fill it: this person fits that slot. But if they get 20 men straight away and then the next 20 spaced over the following 8 hours, that might be the wrong demographic — they're getting all the city gents going to work in the morning and a different kind of person later. So there are problems there too, because nobody tells them when to sample, or in what order. They can't control everything; theoretically it comes out in the wash, and if you have more people out with clipboards running the collection, you also get different choices being made.

Okay — the reason there are question marks up here is that there wasn't a label in my original LaTeX file for these notes, and I've since put the label in but haven't regenerated the handout. Anyway, here we are: these are the different kinds of test designs we can use. Single group, post-test — what do we think that means? It's easy; it's all in the words. We ask the single group after the intervention has been created: we have the device, we haven't asked anything beforehand, and we ask them after they've had their first experience of that device or interface. Then we have single group, pre-test and post-test — what's that? We ask them something about the intervention before they've experienced it, and again after. Yes — and it can happen in two ways: before and after the intervention itself, or before and after you've changed the stimulus — done something to the independent variable. You might show the same interface, then make one change — alter the colour, say — then show it again and see whether that makes a difference, because then you can say it's just that one thing that changed in that one interface. If you're showing a whole different system, there are too many confounding variables likely to be in there.

Okay, the natural control group, pre-test/post-test design gets around this when we've got a big system: we've got a control group, which might be people used to the old system, and then we've got pre- and post-tests for the new system we've put in, and we can compare the new system's pre- and post-test results against the control group. So we've got a baseline control group, and it's naturally occurring, which means it's really just whoever fetches up — like you guys would be a naturally occurring control group for Blackboard, because you're students and you're the people who use it. The randomised control group design is better, because there the control group is a randomised sample rather than naturally occurring: I look at everybody in the population and randomly pick people, like a random sample, to form the control group. These run in a hierarchy, from not so good down to very good — it gets stronger as you go.

Then you've got within-subjects and between-subjects. A within-subjects trial means the pre-test and the post-test involve the same people; between-subjects means they could be different people. And there are plenty of other experimental designs beyond these, built to get over confounding variables and different make-ups of populations and samples — I'm not going to get into them.
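Within-subjects designs change the analysis too: because each participant is their own control, you look at paired differences rather than comparing two independent groups. The ratings here are invented:

```python
from statistics import mean

# The SAME five participants rate the interface before and after one change.
pre  = [3.1, 2.8, 3.5, 3.0, 2.9]   # pre-test satisfaction ratings
post = [3.8, 3.4, 3.9, 3.6, 3.5]   # post-test, same people, same order

# Paired differences: each person is compared with themselves.
diffs = [b - a for a, b in zip(pre, post)]
print("per-participant change:", [round(d, 2) for d in diffs])
print("mean change:", round(mean(diffs), 2))
```

The pairing strips out between-person variation — one of the confounds a between-subjects version has to absorb instead.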
Okay, we're nearly there — the last part of the core process: why do we care about ethics? Yes — because people yell at us if we don't. That's one. Why else? Because you end up skewing results, and you annoy the very people you need — that's another. Any others? Yes — "ethics is subjective: what's ethical for one person may not be ethical for another, and that's sort of the basis of it." I'm not so sure about subjective, but okay. Generally, with ethics, you're doing two things: you make sure you don't harm the participants, and you make sure you don't harm the experimenters. Those are the things we're bothered about — that's the simplest way to put it. And one of the reasons we talk about this is that we want good science: if you're putting people through trials, you want the scientific method to be correct, so you don't have to do it all again when you realise the data you've collected is worthless.

Maybe this matters less for you guys doing UX than for lots of other fields, but here's why it's important. Back in the old days, people were called "subjects", because experimenters typically came from higher-class backgrounds and regarded everybody else as pretty much theirs to use as they liked — there was a direct power dynamic. The problem with subjects is that subjects tell you what you want to hear, so of course that corrupts your data too. The other point is that a lot of really terrible things were done in the name of science in the past — from the decapitation and collection of Aboriginal heads as artefacts "so we can study them better", without any ethical discussion, to people like Burke and Hare,
digging up coffins with people in them to enable scientific experiments, right down to the human experimentation programme, as it was called, in Nazi Germany, where prisoners — people in general — had no say in what was going to happen to them, and it was always bad and painful. So after the Second World War and through the 1950s there was a move to understand what had gone wrong; people were shocked by the treatment meted out, mainly by the Nazis in Germany. The reality, though, is that everybody had been doing it — the UK had done it in the colonies previously; everybody was to blame — it's just that these were the people we could most readily vilify. In America they also found that lots of people, especially in mental asylums, or people who already faced great difficulties, were being taken advantage of too. And so accords were created — and ethics remain in constant change. The American Psychological Association has its own set of ethics, and there are a number of international accords which are protections to make sure we don't screw up like that again, or at least not as badly. That's why it's all there: these checks and balances exist so we can make sure we're not taking advantage of participants, and so we're not taking advantage of the people doing the experimentation either — their needs are met as well.

The first requirement is that it's got to be good science. If it's not good science there's no point doing it, because you're going to get bad data — so why are you putting people through it? Now, there are times when harm to a participant can happen: in medical and clinical trials, when one participant group is getting a placebo and another isn't, it's judged that the benefits of the possible cure outweigh the possible negative effects to that participant group. This is mitigated by the fact that as soon as we can see statistically that there is a benefit to taking the real drug rather than the placebo, everybody in the placebo group is switched onto the drug immediately. The problem is that some of those people may very well have died by that point because they were on the placebo — but the possibility of a better outcome for the general population is judged to be much higher. I'm just trying to give you the background for why all this exists. Sometimes in UX we think we don't care, but you never know: especially in a student situation, or at work, you've got people lower down the power hierarchy of the organisation who look up to you, and they're more likely to do things you want them to do even when it's against their best interests.

Okay, so the way we get around all this: here are some of the declarations and accords — you can see them in the notes. The main one, the real one, is the Declaration of Helsinki; that's the big one. The other set that's heavily used, especially for your kind of UX work, is the American Psychological Association's. You'll also see this if you're writing technical reports or papers: they'll want to see not just the ethics but the full write-up in APA format, which specifies how you report the number of participants, the methodology, how you recruited people, and the data set.

Okay, so in brief we're looking at a few things. Competence: first of all, are you competent to do the work? If you're not competent to do the work, then you
shouldn't be doing the work, because we're going to get bad scientific data, and you might screw somebody up. That's the thing. Integrity: having an axe to grind, or a desired outcome. It sounds good, and really everybody has an axe to grind or a desired outcome. You're all going to get that in a job situation, due to the fact that people want this thing to work. They want you to say it's great. You're under a lot of pressure; you won't realise how much pressure you'll all be under if you're in a UX environment, because the software engineers have written the best code and it's brilliant, everybody else has done a brilliant job, everybody's ready to release it, and you're going to tell them that it's a pile of crap, like somebody should have told them earlier. (Give them a wave. There we are.) So that's the problem. Following the scientific method: following the scientific method means that we're going to have good science, which means we won't have to put people through this again. That's the main thing: we won't have to do that experiment again. It also means that the results will be correct, and if the results are correct everybody will be happy, because the product, when it gets released, will theoretically be a great success. Respect: you need to respect your participants. Make sure that they're capable of self-determination; make sure that they can agree, that they've got the mental faculties to be able to understand what's going to happen to them, and that they're able to agree. You're trying to make sure that their welfare is paramount within the particular framework you're actually working in. Benefits: you're trying to maximise benefits to people. What happened in the old days is that, for instance, an experiment would be enacted, but the people who would benefit wouldn't be the people who were being experimented on; it would be the rich people who could afford the procedure if it worked, or they could afford
this, or whatever it might be, if it worked. Normally, the people at the bottom of the tree were the ones actually being experimented on, and the benefits of that work went, if you like, to the people higher up the tree. What you need to do is maximise these benefits to the people you are working with. So, for instance, ethical approval for, say, I don't know, contact lenses: if the participants are going to suffer problems with their eyesight while doing these contact-lens experiments, their benefits are going to be very low, and that's then weighed up against the fact that it's not a life-threatening condition. Why should somebody else benefit when the participants themselves won't? The undertaking is that participants will benefit from the results of that research, so we're trying to make sure that they will benefit from the results of that research, in theory. They might not all want to use your interface or your device, but they have the option to use your interface or device. That's the point.

[Question] Wouldn't the benefits skew the data and allow bias into how we're using our samples?

No, because we're choosing the sample randomly, say, but we're making sure that there are ethical questions, questions we've asked ourselves, in place, so that we don't screw up the random sample that we've actually taken. So the people in our random sample will, on the whole, benefit from it. Then we have to ask ourselves: why did we choose them? If they're not going to benefit, they're not the population, are they? So why are they here? OK.

Trust. We need to maintain trust by anonymity; that's a big thing in any ethics. Confidentiality is also a big one. I've seen work where people have been going to the Gaza Strip and the West Bank, looking to see how street theatre increased people's well-being and sense of community. Of course, they have to anonymise the data, and they have to make sure it's confidential, so they're putting it on an encrypted drive. But then
also, to maintain trust, they have to say to their participants: if we get arrested, we're going to hand over this drive. If it's anybody else, we're not telling them anything; but if it's the police or the secret service, or whatever it might be, then we'll hand over the drive. And it's up to them, up to those people, to decide whether they want to participate or whether they want to withdraw. You also have a duty of care to the people doing the work, to yourselves. A lone worker agreement is something that we also have. For instance, if you're going into a situation where you're on your own, say you're interviewing somebody on their premises, behind closed doors, there are these things called lone worker agreements, so that you can make sure you're checked in on and you're not put in harm's way by yourself. As a UXer, we also put into place checks and balances, or systems, whereby if you experience violence you can get backup; and if, for instance, you listen to some harrowing stories, those harrowing stories might really affect you, and you might need additional help. So we also have to make sure those supports are in place. OK. Wow, that's good. Right.

So generally we're now moving on to the discussion topics, the coursework. Now, somebody wrote to me about yesterday's lecture and the hand-in dates. Somebody said it was the 2nd of May, but it's not. Yesterday's, discussion topic 3, is the 18th of April; and this one, today's, discussion topic 4, which (because obviously we should be having this lecture after Easter) is the 2nd of May. Get them in as quickly as you like. OK, pop quiz for the next lecture week. I'm happy to say, because there are not that many of you here, this pop quiz will mean that you will look like
stunning gods. For this intro, you should concentrate on these eight ethical principles: what are the ethical principles that we're trying to look at? You don't need to know everything in detail; that's not the point. You'll see in the notes that it goes on and on, and in the appendices to the notes it goes on and on; that's to give you a full understanding. What you need to know, generally, is: when you're doing these things, what kind of things do you have to keep in mind? If you're good people, if you want to do a good job, you'll keep these in mind and it'll just work out all right. Very obvious. Do the discussion topic by next time; it's 10%, and it's the last one. Discussion topics: read your notes to the end of the course. There are the questions on page 239 if you're lucky, well, if you're really going for it. There are loads of notes for lecture 10, which you should have had. Has anybody not got these notes? Because I've got one here. OK, so we can get through to the end; there are quite a few notes, so don't freak out. OK, so I'm done. Have a good Easter, thank you, and I shall see you after Easter.

(Oh, loads! What the hell is that? Is it a walkback? Yeah, it's a walkback. Put some of the secrets in your notes. It's a trip. Oh, it's all right. There's a castle in the place... there's a castle in my materials.)