Good morning. We continue epiSTEME with the first review talk of this meeting, by Professor Tina Overton. As the program says, she will be talking about problem solving: strategies, solutions, and successes. Tina, Professor Overton, is currently Professor at Monash University, as you can see from here, although she has spent much of her life and done a lot of her work in England. She also had a career in industry and in the National Health Service, much maligned, before joining the chemistry department at the university there. She has published extensively on the topics she is going to talk about: a considerable number of publications in book form, chapters, journal articles, research papers, learning resources which have been adopted in many institutions, and textbooks in organic chemistry and skills development. She has been awarded, as some of us know, the Royal Society of Chemistry's Higher Education Teaching Award, the Tertiary Education Award, and the Nyholm Prize, and she is a National Teaching Fellow and Senior Fellow of the Higher Education Academy. Professor Overton? Thank you. Please. Thank you very much. Good morning, everyone. Thank you for that lovely introduction. I'm very pleased to be here. I'm going to talk to you this morning about problem solving, one of my particular passions when it comes to research and when it comes to working with undergraduate students. So I'm going to review some of the literature for you, particularly literature that has influenced my own work and influenced how I go about researching problem solving with our own undergraduates. And then I'm going to finish with a summary of some of our own published work, done primarily at the University of Hull in the UK and latterly at Monash University in Australia. So I'm going to start by getting you to consider: what is problem solving? It's a term that we use quite widely, and we use it to describe a broad range of activities that we get our students to do. So I think it's worth looking at definitions of problem solving from the literature. Wheatley describes it this way: "Problem solving is what you do when you don't know what to do." That very simple definition, I think, is a very useful one. And for me, it tells me that many of the activities we ask our own students to do, and call problem solving, are not problem solving at all, because students know what to do. They know what's expected of them. Those are exercises rather than problems. Hayes defined it like this: whenever there's a gap between where you are now and where you want to be, and you don't know how to find a way across that gap, you have a problem. I think that's a very useful definition. We're taking students into the realm of the unknown. We're giving them something to do; we know where we want them to get to, and they don't know how to get there. So that's more of a problem. And then Krulik and Rudnick defined problem solving as a situation, quantitative or otherwise (we tend to focus on the quantitative in science; I think we should be getting students to think more qualitatively as well), that confronts an individual or group of individuals, that requires resolution, and for which the individuals see no apparent or obvious means or path to obtaining a solution. That's basically the same as the Hayes definition: you know where you are, you know where you want to get to, and it's not obvious how you get there, whether you work individually or in a team.
So let's bear those definitions in mind as we go forward. I find this a very useful table. It defines possible types of problems. It was published by Alex Johnstone quite a long time ago now, and it's still very, very useful. Alex Johnstone did some wonderful work in chemistry and physics education around problem solving. Very sadly, he died just before Christmas, just a few weeks ago, but his work is still very valuable, certainly to chemistry education researchers. Alex classified problems by three variables: whether you give the students the data or whether the data is incomplete; whether the method the students have to use to solve the problem is familiar or unfamiliar to them, so whether they use a known method or develop their own method; and whether the outcomes are given or open, so whether there's a single correct answer that you're looking for, or whether the answer is a bit more open. If we use these, it starts from the top: give them all the data, they use a familiar method, there's a single correct answer. Our undergraduates are very familiar with those sorts of problems. Those are what we would define as algorithmic problems. I'm more interested in the problems at the bottom of this table, where the data is incomplete. You don't give students all the data they need, or maybe you give your students lots of data and they only need a subset of it. Unfamiliar methods, so students have got to develop a strategy for themselves. And a more open, or less defined, outcome, so there isn't a single correct answer; there might be a range of answers, or a sensible answer, or answers whose value you can discuss. So I'm very interested in these sorts of problems. And what I'm particularly interested in is whether solving these types of problems needs the same skills and approaches as solving those types of problems. That's one of the things we've been looking at for some years, and I'll share some of that work with you later. Okay, so these are your algorithmic problems, and these we're going to call open-ended problems, or complex problems. So, as I said, one of the things I'm interested in is: is problem solving one thing? Is it one set of skills, or does it vary with the type of problem? Stuart Bennett published a paper in 2004. He did a big study of first year examination papers across the UK and across Australia, and he found that 98% of the problem solving on first year examination papers was algorithmic. 98%. You might say, well, that's all right, as long as they're doing different types of problem solving elsewhere. I don't think he was convinced that students were. So we rely heavily on algorithmic problem solving. We know why: they're easy to set and they're quick to mark. That's why we default to them. And it's sort of low risk for students. So what we're interested in is: is there a continuum in terms of skills from the algorithmic, through the conceptual, through to open-ended problems? Does being proficient at algorithmic or conceptual problems, the more traditional problems, lead to success in open-ended problems, and indeed the other way around? And does it matter? Should we care? Well, I care; you may decide you don't. All right, so here are some examples. I'm a chemist, so I apologize to all of you who are not chemists; some of the examples I will draw on are from chemistry. This is a very traditional first year chemistry algorithmic problem. This is a type 1 problem.
Very simple: potassium superoxide reacts with carbon dioxide according to these equations. What mass of potassium superoxide would be required to convert 50 grams of carbon dioxide to oxygen? All our first year undergraduate chemists could tackle this problem. I've given them all the data. They're very familiar with the problem. It's a straightforward mole calculation. There is a single correct answer that I can mark with a tick or a cross. They're very familiar with that. This next problem I'm going to show you contains the same chemistry, the same chemical skills, but it's presented as a more open-ended problem. So: you've been lucky enough to be invited to give a talk at the epiSTEME conference in Mumbai. The flight from London is nine hours. To provide breathable air on an aircraft, recirculation cells containing potassium superoxide are used. What mass of potassium superoxide would be needed for this flight? And to make it easy, I've given the students the equations. You could decide to give them or not. So the core of that problem is the same. I've not given them any data. The final calculation will be an algorithmic calculation, but they've got to develop a strategy to get to that point, and there certainly isn't a single correct answer. So it's a very different type of problem. What sort of information would students need to think about and provide for themselves in order to start to tackle this calculation? What sort of strategy, what sort of model would they develop? Discuss it with the person next to you for two minutes. Come up with some ideas about how you would start to tackle this problem. Go, two minutes. Don't be shy. What sort of information would you need? What sort of data would you generate in order to tackle this problem? Thank you. Do you have a question? Maybe I can tell the way... Yes, please. First, we know the time of the flight is nine hours; that is given. And we don't know the size of the airplane, so how much oxygen we are generating is unknown in this case. And we know the balanced chemical reaction. So the only unknown is the amount of oxygen that we're going to generate. Okay, thank you. That's a good start. So the size of the airplane: you'd have to make an assumption, 300 people, 400 people, make an assumption. Then you need to know how much carbon dioxide you're generating. So you need to think about how much carbon dioxide we breathe out in one breath, because that's what's going to be removed. Then you have to make assumptions about how big a breath is, what its volume is, and how often you breathe out, the rate of breathing. So you need to start to build a model and then generate your own data. Although the chemistry content is exactly the same, you can see it's a very different type of activity for students. Plus you've got this real-life context, and there's lots of research that shows using a real-life context motivates students.
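A minimal sketch of both calculations in Python may help make the contrast concrete. The 4 KO2 + 2 CO2 → 2 K2CO3 + 3 O2 stoichiometry is the standard recirculation-cell chemistry; the passenger count, breathing rate, tidal volume, and CO2 fraction below are illustrative assumptions, not figures from the talk.

```python
# Contrasting the algorithmic (type 1) and open-ended versions of the problem.
# Chemistry assumed: 4 KO2 + 2 CO2 -> 2 K2CO3 + 3 O2

M_KO2 = 71.10   # g/mol (K 39.10 + 2 x O 16.00)
M_CO2 = 44.01   # g/mol

def ko2_for_co2(mass_co2_g: float) -> float:
    """Mass of KO2 needed to absorb a given mass of CO2 (2 KO2 per CO2)."""
    mol_co2 = mass_co2_g / M_CO2
    return 2 * mol_co2 * M_KO2

# Type 1 problem: all data given, familiar method, single correct answer.
print(f"KO2 for 50 g CO2: {ko2_for_co2(50):.0f} g")      # about 162 g

# Open-ended version: the student must build the model and the data.
# Every number below is an assumption a solver would have to justify.
passengers      = 350      # assumed aircraft size
flight_h        = 9        # given in the problem
breaths_per_min = 12       # assumed resting breathing rate
breath_volume_L = 0.5      # assumed tidal volume per breath
co2_fraction    = 0.04     # assumed CO2 fraction of exhaled air
molar_volume_L  = 24.0     # rough molar volume of a gas, L/mol

litres_co2 = (passengers * flight_h * 60 * breaths_per_min
              * breath_volume_L * co2_fraction)
mass_co2_g = litres_co2 / molar_volume_L * M_CO2
print(f"Estimated CO2 exhaled: {mass_co2_g / 1000:.0f} kg")
print(f"KO2 needed (ballpark): {ko2_for_co2(mass_co2_g) / 1000:.0f} kg")
```

With these assumptions the estimate comes out at a few hundred kilograms of KO2; a different but equally defensible set of assumptions would give a different, equally sensible answer, which is exactly the point of the open-ended version.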
Okay, so if we do a quick comparison of those two: students are very familiar with the first type of question, they're doing them all the time, and less familiar with the more open ones. One is very clearly defined, the other less so. They know the method they're going to use for the first one; for open-ended problems, they're going to have to develop their own method. Algorithmic problems are often abstract, whereas open-ended problems present an opportunity to use some real-life context to motivate students. We usually give students the equations in algorithmic problems, or they already know them. With open-ended problems, you can choose not to give them the equations and expect them to develop some for themselves. Algorithmic problems tend to be based on taught content: we teach them something, then give them some examples. Open-ended problems are a bit more flexible. The maths is very formal for the first type but very informal for the second type; they really do want to do some back-of-envelope calculation. Accuracy isn't really important in the second one; we're just looking for ballpark figures. And there is a range of acceptable answers. I think this is really important, because so often science students are used to solving problems where there is a single correct answer they can check. They're very unused to being in situations where there is a great degree of uncertainty, where all you can say is, well, that answer looks sensible; it's different to your partner's, but they're both sensible answers. There isn't always a single correct answer. And I think that perceived certainty is often what attracts some students to study science. They like it, and I think we have to introduce them to the fact that science isn't all about certainty. All right, so you can see that even a superficial analysis seems to suggest that open-ended and algorithmic problems are quite different. And if they are, maybe they require different skills in order to be successful. Okay. So there has been some research comparing open-ended with algorithmic problem solving. This paper by Surif et al. says that the attributes required to answer algorithmic and open-ended problems are thought to be different. They looked at the difference in attributes between them and identified that participants who were successful at algorithmic problems were less successful at the open-ended problems. So being good at one type of problem solving didn't necessarily mean you were good at the other type. And they put that down to an inability to transfer knowledge between different styles of problem. So this again suggests that all problem solving isn't the same. Okay, so what we're going to look at now is different models of problem solving, and there have been several published in the literature, of varying levels of complexity and usefulness. This is a very early model, Polya's, often quoted in the literature. His model starts here: understand the problem, develop a plan, execute the plan, reflect on the solution. It's a very simple model. And you would think that if we could teach our students this method, they'd all be expert problem solvers. We know they're not. This next one is more recent, but you can see that it's still very much based on the Polya model. Define and analyze the problem, which is "understand the problem". Collect some data, which is what you were doing when we were thinking about that aeroplane problem. Generate potential solutions. Select the optimum solution, and then evaluate and revise the solutions. And then you're back to defining and analyzing the problem.
There's a big step here, from collecting data to generating solutions. There's lots and lots of work and thinking and activity going on in that single step. This next model, published by Wheatley, is for me much more realistic. This is the "anarchistic model" of problem solving. If you look at this model, I think you'll see it reflects much more accurately what you see your students doing. So Wheatley says, from observation of his students: read the problem; read the problem again; write down what you hope is relevant, draw a picture, make a list, some sort of representation; try something; see where it gets you; read the problem again; try something else; see where this gets you; test your results; read the problem again; where appropriate, strike your forehead; write down an answer, not necessarily the answer, just write an answer down; test the answer to see if it makes sense; start over if you have to, celebrate if you don't. And I think that's much more realistic. I think that's what we see students doing. Think back to your own student days, or to when you start to tackle something novel and complex: this is more like what we observe students doing. It's not simple, it's not straightforward. But if we break down Wheatley's model, we can superimpose on it some of those steps that Polya used as well. You can see that understanding the problem is in there, developing a plan is in there, again understanding the problem, executing the plan, reflecting; execute the plan, reflect, understand the problem, execute the plan, reflect. So those steps from the previous models are there, but it's a much more complex model. It's not four or five simple steps. You can see it's iterative: understand, execute, reflect; reflect, understand, execute, reflect. It's this iterative trying to understand what the problem is telling you, doing something, thinking about it, going back and having another go. George Bodner has said that a big part of problem solving is gradually exploring or playing with the question, particularly when it's a complex question: reading the problem, noting relevant information. He found drawing representations and writing equations was key. Encourage your students to write something or draw something; that is a key step. Examining intermediate results, rereading the problem; again you've got this iterative model. Bodner also says: "It is possible to construct a unified theory of problem solving. I have done so. Unfortunately, I'm afraid our unified theories will differ significantly from one another." So we can all develop our own theory, but actually we all go about it in our own way, to a large extent. So what I'm going to do now is look at some predictive studies. We've defined problem solving; we've looked at some models of problem solving. Are there things we can measure that can predict whether students are going to be successful at algorithmic or open-ended problem solving? We're going to look at some cognitive factors: factors that you can measure quite easily with a paper-and-pen test in about 20 minutes, getting a number for each of your students, which you can then correlate with their performance in a range of things, problem solving being one of them.
There are three main cognitive factors that have been used in predictive studies of problem solving in undergraduate science. The first of these is working memory, and I'm sure that's something most of you are familiar with. This is the information processing model. This diagram represents what's going on in our heads when we're solving problems, or doing anything, really. This is the outside world: events, instruction, things that are happening. It enters our consciousness through attention; we pay attention to it, and it enters our consciousness. That information then passes into the working memory space, and that's where we interpret, rearrange, compare, and process information. Once we've processed it, we might produce a response, i.e. an answer or solution to a problem. That information then passes into the long-term memory, where it's stored and linked with other information. And when you're presented with a new situation, you can retrieve information from the long-term memory into the working memory space in order to use it. All of this is influenced by something called the perception filter, which links our long-term memory to what we're taking in. That perception filter is really about our prior knowledge, our prior experiences, our attitudes. So it's about what we know and who we are, really. We're going to focus on this working memory space. This leads us to cognitive load theory. Cognitive load theory says that our working memory can hold between five and nine discrete pieces of information, and can process those pieces of information. But if we present students with very complex tasks, that leaves little room for processing. Experts can handle complex information much more effectively than novices. In chemistry, for instance, this is a benzene ring. It's one piece of information for a chemist. A novice student, and maybe some of you who are not chemists, will see a benzene ring as six carbon atoms, six hydrogen atoms, alternating double and single bonds. If you see a benzene ring like that, it takes up much more space in your working memory than if you see it as a single entity. This is called chunking: we take a complex piece of information and collapse it into one. Your telephone number: you don't remember it as 11 discrete numbers, you remember it as one piece of information. Familiarity with it has enabled you to chunk it into one piece of information. Experts also use schemas. Schemas are what you build when you become familiar with a type of problem: little mental subroutines that collapse several steps into one step, cognitively anyway. So cognitive load has a massive impact on performance in problem solving. This is Alex Johnstone again. This plot shows the number of steps in a problem that you give to your students (he did this study across chemistry and physics) against the problem solving score. You can see that along here, up to about five steps, students are getting about 90%. You're not overloading them; handling five bits of information, they're very successful.
And we might expect that as the load, the number of steps, increases, we'd see a gradual decrease in performance. Alex expected that; it makes sense. What he actually found was that once you get to five or six pieces of information or thought steps, there is a threshold, and student performance just drops like a brick off that threshold. You can see that once the number of thought steps is higher, students are really not performing at all. So what you've got to be careful of, when you set a problem, is whether you are measuring students' problem solving ability or the complexity of your problem. Because if you're setting a problem that's down here somewhere, you're not measuring students' ability, you're measuring their working memory capacity. You need to be aware of that when you're setting problems. I'm just going to give you an idea of what this cognitive overload feels like, and then you can remind yourself of it when your students are struggling with something. The test for working memory capacity is called the digit span test. It's a very simple test: you just read out strings of numbers to your students and they write them down. Very straightforward. So I'm going to do that now. The top end of the test is nine numbers. You're all very able people, so I'm going straight in at nine numbers. I'm going to read these numbers out to you, and when I finish, you tell them back to me. No writing them down; this is all up here. No cheating. Are we ready? Two, five, nine, seven, one, three, six, eight, four. Excellent. Right, the real test is called the reverse digit span test. Now I'm going to read the string, and I'm going to make it easy for you and use the same string of numbers. When I finish, I want you to reverse them in your head and then tell them back to me. That means you're doing some processing as well, which is what we're asking students to do. Are we ready? Two, five, nine, seven, three, one, eight, six, four. Yeah. Typically you get up to about seven or eight the first time, and only about four or five the second time. So you get a feel for what that cognitive overload feels like: you get to a point where you're trying to process so much information that you can't handle it. That's what it feels like for students when they're overloaded, and we do it to them in lots of situations. We do it in problem solving. We do it in laboratories particularly: students go into the lab with new equipment, there's a new lab manual, they're working with somebody new, they don't know where anything is, and then they're having to perform a complex experiment as well. Complete cognitive overload for a lot of them. A closely related construct is called M-capacity: the mental attentional energy available for a particular task. It's measured with a slightly different test. It's often used interchangeably with working memory, but actually it's more about the processing than just the capacity, so it does often give you slightly different results.
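A minimal sketch of the digit span procedure just described, for trying on yourself; this assumes a Python console session, and it is of course not how the test is formally administered.

```python
# Self-administered (reverse) digit span trial: show a string of digits,
# hide it, then ask for it back, optionally in reverse order.
import random

def digit_span_trial(length: int, reverse: bool = False) -> bool:
    digits = [str(random.randint(0, 9)) for _ in range(length)]
    print("Memorise:", " ".join(digits))
    input("Press Enter when you are ready to recall... ")
    print("\n" * 40)  # crude way to scroll the string out of sight
    prompt = "in REVERSE order" if reverse else "in order"
    answer = input(f"Type the digits back {prompt}: ")
    target = digits[::-1] if reverse else digits
    return answer.replace(" ", "") == "".join(target)

# Span is usually taken as the longest length recalled correctly.
for n in range(4, 10):
    if not digit_span_trial(n, reverse=True):
        print(f"Reverse span reached: about {n - 1}")
        break
```

Running the forward and reverse versions back to back reproduces the effect demonstrated in the talk: the extra processing step of reversal typically costs two or three digits of span.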
I said before that, as experts, we do a lot of chunking. This figure here has, I think, 11 lines and three symbols. If I asked you to look at it for a second, could you then write it down accurately? Probably not. But if I showed you this one, which has the same number of lines and the same number of symbols, and asked you to write it down, you could, if you're a chemist, because you're familiar with it. This is the difference between experts and novices. We've got to keep reminding ourselves that we are experts, in that we're very familiar with the terminology and the symbolism. Our students, novices, are not at that stage, so they're using a lot more mental capacity than we are, which is why we get frustrated with them sometimes, and why they fail: because we're missing that. Obviously, if we couldn't do something to reduce this cognitive load, none of us would function, if we could only ever process five pieces of information. So of course there are ways to deal with it. When it comes to helping students, there are some easy ways to help them. One is to activate prior knowledge right at the beginning: remind them, today we're going to talk about this, and here is what you already know. That helps. And present all the supporting information they're going to need right at the beginning, so it's out of the way. Remove what I've called noise. Noise is all the extra stuff we might put in a problem, maybe to make it more interesting, but that students don't really need. You have to make some decisions here, because sometimes we want to put the interesting context and additional information in to motivate the students, but we just have to be aware that it's adding to the cognitive load. Scaffolding: helping your students through the problems. There are different ways of doing that. You can simplify the problem, break it down into individual steps; that makes it much more accessible to students. There's also a technique called fading. You're aware of the traditional worked-examples method: you work through an example on the board, then the student has a go. In fading, you work through a worked example but finish before the end, and they finish it off. Then you do another one, but you finish earlier, so they have to finish more of it off, and so on, until they're doing the complete worked solution for themselves. That's been shown to be much more effective than straightforward worked examples. And then the other important factor is something called field independence, or disembedding ability. That's another of these cognitive variables that we can measure, and if you have high field independence, high disembedding ability, then you're much more able to deal with complex pieces of information, and I'll show you why in a minute. Does anybody know who this is? Yes, it's Wally. Are you familiar with him? He stars in a series of children's picture books, Where's Wally?, and what you have to do is find Wally. The picture books look like this. Our kids had them when they were little and loved them. So can you find Wally in this picture? Hands up when you've spotted him. He's in there. You've spotted him? Anybody else? Okay, so Wally is here. There's Wally. You can see that there's lots and lots going on in this picture, lots of interesting things that kids love to look at.
But you've got to look past all the interesting additional stuff that's going on and just focus on finding Wally, which is a bit of a shame, really; kids don't just focus on finding Wally, they enjoy all this other stuff. But this is a good analogy for field independence. Field independence, or disembedding ability, is the ability to look past all the additional information that isn't relevant and get straight to what's important: seeing the wood for the trees, or cutting out the noise. If you're highly field independent, you'd have gone straight for Wally: I'm not interested in any of this, I just want to find Wally. If you're field dependent, you'll have enjoyed looking at everything else that was going on before you finally found him. So if you are field independent and, given a complex problem, you can go straight to the important bit and not be distracted by all the context, then you're not going to be overloaded cognitively as easily. If you had a low working memory but were highly field independent, that would compensate for the low working memory. So all three of these factors, M-capacity, working memory, and field independence, have been studied in predictive studies. Johnstone and Tsaparlis both found a correlation between working memory capacity and problem solving ability; that was the Johnstone graph I showed you. Tsaparlis also found correlations between M-capacity and disembedding ability and algorithmic problem solving, and found that working memory overload was more marked in field dependent subjects, which is not surprising: if you're field dependent, you can't pick out the important bits of information, therefore your working memory is going to be overloaded more easily. Okay, so here's a study we published a while ago. Those predictive studies had been done on traditional, algorithmic problems, the sort of thing we usually see students do. We wanted to look at open-ended problems to see if there were any differences. In our first study we had a couple of hundred first year undergraduates. We had their scores on algorithmic problem solving, they did some open-ended problems, and we measured M-capacity, working memory, and field independence. We also did attitude questionnaires. The attitude questionnaires told us that they found open-ended problems much more challenging, but much more enjoyable. So they enjoyed doing them. What we found was, same as the Johnstone and Tsaparlis studies, a correlation between M-capacity and problem solving scores for algorithmic problem solving. We also looked at overall degree scores, the overall scores students were getting on their degree program, and we found a correlation between algorithmic problem solving scores and overall degree averages. For open-ended problems, we saw the same correlation between M-capacity and problem solving. We also saw a correlation between field independence and problem solving, but no correlation with degree results, which for me says that these things are definitely different. A strong correlation with degree average for algorithmic, and no correlation for open-ended: they must require different cognitive skills. Here's the lack of correlation with open-ended problem solving; I don't think any of us would be happy drawing a straight line through there. And here's the algorithmic one; within our entire study, this is the strongest correlation we found.
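The analysis behind scatter plots like these comes down to a Pearson correlation between two score lists. Here is a minimal sketch; all of the scores below are invented for illustration and are not the study's data.

```python
# Pearson correlation between problem-solving scores and degree averages.
# All data below is hypothetical, constructed only to illustrate the contrast.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sdx = sqrt(sum((x - mx) ** 2 for x in xs))
    sdy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sdx * sdy)

algorithmic = [55, 62, 48, 71, 66, 59, 74, 52]  # hypothetical problem scores
open_ended  = [2, 3, 1, 2, 3, 2, 1, 3]          # hypothetical open-ended ratings
degree_avg  = [58, 64, 50, 69, 65, 60, 72, 55]  # hypothetical degree averages

print(f"algorithmic vs degree: r = {pearson_r(algorithmic, degree_avg):.2f}")  # strong
print(f"open-ended  vs degree: r = {pearson_r(open_ended,  degree_avg):.2f}")  # near zero
```

The pattern reported in the talk corresponds to the first correlation being strong and the second being negligible.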
So it tells us that open-ended and algorithmic problem solving are definitely different. What it also tells us is that we are rewarding algorithmic problem solving; we're not rewarding or assessing open-ended problem solving. Yes, do you have a question? Sorry, I think the microphone's coming. Oh, the axis, sorry. This axis is the... Just a clarification: what do you mean by degree score? The degree score is, over the entire undergraduate chemistry degree, at the end of the degree or at the end of the year, the percentage score they get. Oh, you mean like... So all the exams, all the assessment, everything they do on their degree, they get a year average. So this axis is degree score, and this is the score on the algorithmic problems. So that was quite alarming, really. But it did suggest to us that these were different types of problems, so I was lucky enough to work with Helen St Clair-Thompson, a psychologist, who knew much more about working memory and M-capacity than I did. She had some tests that evaluated the different components of working memory. It's not a single thing; it has different components. The central executive is about attention. The phonological loop is really about verbal reasoning, and the visuospatial sketchpad is really about visual reasoning, drawing representations and that type of thing. So we had another study, about 100 students; they each solved some open-ended problems and we measured these three cognitive features. What we found was that the score on the counting recall test, which measures working memory, was the best predictor of algorithmic problem-solving scores, and it also predicted previous chemistry grades. So again, we're rewarding algorithmic problem solving. The scores on the figural intersection test and the backward digit recall, which measure M-capacity and higher-order skills, were the best predictors of open-ended problem solving. These measure the mental capacity for processing information, the higher-order skills. So we found that the lower-order cognitive skills were good predictors of algorithmic problem solving, and the higher-order cognitive skills were the best predictors of open-ended problem solving. If you're not developing higher-order cognitive skills, your students are not going to be successful in open-ended problem solving. From there we went on to thinking: that's all very well, but can we understand how students go about tackling open-ended problems? What are they doing? How do we define success, and can we help students develop strategies and approaches that will help them? So, I'll speed up a bit. This was a qualitative study. We got individuals to solve three open-ended problems. We used a think-aloud protocol, with a Livescribe digital pen to capture what they were writing and what they were saying. We transcribed those, looked for emerging themes, and checked inter-rater reliability. These are some examples of the problems we used. Every student did "How many toilets are needed at a music festival?", which was supposed to be a science-free warm-up to get them into it. "What is the mass of the atmosphere?": we had students from different STEM disciplines, and we used that one with the chemists and physicists. "How much carbon dioxide is produced during a marathon?": we gave that one to our sports scientists and biologists. So they all had three problems, which were relevant to their discipline. They all found this one the most difficult.
They found the science ones much easier than the one we thought was the easy warm-up. So we identified nine themes when we looked at what they were actually doing, what approaches they were using. The themes we identified were: identifying the information needed; making approximations and estimations; using algorithms (using numbers, basically); evaluation; using a logical and scientific approach; identifying and framing the problem; developing a strategy; not being distracted by the details of the problem; and confidence and lack of confusion. And for each of these themes we saw both positive and negative instances: students who were unable to identify the information needed, as well as students who were successfully doing it. So we gave each code a positive and a negative version. Then we went back and coded up each individual. Those are just some examples; here's a couple of individuals that we've coded. These are the number of times each student exhibited each of these approaches. We gave a percentage to each one and then took the highest percentage codes for each individual, and then for each discipline. This gave us a lot of detailed information, but it wasn't very easy to get a handle on, wasn't very easy to visualize what was going on. Oh, and we also saw, for all students, this confidence theme; it's not really an approach, more of a behaviour really, a lack of confidence and exhibiting confusion. So what we did was bring all the individuals' results together for each discipline and present them as radar diagrams. This is for our chemistry undergraduates. The blue dots are positive approaches and the orange dots are the negative approaches. You can see for the chemists: this one here is identifying the information needed, this one is using algorithms, using numbers if you like, and this one is identifying and framing the problem. That's what they're doing mostly. This one is confusion. If we look at the physicists: very similar, a very similar shape. The major positive approaches and the major negative approaches are exactly the same. Not a great deal of difference. We also had some interdisciplinary science students. These are students at the University of Leicester, on an interdisciplinary program delivered entirely by problem-based learning. It's a really interesting program, so we expected these guys to be really good at this sort of thing. And they're really not very much different. They do a bit more making approximations and estimations, and they're slightly less confused, but other than that, they're still doing the same major things. We also looked at some experts. We went to industry, the chemical industry. These are industrialists, all qualified to at least masters level and with at least 10 years in industry. And you can see that these look pretty much the same as the undergraduates: still showing some confusion, still the three major approaches, identifying the information, using numbers, identifying the problem. These are the odd group: academics. You can see these are quite different. They don't need to identify the information needed as much. They still use algorithms and numbers, and they don't identify and frame as much, but they're doing more evaluation and more making approximations and estimations. A much more rounded profile, with a bit of confusion, but not too much.
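Radar (spider) diagrams like the ones described can be produced straightforwardly with matplotlib. A minimal sketch follows; the nine themes come from the talk, but the percentages are invented for illustration and are not the study's data.

```python
# Radar diagram of positive vs negative approach frequencies for one group.
import numpy as np
import matplotlib.pyplot as plt

themes = ["identify info", "approximate", "use algorithms", "evaluate",
          "logical approach", "frame problem", "strategy",
          "ignore distractions", "confidence"]
positive = [80, 30, 75, 20, 40, 70, 35, 25, 30]  # hypothetical % per theme
negative = [25, 15, 10, 10, 15, 20, 20, 15, 55]  # hypothetical % (confusion high)

# One angle per theme, with the polygon closed by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for data, label in [(positive, "positive approaches"),
                    (negative, "negative approaches")]:
    values = data + data[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(themes, fontsize=7)
ax.legend(loc="lower right", fontsize=7)
plt.show()
```

Overlaying one such polygon per discipline is what makes the similarity of the chemists, physicists, and industrialists, and the distinctiveness of the academics, visible at a glance.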
So the academics are different, and we don't know why. We don't know whether this is because academics are an odd, self-selecting bunch of people, or whether it's the training and the way we think as academics. We don't know that yet. And then it got interesting. We went to the life sciences, and these were completely different. This is psychology; look at this. Lots and lots of negative approaches, lots of confusion, really not doing those things that the physical scientists did at all. Same with the sports rehabilitation students: lots more negative approaches. So what about the quality of the solutions? What was their success? We used a traffic-lighting method. We defined red, amber and green, but basically red was a very poor solution, amber was so-so, and green was a really good solution. Just looking at the chemistry undergraduates, industrialists and academics, you can see that the industrialists are more successful than the undergraduates, and the academics are more successful again. We also tried to quantify it a bit, so we gave a red solution a score of one, an amber solution a score of two, and a green solution a score of three, and calculated average scores. I've not got error bars on this data, so take it with a pinch of salt, really. But you can see that the most successful group were the chemistry academics, then the physics students, the chemistry industrialists, the interdisciplinary science students, then the chemists, and then the life sciences, bringing up the rear. And the most successful group of all was a group of academics at a conference who were doing these in groups; they were the most successful, not surprisingly. When we analyzed that, what we found was that the key steps for success were framing the problem, making approximations and estimations, and building confidence through practice: when students become more confident with these things, they're much more likely to have a go. But the key thing was that the academics were the most successful, and everybody did some evaluation. The academics did the most evaluation, but they did evaluation throughout the solution, whereas all the other groups only did it at the end. They did the solution, looked at it, evaluated it. The academics were evaluating at every step, all the way through: does that sound sensible? Does that look about right? So that really was the key step for success. So far, we know that algorithmic and open-ended problem solving are different; that open-ended problem solving requires higher-order cognitive skills; that we can't apply the very simple models to complex problems; and that algorithmic problem solving is generally well rewarded in assessment schemes. I really think we ought to be trying to build assessment of more open-ended, higher-order problem solving into our assessment and rewarding it. We're beginning to understand what a successful approach looks like, and we need to think about whether we can accelerate our undergraduates towards that. We don't know at the moment whether academic success is a selection process or a training process, and that would be a difficult thing to analyze, but we do see that academics are very successful. And we need to do a bit more work on cognitive effects; we did see a few interesting, not strange, results for field dependence and some of the aspects of problem solving. So that's all I have to say.
Those are the references, which have been made available previously, and thank you very much for your attention. I'll stay, I'll stay. Thank you, Professor Overton. That was very enlightening. Some questions. Given the time, we'll keep it to a few questions. As you ask them, could you kindly identify yourself before getting into the question. Karen? Yes, this is Karen Haydock. Actually, I have some problems with the assumptions on which your work is based; I find them rather problematic. So I wonder if you could talk a little bit about how people have challenged the whole idea of cognitive overload theory. What is actually the evidence that there are some universal laws of development that are based on the cognitive structure of the mind? For example, I know Anna Stetsenko and other people have challenged this. When you were talking about finding Wally, for example, you mentioned that children have to learn not to be distracted by the confusion and by the very interesting parts of the picture. What I'm trying to get at is that there are some dangers in this kind of thinking. It reminds me of these arguments about IQ and genetic determinism and trying to measure mental capacity. And these are very problematic; as we know, they can even result in teachers thinking that children's learning is genetically limited and that therefore they don't even have to try to teach these children. It almost sounds like what you're trying to measure, when you're measuring this cognitive overload, is actually just whether students are able to put everything else out of their mind and concentrate upon the question which the teacher is asking, rather than asking questions themselves. And, let me just finish, I'm almost done. So actually it results in a kind of imposition of prejudice and categorization and sifting of students: the assumption that some problems are inherently too difficult for some students. So the danger is that we're actually teaching students to conform and concentrate on the questions that are given, instead of deviating off this path and asking their own questions, which is maybe actually what they're doing when they're not quick enough at calculating the reverse order of a set of numbers. Or maybe they are actually thinking in another direction and realizing that this is a stupid question to ask. So are we actually turning students into robots, or are we ignoring the different social, political, economic, cultural, hierarchical differences between people that could actually be giving the results you've shown? Shall I answer that one first? Because I won't remember the question. No, we're not trying to turn students into robots, or to ignore other social features; I see all my students as individuals. But I think there's plenty of evidence that cognitive load is a real challenge that students meet. And I think it's helpful for academics to be aware that in everything we ask students to do there is some inherent load, as well as the intellectual and content challenge; there are other features in there. I just want academics to be aware of that. There's lots of evidence, and I've shown you Alex Johnstone's work; for me it's quite compelling evidence.
That doesn't mean we can measure every student's working memory; I don't measure my students' working memory, except in these studies. But if we do have students with lower working memories than others, then that is going to affect their performance. There are certainly ways around it, and we know there are ways around it; there are plenty of papers in the literature, Renkl and others, on how you can reduce cognitive load to enable students to access the material you're giving them, access the problems you're giving them, and giving advice on how you can scaffold your students through them. Three questions; Jayashree, sorry, I have to take, yes, please. One, two, and Jayashree, three. Hi, I'm Ashut, I work here at HBCSE. My understanding is that there is substantial evidence that if students solve problems in small groups of three or four, the gains are much greater in some of the things that you mentioned, not just content understanding or problem solving skills, but a lot more than that: scientific practice skills like argumentation and the construction of explanations. And I think it also at least partially addresses some of the concerns which Karen just mentioned, if it's properly worked out. So maybe you can tell us something about that. Yes. I'm a big fan of problem-based learning, where students work in groups to solve very complex problems, for just those reasons: they're less likely to become overloaded, and the collaborative learning they do together is very beneficial for them, and they make much better progress. Yes, please, the next question. My name is Kanchana and I teach high school mathematics in Bangalore. From a teacher's perspective, there are two challenges with setting open-ended questions. One, of course, assuming you're able to set them, is to facilitate the learning process: how do you, for each child, figure out the learning process and help them move in the direction that makes sense to them? The second one is evaluation: how do you evaluate a group of students in open-ended situations? I'm not claiming that a teacher's life should be easy, but at a practical level, my question to you is: have you been able to work with organizations that have implemented this successfully at a reasonable scale, and do you have some numbers or data to tell us that it is possible to do it in a practical manner? Because there are challenges of recruitment, teacher training, et cetera. Okay, so I've not worked with organizations myself. I've used open-ended problem solving extensively in my own teaching, both individually and with groups of students, and supported those students through those processes. It's really about asking them questions, probing them, getting them to ask questions themselves. Keeping that identifying and framing of the problem in their own hands, and starting to understand what it is you're asking them to do, is key, whether it's short open-ended problems or extensive project work or problem-based learning. As for the evaluation of groups of students, I've done that with undergraduate students quite a lot. I would imagine it's maybe more challenging in a secondary school situation.
Peer assessment works very well at tertiary level: getting students to evaluate each other's performance and each other's contribution to reaching the solution. I think if you have a dialogue with your students before you start, and they understand how you're going to evaluate it, and you've discussed with them the criteria you're going to use, then they have a good understanding and they're very able to evaluate each other's performance and contributions. I've found that works very well. It also alleviates a problem when students are working together in groups: they often object to it because they feel, particularly the very able students, that other students are going to pull their grade down. Using some form of peer assessment often helps ameliorate those worries for students. Yes, please. So I was trying to understand: you had basically three kinds of studies. In the first one you used M-capacity as a measure, which you correlated with problem solving; in the second one you used working memory, and then the three components of working memory; and then the final one, the emergent themes part of it, I could not connect to the M-capacity and working memory. So what was the thread that ran through this? How did you go from one to the other? What's the connection? I suppose the thread was this difference between algorithmic and open-ended problem solving; we were really interested in how students set about doing open-ended problem solving. The quantitative studies really built on previous studies, to see if there was a difference between algorithmic and open-ended, and we found that indeed there was. That then led us to look in more detail at what students are actually doing when they're solving these open-ended problems, and which steps in those approaches are the ones likely to lead to success. But why did you go from M-capacity to working memory, and then to your final formulation, in which neither of these occurs? Well, in the first study we did working memory, M-capacity and field independence, and in the second one it was the three components of working memory. The last one really was just qualitative; it grew out of the previous studies, but there wasn't a strong... no, that's not true, we also measured field independence, and there is some indication that high field independence correlates with some of the approaches that we're seeing, but we've not really had a big enough sample for that yet. Okay, so thank you very much, Professor Overton. We shall end this session with a lot of thought. I'm sure there are more questions, but we can carry them on over lunch, after the next session.