OK, I'm just going to get that out of the way. Or I'm going to trip over it. My name's Joe Pierce. I'm non-binary. I'm a developer. I'm a general science wumble. That is, I'm not an academic, but I love to learn about a wide range of sciences and see if I can make good use of the things that I find. Recently, I've been wumbling around the field of cognitive psychology, and I want to use it to explain some ways we can manage information overload.

So what is information overload, and why do we need to manage it? Well, in 1970, futurist Alvin Toffler popularized the term information overload in his book Future Shock. He considered the book a thesis on surviving the collision with tomorrow. In it, he wrote that there are discoverable limits to the amount of change that the human organism can absorb, and that by accelerating change without first determining these limits, we may submit masses of people to demands they simply cannot tolerate. We may submit them to information overload. Among the symptoms he ascribed to it were anxiety, hostility, apathy, physical illness, depression, and even senseless violence.

The examples he used to illustrate the effects of overload came from stories of extreme environments like the battlefields of World War II, where a soldier fell asleep while a storm of machine-gun bullets splattered around him. Not due to physical tiredness, but a sense of overpowering apathy. Soldiers became hypersensitive and would hit the dirt at the slightest stimulus, increasingly showing anxiety and anger at the slightest inconvenience.

The source of overwhelming stimuli in extreme situations is clear. But what about here and now, where there are no bullets or glass flying through the air around us? 20 million words of new technical information are recorded each day. At 1,000 words a minute, reading for eight hours a day, this is six weeks' worth of reading.
After reading the information of that one day, you would have fallen behind by five and a half years' worth of reading. And these statistics are from 2001. Toffler seems even more prescient now that there's a medically recognized condition called Information Fatigue Syndrome, with symptoms that include hurry sickness, the belief that one is constantly rushed to keep pace with time, and pervasive hostility, a chronic state of irritability near anger or even rage. These might sound familiar.

I believe we see evidence of information fatigue all around us. This is just a small sample which, I think, illustrates that symptom of pervasive hostility: "The sad state of web development". "2015 is when web development went to shit". "Programming sucks and why I quit". My particular favourite: "A lot of work is done on the internet, and the internet is its own special hellscape. Remember that stuff about crazy people and bad code? The internet is that, except it's literally a billion times worse." Take a look as well at how many tech blog writers use the word rant in the blog title.

So how might we, as developers, be overloading ourselves? Let's look at front-end web development as an example. There has been huge growth in the number of frameworks and tools. This means that it's not only hard to keep up with what's out there, but that no one can know everything. If you've ever worked on a project with multiple unfamiliar tools and frameworks, information overload is a risk.

So development is always going to be a learning process, because there's always a lot to learn. Perhaps that means we can manage information overload by managing how we learn. It helps to understand a little bit about how we learn in order to hack the process. The two partners of human learning are our working memory and long-term memory. Working memory is where we process information. It's the site of our active, conscious thinking and learning processes.
Long-term memory has enormous capacity but cannot engage in conscious thinking or learning. They work together: learning takes place in working memory, and the results are stored in long-term memory.

But what are the results that we store? The learning process involves the creation of what we call schemas. These are memory structures that allow us to treat a large number of information elements as though they were a single element. We have schemas for everything. For example, while all trees are different and include an enormous amount of sensory information, we instantly recognize a tree because we have a schema for trees. Schemas are constructed additively. For a novice learner, every new thing is unconnected and must be managed separately. But as we gain experience, we form connections between the bits of information in our heads, allowing us to treat these collections as single schemas.

But there are limits on our ability to learn. Back in 1956, cognitive psychologist George A. Miller (Gesundheit) investigated these limits on our ability to process information. You may have come across Miller's Law, or the magical number seven plus or minus two. Miller's research suggested that there's a definable limit on the amount of information we can usefully process in our working memory at any one time, and when that limit is exceeded, information overload leads to our learning becoming inefficient.

Some web designers have used Miller's Law as justification for only putting seven or fewer things on a page, but this is a misinterpretation. It wasn't the number of things that was the important result, but the fact that there was always a limit. We now equate the "things" themselves to the schemas that I mentioned. Someone with experience, who has more connected schemas, will be able to handle more complex information in working memory at once. So how do we use the result of Miller's research?
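To make the chunking idea concrete, here's a minimal Python sketch. The digit string and the four-digit "year" chunks are my own invented illustration, not an example from Miller's paper:

```python
# A rough illustration of chunking: the number of items working
# memory must hold drops when familiar patterns (schemas) group
# raw elements into larger units.
digits = "1492177618151945"  # 16 individual digits

# A novice with no relevant schemas holds 16 separate items,
# well past Miller's seven plus or minus two.
novice_items = list(digits)

# Someone who recognizes famous years chunks the same input
# into just four items.
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(novice_items))  # 16
print(chunks)             # ['1492', '1776', '1815', '1945']
```

The raw input is identical in both cases; what changes is how many units working memory has to juggle at once.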
Well, in the late 1980s, John Sweller, an educational psychologist, asked what we can do to learn more efficiently given the limitations that Miller defined. His research led to cognitive load theory. What is cognitive load theory? Very simply, it defines cognitive load as the total amount of mental effort being used in working memory, and describes a universal set of principles for managing cognitive load that lead to more efficient learning. I want to give you just a brief overview of the core concepts. Total cognitive load is comprised of three types: intrinsic load, extraneous or irrelevant load, and germane or relevant load.

Intrinsic load is imposed by the complexity of the task being performed. Learning to juggle with 10 balls is more complex than learning with three, and we can only reduce intrinsic load by juggling fewer balls. In other words, if we're faced with a large, complex task, we want to break it up into smaller ones that we can then tackle individually. You might be familiar with the agile hierarchy, where we define a high-level epic, break it down into user stories, and break those down further into tasks. Estimating an epic would impose an extremely high intrinsic load; estimating user stories and tasks should be more manageable.

Extraneous or irrelevant load is imposed by distractions, or by tasks that are irrelevant to the goal. These could be noise distractions, needing to learn an unfamiliar tool or set of tools, or trying to decipher code that's unreadable or difficult to follow. To manage irrelevant load, we could try working in a quieter environment or wearing headphones. We could try reducing the number of tools or libraries we use to a minimum; there's a growing movement, originating from Keith Cirkel, to use npm scripts alone for front-end task running, thus eliminating Grunt, Gulp, and webpack. And we can make sure that the code we write is readable. Never forget that the person unable to decipher your code might be future you.
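For context, the npm-scripts approach mentioned above usually amounts to nothing more than a "scripts" block in package.json. This is a hypothetical sketch; the specific tools named (eslint, node-sass, browserify, onchange) are illustrative choices, not a recommendation:

```json
{
  "scripts": {
    "lint": "eslint src",
    "build:css": "node-sass src/styles --output dist",
    "build:js": "browserify src/index.js --outfile dist/bundle.js",
    "build": "npm run build:css && npm run build:js",
    "watch": "onchange \"src/**\" -- npm run build"
  }
}
```

Each task is then run with npm run followed by its name, with no separate task-runner plugin ecosystem or configuration format to learn.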
Style conformance tools like linters might also help you maintain consistency without having to impose the extra load of learning a style guide. How many times have you seen a style guide that's gathered dust due to the extra load required to maintain or learn it? So, we can manage extraneous load by reducing irrelevant tasks and distractions.

Germane or relevant load is a beneficial load imposed by tasks that are relevant to the overall goal. It helps us to connect the bits of information and form more complex schemas. Repetition and context variation give us the skills to apply knowledge in a wider variety of situations. By repetition, I mean practice: schemas need reinforcing through repeated usage, which makes them easier for working memory to retrieve and process. You can vary the context by seeking out multiple sources of information. There is no such thing as the definitive tree. It might be that only after seeing something expressed in five different contexts are you able to confidently recognize it yourself in a sixth.

This might be easier to understand if I give an example in another context. Say the goal of a developer is to better understand the code base. We might assist that goal by varying the areas of the code base that they're assigned to work on. We might also pair a junior developer with one more senior, to help them understand varied approaches and develop more complex, transferable schemas.

So, to sum up: we constantly need to learn, but there are limits on our ability to learn. Cognitive load theory can help us work within those limits by giving us a set of guiding principles. If we manage intrinsic load by breaking large tasks into smaller ones, reduce extraneous load by eliminating irrelevant tasks and distractions, and increase relevant load with appropriate repetition and varied learning contexts, we promote efficient learning, improve productivity, and escape the horrors of information overload. Thank you. Any questions?
Any reading for beginners on the subject? That's a good question, actually. I've looked at lots and lots of books on this, but my main one has actually just been a big first-year cognitive psychology textbook. I think it's by Eysenck, Michael Eysenck. I wouldn't necessarily recommend it as a beginner book on this, though. There are some good books on Safari. I think there's one by someone called Ruth Colvin which is worth reading. I can't remember the title of it, though; that's probably because it's about a paragraph long. Yes, John?

Joe, when I code and do nothing else at work, I lose focus. The only thing that helps me to stay focused is talking with somebody on Skype. On Skype, not Viber, not WhatsApp. I talk to people on Skype, and then I actually code faster; I've measured it. Can you explain this to me? Is it a normal fact?

Well, I'm not a psychotherapist, so possibly not. Different things will definitely work for different people, so this is just a set of guiding principles. In your case, maybe whatever you're doing with Skype helps you to focus attention, attention being one of the main things that cognitive load theory actually writes about. There are various ways of focusing attention on learning material, or whatever it is you're doing. If that helps you focus attention, then great. Why it does that, I don't know. Because the talk is boring? I can't comment on that. Jamie, you mentioned that, didn't you? Yes, it's a terrific course. Yeah, I never got a chance to look at it before, but yeah.

On that topic, let me ask you: do you use things like meditation or mindfulness to help channel your focus?

No, no, not at all. The closest I probably get to that is that if I'm working in an open office, open-plan office rather, I will have to put some kind of electronic music on my headphones, purely to block out whatever's going on around me. But the research on whether music can help you focus your attention is slightly inconclusive. It might work for you, it might not.
One of the things that the Learning How to Learn course that you mentioned touches upon is, simplified, the brain having what I think they call a diffuse mode that it works in, and a more focused mode. So sometimes you're in a state where maybe you're more receptive to external information, and at other times you're more channelling stuff that's already in your head, I suppose. Does that form part of this thesis? Is that an area that's of interest?

Sorry, just to check: you're talking about the diffuse state versus flow? That's it, yeah. Cognitive load theory is sort of educational psychology, so it's more about learning rather than doing, but it's probably tangentially related, somehow.

On learning, then: videos, versus reading, versus talking to somebody who knows, which is best? Of course, I think I know the answer: a person that knows, and makes it clear, is the best.

That's an interesting question, actually, because, assuming that the quality of the information is roughly similar, cognitive load theory does have something to say on whether you should have audio and visual material together, or visual material alone, whether textual or graphic. There's a part of it which I didn't go into, which is the modes of absorbing information, auditory and visual, and how they can work together. So if you have a visual graphic, something like a diagram, and you have auditory information describing the diagram, that will work better than simply having text on the diagram. However, it gets a bit complicated: if there's text on the diagram and the audio information is slightly different from the text that's on the diagram, you'll actually expend load on comparing the two in your head, going "hang on, they're not the same", and you won't pay attention. But yeah, that's a whole area I didn't really go into.
Is there any research on how similarly or differently we as individuals learn? Because in a lot of fields now, I feel like the 21st century is more about recognizing that each one of us is quite different, and this might be one of those areas in which the learning process is going to vary. What can we do as individuals to kind of test ourselves?

Well, like I said earlier, they reckon that it's a universal set of principles, that it works for everyone. It doesn't tell you specifically what will work for everyone. For example, there's a whole area on cues and signals within learning material, but it doesn't tell you that the same cues and signals will work for everyone. And it doesn't say anything about actually making it interesting. Engaging the interest of learners is a big problem, and it doesn't mention anything about that; it just assumes that people want to learn the information. And if you want to learn the information, this has got what you need. But yeah, that's a whole other problem.

Is Future Shock still relevant? Sorry? Is Future Shock still relevant? Oh, Future Shock. I've always wanted to read it; I just never have. It's surprisingly so, yeah. It's a little bit out of date. I mean, some of the language is a little bit... you wouldn't use it now. But some of it still seems quite pressing. Yeah?

To memorize new information, I often use software like SuperMemo or Anki, software built around spaced repetition, the optimal time to repeat the same information in order to learn it. Oh, is it like flashcard-type things? Like flashcards that repeat things to you periodically? SuperMemo shows you the information that you need to learn at the optimal time. I just wondered if you have some experience with this software, like SuperMemo or Anki, and if you can suggest something. I can't.
I don't have any experience with those, but it does remind me of the recall graph that I was looking at the other day, which is probably how they calculate the optimal time. Yeah, because we don't forget in a linear way. No, it's sort of like a decay curve. Yeah.

Last question. Are there any views or theories that challenge this theory right here? Is there anything else we should be thinking about? Is this definitely how it is, or are there other views?

I mean, it's hard for me to say "this is true, and there's nothing that counters it". To be fair, not being an academic cognitive psychologist, I'm not really up with the current state of affairs. As far as I can tell, cognitive load theory is still fairly well regarded, but there are always things that challenge ideas in academia, so I'm sure you could find something. On that note, another round of applause, I think.
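The "decay curve" mentioned in that last exchange is usually modelled as an Ebbinghaus-style exponential forgetting curve. Here's a minimal Python sketch of how a spaced-repetition tool might schedule reviews from it; the 0.9 recall threshold and the 2.5 stability growth factor are invented illustrative numbers, not SuperMemo's or Anki's actual algorithm:

```python
import math

def recall_probability(days_elapsed, stability):
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    'stability' (S) is how slowly the memory decays; higher
    stability means slower forgetting.
    """
    return math.exp(-days_elapsed / stability)

def days_until_review(stability, threshold=0.9):
    """Days until recall probability falls to the threshold.

    Solves exp(-t / S) = threshold for t.
    """
    return -stability * math.log(threshold)

# Each successful review strengthens the memory (raises stability),
# so the intervals between reviews grow over time.
stability = 2.0
for review in range(4):
    interval = days_until_review(stability)
    print(f"review {review}: next review in {interval:.1f} days")
    stability *= 2.5  # hypothetical growth factor per successful review
```

The key property is just that forgetting is exponential rather than linear, so each review can be pushed further out than the last.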