Hello, my name is Joe Pierce, I'm non-binary, I'm a developer, and I consider myself something of a Science Womble. It's only just now that I've realized the reference to Wombles might actually be a bit before certain people's time. Does anyone not know what a Womble is? And I've just realized I can't actually explain what a Womble is, so turn to the person next to you who does know and ask them. Anyway, by Science Womble I mean that I'm not an academic, but I love to learn about a wide range of sciences and see if I can make good use of the things that I find. Recently I've been wombling around the field of cognitive psychology, and I want to use it to explain some ways you can manage information overload.

So what is information overload, and why do we need to manage it? Well, in 1970 the futurist Alvin Toffler popularized the term information overload in his book Future Shock. He considered the book a thesis on surviving the collision with tomorrow, and he wrote that there are discoverable limits to the amount of change that the human organism can absorb, and that by accelerating change without first determining these limits, we may submit masses of people to demands that they simply cannot tolerate. We may submit them to information overload. Among the symptoms Toffler ascribed to it were anxiety, hostility, apathy, physical illness, depression, and even senseless violence.

Experimental psychology uses the term orientation response for our reaction to change or novelty; it's also sometimes called the startle reaction. Our pupils dilate, our hearing becomes more acute, our muscles tense, blood rushes to the head, and our breathing and heart rate alter. The examples Toffler used to illustrate the effects of overloading this orientation response came from stories of extreme environments, like the battlefields of World War II, where a soldier would fall asleep while a storm of machine-gun bullets splattered around him, not from physical tiredness but from a sense of overpowering apathy. Soldiers became hypersensitive and would hit the dirt at the slightest stimulus, increasingly showing anxiety and anger at the slightest inconvenience.

But the sources of overwhelming stimuli in extreme situations are clear. What about here and now, where there are no bullets or glass flying through the air around us and the earth is steady beneath our feet, at least most of the time? Toffler went on to say that the orientation response occurs not merely in response to simple sensory inputs; it happens when we come across novel ideas or information as well.

So are we faced with an overwhelming amount of new information 46 years later? Twenty million words of new technical information are recorded each day. Reading at 1,000 words a minute for eight hours a day, that's six weeks' worth of reading, and after reading that one day's worth of information you would have fallen behind by five and a half years' worth of reading. And those statistics are from 2001; imagine how much worse it is today. Toffler seems even more prescient now that there's a medically recognized condition called information fatigue syndrome, with symptoms that include hurry sickness, the belief that one is constantly rushed to keep pace with time, and pervasive hostility, a chronic state of irritability near anger or even rage. These might sound familiar. I believe we see evidence of information fatigue all around us. This is just a small sample, which I believe illustrates that symptom of pervasive hostility.
So: 'The sad state of web development'. 'Random thoughts on web development going to shit: 2015 is when web development went to shit'. Or 'Programming sucks, and why I quit'. My particular favourite: 'A lot of work is done on the internet, and the internet is its own special hellscape. Remember that stuff about crazy people and bad code? The internet is that, except it's literally a billion times worse.' And take a look at how many tech blog writers use the word rant in the blog title.

So how might we, and I'm assuming there are some developers in the audience, hoping... yes, excellent, how might we as developers be overloading ourselves? Any front-end web developers in the audience? That'll do, it's enough. You don't actually need to know front-end web development to get anything out of this talk, but I'll use it as an illustration. So let's look at front-end web development as an example. There has been a huge growth in the number of frameworks and tools, which means that not only is it hard to keep up with what's out there, but no one can know everything. If you ever work on a project with multiple unfamiliar tools and frameworks, information overload is a risk.

So development is always going to be a learning process, because there's always a lot to learn. Perhaps that means we can manage information overload by managing how we learn, and it helps to understand a little about how we learn in order to hack the process. The two partners of human learning are our working memory and our long-term memory. Working memory is where we process information: it's the site of our active, conscious thinking and learning processes, and it comprises visual and auditory subcomponents. (There's actually another subcomponent, proprioception, a sense of where your body is at all times, but that's less useful to leverage for learning, at least for non-physical things.) Long-term memory has enormous capacity but cannot engage in conscious thinking or learning processes. So they work together: learning takes place in working memory, and the results are stored in long-term memory.

I want to briefly outline three of the main processes we can hack to alter the way we learn: attention, elaborative rehearsal, and encoding.

Attention is critical to processing information. To illustrate how critical: in 1972, an Eastern Air Lines flight crashed as a result of cockpit distractions. The crew became so preoccupied with a malfunction that no one noticed the altimeter reading or the warnings until it was too late. We can help focus attention with cues and signals: for example, bullet points, paragraphs and headings; visual indicators like really big red arrows; and signalling language, such as saying 'it is important to note that'. We can also help focus attention by leveraging both the visual and auditory components of our working memory at once, as I've done here, potentially badly, I don't know, by giving you a diagram to focus on visually while I describe what it represents. This is called the modality effect: using both modes, auditory and visual.

Elaborative rehearsal helps promote something called automaticity, a mechanism that allows us to largely bypass working memory. Skills that are repeated often enough become established in long-term memory and can subsequently be performed with little or no working memory resources, such as writing, reading and speaking.

When we encode information, we create what are called schemas.
These are memory structures that allow us to treat a large number of information elements as though they were a single element, and we have schemas for everything. For example, while all trees are different and include an enormous amount of sensory information, the shape of the tree, the colour of the leaves, the size, the smell, we instantly recognize a tree because we have a schema for trees. Schemas are constructed additively. For a novice learner, every new thing is unconnected and must be managed separately, but as we gain experience we form connections between the bits of information in our heads, allowing us to treat these collections as single schemas.

But there are limits to our ability to learn. Back in 1956, the cognitive psychologist George A. Miller wrote a paper highlighting these limits on our ability to process information. You may have come across Miller's Law: the magical number seven, plus or minus two. Miller's paper suggested that there's a definable limit on the amount of information that we can usefully process at any one time, and when that limit is exceeded, information overload makes our learning inefficient, if it keeps working at all. Some web designers have used Miller's Law as justification for only putting seven things or fewer on a page. That's a misinterpretation: it wasn't the number of things that was the important result, but the fact that there was always a limit. We now equate the 'things' themselves with the schemas I mentioned earlier, so someone with experience, who has more connected schemas, will be able to handle more complex information in working memory at once.

So how do we use the result of Miller's research? Well, in the late 1980s John Sweller, an Australian educational psychologist, asked what we can do to learn more efficiently given the limitations Miller wrote about, and he published the paper in this journal here. His research led to something called cognitive load theory. So what is cognitive load theory? Very simply, it defines cognitive load as the total amount of mental effort being used in working memory, and it describes a universal set of evidence-based principles for managing that load so that learning is more efficient. I can give you a brief overview of the core concepts.

Total cognitive load is comprised of three types: intrinsic load, extraneous or irrelevant load, and germane or relevant load. Intrinsic load is imposed by the complexity of the task being performed. For example, learning to juggle with ten balls is more complex than learning with three, and we can only reduce intrinsic load by juggling fewer balls. In other words, if we're faced with a large, complex task, we want to break it up into smaller ones that we can then tackle individually. I want to make a brief aside here, because since I wrote these slides I've realized there's another way you can do this. A nice simple illustration: while juggling with three balls might be simpler than with ten, juggling with three chainsaws is probably a bit more complex, even though you've reduced it to the same number of things. So there's also a different kind of simplification. If anyone knows the difference between general relativity and Newton's laws of motion: Newton's laws work in most cases, but general relativity is required for some of the edge cases, and you use the simpler form of the law according to what your need is.

But anyway, going back to the slides: you might be familiar with the agile hierarchy, where we define a high-level epic, which we break down into user stories, which we further break down into individual tasks. Estimating an epic would impose an extremely high intrinsic load, but estimating user stories or tasks should be more manageable.

Extraneous or irrelevant load is imposed by distractions, or by tasks that are irrelevant to the goal: noise distractions, needing to learn an unfamiliar tool or set of tools, or trying to decipher code that's unreadable or difficult to follow. To manage extraneous load, we could try working in a quieter environment or wearing headphones (though there are certain cases where that doesn't actually work, but that's a bigger aside), or try reducing the number of tools or libraries we use to a minimum. There's a growing movement, originating with Keith Cirkel, to use npm scripts alone for front-end task running, which won't mean a lot to anyone in the audience who isn't a front-end web developer, but it means we can eliminate tools like Grunt, Gulp and Webpack, which would be great.
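To give a rough idea of what that looks like: here's a minimal sketch of the scripts section of a package.json. The script names and tool choices are just examples, not from any particular project.

```json
{
  "scripts": {
    "lint": "eslint src",
    "build:css": "node-sass src/styles/main.scss dist/css/main.css",
    "build:js": "browserify src/index.js -o dist/bundle.js",
    "build": "npm run build:css && npm run build:js",
    "test": "mocha test"
  }
}
```

You run these with plain npm run build or npm test: the tools are invoked directly, so there's no Gruntfile or gulpfile sitting between you and them to learn and maintain.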
We should also make sure that the code we write is readable; never forget that the person unable to decipher your code might be future you. We can do this by using better cues and signals in our code. I'm sorry that this example is in JavaScript; it's actually from the Angular source code, if that means anything to you, and I couldn't quite contemplate writing a terrible bit of code myself. It might not be immediately obvious that this function does two distinct tasks: there's the null check, and there's the actual work itself. But by adding a bit of appropriate white space and some line breaks, we cue the developer that there are two distinct tasks here; there's a small sketch of the idea below.

Comments are also important to use wisely. As Jeff Atwood succinctly puts it: code tells you how, comments tell you why. Don't let anyone suggest to you that good code should be self-commenting, because that's complete nonsense. Good comments signal that all is not as straightforward as it might appear, and that reduces the load on anyone trying to maintain your code. If you've ever spent more time than expected wondering why a section of code doesn't do what it looks like it should do, then you may understand what I mean.

Consistency is also important for reducing load. Style conformance tools can help, without imposing the extra load of learning a style guide; there's a small sketch of that below too. I don't know if anyone's ever used a style guide, or tried to use one, or tried to implement one, but how many times have you seen such a style guide just gather dust because of the extra effort required to maintain or learn it? So, by reducing irrelevant tasks and distractions, we focus attention.
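Here's that readability point as a sketch. To be clear, this is not the actual Angular function; it's a made-up example in the same spirit, with hypothetical names.

```javascript
// Bunched up: the null check and the real work run together,
// with no visual cue that these are two distinct tasks.
function formatLabels(items) {
  if (!items) return [];
  return items.map(function (item) { return item.name.trim().toLowerCase(); });
}

// With cues: white space separates the two tasks, and the comment
// explains why the guard exists rather than how it works.
function formatLabelsWithCues(items) {
  // Callers pass null before any data has loaded, so we treat
  // "no input" the same as "empty input" instead of throwing.
  if (!items) {
    return [];
  }

  return items.map(function (item) {
    return item.name.trim().toLowerCase();
  });
}
```

Both versions behave the same; the second just hands the reader the structure instead of making them reconstruct it.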
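And on the consistency point, this is roughly what a style conformance tool looks like in practice. I'm using ESLint purely as an illustration here, and the specific rules are just examples.

```json
{
  "extends": "eslint:recommended",
  "rules": {
    "indent": ["error", 2],
    "quotes": ["error", "single"],
    "semi": ["error", "always"]
  }
}
```

Check a file like this in as .eslintrc.json and wire it into a lint script, and nobody has to learn or police a written style guide: the tool flags deviations mechanically.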
Lastly, germane or relevant load is a beneficial load imposed by tasks that are relevant to the overall goal. It helps us to connect bits of information and form more complex schemas. Repetition and context variation give us the skills to apply knowledge in a wider variety of situations. By repetition I mean practice: schemas need reinforcing through repeated recall and usage, which makes them easier for working memory to retrieve and process. You can vary the context by seeking out multiple sources of information; there is no such thing as the definitive tree. It might be that only after seeing something expressed in five different contexts are you able to confidently recognize it yourself in a sixth.

So how about a different context? This might be easier to understand if I give another example. Say the goal is for a developer to better understand an existing code base, which might include understanding the framework that has been used, particularly if it's nuanced. We might assist that goal by varying the areas of the code base they're assigned to work on, until they can confidently develop a feature from scratch. We can also develop more flexible schemas through pairing, which can help us discover varied approaches and methods that we haven't yet encoded within our own schemas. Pairing can also leverage both subcomponents of working memory, auditory and visual, particularly if one person visually examines the code while the other explains what it's doing. So germane, or relevant, load helps us elaborate and rehearse schemas, encode new information and, hopefully, promote automaticity.

So, to sum up: we constantly need to learn. Cognitive psychology can tell us how we learn, but it also tells us there are limits to our ability to learn. Cognitive load theory can help us work within those limits by giving us a set of guiding, evidence-based principles. If we manage intrinsic load by breaking large tasks into smaller ones, reduce extraneous load by eliminating irrelevant tasks and distractions, and increase germane load with appropriate repetition and varied learning contexts, we promote efficient learning, improve productivity, and escape the horrors of information overload. Thanks. Any questions?

Okay, thanks Joe. If anyone's got any questions, I'll bring the mic to you so I can get it on the video.

Hi, I'd like to hear a little bit more about the headphone distractions that you talked about. I use headphones a lot at work, and I'd just like you to elaborate on that if you can.

Yeah, there's actually a really nice article that I retweeted a week or two ago which goes into a lot of detail about this. Basically, if you're trying to learn something completely new, auditory distractions of any kind are bad: they will not only inhibit your ability to learn and create schemas, but they will also prompt you to form weird connections. It didn't go in depth into what the weird connections were, but I think it means you construct the schema in a rather cluttered manner. It also said that if you're doing boilerplate code, music that you've heard a lot before, and preferably not vocal music, works quite well; hard trance works really well for me, particularly in an open-plan office. But if you're sitting somewhere like home, where you don't have any noise distractions, then nothing on your headphones, no noise at all, is better. It did mention that vocal music is particularly bad, because you spend some working memory processing to understand the lyrics, and that new music is bad because you're essentially learning it, deciding whether you like it or not, so it takes up a chunk of your working memory in processing. There's definitely more on this; the jury is out on whether listening to music while learning is good or bad, but it depends.

Hi, back here. My name is Sai; I did CogSci for my degree. Oh, crap. And cognitive neuroscience later.
So one thing you didn't cover explicitly is chunking, which is pretty related to schemas, but not quite. The number seven plus or minus two refers to seven plus or minus two chunks, and a chunk is not...

Yeah, unidimensional chunks as well. I mean, he mentions it in the paper, I think, doesn't he?

He does in the paper, but it's been elaborated more in later studies.

Letters are a good example of chunks, aren't they? Letters from an alphabet can be a good example of chunks, and then you've got words, and then you've got sentences, paragraphs, et cetera.

Yes, and that's a good example of why chunks are not well defined, and are a thing that you can manipulate. If you're just starting to learn, if you're a child, then you look at words where letters are the chunks. If you're learning sign language, you look at fingerspelling and see one letter shape at a time, but with skill you see the whole shape. Same thing with chess: you can see the shape of a board as opposed to the specific pieces.

Absolutely. We don't read letter by letter once we're fluent with reading; we read word by word, and sometimes phrase by phrase, depending on how predictable it is. That's the chunks getting bigger, I guess, isn't it?

Or different, yes.

Okay, I think we've got time for one more. I can't see a thing, so can someone wave if you have a question? Okay, well, I've got one then. You're a developer, yes? You seem to know front-end. Yes, yes. How important do you think this is for individual developers, and for managers, to learn to manage tasks in the workplace?

Framing it as cognitive load theory is perhaps not that important, but it does give an alternative explanation as to why we should do a lot of the things we should be doing already: writing clear, maintainable code; writing code which isn't all bunched up, where we put appropriate cues and signals in our code; knowing why we should use comments to explain why we're doing a particular thing. And part of it explains why auto-generated comments are actually a terrible idea: while comments can help novice developers understand code, auto-generated comments just become noise, and they will hinder not just novice developers but experienced developers from understanding the code base.

Thank you Joe. Thank you.