So with that, it is my pleasure to introduce Dr. Tony Bryk to our convening. And contrary to what you may have inferred from the introductions, I do not walk on water. I drown just like everybody else does. But I'm delighted to be here this morning to share with you some of these ideas. My talk this morning is pretty high level. It's about the basic principles and how these things connect together. But at a few points along the way, I'll drill all the way down to some very specific work to give you a flavor of what this looks like as educators try to work inside these ideas to solve what they think are really pressing educational problems in their particular learning contexts. I'd like to begin with a story. It's a story that takes root in a town just up the road a little way from here called New York City. And I'd like to introduce you to Alexa. She's a sophomore at Boys and Girls High School in New York. She's working very hard, but she's also struggling to succeed. Graduating high school is a challenge for her, and success in post-secondary would be even more so. As she walks down the halls on her way to class, she might stop at a water fountain for a drink of water. She has good clean water to drink every day. And this is true everywhere across New York City. Good clean water reliably delivered day in and day out. It's easy to take that for granted at every fountain in every building across a city as big as New York. But it wasn't always that way. Let me take you back to the 1700s. As settlers arrived in New York and formed communities, they would dig individual wells. They would dispose of their waste near these wells, and you can imagine what that led to. This kept happening all the way into the 1800s. The population grew up around the wells, the wells got polluted, and they had to find another source of water. So they kept digging more and more of the same. In fact, people were getting sick from the water.
It got so bad that by the summer of 1832, 150 people in the city of New York died from cholera. New York City's water supply was a disconnected set of wells. Residents just couldn't count on the delivery of good water. What used to work in an earlier time just didn't work anymore. They needed a new idea, and more importantly, they would need to work very differently together to achieve better outcomes. They would have to solve this problem together as a community. The inspiration for a solution: a very different source of water, the Croton River, some 40 miles north of New York City, and a basic scientific principle, gravity. That's how they would get the water from 40 miles north all the way into the city. Nothing quite like this had ever been done before. There were constant challenges along the way. They had to invent new tools and new construction methods. It took thousands of hands and minds working together to complete that project. The new system delivered its first drop of water to New York City on July 4, 1842. It was regarded as a triumph of engineering and innovation. It was also vivid testimony to what thousands of people, working together in very different ways to address a big problem, could accomplish. Over the last 150 years, the city's water system has continued to evolve and develop in response to changing needs. Today, New York's water system remains the envy of the world. It is literally one of the largest municipal water supplies on the planet. The scale is remarkable, and so is the quality. New York City water regularly wins recognition for good taste. Believe it or not, that's what they tell me. How incredible that every New Yorker can enjoy good, clean water every day, yet never see the complex system that brings that water to their faucets. I share this story with you because many of the challenges I think we're confronting in our educational system today look a bit like New York City's water problems of the past.
Back then, it was a lot of different fragmented efforts to solve a big problem. Sometimes that's what we've been doing in education too. Lots of people working really hard, but seemingly unable to make the big improvements we seek. The challenge for New York City was delivering good water reliably for all of its citizens. Our challenge in education is a good education reliably for all of our students. A rapidly increasing number of new ideas are being thrown at our educational systems about what we should do to improve. But we typically lack the practical know-how to convert what often are good ideas into effective action. And we typically lack the time, the human capabilities, and the institutional infrastructures to make any of these ideas actually work productively at scale. So what do we do? We tend to overpromise: this thing is really going to do it for us. This is going to solve all of our problems. We get frustrated about how difficult it is to move from the idea to effective execution, actually getting it to work on the ground. We then abandon the idea and move on to the next new idea. There's a basic tenet in improvement science: if we keep doing what we've always done, we will continue to get what we've always gotten. All of this, this way of doing reform over and over again, can feel absolutely overwhelming to educators. It's no wonder that so many of us are experiencing initiative fatigue, because we go through these cycles over and over and over again. The truth is that our schools and colleges are gradually getting better, if you look at the data. The problem is that our aspirations for what we want schools to accomplish are increasing at an even faster rate. And a chasm has been growing for some time between our rising aspirations and what we can routinely achieve. This chasm is greatest for our most disadvantaged students and in our most disadvantaged school communities. It has become one of the great social justice issues of our time.
In education, we have a learning problem. We have a lot of people working very hard: students, teachers, faculty, whole schools and colleges, district leaders. Yet we can't seem to connect really hard work, really good intentions, and often really good ideas with widespread improvement. By analogy, we may dig a good well here or there, but it often doesn't last and it surely doesn't scale. We need a better way to get better at improving. So this perspective led us to think about how else we might do this better. We scoured both nationally and internationally, looking for other sectors and organizations that had something in common with this big complex enterprise called public education in the United States, but that seemed further along. There was something we thought we could learn from them, so let's go out and try to study them. We asked: how does quality improvement actually work on the shop floor in industry? How do design firms develop new products and services that actually make our lives better and easier? How are very large, complex healthcare institutions going about the task of making their places safer and more patient-centered? And more recently, how are structured scientific networks speeding progress on problems that previously were thought too complex to ever be able to address? Pulling this together, plus about 20 years of working in and around issues of school improvement in Chicago, doing everything from working directly with principals and teachers on the ground in some of the most disadvantaged Chicago public schools, to a two-year leave of absence working in the superintendent's office, and a lot of things in between, we call the result an improvement paradigm. This paradigm is anchored around six core principles, and let me just introduce them very quickly. They are: be problem-focused and user-centered.
Focus in on specific problems; attend to variability in performance; see the system that's generating that variability in performance; combine measurement with disciplined inquiry, the kind of iterative steps that Andy talked about, so we can learn as we go along; and then organize ourselves in networks as practical scientific communities trying to solve problems together. Those are the six big ideas that stand behind this work. Now let me tell you a little bit more about each of those and how they work. The first principle: be problem-focused and user-centered. Improvement science starts with a deceptively simple question: what is the specific problem or problems we're trying to solve? The key word here is specific, and that little interplay Andy and I had some time ago is a classic example of this. Oftentimes people come in and they'll say something like, we want more National Board teachers. Well, that's a solution. That's not a problem. And then if you ask, well, what's the problem? Well, we want to transform education in America. Yes, that's an aspiration, but it's not a specific problem we can work on. So what is the specific problem, and how might developing more National Board teachers potentially be a solution to that specific problem we're trying to engage? In our own work, our biggest improvement effort has focused at the community college level. It started with a broad problem, a broad concern: the very low success rates of students moving through community colleges. But framed in this way, this is a very big problem, because there are many different factors that contribute to why we have these very low success rates. So what improvement science says is, let's try to understand this problem a little more explicitly. Where are the gatekeepers? Where are the main blockers to student success? We need to understand what's going on here through the student experience.
So we conducted what are called user-centered, empathy exercises. How do students actually experience what's going on here? That meant talking to students. It meant directly engaging with some of their experiences. It meant bringing together teachers and educational leaders from these institutions, because this is a very big problem and everybody sees it from a slightly different angle. And so part of the task of solving the problem is figuring out what the really key things going on here are, the things we really have to direct attention to. In going through this process, a key observation emerged. The single biggest impediment to student success in community colleges is the high failure rate in developmental math courses. Across the country, upwards of 60% of students who go to community college are assigned to developmental math courses, and upwards of 80% of them never get out. If you don't complete a college math requirement while in your community college, you can't transfer to a four-year institution. You can't qualify for many technical certification programs. Your opportunity for a better future is blocked right there. That's the problem that we thought, gathering together a group of faculty and institutions, we could actually do something about. Where typically only 15% of the students who go down the traditional pathway acquire college math credits in two years, this networked improvement community is now consistently seeing at least 50% of students achieve college math credit in one year. So the community has tripled the success rate in half the time. And these marked improvements are occurring in every participating college, which is now over 50 institutions, and for every distinct subgroup of students. That brings us to the second principle.
If we want quality outcomes to occur reliably for students in the many very different contexts in which they're educated, then variability in performance is the problem to solve. Education is complex work. Regardless of sector, when you find the kind of task complexity that we ask teachers to engage in in their classrooms, and you see the kind of organizational complexity in which that work is occurring, you will find wide variability in performance. This shows up in healthcare institutions, just like it does in education. You see outcomes from healthcare institutions that look like a broadly distributed normal curve. This is what organizational studies teach us. We're not alone in seeing these kinds of outcomes. We need to learn, then, how to achieve outcomes reliably for different subgroups of students and teachers and in the many different kinds of organizational and community contexts in which you do your work. It's only by focusing explicitly on these sources of variability that we can move from the wide, unacceptable variability we tend to see to a future where many more students succeed and where disparities in outcomes continue to shrink. That's the essential quality improvement question: how to get good outcomes reliably over the many different sets of circumstances and contexts in which you might work. It's very different from how we tend to think about research and evidence. Here's a case example. I want to share with you first-year results from a large randomized field trial. This is a $40 million study, by the way, funded under the i3 initiative. It's a study of the Reading Recovery program, which, for those of you who might not be familiar with it, is a very intensive one-on-one tutorial by a specially trained teacher for first-grade students who are at risk of not learning to read. It's one of the most well-developed educational interventions that exist in the United States.
There's 30 years of work and more behind this. All right, first-year results. Big field trial. The effect size from the first-year results is 0.7. That's a very big effect size for an educational intervention. So you see something like this and the reaction is, oh, let's go spread this. Everybody should be doing this. But fortunately, the way the study was done, we're actually able to estimate not just the average effect, but what the effect looked like in every school in which Reading Recovery was introduced. And when you do that, what you see is wide variability in performance. As you go across the schools, you see, for example, that there's a good 20% of schools down the left-hand tail where negative, null, or very small effects occurred. For the cost of this intervention, if you're in that 20% group, you're probably not thinking this is a good idea. So a quality improvement perspective says, well, we have some failures here. What's going on? Why is it not working here? Maybe they have a specific subpopulation of students that the program isn't well attuned to. Maybe the program doesn't align well with the way the base instruction is working. Maybe there were organizational issues, so the Reading Recovery teacher isn't actually able to do this in the very systematic way it's planned out. Or maybe we just have somebody who wasn't especially well trained. There are lots of potential explanations. We need to understand why these failures occur, because embedded in them are opportunities to get better. The same is equally true at the other end of the distribution. We have some schools that are getting effects almost twice the size of the average. These are referred to as positive deviants. Might there be something going on in that 20% that we could conceivably learn from? As a quality improvement goal, we'd like to eliminate that left-hand tail of the distribution.
We'd learn from the right so we could pull the center even further in that direction and get even better. That's the general idea of quality improvement. And it leads you to a very different set of questions and a very different way of working than just saying the average effect is 0.7. What that average tells us is that this thing can work, but it doesn't tell us how to make it work. It doesn't tell us whether it will work in your setting with your students and your constraints. It doesn't tell us how to get better at making it work under those sets of circumstances. This is the practical knowledge building we need in order to achieve quality reliably across lots of different circumstances. And that takes us to the third principle. We have to see the system that's producing the variability we're observing. To do that, we need to focus in on the work that people do and why they do it as they do. Only then can we begin to see the key problems that often are obscured from sight. Put simply, improvement means thinking about how organizations actually work. It means seeing how various people, processes, and different parts of the school system interconnect to set up the conditions that ultimately produce the outcomes we get. In the Community College Pathways initiative that I mentioned before, there are multiple strands to this. It's anchored in some shared instructional materials. The improvement efforts also integrate understandings about the essential social supports students need to persist to completion. Faculty are working in new ways that aim to support their own efforts to improve. So there's a kind of social community around faculty, where they're learning from each other what's working, for whom, and under what set of circumstances. The network needs data, but it's a different kind of information. It's not just knowing what the outcomes are, because by the time you've got the outcomes, it's often not very helpful.
We need information while students are progressing through this to figure out how to get better at what we're doing. And all of this needs policy support so that this kind of continuous improvement work can move forward. Lastly, and in some ways most important, all of these pieces must fit together in a coherent way in order to achieve the quality outcomes we seek. As we think about any kind of educational problem that we want to try to make some headway on, typically there's no single process that's the cause, nor, for that matter, any one person to blame, if you're of that orientation. The key idea coming out of this kind of systems view is that failures are typically a function of the system. And if we don't understand the system that's producing the failures, we simply can't solve them. Unfortunately, this idea of seeing the system tends to get lost on policy leaders who want to grab onto something or blame someone and call that the answer. All right, so take a look at this. Again, in the context of the work of this community college network, there was a problem: extraordinarily high failure rates among students assigned to developmental math. How do we understand what's driving this? Well, we did this combination of looking at extant data, student empathy exercises, and then engaging in lots of conversations with the different educators involved in this work to surface the different explanations they thought were going on. And we pulled this all together to come to understand there were about six things that seemed to be driving this. The traditional pathway is set up so that students take a series of courses, and if you fail one of them, you've got to take it again. Analyzing the data, we found that we actually lose more students between the courses, students who never enroll in the next course, than we lose to failure within the courses. So transitions between courses are a problem.
That has to do with what the registrar's office does and how many sections of different courses are actually offered from semester to semester. Clearly, there are things in the core of instruction: the materials, the way the teaching and learning is organized. There were also embedded literacy and language barriers. On one visit to a community college in South Los Angeles, English was not the first language for almost all of the students. Interestingly, English was also not the first language for the instructor, and he was of a different ethnicity than the students. And then you have the language of mathematics, which is what we're trying to teach, and it's all occurring in English. So you look at this and realize, well, this is a four-language problem. For many of these students, it's not that they can't understand the mathematics conceptually, but we have some really significant literacy and language barriers to take on. There's also student motivation and engagement, because by the time students get to this point, they don't see any relevance or value in what we're asking them to learn. And they've come to think about themselves as, I'm not good at this. So unless we can engage them and get them believing that they can achieve, they're just not going to commit the effort necessary to do so. Some folks have done careful qualitative research actually following students through this experience. It turned out students disengage early on. They're often making the decision to disengage in the first or second week, but nobody sees it until the end of the semester when the W's and the F's show up in the records. So that says we've got to pay a lot of attention to what's happening in those first two weeks if we want to actually hold on to students.
And then there are also some faculty practices and beliefs to challenge, because in some math departments at the post-secondary level, faculty actually think it's a sign of rigor that many of their students fail. So this is something we have to figure out a different way of thinking about together. All right. So that's digging into the problem. That activity, which is called causal system analysis or root cause analysis, is, I believe, something you're going to try to work through a little bit this morning after my talk, and there's a tool called a fishbone diagram that helps to organize that conversation, which you'll get some chance to work through. On to the next principle: embrace measurement. Achieving improvement also means constantly challenging ourselves. We tend to see what we want to believe. This is a very basic human phenomenon that psychologists call confirmation bias. It's not an individual weakness; it's just part of our human nature. And this is why we embrace measurement. We need data to test whether what we're doing is actually working. We need data to push back at us, to cause us to stop and think, well, maybe there's something else going on here that I just haven't been focusing enough attention on. That's the key role of measurement in this. Now, today we have a lot of student outcome data, and we have much better ways to look at these data than ever before. And so the variability in student outcomes is much clearer than ever before. But to guide improvement, we actually have to get down into the actual work that people are doing and the kinds of norms and rules that shape why the work happens the way it does, because it's in the work that people do that outcomes actually take root. So we need data at the level at which we might change practices, to inform whether the changes we are introducing are actually an improvement.
I'm fond of an old story that the late Al Shanker once told. He said, if you ran an assembly line and you had a lot of defects coming off the line, probably the last thing you'd do is put more people at the end of the line counting the defects coming off. You'd go back up into the organization and figure out where the problems are occurring and how you're going to fix them. This is very much in that spirit. I want to take you into this a bit in the context of that developmental math initiative in community colleges, as we went from analyzing the problem to the next step, developing a working theory of improvement. What is it you think you need to work on? If this is the problem, what do you need to work on in order to actually move on that problem? It led to four big things, which are called primary drivers. We had to change how teaching and learning was occurring; that's obviously the core of the work. Because of the concerns about student motivation and engagement, we had to have some explicit strategies for how students would more productively persist to completion. There were the language and literacy issues. And then, how would we support faculty to learn to teach new content in new ways and to develop some different mindsets about what it's actually possible for these students to achieve? Each one of these gets fleshed out in more detail as you go along. I'm just going to quickly illustrate one of these: productive persistence. To work on this, we thought we had to work on five things. The first was a relatively simple thing: we got rid of those transitions. We reorganized this experience as a pathway. Students enroll in the fall and are automatically enrolled through the spring. You're part of a chartered community, a group of students working with a faculty member to achieve a valued goal.
We're going to get college math credits together. It's a very powerful strategy, and there's evidence it can lead to improvement. So that's the first one. Then there's the issue of starting strong. We're going to pay a lot of attention to what happens in the first two weeks, because it's really important. This is actually a basic change idea: early successes, early wins, are very important for building agency. It's true for students; it's true for adults. We're also going to focus on issues of the relevance and content of the math, and a couple of other things I won't expand on here. So let's focus in on starting strong. What does that mean? Well, that actually translated into five things the network thought it had to work on. There are some direct interventions, particularly around this idea of shifting students from a fixed mindset to a growth mindset, which actually has a very strong evidence base, so we're going to bring those into the class. We're going to really work on opening lessons, to really think about content that's going to convey relevance, meaning, and value to students. We're going to work on classroom norms and culture. Faculty tend to come in and start teaching content; we're going to bring in the things many of you know about how to actually form a productive classroom community, like getting to know every one of your students on a first-name basis in that first week. So a bunch of routines like that. We need data to know who might be disconnecting before they've actually left. So how are we going to identify at-risk students in those first few weeks? And then, again, professional development around starting strong. For each one of these things, there is some research evidence that says it could make a difference. But then the question arises when you put these things together.
How do we know that this set of changes is actually addressing the starting strong work and ultimately leading to students persisting more productively to completion? Well, to do that, the faculty came together and decided, we're going to have to get some information from students. We're going to gather some data on day one, and we're going to go back and ask the same sorts of questions of students at week four. And working together, they put together what we call a practical measure. Because this isn't measurement for research. It's not measurement for accountability. It's something we're going to put into the web of instruction. Everybody's going to do it as a regular part of their teaching and learning. So it's got to be the smallest amount of information that could possibly help to inform this work. Faculty concluded, well, maybe we can get three minutes on the first day of class. So what are the best 20 to 25 questions we could conceivably ask students that would help us know whether we're working in the right direction? What came out of this was, on average, changes from day one to week four: much stronger reports of students' growth mindset, reduced fears about not belonging in the classroom, and increased reports of seeing relevance and value in the mathematics they're learning. All of this was on average, which is great news for the team that was working on it. But you didn't get the same level of improvement in every college and every classroom. There were some places where this really seemed to be working well, which then created a second set of questions. What seems to be going on there? What can we learn from this subgroup within our community that could actually help us get better at starting strong? So the point is that the measurement fueled a second iteration of improvement work, because it became clear that some of us were actually starting stronger than others.
So how do we learn together? All right. Next principle: learn through disciplined inquiry. Making the right changes requires being very systematic about how we go about doing things. This idea of being very systematic is important. There's a classic model in decision-making called the garbage can model, and it's very evocative. A lot of ideas go in and kind of bounce around, and somehow, through some process that's sometimes very hard to understand, something comes out, and that's what we're doing. This is not like that. It's being very systematic about how you go about doing the work of improvement. In being very systematic, as Andy mentioned earlier on, learning from failures is essential to ultimately achieving the quality outcomes we seek. Recognizing that learning from failures is key has two big implications. First, it encourages us to start small, aiming to learn quickly while minimizing costs. At this point, we don't want to affect a lot of people's lives or spend a lot of money or take a long time, because you know that great idea we have? It might not actually work. So we want to figure out really quickly whether we're on the right trajectory or not. Second, it also means being very deliberate about how we expand the change efforts. Once we think we've got something that's working in some places, we need to accelerate learning about how to get it to work in many other places. And we're very deliberate here too. As you take something for which you've got some evidence that it seems to be working, you literally want to take it to more diverse contexts because you literally want to break it. You want to find the places where it doesn't work, because you need to make that variability in performance manifest. Every time it doesn't work somewhere, there's an opportunity to learn how to make this reform, this initiative, this innovation better.
So that's part of it. This is all about learning to improve. It's not about finding something, implementing it everywhere, and then you're done. It recognizes that getting this kind of quality improvement work to occur is a learning journey. So how do we structure the journey in ways that increase the likelihood we will actually learn as we proceed? All right, here's an example. One of the other networks, a small network we worked on for two years in partnership with the American Federation of Teachers, was called the Building Teaching Effectiveness Network. The context, the problem, was that the weak and incoherent systems that exist for bringing new teachers into teaching typically fail to support them well in learning to practice well. New teachers often feel overwhelmed, unable to achieve their aspirations for students. Their students often don't learn as well as those in other classrooms, and then the teachers are back out the door in a few years and we bring new people in. It operates like a revolving door. In trying to focus in on the array of problems that exist here, the team, and this work was done principally in Austin, Texas, though there were also folks working on it in Baltimore and at New Visions Public Schools in New York, focused in on what they thought, with some evidence to believe, was really the high-leverage problem: that this is really a school-based issue. It depends on the quality of the professional community that exists in the school to support new teachers, it depends on the kind of feedback that new teachers are regularly getting in order to improve, and it depends on whether the new teacher develops a supportive relationship with their principal, because this is basic organizational sociology. If you're going to persist in this work, do you feel like you're being supported by your supervisor? So all of that focused attention.
Let's look at this feedback process occurring between principals and new teachers and how it interconnects with the support these new teachers are receiving. It started with one school principal working with new teachers at his school, with a team engaged in this work with him, and it began with developing a basic protocol: what are the things that should happen in a feedback conversation? Because when the team went to look at it, there was no system. There was no training for this, some people didn't even do it, and there really wasn't any understanding about what might be a good way to organize this conversation. So the first step was: is there a protocol or scaffold for this conversation that actually might work? It started with one principal and a few teachers, and it was so productive that it then went out to five schools. So now it's going from one person out to five, and when it got out to the five schools, some of the task complexity in those principal-teacher conversations became more manifest, because part of the protocol is identifying specific targets for improvement and connecting supports to them, and some of the other principals had difficulty right there, with that part of the protocol. So that involved building some professional development supports for principals on that critical aspect of the principal-teacher feedback conversation. The task complexity started to manifest itself right there. Then it went out to a whole vertical team: elementary school, middle school, high school. And as it moved out there, you started to see some of the organizational complexity, because some of these schools are very big. There were 40 new teachers, and the principal couldn't do all these conversations alone. 
They had to have a team of people doing it, and as soon as you bring a team of people into it, you get issues of coordination and communication around the feedback conversations, because teachers started telling us, well, I'm getting different advice from different people. Sometimes it's too much advice, it's unmanageable, it's uncoordinated. So you get a new set of problems that emerge at this level that have to be worked through. And then finally, as it started to go district-wide, you start to engage a whole set of district policies around this. How are we going to develop professional development more generally for new principals and people coming into this work? If we really think this is important work, it ought to become part of the principal evaluation system. That is, we ought to be signaling to principals that the development of their new teachers really matters to us. So what is that going to look like? It opens up a new set of areas in which this kind of improvement activity needs to occur. All right, the sixth principle, and here I've worked my way to the conclusion: organize as networks. The educational systems we've built and the problems embedded within them are now so complex that few can solve them alone. We need coordinated collective action involving a range of experts: teachers, teacher leaders, administrators, researchers, and, depending on the problem, maybe technologists and others. This means we need to break down the silos in which we tend to live and engage with one another in this improvement journey. We call these intentionally designed collaborations networked improvement communities. Each one, a NIC for short, is organized again around a specific problem. That's why we're all coming together: there's some specific problem we're going to work on. Having gone through the problem analysis, we're going to get to some shared working theory of improvement. 
These are the things we think we need to work on in order to solve this problem. Those are the purple boxes I showed you before. We need to have some common measures. Sometimes we're going to have to invent things, like the productive persistence survey measures that the community college NIC developed. And we're going to need some common tools to inquire together, so that we can learn whether the changes we're introducing are actually moving us in the right direction. One other feature: each networked improvement community is deliberately designed to accumulate this practical knowledge and make it quickly accessible to others for further testing. This is actually the workings of a scientific community, but one dedicated to practical problem solving. Embedded here is extraordinary potential for educational improvement. We are the quintessential large network. If you just think about it, we have hundreds of thousands of people doing similar work every day. So that says to me that for virtually any question pressing in my classroom or my school, chances are there's somebody somewhere out there who's been working on this for a long time and has figured out something on which I could build. I just don't know who they are, and I don't know exactly what they've learned. But we could if we chose to organize ourselves as networked improvement communities. This means pushing aside old norms about your research interests versus my personal research interests, or views about the uniqueness of my classroom. It means breaking down the silos around teaching that separate teachers from one another, and it means challenging the traditional research-practice divide. We're all joining together now as improvers, building this kind of practical knowledge about how we actually improve teaching and learning and the organizations in which this activity happens. 
When we initiated a community college networked improvement community, the idea that faculty from diverse institutions would subscribe to common learning goals, base their lessons on common instructional materials, and collect common data to learn and improve together seemed anathema to long-standing norms about faculty autonomy. But these faculty came together to solve an important problem. They found that working together, joined by leading educational researchers supporting them in what they were trying to do, and operating within these norms and practices of a scientific community, helped them achieve what they cared most about: dramatically improving the learning and success of their students. This improvement paradigm is different. That little figure up there on the left, that's it. That is a fishbone diagram, by the way. This other thing is a driver diagram. It's supposed to look like one. The improvement paradigm is different. It joins the discipline of improvement science with the power of networks to accelerate learning to improve. It values being very disciplined, analytic, and systematic in the ways necessary to test ideas and figure out whether changes are moving us in the right direction. It's inclusive in drawing together the expertise of practitioners, teacher educators, researchers, a whole host of folks. And it's very deliberate in the way it organizes its improvement activities, into something akin to a scientific community. Now, we can talk about strategies and methods all day long, but in the end, improvement is really all about the people. So to start to think about doing this work deeply in our educational institutions, it actually starts with thinking about, well, how do we prepare new teachers? Systematic improvement needs to be a part of their basic socialization into the profession. 
They need to learn about improvement science methods. Imagine if, as a routine part of their clinical preparation, they had an experience like the following. They become a novice member of an improvement team inside a school. Perhaps they're iterating through what was referred to earlier as a Plan-Do-Study-Act cycle around some unit of instruction. An expert group of teachers has been working on a unit of instruction for a while, and the lead teachers have some questions about it: there's some part of it we think should be better. So the novices join with these more experienced teachers, study the lesson, and identify the particular piece we're going to try to make better. The information we're going to gather, typically something from students' work, will tell us whether this is an enhancement or not. We're all going to go out and teach it, and then we're going to come back together and talk about what we've learned. Imagine if every new teacher had that experience, and contrast it with what we typically do, where the culminating task in teacher education is to go out, invent a new unit of instruction, and teach it. We're basically signaling to people that improvement is a solitary act that we do in the confines of our own classrooms, rather than something we build together as a professional community. So I would say it starts there. It's anchored in what you do. This work needs the active, full engagement of teacher leaders. This is very different from other conceptions of improvement. You're no longer passive recipients of knowledge that somebody else is developing and that you're supposed to implement with fidelity. You're where the work of actual improvement occurs, in classrooms. You're the improvers. Your efforts are central to making this kind of activity happen. Making space in your workload won't be easy, but the profession needs you to engage in this kind of work. 
It also means engaging academic researchers, but in very different ways here too. You and your students are no longer the subjects of their inquiries; we're now collaborators trying to solve common problems together. Professional organizations like the NEA and state and district policy leaders: we need them to advocate for and enable capabilities among educators to engage in this kind of activity. And it will take supportive environments where it's safe for educators to learn from failures so that we can all get better. We need to nurture the supportive relationships that already exist and rebuild them where they may have atrophied. Improvement requires sustained collective action, and in such contexts the quality of relationships really matters. So we need to work together to make this happen. In contrast, when any of us feels threatened, disrespected, vulnerable, it's just really hard to learn. In 2020, the third New York City water tunnel will open. It's hard to believe that a man-made system built 150 years ago continues to bring fresh water to millions of people every day. But it does. We believe in the same boundless possibilities for our educational systems. I truly believe that we can do better by Alexa and by so many other students just like her. Imagine a future in which the kind of learning to improve that I've been trying to describe is occurring every day in thousands of settings, involving tens and maybe even hundreds of thousands of educators, scholars, designers, and many others. Our field could become an immense networked improvement community. We could greatly accelerate how we learn to improve. We could achieve valued outcomes that we now aspire to but have no realistic strategy to actually accomplish. 
Whether it's all children reading by grade three, all children career- and college-ready by the end of high school, all children achieving a valued occupational certification or a two- or four-year degree, or, for that matter, all new teachers succeeding in educating their students: it's going to take different kinds of thinking and learning and ways of working together to get all of this to happen, and to happen faster and better. This is a challenge that I would be grateful for the opportunity to engage with you on: taking on together the journey of learning to improve. In closing, I just want to say how grateful I am for the opportunity this morning to share some of these ideas with you, and I look forward to being a resource as you go into your groups for the remainder of the morning and engage in some of this kind of thinking on concerns that are specific to your particular districts. For those of you who'd like to learn more about these ideas, explore them further, and meet a growing community of others engaged in improvement, I encourage you to think about joining us at the Improvement Summit that we now run every year in March in San Francisco. Again, thank you.