Good morning or afternoon, depending on where you are. I'd like to welcome you to this one-hour webinar hosted by the Learning Policy Institute. I want to let the audience know that this webinar is open to the public and is being recorded. The recording will be emailed to you in a few days and available at the link just shared in the chat. We'd also like to announce that we'll be holding our next webinar, Opening the Gate: Using Deeper Learning to Expand College Access, on September 6th at noon Pacific. You can register and find more information about that webinar and the webinar series by visiting the Learning Policy Institute website or using the link pasted in the chat box. Today, we'll begin with a presentation by Paul Leather. Paul is the director for state and local partnerships at the Center for Innovation in Education. He'll be talking about CIE's four-state performance assessment project, which supports the design and use of performance-based assessments across multi-district networks serving large numbers of students from historically disadvantaged communities in California, Colorado, New Hampshire, and Virginia. We'll then hear from Stephen Pruitt, former commissioner of education at the Kentucky Department of Education. He will discuss how the state performance assessment learning community is supporting states across the country in designing and implementing assessment systems that include performance-based components, starting with the subject of science. We'll also hear from Dawn Cope and Ellen Ebert from the Office of Superintendent of Public Instruction in the state of Washington. Washington is one of the 27 states involved in the state performance assessment learning community. Dawn and Ellen will discuss two major statewide efforts that incorporate performance assessment in ways that engage students in three-dimensional science learning. And then we'll have some time to respond to questions from the audience.
We encourage you to submit your questions throughout the presentation in the chat box at the lower right of your screen. Please choose "all participants" from the dropdown to ensure we can see your questions. Before I turn the webinar over to Paul for his presentation, let me briefly introduce him. As I mentioned, Paul is the director of state and local partnerships at the Center for Innovation in Education. Paul's background and experience in education, counseling, and administration in New Hampshire spans decades. He served as the deputy commissioner of the Department of Education in New Hampshire for eight years, and for 18 years he served as the department's director of the Division of Career Technology and Adult Learning. In 1997, as part of the New Hampshire school-to-career effort, Mr. Leather began the journey to create a state model for a competency-based student transcript. This effort resulted in the development and implementation of the New Hampshire competency-based assessment system, and ultimately in the student mastery model now in place as part of New Hampshire's school approval standards. More recently, he led the development of a first-in-the-nation next generation educational accountability model called the Performance Assessment of Competency Education, or PACE. It began as a pilot program in four New Hampshire districts in March 2015. I'll now turn the webinar over to Paul. Well, thank you, Renita, for having me back for this webinar. As Renita shared, for the last six months I've been working with a small network of states, including California, Colorado, New Hampshire, and Virginia, as they look deeply into the use of performance assessments. I'm working with the Center for Innovation in Education at the University of Kentucky, where I work with Gene Wilhoit and Linda Pittenger, known to many across the country for their work with CCSSO and the Innovation Lab Network, as well as with Sarah Lench, who heads up the Assessment for Learning Project.
Next slide. The four-state project has a number of goals, including looking at performance assessment models at both the state and local levels, engaging leaders at both levels in the four states, and addressing the issues of validity and reliability with performance assessments. Over the longer term, we are looking to see state and local systems develop deeper learning assessments at both levels, and also to develop an educator workforce prepared to work within such systems. Next slide. Over the last several months, state and intermediary leaders have met and established the work expectations for our learning community, including identifying promising practices with a particular emphasis on equity, being clear on what the research tells us, providing a focus on communications with a number of stakeholders interested in state and local assessment, the roles of state education agencies and intermediaries in local districts and schools, how performance assessments can address career readiness and development as well as college admissions and placement, and finally how to scale and sustain these new systems of assessment. So we have a big agenda before us. Next slide, please. Early on in my work in New Hampshire, when we were considering how to support emerging competency-based and student-centered practices, several of us considered three cornerstones for building a system: the need for robust and sustained professional development, the requirement for high technical quality for performance tasks and performance assessments, and building leadership and policy support for the work in the larger community. These have stayed with me over the years, and I've found that attending to each of these concerns in a balanced way has been crucial to our success. Next slide, please. Many of you have seen this assessment continuum, developed by Linda Darling-Hammond, before.
We are now seeing that all four of the columns toward the deeper learning end are represented in the four states, which shows great movement from even a couple of years ago. What continues to be true, however, is that the farther to the left of the columns you go, the more you see assessments that are used by states for accountability purposes, while the farther you go to the right of the columns, the more we see assessments used at the local level. And so what we see happening is that schools that are now predominantly using extended performance tasks and student co-designed assessments are often held accountable by their states through more traditional tests. This disconnect creates some degree of incoherence across the system, as these assessments measure different depths of knowledge and student skill. Next slide, please. In the last webinar in this series, I shared this definitional page, derived from a paper by Scott Marion and Katie Buckley, pointing out that what we call performance assessment can differ from place to place and from system to system. I wanted to contrast this slide with the next slide today. Next slide, please. There's beginning to be some separation between short- and long-form performance task models, depending on what network or philosophy is shared by schools. We also see that the model of portfolio defense is really expanding across the nation, spurred by the work of Envision Learning Partners and other national networks, and by the national influence of the portrait of a graduate movement led by EdLeader21. Also, we are seeing the emergence of digital badging, now being adopted in both K-12 and higher education. With digital badges seen as performance assessments, we see different processes and methods emerging. For example, badges are often juried primarily by noneducators outside the classrooms, specific to skills required for careers. Next slide, please.
So when we look at each of the four states individually, what we see is a microcosm of what is happening across the country in many states. An interesting aspect of California is that you have strong adherence to the Smarter Balanced assessment by the state board and the state education agency, while very significant networks of schools, like the California Performance Assessment Collaborative, develop and emerge locally. CPAC also has several performance assessment models at the local level, depending on the school or network. Many have student co-designed exhibition and long-term project or capstone models as well as short-term performance tasks. A note: some schools that are part of CPAC not only look at cognitive skills, content knowledge, and habits of success, as many schools do, but also at something they call a sense of purpose as an area for assessment. California to some degree solves the problem of multiple systems through its LCAP system, where schools and districts are encouraged to develop multiple-indicator dashboards at the local level for accountability purposes, with minimal statewide requirements like Smarter Balanced in place. Next slide, please. Colorado solves the same dilemma in a different way. They have a state assessment, CMAS, put in place this year, based largely on the PARCC assessment. Colorado has also developed a menu for graduation guidelines, where schools can select high school assessment models, including local performance assessment systems, for graduation purposes. The Envision Learning Partners graduation portfolio defense is a prominent model in Colorado high schools. The Colorado Department of Education as well as the Colorado Education Initiative also support formative assessment networks.
Each district, and in some cases each individual school, solves the issue of comprehensive assessments differently, as long as they take the CMAS at the elementary and middle school levels and select from the menu in the guidelines at the high school level. Next slide, please. In yet another model, New Hampshire has developed the PACE pilot, as Renita shared with you. PACE is primarily a short-duration performance task model that is curriculum embedded. A state assessment, the New Hampshire SAS, which looks and feels a lot like Smarter Balanced, is still implemented at the elementary and middle levels. With the ESSA application for the innovative assessment and accountability demonstration, New Hampshire districts can opt into the innovative performance assessment PACE system as they are ready. What is notable in New Hampshire is that they are evolving PACE to be more student-centered, bringing in aspects of portfolio defense and other student co-designed assessments, as we are seeing in other states. New Hampshire has a pretty well-defined technical quality system in place for large-scale calibration of scoring, standard setting, and social moderation for their common performance tasks, as New Hampshire has had a federal accountability waiver in place to implement the system for the last four years. Next slide, please. Virginia is one of the more complex systems: they have a very well-defined state assessment and accountability system that has been homegrown over the last 15 to 20 years. Recently, there has been much support to open the system up in certain content areas, like social studies and science, to allow for locally developed performance assessments. There is broad support within the educational community, the state board, the state education agency, and the legislature for this effort, and the SEA has begun large-scale performance assessment protocol training over the last year.
The portfolio defense and exhibition system, once again, is very strong in Virginia, this time tied to the state-adopted portrait of a Virginia graduate. Virginia is addressing technical quality while attempting, with Jobs for the Future, to maintain and grow public will and support. Next slide, please. Hopefully you are seeing a theory of action in these states starting to take hold. For change to happen, we start with proof points: in many instances, a state education agency or national networks supporting early adopter schools and districts, investing in networks of schools to show that these deeper systems of assessment can be implemented. The SEA or statewide intermediary then collects and tests evidence, and develops lessons learned and models that work. They reach out to the governing bodies, the state board, and the legislature to build supportive policy, and then they look to build out from there. This is how our Center for Innovation in Education recommends that states proceed with the work: start small, show evidence of progress, evolve the system to support scaling and growth, and improve and innovate all along the way. So, thank you, Renita. I'm going to stop now and we'll pass it on to Stephen. Thank you. Great. Many thanks, Paul, for your presentation, and just a reminder to the audience: if you want to ask any questions or engage in discussion, please use the chat box on the right of your screen and select "all participants" from the dropdown. Now I'd like to introduce our next speaker, Stephen Pruitt. Stephen most recently served as Kentucky's sixth commissioner of education. He came to Kentucky with an extensive background in education at the local, state, and national levels. Prior to his tenure as commissioner, Stephen served as senior vice president for Achieve, a national, nonpartisan, nonprofit education reform organization based in Washington, D.C. During his time with Achieve, he coordinated the development of the Next Generation Science Standards.
Stephen also served as president of the Council of State Science Supervisors and was a member of the writing team for the College Board Standards for College Success. In addition, he served on the National Academy's Board on Science Education committee on a conceptual framework for new K-12 science education standards, which developed A Framework for K-12 Science Education, a foundational document for the Next Generation Science Standards. He currently serves on the board of directors for the Council of Chief State School Officers. The floor is yours, Stephen. Thank you very much. So I have been asked to talk a little bit about how we approached performance assessment in Kentucky, and about the origins of our performance-based assessment learning community. So I guess I would start just by discussing a little bit how we approached this in Kentucky with regard to the science standards, because that's where we started. Now, if any of you listening today know me or have heard me speak before, then you know that science is my background, it's my love, it's where my heart is. So naturally, when we got ready to get started on this and we started in science, of course I was accused of showing favoritism to that particular content area. Well, the reality was that it was actually the next thing up for assessment, but it was also something very important that I felt we needed to do at the outset. The Next Generation Science Standards call for a very different type of assessment, but there were also other factors that impacted why we approached things the way we did. I'll talk about those a little bit as well. So, first of all, we embraced the Board on Testing and Assessment document that came from the National Academy of Sciences. That document calls for a full system of assessments; one of the things you heard Paul mention a lot was systems. So we wanted to create a real system for science assessments in Kentucky.
At the root of that is building a state assessment, or a state assessment system I should say, that not only appreciates but hopefully invigorates classroom assessment. That means we had to take a whole new approach to our science test at the state level, as well as to how we were going to build the overall system. As you can imagine, the biggest fear when you do any type of performance assessment is, and Paul even mentioned this, "well, the state test doesn't look this way, so we could be setting ourselves up for a problem." Well, first of all, I would say there's plenty of research out there that shows that's not actually the case. But in the case of Kentucky, we wanted to build an overall system that started with classroom assessment. So we have a three-legged stool. We talked about classroom embedded assessments, and we provided training for teachers on how to do those. If you're familiar with the Next Generation Science Standards, you know that we have three dimensions in those science standards. We have science and engineering practices, which are sort of how scientists communicate and accumulate knowledge. There are disciplinary core ideas, which are really, you know, the stuff, the content that you get. And crosscutting concepts, which are very focused on helping students connect concepts not just across content areas, but to the phenomena that they see and deal with in nature. So it's a very different way of thinking, in that you don't test each of those separately. The idea is to build toward three-dimensional thinking, or evidence of three-dimensional thinking. So we provided training around classroom embedded assessment, which is our first leg of the stool. Our second leg of the stool was state-developed tests that we call through-course tasks. Those through-course tasks were developed by teachers, coordinated by the Kentucky Department of Education.
And then there were three per grade level that were developed. And I think this is a really important aspect of this: this was not part of accountability. The idea was, let's recognize the importance of instruction. So those first two legs of the stool were really about focused instruction. Upon release of those items, teachers could use them at any point in the year. We even allowed them to change them; the only deal was they had to tell us how they changed them. And each school was asked to give us one piece of student work that they felt was exemplary from the use of these through-course tasks. We brought teachers in during the summer to do a review of those and to give feedback to the field. We had provided a protocol that we asked all schools to use in evaluating those through-course tasks. And the groups of teachers that we brought together in the summer basically would go through and evaluate the student work, but also the work of the folks at the local level. That's not a gotcha, and it was not meant to be a "hey, you did this right or wrong." It was meant to be feedback. For instance, in some cases when through-course tasks were changed, we gave the feedback that when you changed it, you lost the three dimensions or you lost the intent of the task. But more often than not, it was, you know, a real look, a deeper dive, into how we provide feedback to students. And then the third and final part, of course, is the state assessment. With the state assessment, one of the things we recognized early, through No Child Left Behind and history since then, and I don't blame No Child Left Behind per se because I don't believe it started there, is that people have been trying to make the state assessment do too much. They try to make it do more than what it was really intended or able to do.
So in recognizing that, having this three-pronged approach to assessment was critical. But to build the trust of our teachers in the field, we also needed a state assessment that could come as close as possible to what was going on in classrooms. So all of the items at the state level were developed based on phenomena. They were based on clusters of items, meaning that there were several items that went with each individual phenomenon. And so the three of those together, we feel, have given us a pretty strong perspective. Now, one thing I will go back to very quickly: I mentioned that we asked each school to provide one piece of student work. It was to give feedback to them, but it is also different because it wasn't about accountability for them. It was actually about accountability for the Department of Education. We approached that as a way to monitor how well implementation of the new standards was going, but also just how the assessments were going. So that's, in a nutshell, what was going on within Kentucky. Now, as was mentioned before in my bio, I was at the center of the development of the Next Generation Science Standards. When I was at Achieve, I was hired there specifically to lead the work to develop those. I worked with the 26 states who originally signed on, including Washington, to help develop those standards. We knew that when we did that, we were getting into an area that was going to be uncharted. The way the standards are constructed requires a different way of looking at assessment and a different way of looking at instruction. In fact, I would say that it was actually about bringing instruction out of the shadows and making it far more prominent. So I've been working with the NGSS and science education in general at the national level for a long time. When I became commissioner of education, just being honest and blunt, there are not a lot of chief state school officers who are science people.
But not just that, there are a lot of folks out there who love to claim that they already have the NGSS down, whether they be textbook companies, assessment companies, whatever, and I'm not intending to throw shade at anybody. I'm just simply saying that there are a lot of people who think they have a handle on that. So in some conversations that I had with Linda Darling-Hammond and Paul and Gene Wilhoit and others, we felt the time was right, and science gave us an opportunity to start bringing people together to talk about systems of science assessment, with the crux of that being very focused on performance assessment. And so in dealing with performance assessment, there was a lot of work that we were going to have to do, but we also needed states to start buying into that. Now, we were very cognizant of the fact that states are very leery about going into any type of assessment consortium or anything like that. Our focus was more around how we can all learn from one another. So 27 states came together. I met, along with Linda and Gene and Paul, with other chiefs at the Council of Chief State School Officers meetings. And we pitched this idea that, hey, let's all get together and start working toward this idea of a learning community around performance assessments, specifically in science. Now, my hope is that we eventually branch out from that. Even though I am no longer commissioner in Kentucky, my intent is to continue to work with that project, because I believe it holds a lot of promise and a lot of hope. So from there, once chiefs started saying, hey, this looks like a good thing, they went back and talked to their staff. And as we brought people together, we had several meetings. We brought people like Dawn and Ellen, whom you're going to meet in just a moment, together to talk about this and where we could go and what we could do. In fact, what we found, I think, is that there's more work than any of us can get our heads around.
So we started to really focus on how we can start doing some things together, specifically around the development of classroom performance tasks. So it's been a great opportunity for us. I really believe it's moving the ball down the field. But the origins of the initiative are really born out of the fact that we recognized both that the science standards need a different way of approaching science, and also, I think, a greater recognition, regardless of content area, that we've got to get back to a place where classroom instruction and assessment matter, and that assessment shouldn't be a dirty word. Assessment is actually part of good instruction. And it helps us get to a place where our performance tasks aren't merely predictors of what will happen on the state test, but actually evidence of how well students are attaining knowledge and how well the instruction is going. You know, hopefully at some point we'll all reach the place where we recognize that the best test prep is good instruction, and with good instruction comes good assessment. So with that, I think I'll close down and turn it back over, and I believe Dawn and Ellen are next. Great. Thank you so much, Stephen, for your presentation. And just a reminder: to ask questions or engage in discussion, please use the chat box on the right of your screen and select "all participants" from the dropdown. A few of you are selecting "all attendees." If you select "all attendees," the panelists cannot see your questions. So you need to make sure you are selecting "all participants" so the panelists can see your questions. Now I'm going to turn it over to our next two presenters. Dawn Cope is the science assessment lead at the Office of Superintendent of Public Instruction in Washington, where she coordinates the development of the state summative science assessment.
Dawn taught high school science in Washington State for 19 years prior to moving to the superintendent's office in 2012. Ellen Ebert is the director of science in the Learning and Teaching program at the same office. She is past president of the Council of State Science Supervisors. She is currently focused on coordinating the implementation of the Washington State Science Learning Standards. I'm going to turn it over to Dawn. Thank you, Renita. And good afternoon or good morning to all of you out there. Washington State adopted the Next Generation Science Standards in October of 2013. And since then, the Learning and Teaching team and the Science Assessment team at our state department have been facilitating a four-year transition to three-dimensional science instruction and a three-dimensional state summative assessment. Our two teams have worked very closely together during this transition. We attend each other's meetings. We present at each other's events. And we coordinate our messages to ensure quality and consistency in our work. The implementation of the new standards required the collaboration of various critical stakeholder groups as well as our colleagues and organizations throughout the United States. We've been very fortunate to have access to, and the support of, these individuals and organizations. Ellen and I have chosen a few of these organizations to include here as examples, but this is definitely not a complete list. Our involvement with the state performance assessment learning community over the last few months has been very helpful in furthering our thinking about our state science system and how we can use performance assessments in a variety of different ways within that system. What Ellen and I are going to present to you today are two of the ways that performance assessments are currently being used in Washington State.
I'm going to talk about the role of performance assessments on our state summative assessment, which we call the Washington Comprehensive Assessment of Science, also known as the WCAS, and I'm sure that's how I'll be referring to it as I speak. And then Ellen is going to talk about the role of performance assessments in a climate science education grant in our state, known as NGSS and Climate Science Ed. So I will get us started talking about the WCAS. One of our main goals for the state summative assessment is to reflect how science is taught and tested in the classroom. We want to be sure that the WCAS is phenomenon-based and that it reflects student interest and relevance. To that end, like Kentucky, we've included state educators at several steps in our test design and item development. Our educator work groups are very carefully chosen to represent the demographics of our state. We include classroom teachers, principals, higher education professionals, science coaches, and curriculum specialists, just to name a few. We look for educators with experience in special education, career and technical education, informal education, et cetera. And we make sure to include experienced work group members as well as try to bring in new work group members every time we hold a new meeting. Leading up to the first administration of the WCAS, we also held a few meetings to elicit input for decisions like our test blueprints and reporting claims. But really, the main avenue for educator involvement in our state on an ongoing basis is the assessment development work groups. Every item and rubric for our state tests is written and reviewed with the help of teachers. Teachers also help prepare training materials for scoring our constructed-response items. They review field test data and help us decide whether or not items are going to make it into our operational bank. To facilitate this work, we hold multi-day work groups four times during the year.
And during the work groups, we provide professional development that includes training on the standards and training on our test design and how we got to where we are. In the early stages of development, back in 2015, when we first started holding our work groups, we brought in a researcher to help us, meaning the state and our teachers, everybody, elevate our understanding of the NGSS. Most of the participants in our work groups tell us that this is some of the best professional development they receive. And I believe they appreciate the time and the interaction with colleagues to really dive into the standards and to think about not just the state test, but also what's going on in their classrooms. I really feel, though, that our science teams get the most benefit out of these work groups, because we get the teachers' perspectives, their content expertise, and all their good feedback that helps to guide our work. Okay, it's been about four and a half years since Washington adopted the Next Generation Science Standards, and we're really excited that our state test, the WCAS, is being administered in grades 5, 8, and 11 this spring. Whether a student is in grade 5, 8, or 11, when they take the WCAS they're going to experience a comprehensive test composed mainly of item clusters, which is what we're considering performance assessments or performance tasks for this presentation. Those performance tasks will require the students to use practices, core ideas, and crosscutting concepts to explain science phenomena. There will be life science focused clusters, physical science, and earth and space science clusters, with engineering thrown in at various places. An item cluster, these performance assessments, is a set of related stimuli and items.
Each cluster is based on a science phenomenon, and as students work their way through a cluster, they have the opportunity to demonstrate their knowledge and understanding of the dimensions in several different ways. Each cluster will have around one to four stimuli, with an average of probably two, and around three to six items. A stimulus can include text, diagrams, graphs, and animations that provide relevant information at the appropriate point in the cluster. The items in the cluster could be one of many types, like drag and drop, short answer, multiple choice, or multiple select. I think we have about nine different item types that we are currently using. Each item is aligned to two or three dimensions of the performance expectation or expectations that the cluster is assessing. And the cluster as a whole must be three-dimensional, or aligned to all of the dimensions from the PE or PEs that it's assessing. The clusters that we choose for our tests are carefully chosen to mirror the representation of the science domains in the standards. Careful planning ensures that a wide range of practices, DCIs, and crosscutting concepts are also represented. I've given you a brief description of how we develop and use performance assessments on our state assessment. There are other design and delivery features that help allow for a fair and accurate assessment for all of our students and help ensure that we can draw valid and reliable inferences from our results. I've included a link to the resources for the WCAS at the end of the presentation if you're interested in diving into that information. I'd like to now turn this over to Ellen, who's going to speak about another way that performance assessments will soon be used in Washington. Thank you, Dawn. And thank you to the other speakers who came before me.
I've been scribbling a lot of notes, and maybe I can talk a little bit about what we're trying to do with this opportunity that we received from our legislature back in March. On the very last day of the legislative session, we found out that we, being science education, had been allocated four million dollars for grants to our educational service districts, which we refer to as our ESDs, and to community-based organizations, which we refer to as CBOs, for science teacher learning on the Washington State learning standards, which are the NGSS, and specifically to emphasize and include the climate science education standards, which we're calling Clime Sci Ed. And the reason for that is that our governor has a keen interest in climate science and ocean acidification, and so part of this four million dollars was an effort on his part to acknowledge this interest in improving education around climate science learning. Dawn, next slide, please. Dawn spoke extensively about our focus on equity. We took chapter 11 of the Framework for K-12 Science Education really seriously, and appendix D of the NGSS, All Standards, All Students, extremely seriously. And in our design we were thinking that four million dollars seems like a lot of money, but when you start to really look at school districts and populations of students across the state, it really is not very much money. So our targeted priority focus is on what we call our comprehensive and targeted comprehensive schools and communities that have been historically underserved by science education. And our list is not all inclusive; we just named some of our historically underserved populations, including tribal nations and our migrant students, English language learners, and our students receiving special education services. Next slide, Dawn. Thank you. So in the proviso, we've been asked to target the grade levels that are not tested formally.
Grades 5, 8, and 11 are our formal testing years, so we're targeting fourth grade, and we're trying to bookend elementary, beginning in pre-K and going to fifth grade, so that those bookend grades receive climate science education and professional learning around the NGSS, with fourth grade teachers being the actual target. At middle school and high school, we're targeting teams of teachers, and specifically at high school we're trying to target grade 11 teachers, teachers who historically did not have to support students for the biology end-of-course exam offered in 10th grade, so we're trying to target our chemistry, physics, and career and tech teachers, et cetera. Next slide. Thank you. And so we've listed some of our outcomes here. We have a lot; we have about seven pages of outcomes that we are really trying to zero in on. But importantly, in bold you'll see that we acknowledge that we don't have a classroom system of assessments. Dawn and I have had many conversations with our leadership to try to allocate funding so that we could build one. That didn't happen for us when we made our request, but we see that this proviso provides some money and an opportunity to begin in kindergarten and tell the story of the kindergarten student as they progress from kindergarten to first grade, second grade, third grade, fourth grade, and then into fifth grade. So our intention is threefold. One is to have a system of 3D performance assessment tasks and rubrics that can be used by teachers across the entire state, specific to the performance expectations at their grade levels, so that we can identify students who are being very successful and perhaps identify needs that we should address in professional learning. So that's one piece.
The second piece is that teachers have also asked us not only to have these common items that are equally shared across the state, but also to have very place-based items that are specific to their local communities, and to develop that type of 3D assessment. So that's the second level. And then the third level of assessment that teachers have requested from us at grade level is items that are similar to what students might see when they take their summative assessment. So there are three levels. We're building this plane as we're flying it, so we haven't exactly defined all of that, but we've made it a requirement in our proviso plans. Next slide, please. I didn't mention that this money is a little bit like Cinderella: it evaporates on June 30th, 2019. So we have one year to enact all of this work. It's a very ambitious timeline. There are a lot of very nervous people, but very excited folks here in the state, trying to do the work. I think you'll see that we'll be back to talk to you in a year and let you know how things are going. Dawn, do you want to go to the next slide? One thing that I will say is that Dawn and I really took the equity conversation seriously, including in the training. So this is my compliment to Dawn and my assessment colleagues in Washington. We took this idea about equity in assessment not just into the learning aspect of the classroom, but also into the summative assessments. And I feel that this collaboration has improved the work that we've been doing. Our teachers are better writers of items. And I think our students are hopefully going to show us that they appreciate student interest and relevance speaking to them through their assessments as well. Dawn, do you want to close us out? Thank you, Ellen. I completely agree with your last few sentiments; our working together is hopefully really going to pay off as we move forward.
We've included a few links for you at the end of our section of this webinar, if you are interested in pursuing any of the things we've talked about further. And that's it for Washington State. Thank you. Great. Thank you so much, Dawn and Ellen, for your presentation. Now I'd like to begin our discussion and address some of the questions we've received from the audience. There are quite a few, so if you don't get your question answered today, please do feel free to follow up by email with the panelists directly. Let's start with a question from Emily to Paul. She asked, can you talk more about the tension between accountability versus assessment for deeper learning in the four states you're working with? Paul, are you muted? Oh, Paul, I think you were muted. Oh, for heaven's sakes. Okay. Am I still muted? No, you're good. Okay. So I think I addressed this to some degree in my presentation, but let me just say that even when I was still at the Department of Education in New Hampshire as Deputy Commissioner, often I would go out after the release of the state assessment to local school boards as they presented the information around accountability. And one of the things that we saw was that time after time, superintendents and curriculum directors would present an array of data, including local assessments, classroom assessments, and assessments related to what was going on in their curriculum. And then at the very end, they would say, and now, here is the state data. And they and their local school boards really valued the results from their more locally developed assessment systems. So we ended up having two levels of accountability systems that really did not line up with one another.
The other thing that we found is that as schools and districts attempted to change their practice and innovate and put in new systems, their immediate response to us was: as long as we're moving to state standards, we have annual state assessments. The impact of the stakes involved with those assessments and standards is that it forces us to focus on the test, to really prepare our students for taking the test. We see a narrowing of our curriculum, and we see an emphasis on lower levels of depth of knowledge as opposed to deeper involvement in hands-on projects and engagement with rich, deep learning. So for this variety of factors, we really saw that we needed to change our accountability system, and PACE was designed as a curriculum embedded performance assessment system to address just that issue. As I've looked, since I've taken on this new role, at the three other states, I see that there are various approaches being taken through the accountability lens, if you will, to try to address this issue. As I mentioned, California has their multiple measures, local assessment and accountability model that they put in place through legislation, which is supported by both the legislature and the state board. And then both Colorado and Virginia are trying to have a comprehensive system where they have both kinds of assessments as part of the system of assessments: state level assessments basically just to check that students are learning, but on top of that, they want to see how well their curriculum and instruction are doing, and for that they rely more on their curriculum embedded performance assessments. So it is a balancing act, and we're seeing that probably over time there's going to be much more emphasis on classroom assessment at the center of systems of assessment and accountability. Great. Thank you. There's a question from Melissa, and it's directed toward Stephen.
She'd like to know what professional development Kentucky provides to teachers to develop items and tasks, and how teachers are selected for work groups and task development. Great question. I guess I'll start with how people were selected. In the very first year, going back years to when NGSS was actually adopted, the Kentucky Department of Education started providing training around the state. So in that initial year when we started developing assessments, we chose from the folks who had actually gone through those initial trainings. We found it to be very difficult, though. Not because people weren't capable; it was simply that developing assessments is not something that's easily done, and it's not a skill that's really developed in our pre-service, or often even in-service, training. So what we did was basically create a cadre of people to help us with that first round. The way we develop all of our assessments, whether they're through-course or our state assessments, is that we use storylines. We write a storyline that depicts what skills and knowledge we're trying to elicit from the student, and then we write the items. And we found that some people were really good at storylines, some people were really good at writing items, and there were some that were good at both. What we realized is that part of the role of KDE was to put people in the right seats on the bus, so to speak. So we started with that initial group. We spent a lot of time going through NGSS itself and going through the voter report, but then also really spending a lot of time deep diving into how to write these items. We allowed them to spend time doing peer review, and we gave feedback. It was probably a little creepy for some of our teachers that the commissioner of education was actually reading their items and giving them feedback.
And we just developed a good learning community among those groups. The idea then was that we would basically replace a third each year. So you kept two-thirds of the people, who had done it enough to become experts, and each year you added a new third. That new third came from people who had applied to be a part of it, and people we had come across in our work. There's a pretty big range of ways teachers can be involved, because there's the actual development, there's also review of the items, and there's bias review. So we end up with a lot of teachers really involved over the whole stretch of the process. The reason why we replace a third every year is, one, to keep fresh eyes on the items each year, but also to allow the third rotating off to become really good ambassadors. They can actually play a role in helping in their districts with PD themselves and that sort of thing. So yeah, it was a pretty extensive process, and I think it was pretty tough on some folks. I mentioned this in the chat box, if you didn't see it: even on our state assessment, our teachers wrote every item. Basically, our testing vendor was put on notice that their job was to take whatever the teachers gave them and turn it into tests, but they couldn't rewrite the items. And so it was a little bit different paradigm from where we had been historically. But I'm actually really proud of the items they wrote. As for your question about PD, it was pretty massive; there's really no other way to say it. But it was designed to be not just sit-and-get: they were actually doing item reviews, doing item development, and working in learning communities where they were providing peer review to each other. Great. Thanks so much, Stephen. We just have a couple more minutes, so I'm going to ask one more question that we'll hopefully answer quickly.
There's a comment here asking how enthusiastic teachers can apply these ideas given policy barriers. I'm wondering if perhaps our friends from Washington State can talk about the political and policy context that supported the work you've discussed and who the major key stakeholders were that bought in and allowed the work you're doing in science to develop. Well, I can lead off. We kind of come from two different directions, so I'll just briefly talk about our state science assessment. I believe that we've had a lot of support from our state legislature and from our agency in doing the work that we do, with bringing in teachers as a major factor in how we develop our state assessment. Giving us the time, the money, and the support to do that has been very helpful. And this has been traditional in Washington State, really, since we started our state assessment. We've had lots of transitions over the years, whether it was because of new standards, or because somebody wanted to shorten or lengthen the test, or make it a graduation requirement, or whatever the feeling at the moment was, but I feel like we've been able to do the work that we've wanted to do. Ellen, did you want to answer from your perspective? In addition to what you just mentioned, I would say on my side we've worked hard to build networks over the last several years. When I came to Washington, there were several networks that already existed. We have a network of science fellows, about 350 teachers across the state, so they're a very powerful group of teacher leaders. We've had community-based organizations that have been wonderful partners for us, federal agencies, state agencies, research institutions, the tech people. We've worked together really closely, so it was great that our policymakers were able to support us, and now we're actually activating those networks to do this quick work in one year. Great.
So, unfortunately, we have to wrap up; I know we have many more questions, so again, do feel free to follow up with the panelists afterwards. Many thanks to Paul, Stephen, Dawn, and Ellen for such a great discussion today. I'd like to remind the audience that we are recording this webinar, and we will email you in a few days when it's available. We'd also like to invite you to join us for the next webinar in the series, Opening the Gate: Using Deeper Learning to Expand College Access, which will take place on September 6th at noon Pacific. You can register and find more information about that webinar using the links pasted in the chat box. This webinar is part of a series called Achieving Equity Through Deeper Learning, and we'll be having additional webinars throughout the year. We'll send notifications by email, so if you'd like, sign up for our email list. And finally, we'd like to share the following online resources, which will also be posted on this webinar's page. Thank you for your time, and have a good afternoon.