The broadcast is now starting. All attendees are in listen-only mode. Good morning or afternoon. My name is Tom Schultz and I'm pleased to welcome you to the Pre-K to 3rd National Workgroup Webinar Series. We're going to be having the seventh of our eight planned webinars this afternoon. I'll be moderating this session to look at the issues of child assessment, in particular, ways of using child assessment data to inform and improve teaching and learning across the Pre-K to grade 3 years. We have an exciting panel of presenters and I'm looking forward to the interaction that we're going to have with the audience. Before we get into introducing the presenters, I'd like to explain the different ways that you'll be able to become engaged in the broadcast this afternoon. Just to clarify, everyone except the speakers will be muted throughout the webinar, but you can engage with us through some polling questions that we are going to put to you at several points during the webinar. We invite you to pose questions through the question box, and those of you who are more technologically savvy than me can follow the conversation on Twitter at hashtag Pre-K to 3rd. As I said, today's focus is on using assessment data to inform and improve instruction. I work on early childhood issues at the Council of Chief State School Officers. Full information on our workgroup and on other webinar topics is available on our website at the site noted below. We can have the next slide. I'd like to get us into this issue by highlighting four key things that strike me as I look at the practices of child assessment across the Pre-K to grade three years. It seems to me, first of all, that child assessment plays a critical role in the work that we do with young children. I see expanded efforts to work on assessment initiatives across these years. 
I think there are major differences, as you look at the forms of assessment and uses of assessment, between the Pre-K and K-3 sectors, and I think that there are some challenges and opportunities. Just to highlight those, if we can move to the next slide: as I said, to me the important role that child assessment plays in the Pre-K years and through grade three is the ability to answer the key question that I think all of the participants in this enterprise have, which is, how are the children doing? Whether you are a parent of a young child, an elementary school principal, a Pre-K program director, or a state legislator, people are eager to know whether kids are getting a solid start in these early years. As they enter preschool programs, as they move into kindergarten and primary grade programs, are they engaged positively in the learning activities that teachers are providing? Are they getting along with their peers? How are they progressing in relation to expectations for children of their age? What are their strengths? What are they good at and excited about? Child assessment allows teachers to answer those questions in a systematic way and, as we are going to learn this afternoon, provides a very powerful resource for efforts to improve the work that they do with children every day. A second key thing that strikes me as we look at child assessment across the Pre-K to grade three years is that we see a rapidly expanding volume of assessments, if we can turn to the next slide. 
If you look at initiatives coming from the federal level: Head Start programs across the country are being asked to assess children's progress in relation to school readiness goals; children that are served through IDEA programs for young children with disabilities are tracking child and family outcome indicators; and states that have been funded with the Race to the Top Early Learning Challenge are developing new assessment initiatives for primary grade children and younger children. And then at the state level we see expanded efforts to sponsor kindergarten entry assessments and assessments related to new mandates around whether children are reading on grade level at the end of third grade. I think I've covered the next slide, so if we could skip past that one and get to the next slide on Worlds Apart; I'll wait for that to happen. [Brief technical difficulties with the slide display.] Well, while they work on the slides, just let me share the other two overview points that I was going to try to make. I think as we try to look across the continuum of programs serving three- and four-year-old children, what we tend to see in terms of the way assessments go on is that teachers are using assessments that look at children across multiple domains of child development based on state early learning standards. So they're looking at social-emotional development. They're looking at cognitive development, physical development, and approaches to learning. They tend to use observational tools that allow them to compare children against rubrics in those different areas of development. 
And the approach to using child assessment data to answer questions about accountability, about whether programs are working well, has been carried out through systematic and rigorous program evaluations that are done by teams of researchers. So if you think of those attributes and then think about the world of kindergarten to grade three classrooms and child assessment, I think in general, in those primary grade years, there's more of a focus on assessments in the content areas of early reading and mathematics. In some cases, there are expanded assessments beyond that, but not in every case. Now we're back on track. Some teachers in these kindergarten and primary grade classrooms are using observational tools, but there's a greater use of standardized tools that rely on children responding to the same set of questions or tasks. And finally, the approach to accountability in elementary schools, while in some cases not directly implemented in relation to data from kindergarten to grade three classrooms, is to look at population data on all children and whether children are demonstrating progress in relation to standards. So there are some substantial differences in terms of how child assessment tools are used, the focus of those assessments, and the use of data for accountability purposes. What I think is a shared interest and a shared movement is more and more concern about how to use assessment data as a tool for program improvement. And that's the function that we're going to focus on this afternoon in our broadcast. Finally, before I get to introducing our speakers, I'd just like to highlight a few of the challenges and opportunities that I see presented as we are expanding efforts to look at young children and exploring new ways of using assessment data, if we can get to the next slide. 
I think we see some challenges in terms of initiatives in the area of retention of children at the end of third grade if they're not performing at grade-level proficiency on reading tests. We also have new teacher evaluation systems, operating in elementary schools or being piloted, that use assessment data as a factor, which is new for most teachers in those grades. I think there are also opportunities when we think about the core concepts of the Pre-K to grade 3 model of working to build better systems to report data and pull together data on how children are doing, the quality of programs in classrooms, and the characteristics of the teachers that are working in these programs. And finally, as we're going to talk about for the bulk of this broadcast, I think that a key potential for the future of assessment in the Pre-K to grade 3 years is to build the capacity of teachers to use assessments to improve teaching and learning. So with that, what I'd like to do is turn to introducing the people who are going to be presenting for the balance of the afternoon. With us, we have people who are scholars and experts on child assessment tools and uses, and leaders from the state level, from a local school district, and from a local early childhood program. Each of them is going to be zeroing in on the issue of using assessment data for instructional improvement. We'll hopefully have plenty of time for your questions at the end of the broadcast. I think we're ready for a polling question before I introduce Marty Zaslow, who's going to highlight a number of different purposes of child assessment data in the early childhood years. 
So what we'd like you to do, based on whether you work in a local program or at the state level, whether you're in the early childhood sector or K-3, is, if you had to summarize the key purpose for which you're using assessment data in your programs or with the children that you work with, just pick one of the choices. We'll take 10 or 20 seconds for you to give us that answer and then see what the pattern looks like. And as you do that, I'll just explain briefly that our first speaker, Marty Zaslow, is one of a number of people in Washington, DC with multiple jobs. She's both the director of the Office of Policy and Communications at the Society for Research in Child Development and a senior scholar at Child Trends, Incorporated. She was also a member of the National Research Council panel on developmental outcomes and assessments for young children. And as Marty begins, we'll just note that, in terms of the responses we saw, by far the dominant purpose that you shared with us was adapting curriculum and improving teaching. So that fits in well with the theme of our broadcast, and as well, there are other responses that indicate there are multiple things going on. So I welcome comments now from Marty, who's going to talk to us about the purposes of early childhood assessment. Thanks, Marty. Just checking that everyone can hear me as the screen changes. Yes, we can hear you. Great. Well, I'm so delighted to be here and eager to hear your questions. I wish that I could see you all and be with you simultaneously, but this allows me to be with more people than I could be in the same room. So I'm providing a background presentation that gives a brief overview of key themes in a National Academy of Sciences report by the Committee on Developmental Outcomes and Assessments for Young Children. Next slide, please. 
This study was motivated by concerns that were surfacing around 2004-2005 about whether young children should be assessed at all and, if so, how to assess them, what domains to assess, and issues of appropriate implementation and appropriate uses of assessment results; these were sparked by some changes that were occurring in assessment practices. Next slide, please. The National Academy of Sciences Committee was actually convened through a congressional mandate for a National Research Council panel focused on these specific tasks: What are key outcomes in early stages of development? What is the quality and purpose of current techniques and instruments for developmental assessments? And then to prepare a report with recommendations for assessments, policy, and practice, as well as future research priorities. It's important to anchor my presentation in time. The report was provided in writing in 2008, and the Committee met in 2006 and 2007. There's been further progress since then, but this is still the document of record that articulated guidelines and practices. And here you just see the membership of the Committee, which had a wide range of expertise, including infant-toddler expertise, children with special needs, a focus on children learning two languages, and a range of domains of development. Thank you. Next slide, please. I'm going to start with just the bottom-line consensus conclusions of the Committee and then go on to some key themes. So here you see the bottom-line consensus, and it really was that assessments of young children that are well designed, implemented, and used have important benefits for children and the programs that serve them, but flawed assessments and flawed use of assessment findings can result in harm to children and programs. And there's this distinction between the actual assessments and their use and implementation throughout the report. 
And our task, "our" being people working with children as well as researchers and policymakers, is that we all need to work to maximize the benefits and minimize the harm of early childhood assessment. Next slide, please. One of the key themes of the Committee report is that there are fundamentally different purposes for conducting early childhood assessment, and the purpose drives all the other gears: which measure is selected, how measures are developed, how they're implemented, and how the results are reported and used. And one problem we see in the field is the selection of an instrument developed for one purpose that is then used and implemented as if it were developed for another. A second key theme is that conducting the assessment is only one part of the system. You can see it's just one of the puzzle pieces here, and there are multiple components that have to fit together well in order for assessments to be used well. Next slide, please. And these two themes are linked, because the purpose of assessment will shape how the early childhood assessment is integrated with other components of the system. I'm going to turn now to a little bit more detail on the issue of purposes. Next slide, please. So the Committee articulated four very different purposes for conducting child assessments: screening and diagnostic testing; guiding instruction, which is really the purpose we're focusing on today; evaluating the performance of a program or policy, which Tom referred to; and advancing knowledge of child development. We're going to focus on the purpose of guiding instruction here, and what I'm going to do is go through some of the implications of this purpose and how it differs from how one might select and use assessments for other purposes. So what are the specific goals of assessments used for guiding instruction? 
They are to get a picture of what children know and can do, to track children's progress over time, and to use this information to guide decisions about instruction at the level of the individual or group. And this differs very much from assessments used with the goal of evaluating the effectiveness of a program or the contributions of a policy, where the focus is at the broader level of the functioning of a program or a policy, not of individual children. And if you just think about issues of reliability with assessments for this purpose: when you're really trying to figure out what a child knows and can do, you can actually go in and do an assessment again because you're not sure of the data, or correct an original score because you saw something you hadn't seen previously. But that is not true with assessments for policy and programs, where the implementation of the assessment has to be standardized and done at the same time, so that the assessment is comparable across programs. So you can see from the starting line how the purpose of the assessment shapes its selection and use. Next slide, please. When using an assessment for purposes of guiding instruction, all children in a class, center, or program need to be assessed, but for some other purposes, a sample may be completely appropriate, and this is often the case when assessments are used for evaluation or to advance knowledge. Next slide, please. What about how the information is collected? As Tom mentioned, with this purpose it's usually collected by observing children, collecting samples of their work, and talking to children, and their progress is related to learning or behavioral criteria, which are criterion-referenced assessments, or to progress on curricular goals, which are curriculum-referenced assessments. In assessments for other purposes, it may be a high priority to collect information using standardized assessments and procedures, as I mentioned. 
And the targets of the information with this purpose are usually within an early childhood setting, so directors, educators, caregivers, specialists, and families, and that's very different when trying to communicate to policymakers, the public, and researchers; even the way you summarize the information is different. Next slide, please. The Committee affirmed previous descriptions of five important domains of development, and Tom mentioned them earlier. The purpose of assessment determines the domain or domains assessed, and the Committee was concerned that the domains included when assessing child outcomes and the quality of educational programs should be expanded beyond those traditionally emphasized. At that point in time, they were concerned that there was a narrow focus on language, literacy, and mathematics, and felt that assessment should include other domains such as interpersonal interactions and opportunities for self-expression. The Committee did note that we don't have as many tools available for assessing these other domains, and that this was an important need for future work. The Committee felt that there were different challenges and strengths of observation-based and standardized assessments, and here are just some of them. When using observation-based measures, it's really important to help teachers work towards and maintain the same interpretation and scoring. It's not enough just to hand the rubrics to the teachers. It's really important that teachers are applying them in a skilled way, and this is best accomplished through careful scoring guides and initial, but also ongoing, training to make sure that there's not drift. 
And training and support is needed in whatever system is used to enter scores, so it's not enough just to do the assessments; educators also need to know how to enter the scores and produce results, and that may be a particular challenge when they are unfamiliar, for example, with an online system. Care is also needed in considering whether scoring might be biased by the perception of consequences to teachers. The challenges with standardized assessments are very different. There are concerns about not just learning how to implement in a standardized manner, but also how to make children feel comfortable so they can show what they know and can do, and cultural issues may be very important here. For example, the comfort of a child in responding to direct questions may differ depending on the child's culture. And language of administration is critical: it's important to think about the purpose of assessment specifically from the point of view of children learning two languages, and whether the goal is to assess how well they're mastering English or how well they're doing in both languages. It may be perfectly fine that the goal is to assess concepts in whatever language a child responds in, so it's very important with standardized assessment to think about the purpose from that point of view. 
Reliability and validity need to be documented not only in selecting an assessment generally, but very specifically for the intended purpose, and also for the population that's being assessed. This is a real challenge in the field, because there are measures that have very limited documentation of reliability and validity for specific populations; this is well documented in a document prepared for the Office that tried to translate technical reliability and validity information into readily accessible language and to summarize the populations assessments were developed for, and that effort identified this as a problem. And program directors, policymakers, and others who select instruments should receive instruction on how to select and use assessment instruments. Now, just very briefly, a couple of issues regarding systems. Next slide, please. The first issue is that the Committee agreed that early childhood assessments should not be conducted in isolation. They should be considered part of an assessment system with the goal of providing information to guide progress towards high-quality early care and education and children's school readiness. Next slide, please. What are systems? They are organized around specific goals and they have components, each with their own goals, and what's really important is that the components have to be planned so that they will work together. Missing or poorly operating components can cause systems to function poorly. Next slide, please. For early childhood assessments, here are some of the pieces of the system. Tom already mentioned some of these: alignment with early learning standards and program quality ratings or criteria, and professional development. It's really important to think of professional development not only as focusing on curricula or instructional practices, but also on understanding and actually using assessments. And you can see some of the other points here. 
Just skipping to the bottom, an important example here is monitoring the burden on those using the assessments: making sure that this is information that's useful, and not so burdening educators that they don't have time for instruction and responsiveness. Next slide, please. And lack of alignment can cause difficulty, so here are just some examples: poor coordination in the focus of early learning standards and child assessments; or professional development, as I mentioned, on a curriculum but not on how to select and use assessments; or lack of joint consideration of program quality and child assessments in providing input into program improvement. Next slide, please. And last, resources for initial training on selection and implementation will fall short if resources are not also set aside for ongoing training, to assure that child assessments and program quality data are collected with reliability and that those receiving assessment reports interpret them correctly and make use of them in meaningful ways. It's not enough just to produce reports, and this issue of timeliness is very important: if we don't have systems that can give us the reports in a timely manner, that won't be helpful. I just want to mention that some of these key issues about the use of assessment data are being explored in new projects funded in September by HHS's Office of Planning, Research and Evaluation, actually looking at how information from assessments is being put to use in school readiness goals. And on the last slide in my series, I just wanted to let everyone know the name and the publication information of the report. Thank you so much, and now I'm really eager to hear the other presentations. Thanks very much, Marty. 
As we absorb the wonderful guidance and advice that you summarized from the National Research Council, we're now going to hear from a state leader from the state of North Carolina. I think increasingly states are in the driver's seat, and on the hot seat, in terms of building comprehensive early childhood assessment systems based on the advice of the National Research Council and other experts. To share some of the work that's going on in this area from the state of North Carolina, we're going to hear from Cindy Bagwell. Cindy is the administrator for the Race to the Top Early Learning Challenge grant that the North Carolina Department of Public Instruction received, along with a number of other states, early in the year. She has been involved with many early childhood programs and initiatives in North Carolina, including their early learning standards, preschool demonstration and play-based assessment centers, birth-to-three early learning guidelines, teacher performance appraisal instrument, and revisions to their quality rating and improvement system (QRIS). So we're looking forward to hearing about her efforts to chart a new course and expand the ability of teachers and programs in North Carolina to look at how young children are doing. Thanks, Cindy. Thank you, Tom. I appreciate the opportunity to be with you and the opportunity for North Carolina to participate in the webinar and share some of the work that it's doing related to the Race to the Top Early Learning Challenge grant. I'd like to begin my presentation by sharing a little bit of the historical context for North Carolina's Race to the Top grant, particularly as it relates to K-3 assessment. North Carolina has been thinking for quite some time about the appropriate assessment of young children. For example, in 1997 the General Assembly passed legislation that actually prohibited the use of standardized assessment in kindergarten, first, and second grades. 
Instead, the Assembly charged the State Board of Education with creating developmentally appropriate assessments that were individualized and that addressed reading, mathematics, and writing. The purpose of these assessments really was to assist teachers in assessing the progress of their students and to inform the resulting instruction. Next slide, please. Subsequently, the State Board of Education passed policy related to that, and this policy required the use of individualized, ongoing assessments throughout the school year in kindergarten, first, and second grade; again, the purpose of these assessments was really to monitor the progress of students towards benchmarks in the North Carolina standard course of study. In addition, the Board required the use of individualized summative assessments at the end of the school year. As a result, the Department of Public Instruction created the K-2 literacy assessment and the K-2 math assessment. These were made available to districts in the 2000-2001 school year. However, the State Board allowed districts to make choices about that: they could use the assessments provided by the State, they could adapt those assessments for their own purposes in their community, or they could create their own. Fast forward to 2008 for another example: the State Board of Education established a blue ribbon commission to re-envision the State's testing program and accountability system. This commission made a series of recommendations, and one of those recommendations focused on encouraging the State to expand the use of formative assessment as an integral part of a comprehensive assessment system, and truly to inform instruction in classrooms. As a result, the Department of Public Instruction established NC FALCON. Now, NC FALCON is actually an online network for a formative assessment learning community. 
Essentially, it's an online professional development or professional learning community, and when it was established, the goal was to help educators better understand formative assessment and the role it plays in a comprehensive assessment system. This committee actually developed a series of online professional development modules focused on formative assessment, and the goal, again, was to help teachers understand formative assessment more completely and more thoroughly, in the ways it supports instruction and actually improves student performance. Next slide, please. Then in 2009, when Governor Perdue came into office, one of her priorities was the use of technology-based assessment that would provide information to teachers in a more timely and efficient manner, again to improve instruction. Her focus really was on the early years, particularly literacy and math, and in 2009 a small set of schools began working with the state and some private vendors in North Carolina to conduct diagnostic assessments using technology; specifically, we started with Palm Pilots. Over the years, the number of schools participating in this work has expanded and the technology has evolved, so that we are now conducting these assessments using web-based applications. As a result of this effort, in 2012 the Senate in North Carolina filed a bill, the Excellent Public Schools Act, that would incorporate the use of diagnostic assessments specific to reading in kindergarten, first, second, and now third grades. Pieces of this legislation were incorporated into the budget that was passed. The expectation is that further debate is going to occur in the upcoming long session, so we'll hear more about that. Next slide, please. 
In 2012, as a result of the Senate bill and the pieces that came through in the budget, the Board of Education modified its policy regarding assessment in the early grades, to require the use of developmentally appropriate, individualized assessments, and now to require that each local school district use the state-adopted and approved assessment system specific to reading in grades K through 3. The Department of Public Instruction is now working to expand this assessment, known as Reading 3D, and by the end of this school year that pilot will become a statewide project. Next slide, please. So this is the context in which North Carolina is implementing its Race to the Top Early Learning Challenge grant. For the rest of this presentation, I'm going to speak specifically about Section E of North Carolina's grant, measuring outcomes and progress, and specifically about North Carolina's effort to plan for a valid and reliable kindergarten entry assessment. Next slide, please. So North Carolina's approach to the development of a kindergarten entry assessment is somewhat unique. We plan to take the best of what we've done to date to create a kindergarten through third grade assessment system that actually provides additional support for the use of formative assessment in the early grades. We've put a lot of time, energy, and resources into formative assessment. All of our public pre-K programs have been using formative assessment historically, and we want to build on these efforts and strengthen this in the early grades. We also want to build on the existing K-2 literacy and math assessments and expand those to incorporate other content areas, such as science and social studies, as well as non-cognitive domains such as social-emotional development. So we want to look at the whole child rather than focusing specifically on literacy and mathematics. 
This assessment will align both to our early learning and development standards and to our state standard course of study, which for North Carolina includes the Common Core as well as our Essential Standards. And again, this will expand our previous K-2 efforts into third grade. It is the initial administration of this kindergarten through third grade assessment, done at the start of the kindergarten year, that will be North Carolina's kindergarten entry assessment. It will be our initial data point, our beginning point for this assessment. Next slide, please. While we've been thinking in this state about a series of steps for gathering input from various stakeholders into this process, we've also identified three primary structures that will guide development and implementation of this K-3 assessment. The first of those is a think tank. We're asking some of our esteemed researchers and practitioners, primarily from North Carolina but some from other states as well, and this is the group that's going to help us think about the big ideas, the possibilities. They're going to help us answer some of the questions that we have: How do we assess young children in a way that is valid and reliable? Much of what Marty talked about, and points that Tom made earlier, are things that we're struggling with in North Carolina, so we want to use this think tank to help us think about those. The second structure will be our task force, and this will be practitioners. This is where our content experts will come in, our technology experts. This is the group that's going to take some of those big ideas from the think tank and figure out how we can actually bring these ideas to fruition in North Carolina. This group is going to be in it for the long haul. They're going to guide the development of this assessment. And then the third group that we have planned for is a state implementation work group. 
This is the group that will guide the scaling-up process based on implementation science. They're going to help us think about statewide implementation and guide that process over the course of the years we have before the grant is over. Next slide please. I would like to talk about one other piece of our Race to the Top. This is a piece that helps lay the foundation for later implementation of our K-3 assessment. It relates to the invitational priority of sustaining program effects in the early grades. In order to achieve the goal of developing systems and practices for using data that really support effective instruction in this pre-K-through-third-grade continuum, North Carolina will be contracting with FirstSchool. FirstSchool will be working in one of our school districts in an economically distressed area of the state. This will target a school district that has a high percentage of children living in poverty, a district that is performing below state and federal requirements, and a district that is currently not receiving any direct support from the Department of Public Instruction's District and School Transformation Division. FirstSchool will work with teachers and administrators throughout this district to strengthen their abilities to use classroom and child data to guide instructional decision making. FirstSchool has been working with several other schools in North Carolina, so lessons learned from their work in those schools, as well as from this piece of our grant, will inform the development and implementation of North Carolina's K-3 assessment. Next slide please. In closing, I'd just like to say that the ultimate goal of our grant is really to improve instruction across that pre-K-through-third-grade continuum. We really are excited about the next three years and about learning lessons from others. Perhaps we'll have an opportunity to talk with you again and update you on the progress we've made. Thank you for listening. Thanks, Cindy. 
And I know that, like others on the broadcast, I'm tremendously impressed with North Carolina's vision for using the opportunity of the Early Learning Challenge to improve the tools we have for improving instruction. As we move to our next presenter, we'd like to ask you to respond to a second polling question, particularly those of you working in local communities and early childhood programs or school districts: if you were to candidly rate your community's current efforts to use child assessment data to improve teaching and learning, what kind of grade would you give the school district that you're part of? As we wait for you to respond, I'll also mention that there are other new assessment initiatives going on across the pre-K-to-grade-three years, including an effort by the states of Maryland and Ohio to collaborate on developing a formative assessment and kindergarten entry assessment, and efforts by the PARCC state collaborative assessment effort to develop formative assessments for K through 2 in several areas of the Common Core curriculum. So hopefully, as these assessments become available, there will be new resources for local communities and programs to use. I think we're seeing from the results being displayed that there's room for improvement on this challenging task: not just having teachers spend their time gathering information and filling out assessment forms, but equipping and training them, and giving them the time and understanding, to use this assessment data. To give us some thoughts about strategies for how to do that, we're now going to turn to a leader from a large and diverse local school district, the San Francisco Unified School District. Carla Bryant will be our presenter; she's chief of the Early Education Department at SFUSD and has over 25 years of experience in early childhood and elementary school programs. 
In particular, right now she is responsible for administering 13 early elementary schools with pre-K and transitional K students, supporting 74 elementary schools in aligning their assessment, curriculum, and instruction practices pre-K through grade 3, and integrating community-based pre-K and district practices with the assistance of city departments. So we're looking forward to hearing, from an urban school district perspective in the state of California, how they're taking on the challenge of making assessment more effective as a tool for improvement. Thanks, Carla. Thank you, Tom. I have to say that I want to acknowledge a lot of what Marty Zaslow talked about when she spoke of the profession's concerns around how you implement an assessment process that has fidelity and is reliable and valid at the same time. It was affirming. What I want to share with everyone is simply San Francisco Unified School District's assessment story as we try to do this work in a way that does have fidelity to it. To talk about it, I would like to give a little context around San Francisco and our community. Then I will briefly talk about the tools, only so I can talk about our implementation process and how we are rolling out the different tools in different ways. And lastly, I want to spend most of my time highlighting how our commitment to fidelity to the assessment process, through the professional development we have created, is a multi-leveled process to ensure that teachers not only receive training but also have time to learn about the tools, their use of the tools, and how they can actually take that information and apply it in the classroom. 
So to get started, I'd like to give a little bit of context (that's the next slide) and say that we really looked at implementing a preschool-to-third-grade process, and we recognized at the very beginning that we had to address a lot of concerns within our early ed community around whether it is appropriate to assess. Then we had to reconcile the K-12 system with the early ed system. And then we realized that we had a lot of partners we were working with in our City and County of San Francisco. So what did we do? We went to our partner First 5, which actually has a lot of influence with our community-based preschool programs. We wanted to make sure that whatever assessment process was rolled out in the San Francisco Unified School District was also rolled out within our community preschools. So we created a group where we would collaborate on how to decide which tools to use and how to roll them out in both places. We also realized that our Department of Children, Youth and Families had a lot of data that would be very helpful to us, and we wanted to partner with them as well. So first we looked at our community, our city, and then we went internal and took stock of what was going on. We realized that our early grades, that is, kindergarten, and you'll see there it says TK (TK stands for transitional kindergarten, basically the first year of a two-year kindergarten program), so our TK, our kindergarten, our first and second grades had very uneven assessment processes, something that was mentioned by our earlier presenter. We actually went into schools and realized that within some schools, different classrooms used different assessment tools. 
With that, we had to make sure we partnered with our union partners: whatever assessment process we rolled out, they needed to be part of the conversation and in agreement with us, so we could work together to make sure the teachers would help make it a successful process. We also created a collaboration with Seattle, which had already started doing this work many years ago, and we decided we would do a city-to-city professional learning community so we could learn more quickly from what they had gone through and would not have to hit some of the same bumps in the road. And lastly, we've had an ongoing professional relationship with Stanford University, which helps us make sure this process is reliable and that there is some fidelity to what we are doing. Next slide. So, the assessment tools. I don't want to spend a lot of time on the tools themselves; I just want you to see that when we looked at our assessment tools, we wanted to make sure we had a balanced set: some were running records, some were observations; they covered a wide scope, not only literacy and math but also social-emotional development; they provided us a way of informing instruction in the classroom; they were field tested; they were user friendly; and, when we talked about rolling out specific kinds of tools into the community, they did not cost a lot to actually implement. 
You will notice that the tools are in three different colors. The ones in black or gray had already been part of our system; most of the time they are mandated tools. The DRDP for pre-K is a California-mandated tool for anyone who has a certain type of grant. The CELDT, which is the California English Language Development Test, is given at the end of pre-K to students whose parents have stated that their children speak a language other than English in the home. So this is just to let you see what tools we have. The ones in dark red are the ones we are implementing right now, using different systems to implement them, and the ones in lighter red are the ones we will implement at a later date. Next slide. So what is our implementation process? Again, we were absolutely committed to fidelity. There were three levels. One, we would have training, and we wanted to make sure that in the training process the teachers not only understood the information but could give it back to us, so they had to demonstrate an understanding of it. From there, and this is where I am going to spend the last part of my time, on the last two pieces: we wanted teachers to take the information they learned about assessments back to their community, sit with their fellow teachers and their principals, and talk about what this assessment tool meant to their community. What did it mean to your school? What did it mean to your grade level? And how did that align with the other grades? 
And then, to ensure that they actually implemented the tools and that we were able to use them in an effective way, we implemented the third part, which is coaching. A coach goes into the classroom and spends time with the teacher, making sure that they are actually seeing the same thing and assessing the same thing in the same way, and creating a process where the teacher has a chance to go back and forth about what they are seeing, how they would record it, and how they would use that information. Next slide. So what is a professional learning community? These are communities where, first of all, the grade levels get together and talk about all of the assessments they have given and how they use them. In those groups they have a coach, a person who has expertise in assessment. And I will tell you, at the pre-K level we are actually going to set up a learning community that also includes district-operated pre-K and community-based pre-K; the teachers will sit together and talk. 
The next group that meets is the one that meets across grades, and we chunked the grades out: pre-K to first, and second and third. We did that because we recognized that teachers were seeing some of the same types of things across these grades, so one teacher could actually say to another, "I'm seeing the same things," and they could use each other as collegial partners, with a coach there working with them as they talk about this data. And then annually, and some schools actually do this twice a year, all of the data is shared across the grades, so they all understand: these are my pre-K kids, but in three years I will also have these children, so what do I need to know that will impact my practice? And I want everyone to know that this process is also used all the way up to our superintendent's executive office; these professional learning communities are used with the different departments. Again, these were implemented to ensure that we had fidelity to the assessments and were using them appropriately. Next slide. So, the coaching model. Again, this was another level of ensuring that we were using our data appropriately. First of all, each teacher has a coach assigned to them, and we're talking about approximately 300 teachers if you're thinking K and first; if you add the other grades, you're getting into about 700 teachers. We're implementing this already in pre-K, kindergarten, and first grade; they will have coaches assigned to them. These coaches will go into the classroom with the teacher and look at their data: now how do I use that data to modify my environment, change my classroom, work with my lesson planning? We also found that teachers were getting a little confused about the different types of assessments, so this is the time where they get one-on-one with the coach around which assessments make more sense to use when, how we use this data, which data is appropriate to share with parents, and how we discuss it 
when we go back to our other group, which is our professional learning community. Next slide. So how is the data used? I'm going to give a quick example. We used a set of data to, one, change a district program, but also to look at how, at the classroom level, we needed to add another type of assessment to ensure we were getting the right kind of data. Our district has approximately 35% Chinese speakers, Mandarin and Cantonese, another 27%, give or take, Spanish speakers, and then we have Japanese and so on. So a good 70-plus percent of our students speak a language other than English in the home. We decided that we would implement an assessment and a curriculum, or a process, in pre-K called Soy Bilingüe. It is a dual-language program where teachers provide instruction in both English and Spanish. Then at the classroom level we implemented PALS, and we're looking at PALS data not only in English but also in Spanish. With that, the coaches work with the teachers on what we are talking about, why we are using the two different assessments in these classrooms, and how they align not only with our pre-K classrooms but with our K-12 system when we look at our different language pathways. In closing, I will say that we are learning. We continue to use Stanford as our beacon to make sure that what we're doing is appropriate. We make mistakes along the way, but they are there to guide us, and we look forward to learning not only from them but from the nation as it also struggles with these issues. Thank you. Thanks, Carla. It's tremendous to see how you've taken some of the concerns and guidance that Marty shared and are developing a system that is really geared to help teachers use this information, looking at it across the grade levels in the fashion that you are. I know there are several questions we've already received, particularly about the coaching issue, that we hope to share with you as we get toward the end of the broadcast. As we transition to our 
last presenter, Ellen Frede, we'd like to share one last polling question with you, to get a sense, from the communities you work in, of the ways in which teachers across the pre-K and K-3 years are sharing data on how children are doing. We offered three options that represent different patterns of practice: one would be that there actually are cross-grade-level, cross-program-level opportunities for teachers to talk about data together; a second would be that there's basically one-way transmission of data from the pre-K programs to the schools; and the third would be that data use is limited to teachers working within their own program or grade level rather than across grades. So if you take a second to answer that question, I'd be glad to share the background of our last presenter. Ellen Frede is presently Senior Vice President of Early Learning, Research and Training at Acelero Learning, which is headquartered in Harlem, New York, but supports Head Start programs in several states. She's a developmental psychologist: a researcher, teacher educator, and teacher herself who has worked with early childhood programs at both the state and local levels, as well as a leading researcher, including her service as co-director of the National Institute for Early Education Research at Rutgers University. She's actually involved in the current launch of a research project on the effects of preschool on children's achievement at age 11, and she's going to share with us strategies in a multi-site Head Start program for addressing the issues of assessment use. As we turn to her, I'll just note that the dominant response to this question, about how teachers are presently sharing data, is that sharing is limited to their own program, but more than a quarter of you have examples where data is being shared across program lines, creating the opportunity for teachers to see children's progress across the pre-K years. So we're looking forward to hearing from Ellen, and we continue to 
encourage you to share questions with us through the email system. Thank you, Tom, and I want to thank you especially for involving me in this really interesting topic; I've already noted a bunch of questions I have for the other presenters. As Tom said, I am Senior Vice President for Early Learning at a large Head Start grantee, and we have three delegate agencies in three states, so it is a complicated task for us. One of the things that I think is most important about Acelero is that we have five values, and one of those values is data-based decision making, so I'm very interested to be able to talk with you about how we do that. Next slide. I'm just going to very briefly review the continuous improvement cycle that undergirds our work, and then give an illustration through our Head Start data system, only at the child level, since we're focusing on child assessment. We'll talk about how we set our school readiness goals, which are required by the Office of Head Start, and then align those to activities in child assessment, but I'll focus most on collecting, analyzing, and using observation and assessment results. Next slide. I'm sure this is not new to any of you, but just to reiterate: it is an iterative process. We first develop our outcomes, or what it is we hope to accomplish; then we measure and assess progress toward that; we look at our data, analyze, and plan; and then we implement and pilot improvements. We do this at every level, but of course individual teachers do this at the child level within their classrooms. Next slide. This just shows you that we do have data-based decision making at every level and use the continuous improvement cycle at each of these levels. We aggregate data from the child level all the way up to the grantee to help us with other kinds of program improvement decisions. But much as they do in North Carolina, we combine performance assessment, or what they call their formative assessment, with 
standardized, on-demand instruments, so that we can, in a sense, ensure the fidelity of both methods, because of course, as Marty pointed out, or I think it was Tom, there are concerns with both kinds of assessment. Next slide. So one method for assessing progress to inform instruction and decision making is to use authentic, ongoing assessment, or performance-based assessment. We base that on our school readiness goals; we made sure that our assessment tool mapped closely to our school readiness goals so that we could rely on that data and have more confidence that we were working toward those goals. We aggregate those results quarterly and analyze them at every level. We look at the individual child data within the classroom, but we also look across classrooms, across centers, and all the way up to the grantee. We particularly look at what percentage of our children are on track to achieve school readiness so that we can make course corrections as we go along, and teachers receive a report quarterly on each child's readiness and on their recent gains. We also use all kinds of other methods of comparing within and across subgroups. Next slide. As I said, we also combine this with standardized, or on-demand, assessments. We have a contract with outside evaluators who administer language and math assessments to a random sample of eight children in each classroom. Unfortunately, as Marty talked about, there are not particularly good assessments to be used across all the domains of learning, so we chose just these two; we would certainly be happy to use others if they existed. The reason that we do this is partly to confirm the concurrent validity of our performance-based, or ongoing, assessment, so that we can count on those results, trust them, and know that they are reliable. We also use this data to investigate and compare the impact of different program components, different curriculum models, length of day, and different 
pilots that we're trying out. And then we carefully analyze our trend data over three years, and we will be using that, in conjunction with other information, in teacher performance appraisals and actual decisions about teachers, but we will only do that with a good constellation of data. So what did we find when we first started looking at our assessment data? First, we found that scores varied widely across domains: in math and science in particular, scores were considerably lower than our oral language scores, and we wondered why this might be happening. Well, one possibility, of course, with ongoing assessment is teacher error: the teacher in a classroom in central New Jersey might just not be scoring the tool in the same way as a teacher in North Philadelphia, or a teacher in Clark County, Nevada, or even the teacher in the classroom next to her. It could also be that some of the different curricula we use are having better effects than others, or that the children and families are different. Next slide. So how did we interpret and use these results? Because we couldn't answer any of those questions for sure, we tried to improve across all of those areas. One of the things we did was add more structure for teachers in our assessment system and in our curriculum models, so that we knew there was a little more standardization across programs. We are not teacher-proofing anything; we certainly think that teachers have to adapt and respond to the individual children in their classrooms, or why even do the assessments? But in particular, what we did with our assessment system was incorporate on-demand assessment, similar to the PALS that the last speaker spoke about, and we also integrated what we call embedded assessment activities that help guide the teachers in finding out more about children in their authentic activities. For example, we might ask the teachers to focus on classification and describe a particular way for them to learn more 
about children in classification. I should remind you that Head Start teachers do not necessarily have a bachelor's degree and do not necessarily have a teaching certificate, so in many of our classrooms we are working with teachers who do not have a background in how to assess or how to create activities for assessment. We also increased the math and science within the curriculum we were offering, realizing that one reason our children may not have been scoring as well is that the curriculum was not as weighty in math and science; as you probably know, there is a great deal of research showing that math and science are woefully inadequate in preschool and the early grades. We also increased our professional development around math and science. But what we focused a lot on was ensuring that we really trusted what the teachers were scoring. So we started monthly assessment work groups, very similar to the professional learning communities that have been described. The purpose of these was to increase the usefulness of the documentation, improve the accuracy of the scoring, and help teachers know how to use the information to improve teaching. We review individual documentation and teachers' own data, but we also look at the quarterly aggregated data, and in fact we are starting those work groups this week with a facilitated webinar from the grantee level on how to interpret the child data for differentiated instruction. We also established systems for inter-rater reliability among and between teachers, so that we can feel more confident, and we have assessment-focused coaching within each of our centers as well, so that the center director will focus on assessment with a teacher, actually observe alongside the teacher, and review their documentation together if the teacher is struggling. I would also say that we did find that our concurrent validity is pretty good. It's not fabulous, but of 
course, they are two completely different methods: one is a point-in-time, on-demand assessment, and the other is an understanding of the child built within the day, throughout the day, over multiple days. So a perfect correlation between the two would not be expected, but the correlations are very acceptable for concurrent validity, so we're excited to see that. Next slide. I just wanted to say that we're always focusing on the places we go, and particularly on preparing our children to achieve in school. Thank you. Thanks, Ellen, very much. I appreciate all the presentations, and it does feel to me like we are making progress as a field through the combination of these efforts: national experts offering guidance through the kinds of reports that Marty shared, but also leadership across the board from folks in state government, local school districts, and local programs, as represented this afternoon. We have had a number of wonderful questions come in, and we want to get to as many of those as we possibly can, and hopefully allow the panelists to comment on what they heard from their colleagues. To kick that off, a question was raised by a listener about Marty's presentation. Understanding the importance of determining whether assessment tools have reliability and validity for the purposes for which they're going to be used, the question really was how people can get assistance with that job. What resources are available? Where could you get help if you're not personally equipped to do that yourself? So, a response to that, and then I wanted to offer Marty the opportunity to respond to what she heard from North Carolina, San Francisco, and Acelero Head Start in terms of the potential of these new efforts going on in the field. Marty, are you with us? Yes, I'm here. Can you hear me okay? 
I would love to start with the last question and just say I was so excited listening to the presentations, because they really show where the field is going in terms of making sure teachers get the professional development not just to do assessments but to make use of the assessment information, and in terms of assuring that the assessments are reliable. The thoughtfulness with which each of these presentations went about that was very exciting to me, and I thank you all so much. I have two responses to the first question, about the purposes of assessment and reliability and validity for the purpose. One is that the technical language in this area, the technical language in the manuals for the assessments, is terribly off-putting: it's construct validity versus content validity, this kind of correlation versus that kind of correlation, and there is a huge need for translational work that distills and translates these summaries without distorting them. There have been several attempts at this, and I'm just going to talk about one that I was involved in; others on the phone might talk about others. At Child Trends we developed a compendium of measures, a document called Understanding and Choosing Assessments and Developmental Screeners for Young Children Ages 3 through 5. The intent was to go to the technical manuals but to interpret, for readers who don't have a psychometric background, what the terms mean, and to provide cut points for what is good reliability and what is good validity. This is one of several attempts to make the information more accessible. The other issue, reliable for what purpose, arises especially when there are different purposes. So when an instrument that was developed for formative purposes, an ongoing observational instrument, is suddenly used as tracking data for a whole population (how are we doing at kindergarten entry as a whole population?), the reliability demands for that second purpose go up, and you can't just say it was 
reliable and valid for the purpose of tracking and guiding children's instruction. Now you're using it in a different way, so you need to put protections in place, and I think what we just heard from Ellen about using two different kinds of assessment, the two different kinds of assessment used in San Francisco, as well as cross-checks and ongoing checks on the reliability of observational data, is the kind of thing you need to put into place if you cross purposes and use an assessment developed for one purpose for another. I hope that's helpful, Tom. Very much, great, thanks a lot. So we had a couple of questions for Carla Bryant about the substantial effort you're making to develop a cadre of coaches to work with teachers around understanding the purposes of assessment and understanding assessment data. People were interested in how you're recruiting those people and what kinds of skills you're looking for, and also a very pragmatic question: where are you getting the funding to create these kinds of positions within your school district budget? So hopefully you can give us information on those questions. I can, and I will start with the second question first. I can say that we have an amazing partnership and a wealth of resources in our City and County: we have a funding group that's actually called the Preschool to Third Grade funding group, and it involves the Haas foundations, Mimi and Peter Haas, Packard, Silver Giving, so a lot of the way we're able to do this is through our funders. On the first question, around our coaching: initially we looked for teachers who were successful in the classroom, and we still do that, but we learned that to get reliable coaches we had to create a coaching matrix, and we're in the process now of fine-tuning what that matrix looks like and how we check for their reliability. Ask me in a year and I will be able to flesh it out for you a little more, but we're in the middle of that, because traditionally what happened was 
coaches were selected because the principal or someone said, "this is a great teacher." Well, the skills to teach may not always translate into coaching, so we've gotten really clear about our coaching model, how it should look, and the skills we need our coaches to have. So we're in the middle of creating a coaching model that translates not only to the work we're doing in assessment but also to instruction. Great, thanks very much. There were also a number of questions for Ellen Frede about the multiple assessment tools you're using and particularly the different ways you're reporting the information and sharing it with teachers. One question was for you to share some of the reactions from your teaching staff as you've implemented these different initiatives, which require a lot of them in terms of data collection, and also the ways you're providing feedback to them from these assessments and some of the ways you're using assessment data in your program. So could you give us either some examples or some general patterns of responses from the Head Start workforce in the different sites you work with? Sure. First, we have multiple tools. The tool that the teachers use themselves, the ongoing assessment tool, they get direct training on. We do use different ones, and that's for a variety of reasons: partly it has to do with the state pre-K programs we contract with, and because we want to be sure the tools are consistent with our school readiness goals and measure progress toward them. The teachers, I have to say, do find it to be a struggle. Learning to make this just a part of their teaching routine, as opposed to something where they sit outside of the action, is a training focus, and it takes some skill, because it's the kind of thing that, as a skilled teacher, you just do automatically and naturally, but you have to first 
understand how the skills develop. That's one reason why we only choose assessment tools that actually scaffold teachers in understanding exactly how the skills develop, and then there's remembering, because you can't always document well right in the moment. That's one reason why we have reduced the number of items that we ask the teachers to do this with, and have also provided them with more support in making many of the assessments embedded in the normal day, in the normal routines or activities.

In terms of how we give feedback to teachers, we ask center directors to first check through the online anecdotal records and documentation that teachers are keeping and look for the accuracy and usefulness of those. Sometimes teachers who are new to this write down things that are more like a lot of people's tweets, something like "went to the block area," not a very interesting or useful response. So we analyze the online data and give feedback to the teachers about that; then, as I said, we might go in and parallel and shadow them; and then we have these assessment work groups where the teachers are really helping each other, really chewing on different data and documentation to understand what it means. We report the data basically on an Excel chart, in a way that, again, we teach the teachers how to read. I hope that answered the questions; it was a lot of questions rolled into one, so I hope that was what you wanted, Tom.

Absolutely, thanks very much. I was curious myself to ask, in terms of North Carolina, how your state is thinking about the whole issue of teacher evaluation. I believe you're both a Race to the Top state and an Early Learning Challenge state, is that correct? I know that this is a challenging technical issue for a lot of states and that there are a lot of reforms going on. At a recent meeting of our members at the Council of Chief State School Officers, it was probably the number one topic among the commissioners of education: trying to figure out how to move toward a better approach to rewarding teachers and recognizing how to help teachers improve. It's also obviously a contentious issue in terms of whether child assessment data is appropriate to use as part of the criteria. So can you share where you are in thinking about that within North Carolina, in particular in terms of how it would impact teachers in the early childhood years up through third grade?

Well, thank you, Tom, and I second everything you said about the challenge of this issue for states and for all teachers, but particularly for those in the early grades. Two comments on that. First of all, the State Board of Education happens to be meeting right across the hall from me today, and they are tackling this issue specifically in relation to our K-3 assessment that is in the development process. What they will actually be voting on is a measure that would prohibit the use of the data generated from this assessment for accountability or for high-stakes decisions. So we'll see; I'm not over there, so I don't hear the conversation. It was actually discussed last month, and today it is up for action, so we'll see how that conversation goes. In the meantime, for grades preschool through second grade, there is presently conversation about a temporary plan to start looking at potentially using running records for this purpose, but there's much conversation that still has to happen and lots of debate going on around that issue in North Carolina. So we're trying to deal with that issue in relation to this assessment we're developing, but in the meantime there's lots of discussion going on, and I'm not quite sure how it's going to end up.

Absolutely, very good, thanks for responding to that. Another question, again at a very practical level, was raised for Carla Bryant: you're setting up this infrastructure for helping teachers look at assessment data and use it in new ways:
how do you provide time for teachers, in terms of their schedule during the week or the month, so that they are able both to get the kind of training they need overall and to have time to implement these new assessment initiatives effectively, where you're adding new assessments but also pursuing the goal of using assessment data as a diagnostic and program improvement tool?

Yes, I will tell you, this is a very practical answer: it was union-negotiated. We really sat down with our partner and said, we need common planning time, and we spent almost a year negotiating what that would look like. What we have is 30 minutes set aside a day that can be put together into two and a half hours, and we do that occasionally during the week for training and for the PLCs. It was decided as a district that this was the goal we wanted to move toward, that district-wide we should have common planning time. We used the model that we saw in high school and implemented it, in partnership with our union, in our early ed (and when I say early ed, I am saying preschool to third grade); it is specific to our preschool classrooms. Then we are lucky in that, again, we have amazing funders who are willing to work with us; they actually provide us with additional funds so we can buy even more time for the teachers: time to actually do the assessments, work with the coach, work with their peers in a collegial manner, and then actually implement. They also have time to go into other classrooms and watch other teachers. So again, San Francisco is lucky in that we have great partners in our union and our funders.

Wonderful, that is terrific. Well, in the last four or five minutes, I thought I would just offer each of the presenters an opportunity to share a final thought. If one doesn't occur to you, a good question would be: where do you hope we will be as a field two or three years from now in this area? But we will start in the order that we heard them present, so we will start with Marty and then go to our other presenters, and then sign off for the afternoon. So, Marty, want to start us off?

I hope we get to a point where teachers feel supported and guided in the use of assessment for the instruction they are trying to bring children to, and that instead of feeling threatening or daunting, this feels like clarification, with good supports for the teachers in the system.

Great, okay. Fendi? That was lovely, Marty; thank you for those comments. From my perspective, what we really want to do in North Carolina, and hopefully see on a broader scale, is a greater emphasis on the process versus the end product: really thinking about the purpose of this assessment and starting to look at children over time and considering growth. Right now there is a pretty big emphasis in our state, and I think nationally, on what things look like at the end point, and so we really want to start thinking about the process from a beginning place. So I hope that works well for us and that others have success in that as well.

Great, thanks. Carla? First of all, I'd like to thank the other panelists; I personally learned a lot, and I am honored to even be on the same phone with them. But the second thing I would really like to say is that this is a community process. We embarked on this whole plan with our partners, and because we did that, we found that when we made mistakes they were very forgiving of us, because they felt like they were part of the process and part of finding the solution. We will probably continue to bumble and fumble along the way, but it will be okay, because we will all learn together and we will all get to what we believe is the end result, which is academic success for all of our children, not only from preschool but from infant-toddler all the way up into their careers. Wonderful, thanks. Ellen?
Again, I have to go last, and I feel like people have said such fabulously inspiring things, so I'm going to be a little more prosaic and maybe self-serving as a researcher: I want better tools, I want them to cover multiple domains, and I'd like for them to be online so that teachers can use them more easily in the classroom.

Good. Excellent, excellent thoughts. I have to say, on my own behalf, as somebody who has worked on these issues in a variety of ways over quite a number of years, I'm encouraged by the work that's being done, as represented by the people who shared this afternoon. I think we should not underestimate concerns about potential misuse of assessment data, or the fact that we don't have the tools we would like to have, but I do feel like we're making progress. I'm very encouraged that states, including North Carolina, are developing new assessment tools; I think we have leaders in the research community who are helping us understand new dimensions of early childhood development, like executive functioning, that we need to seek to understand; and I think this work in various places, in communities and in states, to make this idea of using assessment for instruction real is tremendous, and a tremendous investment. So I just want to thank everybody for their contributions this afternoon.

As we sign off, I want to let you know that we are going to have one more webinar in this series, on January 16th, which will take up issues of scale and sustainability and the implications for state and district policy of supporting Pre-K to grade 3 initiatives. We do want to thank the Bill and Melinda Gates Foundation for their technical support of these webinars, and we hope that you have found them to be productive and useful. We hope you have a good rest of the day and that you'll continue to tune in to our work through these broadcasts and our website. Thanks again for your participation.