 On behalf of New America's Education Policy Program, welcome to Making the Grades: What State Policy Report Cards Reveal About Education Reform. I'm Anne Hyslop. I'm a policy analyst here with our Education Policy Program, and I focus mostly on K-12 standards and accountability, NCLB and waivers, as well as the intersection between state and federal policy reform. So to get a few logistics out of the way first, definitely grab coffee and food. Sorry for those who are watching online; I hope you have your second breakfast as well. We'll be tweeting throughout the event using our Twitter handles, @NewAmericaEd and @StudentsFirst, and using the hashtag #MakingTheGrades. So please follow the conversation on Twitter, respond to us, ask questions, be snarky. We'll engage with the conversation that way. So I want to give you a little bit of context about how this event came about. Eric Lerum at StudentsFirst actually approached us about whether we would be interested in hosting a conversation about these state policy report cards that every organization seems to be putting out these days. And I jumped at the opportunity, and I'd like to think one of the reasons Eric approached me is that I actually wrote something last year about their state policy report card when it first came out that was perhaps a little critical of the report card. You could characterize it that way. I said it's appropriate for an organization centered around a policy agenda like StudentsFirst to produce rankings based on whether states have enacted those policies. That said, the only thing I learned from the report card was whether states had adopted Michelle Rhee's favorite education reforms. So in that spirit, I like to think that is why Eric approached us, because that's the sort of frank conversation I hope we can have today. What do we really learn from these report cards? What do they tell us? 
What are we supposed to do with them? And particularly as more and more organizations produce them, we're getting a lot of information about what's going on in states, and states have been the leaders in education reform over the last few years. The federal government hasn't been enacting much legislation lately, so a lot of what's happening really is at the state level, and are these report cards the way we can follow it? One thing I wanted to make sure of, though, so that our conversation or debate today stays centered on what we learn from these report cards and what they really tell us, is that we don't get bogged down in methodology: how is StudentsFirst different from NCTQ, down to the finest point? So I wanted to go over, first and foremost, some of the high-level points about how these organizations are different and how their report cards reflect those differences, just to get a real sense of what we're talking about. Because when you say "state policy report cards," it's a very different thing from StudentsFirst to NCTQ to Ed Week or any other organization that's producing these. So the Making the Grades infographic, which hopefully all of you saw online or picked up at the door, goes through some of these high-level differences. It shows Education Week's Quality Counts report on the left, NCTQ's State Teacher Policy Yearbook in the middle, and finally StudentsFirst's State Policy Report Card on the right. And you can see from the get-go that, on average, states don't do very well. I'm not sure what that says about education reform, but we clearly have a ways to go; there's progress that all states can make. The average grade is no higher than a C plus. But that said, there are some high performers and some states that aren't doing so well, and they differ depending on which grading system you're looking at. 
Florida performs well on all of the rankings, actually; it's in the top 10 on all three. But if you look at other states, there can be some extreme disparities. Washington, DC is in the top five on StudentsFirst's report card and in the bottom 10 on Education Week's Quality Counts. So why is that? Part of this is looking at what these organizations are actually considering: what subjects are we grading states on? So NCTQ, no surprise, being an organization that is very mission-driven around promoting teacher effectiveness, is really looking at five categories all devoted to effective teachers, whether it's producing them, identifying them, or retaining them. On the other hand, you have Editorial Projects in Education, which produces Education Week's Quality Counts, and they have a much broader span of things that they're looking at. They're looking at a whole range of elements around standards-based reform, from chance for success to school finance, and they also look at teacher policy. So they're taking a different take. And by the same token, StudentsFirst also has a broad agenda, a lot of different policies that they're promoting around improving student outcomes. They're not as singularly focused on teacher effectiveness as NCTQ, and that is also reflected in what they're looking at. Another interesting thing to think about is how long these report cards have been in existence. Education Week started nearly 20 years ago, before there was an NCLB, before standards-based reform had actually been enacted in federal policy, and a lot of the policies they're looking at reflect that. StudentsFirst has only been grading states for two years, so the policies and points they're looking at are more reflective, you could say, of the current wave of reform that many states are thinking about, from school choice to teacher policy to parental information. 
And finally, I think a point that's really important to make is just that these organizations are different. Education Week is one of the most widely read education journalism outlets; it covers a wide range of topics. Everyone, I think, is reading Education Week on a daily basis, or I don't know how you do this job if you aren't. And that's reflected in what the purpose of their rankings may be. They're not an advocacy organization in the same way that StudentsFirst or NCTQ is, where those groups are really hoping to enact certain policies and making real prescriptions about what states should be doing. So I think that's also reflected when you look at the weighting within these report cards. StudentsFirst and NCTQ both choose to weight some policies more than others because, looking at their mission and their values, they feel those policies are more important. So, to do a little bit of a rundown, again to get at what we're talking about when we say Florida gets a B or Montana is failing: it's different if you're talking about Education Week or NCTQ or StudentsFirst. Education Week is the only one of these three that considers things like student outcomes and out-of-school factors. They're looking at data like poverty and employment rates in a state, average income. And that's a real distinction from the other two. NCTQ, again, is very focused on all of these teacher effectiveness policies, from preparation through in-service supports for teachers and retention policies. They have just as many criteria as some of the others, but they're focused only on those teacher-related components, so they're a lot more specific there. StudentsFirst is distinctive: they're the only organization that's really focusing on school choice, in a way that's much more pronounced than the other two. So if you see a state do well on one or another, it's reflective of what each chooses to include. 
So I hope that this drive-by of a little bit of what's in the grades provides some context for today's discussion, and we can really start to get at, now that we know what we're grading and how we're grading, what should we do with this? That's really the question. What is the value of these report cards? What is their purpose, and how should we take this information to make better policy? Does the purpose depend on who the audience is? Is it helpful for the public to know just the general state of education reform? Is it helpful for policymakers that are looking to tackle some of their big education challenges? Are they sometimes so specific that they're maybe oversimplifying the challenge, ignoring some things when they shouldn't be? And these are some of the questions that I hope we can tackle today. So I'm actually going to go ahead and introduce our panelists, because I want to jump right into the discussion. We have a great panel here and I'm really, really excited. So I'll ask them to come up while I'm giving their introductions, so if y'all want to come on up. Andy Smarick is our fabulous moderator today. You probably know him as @smarick on Twitter, if you're not already following him; gave you a little shout-out there. He's a partner at Bellwether Education Partners in their policy and thought leadership practice. And before that, he was the deputy commissioner of education for New Jersey, so he has actually been on the other side of this as a state policymaker. A lot of his work now revolves around school turnarounds and the SIG program, Andy's favorite, as well as district reform and governance. And next to Andy, we have Chris Swanson. Chris is vice president for research at Editorial Projects in Education, the publisher of Education Week. 
And previously he was at the Urban Institute, but Chris is the man behind Quality Counts, as well as Technology Counts and Diploma Counts, those great research resources that Education Week produces every year and that we all rely on to figure out what's happening in states. Next to Chris, we have Sandi Jacobs, who's vice president and managing director for state policy at NCTQ, the National Council on Teacher Quality, and she's the driving force behind their State Teacher Policy Yearbook. Previously, she was at the U.S. Department of Education as a senior education program specialist for Reading First, as well as the Comprehensive School Reform Demonstration program. And in a past life she taught for almost a decade at Public School 9 in Brooklyn, New York, and she's also a founding corps member of Teach For America. "Oh, I'm old." And finally, we have Eric Lerum. He's the vice president for national policy at StudentsFirst and has been working with Michelle Rhee out in Sacramento, but really thinking at the national level. Prior to that, he was chief of staff to the deputy mayor for education here in Washington, DC, playing a lead role in their Race to the Top application and their school reform efforts here around teacher evaluations. And before that, he was a lawyer, or was thinking about being a lawyer, and made the wise decision to do policy instead, so we have that to be grateful for. So with that, I'm going to sit down and we'll turn it over to Andy. So hi, everyone. How are you? Great to see such a big crowd for a snowy Wednesday. I believe that moderators moderate best when they moderate least. So a success measure for this, maybe we can have a report card for it, is how little I talk; I think that will be the key here, because you're here to hear them. I have been a consumer of these kinds of reports. I've worked for five government agencies now. 
So I'm often looking at these to see how my little organization is stacking up, but I've also been on the other side, researching and producing stuff like this. So hopefully I'll be able to add a little bit of color here, but no more throat-clearing from me. Oh, I should say: the hashtag is Making the Grades, with an S at the end. If you're out there watching this or in the audience, you can tweet that in and send us questions; they will get up to me and we'll get them into the bloodstream. So let's just kick this right off with the senior report card on the panel, let's say. So Chris, can I just ask, straight away, why did you guys decide to do this in the first place? And if you don't mind, talk a little bit about the evolution, how it's different today than it was at the beginning. Sure, that'd be great. And thanks, Anne, and the New America Foundation for hosting this; this is a great discussion. I was struck by the kind of report card of the report cards up there, because Ed Week looks like an easy grader, and I can assure you that's not how we're usually viewed by the states. But Quality Counts has been around for 18 years; we just published the 18th edition of the report in January. The history of this is interesting. So Education Week is a nonprofit news organization. We are very mindful of our objectivity; we're not an advocacy organization, we're a news organization. That doesn't mean we're disinterested, that doesn't mean we don't care, that doesn't mean we don't try to improve the conversation in order to improve schools, but we don't have particular policies or objectives in that sense. That's the kind of organization we are. But in the mid-to-late 90s, as folks may recall, from history books if nothing else, there was a new wave of reform that was really starting to take hold: standards-based reform. It seems so old now, but at that time it was fairly new. 
Not that pieces of what became standards-based reform hadn't been around for some time, whether testing, early forms of standards, or other types of policy, but they were starting to come together into a more coherent movement and more of a nationwide initiative than had been the case before. And again, in the mid-90s, education was not the national field and conversation that it is today. When Education Week started in 1981, there were a lot of people scratching their heads: who cares about a national newspaper on education? It's such a local issue; there was no national conversation to be had. And I think that's obviously changed very much over time. And Quality Counts, I think, is an interesting milestone along that way, because we saw all this policy activity happen. This was a lot of state-led activity. And you'd hear a little bit here, you'd hear a little bit there. You knew there was a lot going on, but it was hard to put your finger on who was doing what and exactly how much of this was going on. And people in the field started saying to themselves, well, somebody should really be tracking this, to see what's going on so states can learn from one another, so we can be a more informed field and really have a national dialogue about these issues. And Education Week raised its hand. This was a very big departure for Education Week, and our former president at the time saw it as a real departure in some ways. We had not been in the report card business; this was the first really major research project we'd undertaken. But we, as a news organization that cares about the field, saw it as something important. 
And so even today, I think when you look at the structure of what we grade on as part of our report card, you see the legacy, the fingerprints, of those issues. You know, standards, assessment, and accountability is something we've tracked from the beginning. We've looked at teacher-related policy from the beginning, and some of these other categories. So there is a through line that we see throughout these 18 years. That being said, as we tracked those issues over time, one of the things that we saw, that the field saw, is that there was real movement. What had been a fairly new approach to statewide standards really took hold. It got a lift, of course, with No Child Left Behind, which effectively mandated some of the types of state-led policies we had been tracking at that point for a decade. And so we see the need to evolve all the work we do. We approach our reporting at Education Week with the idea that we need to serve the education audience with the information it needs, in the way it needs it. Quality Counts is a little bit like that; although it's been around for a while, it has evolved. And so if you look back at the earliest version of the report, and I pulled that out of the file cabinet this morning, one thing that strikes you is that it's about this big: it's about 230 pages long. The report has shrunk over time to really concentrate what we're looking at. And while some of the things we looked at 18 years ago are still around in some form or another, some are not. And that reflects just the change in the field and how we approach identifying what we think are the most important policy trends to watch. We don't follow particular policies because we think schools and states need to do them. 
We cast a broad net through our own research and reporting, keep our ears to the ground, and tap into experts to get a sense of what seems promising. We don't make claims that any of these policies are effective with a capital E or, you know, scientifically based. If we did that, then the report would be about this small, because there's so little really solid, blue-ribbon, gold-star research on policy; connecting A to B to C is a lot trickier than a lot of people probably think. But these are the policies that we think are important. They're starting to take hold. There's a lot of promise. And so we have seen evolution over time. You see that within some of these categories that have remained around, whether it's standards or teaching policy, that have been around for a long time. Periodically, we'll revisit the areas within them. Reports like NCTQ's are the kinds of things that have informed our thinking over time when we look at a particular area like teacher policy, for example. One of the things that Anne noted at the beginning is that, across these three report cards at least, Education Week is the only one that grades on achievement-related categories and some of what we call data-driven categories; there's a cluster of those. We've got six categories: three are policy related, three are not, and achievement falls into that latter group. And that was a deliberate decision on our part. I guess it was about 2008 when we made a fairly significant shift and restructuring of the report, which had been an exclusively policy-based report where we actually didn't issue summative, overall grades. You know, we took stock, like we do every now and then. 
And this was in part based on feedback from the field and advisors. The feeling was that by not focusing on more of the outcomes and context side of the education world, we were missing an important component. Policy is an important component; outcomes are a component; what helps to make those policies work is important. So we broadened the scope of it a little bit. We added an achievement category. We added something called Chance for Success, which is much broader than that. And we continue to look at finance, which we have from the beginning. So it's evolved in interesting ways over time. You would have seen a little asterisk on the graphic up there, noting that some of the categories we have were updated this year and some were not. We're actually in the middle right now of one of these periodic rethinkings we do of how we approach Quality Counts. And so we've suspended our policy survey work, and without getting into the nitty-gritty of methodology, I'll just make the point, and maybe folks would be interested in following up later. I think it's important for the education public to have some understanding of how the data comes about, where it comes from, and what sort of process is behind it, without getting into "this indicator is the one I would look at, not that one." But we're in one of those phases right now. And part of the context is obviously the Common Core and everything that's been coming along with that. I think a lot of state, and federal policy for that matter, is either in stasis or in flux. 
And as we think about what it means to be a state with a strong approach to standards or assessment or accountability in 2014 or 2015, it's not going to be the same as it was in 1997, but it's not clear to us quite yet what we'd want to focus on for the next five to 10 years. So that's where we are now, and that's something that's probably more or less visible to some people, but it's something we do and take seriously as a way for Quality Counts, as it moves forward, to keep giving attention to those issues we think will help inform the field. So Chris's presentation was so good that I had four follow-up questions potentially to ask him and he answered all four already. So I'm moving on. Sandi, two questions. First of all, when you think about your report, who is your primary audience? Who are you trying to influence? And then I want to talk about what success looks like. So if I were a funder or some foundation and you needed more money to keep this going and asked for money to do new editions, I would say, okay, how do I know that this thing is succeeding, that it's having impact? How would you answer? Yeah, so our report has been around for seven years. We just released our seventh edition this fall, and when we started doing this work, we didn't think about it as a report card at all. The very first edition had scores for the individual goals, but it didn't have an overall grade. Sounds similar; in that way you kind of realize over time that there's a need for that. But our original goal wasn't so much to rank and rate the states as it was to provide a blueprint for reform. And that's really what we think the yearbook is. We're issuing something that's 150 pages per state. It's not just a snapshot of what we think a state's performance is in a certain area. 
It's a detailed analysis and a set of recommendations on how we think policy could be better and better serve teachers and students. So over time, I mean, it's certainly correct to call it a report card, but we really do think of it as that blueprint for reform. We don't hide at all the fact that the things we're looking at are goals; we call them goals. Especially when we started doing this, very few states were meeting the goals. It's a reform framework, and we're pretty straightforward about that. One of the things that we incorporate into the report that I really value is that we send the draft out to the state and we ask for their feedback. And for each of the goals this year, and there were 31 of them, we print the state's comments. So there's a little bit of a dialogue there. There are states who wholeheartedly disagree with our recommendations and with the way the goal itself is framed, and we include that in the report. We think that's an important part of the conversation. Our primary audience is policymakers. We are trying to give policymakers a blueprint, a roadmap, I've heard it called different things in different states, mostly positive things, but something they can use. We're trying to show states examples of what other states are doing that we think is good, strong policy, to really emphasize those best practices. I think when we started doing this, there were some places where states were truly shocked to see that anybody was doing these kinds of things. They would have told you with 100% truthfulness that 50 states do not have the policy that you're suggesting, but that generally wasn't the case. And as we've seen more and more states shifting, I think there are states that continue to be surprised, as maybe once they were in the majority of states and now they're in the minority on where they stand. But we also try to include a lot of data: I think the edition that came out in January had about 125 tables in it. 
So even if you're not interested in our scores or our ratings, for any policy area you can get the lay of the land: which states require annual evaluations and which states don't. You might not agree that annual evaluation is a worthy goal, but you can see how states fall out around it. As far as how we assess our progress, the beauty of a report card is that the measure is built in: you have ways to compare from year to year pretty clearly. The most exciting thing for us has been the change over the last few years. I think states have put a lot of emphasis on teacher policies. It might look from that infographic like our average of a C minus isn't that much to write home about, but the first time we gave grades, in 2009, that average was a D. And when we did it in 2011, it was a D plus. So now that it's a C minus, we see real, steady progress here. And that incremental progress on the average really doesn't tell the story of how many states have made leaps-and-bounds progress on these issues. In 2009, we didn't give a single B. In 2011, we gave four Bs, and this year we gave 14 Bs. So the field is really shifting very dramatically. As far as showing the public what the results are, that's fairly straightforward to do. And while the primary audience is policymakers, we don't think that's the only audience by any stretch. I think teachers in states are increasingly interested in the policy framework. There are things that govern their lives that they don't know are dictated at the state level; I think they think a lot of those decisions happen locally. In many cases, it's just the implementation that happens locally, but it's really set at the state level. 
And the public at large, I don't know that they want to know the nitty-gritty of a state's licensure policy for any specific kind of teacher, but I think they are generally interested in teacher effectiveness and making sure kids have high-quality teachers. Thank you. Remember, hashtag Making the Grades, if you want to throw in questions. There's a point that I want to come back to at the end, and it is this interesting link: there can be a difference between making advances in policy and making advances in student achievement. You pointed that out at the beginning, and I want to press some of you on that later on, but given that it was brought up twice, I thought I would highlight it. The question for you, Eric, is this: it's obvious that these reports have external value, they inform and hopefully influence people, but I want to talk about what they actually do for your organization internally. So can you talk somewhat about how the report card helps you align the work that you do, and then how you hold yourselves accountable for actually, I assume, moving a state from an F to a C or a C to an A? How are you organized, and how does this report help you go about doing that work? Sure. So we're a national advocacy organization, and our primary mission right now is changing the laws and policies in states. We want to move from the education environment we have in most states to a more student-centered approach, and we want to base it on key elements, key levers for reform: elevating the teaching profession and teacher quality; choice; and accountability and spending. 
For us, what we found is that while we started this with an overall goal of raising awareness and generating conversation about the policy, a lot of the things that Sandi talked about in terms of being able to drive that conversation, we have been able to really take a step back and, as an organization, orient our state-based work and our advocacy efforts around exactly those goals: taking a state, looking at its GPA, where it is currently, and then using that to frame the discussion, using that as a roadmap when we go to policymakers to say, look, this is what you can improve this year, this is what you can improve in the next two or three years, and we want to move your state's grade up; we want to move your GPA. When we started last year, I think there were 14 Fs, states that received F grades; this year I think we're down to seven. So we're seeing, as Sandi mentioned, the policy change moving in the right direction. We want to be able to track that over time and eventually move at least a tipping point of states to the right place in terms of the policy environment. I think when we think about a lot of the factors that the Education Week report card measures, for example, really digging deeper into the policy piece is important, because at the end of the day policy really matters. I think it helps explain why you have the student outcomes you have, and I think it also helps explain what a state is doing about it. And when you look at achievement across the whole country and in any state, it's hard to argue that it's not just abysmally low in most places, unacceptably low at best in any state, in terms of overall achievement and in terms of achievement gaps. So peeling that back, looking at the policies and thinking about why that is and what we can change to change those outcomes, I think is important. 
For us as an organization, setting our goals in each state, looking at our current year's plan and then our five-year plan, how are we going to move the state up through the GPA, the grade, the ranking? That becomes really the primary focus for us. That's great, and I'm going to ask you a question about the research angle on this, but I want to just press on one issue for a moment here. There is a difference between correlation and causation. So it could be the case that these policies are getting better and they would have gotten better had you guys never put out your report cards, had your organizations not actually existed. I had a professor once who always lectured me that data is not the plural of anecdote. I'm actually going to go back on that here, because I think anecdotal evidence might be the best thing that we have in this area. So can you point us to, give us, a couple of examples of how you think either your report card or the advocacy that went along with it actually changed behavior, changed the policy, changed the statute, changed the governor's view? We have many examples of that, because we've worked directly with states who've called us up as a result of the yearbook and said, we want to do this, walk us through the steps. So we have loads of examples, and there are plenty of other places where it's like, oh look, that state passed that, where you don't know whether you brought it to their attention or whether it was already on their radar. But we've seen, and I won't mention the state name, but early on there was a state that had in its regs that for teacher licensure tests, if you were within a standard deviation of the cut score, that would be okay. Well, then you don't really have a cut score; you have a cut score that's a standard deviation below what you're claiming the cut score is. And we made kind of a big fuss about that, because it just seemed so egregious, and that policy went away almost immediately after we published it. 
Again, I can't tell you there was causation there, but it seems pretty clear that when you call out a state on something that's almost indefensible, they pay attention and act, and we have lots of anecdotes like that. I'm sure you do as well. Yeah, I mean, like Sandi, I won't name names on states because I don't want to call them out, but legislators can be a bit competitive, particularly in regional situations, and so you'll have legislators in one state who are paying attention to what other states are doing. If they want to move education policy, if they want to be seen as leaders in this field, they're paying attention. And I think one of the benefits of grading states, rather than simply describing the policy, of actually assigning a score, is that we will then have legislators who will ask not only what do we need to do, what type of policy do we need to pass, but who are paying attention, as we are, to the strength and rigor of the policy: well, this is only going to take me about halfway up your grade; what do I need to do to get the full grade? What are all the elements of the policy? And then they go back and amend the bill and put that in there. And that's something that we're able to focus on with our state teams. Again, because we want the GPA to go up as much as possible, it's not enough to just put a teacher evaluation policy in place. We want it to be the strongest teacher evaluation policy we can get, and being able to talk to policymakers about that, being able to talk to our members and our advocates on the ground so that they can push for the stronger policy, we've definitely seen it make a difference. And I would agree. I think having that overall grade or score, whatever your particular rubric is, makes a difference. It makes a difference in lots of ways. I think none of us want to boil it down to a horse race where we just want to see who comes in first and second and all that. 
But from our perspective, for example, with Quality Counts, the overall grade is kind of a way to get an audience that's interested in education policy at a certain level further into the conversation. We've got over 100 indicators across our whole framework. That's too much for anybody to keep in their mind all at one time, including myself, right? And so that overall grade kind of gets people hooked and pulls them in to get more information. And if you are a state legislator, or if you're a state chief or otherwise in a policy leadership position in the state, you have kind of an incentive to know what's going on. Now, there are, you know, some states that want to rise up the rankings for the wrong reason. And we've gotten plenty of questions like that over the years; I'm sure other folks who've done similar types of projects have, where somebody from a state will say, you know, we want to raise our state finance grade a little bit, or what would it do to our grade? And it's not our goal for states to rise up the rankings. We're more objectively tracking state activity in these areas as well as some of the other outcomes. So that's maybe a little cynical kind of reason that states might do it. But, you know, as we've already heard, especially with projects that get into real depth, much more than we do in particular areas, states really do want to know what's going on. They often look to these kinds of reports not just for, you know, which way the wind is blowing, but especially the more detailed ones, for what's best practice or what's, you know, really taking hold. And that's where they learn about these things. They can talk to their colleagues here and there, but I think there's, you know, no substitute for a systematic 50-state look at those types of issues.
So Ann raised a really interesting point in her introduction; she put it in very direct language, about the Students First report card effectively being a, what, laundry list of Michelle Rhee's favorite policies, something along those lines. But there's a very important kernel there. That is, the majority of indicators, I think, in Education Week's Quality Counts, and all of the indicators in the other two, are what we would refer to as inputs, not outputs. How do we think about that as researchers? I mean, it could be the case that, say, five years from now, the states that progress the most on NCTQ's and Students First's report cards, say they all went from Fs to As, but not a single one of them had better student achievement. Yeah, I mean, I think that's the real question, and that's sort of the gamble that we're taking with these report cards. You know, as Chris mentioned, your report card could be this long if you say what you think might be a good idea and this long if you actually included only things that had, you know, really valid, evidence-based research behind them, to the gold standard that maybe a What Works Clearinghouse would use or something. So, you know, I think that's something that you'd have to watch. For me as a researcher, and I'm purely a consumer of these rankings, so kind of coming from a different perspective, you know, I recognize how much work goes into this. A 50- or 51-state survey takes a ton of time and resources, and I'm so appreciative that they're doing it and not me, because if I'm, you know, trying to write something looking at NCLB waivers and teacher evaluation policies, do I really want to go look at every state's waiver every day?
No, I really don't, but I can go to NCTQ's yearbook and pull out that data point as another source of information, or I can go to Education Week and pull the data for an individual state on any of these given topics. And as Sandy was mentioning too, you know, there's not just an aggregate report. A lot of times there's also a very state-specific report, so if you're really wanting to get information about a particular state, it's also a really helpful resource for me. To the bigger question, though, is there gonna be a disconnect perhaps between these policies and then what happens with student achievement? I think that that's a risk, but I think, you know, it's something that's up to people like me who are on the outside to be writing about and pressing on. Anytime, you know, I'm looking to think about, well, our teacher evaluation reforms, are they working? That's obviously a question you guys are looking at too. It's not like you're just grading states. You know, I might look at what NCTQ's data are, but I'm also gonna need to pull outside sources of data. You know, I think when you're grading states you pick and choose. You're not including everything. That would be impossible. And so there are certain things that aren't gonna be included in this, and I think we need to consider that as researchers: you know, what data do we need? Are we actually collecting that data? Does the Department of Education have it? Do state departments of education have it? And if they don't, I think we need to press them to do a better job of collecting that, and then, you know, we'll see. And I would say, you know, despite my cynical blog post about Students First, one of the things that I think is great about all three of these report cards is that they do actually reflect, you know, new research that comes out, changing conditions, you know, new laws.
So, you know, Students First this year, I think I called you out last year, perhaps, on the teacher evaluation side of things and said, why do you have to have student test scores be 50% of a teacher evaluation? The MET project report had just come out, and it said there's a range of reasonable weights, say, it could be 33% to 50%. You know, why are you being so specific? And this year, the report card changed. So I think that it's great when you're listening to the research field and changing how you grade. Your goals have increased over the years, Sandy, as states have, you know, gotten further down the path of these reforms. You've changed what, you know, counts for an A. The goals have gotten higher. Yeah, and I think an important thing to underscore is, we could have graded on a curve from the beginning. You could have graded on a curve from the beginning and told the same underlying policy story with a different message to it. And I think because the goal was to do things differently, you're not helping anyone do that by saying, well, you know, you're at the top and so we're gonna give you an A, even though you have a lot of room for improvement underneath. I'm an educator; we've been grading on a curve for a really long time. Look, I think the framing you mentioned in your question is interesting, Andy, and if I can answer Ann's critique as well, this idea of policy being the input and should we focus on outputs, I think it's a fair question, but in my mind, policy may be the most important input we can measure. And I'll give you two quick examples, neither of which will deal with teacher quality because I'm sitting next to too big an expert on it. But, you know, if we want to talk about what states are spending and how much they're spending per student, and say we lay out that State X is spending $13,000 per student, we need to be asking the question about what are the policies in place?
What are the parameters, the strings the state's attached to that funding, and how much of that goes down to the school level, if we hope to have any sort of understanding whatsoever of what that funding actually does for kids? Because if $13,000 is coming from the state, but only $9,000 reaches the schoolhouse, and of that $9,000, I don't know, $7,000 or $8,000 is already spoken for by different restrictions from the state and from the district in terms of how that school leader can spend those dollars, those become incredibly important questions. If we don't answer them, trying to compare states and the dollars they're spending really becomes meaningless. Likewise, I thought Education Week's theme this year around the changing nature of governance in districts was interesting, really thinking about how the district definition is changing, and a lot of the things that you're interested in, Andy, around districts that are using charter schools as more and more of a tool within the district. We should be asking ourselves whether or not those charter schools are receiving a comparable share of the resources, whether they have access to facilities. Because if this is an increasing trend, if districts are looking to this as a model because autonomy brings certain benefits, but we're not making sure the policies are in place so that those schools can be successful, so that those public school students receive the same types of resources, if we're not asking those questions, again, I don't know what we're gonna actually see on the back end in terms of success. So these are important questions that hopefully this type of report card lays out. And I would say that thinking about policy and achievement as input and output, that's one way to look at it, but I think it's actually not the only way, and especially as you look at these issues over the long term, it's more complex than that.
And I think we get an appreciation for this with Quality Counts because it's been around so long. We've looked, at least contextually, at achievement since the very first issue. We didn't grade on it for a long time, though. But what you see, if you dust off the history books and you remember what was going on in the mid-90s, the really big push for these kinds of standards-based policies was coming from the South. There were a lot of governors in the southern part of the country who were really strong advocates for this approach. These are states that had historically low-performing schools and were looking to do something about it. So there's a little bit of a chicken-and-egg sort of relationship here. If you look now, it's a little bit different, but it was true back then and probably still today to some extent: some of the states that are most aggressive on policy are also the lowest performing, and they're looking at policy as a way to remedy that. Now, as you look at this over the longer haul, you can take a more nuanced perspective on it. And we don't do that often. We actually have only done this once in the history of Quality Counts, taken a more intentional look at the relationship, I wouldn't call it cause and effect, between policy adoption and changes in achievement. We did that around the 10th anniversary of the report. And it's very hard to do. In my academic background, you know, prior to doing what I do now, I was a policy researcher; it's what I did my dissertation on.
And I'm very humble and modest about what you can actually accomplish through even really good policy research, just because the state policy environment is so complicated. You've got 50, 51 cases, which sounds like it's nice and manageable, but it makes it hard to sort out what's going on when states are doing 100 different things at the same time. And we saw some clear indications that certain types of policy were associated with gains in achievement over a period of about a decade. But one of the other things that we've gotten an appreciation for, especially more recently as we've been tracking achievement in a more systematic and intentional way, is that it takes a long time for these things to take hold and to maybe show some movement of the needle when you look at things like achievement. And so if you look at states in the South, Florida is actually a great example: very strong on the policy side, especially around some of the types of policies we tend to track. If you look back at the early days of Quality Counts, it was a very low-performing state. It's moved its way up, so it's about average for the nation. And that doesn't seem really great, but in the longer scheme of things, I think that's a real success story. And while I wouldn't say, you know, causation with a big C is part of it, certainly the policy environment in the state is part of the story. And that's for other folks who know the details on the ground a little bit better to figure out. But it is a little bit more complicated than input and output at a particular point in time. And so I think that's one of the reasons it's helpful to look at these over the long term. Makes sense. We've also, you know, in the absence of clear research on some of these policies, tried to focus on things that just make good sense.
You know, the idea that a teacher's gonna go five or seven years without getting feedback on her practice, it doesn't make any sense. And so saying, you know, teachers need feedback on their performance every year, you feel safe promoting that. When we look at, you know, elementary licensure tests and the fact that most states were using a single composite score, which meant that a teacher in most cases could get every math question wrong and still pass the test, right, that doesn't make good sense. And so saying we need to make sure elementary teachers know all the subjects they're gonna teach, you know, you feel confident putting that forward. Yeah, and I actually have no problem with just the focus on policy. I think it's completely appropriate for these organizations. I'm a policy analyst; I love thinking about policy and seeing what's happening. I would say, though, as a researcher, the overall grades in particular are just not that useful to me. It's not that helpful for me to know that Florida's a B or a C, or that, you know, Mississippi got that grade. That's not really what I'm interested in. That doesn't hook me to the grades. It might get a policymaker's attention. It might get a governor's attention. It might get a journalist to write about the report who wouldn't if you just presented the data behind it. But for me, you know, just looking at the grades doesn't tell me much. I really am the person that's going and looking at the hundreds of pages behind it, and then figuring out, okay, now that we know this policy is happening, what does that tell us about the state of policy reform, and how could we figure out if these policies are working? Maybe I can't do a gold-standard research study, but what could I do to show either what's working or what isn't working and make some further recommendations as someone who's outside of the grading process?
So I would like to open this to the audience and our online folks after one more question for me. I want to be as provocative as possible, hopefully get some juices stirring here. The policy wonk in me, the government bureaucrat, kind of looks at these grades, and part of me just shakes my head and says, oh, implementation, implementation, implementation. Rick Hess and Mike Petrilli famously wrote, when they were writing something about NCLB, that Uncle Sam can make states and districts do something, but he can't make them do it well. For example, there was this policy on giving kids in low-performing schools a choice of going to some other school in the district or maybe even outside it. And GAO ended up finding that only two, three percent of kids who were eligible actually got to exercise that choice. When I was a charter authorizer in New Jersey, I know that we had the same statute the day I got there as when we left, but because we worked with NACSA, this organization that helps authorizers, our policies, our practices on charter authorization went from awful to actually pretty darn good. Again, no change in policy, but a change in implementation. Similarly, we're seeing in some places that have changed their teacher eval laws that they're still getting the widget effect: there isn't this great differentiation in the grades that teachers are actually getting. So what I want to ask all of you to reflect upon is: who is doing research, or should you guys do more research in your publications, on the difference between the cold letter of statutes and regulations and what is actually happening on the ground in implementation? It's certainly something that we've been thinking about since we started. We've always viewed the policy report card as sort of the first step.
The second phase of this, if you will, which I think we're thinking about whether we start doing next year, maybe the year after, and how we ramp it up, is to go back and start to grade states on the implementation of the policy. That's still stopping short of the third phase, which is what are the results you're achieving, but I think it would be interesting. It should be a critical analysis that we perform, that others perform: we passed this policy, what are the states doing to implement it? If you pass an evaluation policy, how many districts are implementing it? What do the models look like? Are they close to the state model or not? How are the numbers panning out in terms of how many teachers are evaluated, and what's the range? All those are questions that we have to ask, because you're right, at the end of the day, as much as policy matters, implementation probably matters a little more. But I still think that you're not even asking the implementation question until you pass the policy. And we see a lot of states that, frankly, sit in stagnation because they're stuck on the questions around how to implement, or on "we can't get it perfect, so therefore let's not quite pass the policy yet." You never cross that threshold and start talking about the complexities of implementation until you have a policy that you start to try and live by, I think. Yeah, I completely agree. I mean, it's necessary, but in no way sufficient to what you're trying to achieve. You have to have the policy in place in order to move it forward. I don't even know how we would begin to track implementation at that level, given the number of areas that we're looking at; certainly from our offices in Washington, DC, how we could do that, I just don't know.
We count a lot on local advocacy organizations taking our report about a certain state as a starting point and running with it, saying that these two things we've given the state credit for, they're doing a terrible job with, and on other things where there's the need for a policy push, they can push on that. I just absolutely agree how important it is. And I would agree. We see the work we do in Quality Counts as hopefully the beginning of a conversation and not the end, and we're realistic about what we can accomplish as an organization with limited resources. We focus on policy adoption and enactment; that's how we look at policy. As important as we understand and know implementation to be, we just don't have the resources or, given the breadth of what we look at, the knowledge and expertise to make more qualitative determinations that this is good implementation, this is not, this is halfway in between. That's not where our strength lies. But whether it's local advocates, who are big consumers of these reports within their own states especially, or other national organizations with a real specialty on teaching or policies related to choice or whatever it happens to be, those are often who take these results and, using something like Quality Counts as a starting point, will dig deeper, will build around the basic information we're able to provide in the report. Makes sense. I mean, I would say the same thing. I think it's great. I don't know who could be the grader of implementation, but I think it's really important that we not just focus on policy adoption, because I think sometimes, and you've sort of shared some anecdotes of this, it can kind of just become a policy adoption checklist. Teacher evaluations, check. School choice policy, check.
And that doesn't always lead to thoughtful implementation; it can lead to really shallow implementation. To get reform implemented well, you have to really be thinking about the timing of it, the sequencing of it. What should you do first? Should you try and do it all at once? Should you have a more ordered process? How do you get buy-in? And we see all the time what happens: there are the best of intentions around a policy, but an administration changes. You go from a Bloomberg to a de Blasio and things change dramatically. Or you go from a Tony Bennett to a Glenda Ritz. What happens when you have those shifts? A lot of times, policy staying in place depends on how it was implemented and how deeply it's taken root. Regs can change, laws can change and go back to what they were before. So it is difficult to grade implementation, but I think if we don't focus on it, you could just get a lot of politicians or state legislators who are hoping to make a name for themselves say, yeah, look at what we did on this, we went from a C to a B-plus on this grade because of this bill that I passed, and not really care about how that's implemented and how it affects schools and educators. So I hope that someone can answer that question. I'm not sure if you all have the capacity to, but I think if we don't think about this, these report cards can just become a distraction, and then we may not see the results from these reforms that we would hope to see in the end. I was just checking the feed here on Twitter, and there are lots of interesting comments and questions, which I hope we'll get to. The thing about implementation seems to really have struck a nerve with people, especially folks who are sort of in the system. So sir, I think you had a question? Oh, we have a mic coming. Oh, on its way. Hi, my name's Dave Price, and I started in school as a five-year-old in kindergarten; 56 years later, I'm still there, as a senior. My question, two questions.
First, to Sandy and Eric: giving Montana and North Dakota an F, do you have any fear of driving through those states later, that you'd be identified as having given them that grade and your picture might be posted somewhere? Or are you pretty comfortable with that? Yeah, we've been giving Montana that grade for a while now. Yeah, we're, I think. You have no relatives there, or they still speak to you? We're okay with it. They also have high speed limits, so you can drive very fast. Now, here's the more serious question. It's to all five of you, or any who care to answer. You could choose policy, you could choose implementation. If tomorrow, hypothetically, you could make one change to make education better for the most students in the most states, we understand it's different all over, what would that change be, and why would you choose that one? I'm happy to go first or last. I mean, I'm gonna sound like a broken record; everyone knows what my answer on this should be. So why don't you guys go first? I'll go first, and this is just me talking, not Education Week or Quality Counts, because we're not in that business; this is just my own view. I think we need a more sensible approach to assessment, right? As a researcher, I think we waste a lot of time and energy with assessments that in some cases are bad, in some cases don't measure what students are learning or are supposed to be learning, and are just completely non-comparable with one another.
It drives me crazy as a researcher that we don't have good, nationwide comparable measures of assessment that require real demonstration of not just facts and figures but how students learn, and that become part of an infrastructure that can drive policy, that can help students as they're learning, and not maybe a couple of years later once the data finally gets out, and that can really be a backbone of more integrated curriculum and instruction and instructional intervention. This has been tried, right? If you go back to the 90s, I knew somebody very well who was working in the U.S. Department of Education, and he was tasked with trying to promote what was at that point the voluntary national test. That didn't go anywhere. Under the Common Core and the common assessments, we've made a lot more progress. We're not there yet; we'll see what happens. But I think we just waste a lot of time and effort as a field, and the balkanized approach to assessment, I think, is in very few people's interest right now. So I obviously have to give a teacher answer, right, coming from the National Council on Teacher Quality, and I think having the ability to identify teacher performance and teacher effectiveness is just absolutely central and connects to almost everything else: how we prepare teachers, how we compensate them, how we provide professional development that's to teachers' benefit and students' benefit. Almost everything that we include in the yearbook, if you put the teacher effectiveness measure at the center, would connect out to it. So I think that, and being able to do it well and right and get real differentiation of performance so that everybody can grow and develop and improve, so that we know who our superstars are and use them really well, and so that we do something about chronic underperformers as well.
Yeah, it's tough for me. I know, I'm gonna pick one; I'm not gonna fight that. So I would agree with Sandy on the teacher quality piece. I'm a huge choice proponent, but every time I go into any type of school, whatever type it is, what I'm walking away with is always thinking about the instruction that I saw and the teachers, and to the extent it's a charter school, what their autonomy enabled them to do in terms of the environment they created for teachers. And this equity question that continually comes up: we're talking about our students with the greatest needs, what sort of resources are they getting? To me, the most important question we should be asking there is why our kids with the greatest needs aren't getting our very best teachers, and what policies are preventing that from happening. So I gotta go back to teacher quality as well. Yeah, and this is a very tough question, and maybe my answer touches on some of the things that others have talked about, but I would really like for us to actually be able to know, through better data, through better assessment data but not just on the assessment side, what resources and opportunities students actually have at the school level. Because I think right now we give much better support, much better resources to the schools that are better resourced to begin with, that are already at an advantage. We actually don't know how bad our equity gaps are, our resource gaps, our access gaps. And with a lot of these questions about, well, what should we do about it, you can't even begin to answer unless you have the data. So part of it is the assessment data, but it's also actually looking at school-level budget data. We don't have that. We don't actually know how much different schools have when you look at their teachers and different resources.
So I think that that's something I would like to see, and I think it could actually spur a lot of great conversation about better targeting our interventions to where they're needed. So my answer is more governance-related. I'm struck by the fact that every state constitution not only empowers the state but gives it the responsibility of delivering a public education. That responsibility is at the state level; that's why, when we have adequacy and equity lawsuits, the state is the one that is sued. But in every single state, the state has made its decision to delegate that responsibility to these things we call districts. In the words of Ted Kolderie, we gave the exclusive territorial franchise of public education to these geographic entities, and in urban areas they've been failing our kids for at least half a century. And so the fact that we have a governance model that isn't working for tens of millions of poor kids, I find offensive. What I find interesting and really encouraging is that over the past 20 years we have gotten away from that. First it was with chartering, saying that non-district entities could run schools. More recently, we have the RSD and the ASD in Louisiana and Tennessee respectively, where the state said, yeah, these geographic districts aren't working, we're gonna have a statewide district that can create schools or take over schools. And we're seeing increasingly, like in Camden, maybe in Kansas City, where the state is gonna take over a district and do something different with it. So I think we might be on the cutting edge of solving a century-long problem where the state abrogated its duty to really deliver a great public education to every child. So, do you have any good Twitter questions you'd like to surface for us? Yeah, please. What I noticed from Twitter is most of you guys are commenting more than asking questions, which we appreciate. So this is from Deborah: I agree that policy is easier to analyze, but how do we know we're looking at the right policies?
How do we know what matters most? Let's start on the teacher side. I mean, from our point of view, we've tried to be comprehensive. We've tried to include everything we can think of that is a policy at the state level that impacts the teaching profession; that's why we have 31 goals. So we have tried not to exclude anything. How we set those goals reflects, of course, our priorities. But I think we do see over time, when you see 30 states shifting in some areas and not other areas, you have to ask, well, maybe this just isn't as important, and start to think about it that way. And we have thought about that in terms of how things are prioritized. And this is in no way to be critical: we have the luxury of being really comprehensive because we're trying to go deep on one issue. I don't know how you'd do that more broadly, because it would be a thousand-page report just on a single state. And the flip side is we worry, because we are putting out 150 pages per state, that no one in the state wants to read 150 pages. So finding the sweet spot is very hard. Yeah, I think we look at research, we look at facts on the ground and common sense, and try and put together the best combination. I think the research is clear on the importance of the teacher in the classroom. So what policies get us there, to be able to put a great teacher in front of every kid? There are too many kids that are stuck without a better option because of their family background, their zip code, whatever it is. So what policies increase the number of choices that are available, and make sure those are high-quality choices? To your point around governance, I think it's important; it's why we focus on governance as well. So I think we focus on what we believe are the levers for reform, the policies that are gonna create conditions in which educators in schools can do better by our kids. Fritz, did you have a question? We've talked a lot about the focus of the two reports.
One, for researchers, because it gives information, and two, from the policy side, giving ideas. Going back in time, we've referred to the grandfather of the thing being Ed Week, but it goes back even further, to the NGA reports on states when Mike Cohen was running that side of the NGA, or when Ted Bell was secretary and did the first report card, which people probably forget. But the question I wanna ask has to do with making a difference. In each of the reports, we talk about how it serves researchers or points out policy. But has anybody gone and looked at the reports and what difference they've made in states as a result of pointing out lack of policy, or lack of practice and lack of implementation? And if you haven't, is that on the radar screen? Yeah, I mean, for us, again, our starting objective here was to create awareness and then use that as a roadmap for dialogue, for change. And I think that we have seen that. This is anecdotal, we haven't done a study on it or anything, but it is definitely playing out in the states that we're working in, and even states that we're not working in, where folks are talking about this. And so the awareness piece is sort of checked off, because suddenly we're having conversations about these policies, we're having conversations about the state's grade. The extent to which it makes a difference, I think, is because it needs to be aspirational. We want states to want to change. Take even a state like Massachusetts, when we put out our report card last year. Massachusetts always does well in the Ed Week rankings. I think people sort of take for granted that they have the best education environment. And yet we had an op-ed by legislators on both sides of the house saying, you know what, we do okay in Massachusetts, but it's time to change. There's a lot more that we can do. And this policy report card points out that there are a lot of areas where we're just not keeping up.
Is it important to talk about that in your next set of reports, if the report is making that kind of difference in the conversation or in practice? We include a progress indicator in each of our goals so that the state, or any reader, can see whether this is something that not just this state but any state has moved on. Funders might not like this answer, but we don't much care whether a state moved because it saw us point something out, or for any other reason. I totally agree with what Eric said: we're there to keep the issue out there and keep focus on it. If we are the direct catalyst for the change, that's great, and we have some good evidence of that, but we don't try to systematically track it. The thing that strikes me is that the two reports that I find to have probably been the most influential were national reports, not state-level reports. The first one, and this is going back way in time, was the Coleman report, coming out of 1965 and the first iteration of ESEA. A famous sociologist was the first to find that out-of-school forces were overwhelming in-school forces. So kids who were growing up in poverty, with parents who didn't have much education, their schools weren't compensating, and that really set off 50 years' worth of policy changes, or at least effort. The other big one is the famous 1983 A Nation at Risk, which uses striking language, like: if a foreign power had done to us on education what we've done to ourselves, we would have considered it an act of war. It got everyone together, saying that we need to do standards, assessments, curriculum, higher expectations, better. And I think those things galvanized policymakers at the state level maybe as much as state-level reports do. Well, I think one of the lessons from A Nation at Risk is that we need more Cold War rhetoric, and who knows, with what's going on in Crimea, maybe we're entering that new age.
I would not argue at all with the importance of those kinds of big reports. I've got a copy of the Coleman report on the shelf in my office; I've read it and taught on it, actually. But let's remember that those were very different, especially the Coleman report. I mean, that was a long time ago. And A Nation at Risk really happened before there was the kind of state policy environment in education that we recognize today. I can't remember exactly whether the Department of Education had yet been formed as a cabinet-level department; it was right about that time. Right, so just ahead of that. So it was a very different environment. It's things like A Nation at Risk, and what it points out at a national level, that I think were part of what spurred what became a very active next decade in state-level policy, which has led us to where we are now, where national takes are important, but so much of what has been driving momentum in education is at the state level. I think that's just part of the history, and the earlier point about governance I think is also interesting. It's constitutionally enshrined at the state level, and yet for 150 years before that, we really thought about this as a very local enterprise. You look over the past 20 years or so, and you see just pound for pound so much more activity at the state level. And I think that's part of the same evolution. Whether you're looking at governance pure and simple, or looking at equity as a way to get at some of these issues, I think the states just matter a lot more now than they have historically, in part because of things like A Nation at Risk, and for other reasons as well.
I totally agree that right now, if you're trying to figure out what's happening, we're so confused nationally that you can't even get a clear picture of what's going on. So a state-by-state analysis is really the only way you're gonna figure it out. You have to look at state waivers. You have to look at changing teacher evaluation laws, changing standards, changing assessments. Everything is up in the air and in flux. And I think we haven't quite figured out what our next big national message is or should be, and states are really in the driver's seat right now. Maybe you just came up with the answer, and I'm gonna take a complete flyer on this. We are at a standstill when it comes to ESEA reauthorization. I mean, what, it's now seven years overdue. Who has ever done a report card on the federal government's policies? On teacher quality, on school choice, going through everything that was in NCLB, but also including the Title II stuff, the IDEA stuff. If your reports are galvanizing action at the state level, would a federal government report card on education policy galvanize? I mean, the challenge is, federal policy now is state waivers, more or less. You look at NCLB, and the legislation is still there, and there are still states, I always like to recognize them, that still have to live with NCLB. But for the most part, you're talking about waivers. Waivers can be thousands of pages long to read. I tried to look at just one of the three principles in the waivers: I looked at the accountability systems. I did 16 states, and it took me over a year. And that's without the waivers to the waivers, the waiver extensions, the waiver monitoring. This is just looking at the initial waiver requests, which had been amended, so you have to keep going back and checking to see what states have done.
So the time and capacity it would take to pick any one particular policy and grade it on a national level, I question whether the U.S. Department of Education, or any single organization, has it. It needs to be a much bigger effort if it's gonna get done, and we just don't have the data right now. It would be awesome to do. I am all for trying to figure out, nationally, what's happening in these various reform areas, but I don't think that drive is there right now. There's not demand for it; not enough people are demanding it, and it's a big challenge for any one organization to take on. So I think there'd have to be a big collaborative effort. Oh, I think, I mean, federal policy is obviously a much different animal in a lot of ways than state policy, but let's go back to, say, the heyday of No Child Left Behind. There are a lot of reasons people don't like, or didn't like, No Child Left Behind; it's a tarnished brand and all that, right? But part of what No Child Left Behind brought about wasn't specifically what happened at the federal level. A lot of what people did not like about the way it unrolled had to do with state-level implementation of the policy framework in No Child Left Behind. One of the things that I've done a lot of work on is graduation rates and high school policy. And a lot of what really came to the fore, in terms of a very dissatisfied national discussion about what the rates are, and what the accountability and standards for graduation rates are, had very little to do with what was actually in the federal law. It had everything to do with the way a lot of states chose to implement the fairly broad framework that you often see in federal policy, in federal statute. So I think it's interesting to think about how the federal government has approached various policy issues.
It's largely through funding. It's often through a lens that has to do with, say, equity, or serving particular populations. But a lot of what ends up mattering is how the states take that framework, which is more or less restrictive in different areas, and put it into practice at the state level. Now, there are quality implementation issues at the state level, and I don't want to give the federal government an out. But I think it's a little more dynamic than just: this is what the feds do, this is what the states do. There's a lot of give and take. Okay, so we have about 10 minutes left, so I'd like to see if we can get at least one more from Twitter and at least one or two from the audience. Sir? Thank you, wonderful panel, by the way. I'm Fred Winter with FA Winter Associates, previously, like probably half the room, with the U.S. Department of Education. And before I ask my question, I'll say parenthetically that I hope we eventually get a question from the room that isn't from a male who is white and over 60. My question is this: the president has tasked the U.S. Department of Education with developing a rating system for colleges and universities. From your experience with report cards for the pre-college systems, what advice would you give to the team that's developing the new higher ed rating system? What a great question. Good luck. Yeah, good luck. Well, we rate teacher preparation programs as part of our work, so we've waded into those muddy waters already. I mean, it's a challenge, very similar to the kinds of things we've talked about. You're not gonna be able to set indicators where every indicator is clearly grounded in research. It's just not gonna be there. Nor is the perfect implementation already outlined for how you evaluate it.
So, I mean, NCTQ's take is you've gotta do it; you've gotta stick your neck out a little bit and then make changes if you need to. But that is a thorny, thorny subject. I would say, be careful and very thoughtful about the data that you choose to include in the rankings. Anytime you create any sort of accountability system, what you're trying to do is change behavior. It might be institutional or school behavior; it might be individual behavior. And what's included sends a very strong signal about what matters. So be really thoughtful: when you choose a particular indicator, and the performance standard for receiving a certain ranking, all of that, what action do you hope it will spur? And try to see through what some unintended consequences of those choices could be. And really make sure that you're choosing good indicators. I think it's been interesting with NCLB, best intentions, obviously, but a lot of the assessments that we've been basing all these decisions on, whether it's which schools are low performing or which teachers are not effective, we would all readily admit those assessments are not that great right now. They're getting better, but they haven't been that great. And I think that's gonna be a challenge in higher education: just the lack of data that we have right now. We really only have good data on students who take out loans, and that's missing a huge portion of the population. It's not universal, and there are a lot of data points we don't have. So I think you're gonna have to really clean up the data side before you can even start to have a ranking system that has validity and creates the right incentives for schools. Yeah, I would also agree with the tread-lightly point on this.
The thing that strikes me the most is, once we start doing report cards on, for example, institutions of higher education, with graduation rates, matriculation rates, what we're going to find is that a lot of the ones with the lowest ratings are actually some of the institutions that are the best at getting in students who otherwise might not go to college. HBCUs, for example, or schools that are in more remote areas. And you automatically have this tension: they're good at this, but they're not good at that. So just be aware, when you're putting these things together, that the information you get might cause some uncomfortable conversations. And I should put in a plug: New America does work from pre-K through higher education and workforce, and I'm sure some of my colleagues have a lot of recommendations for you on this very topic, probably more in depth than anything I could offer. So check out our website. I thought I saw a non-50-year-old white man over here with a question. My name is Katie Forder. I'm with the American Youth Policy Forum. And my question has to do with translating policy from state to state. Every state in the union is very different in a lot of ways. And I'm sure, as researchers, you all know that how one policy is implemented in, say, Tennessee would be very different from how it's implemented in Wyoming. So, based on the research that all three reports present, I would love to get your perspective on how a policymaker in a small state on the East Coast can take lessons from an effective policy in a more rural state in the Midwest, for example. So, I don't know that the distinction is actually as important as the lore around it would suggest, right? On the front end, any parent I've ever talked to, in any state, always talks about the same things they want for their kids. They want their kids to be able to grow and learn and achieve and do all these great things.
They want them to be able to go to college. That never seems to change on the expectations end. And certainly, at the point where students graduate, they're competing with each other all across the country, whether it's for which college they're going to go to or what jobs they're going to get. It's increasingly apparent that our younger population is more mobile. The idea that you're just going to stay in your hometown, or even your home state, just isn't as true anymore. So, on the expectations end, that doesn't seem to be the issue, and on the reality of what happens after a kid graduates, they certainly have to compete with everybody else. So this idea that we somehow have to protect and insulate K-12, and that it has to reflect unique characteristics of the state that are very different from some other state, I just don't see it, I don't get it. I will say this: when we are looking at our report card and how we grade states, while we want states to implement a set of policies, and we have a rubric for what we consider a bad policy versus a really great policy, we are thinking about whether there are different pathways to get to an A. We are thinking about how different policies interact with each other, and the idea that it may not be the case that every state has to have every policy. For instance, one of our policies is mayoral control, because we believe some governance alternative should exist. I think our principle would be: there needs to be a governance alternative. Whether that means every state, say Montana, needs a mayoral control policy on the books probably doesn't make a whole lot of sense, and it's something we're thinking through. What's the pathway by which you can demonstrate you've got some way of breaking up the current governance model when it's not working, without it having to be that exact prescription?
So the objective, the end goal, is the same; maybe the pathway for getting there can vary a little bit in terms of the policy itself. I think that is so right. In our experience, for as much as every state tells you how unique it is, for most of the policy areas we look at, we generally see four or five, sometimes a few more, distinct models spread out among the states. There's certainly not one cookie-cutter approach across the country, but in most of these policy areas there are not 51 approaches. And we try to walk the same line: we have a policy goal, but there's not one way that goal can be met, and we do try to make sure that states can see the different ways. Where we can, we try to build into our best practices different approaches that states are taking that will get you there. Yeah, there are different policy levers that you might have in one state versus another, depending on what powers the state board of education has versus the governor's office versus the legislature, and who's in control of what, but the approaches are sort of finite. So I would agree that there may not need to be so much tailoring to each individual state. What may need to be tailored, though, is your communications approach around that strategy, because a local audience might be slightly different from one state to another, and you need to know who the key players are. You can certainly share strategies between states, but that might be where there's more tailoring. I think where the center of gravity in terms of education and leadership rests within a state at a particular point in time matters a lot. But every state is not absolutely different from every other state, in the same way that not every district or every school is absolutely exceptional and different. There would be little point in these 50-state exercises if that were the case, right?
Then it wouldn't matter what was going on in Virginia if you're Maryland, or vice versa. And I think the policy and political environment at a given point in time matters a lot. Go back to your textbooks from undergrad or grad school on political culture: it seems very archaic and ethereal, but it's real, in the sense that you do get regional patterns, in terms of local-control states, for instance. They're still around in some form or another, and that speaks to what the state-level environment is, where momentum has the potential to be built, and how much can be addressed through state-level policy versus other approaches. And the way state leaders, whether it's the state chief or the governor or one of their deputies or whoever, receive state report card types of projects and make use of them varies a lot, right? So you spend a lot of time thinking about states that want to be at the top of the rankings, because they can wave the flag of being number one or number two or whatever. But actually, that's not always the way this information is used. There are plenty of cases where being at or very near the bottom of the rankings in Quality Counts, or in a particular area of Quality Counts, is more important to a leader in terms of getting action than being able to say we're at the top, right? There's a great example from a number of years back. I was on a panel with, I think, the state chief from New Mexico, and New Mexico was very low ranked in Quality Counts that year. Now, that wasn't okay.
But there was a silver lining, because it gave the chief and the department a lot of ammunition, if you want to think about it that way, to go to the legislature, to go to the other leadership within the state, and say: we need more investment, we need to put more emphasis on this, this, and this. So even low grades can help support a forward-moving policy agenda. You don't always know how it's gonna play, and it'll play differently in different states, depending on a lot of things. But I think it's an important factor in what goes on. So, just briefly, I'm very sympathetic to your question, probably a lot more than I would have been six months ago; my organization recently began two years of work on rural education reform. And so we've had some time, especially in Idaho, but also in some states in the Deep South, to take a look at things. And I've been very surprised at just how different the expectations, and the interactions between schools and communities, are. For example, there are a lot of very rural communities, we've learned, who feel this deep tension: they worry that if their students are very, very well educated, they're going to go to very good colleges and never come back, and that by having great schools, these communities are sowing the seeds of their own demise. And so they're trying to wrestle with this. So in those places, I bet they would say: in your report card, if you're gonna rate us, you also should be rating economic development in our communities, so these kids that leave can actually come back. So I think there should be some nuances. I don't have this all figured out yet, but if you're interested, the project is called ROCI, R-O-C-I, in Idaho. That's a handle for it, and you can also find it online. I think we're just about at the end of our time. Should we give everyone a minute to summarize, to give final statements? Sure. Wanna start?
Yeah, so again, thank you so much for coming out. I think this has really been a great discussion, and I've appreciated hearing from the graders whose content I am absorbing and using. And I think it's clear that we all see the utility of these rankings and grades, but also the need to really figure out implementation and how to think about that in a different way. That's one of the takeaways I'm gonna keep chewing on after this event: how can we move from policy adoption to policy implementation, and incorporate that into our work as well? Because that's what's really gonna drive whether we see the achievement results that we want at the end. So thank you, first of all. A real quick note on the last question for me to close out. This state-by-state advocacy work on changing education policy often gets characterized as something new. And one of the most interesting things we found when we did our first report card last year, and broke open the state codes, was how similar state laws actually were. It goes back to this notion that states are somehow unique. Somebody, and we scratch our heads and wonder who, certainly engaged in an advocacy campaign over the last couple of decades to go state by state and make sure that a lot of the personnel policies looked an awful lot alike. Literally the same language, state by state. So this is nothing new. This engagement in changing state policy to look a certain way is nothing we're inventing. I think what we're trying to do is change the conversation a little bit and say, well, if we're gonna change education policy, who should it really focus on, and what should it be centered around? And so hopefully you're all able to take a look at our report card. It's at reportcard.studentsfirst.org. Hopefully it's a useful tool. We'd love your feedback on it. And yeah, thank you. Yeah, well, thanks to y'all for coming. Thanks, New America, for hosting.
Our State Teacher Policy Yearbook is at nctq.org. This year we launched it with a new interactive website that we hope is much easier to search, use, and share information on; before, you couldn't do much besides download the reports themselves. So we hope it's much more interactive and useful. We'd love your feedback on the website and on the report as a whole. The one note I'll end on: one of the things that was really fun for us this year was that we went back and took a look at some of the comments, because we have this dialogue with the states incorporated into the reports. We pulled out some old comments from states, from a few years ago, on things they are now doing that, not that long ago, in 2009, they told us simply couldn't be done. That is impossible in this state; here are the 10 reasons why we will never go down that path. And now they have that policy in place. So the shifts are there. We just need to keep putting the pressure on. Thanks, everyone. We don't want your feedback. No, just kidding. Yeah, edweek.org if you wanna find Quality Counts and all things Ed Week. As a part of that, I'm struck by the way we often think of these state report cards first and foremost as something we do to states. And I'm sure Quality Counts is perceived that way in more than a couple of quarters. But ideally, we like to think of this work as something we do for states. State policy leaders of various kinds, advocates on the ground, teachers, students: they are really who we're doing this for. That's who we wanna arm with better information they can take and use to make things better for themselves. But in our case, and we haven't spent a lot of time talking about the nitty-gritty on the operational side, this is something we do very much in partnership with the states, especially when we do state policy surveys.
The states are the ones filling out those surveys, and there's a lot of back and forth. We could not have done Quality Counts at all, let alone for 18 years, without a lot of involvement from the states, the state education agencies. And we always appreciate that, and we're looking forward to continued work in the future. So many of the changes we've made over the years have come in part from feedback, from the general K-12 education field as well as from states in particular. So if you do have thoughts you wanna share with us, definitely let us know. We're in one of those phases where we're thinking about those issues quite a lot right now. Okay, for me, just six quick thank-yous. Number one, to everyone in the audience for being here and participating. Number two, to all the people who are tweeting in or watching, are we live streaming? Live streaming, out there in the ether somewhere. And then to the panelists: I want you to know how much I appreciate their work. If you have not read Ann's report on the waivers that she referenced, you are doing yourself grave harm. It is one of the best things I have read in a really long time, and it is as edifying a piece on federal policy as you could imagine. Do yourself a favor and read it. Eric and his organization have been pushing as hard as any org out there for the best interests of kids, especially related to ed policy, so they deserve a whole lot of credit. I have to thank NCTQ: when I was working in New Jersey, we actually had NCTQ come to our state board and give a presentation, because I think we'd gotten a D or a D-minus that year on our policies. And I learned, thanks to their report, and correct me if I'm wrong, that for our Praxis exam, our cut score was like a standard deviation and a half below the mean. That's about the 9th percentile. Yeah, so through Praxis we were allowing in, like, 91% of the people who took that test.
I didn't know that; we were able to change the policy. And it goes without saying, Quality Counts is the standard-bearer. It's something that I, and probably lots of people in this room, have used for years. So please keep it up. And thank you to all of you for being here. And thanks for hosting, Ann. Happy to have you guys. Thank you for coming. Have a great day, everyone. Thank you.