Hi, folks. This is John Ross. I just want to go over a couple of issues about using the link interface, if you've never used it before, and then we'll pass it on to our presenters. We will use a couple of different features in the session today. If you would like to change the way you're viewing the screen right now, you might be seeing a bunch of small blue squares at the top. If you'd like to change it so the presentation is larger, there's a button on the lower right-hand side that allows you to pick from three different layouts. You can pick whichever one you like, and it won't change anybody else's view. We're also going to do some interaction today through chatting as well as using the phone. In the far left corner of your screen, if you haven't already seen it, is the chat window. If you click that little round button with the little comment icon, the chat window will come up. Just enter your comments and press return. I'd like to pass this over now to Dr. Sharon Harst, who will present a welcome and introduce her speakers.

Thank you, John. Hello and welcome to the ARC Webinar. This Webinar is one of a series of presentations on cutting-edge issues facing education today. I'd like to thank Dr. John Ross for his leadership in directing the Webinar series and for his skill and knowledge in conducting technology-based services. Thank you so much, John. Benchmarking capacity building is a topic whose time has come. The ARC and the three evaluators presenting today's Webinar are engaged in benchmarking the capacity building services that we're providing through the Regional Comprehensive Center. And we're conducting this Webinar today in response to interest across the Appalachia region in designing and implementing state-level capacity building services. Detectives, investigative reporters and evaluators all have one thing in common.
They know how to look for evidence and they know how to use evidence to determine the existence of some fact or some event or something that they're trying to prove or disprove. In this Webinar you'll learn to locate and use existing data in your organization that will identify capacity building needs, and to look for evidence that will benchmark the progress that your organization is making in building capacity. I'm very pleased to introduce three highly skilled and knowledgeable evaluators who are presenting today's session: Thomas Horwood, Dr. Caitlin Howley and Dr. Keith Sturges. Thank you so much for doing the presentation. Caitlin?

Thanks Sharon, and welcome everyone. We're really delighted to have you with us today. As Sharon said, I'm Caitlin Howley and I serve as the Associate Director of the ARC. My background is in the sociology of education, and in a previous life I was an evaluator of technical assistance programs. I'm going to tell you a little bit more about our other two presenters today. First up is T.J. Horwood. T.J. is the evaluator for the ARC. He also conducts research and evaluation for state and local education agencies and has investigated the effectiveness of a huge range of education interventions. So he knows quite a lot about how to measure things that are difficult to measure. T.J. is going to discuss some great data sources for benchmarking capacity, and then he's going to tell us the story of the Vandalia Department of Education Standards Initiative. Our next presenter is Dr. Keith Sturges. Keith is an educational anthropologist specializing in the evaluation of technical assistance and change. His work brings together ecological and stakeholder engagement approaches to program planning, development, delivery, and improvement. So he knows quite a lot about the messy work of organizational development and complex change. He's also an accomplished group facilitator, so Keith will guide our discussion later during the webinar.
No pressure, please. We have three main objectives for today's webinar, as you can see here. If I had to distill them down to one sentence, it would be something like this: our discussion today is intended to help you begin to explore how to benchmark your SEA's development of capacity using data that you might already have, as well as potential new data sources. So here's the big idea that we're considering today: capacity. That's a really huge concept and it means different things to different people in different contexts. But for our purposes here we want to be as specific as possible. So in our work, capacity isn't just potential, skills, or great systems. It's a group's ability to put their collective knowledge, skills, interdependencies, and systems into action so that they can overcome problems and achieve goals. And all of us, individually and in groups, have capacity already. But even the highest flyers really can benefit from capacity building. We also really have to be honest about capacity building: it isn't easy at all. And that's for three main reasons. First, it requires continuous change. This means continually cycling through assessment, refinement, implementation, and evaluation, and back again. Second, capacity building really depends on your team's willingness to define its goals and then consider very seriously how to get there. And third, capacity building can be difficult because it's disruptive. It disturbs things as they are and it reverberates throughout the organization. But when capacity building is successful, it enhances organizational ability to respond constructively to disequilibrium by doing two things. First, by empowering individuals with knowledge, skills, and tools that they can use to identify and understand the problems they face and then devise feasible, meaningful solutions to them.
And second, by strengthening processes, things like strategies for ensuring interdepartmental communication, and structures, things like lines of authority, so that people collectively and individually can apply what they know and know how to do to addressing problems and ultimately to achieving their goals. Today we're going to talk about four main types of foundational capacity, summarized on this slide here. Very briefly, human capacity includes things like people's knowledge and skills as well as their determination or will to accomplish goals. It's kind of what inheres in people. Material capacity is just kind of what you think it would be: things like funding, equipment, software, office space, the material tools that we use. Organizational capacity is a bit more complicated because it has to do with interdependencies; it's concerned with how people across an organization intersect, how they communicate and collaborate among individuals in various teams. And then finally, structural capacity, which includes organizational constructs that exist kind of independently of the people in those organizations. So structural capacity includes things like guidelines or policies, procedures and systems, the legitimated ways that we do things around here. So, very briefly, those are the foundational capacity types we'll be talking about benchmarking today. Now here are some of the questions we hope we'll consider as we look through our vignette during today's webinar. How would you know if your State Department lacked human capacity? What kind of indicators would suggest to you that your team had developed material capacity? What evidence would let you know that your State Department had improved organizational capacity or strengthened structural capacity? I'm going to turn things over to T.J. now, who's going to talk to you about some great data sources that can help you answer these kinds of questions. T.J., it's all yours.

Thank you, Caitlin.
And before I jump in, John, can you reiterate where the handout is available? I think it got sent around, but is there a place to download it as well?

There is. We sent it to registrants in an email, but you can also open it by clicking on the left-hand side of your screen, where there are a couple of circles with icons in them. The one with the computer monitor is clickable, so if you click on that it'll open up a pop-up box. There are a couple of options; there's an attachments option at the top. If you select that, it'll take you to two handouts. One is the Vandalia vignette and the other is a handout about data sources.

Great, thank you, John. So Caitlin mentioned the definitions of the different types of foundational capacity: human, material, organizational and structural. I wanted to take some time, first of all, to talk through some of the great data sources that are available to SEA staff to gather evidence of these different types of capacity. You can imagine that some of them are easier to observe than others, and I think you'll find, too, as we dig deeper into the core definition and the different components of these types of capacity, that what we hope you'll gather today is the ability to think beyond what you might normally look at for evidence. So, for example, the first one is human capacity. Most of you obviously are very familiar with the staff who are involved in the various initiatives that you're planning and developing and implementing, and so you might think about human capacity in terms of staff members' backgrounds: based on their education or their work experience, do they have the intellectual proficiency to implement the change that you're hoping to implement?
So that's, I would say, kind of the easiest one to think through. But what we also want you to think through is staff members' disposition during the planning and implementation of the initiative: do these staff have the will to make the desired changes that the initiative will hopefully make? That helps you dig a little bit deeper into the human capacity element. That's the first part of foundational capacity. The second one is material capacity, and here I've listed three examples of great data resources that SEA staff can look to. The first is fiscal resources. This would be documentation that funds are available to do the initiative, and that would usually come from your business office or as identified through a grant. The second one is technologies and software, and the third is materials and equipment. With the last two, the capacity is having these things in place. So, have you purchased the correct technologies and software to do the initiative? Do you have the materials and equipment in place? Have you purchased them? We also want to make sure that you're looking for evidence that these items have been logged, that they're installed or assembled, and that they're actually available for use for the initiative. So that's another way to look at the material capacity to do these initiatives. The third area is organizational capacity, and here we want to think about a few examples of areas where you might observe organizational capacity. What you're really looking for here is the extent to which the people involved in the initiative are interacting, collaborating, and communicating. So, for example, for interaction you may observe meetings to determine whether initiative leaders and stakeholders are regularly interacting to carry out activities, and that would be leaders interacting with each other, as well as leaders interacting with stakeholders and other stakeholders interacting with each other.
A second example is that you could work to gather feedback from the team members directly to determine how they are collaborating to accomplish initiative goals and objectives. And third, you can track how initiative leaders and stakeholders are communicating with each other about the initiative goals, progress, and outcomes. I like to think about structural capacity as going from paper to practice. The different elements within structural capacity that you'd be looking for evidence of would be, first, that the SEA has policies in place and that those policies are highly functioning. That means that the policies have been written and they've been adopted; that would be the first step. The second step would be that the SEA has procedures in place and that the procedures are also highly functioning. These I view as being the formal steps and the guidance that's provided to the staff who are involved in the initiative. And then the last area that you'd look for evidence of would be that the SEA staff are practicing what's written. So the SEA has practices in place, and they too are highly functioning. This is what I like to think of as what people are actually doing in real life. So, after going through those slides, are there any questions about the different types of evidence to look for? And if you do have questions, feel free to either shout them out or type them in the chat box.

I was going to remind people that if you muted your phone with Star 6, you can use Star 6 again to unmute, or you can use the chat box. T.J., Sarah Seiko has posed a question, which is that will is one of the components of human capacity but seems a bit hard to measure. Any thoughts?

Well, we think of will, as identified in the handout, as interest, patience and persistence. So yes, that is a great question. Thank you, Sarah.
I like to think of looking for evidence of that through your interactions with staff. So, for example, as you're getting ready to plan an initiative, think of those people who kind of step to the front of the line, who say that they're interested in the initiative and that they really want to see the initiative be successful. That's one way to think about it and one bit of evidence to look for. The other components that we talk about when we discuss will are patience and persistence, and I think those would be observable throughout the initiative, particularly as things get more difficult. If the group gets to a point where they might be experiencing some disagreement or having difficulty interacting, that would be something you would look for during the initiative implementation, or even during the planning. Great question. Does anyone have anything to add to that? Shall we move ahead then to our tale of the Vandalia Department of Education?

Sounds good. So the example that we've come up with for today's session is the Vandalia Department of Education State Standards Implementation Initiative. This SEA is facilitating a statewide initiative to implement new content standards in English and math. And I think this is timely, given that it is summertime and there are several SEAs who are planning this kind of initiative. In this example we've focused on three components of the state's effort, to highlight some of the takeaways in terms of the types of capacity you might look for and the evidence that those capacities are in place, or that they are having difficulty. The first component is pilot testing of the new state standards. The pilot test took place in 15 schools, and it involved the development of formative assessments and curriculum maps, and it also involved the creation of online PD modules around the implementation of the standards.
And they assembled the Vandalia State Standards Pilot Team, or the VSSPT, which was comprised of district and school leaders who were organized to guide the process and report on the implications for the statewide implementation the following year. Of the 15 schools, we had nine from one large urban district, three from a suburban district, and three from different rural districts. And there were bi-monthly work sessions, the development of the formative assessments and curriculum maps, and the creation of online PD modules. Upon the completion of the pilot, there was evidence concerning three capacities: human, material, and structural. For human capacity, some teachers reported needing more content area PD to teach material more deeply, as required by the standards, and they found that not all teachers knew how to use formative assessments. With material capacity, teachers in some of the pilot schools could not access digital material like the PD modules and the formative assessments. And with structural capacity, the pilot site selection process failed to ensure an adequate representation of district types, and as a result pilot findings were not sufficiently generalizable to all of the districts who would fully implement across the state. The second component was regional service center professional development for principals and teachers; these are the intermediate education agencies that were organized to provide these services. From this, the organizational capacity finding was that Vandalia facilitates regular web meetings for regional service centers to help them prepare to provide PD on the standards. And structurally, some regional centers mainly offered webinar PD, whereas others facilitated PD workshops on site. The third component is the online implementation support. To provide ongoing implementation support, Vandalia established and maintains a website for educators across the state.
In addition to the standards themselves, the site includes instructional materials aligned to the standards, the PD modules consisting of videos, and regularly scheduled video conferences with instructional coaches. The takeaways from this were that, for material capacity, they found that most districts used the digital resources provided by Vandalia, but some rural districts could not access videos or interactive materials because they lacked robust broadband. And organizationally, the site developers did not coordinate with the RSCs to obtain support materials for inclusion on the site.

Thank you very much, T.J. Are there any clarifying questions before we move on to a discussion of Vandalia's foundational capacities and how we can gauge where they are? If you have any clarifying questions, please feel free to speak up or type them in the chat box, but at this point I'm going to turn things over to Keith.

Hey, good afternoon everybody. This is Keith, and I'd like to use the remainder of our time today to hear your thoughts on Vandalia's rollout of the new state standards. To do this I'd like to draw on your experiences, your knowledge, the cumulative wisdom of the people in the room, to identify the strengths and weaknesses of the State Department's efforts to assess its capacity to implement the standards. So first, what I'd like to do is talk a little bit about the individual activities. Maybe we can start with the pilot testing. What do you think the department did well in assessing its capacity with regard to the pilot testing?

That Vandalia had those state standards teams up and working seemed to be a positive thing. There was a structure in place to kind of keep track of how things and people were progressing. So that seemed positive.

I agree. I think that was a real strength. What else?
I also think that it was positive for them to recognize that there were different aspects of that capacity building, like the human part, the structural part, the organizational part. That was good.

Yeah, I agree with that too. What else was really positive about the pilot testing? Folks have also answered some in the chat box. John, for example, said that a positive was that it involved lots of different stakeholders. And Candace says that creating online PD modules to support those pilots was great. Indeed. One more that I might add would be the compilation of results after the pilot period, the report that was provided back to the State Department. What about some areas where Vandalia might improve its capacity assessment with regard to the pilot testing?

The issue there with teachers not feeling equipped to go really deeply into subject matter. There's a gap there.

I agree. What do you think about the choice of pilot sites?

Would you repeat your question please, Keith?

Absolutely. What do you think about the choice of pilot sites?

Well, it seems to be balanced in that they had rural districts. And I think probably I would have hoped they would have realized that the rural districts would have had problems with all of that online information and the fact that they wouldn't be able to access it. You would think they would have known that up front.

You might. You might be surprised. I saw a couple of comments pop up there; I'm trying to see what they say, but they disappear before I can read them. One of the questions I would have is about the representativeness of the pilot sites. Do they really represent the diversity of the state? If I'm not mistaken, quite a few of them came from one large urban school district. Yeah, and John Ross has pointed out there that the results might therefore be kind of skewed in that direction, that the results might be more generalizable to large urban districts than to other kinds of sites.
Anything else on the pilot testing? What did you think about the regional service center PD? What did the VSEA do well in terms of measuring its capacity there?

The monthly meetings would be a positive aspect of this. Twice monthly, via web conferences. So again, people are keeping in touch and able to talk about what they're doing. The conferences with the coaches, does that belong to that part of it? I forget if that was the SEA or the other...

I'm sorry, I didn't quite hear your question.

I thought it was good that they had the follow-up conferences with coaches, but I couldn't remember if the SEA did that or the other organizations or support groups.

Yeah, the SEA was, I think, sort of facilitating all of this activity. What do you think they could have done better with regard to their capacity assessment related to the regional service center activity?

It would have been great if they had coordinated what they were going to present to the LEAs, because it seems from the survey data that there were some variations in the implementation of PD, and in how supported teachers felt in the implementation, depending on the RSC that was leading that PD. And so I think there could have been some better coordination there.

Yes, great. Anything else on the regional service center PD activity?

Was there anything related to quantitative and qualitative information on the training?

Good, I think we're getting into the meaty stuff here, and there might have been a little bit of a downside there. We'll come back to that. There are a couple of comments in the chat box as well. On the positive side, John says it seems like they've given the participants some autonomy to think about and continue their professional development. He said the follow-up survey was a nice touch too. On the opportunity-to-grow side of things, Sarah mentioned a lack of consistency in delivery methods, which I think I heard Candace mention earlier.
Ann notes: is there any relationship between teachers feeling well prepared and receiving professional development via webinar versus workshop? Great comments and questions. What about the online support?

I think that online support is good, if the teachers have the time and access. I think that's where the coordination with the coaches would need to come in, to make sure that teachers knew it was there and how to access it easily, and to work closely with teachers to make that online support accessible and usable for them.

What about in terms of Vandalia's efforts to gauge its capacity with regard to the online support?

There's no evaluation of it at all.

Yeah, right. It would be really hard to know how they're using it, and if they're using it correctly, and if it's having an impact.

Exactly. Yeah, it's hard to say whether or not something's working if you don't have any documentation. Now I'd like to move our discussion on to the more general question about Vandalia's capacity to assess its capacity. I guess the first question would be: what kinds of data does Vandalia probably already have? What are some of the data sources that are already available to Vandalia for documenting capacity to roll out the new state standards?

I think meeting agendas can be a good data source. I think they can really show who's meeting with whom, about what, and how often.

Good, and I would add to that, probably, minutes of the meetings, if there are any.

Right, and maybe just to have an implementation plan, with people responsible for certain parts of that implementation and a way to hold them accountable and to get feedback from them about how it's going.

Agreed. Anything else? Data related to any associated component of the initiative would be useful. Yes. From the chat box: archives of webinars and web statistics. Web statistics, absolutely. Yep.
So this list that we've just generated covers likely data types that Vandalia already has available. What do you think they still need to help them assess their capacity with regard to this bigger initiative of implementing their new state standards?

I would think we need feedback from the actual people who were the recipients of that PD.

Say a little bit more. I get what you mean by feedback, but feedback on what? What kind of questions would you ask?

Do you have the knowledge and skills that you need to implement the Common Core? Do you have the resources that you need? What roadblocks or barriers are there? What do you still need? To me, working with the SEA, the big missing piece in trying to implement something was that they wrote it out and they provided all this information, and then they didn't follow up with the questions to make sure that the people who were supposed to receive that information got it and used it in the right way. And people weren't able to then say, well, I need more of this, I need this, I still don't understand that. There was just no feedback there. To me, that's the biggest missing piece in that capacity part. If you're really going to implement something, you need to have feedback loops in there. I don't think that SEAs do that so much, from what I've seen.

We feel their pain. We know Vandalia is very busy trying to manage this big initiative, and personnel is always a problem. There are a couple of comments from the chat box. John says they need to know who's actually using the standards and implementing them with fidelity. I just suggested participant survey data and recommended website use data and evaluation. Sarah asks: since few schools are using formative assessment data, is it an access issue, a knowledge and skills issue, or something else? And John says the focus is more on use than efficacy, and efficacy is also an important question.
Indeed. If we were to take what Vandalia already has available, what it's already using to assess its capacity, and then we superimposed, we added, these new measures, these new data sources like the feedback and participant surveys, would they have everything they need to really be able to assess their capacity? Or, in other words, would they be fully equipped to improve the state standards rollout initiative? John says they really need that feedback loop that Janine mentioned. Yes, they do.

And I guess the proof of the implementation will be when they get their achievement scores and, you know, those test results come back. But it would be sad to have to wait that long.

So here's another way to ask that same question. Aside from data, what else does Vandalia need to assess its capacity? Is it just a matter of data?

Well, the way you ask that suggests no.

Well then, elaborate for us.

I think that, first of all, they kind of need to conceive of their roadmap. Where are they now and where are they going? What kind of capacity do they need to get where they're going? And I think they need a schema, a plan, an approach to make sense of all their data, so they can roll it together and analyze it to make meaning of it.

Yes, absolutely. And there's one more piece that I would add, at least one more piece, but one piece in particular that I think is really important. Anyone want to take a stab at it? John has asked, what are their indicators of success? Do they have the right data matched to each indicator? Right. So we have the right data. We have the right systems for understanding or making sense of those data. But then there's one more thing that we probably need. And it looks like some more folks are typing, so maybe we have some answers.

Can I ask a leading question, Keith?

I'm going to give it a second here. I'm going to see where we're going. Sarah has written: I think that feedback is necessary, but it's not sufficient.
What precise actions are taken based on that feedback, and how do the actions help to achieve outcomes? Yes, Sarah gets the golden point. Doug Walker has also added: this SEA initiative is built on a collaborative effort involving three partners, the SEA, regional service agencies, and local districts and schools. That is the complexity that our assessment needs to understand and assess. Absolutely. That's a great point. Yes, it really is. So what are some of the barriers that might prevent Vandalia from accessing or gathering those data?

People, time and money.

Time and money. Tell me a little bit more about the people side of that.

Well, you know, Sharon always talked about the head-nodding syndrome, which is kind of what Sarah meant, I think. You can talk to a teacher or someone about something, and oh yes, yes, I know that, I understand that, but do they do it? That's a different story. So getting someone actually into that classroom to see if they really are doing what they need to be doing, and are teaching in the way that the Common Core says they need to be teaching, that's different from them saying, yes, I understand, and oh yes, I'm doing that. And I have a perfect example of that. I did some PD with a district just in December, and I did a self-assessment for teachers on, oh, there were about 50 different behaviors that the Common Core would like to see teachers doing in the classroom. I knew these teachers well, and just to watch them go through this self-assessment was interesting. So I think that people sometimes think they're doing things and they're not really doing them sufficiently, or they understand but they're not doing it. Anyway, I'm just saying I agree with Sarah, and to get people in there to really get that evidence that it is being done right, that would take a lot of people and a lot of time.

Yes, it would, that's right, it really would.
So that's a good setup for the next question here, which is: if Vandalia was using feedback from teachers, say surveys from the pilot sites, let's say interviews with principals, let's say maybe student benchmark tests of some sort, and some of the other data types that we just discussed, which of those would you put sort of as the priority data types for assessing capacity?

Can you list those again real quick?

Yeah, sure. We have feedback from teachers, surveys from the pilot sites, interviews with principals, and if I missed any please chime in, folks. We had web-based data, and then student benchmark tests.

I would ask, benchmarks of what? Benchmarks of students taking benchmark achievement tests, or teacher benchmarks, like in a roadmap that you've made for your implementation: you met this benchmark, that benchmark. What kind of benchmarks?

Why don't you answer that question? What do you think would serve our purposes best for these new state standards?

Well, over time I would say that you would eventually want benchmark test data, but not in the early part, because we don't even know that our benchmarks are going to be appropriately aligned. So I would want to go more at the teacher level, or implementation of what's going on. In terms of feedback, I think of a lot of different kinds of feedback. I want to see videos, I want to see lesson plans, I want to have classroom observations. I don't want people to just reply to a survey.

Okay.

And then you were talking earlier about what people might lack, and I think people sometimes don't have an understanding of all the material capacity available to a teacher, what can be provided from different kinds of services. Often through web-based services there's all kinds of data you can collect about who people are and how they're using them. It's just a matter of the level of security, or the level of access, that you're allowing people when they come in.
So I think you could bump that up and very easily collect a great deal of data about who's using what, and decide how you want to protect it. Good idea. Especially for the online portion of the activity, the online PD modules. Sarah has asked an interesting question: is there a distinction between feedback and evidence? Ooh, that is an interesting question. Who wants to answer it? There is a difference. Is one a kind of the other, or are they two different things? I always think of feedback as self-reported, because feedback is probably a type of evidence, and it has some flaws built into it, but it is some indication of what people are thinking and how they're feeling about the initiative. Yeah, I think I agree. I tend to think of feedback as more affective. It's not always affective, but it has that touchy-feely quality. It's more subjective, but it can be terribly important as a form of evidence, especially if we think about what John just mentioned about having a multitude of data types. I think that was John. When we put feedback into conversation, if you will, into dialogue with other types of data, it can certainly help us interpret those other types of data. Of all of those that we've discussed, which would you think would be the most important for assessing the capacity of Vandalia in terms of its rollout of these new state standards? When you say assessing its capacity, are you talking about the administration, the central office people, or the impact in the schools? That's a great question. I think what we're talking about is the State Department's capacity. Someone in the chat has added that she thinks student benchmark test data might be most important. Yeah, especially as we get down the line. What else do the rest of you think? Someone has written in the chat box that Vandalia could probably assess its capacity to roll out the initiative without student data.
Although that does seem like a long-term outcome that they want to look at. I'll jump in. I would be most interested in some of the implementation feedback from pilot sites, because that's a trial run. I want to know how people interacted with the materials and what they wanted to improve in the PD and that kind of thing. That's right. I think there are a lot of missed opportunities in Vandalia's efforts, and that's where it stems from initially. When you take a look at all of the different activities related to the rollout of the state standards, a lot of the issues go back to what happened during the pilot study, when they had a very skewed sample of folks. The implementation wasn't uniform at all, which isn't always a horrible thing, but when you do an assessment or an evaluation that relies on uniformity, that can sometimes be misleading when we look at a boiled-down set of lessons learned. I know we just have a few minutes left, so I'd like to turn our attention to the last set of questions and talk about some of the implications of data collection and data systems. What are your thoughts about how Vandalia might benchmark its progress over time, for instance in its efforts to run the online support site? In what ways would Vandalia expect to see their data change as they increase their capacity to run the online support site? I would think that if those online support sites had value, then the web statistics would show increases in usage. And Janita said the same thing: more folks would be accessing the site. Okay, so more folks would be accessing the site. That makes a lot of good sense. What else? I might expect to see more contributions from other parts of the system, like the regional service centers, who are supposed to be providing PD.
Right, and I was thinking that if folks find the site really valuable, then they're going to want to start adding things to that site and try sharing ideas across schools. I'm sorry, Janine, we had difficulty hearing you. Oh, I'm sorry. I'm just thinking that if teachers find a site or a tool to be helpful, then they're going to jump on board with it and want to add their own materials and information, use it more, and develop it further. In the interest of time, I'm going to ask one last question. Do these data collection efforts overlook the likely burden on SEA staff, district personnel, and of course schools? For example, if principal interviews are used for each activity at least once a year, and they might average, say, an hour apiece, or at least that's how long mine tend to run, how many interview hours will each principal be expected to participate in? So what are the implications of all of this data collection for the folks who are supposed to be doing the work? One implication would probably be the question: what do you want me not to do so I can do this instead? Perfect question. Really good. Any ideas about how the department might minimize the negative impacts on participants while trying to maximize the positive impacts of assessing capacity? Well, a lot of this is related to teacher evaluation, I would think. If a principal has to do observations as part of the regular teacher evaluation, I would think this could be a natural part of it, and you could maybe tie these two together somehow, so it wouldn't be a huge job but one job. Yeah, borrowing from that idea, and thank you for raising it: maybe using the same data sources for multiple purposes.
A piece they might also want to consider is turnover. At the end of the year, or whatever period you're looking at, how much turnover have you had, and is there a plan for additional training: repeating the training for people who were not there, or for people who may need a second round because it didn't quite take the first time, so let's go back and do it again. It appears to me that a lot of times that's not done when it really needs to be done. Good point, yeah, so sort of optimizing the onboarding process. That's a really good point. And there were probably some people who were on board already, but for whatever reason they didn't take the training. They were sick, they were, you know, whatever. Right. Hey Keith, this is John Ross, and I know we only have about a minute left, but can I ask you a question? The scenario was helpful, but can you then turn it around and say what an SEA should be thinking about when they have a different kind of initiative? How can they be thinking about this evaluating of capacity? Could you repeat the last part of that question? What should they be thinking about when they're going to evaluate their own capacity? They might have an initiative that's different than this one. Can you give them some general advice about that? First of all, I agree with you that having a multiplicity of data types is absolutely important. I wouldn't want to rely on one type of data. I would definitely advocate having as many different types of data as possible. In addition to that, having a really strong data plan is important: how are we going to make sense of these data? And of course, being able to collect information that's actually going to lead to some sort of change that's implementable down the road. This isn't open-ended academic research. We're out of time. I want to thank you all for sharing your thoughts and experiences. This conversation is far from complete, of course.
I hope it was helpful in broaching the topic of assessing capacity building efforts. A big part of it is the question of accessing the right data. Another part is having the right people in place to understand what those data are saying, or trying to say, to you. Another is being realistic about the use of those data to make improvements. And I would add that it's also making sure that the data you're collecting aren't derailing folks from doing what's most important. I'd like to turn it over to you all and ask if you have any questions for the panel. And because we're just about out of time, if you have any lingering questions, please feel free to email the ARC. If you have my email, you're welcome to send it to me, or we have an organizational email, info at ARCCTA.org. So I'm going to thank Keith and T.J. very much for their great presentation, and also for sharing John and Kim Cook's contributions. I will say that this kind of work is part of what the ARC does. We help state departments figure out what they want to do, where they want to go, how to get there, and how to measure things along the way so that they know when they do arrive, or when there's a potential roadblock, and then help them think about how to work around that roadblock so they can ultimately achieve what they hope to achieve. So we'd like to thank you very much for your participation today and invite you to continue connecting with us. This kind of webinar is just the beginning of these kinds of conversations. I will also say that your goals are in many respects our goals, and our services are designed to help you achieve them. Please feel free to visit our website, follow us on Twitter, check out our videos on YouTube, or call or email us anytime. Thank you again, and have a great Tuesday.