 the committee enhancing coordination between land grant colleges and universities. And I would like to welcome everyone to this webinar, which is an open, on-the-record information gathering session for our committee members. The meeting is being recorded, and the recording will be posted on the project website about a week after this meeting. I ask any members of the public to be extremely mindful of the fact that the committee has made no conclusions about anything yet, so please don't leave here today thinking otherwise. Comments that are made by members of the committee should not be interpreted as positions of the committee. In addition, please recognize that committee members typically ask probing questions in these information gathering sessions, but those questions may not be indicative of their personal views. At this time, I'd like the committee members who are on the call today to briefly introduce themselves with name and affiliation. Then I'll introduce our speaker for today, who will make a 45-minute presentation, followed by questions from the committee. If there is time remaining, we might be able to have a question or two from the listening public. So now let's have the committee introduce themselves, and I will do my best to call on each of us. If I miss anyone, please do help me with that. I'll start with Dr. Quarles. Good morning. Ryan Quarles, Kentucky Commissioner of Agriculture, but also a past president of the National Association of State Departments of Agriculture. Good morning. Thank you. Dr. Powers. Good morning. Wendy Powers, University of California, Associate Vice President of Ag and Natural Resources. Thank you. Let's see, Dr. Draper. We're not hearing you, Marty, if you're on; I'm not sure. Let's see, Dr. Kairo. Good morning. Moses Kairo, the School of Agricultural and Natural Sciences, University of Maryland Eastern Shore. Thank you very much. Dr. Yanni. Thank you. Good morning, everyone. 
I'm Steve Yanni, the land grant director at Bay Mills Community College, which is a 1994 land grant institution located in the eastern Upper Peninsula of Michigan. Thank you. Thank you. Jan, if you would, please. Sorry, can you hear me now? Yes. I'm Jan Leach. I'm the research associate dean at Colorado State University for the College of Agriculture. Thank you. Harold, if you would, please. Yep. Hi, I'm Harold Schmidt, senior scholar at the UC Davis Graduate School of Management and general partner in a venture capital group called the March Fund. Great. Thank you. Dr. Powers already introduced herself. Okay, very good. I don't have a good checklist here. Did I miss anyone else? Hi, this is John McNamara from the finally sunny eastern Washington, professor emeritus at Washington State University and past president of the Washington Science Teachers Association. Thank you so much. Anyone else? Yes, good morning. I'm Mike Harrington. I serve as a consultant for this group, and I'm the former executive director of the Western Association of Agricultural Experiment Station Directors. Thank you very much. Did I miss anyone else? So actually, we were looking for just the members of the committee to introduce themselves, and we are missing a couple of people who may join us during the presentation. That includes Ronnie Green from the University of Nebraska, Karen Plaut from Purdue, and Olga Bolden-Tiller, who is from Tuskegee University. They'll join us during this meeting. Oh, one more person who has not yet introduced herself is Dina Chacon-Reitzel from New Mexico State. Thank you so much, Robin, greatly appreciated. At this point, I hope these other members will be joining us throughout the course of this presentation, but we will move on with my introduction of the presenter. Our presenter today is Dr. Jeni Cross, professor in the Department of Sociology at Colorado State University. 
She is a community sociologist conducting research with and for community partners to solve community problems and to improve quality of life. Her research spans many topics, from public health to regenerative regional development, and her TEDx talk on the three myths of behavior change has been viewed over one million times. On the topic on which we've asked her to speak today, her research is on how transdisciplinary teams interact most effectively and how to cultivate effective science teams: the science of team science, if you will. Our committee is interested in this topic as a natural extension of our charge to identify how to enhance collaboration among land grant colleges and universities. On her webpage, Dr. Cross notes that she was born and raised in Fort Collins, Colorado, and that she holds a personal commitment to the land grant mission and strives to do research that enhances the capacity of individuals, groups, and communities to grow and evolve. She holds a BA in sociology and women's studies from Colorado State University, and an MA and PhD from the University of California, Davis. So Dr. Cross, if you would begin, please. If you could enable screen sharing for me, that would be fantastic. Great. Now, it takes me just a second to get Zoom back up and running so that I can see you all while I talk. I hope you can all see my screen; Zoom does weird things where I can't always tell what's going on. I am delighted to be with you all today as a proud alum of two competing land grant universities in the West. As you are probably aware, the field of the science of team science has been growing and evolving over the last 15 or so years; the first conference held to talk about this topic was held in 2006. And in 2019, what I like to call the team science bible was published, Strategies for Team Science Success. That is a really thick edited volume that gets into the nitty gritty of how we support teams. 
So in those intervening 15 years, much has been accomplished to grow and develop this field. I started working in this field in 2015, when the Vice President for Research at my university initiated a program to support and fund emerging transdisciplinary teams. Today I'm going to split my talk into three parts, and I'm going to pause after each one for 10 minutes of discussion and conversation, leaving some time at the end. There's a lot to talk about, so you all can decide if you want to ask questions up front or save them until a little bit later. First, I want to talk about what we have been doing at Colorado State University to develop teams and to assess them. The perspective that I bring to this was really inspired by the Office of the Vice President for Research at our university, which began this program. The OVPR has been a really fantastic partner with me and for the teams as we have been building the program, assessing it, and learning and growing together. I built evaluation and assessment into this project from the very beginning. The OVPR provides two different levels of support for what we call CIP, that's Catalyst for Innovative Partnerships, and pre-CIP. We got started because after the first year we realized that some teams are not ready for a $200,000 investment; they need more time to coalesce and to form. And if we're really about the business of supporting and catalyzing new partnerships, they actually need time, and in their first year they're not really ready for a $100,000 investment. So the program adapted and grew pretty quickly and has been growing since then. We're now in our third cohort of CIP and pre-CIP teams, and those programs are offered every other year. In addition to evaluation and assessment, we also built in facilitation, coaching, and interventions for teams. It's not enough for us to study them from afar. 
We must also be investigating how we support them and grow them. So one of the things that I have been working on is designing, building, and assessing our own interventions. The framework that I use comes out of the field of evaluation, especially research and evaluation for complex initiatives. The framework is called developmental evaluation, and it is designed not to be an objective, scientific evaluation that is separate from the program itself; rather, it's designed to be actively engaged with the system, creating real-time feedback that is intended to accelerate the pace of change and to improve the capacity of all of the members in the system to evolve and grow. This, I think, is the essential task for transdisciplinary teams: we are working with them to expand their capacity to evolve and grow. The components that are shown here on this slide, facilitation, training, interventions, and assessments in the form of surveys, interviews, observations, and network analysis, are all designed both to track how teams are doing, to give them immediate feedback, and to give them specific direction and targeted interventions to grow and develop. A couple of great examples of this: there's a rich body of literature across dozens of fields of social science that has looked at what predicts team performance, team creativity, and team innovation. Just to give you two examples, the Toolbox Initiative, which comes out of the humanities and was started predominantly by philosophers, has uncovered how structured dialogues can help ease tensions and improve the capacity of teams to engage in the production of shared knowledge and to establish shared language across really disparate fields. 
Here's another book; their process was recently published this last year. Sam Kaner, from the field of collaborative and deliberative decision making, has been publishing about the processes by which teams create good solutions and collectively innovate and create new ideas together; that work has been published and used in many fields, from improving the quality of civic conversations, to innovative design teams, to transdisciplinary scientific teams. And so we rely on this published literature about how teams function better, how they learn together, and how they engage in collaborative problem solving to form and create the interventions that we do with teams. Most recently, there's been a big uptick in thinking about the specific competencies and skills that scientific teams need in order to succeed. Just one of those publications, published in 2021 by Lotrecchiano et al., identifies the sets of individual competencies that people need to build, and the competencies that teams need to develop, which can only be developed within the team. We see these as the foundation for all of our training sessions and our educational programming, and we work on building both individual competency and team competency, as you see in this concept map here. We employ and train our folks to be expert facilitators. The research on this shows very clearly that when complex transdisciplinary or transsectoral teams come together, they are able to innovate better and to integrate faster when they are facilitated by third parties. That doesn't mean that every scientific team meeting needs a third-party facilitator, but when we are working on the hard work of scientific knowledge building, creation, and integration across fields. 
It really helps teams to be facilitated by third parties. These are pictures from a retreat that I facilitated earlier this month in the UK with people who are working on building urban green and blue spaces in order to improve public health outcomes. We also have been developing, over the last few years, an assessment of the stages through which teams move, from just getting to know each other to actually having the capacity to really generate knowledge together. So we've built a team readiness assessment based on scientific scales that are used in a variety of fields, informed by thinking about what the tasks and the predictors of team success are, and then assessing how much consensus there is on a team around the variables in this scale. We also are using social network analysis to document how teams are growing and developing over time, using it to give feedback to teams, and using it as a diagnostic to help us understand which teams are succeeding and which teams are struggling. A little point of reference here on the bottom: this is a team you'll see again a little bit later in the presentation. When teams first start working together, we see an average degree, which is the average number of ties per node in the network, that usually starts between one and two. This is really consistent: in public health collaboration networks, in the college classrooms that we study, and in these scientific teams, when teams form, they form with only one or two ties to other folks. As they work together and coalesce as a team, we start to see that average degree move up to five or six or seven. 
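To make the average-degree measure concrete, here is a minimal sketch of how it can be computed from a list of undirected ties. This is illustrative only, not Dr. Cross's actual tooling, and the team members and ties are hypothetical:

```python
# Minimal sketch of the average-degree measure described above.
# The team members and ties here are hypothetical, not real study data.
from collections import defaultdict

def average_degree(ties):
    """Average number of ties per person in an undirected network."""
    degree = defaultdict(int)
    for a, b in ties:
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(degree)

# A newly formed team: a sparse chain where most members have 1-2 ties.
early_team = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
print(average_degree(early_team))  # 8 tie-ends across 5 people -> 1.6
```

Since average degree is 2E/N for an undirected network, reaching an average degree of five or six requires a team that is both large enough and densely connected (a fully connected team of N members tops out at N - 1), which is part of why it takes sustained collaboration to get there.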
There's a limit to how many strong ties any individual person can hold, so it's not an exponential growth curve here, but we have identified what some of the levels are for early teams and for teams that have gelled more and have coalesced. So here is one of the teams that we studied in the CIP program at Colorado State University, from when they first applied for funding in 2017 to the end of their funding in 2020. We are working with a variety of clients across the country. We are working with universities that are creating programs like ours, that are working to invest in teams and asking the question: how, as an institution, can they do better to build and support teams? We are engaged with them, like we were at CSU, in the process of learning and adapting the work that we're doing to their particular context, and to the funding model and organizational structure that works for their university. There is much for us still to learn about how organizations actually best support teams, but in all of these we're using that combination of six things that I just showed you: consultations, social network analysis, other social surveys, facilitation, training, and targeted interventions. We are also doing this with a variety of specific teams that have received large funding from funders like the NSF or, in the UK, the UK PRP, which is a coalition of funders. And then we also are working with several of the CTSA programs that are funded by the National Institutes of Health, which are thinking about how to actually expand the capacity for team science in their translational medicine teams. So this is a little overview of the various people that we have been working with since we started the program at Colorado State. Now I want to just open it up to questions: what questions do you have about what we're doing at our university and with other teams? So I think members of the committee can go ahead and jump in if they have a question. 
I'm just going to ask the question: with every different team that you work with, do you ultimately publish your learnings, and the successes, or how successful your interventions are, how you come up with your interventions, and things like that? We're a little behind on publications; in the next section you'll see what some of our publications are. There are three things I would say. One, with each team that we work with, we talk with them about what they would like to see published from their team, and you'll see some of those in the next set. Two, we're really interested in looking at team development over time, and it takes quite a bit of time to establish a good body of knowledge on each team. And then third, one of the things that we're doing is building a large data set on all of these teams, so that we can do bigger analytics on a larger set of data; that also takes time. As you saw before, what we're building is a data set based on mixed methods: observations, interviews, interactions, social surveys that measure things like psychological safety, and also network analysis. So that's a complex data set to create and form, but we are building it, and you'll see some publications coming up. Hi Jeni, this is Catherine again, and this is really exciting work. My question for you is, how important is it, or how is it evaluated, the convergence or the fit between the original expectations of the team and what the actual outcomes are? I mean, how much does that matter, and how do you assess success in the context of that matchup? That's a great question. 
This is one of the reasons I told our Vice President for Research that he needed to start doing this kind of mixed methods analysis early on, because what he was hoping for were what others call distal outcomes, the ones that happen a long time after the team first forms, whereas we are interested in assessing some of the proximal ones: what does it look like in those early stages? Almost all teams end up doing something a bit different than initially conceptualized. The research on the science of team science, and the research on teaming in general, really shows that the things most predictive of team success, of innovation, of creation of new knowledge, and of integration come from how teams are integrated with each other. It's not fundamentally who is on the team, or specifically what they are working on, but how they work together, and those are things that can really be trained. Harold. Hi, yeah, thanks for this. When I hear the phrase team science, I sort of think of big science versus special operations, that sort of thing. So could you say a little bit more about how you're thinking about this? Is it one umbrella, or do you go down different lanes, from big science to, not small science, that's the wrong word, but the other type, let's say? That's a great question, Harold. I didn't include those slides, but I do have one, and I will add it to the reference list. When we're done here, I'll add a couple of things to the reference list and then send this PDF out to you. We recently published an article in the Encyclopedia of Freshwater Ecology, which is the limnology field. That article is called "Teams, Networks, and Networks of Networks," and it provides a nice typology of the different sizes of teams. In the team literature, a functional team, a work team, is about five people, plus or minus two. 
Once you get beyond seven or eight, your team begins to split up into smaller sub-teams, and scientific teams can struggle, because they have, for the most part, been trained and coached on how to build a functional work team, which is the kind of lab group that probably ranges somewhere between five and 12 people. When we're talking about team science, we're talking about two additional complexities. One is the integration of people from disparate fields with very distinct differences, and the other is the expansion, sometimes very rapid expansion, of teams, which changes how we manage and lead teams and is something that most scientists I work with say they're really frustrated by. When we do training with them, they say, oh my gosh, I needed this so much earlier in my career. So the answer to your question, Harold, is that team science spans the gamut from small teams to super big teams. I am studying teams mostly in the range of a dozen to several dozen, so the super large teams that are over 100 are not the kinds of teams I'm studying. I'm studying the ones that are working together, but most of them span multiple fields and several institutions. Robin, did you have a question? All right, you just moved in my little video stream here. Okay. Thanks for those questions; I think they help set us up for the next section. I'm going to talk about what we have found, and this is going to be a mix of a couple of things. I'm going to show you some specific, concrete findings from individual teams, and I'm going to talk about some of the high-level findings, what we have learned in the process of assessing this program at Colorado State and in working with other teams. The first is this: teams take time to develop. We knew this one year in to the Catalyst program at Colorado State University. That year, we set the goal of trying to understand what we call the ladder of team development. 
We weren't sure what good measures of it were, and we didn't know what the markers of team success looked like. There are frameworks that you might be familiar with that teams go through, forming, storming, norming, and performing, but that didn't quite work for scientific teams, because scientific teams are unique: most scientists are working on multiple teams at a time, most of them are working on teams of varying sizes, and those teams have different life cycles. So scientists are constantly shifting gears between teams in different stages. It's really different than if you're studying a work team or a public health coalition that has formed around a particular project and has a clear timeline. Now, scientists have timelines too, but they're also multitasking across all these phases, and so we wanted to really think about what this ladder looks like. This chapter is published; this group will be especially excited that it appears in the new textbook on food systems modeling that just came out earlier this year. It is the chapter at the back, about how we studied a particular team at Colorado State University, the food systems team, and this is the first place where we have illustrated what we think the team science development ladder looks like. Teams need time to develop, and teams also follow some pretty predictable patterns. Now, I know you probably can't read the details in these items and questions, but in the first one, we see people having agreement and coalescence around excitement, enjoying being on the team, and feeling like they have a unique contribution to bring. Then teams move into the stage where they're thinking about and articulating that the team has a clear vision, that they trust the team, and that they feel their team is supportive. 
After that phase, people move on to having agreement around the fact that they have clear roles, that their expertise is important, and that everyone contributes. So if we go back to this previous slide, you can see this first phase, idea and excitement, and then commitment to the team and initial trust; those are the emerging team phases. The next three things, stages three, four, and five, the things teams have to work on, getting to really know each other's expertise, building a shared vision, and building shared language, are things that teams cycle through and around. They're not particularly linear, but we have found that teams can't work very well on a shared language until they have some coalescence around a shared vision. So I don't want you to think that there's a nice, clean break from one step to the next, but we have found that teams struggle with the higher stages when they haven't done enough work yet at the lower ones, so they might be cycling back down to some lower stages. At the top are clarity of roles and responsibilities, and clarity of team skills and unique strengths. Now, I know some people think, wait, didn't you say understanding expertise is number three? That first one is getting to know your team and understanding what expertise they have. As we work with people, we get to know at a deeper level what their best skills are, and that really takes time: understanding how to bring people in, understanding how to use their perspective, understanding how and when to invite them is different than just knowing what their expertise might be. It takes teams a couple of years to really get up to those top levels, stages six and seven. So when teams first come into the program, they might be only at stage one or two, and then it takes time to progress through this. This teaming readiness assessment helps us see where teams really are and how they're struggling. Number three: we can detect very quickly the teams that are struggling. 
One visit with a team is actually enough to tell us that it's a team in trouble. People often don't fully trust qualitative data, so when everybody says, hey, this team is struggling, people aren't necessarily willing to invest in the commitment that it would take to intervene with them. But here's a sample from one cohort from this project at Colorado State University; these are four teams. You can see that team one has much less agreement and coalescence around these key items, that their contributions are valued, that the team is getting things done, that they feel confident about the goals of the team, versus team four. If we model those as networks, you'll be able to see the difference between these two teams right off the bat. This is a team that was never really able to get off the ground and be successful. When they came into the program, they did not know each other. When we did our team launch activity, which was designed to help move people along on their strategic vision, we couldn't even do the activity, because we had to spend the whole time on team introductions. They had written a proposal virtually, through email, without ever meeting in person, and they didn't know each other, so they hadn't even moved beyond steps one and two at this point. They had gotten a lot of money, but they actually really needed a whole year just to come together. In contrast, team four is the one that received funding and whose progress you saw before, from 2017 to 2020; this is year one for both of those teams. Again, here is that picture of team four and their development from 2017 to 2020. This is a team that formed and grew really organically. It began with three faculty members who worked hard to find each other. 
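The kind of item-level agreement contrast just described, team one versus team four, can be quantified with something as simple as the spread of Likert responses on each survey item. This is a hedged illustration with invented item wording and made-up numbers, not the actual readiness-assessment scoring:

```python
# Hedged illustration: scoring within-team consensus on one readiness item.
# Item wording and responses are invented for the example.
from statistics import mean, pstdev

def item_consensus(responses):
    """Return (mean, spread) for one Likert item; lower spread = more agreement."""
    return mean(responses), pstdev(responses)

cohesive_team = [5, 5, 4, 5, 4]    # e.g., "my contributions are valued"
struggling_team = [5, 2, 4, 1, 3]  # same item, much less consensus

print(item_consensus(cohesive_team))    # high mean, small spread
print(item_consensus(struggling_team))  # lower mean, large spread
```

Comparing the spread across all items in the scale is one simple way to flag, from a single round of surveys, a team whose members disagree about whether the team is working.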
Once they did, though, people were really excited about the topic they were working on, and other faculty came out of the woodwork and emailed them and said, hey, I'm working on similar things in a slightly different field, but I care about those same big issues. And so they went from a three-PI team to an eight-PI team in one year, without any recruitment, just with the university having publicized what the team was up to; people learned about it and said, oh, I want to join this team and make it bigger. This team is still going today with additional members. The other thing that we learned about teams being ready or not ready is that key members can make or break a team. Here's an example of a team that was funded in 2015. They didn't trust each other; they didn't even want to fill out our surveys, because they didn't trust each other enough to tell us who they had worked with, which I thought was a little ridiculous. Then they lost a very high-powered, very prestigious PI, and the second that person left, a young, early-career faculty member took the helm and coalesced this team. A year after that they said, Dr. Cross, we see all these beautiful network pictures of the progress other teams have made; why don't we have them? And I said, because you refused to participate in our study. And they said, well, we want to now. And I said, great, we will document your team progression. So this is a case where losing a hierarchical leader helped a team to grow and progress. We also saw some team regression. This is the same cohort. This was a team that was very highly rated when they first got their funding, a team that for all appearances looked to be highly functioning. And when I got this data back in 2017, I said, what the heck is going on, this cannot be right; how do they have fewer mentoring ties two years in than they did before? 
One quick interview helped me understand that their PI, who was a relationship builder, went on sabbatical for a year and made no plans for who was going to shepherd the team while they were gone. So here's an example of an integrative team member who left, and it caused the whole team to walk backwards. When we think about leaders coming or going, researchers from organizational studies know that leadership change is one of the biggest predictors of team and organizational performance. We absolutely see that on scientific teams, but I want you to know that a leader arriving or leaving is not the predictive thing. What is predictive is whether or not that leader is a shared and collaborative leader. The team that progressed when they lost their leader lost a hierarchical leader and gained a collaborative leader. This team lost a collaborative leader. So it matters that we train people and provide them with skills and abilities in shared leadership. This team had a collaborative leader, but they didn't have enough shared leadership across the whole team to weather one person being gone for a year. Both of these teams had a weakness in their shared leadership ability and capacity. We didn't know that at the time, or we would have done an intervention to help them; we learned this after the fact. But now we know what to look out for. In 2019, some folks in the translational medicine field published an article about the characteristics that we need, and we think that these characteristics of translational scientists apply equally to transdisciplinary scientists. Our PhD programs have for centuries been training people to be rigorous researchers and domain experts; team science demands that we train people with new skills. Those include being boundary crossers, systems thinkers, skilled communicators, process innovators, and, fundamentally, team players. 
Many of those things are counterintuitive to what we are still training people for in our PhD programs, which is to become an independent scientist and a more hierarchical leader. What we're seeing in team science, and across fields, is that new teams are really innovating, and some of the robust transdisciplinary teams are attracting young scholars who want to play these other roles and have some natural skills in them. I'm going to emphasize here: all teams need all of these kinds of members. Even in team science and in large transdisciplinary teams, there is space for the really geeky data analyst who just wants to spend time alone in his office integrating data sets and doing logistics. That person does not need to be at all of the integrative meetings. When we talk about team science and supporting it, we need to support all of these different types of players. Here is the food systems team that I told you we wrote the publication about. This is what they looked like at the end of their funding from the OVPR's program, and on this team we coded the team members based on how many of the roles from the previous slide they were enacting. You can see that the middle of this network is five people, all of whom are using at least three of those integrating roles: skilled communicators, team players, boundary crossers, systems thinkers. You'll notice that they are clustered together in the middle of this network. This is what that team looked like in 2017; this is team five in that first slide. This is what they looked like to begin with. We had a PI in the middle who really looked like an integrator: a strong team player, a strong systems thinker, a skilled communicator, and a boundary crosser, so all of those integration skills. That's who was leading this team. And this team attracted more people like that and grew. 
In 2020, this team is bigger. Now, notice on the periphery out here we have many people who are team players, who are domain experts, and who are rigorous researchers. They play a very important role. This is a food systems team; they are thinking about transforming systems. This team needs people who know and understand community engagement. This team also needs people who understand really complex modeling. And so I want you to look at this and think: the goal is not for everybody to build all seven of these skills and be equally strong in them. The goal is for people in science to find room to be the kind of scientists they want to be. Some of those people are naturally boundary spanners and integrators, and other people naturally want to just focus on their methods and their domain expertise, and teams benefit from having all of them, and from having management and leadership structures that make space for them and integrate them in really intentional ways. So here is just a snapshot of those two teams side by side; again, this is in that same publication, the chapter at the back end of the food systems book. Another finding, Catherine, you were asking about what we have found from teams and what we have published. This is a team that we studied in cohort one. What we discovered with this team is that there are hidden networks in every scientific collaboration. The field of the science of team science first published social network analysis based on big data, on publicly available publication data, and there are some great publications that helped us understand that the ideal team has some mix of past and new members. With this team, we were able to look at their network data over time, and we gathered new data, not just about their publications, but about how they felt about each other, about their psychological safety with each other, about who was mentoring whom, who was doing advising. 
With this team we learned a lot about what measures to use and what measures not to use. Unfortunately, that article is still not published, because nobody's really excited about publishing methods pieces about bad measures. We would like it to be out in the team science literature so you can see what we learned by trial and error and which measures are really good. I can tell you which ones are good, but we think it would be better to publish the whole story, so I won't go into detail here. But this team: this article is published in one of the Nature journals, and my PhD student Dr. Love is the first author. She spent three or four years really heavily integrated with this team, getting to know them, facilitating their team retreats, and getting to know what the team really wanted. What we found is that people's mentoring and advice networks are some of the big drivers of scientific performance and collaboration. Sociology tells us that trust and personal ties are some of the biggest predictors of team performance. In the field of organizational network analysis, they have three measures that they like to use that tell you what's going on in the network. Those three measures are: who do you trust, who do you go have beers with, and who do you seek advice from. And so we tried to model those kinds of questions here, so in addition to who are you publishing papers with and who are you sharing data with, we also asked those other questions about who mentors you and who you get advice from. One of our other big lessons, and there are actually two in this slide: the first lesson is that individuals and teams are trainable. All of these skills, how we get along better, how we learn to really create collaborative knowledge, how we build shared governance systems for teams, all those things are things that can be taught.
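Those three organizational-network questions each yield a directed edge list, and a simple in-degree count then surfaces the people others rely on. A minimal sketch, with made-up names and ties:

```python
# Sketch of the three organizational-network-analysis questions (trust,
# socializing, advice) as directed edge lists; in-degree highlights hubs.
# Names and ties are hypothetical illustrations.
from collections import Counter

advice_ties = [  # (seeker, advisor): "who do you seek advice from?"
    ("ana", "raj"), ("ben", "raj"), ("cho", "raj"), ("raj", "ana"),
]
trust_ties = [("ana", "ben"), ("ben", "ana"), ("cho", "raj")]

def in_degree(ties):
    """Count incoming ties per person, e.g. how many people seek their advice."""
    return Counter(target for _, target in ties)

advice_hubs = in_degree(advice_ties)
print(advice_hubs.most_common(1))  # → [('raj', 3)]
```

Asking the same people the publishing and data-sharing questions gives parallel networks, and comparing them shows where the formal collaboration network and the hidden mentoring and advice networks diverge.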
We're not currently training them in most of our PhD programs, though there are some notable good examples: in some of the NSF training programs people are doing that, and in some of the NIH-funded training programs people are doing co-mentoring. So we're starting to see it appearing, but overall it's not the dominant method. The skills that people need to be successful on teams are really trainable. Unfortunately, our biggest barrier is that our organizations really struggle to support team science, in all of the ways we think about incentives and rewards. I cannot tell you how many people are struggling to fund, for the long term, their really talented researchers who are not tenure-track faculty members and are not what NIH folks call trainees, not graduate students and postdocs. The research enterprise requires clinical study coordinators. It requires people who manage things. One of the teams I just did a shared visioning session with was really struggling, and their faculty were stressed out, and I couldn't figure out why, because when I asked them what was stressing them out, it sounded to me like the activities stressing them were just science, and I said, why is doing your science stressing you out? I'm confused by that. And then finally one of them said, I am being asked to do things that, as a faculty member, I don't have time for, and I don't have anyone in my lab who can do that kind of integration of science. As we talked with that team, we understood why we had three young faculty members who were all stressed out. What we uncovered is that what they needed was a shared postdoc-level kind of person working between the three labs, someone who understood enough about the science across the three labs and could split their time across all three of them in order to mentor and supervise the other postdocs and the graduate students in those labs. So now that you hear that story, you understand
why those three faculty were stressed: because none of them had time for that overarching hub kind of position in the network. So organizations struggle with this. We don't have good funding mechanisms for it, our tenure and promotion doesn't know how to identify it, and we're not really training people for the new variety of scientific skills and jobs that are out there. I will just tell you that individuals and groups are easier to change; organizations and bureaucracies are the hardest and slowest to change. So it's no surprise that when we think about team science, really helping organizations identify what their sticking spots are, and what could be transformative for them, is some of the hard work of supporting and improving team science inside universities. Okay. So that's the end of the findings section. What questions do you have here about what we have learned, both from individual teams and at the high level? Catherine. So where do I start? I'm going to start with an amalgamation of two questions that come to mind. It's really based on the question raised by Norm Scott in the chat, and it's very related to that. So I'm thinking about that postdoc who is hired to serve as a kind of project manager, but at a scientific level, across these three different groups. Now this person isn't necessarily driving their own research project, but rather they're integrating across others. So here are the two questions that emerge from that. One is: is this fair to this early-career person, putting them in a situation where they might then be less competitive for a faculty position down the road? And then, really building on Norm Scott's question, the second question is: how do you start to tease out the individual contributions of people to these various projects? I've got tons more, but I think I'll go with those two right now.
Okay, those are great places to start. Number one, when we ask about fairness: that question goes away if we transform organizations to have stable and dependable funding for all kinds of scientists, not just tenure-track faculty. So I want you to shift the paradigm a little there in what we're looking for. Number two, Gabriele Bammer, and I don't have a picture of her book here but I'll put it into the chat, has written a book called Disciplining Interdisciplinarity, and it outlines what this role of integration specialists is and why they are necessary in science. There are many places and many roles where we need that, and unfortunately we're losing some of the really talented integrators because our universities do not have positions and do not have homes for them. If we want to transform science, we have got to get on top of that issue. That's one, and in the next section I'm going to talk a little bit about how I think we can improve our ability to assess people's contributions, so I'll leave that for the next section. Did I get both of your questions? Catherine. Yes, I think so, and I think Harold has the next question. Okay, Harold. I do. Sorry, I just came off mute; can you hear me? Yeah. So I'll follow Catherine's lead and ask two questions. First question: there's team science in the academic environment and there's team science in the industry environment. Both are authentic and needed and valid, but they're two different team environments. So have you looked at teams that span industry and academia, and government for that matter? Obviously that's crucial in the ag sector and with the land grants. So that's question number one. And then question number two is... what's the question? Are you asking me what's different, academic versus...? Yeah, the question is, have you looked at team industry, team academia, and team government from a science perspective?
And the corollary to that is: one, are those three team environments different, and two, have you looked at how they actually interact? Those parameters that you're measuring on good teams and not-so-good teams, how would those parameters, how would the spider plot, change in that interaction sort of space? And then the other question is, and it was cool that you brought up the data science geek, I'm always going to ask a data question here, because I think that data science and data analytics and the AI environment are profoundly fundamental to actually enhancing collaboration amongst all kinds of teams, including land grants. In my experience, the data science element can serve to achieve some of the behaviors that you're actually hoping team members will exhibit; it's actually harder to exhibit them unless there is a democratized, high-quality data platform that underpins it. So I'm not trying to lead the witness or anything like that, but in my experience, actually having a good data science architecture and objectives is fundamental to driving the right behaviors to begin with. And I'm just curious if you've looked at that or not. Harold, I am only one person. I have teams, I have many teams, but I don't have enough teams to study all those questions. But I do want to tell you that yesterday I was facilitating a cohort of people at a team science and integrated ecology summer network, and we were doing some of the basic education on team competencies. One of the things we asked them was: what are you missing from your team collaboration plan and charter? And two of the teams said, what we are missing is a data and code sharing agreement, a platform, and an open science kind of strategy. So I was delighted to hear that, and I totally agree with you that that is essential.
Now, I want to go back to the question about academia versus industry. A couple of things. I know I've already said this, but to reiterate: teams in academia are struggling, and they are struggling hard, because they are missing the infrastructure and structures they need to succeed. One of the missing pieces is integrative platforms. It's not just open science platforms that they need; it is also collaboration platforms. All of the digital collaboration platforms that exist, and maybe I shouldn't essentialize, but most of them are built on business cases that are designed for industry. They are designed around the idea that you're going to sell this big package of tools, the tools are going to integrate with each other, and they're all inside an organization that can manage governance and permissions. Really good teams are crossing over organizations, and almost none of our platforms are really designed to help people collaborate across institutional boundaries. I'm going to talk about that more in the next section, but I have to emphasize that the larger teams get, and the more dispersed they are, the more important it is that they have many collaboration and coordination platforms, not a single one. That's what the science of team science says: the bigger they are, the more they need it, and the more successful they are when they have it. And the bigger and more complex they are, the harder it is to find any constellation of tools that fits the team's needs. I spend so much time with teams frustrated over this issue, and I myself have been struggling with it.
Since I got my PhD in 2001, when I worked with two desktops all the time, my desktop machine plus a virtual desktop running through Citrix because my university couldn't figure out firewalls for my shared drives across two different colleges, I have watched teams struggle with those kinds of issues, and I still see it. So getting back to Harold's question about academia versus industry, and whether they are different: one of the ways they are different is that our assumptions about what we're doing and why we're doing it are different, and our structures for supporting it are also different. Teams interact across that space. Academic teams, in addition to not having that support, don't have funded lines for project managers, and they're always trying to shoehorn postdocs, unfairly, inappropriately, and not well matched with their skills, into that kind of job, because it's temporary and only funded by a grant. And academics are hoarders. They are hoarders of projects and exciting ideas and teams. Every academic I know has a hoarding problem: they have data that is not published, is not shared, and is not out there, but they still say yes to new collaborations. In industry, people do not have that luxury to be free agents and say, yes, I'll do this; they have some constraints. So academic teams are constrained by things that are not helping them, and they are missing some constraints that would help them. I personally am not studying both industry and academia, but I do want to tell you about the teams that we have been studying. The most successful teams are what NSF would call convergence teams: they are made up of academics, they are thinking genuinely about community issues and problems and trying to solve them, and they have partners either from industry, or from local government, or from nonprofits and other kinds of community organizations.
And when we looked at what the most effective teams are, we found that they had more women in them, they had a handful of leaders who really understood and practiced collaborative leadership, and they really were working on genuine community-informed solutions. That team you saw, team four, that I showed you a couple of different slides on, is a team that formed in response to a community question. That's why the researchers had a hard time finding each other: the researchers didn't cook up that question. It was one researcher who heeded the call from a community partner who said, can you study us, and will you? And they had to hunt around campus to find people who wanted to participate. Then more people said, oh, we want to be part of this larger effort. So those teams that are not just creating their ivory tower questions are some of the most successful ones. Catherine, you have another question? Actually, what I'm going to do is vocalize Dr. Draper's question; he's a committee member but he doesn't have a microphone. His question is: I'm curious how you build relationship and trust when, in the middle of the project, you realize there's not the trust you thought you had. Dr. Draper, this is so hard. Some teams need interventions where we know we're going to move them from one level to the next, but other teams, what they need is group therapy and conflict resolution, and I've seen a couple of teams that I watched just spiraling south, and I personally didn't have the knowledge or skills to help them. I think that if we work to develop teams in the right way early on, we can build that trust, and we can end up with fewer teams that hit that crisis point midway through and discover that they really don't have the trust that they need.
A lesson from the seed program, and this is true here and also true with some other universities that I work with: little amounts of funding, so $5,000 to $20,000 or $25,000, that seed teams to coalesce around an idea. If you fund 20 teams, at the end of the year you'll probably only have five left that say, we like each other enough, we really want to keep going. But if you fund 20 teams with $10,000 each and you get five that are ready, now you've got five that are ready for $200,000, and those five that you invest in are going to be successful; you're not going to end up with a team like team one that's a failure to launch. So some of it, I think, is changing the way that we support and fund teams and really providing that funding. The NSF has been really good about this recently: they're providing these $100,000 or $150,000 capacity-building grants that run for nine months or 18 months, and they're designed to help teams coalesce, and I have seen teams get those and then realize that they're not going to be successful. So I think the answer there is small investments to build the right teams, and to not be upset when teams that are not the right group or not the right topic fall apart; we should expect some dissolution. I'm going to move to the next section: new directions. Where are we going, where am I going, where is the field going, where do institutions need to go? This was published on Gabriele Bammer's blog. I mentioned her book Disciplining Interdisciplinarity; she writes and shepherds and curates this fantastic blog, i2Insights, Integration and Implementation Insights. A recent article from 2020 looks at the topics in the science of team science. Now, I know you probably can't see on your tiny screen that the top blue wave is evaluation and assessment. From 2010 through 2018 we see evaluation and assessment of teams running on top. Now I have to tell you, as a scientist who is funded by NIH and NSF and is supporting teams funded by them and others,
I'm pretty frustrated with the way they handle evaluation and assessment. I started my career as a community sociologist doing public health interventions, and the US Department of Agriculture and the US Department of Education and some others are much better about funding people to do work on the ground and setting aside an appropriate amount of evaluation dollars associated with it. In science, people don't quite know what evaluation of scientific teams is, and the big funding agencies don't quite know how to support the evaluation of it, so I struggle with teams in figuring out the budget puzzle so that we can do good evaluation and do what I talked about in my presentation on developmental evaluation. So that's number one. Number two, that orange line on team science training that started down near the bottom has now risen to the top, because we understand that individuals and teams can be trained. This past summer I co-authored a special issue with several scientists from the translational science field in the Journal of Clinical and Translational Science; Betsy Welland is the lead editor on that issue. It's the first special issue on team science interventions, so we are just starting to document the interventions that can really make a difference. I'm still adding to my list of what needs to go in the references. So this is what we're doing. You can see that what we've been doing for the last five years is evaluation and assessment and training, as well as facilitation. What has taken a dive is bibliometrics. Bibliometrics was exciting at first because it's existing and available data, but understanding complex things actually requires mixed methods and longitudinal data collection, and that data collection is hard to do and takes time. It takes time to get results, and then it takes even more time to get published.
One more thing we are seeing now. This article is not making this argument; it is a fantastic article from Okamura in 2019 about integration and team science and its relevance and importance. I picked this graphic because it's a nice, good-looking graphic that shows the stream of connections across fields. What we are seeing at universities, related to the training topic from that last slide, is that people who are working in core facilities and in departments, and the people who are integration specialists, are saying: scientists need more and more training from other fields. So many students on my campus need a course on Python, but they can't get it. Students across my campus need a course in R and in doing social network analysis, and it does not exist. Academia is ridiculously slow, slower in fact than molasses, in changing its curriculum to match the needs. Now, we've seen massive growth in recent decades in online for-profit master's degrees in data analytics. That brings money; it's a money generator. It brings money into departments, and it's helping to meet an industry need. It is not addressing the need on campuses for people to get cross-training from other fields, whether that's training from me in how to run good interviews or something else. One of the guys I worked with, on an engineering dissertation recently, was really interested in people's ability and willingness to adopt a new innovative technology, a heat-transfer kind of technology. He and his engineering professor are working on these heat-transfer technologies and were going to go to the big industry players, but he also wants to understand what the barriers are and why they are not getting adopted. Those barriers are social: they are individual lack of knowledge, they are group norms, they are industry problems. Those are social science problems. He said, hey, Jenny, I need to do good interviews, can you train me? And the right fit is not for me to say to him,
yes, we offer a two-semester-long sequence in qualitative data collection and analysis. That's not what people need. Universities need to learn to adapt faster, and they need smaller units of education that bridge people across fields. Now we're starting to see this in the NSF NRTs and other places that are funding PhD students to get cross-training, and still it is not enough. Here's what I promised to talk about, Harold: the big fat band-aid. Scientists are cobbling together whatever weird mishmash of stuff they can find that will get their team communication out of email and into a platform that can be archived. All the good ones are not in here; I just stole this graphic from the internet. Scientists need data platforms for democratizing data, as Harold discussed. They also need communication platforms that match what we know makes for effective team science, which is speedy, asynchronous conversation among small teams, plus archiving of our longer stories and history with each other. Every team I talk to, when I tell them they need to find a set of communication tools, says: please, Dr. Cross, do not make me learn another one, because I have data on Box and Dropbox and on Teams; I have data on five different universities' Teams networks, which all require me to sign in and out of every single one of them; I can't keep track of where anything is. But I have created a nice intervention for teams on how to have a really good conversation about this and to try to pick the best of a bad scenario. I'm not super satisfied with it, but I do feel like I'm helping teams do better than they do when they don't have those conversations. So this is a challenge. What does problem-solving science look like? NSF has said that we should be engaged in convergence research, which is really addressing community problems.
What we're trying to do is speed up the translation from bench to bedside, which is getting science into people's hands and work faster. Sociologists have identified a new way to think about what public sociology is; they're calling it problem-solving sociology. This is not applied science, and it's not devoid of social theory; rather, it begins with the really big intractable problems and asks, how do our theories and our tools help us? The earth, the world, the human population require us as scientists to be thinking about this paradigm shift, and team science is a part of that. We are still fleshing out how to do it well. Some of the things that we need: Harold, this is for you; I stole this graphic from a blog because I don't have a good one. People, and I do not just mean scientists, I mean people, need data visualization platforms that allow them to interact with the data, that are flexible, and that answer their problems. I am a sociologist who studies social change, and for a decade I have been fed up with people who say, oh, Jenny, I have exactly the thing that's going to fix your behavior change problem: we have all this data, it's in this dashboard, and that's all you need in order to change people's behavior. And I say, you are wrong. People do not need information. What people need is an answer to their specific question. They need that data delivered in the place where they're trying to take action, and they need it to be easy. There is a wide gap between the questions and actions that people want to take and their ability to analyze this complex data and get the kind of answer that they want. And we are not investing enough in the infrastructure that it takes to build these kinds of integrative, interactive, flexible, responsive platforms. We might get money from NSF to build one, but how do we maintain it for the long term for the common good? So here is one really specific applied example.
In Denver, you have the city of Denver, which is also the county of Denver, and you also have their public housing authority. They have developed several different low-income housing communities with different sets of funding from different people, with the goal of transforming urban space to be more livable, to provide more urban agriculture, to provide more green space, to improve the quality of health of everyone who lives there, and to increase their access to jobs. The metrics that they are expected to deliver, in response to the millions of dollars of funds that they got to do that, come from several different frameworks. There are the WHO sustainability metrics, there are the ones that HUD created, there are others. So there are several sets of sustainability metrics, and no one at the Denver housing authority or the city and county of Denver wants to spend their time figuring out how to do that cross-match, or how to collect that original data and mush it all into one place and spit out a report with these seven different elements for that development and these eight for this one, with only two in common. This data integration and exploration and question answering is something where the need has grown beyond the capacity of individual actors, whether that's the Denver housing authority or individual scientists. But our ideas about how we build those platforms and democratize data, put good quality data in, and have the right kinds of controls and protections that we need, we're still struggling with that. We have little examples here and there, and every time we get traction on one, we see it disappear and the funding go away, or we see that the university that started working on it doesn't have the funding to keep it going. So on this issue of democratizing data and having platforms that actually empower knowledge development and grow capacity, I think we have a long way to go.
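The cross-matching task described for the Denver example, pulling one unified report out of several overlapping metric frameworks, could be sketched with a crosswalk table. The framework names, indicator keys, and values here are entirely hypothetical; a real crosswalk would come from the frameworks' own documentation.

```python
# Sketch of cross-matching indicators from several sustainability frameworks
# into one unified record per development. All names and keys are invented
# for illustration, not drawn from the actual WHO or HUD metrics.

crosswalk = {  # shared concept -> each framework's name for it
    "green_space_pct": {"frameworkA": "open_space", "frameworkB": "GS-01"},
    "job_access":      {"frameworkA": "emp_access", "frameworkB": "EC-04"},
}

def unify(reported):
    """Pull each shared concept from whichever framework reported it first."""
    out = {}
    for concept, aliases in crosswalk.items():
        for framework, key in aliases.items():
            if key in reported.get(framework, {}):
                out[concept] = reported[framework][key]
                break
    return out

row = unify({"frameworkA": {"open_space": 34.5},
             "frameworkB": {"EC-04": 0.62}})
print(row)  # → {'green_space_pct': 34.5, 'job_access': 0.62}
```

The hard part in practice is building and maintaining the crosswalk itself, which is exactly the integration work the talk says no individual actor has the capacity to do.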
And the social sciences have been dramatically underinvested in for decades. To quote one of my favorite engineers, who spent a few years learning about social science with me: every problem, whether it's an urban green infrastructure problem, a natural environment conservation problem, or a housing problem, requires not just technical specialists and scientists but also social scientists from many fields. But so often we don't know who the social scientists are that we need, and there's not been funding for them. We absolutely are starting to see that change, but if we are going to become problem-solving scientific teams, we need to invest in the kind of hard, original data collection that is required to look at the different situations and the interventions that actually help us overcome those various barriers. What can help universities change? What we're working on is helping to combine data types, so it's not just the bibliometric data that has taken a dive in the team science literature, but mixed methods. We want platforms that are interactive, where different users can answer different questions out of the same platform, so it's not a platform for one user. We're working on changing our metrics and improving them to be more predictive of team performance, so that we can reduce the amount of original data collection that we're doing with each team we work with. And the science of team science community is working on figuring out what convergence really is. For scientific convergence and collaboration, how do we see the early signs of it? The science of measuring it and detecting it is still in early stages, and I'm working on a couple of publications with two different research groups on that topic.
One of my students, a data informatics person, worked on this: combining publication data at the University of Colorado Anschutz Medical Campus with their HR data, so that we can see people's roles and answer the question: what is the impact of PhD students in contributing to integration in science? Because we know that there are important people besides PIs who are really building these bridges, those kinds of people in the middle of the network that we showed. So we built this interactive tool. This was not my idea; I had the question, and the data geeks are the ones who said, let's build an interactive tool. What we found pretty quickly is that we could create snapshots. This is a network centered around a single person, and we can actually see how that person has contributed to this network over time and how the network has grown. In this little pet project, I want us to have good team science metrics that allow us to identify the informatics folks who do not, and probably will not ever, have their own research agenda, but who are key engines of integration in science. So Karen identified three types of team scientists that we can look at, with three different kinds of network profiles, and we're working on building out a tool like this that can be adopted by universities. It would merge the kind of qualitative data I collect, publication data, grant data, outcome data related to team training, and other kinds of outputs that scientific teams are interested in. We're working on a couple of little betas of these in different places, one for a really big team and one at an institution. That concludes my comments, so I think we have another 10 minutes for questions; I think there are a couple in the chat. This is John McNamara. I want to thank you so much for this. I said in the chat, I think I have lived through every one of your examples over the last 20 years. Seriously, I wish I had had this before.
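The person-centered snapshots described above are ego networks built per time slice. A minimal sketch, with invented co-authorship records standing in for the merged publication and HR data:

```python
# Sketch of a person-centered ("ego") network snapshot by year, the kind of
# view described for tracking how one person's collaborations grow.
# Author labels and years are illustrative, not real data.

coauthorships = [  # (author, coauthor, year)
    ("ego", "p1", 2016), ("ego", "p2", 2016),
    ("ego", "p3", 2018), ("ego", "p4", 2020), ("p1", "p4", 2020),
]

def ego_alters(edges, ego, upto_year):
    """People directly tied to `ego` in any year up to `upto_year`."""
    alters = set()
    for a, b, year in edges:
        if year <= upto_year and ego in (a, b):
            alters.add(b if a == ego else a)
    return alters

print(sorted(ego_alters(coauthorships, "ego", 2016)))  # → ['p1', 'p2']
print(len(ego_alters(coauthorships, "ego", 2020)))     # → 4
```

Comparing alter counts and which fields or units the alters come from across years is one simple way to profile integrators versus single-domain collaborators.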
I haven't seen anybody peg the problem more clearly than you have. And for the folks on the board, if you remember back in January, February, March, when we saw the first tentative draft from the working group, it was basically a list of problems with the university structure, and we talked about that. You have hit the nail on the head on every single one of those. We have great examples of teams working in all sorts of science, ag science, at different land grants. And I think, moving forward, the kinds of things that you've talked about should get more directly worked into the report. I think the last draft of the report I saw did speak to these issues of how we have to change, but I think we need to be a little more direct and in-depth on the things that you brought up. And I think it fits in with what's going on at USDA and NIFA right now, where they're addressing these problems from the funding side, how to do more of these complex systems pieces. So thanks for that. Thanks for doing this. Thanks, John. Harold had another question, and then Mike. Let me quickly address Norman's question in the chat. Norman said that the teams don't appear to have been shown to include stakeholders. Actually, they do. I probably talked more about them as scientists, but many of the teams we're studying do have community partners of all types, and we are tracking them and studying them, and as I said previously, some of our best teams really have strong integration with community partners. So thanks for that comment. Okay, Mike. It's Harold, real quick. I just wanted to thank you and echo that. And then an observation as you were going through this, and this gets to the industry-academia crossover opportunity.
While classic industry science seems to have these issues, the things you mentioned with systems and whatnot, from the venture capital perspective, like the startup phase, as you were describing the academic environment and the patterns that were important, I was seeing a lot of similarities, actually. So it's interesting. Obviously, startups are in the industry space; however, I think their patterns more resemble the team science piece from the academic side that you were describing, and that could be a really interesting interface area to start to integrate some of this thinking across academia and industry. And I also sent a note to Alan Rudolph saying you were giving him a shout-out. Great job.

Thanks, Harold. Okay, Mike, and then Catherine again.

Yes, thank you. I really enjoyed this, Jenny. And you may or may not know I'm here in Fort Collins. A couple of things occurred to me. You hit on a number of issues that are characteristics of successful, high-performing teams. In my role as executive director of the Western Association, well, as a former executive director, I went out to my old ED colleagues, both in extension and in the experiment stations, and asked, what are the characteristics of the high-performing teams that you could tease out? And you hit on all of them. One of the questions that occurs to me is that it might be worth looking at one or more of these high-performing teams in the multistate research program. There are some of these that have well over $100 million annually in funding. My wife works for Alan, so I'm familiar with this whole project to a certain extent, but the folks in the multistate program don't get much in the way of actual resources from their home institutions.
They get some travel funds; they may get funding that's hidden from them in the form of a partial technician or graduate student. But they really go out and hustle. So I think it might be useful to look at one or more of those also, and I'm curious as to what they look like given your assessment tools.

Yeah, I think those are really interesting things to think about and consider. There are a lot more teams that we could be looking at, really thinking about how do we learn from them, how do we transfer knowledge from one kind of sphere to another, how do we adopt better governance models for these kinds of integrative teams, and also how do we shepherd resources that help us fund the variety of folks we need to make that successful. Thanks, Mike. Catherine.

So I'm just going to summarize a couple of the key learnings that I heard from you and make sure I heard them correctly. And by the way, this has been a great presentation, so thank you so much. My distillation is that we need to look at the structure of rewards, incentives, and funding throughout the academic setting, and that we really need to start with thinking about the ways in which we're preparing our undergraduate students educationally: whether we're able to offer things like certificate opportunities for certain types of skill sets, and short-term strategies to pick up very specific tools and even vocabulary. To some extent, even if you don't have the tools, at least you can have the language, so that you can integrate and help to solve problems by finding the right people. Did I distill that correctly?
Yes. I would just clarify a little bit more by saying that universities actually need to reorganize themselves, because there's a lot of turfiness going on: well, we don't want to offer those service courses, right? We've organized academic units to offer training within their own programs, and we need them to offer training in different bite sizes for others. So I think it's not just, are we offering certificate programs, but also how we organize and think about what teaching looks like for various folks.

Terrific, that's very helpful, thank you.

Yeah. Tom said in the chat that he found, through the Specialty Crop Research Initiative, that some PhD-level scientists are not interested in faculty roles and really enjoy project management. Absolutely. This is a growing space: people who want to be research coordinators, research staff, and in some ways integration specialists, like the postdoc we talked about who had the intellectual experience and the research experience to coordinate across labs and the interest to be an entrepreneur. We need more people like that, and we need career tracks and opportunities for them. And we're at the perfect constellation right now, right? We know that the academic PhD job market is flooded. Lots of people who are super talented researchers and interested in research don't want to go into academia because of the ways that it's not working. We need jobs for them that are not tenure-track faculty positions. When I was a graduate student in 1998, I tried to convince the dean of the graduate school to let me develop a website to look at all the alternative careers that PhDs from our university were pursuing, and they told me that this was, quote, a "dumb idea" that didn't need to be done. So I got no support for that. But if you look at academic Twitter, you will see lots and lots of people leaving academia for industry and lots of people talking about the nontraditional things that they're doing.
I don't think it was a dumb idea in 1998, and I think it's even more necessary now for us to be looking at and thinking about the many ways that the world needs really talented PhDs to be doing a whole variety of different types of work.

The next question asks, how are you tracking diversity in team assessments? I can't believe I didn't include a slide about that. There are some great people thinking about this; it's really important. Some folks in Spain helped us expand the way we think about what those diversity measures are. We're using the traditional demographic ones, but there are two additional ones that I think are really important that we have added to our assessments. One is having caretaking responsibilities. We know that women are burdened by this, not just in taking care of children but also in taking care of older adults. I myself experienced that during my tenure period, taking care of a dying parent and a toddler. They also asked us to think a little bit more about career stability and whether or not people are on secure funding, and I've already talked about that issue. So we do have a set of measures, and if anybody wants to email me or talk to me about that, we can discuss it. Those measures are described in another article that's currently under review.

Well, I think we're at time. Thanks for letting me keep you one minute over. I really appreciate this conversation, and I'm really delighted to be here. I already gave a shout-out to Alan, but the reason we have this robust data and knowledge is because Alan was willing to talk to us when I told him that he couldn't invest $200,000 in teams and not evaluate it. He took me seriously and said, let's do something with it, and it's turned into a really important endeavor here at CSU that is impacting other institutions.

Well, thank you so much, Dr. Cross, and thank you to everyone who has joined us today for this really fascinating presentation and conversation. It is greatly appreciated.
Thank you all. All right, bye. Bye. Thanks, Jenny. Thank you. Robin, should I send a... Robin left already.