Thanks very much. Glad the mic's off for the question and answer. And it's timely to be talking briefly about global change, because today is the first day of the COP21 meeting in Paris, where maybe a little bit might happen. We can only cross our fingers. About six years ago we pulled together a lot of researchers on the Berkeley campus, because there's an enormous history of research covering global change biology, but there was no umbrella, and a lot of it spans several departments: Integrative Biology; Environmental Science, Policy, and Management; the Energy and Resources Group; Geography. Like many things on this campus, like BIDS, the classic Berkeley problem is: how do you get these people to put their heads together and make the whole greater than the sum of the parts? We did two things right off the bat, and one of them was funded by the Moore Foundation, which is of particular interest given their investment here. This was really blue-sky research. We managed to sell them a package of interdisciplinary projects that would bring together researchers who wouldn't normally work together, across departments and across labs, from paleontology to Native American resource management, fire ecology, the isotopes in the pollen on the legs of bees: really cool stuff. These projects played out over the last five years. They're mostly winding down, but they set the stage for a lot of novel interdisciplinary research at Berkeley. The second thing that was funded at that time was the Moore data sciences project, so I'll give you a really quick overview. I didn't know Michelle was here, which is a little nerve-racking because it's her project and I'm talking about it, but it also means she can answer the questions. This is really an effort at bringing together very disparate data sources; the data problem in biology broadly is really one of the variety of data sources.
In this case we have enormous historical data from museums, and this is one of Berkeley's great strengths. We have incredible natural history museums. These are the libraries of the natural sciences, which is to say physical specimens, just like the library has the books. They hold plants and insects and animals that have been collected for a hundred-plus years: millions of specimens, an incredible record. How do you tap into that and make it available for research? The University of California also has incredible field stations: 40 natural reserves, plus agricultural and forest reserves, the largest natural reserve research system of any university in the world. There's tons of research going on, but a lot of the data just ends up in individual researchers' projects or in filing cabinets: disparate data sets, again a really wide diversity. How do you pull it together? We have a history of things like pollen cores: historical data on the vegetation of California, based on coring the bottom of a lake, pulling up the sediment, and counting pollen to look at how the tree species have changed over time. That's largely the work of one researcher who's near retirement; if it's not captured now, a lot of it might never be captured, as it goes off into the filing cabinets of someone who retires. So there are a lot of legacy problems in data sciences. As for other historical data, back in the twenties there was an incredible effort to map the vegetation of California. We have these historical maps covering most of the state, we have plots, we have photos, and a lot of work has already gone into digitizing them. And then of course there's tons of publicly available data. Not all of the data sources I've just talked about are by definition California-centric, but UC has an incredible footprint and tradition of research in California.
So a lot of the challenge is: can you bring more and more of these data sets into a common spatial environment where they're searchable, queryable, comparable? That led to building this project: the EcoEngine, which was the project name for the back end, with Holos as the front-end interface. Again, I'll leave it to Michelle afterwards if people want to get into the nuts and bolts; it's a remarkable product. We're not the only ones in the space of trying to visualize climate, natural history, and ecological and evolutionary data, but this is quite a novel effort. What I'll say is that these are very heterogeneous data sources, and the project is really trying to make the interface as simple as possible in terms of querying and subsetting and bringing things forward, without users having to set up very complex queries to visualize. I won't go through the visualization tools; just quickly, to put it in context: these are very disparate data sets, with a lot of challenges of data structures and ontologies. Each data set on its own has traditions of how it's handled, but very few attempts have been made to actually have them work through a shared interface, so there are a lot of heterogeneous-data problems. That's all coming through building the engine and then having several front ends: a web portal, but also an API, and rOpenSci, with Karthik and Carl and others, have already built an R package which queries it directly, to really open up the research side. So, there's my segue. rOpenSci has built an ecoinformatics package which can, in a few lines, as is typical of these things, query the data set and visualize the data, which really opens things up for researchers in a way that's very different from the exploratory tools of the web interface.
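To make "a few lines to query the data set" concrete, here is a minimal sketch in Python of the kind of programmatic access the API and R package provide. The endpoint path, parameter names, and record fields below are illustrative assumptions, not the documented EcoEngine schema, and the response is mocked locally rather than fetched over the network.

```python
# Sketch of a programmatic query against a spatial-data API like the
# EcoEngine. The resource path, parameters, and record fields are
# illustrative assumptions, not the real API schema.
from urllib.parse import urlencode

BASE = "https://ecoengine.berkeley.edu/api"  # base URL; paths below are hypothetical

def build_query(resource, **params):
    """Compose a query URL for one resource (e.g. observations)."""
    return f"{BASE}/{resource}/?{urlencode(sorted(params.items()))}"

def summarize(records):
    """Count records per species from a decoded JSON result list."""
    counts = {}
    for rec in records:
        counts[rec["species"]] = counts.get(rec["species"], 0) + 1
    return counts

url = build_query("observations", q="Lynx rufus", page_size=50)

# Mocked records, standing in for a real requests.get(url).json() call:
sample = [
    {"species": "Lynx rufus", "lat": 37.87, "lon": -122.25},
    {"species": "Lynx rufus", "lat": 38.01, "lon": -122.80},
]
print(url)
print(summarize(sample))
```

The point is simply that once the heterogeneous sources sit behind one API, a researcher's whole workflow, from query to summary, can fit in a handful of lines like these.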
Another data source that comes from the field stations, where again queryability really matters for research, is long-term environmental data from weather stations and research stations, so you're getting high temporal resolution. Just try to imagine it: some of the data sets are sampled every five minutes, and some have specimens from a hundred years ago. There's just this incredible range of spatial and temporal scale, and trying to bring it into one environment has really been a challenge, and it opens up some really exciting new future directions. So those are the two big projects of the initiative in global change biology. We've also put on workshops and helped to facilitate student interactions. Our goal is to be an umbrella under which interdisciplinary research in global change biology moves forward at Berkeley, and helping people write interdisciplinary grants is a big part of it. As an individual researcher it's like, okay, I'm going to write my NSF proposal this year, and then NSF puts out a call for something more interdisciplinary or broader, and it's, oh, I don't really have the energy or the time to do that. With staff support, with infrastructure support, you can get these institutional grants written that bring together researchers where no one researcher could really do the lift to write them. That's what BIDS can do, it's what BiGCB has done, and it's really what it takes to step up our research effort beyond the level of individual labs. In that vein, one of the things I did was to write a grant to the NSF Research Traineeship program. Some of you may be familiar with IGERT, the Integrative Graduate Education and Research Traineeship program, which ran for 20 years or so: a couple hundred programs, and thousands and thousands of PhD students who went through them getting interdisciplinary training.
That was shut down, and NSF rebranded it; they wanted to shuffle some things up. NSF Research Traineeships are trying to catalyze interdisciplinary training, trying to break down the barriers between departments, a longstanding challenge in graduate education. They are specifically trying to target more preparation for non-academic jobs: the recognition that most people who get PhDs will not end up with faculty jobs, and that most faculty who advise those students don't know how to mentor them for anything except getting a faculty job. So they're trying to break down this problem a little bit, of where people will end up in their careers. NSF identified themes, and one of them was what they called data-enabled science and engineering, DESI. Of all the data sciences acronyms at NSF, one of them is DESI. They've had two cycles, and about 12 programs nationwide have been funded under this data-enabled theme, out of 18 funded total under the program. These are $3 million, five-year grants designed to reach 40 to 80 students, something on that order, for some portion of their careers. We defined our program around two challenges and their synthesis. One is the challenge of data sciences, well known to this crowd. The other is the challenge being faced in Paris: how does society respond to very rapid environmental change? A lot of the paradigms of the social sciences and natural sciences work well if the world is not changing much. And the world is changing fast. A lot of our basic paradigms for how we approach problems have to be rethought: paradigms of conservation, natural resource management, economic development, any number of things.
And we wanted to bring these together with a focus on thinking about design and solutions, not just problems, not just crying wolf about how climate change is terrible and species will go extinct, but how do you use data and these ideas to actually design more effective solutions? We had faculty from eight departments, and that in itself has actually been the primary challenge of the entire program: mostly just trying to get a meeting scheduled, to get everyone in a room for an hour occasionally. But it's been really exciting. We have social sciences, economics, policy, environmental design from the design school, environmental sciences, computer science, and statistics. David Culler and Philip Stark are involved, so there's clearly strong overlap with this community. We have our first cohort of students; I don't have a slide of them, but they've been here for an introduction, and we're beginning to build bridges to BIDS as we move forward. I won't go into the details, but the basic idea is that we influence a portion of the first two years of their training. Many of you know this already: for PhD students, the first couple of years tend to be coursework, and then you go do your dissertation. These interdisciplinary training grants step in during those first two years and craft some interdisciplinary coursework and some team project work, so students actually learn how to work across disciplines and do a little bit of research, which opens up the data sciences. When they go on to their PhD, they have a different toolkit and a different social network, which I think is a huge part of it. They actually know each other; the environmental scientists know social scientists.
They actually know who to go have coffee with, where they might even do a chapter of their PhD, which is really different, because they've learned about these different perspectives. But we don't control the PhD mentoring; we're not dissertation advisors. We're just shaping the early part of their training. That's the last slide, and I'm just at the end of my 10 minutes. The last thing I'll say is that tons of work has gone into the undergrad data sciences curriculum; I'm sure a lot of people here have already heard the talks or been involved. And I'm assuming that as this moves forward over the next couple of years, the graduate curriculum is really going to require as much attention: what should be available to PhD students, to all of them if they want it, in data sciences? At best we're probably little beta testers, trying some things and getting some things going, and hopefully we'll be able to contribute to the dialogue about graduate curriculum in data sciences over the next few years. Almost within 10 minutes.