 I hope you agree with me that, aside from the quarter-of-five-in-the-morning excursion today, it has been quite a rich and memorable conference. I've certainly seen some amazing things. I do want to note that we will be following up with presenters to make sure that we have their materials up on our website. And realistically, we probably won't get all that finished until after the holidays, just given the time of year. We've also got a nice collection of videos that will be coming up over the next eight weeks or so, and we'll be announcing those as they're ready on CNI-Announce. So look forward to that. I look forward to seeing some of you at the International Digital Curation Conference in Edinburgh in February, which we are helping with, and many of you out in Albuquerque for our spring meeting in early April. With that, I just want to conclude my broad remarks with two rounds of thank-yous. One to all of the presenters who have made this meeting just wonderful. Thanks. And secondly, I'd just like to take a minute to recognize the whole team at CNI that's made this meeting run, I think, very, very smoothly. They do amazing things, and they're just wonderful. Please join me in a hand for them. We have a very special program for you today. We've got not one but two amazing talks. We have a special briefing by Bob Kahn, who will lead off, and then we have a closing plenary by Ben Shneiderman. I'll introduce Ben just before his section. But let me say a couple of words about Bob Kahn. Now, to some people here, of course, Bob is a living legend. Some of the younger folks here may not know that name, particularly if you've never really worked in computer networking, because he's, of course, most widely recognized for his role in the creation of TCP/IP, along with his colleague, Vint Cerf. But recognizing only that keystone achievement totally underestimates his contributions.
He has been consistently a decade ahead of his time, again and again, and has directly or indirectly been absolutely fundamental in creating much of the world as you know it today. A great deal of what we see in mobile computing, and even mobile telephony, has roots in the foundational work he did on packet radio back when he was at the Advanced Research Projects Agency, which sometimes goes by the name Defense Advanced Research Projects Agency. He created, years before many of us recognized its fundamental importance, the whole handle system, and then the DOI, the digital object identifier system, that rides on top of that handle infrastructure. And that is a wonderful achievement. What you may or may not realize is that among those few achievements I mentioned, and many more, over the last, gosh, probably two decades genuinely, he has been hard at work in his role as the leader of the Corporation for National Research Initiatives at defining and bringing into being instantiations of what he calls the digital object architecture, of which the handles and DOIs are a small but key part. And this is finally, I think, beginning to gain some traction, particularly outside the US. He has very kindly agreed to give us a short briefing on some of that work, where it's headed, and where it came from. I can't tell you how thrilled I am to welcome Robert Kahn back to CNI. Bob, please join us. Well, thank you very much for that lovely introduction, and for inviting me here to address this crowd. It's a nice, nice crowd. I know many of you in the room, but many of you probably haven't met me before, and I probably haven't met you either. I look forward to sharing some ideas with you. In fact, that was such a nice introduction that maybe I should just go right to questions. What do you think? Or how about you give the presentation, since you're so good at it?
But I was asked to discuss briefly a project that I started in early 1992 with support from DARPA, which we called the Computer Science Technical Reports project, or CSTR, and the associated architecture that emerged from that program and took shape later on. I know that some of you in the audience were part of that and may remember, including Cliff himself, who was at some of those meetings. In that timeframe, in mid-1992, the web had really not yet made its impact, in part because of a number of factors. Those of you who remember it know other technologies seemed more likely at that time, whether Gopher, Archie, Veronica, or the like. But in a separate project that I was running and was the PI on, having to do with gigabit networks, we had actually funded NCSA, part of the University of Illinois, to develop what became the first widely used browser, a point-and-click browser, for what became understood to be the web. At the time, they were doing it to provide point-and-click access to their Cray machines. But that particular effort launched the web big time. So it came after the material that I'm about to tell you about right now. In that timeframe, both the libraries and the publishers were contemplating the digital revolution. There were a lot of things going on. But it really started decades before, within the research and educational communities that had worked with things like the ARPANET and, more recently, NSFNET. And then there were the other governmental and private-sector networks that emerged. The internet, of course, enabled them all to work together. It wasn't a separate network, as many people think; it was the glue that allowed for interoperability between all those different components. In the late 1980s, commercial email systems were allowed to interconnect with the internet. We played a major role in that, my organization did. But it really opened things up to a broader view of what was going on.
And in 1993, a bill was passed in the Congress, which I tend to refer to as the Boucher bill after Rick Boucher, who was a congressman from southwestern Virginia, that allowed the NSFNET, the National Science Foundation's network, to be opened up for wider usage, including interconnections with commercial networks. The genesis of the CSTR project was a desire by a number of universities, who were trying to deal with DARPA, to get them to help digitize their collections, mainly their technical reports, but also some of what you would call the gray literature that existed and wasn't normally dealt with through typical publication channels. So DARPA had been approached to help with it. If you've ever dealt with DARPA, a pure digitization effort wasn't really high on their list of interesting things to do. It wasn't even necessarily a research project from their point of view. But they wanted to be helpful, as they usually were. So they weren't entirely dismissive, but they certainly asked CNRI if we could help somehow, because they had other, higher priorities. This is what led to the CSTR project. We made a proposal to turn this into an interesting effort for them to consider, and DARPA provided the funding to us, but they mandated that we work with five of the leading computer science departments in the university community to help develop this further. These were handpicked by DARPA, namely MIT, Cornell, Carnegie Mellon, Stanford, and Berkeley. Those were the five. And they were arguably the best of the bunch, or certainly they were all top ten, and probably most of them top five or close to it. For us to get involved, since I was interested in R&D, I insisted on really just two things from the universities if they were going to participate; they had to propose what to do. One was that each of them would be required not only to digitize their collections, but to develop an in-house digital library system to manage them.
And the caveat that I applied was that, since we didn't want to spend two or three years figuring out how to do a common system that everybody agreed to, they would do them all independently, so we could quickly get this off the ground with each university doing its own thing. And they did. And then, importantly, each would have to propose an interesting research project that would make use of this digital library of their collections. They all did that as well. So these universities all had different interests, as did we, but with separate implementation efforts, we knew that what would emerge would be five different information systems. Literally, that's what a digital library is: a microcosm of an information system, dealing with the traditional things that you would think of in the publishing and library world. These were digital libraries that would have to work together, we thought. And so the question was, how would that work? Today, I would argue that this is a special case of the more diverse information systems that you might find emerging in the Internet of Things. When you think of a thing today, most people are thinking about a thermostat, or a little switch, or an actuator. But if you think of an automobile as a thing, it's got lots of things in it, in its own right, that are collecting data, and you might wanna look at histories. If it's an airplane, the same thing. It could be something like a building, in which case you might want to know the plans for the building, what's in the building, what systems are operational, and so forth. So to me, this move to the Internet of Things was just another giant step forward that, in a large sense, is still waiting to happen. The need for interoperability existed in the case of these libraries that we were causing the universities to think about, but I knew this would be a strategy that would be useful longer term.
So as part of that, one of the things I also mandated for ourselves was that we'd be part of this process, not just a bureaucratic manager on the side doling out money. We took on two tasks for ourselves. One of them was essentially to develop an approach for linking these heterogeneous information systems. That's a problem we still have today. We came up with an approach, and I believe that approach is still the right thing to be focusing on moving forward. It's not a turnkey solution for everything, just like the internet was not a solution for every application in the world, but it was an enabler; it lowered the barriers to success. And the second, since we were dealing with intellectual property here, was to work with the universities to help resolve some of the attendant intellectual property issues, particularly the rights issues that might emerge. At the time, I thought I was in a good position to tackle the first task personally, because it mapped almost perfectly onto the kind of internet challenge that we faced in the early days. We were linking together different networks and computers. Well, this is sort of the same thing, but imagine a computer as a thing. It's just a very complicated thing. And Patrice Lyons, who is in the audience in the second row over here, was also working with us very closely at that time, dealing with the legal aspects of the project. So that's why we thought we were in a good position. We were also subsequently asked by DARPA to get the Library of Congress involved, which we did; the Copyright Office was the part that agreed to participate. They took on the issue of handling the automation of deposits for the purpose of registering copyright claims. I'm not gonna say anything more about the copyright aspects here, but there were several things on the technical side that I want to mention, and Cliff alluded to some of them.
In the mid-1970s, I led an effort to develop what today, if you look carefully, you would see was technically a forerunner of what we now know as CDMA-type direct-sequence spread spectrum, which is used in virtually all the smartphones and cell phones today. Of course, we didn't have those back then, but we did implement the technology, and we actually used it. And it was one of the very first efforts, perhaps the first, to not only use link encryption, which people knew about well, but to actually provide security for the end information itself in the systems that were transporting it. In the mid-1980s, after CNRI was founded, my colleague Vint Cerf, who was also mentioned by Cliff, and I described a technique for using mobile programs in the internet to access information. We were thinking of freeing up the user from having to have fingers on a keyboard and a nose on the screen all the time. We called these things knowledge robots, or knowbots for short, and the presumption was that someone would program them to carry out a function or a task; we described how that would work. But it was about that same time that worms and viruses were first appearing. Remember the Morris worm, in 1988. And most of the organizations that we talked to expressed concerns about having programs show up over the internet, even if they were connected to it, that could be written by others unknown to them. Technically, we could try to explain why this wasn't really a problem, and that the same technology that was enabling it could also intermediate and deal with these issues, but it was perceived to be a problem, and the net effect was that they were uncomfortable just adopting it per se. Also, it wasn't really clear how these mobile programs would actually get written by normal users of the net.
So we tried to help with that problem, with a lot of support from the National Library of Medicine later on, and we actually built an interesting system for them. In the process, we developed a language called Python, which today is widely used; in fact, I think it may be the most widely used language for teaching computer programming in the country today. But that first issue, security, really did become a problem for these mobile programs. So the CSTR project provided an opportunity to make some progress here. What we did first, in essence, was to try to remove the mobility from the mobile programs. It may seem like an oxymoron, like removing the paper from a dollar bill. It was sort of an intellectual exercise to see what was left if mobility wasn't permitted. And if nothing could move around, what we ended up with was something that is now the digital object architecture. That's really where it came from. The lingua franca of the internet was packets, and eventually IP addresses, which would allow them to be routed. Those were the identifiers essentially used inside. Addressing was always an issue in networking, whether it was the wires in the early networks, where an address corresponded to a single computer; the IP addresses in the internet, where the computers could now be anywhere on the globe, on any network; or even files represented as URLs in the web context. But here the lingua franca was digital objects. And a digital object was defined as a set of bits, or a sequence of bits, that had a unique persistent identifier, so that you could understand from the identifier essentially how you could get to that information. Except that many of these identifiers were meant to be created from existing identifier systems, or even cryptographically, as was the case for some in the government. And so you couldn't actually depend on the identifier itself telling you anything more than "here's an identifier"; nothing about it was actionable on its face.
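To make that definition concrete, a digital object in this sense is just a bit sequence paired with a unique, persistent, otherwise opaque identifier. A minimal sketch in Python, with all names and the example identifier purely illustrative (not drawn from any CNRI implementation), might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DigitalObject:
    """A sequence of bits with a unique, persistent identifier.

    The identifier is opaque: nothing about where the bits live,
    or how to reach them, can be assumed from the string itself.
    """
    identifier: str   # e.g. "20.500.12345/report-042" (hypothetical)
    content: bytes    # the bit sequence

obj = DigitalObject("20.500.12345/report-042", b"technical report contents")
```

The point of the opacity is exactly what the talk notes: since identifiers may come from existing systems or be generated cryptographically, resolving one to something actionable requires a separate resolution mechanism rather than parsing meaning out of the string.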
Today, people use different names for these identifiers. DOI is a trademark of the International DOI Foundation; a DOI is a handle, they just call it a DOI. We call them handles, but they're really just digital object identifiers. Other people call them PIDs or IDs, and I think you'll see many more, including some that are not even expressed in alphanumeric characters. So I presented this idea to the participants in the CSTR project at our first meeting, which was at CNRI in October of 1993. Many of the discussion items led to questions and issues that we couldn't resolve at that meeting; in fact, it took, I would say, close to a few years to really sort them out among everybody. And it was suggested that I work with a fellow named Bob Wilensky, who at the time was, I think, chair of the CS department at Berkeley. He and I spent the next year or two trying to put down on paper the ideas that I had presented, in a way that was understandable to everybody in common terms. I won't go into all the issues that we had to deal with. They were very interesting, but the whole idea was to help the CSTR community at least have a methodology to interconnect and link these systems together. And originally, we started out by thinking that everything would be driven by humans, that they would be taking all the action. So we had debates about, should we use FTP just to move library files? That was twenty-year-old technology by then. Was that the right answer? There was this new thing called HTTP that had just emerged. Should we use that? Some were saying, no, no, no, we don't need that, FTP is perfectly good. Where would browsers fit in, if at all? And how would this all work? So those were some of the things that we were debating. Well, the architecture consisted of really three main components. One was the notion of a digital object repository, which would store these objects and from which you could access them by their identifiers.
That means whatever you were using for storage was not at issue; just as the internet protocols didn't ask what network or what computer you had, this was independent of the underlying technology. The second was registries, which would store metadata records so you could discover things: you could browse, you could search, and so forth, and it didn't have to be through public-facing things only, which is the way things like Google eventually emerged. And then a resolution mechanism, which would turn these identifiers into actionable information about the thing identified. I have to point out that these identifiers are probably the most important part of the architecture; they're sort of a linchpin which glues things together, just like IP addresses play that role in today's internet. I wanna mention five key aspects of this architecture that I think will survive any way of discussing them. One is that it is an attempt to provide for interoperability. Just as TCP/IP supported interoperability, the protocols that would be used to interface with these digital objects would allow any of them to interoperate with any others. Integrated security was the second: you could literally protect the digital objects themselves, building on the packet radio work from years before. Not only could you secure the objects, you could secure objects that were inside the objects. In fact, you could think of a repository as a digital object, which in the mobile world could move around, but in this context was stationary. And the protocol would interact directly with the information itself, not with a wire or a system or a file or a piece of the technology, but with the information. The third is the notion of open architecture, and basically that means defined interfaces, so that people know how to build things to connect, and, importantly, independence from the underlying technology that it might make use of.
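The three components just described — a repository that stores objects by identifier, a registry that holds metadata records for discovery, and a resolver that turns an identifier into actionable information — can be sketched as a toy in-memory model. Everything here is illustrative (class names, method names, record shapes are assumptions for this sketch, not CNRI's actual interfaces), and real implementations add the protocol, security, and persistence layers the talk mentions:

```python
class Repository:
    """Stores digital objects; access is by identifier only,
    independent of the underlying storage technology."""
    def __init__(self):
        self._objects = {}                    # identifier -> bit sequence
    def store(self, identifier, content):
        self._objects[identifier] = content
    def retrieve(self, identifier):
        return self._objects[identifier]

class Registry:
    """Stores metadata records so objects can be discovered
    by browsing or searching, not only by knowing the identifier."""
    def __init__(self):
        self._records = {}                    # identifier -> metadata record
    def register(self, identifier, metadata):
        self._records[identifier] = metadata
    def search(self, **criteria):
        return [ident for ident, meta in self._records.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

class Resolver:
    """Turns an identifier into actionable information
    about the thing identified (e.g. where to find it)."""
    def __init__(self):
        self._bindings = {}                   # identifier -> actionable info
    def bind(self, identifier, info):
        self._bindings[identifier] = info
    def resolve(self, identifier):
        return self._bindings[identifier]
```

The merger of repository and registry mentioned later in the talk (Cordra) reflects the observation that each needs the other: the registry's metadata records need storing, and the repository's contents need describing.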
And that was critical for a number of reasons, which we'll get to in a second. The fourth is persistence: if you create this information, you can access it virtually anytime in the future, if people maintain it. The problem that the publishers were dealing with at the time was that they wanted to put out journals with citations in the back, and yet in the early days of the web, if you recall the timing, I would say half the URLs became obsolete within a year or two, and in five years 95% of them didn't work. That just couldn't work for most of the long-term archiving issues that people were dealing with, and it couldn't work in any of the governmental or corporate situations where you wanted records to persist, maybe indefinitely. And then finally, scalability. So let me just say a word about scalability. The internet is an open architecture, and in the four-plus decades it has been around, lots has changed. All the underlying technology is different, and the architecture still works, even though communications have scaled up by a factor of about a million in speed, processing power has gone up by about a factor of a million as well, and you can buy about a million times the memory for the same price. So it has scaled up by a factor of a million, and a decade from now, and I'm sure we're gonna have the internet around for another decade, it'll be a factor of a billion. In the history of technology, I think this is pretty unique; I can't name anything for which that kind of scale-up has actually shown up before. The goal in this case was to do the same thing for managing information, and to allow this architecture to scale with growing demands more generally. Now, software implementations of these components have been available for quite a while, but folks had to download them and install them, and this was fine for the technical experts, but pretty imposing for almost everybody else.
Either they didn't know what to do, didn't trust it, or whatever. But recently we merged the repository and the registry components together to form one downloadable package called Cordra. It's open source; you can find it on the net at cordra.org. The reason for doing that is we realized that every registry needed a repository to store the metadata records, and every repository needed a registry to know what was in there. So putting them together made an awful lot of sense. You can also find the specification for the digital object interface protocol on the net; it's referenced in an ITU recommendation that I'll mention in a moment. Just as TCP/IP maintained and enabled interoperability in the computer-to-computer sense over multiple networks, the digital object interface protocol is essentially its counterpart for the information-to-information space. If you have something like a medical record, which could be structured in many different ways, one possibility is to just download the whole medical record; but if you can actually interface with that information, you can ask questions about it, and that's what this allows. And then the resolution capability basically implements a deployed PKI capability. Finally, the digital object architecture makes use of all the existing internet capabilities, or any of those you want. It doesn't attempt to replace any of them. A lot of people are concerned, oh, is this a replacement? It is not. We need what's there on the current internet; this is intended to build upon it rather than replace it. And there's a recent ITU recommendation, X.1255, which provides an overall framework for thinking about this problem. It's relevant to this, and it's based on the digital object architecture. I commend it to you; you can find it on the net.
I wanna just say a few words about the identifier resolution component of the architecture, which we usually refer to as a handle, or generally as a digital object identifier, or simply an identifier. Why is it so important? Well, first, in this possible future, every device could conceivably have a repository, because people are gonna be storing information, and every device may wanna store it. So if every tablet, laptop, workstation, every device has one, you will probably have as many of these in this hypothetical future, and it's not here today, as there are people. So you might have billions, tens of billions, or even trillions of repositories, for starters. You can't assume these identifiers will tell you anything about which repository to ask, so what are you gonna do? You can't just ask a trillion things, or a billion things, "do you have this particular digital object?" You need another way to deal with it, and if the identifiers are cryptographically generated, the problem is even more pronounced. So the resolution system was initially deployed by CNRI as a kind of single-level thing that would contain all the handle records. You could ask that system, and it would give you enough information to take the next step. But it was a centralized, nodal system that potentially didn't scale and could become very large. And I think it was Bill Arms, and I don't know, Bill, if you're in the audience today, though I understand your name was on the attendance list, but I think it was Bill, when he worked at CNRI, who persuaded us to do a two-level system, so that individual organizations could take responsibility for creating their own records, and we would give them a kind of unique identifier that they could put in front of whatever identifiers they were using, so they could make use of their existing systems. You didn't have to worry about overlap of identifier systems.
So we did that, and today there are on the order of at least thousands, probably tens of thousands, of such organizations that maintain their own records. We call these local handle services. So if you've got an identifier, how do you know which of these local handle services to ask for the handle record that lets you get to the information? We had started out with a single system, which we retained; we now call it the Global Handle Registry. What we stored in that global registry were just the numbers that we gave to these organizations. Given an identifier, you could tell what that number was, because it was always at the beginning; we required the structure to look like that. You could ask the global registry to whom that number was given, and then you were good to go ask them: do you have a handle record for the following identifier? If so, they would give it to you, and of course security was a clear part of it. So in many ways you can think of this GHR as the root of the resolution system. About ten years ago, and I'm getting close to the end here, Cliff, we were approached by some researchers in the European Union who were working on the management of large research data sets, and they wanted to implement their work using components of the digital object architecture. They were concerned that if they made that commitment, and they did it broadly, this was pan-European, what would they do if something happened to CNRI along the way, and thus something happened to the root of the resolution system that was critical? So what we did, after a lot of discussion, was agree that we would work with them on a plan in case that unlikely scenario they were positing were to unfold. That was fine with them, and it was good enough for them to get started, but we then got more and more input from other parties for whom a mere plan was not sufficient. They wanted something now that would give them confidence that this would be okay for them going forward.
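The two-level scheme described above — the organization's number at the front of the identifier, a global registry mapping those numbers (prefixes) to local handle services, and each local service holding its own handle records — can be sketched as a simplified in-memory simulation. All prefixes, service names, and URLs below are made up for illustration, and the real system adds a network protocol, replication, and the security the talk emphasizes:

```python
# Step 1: the Global Handle Registry (GHR) maps prefixes to
# the local handle service responsible for them.
GHR = {
    "20.500.111": "lhs-university-a",    # hypothetical prefix assignments
    "20.500.222": "lhs-publisher-b",
}

# Step 2: each local handle service maps full identifiers
# to handle records (actionable information about the object).
LOCAL_SERVICES = {
    "lhs-university-a": {
        "20.500.111/tr-93-001": {"URL": "https://example.edu/tr-93-001"},
    },
    "lhs-publisher-b": {
        "20.500.222/article-7": {"URL": "https://example.org/article-7"},
    },
}

def resolve(identifier):
    """Two-level resolution: ask the GHR which local service owns
    the prefix, then ask that service for the handle record."""
    prefix, _, _suffix = identifier.partition("/")   # prefix always comes first
    service_name = GHR[prefix]                       # global lookup
    return LOCAL_SERVICES[service_name][identifier]  # local lookup

record = resolve("20.500.111/tr-93-001")
```

This is why a single global registry stays small even with tens of thousands of local services: it stores only prefix assignments, never the handle records themselves.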
Some wanted us to transfer responsibility for the GHR to them; I won't name them, but some were countries that historically we didn't get along with all that well. Some wanted it to go to the ITU, which was another major issue. After all these discussions, we were not willing to take either of those steps, but in many ways, if you have followed the internet governance discussions, this was a situation very similar to internet governance, which involved not only the views of the US and ICANN and the like, and is still not really fully resolved today, despite the ICANN transition. So a solution to this issue was clearly needed to enable more widespread use of the architecture, and in fact we were encouraged to pursue one at the time by the President's Science Advisor, after meetings with him to talk about the advisability of taking that step. We finally decided to take a two-part approach. First, we augmented the technology to allow for multiple parties to administer the global registry, but we didn't ask anybody else to administer it initially, so we were the only administrator. But one of the questions that came up in this multiple-administration mode was, first, how would it work at all? And if you start with multiple administrators, who selects the next one? Is it an old boys' club where you go ask them, or is it some other mechanism? More importantly, security was critical here. You didn't want one group screwing up what was going on, especially at the root level. Various parts of the system needed keys and signing, so who would hold the keys, who would do the signing, and so forth? And more generally, how would this overall GHR, involving multiple parties, actually be administered?
So there was quite a bit of discussion, and after that discussion, with multiple alternatives considered, in governance circles and outside, it was decided to establish a non-profit foundation in Geneva, Geneva being a neutral city in a neutral country, and that foundation would provide these functions. So that was decided on, and we actually established the foundation in Geneva as planned in January of 2014. It's about three years old at this point. It's got a nine-member board at the moment; it can grow to more than twice that if necessary and desirable. I chair the board, but it's drawn from around the globe, and it's intended to stay that way, a kind of multi-stakeholder board representing different interests. Some of them have representation here. One of the early members was Norman Paskin, who, if you remember, was the managing agent for the whole DOI system. Ed Pentz, who runs an organization called Crossref, sits on that board. Peter Wittenburg, one of the leading proponents of the large research data initiatives in Europe, sits on the board, and so forth. The foundation has authorized seven organizations as administrators, and the plan is to grow to twelve by the end of 2018; again, we're trying to get broad representation around the globe, from parties that can essentially help promote its use by different communities. Support for the GHR functions doesn't come from the foundation; it comes from the administrators. It's like flying: if you wanna fly on an airplane, you deal with one of the airline companies, you don't go to the FAA, even though the FAA plays a key role in overseeing what's going on. The foundation doesn't operate the services itself; it deals with the administrators, and the annual funding for the foundation comes from those administrators, to maintain the GHR functionality.
That's a small part of what the foundation would like to do; it's an important one, but the foundation is also looking for support from other sources for things like outreach, informational participation in conferences to let people know what's going on, pilot projects around the globe, and standards activities as appropriate. They don't need pilot projects in the developed world so much, but in some of the poorer countries and developing regions they do. It's currently exploring pilot projects in healthcare, but that may grow into other areas as well. One of the things it's looking to support is the core standards that are unique to the digital object architecture, but the intent is to do only a very limited amount of standards work in the foundation itself, and rather to work with all the other relevant standards bodies around the world on the appropriate development. One key example, which ought to be of serious interest to this group, is the development of a standard representation for data types. If somebody gave you not only a piece of digital library material, which is typed in various ways, but a big research data set, it doesn't help to get ten terabytes of data without knowing how it's structured internally, so typing is important. We are playing a lead role with that, the US government is behind it, and we're doing it through ISO, which is another one of the main standards bodies.
There have been a range of applications of this architecture and its components, some of which you're surely well aware of, such as the initial ones in government applications and in libraries and publishing. But it's also been applied in the entertainment industry, with the movie studios in Hollywood and cable TV working together; the healthcare industry; the financial industry, particularly in the derivatives market; the construction industry, for managing building plans over time and for managing the contents of buildings (what pipes, what heating units and the like are in there) and for the operational building systems, a big information system in its own right; and for supply chain tracking, which is a big issue in China, where they're dealing with the food markets; and shortly for smart cities and the IoT, certainly in Sub-Saharan Africa and maybe beyond. There has been much less uptake in the United States than I would have liked to see. US industry, I suspect, has its own reasons; maybe they would like to see a different approach taken for the IoT, but I hope that will change, because everything needs to be interoperable with capabilities around the globe. So for now, anyway, the leadership in applying this stuff is really coming from abroad, or at least seems to be, but in the long run I think the US still has the capability to play a leadership role, maybe the leadership role, here, as it did with the early internet technology once it decided to jump in with both feet. But the reality is that this train has already left the station. It certainly left in the research community a long time ago, and in the larger world that's increasingly the case, so I think they can still get on board, but they're going to have to get in the fast car and get on at a station or two down the line. I hope that happens, but meanwhile its use is really confined to the applications you've heard about, though I think there are many more in the pipeline. So I hope that's informative for you.
That is the background as I see it, and I'm happy to take any questions if we still have time. I hope you understand why I thought it was really important for you to hear that, and hear it now. You should recognize, by the way, the interconnections here with the internet of things, but also with the emergent global efforts to sort out research data through things like the Research Data Alliance, which is having a rich conversation about these kinds of things; that is very much ongoing. So I hope this has put a new body of work and potential collaboration on your radar screen, and Bob will be around for a bit if you'd like to talk with him further. I need to push on now; we are a little bit behind schedule, so we'll run a little late, life's like that. Ben Shneiderman. It's wonderful to have two speakers like this. I've known Ben for 20, 30 years. This is one of the people who really, in some sense, created the whole idea of studying and analyzing and designing user interfaces as we think about it today. An amazing body of work. Some of you may have seen really major contributions, not just sort of hard computer science like his book on user interface design, but Leonardo's Laptop, a wonderful piece of work, and I was so touched today: while Ben was signing books, somebody actually came with a copy of this that they'd held on to for years and finally got him to sign it. But he's not really here to talk about that today.
He started thinking, well, probably many years ago, he's obviously been interested in this as a subtext all of his career, but he started really thinking a few years ago about the whole nature of the research process and about the whole way it has been sort of stereotyped and narrowed, in some sense, by certain reports and visions and public misunderstandings about how research is done and has impact. And he wrote this amazing book about a year ago, and I hope many of you have read it or after this will read it, called The New ABCs of Research, and there are a couple of copies out on the desk if you want to look at one; he'll show you a picture of the cover. This is a very special book because it really talks, both analytically and in a kind of visionary way, about ways in which research can be different and can really have a high impact on hard societal problems. It has connections, fascinating to me, with the thinking that has gone on about cyberinfrastructure, about multi-disciplinary science, all of this, and it also is written in a way that speaks to researchers at all levels, including, really importantly, young researchers who are trying to find their way through their chosen paths in life, as well as leaders in the research and education process, and I really am just so pleased he can be here and talk to us about this. So thank you. Welcome. Thank you, all right. Thank you, Cliff. All right, I'm very pleased to be here, and thank you for this opportunity to reach this special audience. This really feels like the right place for me to be, and this talk's a little bit different. I've been listening to talks today, and at most of the conferences I go to, people talk about the work they have done. This is a little different: this is about the work I want you to do. And so I have a tough challenge here, to suggest that there's an agenda of items by which you may change the way research is done at your institutions, and that's where we're going.
So the story, which Bob Kahn politely advertised for me by clicking to my slides as they were being shown through his talk, goes back to the University of Maryland, where our wonderful interdisciplinary group, the Human-Computer Interaction Lab, for more than 30 years has been doing the kind of work that Cliff described. It's a combination of computer science and the College of Information Studies, but also these other units around campus, including the wonderfully titled Maryland Institute for Technology in the Humanities, MITH. If you come to our website, you'll see 800 technical reports, 200 videos, 200 projects over the years, and a lot of information that I hope you'll put to work. Some of you know me for the book Designing the User Interface, which just came out in its sixth edition late this spring; it required six co-authors to make it happen and tell the remarkable story of why seven billion people have something like this in their pockets with which they can communicate with family, or get their education or medical care, or do business. That remarkable transformation happened because of Moore's Law, because of Bob Kahn, Vint Cerf and many others, but it also is because of the people in this research community who actually made the designs for the visual apps and other kinds of interactive platforms that made it possible. So it's a great success story, it's continuing, and it's a remarkable evolution. As Cliff was mentioning, my history has been involved with ideas and developing innovations, and I've had the satisfaction to see them travel well. This is one academic project from the mid-90s; a company formed in 1997, I served on the board of directors for five years, and Spotfire grew from 1997 to be the leading tool for information visualization and was then purchased in 2007.
Here's an example from GE Healthcare in England, Nick Thomas. It's a single screen, a very simplified one, of genomic data with three coordinated windows, and it's a rare case where one slide, one image, produced a significant result that led to a published paper. It was the unusually high levels of the RBP1 gene that were shown in this slide; they're indicated here and in the coordinated windows, in the scattergram and in the plate view, and all of that is set up by a single control panel that operates all of them in a coordinated way, and that's become the strategy. Now, Tableau is maybe the most widely used visualization tool, dear friends of mine run it, and it's now a 3,000-person company and doing very well. Spotfire has grown and traveled well for pharmaceutical drug discovery, genomics and many other applications, and here you see how 27 windows convey the information in a high-dimensional data space. And we're not alone; there are lots of ways that information visualization has traveled well. I'll just mention this Bloomberg display: 300,000 people pay $20,000 a year to have that on their desktop, that's a $6 billion business, and it's principles of design, of spatial stability and having all the information available, minimal scrolling, minimal resizing and movement, that allow them to do their work. My own desktop has grown, so there are now 16 megapixels on two very straightforward commercial displays. I can actually read 15 pages on one screen, and I can have the editor's or reviewers' comments on the other screen, and without scrolling and moving I can see what's going on, move a figure, expand a section, add materials, and it makes my work much more productive and of better quality.
In fact, I will go to my office in order to do certain tasks, not just because it's faster but because I know I'm going to do better, and increasingly these environments are creating more effective creative work for you and for others. More or less, I would say the principle that we've learned from this is that your productivity and the quality of your work are roughly linearly proportional to the number of pixels on your desktop. So getting another screen might be a good idea; going to more resolution, being able to see more of the world and act on it with minimal interference, is a high payoff. Now, the design principles that make that possible are important also. If you have Facebook and Twitter and YouTube and email and they're all jumping and bumping and sending out alerts, well, that could be distracting. So using it in the right way is still important, and we've come to understand how to do that for individuals and for large groups doing nationally important kinds of efforts that are vital to our continued existence. There are lots of small-data displays too. Here's my most familiar visualization: sitting at College Park, right over here inside the DC Beltway, at the end of the day, with one hand, with one thumb, I can bring up this display, and what I'm looking for is the big-data result that puts 15 red pixels on the Beltway to let me know that I need to take a different route home to Bethesda, over here. So there are lots of ways that big data works even with small displays, and these are others. Some of you may know me from the treemap. This one shows all 160 million jobs in the US: 12% government jobs, 11% goods, the rest services, and the largest of those is the leisure and hospitality industry. So we've got lots of things going on in small displays. These projects have taken me, in partnership with many companies and consulting with them, towards research projects which produced academic research results and commercial changes.
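The treemap just mentioned can be made concrete with a small sketch. This is not the speaker's code; it is a minimal illustration of the original "slice-and-dice" treemap idea, where a rectangle is divided along one axis in proportion to the sizes it represents (a full treemap recurses into each child rectangle, alternating the slicing direction at each level). The job-share numbers below echo the talk's 12% / 11% / 77% split.

```python
# Minimal slice-and-dice treemap layout for one level of a hierarchy.
def treemap(sizes, x, y, w, h, vertical=True):
    """Return one (x, y, w, h) rectangle per size, tiling the given box
    so that each rectangle's area is proportional to its size."""
    total = float(sum(sizes))
    rects = []
    offset = 0.0
    for s in sizes:
        frac = s / total
        if vertical:   # slice the box left-to-right
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:          # slice the box top-to-bottom
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Hypothetical sector shares echoing the talk: 12% government,
# 11% goods-producing, 77% services, tiled into a 100x100 box.
rects = treemap([12, 11, 77], 0, 0, 100, 100)
```

A real treemap would call `treemap` again on each child's rectangle with `vertical` flipped, which is what produces the familiar nested layout.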
And that's what I'm talking about. As Cliff suggested in his introduction, this book is something that had been building up in me for many years, and my wife wisely said, go slow on this one and take your time and get it right. So I've worked very hard on this, and it's meant as a guide for junior researchers: if you're working in the current research ecosystem, here's how to increase the impact of your work. But it's also, importantly, a manifesto for change for the senior researchers at universities, for the deans and the administrators, for the business leaders in companies and for the funding agencies. I had the pleasure to speak at NSF about this a couple of months ago, had a wonderful reception, and was cheered to see they're moving down the road described by this. Also, there are fellow friends here in the audience: Gary Marchionini, Dean of the iSchool at UNC, bought copies for all his board of visitors and for his deans, and others have been buying 100 copies to give to their employees or their staff and putting it out there. The University-Industry Demonstration Partnership bought 100 copies to give to all their people, and I thank Cliff for giving me this opportunity to reach all of you. And here's the publisher's and author's gift, which is the mug with the book and the guiding principles on the back. So thank you, Cliff. Cliff's a hero also; I mean, for all these years he's been running CNI, and I think it's admirable to have stayed with it and made such important contributions. So here's the story. Maybe like some of you, I was trained in an era that was dramatically influenced by the writings of Vannevar Bush. He was science advisor to President Roosevelt during World War II and is credited with starting the Manhattan Project. At the end of the war, in 1945, he wrote a manifesto about how things would proceed after the war in the world of science, and it was wonderfully titled Science, the Endless Frontier.
You can read it on the web; it's free, out there at 80 pages, and a pretty remarkable document, which made the strong claim that the way research goes is: you start with basic research, then you go to applied research, look for commercial opportunities, carry out the technology transfer and make commercial products. This has come to be known as the linear model: start with basic, go to applied, develop market ideas and then produce products. It's a wonderful idea; it's compelling, it's simple, it's clear. Unfortunately, it rarely works. And even if you read Vannevar Bush's own document, he alternates between that model and celebrating the applied research results that triggered basic research, like the Manhattan Project, like the medical researchers in World War II, where the challenge of new kinds of tropical diseases and new kinds of weapons brought doctors many challenges which led to fundamental understandings. That model, though, has persisted, and here, 75 years later, we're still struggling to get out of it. Now, others have questioned this since the 70s: Tom Allen's book Managing the Flow of Technology, and, famously, Donald Stokes's book Pasteur's Quadrant described a different way. Pasteur was the sort of poster child for the new way of working: Pasteur worked with the vintners whose wine went bad, and he worked with the milk producers whose milk went bad, and he came up with the germ theory of disease, vaccinations and lots of things, and he solved the problems of the farmers and the milk producers. Pretty good. And Donald Stokes's celebration of Pasteur's Quadrant was that Pasteur had a basic research result and applied research methods: he worked with the producers, he had a theory, he tested the theory, he published the result and he disseminated the solution. The twin win. That's what we're after here.
That's what we're after, and yet the research agenda has largely gone down much narrower paths to smaller results of less significance, and some have pointed out the failures recently: the number of retracted science papers, the number of science results which are not reproducible, many challenges. Daniel Sarewitz over the summer published his article called Saving Science. He talks about how science is falling apart because of this narrow devotion to the old model. Now, Donald Stokes was clever enough to highlight the Pasteur quadrant, but he also appreciated that you could do good work in the Niels Bohr quadrant of just doing basic research, or in the just-applied quadrant, which is Thomas Edison. Thomas Edison had 1,060 patents but no papers. Okay, he didn't publish basic science, all right? So there are other ways, but the message here is: if we're going to produce powerful, high-impact results that influence society, that make a difference, that change the world, you've got a better chance of getting there if you take the Pasteur quadrant model. Work on a real problem with real partners, take an academic theoretical approach, bring your theory to practice, validate. Hey, there's nothing better than a validated theory, okay? And then publish your theory and disseminate the solution: the twin win. Okay, so that became increasingly clear, and others have echoed it most recently. Another book is out: Venkatesh Narayanamurti published a book last month called Cycles of Invention and Discovery, and he also strongly attacks Vannevar Bush, lays out the historical evidence about that, and then promotes this new method. Now, he's significant because he was an important manager at Bell Labs and then became Dean of Engineering at Harvard, and he continues at Harvard. So here's my story.
I think there is a new way to reframe the research at the universities we represent, to make it much higher impact and produce more positive, more reliable societal benefits, and here's the pitch. The rest of this talk is one slide, okay? There are three things that have changed in the world, in the context of research, since the days of Vannevar Bush. First of all, the problems are very different from the days of Science, the Endless Frontier, when reductionist models were okay, when working in a controlled laboratory environment worked out just fine for a lot of problems. The problems we have today: healthcare delivery, community safety, energy sustainability, environmental preservation, cybersecurity, all right? These are large and often socio-technical problems. They're not about finding more knowledge. In fact, I participated in a National Cancer Institute panel about restructuring their research, which was saying: we don't need more basic research, we need to apply more effectively what we already know, okay? So the problems are really different, and the methods of science, with their reductionist laboratory approach, are still valid, but we should be shifting to other models of research as well: models that are open, that are more context-based, that recognize and validate qualitative research, that make rigorous case studies an effective method. Second, these immense problems are hard to work on, but the good news is we have better tools. We have the internet, we have the World Wide Web, we have collaboration environments; we have rich, powerful tools that let you work together with people at a distance, to get to the right people for the problem, so you can bring the skills together and form a team that's effective in solving it. Not just the people who happen to be there, but the people with the skills you need.
Over the web you can get the papers that are relevant, you can find the potential collaborators, you can ask the questions, you can get to them with your early results for validation, raise questions, and then, when you have results, disseminate them in ways far more powerful. Some of you may remember the days, and I do also, when I would get a postcard in the mail that said, "Dear Professor Shneiderman, would you send me the paper that you published two years ago," and I would make a copy and mail it out. Now, when my students and I have a new idea, within three hours we've identified the top 10 papers and the top 10 researchers, and we send them an email that says, hey, have you thought of doing that in this context? And we're off and running. You may also know, this is the right audience for this, that Carol Tenopir's studies showed that 35 years ago the average researcher read 175 papers a year from a small set of six to eight journals that they subscribed to and flipped through. Now the average researcher reads 400 papers a year, and they don't subscribe to anything; they get these from very diverse sources, and they learn about them through email and listservs and blogs and YouTube and social media of all kinds. And so the social structure, the ecosystem of research, has radically changed because of these new technologies. Collaborations have changed: 35 years ago a good medical research paper was 800 patients studied at a single university hospital, with three or four authors. Now it's 800 patients at 10 different university hospitals around the world, with 35 authors. The growth in authorship is very apparent: 35, 40 years ago half the papers were single-authored; now only 10% of the papers are single-authored, and the value of collaboration has gone up. Even 35 years ago multiple-authored papers had more citations, but now they have twice as many; the capability of teams to work together and do better work has dramatically improved.
I think the quality of research papers has gone up dramatically, and the quality of work that's out there is strong. The competitive atmosphere is really difficult: you get 10 and 20 and 30% acceptance rates for grants and for papers at conferences, but the good news is the work is getting better because of these stronger collaborations. I try in the book to take an evidence-based approach to reporting on these things. The third change, of course, is that all this has raised the ambition of researchers, who are doing more, working with more people, running larger teams, and so you've had in recent years the extreme case of thousands of co-authors on genomics papers, on the Large Hadron Collider papers, on astronomy papers. In 1965 the largest number of authors on an astronomy paper was eight, okay? And now you have astronomy papers with up to 1,000 co-authors and people working together in ambitious ways they couldn't have considered before. All right, so that's when I began to sort out what had changed, and then I began to distill a set of guiding principles. These are the core of the book, and the first one, after the title, is the ABC, the central belief; it says so right on the mug: Applied and Basic Combined. ABC, applied and basic combined. The suggestion is not A or B, but to put them together and have a collaboration between an academic researcher, theory-driven in their mind, and a practitioner who's got a real problem. Academics are actually smart people; they're just not very good at choosing problems. And if you can put them together with people with real problems, you do a few things. First of all, they get to work on something that matters. They get to validate their theory or solution in a real environment. They bring it back, refine the theory, publish the theory, disseminate the solution: they get the twin win. Those are very powerful. Those are the things that travel well. Those are the things that make a difference.
The old model of technology transfer, of doing your research and then seeking a market, rarely works, but if you have the partners, you've already got the market. You're closer to the goal, and it more successfully brings you to success in both domains. So the rather remarkable claim is that by working on real problems you get better theories. That's a strong one, hard to prove, but that's what I'd like to argue for. The anecdotes I tell, the historical examples, and whatever evidence I could amass to push towards that, I've tried to offer in the book. The second principle is science, engineering and design. When I was trained, it was the era of Thomas Kuhn's Structure of Scientific Revolutions. Research was science. Some of you may know: the National Academy of Sciences was founded in what year? 1863, Abraham Lincoln. The National Academy of Engineering was founded in 1964, 101 years later, okay? Up to that point, engineering was what train drivers did, okay? But after World War II, engineering became a research focus; it became a way to new knowledge. Not just making something, but developing new knowledge, and throughout the universities we represent, engineering colleges are developing new knowledge. In a way they are doing science, but they're doing it in an engineering way, and the method of engineering is different from the method of science. It's about prototype building, it's about modular design, it's about measurement, about improving the process towards raising the metrics of quality, and about the iterative process of design and redesign. The third category, which became important as I worked on the book, was design. And the National Academy of Design was founded in? That's a trick question; there is none, okay? And the book proposes that by the year 2065, another 101 years after the National Academy of Engineering, we would have a National Academy of Design. It's still a little early for that, but that's where I think we're going.
The rise of design and design thinking has been a very profound change, and its methods are yet different from those of science and engineering. So I would say design thinking is the next direction, and I advocate the idea that we teach our students the scientific method, the engineering method and the design-thinking method, and that we not only teach them but give students projects where they get to try them out. Now, this was a bold idea, and I was unsure about it, so I began to look for evidence that this was maybe really happening. I used the Google Ngram Viewer; you may know it, it covers 20 million books in English, and you can search for the frequency of certain words, and I did that from 1900 to the year 2000. The word engineering bubbles along, gets a little busier; science, a little more prevalent, bubbles along; but design comes up, and by 1970 it crosses over and becomes the dominant term in the literature in English. That was a pretty impressive result. Now, I wasn't ready to bet the book on this single result, but it certainly made me think, and it was apparent that it wasn't just Steve Jobs and Jony Ive who made this happen; they were part of a movement, in which businesses and universities were moving in the direction of making design a new way of gathering knowledge as well as of making products, okay. For some engineers, design is part of engineering, and yes, there are overlaps for circuit design and structural design, but there are other parts of design that are very different, social design and product design, that are not usually part of engineering. So I wondered about this, and then, while I was writing the book, The New York Times put up its Chronicle tool and allowed you to search the New York Times history, and I did the same thing for 1900 to 2010. And what did I find? Well, engineers bubble along, scientists bubble along, but designers cross over and become the dominant term in 1975.
Well, that was two data points, so that made me sit up and listen, and both of these figures are in the book as a way of showing what may be happening that we didn't all notice, all right. Now, in my career I followed that path, because I was trained as a scientist, as a physicist. I worked at Brookhaven National Laboratory, and I worked on science, but if you caught it, I'm a member of the National Academy of Engineering. Well, that's kind of interesting, and my book is called Designing the User Interface. So I have had these parts in my history, and I came to value them in new ways and to be more explicit about what had been going on through my life. Okay, so here's where we are. We've got a new world where the context has changed, which warrants a re-examination of Vannevar Bush's linear model. We have a couple of guiding principles which I'm advocating here, which you may question: the suggestion of making applied and basic a joint force, and then my suggestion that it's no longer just science, it's engineering and design. Now, I should say I explicitly limited my focus in the book that far. There are other areas of research and human endeavor, the humanities, philosophy, fine arts, sports, entertainment, that I don't get to, because I don't claim authority there, but others feel that these arguments are equally valid in those environments, okay? So then I began to lay out the story of how the process of research might be changed, and it starts with choosing the right problems, okay?
So I looked through all the books on guides to research, famous, you know, books: Peter Medawar's book Advice to a Young Scientist, and all he said was, don't choose piffling problems, a sort of cute English phrase but not very constructive. And I came to find that there wasn't much discussion; most students choose their research topics by getting a mentor who says, why don't you work on this. But increasingly now, in the open world of the web, where there is so much information, we've got more access to science roadmaps, which is NASA's favorite approach, and to design and engineering challenges; the National Academy of Engineering has its grand challenges, which are very much relevant, and then there's a growing set of design challenges about what we might do. Those often come up through competitions like SpaceX and the XPRIZE, which put out a million or more dollars for a particular contribution. You may know about InnoCentive.com, a commercial website that lets companies put up challenges with $10,000 or $100,000 awards. You get 90 days for the half million solvers who have signed up, including me, to look at these problems and try to produce a solution. So we have an increasing discussion about what the important current problems are that students could pick. Now, I do suggest that students work on problems with civic, business and global partners. So getting outside the classroom, getting outside the university, and trying to work with partners in local government; and you'll see some universities have made engagement with state or city or national efforts their focus. There are also partnerships with business that are growing. I mentioned already the University-Industry Demonstration Partnership group, which promotes these ideas.
The National Academy of Sciences has the Government-University-Industry Research Roundtable, GUIRR, which also seeks to make these ideas more common, because they have come to understand that big breakthroughs happen from this kind of work. So choosing problems should become something we discuss more, and every student, every faculty member, should get a sense that there's an array of problems: what are the things I could work on? Then there are the methods we choose. Traditional methods of observation and controlled experiments are fine, but increasingly the idea of intervention at scale has radically changed research in many fields. Take the idea of A/B testing in web design: Google, Netflix, Amazon, eBay have all done a lot of it. Microsoft does 200 of these a day. 200 a day? Okay, they're engaged. If Amazon wants to know, will I sell more books if I have a bigger picture of the book or if I have more text, well, you can put up two versions and get half a million people to use one and half a million to use the other, and in a couple of weeks you'll have a pretty good model of how much difference that makes. The idea of intervention at scale becomes possible because we have these new web-based and internet-supported environments, where we can shift from small laboratory studies to interventions at scale, in vivo. That's a big difference. It introduces all kinds of new ethical problems, institutional review boards, but that's where we're going. That's where many opportunities emerge. Now, the biggest chapter, and the most documented one, but I think the one with the most immediately applicable messages, is about how to form teams with diverse individuals and organizations and how to run those teams. It's often been pressed that work should be interdisciplinary, transdisciplinary, cross-disciplinary, multidisciplinary. And I wrote a whole chapter about that, and then I threw it out.
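The A/B-testing arithmetic described above can be sketched in a few lines. The counts below are invented for illustration (the talk only mentions half a million users per version); the two-proportion z-test shown is one standard way, not necessarily what any of these companies use, to decide whether an observed difference in conversion rates is larger than chance.

```python
# Minimal A/B test: compare conversion rates of two page variants using a
# two-proportion z-test. All counts here are hypothetical.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates
    between variant A and variant B (pooled under the null hypothesis)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# e.g. half a million visitors per arm, as in the talk's Amazon example;
# conversion counts are made up.
z = two_proportion_ztest(conv_a=25_000, n_a=500_000,
                         conv_b=25_600, n_b=500_000)
# |z| > 1.96 indicates a difference at the conventional 5% level
```

With samples this large, even a 0.12-percentage-point difference in conversion rate is detectable, which is exactly why intervention at scale changes what questions can be answered.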
And I came to see that the evidence was just not strong that interdisciplinarity actually mattered. And I wondered, what did they mean? There's a listserv, maybe some of you are members, called INTERDIS, any INTERDIS members here? I'm on INTERDIS: interdisciplinary research and education. And I posed this question to them. If a computer science theoretician works with a computer science system builder, is that interdisciplinary? Well, I got a 50-50 vote there. How about a computer scientist working with a genome biologist? Oh yes, that's interdisciplinary. How about a computer scientist working with a choreographer? Oh yes, that's really wonderful. And then I asked, what percent of the National Science Foundation budget would you give to each of those three categories? And the response is pretty much: well, the work with choreographers is a nice idea, we should give them some, and it might produce some interesting results, but there's not a big audience for that, and it's not clear they have a national agenda attached to it; something might come of it, we'll give them 5%, okay? But the genome biologist, that sounds pretty clear, and the computer science system builders and theoreticians working together sound good. And I was really delighted to discover an NSF program formed two or three years ago, while I was working on the book, called Algorithms in the Field, AitF. AitF "encourages", that is the word they use, the partnership between a theoretician and a person with a real problem. And increasingly NSF, and I cheered them on when I spoke there, has been going down the road of describing the kinds of partnerships that they want for research teams. So they're getting more proactive in following this route. So it's not necessarily the interdisciplinarity but the right partners with the right skills. After all, look at the history of things.
The transistor was created at Bell Labs in 1947 as an applied project to get a diode to replace the vacuum tube, and the famous team of Bardeen, Brattain, and Shockley were a theoretical physicist, an experimental physicist, and a mathematical physicist, all right? Were they interdisciplinary? Well, in a certain way, but they each had a role to play. They made the transistor. Only later did they get to the theory of the transistor effect, for which they got the Nobel Prize in physics, okay? And again and again, if you look at the Nobel Prizes, many of them in physics came from a real project. Arno Penzias and Robert Wilson were working for Bell Labs, trying to make microwave communications work. They were told to reduce the noise in the background, and whatever they did, they couldn't. Well, it turned out to be the cosmic background radiation from the Big Bang, for which they got a Nobel Prize in physics. But it was applied problems that brought them to their work. And again and again you'll see that happening. Not everywhere: there's still room for the lonely theorist and the solitary tinkerer, but I want to encourage people, where possible, to form teams with people who have real problems as well as a theoretical approach. So the teamwork I was looking for pairs people with civic, business, and global problems with those who have relevant academic approaches: not putting teams together for the sake of interdisciplinarity, but putting teams together because each member contributes to the problem, okay? While working on the book, I attended the ACM Conference on Knowledge Discovery and Data Mining, a very competitive conference. They had 1,000 papers submitted; they accepted 140 of them. Pretty tough going, okay? Good papers, solid work.
The program chair analyzed the ratings for those 1,000 papers, and here was a gift to me as I was writing the book: the papers that had co-authors from both industry and universities were rated statistically significantly higher than those that were all industry or all academic. A perfect little result. And I've since gotten further confirmation of that from other citation analyses. Now, that's really central to it. If you want to have big impact, you need to work on big problems and you need to work with partners, and the partnership and the teaming, which is a really strong part of this chapter, bring many, many interesting benefits. Not just more people to do more work, and not just different perspectives on how the work should be done: the different personalities contribute dramatically different ways of assessing how the work should proceed, and that may be the biggest payoff. So there may be one researcher who says, look, the conference deadline is June 15th, we've got to get this done, we publish whatever we have by June 15th. And the other person says, whoa, wait a minute, this work's really important, let's get it right, and once we've got it all done, then we'll look for a place to publish it. Now, both may be valid, but the process of arguing through what's important enough and how much can get done, that may be how research really advances rapidly: by having these discussions. And it turns out there's also evidence that teams with older and younger researchers, with men and women, with different personalities, those turn out to be winners as well. Now, remember, teamwork is hard, okay? It often fails, it's a struggle. But as people learn how to do it, they get better and better, and the quality of the work, I would argue, goes up. The evidence is a little tricky, but I think I can say it, and you can take a look at how I presented it. So the idea of collaboration is really important.
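A comparison like the program chair's can be sketched as a simple permutation test on reviewer scores. To be clear, this is a hypothetical reconstruction with made-up numbers, not the actual KDD analysis or data; it only illustrates the kind of statistical question being asked.

```python
import random

def perm_test_mean_diff(xs, ys, n_perm=10_000, seed=0):
    """One-sided permutation test: is mean(xs) - mean(ys) larger than chance?

    Repeatedly shuffles the pooled scores and counts how often a random
    split produces a difference at least as large as the observed one.
    Returns (observed difference, one-sided p-value).
    """
    rng = random.Random(seed)
    obs = sum(xs) / len(xs) - sum(ys) / len(ys)
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = (sum(pooled[:len(xs)]) / len(xs)
             - sum(pooled[len(xs):]) / len(ys))
        if d >= obs:
            hits += 1
    return obs, hits / n_perm

# Made-up reviewer scores (1-5 scale):
# papers with industry-university co-authors vs. single-sector papers.
cross = [4.1, 4.4, 3.9, 4.6, 4.2, 4.5, 4.0, 4.3]
single = [3.8, 3.6, 4.0, 3.7, 3.9, 3.5, 3.8, 3.6]
obs, p = perm_test_mean_diff(cross, single)
```

A small p-value here would support the claim that the rating gap between cross-sector and single-sector papers is not just noise; the real analysis, of course, used the full set of 1,000 submissions.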
Now, that presents a real problem for the central structure of universities, which is that we hire, we give tenure, and we promote people one at a time. If a person has been collaborative throughout their career, traditional members of their department may undervalue their collaborations, may undervalue how they published in related nearby disciplines, and so there's a need to change that agenda. The University of Southern California provides a good model of how to change tenure policies to reward collaboration: those who collaborated a lot were to be given especially warm receptions, and they describe how young researchers need to document their collaborations and what their role was. Increasingly, journals are recognizing this too. As the number of authors goes up, medical journals especially require a little credit section at the end that says who did which part. We may get to be like Hollywood films, with hundreds of names describing who did what on each part, and I think that's going to be an aspect of the future, and I think we should encourage that movement. So changing the way we teach, changing the way we assess, changing the way we promote will all be important to reconsider in this environment. Working with companies is another culture-shaking idea. Some traditional researchers think that if you've been supported by the National Science Foundation or the National Endowment for the Humanities, that's good stuff, but working with companies, working with civic partners or governments, well, that's not really that great. That tradition has to change, and more and more people are signing up for it.
I think you can look admiringly at Arizona State University, where Michael Crow, the president, has written a wonderful book and changed the culture from a reasonable state university with 50,000 students doing their job to a really well-considered new approach: there are now 90,000 students, their research funding has gone up dramatically, they've moved up into the top 100 research universities, and they've done an admirable job. Controversial, sure, but there's a pretty good story to be told, and they were devoted also to the problems of Phoenix, of the state of Arizona, and of national agendas. Next, once we get the data, there are going to be all kinds of ways to analyze it, and we're going to be testing ideas in realistic environments. We're going to do detailed case study analysis, we're going to work with rich qualitative data in a rigorous way, and we're going to do the big data analysis as well. I mean, that's the remarkable thing: so much of what we do is online, and the capacity to see what actually happened is transforming the way sociologists and many other researchers look at their data. And finally, new ways of promoting adoption and assessing impact. I attended the altmetrics session today, which is one of those movements showing fresh ways of looking, rather than just paper counts or citation counts. There are new ways of promoting adoption, and I would say in the old days, scholars would say, my research speaks for itself. That may have been true, but now there are two million papers published a year, and if you don't speak up for your research, no one's going to hear it, or very few will. It's much harder. And I would say spending an hour a month writing a blog post, or writing a piece for the public or for alternate audiences, or making a YouTube video is probably the right thing to do. I think it will even make the research better if you have to consider how to present it to different audiences.
One of my satisfactions at the University of Maryland, now in our third year of doing this, is that I convinced the Vice President for Research to give an award for research communicators, for those faculty who had published op-eds in leading newspapers for different audiences. The first year there were 14 submissions, then 38. We'll see how many we get this year, but there's a growing number of faculty who get the message: by writing for wider audiences, I will generate interest in my work, I will make the work better, I will draw students to me, I will actually have greater influence. Among the other tricks in this chapter, or let's say ideas, this one seems to have drawn a lot of interest; it's called Send Five and Thrive. Send Five and Thrive. It's just two paragraphs, just a short idea, but it reveals a different way of thinking. Here's the situation: you've just completed a paper that's been accepted for publication, and you have two weeks to revise it and submit the final version, okay? I tell my graduate student: take a look at the paper we've written, look at the references, find the five people who are most central to our work, and get their email addresses. Then write them a note that says: dear superstar professor, I'm a PhD student at the University of Maryland working with Ben Shneiderman, he sends his regards. We've just completed a paper that will be published; we have to submit the final version in two weeks. This work builds on what you've done, and we have two questions for you. First, have we been fair in describing your contribution in this work? And second, have you written anything more recently that we should be citing in this paper? And I will bet you, you get an 80% response within 48 hours.
Even from the superstar professor at Oxford, Tsinghua, or Harvard, or wherever, you'll get an answer, because that kind of question, that kind of approach, aligns with their goals: to see how their work has had impact, and to make it have even greater impact. But when I first did this, my students got frightened, because as they looked at the paper, they realized what they had written, as many young researchers do. They had said, you know, the work of superstar professor failed to do so-and-so; we have corrected this omission in this paper. And they suddenly realized they couldn't send that paper to superstar professor. But the fix is easy; I think you may already get the idea: the pioneering work of superstar professor can be fruitfully extended by the work in this paper. Same message, but the right tone. I remind my students: you are part of a social community. You're involved with the leaders of the field. You should be writing to the leaders of the work you're building on. Let them know what you've done. Be part of the community. In fact, that's what Julia Lane of the National Science Foundation describes as the way you measure the success of a research project: by the size of the network it creates. The number of papers you publish, the number of places you present your work, the number of places your students go to work, okay? That model was also important in my thinking. Okay, so there are other particular pieces of advice to students and administrators, but the book has two chapters at the end about getting to work and how to make this kind of change, to achieve the twin win of solutions for real problems and theories that are generalizable. So there are these two goals, applied and basic, and they're different, but by taking on both at the same time, I think you get stronger. So, as I said at the front, I'm here to ask for your help, or to encourage you to get to work, and that's what the book's about.
And here are the things that have now emerged as the four actionable items. We will be holding a workshop with the Association of Public and Land-grant Universities on January 26th, here in Washington; about 20 universities have begun to take steps in this direction. It's small, by invitation only, but please send me a note if you're interested. These are the things we want to discuss, to understand how different universities have addressed these issues, so we can take up the best practices and avoid the problems. First, revising the hiring, tenure, and promotion policies to value collaboration, both within the campus and beyond it. Second, raising the frequency of research partnerships with government, industry, and NGOs; I've now coined the term GINs for these. The GINs are the groups you want to work with, and it's not a matter of getting a problem from them but of staying with them, working with them, and then the problems of technology transfer and translation are dramatically reduced, okay? Third, broadening research impact assessments. There are lots of good models here. Maybe you know the Becker Model from Washington University in St. Louis; three librarians there wrote a wonderful 16-page document on five categories of research impact, and it details the whole list, from producing datasets and patents, to influencing policy, changing textbooks, influencing education, public impact, et cetera, very nicely done. Those are places where librarians and those who track scholarship could help us understand better, to make more effective research impact assessments for the work that gets done. And fourth, reforming teaching to include service-oriented team research projects, with teams of students working outside the campus to produce benefits.
There are many models. Northeastern University famously has a co-op program. The University of Maryland has a wonderful group called Gemstone, with large teams of 14 or 15 students working together for four years on a single problem, so they can take courses and guide their whole education toward its solution. Other places, like Olin College, have new models of doing this; one after another, the Stanford d.school, Northwestern's Design for America effort, people have found ways to do some of the things I'm describing here. So that's where the story goes. It's really now up to you, and my message gives you a set of things to remember, the ABCs: applied and basic combined, that's the first part; then the subtitle of the book, achieving breakthrough collaborations, the way to make that happen; then analysis blended with creativity, analyzing problems in fresh ways; then actively build connections, because your network is the way your work gets done and then gets promoted outward; and then always ask bigger questions. Thank you for your time and attention; I'm glad to answer questions. That is such a wonderful message. I know some have to go, and so on, but I'm glad to take a question or two, brave souls, challenges sometimes. Here we go, thank you. I'm the director of the library at UC Davis, and I also run the data science initiative there, and I'm a huge fan of both your work and the science of team science, and this idea of collaboration around societal problems. The problem I have is that every team wants one of mine: they want a data scientist or a librarian to participate, and that would be great, but there's a finite number of them. So how do we think about scaling up our ability to participate in these teams, in these really niche areas that apply to every research problem? I'm delighted you're having that kind of problem.
It shows a progressive atmosphere, and your own attitude, I think, probably helps bring about a lot of those requests. So that is one of the problems of a successful movement. We are seeing the development of data science education programs at most universities around the country. Some places are more advanced than others, but it's a remarkably fast trend, and there are good and bad things about data science; I'm a great fan of it. Of course, I'd like you to do visualization, and Kwan-Liu Ma is my friend over there. So there are ways, I think, of getting more. Let's see, I don't think there's an instant way of solving that; it's going to take time to increase the education. I think also these groups can develop those skills on their own, and so you want to integrate that. I don't want them to say, well, you're the visualization person, I'm the statistics person. No, I want them to be together, and I want the statistics person to become a visualization person and the visualization person to become a statistics person. These blended approaches are very strong. I don't have an immediate answer, but I would say NSF is recognizing this in its research education efforts, and the big data initiative with the regional hubs is designed exactly to promote the development of local skills in these areas. So you raise a good question. I'm actually pretty happy to hear about that kind of problem. The kinds of pushback I get are from people who say, I'm doing that already, we're just fine, and some of that's true, there are places doing it; and from others who say, we don't want to do that, we want our protected basic research, and we don't want to work with others. I think those are other forms of challenges. The problem you raise is a great one to have; I hope more universities will face it. Others? Yes, thank you. Though I totally agree with this idea of applied and basic research combined with engineering and design, this is not a new idea.
I mean, ancient Greece, ancient Rome: Archimedes' principles taken into a whole infrastructure. Or we go back to Bell Labs, or Interval, or PARC, things like that. More recently, I just look at my own history in academia, when I went to a professional school. I went to a library science program 40 years ago where our faculty was creating something that spun off, with Lockheed, and became Dialog, the indexing and abstracting service. Twenty years ago, as a faculty member in one of the first iSchools, the whole idea was this kind of thing: take basic research, become more applied, deal with real-life problems, work in engineering, and work in collaborative groups. So this is not new in academia. I hope I've been fair to the historical sources, to Pasteur and other examples; there are many more in the book. So yes, certain people have been doing it, but wherever I go, I also hear: we need your book, because I have to show this to my community to change the decisions they make. And the biases that many senior faculty still have stem from Vannevar Bush's model; by the way, the funding models within DARPA and the defense agencies, 6.1, 6.2, 6.3, are instituted exactly that way, and NSF is only slowly shifting. So yes, there are individuals and organizations that have done this; the book has 14 case studies of organizations and individuals that address these issues. But I still think we need fundamental change. Well, to look specifically at a pragmatic way of dealing with that: one of the ways, I think, is to have what are considered the more theoretical departments in our universities look more to the professional schools.
I know, just from my own experience on university-wide tenure and promotion committees, I have been the only one at the beginning to argue for the promotion of someone who was highly interdisciplinary, who was doing pragmatic things, because everyone else thought that was a bad idea. But I think a lot of it is just getting the people from the professional schools to be more involved in the academic life of the rest of the university. There are a number of things. I would say those who work on practical things have an obligation to write and report and translate to an academic environment as well. I'm critical of designers or others who say, my design speaks for itself. You know, I want you to tell me about it, and put it in the context of what the sources were and where it's going. So I do want people who focus on practitioner partnerships to also be academics and publish the results in ways that broaden and generalize. Thank you. I'll be glad to continue up front. Thank you for staying longer; I appreciate it. I hope you put these ideas to work. Let me know. Thank you. Thank you so much, both Ben, for a really memorable send-off, and Bob, thank you so much for coming and updating us on those really important developments. I mean, this was a very key window in time, I think, to remind us of those things that need to be on our agenda. I'm sorry to keep you late, but we had too many goodies to pass up. I wish you all safe travels home, good holidays, a wonderful 2017, and I look forward to seeing you soon. Thank you for joining us.