Carl Grant, I'm the interim dean at the University of Oklahoma Libraries. Thank you for reminding me. Okay, let me get going on the agenda here. So basically what I want to cover with you in my 30 minutes is what is voice search, make sure we're all on the same page there. Should libraries be embracing voice search? Is this something you should be looking at or not? And I'll show you some data and information about that. I'll show you a little bit about what we're doing with voice search at OU Libraries and just wrap it up real quick after that. So this will be fairly quick. So let's just start with what is voice search? Let me find a place to throw my slides there. Okay, what is voice search? So basically I think we're all pretty much familiar with this now. If you've got a cell phone these days, you're likely using it. You may have Alexa at home or Google Home. You've got these devices floating around and you're using them more and more frequently. So basically you say something and they kick off and are ready to execute a command. Maybe you say "OK Google" or something like that. And it takes your command, goes and executes it, and brings you back some results. So it's fairly straightforward what this is, and most people are now using it. What voice search will do is bring up the question of whether this is a good value for libraries to be investing in. That, I think, is one of the very first questions you're going to get asked. If you want to invest in this, should you be? And the kinds of questions you'll get asked are: should libraries embrace voice search? Why are we investing in this technology when we already have perfectly functional discovery systems? Isn't this a luxury and not a core service? So people are going to ask about this when you say we're investing money in this. And obviously, new technology comes and it goes. We've seen, for instance, that Google Glass was a hot trend that never took off.
And so there's new technology that emerges, but in the classic book Crossing the Chasm, some things don't make it across the chasm. They fall into the chasm. And so we have to look and think about, okay, how do we decide whether or not we should invest in this? And I think it's a natural thing that voice search would bring up questions about perceived value. Most new technology does. When OU Libraries is looking at this, we're looking at it from these angles. Evaluating library innovation: Does it meet information-seeking needs? Is it going to fill a need that's out there? Is the technology affordable and ubiquitous? That is key to us being able to say it's going across the chasm, that it's going to make it to the other side and be a permanent addition to our scene. And the last question we need to be asking, and you heard Cliff mention this today in his opening, is does it uphold our values? Particularly in the area of privacy. These are questions that we seriously owe it to ourselves to think about as we work with this technology. So I want to show you some data, starting off, to give you some indication of why we think you may want to be investing in this technology. And I think this one's a pretty interesting chart, because it's saying look at the body of users that are out there. And when you look at the people that are making decisions right now, the Millennials, the Generation X, the Baby Boomers, they are not as inclined to use this technology. That is a group that is a little more skeptical about it, and they want to think about it more. But when you look at the younger people, you'll find out that these folks are actually very much embracing voice search technology and are using it in big ways. They have really different conceptions of how to get to information, how to use it, how to access it. There's some interesting information in this report, and this was two years ago, so these kids are now two to ten years old. But look at these numbers.
98% live in a home with a mobile device, the same percentage as with a TV. 95% of families have a smartphone now. This is ubiquitous technology. 42% of children now have their own tablet device. So more and more, this technology is out there. So I think it's pretty clear on information-seeking needs: it's likely they're going to be using these tools. In fact, if we look at the next set of statistics, what we found out is they use it to talk and play. 14% are already doing that right now. 10% use it right now to get information, and 7% use it to search for videos. If you look at the next stat, 5% are searching for jokes. They want humor from these things. 3% are checking the weather and 1% use it to make calls. So it's being used. It's out there and it's getting used today. Now this report is a very interesting one, and I thought what it talked about was particularly interesting for our needs, because it's pointing out that reading and writing isn't really a prerequisite for them to use these devices. So that isn't a gating factor anymore. This age group can use it as they are right now. And I think you really have to let that stat sink in a little bit. You don't have to be able to read and write to search for information anymore, right? That's a pretty powerful statement. They go on to point out in here that young children are very curious and prolific information seekers. Anybody that's had a child knows this, right? Why? Why? Why? We've all been through that routine. And what's interesting is that these kids are willing to work with these devices to get that information answered for them. And they understand when they have a gap in their understanding. So that's really important. They understand when they don't understand something. And the virtual assistants are really good at filling this gap. They do that quite well.
I think what's equally important is that when they are doing that, they're accessing a lot of content, right? There are text-based results out there, but there's also explosive growth of online video. We all know this. If you watch YouTube and are dealing with YouTube, you know there are massive, massive amounts of content out there. And so kids want to access that. And I know when I talk to our campus IT people, they tell us YouTube is one of the most frequently accessed information resources across our entire campus. And they're using these devices to get to it. So we're seeing that explosive growth in content and we're seeing them use it. And because they can do this with natural language, right? They can just easily search, and they're doing more and more of this kind of searching, looking particularly for videos to answer their information needs. This chart I thought was interesting. It had some very interesting stats in terms of how this technology is being used and where we're headed in the future. By 2020, and we're almost there, 50% of all searches online will be voice searches. That's a pretty powerful stat in and of itself. Two in five people say voice search is essential to their lives today. They're using it. They believe it's going to continue to be important. 19% of people use Siri every single day. Think about these stats. 55% of teens use voice search daily right now. So how ready are we for voice search? Well, I think a new study shows that about 4% of businesses have a voice search capability right now today. And the question, of course, we need to ask is: what about libraries? Are we ready for this? Now, another question about how ubiquitous this is. I think it's important to look at how many hours our kids are spending in front of their screens. And we can look at this by average daily screen use, by teens and tweens.
Tweens are those that aren't quite teens yet, but will be soon. And we're seeing, you know, tweens are already at four hours, almost five hours a day. You get into your teens, and you're at almost seven and a half hours a day in front of your screen. That's a lot. And of course, if you're just semi-observational and walk around even here, how many people are walking around with their phone in their face, right? It's everywhere. And so these screens are part of life now. We've got to be dealing with that little reality as we move forward. All right, next up. Let's look at the time spent watching online videos. And again, looking at these groups, you see some interesting numbers here. A lot of time is being spent looking at online videos. Again, you've got to look at these numbers. We have teens and tweens again. The teens are the top number, about 60 minutes. And it's been growing from about 35 minutes, so the growth curve is pretty steep. Looking at tweens, they were at 25 minutes back in 2015. They're up to almost an hour in the current environment. So a lot, a lot, a lot. And then we see that, for tweens, YouTube is the dominant platform. That's where they're getting the majority of their information. So a lot of information going through YouTube. Now, let's talk about the question of education and how these personal devices, personal assistants, play into this environment, particularly tutoring devices. What we're already seeing is that when students use personal tutoring devices, they have much higher task outcomes and a higher degree of collaboration quality. That's what the research shows. That's what happens when they can use those devices to get to information. So that's an interesting result. We can also look at how they feel about that. And I think one of the interesting things about personal assistants like this is that students are far more comfortable with them.
They aren't interacting with a teacher, who may apply pressure on them in different ways. They're not being judged. They can work at their own pace, and therefore they build more confidence. There's a nice quote at the bottom of the slide from somebody who was using Alexa, about how they felt using this technology in comparison to talking to a teacher. And so I think we're seeing some interesting results with this technology and its use. If we move on and we talk about business, what are they doing today? Well, I imagine most of you have already had experiences with it. You traveled to get here, obviously. You've probably dealt with some voice assistant on the airlines. A lot of the hotels are starting to use these now. And so we've seen Amazon fire up a partnership with Marriott. It helps assist with the checkout procedures and helps manage the room. These are getting very, very high user satisfaction rates when they're doing this kind of work with these devices. We're also seeing the healthcare sector use this a lot more. Again, if you've had any interaction with healthcare providers, you've undoubtedly been dealing with these. Sometimes you don't even know you're dealing with them. I'll get to that in a moment. But there's a lot more going on in the healthcare sector with this. A lot of basic routine transactions, and even some more complex transactions, are now being done using these personal assistants. All right. Let's talk about business in the future a little bit. Where are they headed down the road? I think these are some other interesting stats. They're saying that by 2021, two years out, 25% of digital workers will have some kind of virtual assistant that they're dealing with every day. So it's getting more common. By 2023, 25% of employee interactions with applications will be via one of these devices. So it's getting quite prevalent out there.
And I think some of the examples include, of course, things like Alexa for Business, which is helping employees with tasks right now, such as scheduling meetings and logistics. So we know Amazon's eating their own dog food here; they're using it in managing their own company. Google Duplex is another interesting piece of technology that we're seeing get used. And again, it's allowing people to carry on very natural conversations. I don't know if you've heard the clip that I'll play you in just a moment here, but the technology is getting quite advanced in its ability to interact with you and sound, without any question, like a real person. We all know what the robotic sound of a device can be, but it's improving pretty rapidly here. And so we're getting to this place where it's being designed to sound totally natural and totally workable. So let me play you, if I can get this all to work here, a short sound clip from Google Duplex. That is a totally automated device you're talking to that's doing that. That's pretty impressive no matter how you look at it. So the other question, of course, we've got is: does this technology uphold our values? That's a critically important question, and we've got to come up with good answers for it. And right now there are plenty of causes for concern. What I think we've seen is that we're finally building enough momentum out there in expressing concerns that at least the vendors are starting to listen, and they're starting to design in privacy parameters. But of course, the way it ships out the door, those parameters aren't set to protect you as users. So you've got to go in when you start working with one of these devices and set the parameters. And if you are setting it out there for the public to use, you need to be sure you're educating them on how to set the parameters on their devices to protect themselves.
Otherwise these things can be listening virtually all the time, or they can be capturing information about you that you don't want captured. It can be recorded. So there are concerns here. A lot of it you can now turn off in your settings. Through all of these news stories that have come out recently, there have been ways publicized that you can go in and change your parameters. If you're going to use one of these devices, do that. Protect yourself. And then show your users how to protect themselves. There are lots of articles out there like these. These are ones that I just grabbed and clipped and put up here, and they've all been within the last six months. So it's really starting to change. But we have to keep the pressure up, and we have to educate our communities before we allow them to use this technology. All right, one of the questions we started with, of course, was should we be embracing this technology? And I think the answer is yes, we have to embrace this technology. Generation Z and Gen Alpha have radically different information-seeking behavior. That's just the way it is. And so we've got to be getting ready for them. They're coming at us soon. Some are already in our colleges. Some of our youngest students are already doing this stuff. And you can, again, walk across your campus and just observe how many of them are asking questions of their phone. It's happening. But the question is, can they get to your library? Can they ask a question of your library? And if they can't, well, I don't think that's good. I think we have to be everywhere these things exist. So I think it's important that we're out there. In five to ten years, I think these things are going to be in all sectors of society. It's going to be everywhere, and we're going to need to be working with it. It's the same kind of step-change in information, really, that we saw with the computer versus the card catalog, right?
It's going to be a big, big shift in what's happening here. And that means you want to be getting ready for it. I think we need to really pay attention to doing ethical development of this software. And this can be a challenge if you're working with a third-party vendor. That's another concern we have to have: making sure that the vendor buys into our values and/or gives us the parameters to support our values in working with this technology. If you're building it in-house, then it's a little easier to do this and support it in the future. All right, so what are we doing with it at OU Libraries? Well, we believe we should be using this technology, and so we are. Now, the way we did it is a little different, I think, because we started with a chatbot that is driven by artificial intelligence. And if you were here last year and came to one of my sessions, you probably heard me talking about this. So we built something that's called Bizzy, and it's a chatbot that runs when librarians aren't available in the library. So usually it's after hours. We are open until 2 in the morning. So between 2 and 7 in the morning, we're running this chatbot. And it will take a whole range of questions and provide you answers, and we've used some parameters to do this with. But we have a whole bunch of Amazon Echo Dots in dorm rooms, and that's where we're headed. That's why we're taking what we've done with an AI chatbot and making sure that we can put voice search capabilities on top of it. So all those Echo Dots that they put in the dorm rooms when they built the new dorm can be used to search the library. I don't want the library not to be on the list of things that they can access and search. So here's a basic chart that shows you what we've been doing with our chatbot. Basically we're working through API layers.
And our chatbot primarily is talking to LibGuides and it's talking to Primo, both of those through standard API calls that are supported by the vendors. So we've got our software and third-party software sitting there running on top of that. And we've built the infrastructure so that we can translate questions into API calls, get the results back, and hand them to the user in natural-language form. That's what we're doing. Now, you may wonder about the kind of usage we're seeing of that between those hours. Okay, that slide says 7 p.m. It's actually 2 a.m. to 7 a.m. Anyway, about 3,700 questions have been asked since we opened this thing up in July of 2019. For accuracy, we ask the user when they get an answer: do you feel like that was an accurate answer? And they've ranked it at about 88% accuracy. Now, we take all of those queries and we review them to make sure that the query is handled accurately. If not, we tweak it. We don't keep those queries, we get rid of them, but we review them before we do to make sure that the system is working. This is all in a sandbox, and we make sure they know that. It's interesting that there have been about 1,300 different topics of conversation in the chatbot so far, and out of that, about 1,500 links have been clicked from within the chatbot. 9 a.m. seems to be the most popular time to use this thing, and Tuesday seems to be the most popular day. Probably because they all come in on Monday, get loaded down with homework assignments, and go home and do their thing. Now, one of the other things we've done with all of the questions that we take in and process with our chatbot is we've used the READ Scale, which you can find out on the web, there's a link. It ranks the complexity of reference questions from 1 to 6, and we've limited the chatbot to currently only answering questions at level 1 and level 2. And why did we do that?
Well, one of the reasons we did that is because, obviously, we want to be able to look at what it's doing; we can't deal with a whole range of questions. We also wanted to build confidence with our staff, so that they were comfortable working with this tool and could see the value it was providing. But the other thing is that we want to be very careful, particularly when you're working with a third-party vendor and your data is likely going in and going to be the basis of a system that will be sold to others. You want to make sure you're not putting in biased data, and we see that as a real ethical issue for librarians: if we're going to do that, we have got to spend time reviewing that data and the answers that were provided, to make sure those responses are not biased. And so we're building a whole infrastructure inside the library to review every question. We're working with our diversity team at the university to assemble a panel to review answers to make sure that all the various parties are considered. And of course, in Oklahoma we have 39 Native American tribes that we've got to deal with, plus students from 122 other countries, so there's a lot to be considered here.
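The level-1/2 gate just described can be sketched in a few lines of Python. This is a minimal illustration, not OU Libraries' actual code: the keyword heuristic and every function name below (`classify_read_level`, `route_question`) are hypothetical stand-ins for whatever real classifier the review process would inform.

```python
# Sketch of gating chatbot questions by READ Scale level.
# READ Scale: 1 = directional/simple lookup ... 6 = in-depth research help.

MAX_LEVEL_HANDLED = 2  # per the talk, the bot only answers levels 1 and 2

def classify_read_level(question: str) -> int:
    """Toy stand-in for a real classifier (keyword heuristic only)."""
    q = question.lower()
    if any(w in q for w in ("hours", "open", "where is", "location")):
        return 1                     # directional / known-item question
    if any(w in q for w in ("find a book", "renew", "checkout")):
        return 2                     # simple lookup or transaction
    return 3                         # anything else: assume too complex

def route_question(question: str) -> str:
    """Answer simple questions; refer complex ones to a librarian."""
    level = classify_read_level(question)
    if level <= MAX_LEVEL_HANDLED:
        return f"BOT_ANSWER (READ level {level}): {question}"
    return "REFER_TO_LIBRARIAN: question exceeds READ level 2"

if __name__ == "__main__":
    print(route_question("What are the library hours today?"))
    print(route_question("Help me design a literature review methodology"))
```

The point of the gate is the fallback branch: anything the classifier can't confidently place at level 1 or 2 is handed off to a human rather than answered, which is also what keeps unreviewed, potentially biased answers out of the training data.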
As we're doing that, we're trying to note the specific steps we take to move data from being potentially biased to unbiased, and then we'll go back and try to automate those steps, so that as we move forward we can continue to feed information in and get it cleaned up before we move it into a long-term set of answers. So it's complicated, but we're holding it back to only levels one and two until we get more comfortable that everything in here is unbiased. So now we're introducing voice search. It goes right into the chart; you can see where the circle is. It goes in right where we currently have the chatbot. Same thing, it's just a different method of input. And we're writing all this software ourselves, so our programmer Tim has designed this and built it and has got a basic interface going, and we have shown it to students. They gave me instructions so I could demo it here, but I'll let these folks demo their product instead of doing mine, since I'm going to be short on time, and you'll see how this works. But I think it's interesting: we've demoed it for the students at our university and they liked it. In fact, they liked it so much they asked that we start working on more capabilities to offer via voice search. You can see the list; it's quite extensive, a lot of things they want us to be able to do for them as we move forward. Okay, I'm going to turn this over to Iowa State and let them talk about what they're doing. Hello everybody, I'm Greg Davis from Iowa State University. And I'm Lisa Smith from Iowa State University. So we're going to tell you a little bit about a project we've been working on called Parks Libro. My part of the presentation is I get to do the basic PowerPoint slide deck, and then Lisa gets to do the live demo, here in a hotel room we've never been in before, with beta software. So you can decide who got the better deal here, but we've got our fingers crossed and it's going to go well. So here we go. So, Carl
already talked a lot about this. Here's what we're doing in today's presentation. Carl already talked about the things happening around us out there in the world with smart assistants, and reasons why we might want to be thinking about doing this in our academic libraries. In my slide deck I've got four or five slides' worth of these industry statistics; those slides are up on the site for the CNI meeting, so if you want the stats, you can find them there. But I'm not going to take a lot of time today to talk about the stats. Just like Carl indicated, there are lots of motivating factors out there that I think would make academic libraries want to think about smart assistants and how they might play into our world. So for us at Iowa State, it was probably a couple of years ago when we started to look at these industry statistics, and we decided, well, it's probably going to happen; it's a question of when, not if. And so we wanted to start thinking about how we were going to approach that, and we tried to think about it from a strategic planning point of view: how can we be strategic about this? A couple of years ago, when we got started with the smart assistant project, you were sitting there thinking, sooner or later the vendors are going to start putting this stuff into the product. It might be bundled in out of the box, or it might be something you have to pay extra for, but sooner or later it's going to happen; it just has to. So why would we want to go through the pain and suffering of developing something ourselves, when who knows how long it will be before it just comes out of the box and we can throw our other stuff away? We thought about that, and we wanted to try to do some things to experiment with this, and we decided, yeah, let's take a shot at it. At least we'll learn what the technology is about and how it works, and just gain a better understanding of the technology
even if vendors come along eventually and replace what we did with something better. That's a good thing; at least we'll be a step ahead and know more about what's happening with this type of technology. And in fact, here's what has recently happened: we are an Ex Libris customer, we use Alma and Primo, and I think in the next quarterly release of Primo, which I think is this month, they're going to add a microphone to the Primo search box. So you can click on the microphone and voice your search request instead of typing it, and it will do the translation and run your Primo search for you. That's not necessarily a true voice assistant, but it's a step in that direction, and so you can see that some of these technologies are starting to make their way into the off-the-shelf products. So what we did at Iowa State, instead of creating a team of developers like Carl did at Oklahoma, was to look outside the university and see whether there was a development partner out there that we could work with to do our experiment. And it turned out that, well, there's probably more than one, but we found at least one out there. It was a company called ConverSight, and they had a product in the public library sector called Libro. So we talked to them and said, hey, would you be interested in working with us to adapt your Libro product to work in academic libraries? And they were willing to work with us, so we signed the contract, and for a couple of years now we've been working with this company to develop the Libro product. Each version of Libro is specific to the library it's working in. At Iowa State University it's the Parks Library, so we branded our Libro product as Parks Libro, and you'll see in the demo that we'll be asking Parks Libro questions; that's the version of Libro that's specific to the Iowa State University library. We've gone through a couple of phases. Phase one of our project was really focused on Amazon Alexa, and we had concerns
right from the start about sharing personal information, and concerns about what Amazon was going to do with this information when we captured it in their cloud, and all those things. So we sort of drew a line and said our first version of the app is not going to use any personal information. Which is a good thing, but it also limits what the app can do: it can't tell you what books you have checked out, because it doesn't know who the heck you are. About the best it can do is a search that tells you what books the library has, but it can't check one out to you, because, again, it doesn't know who you are. So we're not sharing personal information, and that's a good thing in terms of not exposing PII, but it also limited what the app could do. It was a really good starting point. Again, it was a skill that we developed in the Amazon store, and it was out there so that anyone could enable the skill in Alexa and use it to search the Iowa State catalog. And like Carl said, we're using APIs: the Libro app is using the Primo APIs to do the Primo searches. We also connected it to our Springshare LibCal product, so you can use the Springshare APIs to search for events and to look up library hours. Those were the kinds of searches we set up in the first phase of our project; again, none of it required personal information. Then there's Phase 2, which is where we would currently place ourselves. In Phase 2 we advanced the product, trying to find ways that it could actually do more, even though that required the ability for people to log in and share who they were as they came into the smart assistant. And the way we decided to handle that is that we had our developer create a mobile app wrapper that sort of wraps around Amazon Alexa. So we're still using Amazon Alexa, but our mobile app is the interface: you enter your query into our mobile app, and then the mobile app captures that and creates a
package that it sends to Amazon Alexa. Alexa does the voice-to-language translation and sends the request back to the mobile app, and then the mobile app does the API calls and returns the information that the user is looking for. What we're doing with that is keeping the personal information one step removed from Amazon. The other thing is that it still requires the user to set up an Amazon Alexa account, so we think users retain some ability to control whether they want to opt into this or not. It's not as good as we'd like; we'd like there to be a big red button saying here are all the things you need to be aware of before you opt in. But at least it's a start in terms of giving users some ability to decide if they really want to participate. In the current version of the software they have to sign up for an Alexa account, and that's the step to get started with the project. And what it added to the mix is that now you can say, what items do I have checked out? You can say, I want to check out this item. You can check what fines and fees you might have. There are just more things you can ask the app, because you've told it who you are and it can do more things on your behalf, hopefully in a somewhat more secure way than just going directly to Amazon Alexa. Okay, and we're looking at phase three of our project, which should come in spring 2020. We'll still be using the mobile app interface, but our developer is going to provide an option that doesn't rely on the Amazon Alexa voice engine. They'll have another way to capture the voice, convert it to text, and run the queries for us, without necessarily having to use Amazon Alexa. We'll see how that works out. I imagine we'll have different versions of the software, so you can choose the Alexa version or the non-Alexa version; there will be different flavors of
our apps that users can opt into based on their knowledge and comfort level and what they want to do with the application. The other thing that will happen in the new version is that we'll actually have a chat interface with our product as well. So you won't be limited to doing voice queries; you can use it in chat mode, with a chat box, and type your queries into the tool. At first I thought, that's very nice, but who's going to use it? And then our library hired a deaf employee who can't really use the voice recognition, and she's really excited about our ability now to add chat into our smart assistant and have that kind of interface with it in addition to voice. Plus you get all the other accessibility features that come along for the ride, in terms of language translations and lots of other things from an accessibility point of view that we really like about our smart assistant work. And again, you might look at this and say, well, Greg, this is all just going to come down the road; why are you doing this? Why don't you just wait a few years and let that naturally happen? But there's one thing that we noticed: the research company Info-Tech actually did some work, and they claim that your users perceive you better if you are seen as being innovative. If you are a library IT group and you are perceived as being innovative, then the users you support will see you in a better light and be more supportive of the work you're doing. And we see our work with things like smart assistants as being in the innovative realm, so we think we get some credit and brownie points for doing that kind of work, even though this might not be a product that lives for a long, long time. It's a good starting point, and I think it puts us in a good position in terms of how we're perceived in the academic library world. And the last slide I had was again along those lines. You can look
at this and ask: what's the risk and reward? What are the risks of developing your own smart assistant? If you take the risk and you're successful, that's good news; you've got some good things going on. Even if you take the risk and you're not successful, you can still get some benefit: you made a good try, you learn from the mistakes you may have made, and you build on that experience. And if you don't act, then you're really no better off than you were before. If you don't take the risk, you don't have to worry about the risk, but you also maybe didn't move yourself forward as much as you could have if you had given it a shot.

So with that, we're going to attempt the live demo. Unlike Amazon, where it's always listening, I have to activate the microphone, so I'm going to press the button here.

"Open Parks Libro."

"Hello Greg, welcome to Parks Libro. You can now explore the library. Here is your today's update: you have one item on hold, and you have checked out a total of five items. You can ask questions like: can you search a book, can you cancel an item, or list all my items on hold. Go through your user guide to learn more. How may I assist you?"

"Ask to list all books on hold."

"You have one item on hold: Air War. This item is on the hold shelf. Do you want to cancel the hold?"

"No."

"What else can I do for you?"

"Ask to list the..." Let's stop it.

"I am not trained on this request."

I know. "Ask to list the books ready for pickup."

"I don't see any available items for pickup right now. Please try again later. What can I do for you?"

"Ask to list all books checked out."

"You have checked out a total of five items. Option one: Managing with Data: Using ACRLMetrics and PLAMetrics, Peter Hernon, Robert E. Dugan, Joseph R.
Matthews. The due date is May 8th, 2020. Option two: Strategy Maps: Converting Intangible Assets into Tangible Outcomes..."

There are five of them; it's a long list.

"You have already authenticated. Please go ahead with your questions."

"Ask for the events for today."

"No events available right now. What can I do for you?"

I'm going to reopen Parks Libro, because sometimes it shuts itself off, and we're going to try to reactivate it here.

"Parks Libro."

"Hello Greg, welcome to Parks Libro. You can now explore the library. Here is your today's update: you have one item on hold."

"Ask for events for today."

"No events available right now. What can I do for you?"

"Ask for events next week."

"Here is the event I found: Library Staff Holiday Luncheon on December 19th, 2019. Starts at 12 p.m. and ends at 1 p.m. No registration required. Event is organized by Greg Davis. What can I do for you?"

So now I've figured out why it didn't list the event: it's already in progress. That first time I asked for events, it was this event, and it's already going on, so I'm assuming that's why it didn't say anything. Bad planning.

"Open Parks Libro."

"Hello Greg, welcome."

"Ask if the Parks Library is open December 20th."

"Parks Library is available on December 20th, 2019 from 7:30 a.m. until 7 p.m. What can I do for you?"

"Ask if the Media Center is open today."

"The Media Center is available today until 12 a.m. What can I do for you?"

"Ask if the Parks Library is open today."

"Parks Library is closed on December 9th, 2019, but will be open on December 19th, 2019 from 7:30 p.m. until 2 a.m.
What can I do for you?"

Okay, so Greg and I figured out why that is. In our LibCal we are actually open 24 hours for the last two weeks that classes are in session, so until next week the library does not close; therefore it's never "open" today.

"Open Parks Libro."

"Hello Greg, welcome to Parks Libro."

"Ask to provide my current fee status."

"I am not trained on this request. Can you ask differently? What else can I do for you?"

I think that concludes it. Thank you.

Okay, so not too bad on the live demo, huh? Give it up for Lisa. All right, so that's the presentation we had, to show you our Parks Libro application. I guess I would conclude by saying that there are some other academic libraries out there that are also working with the company we're working with to develop the Libro tool, further than we did when we started with it. So if any of you happen to be interested in the Libro tool and might want to get in touch with our developer to see if there's any interest in contributing to the code base, I'd invite you to do that. If you need some contact information, just let Lisa or me know and we'll be glad to get you in touch with the developer, and you can see what they can do for you. So with that, I think we're ready to see if you have any questions.
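The hours confusion in the demo — a 24-hour finals schedule in LibCal meant the library never "opens" on a given day, so a naive "is it open today?" lookup finds nothing — is a classic time-window edge case. Here is a minimal sketch, under the assumption (mine, not the Libro developers') that hours are modeled as an open/close time pair. The `is_open` helper is hypothetical; the point is that same-day windows, overnight windows, and 24-hour periods each need their own comparison.

```python
from datetime import datetime, time

def is_open(now: datetime, opens: time, closes: time) -> bool:
    """Return True if `now` falls inside the open window.

    Handles three cases a naive comparison misses:
    24-hour periods (opens == closes), same-day windows
    (e.g. 7:30 a.m. to 7 p.m.), and overnight windows that
    cross midnight (e.g. 7:30 p.m. to 2 a.m.).
    """
    if opens == closes:                    # convention: open 24 hours
        return True
    t = now.time()
    if opens < closes:                     # same-day window
        return opens <= t < closes
    return t >= opens or t < closes       # overnight window

# During the 24-hour finals stretch there is no "opening" event on the
# calendar day, which is why a lookup keyed on today's opening time
# reported the library as closed even though it never shut.
print(is_open(datetime(2019, 12, 9, 23, 0), time(0, 0), time(0, 0)))  # True
```

A lookup keyed only on "does today have an opening time?" fails exactly in the 24-hour case, which matches the behavior seen in the demo.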