around determining if you actually need one or not. There's been a small adjustment to the presentation between when this was accepted and now. We actually had one of our clients, Kmart out of Australia, give us permission to use the use case that we built for them as the foundation for the presentation. For those of you who aren't aware, Kmart is a very large bricks and mortar and online retailer. So this means that instead of just talking theory, there will be some examples given of what was built, why it was built, the process that was gone through, use case design, development, conversational design, tracking, learnings and next steps. So I'm hopeful that that's going to give you a little bit more insight than me just talking about what the market itself looks like.

Usually I give this presentation in partnership with my creative director and we try and get people to figure out which one of us each of these is. Obviously with him not here, it'll make it easier and I'll just tell you: I am the one who, obviously, as a child thought maths was cool. Running down the street with a maths t-shirt on, you can imagine I was a very popular child.

So just to give a bit of background around the industry as a whole and why it is that we actually even started doing this: we as an agency started out in '99 primarily doing SEO, and then we moved over into UX, design and development, so software and website development. Australia got smart speakers when Google launched in the middle of 2017, and it was actually presented in partnership with us the day that it was launched. Google and I were on stage. We were supposed to be doing a presentation on machine learning for retail, and that morning they told me they were launching Google Home into Australia and they were taking the presentation and doing that instead.
Little bit of a pain, because we'd spent a lot of time developing that presentation, but we got the chance to present it as it launched into Australia, which not many people get. So Australia is actually the second fastest when it comes to adoption of smart speakers, or more correctly digital assistants. We managed to get to around about 5% penetration inside of one year, so this was Q1 2018. We have just completed a second consumer adoption report for Australia, and now in the capital cities it looks like it's closer to 30% as an adoption rate. This is an important step, because these are people who have actually gone out and made a conscious decision to buy a smart speaker.

Now, when we talk about digital assistants and the medium of voice, I always like to touch upon the fact that this is a little bit of a misnomer. Digital assistants live inside of devices and can be accessed via voice as one of the mediums. However, they're also able to be accessed via text. Anybody who has an Android phone will have Google Assistant natively on the device. Apple phones, of course, have Siri, but you can also add the Google Assistant as an app. Samsung devices are now rolling out Bixby. Baidu has its own smart speaker system, there's AliGenie on the Alibaba side of things, and there's also Cortana out of Microsoft. So these are devices that are able to be accessed through speakers but are more correctly digital assistants. They're multimodal.

So when it comes to developing these, though, the thought process that you need to go through is: what are you actually going to be doing for your consumer or your user? And we tend to think of these things primarily as a utility. Most people were kind of conditioned by Amazon, with the launch of Alexa, to go and do a single task: turn on a light, change the station, reorder paper. These are utility-based tasks. However, the secret to these actually lives inside of the name. They're digital assistants.
So as you're building them, you want to build something that can follow the process of "tell me" or "help me" or "guide me" or "find me". What this means is that you're building something that can tell you a simple piece of information, it can help you achieve an outcome, it can guide you to do something, or it can find something for you if you talk to it. And as I mentioned earlier, it's a heavily multimodal system. The assistants themselves allow you to transition across devices as and when you want to do so and continue the process of your discovery. Volvo cars are now rolling off the production line with Google Assistant already enabled inside of them. There are numerous speakers and wearables that have got these inside of them already, Bose being the obvious one. In addition to that, there are televisions; LG have got assistants natively built into them as well.

So over time, we're going to see that consumers are going to expect both to be able to conduct a conversation with these devices and to expect that the way they converse, or the way they get their information presented back from those devices, is going to be different based upon what device it is. Examples of this are: if you are speaking to somebody or you're interacting via a speaker, the answers that you give need to be shorter. They need to be something that can be held onto and understood by a user. If you're giving them options, you only want to give one or two; otherwise the cognitive load becomes too much and the consumer will only remember the last option you've given to them. If it's on a screen device, you can give them three or four and allow them to scroll through it. If it's on a television, then obviously you can optimize it for video and things along these lines. And if it's in a car, then you need to take into account the fact that the user is driving. They're not going to be able to pay as much attention.
And also, usually somebody who's driving is looking for something that's going to be near them. So the answers that you give from an assistant perspective, and the conversational design, need to be reflective of this.

Now, not many of these will mean much to yourselves over here in Singapore, but these are some of the clients that we have built these actions for. Officeworks is a very large, I suppose the best way to describe them would be, stationery and electronics business. Dymocks is a bookseller. Kmart: bricks and mortar retail. Target: bricks and mortar retail. Suncorp: financial services, banking. AAMI and Real Insurance are both insurance companies. LaserSight is an eye surgery business, so the people that do surgery to remove cataracts and so forth. Finder is a comparison site. Woolmark is the peak body for the wool growers of Australia; bizarrely enough, they were actually very excited about the process of building this because it enabled people to understand the process that goes into textile design. The New South Wales Rural Fire Service one was designed around bushfires and the ability to actually have plans in association against them, and also for people to be able to identify fires near them and what their current status was. And JB Hi-Fi is an electronics retailer.

So, the gift finder. What was the process that we went through for actually developing one of these? Firstly, we sat down with Kmart. The basic premise behind developing one of these is: is the task that the consumer or the user is going to perform via this digital assistant something that they can't do via another existing method, or, if they can do it via that existing method, is it faster to do it via this form of interface? So we went through the process with them. We were given a brief. As with all clients, the brief started with inspire and delight.
I don't know where this came from, but I don't think I've ever been inspired or delighted by anything that I have received from a company, so this is always an exciting one for everyone involved. Experiment with a new platform on behalf of an engaged and proactive client. Inspire the home environment. Build a recommendation engine. Provide general information. Talk about local stores. Enable full searches of the shopping feed. Allow for faceted searching. Send transactional emails. And do something that is a simple one-task action. So obviously that's a lot. We basically said to them, this is probably a little bit too much, but surprisingly enough, in the end we actually managed to build almost all of those functions into it.

Now, being a company that comes from design and UX, obviously we have a very strong and robust approach to what we call voice user experience in this instance, but it's just a traditional user experience and design process. It goes through the four Ds: discover, define, design and deliver. And obviously this involves research. So you have the brief, you have your research and search history. One of the best and fastest ways to develop a conversational interface is to go back and look at search queries that have been performed upon a site, both external, i.e. somebody going through Google to find the site, and onsite, because what you'll tend to discover here is what people are after, what questions they are asking. And once you can discover the questions that they are asking, then you can begin to look at ways that you can answer those questions, which is what conversation is really all about. Then there's the summarization of the methodology, the development of a persona, customer journeys, interactions, looking at the APIs, developing a prototype, usability testing. So that's the more traditional approach. It ended up being something a little bit more like this.
We got the search data, we investigated the APIs to see what we could expose, we workshopped what people were actually looking for, and we spoke to customers and staff; obviously, people who are on the ground dealing with customers on a daily basis are some of the best to give you this information. And then we did the persona development and brand voice.

This particular side of things is probably a very important aspect to think about. For most organizations, except for potentially a brand ambassador or a voice artist or somebody who's been hired to be a part of, for example, a television commercial, most times the company has not got a voice. However, when building a digital assistant, especially one that's going to be spoken to via a conversational interface such as voice itself, you're actually developing for the first time what this company sounds like. And the sound and tone of this is very important. For the first time, as a consumer, you are actually assigning this voice and this persona to this brand. And that will be in place then for at least a couple of years; it's hard to turn that around once you've done it. And people tend to develop and associate certain traits based upon voice. People will infer age, they'll infer gender, they'll infer education, they'll infer social status. There are a number of different aspects that will come from this. So when you're developing the persona side of things, all of that needs to be taken into account, because that's the way that consumers are now going to perceive this company from that moment forwards. Then, of course, we moved over into the development of lists of core intents, prototypes, review of intents, and building the logic. So as I mentioned earlier, going through and developing the use cases was done in a workshop environment.
So we had to understand what customers were asking, what customer service teams were telling people, looking at the SEO side, and looking at the tone of voice and the persona that was actually going to be developed for this. As I mentioned earlier, looking for things that can be performed better or faster via this medium is the best approach. It's also time and context. So once again, as I mentioned, if somebody is in their car, their context is going to be different from somebody who is in the home. And it's different once again from somebody who is looking on a mobile device, because we'll know from their GPS where they're located. So when you're looking at these things, we need to say: okay, can this be performed better in this environment? Is it useful to the consumer to be done in this environment? And are they going to get enough information in the snapshot that we can give them to actually make decisions? Those are what you have to actually look at.

So when we eventually designed and developed for this one here, a gift recommendation guide, it came out of the fact that on a website, the way that you search doesn't really allow you to search for a goal-oriented outcome that isn't predefined by categorization, brand, attributes, SKU, et cetera. And a number of users were coming in with the challenge of: I have a birthday party coming up, it's a five year old, I have no children of my own, they're my nephew, what do I buy them? I have a birthday party coming up for somebody else. I have a housewarming. These are the kinds of situations where they're non-usual buys, and they're a buy that is not for yourself. And as a result of this, it's a lot harder to make an informed decision as to what you want to buy for another person. But websites inherently don't allow you to do this. You can't go into the search bar of a website and say, I need to buy a gift.
There's a two-way conversation that needs to happen to understand what you're looking for. Much the same as if you're going into a store to buy a television: you go into the store, the person comes up to you and they say, what's the size of your room? What do you actually do with this television? Are you a gamer? Do you like sport, do you watch movies, do you stream a lot? Has this room got natural light? How far away do you sit from it? This is the conversation that's had when you buy a television in store. When you go onto a website, it is: what size is the television? What's its definition? And what's the brand? This is the only information that it gives you. So the development of something that's going to allow for that conversation is probably a very good use case when it comes to voice.

Conversation design. Conversation design for these environments is not like a chatbot. A chatbot is a very linear process; it doesn't utilize natural language in the same way. This here is a full natural language processing system. So this means that the conversation can either be as simple as step by step, or as convoluted as, to give the example for a gift finder: I need a gift for a five-year-old active child for under $20. Now there are four entities inside of that. You have child, you have age, you have price, you have active. And all of this needs to be interpreted to be able to give an answer. A chatbot isn't capable of doing that, because a chatbot expects you to say yes, no, button, pick what the next step is going to be. So the conversation design really needs either to step somebody through the process, or to understand all of those concurrent pieces of information and give an answer that's going to be correct without then asking: who's this gift for? How old is this person? It's already been told. Don't make the mistake of re-asking for information it's already been given.
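To make the multi-entity point concrete, here is a minimal, illustrative sketch in plain Python and regular expressions (not Dialogflow itself; the function and slot names are my own) of pulling several entities out of that single gift-finder utterance:

```python
import re

def extract_slots(utterance: str) -> dict:
    """Pull multiple entities (age, price cap, audience, attribute)
    out of one free-form utterance."""
    slots = {}
    text = utterance.lower()
    # age: "five-year-old" or "5 year old"
    words = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
             "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
    m = re.search(r"(\d+|\w+)[- ]year[- ]old", text)
    if m:
        raw = m.group(1)
        slots["age"] = int(raw) if raw.isdigit() else words.get(raw)
    # price ceiling: "under $20"
    m = re.search(r"under \$?(\d+)", text)
    if m:
        slots["max_price"] = int(m.group(1))
    # audience and attributes
    if any(w in text for w in ("child", "kid", "boy", "girl")):
        slots["audience"] = "child"
    if "active" in text:
        slots["attribute"] = "active"
    return slots

print(extract_slots("I need a gift for a five-year-old active child for under $20"))
# → {'age': 5, 'max_price': 20, 'audience': 'child', 'attribute': 'active'}
```

A real NLU system does this statistically rather than with hand-written patterns, but the output is the same shape: one utterance resolved into several concurrent slots, so nothing already stated has to be asked again.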
So I'm going to give a small demonstration of what these actually look like, how we've built them, what they actually perform like. I don't know if the sound is going to work; if it doesn't, that's okay, I can walk through it anyway. Bear with me a second here. Let's see if this works.

What are the most popular gifts? We've selected the 10 hottest products right now. You can say more details, check availability or next product. I want a gift for a kid. Okay, we have the perfect gift for girls and boys of all ages. So once again, there we've seen kid equals child, so we've managed to give the answer. Show only under $25. Okay, here are your products. So now we've done filtering. You can say more details, check availability or next product. Find a gift for a kid under $25. So now we've got concurrent entities, both child and price. Okay, here are your products. As you can see, because it's a screen device, you're able to scroll through more than just two or three; if this was a voice system, we'd have only given two. Find a gift for my dad. Dad, we've equated to male and adult. Okay, we have a few ideas to inspire and delight him. And we've recognized the word "him". Help me find a gift. I can help you find the perfect gift. It only takes a few seconds. Would you like to do that? Yes. So now we're stepping through the process. Is the gift for a child or an adult? A child. Okay, how old are they? Nine years old. What's your budget? $30. Okay, we have the perfect gift for kids aged nine years. You can say more details, check availability or next product. And then we're going to begin to go into the actual utilization of location, which is probably one of the most important parts of this as well. Where is my closest store? To locate your closest store, I'll just need to get your current location from Google. Is that okay? Yes. As you can see, we're very excited. Your closest store is Kmart Broadway.
Anything else I can help with today? So now that we've set where the store is located, it will actually keep the context of this store, so from now on, any questions being asked will always be about that store. What time does it open on Tuesday? Kmart Broadway is open from 8 a.m. to 11.59 p.m. on Tuesday. Anything else I can help with today? What time are you open this Tuesday? Once again, a combination of terms. To locate your closest store, I'll just need to get your current location from Google. Is that okay? Sure. Kmart Broadway is open from 8 a.m. to 11.59 p.m. It's remembered the context of it being Tuesday. I want printer paper. All right, for that, you might like Officeworks. Wanna give it a try?

So I'm just going to pause on this one for a second. The fact is that unlike the other assistants that are out there, Google's one is designed around discovery. Alexa is designed around tasks; these tasks are either called out by name by a consumer or it's going to be a retail-based environment. But Google's one is actually around people being able to find things. This here is called an implicit invocation. You can build an assistant to actually be indexed by Google, much the same as websites are built to be indexed by Google from an SEO perspective, so that you can actually get surfaced and presented to customers that don't know who you are. The majority of digital assistants that are accessed through these are named by the brand: talk to Kmart, talk to Dymocks, talk to Uber, so on and so forth. In this instance here, all the user has said is "I want printer paper", and in return, Google has said that you might want to talk to Officeworks. So what we've actually begun to see by this is that through building these correctly and allowing for the interface to be in the same mode as the consumer is looking in, i.e.
we're not trying to push a static website into a voice or a digital assistant environment, Google is now recommending brands that have actually built products for it with the correct service built in. So now in Australia, anybody who's talking to an assistant and has just asked for printer paper is being recommended one company. It's a very powerful thought when you think about it, which is exactly why SEO became such a big aspect of everything when websites were first built. This is SEO for voice.

Yes. Sure, what size paper are you looking for? A4. So now you've been taken straight into the Officeworks assistant. The environment has changed: you're no longer outside in Google, you're now in a closed environment. And not only that, it's remembered the context that what you came through on was printer paper. So the first statement is not "welcome to Officeworks, what can I help you with today?"; the first statement actually takes you straight into the flow of continuing with your journey of wanting printer paper. Okay, I found J.Burrows 90 GSM Pro A4 copy paper 500 sheet ream for $5.49. Add to shopping list. Now we can move over into the transaction side of things. This product has been added to your shopping list. You have one item on your shopping list. Would you like this emailed to you now, or can I help you with anything else? So what this shows as well is that there's the ability not just to build these to surface product; there's the ability to then take this through to the next stage, which is transaction and completion, which means that you can develop an ROI around it, which is why organizations are beginning to look at investing in these.

So, what we learned. These things aren't simple to build. Conversation is actually really complex. With the example of Officeworks, we were finding that people were asking for "I want", "I need", "tell me about" and "buy".
So: I want printer paper, I need printer paper, tell me about printer paper, buy printer paper. All of these are very different goals that the user is after, but they're a very similar statement, and the constant phrase in there is printer paper. Because initially it was getting a little bit confused about which printer paper outcome the user was looking for, and it was having a little bit of a challenge when we were training the conversation side, we thought we would get more prescriptive. So instead of utilizing the full machine learning side, we'd put it on rails. We would actually write "I want printer paper", "tell me about printer paper", "I need printer paper" and "buy printer paper" as completely separate conversations with different outcomes against them. The problem with that is, when you're looking at a database of a couple of hundred thousand SKUs and you suddenly have four different aspects against every single one of those, and at the category level, the conversational machine learning piece then begins to put weight upon "I need", "I want", "tell me about" and "buy". It now begins to see these as entities, not just the product as being what it wants. So then what was happening is, when somebody was saying "I want printer paper", it might go away and say: I'm going to give you laptops, because more people previously, when they said "I want", have searched for laptops. So in its weighting system, the machine learning aspect puts forward what it thinks is the correct answer to give, because it's always just an informed guess with these things. As they learn the process to get better, it's an informed guess that they put forward, and then if it worked, they know they've done well; if it didn't, they try and fix it the next time around. So now "I want" is something where it's just going to default to the option that people have most often gone for.
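The combinatorial problem just described can be illustrated with a toy sketch. The lists here are stand-ins for the real catalogue, and the better design shown second (carrier phrases with a product entity slot) is the standard intent/entity pattern rather than a reproduction of the actual Kmart or Officeworks configuration:

```python
# Stand-in for a catalogue of a couple of hundred thousand SKUs
PRODUCTS = ["printer paper", "laptops", "toner"]

# Prescriptive approach: hard-code every verb-plus-product phrase as its
# own conversation. The count scales with the catalogue, and the verbs
# start to carry statistical weight as if they were entities themselves.
hardcoded = [f"{verb} {product}"
             for product in PRODUCTS
             for verb in ("I want", "I need", "tell me about", "buy")]
print(len(hardcoded))        # 4 per product; at 100,000 SKUs that's 400,000 phrases

# Entity approach: four carrier phrases with a {product} slot; the
# catalogue lives in a single entity, so the phrase count stays flat.
carrier_phrases = ["I want {product}", "I need {product}",
                   "tell me about {product}", "buy {product}"]
print(len(carrier_phrases))  # 4, regardless of catalogue size
```

The second shape is why stripping the prescriptive rails back out was the fix: the verbs stop competing with the product for meaning.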
So now the most searched-for product becomes the default answer for everything. So we actually had to go back, strip all of that prescriptive aspect out, and go back into conversational training with it, which meant that we actually had to have people sitting in a room talking to these for a very long time so that it actually began to understand and learn, almost like with a child. And that is the process that you have to go through with these things. If you don't do the training of the conversation correctly, and you just turn them on and expect the machine learning to do it for you, it's going to fail.

Don't write anything before you've built and tested the triggers. This is actually the one that I'm talking about there. Also, you need to look at how you begin to fine-tune those as you're going through. You're going to fail. You're going to have intents that don't work. You're going to have conversations that don't seem to quite answer what the user is looking for. You're going to have to learn to pare things out, cut them, re-splice them and go back and refine the conversation again. These things can't just be turned on and left. They don't work like an app. It can't be built, downloaded to somebody's phone, left for six months and have a new version released. You need to actually go into it and look at the conversations, look at what it's doing, look at what it's answering, identify the fact that it's not answering what the consumer wants, and then try and write an answer or try and guide it into a better outcome.

The other part of it is, these things don't have everything in them already. Google Assistant, Amazon Alexa, Bixby, Siri: they've got upwards of 95% recognition and correct response when somebody talks to them. But that's because they have a complete list of every single conversation that has ever happened with them. And as a result of this, the database they're going back to is huge, and the training aspect of it is gigantic.
You don't get that instantly inside of yours, so it can't answer everything. You'll see people come in and do this: you can break one of these things very simply. You open it up and say, I want a banana. And it won't answer, because its database doesn't have a banana in it unless it's a supermarket. So at that point there, it goes into a loop. Sorry, I can't help you with that. Can you try saying something else? Milk, please. I can't help you with that. Chocolate. And then it drops out. So what you need to do when you're looking at this is ask: are these conversations that are being had with it, that are making it drop out, useful? So when we're actually looking at that, it needs to be: okay, this person's come in and said "jobs at Officeworks", maybe we should put a careers feed in there. This person has come in and said "what's your returns policy", so now we're putting FAQs in. The conversations need to be something that's going to be a benefit to the user.

Now, I'm going to go into technical notes, and I'm sure a lot of you are going to get very excited about the fact that we're now getting into something which is technical. I am not technical. So I'm going to go through what we did, but please don't expect me to understand what we did. We had a development team that did that piece, and I originally was going to have my senior developer here to present this one, but his son's birthday was this weekend and he didn't want to leave Australia. So my apologies, you've got me for it.

So: we integrated with the retail database, and Kmart provided us with the APIs. Not all the information was structured to support the custom solution. We would be given all of the data, and the data was completely unstructured; it was in the way that they had it. So for SKUs, the items would be listed as brand, subcategory, category, make, model, information about the product itself, image, so on and so forth.
That would reside in the product SKU listing. We would then have a database which would be locations. We would then have a separate database which was availability in store. And we had another database on the other side which would have store opening hours and store information. But as we mentioned earlier, a lot of these answers require three or four bits of that information. So we had to develop middleware. Surprisingly enough, we utilize Amazon's system even though we're running a Google system on this one; we don't use Google Cloud. Google are aware of this and don't like it, but Amazon's one is more robust when it comes down to what we can actually pull out and use it for. And this enabled us to take the requirements that we needed out of each of these siloed databases and present them in one coherent response. So what we're actually doing is pulling little bits of information out of different locations to give one centralized answer. So when somebody says, "I want printer paper from my local store, what time does it close tomorrow", we are actually taking that out of five different locations, pulling it back together and answering, and this all has to happen inside of seven seconds. Because if it takes any longer than seven seconds, Google says this is too slow to respond and it cuts out. So it's sent from the speaker into Dialogflow, which is the tool that actually runs the machine learning and the conversation, into Actions on Google, which is the back end piece of that one, out into the middleware, and out from the middleware to the databases. You have to have the databases respond, and then it has to go all the way back through those again. So it has to be very well structured and built to ensure that all of that can be performed across the multiple databases and still come back inside of seven seconds with the conversation. Pretty impressive, really.
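As a sketch of that middleware pattern (illustrative source names and latencies, using Python's asyncio rather than whatever the team actually ran on AWS): fan out to the siloed sources concurrently, merge the parts, and enforce a hard deadline inside the seven-second budget:

```python
import asyncio

async def fetch(source: str, delay: float) -> dict:
    """Stand-in for an HTTP call to one siloed database."""
    await asyncio.sleep(delay)
    return {source: f"data from {source}"}

async def answer_query() -> dict:
    # The silos described in the talk, with made-up latencies
    sources = {"products": 0.3, "locations": 0.2,
               "availability": 0.4, "opening_hours": 0.1}
    tasks = [fetch(name, delay) for name, delay in sources.items()]
    # Run the calls concurrently (total wait is roughly the slowest call,
    # not the sum) and give up well inside the seven-second webhook limit.
    results = await asyncio.wait_for(asyncio.gather(*tasks), timeout=7.0)
    merged: dict = {}
    for part in results:
        merged.update(part)   # assemble one coherent response
    return merged

print(asyncio.run(answer_query()))
```

The key design point is the concurrency: querying four silos one after another would stack their latencies and blow the deadline, whereas fanning out keeps the round trip to the slowest single source.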
Then the location services: as I mentioned, the local store data feed, smart presentation of the opening and closing hours, availability of product at that particular store, and suggestion of the product in another one. This is where we developed a theory which we have now called concentric recommendation. What this means is: if you ask the question, "is this product available in my local store", in most cases, most systems will answer that question. But we consider that to be the wrong question to answer. If somebody has asked about a product and they've said, do you have this product? Yes, I do. Is it available in my local store? It's a closed question: if it's not in that store, all you say is no. But instead, what we said was, well, if it's not available in this store, how far away is the next store? And can we actually say: it's not available in your nearest store, but it is available in this one, which is five kilometers further away. Would you like to know more information about that? And then allow the consumers themselves to make a decision and a determination on whether they would be prepared to go an extra three, four, five minutes out of their way to get the product they were looking for.

For the gift finder logic, we had to develop a flow which allowed people both to step through the process and to skip or jump over parts. So as I mentioned earlier, we don't have to follow every step in the process. If the consumer has actually given us pieces of the information, all we actually have to ask for is the piece that's missing. By that, what I mean is: "I'm looking for a gift for my dad" means that I'm looking for a gift, it's for a man, and it's for an adult. So in that case, all I need to ask is: do you have a budget? And once again, that's an interesting one, because we wrote all of these budget questions, then we launched it, and on the first day the gift finder broke over and over again and we had no idea why.
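A minimal sketch of that concentric logic, widening the search ring instead of answering a closed "no" (store names, coordinates, stock and the rough distance approximation are all illustrative):

```python
import math

def distance_km(a, b):
    """Rough equirectangular approximation; fine at city scale."""
    dx = (a[1] - b[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    dy = a[0] - b[0]
    return 111.32 * math.sqrt(dx * dx + dy * dy)

def concentric_recommend(user_loc, stores, sku):
    # Rank stores from nearest outwards: the concentric rings
    ranked = sorted(stores, key=lambda s: distance_km(user_loc, s["loc"]))
    nearest = ranked[0]
    if sku in nearest["stock"]:
        return f"Yes, it's in stock at {nearest['name']}."
    # Not in the nearest store: don't just say no, widen the ring
    for store in ranked[1:]:
        if sku in store["stock"]:
            extra = (distance_km(user_loc, store["loc"])
                     - distance_km(user_loc, nearest["loc"]))
            return (f"It's not at {nearest['name']}, but {store['name']} "
                    f"({extra:.0f} km further away) has it. Want details?")
    return "Sorry, no nearby store has that in stock."

stores = [
    {"name": "Kmart Broadway", "loc": (-33.884, 151.194), "stock": set()},
    {"name": "Kmart Burwood", "loc": (-33.877, 151.104), "stock": {"SKU123"}},
]
print(concentric_recommend((-33.888, 151.19), stores, "SKU123"))
```

The point of the design is the middle branch: the closed question gets converted into an offer, and the consumer, not the system, decides whether the extra distance is worth it.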
And we went in and had a look to see what it was doing. When people were being asked "do you have a budget", people were saying no. We, of course, built it to expect a number. We built it so somebody would say 10, 15, 20 dollars, and it was getting "no", and it just couldn't answer it, because as far as it was concerned, no is not a number. So once again, it's these little simple things, which to us as humans seem very reasonable from a conversational perspective, but to a computer are really confusing. And this is why, to go with the long-running theme of what a number of people are saying, AI and machine learning in the near term are not going to replace people: it's just too hard for them to take these very basic things that we take for granted and understand them without somebody explaining it to them. They'll go into a loop otherwise.

You need to track everything. We use three different reporting tools to make this work. Dialogflow, which allows us to see the conversation: the transcripts of every single conversation that has been had with the user. Chatbase, which is a chatbot-oriented reporting tool that allows us to see flow, whether something is succeeding and whether something is failing. And then we utilize GA; we put Google Analytics on the back of it. The reason we put GA on this is twofold. One is because GA allows us to get session information, and it allows companies that have it to roll this up across their entire network, so they can see returning users that have transitioned from a website to a mobile into voice. The other part of it is that the conversational reporting won't show what information is pulled from an API. So what it'll show is: user came in, they said "I want a gift", "what's your budget", "$15", and then all you will see is "event". An event is all that it picks up at that point. So you're now going, well, I wonder what was presented.
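Going back to the budget bug for a moment: the fix amounts to accepting either a number or a refusal in the same slot. A minimal sketch (the phrasing list and function names are my own, not the production code):

```python
import re

NEGATIVES = {"no", "nope", "not really", "no budget"}

def parse_budget(reply: str):
    """Parse the answer to 'Do you have a budget?'.

    Returns a dollar amount, or None for a legitimate 'no budget',
    instead of looping because 'no' is not a number.
    """
    text = reply.strip().lower()
    if text in NEGATIVES:
        return None                      # valid answer: no price constraint
    match = re.search(r"\$?(\d+)", text)
    if match:
        return int(match.group(1))
    # Neither a number nor a refusal: reprompt rather than loop silently
    raise ValueError("reprompt: ask for a dollar amount or 'no budget'")

print(parse_budget("$30"))    # → 30
print(parse_budget("no"))     # → None
```

The general lesson is that every prompt's handler has to accept the human-reasonable answers to that prompt, not just the data type the designer had in mind.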
And so for a long time we couldn't actually see what the product was; we didn't know what was being presented. So instead, with GA, we capture the API response as it's coming out and push it into GA as a custom dimension. What this means is that when the API is returning a product and pushing it into the conversation, we are pulling that product and putting it into the session information, so that we can go in and say: this conversation resulted in this product being recommended, even though we can't actually see it in the conversation flow.

So why did we do it that way? Well, as I said, the smart ecosystem will do the heavy lifting, linking simple use cases together, and Google knows who, where and when; third parties know what. What I mean by that is, Google is actually really, really bad at voice at the moment. In fact, all of them are. Alexa's bad at it, Siri's bad at it. And the reason is this: they are trying to index the static web, and the static web was designed to trick search engines into sending users to websites. Everything was built around "top 10 things that do this", because that's what people searched for. Then Google would index it, they'd appear at number one; it was really spammy, a horrible experience. Because of this, when you go into Google now, or into any of them, and speak to them, the response they give you is horrible, because it's the static web being represented and rendered in the incorrect form. An example of that is this: if I go to Google Assistant and I say "gifts for men" (I'm not in a third party action, and I'm not sure if you can read this), the first result is ass face soap. So basically speaking, Google is recommending that every single man who has been asked about a gift should have ass face soap bought for him. Now I don't know about you, but I'm really not looking forward to getting that brown and white bar of soap on my birthday.
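The GA piece described earlier, pushing the product the API returned into the session as a custom dimension, might look roughly like this using the Universal Analytics Measurement Protocol. The tracking ID, client ID, event names, and dimension index are placeholders; which `cd` slot to use depends on how the GA property is configured.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

GA_ENDPOINT = "https://www.google-analytics.com/collect"

def build_recommendation_hit(client_id, product_name, tracking_id="UA-XXXXXX-1"):
    """Build a Measurement Protocol event hit carrying the recommended
    product as custom dimension 1, so it lands alongside the session data."""
    return {
        "v": "1",              # Measurement Protocol version
        "tid": tracking_id,    # GA property ID (placeholder)
        "cid": client_id,      # anonymous client/session ID from the assistant
        "t": "event",          # hit type
        "ec": "gift-finder",   # event category (hypothetical)
        "ea": "product-recommended",
        "el": product_name,    # event label
        "cd1": product_name,   # custom dimension 1 = product the API returned
    }

def send_hit(payload):
    """POST the hit to GA (a live network call, shown for completeness)."""
    data = urlencode(payload).encode("utf-8")
    return urlopen(Request(GA_ENDPOINT, data=data))
```

Called from the webhook at the moment the API response comes back, this is what lets the reporting answer "which product did this conversation end on?" even though the transcript only shows an event.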
So, much like Google wouldn't exist without websites to provide it with content, Google wants businesses to build interfaces and experiences that are suited to the environment these users have decided to interact through. They want companies to build voice and digital assistant experiences, so that when a consumer comes through and does this, they actually get served a response in the mode they have chosen. Now, the other part of this is going to be the long term play. We have a very close relationship with Google: we've been a Google Marketing Platform reseller for a number of years, we're a Google Premier Partner, and we started out as an SEO agency in 99. So we've seen a lot of what Google does in their evolution. Google is going to want, over time, for these experiences to be as much first party, i.e. inside of the Google ecosystem, as they can possibly be, not transitioning out into third party, which is what we've built with these: the Kmart action and the Officeworks action are third party. You pass out of the Google world into a closed-world environment that is owned by the brand. But what we see is that Google is going to want these assistants to perform tasks without the consumer having to think about it. And this is where they are going to start to monetize, because monetization for them will not be advertising; it'll be fulfillment of the action on behalf of the consumer in one step. What I mean here is this example: I'm heading to San Francisco. This is actually a conference, Business of Bots, that I went to and spoke at earlier in the year. In this environment, over time you self-select, or you link to brands, and you tell Google: these are the brands that I use, and they have an action that enables you to perform a task. So think about it: Qantas is the airline that I almost always fly with. So if I link my account to Qantas' digital assistant, that's my airline.
Hilton Honors: I tend to stay at Hilton hotels, and I tend to use Uber. So here are three pieces of information in play. We can see a future where you just say, OK Google, book my trip to San Francisco. It's got my calendar, it's got my email, it's got my itinerary, and while I'm there I say I want to catch up with such-and-such and have dinner. So at that point, Google goes and books my flight, because it knows when I have to be there. Google can transfer me to and from the airport using Uber. Google can book me into the hotel. Google knows who the person I'm meeting with is, because they have my contacts, they have my diary, they know where I'm located, they know where the conference is located. So at this point they're able to send an email to that person with a meeting invite saying, do you want to do this on this night? There are probably only two or three steps in here that it can't complete, and one is the restaurant. It might know the kinds of food that I buy, so it could come back with two or three recommendations, or it could actually auction me out to those restaurants, with the restaurants bidding to have me sitting at a table at that moment in time. And at that moment Google says: well, they like Chinese, there are three Chinese restaurants, we put it out to those three; one of them offers $10, one offers $15 and the other one offers $18. Google goes: you're going to the one that offered $18. Now the restaurant's booked. And in addition to this, it then says: hey, while you're there, do you want to go mountain bike riding? So it now begins to introduce something else into the process. That's the way we can see these digital assistants transitioning over the course of the next couple of years. So when people talk about 50% of all search being performed by voice, and those kinds of stats, that's interesting but not exciting.
What is exciting, and it's why all of these big players are heavily investing in digital assistants now, is that this is where they're going with it. They want to own large aspects of everyone's lives and be able to complete these tasks end to end. So the last thing I'd like to say is: if you're going to go down this path, and it's something you're going to explore either for yourselves or for businesses that you work with or in, there's the typical one of make a plan but assume it'll change. Trust me, it will. But don't do it unless you've actually spoken to your customers, or to people who deal with consumers on a regular basis, because that's where the conversation comes from. Otherwise, you're going to write something the way you think it should go, and then customers are going to come in with a completely different viewpoint. Have big ideas and explore them, but don't try to put everything into them at once. Like I said earlier, these things get confounded; they get confused. For insurance brands, we are actually building a different action for each insurance type, because the word "insurance" on its own will confuse it. So home insurance, life insurance, and so on will each be in a different action. Ensure that you track everything. Ensure that you go in and actually check things, report against them, see what's not working, and try to tailor it and train it again. And speak to Google or Amazon or, on this note, Apple: three weeks ago now, Apple purchased a company called PullString. PullString was founded by an ex-Disney Pixar writer; he's a conversational designer. And his technology enabled you to create conversations. Apple didn't have a third party setup for Siri previously.
We believe the purchase of PullString will mean that Apple will now open up Siri to third party developers inside of the next six months, which will be a big step. So look at the network that you want. Look at the reach that you want. Look at the partner that you want to work with. Determine which one is going to be the best one. Talk proactively to them. And then make sure you go into this knowing that it's going to be at least a one-year project from the start to the point where it's going to be more or less OK to work with. And then it's going to need work every day after that.