 say hello to everybody and I welcome you to our innovation talk number 7. It's already number 7. We have had great topics over the last sessions, but we have a very, very special and very interesting topic today. And I say welcome to Ben Ellermann, who is our speaker today. Hello Ben. We are starting with Abby from the UPU Direct Marketing Advisory Board. Hello Martin. Thank you. On behalf of the Universal Postal Union's Direct Marketing Advisory Board, it's a great pleasure to welcome you to this innovation talk 7. We're very happy to be back after taking a short summer break. For any newcomers, and I seem to see some new countries represented here, this session is once again being hosted by the Direct Marketing Advisory Board, a group of postal operators and private sector members which aims to promote direct marketing through postal channels. As I said, we've already hosted a series of six online sessions which address the main drivers of the new ecosystem in which direct marketing operates and how postal operators can adapt and respond by building omnichannel strategies. We also addressed the impacts of the current COVID pandemic on direct marketing. All of these sessions are available on the UPU DMAB website, so don't hesitate to contact Olivier Boussard or me if you need help finding them. Given the success of the previous sessions, we will continue the conversation with the support of Martin and Shay, our moderators and also global experts. Thank you. During our discussions, a number of topics emerged as being of high interest for direct marketers and postal operators, in particular the impact and value of artificial intelligence on the way we do business as direct marketers. In this session, we will continue to expand and explore artificial intelligence from a different, emerging perspective.
Therefore, we are very happy to welcome Ben Ellermann, CEO of Future Voice, who will guide this session on the theme, conversational AI: boosting voice marketing and customer service with artificial intelligence. Ben will not only show us the theory, but also take us into practice. As they say, what was science fiction 15 years ago is now a daily reality: interacting with computers using natural language and complete sentences. So much has changed technically, but how can you make use of these new opportunities in marketing, sales, and service? And what obstacles still have to be overcome on the way to direct marketing and voice commerce? So on behalf of the Universal Postal Union and the Direct Marketing Advisory Board, we thank Ben for taking the time to be with us today. We wish you all a great hour of discovery and discussion. Back to you, Martin. Thanks, Abby, for this introduction from the UPU Direct Marketing Advisory Board. And yes, I myself studied information technology and business in the early 90s. And one of the topics we talked about was, of course, artificial intelligence. And at that point in time, we thought, okay, maybe it's five years in the future and we will be able to talk to computers. Well, it wasn't five years. It was more like 35 years. But nowadays, it has become reality. We can talk to computers. And I can share one story with you, which is very interesting. I bought one of the first Alexas you could get in Germany. I put it on our lunch table, and I had my wife with me and I had my three kids with me. And they started to talk with Alexa and they started to ask questions. The first questions they asked were about mathematics, to see whether Alexa was able to help them with their homework. So they started to work on mathematics, on geography. And then one of the smaller ones, the smallest one, he was 10 at the time, asked Alexa: what is the meaning of life? And the answer, of course, for all of them, is... give me one second. Alexa, stop.
I have one of these things in my room, which you just heard. So the answer was 42; of course, everybody who has ever read science fiction knows why this was the answer. But the most important thing from my point of view: not one of my children asked the question, why can I speak with a computer? It was absolutely normal for them to talk to a computer, to talk in your normal voice to a computer and to get an answer from a computer, which was absolutely astonishing for me. For them, it was normal. So I think we are getting into a very different kind of society and, of course, into a different kind of business with these things. And Ben is the right person to teach us about it. Ben is not only CEO of this company, Future Voice, which, well, the name is already right. He is one of the best experts I know on the design and development of voice and chatbot clients, and not only the clients, but the strategy behind them as well. And he will give us a lot of examples of how to do it, why to do it, and probably, as I know already, a very interesting preview of the future of voice. So over to you, Ben. I know that you work for a lot of big companies. You have a great company. You're working for Allianz, you're working for Vodafone and all these big companies. We are very honored that you are with us today. And I would say the screen is yours. The audience is yours. And we are all very interested. Well, thank you, Martin and Abby, for the lovely introduction. I'm very happy to have the possibility to join your innovation talks. And it's something completely new for me to have people from all over the world listening to what I have to say. I guess this is my personal record, maybe. I'm doing some teaching at the university, and sometimes we have pretty international groups there too. But I guess this must be around 20 different nations here, and this is really impressive. Okay, but yeah, let's get to the topic.
Conversational AI: boosting voice marketing and customer service with artificial intelligence. It's a very broad field that I want to cover in the next 30 minutes, so it will definitely not be possible to get into every detailed aspect. We will have a discussion later on, so if questions come up along the way, make a note and let's discuss them afterwards. Let's have a short look at the definition of conversational AI, because I really think that not everybody of you has the same understanding of this topic. And the reason for that is that the definition is pretty vague, and it allows pretty many instruments to be covered by this very definition. Basically, what you see here on the left side are the three technological components that are essential to tell if something has to do with conversational AI. First of all, the so-called conversational user interface, so the front end for a chatbot. This could be something in voice or something in text. In the text world, you can look to Facebook Messenger or WhatsApp, and in the audio world, we're looking at telephony and so on. Then you have the natural language understanding, which is the part where you always have artificial intelligence. There is really no alternative concept in modern chatbots to using artificial intelligence within the natural language understanding. This is where the interpretation of what the user wants is being determined. And then you have the automatic speech recognition. This is something that only comes into place when we talk about voice bots, because a voice bot is always a text bot with an additional layer of voice at the very beginning and one at the end, you could say. So you're first taking the speech and transforming it into text, and afterwards you're transforming the bot's text answer back into speech. And on the right side, you have this quadrant where you see which channels come into place. We'll dig into that a bit deeper in a few seconds.
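The layering just described, speech-to-text in front, a text bot in the middle, text-to-speech at the end, can be sketched roughly as follows. This is a hedged illustration with stub functions; the names and the canned responses are invented for the sketch and do not come from any vendor's SDK.

```python
# Sketch of the voice bot layering described above: a voice bot is a
# text bot wrapped in ASR (speech-to-text) at the front and TTS
# (text-to-speech) at the back. All functions are illustrative stubs.

def speech_to_text(audio: bytes) -> str:
    """ASR layer (in practice a cloud speech service). Stubbed here."""
    return "I would like a pizza tonno"

def text_bot(user_text: str) -> str:
    """The text bot: NLU plus dialogue management, on plain text."""
    if "pizza" in user_text.lower():
        return "Which size would you like?"
    return "Sorry, I did not understand that."

def text_to_speech(bot_text: str) -> bytes:
    """TTS layer turning the answer back into audio. Stubbed."""
    return bot_text.encode("utf-8")

def voice_bot(audio_in: bytes) -> bytes:
    text_in = speech_to_text(audio_in)
    text_out = text_bot(text_in)
    return text_to_speech(text_out)

print(voice_bot(b"...").decode("utf-8"))
```

The point of the structure is the one made in the talk: the same `text_bot` in the middle serves a pure text channel directly, and becomes a voice bot only by adding the two outer layers.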
But of course, you can have voice bots, for example, for the phone call, and you can have text bots, for example, for chat and messaging. And the very interesting thing when it comes to conversational AI is that it has a standard, and this standard is pretty, pretty easy. But it's also very robust and allows you to create nearly everything you want on top of the standard, because the dialogue simply is the standard. You have a ping and a pong. You're talking to somebody, you're getting an answer. You're talking to a machine, you're getting an answer. And you can use this technique to come up with nearly every use case you really want. Now, the essential channels for conversational AI are these six, I would say. You have Amazon Alexa, which is pretty famous and stands for itself, I guess. You have the Google Assistant, same there, I would say. Maybe not as prominent, but most of you will have heard of it. And then, in the text world, you have WhatsApp and other messengers, of course. You have messengers within Instagram nowadays, or in the Asian sphere you have WeChat, and in the Russian sphere there is another equivalent, I must say I do not know the name. If there is somebody from Russia, please type it in the comments. And then we have the phone bot. When it comes to conversational AI, the phone is maybe not the channel that you have in mind in the first place, but we'll come to that a bit later. I guess it's really one of the most important channels, because the phone, for customer communication, is a big thing. Then you have the website and app chatbots, so something that is happening on your website or within your app. And of course, you have something like proprietary channels. So, for example, we are doing AR steering with voice, or we are doing in-car voice projects where you can steer parts of your car, like the radio or the navigation system, with voice.
And of course, there can be a lot of other devices that can be controlled and that don't necessarily have to do with the channels mentioned here. Our customers are regularly doing more than one channel with us. I guess 60% of our customers are not only working with us on Alexa, for example, but are also doing the Google Assistant or have a WhatsApp project. So it's a good idea to find a technology where you can cover a lot of different channels, and I will get to that in a minute. But before I do that, the most important question for you as postal operators is maybe: why should I consider using conversational AI instruments? What we have here is the spectrum of goals that you can accomplish with conversational AI technologies and instruments. On the left side, you have customer service automation, where the goal is clearly the reduction of the cost to serve. This can be reached via full automation, where you really deflect the call. So you fully automate a call such that you don't need a customer service agent to conduct it. And you have partial automation, where the goal is clearly to reduce the AHT, the average handling time. For example, you can do the authentication, in German or English language, and then dispatch the authenticated customer to your customer service agent. And you have an advantage because the 30 seconds or maybe one minute of time that the agent would have needed is saved. Then you have the customer experience, where the goal is clearly to improve the experience for the customer. This is something that can be beneficial for the loyalty of the users or, of course, for advocacy, which is good for recommendations. And in the end, you have conversational commerce, where the goal is clearly to achieve incremental sales. So you can increase your conversion rate, or you can do upselling, in different detailed use cases, I would say.
But I really think that everything you can do with conversational AI is interesting along these dimensions. And this next part is a bit technological, but Martin and I agreed that for you postal operators, marketers, customer service experts, this is a level that is appropriate. I really find it important to give you a glimpse of what is happening in the back. Remember the very versatile standard, the Lego plate I just showed you: all the flexibility that is within conversational AI can be shown with this technical pipeline. Because in the very beginning, you have a lot of different channels, the ones I mentioned on the slide. It can be the phone call, it can be the messaging dialogue, it can be the voice assistant; the user is contacting the brand via one of these channels. First of all, you have the voice recognition, of course only for the voice channels. We are working with Microsoft technology when it comes to the phone call. In the voice assistant world, this is of course something that is already included technologically, so Amazon Alexa and the Google Assistant have their own voice recognition. Afterwards, we have text from the different channels, and the text will be interpreted in the natural language understanding. If I am a customer calling a pizza place to order a pizza, and the phone bot tells me, hello, how can I help you, and I say, I'd like a pizza tonno, then we have two pieces of information that will be extracted in the natural language understanding. One is the intent, and the intent is clearly an order. The other one is the entity, sometimes called slot, which will be the pizza tonno. And then we take these two pieces of information that we have extracted with artificial intelligence. Both of these layers, the voice recognition and the natural language understanding, are definitely AI layers.
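The pizza example can be made concrete with a toy NLU. Real systems use trained AI models; this keyword matcher is purely illustrative and only shows the shape of the two outputs, intent and entity (slot), that get extracted from an utterance.

```python
# Toy NLU for the pizza example: extract an intent and an entity (slot)
# from an utterance. Production NLU uses trained models; this simple
# keyword matcher only illustrates the two pieces of information.

MENU = ["tonno", "margherita", "salami"]

def understand(utterance: str) -> dict:
    text = utterance.lower()
    # Intent: is the user trying to place an order?
    intent = "order" if any(w in text for w in ("like", "order", "want")) else "unknown"
    # Entity/slot: which pizza, if any, was mentioned?
    entity = next((p for p in MENU if p in text), None)
    return {"intent": intent, "entity": entity}

print(understand("I would like a pizza tonno"))
```

Whatever the channel, it is this small structured result, not the raw text, that gets handed to the dialogue management.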
And then we take this information and put it into the dialogue management, where we are working with rules, and we're working deterministically. That means that there is no AI involved in this part. And this is the reason why we can come up with use cases for each of the goals I mentioned earlier: for customer service automation, for customer experience and for conversational commerce. We can have a stockpile of different use cases within one dialogue management, which is then connected with our core company systems. So you have your CRM system, you have your ERP system, you maybe have some sort of specific databases. I don't know where you as postal operators are storing your tracking information, for example, which can definitely make for very interesting use cases. And you only have to connect these systems one time; then you have the connection and can reuse it in another use case. So this is really a very efficient way of organizing a stockpile of different touch points for the customers. We will definitely not have the time to dig into every channel I just mentioned, so I will concentrate on two. First, I want to dig a bit deeper into the smart speaker, specifically the Amazon Alexa, because I think this is the most prominent one. And a bit later, I will give you a little glimpse of what will be possible, or is already possible, in the phone world. But let's have a look at the smart speakers first. What I find really interesting, and this is maybe something for the discussion later: everybody is looking at the smart speakers with the expectation that there should be this one day when they are really, really well established and everybody is getting up in the morning and seeing, okay, we have our future world now, where we can use these voice assistants and smart speakers within our everyday life. Because this is really something that I think a lot of people have in their minds when they are thinking about smart speakers.
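Before moving on to the smart speakers, the split described above, AI only for understanding, deterministic rules for the dialogue, and backend connections built once and reused, might be sketched like this. The system names and handlers are invented for illustration; they stand in for whatever CRM, ERP or tracking systems a postal operator actually runs.

```python
# Deterministic dialogue management: no AI in this layer, just rules
# that map an extracted intent to a handler. The backend connectors
# (tracking database, CRM) are stubs; in a real project each one is
# built once and then reused by every use case that needs it.

def tracking_lookup(shipment_id: str) -> str:          # stub connector
    return f"Shipment {shipment_id} is out for delivery."

def crm_lookup(customer_id: str) -> str:               # stub connector
    return f"Customer {customer_id} authenticated."

RULES = {
    "track_shipment": lambda slots: tracking_lookup(slots["shipment_id"]),
    "authenticate":   lambda slots: crm_lookup(slots["customer_id"]),
}

def handle(intent: str, slots: dict) -> str:
    handler = RULES.get(intent)
    return handler(slots) if handler else "Sorry, I cannot help with that."

print(handle("track_shipment", {"shipment_id": "DE123"}))
```

Because the rule table is plain and deterministic, adding a new use case means adding one entry, and any connector already in the table is available to it for free.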
And in reality, we see the smart speaker ownership. These are German statistics, but I guess Germany is really average when it comes to that. The US is, of course, even more into the voice and conversational world than we are, but I guess there are a lot of countries that are maybe hesitating a bit more. So let's say we are average. And we saw that 26% of German adults already had a smart speaker in their homes. This rate rose to 34% during the Corona pandemic, which means that there is a lot going on, and in the future, we will have even more people using it. And when you're looking at the question of voice assistants, then this gets even more interesting. The smart speakers are the devices like the Echo Dot, or in the Google world you have the Nest Hubs from Google. But of course, you can also use these smart speaker ecosystems with your phone. In the Google world, for example, you're using the Google Assistant with every Android OS on the phone. So there are a lot of voice assistant users that don't necessarily have a smart speaker; it's always a bigger group of people. In Germany, 45% of German internet users are already using voice assistants. For the digital natives, we have 59%, and you see the increase from 2019 to 2020 here. For the digital immigrants, it's of course slightly less. And if you see this, then the question for me... no, I have an answer for you: the voice assistants are pretty established, but we didn't see it coming. It happened very silently, I would say. So voice assistants are nothing evangelistic anymore, but something that is really present in our everyday lives. But the use cases that are interesting for us as consumers are not the use cases everybody has in mind when we are thinking of this future world. These use cases are, first of all, play music or listen to the radio.
79% of the voice assistant users are using it to play music or other sorts of media. And 74% are using it to control devices in the house. And then you have a lot of use cases that are also relevant but are not that frequently used by the consumers. And this is something you should have in mind. So you have a strong basis of people that are theoretically able to use your smart speaker skill, your voice assistant application or whatever, but they are not really there to use it, because maybe what you have to offer isn't that relevant at the moment. So this is something we can really work on. For us, one very interesting fact was that 45% of smart speaker users say they have replaced their traditional radio at home. So you don't need a radio anymore, because you can tell Alexa, I want to listen to my favorite radio station. And this is something that we used for Vodafone in very interesting use cases. What we did there: we converged media in a new way, because classical radio advertising looks this way. You have a brand and you have your radio advertising; you're shooting it out to the customer. And then you're hoping that maybe you can use some sort of attribution mechanism to tell whether or not your media invest was good. But it's always hard to tell, because it's a traditional push medium and it's hard to tell what the effect has been. What we did with Vodafone was to establish an interaction that took place right after that message to the customer, and it looked exactly this way. The video is, by the way, something Vodafone created for a conference where they wanted to show this case. But bottom line, the use case is: I tell my Alexa to play my favorite radio station, and in that very moment, we deliver a pre-roll.
So something you know from YouTube, where you click a video and just before you can see it, you get an advertisement. And this is something we do for exactly those people that are using their Alexa to play their radio station. And we did that with a strong commercial and with a call to action at the very end: you can just say, Alexa, start Vodafone Gigacube. This was the message to the user. And if the user decides to say that in that very moment, then you have a direct conversion into the so-called skill, which is the voice application we created for Vodafone. And then you have no friction at that point. The user is getting directly from the ad into the skill, where you have an interaction. You can measure it, and you can of course even think about something to identify the user. So, a very interesting use case. But of course, the question could be: can we expect that in the future the smart speakers, especially Alexa, will be even more present? And I really think they will be more present, and I'll tell you why. First of all, this is an example from Amazon. Maybe you saw in the last months that you have more and more devices like TVs, like smart home devices and so on, that carry a label like "works with Alexa". So you add an Alexa, buy an Alexa, and you can steer this very device with your voice. This means the ecosystem gets bigger and bigger and bigger, which means that there will be more use cases, because more people are able to use it. Or you have something like Alexa for Residential. This is a program where you as a landlord can integrate a form of Alexa that doesn't necessarily need a specific login. That means you can build a hotel and put Alexa steering for the TVs and for the lights and so on in every room, and give the guests the possibility to log in if they want, but they don't have to in order to steer their room.
And this is something that will of course help to get more people into this voice experience, when you incidentally have the possibility to use it during your stay in the hotel. And this is something Martin mentioned to me yesterday; I found it so interesting that I put it in the presentation. It's Alexa for hospitals and senior living communities. So what you saw on the slide before for hotels is really coming to the healthcare sector, because it's very easy, even for people that aren't especially digitally native, I would say, to use it in their everyday life. And this is something that I really think will have a very strong impact on the usage of the smart speaker. This is the new Echo Show 15 from Amazon. It's a nice-looking new Alexa with a very big screen that can be mounted vertically or horizontally, and that can be used in a lot of contexts where you need the display in addition to your voice. Let's say a shopping experience: you want to shop something. Or for you as postal operators, think about order tracking via Alexa, where you have not only information on where your shipment is at the moment, but maybe in the future a map where you can see live: okay, my postal carrier is here. It stays on the screen while you're washing your dishes, so that you know when the postal carrier will be at your doorstep. And at the bottom you have the new Omni series, which is Amazon entering the TV industry. These TVs have all the features that a normal TV has, but also a built-in Alexa with a microphone in it. So you will definitely be able to have a very immersive experience using your voice. And of course, we can then use it to build experiences within Alexa where we are not only steering via voice, but also steering what's on the screen. At the moment, we as an agency see that the brands are hesitating a bit to invest in doing something visually with the smart speaker devices.
The reason for that is that only a small percentage of users have the display at the moment. In the future, I really think it will be a lot more, and you have interesting use cases. So this will be a boost, a serious boost, definitely. Yeah, now we will have something which is a bit more interactive. I want to show you a little phone demo. Therefore, I will take my phone and put it on the speaker here. And I will stop my screen sharing for one second because I want to show my browser now. Just one second, so everybody can see my browser. Oh, this wasn't something I wanted. Sorry, a lot of windows here. Okay. So this is the technology that we're using for most of our projects. We are not a software company, but a tech agency that is using various software platforms; this one is the one we love most, called Parloa. And you will hear my dialogue with the bot that you can see here. This is a so-called graph where this dialogue is built, and I will show you on my screen what is happening in the dialogue between myself and the bot. It's maybe a bit small to see, but I guess you will get a little impression. And this is a demo insurance, where I'm an insurance customer with the so-called Chronos Insurance, which doesn't exist, of course, because it's just a demo. I think the use case could be interesting for you as postal operators, because it's about changing an address. And now I will call. Welcome to Chronos Insurance USA. What can I do for you? I want to change my address for my car insurance. Perfect. And what insurance is it about? It's my car insurance. I would be happy to record your new address for your car insurance. What is your insurance number? My insurance number is PA 555 666. So your insurance number is PA 555 666. Is that correct? Yeah, that's correct.
All right. And when's your birthday? It's the 4th of December, 1982. Thank you. With this, I was able to authenticate you successfully. What is your new address that I may save in the system? It's 680 Point Lobos Avenue, San Francisco. The address is 680 Point Lobos Avenue in San Francisco, California. Correct? Yeah, that's correct. Okay. I have successfully saved your new address in our system. Is there anything else I can do for you? No. Thank you and have a pleasant day. Goodbye from your Chronos Insurance USA. Okay. I hope this was a bit impressive. You may have noticed that I didn't say California, and that the system came up with California by itself. This is because we are using the Google Maps API in real time to plausibility-check the address and add some additional information. I'm sharing my presentation again in a second. And yeah, this is our approach of bringing the experience that you have on Alexa and the other smart speakers to the phone, and I guess we are pretty successful with this at the moment. You all know those rather annoying phone experiences from the past, where you had to use the touch tones to navigate, and you had very long menus because it wasn't really possible to work with artificial intelligence. Nowadays, this is possible, and this is why you can really leverage use cases that are completely new. What I want to show you now is the so-called custom neural voice. This part is a bit complicated, because the custom neural voice we just created is only available in German language at the very moment. But Martin and I agreed that this little play we prepared could be interesting for you. So, time to play. I will play four voice samples in the next seconds. Some of them are spoken by myself, and the others are spoken by my custom neural voice, which is a voice that we created with artificial intelligence. So this voice learned how to sound exactly like me.
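The address completion in the demo, where the caller never says "California" and the system adds it, relies on a geocoding lookup. The dictionary below mirrors the documented response shape of Google's Geocoding API (`address_components` entries with `types`), but the sample values are hand-written for this sketch, not fetched from a live API call:

```python
# Completing an address from a geocoding result. The dict follows the
# shape of a Google Geocoding API response (address_components/types);
# the values are hard-coded for this sketch, not a live lookup.
from typing import Optional

SAMPLE_RESULT = {
    "formatted_address": "680 Point Lobos Ave, San Francisco, CA 94121, USA",
    "address_components": [
        {"long_name": "680", "types": ["street_number"]},
        {"long_name": "Point Lobos Avenue", "types": ["route"]},
        {"long_name": "San Francisco", "types": ["locality"]},
        {"long_name": "California", "types": ["administrative_area_level_1"]},
    ],
}

def component(result: dict, wanted: str) -> Optional[str]:
    """Pull one address component (e.g. the state) out of the result."""
    for comp in result["address_components"]:
        if wanted in comp["types"]:
            return comp["long_name"]
    return None

state = component(SAMPLE_RESULT, "administrative_area_level_1")
print(f"Caller said street and city; geocoding supplies the state: {state}")
```

In a live bot, the recognized street and city would be sent to the geocoder in real time, and the enriched result would be read back to the caller for confirmation, exactly as in the demo.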
And yeah, since you won't be able to understand it, because it's German language, you can focus only on the voice. First sample. I'm not there myself tomorrow, since I have an important meeting in the area of airplane security. Second sample. Third sample. And last but not least, fourth sample. Okay, now I guess every one of you has an idea. I see, Martin, this means something. A good chance for everybody to put into the chat now which voice samples are Ben. So if you put one and three, you think that one and three were Ben. If you put two, you think only two was Ben. But I think, Ben, you already told us two of them are you and two of them are the bot, right? Exactly. So which are Ben? One, two, three, and or four? What do you think? That's interesting now. That's a good idea, Martin. Let's see the dialogue. One and four are Ben, says Wasil. Two and four is Ben. Can you play them again, Ben? Yes, do it again. Can you do it again? Sure thing. Okay, here we go. I'm not there myself tomorrow, since I have an important meeting in the area of airplane security. That was one. An ant is an apple in a remarkable way. That was two. The fundamental tone of the male voice lies at about 125 Hertz. That was three. I'm starting your timer for nine minutes. And that was four. Who do you think is Ben? One, two, three, and or four? Two is Ben. One is Ben. Two and four is Ben. One and four is Ben. Come on. Nobody thinks number three is Ben. That's interesting. But a lot of people are thinking number one is Ben. Number one and two. So I think we can already agree that we are not really sure. We are not really sure who is Ben, if I see this. So Ben, would you like to give us the results? So the first one was the neural voice. The second one was original. The third one was original. And the last one was neural, too. So it would have been two and three. So again, nobody was right. Wonderful. Okay, Ben, up to you.
What you can see here is that this really works very, very well. And of course, it's not only good to prank your mother. The next thing I want to do is speak French. This is something I wanted to accomplish all my life, because I love spending time in France and I'm not good at the language, and this is something you can do with it. But of course, the most commonly seen use cases are synthesized media from text. And the other one that is especially important for us: if you have a phone bot that works very well, then you want your own brand voice and want to use it in real time for dynamic content, so that you can, for example, read back the customer number you just understood, or say an address. This is something you cannot do if you just have a recording; you need a synthetic voice. And yeah, maybe very briefly, because I really think I'm a bit over time: what is necessary to do this technically? Of course, you can simply do this with us. But I find it very interesting that Microsoft, which is our technology partner here, used 3,000 hours of audio material as training data to build a so-called transfer teacher model. So they created, if you will, a voice that is very, very adaptable. And this is the basis for a so-called fine-tuned teacher model in German language; we decided on the German model. And then we take several hours of material. I was in the studio for two half days, I would say, so maybe we have two to four hours of material, and use it to generate specific training data. In the end, you have your custom neural voice model, which is there and available for every text you want to transform into my voice. This is how it works. This is something from the studio, but I will skip over these specific voice files right now, because I guess this is not so interesting.
But the message is: you need a bandwidth of different so-called phonemes, placed in different sentence contexts, to have a voice that is then able to cover every sentence you can possibly come up with. So, I saw in the chat that somebody raised a question regarding the dark side of chatbots. And I must say that regarding this custom neural voice, I find this a very interesting discussion. I'm happy to tell you that for the Microsoft technology, you have a lot of restrictions and an ethical process that guarantees that nobody can use it as a tool to, for example, clone the voice of Angela Merkel, which would definitely be a thing that is possible. So you need to fill out a complex form where you say what the purpose of the voice is. And also, as a speaker, you have to record a special sentence where you say that you're agreeing with the overall principles behind this custom neural voice. Yeah, and this was the third little glimpse from the voice and conversational AI world. In the end, I just want to return to the spectrum of goals of conversational AI projects. This is a slide from the beginning, and I thought a bit about what could be possible cases for you as postal operators. So I simply brainstormed and put them in the boxes. Within customer service automation, I would say that if you have service calls incoming, you can definitely think about the intent recognition: what does my customer want, who is the responsible agent or the agent with the right skill? And you can also authenticate and route these calling customers to essentially lower the amount of time that is needed to fulfill your customer service operations, the reduction of AHT. Then you have, for phone and messenger, the order tracking, which is maybe something that some of you are already doing at the moment. Also for phone and messenger, you have the change of destination. You just saw how well the address change worked over the phone; for bigger shipments and packages, this could be something very promising.
If you look at customer experience, then the Alexa or Google Assistant case is very interesting, like a wizard for shipment fees. At least in Germany, it's very complicated to tell into which category the thing you want to ship belongs, and I guess this is something that could be covered really well within a voice workflow. And also digital postage via voice. Again, I only know it from Germany, but I can use a pen to write an online code that has been generated onto the package. Why not deliver this code directly after the wizard, maybe in a voice experience? And when it comes to monetization and commerce, what about a postcard assistant that allows people to spontaneously send a card to their grandmother? Maybe "Alexa, let's send a postcard to..." and then you just dictate the text, and you have a technology that puts a photo on the front and writes down the text so that it looks handwritten. Could be a thing. Or "pick up my shipment at my office" could also be a very good idea, like: I need somebody to pick up the 26 boxes I want to send today. And that has been my little keynote; I'm looking forward to a good discussion. Thanks a lot. First of all, I would like to applaud on behalf of everybody. I think everybody was very interested in this one, and we already got a lot of questions, so thanks a lot for that. I really like your last idea of having a bot pick up my parcels. Probably I need not only a voice bot for that, but some real robot who can move around and get my package. But that's something for a different presentation, probably. We got some questions around the bad and dark side of AI, but we also got some questions around the use cases, and I would like to start with those. Gustavo asked: is there a global standard that currently works across different conversational AI devices?
So the example you just showed us on your phone: would it be possible to use the same software on Alexa, the same one on Google Assistant, and so on? Is there some kind of standard for this? There is no standard like a norm or something like that. But technology-wise, this is exactly what the platform we are using does. We are using a platform that we call channel-agnostic, for phone, for Alexa, for Google Assistant, for the messengers, and for on-site chat. If you have those in mind, there is a base dialogue, and then you have specifics that are relevant for each channel. But in the end, you have a lot of standardization going on, and especially when it comes to the training of the AI, it can be very beneficial to bundle everything in one platform. Yeah, that makes sense, especially so that you do not have to develop everything twice or three or four times, but just once. Okay, makes sense. Another question, asked by Bernard, was about the storage of this data. Is the data used in conversational apps usually stored, or is it just ephemeral? So it is used while I'm talking to the device, and then it is gone? You can decide that. At the very least, it is used during the conversation itself. And if you want to store it, I would say it's necessary to store the transcribed text from voice, or the text the user typed, for some days to train your models, but then you can erase it. And when it comes to the smart speakers, there is something important I want to add. There is sometimes this anxiety that Alexa is listening all the time and sending everything to the cloud. This is something that is definitely not happening, because there are two technologies that come into place. One is the chip within the device, and this chip can only do one thing: if you say "Alexa", then it activates the device to listen and send the information to the cloud.
And this chip can definitely do nothing besides detecting the wake words "Alexa" and "Echo". Okay, that makes sense. Another question was about the usage of this data by the large companies. I think Yassir is talking about the big tech firms of this world, Google, Amazon, and so on. Do they really have an advantage by using this technology, or is this something where you think that, for example, a postal operator would also be a good company to use this technology and get some advantage out of it? You mean in terms of creating your own ecosystem? I don't think that a postal operator could create its own ecosystem, but probably it could make use of that ecosystem. I'm not really sure I get the question. So, the original question was about privacy violations: are the large companies collecting all kinds of data from the consumers? And I think you already answered that one, because you said, well, they do not really do that. They just do the translation, the usage of the things, but they do not really store this data or use this data, if I got you right. But my question was getting from there to the usage of this technology. They are using this technology already, but wouldn't a postal operator also be able to use that technology? Sure thing. I mean, the advantage of the big ecosystems is, of course, that they use a lot of incoming data to make their AI models better. So it's not completely true that none of the data is stored, but I guess it's really stored in a manner that is not a privacy violation, I would say. And as a postal operator, you can use broad parts of this technology for your own purpose without connecting too deeply to the big ecosystems. This is exactly what I showed in the phone case. We are using the Microsoft technology for speech-to-text, because this is really the best one you can use at the moment. We benchmark a lot, and it makes no sense for us to create something like that on our own.
But after that, we have everything exactly under control. We know which data is coming from where, and we decide where we want to store it. And in Germany, for example, data privacy is really a hassle. We are very sophisticated regarding that in Germany; if you're working in digitalization, it's really a pain in the ass. But our automation pipeline is compliant with German data protection, so I guess there is really no reason why a postal operator shouldn't be able to find a compliant way. You had that example about the usage of voice, and we all got it wrong which one was you and which one was the artificial part. And you already said that you had to fill out forms with Microsoft to make this possible, to reduce the risk of misuse of these kinds of voices. How big do you see the risk of the dark, bad use of AI and chatbots? Is it something where you really think that we will have a problem in the near future, that somebody is calling me and after five minutes I get the feeling, okay, I'm not really sure whether this is a real person or not? Do you think this is possible in the near future, that there is a dark side coming up? When it comes to chatbots and dialogue automation, I'm skeptical, or let's say I don't have any fear that this is going in the wrong direction, because in most of the chatbots you have this deterministic component in the middle. There is something going on in research and development at the moment to use artificial intelligence in the decisioning too, but this is the future, and it's also something where the learning mechanisms are supervised, meaning that the customer cannot manipulate the bot by saying specific things. This is only something that we, the creators of the bot, can accomplish, and we have a mission. So no risk there, I would say.
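The "deterministic component in the middle" can be pictured as a fixed transition table: an AI model may classify what the caller said, but the reachable dialogue states are hard-coded by the bot's creators, so user input cannot add or rewire transitions. A minimal sketch with invented state and intent names:

```python
# Minimal deterministic dialogue flow: whatever the (possibly AI-based)
# intent classifier returns, the set of reachable states is fixed at
# design time, so callers cannot "teach" the bot new behaviour at runtime.
FLOW = {
    "start": {"greeting": "ask_request", "other": "ask_request"},
    "ask_request": {"track_shipment": "give_status",
                    "change_address": "confirm_address",
                    "other": "handover_to_agent"},
}

def next_state(state: str, intent: str) -> str:
    """Look up the next dialogue state; unknown intents fall back to a
    safe default instead of letting the input create new transitions."""
    transitions = FLOW.get(state, {})
    return transitions.get(intent, transitions.get("other", "handover_to_agent"))
```

However creative the caller's phrasing, the worst case is a handover to a human agent; the flow itself never changes, which is the structural reason there is little manipulation risk in this class of bots.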
Regarding the custom neural voice, I would say that this is something where we need to pay attention, and when I say "we", I mean all the world, because you saw how well this can work, and you have maybe all seen movies or TV series lately where this topic of deepfakes has been mentioned. Microsoft has this ethical mechanism because it's big Microsoft, but what about hackers who come up with their own technology? This will definitely be possible, and I guess there has to be some technology to detect this better, and this will definitely come. Thanks a lot, Ben. It's already five past five in Germany, so at least five minutes past our normal stopping time. Abby, if you switch your microphone on, we might be able to hear you. How about this? Is it working? Very good. Thank you, Ben. Thank you for a wonderful presentation and for bringing us an even broader perspective on direct marketing and AI. It was fascinating for me, and yes, hopefully the postal operators will take something away from this to think about developing this for tracking packages or other assorted things. I wanted to let everyone know that there will be an innovation talk eight. Its theme is "Online and Offline: What Is the Winning Formula?" It will be held on Monday, December 6th, at the same time, four o'clock to five o'clock, and Professor Dr. Ralph Kreuzer, Professor of Marketing at the Berlin School of Economics and Law, will be our guest speaker. Everyone will receive their invitation, and it will also be posted on the UPU DMAB website and the UPU Postal Direct Marketing LinkedIn site. We hope you all will join us. Everyone be safe, and we look forward to seeing you next time. And Martin, as always, it is a pleasure to have you as our moderator, and we look forward to seeing you back in December. Thanks a lot, Abby. Thanks for making this possible, and thanks a lot to Ben.
I personally found this a very, very interesting presentation again. I was very happy to have some pre-sessions with Ben, and I'm totally astonished by his voice, to be honest, and by the voice samples we heard. This really shows that the technology is quite advanced, and I think the cases he showed us, and especially the ideas he had about using voice technology at postal operators, were really interesting. Thanks a lot, Ben, for all the time and work that went into this presentation, and thanks to everybody attending. We will hopefully see each other again on the 6th of December for our last session of this year, the last innovation dialogue of this year, and I'm happy to see you then. Thank you, bye-bye. And if you have any questions, send an email to Ben, send an email to Abby, send an email to me, or contact us via LinkedIn, and we are happy to answer. Bye-bye. Thank you all.