Thank you very much for joining. I think it's about time to start, so please take your seats. My name is Leon Anavi, and in the next 40 minutes I will share with you thoughts and some statistics, based on the research that I did, about voice assistant SDKs for embedded Linux devices. I'm a senior software engineer at Konsulko Group. This is a company that provides services specialized in embedded devices and open source software. My colleagues and I have experience in projects such as the Yocto Project, OpenEmbedded, the Linux kernel, U-Boot, Automotive Grade Linux, the GENIVI Development Platform, and a lot of projects in user space. The company is based in California, but we have people all around the world; I'm working remotely from the Plovdiv area. The agenda for today includes three topics. The first one is a brief introduction to smart speakers and the opportunities that they offer for people like us, developers. I will focus on three SDKs: Amazon Alexa and Google Assistant, which are the market leaders at the moment, and another not so popular but interesting solution called Mycroft. It's open source; that's why it's so interesting for me. Last but not least, I have prerecorded a few videos that I would like to share with you, just to demonstrate what you can do and to give you some ideas about how you can get started building your own devices. Virtual assistants are not something new. We know them from science fiction, and there are many products that have actually been developed in the last couple of decades and are working pretty well. What's interesting in the past few years is the rise of the smart speakers. And by smart speaker I don't mean just a speaker that has a lot of options for communication, like Bluetooth and Wi-Fi; I'm speaking about smart speakers with an integrated voice assistant. Do you own such a device? Can you raise your hand if you have one? All right, half of the people. For me it's a very interesting technology. 
I have commercial products from both Amazon and Google in my apartment. It's an interesting technology because it's an end-consumer device that combines several hot topics from an engineering perspective: of course artificial intelligence and big data, but also application development, in terms of developing new skills or actions, depending on the terminology, which are basically third-party applications that the smart speaker can execute for you. And of course the Internet of Things and embedded devices, because we can start integrating these voice assistants into various devices. Let's have a look at the key ingredients that we need to make a smart speaker. We've already mentioned a few of them, like the artificial intelligence. We need wake word detection in order to make the speaker listen to you; you need a wake word, and you all know them, something like "Alexa", "Hey Google", "OK Google". We need text to speech, so that when the answer from the cloud is returned to the speaker, it can reproduce this answer as speech. And the biggest problem is actually the speech to text: after detecting the wake word, you need to recognize what the person is saying. Of course, there is also board bring-up; this is an embedded Linux conference, so I'm sure you're all familiar with that part. And there are third-party applications. A few years ago I attended a presentation where someone complained that in the IoT we don't have many third-party applications; well, here is an opportunity to make some. But this presentation will focus on the SDKs for device creation, not on the third-party applications. The smart speaker market is huge, and the expectation is that it's going to get bigger and bigger. Keep in mind that at the moment all the commercial solutions available on the market have limitations in the number of supported languages. Here are some data that I got from this website. 
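Returning to the key ingredients for a moment, they form a single processing loop. Here is a toy sketch of that pipeline; every engine is a placeholder stub of my own, not a real SDK:

```python
# Toy sketch of the smart-speaker pipeline described above: wake word
# detection -> speech to text -> cloud "AI" -> text to speech.
# Every engine here is a placeholder stub, not a real SDK.
WAKE_WORD = "hey speaker"   # hypothetical wake word for this sketch

def detect_wake_word(audio):
    # Real devices run a small always-on model for this step.
    return audio.lower().startswith(WAKE_WORD)

def speech_to_text(audio):
    # The hardest part in practice; commercial assistants do it in the cloud.
    return audio[len(WAKE_WORD):].strip()

def ask_cloud(utterance):
    # Intent handling and the "heavy lifting" happen server-side; stubbed here.
    return "You asked: " + utterance

def text_to_speech(answer):
    # A TTS engine would synthesize audio; we simply pass the text through.
    return answer

def handle(audio):
    if not detect_wake_word(audio):
        return None  # stay silent until the wake word is heard
    return text_to_speech(ask_cloud(speech_to_text(audio)))

print(handle("hey speaker what time is it"))  # -> You asked: what time is it
```

Real SDKs obviously replace each stub with an actual engine, but the control flow is essentially this loop.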
At the moment it's clear that Amazon and Google are the market leaders. Here is how the sales of smart speakers look per country; of course, this depends on the supported languages. We'll start the overview with Amazon Alexa, because it was the first technology of this kind on the market; in alphabetical order it's also the first. How many of you are using Amazon devices? All right, thank you. So, Amazon Alexa, just a brief introduction. It's a virtual assistant powered by artificial intelligence, developed by Amazon. It's available as commercial software for Fire OS, iOS and Android. It powers the Amazon smart speakers, and not only devices made by Amazon. It was initially released four years ago. It requires the Amazon Alexa application installed on a smartphone; they support Android and iOS, and if you buy one of their devices you need the app to set up the speaker. Key features: of course, it supports several languages. It supports a list of wake words, but it's a predefined list; you have some options, but just a few. There is a voice profile for a personalized experience. There are interesting new features like Alexa-to-Alexa calling: if you have several devices in different rooms, you can call a family member through Alexa instead of going to them. There are options even for making landline calls in the US, Canada and Mexico. There are two types of developer opportunities. The first one is to make your own devices integrating the Alexa SDK, and the other one is to create third-party applications that you can publish in a store. I think it's pretty similar to what we are familiar with from the world of mobile applications. In Amazon's terminology these applications are called skills, and basically they are applications without a user interface; the user interaction is through voice. Here is a list of some of the Amazon devices released on the market. This is the second generation of the Amazon Echo. 
This is the most affordable one; they recently made a modification, another version. And there are a lot of third-party devices with Alexa. As you can see, most of these companies are well known; they have integrated the SDK that provides Alexa into their end-consumer devices. I did a Google search to find teardown articles and see what's inside these commercially available speakers sold by Amazon. This information is about the first generation of the Echo and the Echo Dot. You can see that the hardware is a typical embedded device, with 4 gigabytes of internal storage, a Texas Instruments system-on-chip and 256 megabytes of RAM. So let's have a look at the Alexa Voice Service and the SDK. Basically, this is what you need in order to put Alexa into a device. How many of you have used it? Anyone? Just two people? All right. I have a lot of slides and I'll run through all of them; I've just tried to provide an overview and give you hints. The things that I'm showing here are based on public information. Keep in mind that both Alexa and Google Assistant, which we are reviewing, are black-box projects; they're not fully open source, although certain parts are open source. The information that you see here is also available in their documentation, and they have pretty good documentation. The SDK is easy to use, and it enables prototypers, makers and commercial manufacturers to integrate the Alexa Voice Service into their devices. The SDK itself is open source; it's available on GitHub under the Apache 2.0 license. As of a week ago, there were 69 commits, 24 releases and 23 contributors, and this is just for the SDK that is on GitHub. It's written in C++. It's compatible with Android, macOS, Windows and, of course, GNU/Linux distributions such as Ubuntu and Raspbian. This is how it works; this diagram is taken directly from the documentation on GitHub. 
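As an aside on the setup: authorizing the SDK's sample app is driven by a JSON configuration file holding the credentials from the Amazon developer console. A trimmed, hypothetical sketch (the field names follow the public AVS Device SDK documentation and may differ between releases; the values here are invented):

```json
{
  "deviceInfo": {
    "clientId": "amzn1.application-oa2-client.EXAMPLE",
    "productId": "my-prototype-speaker",
    "deviceSerialNumber": "123456"
  }
}
```

The sample app reads this file at startup and walks you through obtaining the refresh token mentioned below.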
So there are some third-party binaries, and the rest is source that you have to build. What's interesting here is how you detect the wake word. As I've already mentioned, Alexa supports several wake words, so instead of saying "Alexa" you can say "Computer" or another of the predefined wake words. And at the end, the request goes to the cloud, where the heavy lifting is done. Here are the steps you need to get this running on a Raspberry Pi. The Raspberry Pi is the 35-US-dollar computer; I assume everyone has one. Who has a Raspberry Pi, actually? Okay, perfect. I made a few examples with the Raspberry Pi just because it's so popular in the maker community, and I really love it. The steps are straightforward. You have to assemble the Raspberry Pi with a speaker and a mic, which is obvious. You have to install the Raspbian OS distribution; Raspbian is a Debian-based distribution provided by the Raspberry Pi Foundation. After that, you have to download the Alexa Device SDK, input your credentials, build it on the target, on the Raspberry Pi, get refresh tokens, and run the sample application. This URL provides the exact steps; you can follow it. Of course, there are a lot of third-party developer kits that you can buy that are optimized for using the SDK on them, from different manufacturers, primarily ARM and Intel. While I was doing the research, I saw that some people have reported successfully building the SDK on MIPS, but I'm not exactly sure how well it works. Keep in mind that if you want to make a commercial device that you, or the company you work for, would like to sell, there are certifications: you have to agree to Amazon's terms and agreements, and you have to go through product testing before going to market. So pretty much, if you're developing a device, you depend on Amazon. Now let's move to the next one: Google Assistant. Again, how many of you are using Google Assistant, the alternative? 
I have both Alexa and Google Assistant. Okay, a little bit more here than Alexa, I think. It has the same purpose, but it's developed by Google. It's a virtual assistant with voice commands, available for numerous platforms, including mobile and smart home devices. It was initially released a couple of years ago, so it's newer than Amazon Alexa. It requires the Google Home application installed on your smartphone; it's pretty much the same workflow as with Amazon to set up a device that comes with Google Assistant. The features: again, it supports multiple languages. There are several different voice options, so you can select which voice you prefer and use it. It also has Voice Match, which allows the Google Assistant to recognize your family members and provide personalized information depending on who's talking to it. There is a new feature called Google Duplex; you have probably seen the video that they presented in May. It's very exciting, because the demonstration was that a person asked Google Assistant to schedule an appointment at a restaurant or a hair salon, and Google Assistant did it for them. But of course, all these features are proprietary; they're not open source, so I cannot say too much about them, because I'm not involved in their development in any way. The developer opportunities here are the same as for Alexa. It's pretty much an ecosystem, and this is the ecosystem of Google. There is an opportunity to create embedded devices with the Google Assistant, and there is an opportunity to create third-party applications that you can publish, again without a graphical user interface but with voice interaction. The difference is that, unlike "skills", which is the term used by Alexa, Google decided to call them "actions". But it's pretty much the same idea. 
Once again, just a reminder that we will not focus on the skills and actions in this presentation. This is the SDK that we need in order to integrate Google Assistant into our device. Google provides a turnkey solution that is written in Python and really easy to install. It's compatible with ARMv7 or Intel x86_64 devices. There is also a Google Assistant Service that works over gRPC communication. This is a comparison that you can find on their page showing the differences between the library and the service; the examples that we will discuss here are based on the library. Of course, there are Google smart home speakers; Google Home is the brand name. This is the Google Home Mini, the most affordable smart speaker on the market; you can find it at pretty much the same price as the Echo Dot. They also recently released, at the beginning of this month, the Google Home Hub, which is a device with a display. So it's a kind of smart speaker with a graphical user interface that can show you additional information and provide a way to control the other Internet of Things devices integrated with Google Assistant in your home. There is a long list of big companies that have already created third-party devices with Google Assistant, like Panasonic and Sony. A lot of companies are interested in this field, and they're making devices. Again, I googled to see what's inside the smart speakers provided by Google. As you can see, the specifications are not so different from what we saw for the Amazon devices. Again, we have ARM systems-on-chip; these are Marvell Armada chips. Well, they have a little bit more RAM compared to the Amazon devices. I couldn't find the exact amount of RAM and internal storage for the Google Home Max; the Google Home Max is the high-end speaker with very good sound capabilities. So now, in a few steps, similar to what I showed for Alexa, I would like to share with you how we can integrate Google Assistant into a device. 
There are a few options to build such a device; in the showcases I'll share a bit more information. The first one is the Google AIY Voice Kit for Raspberry Pi. This was distributed with The MagPi magazine a year ago, and they have a second release now. It's the easiest way to get started, because it's a cardboard box for the Raspberry Pi with an add-on board, a HAT that you plug on top of your Raspberry Pi, with a mic, a speaker and a button; you can activate the Google Assistant by pressing the button. There is also an option, if you want to get your hands a little bit dirty and do some soldering, to build your own HAT for the Raspberry Pi using components from Adafruit. You can also do it with a breadboard; in the showcase, I'll show you how it looks if you do it this way. There are a lot of wires. These are options for quick-and-dirty prototyping on a low budget, for makers. Of course, we also have the Orange Pi. The Orange Pi is dirt cheap, so it's a good option for making a prototype on a low budget. There is the Orange Pi Zero Set 6, which includes the Orange Pi with a case and an expansion board that brings a mic; you also need to plug in a speaker to make this demo. The steps require working with the Google Cloud Platform console to enable the Google Assistant API. After that, you need to install the SDK on the device. This particular tutorial has been tested on the Orange Pi Zero; I was using Armbian. Armbian is a Debian-based distribution optimized for ARM devices, as the name suggests, and particularly for devices with Allwinner systems-on-chip. These are the steps, and they're pretty straightforward: you need to install Python, because the Google SDK is written in Python; after that, you have to install the SDK itself; and finally, you start the Google Assistant demo. 
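At its core, the demo started in the last step is a small Python event loop. Here is a sketch modeled on the published hotword sample; it assumes the `google-assistant-library` package (ARMv7/x86_64 only) and an OAuth credentials file are already in place, and the exact class and event names may differ between SDK releases:

```python
# Sketch of a Google Assistant hotword loop. Assumes
# `pip install google-assistant-library` and an OAuth credentials file
# generated during the Google Cloud Platform console setup. Names follow
# the published hotword sample and may change between SDK releases.
import json

def run_assistant(credentials_path, device_model_id="my-device-model"):
    # Imported lazily so this sketch stays loadable on machines without
    # the (Linux-only) library installed.
    import google.oauth2.credentials
    from google.assistant.library import Assistant
    from google.assistant.library.event import EventType

    with open(credentials_path) as f:
        credentials = google.oauth2.credentials.Credentials(
            token=None, **json.load(f))

    with Assistant(credentials, device_model_id) as assistant:
        for event in assistant.start():  # blocks, yielding assistant events
            if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
                print("Listening...")

# Example (path is the default used by google-oauthlib-tool):
# run_assistant("/home/pi/.config/google-oauthlib-tool/credentials.json")
```

Everything after the wake word — recording, cloud round trip, audio playback — is handled inside the library; the application only reacts to events.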
In order to make things a little bit simpler, if you want the device to be dedicated to this purpose, you can create a systemd service to make sure that the SDK starts automatically at boot. This is something that I wrote, and I'm just sharing it; if you're familiar with systemd, there is nothing interesting here. After writing the systemd service, you have to enable it so that it starts at boot. Keep in mind that if you want to make something more serious, not just a prototype with low-cost hardware like the one I showed you, you again need to apply for certification by Google. So again you're dependent on Google, and releasing a third-party device that uses the Google Assistant pretty much depends on this corporation. Here we come to the third option, which is called Mycroft. The interesting thing about Mycroft is that it is an entirely open-source project for a voice assistant. The idea is straightforward: something that's open source and capable of competing with Amazon Alexa or Google Assistant. How many of you have heard about Mycroft? Okay, great. How many of you have Mycroft devices? Okay, a few people. That's great. A few words about Mycroft. I'll spend a little bit more time on it, because it's open source and it's quite interesting. Everything is on GitHub; they have a GitHub organization for the company. The open-source license for the software is Apache 2.0. The hardware is also open source, and they have certified the hardware of the Mark 1, the commercial name of their first product, which was crowdfunded through Kickstarter and Indiegogo. It's also available on GitHub, and it has been certified by the Open Source Hardware Association. A few words about this, because I'm personally an open-source hardware enthusiast: this association checks that the product really is open-source hardware and is compliant with their requirements. 
The certification is free, and Mycroft passed it. It's a US company, so the UID for the open-source hardware certification starts with US; this is the number. Mycroft is a startup company, as I've already mentioned. They started with crowdfunding campaigns through Kickstarter and Indiegogo, and currently they're also seeking investments through another web platform, called StartEngine. Since this is an open-source project, I would like to make an overview of the pulse of the project: all repositories are on GitHub, so we can have a look at the statistics. The majority of the software is written in Python. Mycroft Core is the core component, the artificial intelligence on which Mycroft relies. There are almost 3,000 commits; these statistics were taken a few days ago, so they might be outdated already. There are eight contributors with more than 100 commits. This is interesting for me, because you know how open source works, and there are a lot of projects that are very dependent on their authors. It's good to see that there are quite a lot of people contributing to this project, and that they are contributing continuously. There is a skills repository; skills are the third-party applications, like the ones that we discussed for Amazon Alexa and Google Assistant. Here is a list of third-party skills that have already been developed. In this repository, we also have quite a lot of contributors; here the number is bigger, because people are listing their skills. As for the features of Mycroft: at the moment it's officially available only in English, but they're trying to translate it to different languages, and actually this is where we can help, by starting to translate. So far, I have zero contributions to Mycroft; I just wanted to make sure that I'm independent when giving this talk. One of the things on my to-do list, for fun, is to start contributing, and translations are the easiest way to help. 
Mycroft supports extending its functionality by developing software applications called skills. There is the Mycroft Skills Manager, msm, and the repository that we've already mentioned. There is optional device and account management in the cloud, called Mycroft Home; the good thing is that there is an option to use the device without Home. One thing that is of great concern to people, especially engineers like us, is privacy. How many of you have privacy concerns when using something like this, with things in the cloud? Okay, I think we have about 100% here. A lot of friends ask me: okay, it's so cool to have something like this, but can we run it entirely in-house, completely independent from the cloud and the internet? The answer is that, according to my research, Mycroft cannot yet run without internet connectivity. The biggest problem is the speech-to-text recognition, but most of the components are open source, and according to the documentation, their forums and their press releases, they're working in that direction. In the next few slides, we'll review the components of Mycroft. The first one is the wake word. The default wake word is "Hey Mycroft", but since this is an open-source project, there is a way to change it to whatever you need. This could even be a good opportunity if you need to integrate artificial intelligence with voice commands into business-to-business solutions: if you are putting this in a hotel or something like that, you might need to change the wake word to something that matches the business model of the company you are working for. Precise is the default wake word listener. It was introduced this spring, in March. It's written in Python and available in the GitHub repositories of Mycroft. Before that, they were using PocketSphinx, and PocketSphinx is still available as an alternative in Mycroft, but Precise is the default at the moment. 
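For illustration, here is roughly how a custom wake word can be configured in mycroft.conf for the PocketSphinx listener. Treat this as a hedged sketch: the key names follow the Mycroft documentation, and the phoneme string and threshold are only examples:

```json
{
  "listener": { "wake_word": "hey computer" },
  "hotwords": {
    "hey computer": {
      "module": "pocketsphinx",
      "phonemes": "HH EY . K AH M P Y UW T ER .",
      "threshold": "1e-90"
    }
  }
}
```

The phonemes describe how the new phrase is pronounced, and the threshold tunes how eagerly the listener triggers, so both usually need some experimentation.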
Here comes the biggest challenge for an open-source solution: the speech-to-text engines. Mycroft supports a number of speech-to-text engines, and the default one is Google. That's why, at the moment, you cannot run Mycroft in-house without an internet connection: it needs a connection to Google. But there's a nice post on their website where you can have a look at the technical details; they explain the measures that they have taken in order to guarantee anonymity and address privacy concerns. There are also a couple of other proprietary solutions that you can select, but what's interesting is DeepSpeech. This is a new project that is being developed by Mozilla. It's an open-source speech-to-text engine, available on GitHub under the Mozilla Public License. It's written in several languages: C++, Python and, of course, shell scripts. It uses TensorFlow to simplify the implementation. It's being heavily developed, but it's still not the default speech-to-text engine in Mycroft. I'm not sure what the current state is; I didn't have enough time to go deeper in my research and see how usable it is. The problem with speech-to-text recognition is the accuracy. If you have 80% accuracy, this is not enough, because if you say "Hey Mycroft, turn on the lights" and instead of "lights" it hears "right", it doesn't make sense; it won't execute the command that you need. With the text-to-speech engines, the situation is better. They have an open-source solution called Mimic, as well as a bunch of other solutions that are available. Mimic is a fast, lightweight text-to-speech engine. It's been developed by Mycroft and VocaliD, and it's based on an existing engine, Flite. It's available on GitHub, it's written in C, and it works on several platforms including, of course, Linux, macOS and Windows. Mycroft have released a couple of devices through crowdfunding campaigns, and it's important to say that they've managed to ship the first one. 
It took them some time; it's very difficult to run a crowdfunding campaign for something that complex and then deliver it, and it's great that they have done it. They had a new crowdfunding campaign for the Mark II, which, as far as I remember, is expected to ship in December. And of course, there is the option to make your own device using a Raspberry Pi. The greatest thing about the Raspberry Pi is that with this 35-dollar computer, you can try all the platforms that we are reviewing here today. For the moment, according to the information that I read on their website, Mycroft supports the Raspberry Pi 2 and 3B, and there is work in progress for the 3B+. They have a distribution which is based on Raspbian, which is based on Debian; the name of the distribution is Picroft. So, we have a few minutes for showcases, and after that we've got two mics here, so we can have a discussion. Let's start with the Google Voice Kit. I'm starting with it because the way they're using a cardboard box to make the speaker is particularly impressive. It's a do-it-yourself kit that takes approximately one to two hours to assemble and get working. There are two versions: the first version was distributed with The MagPi magazine a year ago, and now they have a second version that was released this year. I have a couple of the first versions. This is how it looks, and when you assemble it, this is what you get. The first version was distributed without a Raspberry Pi, so everything else was included in the box. If you are feeling more adventurous, you can do something like this. The photo isn't perfect, because there are too many cables, but this is breadboard prototyping with Adafruit add-on boards. The first one is a microphone; there are two microphones for a stereo effect, which is especially useful if you're using the Google Assistant SDK. 
This is a great advantage. And this is a very simple Class-D amplifier that allows you to connect a small speaker. I've already mentioned the Orange Pi Zero; this is the third showcase, another great device for makers. The price is approximately, I think, 20 to 25 US dollars, I might be wrong, but it's very, very cheap, and it's something that you get as a whole kit. You still need to add your own speaker, but the case and the expansion boards are included in this set. Unfortunately, the Orange Pi is not open-source hardware, but at least it's cheap. There is excellent Armbian support; Armbian provides both distributions with a mainline kernel and distributions with the notorious kernel version 3.4 for Allwinner devices, also known as linux-sunxi. I would like to show you a video here, just a second. "Okay, Google, how are you?" "Thanks for asking. What can I help you with?" "Okay, Google, what is my name?" "Okay, Google, what is the weather forecast?" "There will be showers, with a forecast high of 23 and a low of 11 degrees Celsius." "Okay, Google, thank you very much." This is a demonstration that I recorded at home; I didn't want to take too many risks bringing it here, because it requires an internet connection. Following the steps that I already showed you during the presentation for Google Assistant, I integrated it on the Orange Pi. There was a little bit of annoying background noise; it's because of the cheap speaker that I was using. Now a few words about Home Assistant. I really enjoy this open-source platform for home automation; you can integrate a lot of things. You can also integrate Alexa; there is excellent support specifically for Alexa. And there is one workaround that allows you to configure Home Assistant to present itself as a Philips Hue bridge. Since Alexa has built-in Philips Hue support, you can then turn on and off all the devices that you have at home. 
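For reference, that Philips Hue trick is Home Assistant's emulated_hue component. A minimal configuration.yaml sketch (option names per the Home Assistant documentation at the time; defaults may have changed in newer releases):

```yaml
# Make Home Assistant announce itself as a Philips Hue bridge, so that
# Alexa's built-in Hue support can discover the exposed entities.
emulated_hue:
  listen_port: 80          # Alexa expects the Hue bridge on port 80
  expose_by_default: true  # expose supported entities without per-entity config
```

After restarting Home Assistant, asking Alexa to discover devices should list the exposed entities as Hue lights.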
This is something that I already presented at FOSDEM at the beginning of the year, but I would like to reuse the demo to show you how you can turn a device on and off. Actually, it's a HAT on top of a Raspberry Pi, and on the Raspberry Pi we have Home Assistant running. Another video. This is the user interface of Home Assistant; it's free and open-source software. This is the Alexa, and this is the Raspberry Pi; Home Assistant is running here, and I'm using MQTT to communicate between all these things. "Alexa, okay. Hi. Alexa, turn on ANAVI Light pHAT. Alexa, turn off ANAVI Light pHAT." I'm trying to be polite with these things, because one day they might rule the world, and I want to survive. So this is how it works: we have an MQTT broker for this particular setup. We're a little bit ahead of time, so let's go to the conclusions. There are just a few things that I would like you to remember from this presentation. As I said, it was not a deep dive into any of these technologies, but based on what I see, there is a huge demand on the market for this type of device. There are a lot of opportunities for integration with the Internet of Things and embedded devices, in different forms: third-party applications such as skills, or integrating the voice assistant within consumer electronics such as TVs, refrigerators, or even toasters, who knows. At the moment, the market leaders are definitely Amazon and Google. They provide turnkey solutions if you would like to try them out on a maker device such as the Raspberry Pi, but if you want to make a commercial device, you need to go through a certification. The open-source alternative right now is Mycroft. It's a very interesting project, although it's definitely tough for it to compete on the market with huge corporations like Amazon and Google. Unfortunately, in practice, at the moment all the solutions reviewed here require access to the internet and to the cloud to work successfully. Thank you very much. 
These are a few useful links, including some links to YouTube for more presentations. Right after the talk, I'll upload the latest version of the slides to SlideShare. Thank you very much. And you have the mic for questions. Any questions? "So you showed two demos: one was with Alexa and the second one was with Google Assistant. But have you tried Mycroft?" I haven't recorded a video with Mycroft, but yes, there is this Picroft image that allows you to very quickly get it running on a Raspberry Pi. "My general doubt is, is it working or not?" Yeah, it's working. It's working. "So the quality is good enough that you can build something on top of it? Because we know that Google Assistant works, so it's easy to deploy something." What do you mean, something like a commercial device? "Not a commercial device, but your own project, let's say." Yes. I mean, for a hobby project, it's good enough. "Okay. For commercial devices, you need to spend more effort on it." Yes. "Okay. Thank you for a great talk. Since you have evaluated the SDKs for Google, Alexa and Mycroft, do you know what the smallest hardware requirements are? The Raspberry Pi is a small computer, but it's quite a formidable computer after all. So can you run them on maybe 100 megabytes of RAM, a low-end CPU, something like that?" That's an excellent question, thank you very much for it. I wanted to show you an exact answer in my slides, but I was unable to, because I couldn't find anyone listing the minimum system requirements, as they do for video games, for example. For 100 megabytes of RAM, I'm pretty sure it's not possible. We can go back to the slides where we had the teardown specifications of the commercial devices available out there. Here is the one for... "Yeah, sorry to interrupt. Is it right that the Google Home Mini has like four gigs? Is this true?" 
Well, I'm not sure if it's in all devices, but yeah, this is what I saw in the teardown. And actually, at home I have both the Google Home and the Home Mini, and I have the impression that the Google Home Mini is a little bit faster, but I might be wrong, because I had already seen this. And I didn't have the courage to tear down my own Google Home Mini; I think it's an irreversible process once you tear it down. "Okay, thank you very much. Maybe one last question. In the near future, do you think it will be possible to run all this stuff locally on the device? It would have to be quite a bit more powerful, because you need to run text-to-speech, all the intelligent AI stuff, everything like that. What would we need to make it happen? Maybe some kind of AI-specific chip, like Apple does, or something like that? Do you know if there is some progress in that direction?" That's an excellent question again, and that's totally true: if you want to run everything in-house on the device without relying on the cloud, this would require a lot more computational power. I know that there are a lot of companies making chips specifically for these purposes, but I'm not familiar with the details of any of these chips, so I cannot give any recommendations. "Okay, thank you very much." Thank you very much, too. Are there other questions? "Yes, thanks for the talk. So the Google Home Mini has 512 megs of RAM, actually. And the current requirements for Google: it used to be dual-core, and now they are moving to quad-core as the minimal system requirement; the requirement for RAM is 512, and for NAND flash it's also 512, the minimum requirement for Google products at least." Okay, thank you very much for this correction, and sorry for not providing very accurate data on this particular slide. Are there any other questions? "Thank you for the talk. 
As you said, privacy is a major concern, and that's why I don't have an Echo or Google devices at home yet, because how can I be sure they are not listening? Of course, if I make my own product, I can guarantee that this doesn't happen, but what is your perspective on this? How can you make sure that Amazon and Google are not listening to what you say at home?" Yeah, it's an excellent question. I think this is a problem for pretty much all of us, and with a corporation, you can only trust them, unless it's open source. So my personal opinion is that I would like to see something open source succeeding, whether it's Mycroft or something else, and based on the research that I did and the things that I shared with you, I'm pretty optimistic about Mycroft. Of course, it's another question whether the business model will be good enough and sustainable to survive in this market. Okay, thank you very much. Thank you for joining.