Good morning, everyone, and welcome to the 7:30 to 8:30 AM session of the 2021 OpenSimulator Community Conference, Day 2. In this session, we are pleased to introduce a panel discussion entitled SceneGate, DreamGate, Docker, and Echo Voice R&D. Our two panelists are Lisa Laxton and Frank Rulof. Lisa is the R&D visionary and CEO of the OpenSimulator community-focused foundation, the Infinite Metaverse Alliance. She is also president of Laxton Consulting LLC, with experience providing various virtual world technology solutions for education, research, business, and defense clients. Frank is a senior systems engineer at Thales, Netherlands, with expertise in training and simulation. He is leading the research and innovation activities related to OpenSimulator technology within the global Thales company, using multiple OpenSimulator grids focused on user needs. I want to remind everybody to check out our website, conference.opensimulator.org. You can see more of the speaker bios that we have here today and details of upcoming sessions. This session is being live streamed and recorded, so if you have questions or comments during the session, you can tweet at @OpenSimCC with the hashtag #OSCC21. I want to welcome everybody, and let's start our session. I'm going to pass it off to Lisa and Frank. Hello, and thanks for that intro, and to the conference organizers for giving us time to present. I know there's been a lot of great stuff presented, and it was difficult to get this time. There's a lot of competition, but it's all good. Frank and I are both happy to be here with all of you, and congratulations on another great annual conference. IMA has a strategic partnership with Thales Group to work on open source projects together, and we very much appreciate contributions from Thales and its interns. Thank you, Frank, for being their champion.
Today, we want to share our progress during this panel, and we hope you will stay with us for the next session to celebrate the work of the interns on the future viewer. We should have time for questions before that next session. Currently, we have six open source projects that are active, so let's get started. SceneGate is our primary viewer project, dedicated for use with OpenSimulator only. It is designed for improved accessibility, usability, security, and onboarding. Based on user needs, spin-off work in this project includes a future viewer, SceneGate 2.0, which is covered in the next session, and what we call DreamGate. DreamGate is a custom Firestorm installation to address use cases not currently addressed by our other viewer projects. Echo Voice is a major development effort designed to deliver a hypergrid voice solution to the community. Helios encompasses server R&D associated with deploying OpenSimulator using virtual machine and Docker implementations. ImaBox evolved from this R&D. Selene is a broad project that includes community hypergrid efforts like the OpenSim work and radio stream, the 24/7 hypergrid list, the hypergrid DJ radio board, the parcel visitor board, and many more. Not counting the actual work by multiple hypergrid community contributors, we've spent around 4,000 hours meeting and collaborating every week for the last five years on many projects. Yes, IMA just had its five-year anniversary. I wish we had more time to celebrate and highlight contributors; the list is long. Thank you for all that you do for each other. Now, Eos is our newest project. It involves OpenSimulator 0.9 software R&D implemented using Docker research from Helios, and it is also related to ImaBox. There is actually a method to our madness, but our focus is on the hypergrid community of avatars. Periodically, we conduct user surveys to help us listen to the voice of the community.
We use these results as one way the community's voice influences our development priorities. Analysis of the results from our July 2021 survey revealed some interesting information we would like to share with all of you, and we'll provide a link to the report with full details later. We asked the hypergrid community 30 questions of interest to creators, merchants, and grid owners. Amazingly, we had 111 responses; thanks to everyone who participated. The margin of error was around 9%, which is similar to other hypergrid surveys. We sought to answer 10 research questions. First, how are users engaged with respect to system and mesh avatars? The data implies 83% of users use system avatars for registered and alt accounts. Of the 17% using mesh avatars, slightly more than half of them are using the Bakes on Mesh feature, and animated mesh parts are the least popular. I think that will grow. The takeaway is that viewer appearance tools are widely used. Two, what do users look for or acquire in the existing marketplaces? This chart shows you a wide variety of work, and it is actually fairly well balanced, but clothing, hair, and accessories are the top three items. Of those, mesh clothing and accessories are the most popular. Even though system avatars are more popular, the clothing tends to be mesh. So the takeaway from that is that viewer outfit tools are likely widely used. Three, what percentage of users create avatars and associated virtual items? Here we're talking about the creation, not the marketing. About 46% create avatar accessories, followed by 39% who create clothing, and 26 to 30% create animations, gestures, and avatar sounds. We can't forget about those key components, because that's what provides a certain level of immersion. So the takeaway is that the viewer build and upload tools are likely widely used. Four, what is the impact of virtual environment settings, immersive features, or accessibility options?
Less than 22% of users prefer the new EEP in regions or parcels that they own. Roughly 60% either disallow or change their settings in parcels or regions that they visit. So most users adjust the time of day and listen to music and sounds for a more immersive experience. The takeaway is: know your audience when you design your world. How you design it may not be how they're experiencing it. Five, are language translators, mouselook, and voice commonly used aspects of the user experience? Nearly a third of users utilize a translator. We are global, after all. Slightly less than half use mouselook, and most users engage using spatial voice, on average two hours a day. So the takeaway there is that voice is a vital component of the user experience. Six, what percentage of users create content other than avatar-related content? Roughly 85% of users create prim content using viewer tools. Close to 56% create scripts implemented using the viewer tools. The takeaway is that build and import tools are widely used. Seven, what activities around the hyperverse are the most common? The top three are exploring, building, and creating, respectively. This may be different from the user experience on other virtual world platforms, including Second Life. Other activities engaged in by more than half of users are music and social events, socializing, relaxing, and shopping. Big takeaway: avatars around the hyperverse are social. This is a really key component of the OpenSim platform. Eight, does the average user have accounts on more than one grid? Yes, the average number of accounts is roughly 2.6, but as you can see from the chart, almost half have accounts on five or more grids. So that's a big deal. The takeaway is that most hypergrid users have more than one virtual home. Nine, can we estimate the actual number of unique users from reported data? We extrapolated the data, compared that to the application of average accounts per user, and it was equivalent.
So therefore we can estimate the total number of unique users is around 16,000. Ten, based on grid-reported relevant data, what is the size of the hyperverse market? That turns out to be around 36,000 hypergrid avatars. So the takeaway there is that there are two distinct markets that exist on the hypergrid, so creators and grid owners, take note of that. There were several R&D drivers, and thank you, Maria, for the reported data from Hypergrid Business. I want to make sure you all know that Maria actually does gather a lot of great data, and it's very useful. Thank you. The summary takeaway is that fully featured client viewers are needed despite calls for future limited mobile or browser development. And while we do have this in mind for the development team, current community needs take priority. We're always interested in more help from volunteers for new and existing projects, so get in touch with either of us if you would like to help. Now, we adjusted the roadmap for SceneGate, changing priorities for avatar-focused features and browser updates to meet immediate needs for live streaming and support for events management. We spun off DreamGate early. Due to community need for a voice solution replacement, we launched an Echo Voice funding campaign to help speed up development. So let's review our progress on SceneGate, DreamGate, ImaBox, and Echo Voice before we listen to the great work the interns have been doing on the future viewer, SceneGate 2.0. For SceneGate, we built upon the work of previous efforts to start building new versions. Voice and sound bugs in the Linux version have been resolved. However, the build was designed for Ubuntu 18.04 only; work is in progress to update that for Ubuntu 20.04. Testers had difficulty with the installation of the earlier release, so we will take a package approach with this one. The untested Mac version with third-party library updates was not a simple build once we began CEF updates.
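As a quick editorial aside, the unique-user extrapolation Lisa describes can be sketched as back-of-the-envelope arithmetic. The figures below are the approximate numbers quoted in the talk, not exact survey data:

```python
# Back-of-the-envelope check of the survey extrapolation described in the talk.
# All figures are approximations quoted during the session, not exact survey data.

reported_avatar_accounts = 36_000   # hypergrid avatar accounts from grid-reported data
avg_accounts_per_user = 2.6         # average accounts per user, from the survey

# If each user holds ~2.6 accounts, the unique-user population is roughly:
estimated_unique_users = reported_avatar_accounts / avg_accounts_per_user

print(round(estimated_unique_users))  # → 13846
```

This lands in the same range as the roughly 16,000 unique users quoted, which is why the talk treats the avatar-account total and the unique-user estimate as two distinct market sizes.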
Spaces in directory names had to be found and escaped manually, along with the new autobuild. It now builds on macOS Big Sur even though it was built originally on Catalina, but it does crash at runtime, so troubleshooting is in progress. A new Windows version with third-party library and CEF updates is planned after the Linux and Mac versions are ready. So while work on SceneGate 2.0 is underway, which is what the next session is about, we will continue work and support on the 1.x generation. We hope to have a public beta soon for all three new versions. Following test and release, we will start adding requested community features. For those interested in participating in the beta testing, please contact me. I mentioned earlier that we spun off what we call DreamGate. This image here will show you a little bit about what's different. This is a custom installer for the latest version of Firestorm, for OpenSimulator only. There are a number of reasons we did this, but it's primarily driven by use cases where SceneGate is not a good fit. We found the popular Firestorm viewer to be quite capable, but new users on moderate systems had difficulty. So we made some changes to meet the needs of these users and used our EV cert to provide Windows installers that would not be incorrectly flagged as dangerous to install. Testing of DreamGate is nearly complete, so the repos will be updated soon and it will be made available for download. We have been having it tested by some of our customers and their users, and the response is very positive. Now, once Beq is able to add the voice patch from the Alchemy and SceneGate code bases to Firestorm, we will make a new version of DreamGate. We mentioned a project before called ImaBox. We've not released that yet. We completed prior R&D using multiple VMs and proceeded with research running OpenSimulator using Docker. Over the past year, extensive R&D has been underway regarding various backend implementations.
Part of this work involved finding better ways to test and monitor performance across multiple grids and regions at the same time. This image shows the performance of a server with 47 Docker containers across five operational grids running our version of 0.9 that we call Eos. During the six-hour period, the real-time monitoring you can see on this chart shows the impact of work within one grid, where we were making changes to scripts to reduce the script load. The multiple containers are shown on the graphs with different line colors, and the names of those regions were blurred out intentionally. Now, grids using this Docker-based backend proved to be stable and able to support high-capacity events for avatars. We intend to use the results of this successful R&D to provide a community version called ImaBox. This is a high-level diagram of what we decided to put in the box. We'll provide instructions for Linux installation and the image in early 2022. ImaBox is platform-agnostic in the sense that headless Linux can be run on any operating system. The last major project to review before the interns take the stage is Echo Voice. Evaluation and design were completed in 2020. However, funded projects take priority for the team, because we all have bills to pay. Recently, Vivox announced it would discontinue the current V3/V4 voice offering by the end of 2021. Upgrading to V5, or version 5, would require all the viewer teams to modify code to support it, and the community would continue to be at the mercy of Unity, who owns Vivox now. So Seth designed Echo Voice to offer an open-source freeware alternative that could bridge multiple voice solutions, including Vivox. This would be the only way to provide a seamless hypergrid experience and choice for all region owners who want to offer voice.
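As an editorial aside, the kind of per-container monitoring described above can be approximated with very little tooling. The sketch below parses lines in the shape produced by Docker's `docker stats --no-stream --format "{{.Name}} {{.CPUPerc}}"` command; the sample lines and container names are hypothetical, since the talk does not describe the exact monitoring setup:

```python
# Minimal sketch of per-container CPU monitoring across multiple grids,
# in the spirit of the Docker R&D described in the talk. The sample data
# mimics `docker stats --no-stream --format "{{.Name}} {{.CPUPerc}}"`;
# container names here are hypothetical.

sample_stats = """\
eos-grid1-region-a 12.5%
eos-grid1-region-b 3.1%
eos-grid2-region-c 0.4%
"""

def parse_cpu(stats_text):
    """Return {container_name: cpu_percent} from docker-stats-style lines."""
    usage = {}
    for line in stats_text.strip().splitlines():
        name, cpu = line.rsplit(None, 1)   # split off the trailing CPU column
        usage[name] = float(cpu.rstrip("%"))
    return usage

usage = parse_cpu(sample_stats)
# Flag containers above a CPU threshold, e.g. regions with heavy script load.
hot = [name for name, cpu in usage.items() if cpu > 10.0]
print(hot)  # → ['eos-grid1-region-a']
```

In a live setup, the same parsing could run on a timer and feed a per-container time series, which is essentially what the multi-line chart in the slide shows.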
Right now, when users are jumping from region to region across the hypergrid, they may or may not have voice, depending on whether those regions are running voice. But imagine what will happen if you've got some grids implementing Jitsi, some grids implementing WebRTC, some grids on version 5, some on High Fidelity, some on Agora, and so on. The impact on users is negative, because they can't seamlessly just have voice as they travel. Now, to accelerate the development of Echo Voice and meet the Unity-imposed deadline, it would have to be funded, simply because there is a tremendous amount of work to be done. Otherwise, the project will continue, but only when volunteer time is available. This chart shows you the result of our evaluation of multiple voice solutions and our criteria. As noted in the Echo Voice blog history, Vcomm's Mumble solution is obsolete. Now, Seth did give me a note this morning about the Mumble project: we've been waiting for 1.4 to come out, which has a really suitable API, and the 1.4 release candidate is out, so the actual 1.4 release is coming soon. We're very happy about that. Now, considering the alternatives, a ground-up replacement using Mumble is needed, and that is what we call Echo Voice. And given the recently expected decision by Linden Lab to move away from Vivox and use High Fidelity, we did the math. We also evaluated a competitor called Agora, which is available on Unity platforms like Vivox. Now, this slide defines a typical use case, which might be a social grid, an education region, et cetera. The constant is defined by the parts we multiply to get the estimated participant minutes. This chart shows the result when we use the formula to see what the monthly cost would be for each solution, and there is quite a bit of difference between them. Now, for Linden Lab and others who are revenue-earning, large companies, the cost is not bad.
They're absorbing that cost because they're earning revenue on other service lines or products. But for the hypergrid, we believe this is not feasible, and here's why. Most grids in the hyperverse are non-revenue-earning; that's a given. Close to 40% of virtual world users in 2015 reported earning under 10,000 US dollars annually, and with the pandemic, incomes may be even lower now. Event donations average around 100 Gloebits or less, which is basically one US dollar. Using High Fidelity, an average single weekly event of five users can cost $12.50 a month, which is the equivalent of 3,125 Gloebits. So it's unlikely that community donations are going to support that level of voice offering on High Fidelity. Now, if you use Agora, you get 10,000 free minutes a month, but if you have three of these average events in a month, you've already exceeded those 10,000 minutes. So we therefore conclude that a per-minute paid voice solution is just not feasible for the hypergrid community at large. So here are the key standouts for Echo Voice. It works with all the supported viewers. It has no viewer development cost. It includes a bridge for regions using Vivox voice solutions so users can have a seamless experience; if we have grids that are going to use other voice solutions, we'll have to evaluate what would be necessary to add them to that bridge. It works on all operating systems, which is something Vivox does not offer: version 5 no longer supports Linux at all. With Agora, it's possible to support Linux; we have spoken with them about that. Echo Voice has spatial voice features. FreeSWITCH, while it is a viable solution for the short term, uses a low-end codec with fairly low quality. The backward support from Vivox that allowed FreeSWITCH to work in OpenSim may go away with version 5; we don't know about that. And FreeSWITCH is not spatial, and it would take a lot of code to add that feature.
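As an editorial aside, the participant-minute arithmetic behind these cost comparisons can be written out directly. The event profile and the per-minute rate below are illustrative assumptions, not the slide's exact figures; the flat $11.50 hosting figure is the Mumble number quoted later in the session:

```python
# Sketch of the participant-minute cost model described in the talk.
# The event profile and the per-minute rate are illustrative assumptions;
# the slide's exact figures are not reproduced here.

def participant_minutes(users, hours_per_event, events_per_month):
    """Total voice minutes consumed across all participants in a month."""
    return users * hours_per_event * 60 * events_per_month

def per_minute_cost(minutes, rate_usd_per_minute, free_minutes=0):
    """Monthly cost under a metered plan with an optional free tier."""
    return max(0, minutes - free_minutes) * rate_usd_per_minute

# Hypothetical small social event: 5 users, 2 hours, weekly.
minutes = participant_minutes(users=5, hours_per_event=2, events_per_month=4)
print(minutes)  # → 2400

# Metered pricing scales with usage; a flat self-hosted server does not.
metered = per_minute_cost(minutes, rate_usd_per_minute=0.01)  # assumed rate
flat = 11.50  # flat Mumble hosting figure quoted in the session
print(metered, flat)  # → 24.0 11.5
```

The structural point is the one the panel makes: any per-minute plan grows linearly with community activity, while a flat self-hosted server does not, which is why metered voice scales badly for donation-funded grids.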
The other thing that FreeSWITCH does not have is any kind of indicator over the avatar's head, and that's a problem when you're trying to figure out who's talking, especially if you have accessibility issues. Echo Voice can be self-hosted for HIPAA and FERPA compliance and other security concerns. So medical environments, advocacy, counseling, and school systems that have FERPA protections in place need that self-hosting capability. Neither Vivox nor Agora can be self-hosted. Echo Voice will be open-source freeware for the hypergrid community. So what have we done so far? That's one question that's been asked of us. We spent about 2,000 engineering hours in 2019 assessing Vcomm's Mumble. We were unable to update that because of deprecated libraries. We spent time designing a new solution and testing a prototype in 2020. All of that was funded by Frank's company and my own. Our estimate is 1,310 total build hours left to complete that work. We will fund 260 of those hours jointly, and that means we need funding for 730 hours for the test version and an additional 320 for the release version. Now, all of that is detailed in the blog. To get things moving, we launched a crowdfunding campaign and set up that project blog, and we have had some questions since the launch. Thank you, Hypergrid Business, for publishing an article to help us get the word out. A big question was: why did we use GoFundMe and not something like Kickstarter? It's because GoFundMe doesn't take a cut; they don't take any of that money. The funds can be disbursed during ongoing development to keep the developers paid. And it's not all-or-nothing crowdfunding, so we don't have to shut it off if we don't reach the goal. So then the next question is: what happens if we don't reach the goal? Well, development continues, but at a lower priority, and all funded open source code would be available in the open source repository. But the test version timeline would be unknown without that funding.
Now, here are some links to help you learn more, or to help if you can. And if you have technical questions about Echo Voice, I can forward them to the designer of the project, who is working on a deadline and couldn't be here today: that's Seth Nygard. Frank, did I miss anything we wanted to share about the project before we open for questions? I think your mic is muted, Frank. All right, so let's go ahead and open for questions. We have a little time here. Did anyone have any questions about any of those projects? I know it was a lot of information to absorb, but we had a short time and a lot of information we had to share. Here's a question from Beth. She said, Lisa, could someone put the links into chat, please? Now, I think we have them on the website under the session, correct? Because you have your slides up for the session, correct? The slides are up on SlideShare right now, but I'm not sure that's clickable on SlideShare myself. Okay. If you want, you can put them in the chat for everybody to grab. I'll see what I can do. Yeah, they are in the SlideShare, NeoBird. And I was going to ask Lisa, because the whole voice concept for me — I have tried to use voice since voice was able to be used everywhere, and it is amazing to me, the by-minute charges that people do, right? Right. How do they even make that a viable thing? But are there ways that we should go about this to help your cause with Echo Voice, if you will? Yeah, well, that's why the bottom link on the slide is to the campaign. The blog also has a link to that. I was just trying to get to my browser so I could bring that up for you. So if you hit that link to the blog, you will also find a blog post about the GoFundMe campaign, and that will lead you to the GoFundMe link at the bottom. We did make a video for the campaign, and the comment we had about the video was that it didn't have enough technical information in it.
Well, the whole point of the video is to talk about the social impact, the reasons why we're raising funds and the reason why it's so important, because we want to reach funding sources outside of the OpenSim community and not necessarily just within the community. But as far as technical documents, those, as they are developed and the project proceeds, will be on the project blog, and certainly more than just the one diagram that we're showing here. Okay. And, Frank, are you there? I'm not sure why we lost Frank's voice, but are there any other questions that anybody might have about any of these projects? Here's a question: Vivox offered small users the service for free. Is there any indication that HiFi would do that too, now that they have landed this big contract with Second Life? Any info on how Vivox even did that? We have no idea what Philip Rosedale, who owns High Fidelity, intends. I did try to contact him about that, but there was no interest. And it's difficult. I understand he's in the business of making money. With Vivox, they're in the business of making money too. The difference is Philip's business model is based on the minute, and Vivox's business model is based on concurrent users. So with Vivox, even if you go to version 5, you get 5,000 concurrent users for free, just like you did with the older one, but then you have to pay for the next 5,000 users at the next tier level. Now, for companies like Linden Lab, that's not a problem. They're making revenue and they can afford to offer that voice feature for free to their users. But in OpenSim, it's unlikely that anybody would hit that 5,000 CCU. I don't think it has happened over the last eight years or so. But the difference here is that Echo Voice does not use Vivox. We will eventually eliminate SLVoice completely. We don't need that. We don't know whether Linden Lab, who owns SLVoice, is going to allow Vivox to continue to use their SDK for connectivity.
And Echo Voice does not require any of the viewer developers to make any changes, because we'll support all of the viewers. If you go to version 5, viewers have to be updated to take care of that. So there are a lot of little things in play, and that's why it took a lot of effort to research all of the different options and try to establish criteria for selection. Can you hear me? Yay, we hear Frank back. Welcome back, Frank. Thank you. I'm so sorry. Apparently I lost audio for a moment. I got the advice to get out of Skype and log in again, and now it works. So I don't know what happened. Did you have anything you wanted to share about our progress, Frank, before we head over to the interns? No, not directly, except that I think one of the major points is, and we discussed this before, that if you do work on a voluntary basis, which people do, and which produces amazing things, it's still dependent on their capacity to support this. In other words, you have only so much time, so many hours in a week that you can spend to do something for the open source community. Yes, it's, I think, very important, and we have the same problem with COVID and other things, that if you want to speed up things, you need funding somehow. And that's the idea we had with this crowdfunding: to make sure that we maybe could speed up the development, also in the light of developments with Vivox going to a different version, which would then create, for a lot of hypergrid users, yes, not a very nice situation. But still we want to support it and do everything we can to do it as quickly as we can. So yeah, the only thing I saw is that I had an introduction sheet before this one, where I wanted to give a short introduction on the future viewer, which is what the interns are going to talk about. Yeah, that slide is up now. Yeah, so some background: we at Thales have programs for interns.
They can then work and learn on various tracks on the job, working in a company as part of their education. In my case, with the work I'm doing for virtual worlds, for years I have had interns; some come from the Netherlands, others from France — in this case, from France. And we started in 2019 to look at a different way of looking at the viewer. First of all, the current viewers are not suited to, for instance, supporting headsets — a 3D headset — in the right way. And that is because the loop over the viewer and the server makes it impossible to get the right frequency for the headsets. So we are thinking of a number of steps and a roadmap to make a new viewer out of the SceneGate viewer. The first step, and that's also the step we have been busy with for the last two years and continue with — the current interns will go to September 2022 — is looking at taking the rendering part out of the viewer and replacing it with something else. The next step would be: can we do different renderers in the same viewer, so that we also can support 3D headsets if we want to? So that is the primary focus that we had in 2019 and 2020, and up to September 2022: to create a new viewer with an updated rendering part. And later on we want to see if we can change rendering parts, and also make the viewer more modular in the sense of plugins, and make it tailorable to the job the viewer is to perform. In other words, have different voice-over-IP sources, have your menus tailored to what you want to do with the viewer inside a certain job, say training, so that training menus can appear in the viewer, et cetera, et cetera. So that is what the total roadmap looks like, but currently we are working on this first plan, where you see the rounded red square, which is separating the rendering part from the current viewer, so you have alternatives for new viewers, for new rendering engines, and we try to make a first working prototype of it.
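As an editorial aside, the frequency problem Frank describes — a combined viewer/server loop that cannot hit headset refresh rates — is easier to see with a toy decoupled-loop sketch. The rates and structure here are purely illustrative, not SceneGate's actual architecture:

```python
# Toy illustration of decoupling rendering from the network/server loop,
# in the spirit of the roadmap described in the talk. The rates are
# illustrative assumptions, not SceneGate's actual design.

RENDER_HZ = 90    # roughly what a 3D headset needs
NETWORK_HZ = 20   # a typical server update rate

def simulate(seconds):
    """Count the output of each loop when they run independently."""
    render_frames = seconds * RENDER_HZ
    network_updates = seconds * NETWORK_HZ
    # Each rendered frame reuses (or interpolates) the latest network state,
    # so the renderer is not throttled down to the server's pace.
    frames_per_update = render_frames / network_updates
    return render_frames, network_updates, frames_per_update

frames, updates, ratio = simulate(seconds=1)
print(frames, updates, ratio)  # → 90 20 4.5
```

If the render loop were locked to the server loop, the headset would only ever see 20 frames per second; separating the rendering part lets it run at its own rate against the latest known scene state.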
Yeah, and they have selected Godot, is that correct? Yeah. As the initial game engine to replace the Linden Lab rendering. Yeah. This is line-by-line work that these interns have been doing, and in the process they've also found some code in there that's not even needed, that's not used, and so in that sense, SceneGate 2.0 will hopefully have a lighter load on the user's system, because it will remove some of the bloated code. Yes, and it's also not the intention that SceneGate will be compatible with Second Life in the future. Correct. It will be compatible with OpenSim, to make it fully efficient and connected to OpenSimulator, and not to try to keep the balance between Second Life and OpenSim. It's purely an OpenSim viewer, which makes life a lot easier. Well, and it also adds to the security effort, because while we have had an effort where a lot of third-party libraries have been updated, which will be coming out in the next first generation of SceneGate, when the rendering engine is different and we no longer need a lot of the Linden Lab libraries, we won't have to worry about keeping those up to date. That's true, and yeah, on this path of the roadmap we can of course extend. We have a number of extensions that we as a development group would like to have, to make the viewer more modular, more flexible in use, but of course in the future we can add things from the community, and certain functionality the viewer would have, supported by the community, can also be put on the roadmap for the future. That's one of the things that we would like to do. And of course, as Lisa already said, we need people to help us; if we want to continue and create this viewer in a reasonable amount of time, then we also need people who want to contribute to it. All right, I do have a question from Art Blue.
If this is as I hear now, then you would have a license to make money by offering it as a service to other areas and refinance it this way. I know you said it is open source, so you could charge only for consulting. There are, as I assume, many patents in the voice field. Have you taken care of that too? Let me see if I can answer that question. First off, Echo Voice is not providing voice service. Echo Voice is providing connectivity to Mumble, which is the voice server, right? So when we are talking about things like voice patents and whatever, that's really outside the scope of Echo Voice. That's not what we're doing. We're providing the connectivity for an existing project. Now, Mumble is a long-running project. It has heavy support, it has active development, and it is heavily used in the industry. So we don't feel like there would be any sort of a problem with that. All of the licensing of the voice itself actually happens outside of Echo Voice; that's Mumble. And I hope that answers that question. Now, as far as some implication of a license to make money: we do not intend to offer region voice connection like Vivox offers right now as part of Echo Voice. If there's a demand for it, we might look at a way that we could do it as a no-cost kind of thing, but I would imagine there are some costs involved. For example, if you don't want to self-host a Mumble Murmur server of your own and you want to pay, the Mumble developers have information on their website, which is where I pulled the $11.50 a month from. That is a bandwidth and license cost. So for $11.50 a month, you get voice for 25 users, 24 hours a day, for that month. And that's way more affordable than High Fidelity or Agora. If there is someone who wants to offer region connection, they may do that. I don't know what their funding options would be. We have not looked at that, because our focus is really on getting the software developed for connecting. Does that answer your question, Art?
Yes, Mumble is already free for all. It is open source and it's already available for anybody to use. For example, if I didn't have OpenSim in the mix at all and I wanted to have a voice solution to talk with other people who visit my region, I could ask them each to download a free Mumble client and then tell them the URL they could use to connect to the Mumble server, and then we could have a conversation that's external to OpenSim. Echo Voice integrates it within OpenSim in the same manner. You see what I mean? Okay, good. That answers that question. There was a remark in the IM about being unfamiliar with Godot as a gaming engine. Well, Godot is one of the gaming engines that has very, very active support, is gaining ground, and has a very large user group. That's one of the reasons, and it is totally open source, and that is another of the reasons we are looking at it. Because if you use Unreal, for instance, then there are licenses involved as soon as you are going to use it industrially, and our perspective is still that we want to use it in our company as a communication platform, and we want to use open source software then, and Unreal is not completely license-free in that case. So that's one of the reasons why we chose Godot, and also it has a very lively community as well. So we expect that it will stay around quite long, and we will have benefits from it. So that's the reason why we looked at Godot. But the interns will probably tell a little bit more about it later on. Now, to answer Nick's question about whether we can have a Mumble conversation in OpenSim and externally together: absolutely yes. If the person who has a Mumble client knows the server URL to connect to, then they can do that. Now, with the Echo Voice design, if you look at the diagram — I backed up the slides for you — in the upper right, there is a red shaded box which says "future auth server"; authentication is what that's short for.
That's where we would provide connectivity for a Mumble client to connect externally. So say someone can't be in virtual but wants to participate in the conversation; that would be one way that would happen. The other thing I'll give a little clarity on is the Echo Voice design in this diagram, and you'll see this in our booth as well. You notice the cyan-colored lines and the cyan-colored region module add-on. That's existing; that's how Vcomm's Mumble solution worked in the past. In the Echo Voice design, we insert the green boxes, which are the server bridge and the client bridge, which would then allow us to have detection and automatic switching between Vivox and the Mumble client that's running with the viewer. So when users jump around on the hypergrid and they jump from a region running Echo Voice to a region running Vivox, they don't notice anything different. They just connect. So I hope that sort of clarifies a little bit more about that design. Any other questions? Okay, we've got about a minute left. Any last questions here? All right, well, thank you, everyone, and I hope you stick around for us. The interns are going to give us some great updates, and I really want to help celebrate that work. Okay, thank you very much. Thank you very much for listening to us. And, well, I hope we'll have a nice presentation by the interns in the next half hour. Thank you very much.