Hi there! Thank you for joining me for this session about driving the use of 3D engines in and out of the car here at the Automotive Linux Summit. My name is Royal O'Brien. I'm the general manager of Digital Media & Games at the Linux Foundation, and I also work on the Open 3D Engine project at Open3DEngine.org. Today I want to talk about how 3D engines are impacting vehicles, their displays, and the kinds of user experiences people expect. So with that in mind, let's take a look at where we are today. What do people expect, and what do they see? Cars today have large LED displays, interactive entertainment screens, and fully embedded systems running on smartphone-class hardware. That means you have intelligent cars with the same kind of hardware as what you carry around in your hand, which is actually very powerful, except it controls many more systems and can provide much more interactivity. You'll also find that preferences in these cars are changing: people either want to use their phones or the onboard systems themselves, and it really varies with the capabilities or with what they're used to. Cars today also have fully integrated navigation and control systems, so when people want to go somewhere, they'll either use their smartphone or use the system built into the car as the value add. And there's a huge number of digital sensors and data inputs constantly feeding these systems, telling them what to do and how to present that information to the driver. These are the things people actually expect when they buy cars. It's just not like it was 20 or 30 years ago; the world is changing when it comes to vehicles.
So let's talk about the inside of the car, what we see today, and some of the things that may come in the future. For starters, remember I talked about the LED screens. Well, they're only getting larger, and they carry a lot more interactive information. When we first started out, it was uncommon to have these LED displays at all; then we started seeing five-inch, six-inch, seven-inch panels, and now you'll find basically iPad-sized panels sitting inside the car. The instrument cluster itself, which used to be liquid crystal, became LED-based, and it shows a huge amount of information, all drawn in real time. These cluster displays have a really big impact on what users see and experience while driving, and they let drivers customize and configure it. But these, of course, are all in 2D. You'll also see some newer things being talked about in the media, such as augmented reality navigation, where a heads-up display, maybe on the windshield itself, draws guidance alongside the road as the driver goes along. That's pretty interesting, because it means we're entering a different layer of experience in how people drive, what they see, and how information can be presented to them. Some other pieces we've seen are stereoscopic displays and 3D eye tracking, which watch what drivers are pressing and doing, and also give them a sense of depth within the display itself. I haven't seen too many of these in production, but you can definitely read plenty of articles about these types of displays.
And again, what AR and stereoscopic displays have in common is that you can move past a purely two-dimensional experience and do something with more immersion and more depth, without really distracting people. So if we take a moment and think about what's going on in advanced displays and 3D rendering: how are people doing these things, and what are people used to seeing? It's not like it was 20 years ago, when everything was a flat screen; things are changing quite a bit. Really, 3D is the new 2D, and once you're doing it, it's not a choice anymore. You don't have to say, "I'm either 2D or 3D," because if you go 3D, you get 2D as a result of it. You'll still paint your display panels and similar elements, which are two-dimensional planes placed on three-dimensional objects. These same 3D engines will draw the two-dimensional pieces like a movie screen inside a 3D world, which means that as you draw your 2D panels, they can be updated 20 or 30 frames a second alongside the 3D renderer, which is just updating that plane. You can also blend these 2D objects with 3D objects: you can render multiple displays as off-screen buffers in different elements and then present them on a 3D object, which means they don't always have to stay flat. They can be sized, rotated, and angled. You can do quite a few things with them when you blend them with 3D objects and present them that way.
As a matter of fact, video games have been doing this forever for special effects. What you might think is a ball or something coming at you is really a two-dimensional plane that is always oriented toward the camera, so no matter where you are as you walk past it, it faces you and looks three-dimensional. What that means is that the depth and interaction of what users are doing, and how people see and perceive it, can be greatly improved. Instead of just presenting information and worrying about how much real estate you have on the screen, you get to change how things are thought about. It's not just about the real estate, but about how you can present information alongside other information within that boundary, not just on an X-Y axis, but on a Z axis. Now, when we bind all of this together, let's take a look at the sensory integration that's happening. When you start blending these together, you find that augmented reality can help keep your eyes on the road. Remember, we're very natural about how we view what's in front of us; we take that information and make decisions based on it. But a lot of the time people take their eyes off the road to look at the LCD panel or the display; they're looking in all kinds of places. By using augmented reality, you can really change that, because it's just natural. Within their field of vision, they don't have to be distracted away, and they can react to anything that changes within that view. And alongside that, you have things like haptic, audible, 2D, and 3D notifications.
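The billboarding trick just described can be sketched in a few lines of plain Python. No engine API is assumed here; the function name and the Y-up world convention are my own, and a real engine does this for you behind a material or component flag. The idea is simply to build an orientation basis that keeps a flat quad facing the camera.

```python
import math

def billboard_basis(obj_pos, cam_pos):
    """Return right/up/forward unit vectors that orient a flat quad
    at obj_pos so it always faces the camera at cam_pos (Y-up world)."""
    # Forward axis: from the object toward the camera.
    fx, fy, fz = (cam_pos[0] - obj_pos[0],
                  cam_pos[1] - obj_pos[1],
                  cam_pos[2] - obj_pos[2])
    length = math.sqrt(fx * fx + fy * fy + fz * fz)
    forward = (fx / length, fy / length, fz / length)
    # Right axis: world-up cross forward.
    up_world = (0.0, 1.0, 0.0)
    rx = up_world[1] * forward[2] - up_world[2] * forward[1]
    ry = up_world[2] * forward[0] - up_world[0] * forward[2]
    rz = up_world[0] * forward[1] - up_world[1] * forward[0]
    rlen = math.sqrt(rx * rx + ry * ry + rz * rz)
    right = (rx / rlen, ry / rlen, rz / rlen)
    # Up axis: forward cross right, completing the orthonormal basis.
    up = (forward[1] * right[2] - forward[2] * right[1],
          forward[2] * right[0] - forward[0] * right[2],
          forward[0] * right[1] - forward[1] * right[0])
    return right, up, forward

# Wherever the camera moves, the quad's forward axis tracks it,
# which is why a flat sprite reads as a 3D object.
right, up, forward = billboard_basis((0, 0, 0), (3, 0, 4))
print(forward)  # points from the object toward the camera: (0.6, 0.0, 0.8)
```

Re-evaluating this basis each frame, as the camera moves, is all the "always faces you" effect amounts to.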
When you're moving outside of your lane, maybe your steering wheel bumps around in your hand and lets you know, hey, you're drifting out of your lane. Or maybe it shows you an indicator that says, here's where the lane lines are; you should probably stay inside of them. Or there's something in the road and you don't even know it's there. Having these notifications, whether you hear them, feel them, or see them, they're all part of it. What's important is that they convey hazard information and assistance where it can be helpful. If there's something in the road that you can't see, say it's foggy out and there really is something in front of you, the reality is that a sensor could be picking it up and showing it to you in augmented reality, so you can actually do something about it even though you couldn't see it yourself. And if you're driving with navigation, this kind of assistance becomes really helpful, because you're not wondering, is this my turn? Do I have to stare at the screen, look up, and check whether somebody hit their brakes? These are really important pieces: when elements that add to the comfort and ease of driving are present in the car, they become a benefit instead of a distraction. That's the careful balance you want to make sure you strike. Now, the other piece here is depth; let's talk about 3D and depth. As I mentioned before, you have a certain amount of real estate to work with, that X and Y, that rectangle. But when we're talking about depth, it changes, because you can present more information. If I show you where a vehicle is located, how far something is in front of you, in 2D I have to create lines that scale down into the distance.
But if I can show you this in 3D, you'll get a better sense of depth, and I may only need one third or one half of that vertical resolution, so the display doesn't get as cluttered. That means you get a greater amount of immersion and information feedback by using these kinds of 3D interfaces. And look at how they work natively: if you're trying to go from 3D to 2D yourself, you have to do all the translation. But with a 3D engine, once something has been mapped into its coordinate space, the engine does all the work for you. All you really have to do is know where it is in space and what the distance should be, and let the engine do the work. And so, while I'm talking about engines, let's consider for a moment: what does it really take to build a 3D engine? It's not a simple proposition. 3D engines really are as complex as operating systems if you think about it. If you want to build one that has all the pieces, you're going to need eight or more years to reach feature parity, and then an ongoing commitment to keep pace with advancements as you go, which means hardware and software requirements. There are constantly newer chips coming out, newer features and capabilities, newer sensors; all of these elements have to go into it. And when we talk about a 3D engine, it's not just for cars; this applies to simulation, AI, games, anything you can think of in this space. The other thing is you need a large investment of money to fund the teams and technology to do this.
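To make the earlier point about depth and vertical screen real estate concrete, here's a rough sketch of the 3D-to-2D translation an engine performs for you: a bare pinhole projection model, with all names illustrative rather than taken from any real engine API (a real renderer adds clipping, viewport mapping, and much more).

```python
def project(point, focal=1.0):
    """Minimal pinhole projection: camera at origin looking down +Z.
    A 3D engine does this (plus clipping and viewport mapping) for you."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def apparent_height(height_m, distance_m, focal=1.0):
    """On-screen height of an object of a given real height at a distance."""
    return focal * height_m / distance_m

# A 1.5 m tall car shrinks on screen as it gets farther away; that
# shrinkage is the depth cue that frees up vertical resolution, instead
# of hand-drawing scaled-down lines in 2D.
near = apparent_height(1.5, 10.0)   # 0.15
far = apparent_height(1.5, 50.0)    # 0.03
print(near, far)
```

Once the scene is mapped into this coordinate space, you only supply positions and distances; the perspective scaling falls out of the projection.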
And if you're wondering what kind of team we're talking about: these are people with specialized knowledge, experts in science, math, and engineering. The developers not only have to be very knowledgeable in those areas, they also have to be able to program and bring it down to the practical level: how will it work on the graphics hardware? What physics might be involved? What about lighting? How do I get networking to work? What about cloud services and all of those elements? Because a lot of vehicles now are cloud connected; they have online services that handle the interactivity and communication back and forth. Anyone who wants to build these kinds of engines has to build all of these different services and connect them in. Then you need the integration expertise to make sure that however many of these systems there are, they all work together, whether it's animation, lighting, a presentation layer, a network layer, or a core system, and to make sure all of these pieces can communicate and work together in real time, on whatever hardware it runs on. And the hardware isn't purely defined, either. You could build for a very specific chip, but that would lock you in and probably be deprecated within three or four years. You have to understand that these things will change, which means you have to build actual platforms that handle this diverse hardware and these operating systems and work the same way across all of them. So when I'm talking about 3D engines, I want to spend a moment on an open source solution that's now out in public and covers a lot of the elements we're talking about here today. That project is called the Open 3D Engine.
It's put forth as an AAA-class 3D engine. Games have been written on it, and large-scale simulations built with it have been enjoyed by millions of users. It was rewritten over the years as a project inside Amazon, which then released it to the open source community to be used by any vertical industry. It starts with a vision: the mission of the Open 3D Engine project is to make an open source, fully featured, high-fidelity, real-time 3D engine for building games and simulations available to every industry. It's really important to note "available to every industry," because that's the reason I'm talking about it here today. It wasn't built strictly as an engine for first-person shooters or games. It's very modular and severable, so you can remove the elements you don't want and use the things you do. To go along with this, we had to give the project a good founding point: its values. We wanted to make sure it was neutral to all technologies and companies, and that the interfaces and systems can be used universally and unencumbered, so that any organization can use it in any capacity. We wanted it to be industry agnostic: even though the core is a game engine, it should be usable by anything, so everyone can take advantage of the advancements that have been made. In other words, if somebody builds a new type of renderer or a simulated LiDAR system for a game, you should be able to use it in a car, or for fluid dynamics, or physics. It needs to be open, meaning transparent and accessible, and independent of the interests of any one group. We accept contributions from everyone based on the merit of the contributions.
It's available for everything, but firmly rooted in open source values. It's got to be easy to adopt: being able to onboard, get up to speed, and start exploring and using the project. And it has to be fair, meaning we want to avoid undue influence, bad behavior, or pay-to-play decision making. We have a lot of companies and members that are part of this, in and out of enterprise, so it's got to be a level playing field. One of the biggest parts is that it's got to be modular: extensibility has to happen in a modular manner, outside of the core engine, so people can make changes without breaking unrelated elements. In other words, if you want to rip out the renderer, you can do that. If you want to plug in a whole new physics system, you can do that without impacting things that have nothing to do with those systems. And it has to be platform agnostic, which means it has to work across different environments, operating systems, and architectures. So why is this important to you? For one, working with open source, you'll find a lot of benefits. First of all, you can dedicate more resources to building your simulation or your interface instead of spending all your time on the underlying engine. At the same time, you get a big head start, because you have a complete package to build upon. It's a lot easier to take things apart than to rebuild everything from scratch. But you also don't want to have to unravel a lot of spaghetti code or systems embedded within each other, so we've made sure a lot of that has been separated out so you don't have to deal with it.
You also become part of a community with a lot of expertise, where you can find experts in these diverse areas. We've done a lot of this in Discord, where people can ask questions continuously. And it's free to use. It has flexible open source Apache 2.0 licensing, which has really no boundaries; it doesn't matter where you are in the world. Being open source, it can be utilized for anything; there are no real restrictions from that aspect. The other part is that it becomes really important to support and sustain the project, so that as it continues to grow, developers can depend on it and have a community they can keep working with. You don't have to wait on any specific agenda; being part of this allows you to drive and influence the future direction of the project, and you can accelerate the things you need by taking on different roles. If someone decides to build on this, they can align their downstream projects with the upstream community to build it out and find other uses for it. You can also find experts and attract talent, because you can see the code they're committing and the projects being worked on in GitHub and in the community. That makes it very easy, especially with college students, or with people who have spent 20 or 30 years building these things in a different industry and would never have thought of working in yours, only to find it aligns directly with what they love to do. And then there's the solid security that comes from transparency and the breadth of contributors, along with higher reliability and lower maintenance and sustainment costs.
So, how would somebody consume this engine? It's licensed to users under Apache 2.0, with the option to take it under MIT. Contributions to the foundation have to be made under both Apache and MIT. Contributors retain ownership of their IP; they grant a license only, and there is no separate CLA. We use a DCO, which basically confirms that yes, you are authorized to commit this code, it's yours, and it's not restricted; that's how we handle that control. As for use of the name and logo for different projects and services, we work with almost everybody, but we keep it in accordance with the trademark policy. The open source releases currently target PC, Mac, Linux, iOS, and Android. So you can build for all of these different flavors, but you're not limited there: if you wanted to put it on an RTOS, you could, because it's C and C++ and you can use those libraries directly. You're not stuck with, say, Windows and Windows APIs; as you can see from the Linux, Mac, iOS, and Android support, the core libraries are functionally portable and extendable. Some of the benefits: one, obviously, complete freedom from licensing fees; you get to use it and do whatever you want with it under the Apache license. You also get faster innovation, because development is open and collaborative. We've seen open source grow at a very fast pace, especially as a project gets embraced by more and more industries. And implementations are driven by actual needs and wants: somebody can present something, get people to rally around it, get buy-in, and drive it forward. Having an ever-growing community for support and talent empowers that, and you get more growth, research, and development.
We've seen people come out of nowhere and start building things we didn't even know were possible, or didn't think could be done for quite some time. In the beginning, we didn't expect Linux support for another five or six months. The community had something to say about that, and within three weeks they had a build running. So it's one of those things that can really change how fast a project grows. There are thousands of pages of searchable documentation and support online; GitHub itself has search and support services to dive into, and Discord lets you search a lot as well. And of course we have the Linux Foundation, with 20-plus years of expertise in open source, which is why having this project in the Linux Foundation is really essential. But one thing about open source: you can't just throw code over the wall, because sustainable ecosystems matter. When we started building this kind of project and bringing it out, the question was what's really going to work for the companies and individuals contributing, and how do we level that playing field? You have a technical community, and that community has different ideas and thoughts, and it builds. It starts to build these technologies alongside different companies. Those technologies then become products for companies, and those products can open up new markets. Those markets allow companies to build out and monetize, which brings participation back into the technical community. You can see it's a great flywheel that continually goes through its cycles. Looked at from a company perspective: projects come up, the projects create the products, and the products create the profits.
So you can see the through line that allows it all to work in a very synergistic manner within an open source environment. It's a balanced flywheel of community and commercial: an ecosystem that enables and supports openly developed technology turning into commercial products and solutions, and the commercial profits then benefit those companies, who in turn are able to participate and reinvest back into the project and the technical community. This is really important, because that kind of support matters when people are actually trying to build on it; we've already seen that open source has a big impact on companies across the board. As for where we started: we have a really motivated community. We started out with maybe a couple hundred people, and we're coming up on almost 2,000 in our Discord, with over 500 people on at one time, constantly communicating. We see thousands of messages and thousands of minutes of communication per week. And who are some of the partners? These are some of the initial partners that joined us: companies like Adobe, AWS, Huawei, Intel, and Niantic. Then we have general members like AccelByte, Futurewei, and Audiokinetic. So you can see there are a lot of companies, plus organizations that produce different software packages that build on top of it. We also have groups like the IGDA, Open Robotics, and RIT on the collegiate side. And there are companies that provide services inside the system without having to worry about the licensing: because the engine is modular, they can build their technology as a plug-in under their own licensing, and it works underneath without being subject to the Apache/MIT licensing of the core. But there are a lot of major components that go into building a 3D engine. For one, you've got to be able to program it.
Now, when I say programming, that doesn't mean just writing C++ or C# or something like that. It also means having visual programming, where people can do rapid script automation and the like, so you don't need a full-time developer and you can do some prototyping. But you also have to understand how to build the physics and collision subsystems and how to change them; for some of these scenarios, that's going to be important: how will it react? How will it roll? Then again, visual scripting, Lua, Python, things like that; these are really important for quickly building logic trees without having to write C++. We write web pages today, and you don't write them in C++; you write them in HTML and they run on web servers. You find the appropriate language that saves time and produces the same efficient result for what's required. You also have things like AI, if you want nav meshes and behavior trees for simulation, and then C++, code Gems, Python scripting, and Qt for common interfaces, so you can extend the engine to operate how you want. Then you've got packaging and assets, which means you need systems that can import and export different file formats. That means FBX, graphics, sounds, and the like; you have to have systems that handle those, or you'll end up writing all the custom tools, which ultimately means writing a packaging and asset system anyway. Part of that is having an asset processor that does the conversion of your assets and gives you that extensibility.
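As a taste of the kind of rapid logic tree you'd prototype in a scripting layer rather than C++, here's a small Python sketch of a driver-notification rule like the ones discussed earlier. The function, thresholds, and notification names are invented for illustration; they are not part of any engine or vehicle API.

```python
# A hypothetical lane/speed warning rule: the kind of decision logic
# you'd iterate on in a script layer (Lua/Python) before committing
# anything to native code. All names and thresholds are illustrative.

def choose_warnings(speed_kmh, speed_limit_kmh, lane_offset_m):
    """Return the list of (channel, message) notifications to present."""
    warnings = []
    if abs(lane_offset_m) > 0.5:          # drifting out of the lane
        warnings.append(("haptic", "lane_departure"))
    if speed_kmh > speed_limit_kmh + 10:  # well over the limit
        warnings.append(("audible", "slow_down"))
    elif speed_kmh > speed_limit_kmh:     # slightly over: visual only
        warnings.append(("visual", "over_limit"))
    return warnings

print(choose_warnings(72, 60, 0.7))
# [('haptic', 'lane_departure'), ('audible', 'slow_down')]
```

The point is the iteration speed: tuning a threshold here is a one-line script change, not a recompile.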
Then there's the package and deployment system. Using open, common tools matters here: it's really easy to get pigeonholed into something that's great for a proprietary system, but when you want other people to support it, that becomes more difficult. Open systems like CMake, which are a very common base, are fantastic. And of course you want sample and starter material for people to work with. To put this in context, a lot of what I'm describing is tied to the Open 3D Engine and what it takes, but you'll find that most 3D engines really do need these kinds of systems. There are other pieces you may or may not need: in cars, maybe you don't need a full-blown animation system; you can do different things with sprites and controls instead. But in a generic 3D engine, animation is common; that's the fluid motion of 3D characters, so you have things like the EMotion FX animation system and the cinematics subsystem. Then there's networking and cloud: the software that lets things communicate with the engine. In this case I have it listed as allowing players to play together and scale globally, but if you define a player more broadly, a player can be anybody interacting with the system who isn't just operating alone. That means having a network layer is really important for communication going back and forth, along with cloud services support so you can pull things down, like digital twins and navigation maps, that can interact in the 3D scene you want to present, without having to write a complete translation system.
Other pieces, again: platform support, so you can build from multiple platforms, because you don't know what your hardware will be, how you'll prototype, or what you'll be building on; it will change between projects. For Open 3D Engine, we make sure it's supported with native Windows, Mac, and Linux editing, with full editor and runtime support, so you're not locked into one place, along with the common interfaces and controls you'll see in a lot of 3D engines. Then there's mobile platform support: if you want to turn around and use this for marketing, or you want it to work on the web, this becomes really important, because you may want to use the same assets across boundaries. Then there's VR and AR support for some of the things we're talking about. And of course, if you happen to get into consoles, why not? They're supported as well. I'll dig a little more into these in a bit, but this gives you an idea of what's commonly found in a fully featured 3D engine, not just something that fits one specific purpose but can't do anything else. So let's dig a little deeper into what we were talking about earlier. I want to talk a bit about the displays here. When you're thinking about this in terms of a 3D engine, first of all, you don't have to do any kind of pre-rendering for animations like rotation. If you have an object, a mesh in the system, and you want to display it on screen, you don't have to create a set of slides or use up that space; you have the object, and you can spin it, rotate it, affect it, and do all those things.
So there's no pre-rendering and none of that kind of overhead, and you can embed multiple 2D and 3D interfaces with direct input feedback. That means when you're drawing your world, think of it as presenting a world that has all the 3D elements: maybe I'll draw the sides of the road and a plane that shows the road I'm on, but I can also put the speedometer and other specific information on a 2D plane shown below it, while still having 3D interfaces for the 3D vehicles in front of me, beside me, and behind me. You can show all of those within the scene; remember, you can have multiple things in there, whatever you want to do. And then there's the AR navigation interactivity we talked about. If you look at this for a second: we have a 2D plane showing stats like miles per hour, but if you look at the arrows, the arrows are actually in 3D. They're resizing, scaling, rotating, and moving along as a mesh. So this gives you an example of a mixed 2D and 3D environment. Walking through this kind of environment, here's what it comes down to. We have the display going along, and what you want to do is map your sensors into the 3D coordinate space relative to the car. Once you've identified the car, and how far away something is, you might think, well, I'm moving through a landscape, but the thing is you don't actually have to move. Everything can be moving toward you; it's all relative in space.
So as long as you understand how far something is from the car, you can create that Z axis, that depth axis, and say it's this far away and this far to the left or right, and use rotation and angles in the 3D engine. You don't have to get super complex with it; you can keep it very simple, and the point is that you don't have to change how everything else is done. Again, this is all driven by the sensors in the car, and because everything is relative you can do this kind of interaction. Then, like I said, the engine will scale the 3D objects to the camera. Once you've defined that arrow mesh, drawing several of them, having them rotate and move around, all you do is pass the coordinates and the engine takes care of the rest, with the camera being the car, or the interface wherever you fix it on the view. Now, the 2D panels are there for the text, and we can scale and size those as well. In this case they show the speed, but you could just as easily draw the distance to something and put that in there; it will size and scale along with everything else, yet it doesn't have to be a three-dimensional object. It can still be a 2D panel attached to a 3D object. And the interface to do this is very simple, so you don't need heavy hardware to get it done; ordinary cell-phone-class hardware can handle this very easily, and you only make it as complex as the update rate you actually need. The thing is, you're not trying to play a real-time video game, so you're probably not going to push 60 to 120 frames a second. The fluidity of how this behaves can be gated by your hardware capabilities and what you want to do.
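A hedged sketch of that sensor-to-coordinate mapping, under an assumed convention (car fixed at the origin, +Z forward, +X to the right); the function name and axes are illustrative, not an O3DE or automotive API:

```python
import math

def sensor_to_car_space(distance_m, bearing_deg):
    """Map a sensed object (range plus bearing off the car's forward
    axis, clockwise positive) into car-relative coordinates.
    The car stays at the origin; the world moves toward it."""
    b = math.radians(bearing_deg)
    x = distance_m * math.sin(b)  # lateral offset, right of the car
    z = distance_m * math.cos(b)  # depth ahead of the car
    return (x, z)

# An object 10 m out at 30 degrees to the right lands about 5 m to
# the right and 8.66 m ahead; hand those coordinates to the engine
# and it scales and perspective-projects the arrow mesh for you.
```

Because every position is relative to the car, nothing else in the scene setup has to change as the vehicle moves.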
And then of course you can embed this on a Linux-type operating system, or on a fully embedded system. Think about the engine itself: you're going to want a C and C++ type of library that you can separate out, detach, and build so that it interacts with whatever system you have, whether that's an RTOS or a Linux-based system. By embedding that kind of code you get full control over what's there, without having to pull in libraries or other code that isn't necessary for what you're trying to get done. Severability is very important when we're talking about these elements. Now I want to go a little deeper on the stereoscopic displays and the eye tracking. Again, you can render multiple 2D and 3D panels, but at the same time you can use a dual in-engine camera setup for the rendering, which means that within the engine itself you render two views, one for the left eye and one for the right. You draw your world in 3D once, but you render two different camera angles. There are displays out there that present two different angles, and by looking at them you get three-dimensional depth without needing glasses; I've seen a couple of tablets use them, and it's actually pretty cool. So for a 3D engine, you create your scene, take two cameras, have each one render the ocular-left and ocular-right view, and send each to the panel that draws that 3D environment. The advantage is that you don't have to change the hardware and you don't have to change the software.
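The dual-camera setup amounts to rendering one scene from two eye positions offset along the camera's right vector. A small sketch, assuming a typical interpupillary distance of about 64 mm (the value and the names are illustrative):

```python
def stereo_eye_positions(camera_pos, right_vec, ipd_m=0.064):
    """Offset a single scene camera by half the interpupillary
    distance along its right vector to get the ocular-left and
    ocular-right camera positions for stereoscopic rendering."""
    half = ipd_m / 2.0
    left = tuple(c - half * r for c, r in zip(camera_pos, right_vec))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vec))
    return left, right

# Render the same 3D scene once per eye and route each image to the
# panel's left/right view: depth without glasses, and the scene,
# hardware, and software all stay unchanged.
```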
You can change your environment and everything around how you build it, and it will simply work, which is really nice. As for eye-tracking support, the reality is that in 3D engines it has been around forever. There's support almost everywhere: the old Tobii systems and a handful of others that you'll find built into machines like Alienware laptops. So eye tracking has been around for quite some time, these engines can really take advantage of it, and you can use it for responsiveness, or at least to present what somebody is doing or looking at and what they may want to see. Now I want to take a minute to talk about the design and simulation side of this. These are things that aren't necessarily inside the car, but about how you'll build the car, design interfaces, or even expose some of this to customers online. It really starts with design and fast iteration: the conceptual interface and part design of how you'll build things and how you'll work. That can come from the perspective of a VR headset or a similar interface where you want to design the car, build it, or extend it, and also from the perspective of building an interactive interface, where you want to see how it will look and how it will operate; in this case, an example tachometer with an animation that goes along with it. At the same time, perhaps you want to see what a render of this will look like, how it should be designed, or how somebody will see this.
And this is really important as you start doing these, because you'll have all these different parts, elements, and designs you'll be building, and you want them broken out. In other words, you don't want to worry about all the work it takes to pre-render everything if you already have all the parts. That becomes really important when you're doing marketing, or pulling these elements out and building on them year after year: do you have to redo all of your images and everything you expose to the end customer, or can you do it more dynamically within a 3D engine? Think about it this way: you have the ability to take these different pieces and view them at full fidelity, and to do that you really want a physically based renderer, so that when we look at the paint on the vehicle we can see the metallic fleck, see what will reflect and what will not, what is matte, what is metallic, what is rough.
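The idea of independent, swappable materials can be sketched with a few physically based parameters; the parameter set here is an illustrative subset for the example, not O3DE's actual material schema:

```python
from dataclasses import dataclass

@dataclass
class PbrMaterial:
    """Illustrative subset of PBR paint parameters."""
    base_color: tuple     # linear RGB
    metallic: float       # 0.0 dielectric .. 1.0 metal
    roughness: float      # 0.0 mirror-like .. 1.0 fully matte
    fleck_density: float  # hypothetical metallic-fleck control

candy_red = PbrMaterial((0.80, 0.05, 0.05), 1.0, 0.25, 0.6)
matte_grey = PbrMaterial((0.40, 0.40, 0.40), 0.0, 0.90, 0.0)

# Swapping the paint is just re-pointing a part at another material;
# the mesh never changes and nothing has to be re-rendered or baked.
parts = {"body": candy_red, "mirror_caps": candy_red}
parts["body"] = matte_grey
```

Because each material stands alone, applying a different one updates the look dynamically, which is the property the configurator discussion below relies on.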
These are really important, and you want the full color gamut so you can view it at the best fidelity, because not everybody is going to the show floor every single day; a lot of people are viewing it online. They want to see what it looks like, understand what it is and what the possibilities are, and the better the representation, the more interesting it becomes. So you want that full control of design, where you can take the idea of a fleck paint and change how detailed the fleck is, how reflective the metallic is, what the base colors are, and control every aspect of that from within the engine, applying the materials so that each one operates independently. When I want to change something or show a different finish, all I do is apply the material and it updates dynamically. That matters for marketing and visibility, letting somebody make those changes using just the materials and parts you've given them. On top of that, when you're actually building this, cost becomes a factor; you have to look at how much work it takes to get it done. So you also look at it from technical prototyping and automation. That's the ability to use things like visual scripting, where you can bind multiple elements together so they have a relationship to each other without needing a C++ programmer to do it; but if you do prefer C++ or Python, you can still use those to build a lot of this core interaction. This is really important when you're building your pipeline for bringing materials in. In other words, you may have an asset in Maya, use Maya's Python interface to manipulate and control it, and then have the asset processor pick up that material, split it apart, and reform it into what it's supposed to be in
the PBR engine. Or, if you have an entire diagram with a bunch of materials and you want them broken out, you can do that with automation in Python between the systems, which is kind of the de facto standard. Once you've got that iterative pipeline, whatever you feed into it comes right back out and works within your systems. The other part is having modular plugins. These are really important: if there are certain elements you want to use, such as a specific physics engine, you can apply them without having to change the entire engine, and the same goes for renderers and different environmental areas. That brings us to simulation and digital twins. Take road simulation: if I want to see what it's like with a digital twin, I can pull something down, say from Cesium with their 3D geospatial data, bring it into the engine, and it can model out all the road maps. You can use that for AI training models and things like that, but if you apply a realistic car physics engine on top and drive through it, you can simulate how much roll or tilt there is, how much bounce, what the actual feel inside is, which is really important when you're trying to prototype and model that out. Game physics are interesting, but there are companies out there that build real physics engines, built for things like F1 racers and production cars, that get as tight as possible, and that's what it comes down to for physical handling. And remember, the basis of this is that it's for every type of industry, so you can turn this into a car simulator that you can drive, working in real time, and by having those realistic physics you can actually
get the feel of that. At the same time you can also look at the layout of the car, or at the safety devices involved: how easy are they to use, how accessible are they, do they block things, what are the visual impairments, if I try to look behind me is there a view issue, is something in the way? By having those designs and reviewing them in 3D space, especially in VR, or even in AR when you're augmenting with your interfaces, you can tell what the experience is going to be before you've actually had to build anything. And because you're working off the actual assets, you can translate all of that media over to what you'll be doing in marketing. When we talk about marketing, it's that customer experience I mentioned earlier for online marketing and media. A lot of people now use these car configurators from home. Even myself: if I go looking for a new truck or a new car, the first thing I do is hit the internet. I want to look at everything, all the different styles, types, paints, all the options, and how I want to build my car. In some cases you can even order the car online and have it delivered to your home, so this is becoming more and more important. The problem is that to do this traditionally, you have to turn around and build a render for each type of part or material, which means thousands and thousands of images with a limited number of angles to view, because, let's face it, it's the number of views times the number of parts, and it just keeps multiplying. In a 3D environment it's different, because it's the same model and the same materials; you're just swapping them out. So to show the same car with a transparent material, or a red one, you can see how the car can
be changed just from the 3D perspective by changing the material, without actually changing the car. Or you can change the overall design by swapping different rims or styles, and the model still maintains its full integrity. You really have to think about that for a moment, because when you're building these kinds of configurators you'll be adding new parts and new pieces, and having a 3D engine that can do those things and draw those images, with automation, is really important. The other part behind it is environmental views. You now have these different 3D environments: you can build a city environment, an outdoor environment, a track environment, and place the same 3D models into those scenes, using the dynamic lighting, the ray tracing, all of these things, so people can see what it will look like with the cool neon lights and the paint job they really like. These are things that can really entice people to seek out and purchase your car, because they just like what they see; they say, oh, I want that. Anybody who has ever played a car game will build the car of their dreams and wish they could have it, though they may never see it for real; you can give them the closest thing to it in a 3D environment. To accomplish that, there are a few ways. For one, we have the baseline models you can build, but the difference is that you can take this engine and move it into the cloud or into a rendering farm. Because you can have a client running this 3D model, continuously updating it at 30 or 60 frames a second, somebody can connect with a web request, the request hits the server and tells it, I need you to put these materials on it with this background, and draw
that, and then send the image back to the client side. The reality is you can do that within milliseconds, because game engines are designed to throw out 30 or 60 frames a second. If you've preloaded all the materials, the engine can switch out objects, materials, pieces, and angles because it natively does that. So now you don't have to pre-render things, and you can use cloud services to scale it out. And the thing is the ease of update. If I add new rims, I don't have to rebuild all kinds of new images; I literally just put the new rims into my cache in the 3D engine, and if somebody selects them, it just works: they get a rendered image that comes down to them as part of it running. You can serve a lot of people because of the raw frames per second you can throw out in real time, by request, and you never know, somebody just might want that steering wheel that's a little out of the ordinary. The idea here is that it's almost limitless what you can do, and you can cut down a lot of production costs when you're moving from year to year, model to model, as parts are being updated. That's really important, because now your complete catalog can sit in a 3D engine that's simply running a view of the car with all the parts ready to go, which people can select and choose from, and it can switch them out, whether interior, exterior, or accessories; it's a matter of just adding them on. And because you can render in the cloud, and all the major services have these offerings, you can spin up, say, a G4 instance rendering in the cloud, take the output image, send it as a web request into a bucket where it can be picked up by the client in under a second. So
now you're talking about experiences that can only get faster and faster. As more people want to use them, you spin up more nodes to accommodate them, and when demand is low you spin them back down. Those are a couple of different ways of doing this, and on top of it all, this material gets reused. If you want to do virtual show floors, you can take all these materials running in the cloud and let people walk a virtual show floor, whether they have a VR environment, a PC or Mac, or even an iPad; it doesn't really matter, because you can do it either in the cloud or natively, with complete show floors of everything, simply reusing all of these assets over and over inside a 3D engine. The other thing is the point I brought up about this being very modular. You can actually swap out the renderer, which in normal engines you really can't do because they're hard-wired across the board. You can disable the default hardware ray-tracing engine that's built in and put in a software one, like Intel OSPRay, and generate 4K, 8K, even 16K true ray-traced renders with the same materials, the same projects, and the same objects you've already built. So if you need print material or super-high-fidelity output, you can do that, locally or on a farm, however you want; the modularity lets you do whatever you'd like. And if you build any other intellectual property, it's yours to keep, or yours to contribute back. So there are many different ways you can work along with the Open 3D Engine, and those are some of the facets of what's coming up in the automotive industry. As for the trends we see, I can tell you it's
definitely not going to slow down. So depth is everything. Thank you very much for taking the time to sit and listen to me talk about all these fun things. If you're interested and want to know a little more, you can go to the website, o3de.org. And if you really want to see what kind of support there is and what the environment is like, please join the Discord (the URL is here, discord.gg); you can jump in and ask a question to almost anyone. There's a lot of information for whatever you want to know and understand, and you'll see how it's all broken apart. It is a vibrant, growing community that just never sleeps.