So welcome to today's talk. My name is Royal O'Brien, and yes, Royal is my actual name, in case you wondered; it's not a usual name. We're going to talk a little bit about 3D engines and open source. I'm the general manager of digital media and games at the Linux Foundation, and I also serve as the executive director of the Open 3D Foundation. I spent a good amount of my career, about 20 to 25 years, in the video game industry, building 3D engines, 3D applications, MMOs, and games. A lot of games that people have played, I may have been involved with one way or another; whether I'll admit it or not is a different topic. The thing is that people look at 3D engines through the lens that they're only for games, but they're used everywhere around you, in every way. Since we're here for automotive, let's talk about how some of these things are really being used.

If you think about it, it starts with design. We used to do all of this in clay models and wood models. Today we do a lot of it in VR, in 3D space, which lets us work much more easily in a headset or in a shared environment where we can discuss these things from an AR and VR perspective.

It also lets us train navigation systems and self-driving systems. You used to have to record hundreds of thousands of hours of video and feed it to an AI system to get anything back. Now we can use 3D with physically based materials and real digital photogrammetry, pull it all together, spin up a thousand servers in the cloud, and generate a million minutes of video to feed into an AI system at any point in time. So 3D is a really important piece for building these scenarios.

And then there's the inside of the car. It's not like cars are stuck with a plain LCD or just numbers anymore.
Now we have full-blown LED panels, and we can see manufacturers starting to use some limited 3D effects, or the illusion of them. We're seeing newer advanced panels that can actually display 3D depth. And of course you have the heads-up displays showing up on some windshields. These are really important because they're the new forms of feedback that work with the driver and what they're doing.

Then there's everything outside the car. COVID taught us a lot of things. One is that in a pandemic, people aren't going to the dealership to buy a car. They want to build that car at home. They want to see: what does it look like by day? What does it look like at night? What about this paint job? And whether the car they buy will be yours or not is a decision they'll make based on the fidelity of what you can produce for them. That production is based on a 3D engine, especially when you try to scale it, and I'll go into that a little bit. So when we ask how it's used, there are a lot of places where 3D is important in automotive.

Now, think about design and insight. You used to physically shape a lot of these parts during creation. In 3D, you can use VR to rotate and look at them from angles you otherwise could never see or might not even take into account. You can apply physics. You can apply different elements that otherwise wouldn't have been considered until you built the prototype, and prototypes cost time and money. These are ways to cut a lot of that down, and you don't have to use game physics to do it. You can replace them with whatever realistic physics you actually want to use.
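That point about replacing game physics with realistic physics comes down to the physics step being a pluggable function. Here's a purely illustrative Python sketch, not engine code; neither integrator is real vehicle dynamics:

```python
# Illustrative only: the physics applied to a part is just a function
# you can swap out. Neither integrator is real vehicle dynamics.

def euler_step(pos, vel, dt, accel=-9.81):
    # Simple game-style Euler integration of a falling part.
    vel = vel + accel * dt
    return pos + vel * dt, vel

def drag_step(pos, vel, dt, accel=-9.81, drag=0.1):
    # Same interface, slightly more "realistic" model with linear drag.
    vel = vel + (accel - drag * vel) * dt
    return pos + vel * dt, vel

def simulate(step, steps=100, dt=0.01, pos=10.0, vel=0.0):
    # The engine doesn't care which step function it calls each frame.
    for _ in range(steps):
        pos, vel = step(pos, vel, dt)
    return pos

print(round(simulate(euler_step), 3), round(simulate(drag_step), 3))
```

Because both step functions share one interface, swapping "game physics" for a high-fidelity model is a one-line change.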
There's a company out there, one in Britain actually, that does amazing car physics for the F1 racers, and those are not game physics. It's crazy how well they do.

The other part is design and fast iteration. You can take a conceptual idea, build some of the interfaces you want people to see, show them, and carry them over. You can build entire vehicles, change them, alter them, break apart the parts. Think about what that means in practice: if you build the content in 2D and then I want to change the rims, I have to redo every angle of every shot I made of that car, every one of the pieces, because they're all baked together. In a 3D environment, you just swap out that one mesh object or its texture, and you can do almost anything; the parts become interchangeable and reusable. That was just one example of developing cars and how people are looking at this.

The other part is fast visualization. Look at this example: this is what the experience will look like from almost any angle, without actually having to build it. Is it too far away? Do I feel claustrophobic? Does it feel wide and open? Do those two inches of distance really matter? You can answer that in 3D. And then there's being able to train, repair, and build out the architecture of what's around the vehicle. That's really important, because it determines the cost of ownership of that vehicle and how people will look at it.

So, training and AI, which I talked about earlier: virtual tracks. You're talking about pulling digital-twin data down from the cloud, and this data exists everywhere. You can pull it down and stream it dynamically. So now you're talking about entire cities in 3D environments, sourced from Google Earth, GIS, or DEM maps.
That means you can build these tracks and build these environments, with interactive data: buildings, pedestrians, traffic signals. I think there was something at re:Invent recently where they put a million actors into a small area modeled on Las Vegas, and they showed that until you reduced it to about 50,000 people, you couldn't maintain pedestrian traffic flow. So if you're trying to determine how you'll train, what's realistic, and what the thresholds are for scenarios you couldn't otherwise create: could you imagine trying to get 100,000 people into a city to train how an AI car would deal with that kind of traffic? You couldn't. But in 3D and the cloud, you can. That's an important aspect here: the things you couldn't do, you now can.

The other part is physically based weather scenarios. Unless you're going to wait for a thunderstorm to produce the perfect amount of rain to determine whether your AI system can see through it, you're going to have a hard time. We have shaders that have been doing this in games for years, and we're really, really good at it. We can create weather scenarios where the ground reflects the sky, because there's nothing worse than training an AI to look for the road when the road reflects the atmosphere and the system doesn't know what to do. You can simulate motion and conditions that are really hard to replicate in the real world, and that includes dynamic lights. You're training this thing and all of a sudden a lightning storm rolls in; your car is going to freak out. How does it learn to deal with those scenarios? You can create them in a 3D environment, along with other visual impairments. So think about it from that perspective.
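As a sketch of that idea, generating weather rather than waiting for it, here's an illustrative Python snippet (the field names are made up, not from any engine) that spins out deterministic scenario descriptions a render farm could consume:

```python
# Illustrative sketch (not engine code): deterministic, parameterized
# weather scenarios for synthetic training data. Field names are made up.
import random

def make_scenario(seed):
    rng = random.Random(seed)  # same seed -> same scenario, reproducible
    return {
        "rain_mm_per_hr": round(rng.uniform(0.0, 50.0), 1),
        "fog_density": round(rng.uniform(0.0, 1.0), 2),
        "time_of_day_hr": rng.randint(0, 23),
        "lightning": rng.random() < 0.1,        # rare dynamic-light event
        "road_wetness": round(rng.uniform(0.0, 1.0), 2),  # sky reflections
    }

# A render farm could hand each node a slice of these seeds and produce
# footage for conditions you might otherwise wait months to film.
scenarios = [make_scenario(s) for s in range(1000)]
print(len(scenarios))
```

The seeding matters: a failing training run can be reproduced exactly by re-rendering the same scenario seed.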
I only put AWS up because I've worked with them quite a bit, but Microsoft, Google, and Tencent Cloud all have GPU-based instances you can use to spin up these render farms and build these training scenarios, even if you're just doing lane assistance or depth perception.

So, inside the car. Let's think about it: these are fully embedded systems with smartphone-class hardware. These aren't limited devices anymore. We're using SoCs from Imagination, Arm-based parts. You've got LED panels with interactive displays and full instrument clusters, which means you're running, to a degree, what I've got in my phone in your car, with bigger displays and more ways to interact. So when you say, well, it's car hardware: not really. It's what's being done in different forms on these embedded systems. That means opening up your mind to what's been developed outside automotive and what's possible with it.

These fully integrated navigation and control systems are using 3D coordinate systems. They're using a number of digital sensors that you can bring into their awareness, and you can feed those into a 3D environment to represent that data in a way the driver can act on. And within the car, take a look at this example; I have a video here. Notice it's showing the speed and the data, but also that the arrows are pointing, banking, and rotating in real-time space. Do you want to rewrite an entire 3D engine to do that, or would you rather use the coordinate system of an open source core engine and build your own intellectual property on top of it? Because 3D engines don't have to build an entire world or the most beautiful things.
Even the simplest things like this: that's basically two triangles for the low-polygon background drawing our instrument cluster, and then maybe 20 triangles total being shaded there. Any piece of hardware on the planet can draw 20 triangles, but you don't have to do all the work of figuring out how to draw them in space once the sensors are feeding you that information.

The other thing I've seen going on is stereoscopic displays and 3D eye-tracking, knowing what the user is actually doing. Think about it: if we're talking depth and stereoscopy, that's naturally 3D. That means you have an engine taking care of all the coordinate space and everything you need to know, and you're not doing planet-sized coordinate space; what you're doing is rather limited. So when you think about using a 3D engine, don't picture a massive MMO shooter. It can be as small as Pong. It is as simple as that. What you build from it can be that small and that simple, but without having to do all the work to accomplish it. These are the things that can carry it forward for you.

So, let's talk about advanced displays and 3D rendering. Sensor integration, right? Augmented reality, where we can take a video feed with a coordinate system and display what's going on in real time. Haptics you can feel in the wheel. Audible notifications tied into a 3D coordinate space to identify when things are happening. And hazard assistance, information and assistance that's actually helpful, not distracting. You have a lot of sensors detecting what's going on, and the coordinate space lets you show depth. If I put a triangle on the screen, is it for the person in front, or somebody further back?
And should I be paying attention to that? These may seem like very simple concepts, but when you're trying to convey a very limited amount of information in a very short period, you'd better make it simple, because drivers have to keep their eyes on the road, and how you display this matters to them. So, depth again: interfaces with depth are more immersive. Think about how you'd have to composite this in 2D. In 3D, you just place it in Z-space. You take all your UI elements, all your 2D, and simply move them around in that third dimension without doing any of the extra work, without recomposing anything dynamically. Once it's mapped into 3D space, the engine does everything else for you. If you can't tell, the recurring theme here is: don't reinvent the wheel when you already have something to work with.

So, mobile hardware, Linux, and portability. Really important: this engine we're talking about, the Open 3D Engine, isn't something that was built for Windows first. As a matter of fact, it treats Linux as a first-class citizen. This is not a port; it's all POSIX. It has hardware and software layers designed to communicate with mobile hardware. It's designed to be compiled on Arm with GCC. You don't need a Windows box anywhere within miles to get the editor, the full system, and the runtime running. You can build it on Linux and ship it on Linux, or ship it on Android or iOS. That means it also has to run on CPUs with less power, using Neon and SIMD for fast math, with ray-tracing support for low- and mid-end mobile GPUs.
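Going back to the Z-space point for a second, the whole trick can be sketched in a few lines of Python. This is illustrative only; the names are hypothetical, and a real engine's scene graph does this sorting and projection for you:

```python
# Illustrative sketch: 2D HUD elements gain one extra coordinate (z),
# and depth ordering falls out of a sort. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class UiElement:
    name: str
    x: float
    y: float
    z: float  # the only new value versus a flat 2D layout

def draw_order(elements):
    # Draw far elements first; nearer elements land on top.
    return [e.name for e in sorted(elements, key=lambda e: e.z, reverse=True)]

hud = [
    UiElement("speed", 0.1, 0.9, z=0.2),
    UiElement("nav_arrow", 0.5, 0.5, z=0.5),
    UiElement("hazard_alert", 0.5, 0.6, z=0.1),  # closest to the driver
]
print(draw_order(hud))  # far-to-near: nav_arrow, speed, hazard_alert
```

Moving an element "closer" to the driver is just writing a new z value; no recompositing of the 2D artwork is needed.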
A lot of you are familiar with Imagination Technologies because their GPUs are in Renesas parts and a lot of other reference boards. Well, they've been able to show, and this footage actually comes from Imagination, ray tracing running natively on these GPUs today. There's a handful of them. So you have this Apache-licensed, full-source open source engine that already supports the hardware you're running on today, with advanced features you'd think you'd need a top-end Samsung or Apple phone for. That's just not the case. It's already there for you.

That also means a very lightweight footprint. If I don't need an animation system, a graphics controller system, or a sound subsystem, there's a lot I don't need, so don't include it. This is a very lightweight binding system: a group of libraries that are self-describing when you load them, with no hard dependencies on each other. Think of it this way: in most engines, if you want to take the 3D renderer out, good luck separating it from all the other systems, because it's usually deeply integrated. If you want to take the renderer out of the Open 3D Engine, you just turn the library off. That's it. We actually have a case where Intel has an implementation of OSPRay, a software ray tracer: we turn the Atom renderer off, turn OSPRay on, and it renders to that without changing anything else involved. It's built that way. You only use what you need, and you don't have to unwind everything to figure out how you'll use it.

We also have a mobile working group focused specifically on optimizing the Open 3D Engine for mobile platforms, with Huawei, OPPO, Carbonated, Imagination Technologies, and Heroic Labs.
It's a working group dedicated to getting the engine to use the least amount of power with the most capability on mobile hardware: Arm-based parts, Imagination low-end GPUs, low-end hardware that can actually drive these displays, compiled out to Android or Linux or any other target, as long as you can compile it with GCC or Clang.

That also means having the ability to control all of the design. This isn't a case where the output looks beautiful but you don't have the tools to build it. The core of this engine came from a video game engine that shipped games played by millions, so it's not something haphazard. That core was rewritten completely from the ground up to be fully modular, so we could get it onto mobile platforms and others while keeping the most advanced capabilities. So it's everything from the editor to packaging and, as I said, full Linux support with Vulkan as a first-class citizen. In other words, when we ship a release, it's not a case of DirectX working great while Vulkan is halfway broken. Vulkan is first class. We ship binary Debian-based packages, or you can pull the source straight down from GitHub and compile it, and all the features we support are there.

And of course there's physically based rendering for fidelity, which means you can build things as complex and gorgeous-looking as you want. Part of what I brought up about those physically based surfaces, by the way, is that they behave a little differently on different hardware, especially mobile. You have different layers where you can choose deferred or forward rendering depending on the texture and the surface, so you can change how much complexity runs on the device by changing the shader and pass model.
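The modularity I keep describing, turning the Atom renderer off and OSPRay on with nothing else changing, can be sketched as a registry of toggleable modules. This is an illustration, not the actual O3DE Gem API:

```python
# Illustrative sketch, not the actual O3DE Gem API: libraries register
# themselves by name and can be toggled without touching anything else.

class Engine:
    def __init__(self):
        self.modules = {}

    def register(self, name, impl, enabled=True):
        self.modules[name] = {"impl": impl, "enabled": enabled}

    def enable(self, name, on=True):
        self.modules[name]["enabled"] = on

    def render(self, scene):
        # Whichever renderer module is currently enabled handles the frame.
        for mod in self.modules.values():
            if mod["enabled"]:
                return mod["impl"](scene)
        raise RuntimeError("no renderer enabled")

engine = Engine()
engine.register("raster", lambda s: f"raster:{s}")
engine.register("software_raytracer", lambda s: f"raytraced:{s}", enabled=False)

# Swap renderers: turn one library off, turn the other on.
engine.enable("raster", False)
engine.enable("software_raytracer", True)
print(engine.render("showroom"))  # raytraced:showroom
```

The point is the shape of the design: the rest of the engine talks to a named slot, not to a hard-linked renderer.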
For 3D models, you can use formats like FBX, glTF, and OBJ. For materials, again, it's PBR, a metal-roughness-based model, with transparency, and it supports 32 bits per channel, which is really super overkill. You may never use that, but it's there if you're trying to maintain fidelity from the artist who created the asset all the way to the endpoint where you're distributing it. For shaders, it's HLSL and extended shader languages you can use to create whatever objects you want, rendered however you want.

And it's made to support both commercial and open source tools. We support Blender integrations, we have integrations in the works with Houdini, and you can import from Maya and 3ds Max. You'll find a lot of different packages work with this today. We even have a shared vision with some of our members, like Epic Games; some of you are familiar with the Unreal Engine. They're one of our premier members of the foundation, and they believe in the same thing: if you build your assets in a common format, we should be able to share them between engines. That's what allows growth on both sides. It means you're not stuck. If you do work in one place or another, you can move between open source and commercial in whichever direction you want. And we love working with them, as a matter of fact. People think it's a competition, but it's really about how you empower people to build what they want using the tools.

Prototyping: it's also important to be able to build something without needing a C++ developer just to make a small tweak. So we have visual scripting, like the Blueprints some of you are familiar with, but it extends across the entire system.
If you want to build an editor, a UI, an IoT integration, or an input sensor and test it without writing full-blown C++, you can extend it visually by connecting the pieces together. It can drive and control almost every aspect of the engine. But you can also do it in C++ or Python, if that's what your engineers prefer. That means you can save time and money by rapid prototyping and then moving into shipping models. Everything in here is modular, so when you go to build your particular application or experience, you don't have to modify the original source code. You literally override libraries and build them out that way. You don't get stuck on a particular build; you can migrate from build to build as they come out without those problems.

Now, customer experience and marketing. Let's talk about the car configurator, what people can build from home. Just by changing materials, you change what people can see and do. If you think about car configurators today, there's an average of 36 shots, 36 angles, times the number of parts, times the number of materials, all pre-rendered and cached so that when you change an option, it swaps the image you see. But you can do the same thing with a 3D node running your car and a web request. That request can literally change objects and send a screenshot back, which means you can scale these nodes up and down dynamically. If somebody wants to see what their car would look like driving in the Sahara, you can make that one of the backdrops; the node renders at 60 frames a second in the cloud, changes the snapshot, sends the image out, and moves on to the next request. Now you're pumping out dynamic images on the fly, because it's all loaded and changing live.
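Here's a minimal sketch of that configurator flow, with the scene as a plain dictionary and a string standing in for real rendered image bytes; all the names are hypothetical:

```python
# Illustrative sketch: one live 3D scene plus a web-style request replaces
# a quarter-million cached configurator images. Names are hypothetical,
# and the "snapshot" string stands in for real rendered image bytes.

def handle_configure_request(scene, request):
    # Apply the requested part and backdrop changes to the live scene...
    scene.update(request.get("parts", {}))
    if "backdrop" in request:
        scene["backdrop"] = request["backdrop"]
    # ...then render a fresh snapshot of the current state.
    state = ",".join(f"{k}={v}" for k, v in sorted(scene.items()))
    return f"snapshot({state})"

scene = {"backdrop": "studio", "body_paint": "red", "rims": "standard"}
image = handle_configure_request(
    scene, {"parts": {"rims": "sport_black"}, "backdrop": "sahara_day"}
)
print(image)  # snapshot(backdrop=sahara_day,body_paint=red,rims=sport_black)
```

Because the scene stays loaded between requests, adding a new rim means adding one asset, not re-rendering every cached combination.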
And then, of course, there's ease of update. Think about it: if you've pre-generated all the images, you probably have about a quarter-million images cached. When a new rim comes out, are you going to rebuild all of those images? In a 3D model, you just put the rim in. You can change parts out while maintaining the integrity of the whole, which is an important part. That's why having these exploded models is a key element of giving people flexibility. Later in the cycle, if you release a new product, you don't have to redo all your materials. It scales with the cloud, and you can build virtual show floors and add new accessories without redoing anything.

Ray tracing for production quality is really important too. If people really, really want to see what the final material will look like, you can produce renders that show exactly that, the closest representation of real life. That's going to be a decision point: they'll look at it, then look for other pictures like it, and now you have a better chance of them coming to visit you. And these images, by the way, came from that OSPRay software ray-tracing renderer, in case you wondered.

The last part I want to talk about is what it takes to build a 3D engine. It takes a lot of years to reach feature parity, and then an ongoing commitment to keep pace with the advancements being made. Think of an engine from 2010: compared to what exists today, it's not even remotely the same experience. It's a huge investment of money to fund the teams and the technology. You need specialized knowledge from a lot of people in a lot of different areas to maintain this and keep building on it.
And you need the integration expertise to get all those people talking to each other so it actually works at the end of the day. There's a lot to building a 3D engine. Programming: you've got to be able to create the logic. Physics. We have Script Canvas, which is our visual scripting. AI, if you're going to build nav meshes or behavior trees for automation. Building pieces in C++. Scripting. Qt and UI extensions so you can build tools people can actually see. And then packaging all of it up in a shippable form. Somebody actually asked me today whether there's a Yocto package this could be built into. It's definitely possible, because all the source elements are there; it's a matter of how we assemble them.

We also handle conversion of media assets. The last thing you want is for somebody to build a 3D object and have no way to import it, so supporting all those formats is really important. Then there's using standard off-the-shelf open source tooling that works really well, like CMake, and having sample content people can use. If you're doing animation, it's important to have a sample showing how the nodes are put together and their relationships. If you want to see how a human moves inside a car, you're probably going to need a skeletal system, because when you apply physics, people shouldn't slump as if they were made of chocolate. And if you're going to do any of this in a multiplayer environment, you'll need networking, plus cloud support for assets and objects, inside and out. The other piece here is platform support.
You want native Linux and Mac editor support so you're not stuck making everybody get a Windows box and then finding another device to test on. You want to test where you build, and with this, you can. The mobile hardware platform is a very strong suit; having companies that ship a large share of the devices on the planet supporting it is a big thing. And then there's OpenXR, the VR and AR support, being fully developed in the open. You can go to that link, see the spec, and see what they're building on, which is the precursor to how all of this will migrate when we talk about the metaverse and next-generation technologies: how this will play into those standards and stay portable, so that what you build today isn't unusable tomorrow.

That's really everything for the presentation. Any questions? Yes. Are there what? Oh, languages. Is there language support? Good question. There's a behavior context system that gives you access to all of the routines and pointers, so you could tie Rust into this and use it as a language within the engine. As a matter of fact, the craziest thing: we launched in July, and shortly after, somebody tied JavaScript into the behavior context and had the engine being driven by JavaScript. I have no idea why, but it was super cool to see it done, and we have people looking at extending it to C# as well. You can take any language, attach it to the behavior context, and it becomes part of the bus; at that point, you're running in that language.

Next question: what initiatives do we run to get people to actually use the engine? A lot of what we've focused on is the handful of companies already using it: AWS, and Intel, which is starting to use some of the OSPRay work. Huawei is using it to build their own applications.
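Coming back to the language question for a moment, the behavior-context idea can be sketched as a name-to-function registry that any embedded runtime calls through. This is an illustration, not the actual O3DE BehaviorContext API:

```python
# Illustrative sketch, not the real O3DE BehaviorContext API: engine
# routines are reflected into a registry by name, so any embedded
# runtime (Lua, JavaScript, Rust over FFI, ...) can look them up.

class BehaviorRegistry:
    def __init__(self):
        self._methods = {}

    def reflect(self, name, func):
        # The engine exposes a routine under a stable string name.
        self._methods[name] = func

    def call(self, name, *args):
        # A foreign-language runtime marshals args and calls through here.
        return self._methods[name](*args)

registry = BehaviorRegistry()
registry.reflect("Entity.SetPosition", lambda eid, x, y, z: (eid, (x, y, z)))

# Any scripting runtime only needs string names plus this call bridge.
print(registry.call("Entity.SetPosition", 42, 1.0, 2.0, 3.0))
```

That's why new languages can be bolted on without engine changes: the binding layer only ever sees names and argument lists.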
But for the open community, it's working with universities, hackathons, and events where people can start building these pieces, and providing sample code to get started, because the first key is that you've got to play with it to understand what it's capable of. Then it's releasing a lot of video and media and having a support system, because there's nothing worse than trying to work with an engine and thinking, I'm never going to figure this out, because there's no documentation, nobody to talk to, and no videos to look up. So those are the three key areas we make sure of: great documentation, a lot of videos people can work from, and an active Discord server with about 3,000 people in it and about 600 to 700 online at any time. It's all broken up into special interest groups. If you want to know something about graphics, you can drop into Discord, go into the SIG Graphics or SIG Network channels, ask a question, and you'll probably get an engineer answering you back immediately or within the hour. You can go as deep as you want. You'll find the engineers, the people you couldn't normally access, who work in game studios or inside Amazon or Microsoft. They're there, in that Discord, answering questions, working in the open. Those are some of the ways we've been able to grow those communities. It's a never-ending battle, though. It truly is.

Any other questions? Yes. That's okay. So the question was, is anybody using this for the metaverse?
There are a couple of small projects starting to look at using this for the metaverse; they've built small demos and pieces. The interesting thing is, every time I have the metaverse conversation, and trust me, I can have a much bigger conversation on that topic because it's something I'm working on, the same point comes up: open source is an obvious piece of the metaverse, because you can't expect every single person who wants to build a metaverse experience to license a 3D engine to do it. Imagine if, to build anything in the metaverse, I had to go get a Unity license or an Unreal license; I couldn't do it without one. That's not realistic. So I hear from a lot of people who are coming in and using this engine for metaverse experiences for exactly that reason, and because of our work with companies like Epic and that flexibility, they feel a little more comfortable being able to port projects over. So we're definitely seeing some movement there. What we're doing is building the bridging tools that let you move assets between Unreal and O3DE, which is the perfect starting place. By assets I mean the actual 3D assets: meshes, textures, objects. NFTs would be the wrapper containers; there's no real NFT support natively at the 3D-engine level. That's more of a wrapper that goes around the asset and describes or points to it. But if you don't have portability, if an asset doesn't look the same in Unreal and in O3DE, then you have a problem: you could wrap it in an NFT and it won't look the same coming out of the two engines, which means whoever bought it is going to be really mad. Any other questions? Any other questions on the automotive side of the world?
I know this is kind of a twist, and I'm not sure how much people think about what you can do with something like this in automotive, but I've seen the strangest things come out of the community, built on these pieces out of nowhere; you never see it coming. When we started out, we had kind of limited Linux support, and true Linux support, the kind that doesn't require anything else, originally wasn't planned until around December. Yeah, that didn't last long. The community had other thoughts: two weeks later, the first packages were built natively, and they steamrolled the whole thing. It's like anybody who sinks their teeth into it, we just see all these things come out. So, thank you.