Hello. Welcome to the talk from Lubosz about the year of the Linux virtual desktop. Please, a round of applause for Lubosz.

Yeah, hi. I'm Lubosz Sarnecki. I work at Collabora, and today I'm going to present about XR on Linux in general, in open source, and the Linux desktop in VR, so mostly related to my work. Thank you all for attending and for your interest in my work, and thank you to FOSDEM for having me and considering me for the main track. This is my third FOSDEM, but it's the first time I speak, so I hope it won't be the last time.

Usually I don't put the date on my slides, but today's date is so nice: it's a palindrome, so you can read it both ways, and it's the same in American and in European notation.

This is me with a funny headset. It's an AR cardboard phone-holder headset. If you have any complaints or questions about the talk, please contact me by email or Twitter.

I will start with a brief crash course on what XR is and what terminology we have there. Maybe I should start by just asking you: how many of you have ever tried AR or VR before? That's plenty, that's nearly all of you. How many of you did that on Linux? Okay, that's way fewer. And how many of the people who did it on Linux did it on a completely open source stack? That's even fewer, maybe nearly none. And this is what we are working on right now. So I hope that this year, 2020, will improve the situation; there are some factors that will help with that.

But let's first start with this nice diagram. It's by a researcher called Milgram, from '94, so VR is actually quite old already. On this spectrum we have the real world, the world we are in right now, on the left side, and the completely virtual environment on the right side. And in between, there are some steps.
So augmented reality is, for example, when you have virtual elements in the real world, like annotations for real objects. Then we can go to the completely virtual world, and on the way we find augmented virtuality: this is when you have, for example, camera feeds or 3D point clouds of real objects in the virtual world. This spectrum was called the mixed reality spectrum. I have listed the terms here again, and a more prominent term nowadays is maybe XR, which stands for X reality or cross reality. This is also the term used in the Khronos standard OpenXR, for example, and it's the term I am going to go with, because it just summarizes all of these combinations of realities.

In terms of consumer-available headsets, I would point out three categories. First, we have the simple phone-holder headsets, which run the phone's operating system, render on the phone, and have simple lenses in them. These are the most accessible; they are what most people have had contact with. Of course you can get it fancier than a piece of cardboard, but it's as simple as it can get. Then we have the PC-tethered headsets. That's something for the PC gaming enthusiasts. I guess they're also more hacker-friendly, since you can use your regular Linux desktop and use the device however you like, because you have root. And on the right side I will point out the standalone headsets, which are similar to the phone thing but built from the ground up: you have an embedded computer in there, maybe in a belt pack or maybe integrated into the headset, and the lenses are better adjusted for the display, in contrast to the phone holders, which are more generic. And this, for example, is an augmented reality headset with optical see-through, so you can actually see the real world.
And if you want to do see-through on a device like that, you usually have a camera and do video see-through.

In terms of tracking: tracking is the most interesting part, I guess, when it comes to XR; the rendering is more straightforward. One prominent tracking device, which is also in your phones, is the inertial measurement unit, the so-called IMU. It's pretty cheap and pretty small; this is an example of how it looks. It has multiple sensors: a gyroscope and an accelerometer, and also a magnetometer, a compass, but that's usually not used because of its error. It's a very high-frequency sensor, so compared to the optical sensors I will show you on the next slide, you get a very high-frequency signal. This is the minimal requirement for VR, but you only get three degrees of freedom out of it by itself. That means only your head's rotation will be recognized by the tracking system, or if it's a controller, only the rotation of the controller and not its position in space.

If you want a position in space, you need optical tracking. For example, this is the work of Philipp Zabel. He's implementing an open source driver for the Oculus Rift CV1, and this is exactly that headset seen in an infrared view. The camera is a regular USB camera with either an infrared filter or, in the best case, an infrared sensor, and this is what the camera sees. The human eye wouldn't see these lights, because they're outside our visible spectrum. I'm not sure if you can see it, but there are squares around these so-called blobs: with primitive computer vision algorithms the blobs can be detected, and from them the position of the headset in space can be calculated. In combination with the IMU you saw on the slide before, something called sensor fusion needs to be done.
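The blob detection just described can be sketched in a few lines. This is an illustrative toy, not OpenHMD's actual code: we threshold a tiny grayscale frame, flood-fill connected bright regions, and take each region's centroid, which gives the 2D points a pose estimator would consume.

```python
# Toy blob detector: threshold a grayscale image and return blob centroids.
# Illustrative only; real drivers (e.g. the CV1 work) are far more robust.

def find_blobs(image, threshold=128):
    """image: list of rows of pixel intensities (0-255). Returns centroids."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one connected bright region.
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px+1,py),(px-1,py),(px,py+1),(px,py-1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cx, cy))
    return blobs

# Two bright LED spots on a dark 6x6 frame.
frame = [[0] * 6 for _ in range(6)]
frame[1][1] = frame[1][2] = 255   # blob A around (1.5, 1.0)
frame[4][4] = 255                 # blob B at (4.0, 4.0)
print(find_blobs(frame))          # [(1.5, 1.0), (4.0, 4.0)]
```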
So we have data from one sensor, the camera, which runs at 60 hertz or so, at regular camera frame rates, and from the IMU, which runs at much higher frequencies, I guess about 10 kilohertz, and this needs to be fused.

So this is one style of external optical tracking. The other style is the other way around: the base station emits lasers, and the sensors are on the headset. This, for example, is the Lighthouse tracking system. It has two rotors on which two lasers rotate, and when a laser hits a sensor on the headset you get a timestamp, and then you can calculate something similar to the previous case. From these techniques you get six degrees of freedom; that means the system can also recognize whether the user moves up or down, or through space.

Another way of tracking, which is easier in terms of hardware, is SLAM. That comes from robotics; it stands for simultaneous localization and mapping, but in XR we basically call it inside-out tracking. That means you have a camera in the headset, you run some computer vision algorithms to extract feature points, the feature points are stored in a database, and you can check whether you saw a given feature in the last frame. From this you can also calculate the position. In robotics it's also used for building a map of the real environment. SLAM is a very popular tracking method because it's easy to implement in hardware, but it's a bit tricky to do in software, especially if you want things like low latency.

The next point is input in VR, because it's not only the rendering that gives us immersion. Stereo rendering, one image for each eye, is definitely immersive, and so is moving your head, but one thing that gives us even more immersion, more connection to reality, is input. For example, when I first did a VR demo back in 2013, without controllers, because they weren't a thing back then, people asked me: where are my hands in VR? I would like to see a representation of my body.
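The sensor fusion mentioned above can be illustrated with a one-dimensional complementary filter: integrate the fast, drifting gyro at every step, and pull gently toward the slow but absolute optical measurement whenever one arrives. This is a deliberately simple sketch; real runtimes use Kalman-style filters, and all rates and numbers here are made up for illustration.

```python
# 1-D complementary filter: fuse a fast, drifting gyro with a slow,
# absolute optical measurement. A sketch; real trackers use Kalman filters.

def fuse(gyro_rates, optical, dt=0.001, alpha=0.2):
    """gyro_rates: angular rate per step; optical: {step: absolute angle}."""
    angle = 0.0
    for step, rate in enumerate(gyro_rates):
        angle += rate * dt                    # integrate gyro (drifts)
        if step in optical:                   # low-rate absolute correction
            angle += alpha * (optical[step] - angle)
    return angle

# Gyro reports a constant 1.0 rad/s but with a +0.1 rad/s bias (drift).
rates = [1.1] * 1000                          # 1 second at 1 kHz
camera = {i: (i + 1) * 0.001 for i in range(0, 1000, 16)}  # ~60 Hz fixes
drift_only = fuse(rates, {})                  # no optical: drifts to 1.1
fused = fuse(rates, camera)                   # corrections cancel most drift
print(round(drift_only, 3), round(fused, 3))  # true angle after 1 s is 1.0
```

The gyro alone ends up at 1.1 radians instead of 1.0; with the sparse camera corrections mixed in, the result stays within a few milliradians of the truth.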
And with tracked controllers this is now possible, and the experience can be more immersive. I have two different types of controllers here. The one on the left is a simple controller with just an IMU, so you only get rotation from it; it has a touchpad and a button, and doesn't even have a trigger. But for simple things like pointing at something and clicking on it, it's enough. On the right side we have a more complex controller which also does things like finger tracking: here on the side it has proximity sensors that detect how far your fingers are from the controller, which already gives you the possibility of a virtual representation of your hand. It also has six-degrees-of-freedom tracking, so you can actually position your hands in space correctly, which gives us more immersion. As for what these devices are: the left one is the Daydream controller. We have a branch in Monado that supports it, from Pete Black, but I will come to the details of what Monado is and where you can get it later. And the right one is the Valve Index controller, a pretty nice controller.

But the best thing to use for interaction in VR is something we have with us all day: our body and hands, since with our real hands we can experience things even more realistically. When you see hands in VR that actually look like your hands, you recognize them from the real world. And you may notice that you look at your hands quite a lot: when I pick up this bottle, I'm actually looking at my hands, and I know how they look. So a weird virtual representation of different hands is not as immersive as your real hands. Hand tracking can be done by pure computer vision. This is a stereo camera, which is also infrared: it emits infrared light, it's a wide-angle stereo camera, and with some computer vision, and even machine learning, a virtual representation of the hands can be calculated from this image.
On the right side I have a more classical approach to hand tracking, which is mechanical. It has the disadvantage of not being very user-friendly: you need some time to put it on, unlike the camera approach, where you don't need to do anything, you just have your hands. But it has the advantage of haptic feedback, which the other type of hand tracking lacks. In VR in general, haptic feedback is not something we have like in the real world; I cannot really touch a table and lean on it in VR. What I can do is let the controller vibrate, or do acoustic feedback in place of haptic feedback, because the senses in the brain are quite tolerant: you can substitute acoustic feedback and the human mind will still find it okay in terms of immersion.

So this was the rough crash course on what we have in XR, what the challenges are, and what the types of devices are. Now I want to point out some projects that implement this in open source.

One important thing that needed to be done in the Linux graphics stack is something called direct mode. It's the possibility to lease the display of the headset, so it's not used by the window manager as a desktop display, and the application or the XR runtime can render to it directly. It's based on work by Keith Packard; he introduced the non-desktop property on displays. You may see it in XRandR and wonder what it is. The advantage is not only that desktop windows don't show up on the HMD by mistake; you can also render at the native refresh rate of the HMD. HMDs usually have a refresh rate of 90, or 144 for the more modern ones, while desktop displays mostly have a refresh rate of just 60, and in the old days of open source VR we were stuck running the headset at the same refresh rate as the desktop.
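The non-desktop property mentioned above is visible from userspace: `xrandr --prop` lists it per output. As a small sketch, here is a parser that picks out the outputs flagged non-desktop; the sample text is hand-written to mimic xrandr's output format, not captured from a real device.

```python
# Find outputs flagged "non-desktop" in `xrandr --prop` style output.
# The sample below is hand-written to approximate xrandr's format.

def non_desktop_outputs(xrandr_text):
    outputs, current = [], None
    for line in xrandr_text.splitlines():
        if line and not line[0].isspace() and " connected" in line:
            current = line.split()[0]          # e.g. "DP-2 connected ..."
        elif current and line.strip().startswith("non-desktop:"):
            if line.split(":")[1].strip() == "1":
                outputs.append(current)
    return outputs

sample = """\
DP-1 connected primary 2560x1440+0+0
\tnon-desktop: 0
DP-2 connected 2160x1200+2560+0
\tnon-desktop: 1
HDMI-1 disconnected
"""
print(non_desktop_outputs(sample))   # ['DP-2']
```

A compositor doing direct mode would skip such outputs for desktop use and offer them for leasing instead.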
So this was quite bad, and it was resolved; I guess this landed in the stack in early 2019. It needed a couple of changes, so you need a fairly recent Mesa for it, a fairly recent XRandR and so on, and Vulkan as well. The Vulkan extension is called VK_EXT_acquire_xlib_display, and as soon as it was available I implemented it in xrgears. That's a demo application I wrote that renders a stereo scene in Vulkan; it has the gears you know, and it uses this extension as a reference implementation. I also had other backends, like extended mode. If you don't render directly to the display, that's called extended mode, and I had extended-mode backends for Wayland and XCB, and also a KMS backend, I guess for Intel. The work I did here later became the basis for the compositor in Monado.

Monado is the open source OpenXR runtime we developed at Collabora, and one job of the runtime is to provide all of this so the application doesn't need to deal with it. The runtime has this compositor that leases the display, and you only use the standard API to submit frames to it. Later I made a second iteration of this application which is just an OpenXR client, which reduces the code by at least one half.

On Wayland the situation was different, so this needed to be implemented on Wayland as well. If you want to know the details about direct mode on Wayland: it was mostly done by Drew DeVault, and the protocol was specified by NXP. It's called drm-lease-unstable-v1, and the Vulkan extension is named similarly to the one for X. This also supports XWayland clients; that means you can run an X application with it as well, and it will use direct mode. This is a screenshot from Drew. He took it through the lens, so you can actually see that it runs on the display. Not all of this is upstream yet; I guess there is a merge request for the Vulkan specification.
This protocol needs to be implemented in the Wayland compositor, so you need a compositor that implements it, and there is a branch of Monado so the runtime can actually use it.

So now we have the HMD display working, and we need to get some tracking data. I want to point out some notable open source tracking projects, most prominently OpenHMD. The funny thing is that OpenHMD has a BoF going on right now, so they are not attending this talk; I can say whatever I want about them. Hopefully they won't see the video. OpenHMD is a community of enthusiasts, and they have their methods and tools to get support for hardware pretty quickly: they analyze the HID protocol the HMD speaks with the PC over USB and try to implement it. Mostly what's in OpenHMD is 3DoF tracking. It's rather quick to get a new headset supported with rotational tracking, just the IMU, but it's more complicated to get positional tracking done. OpenHMD is currently implementing positional tracking for the Oculus Rift CV1; this is work done by Philipp Zabel and Jan Schmidt, and they have also given a couple of talks about it in the past.

Other projects I should point out are here: there are many projects that provide low-level access to devices but don't provide a consistent API for applications; they are more like experiments. One SLAM I want to point out is maplab; it's a pretty decent SLAM. SLAM is used a lot in robotics and in research, but there are several open source implementations.

What I was doing back in the day, this is from 2017: we worked on a project called vive-libre, and here you can see the Lighthouse tracking we implemented. The code is based on OpenHMD and the Lighthouse Redox documentation, and this is just a MATLAB visualization of what we had.
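The HID-protocol analysis OpenHMD does, mentioned above, comes down to reading raw USB reports and decoding fields at fixed offsets. Here is a hedged sketch with a completely made-up report layout (a report ID byte, a 16-bit timestamp, and three signed 16-bit gyro axes); real HMD reports differ per device, and the scale factor here is invented.

```python
import struct

# Decode a hypothetical IMU HID report: u8 report ID, u16 timestamp,
# 3 x s16 gyro axes, little-endian. Real HMDs use different layouts.
def parse_imu_report(report):
    report_id, timestamp, gx, gy, gz = struct.unpack("<BHhhh", report)
    scale = 0.001  # made-up raw-to-rad/s scale factor
    return {"id": report_id, "t": timestamp,
            "gyro": (gx * scale, gy * scale, gz * scale)}

# Build a fake report as the device might send it over USB.
raw = struct.pack("<BHhhh", 0x01, 5000, 100, -200, 0)
print(parse_imu_report(raw))
```

Reverse engineering a headset is essentially figuring out this layout from captured USB traffic, then feeding the decoded samples into the fusion code.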
So these points here are the headset, basically, as seen from the base station, the one with the rotors you saw before, and this is from the configuration file of the headset, which you can read out as JSON: the positions and the normals of the sensors on the headset. With the data from the sensors we could reconstruct this so-called station view, and then there's a simple algorithm from OpenCV called PnP, the Perspective-n-Point algorithm: you give it a bunch of 2D points and a bunch of 3D points, and it calculates the position in 3D space relative to the camera.

So this was when we prototyped the Lighthouse tracking, and the code was picked up by a project called libsurvive, a free and open source Lighthouse driver. They added many things to our work, like proper filtering. Filtering is the complicated part of combining multiple sensors' data, so sensor fusion and filtering are kind of synonymous. They also added support for controllers and for newer headsets. And they have multiple so-called posers competing at the same task, to figure out which approach is best. The project was started by a guy called Charles Lohr. This is him running the code on a Raspberry Pi; I guess he's also doing the rendering on the Raspberry Pi, which is pretty nice, all on a Raspberry Pi, I guess a version 2 or something. And we have a branch: my colleague Christoph integrated libsurvive into Monado, so it's now actually usable through the OpenXR API.

You don't need to read this slide; it's just to show how many open source SLAM implementations there are. There are quite a lot, of different vintages and different quality, most of them from academia. There is a site, openslam.org, that catalogs them. For Monado, we will choose one for you in the near future, so you don't need to choose. But there's a lot going on in open source SLAM.

I mentioned this a couple of times now.
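What PnP solves can be seen from the forward direction: if you already knew the headset pose, projecting its known 3D sensor positions through a pinhole camera model would give the observed 2D points; a solver like OpenCV's cv2.solvePnP inverts exactly that. Below is only the forward projection, in pure Python, with made-up sensor positions and focal length, and rotation omitted for brevity.

```python
# Pinhole projection of known 3D sensor points under a known pose.
# PnP (e.g. cv2.solvePnP) recovers the pose from such 2D-3D pairs.

def project(points_3d, translation, focal=500.0):
    """Translate model points into camera space, then pinhole-project."""
    tx, ty, tz = translation
    out = []
    for x, y, z in points_3d:
        cx, cy, cz = x + tx, y + ty, z + tz   # rotation omitted for brevity
        out.append((focal * cx / cz, focal * cy / cz))
    return out

# Made-up sensor layout on the HMD (meters, in the headset's own frame).
sensors = [(-0.05, 0.02, 0.0), (0.05, 0.02, 0.0), (0.0, -0.03, 0.0)]
pose = (0.0, 0.0, 2.0)                        # headset 2 m in front of camera
print(project(sensors, pose))
```

Given the sensor layout from the headset's JSON config and the measured 2D points, the solver searches for the translation (and rotation) that makes this projection match the observations.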
OpenXR is a standard API from Khronos, like OpenGL and Vulkan, and it tries to solve a problem the industry had before it: we had only vendor-specific APIs. Most vendors that made a headset also made their own API, which meant that applications were not portable. Mostly the abstraction was done by the engines, like Unity and Unreal, so most projects used, for example, Unity so they could run on multiple vendors' hardware. OpenXR tries to standardize that, and we hope it will get rapid adoption. It was released last year, in early 2019 or so, so it's still new, but most of the big vendors are in the working group that specifies it.

And we at Collabora decided to implement this API as well, and we did that in Monado, our open source runtime. Monado implements OpenXR and provides device drivers from, for example, OpenHMD or libsurvive, but it also has its own device drivers that we write for Monado in our internal driver library. For example, we have a driver for the OSVR HDK2, and we are currently working on a PSVR driver, which will also have positional tracking; my colleague Pete Black is working on that. I also recently wrote a native driver for the Vive and Index family of headsets; it's currently quite simple, but it will evolve in the future. As I mentioned already, we have a Vulkan compositor that opens the display for the application and communicates with it. We're currently working on 6DoF tracking, and, as I mentioned, we are also looking into providing SLAM for that. Monado also manages the camera devices: not only the cameras that track the headset, but also cameras on the headset that could be used for video see-through, for example.

And this is a video from my colleague Pete Black. He gave it to me today; I think he never released it. It's an example of his PSVR tracking.
I'm not sure if you know how the headset looks, but here you see the emitters on the headset and the positions calculated by his algorithm. This is work in progress, and soon you will have it exposed through the OpenXR API.

So this was an overview of what's going on in terms of drivers, tracking, and runtimes. Since we now kind of have that, we can build on top of it. What do we want to do with it, now that we have VR on Linux and in open source? One thing is a project I've been working on recently, released, I guess, in mid-2019: xrdesktop. In xrdesktop we have made a stack of libraries that interface with existing window managers and compositors. This, for example, is GNOME Shell: we get the window buffers from GNOME Shell and can display them in VR. We also take the input from the controllers and synthesize events from it, so the 2D desktop receives mouse and keyboard input. 3D desktops are not new; there have been several open source 3D or VR desktops. What is new in our approach is that we interface with the existing window manager, so you don't run the 3D desktop separately from your regular desktop; you just mirror the existing desktop. And this is something that was sponsored by Valve, so thank you for that. Without them, xrdesktop wouldn't be what it is right now.

And I have a demo, not a live demo, but a video, since doing live demos with VR is kind of crazy, I guess. But I have colleagues that do that. This is me using Inkscape in VR. What you can see here is that I'm dragging the window around and moving it close to me. If you move things closer, you have more precision. You can, of course, use it from far away, but then error from tracking jitter and hand shake accumulates.
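The pointing mechanism this demo relies on, turning the controller's ray into a 2D cursor position on a window, is at its core a ray-plane intersection. Here is a minimal sketch under the simplifying assumption of an axis-aligned window plane; the real code has to handle arbitrary window orientations, and the scale factor is invented.

```python
# Intersect a controller ray with a window plane at z = plane_z and map the
# hit point to window-local pixel coordinates. Axis-aligned simplification.

def ray_to_window(origin, direction, plane_z, win_origin, px_per_meter=1000):
    """Return (x, y) pixel coords of the hit, or None if pointing away."""
    oz, dz = origin[2], direction[2]
    if dz == 0:
        return None
    t = (plane_z - oz) / dz
    if t <= 0:                      # window is behind the controller
        return None
    hit_x = origin[0] + t * direction[0]
    hit_y = origin[1] + t * direction[1]
    # Convert from world meters to window-local pixels (y grows downward).
    return ((hit_x - win_origin[0]) * px_per_meter,
            (win_origin[1] - hit_y) * px_per_meter)

# Controller at the origin pointing slightly up-left at a window 1 m away,
# whose top-left corner sits at (-0.3, 0.2) in world space.
cursor = ray_to_window((0, 0, 0), (-0.1, 0.1, 1.0), 1.0, (-0.3, 0.2))
print(cursor)
```

The resulting pixel coordinate is what then gets handed to the input-synthesis layer so the 2D application sees an ordinary mouse position.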
That's also something you have to think about: you are holding the controller in your free hand, it's not resting on the table like a mouse, so you have hand shake. And I'm showing my insane drawing skills in Inkscape here. As you can see, the cursor that is shown on the desktop is also shown in VR, at the position where the so-called ray from the controller ends.

And this is just a game from GNOME Games that I try to solve. As you can see, when you click, an animation is emitted at the end of the pointer. And that was a modal dialog just now: if a modal dialog is opened, we overlay it in 3D space over its window as well, which had to be implemented explicitly, so it doesn't just appear at the zero position of the world. Solitaire is actually a nice thing to do in VR; every time I do a longer test, I play a round of Solitaire, I guess.

So this is Krita. It also has a modal dialog. The actual widgets are rendered as they usually are, and we get the buffer from the window manager. This demo is showing GNOME, but we also have integration for KDE. And I'm using Krita to draw in VR. Of course, there could be many improvements. For example, one feature many people, including my colleague Christoph, would like to have is that you attach windows or widgets to your other hand, so you could just use the color picker on your hand and interface better with the 2D application.

So let's skip a bit forward. I'm browsing the web with cats. On the controller you have a touchpad, and you can just scroll feeds as you are used to. And this is another example of a modal dialog. So it's quite usable once you get used to it.

Another interesting concept in XR is the concept of actions. In traditional games or input systems like SDL, you were listening, for example, for the space bar: looking for a press or release event from the space bar, or for mouse clicks.
With an action system, this is decoupled: we don't check whether the space bar is pressed, we check whether the user wants to jump, for example. This is necessary since XR controllers are very heterogeneous, and the hack the PC gaming industry did, just assuming everyone has an Xbox controller, doesn't work anymore. So there are actions and bindings. The actions are defined by the application: I want to jump, which is a boolean action, or I want to move forward, which is an analog action, for example. And the bindings need to be created by the runtime, or by the user who has the device: I have this controller here, this presentation controller, and if I press the right button, the thing should jump. This is available in the OpenVR API, which is the API SteamVR uses, as well as in the OpenXR API. In OpenVR this is specified in JSON; in OpenXR it's specified in code, but in our OpenXR branch of xrdesktop we also have a JSON format that's quite similar.

So this is how the mappings look for xrdesktop in particular, for the Index controller. We have two different sets of actions: one set for interacting with the windows in 3D, and the other set for the 2D desktop operations, like right-click, left-click, and scroll. The 3D operations are pushing and pulling the window on the Z axis, opening the menu, and things like that. This translates to the other controllers as well.

So let's look at some software in our stack. At the very bottom: our stack is written with GLib, and we introduced a couple of GLib libraries. For example, Gulkan is our Vulkan abstraction library, where we do the things we require to display windows in 3D and render objects. Then we have GXR, the library that abstracts both the OpenXR and OpenVR APIs, so you can write an application against GXR that actually runs on both APIs.
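Coming back to the action system described above, the decoupling can be sketched in a few lines: the application declares abstract actions, a JSON binding table maps one specific controller's inputs onto them, and the runtime resolves raw hardware events through the table. The action names and input paths here are invented for illustration; they are not xrdesktop's or OpenXR's actual binding format.

```python
import json

# Application declares abstract actions; per-device bindings map hardware
# inputs to them. Names are illustrative, not a real binding format.
bindings_json = """
{
  "interaction_profile": "example/index_controller",
  "bindings": {
    "/user/hand/right/input/trigger/click": "grab_window",
    "/user/hand/right/input/trackpad/y":    "push_pull",
    "/user/hand/right/input/a/click":       "left_click"
  }
}
"""

class ActionSystem:
    def __init__(self, bindings_json):
        self.bindings = json.loads(bindings_json)["bindings"]
        self.state = {}

    def handle_raw_event(self, input_path, value):
        """Route a hardware event to whatever action it is bound to."""
        action = self.bindings.get(input_path)
        if action is not None:
            self.state[action] = value

actions = ActionSystem(bindings_json)
actions.handle_raw_event("/user/hand/right/input/trigger/click", True)   # bound
actions.handle_raw_event("/user/hand/right/input/trackpad/y", -0.4)      # bound
actions.handle_raw_event("/user/hand/right/input/b/click", True)         # unbound
print(actions.state)   # {'grab_window': True, 'push_pull': -0.4}
```

Supporting a new controller then only means writing a new binding table; the application's action logic stays untouched.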
And in xrdesktop, which is also a library, we provide two types of applications. One is an overlay application, which means the scene is rendered by the VR runtime and we just supply the windows. And we have a scene application, where we render the full scene ourselves: we render the stereo buffers and submit the final stereo rendering to the runtime. The overlay app has the advantage that you can see the windows drawn over a VR application you're running: you can play a game and see desktop windows. The scene app has the advantage that we have full control of the renderer and can display as many windows as we want; the overlay one has limitations there.

libinputsynth is the library where we synthesize button presses, mouse clicks, and mouse position changes. We have several backends for it, for example xdo on X11, and there's also stuff for Wayland in there. This is how we basically interface with the window manager. On KWin, for KDE, there is a plugin that can be loaded, which is quite convenient, since we don't need to fork KWin. On GNOME Shell, the situation is different: we need to fork GNOME Shell to interface with it, but this will hopefully change in the near future when I do my upstreaming work. So, again, libinputsynth is required so we can actually deliver clicks to the desktop that runs at the same time as the VR view. It wouldn't be required if we had a standalone VR compositor; it's only needed because we want to run both. Yes, and I talked about that already.

So, how do we share the window buffers? We do zero-copy operation: we don't download the windows from the GPU, all the window buffers stay on the GPU. We use GL-Vulkan interop because, sadly, the compositors are still using GL. That's not a big problem, since there are extensions available to share memory between them; these are the extension names in Vulkan and GL, respectively.
Unfortunately, on Intel this extension is not implemented in GL as far as I know, so that's a limitation there. If we had a Vulkan window manager, it would be fine. Yes.

And the overlays are currently only implemented in the OpenVR backend of xrdesktop, since the OpenXR alternative, composition layers (XrCompositionLayer), is not implemented in Monado yet. We are working on that as well. And this is an example of the scene renderer with my example image, the hawk from Wikipedia; we can render many hawks.

Our upcoming release of xrdesktop is coming soon, shortly after FOSDEM: we will release 0.14. It's our biggest release yet, with the most lines of code changed and the most commits, about 364, and we are two developers. We changed a bunch of APIs so the OpenXR and OpenVR backends can run with the same code. So look out for that; it's already on git master. If you wonder where to get xrdesktop: if you are an Arch user, you're lucky, we have very good Arch User Repository packages. We have an Ubuntu PPA, which is maintained but maybe not up to date currently; we are working on that as well. And of course you can build it from source; we have a wiki article on that.

So, let's take a quick look at our roadmap, what we are trying to do in the near future. We would like to have a virtual keyboard. Currently on OpenVR we use the one from SteamVR; on OpenXR we currently don't have one, so it would be nice to have the same one on both APIs. We want to implement glTF loading, so we also have the controller models on the OpenXR implementation and can maybe also load other cool models, like a scene where you spend your time when you're using your desktop. Scripting is also something we are looking at. And for the maybe not-so-near future, we want to do a 3D widget toolkit.
I would call it G3K, because we already have a lot of code related to that, but it's just not nicely in one library and it doesn't have a nice API. With that, we could do something you might call an xrdesktop shell, where you have not only windows floating around but maybe a full desktop experience, with a clock and widgets and workspaces. So this is the future.

Also interesting is how to interface with 2D toolkits. We can also render from the 2D toolkit directly into VR, without going through the window manager, which is nicer since we can do things like high DPI and maybe nicer fonts. For that, we need zero-copy access to the toolkit. GTK 3 had something called GtkOffscreenWindow; GTK 4 is different, it now has a scene graph called GSK, so they changed a lot, and we need to look at how to access GTK 4. I have an example of this for GTK 3.

Native 3D UI: what do you need a 3D widget toolkit for? It's a nice thing, since you can have actual 3D widgets. This one is maybe not 3D enough, it's more a combination of 2D and 3D, but you can also have totally 3D UI, like you would work with in reality, and this is, I guess, the way forward to native 3D applications. On font rendering, I want to point out Mozilla's Pathfinder, which is font rendering suited for XR, and it's permissively licensed.

And one more interesting thing: what about native 3D applications? It would be nice to have a protocol for that. The renderer of the 3D application could render, for example, a stereo buffer and receive the position from the compositor. Or we could supply native geometry to the compositor, which then renders the 3D application. That would have the advantage that we could do transparencies, occlusions, and physics, if we use a real model for it. Drew DeVault worked on that in his project called wxrc. It's a 3D window manager, or VR window manager, and it can take 3D applications as clients; he designed a Wayland protocol for that.
It's pretty neat; check it out. In the demo he opens two 2D surfaces first, but the most interesting part for me is when he opens a 3D application. This is also something I would like to see in xrdesktop, and since he specified a protocol, it's quite feasible to implement. And this is the 3D cube. Yeah, unfortunately, sometimes LibreOffice doesn't want to skip the slide if I have a video. Now it did.

Okay, if you want to get involved: we are on Freenode, we have a Discord, and look at our wiki, Twitter, and GitLab. There's also the FOSS XR conference. It will be held again in Amsterdam; that's its second iteration. It's around the Blender Conference, so if you're interested, we are looking forward to seeing you there. And I guess now we can come to questions. So, any questions?

Do you see a future for the desktop in VR, but with mouse and keyboard interaction, like we just saw here at the end?

So the question was whether I see a future for mouse and keyboard interaction in VR. Definitely; as a programmer, I enjoy having a physical keyboard. For this, we would require either a virtual representation of the keyboard in VR, or augmented reality, where you just use your real keyboard and can see it. There are also projects for this; for example, the window manager I showed at the end has, I guess, only keyboard and mouse support. In xrdesktop it's kind of complicated: since we move the mouse pointer around, it could get quite hacky to decouple that from the real mouse, so I wonder how to solve that. But with standalone VR compositors, it's quite possible.

Yeah, I was wondering, is it compatible with other VR applications? Like, what if you want to start a VR game inside it? Does that require any kind of special support to work?

Do you mean xrdesktop or OpenXR? xrdesktop, yes. So xrdesktop can run on top of a VR application if you have it in overlay mode. That's possible.
This is also one feature we intended in the design: you can show existing desktop windows over VR applications. That was one goal. On OpenXR it's not supported yet, because that would require some changes in Monado which are going to happen at some point.

So, who are you targeting with OpenXR and Monado? With OpenXR? Yes. We're targeting OpenXR at all XR application developers, so they write applications against the same API; and not only application developers, but engines in particular. Many engines already support OpenXR, like Unreal, Unity, and the Godot engine, for example. There was a talk yesterday by my colleague Christoph; he implemented OpenXR support in the open source Godot engine.

Any more questions? Thank you for your time.