So, hello everyone, I'm Otelam. I work for HP in the Immersive Computing Group, and we deal basically with everything that goes beyond the mouse-and-keyboard office PC. Whether it's new types of interaction, gestures, touch, 3D scanning, 3D printing or anything of that sort, that's pretty much on our table. And I think it's hard to talk about 3D scanning, modeling or spatial work without coming across Blender, so this is a good moment to explore how we can introduce one to the other: how we can get more of the Sprout functionality into Blender, and how we can make Sprout a richer ecosystem altogether by having Blender be a good citizen and a good member of the Sprout family.

So let's get started without further ado. How did this Sprout thing come to be, and what is it in the first place? Just a quick check: how many of you have heard about Sprout by HP? All right, and not just today or yesterday from me. There is still somebody, right? Great. For many of you, then, this will be an interesting story of how we started this whole immersive workspace effort.

There were different scenarios and setups that we experimented with for the overhead projection. Basically there is a projector hidden in the back of the device, and a mirror system that throws the image down onto the desk, which gives you a secondary display as a workspace. There were different form factors, and of course the challenges that go with them, and that is the difference between a prototype, something that is still in development, and something that is ready to go mainstream. This is a commercial device, something that is available for sale. It's not a prototype, and that's why the session is titled "3D Scanning for the Masses": we are really going beyond the idea that this is only for somebody very, very professional and not for everybody.

From very humble, shoestring beginnings we went through several iterations, and we ended up with a projector, a mirror and now a touch mat. This was all nice, but it really just gives us an extra screen. Is it worth doing all this just to get an extra screen? I could always put a monitor next to it, or maybe a tablet-like device in front of it, and that would provide something similar. We actually wanted to go beyond that.

So what we created is a new type of PC. At its core it's still a Windows 10 device with a regular Intel Core i7 CPU, and it runs pretty much any Windows application, Blender included, of course. On top of that it has this extra screen made up of the touch mat and the projector, which gives you a workspace. And when I say workspace, it's really not just an extra screen: it tries to understand what you have placed on it and what you are doing with it. Whether it's a 2D object or a 3D object, it tries to make the most of it, imported into your workspace, your project, or whatever you might be doing with it. The way it does this is with a camera array up top. This is not just a mirror anymore; there is a set of cameras up there. There is an Intel RealSense camera which does the 3D scanning.
And there is also a high-resolution camera which does the texture capture; we also use it for document scanning, which is probably less interesting for Blender. For Blender, the primary interest is probably textures. There is also a number of webcams if you want to document what you are doing on the mat or have it as a live feed for something. And, well, there's always the LED lamp function, because we can do it.

But it's 2015, and it's not just about hardware. We have gotten used to each PC generation getting an extra core, maybe a few more megahertz, another terabyte here and there. That's a very incremental, quantitative sort of change. We really think that the PC, the whole platform, can do a lot better than that, so we can evolve it. There has been a lot of development in, for example, the mobile space, where people experiment with new ways of interaction and new use cases, even though there is not necessarily anything mobile-specific about them. At the same time, on the PC, we have all the horsepower, all the good silicon, and it's kind of getting wasted on office use and every now and then a bit of render cycle.

So we went beyond this, and this is where the Sprout software kicks in. On the semantic side of things, we do assisted image segmentation, and we do object detection and extraction. We also have an API for it, so we're hoping that this builds up as a platform, and then you can integrate it into other applications, hint: Blender, so that we get most of this functionality integrated there.

But this is all just words and words and words, and obviously the reason I bothered to put this together is that there is going to be a bit of a demo. So now let's see if we can get this thing going. It's not up on the screen yet, just a second. I'm trying to find the screen, which is to the left; there you are. It's a bit makeshift and I'm going to break my neck, but I'm going to make it work. It's a bit crude, but I think you will see far better this way. This is as far as it gets, so I guess I will just manage. Maybe I can get help with the mic; I will need an extra hand, though. Great, thank you. Well, how many engineers does it take, I guess?

So obviously this is the touch mat. The projection comes from up here via this little mirror, and that gives us the screen. It's a projective thing, so the image is not coming from underneath. The touch is actually detected by the capacitive touch screen in the mat itself; it's not an optical solution, the mat itself sees the touches.

Let's try and see what we can do here with the capture. I will place a couple of items on it, let's take these two. Remember I said that it tries to understand what's happening on the mat. So immediately when I put something on it, there is going to be a capture, and it's going to see what's happening on the mat. These become objects that I can then import. I'm guessing we have a bit of an issue with the light, because this should actually be segmented. Let me try and retake it. It's a bit sensitive to lighting conditions, but the idea is still that you can very easily bring in whatever textures or information you had there. Normally what you end up with, and this is what I've been doing upstairs if you have seen it, is that I digitize these objects, and what you see here is why I'm not an artist.
So that's what ends up when I draw, let me draw. Anyway, you can see that this acts as a touch screen for whatever objects you have inserted. It's a multi-touch surface, and it's pretty solid. You can bang it, scratch it; don't cut it or bend it at sharp angles, but otherwise it's pretty sturdy. Obviously you can do some drawing and some interaction there as well, so you can trace, outline, do whatever you want.

Now note that this is, as I said, a projective surface. So if I put a piece of paper on it, the image will still be projected on top of it. I see a lot of people in Blender doing tracing, so that would also be very useful: there is no step of taking a picture, saving a JPEG, putting it in the background and then tracing it. You can literally put whatever you want on the mat, a photo or something, and then use it as a working surface.

And when I say that it tries to understand what it sees on the surface, let's try this one. I wonder how it will react to the lighting in the hall. Let's see. All right. Notice that I didn't press any buttons; it's aware of what's happening on the mat, there was no explicit interaction. This is what we call immersive. Normally when you move things around, you don't expect to press a button. There is no button on a spoon or a fork; I interact with it, I don't activate it. Same thing here: whenever I touch something, it's going to process it, understand it, analyze it, resize it, et cetera. You can use anything you want. If I want to measure my little glove, I get measures on it; up on the vertical screen you can see the detailed measures that I got. And if I move it, it's obviously going to rescan it. Again, no button presses or anything, it's just an interactive way of doing this.

And this is something that I would like to see in Blender as well, so that the interface is not just a pile of controls that I can click, but there is more context awareness of what I'm doing, so that I can do things in fewer clicks and with less moving around. If I put something on the mat, it could, say, automatically become a texture, or be shown or rendered in some shape or form; there's a small sketch of that idea a bit further down. OK, so that's the first demo.

All right. I need to put away my little helper rig here. There you go. That was the first demo, but the other part of the story that I was getting at was the 3D story, something completely 3D. One of the cameras up there is actually an Intel RealSense camera. What is unusual about it is that it's not forward mounted or face mounted, so it's not looking at your face. Most of the Intel RealSense software has been developed to track skeletal motion, faces, et cetera, whereas we use it to scan objects. It's a different use case, because it's a downward-facing setup; but if you're thinking about models, it's probably more useful to be able to scan this way.

There are very specific solutions for 3D scanning. Some of them are more expensive than others, some are good for certain types of objects, and the bad news is that there is no silver bullet: there is no single 3D scanning technology that will work with every type of material, every type of object, and all that. So unfortunately we have to compromise there. What we can do is make this efficient. HP, fortunately, has the benefit of scale, so we can put a number of cameras in there and do a kind of sensor fusion, so that the texture camera, the 3D scanning camera and all the rest play together.
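Coming back to the idea of a mat capture automatically becoming a texture: a capture from the mat ends up as an ordinary image file, so wiring it into a material only takes a few lines of Python on the Blender side. This is just a minimal sketch, assuming the Blender 2.7x API with Cycles as the render engine; the file path is a made-up placeholder, not a location the Sprout software actually writes to.

```python
import bpy

# Placeholder path -- wherever a capture from the mat was saved as PNG/JPEG.
capture_path = "/tmp/sprout_capture.png"

# Load the captured image and build a simple Cycles material around it.
img = bpy.data.images.load(capture_path)
mat = bpy.data.materials.new(name="SproutCapture")
mat.use_nodes = True

nodes = mat.node_tree.nodes
tex = nodes.new("ShaderNodeTexImage")
tex.image = img

# Feed the capture into the default Diffuse BSDF that use_nodes created
# (present when Cycles is the active render engine).
diffuse = nodes.get("Diffuse BSDF")
if diffuse is not None:
    mat.node_tree.links.new(tex.outputs["Color"], diffuse.inputs["Color"])

# Assign the material to the currently selected object, if it is a mesh.
obj = bpy.context.object
if obj is not None and obj.type == 'MESH':
    obj.data.materials.append(mat)
```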
With all those cameras playing together, we get a very good result from the scan itself, even though it's not a several-thousand-euro scanner, but rather a complete system for a lot less than that. The key players in the whole scanning experience are the RealSense camera, which we use for the depth; the 14-megapixel high-resolution camera for the textures; and a little webcam that just helps you see the camera angle, so that the object is easier to position and view. Finally, there is the illuminator: we also use the projector in the scanning process itself. This is a structured-light scan, so you will see lines being projected onto the object, which give us extra information about its depth and shape.

And there is a little accessory, this little guy here. Its role is basically to turn the object around and expose its different sides to the camera. Because the camera is top mounted, this gives you a bit of an extra angle so that it can go around and see everything. It's also smart enough that if you do multiple scans as the object turns around, it can combine those scans. Normally you work in cycles, rotating the object around and getting scans from all the angles and sides, sometimes over several cycles: you scan from the top, from the left, from the right, back, front, underneath, et cetera. How many scans you do and how deep you want to go really depends on the object you are scanning.

There is always a trade-off between the resolution and the speed of the scan. We can get near real-time readings, but then it's going to be a very low-resolution scan of the object. If we want very high resolution, it's going to take a lot more time to do all the cycles and all the scans; for really detailed ones, expect hours, basically. But then you are well underneath a millimeter of resolution, which makes this interesting, because it's a fairly precise scanner in that sense. That's also the reason why it's a very stable, sturdy setup: when you are at sub-millimeter precision, even dirt, small dents or material on the desk can cause detectable changes, and that gets in the way very quickly. That's why we want it as sturdy as possible. People were asking me about handheld scanners, and that's fine: for face scans and the like you can get away with a reasonably precise scan. But the moment the scanner is handheld, the moment there is extra motion from your hand and your body and it's not on a tripod or something very stable like here, your resolution is going to suffer, or you basically have to average things out, which is not going to help your results.

And of course, where there is 3D scanning, there is 3D printing, and when we talk about printing, HP is not exactly a strange name. We are also very interested in using this for rapid prototyping. Different industries, of course, have different uses for it; in engineering, they will say that they want a model very quickly and they want quick cycles: have it printed in a solid material and then test whether it fits, whether it serves the purpose or not.
Traditionally, you would have to have somebody who is a very good modeler do the model, then send it off to a shop, they machine it, they send it back, you try it out. OK, that's almost what we wanted, but not quite; we change it a bit and then do the whole iteration again. That usually takes several days, plus shipping and all the delays that come with it. This way, we basically have a turnaround which, depending on the complexity of the object, can be within hours. You can start from something very simple, say modeling clay, so you don't even need a designer for the initial iterations. You make an object from clay, you scan it and you print it, and you get a solid plastic representation of whatever your shape is. We had a project where we made a quadcopter that way: we literally downloaded an image from the internet, shaped the wings and everything out of clay, scanned it, and then printed it in plastic. You just drill the holes, put in the engines and the control board, and off it goes. So in terms of prototyping use, it's very, very nifty. Of course, Blender itself is used in very different scenarios, so I'm always keen to hear what people use it for in terms of modeling when it comes to 3D printing and prototyping, not just animation.

And I am going to do some demos, because seeing is believing, I guess, especially in 3D scanning. Let's see how the RealSense likes the lights here, which are probably not going to help much, but let's try nonetheless. All right, because we are short on time, I'm going to do a snapshot scan. Sorry, the world is upside down. I'll try to move as little as I can and do the fastest possible scan here, just to stay within time. You can see the structured-light scan going on, but it actually uses all the cameras on the device. I'll probably move a little bit, but what can I do? All right, so now it has taken several snapshots with the different cameras, and it will fuse these readings together and basically create a small model. Normally I would use the stage, put the object on it and scan it from all sides, but this is a single scan, which will hopefully be good enough to show what we actually get from a scan.

So this is my hand. Sorry about the hairiness, but that's how it looks. As you can see, it's a fairly complete scan. The sides are missing, and of course the underside, which the camera didn't see; that is what the extra passes would give me if I wanted to do a full scan. I can show you what the RealSense camera actually sees: this is the RealSense picture, and we of course captured the texture on top of it. You can see plenty of detail; this is well underneath a millimeter. There are of course some artifacts, because of hair and me moving around a bit, but we still have plenty of information here to create a very nice little model, especially if we allow several scans to accumulate. So overall, that's the little demo of the 3D scan that I can do in this amount of time.

And maybe I can still show you what you would end up with. This is an actual object that I scanned; let me try to load this little elephant here. This is what you get when you scan an object from multiple sides and merge all those passes together.
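Since a finished scan like this comes out of the Sprout software as an OBJ mesh with its textures alongside it (the export formats come up in a moment), pulling it into Blender only takes a couple of lines of Python. A minimal sketch against the 2.7x-era API, with a placeholder file path, plus a quick cleanup pass, since raw scans tend to carry small artifacts like the ones just mentioned:

```python
import bpy

# Placeholder path -- wherever the exported scan ended up.
scan_path = "/tmp/elephant_scan.obj"

# Import the scanned mesh; the OBJ importer also reads the referenced
# .mtl file and its textures if they sit next to the OBJ.
bpy.ops.import_scene.obj(filepath=scan_path)

# The importer leaves the new objects selected; run a quick cleanup
# pass on each imported mesh.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.scene.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles(threshold=0.0005)  # merge near-duplicate vertices; tweak to taste
    bpy.ops.object.mode_set(mode='OBJECT')
```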
When you merge or fuse all of these passes, you get something that is a fairly high-resolution scan with texture. In this case, we went all the way: underneath, a belly scan, et cetera. The other tricky areas are of course between the legs, behind the trunk, behind the ears, which are hard to reach. That's why you probably need extra scan cycles, and that's what makes 3D scanning a bit tricky, but it's still worth doing, at least as a starting point for further processing.

All right, all right. I just have enough time for the question of interoperability. The scans that we do, we actually export as OBJ, so you can import them straight into Blender, as in the snippet above. We of course capture the textures and the objects as JPEGs, PNGs, et cetera, with alpha channels, so you can import those from there as well. But what I want to see is direct Blender integration, so that all these features are available directly from Blender. And then, of course, making the touch mat properly functional would be very nice. I can start Blender already and put it on the touch mat, but because the controls are really meant for a keyboard-and-mouse setup, it doesn't get me too far. I already talked with Julian and he had some ideas there, but it's definitely something that needs to be explored. And different parts can be used here: there's the mat, there's obviously a 3D camera, and there's also the texture, or high-resolution, camera. So, are there any ideas of how else you would like to see this kind of hardware being used in Blender, or what you would use it for? There was a first question here.

Would it be possible to use it for performance capture or virtual puppetry?

Yes, absolutely. We actually have a stop-motion application for it, which can merge real-world objects and captured objects, and you could integrate that into Blender so that you get something there. And it's a lot easier, a lot more natural to do on the mat than on a vertical screen, because you have things you can physically move around. I'm happy to show you the stop-motion stuff after the session. Thank you.

All right. Because you also said that the camera was originally designed to see your face, or maybe a human rig, I could imagine that if you have a puppet which is recognized as a human, and maybe also a face, you could do something like stop-motion rigging: make some poses, record them, and it could be something like an animation doll.

Absolutely. It's already capable enough that it can recognize objects, so if you have markers, it can already track those, and that's at an API level; we have an SDK for it, so it can be integrated. All the functionality that you have seen here can be integrated into other applications. The markers work basically like QR codes, except they are real-world markers: you make a capture, an image, and then mark the area. And that is sort of the point: you don't need explicit markers, parts of the object can work for that as well. As long as it's visible, it can track and record the position, and that would basically act as a puppet. There's a small sketch of that idea right after this exchange.

And also the position in depth, I mean?

The object tracking on its own doesn't give you depth, but if you are willing to go deep enough and combine it with the RealSense data, then you could get that. Okay, thank you.
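To make the puppetry idea a bit more concrete on the Blender side: suppose the marker or object tracking just described hands you a position per captured frame. Getting those positions out of the Sprout SDK happens outside Blender, so the track below is just placeholder data, but turning such a track into keyframes on a Blender object is only a few lines:

```python
import bpy

# Placeholder track: (frame, x, y, z) tuples standing in for positions
# that marker/object tracking, combined with RealSense depth, would provide.
track = [
    (1,  0.00, 0.00, 0.00),
    (5,  0.10, 0.02, 0.00),
    (10, 0.25, 0.05, 0.03),
    (15, 0.40, 0.03, 0.06),
]

obj = bpy.context.object  # the "puppet" object to drive
for frame, x, y, z in track:
    obj.location = (x, y, z)
    obj.keyframe_insert(data_path="location", frame=frame)
```

The same keyframe_insert pattern works for rotation, or for pose bones on an armature, which is closer to the animation-doll idea.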
You've got some three-dimensional space between the cameras and the table, right? Is there a possibility to actually do a kind of Minority Report interaction with objects, with your hands up off the table?

Right, so the RealSense camera does do tracking of that sort. In our experience it's very good for actions: if you want to start something, open something, close something, that type of thing is detectable. If you have a limited number of gestures that you want to recognize, it's good. It's not nearly as good when it comes to setting values: if I have sliders, if I have something that requires precise numeric input of some sort, then that input method falls short. So keep in mind what the advantages of using gestures are.

It looked like it would be a beautiful system for sculpting, for example, where you could just stretch and pull and shape stuff.

That's the other thing that came up when it was exhibited upstairs. We had a short discussion about how, currently, when you extrude, when you deform an object, et cetera, it's really just meant for one-finger interaction, basically the mouse. You poke, and maybe you have a modifier active at the time, but it's always a one-point-at-a-time thing. Whereas we have ten fingers, our brains are very good at moving all ten fingers in parallel and thinking in space, so there is no reason why we couldn't use that. If I use one finger, it's one stroke; if I use multiple fingers, or I stretch or pinch like this, I'm actually conveying far more information than a single poke with a modifier. Thank you. There was somebody here. All right, somebody in front.

This is a scanner that competes with many other scanners that are now coming onto the market. How do you differentiate yourselves from those other scanners? And a second question linked to that: how much does this system cost?

Right. We are not positioning it solely as a scanner. We try to create an immersive workspace out of it, bridging real-world objects and the digital content that you're working on, so that this is a lot smoother. A lot of the 3D scanners are one-trick ponies: this is what I do, I do a scan, and that's it. We go beyond that, whether it's interaction, digitizing, interaction with real-world objects, or blended-reality applications. As an environment, it just gives you more than this one trick of being able to do a 3D scan; it can do that as well, but it can do far more. As for the second part of the question, the price: normally it's $1,900, and I think with November and Black Friday coming, it's currently $1,600. And that's for the whole system: the scanner and projector part, the Core i7 Windows 10 machine, and the touch mat. That's why I refer to it as 3D for the masses, because you get a reasonably powerful computer with discrete graphics in there, a Core i7, you get all the sensors and cameras, and also the mat, and it's not an outrageous price, especially for professional use or purposes.

Okay. All right. Well, thank you, I guess. And if you have any further questions, feel free to contact me after the session. Thank you. Should I take it down now? I can; if you have like three minutes, I will do it down here. Yeah, but you can leave it, no problem.