In this talk I will show you how to build a UI that is usable in a 3D world, which is a technical feat right now, so this is what is going to happen. And with some solid engineering principles, because anybody can hack something together, but then you should try to actually make it work and keep it maintainable, so we should try to understand how this could be done. Now, why am I doing this talk at all? Virtual reality is all the rage, it's sexy, that could be a reason, but there is more to it. I work at Hyperfair, and this is what we build. This is a UI. You have places to go, and you can navigate to actually go there. This is like a virtual exhibition. You know exhibitions where you can go online; in principle there should be avatars interacting with mine, and then I could chat. So as you can see, this is a system that is in 3D. We started around 2011, and we have been in this market for a while: 3D on the web, virtual exhibitions you can visit with an avatar, and now also in VR. We need the UI in VR, too. We have things that our users are doing with the product. When they take their Oculus and put it in front of their head, they should be able to do the same things, so we should show them the same UI as before. Now, the technology issue. We are based on Unity 3D.
And so is our UI, which means that our technical problems are of a certain kind. But what if we wanted to use proper web technologies? So right now, picture that kind of piece of software done with real web technologies. And now I have a dream. The dream is of a world where developers can build 3D UIs on the web, in virtual reality worlds, using the same technologies and tools they use every day. Now the problem with this dream is the 10/90 rule: every project that is visionary is 10% inspiration and 90% perspiration, and once you have the vision, the perspiration remains. The vision is the easy part; making it actually work is the technical slog. So let's look at the web. On the web, 3D means WebGL: a canvas in 3D mode, with Three.js, Babylon.js and so on. And the UI is the DOM. So I want a UI inside the 3D world, but the two don't mix easily. First technique: we have CSS 3D transforms. So we put the UI in a div, in the DOM, and we position it in the 3D world with CSS 3D transforms. The problem is that the div and the WebGL canvas do not share a z-buffer. Which means when you give the divs some depth and you put them in the world, they cannot be partially covered by an object in the world. Trying to do that is really, really tricky. It's possible, but it's so tricky that it is just not worth it. So essentially what you still have is a canvas and a div. Either the div is over the canvas or it's behind the canvas. The div is not inside the 3D world, even with CSS 3D transforms. So, this does not work. Now, what if I rendered the DOM in a canvas context? Okay. Technically, it is possible: take the DOM, serialize it, embed it in an SVG, and the SVG engine will render the DOM for you, which you can then draw into the canvas. So that would be a solution. But then there are security issues. Cross-origin resource sharing gets in the way, and the security issues are really, really non-trivial.
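The DOM-to-SVG rendering trick mentioned here is the `<foreignObject>` technique: wrap serialized HTML in an SVG, turn it into a data URL, and (in a browser) draw it onto a canvas through an Image. A minimal sketch of the string-building part, with the browser-only steps left as comments; the function name is illustrative, not from the talk:

```javascript
// Wrap a serialized piece of DOM in an SVG <foreignObject> so that the SVG
// engine renders the HTML. The browser-only drawing steps are in comments.

function domToSvgDataUrl(html, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
    `</foreignObject></svg>`;
  return "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
}

// In a browser you would then do, roughly:
//   const img = new Image();
//   img.onload = () => canvas.getContext("2d").drawImage(img, 0, 0);
//   img.src = domToSvgDataUrl("<b>hello</b>", 200, 100);
// Cross-origin content inside that HTML runs into exactly the security
// wall the talk describes, which is why this never became a general solution.
```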
Browser vendors have toyed with the idea, and they said: we are not going to do it until we solve the security issues, which will be, I don't know when. Why? Because if you do, then you have to think about this: if you've got pieces of JavaScript coming from different domains, one of them will essentially see the screen that has been rendered by the other. And this is against the segregation principles that we have for security in JavaScript. So, this is a security nightmare, because once you have a shader running on the GPU, the shader essentially has access to the video memory. So, it's hard. Okay, I said, I don't give up. I want this UI done. Are there other options? What can we do? Well, we can give up the DOM. We say, okay, I cannot use the DOM and put it inside the 3D context; let's use something else. We could render the UI directly inside the WebGL canvas. Then I said, well, not so fast. Because the WebGL canvas is a 3D context, and usually we want a UI in 2D. Now, I'll talk about this for a while, because believe me, you really do want it. It's true that we are going VR, and everything will be 3D worlds, 3D objects flying everywhere. But think about our example: virtual trade shows. Two people meet each other. I want to give you my business card. Now, the business card. Am I going to assemble a business card out of 3D objects? Does it make any sense? In my opinion, a business card is a flat thing where something is written, something else is written, with a picture of something, a logo of something. So this is the thing that I want to put inside the 3D world. So if I have a list of something, a description of a product, or something like that, in the end I want it flat. So, yes, we could imagine ways to interact in 3D, but in the end, all the 2D things that we do on the web now should be brought inside the virtual world in some way, because we will want them, one way or another. So, well, the idea.
The idea is just to render the UI in a 2D canvas, and then use this 2D canvas as a texture on an object in a 3D canvas. What does it mean? Is there somebody who doesn't know what a texture is? So everybody knows what a texture is, wonderful. Essentially it means: I have my 2D canvas, I draw the GUI inside it, I have a 3D object, and I use this canvas as a kind of skin around the object to give it its color, so that in fact I have the UI projected onto the 3D object. Okay, I said render the UI. So now, I started out thinking I want the DOM, I want to use the technologies we use every day, but then I ended up saying, oh, I have to render the UI pixel by pixel in a canvas. Which framework am I going to use? Am I going to draw all the pixels with JavaScript functions one by one? I don't think I would finish any time soon. Okay, what I want to do is use a framework that we are used to, because think about it: we are going virtual reality, but very likely the product we build will still live on the normal web. So we'll still have to maintain the normal 2D UI. What do I want to do? Rewrite the whole code of my product? Or will I want to actually share code? And how much code can I share? Will the paradigm, the framework, be so different or almost the same? And my idea is to pick React. Why React? Okay, disclaimer: I love React a lot. I like a lot of things about React, and in general about those kinds of purely functional frameworks where the view is a pure function of a model. So even Cycle.js (thanks, André) and so on. It's easy to separate the, let's call it business logic, the application logic, from the view code when you have a framework like this. And most of all, these are naturally DOM-independent. The view is a pure function of your object model, of your state. It's a pure function that emits something. It can emit DOM or manipulate DOM, or it could emit or manipulate canvas data.
Which means if you go this way, you can have your software structured in a pretty solid way. Don't do anything crazy, and essentially you only have to reimplement the view, which for me is sort of okay. This is something that I can accept. Now, let's pick an actual sample application so that we are talking about something concrete. And the application is the ubiquitous to-do list because, well, everybody does a demo with a to-do list. It's very, very simplified because I just wanted to do a proof of concept, so it's not a traditional full to-do application. And for this implementation, as a proof of concept, I built the logic in Redux style. Which means there is an immutable state. Every time something happens, I get a new version of the state. And through the view I can emit events (actions, call them whatever you like) that make the system switch to the next state. And when the system switches state, it triggers a re-render of the view. This is the gist of Redux in a few words. And this way we'll be able to reuse the logic in the 3D UI. So we'll use the same logic in the 2D UI and in the 3D one. It will be just the same; we are not going to touch a single line of code there. So let's see how it looks. It looks something like... okay, this, where was it? Essentially I have a small function that builds the state, which has the next available ID for to-dos, the current text that I am writing, the list of the to-dos and the amount of scrolling (we'll see why later). I can build the initial state, I can build a to-do. And then I have a few functions that perform actions: they add the current to-do, toggle the state of a to-do. And every function essentially returns the result of the build-state function, so it gives me the new state. This is a very, very small piece of code. In the end I export all these functions. They are pure functions: if I invoke them, things happen. Very, very simple. And then what I can do is I create...
I am using the Redux vocabulary. So I am creating a dispatcher, which is a function that starts with an initial state. And every time I dispatch an action, I have this action with arguments and I invoke it and get the new state. And if it is different from the current state, I replace it and I call the state handler. The state handler will be the thing that triggers the re-render of the view. So the idea is: this file is the whole logic of the to-do application. This is the thing that I have written. It's my application logic. It doesn't change. And every time it does something, it can signal the view: hey, the state changed, show this to the user, do whatever magic you want. If it's React, it will call setState, which will trigger render. That's the way it works. Okay. So this is just the logic file. And with this logic file, we can have a DOM view, which, just judging from the size of the file, is very small. And in the end, it's the usual React code. So here, I have the to-do element and I emit a few divs. And there is an onClick to do the toggle on a button. So it's just a small piece of React code. And here I have the view with the DOM, which is what we expect. So I have one to-do, another to-do, another to-do. I can toggle them. And once I have toggled a few, I can remove them. So it works straightforwardly. That's it. Now, this is the implementation I did as a proof of concept just to have a UI working. And then I could say, okay, let's see what we can do on a canvas about this. And I said, what about a canvas drawing framework with React bindings? Because this is what I'm really after. In the end, what I need is something that, when I render, actually writes to a canvas. Am I going to write this thing myself, or am I going to reuse something that's out there? And well, here JavaScript fatigue is hitting us, because as usual in this ecosystem there are always too many frameworks, too much choice, and choice is good, but it's also hard.
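A minimal sketch of what that logic file and dispatcher might look like; the names (buildState, addTodo, createDispatcher and so on) are my reconstruction of the shape described in the talk, not its actual code:

```javascript
// Redux-style logic: an immutable state and pure functions that each
// return a new state built with buildState. All names are illustrative.

function buildState(nextId, currentText, todos, scroll) {
  return { nextId, currentText, todos, scroll };
}

const initialState = buildState(1, "", [], 0);

// Actions: pure functions from the old state (plus arguments) to a new state.
function setText(state, text) {
  return buildState(state.nextId, text, state.todos, state.scroll);
}

function addTodo(state) {
  const todo = { id: state.nextId, text: state.currentText, done: false };
  return buildState(state.nextId + 1, "", state.todos.concat([todo]), state.scroll);
}

function toggleTodo(state, id) {
  const todos = state.todos.map(t => (t.id === id ? { ...t, done: !t.done } : t));
  return buildState(state.nextId, state.currentText, todos, state.scroll);
}

// The dispatcher: holds the current state, applies a dispatched action,
// and notifies the state handler (which triggers the re-render) on change.
function createDispatcher(initial, stateHandler) {
  let state = initial;
  return function dispatch(action, ...args) {
    const next = action(state, ...args);
    if (next !== state) {
      state = next;
      stateHandler(state);
    }
    return state;
  };
}
```

Both the DOM view and the canvas view would be driven through the same state handler, which is what later keeps the two views in sync.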
But we must be honest: it's always better to have a choice than to start from scratch. If I had to start from scratch, this would have been very hard. The top framework choices that I looked at were these. One was React Canvas. There's a company that did exactly this, a UI on canvas. That would have been the perfect match. Unfortunately, they stopped supporting the library, of course, and it was tied to older versions of React. I looked at the code, and doing the porting and the maintenance was non-trivial. They explicitly said in the issue tracker: we are not going to do it, we don't have time, and it's non-trivial. And if it's non-trivial for those who wrote it, I said, okay, I don't have the time. There's Pixi.js, a wonderful library that handles the canvas, and it has React bindings. And there is Konva.js, which is another nice library. The thing that they have in common is that they have a real object model. What these libraries do is give you objects with which you can make trees of objects, like the DOM is a tree, and essentially when you manipulate the tree, when you mutate the tree, they update the canvas. So they are the perfect fit for a React binding, because what you do with React is manipulate their tree. It's a very layered approach, but it's just the same as the DOM one. And I say, pick your poison, in the sense that you must really make this choice wisely. Because you live with this poison, the library that you choose, for longer than you think. So what I'm trying to say is: when you make a choice like this, it has very big consequences. You are tying yourself to a library, and then it's very difficult to undo the choice. And if you pick the wrong one, for whatever reason, you will have to suffer with it for a while. In the end I picked Konva. Mostly, mostly, mostly because its React wrapper is very, very simple. Now Konva is this; it's a project, you see, it is maintained, it is nice.
I also talked to the maintainer. And this is a canvas, a showcase of things drawn on the canvas. And, importantly, there is this other project, react-konva, which is the React binding for Konva. And the nice thing about it is that it's essentially one single file. It's very, very small and very understandable. So it's something that I could understand in one evening, essentially. I said, at least let's start with something that I can manage. Then what I did was add css-layout to it. This is another tricky issue. If you're thinking canvas, you're thinking: I draw something on the screen. And usually when you think canvas, you think absolute positioning. But when you think UI, you think about something that has a logical structure, and you want to position things according to that logical structure; you don't want to do all the calculations by hand. You want a framework that actually does the layout of the UI for you. Now we have CSS on the DOM. We take this for granted. We might hate it (maybe I hate it), but in the end it gets the job done. Without it, layout would be a horrible mess. And css-layout is a project that Facebook did which reimplements the flexbox model in JavaScript. They use it in React Native and other things, and it's also compilable to the C language and other targets. But I just needed it because it was pure JavaScript. And what I did is I took the Konva object model, which didn't have a layout engine, and I added this thing to that tree, so that I could lay out the elements of the tree using CSS, essentially without having to write anything. I even did a pull request to the project for this. It could be hard to understand, so he said, oh, cool, I'll look at that. And then he didn't look at that. So, let's see what happens. And let's see what actually happened. And to show you what happened, I want to start from an older version of the code.
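The idea of running a flexbox-style layout pass over the node tree before drawing can be sketched like this. This is a toy illustration of what css-layout does, not its actual API (the real library takes nodes of the shape `{ style, children }` and fills in a `layout` field with top, left, width and height); this miniature only handles fixed sizes and a row or column direction:

```javascript
// Toy flexbox-like layout pass over a Konva-style node tree. Each node gets
// a layout field with absolute left/top plus its width/height, computed from
// the logical structure instead of by hand.

function computeToyLayout(node, left = 0, top = 0) {
  const style = node.style || {};
  node.layout = {
    left,
    top,
    width: style.width || 0,
    height: style.height || 0,
  };
  let cursor = 0; // running offset along the main axis
  for (const child of node.children || []) {
    if (style.flexDirection === "row") {
      computeToyLayout(child, left + cursor, top);
      cursor += (child.style && child.style.width) || 0;
    } else {
      computeToyLayout(child, left, top + cursor);
      cursor += (child.style && child.style.height) || 0;
    }
  }
  return node;
}

// A renderer can then walk the tree and position each Konva shape
// at layout.left / layout.top instead of computing coordinates by hand.
const tree = computeToyLayout({
  style: { width: 300, height: 100, flexDirection: "row" },
  children: [
    { style: { width: 100, height: 100 } },
    { style: { width: 200, height: 100 } },
  ],
});
```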
So at some point I wanted the thing to take some shape, and this is the same thing as before, but now here we also have a canvas. So when I add a to-do, you can see, okay, now it's starting to look funny. But what's really important is that I have one view here with the DOM, and this thing is a 2D canvas, and it is synchronized with the DOM, because the logic is running only once. There's only one instance of the dispatcher that holds the state, and I essentially have two views on this page. They are both synchronized to the same thing. Now, here it looks horrible, because the first thing I did was just implement the integration with CSS so that I could lay things out. Then I wanted to make them, let's say, beautiful, in the sense of: let's give them borders, let's make them more readable, and so on. And what I really, really ended up with in the end is this. So you have your objects here, your to-dos, and you can scroll them, and you see that they are synchronized. You can pick them from here and remove them from here, or pick them from (sorry, the button is here) and remove them from here, and you can even type here.
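The pattern behind the two synchronized views can be sketched as follows: one store, two subscribers, each rendering the same state to a different backend. This is a hedged sketch; a markup string and a list of draw commands stand in here for the real React DOM view and the react-konva canvas view:

```javascript
// Two views, one state: each view is a pure function of the state,
// rendered to a different backend by a subscriber on the same store.

function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); fn(state); },
    dispatch(action) {
      const next = action(state);
      if (next !== state) {
        state = next;
        listeners.forEach(fn => fn(state));
      }
    },
  };
}

// "DOM" view: state -> markup string (stands in for the React DOM renderer).
const renderDom = state => state.todos.map(t => `<div>${t}</div>`).join("");

// "Canvas" view: state -> draw commands (stands in for the Konva renderer).
const renderCanvas = state =>
  state.todos.map((t, i) => ({ op: "fillText", text: t, y: i * 20 }));

const store = createStore({ todos: [] });
let domOutput, canvasOutput;
store.subscribe(s => { domOutput = renderDom(s); });
store.subscribe(s => { canvasOutput = renderCanvas(s); });

store.dispatch(s => ({ todos: s.todos.concat(["write talk"]) }));
```

Because both subscribers see every state transition, the two views cannot drift apart: that is the whole trick behind the synchronized DOM and canvas demo.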
Now, the text box is not focused; there is no caret, sorry. I am intercepting the keyboard events in the canvas, and I had to implement the text editing logic myself. This is another problem: at this point you simply have no framework, you are doing everything on your own. It's sort of doable, manageable, but these are problems that need to be solved in some way. But anyway, the point is that it's here, it works: we have a DOM view and we have a canvas view, and they are in sync, which is what we wanted. Now, as usual, the devil is in the details, so there are a lot of issues with this thing. One is that css-layout is not enough. I started out optimistic, and I said, oh, I'll take this CSS JavaScript library and integrate it with Konva, and then I'll have all the layout for free. Well, yes, if flexbox is enough. Sorry, the problem is that Konva is very, very poor, in the sense that it has very low-level primitives. In particular, if I want something like this, which is a rectangle with borders and text, I have to superimpose them myself. So you end up with code like this (PanelRow is the culprit): a React component that essentially makes a group that contains a rectangle, which then contains another group that contains the children, and all this stuff simply to have something that with normal CSS would be one div with the correct border style. I didn't notice that Konva was so limited in this sense because I was in a hurry, and then I had a problem: to do something as simple as setting a border on something, I had to build this kind of component. And the funky thing is that the rectangle and the content need to be totally superimposed, one above the other, and the flexbox model is not going to do this. So I had a CSS integration that was only doing one model of CSS, and I also needed the others. So I ended up hacking the code to have multiple ways of doing layout, so that I could either position something in an absolute way, or relative to the parent, or inject the CSS engine into it so that it would do the layout. And the code was not clean anymore. Then propagating size information through the layout became cumbersome, because at some points I was using the css-layout implementation and at other points I was doing the layout myself, so the propagation of the values started becoming crazy, and mixing these layout modes became crazy. And then, css-layout does not always do what you think it should do (this is a problem with CSS itself). And then there was a problem with event handling. Konva has its own event handling engine: there are mouse events that hit things on the screen, so if you do a mouse-over on a rectangle, that object gets the event, and if you do a mouse-over on another rectangle, that one gets the events. But the filtering is a bit gross, so sometimes I had to do strange things to get it done. And then it does not handle keyboard events at all (at least the way I saw it), because it doesn't need to. The most important issue is that you don't have keyboard focus on a canvas; you just get everything. So I had to manually put an event handler on the page to get the keyboard events. So this thing that in principle had to be clean ended up being way, way dirty. And then, what I actually wanted: now we have solved these issues in a canvas, and we still need to show it in 3D, because the whole point was to have it in 3D. So let's build a 3D scene. And it turns out that you can actually build a 3D scene. I'll show you the code first. This time I picked Babylon.js as the engine. So I have this React component which, in the end, all it does is create a canvas on the screen, and I have a ref that takes this canvas, and when I have the canvas I can create the scene: instantiate the 3D engine and work on it, create a camera, create a light, a skybox, all those funky 3D things. And the result is that I can have this, and this is sort of what I wanted. So it is exactly this canvas. Notice one thing: if I hover over this button, it gets the highlight;
the other view doesn't get it. If I hover over this one, both get the highlight, because in this case it's physically the same canvas. I only have one 2D canvas, and I'm showing it inside this div and using it as a texture for an object here. So this is a 3D object in a 3D space (there's a satellite orbiting, so that you can be sure that it's 3D), and this is essentially the idea. As you saw, it's the same UI, the same code, everything; only it's been projected into 3D now. And then, once we did it, what happened is yet more issues. Mapping events started out being way, way trickier than what I was imagining. In principle it's simple; in practice you end up with this: when I do a mouse move, at each mouse move, a ray is cast in camera space. Because this is a 3D camera, a mouse move is in 2D; it points to a pixel, and this pixel is logically a direction in 3D space. So you need to create a ray that starts from the origin of the camera and goes through the pixel rendered at those coordinates. Now that you have that ray, you have to use it to do a 3D object intersection, to find out whether it hits something. When it hits here, it is hitting the 3D object that has the UI. And then you have to do something even more funky, because at that point you have a point in 3D space, but what you really need is to convert this point in 3D space into a point in 2D space, in the 2D coordinates of this surface. Luckily, the engine came to the rescue: you just need to call the correct methods of the engine, and what you end up with are the coordinates in texture space. In texture space (this is a texture), I get the pixel of the texture that got the event. Then what I need to do is take these two coordinates and go back to Konva. So I have to pierce my React abstraction over Konva, take the Konva object, and invoke a method on the Konva layer that asks Konva: hey, I've got a point in the canvas (because at that point it's in 2D space, it's in the canvas); please tell me if there is any shape under this point. And then Konva will tell me something like: oh, there is this rectangle, it's your object. And then what I do is tell Konva to make this object emit an event, and on the React side I listen to those events. Which means that the machinery is sort of cumbersome, but it can propagate events, and then it becomes okay, because in the end, in your code, you just do something like this: I have a rectangle, and I have these events, pointer-move-3D, pointer-pick-3D. These are synthetic events: they are the events generated by my code. I listen to them, and essentially I re-emit these ones. It looks a bit funky, but in the end it gets the job done. And then you also have the usual onMouseDown, onMouseUp, onMouseMove. These are the normal DOM events that I get when I am in this space: when I am here, the canvas is emitting DOM events, and using React I can catch them. But when I am in that other space, what I get are the synthetic events that I am emitting. In principle I could have synthesized real mouse events, but to do it properly I would have needed to fake the clientX/Y coordinates, the screenX/Y coordinates, the local X/Y; all those coordinates need to look real to be handled by a normal event handler. And I didn't feel like taking the time to generate synthetic events that looked real enough that an event handler would actually pick them up and do the right thing. So I decided to go this strange mixed way: my React component reacts to both the DOM events and the synthetic events. When I am here, it gets the real DOM events. When I am here, the DOM events go to the 3D canvas, which does the intersection with the 3D shape, finds the 2D coordinates on the 3D shape, invokes Konva, and so on and so on, and then I emit the synthetic event, which has the same semantics and the same local coordinates, so I
can just use it. So, as I was saying, mapping events is tricky. And then something crazy happened with Babylon events (3D engine events) and setState invocations in React; they were doing horrible things to each other. I don't know why, but essentially, if I did setState inside the event handler, somehow Babylon went crazy and re-invoked the event handler immediately, without even relinquishing control. So I was essentially blocking the JavaScript engine: the page was frozen and the same event was emitted over and over again. Which is something that makes you say: why? I don't know why; I would need to debug Babylon to know. So when you do things like this, expect stupid issues to pop up, because the technology is tricky. How did I solve it? Well, I solved it simply by doing totally asynchronous UI rendering, which is something you should do anyway. Essentially, what the state handler does is store the new state and request an animation frame. And in the requestAnimationFrame callback, which is totally asynchronous and has nothing to do with event handling (no event handler is running there), I compare the states, and if I really have a new state, then I do the React setState. So I somehow managed to decouple the Babylon event handler from the React setState; they were not fighting each other anymore, and this thing works. Why am I describing this to you? Just to say that things that look simple on the surface are not: the devil is always in the details. Okay, this is the solution that I picked. And did I say that mapping events was tricky? It was, seriously. Now it works. But what about VR? What about virtual reality? We started by saying: I want this in VR. I was toying with the idea of doing a VR demo of some kind, but it's okay, I'll let the other speaker, the one after me, do a good job at that, because I'm sure she will. What I did was just this: in the Babylon code, instead of that camera, I can create this camera, which is a WebVR camera, and, if everything goes well, we still have the same system as before, but now we have stereo rendering. So we are in VR, and it's the same thing as before; I mean, it's really the same, so it's handling things, events. So, future ideas: real 3D UIs. One could do a UI made of real 3D objects; sometimes that would be nice. After what I learned doing this experiment, what I would probably do is make a unified framework with React bindings, or Cycle.js bindings (I'm starting to like it), on Babylon 3D objects. And Babylon has a thing called Canvas2D. It's logically equivalent to Konva, so it's again a library that does 2D drawing, and it was meant exactly for UIs. It has a lot of small limitations, but the nice thing is that its event handling is totally integrated with Babylon, only a bit more comprehensive. So at this point I would expect the event handling to be natural, and I would just need to fix clipping; and the layout engine, I would need to add it. But the nice thing about Canvas2D is that I discovered it has a layout engine of its own, which does either absolute positioning or stacking. And then I would have a layout system with absolute, stacking and flex, and that would be pretty usable. And then there is the Canvas2D texture; this is a technical thing too long to explain. So the takeaway from this talk is that this proof of concept works. This is something that can be done: if one is persistent and overcomes all the issues, you can do it. It's not production-ready; that depends on what you really need. If your application is this simple, it's ready; if it's more complex, there's more work to do. But in the end, this code at this point works. And with current frameworks, it's harder than what I had imagined; I wouldn't have imagined all these silly complications. But with some effort it can maybe be beautiful from the developer's point of view, because all these issues that I hit can be fixed at the framework level. So once you abstract the event handling, the
clipping, the layout, and you find a nice API for yourself to do it, then you've got a framework that works uniformly on the DOM, in 3D, and in VR. So it's something that has value. If any of you will work on VR, do it like this; in my opinion it will work. So that's all, folks. That was it. Time for one question. Is React really a good abstraction for this problem, or is it cumbersome just because of the lack of libraries? Okay, for me the really good abstraction for this problem is the fact that it's purely functional. It could be React or another of the React-like frameworks, as long as they are DOM-independent, which essentially all of them are. The lack of libraries: well, as far as I know, there were these React bindings for canvas drawing libraries; I am not aware of any such library for the other frameworks. So in this sense, let's say the momentum of React, and the fact that it is so popular, makes it a good fit. Now, perhaps with a less cumbersome framework, doing the binding yourself would be less expensive. After this day of conference, I was seriously toying with the idea of exploring Cycle.js more, because maybe making a backend for Cycle.js is easier than making a backend for React. But this is a different issue. Excellent, thank you. Ladies and gentlemen, Massimiliano Mantione. Thanks.
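The event-mapping pipeline described in the talk (cast a ray from the camera through the mouse pixel, intersect it with the quad that carries the UI, then convert the 3D hit point into 2D texture coordinates) can be sketched engine-free like this. In the real code, Babylon.js picking methods do this work; all the names here are illustrative, and the quad is described by a corner plus two edge vectors:

```javascript
// Engine-free sketch of ray/quad intersection plus conversion of the 3D hit
// point into 2D texture (UV) coordinates, the two steps of the event mapping.

const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const along = (p, d, t) => ({ x: p.x + d.x * t, y: p.y + d.y * t, z: p.z + d.z * t });

// quad: { origin, u, v } where origin is one corner and u, v are the edges.
function pickUV(rayOrigin, rayDir, quad) {
  const normal = cross(quad.u, quad.v);
  const denom = dot(rayDir, normal);
  if (Math.abs(denom) < 1e-9) return null; // ray parallel to the quad plane
  const t = dot(sub(quad.origin, rayOrigin), normal) / denom;
  if (t < 0) return null; // the quad is behind the camera
  const hit = along(rayOrigin, rayDir, t); // 3D hit point on the plane
  const local = sub(hit, quad.origin);
  // Project the local hit point onto the quad's edges to get UV in [0, 1].
  const u = dot(local, quad.u) / dot(quad.u, quad.u);
  const v = dot(local, quad.v) / dot(quad.v, quad.v);
  if (u < 0 || u > 1 || v < 0 || v > 1) return null; // missed the quad
  return { u, v }; // multiply by texture width/height to get canvas pixels
}
```

The resulting UV pair, scaled by the texture size, is exactly the 2D canvas point that gets handed back to Konva for shape hit-testing.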