It's eleVR's one-year anniversary of being hosted by Y Combinator Research, and we wanted to commemorate the occasion by looking back on some of the projects we've been working on in the past year. If you don't know us, we're a non-profit research group looking at technologies like AR and VR, not for technology's sake or to sell a product, but in preparation for a future where those technologies become an invisible part of everyday life: when, instead of VR and AR being something we think or feel about, they are tools we think and feel with. We hope that by sharing our research, designers and technologists will be more conscious of the choices and affordances that can be made now, while the technology is still visible.

There have definitely been some themes to our past year of work, so here's number one: technologies for thinking. We don't think VR's power is in simulating reality; we're interested in using it to create wholly new kinds of experiences that give us new abilities of reasoning, communication, self-expression, and self-reflection that last through the rest of our lives. What the headset shows us isn't reality, but the experience is real, and it changes how we feel and how we think. For a basic example, a virtual object, once seen, continues to linger. Virtual objects can be referenced, shared, and pointed at, becoming a real part of our common experience. We started to see this effect in the prototype AR framework that Andrea made a couple of years ago, when we'd pass around the headset and reference the placement of objects to each other. In the past year, we've put more work into that headset, and we've also started working with the HoloLens to get this effect at a larger scale. Virtual experiences linger in our bodies too: it's not uncommon for those new to VR to have to check themselves from trying to walk through people and objects after they come out of the headset for the first time.
We'd like to understand these extraordinary powers and use them for good. We're inspired by on-paper visualizations such as graphs and Venn diagrams, icons and abstract art, as well as computer models, interactive diagrams, and games that give us new ways to think even when we're away from the computer or the page. We've created dozens of virtual Venn diagram variations to get a feel for how different laws of collision in VR, ones that allow overlap, might give us a different way of thinking about containers and categories. According to Lakoff and Johnson, our very concept of categories comes from places like kitchens, where things are sorted into separate drawers and cabinets. So we created a Venn kitchen that obeys different rules, giving a hands-on experience of overlapping categories that we hope might inspire the player to think more complexly about their categorizations of things and people. We used VR to create interactive museum exhibits that are unconstrained by physical laws, and as a side effect found that this thought process helped us design compelling real-life exhibits too.

Our work with tools for thought overlaps with our second theme: embodied knowledge. The thing about VR and AR isn't just that you see 3D graphics in a headset, but that the technology tracks your body's real movement. This lets us take advantage of a huge set of human skills, things that often get called intuition, that technology has previously ignored but that we can now build rational models around and design for. Take something as simple as my ability to roughly know how I'm moving my hand through the air without looking at it. Evelyn's work on networked gestures allows us to send these hand motions into shared spaces online, where we can add gestures to our communication as well as push virtual objects together and know we're pushing them: not because we see a hand-push graphic, but because we're doing the pushing.
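To make the Venn-kitchen idea concrete, here is a minimal sketch of category regions that, unlike physical drawers, are allowed to overlap, so a single item can belong to several categories at once. This is not eleVR's actual implementation; the `Category` and `memberships` names are hypothetical and purely illustrative:

```python
class Category:
    """A circular region of space. Unlike a physical drawer, regions may overlap."""

    def __init__(self, name, center, radius):
        self.name = name
        self.center = center
        self.radius = radius

    def contains(self, point):
        # Point is inside the region if within radius of its center.
        dx = point[0] - self.center[0]
        dy = point[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius * self.radius


def memberships(point, categories):
    """Return every category the point falls in -- possibly several at once."""
    return [c.name for c in categories if c.contains(point)]


fruit = Category("fruit", (0.0, 0.0), 1.0)
dessert = Category("dessert", (1.0, 0.0), 1.0)

# An item placed in the overlap belongs to both categories simultaneously.
print(memberships((0.5, 0.0), [fruit, dessert]))  # ['fruit', 'dessert']
```

A physical kitchen forces `memberships` to return at most one drawer; letting it return several is the rule change the Venn kitchen makes tangible.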
We wanted to create a prototype that helps people understand the abstraction of a graph, because graph literacy is one of the biggest predictors of success in grade-school physics. With something as simple as a graph of your hand's Y position over time, you can see the abstraction and feel its relationship to your body's motion. We don't know how effective it is yet, but we're creating a model that can be tested using tools accessible to any education research lab.

Even our head's motion through space comes with a lot of knowledge and expectations about how our view of space should behave. With this in mind, we collaborated with mathematician Henry Segerman and physicist Sabetta Matsumoto to put two different types of hyperbolic space into virtual reality, allowing you to feel the way hyperbolic space behaves when you move through it, rather than merely seeing it. We also wrote a couple of papers this year on the math and technology behind the software, and this work was featured in Nature and a bunch of other places, so that's cool.

Also pretty cool is our ability to know where things are in the space around us, to grab and arrange objects using our hands, and to group objects into collections that are organized non-linearly. So we've been prototyping a programming language that lets you stick programming elements together into chunks of code that can be arranged into large programs with a spatial texture that aids readability and understandability, with changes in scale that allow larger code bits to contain their own programming, and maybe new and more expressive ways to think about programming altogether.

All right, theme number three: the Office of the Future. In the past year we spent a lot more time working with VR and AR, rather than just on VR and AR.
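The hand-graph prototype above boils down to sampling one tracked coordinate over time and plotting it. Here's a rough self-contained sketch of that idea, with a sine wave standing in for real tracker data; all function names are hypothetical, and this is an illustration of the concept rather than eleVR's actual code:

```python
import math


def sample_hand_height(steps=20, dt=0.1):
    """Record (t, y) samples of a simulated hand bobbing up and down.

    A 0.5 Hz sine wave around 1 m stands in for real hand-tracker data.
    """
    samples = []
    for i in range(steps + 1):
        t = i * dt
        y = 1.0 + 0.3 * math.sin(2 * math.pi * 0.5 * t)
        samples.append((round(t, 2), round(y, 3)))
    return samples


def ascii_graph(samples, width=20):
    """Render the y(t) series as a crude text graph: the same abstraction
    the VR prototype lets you feel with your own arm."""
    ys = [y for _, y in samples]
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0  # avoid dividing by zero for a flat series
    lines = []
    for t, y in samples:
        col = int((y - lo) / span * (width - 1))
        lines.append(f"{t:4.1f}s |" + " " * col + "*")
    return "\n".join(lines)


print(ascii_graph(sample_hand_height()))
```

Moving your hand while watching the column of asterisks drift is, in miniature, the see-it-and-feel-it link the VR version makes immediate.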
Through Andrea's asymmetric multiplayer game designs, we experienced the effectiveness of elements like gravity, scale, and placement in communicating information, as well as some of the flaws in our intuitions for space. From there, we delved into social VR and started to have our weekly group meetings in various VR locations, and we found that with any work we do in VR, we always end up on the floor at some point. So we embraced the fundamental truth that sitting at a desk or standing in one position is not what human bodies were made for. M took a deep dive into figuring out how bodies want to work, including completing an entire yoga teacher training course, and, undaunted by the ergonomic failures of aesthetic floor-based designs such as beanbags and furry rugs, they forged ahead and ended up with a design based on restorative yoga techniques using foam floor mats, bolsters, and blocks. And let me tell you, it is such a great way to work.

The main office is our central prototype and grounding presence. It's copied both as a networked virtual space, using a 3D model of the office that Elijah made, and in different physical iterations. But the technology also lets us branch our offices out into the wider physical and virtual worlds. We've been bringing our art studios into VR to share our spaces at a distance. Evelyn shared her own studio and virtual works with us during her residency at the Banff Centre, and also the studios of other artists, to see the extent to which we could get a sense of how different studio spaces feel in VR.

Which brings us to theme number four: art-based research. We've talked about the goal of finding new ways of thinking and methods of understanding, but what research practice gets you there? If we were merely looking for answers, the scientific method might be a good tool, but you can't use it to find a hypothesis in the first place. And that's why, in the past year, we've been refining our practice of art-based research.
It's the idea that artistic explorations push the boundaries of technology and human expression in ways that help you get to truly new ideas and new questions that aren't along the standard path. With the help of Evelyn, who joined us last year, we're borrowing methods from the art world and using them in self-aware ways to further our research. M made 50 virtual bed sculptures for the piece Making the Bed, and in the process we learned about new sorts of spatial organization and new uses of teleportation and scale. We've better understood the way AR objects stick in your brain, and how groups perceive alternate realities together, through their work Would You Like to See an Invisible Sculpture, shown unsolicited at SF MoMA. Tossing and Turning is a work that combines 3D elements from a variety of different VR technologies into a new context. Oh, and also in the past year, M completed a project to make a spherical video every day for an entire year, and we've certainly learned a lot about VR video's expressive and editing capabilities from that.

Evelyn made self-expressive works following the themes of weight, texture, and scale that showed us how previous technology ignored these elements and how future technology should keep them in mind. Her landscape interventions challenged my assumptions about where AR is done and the speed and scale at which it can be used, and her AR still lifes playfully toy with our reality-based expectations of the behavior of virtual objects. She has also explored AR and VR as tools for artistic thought, using both as sketchbook tools for reconceptualizing and reframing mental imagery.

Our artist selves admire the surrealness of the conflict between the virtual and the real, and then our researcher selves ask why we have the expectations we do, how we can use them, what we would have to change in our sensibilities in order to change those expectations, and what else these new ways of thinking, seeing, and feeling might lead to.
Looking forward to the next year, we hope to better understand the body's role in cognition and how we can design for it. We hope to interface with more diverse fields. We hope to find new, powerful representations of thoughts, feelings, and ideas. And of course, we hope for enough funding to keep doing it all, so if you have a big pile of research funding looking for a home, we could use it. A big thank you to everyone at HARC and YCR, to our funders, to Alan Kay and Sam Altman, and especially to M, Evelyn, and Andrea. You're the best team ever. Here's to another year.