Hello, everyone. Thank you for joining us this evening for our NCAR Explorer Series lecture, "How Emerging Technologies Can Enable Us to Create an Inclusive Future," with Dr. Nihanth Cherukuru. My name is Dr. Lorena Medina Luna, and I am an education designer and lead organizer for the NCAR Explorer Series. NCAR, the National Center for Atmospheric Research, is a world-leading organization dedicated to understanding Earth system science, including our atmosphere, weather, climate, the sun, and the importance of all these systems to our society. I'm really glad to be with you all today.

For this lecture, we will take questions at the end, but please feel free to submit any questions you have during the talk using the Slido platform. If you scroll down the webpage, you can see the Slido window just below the livestream video of this event. If you haven't already, go ahead and click on the green "join event" button. Then you can ask questions on the Q&A tab and answer the poll question on the Polls tab, both of which are found on the blue bar across the top. And be sure to join Slido to add your thoughts to our word cloud question, "What do you think of when you hear emerging technologies?", because we're going to get to that soon. This lecture is being recorded and will be available on the NCAR Explorer Series website.

Today we have NCAR scientist Dr. Nihanth Cherukuru from NCAR's Computational and Information Systems Laboratory. Dr. Cherukuru is a project scientist and head of the Visualization Services and Research Group at CISL, the Computational and Information Systems Laboratory at NCAR. As an interdisciplinary applied researcher, his research focuses on the application of emerging technologies in the design of inclusive experiences to communicate scientific findings to domain experts, policymakers, and the general public. He has designed and developed multiple visualization interactives, which have been featured at the USA Science and Engineering Festival in Washington, DC, the White House Frontiers Conference, and on Capitol Hill. Dr. Cherukuru received his PhD in mechanical engineering from Arizona State University, Tempe, specializing in Doppler lidars and XR data visualizations. Nihanth, I invite you to turn on your camera and say hello to our guests today.

Thanks, Lorena. Hi, everyone.

Now, since we've had some time for guests to fill out our word cloud, Paul or Brett, would you be able to share the Slido word cloud for us, please? Thank you. So, what do you think of when you hear "emerging technologies"? Answers include AI, machine learning, new technologies in the experimental phase, cloud applications, innovation, virtual reality, meta, and the most recent AI technologies. You're definitely welcome to keep adding to the word cloud. With that, Nihanth, I'll pass it over to you, and I'll come back on at the end to help with the questions. I look forward to hearing your talk. Thank you all.

Thank you, Lorena. First of all, many thanks to the organizers of the NCAR Explorer Series, especially Lorena, Aliyah, and Dan, as well as Brett and Paul; I know you folks are working in the background making this happen, and your work is much appreciated. And thank you all for filling out the word cloud. I especially liked the catch-all answer about "everything about anything that's new and emerging." I must say, that's the most accurate answer.
And again, almost everything you mentioned is basically correct. Going into the talk: emerging technologies, if you look at the definition, are new and fast-growing technologies. Some examples are augmented reality, which we are beginning to see more and more these days, although its practical applications are still a work in progress; autonomous vehicles; the internet of things; and all of these technologies grouped together. The focus of this particular talk is how these emerging technologies have helped us discover some fascinating applications in the field of data visualization, as well as their applications to accessibility, and how all these things come together.

To begin with, let's look at data visualizations, because that's where I started. A little about my background: I am an interdisciplinary applied researcher, and when I started this work I had no clue I would end up working on all these different technologies. My first introduction to interdisciplinary research came during my graduate work, when I worked in the environmental remote sensing group at Arizona State University. In that group my work focused on Doppler lidars and some of their applications using computer vision, as well as data visualizations.

A quick introduction to lidars: a lidar is the machine you see in the bottom-right image. It shoots laser pulses into the atmosphere and looks at the light reflected from dust and other aerosols. From those reflections it can paint a picture of its surroundings, and we can measure things like wind speed (a back-of-the-envelope sketch of the Doppler relation involved follows this passage). This is fascinating because prior to Doppler lidars we only had point measurements and sparse measurements; with these instruments you can paint a much richer picture. That's probably one of the reasons that got me into data visualizations too, because now we have access to data that can be visualized in high resolution. Another thing that got me interested in this work is the field projects we get to do. The one in the top right is us using the lidar to make measurements at a wind farm, and the one at the bottom is a study of how dust disperses when a helicopter lands in a desert.

Now, the first and main emerging technology I'll talk about is augmented reality, and my introduction to it happened through one of these field projects. The photograph in the top-right corner is of Meteor Crater in Arizona, an impact crater formed around 50,000 years ago when an iron meteorite struck the area. Atmospheric scientists were interested in studying the wind patterns inside and around the crater because they have many similarities with downslope windstorms, the windstorms that happen near mountainous regions, such as the ones people in the Boulder area might be familiar with. In that experiment, my job was to take care of the lidars.
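[As a rough illustration of the Doppler principle described above, not the speaker's actual processing code, here is a minimal Python sketch; the wavelength and wind speed are illustrative stand-in values.]

```python
# Back-of-the-envelope Doppler lidar calculation (illustrative values only).
# A Doppler lidar infers the radial wind speed from the frequency shift of
# light scattered back by aerosols moving with the wind:
#     delta_f = 2 * v_radial / wavelength

wavelength = 1.5e-6   # m; a typical eye-safe coherent Doppler lidar wavelength
v_radial = 10.0       # m/s; assumed wind component along the laser beam

doppler_shift = 2 * v_radial / wavelength  # Hz
print(f"Doppler shift: {doppler_shift / 1e6:.1f} MHz")  # ~13.3 MHz
```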
The good thing about lidars is that we were able to actually observe the phenomenon as it happened, as you can see in the image in the lower-right corner: a traditional data visualization showing this rich data set in 2D. One thing that really bugged me was that this is a physical phenomenon happening around the crater; you know it's happening by looking at the computer screen, but you can't see it in person, because wind is invisible. While I was exploring how to bring this into the real world, I was introduced to augmented reality.

For folks not familiar with it, AR is an environment that allows you to place virtual objects in your real world. Take the image on the left: that's a screenshot from a very popular game that came out in 2016 called Pokémon Go, in which an animated character is overlaid on the camera feed of the real world. That's one example of augmented reality. The way I applied this to data visualization was: imagine that instead of that animated character, you show the actual measurements you are taking on the spot. That's the video on the right, an iPad-based augmented reality application we developed in which the measurements from the lidar, instead of being viewed on a computer screen, are displayed at the location where they are happening, making the data more intuitive and putting things in perspective.

Now I'll go a little into the implementation. At its core, you can think of any augmented reality application as having three layers; for simplicity, I'm only considering phone- and iPad-based, video see-through AR. The first layer is the live camera feed. The goal is to place a virtual object on top of it, such as the words "sensor data," and for the augmented reality to work, as the phone moves in the real world we need to adjust the virtual object so that it gives the illusion of being fixed in the real world. We do that by taking input from the phone's sensors. To make that adjustment we need two pieces of information: first, how the phone is moving in three dimensions (front-back, left-right, up-down), and second, the orientation, that is, how the phone is tilted and which way the camera is pointed. We can get this information from the inertial measurement unit, a suite of sensors: an accelerometer, which measures acceleration; a gyroscope, which tells how the phone is tilting; and a magnetometer, which is essentially a compass. We take the information from these sensors together and use it to adjust the virtual object, giving the impression that it is fixed in space.
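[A minimal sketch of the sensor-fusion idea just described, assuming a simple one-axis (yaw) case: fast-but-drifting gyroscope readings are blended with slow-but-absolute magnetometer headings (a complementary filter), and the fused heading is used to keep a virtual object pinned at a fixed compass bearing. All values, the field of view, and the screen size are illustrative, not from the speaker's app.]

```python
import math

OBJECT_BEARING = math.radians(90.0)  # assume the virtual object sits due east
FOV = math.radians(60.0)             # assumed horizontal camera field of view
SCREEN_WIDTH = 1024                  # pixels

def fuse(heading, gyro_rate, mag_heading, dt, alpha=0.98):
    """Blend the integrated gyro rate with the magnetometer heading.
    (Angle wraparound is ignored to keep the sketch short.)"""
    integrated = heading + gyro_rate * dt
    return alpha * integrated + (1 - alpha) * mag_heading

def screen_x(heading):
    """Project the object's fixed bearing into camera-image coordinates."""
    offset = OBJECT_BEARING - heading  # angle of object from screen centre
    return SCREEN_WIDTH / 2 + (offset / FOV) * SCREEN_WIDTH

heading = math.radians(80.0)  # initial estimate
for gyro_rate, mag in [(0.1, math.radians(81)), (0.1, math.radians(82))]:
    heading = fuse(heading, gyro_rate, mag, dt=0.02)
    print(f"draw virtual object at x = {screen_x(heading):.0f}px")
```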
This is a very simple implementation of AR; it was implemented around eight years ago, before we had today's fancier technology, and there's just one problem with it. Sensors like GPS are only accurate to about 10 meters, which works if you are driving and just need to find where you are on a map, but for applications like augmented reality you want the virtual object to be rock steady. That's the disconnect we used to get when implementing it with this suite of sensors: it would work, but there would be drift in the inertial measurement unit that you need to correct for. To address that, there is a different approach to AR that uses images.

How do we use images to detect where the phone is in space? Imagine placing a printed image on a table and looking at it from different angles. If you know what the original photograph looks like, then the way the object appears depends on where you are located around it: keep the sheet of paper very close and it looks really big, move it farther away and it appears small, and from other positions you see it from different perspectives. So if we know exactly what the original picture is, and we detect what the camera actually sees, we can back-calculate where the camera, in this case the phone, is located (a minimal sketch of this back-calculation appears at the end of this passage). Putting these pieces together gives us image-based, or marker-based, augmented reality. It's called a marker because, in the example on the right, the page, that image, acts as an anchor for the virtual object, and by continuously detecting and tracking the image we are able to overlay content on top of it and create an AR experience.

This was actually my main project during my summer internship at NCAR, which was also my introduction to NCAR and to the work I'm doing there currently. The example you see here is an app called MeteoAR, an AR application we use for education and outreach. We have a set of pages we call science sheets, with information about different science topics related to the work NCAR does along with a marker image, and when users view a page through our application on their phone, an animation or a 3D object corresponding to the data set pops up on top of it. This was a neat way to get people excited about what we are doing, keep them engaged, and give them a more interactive view of what we do at NCAR.

Those were the visual aspects of AR, and how we started exploring it and noticing the fun things happening with it. However, AR is really a smaller piece of a much bigger concept called spatial computing, and that's what we'll get into next.
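[A minimal sketch of back-calculating the phone's pose from a known marker, assuming OpenCV is available. The marker size, detected corner pixels, and camera intrinsics are all hypothetical placeholder values; a real AR library does the corner detection for you.]

```python
import numpy as np
import cv2

MARKER_SIZE = 0.20  # metres; assumed edge length of the printed marker

# 3D corners of the marker in its own coordinate frame (lying on the z = 0 plane)
object_pts = np.array([[0, 0, 0],
                       [MARKER_SIZE, 0, 0],
                       [MARKER_SIZE, MARKER_SIZE, 0],
                       [0, MARKER_SIZE, 0]], dtype=np.float64)

# Where those corners were detected in the camera image (placeholder pixels)
image_pts = np.array([[310, 220], [420, 225],
                      [415, 330], [305, 325]], dtype=np.float64)

# Simple pinhole camera model (focal length and principal point assumed)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    # tvec places the marker in camera coordinates; inverting the transform
    # gives the camera (phone) position relative to the marker, which is the
    # "back calculation" described in the talk.
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = -R.T @ tvec
    print("phone position relative to marker (m):", cam_pos.ravel())
```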
Interestingly, spatial computing is a term that was coined by Simon Greenwold in his master's thesis in 2003, where it was defined as human interaction with a machine in which the machine retains and manipulates references to real-world objects and spaces. Let's translate that. In traditional computing you have a computer, a device holding data and logic, and you interact with it through your monitor, which is the primary display device, plus your keyboard and mouse. With spatial computing you add spatial awareness to that mix, so now the computer knows what is in its surroundings. With this capability, instead of being confined to the two-dimensional screen of traditional computing, you can expand into the room and surroundings around you. The desk in the corner, the table, the bed, the walls, essentially the physical surroundings, become part of the interface.

This has interesting applications in the accessible-technology world. As part of our education and outreach work, we maintain a visitor center at the Mesa Lab, a small-scale museum we use to educate people about weather, climate, impacts, basically the science that happens at NCAR. Imagine a space like that. With spatial computing we can create an interaction in which the entire building becomes part of the interface. We can do cool things like making the posters and the building itself talk to a visitor and give additional instructions, which could have fascinating applications for people who are blind or vision impaired. To be honest, I was at first blissfully unaware of the accessibility implications of all this, but I was able to witness the inequities caused by lack of access, primarily through the experience of my wife, who is blind. So we did the next thing possible: we put together a small prototype to see whether it would work, and it seemed to have a lot of potential. One thing led to another, and we began collaborating with another group at CISL and with the Smithsonian's National Museum of African American History and Culture. The goal of this project was to create one such augmented reality application, whose details I'll get into momentarily.

So, to create an experience like the one I showed earlier, what are the building blocks? The first step is to create a virtual copy of the real world; in this case, imagine a museum or an exhibit space, and the first step is to create an exact virtual copy of it. Once we have that, we need to do something called localization. Localization is a term from robotics: it refers to the process a robot uses to figure out where it is in a space. If you give a map to a robot and ask it to determine where it is on that map, it needs to look at its surroundings, compare them with the map, and figure out its position. In our case, the "robot" doing the localization is the phone that people are using.
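[A toy illustration of the localization idea just described, under heavily simplified assumptions: the phone recognizes a known landmark, measures its own offset from it, and looks the landmark up in a stored facility map to place itself in building coordinates. Landmark names and coordinates are invented.]

```python
# Stored facility map: (x, y) positions of known landmarks, in metres,
# in the building's coordinate frame (all values hypothetical).
FACILITY_MAP = {
    "sun_mural":      (12.0, 3.5),
    "main_staircase": (20.0, 8.0),
}

def localize(seen_landmark, offset_from_landmark):
    """Place the phone in building coordinates from a recognized landmark
    and the phone's measured offset relative to that landmark."""
    lx, ly = FACILITY_MAP[seen_landmark]
    dx, dy = offset_from_landmark
    return (lx + dx, ly + dy)

print(localize("sun_mural", (1.5, -0.5)))  # -> (13.5, 3.0)
```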
Once we have localization handy, we can go to the next step, augmentation, where we add virtual content to the real world, and we can build other applications such as navigation: AR can serve as an indoor navigation device because, compared with some existing technologies, it can reach very high resolution. Let's go through each of these processes and how we implemented them.

To create the virtual copy, we really need two pieces of technology. The first is image detection and tracking, and the second is something called VIO. Image detection and tracking is similar to what I showed earlier with the MeteoAR app and its science pages. Going a little into the details of how it works: you start with a set of images and extract certain features from them. These features can be changes in texture, angles and corners, or places with contrast differences, unique features that are represented as yellow dots, so each image is abstracted into a bunch of yellow dots (see the short sketch after this paragraph). Once we create an image library, we can build an AR app in which the image detection and tracking algorithms make the phone search for those yellow dots in the space: it computes the same kind of dots in whatever it sees, detects where the known images are, and from there it's a matter of tracking them. The good thing is that over the past couple of years a lot of libraries have become available that do this for you, which simplifies the process, so you don't have to write the computer vision yourself; the logos at the top are some of the libraries available for this.

That's how we do it for MeteoAR with the science pages. Expanding on that, the images need not be something movable; we could make the app detect something more permanent, something fixed in space, like this wall display about I.M. Pei and Walter Roberts in the museum. What that gives us is a reference to the real world that the phone knows. Of course, image detection has its limitations, similar to MeteoAR: the tracking is always relative to the image. As long as the image is in view, the phone can determine where it is relative to the image, but once the image goes out of view, it needs to do something else. This technique alone does not allow the phone to determine where it is in space.
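[A minimal sketch of the "yellow dots" idea, assuming OpenCV. ORB is one freely available feature detector among several the libraries mentioned above could be using internally; the file name is a placeholder.]

```python
import cv2

# Extract distinctive feature points (corners, texture changes) from a
# reference image so they can later be matched against the camera feed.
img = cv2.imread("science_sheet.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"{len(keypoints)} features extracted")

# Drawing the keypoints reproduces the familiar dot overlay from the slides.
overlay = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 255))
cv2.imwrite("features.png", overlay)
```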
The second technology is visual inertial odometry. VIO is relatively new; it became popular and widely available within roughly the last five years. It uses a hybrid approach: something similar to image detection and tracking but, like the first approach with the gyroscopes and the IMU, it also uses the IMU data along with the camera data to track the device pose. The way it does that is again with features; let's call them the yellow dots. A phone running VIO generates these yellow dots across the environment, and by tracking the dots between frames it can determine how it is moving relative to them (see the sketch at the end of this passage), which is great. In the images here, the first is a photo of my apartment (thanks, COVID; we had to work from home, so you'll see a lot of pictures of my apartment). That photograph is what we would see, and the image at the bottom is what the phone sees, what I call the matrix view, with all these yellow dots corresponding to different features. The phone can use these to determine how it is moving.

There are a couple of limitations to this approach, though. First, the tracking is always relative to the initial position. With this approach alone, the phone doesn't know what it is seeing: in the image on the right, where I'm scanning the couch with the phone, the phone doesn't know it's a couch in the first place. It just knows there are a bunch of points and how it is moving relative to its initial position, not where it is in three-dimensional space. The other limitation is data volume. If you used this alone, the phone would be generating yellow dots everywhere. Expand that to a space like NCAR, then take it a step further to a place like the Smithsonian's National Museum of African American History and Culture, where the spaces are huge: staying in just one exhibit would fill up your phone's memory. So we'll see how to use these two techniques together; by combining the two pieces, we can generate a map of the real world.

That's the first piece of the puzzle I mentioned earlier, and this video shows an example of us mapping and scanning to create a scaled virtual map of the visitor center at NCAR. You can see all the yellow dots, as well as the images we trained the phone to detect in its surroundings while keeping track of where those images are located. For folks in the audience who have been to NCAR and its museum, you might recognize some of the features, like the main staircase and the big mural of the sun on the wall.

So that's the first part, creating the virtual copy of the world. The second step is localization, and this is where we bring those two pieces of technology together. The first step is image detection and tracking: we create a reference library of all the images present in the museum and run detection and tracking, which tells the phone which image has been detected and tracks it as needed. Because we have already created a facility map, that map contains not just which image but also where that image is located. That gives the phone a frame of reference: it knows where it is in the building. By combining the location information with the image information, we know where you are located, and that's where VIO comes in.
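[A sketch of the frame-to-frame tracking half of VIO, assuming OpenCV: follow feature points between consecutive camera frames and estimate how the camera moved relative to them. File names are placeholders, and a real VIO pipeline also fuses IMU data, which is omitted here.]

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick trackable corners in the first frame (the "yellow dots").
pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                               qualityLevel=0.01, minDistance=7)

# Follow them into the next frame with pyramidal Lucas-Kanade optical flow.
pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None)

good0 = pts0[status.ravel() == 1].reshape(-1, 2)
good1 = pts1[status.ravel() == 1].reshape(-1, 2)

# The median image-space shift is a crude proxy for camera motion. Note it is
# only ever *relative* to the previous frame, which is exactly the drift
# limitation discussed in the talk.
shift = np.median(good1 - good0, axis=0)
print(f"apparent motion between frames: {shift} px")
```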
Now, instead of storing and creating a map continuously, we use VIO just to track between the images. It runs in the background, and the two systems hand off control to one another depending on what the phone is doing. If an image is in view, the phone uses the image to confirm that it is where it thinks it is; whenever no image is in view, it uses VIO to figure out how it is moving relative to its previous state. That is, in essence, the localization. And of course, now that we have a map and a phone that is localized in the world, we can combine the two and build a navigation application: we can add pathfinding algorithms to the facility map and get navigation capabilities (a toy pathfinding sketch follows this passage).

This is the two pieces working together. Here is the spot where it tries to detect and localize itself; what you're seeing is the phone creating the yellow dots. In this run I'm saving the entire map just to debug and understand what the phone is doing. The white line shows where the phone thinks it is going in space, and the inset image on the right shows what the user would be seeing. It goes on for a while, the phone keeps detecting the images, and if no correction is needed it does nothing; then I climb up the stairs and come back to my starting position. At this stage we have those two pieces implemented. It is a work in progress; the next steps are the augmentation and navigation components that have to come together, and of course we have a long way to go, but the core part of it is implemented.
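[A toy sketch of the navigation step: once the phone is localized on the facility map, an ordinary pathfinding algorithm (here, A* over a small graph of hallway waypoints) produces a route. Node names, coordinates, and connections are invented for illustration.]

```python
from heapq import heappush, heappop

NODES = {"entrance": (0, 0), "staircase": (10, 0),
         "sun_mural": (10, 8), "exhibit_a": (18, 8)}
EDGES = {"entrance": ["staircase"],
         "staircase": ["entrance", "sun_mural"],
         "sun_mural": ["staircase", "exhibit_a"],
         "exhibit_a": ["sun_mural"]}

def dist(a, b):
    """Straight-line distance between two waypoints (also the A* heuristic)."""
    (x1, y1), (x2, y2) = NODES[a], NODES[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def astar(start, goal):
    frontier = [(0.0, start, [start])]   # (estimated cost, node, path so far)
    best = {start: 0.0}                  # cheapest known cost to each node
    while frontier:
        _, node, path = heappop(frontier)
        if node == goal:
            return path
        for nxt in EDGES[node]:
            g = best[node] + dist(node, nxt)
            if g < best.get(nxt, float("inf")):
                best[nxt] = g
                heappush(frontier, (g + dist(nxt, goal), nxt, path + [nxt]))
    return None

print(astar("entrance", "exhibit_a"))
# -> ['entrance', 'staircase', 'sun_mural', 'exhibit_a']
```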
Now, this work has potential other applications too. The map you've seen so far is a much simpler version of a map: it contains only a few images plus the floor plan. But we could add auxiliary sensor data to it. Imagine adding data from real-world sensors that direct people toward places that are less crowded; or, because we have this three-dimensional map of the world, we could run computer simulations, say of a gas leak, and use that to redirect people to safer places. These are some of the ideas our collaborators have brought up, including Raven (hi, Raven, if you're in the audience) and some of the other folks at NCAR.

We can extend it further. Take a map of this world and a phone that knows where it is in that three-dimensional space: we can attach digital repositories to it. In any museum, only a small fraction of the collection is on the floor; there is a large digital footprint, whether articles, scanned images, or scanned information related to different objects. We could use this application to anchor that information in place, bringing the two together. For instance, if someone walks near an exhibit related to climate, the phone can detect this and bring in content that is not present on the floor (a toy sketch of this proximity idea follows at the end of this passage). And of course we can also have real-time updates from sensors. This is really heading toward a concept called a digital twin: what we built with the virtual model is essentially a zero-level digital twin, with only the space mapped out, but if you start adding the building's systems and the ability to simulate the environment inside it, you get real insight into what's happening in the building and how we can use that information to better serve the people around. That's the spatial computing part.

To give you a bit of a roadmap: we've seen science and visualization together with AR, and how those pieces come together to inform data visualizations, and we've seen how spatial computing could be used as a navigation and wayfinding device. This brings us to the third component, the accessibility of data visualizations. Here you see an image of a map, a dashboard of COVID cases and other public health information. Over the last two years we have seen many examples where dashboards like this were the primary means by which the impact of COVID-19, a real-time pulse of what was happening, was distributed to people. However, a recent study in 2021, and in fact several other studies as well, found that this predominant reliance on visual encoding has created accessibility barriers for people who are blind or vision impaired. So we clearly have an issue with data visualizations.
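[A toy sketch of anchoring a digital repository in space, as mentioned above: each exhibit has a position on the facility map and linked archival content, and when the localized phone comes within range, the extra material surfaces. All names, positions, and content are invented.]

```python
EXHIBITS = {
    "climate_wall": {"pos": (10.0, 8.0),
                     "extra": ["archived article", "scanned field photos"]},
    "sun_exhibit":  {"pos": (18.0, 8.0),
                     "extra": ["solar animation"]},
}

def nearby_content(phone_pos, radius=2.0):
    """Yield exhibits (and their off-floor digital content) within range
    of the phone's localized position."""
    px, py = phone_pos
    for name, ex in EXHIBITS.items():
        ex_x, ex_y = ex["pos"]
        if ((px - ex_x) ** 2 + (py - ex_y) ** 2) ** 0.5 <= radius:
            yield name, ex["extra"]

for name, extra in nearby_content((10.5, 7.5)):
    print(f"near {name}: offer {extra}")
```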
Now, as a technology person, I am tempted to go ahead and find a technical solution to any problem I come across. However, over time I began to understand that, especially when you are designing systems for people who do not have the same experiences you have, you need to take a more cautious approach, precisely because of that mismatch in experience. So with data visualizations, the approach we took was to first ask: what exactly is the problem we need to address? And second: what kind of solution do we need?

This is where I had to put on my interdisciplinary hat. There is a related field called disability studies, which is focused on the study of disability through social, cultural, and political perspectives. Over many years, disability studies has played a crucial role in defining the rhetoric, the language that people use, as well as explaining the ways in which researchers and others can understand disability.

Let's go to the first question: how do we find out what the problem is? In disability studies there is something called a model, a framework or approach one can use to define how we understand disability. There are two main models here: one is called the medical model, and the other is the social model. A researcher or practitioner using the medical model approaches disability as something caused by an underlying medical condition, and consequently the solutions they propose relate to that medical condition, trying to "fix" (in quotes) the disability. People following the social model, on the other hand, see disability as something caused by environmental factors. To give an example, from a paper by Lundgard et al. (2019): if you have a wheelchair user facing a set of stairs, someone following the medical model would try to fix the disability, looking for better prosthetics or a better wheelchair, maybe one that can climb stairs. For someone following the social model, the solutions point outward: what's needed is a ramp, not a better wheelchair or anything the person with the disability has to do.

With this, I have a poll. Connecting it back to what we are trying to do with accessible data visualizations: which model do you think is a better fit for our use? Let's give it a few more seconds. That's interesting, because in a way we need both of them, and that makes sense. When you look at the literature on the medical model of disability, most of it is focused inward; in fact, most accessible-technology literature takes the medical-model approach,
mainly because it defines exactly what the limitation is, and you are trying to address that limitation. However, the medical model is heavily criticized, mainly because it often takes a narrow view of what a disability is, and that is something people don't appreciate. The social model's solutions, by contrast, are more outward-focused: directed toward self-advocacy, peer support, or anything we can do to the environment to address the disability. Both of these models can be used: for instance, the medical model can be used while designing assistive technology to derive technical considerations, and the social model can be used to derive social considerations. It's important to get both, because historically it has always been about the person with the disability, what they can do, what we can do to fix the disability, and oftentimes we lose track of the person. Social considerations play a major role in understanding what we actually need to solve.

No matter what, it is imperative to work with people with disabilities while developing accessible technologies. If you're a researcher who is, as in my case, not blind, then when you try to tackle a solution, it's an ocular-centric approach. Here I have a picture of a Braille Rubik's cube and a Rubik's cube with tactile markers for the different colors; think about why a Braille Rubik's cube is not a great design. Personally, I had an aha, light-bulb moment when I was testing the navigation application I showed earlier with my wife. I had devised a right-handed interface, because I knew my wife was right-handed and thought it would be easier for her to use that way. But it turns out that, as a white cane user, she uses her right hand for the cane, so she would in fact have preferred a left-handed interface. Taking it a step further, ideally there should be an option for both right-handed and left-handed use to accommodate everyone. And taking it even further: do you even need a manual user interface? Could it be voice-activated, so that people don't need hands to operate it at all? Those are the kinds of insights you can only find by following a participatory design method.

Now, moving on to what kind of solution we need, we have two options. The first is universal design, in which we design a single solution intended to reach as many people as possible. The second is inclusive design, in which we might not design a single solution; it could be a suite of solutions meant to meet as many user needs as possible. Both have merit. Universal design is a concept that was introduced in architecture early on, and it makes sense for physical buildings, because you need that one physical space to accommodate as many people as possible. But with digital interfaces and technology we have an advantage: we are no longer constrained by physical space,
so we can design solutions that meet people's specific needs, and people simply pick the solution that works best for them. I first came across inclusive design principles through Kat Holmes's work. She identifies three principles one has to implement in order to approach a problem through an inclusive design lens. The first is to identify exclusion: in our case, we need to identify who we are excluding with data visualizations, and we have that part figured out; it could be someone who is blind or vision impaired. The second is learning from diversity: we need to reach out to people who are blind or vision impaired, who have been excluded by data visualizations, and understand their experiences and perspectives. The last is to solve for those specific needs, and then extend that solution so it also meets users who have other, situational needs.

Based on that, we designed a research study to examine the accessibility barriers of data visualizations. The first step: we reached out to around seven professionals in the geosciences who are blind and conducted in-depth interviews examining their experiences and the tools and techniques they use. We approached this with a thematic analysis, which means we take the interviews, create transcripts, find common themes across the different interviews, group those themes together, and try to build a story to understand what is common across all these individuals. The second step was a case study: we picked the Arctic sea ice representation, and I'll go into the details of what that is and how we approached designing an alternate, non-visual representation of sea ice.

First, the tools and techniques. One thing that surprised us, or rather shouldn't have surprised us, is that there is no single approach people use; they use a variety of solutions to access the information available. The first is visual: not all people who are blind are totally blind, and people with partial vision stated that they use magnifiers a lot. Then auditory: we have something called sonification, and this is a project that was done by a couple of our scientists at NCAR. In a traditional data visualization you have a map, and the data, which is basically numbers, is encoded with color, so the color can tell you where the different patterns are. With sonification, instead of color we use sound: imagine each of those colors represented by a different tone, and by changes in the tone we can listen to what the data is doing (see the short sketch at the end of this passage). It works really well for something like a line chart, like the chart you see below; it's a little more challenging for a two-dimensional data set like a map. Then, of course, there is the tactile side: here we have a Braille-embossed graph, and there is also something called a Braille display. In fact, many of our participants mentioned that, given the complexity of some of the data sets, they just go back to a spreadsheet, lay out all the numbers, try to narrow down where the interesting features are, and go through the numbers line by line with a Braille display, which translates whatever is on the spreadsheet, line by line, onto that Braille line you see at the bottom.
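[A minimal sonification sketch, not the NCAR scientists' actual project code: a data series is mapped to pitch, so rising and falling values become rising and falling tones, written out as a WAV file. The data values and pitch range are made up.]

```python
import numpy as np
import wave

data = [7.5, 7.2, 6.8, 6.1, 5.9, 5.2, 4.8, 4.1]  # arbitrary downward trend
RATE = 44100
lo, hi = min(data), max(data)

samples = []
for v in data:
    # Linearly map each value onto a 220-880 Hz pitch range.
    freq = 220 + (v - lo) / (hi - lo) * (880 - 220)
    t = np.arange(int(0.25 * RATE)) / RATE        # 0.25 s tone per data point
    samples.append(0.3 * np.sin(2 * np.pi * freq * t))

# Convert to 16-bit PCM and write a mono WAV file.
pcm = (np.concatenate(samples) * 32767).astype(np.int16)
with wave.open("sonification.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(pcm.tobytes())
```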
Lastly, they often use sighted assistance too: colleagues or others can give them an overview of what's interesting, which helps them narrow down to a specific area of the visualization they want to delve into.

This brings us to the other two sets of findings: the technical and social considerations we need to figure out when trying to address accessibility barriers. I've distilled these to the key findings. The first is that there is no single preferred approach, and again this is no surprise, because disability is a spectrum; even two people who are blind may have different experiences depending on the kind of disability they have. If you try to create one approach that works for everybody, you might end up creating an approach that works for nobody.

The other finding concerns the two purposes for which we use data visualizations, at least in our lab. The first is exploratory data visualization, which a scientist uses, as the name suggests, to explore what is in the data. The second is explanatory data visualization, which, again as the name suggests, we use to explain what's in the data; it usually comes after the exploratory phase, when we create a visual of something we want to communicate and try to tell a story with it. What we found is that there are very few tools available for research, and most of the current limitations are as follows. Existing techniques lack the resolution needed to interpret data at research level. Some techniques, like 3D printing and tactile graphics, were described as very memorable, among the most memorable representations of data the participants had encountered, but none of them had come across a tactile solution usable in research, because you can't create a 3D print every time you explore a data set; that's simply impractical. The second limitation is production time: 3D printing a data set can take a whole night, a day, or, depending on the complexity, a couple of days, which is not ideal for research. For outreach, yes: if you just want to tell a story, you can take your time and create a tactile representation that you use repeatedly at outreach events. Other limitations include the cost of production and the lack of techniques that give users the autonomy to explore data sets on their own. These are the technical considerations, the opportunities where technology could potentially help. But again, the caveat: we have to work with people with disabilities in a participatory design approach, to make sure we are not designing something utterly unusable
or something that totally discounts the experiences of someone with a disability.

Moving to the social considerations: these are things that society, or the community, can do to make visualizations more accessible. The first is alternate text. Many of my study participants use a screen reader, software that reads out the content present on the screen. Typically, when they visit a web page with images, there is an option to add alternate text to those images. A sighted person viewing the page will not see the alt text, but a screen reader will catch it and read out what the image is showing. Many people actually prefer alt text, but they pointed out that alt text is missing from many scientific publications, most of what is published in journals and in academia generally. For things like this we already have screen readers that can catch and read alt text, so the focus is really on what all of us as a community can do, and one thing is simply to add alt text. Even simpler than that, many people asked for access to the underlying data. This comes back to "no single approach works for everybody": many of these scientists have their own code and their own techniques for making sense of data, but for that you need the data. If you can attach the underlying data to the visualization published with the research, that is something very simple that can go a long way, and we don't need to develop a completely new technology to make it happen. Together, this gives us an idea of where we can focus and what kinds of techniques we can use to address some of these limitations.

The next part was to create a sea ice prototype. Sea ice is the frozen layer of the ocean in the northern hemisphere. Traditionally we use visualizations like the one you've seen: an animation of the northern hemisphere viewed from above, with the white area in the center being the sea ice shrinking and expanding over the course of a year. Sea ice has implications because it is one of the major victims of climate change: the annual cycle continues, but because of global warming, if you look at the September sea ice values, they are shrinking over time, and that can have major implications for the Arctic ecosystem. The video is one way to visualize it; another is a graph like the one at the bottom, where the blue line shows how the sea ice extent is changing, and you can see that around 2030 to 2040 the Arctic becomes essentially sea-ice-free in September. We have the sonification I showed earlier, and graphs like this can be made tactile. In this case, we took the sea ice data you see and laser-cut those sea ice pieces (a sketch of turning such data into cut paths follows below). The basic idea is that it's one thing to see a graph in which a number goes to zero, but it's another thing entirely to hold a piece of sea ice in your hand and feel how that piece gets smaller and disappears; I think we can get the point across more viscerally that way.
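[A sketch of one way to turn gridded sea ice data into a laser-cutter path, assuming matplotlib: trace the ice edge as a contour and write it out as an SVG polygon. The circular "ice cap" array below is a stand-in for real sea ice concentration data, not the project's actual pipeline.]

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; we only need the contour geometry
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a binary sea-ice mask on a 200 x 200 grid.
n = 200
y, x = np.mgrid[0:n, 0:n]
mask = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < 60 ** 2).astype(float)

cs = plt.contour(mask, levels=[0.5])    # trace the ice edge
outline = max(cs.allsegs[0], key=len)   # keep the longest contour ring

# Write the outline as an SVG polygon a laser cutter's software can import.
points = " ".join(f"{px:.1f},{py:.1f}" for px, py in outline)
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{n}" height="{n}">'
       f'<polygon points="{points}" fill="none" stroke="black"/></svg>')

with open("sea_ice_outline.svg", "w") as f:
    f.write(svg)
```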
Again, this was the feedback from the participants we tested with at the Colorado Center for the Blind, where we showed them the sea ice pieces, told them the story, and asked them to interpret the data. When they compare the sea ice piece from 1980 and overlay the 2040 sea ice on it, it really gets the point across, and we can then go into explaining why climate change is crucial and why we need methods to address or mitigate it. The basic goal of this project, following the inclusive design principles, is to take the visual sea ice representation, like the video you've seen, and add these additional supplements to it. For instance, we can provide files that people can use to 3D print or laser cut the pieces and experience the story for themselves. Now, I understand that not everyone has a 3D printer or a laser cutter, and that's where maker spaces come in handy. For this particular project we used the Boulder Public Library's maker space, and a huge shout-out to them; they really helped us navigate these machines and get those prints out. If you're someone interested in 3D printing, laser cutting, or things like that, look out for maker spaces: these are community workshops that anyone can go into and work on different projects.

With that, let me wrap up these topics. We started with an application of augmented reality that had nothing to do with accessibility, and we've seen how AR could be made accessible if we step a little away from seeing augmented reality as a predominantly visual means of augmenting information. Moving on, we looked at how, as technologists, we need to take a step back and try to understand the experiences of people with disabilities before designing solutions to address a specific issue. Putting all these things together, some of the lessons I learned on this journey: first, the quest for an inclusive future is a process. It's not as if we have this tactile representation of sea ice and the problem is done; we don't know how many more applications are out there. Second, as I said, it's important to understand the lived experiences of people with disabilities and to include people with disabilities at every stage. We saw that with the tactile representation in the data visualizations project, where the participants had insights that simply would not occur to us; as someone who is sighted, there are perspectives I couldn't have imagined on my own. Next is drawing from interdisciplinary work. We have many disciplines, all progressing at a rapid pace, and the advantage of interdisciplinary work is that the solution to your problem may already exist in some other discipline; being able to draw on it can really expand our horizons. And lastly, technology can be a great enabler, but it has to be considered in tandem with human factors.

For the acknowledgments: sincere thanks to all our collaborators, who taught us about disability studies, about museums, and about the science involved,
and who helped us with visualizations and handling data, as well as the folks from the Boulder Public Library who helped us manufacture some of these prototypes and use the machines to do it. And lastly, all the funding agencies, without whom this work wouldn't have been possible. With that, I thank you all, and I'm happy to take any questions.

Thank you so much, Nihanth, for that wonderful talk. You covered a lot; I can see that you and your team have done a great deal in augmented reality and VR, and that hailstone with the doggie is pretty neat. I did play Pokémon Go, the app you shared, and it's pretty neat, and I have this cube that is also the kind of thing you talked about, and it's pretty fun. I think the audience can definitely check out the MeteoAR application and everything you talked about on our website. We do have a few questions from the audience, starting with Eric Levine's question: can you please discuss the software you use for this work and the hardware and compute requirements?

Sure. At NCAR, most of our focus has been on creating technology that people can readily access. With something like augmented reality, you can go really high-end, with AR glasses, or you can use low-end devices like mobile phones. If you just want to get started, honestly, any run-of-the-mill laptop would work for an augmented reality application. As for software, I use game engines. Interestingly, game engines have applications beyond video games; they can be used for data visualizations as well. I use Unity in particular, but I assume you could use Unreal, which is another game engine, as well. On the hardware and compute requirements: take any existing laptop with basic RAM, even one with minimum specifications, and you should be able to put together an AR application with it, because a lot of AR also depends on the data you use. You can build an augmented reality application with as little data as you want, or as much.

That's great to hear, that there's already technology out there people can use. Thank you for that question and that response. Another question we have: in VIO, what does it mean when there's a denser cluster of yellow dots versus just a few sporadic yellow dots? I did notice this when you were showing the couch; the pillows were pretty bright, and the rest of it was a little less so.

Yes, and that's because the pillow has a design on it, and the design and the textures create more dots, so it becomes easier for the phone to see. When I point at a wall, the wall has no texture, and that's where the phone isn't able to detect any features. As for the inner workings of exactly how it does this, I believe that's a proprietary secret of Apple's. But in general, that's the idea: more texture gives you more dots, which is good; less texture, fewer dots. So think about the texture of objects.

Thank you for that clarification. And can you tell us a little about what you hope to create at NCAR in the future?

Yes. We really see two potential future directions. One is the application we are collaborating on with the Smithsonian's NMAAHC,
where the ideal goal would be to create a digital twin of the exhibit space, so that we can not only use it as an accessibility technology but also give it universal applications. That's one direction. In fact, using that same approach, one thing I haven't discussed is potentially combining a phone-based AR experience with sonification to figure out whether we can make three-dimensional data sets accessible. And more broadly, the data visualization and accessibility project sits in a very underexplored area of visualization research, so there's a lot of work to be done on that front, on both the process and the technology, to improve accessibility.

It sounds like you're also doing collaborations external to NCAR, which is great to hear. And can you explain what it means to do interdisciplinary work?

Yes. Interdisciplinary work is basically any work that draws from multiple disciplines. For instance, I have my degree in mechanical engineering, but while I was working with lidars I had to learn the optics of how lidars work, at least at a basic level, and I needed to know computer vision concepts, because what a lidar sees is not exactly what we want; it doesn't directly measure the wind field, you have to calculate that back from the data. So, basically, interdisciplinary work is any work that goes across disciplines.

Great, thank you. It's cool that you can have a degree that's maybe not directly AR/VR and still apply the skills you learned from it in this field, and it sounds like you also did an internship to get exposed to this type of work. Thank you. And can you help us understand: is the metaverse the same thing as digital twins?

They're related, but not exactly the same. The metaverse, which Meta (Facebook) is currently working on, is about virtual experience: you create a three-dimensional world, and using virtual reality goggles, all of us can transport into that virtual space. So the metaverse is more about VR. A digital twin is a much broader concept. It's related to the metaverse; in fact, the digital twin I mentioned could be used as a metaverse, but the scope of digital twins is a lot broader than just visuals or a space to work in.

Thank you. And thank you, John, for your comment that this was a fascinating talk and you learned a lot; we're definitely glad to hear that. Nihanth, you mentioned your career path and that you had an internship. For any student who's interested in going into this field but might not know how, or what that process would be, can you share your recommendations on what they should study to pursue this type of career path?

Again, as I said, when I started my work, if I had known exactly what I had to do to be where I am, it would have been a lot simpler. Based on my personal experience, and it might be different for people who have different experiences than I do, I would say: just start where you are. If you're someone who's absolutely new to coding,
I would look into learning a programming language, and you can even start with game engines; they offer a very gamified, fun way of learning and of seeing the results of what you can do with code. So I would definitely learn programming. That's going to be applicable in any field; even if you learn programming and decide this is not the path you want to take, I'm pretty sure you can use it elsewhere. So I would definitely start with learning how to code.

Awesome. And you did mention interdisciplinary work, so even for those not doing the coding, there might be other opportunities to collaborate in this type of work?

Yes, exactly. To clarify: if you want to do the technology side, you can, but if you want to collaborate, we can meet somewhere in the middle; you bring in your expertise, and we collaborate.

Awesome. Yes, because art is always the other question, how we can make art a part of STEM, and it definitely takes a lot of creativity to do some of the work that we do. It doesn't look like we have any other questions, but we will have this recording up on our website, and we're always happy to connect you with the scientists if any other questions come up. Definitely check out the website, check out some of the featured MeteoAR applications, and once the NCAR Mesa Lab opens up again and the technologies are set up, you'll be able to use the applications that Nihanth mentioned today. With that, I just want to say thank you so much, Nihanth, for such a great talk and for sharing all the work that you and your team are doing at NCAR. We'll see everybody else at our next event. Thank you, Dan, Aliyah, Brett, and Paul, and we hope you have a good evening. And one more comment that just came in: "Very interesting talk, learned a lot; inclusion is a very important subject." So thank you so much.