Welcome to another episode of Likeable Science here on ThinkTech Hawaii. I'm your host, Ethan Allen, and thanks for joining us as we talk about another aspect of science, another application that's making waves here in Hawaii and would like to be making waves around the world. I have with me today David Takayama and Mark Kamura, both from Oceanit. Welcome, guys. David's their head of IT, has a master's in urban and regional planning, and runs the innovation consulting group, I guess, is that what they call it? And Mark's a senior scientist with a master's in mechanical engineering, and he's worked on machine learning, I could say, right? Yeah. Excellent. So, just in case our audience may not know Oceanit, can we say a few words about what Oceanit is? A mind-to-market company, I gather. Yeah, we're a diversified engineering and technology company. We have a philosophy of mind-to-market, so with any ideas we have, we like to find ways to bring them out into the marketplace. Yeah, that's great. It's sort of all in one house, right? You think of a good idea and carry it all the way up to handing it off to somebody to actually manufacture the product or process and run with it, right? Right. Excellent, excellent. Well, that's great. It's proven a good model; Oceanit has gotten a lot of good stuff out there already, right? A lot of different products and processes over the years. And I want to talk about a current project, which I gather has been developed with Department of Energy funding, as I understand it? Yeah, so we've got a grant from the Department of Energy. But my background is actually not technical. I've worked for Oceanit for over 20 years, but I was managing projects, doing business development, and Oceanit has a philosophy that anybody in the company can submit proposals. I was bored, so I submitted a proposal to the federal government for a grant, and it was a mashup of technologies that had been developed within Oceanit, and part of it included machine vision. So I submitted this grant, we got funded, and I was pleasantly surprised. And then once we got funded, we had to figure out how to do it. I've been in that situation, yeah. You say, well, we'll do this. They say, good, here's the money. Oh, now we've got to do it. Yeah. And so we developed a proof of concept. But basically, the Department of Energy's problem is what happens after a major disaster, when the power goes out. There was a study done that found power outages cause about $170 billion in losses worldwide. And another study said that 9 out of 10 people in this world would die in an extended power outage because of its effect on water supply, cooling, and other things. It's a big problem. Right. And so a storm comes through, knocks down power lines, knocks over trees. And part of the issue then is somebody's got to come in and figure out how big a workforce we need, what materials we need to fix all this, right? Exactly. That's a labor-intensive business, right? Exactly. I think your first photo that we have here sort of shows the scope of it, right? Here are people inspecting this. And obviously, if you get a couple of guys out there going pole to pole to pole, that's going to take a long time, right? Yeah. So this is the traditional way it's done. Right.
But after a major disaster, people still manually go out, do a damage assessment, and then come back to the office and report out. And every minute counts when there is a disaster. Sure. Hospitals run out of power. They're running on emergency generators, keeping people on life support; refrigeration units for food are down, and food starts to spoil very quickly. Yeah, you can't pump water into reservoirs. I mean, there are a thousand things that can go wrong, right? So what was the idea? You guys teamed up then, right? Yeah. So our technology uses machine vision to quickly identify damaged grid infrastructure so that decision makers know where to allocate resources more efficiently and quickly. And maybe if you could queue up that video, Mark can explain a little bit about our technology. Sure. OK. So tell us where this is and what's going on here. So this is actual video footage from, I think, Hurricane Harvey, about one week to ten days after the disaster. And what we're looking at here is that whenever you see a green box like that, it means the pole is intact. It's standing. And we just missed it, but there was a red box on one lying on the ground. That means our machine vision model actually detected a down pole. So we're using a deep learning AI system to detect those poles. As you can see, there are some confusing elements like trees, but our system is not detecting those. We're just detecting poles like that. Yeah, I know. I mean, it's a messy landscape, a lot of visual noise, and it's amazing. I think there's a red box right there. On the down one, yeah. OK. So your machine can then tell you for a given area. I mean, this is all from a drone, right? Exactly. And as you can tell, machine vision today is just so smart that it's almost as good as the human brain. And it's then a question of teaching it what to look for and what to ignore. What kinds of features a standing pole has versus a down pole, right? So what we're looking at right here is the system we developed, code name EOVU, actually. We're using the same technology, but we developed this for demo purposes, experimental purposes. In the background, you'll see a model street with one pole standing and one lying down. The drone has a little camera, and the laptop right there and the drone are communicating, both to control the drone and to receive the video. And on the screen, you can see that we have an up pole and a down pole as well. Yeah. And so you have to teach the machine to distinguish these things. Exactly. And it's real time like that. It's really fast today. So Mark makes it look easy, but it's an extremely difficult problem. If you look at that first video with all the debris, there are a lot of things that look like trees. There are palm trees that look rather a lot like telephone poles. There are lines on the street that may look like down poles. And your machine has to learn to reliably distinguish among all these things, right? And that's a matter, to some extent, of this sort of, I don't want to say exactly trial-and-error learning, but it's a teaching process, right? Yes. On your initial run-through, the machine probably misidentifies a bunch of stuff, and you have to go through and tell it what it got wrong, right, and gradually then it learns. That's what machine learning is about. Literally, it's machine learning, yes.
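[For the curious: here's a minimal sketch of what a detection loop like the one described above might look like in code. The weights file, class names, and video path are hypothetical stand-ins; Oceanit's actual EOVU model isn't public. This assumes the open-source ultralytics YOLO and OpenCV packages.]

```python
# A minimal sketch, assuming a detector fine-tuned elsewhere: run drone
# video through it and draw the green/red boxes described in the show.
import cv2
from ultralytics import YOLO  # pip install ultralytics opencv-python

model = YOLO("pole_detector.pt")          # hypothetical fine-tuned weights
CLASS_COLORS = {"up_pole": (0, 255, 0),   # green box: pole standing
                "down_pole": (0, 0, 255)} # red box: pole on the ground

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical footage file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the frame; the confidence threshold filters out
    # low-confidence hits (trees, street lines, other visual noise).
    for result in model(frame, conf=0.5):
        for box in result.boxes:
            name = result.names[int(box.cls)]
            if name not in CLASS_COLORS:
                continue
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), CLASS_COLORS[name], 2)
            cv2.putText(frame, name, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, CLASS_COLORS[name], 2)
    cv2.imshow("pole detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```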
Until it gets better and better and better at detecting exactly what you want. Yes. Now, what I'm finding out in learning this, too, is that machine learning is as much art as it is science. Right. I thought you could just feed images into the system, but Mark is actually an artist, I think, as well as a scientist. Yeah, well, that's the beauty of it. And what this sort of illustrates is that you're taking these various different technologies, right, computer vision, machine learning, and drone technology, and sort of mashing them all together. Yeah, it's really, really exciting to put them together creatively and productively. Exactly. And it's really exciting to see actual cutting-edge science being put to useful purposes like this, to help people. Right, exactly. This is why I thought you guys would be so great here on Likeable Science, because this is a really great application of some cutting-edge science that people are just now figuring out how to make work together, and it has a clear, understandable purpose. We can see disaster relief is a big thing. In fact, David and I visited a small town in Texas the other day, La Grange, I think. It was a small community in one of those hard-to-reach places. And we got to see those places and see the people, so it gives a lot of meaning to the work. Yeah, I had a good friend in Tallahassee. He got brushed by Hurricane Harvey. And he said there were down poles and down trees all over the place, and it caused trouble. And again, as you point out, the faster the damage can be identified, the faster you can get the right resources, the right numbers of people, the right kinds of people, the right equipment in there to fix things, right? Yeah, it made a big difference. Mark and I were doing the technical research and developing this, but when we actually went there and talked to people who had experienced being in a disaster, it gave the work a lot more meaning for us, I think. Yeah, yeah. Again, you see this importance of time if you look back at Puerto Rico, right? The initial estimates right after the hurricane hit were that 50 or 60 people had been killed. And later on, that number basically got amped up to several thousand, right? Because people died from the subsequent lack of power, in addition to those who had been killed outright. A bunch of those deaths came later but were directly related. Right, and in the case of Puerto Rico, gathering drone data was not an issue. There were tons of drone data. The problem was the image processing speed. It took so long to process the data, to figure out what everything meant, what the data meant, that that's what slowed people down. So with technologies like this, where it can be almost real time or near real time, the image processing is very quick and we can get results very quickly. Yeah, once you've taught your machine to do the recognition and the sorting of the initial data, you don't have to manually sort through hours and hours and hours of pointless video, right? The machine's already basically flown over and told you, look here, look here, look here, right? Yeah, I'm sure in Puerto Rico there were other problems in terms of just getting the resources there after the damage was identified, which is often another challenge. But this project certainly illustrates a really nice big first leap in terms of being able to get the response moving much more quickly, you know?
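[Again for the curious: the teach-correct-retrain loop David and Mark describe, sketched with the same hypothetical tooling as above. The dataset file "poles.yaml", class labels, and epoch counts are placeholders, not Oceanit's actual setup.]

```python
# A minimal sketch of the iterative teaching process: start from generic
# pretrained weights, train on hand-labeled pole images, review the
# mistakes, fix the labels, and train again.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained detector as a starting point

# Round 1: fine-tune on an initial hand-labeled set of "up_pole" and
# "down_pole" bounding boxes ("poles.yaml" points at images and labels).
model.train(data="poles.yaml", epochs=50, imgsz=640)

# Evaluate on held-out footage. A human then reviews the output, relabels
# the false positives (palm trunks, street lines) and the missed poles,
# adds those corrected examples to the dataset, and runs another round.
metrics = model.val()
print(metrics.box.map50)  # mean average precision; should climb each round
```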
I think this is very relevant to Hawaii especially. Because, I mean, Oahu hasn't really experienced a major hurricane or typhoon or anything. But they always miss us just by a little bit, right? Eventually it's gonna happen, especially with climate change; eventually something really could happen. And also, before joining Oceanit, I was living on the Big Island. I saw my friends dealing with, you know, those hurricanes and the lava flow. So I know how much it means to people in rural areas to see the government working fast, and faster, to help them. So it's really personal to me as well. No, you're quite right. The issue in Hawaii, and on Oahu particularly, is that yes, if we do have a major disaster, the timing is gonna become critical, because we have very limited reserves of food, water, et cetera, and we're gonna need to know very quickly just how bad things are and just what's needed. And so, again, technology like this will be incredibly valuable. Yeah, but really, you know, it goes beyond just this sort of disaster relief and categorization process, right? I mean, once you begin to think about these technologies and put them together, you suddenly realize that there are a lot of things you can do with this, right? Right, exactly. I mean, it gets into the whole issue of self-driving cars, which of course are using exactly the same kind of stuff, right? They have to be able to recognize a wide range of things and make decisions based on that recognition as to what to do, whether to stop, to run over it, to swerve around it, or whatever, right? Yes, that's our philosophy, what we talked about, mind to market. So we often use government-funded projects to get up to speed on a new technology and build it up, but the application of that technology in a market could often be totally different from the original concept we developed it for. And that's both market-driven and involves talking to people and understanding what their real-life problems are. So we love doing that, finding out what the problems are and how technology can be applied to them. Yeah, yeah, I mean, that always happens with technologies, right? They get developed for one process or one goal, and then people take them and start using them for other things. Usually, usually, that's for good. But yeah, I mean, the folks who developed the original technology for the internet could not envision what the internet has become, right? And how broadly it's being used for all these different things. So, well, I'll tell you what, I think we're probably coming right up on a break here, but maybe we can think about this over the break. When we come back, we can talk a little bit more about some of these other applications and ways that you guys, and other people, are thinking about using computer vision or other sensing technologies in combination with machine learning to make products that'll help mankind. All right, we're gonna go off for a break here. Mark Kamura and David Takayama, both from Oceanit, and I'm your host, Ethan Allen. We'll be back in one minute. Aloha, I am Howard Wiig. I am the proud host of Code Green for ThinkTech Hawaii. I appear every other Monday at three, and I have really, really exciting guests on the exciting topic of energy efficiency. Hope to see you there.
Aloha, I'm Wendy Lo, and I'm coming to you every other Tuesday at two o'clock, live from ThinkTech Hawaii, and on our show, we talk about taking your health back. And what does that mean? It means mind, body, and soul. Anything you can do that makes your body healthier and happier is what we're gonna be talking about. Whether it's spiritual health, mental health, fascia health, beautiful-smile health, whatever it means, let's take healthy back. Aloha. And you're back here on Likeable Science with me, your host, Ethan Allen, here on ThinkTech Hawaii. And with me today in the ThinkTech studios are David Takayama and Mark Kamura, both from Oceanit, a senior scientist and IT director. And we've been talking about a project they've been working on in conjunction with the Department of Energy for disaster relief, basically combining computer vision with machine learning to help identify power grid infrastructure that's been damaged. Look at power poles and say which ones are up, which ones are down. Get some estimate of how many supplies you need, how much manpower you're gonna need, and where, to fix everything. But as we said just before the break, this melding of technologies goes way beyond just that one project, right? So, talk to me, if you would, about some other projects that Oceanit is thinking about that involve similar kinds of things. If it's not classified. Actually, just this morning, or yesterday, I was talking to Mark. Somebody had sent us some information on different types of plants, and I was asking Mark, can we use this to help identify different plants? Like, if somebody's in the field, can they just take a picture of a plant and identify it, if they have a thousand different plants they need to identify? Can it be used for that? Yeah, I think I've even heard of at least some systems that are trying to do that, some databases you can upload your picture to, and it will give you the best matches. Right. Or animals, right, if you're out there. If you're out snorkeling, rather than carrying one of those plastic cards with all the pictures of fish on them, you just snap a picture of the fish and upload it. Yeah, so when I gave him the answer, basically the answer was yes. I mean, today, machine vision is so smart that whenever the human brain can see something, it's most likely that machine vision can do it as well. It's that good today. And it's one of those things people don't think about, how marvelous and amazing our vision is. We can identify a plant, say a rose, no matter which way we see it, facing away from us, facing towards us, facing up, facing down, and we'll still recognize it as a rose, right? But that's a very difficult visual challenge, right? I mean, those are lots of different patterns. They don't always have the same features showing, and for computer vision to learn to pick that out, and do so reliably, is hard. And they're now getting very, very good at some of that identification. I gather they're using computer vision and machine learning to read X-rays and radiological scans of people, and some of the machines now are getting, in that particular area, better than the human readers. Basically, they can spot smaller features more quickly. They can process huge amounts of data. They don't get bored. I mean, that must be incredibly tedious work.
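[A minimal sketch of the "snap a picture, get the best matches" idea: run one photo through a pretrained classifier and print the top guesses. A real field tool would use a model trained on the specific thousand plants, or reef fish, of interest; the generic ImageNet classifier and the file name here are stand-ins.]

```python
# Classify one photo and print the five most likely labels, assuming
# the torchvision and Pillow packages are installed.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize as the model expects

img = Image.open("mystery_plant.jpg")  # hypothetical photo from the field
batch = preprocess(img).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the best matches with their confidence scores.
for p, idx in zip(*probs.topk(5)):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```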
And that's one of the beauties, right? Right, right. Yeah, and I saw you had something on here about coffee bean quality control as another project. So tell me about that. Yeah, so we had a project where we helped identify ripe coffee beans versus coffee beans that were not ripe yet. So it helped the coffee industry pick the beans that are ripe, using machine vision. Oh, okay, so it's looking at coffee trees, basically, and judging the ripeness of the crop. Right. Okay, I was thinking that once the beans are all processed, it would take a long time to sort through individual beans, but it's looking at the whole tree. I can see how that's a much more efficient way to do it. I have some friends on the Big Island who are coffee farmers. And they told me that over a coffee season, depending on the time of year, the demand for labor changes, right? So one of their problems is securing labor. So machine vision and AI could solve a big problem and be a big help for them. Yeah, I hadn't thought about that, but right, it can tell you these trees here are gonna be ready in three or four days, these trees over here in eight or nine days, so you can get your labor force lined up properly to go to the right sections of your coffee plantation at the right times, with the right numbers of people. So there are Hawaii-specific problems like that, and I think it's important for us to talk to the people who need these tools, or potentially need them, show them what machine vision can do, and work with them. Right, and again, I mean, this goes into so many different areas, right? I understand they're using some of this same kind of processing for looking at astronomical, cosmological views, these big masses of stars, where you wanna be able to pick out certain patterns that indicate certain things, whether the universe is expanding or not, say. And again, it's very time-consuming, rather subtle visual work, and for people to do it, to train the people, to get them to do it reliably for long periods of time, is tremendously difficult, very expensive, very human-capital-intensive, whereas once you put the time and energy into getting your machine to learn this, it'll do it in a heartbeat forever and a day, right? And it's not gonna ask for coffee breaks or overtime or anything like that, right? When AI figures that out, we're gonna be in trouble, right? There'll be other jobs, definitely. We will need more creativity, and since we're still really good at creativity and empathy, there will always be room for humans. Well, yeah, that brings up a really interesting point. Some people are very afraid of artificial intelligence or machine learning taking over large numbers of tasks and putting human beings out of work, making us redundant for a whole lot of things. And it's certain that there are gonna be changes, right? Twenty years from now, some of the tasks we do today are gonna be done by machines routinely, right? Yeah, I think whenever a new technology is introduced, there's always that threat, but usually, on the other hand, there are always new jobs created by the ancillary needs of that technology. So I wouldn't worry in this case. Yeah, exactly. I mean, 20 years ago, nobody thought about teaching coding as a big thing, right?
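[Circling back to the coffee example above: one toy way to flag ripe, red cherries in a photo of a tree is a simple color threshold. This is purely illustrative; the transcript doesn't describe Oceanit's method, and a learned model would handle lighting and occlusion far better than a fixed color range.]

```python
# Estimate roughly how much of a tree photo shows ripe-red cherries,
# assuming OpenCV and NumPy are installed. The file name is hypothetical.
import cv2
import numpy as np

img = cv2.imread("coffee_tree.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis in HSV, so combine two ranges.
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

ripe_fraction = np.count_nonzero(mask) / mask.size
print(f"approx. ripe-red pixel coverage: {ripe_fraction:.1%}")
```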
And now, every elementary kid in the world, I think, is learning to code, right? I think in South Korea, they've now made it... Mandatory. Yeah, it's mandatory as a course all through K-12, actually. Yeah, because you're seeing that it's gonna be a big thing going forward, and that's a skill which, again, probably isn't gonna fall out of date too fast, though eventually AI may be able to take over and start doing all the coding too. So, where do you see this going? I mean, it seems like a field with awesome potential. Yeah, I think there's huge potential for different kinds of applications, and like I said, we love to find and identify other applications. We use a process at Oceanit called design thinking, which came from Stanford's design school, but like Mark mentioned, it's really an empathy-based technique where we try to find out what people's core problems are, where their pain is, and try to design solutions around that. By using that technique, we usually start at a very small scale and prototype and test really quickly to see if our technology can help them. Sure, sure, good prototyping is incredibly valuable. But you bring up a very interesting point too, and one of the worries about artificial intelligence and machine learning is, how do you teach morality or ethics to these machines, right? There are certain areas that are such no-brainers, where it's easy to tell that something is a good thing. An example is actually pole damage detection, you know, things like that, done simply to help people, to help people's lives. That's something where it's so easy to say, that's a good thing. Right, but I mean, the classic issue for, say, autonomous vehicles is, if your car is faced with running into a cement pole or hitting a group of people, which is it gonna do? Is it gonna protect the occupants of the car or try to protect the bystanders? And how does it make that judgment? Who gets to decide who makes that judgment, and even more deeply, what's the process by which that decision is made, right? Yeah, so there are gonna be all kinds of interesting challenges coming forward, and lots of opportunity for smart people to be thinking about this, right? And helping, again, helping ensure this technology does get used for good and not for nefarious purposes, right? Yeah, I mean, you're right. Even that drone video that we showed, we're identifying poles, but there's other data in those images. So people have to be careful about how that data is used and applied. Sure, sure, yeah. Somebody could see that as an opportunity to go looting, right? Because, hey, here's an area that's damaged, has no power; houses, alarm systems, they're all gonna be down; I can go in there. Yeah, so again, as with most technologies, right, there are ways they can be applied for good and ways they can be applied for not-so-good purposes, right? And that's part of your challenge, I guess, trying to keep steering it toward the beneficial ones, the ones that are appropriate and help humanity as a whole, right? Right. I can't remember who said this, but there's a famous quote that says the best way to predict the future is to create it, right? So, in a sense, that's what we're trying to do at Oceanit.
Yeah, yeah, I know. It's very admirable work, and a little scary, I'd imagine, sometimes. You really are out here on the cutting edge of technology, wondering who's gonna take it and do what with it now. And, you know, things get away from people, too. I've just been reading a biography of Albert Einstein; he was quite conflicted about the whole E = mc² business, where that particular little insight got taken off and weaponized. He was an avowed pacifist for much of his life. But, yeah, all right, amazing stuff, amazing stuff. So, if you guys had to give brief advice to today's students, what should they learn, how should they prepare for worthwhile careers, what would you say? One thing, Mark is actually just starting to put together a course on AI and machine vision. I don't know if you wanna talk about that a little bit. It was pretty cool. I took his course, so we were kind of prototyping it to see how it would work out, but it was fun. Cool, can you give us a 30-second version of that? Sure, so we work with Kamehameha Schools to offer these AI workshops to not just students, but also adults and teachers and entrepreneurs. We try to simplify it, just to give them some ideas and, you know, good intuition, and that's good enough. So we made it fun. Cool, excellent. Well, it sounds very exciting. Thank you guys so much for being here. I've learned a lot. I have a better appreciation for what your technology can do, for the challenges you guys face, and for the wonderful things you've accomplished. Thanks to you, and thanks to Oceanit. And thanks to you for watching another episode of Likeable Science. We'll be back next week. Until then, I'm your host, Ethan Allen, here on Likeable Science on ThinkTech Hawaii.