Hey, Aloha, and welcome back to the Think Tech Hawaii studios for another exciting episode of Security Matters Hawaii. Today, a little techie, a little out there, really good, solid stuff I want to get into. We've got Jeremy Crennett with us from BriefCam. BriefCam has been building an analytic environment for video investigation for a long time, so Jeremy, I'm really happy to have you here today. I know you're on the East Coast, so I appreciate you working a little late tonight and joining us today. It's my pleasure. Thank you. Thanks, man. So I haven't gotten you to Hawaii enough, which is my fault, so go ahead and give our audience, I guess, as much as you care to share about your history, and I know you've been in the industry a little bit, so give us some of that and how you ended up at BriefCam. Yeah, so I've been in the security industry for about 20 years, done everything from communications to access control to video. The thing that brought me to BriefCam was there was an opportunity to get into video analytics, and the solution had changed significantly since the first time that I'd seen it five or six years earlier; it had become a solution that was really compelling for the market and something that I really wanted to get involved with. Analytics have been around for many years, and what BriefCam was doing was really a different take versus a lot of the other organizations that were involved in the analytic industry. So I wanted to get involved, and that was about three and a half years ago now, and things have been going really well. Yeah, I believe that early on BriefCam was starting to leverage GPUs in the platforms that they integrate with, and I know you guys are kind of across the spectrum of industry. Talk a little bit about that change, the rise of the GPU, I know you've had good visibility on that, and what it's been able to do for BriefCam and the technology you guys have under the hood.
Yeah, so BriefCam has been around for more than 10 years now, and through the history of that time processing video and providing these capabilities out to the market, the compute that's been available has really improved dramatically, and what that ultimately turns into is more ability to get granular in the analysis. So when BriefCam started, it was really just a tool for compressing the review of video. Yeah. That was the only solution that was available. Over time it got into filtering and got into recognizing objects in the scene. What the GPU has really brought is a game changer in terms of the technology, because it used to be around computer vision, and what computer vision did was allow the processing to recognize things that were in the scene. So it might be things like colors or directional movement and such, but the whole industry has shifted towards a new type of technology, and that technology is deep learning, and GPUs are really critical to being able to make that shift to this new technology. And what deep learning has allowed us to do is multi-fold. It allowed a huge increase in the accuracy of the analytics, which we can talk about more. That's critical for almost every type of application of video analytics, but also just the ability to process much more on these GPUs. So we happen to use NVIDIA GPUs, and with that we've been able to really significantly reduce the cost around the processing of this video. Where before you might have had a whole rack of servers in order to process a lot of video, by using GPUs we can take a server, put multiple GPU cards into that server, and process much more video at a much lower cost. So it allows us to use more of the video, and this has really been core to what BriefCam has been trying to do over the years: to make more use of the video.
So much video gets recorded and most of it never gets used at all, and so what we want to do is we want to be able to provide the ability to use that infrastructure, use the investment that's gone into the video surveillance for a variety of purposes. Yeah, I think we used to always say 99% of the video never got used. Let's take a look at using some video. I appreciate you sending in some clips. So let's cut to, I think we've got a little clip here of some of the technology at work. Yeah, so the first clip, this just shows a scene, a typical surveillance scene, and what you see here is that all these objects are being tracked. So there's boxes around these objects, it's tracking them through the scene. So we know when these objects came on camera, where they went, also the nature of the objects themselves. So if it's a person in a black shirt or blue pants or riding a bicycle, all of these elements are things that we're saving into the database. We don't necessarily know what you're going to want to find in that video, you may not know initially when you jump into it. But by saving all this information, you can use whatever attributes you're told about the nature of the situation in order to find that needle in the haystack. I think a lot of people, they'll see the results of a video investigation. And I think anyone that's had to watch 100 hours of surveillance video knows that it's not just the output from that, that's the majority of the work. It's the many hours that go into investigating that video, that police work that's being done on a daily basis in order to be able to find that needle in the haystack. And from a video investigation perspective, that's really a lot of the history of BriefCam is finding that needle in the haystack where you can realistically search thousands of hours of video and find that incident that you're looking for. 
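The object metadata described here (class, clothing colors, time on camera) can be thought of as a searchable index built on top of the video. The following Python sketch is purely illustrative; the class, field names, and attribute keys are assumptions, not BriefCam's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Detection:
    """One tracked object extracted from a surveillance camera."""
    track_id: int
    camera: str
    first_seen_s: float          # seconds from start of recording
    last_seen_s: float
    attributes: Dict[str, str] = field(default_factory=dict)

def search(index: List[Detection], **wanted: str) -> List[Detection]:
    """Return detections whose attributes match every requested value."""
    return [d for d in index
            if all(d.attributes.get(k) == v for k, v in wanted.items())]

# Toy index: the kind of per-object metadata described in the interview.
index = [
    Detection(1, "cam-01", 12.0, 30.5, {"class": "person", "shirt": "black"}),
    Detection(2, "cam-01", 40.0, 55.0, {"class": "person", "shirt": "blue"}),
    Detection(3, "cam-01", 41.0, 70.0, {"class": "bicycle"}),
]

hits = search(index, **{"class": "person", "shirt": "blue"})
```

Note that `search` simply ANDs every requested attribute together, which is the same needle-in-a-haystack narrowing the interview describes: each added descriptor shrinks the candidate set.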
Yeah, and find it in minutes sometimes. It just depends on the volume of video, but it's reduced a hundredfold or something; I think statements like that are legitimate. Absolutely. Yeah, I mean, we've seen a lot of video investigations where the video was saved off because they weren't able to find what they were looking for in the video. And then they came back, they used BriefCam to look at the video, and they found it right away within seconds. So really having the ability to use the descriptions that you've been given and search through hundreds of hours of video is really key to not only the time that it takes to perform that investigation, but often we find that it makes a difference in terms of the success of the investigation. So let's go ahead and take a look at some other clips and we'll show you some other ideas, some other ways you can apply this technology. Sure. So this is a typical surveillance clip. So this is what an investigator might be watching from one of their cameras. So, you know, if there was an incident that happened in this scene, you'd want to be able to find the person, the vehicle, whatever you want to find. You can see that, you know, there's fairly constant activity in this video, but there's a lot of empty space. And if you're an investigator, you're waiting for the next thing to happen that's relevant to what you're looking for. So often what we find is that people start fast forwarding. And so they'll start out at 2X speed and then they'll go to 4X speed and then they'll go to 8X speed. And eventually they're missing whatever it was that they were looking for. So if we want to go to the next video, we can show you how we can really make that investigation of that video much more efficient. So, you know, what we're able to do is we're able to take all of the objects that were in that previous scene, we pull them out of the video and we put them back in again in a compressed way.
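BriefCam's actual synopsis technology is proprietary, but the core idea described above, pulling objects out of the original timeline and re-packing them densely so hours of activity play back in minutes, can be sketched with a toy scheduler. Everything below is an illustrative assumption, not the real algorithm:

```python
import heapq

def synopsis_schedule(durations, capacity=4):
    """Assign each object a new start time so that at most `capacity`
    objects play at once. Returns (new_starts, total_synopsis_length)."""
    # Min-heap of times at which a playback 'slot' becomes free.
    slots = [0.0] * capacity
    heapq.heapify(slots)
    starts = []
    end = 0.0
    for d in durations:
        t = heapq.heappop(slots)      # earliest free slot
        starts.append(t)
        heapq.heappush(slots, t + d)  # slot busy until this object finishes
        end = max(end, t + d)
    return starts, end

# Ten objects of 30 seconds each, originally spread over half an hour of video:
starts, total = synopsis_schedule([30.0] * 10, capacity=5)
```

With five simultaneous slots, ten 30-second appearances that were spread over half an hour compress into a one-minute synopsis, which is the kind of reduction the demo clips show.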
So you can see these time stamps on all these objects. That's when those objects, those people, were present in the scene. So they're from all different times. You see sometimes they go through each other; it's because they were actually on camera at different times of the day. But what this lets you do is it lets you watch that video at a normal speed, looking for the object that you're trying to find. But you're able to do it sometimes in one one-hundredth of the amount of time that you would have spent just watching that video. And then I think you can actually do a little filtering, right? We'll see another example. Yeah. Yeah. So if you know what you're looking for, in this case, we filtered on women wearing blue. So we've reduced the video significantly. So what started out as a half hour of video becomes less than a minute of video here, because we've taken out all the men, we've taken out all the bicycles, we've taken out all the vehicles, and we're just looking at women wearing blue clothing or having blue somewhere on the purse or shoes or something like that. Another example, the next one, is an example of bicycles. And so, you know, if you know multiple things about what you're looking for, in this case, bicycles traveling south towards the camera, you know, you can really eliminate a lot of objects from the scene. And ultimately, what this means is that you get to focus more on what you're looking for within that scene. So where do you think the range of filtering is? Basically, are we getting to the point now where if it's in there, we can find it via, like, a filter through the metadata that's available from the images themselves? So, you know, the way that we're recognizing these objects in the scene is through our deep learning engine, and the deep learning engine is trained to recognize particular types of objects. So we can't type in "I'm looking for a zebra" and have it find a zebra.
If it hasn't been trained to recognize that in the scene. So, you know, what we're trying to do is we're trying to provide the most common descriptors that organizations are getting for what they're looking for in these scenes. So it's things like types of vehicles, you know, men, women, hats, carrying bags, things like this that would commonly be the things that people want to find in the scene. And we're able to train to a very high level of accuracy. So we're into the high 90s in terms of the levels of accuracy that we can reach in recognizing these types of objects. So one thing is really around, you know, the nature of the types of objects that we're able to find, but also the combination. So when you start thinking about, you know, combining, trying to find a man in a red shirt and green pants, think about how much that can reduce the number of objects that are in the scene. And you can take days of footage and really reduce it down to just a minute or two that you have to watch in order to find those objects you're looking for. But it's also very interactive. So as you're adding these filters, you know, it's reducing. And if it doesn't work out, you can try another filter in order to get there a different way. So it makes investigations very fast, and ultimately it frees you up to either do more investigations or to be able to focus on other things. I see that, you know, we're getting into some gender, you know, male versus female. Are we getting good enough now, you think, to do maybe estimated height and weight, things like that? You know, oftentimes people when they're reporting on an incident, they're using those types of things, you know, the bald-headed guy or whatever it may be. Are we in the 70% with that type of stuff? Are we getting to the 90s? Or does it still depend on sort of the camera shot and pixels on target and those sorts of things?
You know, one of the things that we're currently doing is we're analyzing size and we're analyzing speed, and we're calibrating to the objects in the scene in order to reach that. So absolutely, you know, it is to a point in terms of the accuracy where we could start looking at things like height and weight in order to be able to find the objects we're looking for. You know, the way that the training works is that we feed it examples. And so in the case of the men and women, we've fed millions of examples into the engine in order to raise that accuracy. You know, I like to explain it as that the deep learning engine learns very much in the same way that we do. So if I'm trying to teach a child what a tree is, I'll walk the child around and I'll point out trees and I'll say there's a tree, there's a tree, there's a tree. And eventually the child's gonna be able to look at a tree that they've never seen before and recognize that that's a tree based on the former examples that they had. So this works exactly the same way. We're just feeding it millions of examples in order for it to learn what to look for in order to be able to recognize that something is a man or a woman or a pickup truck or an airplane. This is awesome. It's really good. I hope you guys are getting a good dose out there of BriefCam. Jeremy Crennett is with us; he'll be back in about one minute. We're gonna go pay some bills, stick around. Aloha, this is Winston Welch. I am your host of Out and About where every other week, Mondays at three, we explore a variety of topics in our city, state, nation and world, and the events, organizations, and people that fuel them. It's a really interesting show. We welcome you to tune in and we welcome your suggestions for shows. You got a lot of them out there and we have an awesome studio here where we can get your ideas out as well. So I look forward to you tuning in every other week where we've got some great guests and great topics. You're gonna learn a lot.
You're gonna come away inspired like I do. So I'll see you every other week here at three o'clock on Monday afternoon. Aloha. Hi, I'm Rusty Komori, host of Beyond the Lines on Think Tech Hawaii. My show is based on my book, also titled Beyond the Lines, and it's about creating a superior culture of excellence, leadership and finding greatness. I interview guests who are successful in business, sports and life, which is sure to inspire you in finding your greatness. Join me every Monday as we go Beyond the Lines at 11 a.m. Aloha. Hey Aloha and welcome back to Think Tech Hawaii Studios where, with Jeremy Crennett today from BriefCam, we are talking about some of the hottest technology really in the security industry. What used to be a tool for investigation is now becoming a tool for active alerting, a tool for finding what you need in an image and finding it quickly instead of taking hours and hours, and any of you that have ever had to look through surveillance video know what I'm talking about. You get, like, surveillance review fatigue; you start speeding through it and then you miss what you were looking for and you never find it. So Jeremy, welcome back. Before the break we were talking a little bit about the technology that you guys have been building, and my question's always been: I know we can do some of this stuff, like, out on the edge with a camera and a little bit of a chipset in a camera, but when it gets bigger and harder to process we have to maybe pull that back into the server itself, and if it's really a lot, do we go to the cloud? Talk a little bit about how the architecture is working for BriefCam and where you see that going. Yeah, so I think for some applications, relatively simple rules, it makes sense to have it on the camera. So for things like line cross or directional motion, something that's gonna be processing all the time and maybe sending an alarm based on particular conditions you're looking for.
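A simple edge rule like the line-cross detection mentioned here is cheap enough to run on-camera. A minimal, hypothetical sketch (for simplicity the tripwire is treated as an infinite line rather than a bounded segment):

```python
def side(a, b, p):
    """Sign of point p relative to the directed line a->b (2-D cross product)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crosses_line(track, a, b):
    """True if any consecutive pair of track points switches sides of line a->b."""
    for p, q in zip(track, track[1:]):
        if side(a, b, p) * side(a, b, q) < 0:
            return True
    return False

# A virtual tripwire along x = 5; one object crosses it, one never does.
tripwire = ((5, 0), (5, 10))
moving = [(1, 3), (3, 3), (6, 3), (8, 3)]     # walks left to right, crosses
loitering = [(1, 3), (2, 4), (3, 3)]          # stays on one side
```

When `crosses_line` fires, the camera would raise the alarm; anything heavier than this kind of test gets pulled back to the server or cloud, as described next.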
What we do is we centralize some of the processing. Typically we live very close to where the video management system is so we can quickly get access to that video, but it also makes sense in some cases to get this into the cloud, and what we see is that there is a lot of value in the data that we're generating and being able to consolidate that data, get that data into the cloud, and ultimately be able to aggregate it and get a better understanding of what's happening within a variety of locations. So we work in retail spaces, being able to understand performance: for example, knowing what stores are getting more traffic, what stores are starting to dip, and making sure that we can staff them appropriately and focus our attention appropriately. For example, if we're doing a regional promotion in a retail environment, I wanna know if that builds the traffic within that particular region. So being able to pull the aggregate data and compare it to other regions might be something that can tell me how effective my efforts are. Yeah, so now we're talking about the realm of real business intelligence, and this is where surveillance video finally becomes not just security but a tool for business optimization. Let's take a look at some of the dashboards. I forget what the first one was, but let's have a shot of that. So what do we got here? This looks like some retail. Yeah, so this is essentially just a dashboard where the origin was video, but the fact that it came from video is sort of irrelevant. It's all about the information and it's all about making better decisions within a business. So this can tell us how many people moved through different spaces within a retail environment, what our traffic is over time, even what our demographics are in terms of men and women.
So if I wanna know what the most popular area was for men, what the most popular area was for women, at what time of the day, I can understand those types of metrics and make sure that I'm catering to the audience appropriately. And also just understanding over time the number of visitors that are in the environment; we can even import data from outside, from a point of sale system, and do a comparison so that we can see what the difference is between the number of people that have walked into our store versus the number of people that have bought things, so that we make sure we've got the right number of people on the floor and are helping people appropriately. Yeah, I think retail was one of the really early adopters of this technology. Obviously they get a lot of power from knowing who's buying what and where they're aggregating, how long they're standing in front of an object, and then whether, if they stand there for 10 seconds, they're more likely to buy, all that kind of stuff. So I'm not real big in retail, but I do understand the application very well. Let's take a look at the next one. I think this might be another retail application. This is actually traffic analysis. Oh, traffic, okay, wow. We're counting the number of different types of vehicles, and then on the right side, you can see over a 24 hour period of time, what are the spikes and what are the lows in terms of volume of traffic. And this can come from one camera, can be aggregated from a great number of cameras, but ultimately, this is a city application where we wanna get an understanding of not only how many vehicles are moving through a space, but how long are they stopping, how long are they on camera, so that we can better do our city planning to time the lights, and maybe we're prioritizing bike lanes based on the number of bicycles that ride down particular roads. So it allows us to focus and make better decisions based on data.
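The 24-hour traffic dashboard being described boils down to binning vehicle detections by hour and type. A toy sketch with made-up event data (in practice the events would come from the analytics engine's metadata, not a hand-written list):

```python
from collections import Counter

def hourly_volume(events):
    """Bin (hour, vehicle_type) detections into a 24-slot histogram."""
    hist = Counter()
    for hour, vtype in events:
        hist[(hour % 24, vtype)] += 1
    return hist

# Each event: (hour of day, detected vehicle type).
events = [(8, "car"), (8, "car"), (8, "bus"), (17, "car"), (17, "bicycle")]
hist = hourly_volume(events)

# Which hour had the most car traffic? (the 'spikes' on the dashboard)
peak_car_hour = max((h for h, t in hist if t == "car"),
                    key=lambda h: hist[(h, "car")])
```

Summing the bicycle bins across roads would give exactly the kind of count used to prioritize bike lanes, and running the same binning over weeks or months gives the long-term sampling discussed next.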
And very often we see that there's traffic analysis that's done, but it's a very short sort of spot check over the course of maybe a couple of hours in a year. And so with video analytics, we can get a much better sampling over the course of weeks, months, or years to get an understanding of what is the real traffic that's present within the space. Yeah, and I think that's... Sorry, go ahead. I was just gonna say that it's gonna change the way that these cameras are used and how often these cameras are used, and ultimately the value that's coming out of them. Yeah, I think in this, we keep talking about here in smart cities, and the camera for us is gonna become that sensor when you see the type of data that's available. I think we're gonna have to add a category for those little scooters, right? You're seeing the Lime scooters now. And we might have to unfortunately add even a category for homeless detection, like where an object doesn't move, right? So when people set up camps, we get this situation in Honolulu where they'll set up a tent, like, on the sidewalk, and then they're blocking the pathway for other people, be it visitors or folks with ADA disabilities who can't move around. So I see a lot of value in sensor data coming from a camera. A camera becoming data, not just video surveillance. Did you bring this? Sorry. It's something that's been promised for a long time, the camera as a sensor, and I think we are finally to a point where we can do that, but also that it's accurate enough that it's actually useful. Yeah, did you bring more data? What do we got next? There is another video clip here which is an example of, just from way up on the side of a building, just how we can track objects. So at an intersection like this, the level of detail that you can get of how this intersection is used is on a completely different level from what was available previously in terms of understanding the number of vehicles, what types of vehicles.
We know what's a bus and what's a car, and so we can better understand the way that that intersection is utilized. And I do have one more example as well, and that's really around understanding, you know, incidents that are happening within the space. So yeah, we do do alarming based on particular conditions. In this case, it's illegal U-turns. Quite common in this particular intersection, unfortunately, but it does allow us to find particular conditions that are happening and really, you know, where we wanna apply our attention in terms of, you know, having people present there to keep the intersection safe. Yeah, and we have a real problem in Honolulu with pedestrian incidents, pedestrian fatalities. I think basically, I don't mean to be mean to tourists, but I think the tourists are walking around, looking around, looking at their phone, and they just step out into the street, and the driver's maybe not expecting that behavior or not seeing them, and actually, you know, they get hit. I would love it when we get to a point where we can warn one or both, you know, or stop the car before it hits the person. You know, you can see the eventuality of a smart city actually saving a life because we've got the ability to detect these things perhaps coming together, like collision warnings for aircraft or something. You know, I don't know where we're gonna go with it, but I can see some value in the future for the smart city for pedestrian protection. It's amazing what BriefCam's doing. We've got, I think, a few minutes left. What do you see? What's going on up there? You're sitting in sort of the inner circle. You don't have to share all the trade secrets, but give us a taste of what you think we'll be talking about in the fall and maybe next spring. You know, I think we're gonna see a lot of expansion in terms of additional refinements.
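The illegal U-turn alerting mentioned above could, in principle, be driven by a simple geometric check on a tracked vehicle's heading. This sketch is an illustration of the idea, not BriefCam's implementation; the threshold and the track format are assumptions:

```python
import math

def is_u_turn(track, threshold_deg=150.0):
    """Flag a track whose overall heading reverses by more than the threshold.
    `track` is a list of (x, y) positions in time order."""
    def heading(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    if len(track) < 4:
        return False
    start = heading(track[0], track[1])
    end = heading(track[-2], track[-1])
    # Smallest absolute angle between the two headings, in [0, 180].
    turn = abs((end - start + 180) % 360 - 180)
    return turn >= threshold_deg

# Vehicle drives east, loops, and comes back west vs. one that goes straight.
u_turn = [(0, 0), (10, 0), (20, 0), (20, 5), (10, 5), (0, 5)]
straight = [(0, 0), (10, 0), (20, 0), (30, 0)]
```

A track that trips `is_u_turn` inside a zone where U-turns are prohibited would then raise the alarm, the same shape of rule as the line-cross example earlier.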
Facial recognition is something else that we offer that's really taking off; there's a ton of interest in facial recognition. You know, not only for finding that individual that you wanna find, but also for understanding behavior and continuing to track, you know, the path of an individual through a space. So retailers wanna understand, you know, a sampling of individuals and where they're going throughout a store, and only count them once; you know, facial recognition can help with those types of applications. You know, there's a lot of buzz around facial recognition, and I think it's a really interesting conversation and one that really has to happen. You know, I think that there's both pro and con to a lot of technologies, and often it comes down to the application of those technologies and the policies that are set up within organizations. So, you know, those policies are really important to making sure that the public understands the way that the technology is utilized and the benefit that's coming from it. We all wanna have a safe and secure environment, but there's a lot of concern about privacy as well, which is understandable. Does BriefCam participate with, like, SIA? I know they talk to government on behalf of our industry. Are you guys a part of that conversation about privacy that government's got going on now? I know there's been some laws in California regarding facial recognition, and obviously we had GDPR. The US is kind of getting on board late with the privacy discussion. Is that a thing that you're taking on at BriefCam? We do take part in the trade shows and such with the different organizations, and, you know, what I've often found is that recently it's a reaction to the technology rather than a conversation. So I think a lot of the local laws are being driven by concerns about what the technology might do versus the way that the technology is actually being utilized.
I think there's still a lot of defensiveness that's there, and a lot of times the conversations aren't happening. You know, I know that you've spent a lot of time in command centers, as I have. And one thing I've consistently seen is that, you know, police don't have time to go and do something on a whim. They're so busy with the cases that are coming in the door that they're not just saying, on a whim, let's look to see if anyone did anything wrong. You know, they have particular things that they're looking for. And so I think it is important to have a conversation around it and understand the way that it's being utilized and the benefit that it's bringing. Yeah, I couldn't agree more, Jeremy. We've got to have this sort of open dialogue around trust, and we've got to have technology as a part of that discussion for sure. I really do appreciate you joining us today. Jeremy Crennett, an absolute, you know, great thought leader in this space, with great technology that you're driving. We appreciate your time today. I hope you all in the audience got something out of this today. Check us out on Security Matters. Aloha.