Q: So there's an app called Alice Who. And who are you?
A: I'm one of the co-founders of Fringefy. Alice Who is our demo app. You can use it in San Francisco, in Tel Aviv, and in some of the other places where we've demoed in the past. It can recognize places just by pointing at them, in real time.
Q: Is this a demo showing what's going on?
A: Yeah, this is a screen capture of Alice Who. Once it recognizes a place, if you tap the bar you get all kinds of content about it: Foursquare, TripAdvisor.
Q: This is a camera?
A: Exactly.
Q: And what happens?
A: When you start the app, it asks you to aim up and look straight ahead, and once you've pointed at a place it knows, it recognizes it and you can see the Foursquare page, Google, Facebook, TripAdvisor, Yelp, many other content providers.
Q: Do you hook into a database somewhere, or are you making your own database of all the places?
A: We can do both. We have our own database of the way places look. It's a geo-visual database.
Q: That could be a huge database if you want full coverage.
A: Absolutely, that's what we're working on.
Q: So you're preparing something that could be huge?
A: Absolutely.
Q: But you could also hook into, like, Google or something?
A: If we have a deal with Yelp, for example, for them to implement our technology inside their app, then we can tap into their database of images and just use their images in order to recognize.
Q: So is this computer vision, image recognition? What is this?
A: Computer vision, image recognition, hardcore stuff. We worked on this for almost two years, and only recently did it mature enough to work this well, so we started working on the business development. That's why I'm here: to find companies who want to take this technology and implement it in their apps.
Q: So it asks you to aim up because it doesn't recognize the floor?
A: Yeah.
Q: But how about different weather, different lighting, different times of day? Is that okay?
A: Yeah, we're very robust to that, in addition to different points of view and occlusions: people in the shop, outside the shop, cars, bicycles, anything like that.
Q: Really?
A: Yeah.
Q: How is that possible?
A: Because we don't care so much about those details. We don't read the text or match the logos or anything like that. We actually look at the entire building, the entire structure, and we recognize it by the way it looks.
Q: Isn't that extremely hardcore stuff? So how do you get that? What's your background? How do you know how to do this?
A: I've been a machine learning person for more than 10 years. My co-founder, the CTO, is a computer vision guy, and we have a very, very serious computer vision team. We're backed by two big professors who are very experienced; they came up with the original idea for how to solve this challenge.
Q: Where are you from?
A: Israel.
Q: Israel. So you're a startup from Israel. And what are you doing here at TechCrunch?
A: We got an investment from Rothenberg Ventures. We're taking part in River, their program that helps startups, so I relocated here a week ago. I'm going to be here for three months, and Rothenberg Ventures is helping us both with the next round of financing and with business development: meeting companies here in the Bay Area and closing deals with them.
Q: So did they find you, or did you find them?
A: They found us, because our advisor, Ori Inbar, is the head of the augmented reality community. He's in good touch with them and he recommended us. They contacted us, and it moved very quickly.
Q: So why is this not yet in Google Maps?
A: A guy from Google just came by today and talked to me.
Q: Because this is going to be awesome, right? The future is awesome, right?
A: Absolutely.
Q: And this could be in Google Glass, right? You could enter a store, maybe, and recognize stuff in the store, or not?
A: No. We will never recognize objects or faces or mountains and trees. We will only recognize places.
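The robustness claim above, recognizing a whole storefront even when parts of it are covered by people or cars, can be illustrated with a toy sketch. This is an assumed model, not the company's actual production algorithm: a place is represented as a bag of local binary descriptors, and recognition only needs a fraction of them to match, so occluded features simply drop out instead of breaking the match.

```python
# Toy sketch (assumed, not Fringefy's real algorithm) of appearance matching
# that tolerates occlusion: many local binary descriptors per place, and a
# place counts as recognized if enough query descriptors find a close match.

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two binary descriptors.
    return bin(a ^ b).count("1")

def match_fraction(query: list[int], reference: list[int], max_dist: int = 2) -> float:
    # Fraction of query descriptors that find a close match in the reference set.
    if not query:
        return 0.0
    hits = sum(
        1 for q in query
        if any(hamming(q, r) <= max_dist for r in reference)
    )
    return hits / len(query)

# Stored appearance of one storefront (four toy descriptors):
storefront = [0b10110010, 0b01101100, 0b11100001, 0b00011110]
# Camera view: one feature occluded entirely, one perturbed by lighting.
view = [0b10110011, 0b01101100, 0b11100001]
assert match_fraction(view, storefront) >= 0.5  # still a confident match
```

The point of the design: because no single feature (a logo, a sign's text) is required, losing any subset of features to pedestrians or bicycles only lowers the match score gradually.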
A: We can recognize stuff indoors, but right now we're focusing on the outdoors.
Q: Why is it a different thing to do indoors?
A: The algorithm is very, very optimized for the outdoors and for storefronts. That's how we're able to get such performance. This is something no one was ever able to do.
Q: You go to the cloud, right? You cannot recognize stuff just on the phone. So it knows where you are and finds all the places nearby, or what?
A: Yeah. We know how the places around you look, we send all of those places to the client, and then the actual matching happens on the client. That's why it's so fast.
Q: So, for example, on this corner right here, there are maybe 12 or 15 or 20 places max that could match? You already preload those?
A: We preload much more than that. We can preload the entire city of San Francisco, but the client looks around in a smaller radius. It takes like 50 places and then matches the way they look against what it gets from the camera.
Q: Nice. So let's hope it happens very soon that everybody will have this, right?
A: Absolutely.
Q: Is this your vision too?
A: Absolutely, yes.
Q: How soon is this ready? How soon is this awesome?
A: It's pretty much ready. We just need big companies to take the technology and use it. We don't want to take care of the content; we want to use existing content. We believe the content is good. The problem is only the search: local search is very annoying today, and just pointing like that is much more intuitive and fast.
Q: Yeah, but you still need a little bit of muscle to hold the phone up.
A: Sometimes, once in a while. I agree.
Q: It's not for Uber. It's maybe for Yelp, maybe for these kinds of...
A: It's for both.
Q: Like vegetarians who want to find a vegetarian restaurant?
A: Yeah, it's for all of these and many others as well. Yeah, I've started talking to all of these.
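The preload-then-match flow described in the interview (preload a whole city's worth of place data, then have the client consider only the ~50 places within a small radius before matching against the camera) can be sketched as below. All names, radii, and data here are illustrative assumptions, not the actual client implementation.

```python
import math

# Sketch (assumed) of the client-side candidate selection: the entire city is
# preloaded once, and per camera frame the client filters down to the handful
# of places within a small radius so matching stays fast and fully on-device.

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS coordinates.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(places, lat, lon, radius_m=150, limit=50):
    # From the preloaded city set, keep only places within radius, capped at
    # roughly the "like 50 places" mentioned in the interview.
    close = [p for p in places if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
    return close[:limit]

city = [  # preloaded once per city (toy data)
    {"name": "corner cafe", "lat": 37.7793, "lon": -122.4193},
    {"name": "bookstore",   "lat": 37.7795, "lon": -122.4190},
    {"name": "far museum",  "lat": 37.8001, "lon": -122.4500},
]
candidates = nearby(city, 37.7794, -122.4192)
assert [p["name"] for p in candidates] == ["corner cafe", "bookstore"]
```

Only the small candidate set then needs visual matching against the camera frame, which is what makes a per-frame server round-trip unnecessary.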