So as folks make their way over from Javits, I'm going to give you the least interesting part of the evening, and that's my segment, in which I welcome you here, introduce myself, and lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know, Wikibon is a part of SiliconANGLE, which also includes theCUBE. So if you look around, this is what we have been doing for the past couple of days here in theCUBE. We've been inviting some significant thought leaders from over on the show floor into incredibly expensive limousines, driving them up the street to come onto theCUBE, spend time with us, and talk about some of the things happening in the industry today that are especially important. We tore it down and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well, Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places, incorporate it into our research, and work very closely with large end users and large technology companies on how to make better decisions in this incredibly complex, incredibly important, transformative world of digital business. What we're going to talk about tonight, with a couple of my analysts assembled and a panel to follow, is this notion that software is eating the edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and original developer of Netscape many years ago, talk about how software is eating the world. Well, if software is truly going to eat the world, it's going to take its big bites at the edge. That's where the actual action is going to be. And what we want to talk about specifically is the entangling of the internet of things, or the industrial internet of things (IoT), with analytics.
So that's what we're going to talk about over the course of the next couple of hours. To do that, I've already blown the schedule, that's on me. But to do that, I'm going to spend a couple of minutes talking about what we regard as the essential digital business capabilities, which include analytics and big data and include IoT, and we'll explain, at least from our position, why those two things come together the way that they do. Then I'm going to ask the august and revered Neil Raden, a Wikibon analyst, to come on up and talk about harvesting value at the edge. Well, not now, Neil, when I'm done. Then I'm going to ask Neil to come on up and he's going to talk about harvesting value at the edge. And then Jim Kobielus, another Wikibon analyst, will follow up with him, and he'll talk specifically about how we're going to take that combination of analytics and edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come back and invite some other folks up, and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here, before we break for drinks, is to create a community feeling within the room, smart people up here and smart people in the audience having a conversation about some of these significant changes. So please participate, and we look forward to talking with all of you. All right, so let's keep going. What is digital business? One of the nice things about being an analyst is that you can reach back to people who are significantly smarter than you and build your points of view on the shoulders of those giants, including Peter Drucker. Many years ago, Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else.
It is about creating and keeping a customer. Now you can argue with that, but at the end of the day, if you don't have customers, you don't have a business. What we've added to that is the observation that the difference between business and digital business is essentially one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. Think about the difference between Uber and the taxicab companies here in New York City: every cab driver I've ridden with in the last three days has complained to me about Uber. The difference between Uber and a taxicab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is a business using data as an asset? Is a business driving its engagement with customers, the role of its products, et cetera, using data? If so, it is becoming a more digital business. Now when we think about that, what we're really talking about is how they are going to put data to work. How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement, improved customer experience, more agile operations, or increased automation? Those are the kinds of outcomes that we're talking about, but it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital, or to be digital, has to put in place. And I want to be clear: when I say strategic capabilities, I mean something specific.
When you talk about, for example, technology architecture or information architecture, there's this notion of what capabilities your business needs. Your business needs capabilities to pursue and achieve its mission. And in a digital business, these are the capabilities that are now additive to this core question, ultimately, of whether or not a company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition, in a way that is ultimately less intrusive on your markets and on your customers. That's, in many respects, one of the first priorities of the Internet of Things and People: the idea of using sensors and related technologies to capture more data. Two, once you capture that data, you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that, essentially, is what we regard as the basis of big data, including operations, including financial performance, and everything else, but ultimately it's taking this data that's being captured and turning it into value within the business. Three, once you have generated a model or an insight or some other resource you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency because we think it's an interesting concept, and it's something that Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is that increasingly machines, or combinations of machines and people, are acting on behalf of a brand, and this whole notion of agency is the idea that ultimately these systems are now acting as a business's agent.
They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example, I was talking to a senior decision maker at a business today, and they made a quick observation. On their way here to New York City, they had followed a woman who, going through security, opened up her suitcase and took out a bird, and then went through security with the bird. The reason I bring this up is that as the TSA was trying to figure out how exactly to deal with this, the bird started talking, repeating things that the woman had said, and many of those things in fact might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves: as we ask machines, digital instrumentation and elements, to do more on our behalf, it's going to have blowback and an impact on our brand if we don't do it well. I want to draw that forward a little bit because it suggests that there's going to be a new life cycle for data. And the way that we think about it is we have the internet, or the edge, which is comprised of things and, crucially, people, using sensors, whether they be small ARM processors in control towers or phones that are tracking where we go. And the crucial element here is something that we call information transducers. Now a transducer in the traditional sense is something that converts energy from one form to another so that it can perform new types of work. By information transducer I essentially mean something that takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times without engendering significant net new cost. It's one of the few assets about which you can say that.
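To make the information-transducer idea concrete, here's a toy sketch in Python (the field names, values and threshold are all invented for illustration): a single raw sensor reading is transduced into two different forms so the same data can perform two different types of work, an operations alert and an analytics record, without any net new capture cost.

```python
# Hypothetical raw reading from one wind turbine's vibration sensor.
raw = {"turbine_id": 7, "vibration_mm_s": 4.8, "ts": "2017-09-27T19:05:00"}

def to_maintenance_signal(reading, limit=4.5):
    """Transduce the reading into one form of work: an operations alert."""
    return {"turbine": reading["turbine_id"],
            "alert": reading["vibration_mm_s"] > limit}

def to_analytics_row(reading):
    """Transduce the same reading into another form: a row for offline analytics."""
    return (reading["ts"], reading["turbine_id"], reading["vibration_mm_s"])

print(to_maintenance_signal(raw))
print(to_analytics_row(raw))
```

The point is that the reading is reused, not re-captured: each transducer changes the form of the information so a different downstream process can act on it.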
So this concept of an information transducer is really important because it's the basis for a lot of the transformations of data as data flies through organizations. So we end up with these transducers transforming data through analytics, machine learning, business operations and other processes, and then it gets transduced back into the real world as we program the real world, turning it into these systems of agency. So that's the new life cycle. And increasingly that's how we have to think about data flows: capturing data, turning it into value, and having it act on our behalf in front of markets. That's going to have enormous implications for how money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers, and that includes doing studies on public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investment in software, and also services to get the word out. But we also expect that a significant amount of hardware will ultimately be sold within this space. And that's because of something that we call true private cloud. This concept of a business increasingly being designed and architected around the idea of data assets means that the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection, as well as things like the regulatory regimes being put in place to govern how data gets used between locations, all of those factors are going to drive increased utilization of what we call true private cloud: on-premises technologies that provide a cloud experience but operate where the data naturally needs to be processed.
I'll come back to that a little bit more in a second. So we think that it's going to be a relatively balanced market. A lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the edge, because that's where the action is going to be. Now one of the things I also want to share with you is that we've done a fair amount of research around this question of where, or how, data will guide decisions about infrastructure. And in particular, the edge is driving these conversations. So here's a piece of research that one of our colleagues at Wikibon, David Floyer, did, taking a look at IoT edge cost comparisons over a three-year period. It showed on the left-hand side an example where the sensor towers and other types of devices were streaming data back into a central location, in a stylized wind farm example. Very, very expensive. Significant amounts of money and resources end up being consumed by the cost of moving the data from one place to another. And this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the edge and processed it at the edge. And literally, it is an 85-plus-percent cost reduction to keep more of the data at the edge. Now that has enormous implications for how we think about big data, how we think about next-generation application architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next few years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management have to be put in place.
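The shape of that comparison can be sketched with a toy cost model. To be clear, every number below is invented for illustration, not David Floyer's actual figures: the point is only that when transfer costs dominate, shipping a small fraction of aggregates from the edge instead of the full raw stream produces savings of roughly this magnitude.

```python
# Toy three-year cost model for streaming sensor data (hypothetical numbers).
def three_year_cost(gb_per_day, transfer_cost_per_gb,
                    edge_compute_cost_per_year=0.0, fraction_sent=1.0, years=3):
    """Total cost = transfer cost for the fraction shipped upstream
    plus any fixed yearly cost of edge compute gear."""
    transfer = gb_per_day * 365 * years * fraction_sent * transfer_cost_per_gb
    return transfer + edge_compute_cost_per_year * years

GB_PER_DAY = 500        # raw sensor output of the wind farm (assumed)
COST_PER_GB = 0.09      # network transfer cost (assumed)

# Scenario 1: stream everything back to a central location.
all_to_cloud = three_year_cost(GB_PER_DAY, COST_PER_GB)

# Scenario 2: process locally, ship only 5% (aggregates, anomalies),
# and pay for edge compute hardware instead.
mostly_at_edge = three_year_cost(GB_PER_DAY, COST_PER_GB,
                                 edge_compute_cost_per_year=1500,
                                 fraction_sent=0.05)

savings = 1 - mostly_at_edge / all_to_cloud
print(f"all-to-cloud: ${all_to_cloud:,.0f}, edge-heavy: ${mostly_at_edge:,.0f}, "
      f"savings: {savings:.0%}")
```

With these assumed inputs the edge-heavy scenario lands in the mid-80s percent savings range, which is the qualitative result the wind farm study describes.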
Ultimately, we think it's going to lead to a structure, an architecture in the infrastructure as well as in applications, that is informed more by moving the cloud to the data than by moving the data to the cloud. That's our fundamental proposition. The norm in the industry has been to think about moving all data up to the cloud, because who wants to do IT? It's so much cheaper; look at what Amazon or AWS can do. All true statements, very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves: do we have to move our business to the cloud, or can we move our cloud to the business? And increasingly, what we see happening as we talk to our large customers about this is that the cloud is being extended out to the edge. We're moving the cloud and cloud services out to the business for economic reasons, intellectual property control reasons, regulatory reasons, security reasons, and any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize: we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple, and that require significant thought, a lot of planning, and a lot of change within IT and business organizations: how we capture data, how we turn it into value, and how we translate that into real-world action through software. That's going to lead to a rethinking, ultimately based on costs and other factors, of how we deploy infrastructure and how we use the cloud, so that the data guides the activity, rather than the choice of cloud supplier determining or limiting what we can do with our data.
And that's going to lead to this notion of true private cloud and elevate the role that the edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is bring up Neil Raden. Yes, now's the time, Neil. So let me invite Neil up to spend some time talking about harvesting value at the edge. Thanks, Neil. All right, got it. Oh boy. Hi, everybody. Yeah, this is a really big and complicated topic, so I decided to just concentrate on something fairly simple. But I know that Peter mentioned customers, and he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I had started a magazine called Hired Brains. It was for consultants. And Peter said a number of really interesting things to me, but one of them was his definition of a customer: someone who wrote you a check that didn't bounce. He was kind of a wag. He was. So anyway, he had to leave to do a video conference with Jack Welch. And so I said to him, how much do you charge Jack Welch for an hour on a video conference? And he said, you know, I have this theory that you should always charge your client enough that it hurts a little bit, or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzy Welch, recently. And I told her that story and she said, oh, he's full of it. Jack never paid a dime for those conferences. So anyway, let's talk about this. To me, the engineering aspects, hardware and networks and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in edge analytics is what you're going to get out of it, what the result is going to be: making sense of this data that's coming.
And while we're on data, here's something I've been thinking about a lot lately, because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. Any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words, it's a proxy. So it's not exactly perfect. And that's why we've always had these definitional problems: what's a customer, what's the definition of this, that and the other thing. And with sensor data, I really have the feeling that when organizations get instrumented and start dealing with this kind of data, what they're going to find is that this is the first time the data is truly their own. I've been involved in analytics, and I don't want to date myself because I know I look young, but I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours. We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? It means we have to be real careful with this data. It's ours. We have to take care of it. We don't get to reload it from the source some other day. If we munge it up, it's gone forever. So that has very serious implications, but let me roll you back a little bit.
The way I look at analytics is that it's come in three different eras, and we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT; it was system-of-record kind of reporting. And as far as I can recall, it probably started around 1988, or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. Things happened before 1988 that were sort of like BI, but '88 was really when these tools started coming out. That's when you saw Business Objects and Cognos and MicroStrategy and those guys. The second generation just snuck up on everybody. We were all looking around at BI and saying, why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along come companies like Tableau doing data discovery, visualization, data prep, and line-of-business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the big data thing. Now we're in the third generation, so we not only have big data, which is coming in like a tsunami, but we're looking at smart discovery, we're looking at machine learning, we're looking at AI-infused analytics workflows, and then all the natural language cousins: natural language processing, natural language query, natural language generation. Anybody here know what natural language generation is? Yeah, so what you see now is you do some sort of analysis and the tool comes up and says, this chart is about the following and it used the following data, and blah, blah, blah, right? I think it's kind of wordy and it's going to get refined some, but it's an interesting thing to do. Now, the problem I see with edge analytics and IoT in general is that most of the canonical examples we talk about are pretty thin.
I always talk about autonomous cars; I hope to God we never have them, because I'm a car guy. Fleet management? Qualcomm started fleet management in 1988. That is not a new application. Industrial controls? I seem to remember Honeywell doing industrial controls at least in the '70s, and before that I wasn't, well, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative, because the real value in edge analytics, or IoT, whatever you want to call it, is going to be figuring out something that's new or different: creating a brand new business, changing the way an operation happens in a company, right? And I think there are a lot of smart people out there, and I think there are a million apps that we haven't even talked about. So if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or fleet management, because I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with big data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said, as far as I can tell, they don't have a narrative. They have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product, they don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately, some of them are going to be really stinky. You know, they're going to be really bad. You're going to lose more of your privacy. It's going to be harder to get, I don't know, a mortgage, for example. I don't know, maybe it'll be easier.
But in any case, it's not all going to be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project, because number one, it's very expensive, and number two, it's a waste of the technology. You should be looking at, you know, the old numerator-denominator thing: look at the numerators and forget about the denominators, because cost-cutting is not what you do with IoT. And the other thing is, you don't want to get overconfident. Actually, this is good advice about anything, right? But in this case, I love this quote by Derek Sivers, he's a pretty funny guy. He said, if more information was the answer, then we'd all be billionaires with perfect abs. Now, I'm not sure what's on his wish list, but those aren't necessarily the two things I would think of. Go ahead. Now, what I said about the data, I want to explain some more. Big data analytics, if you look at this graphic, it depicts it perfectly: a bunch of different stuff falling into the funnel. All right? It comes from other places. It's not original material. And when it comes in, it's always used or secondhand data. Now, what does that mean? It means that you have to figure out the semantics of this information and find a way to put it together in a way that's useful to you, okay? That's big data. That's where we are. How is IoT data different? Like I said, IoT data is original. You can put it together any way you want, because no one else has ever done that before, right? It's yours to construct, okay? You don't even have to transform it into a schema, because you're creating the new application. But the most important thing is you have to take care of it, because if you lose it, it's gone. It's the original data.
In operational systems, for a long, long time, we've been concerned about backup and security and everything else, and you'd better believe this is a problem here too. I know a lot of people think about streaming data as something we're gonna look at for a minute and then throw most of it away. Personally, I don't think that's gonna happen. I think it's all gonna be saved, at least for a while. Now, governance and security. By the way, I don't know where else you're gonna find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today, they're still thinking along the same lines we've thought about them all along. But this is very, very different. And again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now, when I say governance: my experience over the years has been that governance is something IT does to make everybody's lives miserable. But that's not what I mean by governance today. I mean a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now, the other thing is, you may not get to think about it differently, because some of this stuff may end up being subject to regulation. And if the regulators start regulating some of this, that will take some of the degrees of freedom away from you in how you put this together, but that's the way it works. Now, machine learning. I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around.
And I think all of the open source machine learning and artificial intelligence that's popped up is great, because all those math PhDs who work at Home Depot now have something to do when they go home at night, and they construct this stuff. But if you're going to have machine learning at the edge, here's the question: what kind of machine learning would you have at the edge, as opposed to developing your models back at, say, the cloud, when you transmit the data there? The devices at the edge are not very powerful and they don't have a lot of memory, so you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay, because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data, creating models and testing them offline. And when you have something that works, you can put it there. Now, one thing I want to talk about before I'm finished, and I think I'm almost finished. I wrote a book about 10 years ago about automated decision making, and the conclusion I came to was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life, or the environment. So when you think about the applications that you can build using this architecture and this technology, keep in mind that you're not going to be doing air traffic control. You're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem merely mundane. Managing machinery on a factory floor, that may sound great, but it really isn't that interesting. Managing wellheads, drilling for oil: it's great to the extent that it keeps wells from exploding, but they don't usually explode.
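Neil's split, develop and test models offline with gobs of data, then push only the frozen result to an underpowered device, can be sketched in a few lines. This is a hypothetical illustration: a simple statistical threshold stands in for a real trained model, but the division of labor is the point.

```python
import statistics

# --- Offline, in the cloud: fit a "model" on gobs of historical data ---
def fit_anomaly_threshold(history, k=3.0):
    """Learn frozen parameters (mean and a k-sigma band) offline,
    where compute and memory are cheap."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return {"mu": mu, "limit": k * sigma}

# --- On the edge device: cheap scoring only, no training ---
def is_anomalous(reading, params):
    """A constant-time check an underpowered edge device can afford."""
    return abs(reading - params["mu"]) > params["limit"]

# Made-up historical oil-temperature readings for the offline fit.
history = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2, 70.3, 69.7]
params = fit_anomaly_threshold(history)

print(is_anomalous(70.1, params))   # a normal reading
print(is_anomalous(95.0, params))   # far outside the learned band
```

Only the small `params` dictionary ever travels to the device; the slow, painful model development stays offline where it belongs.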
What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in edge analytics, the next time I talk to you, I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So in conclusion, I don't really have anything in conclusion, except that Peter mentioned something about limousines bringing people up here. On Monday, I was slogging up and down Park Avenue and Madison Avenue with my client, and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather, I looked at him and I said, for God's sake, Paul, where's the black car? And he said, that was the '90s. Thank you. So, up to you. Okay. This is terrible. Go that way. We don't want to trip coming up. And let's move to, there we go. Hi everybody. How are you doing? Thanks, Neil. Thanks, Peter. Those were great, great discussions. So I'm the third leg in this relay race here, talking about, of course, how software is eating the world, and focusing on the value of edge analytics in a lot of real-world scenarios, programming the real world, you know, to make the world a better place. So I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI is being embedded, really, into all material reality, potentially, at the edge: mobile applications, industrial IoT, smart appliances, self-driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the edge, to hardware, to our phones and so forth, to drive various scenarios in terms of real-world results. And so I'll move apace here. So basically, AI software, AI microservices, are being infused into edge hardware as we speak.
What we see is that more vendors of smartphones and other real-world appliances, and things like self-driving vehicles, are instrumenting their products with computer vision, natural language processing, environmental awareness based on sensing and actuation, and the inferences that these devices can make, both to provide support for the human users of these devices and to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data-driven systems of agency of the sort that Peter is talking about: infusing data-driven intelligence into everything, or potentially so. As all these algorithms, for things like real-time predictions and classifications, anomaly detection and so forth, get diffused widely and become more commoditized, you'll see them burned into an ever-wider variety of hardware architectures: neurosynaptic chips, GPUs and so forth. So what I've got here in front of you is a high-level reference architecture that we're building out in our research at Wikibon. AI, artificial intelligence, is a big term, a big paradigm, and I'm not gonna unpack it completely. Of course, we don't have oodles of time, so I'm gonna take you fairly quickly through the high points. It's a driver for systems of agency, programming the real world, transducing digital inputs, the data, into analog real-world results, through the embedding of this capability in the IoT, but pushing more and more of it out to the edge, to the points of decision and action in real time. And the four AI-enabled capabilities that we're seeing, capabilities that are absolutely critical to software being pushed to the edge, are sensing, actuation, inference and learning.
Sensing and actuation, like Peter was describing, it's about capturing data from the environment within which a device or a user is operating or moving. And then actuation is the fancy term for doing stuff. In industrial IoT, it's obviously machine controls, but clearly for self-driving vehicles, it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes, as it were, of AI. Analytics does inferences. It infers from the data the logic of the application, predictive logic, correlations, classification, abstractions, differentiation and anomaly detection, recognizing faces and voices. What we see now with Apple, in the latest version of the iPhone, is the embedding of face recognition as the core multi-factor authentication technique. Clearly that's a harbinger of what's gonna be universal fairly soon, and that depends on AI. That depends on convolutional neural networks. That is some heavy hitting processing power that's necessary, and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is that AI software is taking root in hardware to power continuous agency, getting stuff done, to power decision support by human beings who have to take varying degrees of action in various environments. We don't necessarily wanna let the car steer itself in all scenarios; we want some degree of override. For lots of good reasons, we wanna protect life and limb, including our own. And just more data-driven automation across the internet of things in the broadest sense. So unpacking this reference framework. What's happening is that AI-driven intelligence is powering real-time decisioning at the edge. Real-time local sensing from the data that it's capturing there, ingesting the data. Some, not all, of that data may be persisted at the edge. Some, perhaps most of it, will be pushed into the cloud for other processing.
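To make the idea of narrowly scoped, on-device inference concrete, here is a minimal sketch of the kind of anomaly check a device could run locally against its own sensor stream. The class name, window size, and z-score threshold are all illustrative assumptions, not anything described in the talk:

```python
from collections import deque
import math

class EdgeAnomalyDetector:
    """Sliding-window z-score check -- a toy example of a narrowly
    scoped inference that can run on the edge device itself."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent local readings only
        self.threshold = threshold

    def observe(self, reading):
        """Return True if the new sensor reading looks anomalous."""
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard against a flat signal
            is_anomaly = abs(reading - mean) / std > self.threshold
        else:
            is_anomaly = False
        self.window.append(reading)
        return is_anomaly
```

Notice the edge-friendly properties: fixed memory (the `deque` caps history), no round trip to the cloud, and only the anomalies need to be shipped upstream.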
When you have these highly complex algorithms that are doing AI, deep learning, multi-layer, to do a variety of things like anti-fraud or higher-level auto-narrative roll-ups from various scenes that are unfolding, a lot of this processing's gonna happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real-time decision support at the point of action will be done on the device itself. Contextual actuation: so it's the sensor data that's captured by the device, along with other data that may be coming down in real-time streams through the cloud, that will provide the broader contextual envelope of data needed to drive actuation, to drive the various models and rules and so forth that are making stuff happen at the point of action at the edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neurosynaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the edge. And not just individual devices; swarms of devices, like drones and so forth, are essentially an edge unto themselves. You'll see these tiered hierarchies of edge swarms that are playing and doing inferences of an ever more complex, dynamic nature. And much of this capability, the fundamental capabilities that are powering them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here. Training is at the core of it. Training means everything in terms of the predictive fitness of your AI services for whatever task, predictions, classifications, face recognition, that you've built them for. But I use the term learning in a broader sense.
What makes your inferences get better and better and more accurate over time is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, say, robotics and you don't have ground truth against which to train the data set. You know, there's maximize a reward function versus minimize a loss function, the latter being the standard approach for supervised learning. There's also, of course, the approach of unsupervised learning with cluster analysis, critically important in a lot of real world scenarios. So edge AI algorithms. You know, clearly deep learning, which is multi-layered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level of abstraction. Faces in a social environment is an even higher level of abstraction, in terms of groups. Faces over time and bodies and gestures doing various things in various environments is an even higher level of abstraction, in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of greater sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what are called generative applications of all sorts, you know, composing music, and a lot of it's being used for auto-programming. These are all deep learning. And there's a variety of other algorithmic approaches I'm not gonna bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone has a camera. It has, you know, a microphone. It has, of course, geolocation and navigation capabilities. It's environmentally aware. It's got an accelerometer and so forth embedded therein.
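The "minimize a loss function" framing of supervised learning can be sketched with a toy example. The function below is a hypothetical illustration, not anything from the talk: it fits a one-variable linear model by gradient descent on mean squared error, the same loop that, at vastly larger scale, trains the deep networks described above.

```python
def train_linear(data, lr=0.05, epochs=200):
    """Toy supervised learning: fit y = w*x + b by gradient descent,
    minimizing mean squared error over (x, y) training pairs."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y       # prediction error on this example
            grad_w += 2 * err * x / n   # d(MSE)/dw
            grad_b += 2 * err / n       # d(MSE)/db
        w -= lr * grad_w                # step downhill on the loss surface
        b -= lr * grad_b
    return w, b
```

Reinforcement learning flips the sign of the objective, as noted above: instead of stepping downhill on a loss, the agent steps uphill on an expected reward, which matters when there is no labeled ground truth to compute a loss against.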
The reason that your phone and all other devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning, that enables them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all this, I mean, of deep learning, and artificial neural networks are the foundation of that. But there are other approaches for machine learning I wanna make you aware of, because support vector machines and these other established approaches for machine learning are not going away. But really what's driving the show now is deep learning, because it's scary effective. And that's where most of the investment in AI is going these days, into deep learning. AI edge platforms, tools and frameworks are just coming along like gangbusters. Most development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. It's the data that you use to build and test and validate your models. So we're seeing a deepening stack of Hadoop, and there's Kafka and Spark and so forth, that are driving the training of AI models that are powering all these edge analytic applications, so that lake will continue to broaden in terms of its scope and the range of data sets and the range of AI modeling that it supports. Data science is critically important in this scenario because the data scientists, the data science teams, the tools and techniques and flows of data science, are the fundamental development paradigm or discipline or capability that's being leveraged to build, to train, to deploy and to iterate all this AI that's being pushed to the edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the edge. AI frameworks are coming along like a mile a minute.
TensorFlow, and most of these are open source, has achieved almost de facto standard status, and I'm using the word de facto in air quotes. Alongside Theano and Keras and MXNet and CNTK and a variety of other ones, we're seeing a broader range of AI frameworks come to market, most open source, and most supported by most of the major tool vendors as well. So at Wikibon, we're definitely tracking that and we plan to go deeper in our coverage of that space. And then next best action, which powers recommendation engines. I mean, next best action decision automation of the sort that Neil's covered in a variety of contexts in his career is fundamentally important to edge analytics, to systems of agency, because it's driving the process automation, the decision automation, sort of the targeted recommendations that are made at the edge to individual users, as well as the process automation that's absolutely necessary for self-driving vehicles to do their jobs and for industrial IoT. So we're seeing more and more recommendation engine or recommender capabilities, powered by ML and DL, going to the edge, already at the edge, for a variety of applications. Edge AI capabilities: like I said, there's sensing, and sensing at the edge is becoming ever more rich, mixed reality edge modalities of all sorts, for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled, or filtered and analyzed through AI, that are being pushed to the edge, into the chipsets. Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives, and it's brainless without AI, without deep learning and these capabilities. Inference, autonomous edge decisioning. Like I said, it's a growing range of inferences that are being done at the edge, and that's where it has to happen, because that's the point of decision.
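A next-best-action recommender, in its most stripped-down form, just scores candidate actions and suppresses ones the user has already taken. The sketch below is a deliberately naive illustration of that pattern; real recommenders use learned ML/DL models as described above, and the function and action names here are made up.

```python
def next_best_action(user_history, action_scores):
    """Pick the highest-scoring action the user hasn't already taken.
    user_history: set of actions already taken for this user.
    action_scores: dict mapping action name -> model score (higher is better).
    Returns the best remaining action, or None if nothing is left."""
    candidates = {a: s for a, s in action_scores.items() if a not in user_history}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

In a real system the scores in `action_scores` would come from a trained model evaluated at the edge, which is exactly the "targeted recommendations made at the edge" scenario described above.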
Learning, training. Much training, most training, will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do, or can do, at the edge, at edge devices. So the models that are built and trained in the cloud are pushed down through a DevOps process to the edge, and that's the way it will work pretty much in most AI environments, most edge analytics environments: you centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are already coming to the mainstream at the edge. Multi-factor authentication: clearly the Apple face recognition announcement is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chatbots, powered by natural language processing and understanding. It's all AI powered and it's becoming very mainstream. Emotion detection, face recognition, I can go on and on, but these are the core things that everybody has access to, or will by 2020, on the core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI but also the entire package of hardware and UX and the orchestration of the real world business scenarios, or life scenarios, that all this embedded intelligence enables. And much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with.
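The "centralize the modeling, decentralize the execution" pattern can be sketched very simply: the cloud side exports only the trained parameters, and the edge side loads them and runs the lightweight inference step. This toy uses a linear model serialized as JSON; the function names and the wire format are hypothetical, chosen just to show the split, where real deployments would ship a framework-specific model artifact instead.

```python
import json

def export_model(w, b):
    """Cloud side: after training, serialize only the learned
    parameters -- the heavy training data never leaves the lake."""
    return json.dumps({"w": w, "b": b})

def edge_infer(model_blob, x):
    """Edge side: deserialize the pushed-down parameters and run the
    cheap inference step locally, with no cloud round trip."""
    params = json.loads(model_blob)
    return params["w"] * x + params["b"]
```

The point of the split is that `edge_infer` needs only the tiny parameter blob and a fraction of the compute, which is why inference can live on commodity edge chipsets while training stays in the cloud.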
The hardware itself, at the edge: some data will be persisted, needs to be persisted, to drive inference and a variety of different application scenarios that need some degree of historical data related to what the device in question happens to be sensing, or has sensed in the immediate past, or whatever. The hardware itself is geared towards both sensing and increasing persistence and edge-driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do, that's where that comes in. That has to be powered by low cost, low power, commodity chipsets of various sorts. What we see right now in terms of chipsets is GPUs. NVIDIA has gone real far, and GPUs have come along very fast in terms of powering inference engines for the Tesla cars and so forth, but GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for analytic applications. Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs inside their DL strategy. ASICs, and I mean there's neuro-synaptic chips, like IBM's got one; there's at least a few dozen vendors of neuro-synaptic chips on the market. So at Wikibon we're going to track that market as it develops, and what we're seeing is a fair number of scenarios where it's a mixed environment, where you use one chipset architecture on the inference side of the edge and other chipset architectures driving the DL that's processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs, parallelized in memory, but pushing, say, TensorFlow models that might be trained with Spark down to the edge, where the inferences are done in FPGAs and GPUs.
Those kinds of mixed hardware scenarios are very likely to be the standard going forward in lots of areas. So analytics at the edge powering continuous results is what it's all about. The whole point is really not moving the data. It's putting the inference at the edge and working from the data that's already captured and persisted there for the duration of whatever action or decision or result needs to be powered from the edge. Like Neil said, cost take-out alone is not worth doing. Cost take-out alone is not the rationale for putting AI at the edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a DevOps context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed, continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm gonna hand it over now; it's five minutes after the hour. We're gonna go with the influencer panel. So what we'd like to do is I'd like to call Peter, and Peter's gonna call our influencers. All right, am I live again? Can you hear me? All right, so we've got, let me jump back and control here. Again, the objective here is to have a community take on some things. And so what we wanna do is invite five other people up. Neil, why don't you come on up as well? Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. I'm glad I'm on the left side. From the Hurwitz Group. From the Hurwitz Group. Jennifer Shin, who is affiliated with UC Berkeley. Jennifer, are you here? She's here. Jennifer, where are you? She was here a second ago. I saw her walk out. All right, she'll be back in a second. So, here's Jennifer. Here's Jennifer.
With 8 Path Solutions, right? Yep. Yeah, 8 Path Solutions. Let me just get my mic. Take your time, Jennifer. All right. Oh, this is nice. Stephanie McReynolds, far left. And finally, Joe Caserta. Joe, come on up. Stephanie's with Alation. Middle left. So, what I want to do is I want to start by having everybody just go around, introduce yourself quickly. Judith, why don't we start there? I'm Judith Hurwitz. I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics. I've been in the market for a couple years now. Jennifer. Hi, my name is Jennifer Shin. I'm a data science analyst in technology. We're actually about to do a big launch next month with Box, actually. Are we having a... Sorry, Jennifer. Are we having a problem with Jennifer's microphone? You have to turn it back on. Oh, you have to turn it back on. Can you hear me now? Yes, we can hear you now. Okay, I don't know how to turn it back off. Okay. So, you got to redo on that. Okay. So, my name is Jennifer Shin. I'm the founder of 8 Path Solutions, LLC, the data science, analytics and technology company. I founded it about six years ago. So, we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have been developing a lot of patents and some technology, as well as teaching at UC Berkeley as a lecturer in data science. You know, Jim, you know, Neil. Joe, you ready to go? Joe's microphone is broken. Ah, should be all right. Speaking of Neil's. Hello, hello. I just feel not worthy in the presence of Joe Caserta. That's right. Master of mics. All right. If you can hear me, Joe Caserta.
So, I have been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically BI, data warehousing, business intelligence type of work since 1996, and been wholly dedicated to big data solutions and modern data engineering since 2009. Where should I be looking? And that's basically it. So, my company was formed in 2001. It's called Caserta Concepts. We recently rebranded to just Caserta, because what we do is way more than just concepts. We conceptualize the stuff, we envision what the future brings, and we actually build it. And we help clients large and small who just want to be leaders in innovation using data, specifically to advance their business. And finally, Stephanie McReynolds. I'm Stephanie McReynolds, and I head product marketing as well as corporate marketing for a company called Alation. And we are a data catalog. So we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision-making. And some of our customers, like the City of San Diego, a large automotive manufacturer working on self-driving cars, and General Electric, use Alation to help power their solutions for IoT at the edge. All right, so let's jump right into it. And again, if you have a question, raise your hand and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between big data, AI, and IoT? Now Wikibon's put forward its observation that data's being generated at the edge, that action's being taken at the edge, and that increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective.
Anybody, Judith, do you want to start? Yeah, so I think that if you look at AI, machine learning, all of these different areas, you have to be able to have the data learn. Now when it comes to IoT, I think one of the issues we have to be careful about is that not all data will be at the edge. Not all data needs to be analyzed at the edge. For example, if the light is green and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually really only want to be able to analyze and take action when there's an anomaly. Well, if it goes purple, that's actually a sign that something might explode. So that's where you want to make sure that you have the analytics at the edge, not for everything, but for the things where there is an anomaly and a change. Joe, how about your perspective? For me, I think, with the evolution of data, eventually data is going to be the oxygen we breathe. It used to be very, very reactive and there used to be a lot of latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it, and then you collect it, and then you can analyze it, and it was very, very waterfall-ish. And then eventually we figured out to put it back into the system, or at least have human beings interpret it to try to make the system better. And that has really been completely turned on its head, and we don't do that anymore.
Right now it's very, very, it's synchronous, where as we're actually making these transactions, the machines, we don't really need, I mean, human beings are involved a bit, but less and less and less, and it's just a reality, it may be not politically correct to say, but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing, and it can actually help me do things like get to where I want to go faster, depending on my preference, if I want to save money or save time, or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence, and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes, with the new term of RB. And RB is the integration of augmented reality and business intelligence, where I think, eventually, we're going to see, hold your phone up to Jim's face, and it's going to recognize it, and it's going to say exactly what are the key metrics that we want to know about Jim, if he works on my sales force, what's his attainment of goal, what is, can it read my mind, what deals potentially based on behavior patterns? Okay. So, I'm scared. I don't think Jim's fine. It will, without a doubt, be able to predict, what you've done in the past, you may, with some certain level of confidence, you may do again in the future. And is that mind reading? It's pretty close, right? Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess we could call that the Turing machine test of the paranormal. Well, face recognition, micro gesture recognition, I mean, facial gestures, people can do it. 
Not maybe not, it's better than a coin toss, but if it can be seen visually and captured and analyzed, then seemingly some degree of mind reading can be built in. I can see when somebody's angry looking at me. So that's a possibility, and it's kind of a scary possibility in a surveillance society, potentially. Right, absolutely. Stephanie, what do you think? Well, I hear a world of the bots versus the humans being painted here. And I think that, you know, at Alation we have a very strong perspective on this. And that is that the greatest impact, or the greatest results, is gonna be when humans figure out how to collaborate with the machines. And so, yes, you wanna get to the location more quickly. But the bot isn't able to tell you exactly what to do such that you're just gonna blindly follow it. You need to train that machine. You need to have a partnership with that machine. So, you know, a lot of the power, and I think this goes back to Judith's story, is in what is the human decision-making that can be augmented with data from the machine? But then the humans are actually on the training side, driving machines in the right direction. And I think that's when we get true power out of some of these solutions. So it's not just all about the technology, it's not all about the data or the AI or the IoT. It's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way. And I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. So I think we have general agreement on some of the things you talked about: IoT crucial to capturing information and then having action being taken, AI crucial to defining and refining the nature of the actions that are being taken, and big data ultimately powering how a lot of that changes.
Let's go to the next one. So I actually have a few things to add to that. So I think it makes sense with IoT, why we have big data associated with it. If you think about what data is collected by IoT, we're talking about serial information. It's over time; it's going to grow exponentially just by definition. So every minute you collect a piece of information, that means over time it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with big data. And also why you need AI, to be able to differentiate between one minute versus the next minute, right? Trying to find a better way, rather than looking at all of that information and manually picking out patterns, to have some automated process for being able to filter through that much data that's being collected. I want to point out though, based on what you just said Jennifer, and I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, within IT certainly, is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance, and we stylize data, and therefore it's relatively easy to be very precise about it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, now we're still sometimes dealing with errors that we have to attend to. So the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more frequent, but it also has special cases that we have to attend to in terms of how we use it. What do you think Neil? Well, I agree with that. I think I already said that, right? Yes you did. Okay, let's move on to the next one. It's a doppelganger.
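One concrete, if simplistic, way to automatically thin an ever-growing sensor stream, per Jennifer's point about filtering rather than manually inspecting everything, is deadband filtering: keep a reading only when it moves meaningfully from the last value kept. The threshold and function name below are illustrative assumptions, not anything described by the panel.

```python
def deadband_filter(readings, delta=0.5):
    """Keep only readings that differ from the last *kept* value by
    more than `delta`, discarding near-duplicate samples.
    This shrinks an accumulating time series while preserving the
    moments where the signal actually changed."""
    kept = []
    for r in readings:
        if not kept or abs(r - kept[-1]) > delta:
            kept.append(r)
    return kept
```

A steady sensor that repeats the same value every minute then costs almost no storage, while anomalies, the readings Judith argued are the ones worth analyzing, always survive the filter.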
The digital twin doppelganger that's automatically created by the very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now that doppelganger may not be your agent, or may not be the foundation for your agent, unless there's some other piece of logic, like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know, I mean, there needs to be that kind of logic somewhere in this fabric to enable true agency. All right, so I'm going to start with you. Oh, go ahead. I have a real short answer to this though. I think that big data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the facilities, we didn't have the resources to really do AI. We just kind of played around with it. And I think that the other thing about it is, if you combine big data and AI and IoT, what you're going to see is that a lot of the applications we've developed now are very inward looking. We look at our organization. We look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model, and come up with applications that are much more external. Actually, what I would add to that is it also introduces being able to use engineering, right? Having engineers interested in data, because it's actually technical data that's collected, not just, say, preferences or information about people, but actual measurements that are being collected with IoT.
So I think it's really interesting in the engineering space, because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. Well, Neil, you and I have talked about something, because it's not just engineers. We have in the healthcare industry, for example, which you know a fair amount about, this notion of empirical-based management and the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way that managers collaborate and ultimately collectively how they take action. So it's not just engineers. It's supposed to also inform business. What's actually happening in the healthcare world when we start thinking about some of the empirical-based management? Is it working? What are some of the barriers? It's not a function of technology. What happens in medicine and healthcare research, I guess you could say, borders on fraud. No, I'm not kidding. You know, the New England Journal of Medicine a couple of years ago released a study and said that at least half the articles that they published turned out to be ghost-written by pharmaceutical companies, right? So I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the Women's Health Initiative. They spent $700 million gathering this data over 20 years, and when they released it, they looked at all the wrong things, deliberately, right? So I think that's a systemic... I think you're bringing up a really important point that we haven't brought up yet, and that is, can you use big data and machine learning to begin to take the biases out?
So if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think, get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and excruciating pain, and went to the hospital. So the doctor examined me and he said, you know, you probably have a pinched nerve. He said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it, and he came back, because this little bit of information was something that can easily be looked up, right? Every nerve in your spine is connected to your different fingers. So the pointer and the thumb just happen to be your C6. So he came back and said, it's your C6, right? I just went to send it. I mean, that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community. So by making big data accessible to everyone, you actually start a more rational conversation or debate on, well, what are the true insights? If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So this is the test: it's not just about improving your understanding of the wrong thing. It's also testing whether it's the right or wrong thing as well. That's right, and to be able to test that, you need to have humans in dialogue with one another, bringing different biases to the table to work through: okay, is there truth in this data? Because it's context and it's correlation. And you can have a great correlation that's garbage, you know, if you don't have the right context.
So I want to, hold on, Jim, I want to take it to the next question, because I want to build off of what you talked about, Stephanie, and that is that this says something about what is the edge. Now our perspective is that the edge is not just devices. When we talk about the edge, we're talking about human beings and the role that human beings are going to play both as sensors, carrying things with them, but also as actuators, actually taking action, which is not a simple thing. So what do you guys think? What does the edge mean to you? Joe, why don't you start? I think it could be a combination of the two, and specifically when we talk about healthcare. So, you know, I believe in 2017, you know, when we eat, we don't know why we're eating. Like, I think we should absolutely by now be able to know exactly, you know, what is my protein level? What is my calcium level? What is my potassium level? And then find the foods to meet that. What have I depleted versus what I should have, and eat very, very purposely and not by taste. And it's amazing that red wine is always the answer. It is. And tequila, that helps too. You're a precision foodie is what you are. But, you know, there's no reason why we should not be able to know that right now, right? And when it comes to healthcare, you know, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't manage what you can't measure. And you're really not allowed to use a lot of this data. So you can't measure it, right? You can't do things very, very scientifically, right, in the healthcare world. And I think regulation in the healthcare world is burdening advancement in science. Any thoughts, gentlemen? Yeah, so I teach statistics for data scientists, right? So, you know, we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find the balance, right? The middle ground.
For instance, in the case of, you know, are you being too biased to the data? Well, you know, you could say, like, well, we want to look at data only objectively. But then there's certain relationships that your data models might show that aren't actually a causal relationship. For instance, if there's an alien that came from space and saw, you know, saw Earth, saw the people, everyone's carrying umbrellas, right? And then it started to rain. That alien might think, well, it's because they're carrying umbrellas that it's raining. Now, we know from the real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations, or saying some things are causal, when they're actually not, right? So that's one of the, I think, big challenges. I think when it comes to looking also at things like healthcare data, right? You collect data about anything and everything. Doesn't mean that, A, we need to collect all that data for the question we're looking at. Or that it's actually the best and most optimal way to be able to get to the answer. Meaning sometimes you could take some shortcuts in terms of what data you collect and still get the right answer, and not have maybe that level of specificity that's going to cost you millions extra to be able to get. So Jennifer, as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for a stylized process like accounting, or some elements of accounting. We have methods and models that lead to technology and actions and whatnot, all the way down, so that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of this big data. We have algorithms, we have technology.
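The umbrella example can be made concrete in a few lines of numpy. In this sketch, where all the numbers are invented purely for illustration, a hidden confounder, the rain forecast, drives both umbrella-carrying and rain, producing a strong correlation even though neither variable causes the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hidden confounder: a rain forecast drives both behaviors.
forecast = rng.random(n) < 0.3                    # True when rain is forecast
umbrellas = forecast & (rng.random(n) < 0.9)      # people carry umbrellas
rain = forecast & (rng.random(n) < 0.8)           # and it actually rains

# High correlation, zero causation in either direction.
r = np.corrcoef(umbrellas.astype(float), rain.astype(float))[0, 1]
print(f"correlation(umbrellas, rain) = {r:.2f}")
```

The alien who only sees the joint data cannot distinguish this from "umbrellas cause rain"; it takes the context (the forecast) to break the tie, which is exactly the point about context and correlation above.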
But are we going to start seeing, as a data scientist, repeatability, and learning how to think the problems through, that's going to lead us to a more likely, best, or at least good result? So I think that's a bit of a tough question, right? Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Researchers are going to do all these papers and come up with all these models, but if they're never tested in a real world scenario, well, I mean, we really can't validate that they work, right? So I think it is dependent on how much of this integration there's going to be between the research community and industry, and how much investment there is, right? Funding is going to matter in this case, right? If there's no funding on the research side, then you'll see a lot of industry folks who feel very confident about their models, but again, on the other side, of course, if researchers don't validate those models, then you really can't say for sure that they're actually more accurate, right? Or more efficient, right? Listen, the issue of real world testing and experimentation, A-B testing, that's standard practice in many operationalized ML and AI implementations in the business world. But real world experimentation in edge analytics, where you're actually transducing or touching people's actual lives, the problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's critical; in terms of causal analysis, you've got to tread lightly on operationalizing that kind of testing in the IoT when people's lives and health are at stake. We still give them placebos, so we still test them. All right, so let's go to the next question. What are the hottest innovations in AI?
So I want to start with you, as someone at a company that's got kind of an interesting thing happening when we start thinking about how we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? I think it's a little counterintuitive what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating it into solutions. So the best AI solutions are actually the products where you don't know there's AI operating underneath, but they're having a significant impact on business decision-making, or bringing a different type of application to the market. And I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision-making? And that means kind of hiding the AI from the business user, because if you think a bot is making a decision instead of you, you're not gonna partner with that bot very easily or very readily. Way at the start of my career, I worked in CRM, when recommendation engines were all the rage, online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them. That algorithm was 99% correct most of the time, but there was this human resistance to letting the computer tell you what to tell that customer on the other side, even if it was more successful in the end. And so, I think that the innovation in AI that's really gonna push us forward is when humans feel like they can partner with these bots, and they don't think of it as a bot, but they think about it as assisting their work and getting to a better result. Hence the augmentation point you made earlier. Absolutely.
Joe, how about you? What do you look at? What are you excited about? I think the coolest thing at the moment right now is chat bots, to have voice, be able to speak with you in natural language to do that. I think that's pretty innovative, right? And I do think that eventually for the average user, not for techies like me, but for the average user, I think keyboards are gonna be a thing of the past. I think we're gonna communicate with computers through voice. And I think this is the very, very beginning of that and it's an incredible innovation. Neil? Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, and all sorts of things, meteorology. And that's where, well, hopefully not on an everyday basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you could probably figure that one out, right? But there are legions of people and many, many companies that are involved in that industry. So if you're talking about the dollar spend on AI, I think the stuff that we do in our industries is probably fairly small. Well, it reminds me of an application that I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. Why not? Which is not a bad thing for a whole lot of people. Judith, what do you look at? So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. 
Some of the things that data scientists still spend a lot of their time on, but that can be augmented with some of this. Basically, we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. Jennifer? So I'm going to say computer vision. Computer vision ranges from image recognition to being able to say what content is in the image. Is it a dog? Is it a cat? Is it a blueberry muffin? Like that sort of popular post out there comparing a blueberry muffin versus, I think, a Chihuahua, and can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in this space, in both the AI space as well as data science, are looking for the newer innovations. I think, for instance, Cloud Vision, I think that's what Google still calls it, the Vision API, which they've released in beta, allows you to actually use an API to send your image and then have it be recognized, right? By their API. There's another startup in New York called Clarifai that also does a similar thing, and Amazon has their Rekognition platform as well. So I think from images, being able to detect what's in the content, as well as from videos, being able to say things like how many people are entering a frame? How many people entered the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? There is actually an extra piece to that. So if I have a picture of a stop sign and I'm that automated car, is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's gonna be one of the complications. Doesn't matter to a New York City cab driver. How about you? Probably not. The hottest thing in AI is generative adversarial networks, GANs. What's hot about them, well, I'll be very quick. Most AI, most deep learning and machine learning, is analytical.
It's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff, in other words, to create realistic looking photographs, to compose music, to build CAD-CAM models, essentially, that can be constructed on 3D printers. So GANs, that's a huge research focus all around the world, and they're increasingly used for natural language generation. In other words, it's institutionalizing, or having a foundation for, nailing the Turing test every single time, building something with machines that looks like what's constructed by a human and doing it over and over again to fool humans. I mean, you can imagine the fraud potential. You can also imagine just the sheer, like, it's gonna shape the world, GANs. All right, so I'm gonna say one thing and then I'm gonna ask if anybody in the audience has an idea. So the thing that I find interesting is, with traditional programs, when you tell a machine to do something you don't need incentives. When you tell a human being something, you have to provide incentives. Like, how do you get someone to actually read the script? And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is just absolutely fascinating to me, whether it's gamification or even some things we're thinking about with blockchain and bitcoins and related types of stuff. To my mind, that's gonna have an enormous impact. Some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think, sir? And I'll try to do my best to repeat it. Oh, we have a mic. So the question's pretty much about what Stephanie's talking about, which is human-in-the-loop training, right? I come from a computer vision background. That's a problem. We need millions of images trained. We need humans to do that. And the workforce is essentially people that aren't necessarily part of the AI community.
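The generative-versus-analytical distinction can be seen in a deliberately tiny sketch of a GAN, not any panelist's system and far simpler than the real research systems discussed here: a two-parameter generator learns to mimic samples from a target distribution by playing the adversarial game against a logistic discriminator, with the gradients worked out by hand rather than via a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# "Real" data the generator must imitate: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(4000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # minimizing -log D(x) - log(1 - D(G(z))).
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    p, q = sigmoid(w * x + c), sigmoid(w * g + c)
    w -= lr * np.mean(-(1 - p) * x + q * g)
    c -= lr * np.mean(-(1 - p) + q)
    # Generator step: push D(fake) toward 1 (non-saturating loss -log D(G(z))).
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    q = sigmoid(w * g + c)
    a -= lr * np.mean(-(1 - q) * w * z)
    b -= lr * np.mean(-(1 - q) * w)

samples = a * rng.normal(0.0, 1.0, 2000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 4.0)")
```

The same adversarial loop, scaled up to deep networks, is what produces the realistic photographs and composed music mentioned above: the generator never sees the real data directly, only the discriminator's verdict on whether its output passes as real.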
These are people that are just able to use that data and analyze their data and label that data. That's something that I think is a big problem that everyone in the computer vision industry, at least, faces. I was wondering what everyone thought about that. So again, the problem there is the difficulty of methodologically bringing together people who have domain expertise and people who have algorithm expertise, and having them work together. The expertise issue comes in healthcare, right? In healthcare, you need experts to be labeling the images. With contextual information, with essentially augmented reality applications coming in, you have ARKit and everything coming out, but there is a lack of context-based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of the foundational basis of AI going forward, it's not necessarily the algorithm, right? It's how well our data is labeled, who's doing the labeling, and how do we ensure that it happens? Great question. So for the panel: if you think about it, a consultant talks about being on the bench. How much time are they going to spend on trying to develop additional business? How much time should we set aside for executives to help train some of these systems? I think the key is to think of the problem a different way. So you can have people manually label data, and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive or that individual, and is there a way to gather that context automatically using AI, right? And if you can do that, similar to what we do in our product, we observe how someone is analyzing the data, and from those observations, we can actually create the metadata that then trains the system in a particular direction.
But you have to think about solving the problem differently, of finding the workflow that then you can feed into, to make this labeling easy without the human really realizing that they're labeling the data. Anybody else? Just to amplify what Stephanie said: in IoT applications, all those sensory modalities, the computer vision, the speech recognition, that's all potential training data. So it cross-checks against all the other models that are processing all the other data coming from that device. So the natural language understanding can be reality-checked against the images that the person happens to be commenting upon, or the scene in which they're embedded. So yeah, the training data is embedded in all of that. We're not at the stage yet where this is easy. It's gonna take time before we do start doing the pre-training of some of these details so that it goes faster. But right now, there are not that many shortcuts. Go ahead, Joe. Sorry, so a couple things. One is, I was just caught up on your point about incentivizing programs to be more efficient, like humans. You know, Ethereum, the blockchain, has this concept of gas, where as the process becomes more efficient, it costs less to actually run, right? It costs less Ether, right? So the machine is actually kind of incentivized, and you don't really know what it's gonna cost until the machine processes it. So there is some notion of that there. But as far as vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing. So as people start using it more, they're going to be adding more pictures, very, very organically. And then the machines will be trained, and right now it's a very small handful of people doing it, and it's very proactive by the Googles and the Facebooks and all of that.
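The implicit-labeling idea described above, harvesting labels from a user's natural workflow instead of asking anyone to annotate, can be sketched in a few lines. The event log, the column names, and the "touched at least twice means important" rule here are all invented for illustration; they are not anyone's actual product logic:

```python
from collections import Counter

# Hypothetical interaction log: which columns an analyst opened, filtered,
# or charted while exploring a dataset. In a real product these events
# would be captured automatically from the UI, not typed in by hand.
events = [
    ("open", "revenue"), ("filter", "region"), ("chart", "revenue"),
    ("chart", "revenue"), ("open", "customer_id"), ("filter", "region"),
]

# Turn observed behavior into weak labels: columns touched repeatedly get
# tagged "important" without the analyst ever labeling anything explicitly.
counts = Counter(col for _, col in events)
labels = {col: ("important" if n >= 2 else "background")
          for col, n in counts.items()}
print(labels)
```

The point of the sketch is the shape of the trick, not the rule itself: the metadata falls out of work the human was already doing, which is why they never feel like they are labeling data.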
But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. So, Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? Well, to a certain extent, the people who are contributing the insight own nothing, because the systems collect their actions and the things they do, and then that data doesn't belong to them. It belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff, it's not enough to say that people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole, oh my God, I'm not going to go against the senior professor. I knew a guy who was a doctor at the University of Pittsburgh, and they were doing a clinical study on one of the tubes they put in little kids' ears who have ear infections, right? And... Google it, someone can help us out. Anyway, I forget the exact thing, but he came out and said that the principal investigator lied when he made the presentation that it should be this, or I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. He went against the senior line of authority, right? He was a whistleblower. Another question back here? Yes. Mark Turner has a question. Not a question, just want to piggyback on what you're saying about the transformation, maybe in healthcare, of black and white images into color images, in the case of sonograms and ultrasounds and mammograms. Do you see that happening using AI? Do you see that being... I mean, it's already happening. Do you see it moving forward in that kind of way?
I mean, talk more about that, about AI and black and white images being used, and how they can be converted, made into color images so you can see things, so that doctors can perform operations better. So I'm sorry, but could you summarize it down? What's the question? Summarize it, just... I have a lot of students that are interested in the cross-pollination between AI and, say, the medical community, as far as things like ultrasounds and sonograms and mammograms, and how you can literally take a black and white image and, using algorithms, have it made into a color image that can help doctors better do the work that they've already been doing, just do it better. So you touched on it, like, for 30 seconds. So: how AI can be used to actually add information in a way that's not necessarily invasive, but ultimately improves how someone might respond to it or use it? Yes? Related? Yeah. I also got something to say about medical images in a second. Any of you guys want to go ahead? Jennifer? Yeah, so for one thing, you know, and it kind of goes back to what we were talking about before, when we look at, for instance, scans: at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who doesn't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is. So it's both the slice level, right, as well as, within that 2D image, where it's located and the size of it, right. So the beauty of things like AI is that, right now, a radiologist has to look at every slice and actually identify this manually, right. The goal, of course, would be that one day we wouldn't have to have someone look at every slice, which is usually like 300 slices, and be able to identify it in a much more automated way. And I think the reality is we're not going to get something that's going to be 100%, right.
And with anything we do in the real world, it's always like a 95% chance of it being accurate. So I think it's finding that in-between: what's the threshold that we want to use to be able to say that this is definitively, say, a lung cancer nodule or not. I think the other thing to think about is, in terms of how they use other information: based on other characteristics of the person's health, they might use that as sort of a gradient, right. So how dark or how light something is, to identify, maybe in that region, the prevalence of that specific variable. So that's usually how they integrate that information into something that already exists in a computer vision sense. I think the difficulty with this, of course, is being able to identify which variables to introduce into the data that does exist. So I'll make two quick observations on this and then I'll go to the next question. One is, radiologists have historically been some of the highest paid physicians within a medical community, partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors, which means they can do a lot of work in a little bit of time and charge a fair amount of money. As we start to introduce some of these technologies that allow us, from a machine standpoint, to actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. But it's also disruptive, as I'm seeing more and more studies showing that deep learning models processing images, ultrasonic sounds and so forth, are getting as accurate as many of the best radiologists. That's the point. At detecting cancer or whatever.
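The threshold trade-off Jennifer describes can be illustrated in a few lines of Python. The probabilities and labels below are made up for the example; in practice they would come from a trained classifier scored on a held-out set of annotated slices:

```python
# Toy illustration of choosing a decision threshold for a nodule classifier.
# probs: the model's predicted probability that a slice contains a nodule.
# truth: the radiologist's ground-truth label (1 = nodule present).
probs = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
truth = [1,    1,    1,    0,    1,    0,    0,    0]

def rates(threshold):
    """Return (sensitivity, specificity) at a given decision threshold."""
    tp = sum(p >= threshold and t == 1 for p, t in zip(probs, truth))
    fn = sum(p < threshold and t == 1 for p, t in zip(probs, truth))
    tn = sum(p < threshold and t == 0 for p, t in zip(probs, truth))
    fp = sum(p >= threshold and t == 0 for p, t in zip(probs, truth))
    return tp / (tp + fn), tn / (tn + fp)

for thr in (0.25, 0.50, 0.75):
    sens, spec = rates(thr)
    print(f"threshold {thr:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Lowering the threshold catches every nodule (high sensitivity) at the cost of more false alarms for the radiologist to review (lower specificity); raising it does the opposite, which is exactly the "what threshold do we accept" question, since a system that is 95% accurate can still be tuned toward whichever error is less costly.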
Now radiologists are saying, oh, look, we do this great thing in terms of interacting with the patients, which they never had, because they're being disintermediated. The second thing that I'll note is, one of my favorite examples of that, if I've got it right, is looking at the images, the deep space images that come out of Hubble, where they're taking data from thousands, maybe even millions of images, and combining it together in interesting ways. You can actually see depth. You can actually move through, to a very, very small scale, a system that's 150, well, maybe not that much, maybe six billion light years away. Fascinating stuff. All right, so let me go to the last question here and then I'm going to close it down, and we can have something to drink. One of the hottest, oh, I'm sorry, question? Yes, hi, my name's George. I'm with Blue Talon. You asked earlier the question, what's the hottest thing in the Edge and AI? I would say that it's security. It seems to me that before you can empower agency, you need to be able to authorize what they can act on, how they can act on it, who they can act on. So it seems that if you're going to move to very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. So I must have missed it, but I saw a couple of slides that called out security as a key prerequisite, and maybe Judith can comment. But I'm curious how security is going to evolve to meet this analytics at the Edge. Well, let me do that and I'll ask Jim to comment. The notion of agency is crucially important, slightly different from security, just so that we're clear. And the basic idea here is, historically folks have thought about moving data, or they've thought about moving application function. Now we are thinking about moving authority. So as you said, that's not really a security question, but this has been a problem that's been of concern in a number of different domains.
How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. Yeah, actually, thank you for bringing up security. So identity is the foundation of security: strong identity, multi-factor, face recognition, biometrics and so forth. Clearly AI, machine learning, deep learning are powering a new era of biometrics, and behavioral metrics and so forth, that's organic to people's use of devices. Getting to the point that Peter was raising is important: agency, systems of agency. You, as a human being, should be vouching in a secure, tamper-proof way; your identity should be vouching for the identity of some agent, physical or virtual, that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well, a lot of that has been worked out already through webs of trust, public key infrastructure, formats like SAML for single sign-on, and so forth. It's all about assertions, strong assertions and vouching. I mean, there's a whole workflow to these things. Back in the ancient days when I was actually a PKI analyst, three analyst firms ago, you know, I got deep into the guts of all those federation agreements. Something like that has to be IoT-scalable to enable systems of agency to be truly fluid, so we can vouch for our agents wherever they happen to be. And we're gonna keep on having, as human beings, agents all over creation. We're not even gonna be aware of everywhere that our agents are, but our identity has to follow. But it's not just identity, it's also authorization and context. Permissioning, of course. Yeah, so I may have been the right person to do something yesterday, but I'm not authorized to do it in another context, in another application. Role-based permissioning, yeah. That's right. And it's persona-based, yes. I agree, yes.
And obviously it's gonna be interesting to see the role that blockchain, or its follow-ons, as a technology is gonna play here. Okay, so let me throw one more question out. What are the hottest applications of AI at the edge? We've talked about a number of them. Does anybody wanna add something that hasn't been talked about? Or do you wanna get a beer? Ha, ha, ha, ha, ha. Stephanie, you raised your hand. I was gonna bring something mundane to the table, actually, because I think some of the most exciting innovations with IoT and AI are actually simple things. Like, the City of San Diego is rolling out 3,200 automated streetlights that will actually help you find a parking space and reduce the amount of emissions into the atmosphere, so it has some positive environmental change impact. I mean, it's streetlights. It's not, you know, it's not the medical industry. It doesn't look like a life-changing innovation. And yet, if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone. And dramatically suppresses the impact of backseat driving. Joe, were you saying? I was just gonna say, you know, there's already technology out there where you can put a camera on a drone, with machine learning and artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and, you know, leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an X-ray and determine if there's a tumor there is not out of the realm of possibility, right? To be able. I agree with both of them. That's what I meant about external kinds of applications, right, instead of, you know, figuring out what to sell our customers, which is mostly what we hear. I just, I think all of those things are eminently doable.
And boy, streetlights that help you find a parking place. I mean, that's brilliant, right? It improves your life, you know, more than, I don't know, a lot of things I've used on the internet recently. But I mean, I think it's great. I'd like to see a thousand things like that. Jim? Yeah, actually, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything, to enable fine-grained micro-climate awareness for all of us as human beings moving through the world, and to enable greening of every micro-climate in buildings. In other words, you know, you have sensors on your body that are always detecting, you know, the heat, the humidity, the level of pollution or whatever, in every environment that you're in, or that you might be likely to move into fairly soon, and either, A, it can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your specific requirements, like the lighting or whatever it might be. And, you know, when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold, or whatever. But I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet, potentially. That's really the edge analytics application, and it depends on everybody being fully equipped with a personal area network of sensors that's communicating into the cloud. Jennifer? So I think what's really interesting about it is being able to utilize the technology we do have, right? It's a lot cheaper now to have a lot of these ways of measuring that we didn't have before, and whether or not engineers can then leverage what we have as ways to measure things. And then of course, you need people like data scientists to build the right model. So you can collect all this data.
If you don't build the right model that identifies these patterns, then all that data is just collected and it's just going to sit in a repository, right? So without having the models that support the patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. So I think what'll be really interesting is to see how existing technology is leveraged to collect data and then how that's actually modeled, as well as to see how technology is going to develop from where it is now to be able to either collect things more sensitively, or, in the case of, say, for instance, if you're dealing with how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day, and then being able to model that in a way that is actually going to give us better insights into things like healthcare, and maybe even just our behaviors. Judith? So I think we also have to look at it from a peer-to-peer perspective. So I may be able to get some data for one thing at the edge, but then all those edge devices, sensors or whatever, they all have to interact with each other, because we don't live, we may in our business lives act in silos, but in the real world, when you look at things like sensors and devices, it's how they react to each other on a peer-to-peer basis. All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. It's the one I hate the most. I hate AI-generated music. I hate it. All right, so I want to thank all the panelists, every single person, great commentary, great observations. I want to thank you very much. I want to thank everybody that joined.
John, in a second, you'll kind of announce who's the big winner, but the one thing I want to do is as I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember and I'm going to give you the award, Stephanie. That is that increasingly we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The same flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. So, John, why don't you come on up? Who won the, whatever it's called, the raffle? You won. Thank you. How about a round of applause for the great panel? Okay, we have put the business cards in the basket. We're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and a nice speaker, Bluetooth speaker. Good, waiting for that. I just want to say thank you for coming and for the folks watching. This is our fifth year doing our own event called Big Data NYC, which is really an extension of the landscape beyond the big data world as cloud and AI and IoT and other great things happen and great experts and influencers and analysts here. Thanks for sharing your opinion. We appreciate you taking the time to come out and share your data and your knowledge. Appreciate it. Thank you. Where's the... Sam's right in front of you. There's the thing, okay. All right, got to be present to win. We saw some people sneaking out the back door to go to a dinner. Wait, first prize, first prize. Okay, first prize is the Bose headset. Thank you. Bluetooth and noise canceling. I won gloves. Tim, you've got to hold it down. I can see the cards. Stephanie, you won. Okay. Sonny Cox. Sonny Alley Cox. Okay, the bar's open, so help yourself, but we've got one more. Hold on, I saw you. I need to wake up a little bit. Yeah, okay. All right. Next one is... Oh, this is... My kids love this. This is great. 
Great for the beach, great for everything. Portable speaker, great gift. It is a portable speaker. It's pretty awesome. Oh, that's one of our guys. It can't be related. Ava, Ava, Ava. Okay, Gene Pinesco. Hey, came in. All right, look at that. That timing's great. Hey, thanks everybody. Enjoy the night. Thank Peter Burris, head of research for SiliconANGLE Media's Wikibon, and the great guests and influencers and friends, and you guys from the community for coming. Thanks for watching and thanks for coming. Enjoy the party. Have some drinks, and that's it. That's it for the influencer panel and analyst discussion. Thank you.