From Sunnyvale, California, in the heart of Silicon Valley, it's theCUBE, covering Accelerate Your Journey to AI, brought to you by NetApp.

I'm Peter Burris, and welcome to a great conversation here from NetApp's Data Visionary Center. Specifically, we've got Monty Barlow from Cambridge Consultants. Monty is the head of Artificial Intelligence at a relatively well-known, august consulting group. Monty, welcome to theCUBE.

Thank you, Peter.

So Monty, what we're going to do is spend a number of minutes talking about some of the trends and transformations that are being wrought by AI. But let's start: what is Cambridge Consultants?

So Cambridge Consultants does technology, product, and service development for customers all across the world, with probably about 400 new projects starting each year. The common thread in them is that they're technically difficult and innovative; there's something really challenging about them. They're typically strategic for our customers, who are looking to do something disruptive and do it fast.

So you've got a pretty broad range of customers that you're working with. Let me ask the question, then: give us some examples of some of the customer cases you've been working on, specifically as it relates to AI.

Sure. We're working with everything from blue chips to startups, and what they're looking for from AI is slightly different. I can't talk about some of the confidential details, but there are some really interesting applications. One for me is precision agriculture. We've heard a lot about improving crop yield, but we're reaching the point now where you can drive over a crop, distinguish it from a weed, and put water on just the crop and pesticide on just the weed, so you get a much better yield. You cut down on water, you cut down on pesticide, and it's a really nice application where it's a win-win for everything.
So as we think about some of these big issues associated with inventing technology and inventing AI-related stuff to do many of the things we're talking about, we also have to recognize that there's a social side to introducing AI. There's invention, and there's the innovation side, which in many respects is the social side: how do you get people to adopt this stuff? What are the challenges you're seeing customers face as they conceive how best to adopt AI and AI-related capabilities within markets?

Sure. I think in most of the markets we work in, the benefits are becoming so clear that there's not a massive reluctance or difficulty in adopting. There are obviously, in the public, those normal fears about loss of jobs or safety or security, about having machines do jobs that you might wish a person to do for you, and those are there in some markets, healthcare in particular. But many markets see no such problems, and the benefit of being able to do innovative things scalably and flexibly, outperforming humans in many cases, just makes economic sense.

So is it just the numbers? Is that what big companies are doing to ensure a more rapid time to value for AI-related things? Or are there other things big companies are doing to try to facilitate the introduction of some of these advanced technologies?

It varies from company to company; there are all sorts of ways they're approaching this. It may be trialing early services that introduce people gently to AI and get them accustomed to it. Of course, that's what's been the case for social media: none of us believed we were using AI in the early days, and then suddenly we realized we were interacting with it on an almost daily basis. Through to targeted trials, all sorts of different approaches are being taken.

AI has been associated with a lot of different algorithmic forms. There have been a lot of different basic models for thinking about how you do it.
Machine learning, deep learning, predictive analytics, recommendation systems. What's the difference, particularly between AI, machine learning (ML), and deep learning (DL)?

Okay, if I could take a step back for a moment: we've been working with AI for decades, and as you say, there are some really quite old-school techniques out there. Decision-support expert systems, where the idea was that you embodied the coder's, the programmer's, knowledge in a system, and really all it could do was replay that. So at best it could act as well as the person who programmed it.

Very rules-driven.

Very, very rules-driven. Then in the early 2000s we saw machine learning beginning to surface more. That's where a system learns, perhaps a few parameters, from some data. It does learn by itself, but it's doing something quite simple: counting the axles of vehicles going past from the vibrations in a road, or monitoring temperature and pressure in an industrial process and saying this process is going well.

So not rules-driven; it's data-driven.

Data-driven. Deep learning just takes that to a whole new scale. It is still machines learning from data, but now a few parameters has become millions or billions. You can now point a camera at a road and recognize all of the different vehicle types, instead of just how many axles they've got, for example.

And so the notion is that it's a focus on patterns that the system discovers in the data, as opposed to rules or patterns that are put into the system by a developer or a data person.

Absolutely. You don't always know what insight you're going to derive from a data set.

So I understand that Cambridge Consultants uses a variety of technologies, but specifically you're utilizing NetApp and NVIDIA gear in your labs. Talk a little bit about that experience. How's that been?

Sure. Time is everything for us as a business and for our customers.
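The contrast drawn above between a few learned parameters and millions can be made concrete with a quick back-of-the-envelope sketch. The layer sizes below are hypothetical, chosen only to show the scale gap, and are not from the interview:

```python
# An illustrative sketch of the scale gap between classic machine
# learning ("a few parameters") and deep learning ("millions").

def dense_layer_params(n_in, n_out):
    """Parameter count of one fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# "A few parameters": a linear model mapping three road-vibration
# features to a single axle-count estimate.
classic_ml = dense_layer_params(3, 1)

# "Millions of parameters": a toy image model over 224x224 RGB frames,
# flattened through two hidden layers to 10 vehicle classes.
deep_net = (dense_layer_params(224 * 224 * 3, 64)
            + dense_layer_params(64, 64)
            + dense_layer_params(64, 10))

print(classic_ml)  # 4
print(deep_net)    # 9638666
```

Even this toy image model is six orders of magnitude larger than the axle counter, which is why the training hardware discussion that follows matters.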
People want to be first to a particular market window, and AI is still at some level experimental: we don't know what it's going to do in three or five years' time. So key to our business is a fast turnaround on proofs of concept. How would this work? What would happen? Perhaps our customer's got some data and they need to know whether they need a trial to collect more. Getting through jobs quickly is what matters most to us, and that's what the NVIDIA and NetApp equipment is all about. For the GPUs, it's a case of big parallel processing: large models, crunching the numbers and adjusting the parameters quickly. But equally important is the ability to get data from storage into those GPUs quickly.

And so there is a relationship between the characteristics of the hardware and the success of the AI efforts?

Absolutely. And it's a really demanding application for file serving, the most demanding we've ever seen, because it's potentially millions or billions of tiny files that have to be called up in different patterns, quite randomly. It's not like, for example, streaming video. It's too much to cache locally. You need really high-performance equipment to manage the data quickly enough that you can learn something in days and not in months.

One of the crucial features of any AI development effort is this notion of a data pipeline: how you stage changes to the data, where it is, knowing how to move it, when to move it, doing it with speed, doing it at scale. Talk a little bit about the differences between AI-driven data pipelines and some of the other data pipelines that have been out there.

Sure. The difference we tend to see in AI is that it's touching the real world more directly. You may have data coming in live from the edge, from sensors, and that's not as carefully cleaned, sanitized, and formatted as you might expect in a normal, say, enterprise database or data application.
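The file-serving pattern described above, millions of tiny files called up quite randomly, can be sketched in a few lines of generic Python. This is a hypothetical illustration of why training defeats local caching, with no NetApp or NVIDIA specifics:

```python
# Each training epoch walks an enormous set of tiny files in a fresh
# random order, so the storage system sees a different request pattern
# every pass and little of the working set stays cached.
import random

def epoch_batches(paths, batch_size, seed):
    """Yield batches of file paths in a new random order each epoch."""
    order = list(paths)
    random.Random(seed).shuffle(order)  # new access pattern per epoch
    for i in range(0, len(order), batch_size):
        yield order[i:i + batch_size]

# Hypothetical dataset of 100,000 tiny sample files.
paths = [f"samples/{i:06d}.jpg" for i in range(100_000)]

epoch0 = next(epoch_batches(paths, batch_size=8, seed=0))
epoch1 = next(epoch_batches(paths, batch_size=8, seed=1))
print(len(epoch0))       # 8
print(epoch0 != epoch1)  # each epoch requests files in a different order
```

Sequential streaming workloads can be prefetched; this shuffled small-file pattern is what pushes the random-read demands Barlow describes.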
So knowing what to do with those difficult cases, how to format them, what to reject, and what to feed in; and then, at the other end, how to present that decision, because AI is often making some form of decision, how to present it efficiently back to humans, or how to make a quick, sensible decision based on it: how to steer the vehicle in the correct direction, how to highlight a cancer, whatever it is we're doing. That pipeline, from data first coming in, through intelligence, and back again to the real world, is longer, more complicated, and more sophisticated than any data pipeline we've seen before.

Now, it's that sophistication, that length, the duration of the transactions, for example, that increases the complexity that big companies working with Cambridge Consultants and others ultimately have to address so that they can be successful and get that time to value. As you think about the challenges you're trying to address with customers, what is it that you're seeing in their AI projects that is more consistently associated with success, or, unfortunately, more consistently associated with having to do it again?

Sure. I'll limit my answer to those I feel are doing genuine AI, because there is an element of people labeling anything AI. But assuming they are doing something that's only been possible in the last few years, that is innovative, difficult, and complicated, it's really about reaching the right distance, stretching themselves the correct amount. Going into a new market with new data and new algorithmic approaches is dangerous; there'll be a lot of iteration, a lot of learning needed before that'll come good.
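The front of the pipeline described above, deciding what to reject and what to feed in when live sensor data arrives messy, can be sketched as a simple validation step. The record fields here are hypothetical, not from the interview:

```python
# Live edge-sensor records arrive unclean; the pipeline must reject the
# unusable ones and normalize the rest before they reach the model.

def clean(records):
    """Keep records with a timestamp and a usable numeric reading."""
    good = []
    for r in records:
        if r.get("ts") is None:
            continue  # reject: no timestamp
        try:
            value = float(r["value"])
        except (KeyError, TypeError, ValueError):
            continue  # reject: missing or non-numeric reading
        good.append({"ts": r["ts"], "value": value})
    return good

raw = [
    {"ts": 1, "value": "3.2"},  # usable, but needs type conversion
    {"ts": 2, "value": None},   # sensor dropout
    {"value": "1.0"},           # missing timestamp
]
print(clean(raw))  # [{'ts': 1, 'value': 3.2}]
```

An enterprise database would enforce this shape on write; with live sensors, the pipeline itself has to make these accept-or-reject calls at ingest time.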
But if you can take an approach that's beginning to work in one vertical into another, or you can start with data you understand and know, perhaps from a previous big-data application, and start to do more intelligent things with it, then you can achieve the kind of breakthrough innovations and really impressive systems that AI can deliver today.

So: novel data, practiced algorithms, and hardware that works.

Yeah, and don't mix up too many new factors together, absolutely.

Monty Barlow, Head of Artificial Intelligence at Cambridge Consultants, thanks very much for being on theCUBE.

Thank you, Peter.

Thank you.