on disruptive technologies. And if you think about the disruption that's happening in business with IoT, with OT, and with big data, you can't get anything more disruptive to the whole of the business chain than this particular area. It's an area that I focused on myself, asking the question: should everything go to the cloud? Is that the new future? Is 90% of the computing going to go to the cloud, with just little mobile devices out on the edge? It felt wrong when I did the math on it. I did some examples of real-world environments, wind farms, et cetera. It clearly was not the right answer. Things need to be near the edge. And I think one of the areas that solidified it for me was when you looked at an area like video: huge amounts of data, really important decisions being made on the content of that video. For example, recognizing a face, a white hat or a black hat. If you look at the technology, sending that data somewhere else to do that recognition just does not make sense. Where is it going? It's going actually into the camera itself, right next to the data, because that's where you have the raw data. That's where you have the maximum granularity of data. That's where you need to do the processing of which faces are which, right close to the edge itself. And then you can send the other data back up to the cloud, for example, to improve those algorithms within that camera, to do all that sort of work on a batch basis over time. That's what I was looking at, and looking at the cost justification for doing that sort of work. So today we've got a set of people here on the panel, and we want to talk about coming down one level, to where IoT and IT are going to have to connect together. So on the panel I've got, and I'm going to try not to get these names really wrong, Sanji Kumar from FogHorn. Could you introduce yourself and what you're doing where the data is meeting the people and the machines? Sure, sure. So my name is Sanji Kumar.
I actually run engineering for a company called FogHorn Systems. We are bringing analytics and machine learning to the edge. Our goal and motto is to take computing to where the data is, rather than the other way around. It's a two-year-old company that was incubated in The Hive, and we are in the process of getting our second release of the product out shortly. Excellent. So let me start at the other end. Rohan, can you talk about your company and what contribution you're focusing on? Sure. I head product marketing for Maana. Maana is a startup about three years old. What we're doing is offering an enterprise platform for large enterprises. We're helping the likes of Shell and Maersk and Chevron digitally transform. And that simply means putting the focus on subject matter experts, putting the focus on the people. Data is definitely an important part of it, but allowing them to bring their expertise into the decision flows so that ultimately the key decisions that are driving the revenue for these behemoths are made at a higher quality and faster. Excellent. Well, that's two software companies; we also have a practitioner here who is actually doing fog computing, doing it for real, and has been doing it for some time. Janet George from Western Digital, can you introduce yourself? Say something from the trenches, what's really going on? Okay, very good. Thank you. I actually build infrastructure for the edge to deal with fog computing. And at Western Digital we're very lucky, because we are the largest storage manufacturer and we have what we call the Internet of Things and the Internet of Test Equipment. I process petabytes of data that come out of the Internet of Things, which is basically our factories. And then I take these petabytes of data and I process them both on the cloud and on the edge, but primarily to be able to consume that data.
And the way we consume that data is by building very high-profile models through artificial intelligence and machine learning. And I'll talk a lot more about that. But at the end of the day, it's all about consuming the data that you collect from anywhere: Internet of Things, test equipment, data that's being produced through products. You have to figure out a way to compute on that. And the cloud has many advantages and many trade-offs, so we're going to talk about the trade-offs. That's where the gap for fog computing comes into play. Excellent. Thanks very much. And last but not least, we have Val, and I can never pronounce your surname. Bercovici. Thank you. You are in the midst of a transition yourself, so talk about where you have been and where you're going. For the better part of this century, I've been with NetApp, working in various functions, obviously enterprise storage. Around 2008, my developer instinct kind of fired up, and this thing called cloud became really interesting to me. So I became the self-appointed cloud czar at NetApp, and I ended up initiating a lot of our projects, which we know today as the NetApp Data Fabric, and that culminated about 18 months ago in the acquisition of SolidFire. I'm now the acting CTO of SolidFire, but I plan to retire from the storage industry at the end of our fiscal year, at the end of April. I'm spending a lot of time with, particularly, the Cloud Native Computing Foundation, which is the open source home of Google's Kubernetes technology and about seven other related projects. We keep adding some almost every month; I'm starting to lose track. And I'm spending a lot of time on the data gravity challenge. It's a challenge in the cloud. It's a particularly new and interesting challenge at the edge. And I look forward to talking about that. Okay, and data gravity is absolutely key, isn't it? Yeah. It's extremely expensive and extremely heavy to move around. And the best analogy is: workloads are like electricity.
They move fairly easily and lightly. Data is like water. It's really hard to move, particularly in large quantities. Great. I want to start with one question, though, on the core problem, particularly in established industries, of how we get change to work. In an IT shop, we have enough problems dealing with operations and development. In the industrial world, we have IT and OT, who look at each other with less than pleasure, and mainly disdain. How do we solve the people problem in trying to put together solutions? You must be right in the middle of it. Would you like to start with that question? Absolutely. So we are 26 years old, probably more than that, but we have a mix of very old and new manufacturing equipment. It's the storage industry, and in our storage industry we are used to doing things a certain way. We have existing data. We have historical data. We have trend data. You can't get rid of what you already have. The goal is to build connectors such that you can move from where you're at to where you're going. And so you have to be able to take care of the shift that is happening in the market. At the end of the day, if you look at five years from now, it's all going to be machine learning and AI, right? Agent technology is already here. It's proven. We can see Siri out here. We can see Alexa. We can see these agent technologies out there, right? So machine learning is getting a lot of momentum: deep learning and neural networks and things like that. So we've got to be able to look at our data and tap into our data in near real time, which is very different. And the way to do that is really making these connections happen, tapping into old versus new. For example, if you look at storage, you have file storage, you have block storage, and then you have object storage, right? We've not really tapped into the field of object storage, really.
And the reason is that if you are going to process one trillion objects, like Amazon is doing right now with S3, you can't do it with file-system-level storage or with block-system-level storage. You have to go to objects. Think Internet of Things: how many trillions of objects are going to come out of these Internet of Things devices? So one, you have to be positioned from an infrastructure standpoint. Two, you have to be positioned from a use-case prototyping perspective. And three, you've got to be able to scale that very rapidly, very quickly. And that's how change happens. Change does not happen because you ask somebody to change their behavior. Change happens when you show value. And people are so eager to get that value out of what you've shown them in real life that they are very quick to adapt. That's an excellent comment. To comment on that as well, we've just gone through training a bunch of OT guys on our software, and two analogies actually work very well. One is that operational people are very familiar with circuit diagrams and the flow of things through, essentially, black boxes. You can think of these as something that has a bunch of inputs and a bunch of outputs. So that's one thing that worked very well. The second thing that works very well is the PLC model. There are direct analogies between PLCs and analytics, which people on the floor can actually relate to. So if you have software that's based on data streams, with time as a first-class citizen, the PLC model, again, works very well in terms of explaining the new software to the OT people. Excellent. OK. And would you want to come in on that as well? I think a couple of points to add to what Janet said. I couldn't agree more in terms of the results. Maana did a few projects, a few pilots, to convince customers of their value. And we typically focus very heavily on operationalizing the output.
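The data-streams-with-time-as-a-first-class-citizen idea can be made concrete with a short sketch. This is a generic illustration only, not FogHorn's actual product or API; the class and method names (WindowedRule, feed) are invented for the example. Like a PLC block, it takes a stream of timestamped inputs in and emits alerts out, with the window defined in time rather than in sample counts:

```python
from collections import deque

class WindowedRule:
    """A PLC-like block: timestamped samples in, alerts out.

    Keeps a sliding time window (in seconds) and fires when the
    windowed average crosses a threshold. Illustrative only; not
    any vendor's real API.
    """

    def __init__(self, window_s, threshold):
        self.window_s = window_s
        self.threshold = threshold
        self.buf = deque()  # (timestamp, value) pairs inside the window

    def feed(self, ts, value):
        # Time is first-class: every sample carries its timestamp,
        # and expiry is decided by time, not by sample position.
        self.buf.append((ts, value))
        while self.buf and self.buf[0][0] <= ts - self.window_s:
            self.buf.popleft()
        avg = sum(v for _, v in self.buf) / len(self.buf)
        return ("ALERT", ts, avg) if avg > self.threshold else None

# Example: vibration readings sampled once a second; alert when the
# 3-second average exceeds 10.0.
rule = WindowedRule(window_s=3, threshold=10.0)
alerts = [out for t, v in enumerate([1, 2, 12, 14, 15, 2, 1])
          if (out := rule.feed(t, v))]
```

An operator can read this the way they read a PLC ladder rung: inputs on one side, a single firing condition, outputs on the other.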
So we are very focused on making sure that there is some measurable value that comes out of it. And it's not until the end users started seeing that value that they were willing and open to adopting the newer methodologies. A second point to that: a lot of the more recent techniques available to solve certain challenges are deep learning and neural nets; there are all sorts of sophisticated AI and machine learning algorithms out there. A lot of these are very sophisticated in their ability to deliver results, but not necessarily in their transparency about how you got there. And I think that's another thing that Maana has learned: yes, we have this arsenal of fantastic algorithms to throw at our problems, but we try to start with the simplest approach first. We don't unnecessarily try to brute-force it, because I think in an enterprise, they are more than willing to have that transparency in how they are solving something. So if they're able to see how the software was able to get to a certain conclusion, then they are a lot happier with that approach. Could you maybe give one real-world example, make it a little bit more real? Right, absolutely. So we did a project for a very large organization on collections. They had a lot of outstanding capital locked up in customers not paying. It's a standard problem; you're going to find it in pretty much any industry. So for that outstanding invoice problem, we went ahead and worked with the subject matter experts. We looked at all the historical accounts receivable data, we took data from a lot of other sources, and we were able to come up with models to predict when certain customers were likely to pay and when they should be contacted. Ultimately, what we wanted to give the collection agents was a list of customers to call. It was fairly straightforward.
Of course, the solution was not very, very easy, but at least on a holistic level it made a lot of sense to us. When we went to the collection agents, many of them actually refused to use that approach. This is part of change management in some sense. They were so used to doing things their way, so used to targeting the customers with the largest outstanding invoice, or the ones that hadn't paid for the longest amount of time, that it actually took us a while. Because initially the feedback we got was: your approach is not working, we're not seeing the results. And when we dug into it, it was because it wasn't being used. So that would be one example. So again, proof points that you will actually get results from. Absolutely, and the transparency. We actually sent some of our engineers to work with the collections agents to help them understand what approach it is that we're taking, and we showed them that this is not magic. Instead of looking at the final dollar value, we're calculating time value lost. So we're coming up with a metric that allows us to incorporate not just the outstanding amount or the time that they haven't paid, but a lot of other factors as well. Excellent. Val? When you asked that question, I immediately went to the more non-technical, business side of my brain to answer it. My experience over the years has been particularly in doing major industry transitions. I'm old enough to remember the mainframe-to-client-server transition, and now client-server to virtualization and cloud. And really, sales reps have that well-earned reputation of being coin-operated. Well, it's remarkable how much you can adjust compensation plans for pretty much anyone in a capitalist environment.
And the IT/OT divide, if you will, is pretty easy to solve from a business perspective when you take someone with an IT-supporting-the-business mentality and you compensate them on new revenue streams, new business. All of a sudden, their world perspective changes, sometimes overnight, or certainly when that contract is signed. That's probably the number one thing you can do from a people perspective: incent them and motivate them to focus on these new things. The technology, particularly nowadays, is evolving to support them in these new initiatives, but nothing motivates like the right compensation plan. Excellent, a great series of different viewpoints. So the second question I have, again coming down a bit to this level, is: how do we architect a solution? We've heard you've got to architect it with layers like this. It seems to me that that's pretty difficult to do ahead of where you're going; that in general, you take smaller steps, one step at a time, you solve one problem, you go on to the next. Am I right in that? If I am, how would you suggest that people go about this decision making of putting architectures together? And if you think I'm wrong and you have a great new way of doing it, I'd love to hear about it. I can take a shot at that. So we have a number of customers that are trying to go through a phased way of adopting our technology and products. It begins with first gathering the data and replaying it back, to build the first level of confidence, in the sense that the product is actually doing what you're expecting it to do. So that's more from a monitoring and administration standpoint. The second stage is that you begin to capture analytical logic in the product, where it can start doing prediction for you. So from operational, you go into a predictive maintenance, predictive models standpoint.
The third part is prescriptive, where you actually help create a machine learning model. Now, it's still in flux in terms of where that model gets created, whether it's on the cloud in a central fashion, or in the right place with the right context in a multi-level, hierarchical fog layer. And then you operationalize that, again, as close to the data as possible. So you go through this operational-to-predictive-to-prescriptive adoption of the technology. And that's how people actually build confidence in adopting something new into, let's say, a manufacturing environment, or things that are pretty expensive. I'll give you another example, where you have the case of capacitors being built on an assembly line in manufacturing. Can you look at data across different stations on an assembly line and predict at the second station that it's going to fail at the eighth one? With that, what you're doing is actually reducing the scrap that's coming off of the assembly line. So that's the kind of usage that you go into in the second and the third stage. Excellent. Janet, do you want to come in? I agree, and I have a slightly different point of view also. I think architecture is very difficult, right? It's like Thomas Edison: he spent a lot of time creating negative knowledge to get to that positive knowledge. And that's kind of the way it is in the trenches. We spend a lot of time trying to think things through, and the keyword that comes to mind is abstraction layers. Because where we came from, everything was tightly coupled: compute and storage are tightly coupled, structured and unstructured data are tightly coupled with the database, the schema is tightly coupled. So now we are going into this world of everything being decoupled.
In that world, multiple operating systems should be able to use your storage. Multiple models should be able to use your data. You cannot structure your data in any way that is customized to one particular model. Many models have to run on that data on the fly, retrain themselves, and then run again. So when you think about that, you think about what is best suited to stay in the cloud. Maybe large amounts of training data, and schema that's already processed, can stay in the cloud. Schema that is very dynamic, schema that is created on the fly that you need to read, and data that's coming at you from the Internet of Things that keeps changing: I call it heteroscedastic data, which is very statistical in nature and highly variable in nature. You don't have time to sit there and create rows and columns, structure this data, and put it into some sort of structured set. You need to have a data lake, and you need a stack on top of that data lake that can adapt, create metadata, process that data, and make it available to your models. And then over time, I believe we're now running into a near real-time compute bottleneck with all of this parallel processing for the different models and for training sets. So we need a stack in which we can quickly swap in GPUs, which is where the future is going with parallel processing and machine learning. So your architecture has to be extremely flexible: high layers of abstraction, and the ability to train and grow and iterate. Excellent. Do you want to go next? So I'll be a broken record and come back to data gravity. I think in an edge context, you've really got to look at the fact that the cost of processing data is orders of magnitude less than the cost of moving it or even storing it. So I think the real urgency, I don't know, I think 90% of data at the edge is kind of wasted; you can filter through it and find the signal in the noise.
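That filter-at-the-edge economics can be made concrete with a toy sketch. The function and field names here (edge_triage, discard_ratio) are invented for illustration and don't reflect any particular product; the point is simply that comparing readings to an expected value locally lets you forward a couple of anomalies and a compact summary instead of paying to move the whole stream:

```python
def edge_triage(samples, expected, tolerance):
    """Keep only out-of-tolerance readings, plus one compact summary
    for the central data lake. Illustrative sketch, not a real API."""
    anomalies = [(i, s) for i, s in enumerate(samples)
                 if abs(s - expected) > tolerance]
    summary = {
        "count": len(samples),                             # what arrived
        "kept": len(anomalies),                            # what we forward
        "discard_ratio": 1 - len(anomalies) / len(samples)  # the "noise"
    }
    return anomalies, summary

# 100 in-spec readings with two outliers: the bulk of the stream is
# noise we never pay to move or store.
readings = [5.0] * 100
readings[17], readings[63] = 9.1, 0.2
anoms, summ = edge_triage(readings, expected=5.0, tolerance=1.0)
```

In this toy case 98% of the data never leaves the edge, which is the gravity argument in miniature.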
So: processing data to make sure that you're dealing with really good data at the edge first, and figuring out what's worth retaining for future steps. I love the manufacturing example. We have lots of customer examples ourselves where, for quality control on a fast-moving assembly line, you want to take thousands, if not millions, of images and compare frame by frame, exactly according to the schematics, where the device is compared to where it should be, or where the components in that device are compared to where they should be, processing all that data locally and making sure you extract the maximum value before you move data to a central data lake to correlate it against other anomalies or other similarities. That's really key. So really focus on that cost of moving and storing data. Yeah. And one of the last words? Sure. Maana takes an interesting approach; I'm going to up-level a little bit. Whenever we are faced with a customer, or a particular problem for a customer, we try to go with a question-answer approach. So we start with a very specific business question. We don't look at what data sources are available. We don't ask them whether they have a data lake. We literally get their business leaders, their subject matter experts, we literally lock them up in a room and say: you have to define a very specific problem statement, from which we start working backwards. Each problem statement can then be broken down into questions. And what we believe is that any question can be answered by a series of models. And you talked about models; we go beyond just data models. We believe anything in the real world, in the case of, let's say, manufacturing, since we were talking about it, any smallest component of a machine should be represented in the form of a concept. Relationships between people operating that machinery should be represented in the form of models.
And even the physics equations that go into predicting behavior should be able to be represented in the form of a model. Ultimately, what that allows us is that granularity, that abstraction you were talking about: it shouldn't matter what the data source is. Any model should be able to plug into any data source, or into any more sophisticated, bigger model. I'll give you an example of that. We started solving a problem of predictive maintenance for a very large customer. And while we were solving that predictive maintenance problem, we came up with a number of models to go ahead and solve it. We soon realized that within that enterprise there are several related problems, for example, replacement-part inventory management, right? So now that you've figured out which machine is going to fail at roughly what instant of time from now, you can also figure out what parts are likely to fail. So now you don't have to go ahead and order a ton of replacement parts, because you know which parts are likely to fail. And then you can take that a step further by figuring out which equipment engineer has the skill set to go ahead and solve that particular issue. Now, all of that in today's world is somewhat happening in some companies, but it is actually a series of point solutions that are not talking to each other. That's where our patented knowledge graph comes into play, where each and every model is actually a node on the graph, including computational models. So once you build, say, 10 models to solve that first problem, you can reuse some of them to solve the second and third. So it's a time-to-value advantage. Well, you've been a fantastic panel. I think these guys would like to get to a drink at the bar, and there's an opportunity to talk to these people. I think this conversation could go on for a long, long time; there's so much to learn and so much to share in this space. So with that, over to you. I'll just wrap it up real quick.
Thanks, everyone. Give the panel a hand. Great job. Thanks for coming out. We have drinks for the next hour or two here, so feel free to network and mingle. Great questions to ask them privately, one-on-one, or just have a great conversation. And thanks for coming, I really appreciate it, for our Big Data SV event, live streamed. It'll be on demand at youtube.com/SiliconANGLE, all the video, if you want to go back and look at the presentations. Go to youtube.com/SiliconANGLE, and of course siliconangle.com and wikibon.com for the research and content coverage. So thanks for coming. One more time, a big round of applause for the panel. Enjoy your evening. Thanks so much.