Hi, we're at the Palo Alto studio of SiliconANGLE Media and theCUBE. My name is George Gilbert. We have a special guest with us this week, Viru Ramaswamy, who's VP of the IBM Watson IoT platform. He's here to fill us in on the incredible amount of innovation and growth going on in that sector, and we're going to talk more broadly about IoT and digital twins as a broad new construct we're seeing for how to build enterprise systems. So Viru, good to have you. Why don't you introduce yourself and tell us a little bit about your background.

Sure. Thanks, George. Thanks for having me. I've been in the technology space for a long time, and if you look at what's happening in the IoT and digital twin space, it's pretty interesting. The amount of growth, the amount of productivity and efficiency that companies are trying to achieve is just phenomenal, and I think we're now coming off the hype cycle and getting into real action in a lot of businesses. Prior to joining IBM, I was Chief Data Officer and Senior VP of Data Science at Cablevision, where I led the data strategy for the entire company. Before that, at GE, I was one of the first two people who built the San Ramon GE Digital center of excellence, looking at different kinds of IoT-related projects and products, along with leading some of the UX, analytics, collaboration, and social integration work. So that's the background.

Okay, so just to set context: as we were talking before, there was another era when Steve Jobs was talking about the NeXT workstation and object orientation, and then everything was sprinkled with fairy dust about objects. So help us distinguish between IoT and digital twins, which GE was brilliant in marketing, because it was a concept everyone could sort of grok. Help us understand where they fit.

Right. The idea of a digital twin is: how do you abstract an actual physical entity out in the world and create an object model of it? So it's very similar in that sense to what happened in the 90s with Steve Jobs; that object abstraction is what is now happening in the digital twin space, from the IoT angle. The way we look at IoT is that every device out there that can produce a metric, we consider a sensor. It could be as simple as pressure, temperature, or humidity sensors, or as complicated as myocardial sensors in healthcare, and so on. And bringing those sensors into the digital world, the data from the physical world into the digital world, is what makes it even more abstract from a programming perspective.

So help us understand: it sounds like we're going to have these fire hoses of data. How do we organize that into something that someone who's going to work on that data, someone who's going to program to it, can make sense of, the way a normal person looks at a physical object?

That's a great question. We look at a sensor as a device that we can measure from, and the data coming from that device we call a device twin. And then the physical asset itself, which could be an elevator, a jet engine, any physical asset, for that we have what we call the asset twin.
And there's a hierarchical model that we believe has to exist for the digital twin to actually be constructed from an IoT perspective. The asset twin will basically encompass some of the device twins, and we take that and represent the digital twin of that particular physical asset.

So, as we were talking about earlier, an elevator might be the asset, but the devices within it might be the brakes and the pulleys and the operating panels. Exactly. And then the hierarchy of these, or in manufacturing terms the bill of materials, becomes a critical part of the twin. What are some other components of this digital twin?

When we talk about a digital twin, we don't just take the blueprint or schematics. We also model the system, the process, and the operations that go along with that physical asset. When you capture that and model it in the digital world, it gives you the ability to do a lot of things without doing them in the physical world. For instance, you don't have to train your people on the physical system, even mission-critical systems; you can train them in the digital world and then let them operate the physical system whenever it's needed. Or if you want to increase productivity or efficiency with predictive models, you can test all the models in the digital world and then deploy them in the physical world.

Okay, that's great for context setting. So this digital twin is more than just a representation of the structure; it's also got the behavior in there. In a sense it's a sensor and an actuator, in that you could program the real world. What would that look like? What can you do with that sort of approach?

When you have this humongous amount of data coming from the sensors, petabytes of it, once you model it and get the insights, you can take an actionable outcome based on those insights. That could be turning an actuator off or on; simple things, in the elevator case, like open the door, shut the door, move the elevator up or down, and so on. All of these things can be done from the digital world (there's a small code sketch of this twin hierarchy below), and I think that's where it makes a humongous difference.

Okay, so it's a structured way of interacting with the highly structured world around us. That's right. So it's not the narrow definition many of us have been used to, like an airplane engine or the autonomous driving capability in a car; it's more general. Yes, it's more generic than that.

Okay. So now, having set context with the definition, let's talk about business impacts. Operational efficiency may be just the first-order impact, but what about the ability to change products into more customizable services with SLAs, or entirely new business models, including engineer-to-order instead of make-to-stock? Tell us about that hierarchy of value.

That's a great question. You're talking about things like operations optimization and predictive maintenance, all of which you can do from the digital world itself, on the digital twin.
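To make the device-twin/asset-twin hierarchy and the actuation idea concrete, here is a minimal Python sketch. The class and attribute names are hypothetical, chosen for illustration; this is not the Watson IoT platform's object model.

```python
# Illustrative sketch only: names are hypothetical, not Watson IoT APIs.

class DeviceTwin:
    """Digital representation of one sensor/device and its latest readings."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.readings = {}  # e.g. {"temperature": 21.5, "vibration": 0.02}

    def update(self, metric, value):
        self.readings[metric] = value


class AssetTwin:
    """Digital representation of a physical asset (e.g., an elevator),
    composed hierarchically of device twins, like a bill of materials."""
    def __init__(self, asset_id, devices):
        self.asset_id = asset_id
        self.devices = {d.device_id: d for d in devices}

    def send_command(self, command):
        # In a real system this would publish an actuation message
        # (e.g., "open_door") back to the physical asset.
        print(f"[{self.asset_id}] actuating: {command}")


# An elevator asset built from its device twins
elevator = AssetTwin("elevator-42",
                     [DeviceTwin("brake-1"), DeviceTwin("door-panel-1")])
elevator.devices["brake-1"].update("pad_wear_mm", 3.2)
elevator.send_command("open_door")
```

The point of the sketch is the containment: insights computed over the device twins' readings can drive `send_command`, which is the "program the real world" half of the twin.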
But you can also look at different kinds of business models. Instead of a product, you can offer a service built on the product, with different models like power-by-the-hour or pay-per-use. These kinds of business models can be tried out; think about what's happening with Airbnb and Uber, where nobody owns the asset but they still make revenue on a pay-per-use or by-the-hour basis. I don't think that kind of model has been tested much in the physical-asset world, but it could be an interesting one to try.

One thing I picked up at the Genius of Things event in Munich in February was that we really have to rethink software markets, in the sense that IBM's customers sometimes become, in a way, your channel, because they sell to their customers, almost like a supply chain master. Pricing changes too: we've already migrated, or are migrating, from perpetual licenses to software as a service, but now we could do unit pricing or SLA-based pricing, in which case you as a vendor have to get very smart about your, or your customer's, risk in meeting an SLA. It's almost more like insurance, like actuarial modeling.

Correct. The way we want to think about it is: how can we help our customers make their products more monetizable with their customers? When we enter into a service-level agreement with a customer, there's always risk around what we can deliver to make their products and services more successful. So there's a risk component we have to work through with the customer, to make sure the combined model of what our customers deliver contributes to their bottom line and top line.

That implies someone is modeling risk from you, the supplier, through your customer as vendor, to their customer. Right. That sounds tricky. I'm pretty sure we have a lot of financial risk modeling going into our SLAs when we engage with customers.

So that's a new business model for IBM, and for IBM's supply-chain-master type customers, if that's the right word: as this technology pervades more industries, the customers become software vendors, or if not software vendors, services vendors for software-enhanced products. Exactly.

Okay. On another thing: I listened to a briefing by IBM Global Services where they thought this might ultimately lead to far more industries doing engineer-to-order instead of make-to-stock. How would this enable that?

I think the way to look at it is that most IoT-based services will start with co-designing and co-developing with your customers. That's how you're going to start. You're not going to say, here are my 100 data centers, bring your billion devices and connect, and it's all going to happen. We're going to start by co-developing, and then our customers are going to say, hey, by the way, I have these use cases we want to start on. And that's why a platform becomes so important.
Once you have the platform, you can scale: instead of scaling individual silos as one-off vertical use cases, we provide the platform, and these use cases run on top of it. So scaling becomes much easier for the customers.

This sounds like the way a traditional application vendor might turn into a platform vendor, which is a difficult transition in itself: you take a few use cases and then generalize into a platform.

We call those horizontal application services. The horizontal application services run on top of a core platform service, which gives you the underlying capabilities. Take asset management, for instance. You can do asset management on an oil and gas rig, on a power turbine, on a jet engine, across any vertical, but it's one common horizontal application. Most of the time, if you get 80% of your asset management APIs right, you can scale across multiple vertical applications and solutions.

Okay, hold that thought, because we're going to come back to joint development and leveraging and sharing expertise between vendor and customer. Let's talk at a high level. One thing I keep hearing is that in Europe, Industry 4.0 is the hot topic, while in the States it's more digital twins. Help parse that out for us.

The way we believe a digital twin should be viewed is the component view. What we mean by that is you have a knowledge graph representation of the real assets in the digital world. You bring your IoT sensors and connections into the models; you bring your functional, logical, and physical models into the knowledge graph; you give the end consumer an intelligent experience for search, visualization, and analysis; you bring in your simulation models; and then your enterprise asset management, your ERP systems, all of that. When you can build that knowledge graph, that's when the digital twin really connects with your enterprise systems. Sort of bringing the IT and the OT together.

Okay, so to try to summarize, because there are a lot of moving parts in there: you've got the product hierarchy, what product people call the bill of materials, the explosion of parts into assemblies and sub-assemblies, and that provides a structure, a data model. Then the machine learning models, or models of different types, represent behavior. And when you put a knowledge graph across that structure and behavior, is that what makes it simulation-ready?

Yes. You're talking about entities, and connecting those entities with the actual relationships between them. And that's the graph. That's the graph: it holds your nodes and the links, the relationships, between them. And then integrating the enterprise systems, and maybe the lower-level operational systems, is how you affect business processes. Correct. For efficiency or optimization. Yes. Automation.
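As a rough illustration of that component view, here is a minimal knowledge-graph sketch using the open-source networkx library. The entity and relationship names are hypothetical, purely to show how assets, devices, models, and enterprise records might be linked and traversed.

```python
import networkx as nx

# Minimal knowledge-graph sketch: nodes are entities (assets, devices,
# ERP records, models); edges carry the relationship type between them.
g = nx.DiGraph()

g.add_node("elevator-42", kind="asset")
g.add_node("brake-1", kind="device")
g.add_node("bom-item-778", kind="erp_record")
g.add_node("wear-model-v3", kind="predictive_model")

g.add_edge("elevator-42", "brake-1", rel="has_component")
g.add_edge("brake-1", "bom-item-778", rel="listed_in_bom")
g.add_edge("wear-model-v3", "brake-1", rel="predicts_behavior_of")

# Traversal is what lets the twin "connect to enterprise systems":
# e.g., find every ERP record reachable from an asset.
for node in nx.descendants(g, "elevator-42"):
    if g.nodes[node]["kind"] == "erp_record":
        print("ERP record tied to elevator-42:", node)
```

The structure (bill of materials), the behavior (the predictive model node), and the enterprise record all live in one graph, which is the "simulation-ready" framing discussed above.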
Yeah. I mean, take a look at what you can do with shop-floor optimization. You have all the bill of materials you need from your existing ERP systems, and then you have the actual physical parts coming onto your shop floor to manage. Now, depending on whether you want to repair, replace, overhaul, or modify, you look at your existing bill of materials and ask: do I have the part? Do we need to order more? So your ordering system naturally gets integrated into that. Then you integrate the data coming from these models with the availability of your existing assets, and you can ask how fast you can move work out of the shop.

And that's where you translate what's essentially intelligence about an object, a rich object. A rich object, yes. Into operational implications. Yes. Operational processes. Yes.

So let's talk about customer engagement so far. There's intense interest in this; I remember at the Munich event they had to cut off attendance because they couldn't find a bigger venue. That's true. So what are the characteristics of the successful engagements, or the promising ones, if it's a little early to say successful?

I think success in a customer engagement is two-fold. One is to show what's possible: show what's possible with all this IoT connectivity, collection of data, and so on. The second part is to understand the customer. The customer has certain requirements in their existing processes and operations; understand those, and then deliver the solutions and applications they expect. How you bring those together is what matters. At that Munich center you mentioned, we're actually bringing chip manufacturers, sensor manufacturers, device manufacturers, network providers, and systems integrators into the fold to show what's possible. Your partners enable you to get to market faster. The engagement with customers should happen much faster, and you show them what's possible.

It sounds like the chip industry, where for many years Moore's law wasn't deterministic; it wasn't a given that we'd double things every 18 months to two years. It was actually an incredibly complex ecosystem where everyone's product release cycles were synchronized to enable that. Correct. And it sounds like you're synchronizing the ecosystem to keep up. Exactly. The success of a particular organization's IoT efforts is going to depend on how you build that ecosystem and establish it to get to market faster. That's going to be key for all your integration efforts with your customers.

Okay, so let's start narrowly with IBM. What are the key skills you feel you need to own, starting from the base rocket scientists who not only work on machine learning models but come up with new algorithms on top of, say, TensorFlow, all the way up to the people who work with a customer to apply that science to a particular industry? How does that hold together?

So it all starts from the platform.
On the platform side, we have the developers and engineers who build the platform, all the way down to the connection and telemetry protocols like MQTT and HTTPS that make the device connections (a minimal telemetry sketch follows after this exchange). So you need hardcore software development engineers to build the platform products. And then you need the solution builders, who sit in front of the customer understanding what kind of solutions they want to build. Those solutions could be anything: predictive maintenance, simple asset management, remote monitoring and diagnostics. The solution builders and the platform builders work together to make sure the customer gets a holistic approach at final deployment.

And how much of the solution building in the early stages is IBM, versus expertise the customer has to contribute? Almost like agile development, but not two programmers; more like 500 and 500 from different companies.

Yeah, 500 is way too many. This is the concept of co-design and co-development. We definitely want the developers, the engineers, the subject matter experts from our customers, and we need our analytics experts and software developers to sit together with them and understand the use case: how do we bring an optimized solution to this customer?

Okay, and what type of expertise do the developers you contribute bring? If you're working with, say, auto manufacturing, do they have to have automotive software development expertise, or are they more generic on the analytics, with the automotive customer bringing the specific industry expertise?

It depends. In some cases we have dedicated industry units, a dedicated service organization for that particular vertical, so we understand some of the industry knowledge. In some cases we don't, and it comes from the customer. But it has to be an aggregation of the subject matter experts with our platform developers and solution developers, sitting together and finding the solution. Literally going through, think about how we bring in the UX: observational research, what does a typical day of a persona look like? And by the way, we always believe in augmented intelligence, where the human and the machine work together, rather than a complete AI that gives you the answer to everything you ask.

It's a debate that keeps coming up, and Doug Engelbart had his own answer something like 50 years ago: he set the path for modern computing by saying we're not going to replace people, we're going to augment them. Correct. And this is a continuation of that. It's a continuation of that, I suppose.

So with UX design, it sounds like someone on the IBM side might be talking to the domain expert at the customer to ask, how does this workflow work? Exactly. So you have these design thinking sessions with our customers, and based on that we take the knowledge back, build our mockups, our wireframes and visual designs, and the analytics and software behind them, and then we deliver on top of the platform.
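For the telemetry piece mentioned above, here is a minimal, hypothetical sketch of a device publishing one sensor reading over MQTT using the open-source paho-mqtt library. The broker address and topic naming are placeholders, not the Watson IoT platform's actual endpoints or conventions.

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical broker and topic; placeholders, not IBM's actual endpoints.
BROKER = "broker.example.com"
TOPIC = "site-a/elevator-42/brake-1/telemetry"

client = mqtt.Client("brake-1")  # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883, keepalive=60)

# Publish one JSON-encoded sensor reading; QoS 1 asks the broker to
# acknowledge receipt at least once.
reading = {"pad_wear_mm": 3.2, "temperature_c": 41.7}
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```

MQTT's lightweight publish/subscribe model is why it keeps coming up for constrained devices; HTTPS is the heavier but more universally routable alternative Viru mentions alongside it.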
So most of the platform work, the standard table-stakes connection and collection of data, already exists. Then it's one level above: what does this particular customer's solution need? That's where we come in.

Okay. In terms of getting the customer organization aligned to make the project successful, what are some of the different configurations? Who needs to be a sponsor? Where does budget typically come from? How long are the pilots? That sort of thing, to set expectations.

We believe in agile thinking and agile development; that's almost a given now. It depends where the customer comes from. A customer can come directly and sign up to our platform on the existing cloud infrastructure and say, we want to build applications. Great. Then there are some really big customers, large enterprises, who say: give us the platform, we have our own solution folks and want to build with you, but we also want somebody who understands building solutions. So we integrate with their solution developers and they build on top of that. And then we have GBS, which has been doing this for years, decades, almost from the silicon all the way up to the application.

So when the customer is completely outsourcing the custom app they need to build, they go to GBS, Global Business Services. Whereas if they want a semi-packaged app, can they go to the Industry Solutions Group, I assume it's the IoT Industry Solutions Group, which then takes what's almost a framework, or an existing application that needs customization?

Exactly. We have IoT for manufacturing, IoT for retail, IoT for insurance, IoT for you-name-it; we have all these industry solutions. So there's some amount of template that already exists in some fashion. When GBS gets a request, say customer X comes asking for a particular solution, they come back to the IoT Solutions Group and ask: do you already have template solutions we can start from, rather than building from scratch? So your speed to market is again much faster. And then, if something has to be customized, both groups work together with the customer to make it happen, and they leverage our platform underneath for all the connection, data collection, analytics, and so on that goes along with it.

Okay. So tell me, from everything we hear, there's a huge talent shortage. In which roles is the shortage greatest, and how do different members of the ecosystem, platform vendors, solution vendors, supply-chain-master customers and their customers, attract, retain, and train people?

It's a fantastic question. One of the difficulties, both in the Valley and everywhere else, is that there is a skill gap. You want advanced data scientists, advanced machine learning experts, what you'd call AI specialists. Luckily for us, we have about 1,000 data scientists and AI specialists distributed across the globe.

When you say 1,000 data scientists and AI specialists, help us understand which layer they're at.
It can be all the way from a BI person to people who can build advanced AI models on top of an engine or framework like TensorFlow. We have our Watson APIs, from which we build. Then we have our Data Science Experience, which has some of the models built in, on top of the Watson Data Platform. So there are many different ways we can bring AI models and machine learning models to bear.

Okay. And where do you find those people? Not just the bench strength that's been with IBM for years, but to grow that skill base, and where are they attracted to?

That's a great question. The Valley definitely has a lot of talent, but we also go outside. We have multiple centers of excellence, in Israel, in India, in China. It's difficult to get all the talent just from the US or from any one country. So naturally that talent has to be improved and enhanced, all the way from fresh college graduates to more experienced folks in the profession.

What about enhancing the pool of talent you have through productivity improvements, qualitative productivity improvements in the tools, that make machine learning more accessible at any level? The old story of rising abstraction layers, where deep learning might help design statistical models by doing the feature engineering and optimizing the search for the best model, that sort of thing.

Right. Tools are very, very helpful. There are so many, from our own tools to Python tools to scikit-learn and all of that, which can help the data scientists. The key part is the knowledge of the data science itself. For data science you need the algorithms and the statistical background, then you need the application software development background, and then you also need the domain engineering background. You have to bring all of them together.

And we don't have too many Michelangelos who are these all-around geniuses. Exactly. So there's the issue of how you get them to work more effectively together, and then, assuming each of those skills is in short supply, how you make them more productive.

You make them more productive by giving them the right tools and resources to work with; I think that's the best way to do it. In some cases in my organization we just say, okay, we know a particular person is up-skilled in certain technologies and skill sets, and we give them all the tools and resources to go and build. And there's a constant education and training process. In fact, our entire Watson IoT platform can be learned on Coursera today. Interesting. So people can go and learn how to build on the IoT platform from Coursera.

You know, when we talk with clients and with vendors, one of the things we hear, and I think we were early in calling foul on this, is that in the open-source big data infrastructure world, the notion of mix-and-match, roll-your-own pipelines sounded so alluring. But in the end it was only the big internet companies, and maybe some big banks and telcos, that had the people to operate that stuff, and probably even fewer who could build on it. Do we need to up-level or simplify some of those roles?
Because mainstream companies can't have, or won't have, enough data scientists, or the other roles needed to make that whole team work.

Correct. I think it's really a combination of both. One is we need to up-skill students coming through with STEM backgrounds. The other aspect is: how do you up-skill the existing folks in your companies with the latest tools, and how much can you automate, so that people who may not be formally schooled in it can still use the tools to deliver, without having to go through a rigorous curriculum?

What does that look like? Give us an example.

Think of the tools available today. There are a lot of BI folks who can build what BI usually is: trends and graphs and charts that come out of the data, simple things. They understand distributions and so on, but they may not know what a random forest model is. The tools today let you build those models anyway: once you give the data to the model, it gives you the outputs. So they don't have to dig deep and understand the decision tree model and so on. They have the data, they can give it to tools like WEKA, and there are so many data science tools that will give you the outputs. Then they can start building the analytics application on top of that, rather than worrying about how to write a thousand or two thousand lines of code to build the model itself.

So the built-in machine learning models are in an end-to-end integrated tool like Pentaho? Or, I'm trying to think of another example; I've lost one, having a senior moment. These happen too often now.

We do have this in our own Data Science Experience tools; we already have that model support. You can go and call those models in your web portal, call the data and then call the model, and you get all the outputs.

Splunk has something like that. Splunk does, yes. I don't know how functional it is, but it seems oriented towards letting someone who built a dashboard wire up a model; it gives you an example of what type of predictions or what type of data you need. True; in the Splunk case I think it's more of a BI tool supporting a level of data science models in the back. I don't know enough to really judge it. But in our case we have a complete Data Science Experience, where you can start from the minute the data gets ingested: the storage, the transformation, the analytics, all of it can be done in less than 10 lines of code. You just call those functions and it's right there in front of you. End to end, you can do that, and I think that's much more powerful. And there are many, many tools today.

So you're saying the Data Science Experience is an end-to-end pipeline and can therefore integrate what used to be boundaries between separate products? Correct. The boundaries are becoming narrower and narrower in some sense. You can go all the way from data ingestion to analytics in a few clicks or a few lines of code. That's what's happening today. Integrated innovation. Integrated innovation, exactly. An integrated experience, if you will.
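To give a feel for what "ingestion to analytics in under 10 lines" can look like, here is a generic sketch using pandas and scikit-learn. This is illustrative open-source code, not the Data Science Experience API, and the file name and "failed" label column are made up.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_readings.csv")                  # ingest (hypothetical file)
df = df.dropna()                                         # minimal cleansing
X, y = df.drop(columns=["failed"]), df["failed"]         # hypothetical label column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))      # the analytics output
```

The BI-person point from above is visible here: nothing in those lines requires knowing how a random forest works internally, only how to hand data to it and read the output.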
And that's different from the specialized-skills world, where you might have a Trifacta or a Paxata or something similar for the wrangling, then something else for the visualization, like Alteryx or Tableau, and then another tool for the modeling.

Correct. That was true at least a year or so ago. Most data scientists had to spend a lot of their time doing data wrangling, because the models they can call fairly directly; wrangling is where they actually spend the time. How do you get the data, crawl the data, cleanse the data, et cetera. That is all now part of our data platform, already integrated, so you don't have to go through some of those steps.

Where are you finding the first success for that tool suite?

Today it's almost integrated; for instance, in the IoT case, where we ingest the data, we integrate that into the Watson Data Platform. The Watson APIs are a layer above the Watson Data Platform, where we use the more advanced analytics and AI tools; the simpler machine learning models and so on are already integrated into the Watson Data Platform. So it's going to become an integrated experience through and through: connecting Data Science Experience into the Watson IoT platform, and maybe even a little higher, at this quasi-solution layer. Correct, exactly. We're doing that today. And given how much is happening on the edge side of things, mission-critical systems today expect streaming analytics to produce the insights right there and deliver the outcomes at the edge, rather than pushing all the data up to the cloud and bringing it back down.

Let's talk about edge versus cloud. Obviously, for latency and bandwidth reasons, we can't forward all the data to the cloud, but there are different use cases. We were talking to Matei Zaharia at Spark Summit, and one of the use cases he talked about was video. You obviously can't send all the video back, and on an edge device you typically wouldn't have heavy-duty machine learning, but for a video camera you might want to learn what counts as anomalous behavior for that camera. So help us understand some of the different use cases: how much data do you bring back, and how frequently do you retrain the models?

In the case of video, it's true that you want to do a lot of the object recognition and so on in the video itself. We have tools today, cameras outside, where if a van goes by, it detects that particular object in the live video. Real-time streaming analytics; we can do that today. Where I see the market today is in the transaction between the edge and the cloud. We believe the edge is an extension of the cloud, closer to the asset or device, and we believe models are going to get pushed from the cloud closer to the edge, because compute, storage, and networking capacity are all improving; we're pushing more and more compute into the devices.

But when you talk about pushing more of the processing, you're talking more about prediction and inferencing than training. Correct. Because, at least so far, I don't see much of the training needing to be done on the edge. Not yet, at least. We see the training happening in the cloud, and once the model has been trained and you come to a steady-state model, that's the model you push. And when I say model, it could be a bunch of coefficients that get pushed down to the edge. Then when new data comes in, you evaluate it and make decisions on it, create insights, and push actions back to the asset. And that data can be pushed back to the cloud once a day or once a week, whatever the capacity of the device allows.
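Here is a minimal sketch of that "train in the cloud, push coefficients to the edge" pattern, assuming a simple linear model. The coefficient values, feature names, and threshold are placeholders for whatever a real platform would train and transport.

```python
import numpy as np

# --- Cloud side (hypothetical): a trained linear model reduced to coefficients.
# In practice these would come from training, then be serialized and pushed
# to the device over the platform's transport (MQTT, HTTPS, ...).
coefficients = np.array([0.8, -1.2, 0.05])   # placeholder values
intercept = 0.3

# --- Edge side: cheap inference on new readings, no training loop needed,
# which is why this runs fine on something as small as a Raspberry Pi.
def predict_failure_score(reading):
    """Score one sensor reading with the pushed-down model."""
    return float(np.dot(coefficients, reading) + intercept)

reading = np.array([41.7, 3.2, 0.02])        # e.g. temperature, wear, vibration
if predict_failure_score(reading) > 1.0:      # threshold is illustrative
    print("actuate: schedule maintenance / slow the asset")
```

The design point is the asymmetry: training needs the cloud's throughput, while scoring a reading is a dot product, cheap enough to run next to the asset and act immediately.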
And we believe the edge can come at multiple scales. It could be as small as a Raspberry Pi with 128 or 256 MB of RAM, or it could be a one-U or two-U chassis sitting in your local data center, on premises. I've actually heard examples of 32 megs in elevators. Yes, exactly; on the network card. Exactly.

So there might be more of a bandwidth- and latency-oriented platform at the edge, and throughput and volume in the cloud for training. Yes. And then there's the question of whether you have a model at the edge that corresponds to that instance of a physical asset, and then an ensemble, meaning the model that maps to that instance plus a master, canonical model. Does it work that way?

In some cases, yes: you have a master canonical model and what we'd call subsidiary models, based on the asset in front of you. It could be a fleet. Within a fleet of assets you can ask whether one asset behaves similarly to another, and build similarity models on that. But there will also be a model for managing the fleet as a whole, which is a different model from the similarity model. And in terms of operations and optimization, if I want to make certain operations of an asset run more efficiently, that model could be completely different from the one that looks at the similarity of one asset to another.

That's interesting. And that fleet-management model might fit into the information technology systems, the enterprise systems. Correct, exactly.

Okay, I want to go a little lower-level now, on the issue of intellectual property, joint development, sharing, and ownership. It's a nuanced subject, so we get different, definitive-sounding answers from different execs. At the highest level, IBM says: unlike Google and Facebook, we will not take your customer data and make use of it. But there's more to it than that; it's not black and white. Help explain that for us.

I would definitely parrot back what our chairman always says: the customer's data is the customer's data, and the customer's insights are the customer's insights. The way we look at it: if you think of a black-box engine, an analytics engine, whatever it is, the data is the input and the insights are the output, and the inputs and outputs belong to the customer. We don't take the data and marry it with somebody else's data. But we use the data to train the models, and the model is an abstract version of what that engine should be. The more we train it, the better the model becomes.
And then we can use it across many different customers, and as we improve the models, we might go back to the same customer and say, hey, we have an improved model; do you want to deploy this version rather than the previous one? We can go to customer Y and say, here's a model which we believe we can fine-tune with more of your data and then give back to serve you. So it's true that we don't take the data, or the insights, from customer X and share them with customer Y. But the models we make better. How we make that model more intelligent is our job, and that's what we do.

So, to use precise terminology: the black box learns from the customer data, and the insights also belong to the customer. One of the examples we've heard was architecture and engineering consulting for large capital projects, which has a model that's obviously common across that vertical, but also across large capital projects like oil and gas exploration. Yes. There, the model sounds like it's going to get richer with each engagement. Yes. So let's pin down: what in the model is not exposed to the next customer, and what part of the model that has gotten richer does the next customer benefit from?

When we build a model by passing the data through, in some cases the model built from customer X's data may not work with customer Y's data, in which case you build it from scratch again. In other cases it does help, because of similarity in the data: if the data from company X in oil and gas is similar to company Y's in oil and gas, then when you train the model it becomes more efficient, and the efficiency goes back to both customers. So we will do that, but there are places where it really won't work. And what we're also trying to do is build what we call knowledge bundles, where what used to be a long process of training the model can now be shortened using the knowledge we've already gained.

Tell me more about how that works.

Take retail, for instance. When we provide retail analytics, from whatever IoT sensor data comes in, we train the model and use the analytics for ads, pushing coupons, whatever it is. The knowledge you've gained about that retail domain, which could be models of models, meta-models, whatever you've built, can serve many different customers. For the first customer who engages with us, you don't have any data for the model; you're almost starting from ground zero, and that takes longer. When you're starting in a new industry without data, it takes longer to find the saturation point, the optimization point where the model can't go any further. But once you do, you can take that saturated or near-saturated model and improve it with more data coming from other segments.
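One generic way to picture a "knowledge bundle" that shortens training for the next engagement is incremental learning: start from a model fitted in one place and continue fitting on the new customer's data. Here is a minimal sketch with scikit-learn's partial_fit; the data is synthetic, and this illustrates the general idea, not IBM's actual mechanism.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# "Customer X": train a base model; this fitted state is the reusable knowledge.
X_a = rng.normal(size=(500, 4))
y_a = (X_a[:, 0] > 0).astype(int)
model = SGDClassifier()
model.partial_fit(X_a, y_a, classes=[0, 1])

# "Customer Y": don't start from ground zero; continue training the same
# model on the new, similar-domain data instead of refitting from scratch.
X_b = rng.normal(size=(100, 4)) + 0.1
y_b = (X_b[:, 0] > 0).astype(int)
model.partial_fit(X_b, y_b)

print("accuracy on customer Y's data:", model.score(X_b, y_b))
```

As the conversation notes, this only pays off when the two customers' data are genuinely similar; when they aren't, the warm start can hurt rather than help, and you rebuild from scratch.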
Okay. So when you have a model that's gotten better with engagements: we've talked about the black box that produces the insights after taking in the customer data. Inside that black box, at the highest level, we might call it the digital twin, with the broad definition we started with. Then there's a data model, which I guess could also be incorporated into the knowledge graph for the structure. And would it be fair to call the operational model the behavior? Yes: how does the system perform or behave with respect to the data and the asset itself? Okay, and underpinning that are the different models that correspond to the behaviors of different parts of the overall asset. Yes, that's correct.

So if we were to be really precise about this black box, what can move from one customer to the next, and what won't?

The overall model. Suppose I'm using a random-forest, decision-tree-based model. That stays, but the actual coefficients or the feature vectors, whatever I use, could be totally different per customer, depending on the data they provide. The model family remains; in the data science world, in the analytics world, you have a whole plethora of algorithms, all the way from simple classification to very advanced predictive modeling. When you start with a customer, you don't know which model is really going to work for a specific use case; you might get some idea, but you won't know exactly which. Having tested it with one customer, the model could carry over for the same kind of use case at another customer, but the actual coefficients and the depth will differ: in some cases it might be a two-level decision tree, in other cases a six-level one.

Okay, so it's not like you take the model and the features and just let different customers tweak the coefficients for the features.

If you could do that, that would be great, but I don't know whether you really can, because the data is going to change. The data is definitely going to change at some point; in certain cases it might be directly correlated, where it helps, and in certain cases it won't.
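A small sketch of that point, using scikit-learn decision trees: the algorithm choice carries over between customers, but the fitted structure, here just the learned tree depth, comes out differently for each customer's data. Everything below is synthetic and illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

def fit_for_customer(X, y):
    """Same model family for every customer; the fitted tree differs."""
    tree = DecisionTreeClassifier(max_depth=6)   # the shared algorithm choice
    tree.fit(X, y)                               # the per-customer parameters
    return tree

# Customer X: a simple pattern tends to yield a shallow learned tree.
X1 = rng.normal(size=(400, 5))
y1 = (X1[:, 0] > 0).astype(int)
# Customer Y: a more tangled pattern tends to yield a deeper learned tree.
X2 = rng.normal(size=(400, 5))
y2 = ((X2[:, 0] * X2[:, 1] + X2[:, 2]) > 0).astype(int)

for name, (X, y) in {"customer X": (X1, y1), "customer Y": (X2, y2)}.items():
    print(name, "learned tree depth:", fit_for_customer(X, y).get_depth())
```

What transfers between engagements is `fit_for_customer`, the recipe; what doesn't is the fitted tree itself, which is exactly the two-level versus six-level distinction Viru draws.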
So what I'm taking away is that this is fundamentally different from traditional enterprise applications, where you could standardize the business processes and the transactional data they produced. Here it's going to be much more bespoke, because the analytic processes aren't standardized.

Every business process is unique to a business, right? But the Accentures of the world were telling people, when SAP shipped packaged processes that were pretty much good enough, to spend ten times the license fee on customization. Is there a qualitative difference between the processes here and the processes of the old ERP era?

I think it's different. In the ERP era, with those processes, we were really just talking about data management. Here, we're talking about data science. In the data management world, you're just moving data or transforming data: taking the data, transforming it into some other form, running basic SQL queries to get some response, and so on. That's a standard process; there's not much intelligence attached to it. Now you're trying to see what kind of intelligence you can derive from the data by modeling its characteristics, and that becomes a much tougher problem. It's one level higher: intelligence you need to capture from the data itself, in service of a particular outcome from the insights the model gives you.

So the differences are based on, one, different business objectives, and two, data that's not as uniform; in enterprise applications you would standardize the data. Correct, it's not standardized. And because of the sheer disparity of businesses and the kinds of verticals you're looking at, getting a completely unified business model is going to be extremely difficult. Okay. Extremely difficult.

Last question. Back-office systems, I guess the highest level they reached was maybe the CFO, because he had to sign off on the budget for the license and the much, much bigger budget for the SI. But he was getting something like "close your quarter in three days instead of two weeks"; it was a control function. Who do you sell to now for these different systems, and what's the message? How do you sell the business impact differently, more strategically?

For the platforms, we interact directly with the CIOs and CTOs, or the head of engineering. The actual solutions, the insights, we usually sell to the COO, the operational folks, because the COO is responsible for showing productivity and efficiency: how much savings can you drive on the bottom line and the top line? So the insights go to the COOs, in some cases through their CTOs, while the platform itself goes to the enterprise IT folks, the CIO or CTO.

So this sounds like a platform-and-solution sale. Yes. Is that different from the sales motions of other IBM technologies, or is this a new approach?

I would say IBM is transforming along the way. The strategic imperatives we're aligning toward need to be the key goal, because that's where the world is going. As Jeff Bezos has put it, in the old days you needed 70% of the effort to sell a 30% product; today it's a 70% product and you need 30% of the effort to sell it. So the model is completely changing how we interact with customers, and that's what's going to drive it. We're transforming in that area; we're becoming more conscious about the strategic imperatives we want to deliver to the market, and we want to enable our customers with a much broader value proposition.

With the Industry Solutions Group and the Global Business Services teams that work on these solutions, they've already been selling line-of-business, CXO-type solutions. So is this more of the same, just better? Or is this really a higher level than IBM has ever reached in terms of strategic value?
Correct. This is possibly, for the first time in decades, a higher level of value from a strategic perspective, yes.

Okay. On that note, Viru, we'll call it a day. This was a great discussion, and we look forward to writing it up, clipping the videos, and showering the internet with highlights. Thank you, George. Appreciate it. Very nice meeting you. Hopefully we'll get you back soon. It was a pleasure, absolutely.

All right. With that, this is George Gilbert. We're in our Palo Alto studio for Wikibon and theCUBE, and we've been talking with Viru Ramaswamy, VP of the Watson IoT platform. We look forward to coming back with Viru sometime soon. Viru Ramaswamy, thank you.