Live from the Javits Center in New York City, it's theCUBE, covering Inforum 2017. Brought to you by Infor. Welcome back to theCUBE's coverage of Inforum here at the Javits Center in New York City. I'm your host, Rebecca Knight, along with my co-host Dave Vellante and Jim Kobielus, who is the lead analyst for AI at Wikibon. So guys, we're wrapping up day one of this conference. What do we think? What did we learn? Jim, we've been here at the desk interviewing people and we've certainly learned a lot from them, but you've been out there talking to people off the record, I should say. So give us your impressions. I'm not going to name names, if I may. I want to clarify something from this morning. I said this morning that the implied valuation was like $3.7, $3.8 billion. Charles Phillips indicated to us off camera that it was actually more like $10.5 billion. But I still can't make the math work, so I'm working on that. I suspect what happened is that was a pre-debt number. Remember, they have a lot of debt. I will figure it out, find out, and report back. So I just wanted to clarify that. Run those numbers, okay. I'll call George. Right, but Jim, back to you. What do you think is the biggest impression you have of the day in terms of where Infor is? Yeah, I've had the better part of this day to absorb the Coleman announcement. Of course, AI is one of my core focus areas at Wikibon, and it really seems to me that Infor's direct competitors in the cloud ERP space, SAP, Oracle, Microsoft, all have AI investments and strategies going forward in their ERP portfolios. So I was going back and doing my own research today just to get my head around where Coleman puts Infor in the race, because it's a very competitive race. I referred to it this morning, maybe a little extremely, as a war of attrition.
But what I see, what I think, is that Coleman represents a milestone in the development of the cloud ERP market. SAP, Oracle, and Microsoft are all going deep on AI in ERP, but none of them has a comprehensive framework or strategy to AI-enable their suites for human augmentation, natural language processing, conversational UIs, recommenders in line with the whole experience of inventory management and so forth. What Infor has done with Coleman is lay out more than just a framework and a strategy; they've got a lot of other assets behind the whole AI-first strategy that I think will put them in good stead in terms of innovating within their portfolio going forward. One of which is they've got this substantial infusion of capital from Koch Industries, of course, and Koch is very much, as we've heard today at this show, behind where the Infor team under Charles is going with AI-enabling everything. But also, the Birst team is now on board with Infor; the acquisition closed last month. Brad Peters spoke this morning, and of course he spoke yesterday at the analyst pre-brief, so Dave and I have had more than 24 hours to absorb what they're saying about where Birst fits in. Birst has AI assets already, and Infor is very much committed to converging the best of what Birst has with where Coleman is going throughout their portfolio. What Infor announced this morning is all of that, plus the fact that they've already got some Coleman-ized, that's the term I'm using, applications in their current portfolio. So it's not just a future statement of direction; they've already done significant development and productization of Coleman. And they've also announced a commitment within the coming year to introduce Coleman features throughout each of the industry vertical cloud suites. Like I said, human augmentation, plus automation, plus assistants that are chatbots in line.
Infor has a far more ambitious and, I think, potentially revolutionary strategy to take ERP away from the legacy architectures that have all been based on deterministic business rules, a rickety thicket of business rules that need to be maintained, bringing it closer to the future of cognitive applications, where the logic will be in predictive, data-driven algorithms that are continually learning, continually adapting, continually optimizing all interactions and transactions. That's the statement of direction. I think that Infor is on the path to making it happen in the next couple of years in a way that will probably force SAP, Oracle, and Microsoft to step up their game and bring forward their cognitive or AI strategies. So I want to talk some more about the horses on the track, but first I want to understand what it is. Because what the competitors are going to say is, oh, it's Alexa, okay? Okay, so it is, partly. Yeah, sure, it's very reductive. That's their job, to reduce. Yeah, that's right. You lived in that world for a while. Actually, that was not your job. So if you don't understand the technology, you're just some very smart guy who talks a good talk. Yeah, okay. So what we heard yesterday in the analyst meeting, and maybe you found out more today, was: it's a conversational UX, so it's chat wired into the APIs, and they said that's table stakes. It augments, it automates. An example is early payment versus my cash on hand: should I take the early payment deal and take the discount, or not? It helps decide those decisions, which, if you have a lot of volume, can be complex. And it advises, it uncovers insights. Now, what I don't know is how much of the IP is OEM'd essentially from Amazon and how much is actual Infor IP. Do you know? Good question, good question. Whether it's all organically developed so far or whether they sourced it from partners is an open issue that maybe we... Questions for Duncan tomorrow.
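The early-payment example Dave raises is a classic working-capital calculation, and it gives a feel for the kind of decision an AI assistant would automate at volume. The sketch below is purely illustrative (the function name, terms, and thresholds are our assumptions, not anything Infor disclosed): take a supplier's early-payment discount if its annualized return beats your cost of capital and you have the cash to cover it.

```python
# Hypothetical sketch of the early-payment decision discussed above: take a
# "2/10 net 30"-style discount if its annualized return beats the buyer's
# cost of capital AND cash on hand covers the early payment. Illustrative
# only; not Infor's actual Coleman logic.

def take_early_payment(invoice: float, discount_pct: float,
                       days_early: int, cost_of_capital: float,
                       cash_on_hand: float) -> bool:
    """Return True if paying early is the better use of cash."""
    discounted = invoice * (1 - discount_pct)
    if discounted > cash_on_hand:
        return False  # can't cover the early payment out of cash
    # Annualize the discount earned by paying `days_early` days sooner
    annualized_return = (discount_pct / (1 - discount_pct)) * (365 / days_early)
    return annualized_return > cost_of_capital

# 2% discount for paying 20 days early: ~37% annualized, well above an 8%
# cost of capital, so take the deal (given sufficient cash on hand).
print(take_early_payment(100_000, 0.02, 20, 0.08, 250_000))  # True
```

At high transaction volume, the interesting part is not this formula but letting a system weigh thousands of such offers against a shared, fluctuating cash position, which is where the "advises and uncovers insights" claim comes in.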
We'll ask Duncan tomorrow, exactly. Okay, so who are the horses on the track? I mean, obviously there's Google, there's Amazon, there's, I guess, Facebook, even though they're not competing in the enterprise, there's IBM Watson, and then you mentioned Oracle and SAP. Well, here's the thing. You named at least one solution provider, IBM, for example, that obviously provides a really sophisticated cognitive AI suite under Watson, but it's not embedded within an ERP application suite. No, it's purpose-built for whatever. It's purpose-built for standalone deployment into all manner of applications. What Infor is not doing with Coleman, and they made that very clear, is building a standalone AI platform. Which strategy do you like better? Do I like it? They're both valid strategies. First of all, Infor is very much a SaaS vendor going forward, and they haven't given any indications of going into PaaS. I mean, that's why they partnered with Amazon, for example. So it makes sense for a SaaS vendor like Infor to do what they've done, which is that they're apparently not going to allow their customers to decouple the Coleman infrastructure from everything else that, you know, Infor makes money on. Which for them is the right strategy. Yeah, that's the right strategy for them. I'm not saying it's a bad strategy for anybody who wants to be in Infor's market. So what does an Oracle or an SAP, or for that matter a Workday, do? Yeah. I mean, ServiceNow made some AI announcements at their Knowledge event, so they're spending money on that. I think that was organic IP, or, I don't know, maybe they're using open source AI components. Yeah, yeah, sure. A, they need to have a cloud data platform that provides the data upon which to build and train the algorithms. Clearly, Infor has cast its lot with AWS; SAP, Microsoft, Oracle, and IBM all have their own cloud platforms. So GT Nexus plays into that data corpus?
Yeah, because GT Nexus is very much a commerce network, in other words, EDI for this century. That is a continual, free-flowing, ever-replenishing pool of data upon which to build and train the algorithms. Okay, but let me get this right. You said number one, you need the cloud platform with data. Yeah, and you need the conversational UI, the, I'll use a reductive term, chatbot. You know, digital assistants. You need that technology. And it's very much a technology in the works. Everybody's building chatbots, but that doesn't mean that every customer is using them or that they perform well. But chatbots are at the very heart of a new generation of application development, conversational interfaces, which is why at Wikibon we are doing a study on the art of building and training and tuning chatbots. Because they are so fundamental to the UX of every product category in the cloud. And non-cloud as well. And only getting more so. IoT, desktop applications, everything's moving towards more of a conversational interface. You know, for starters. So you need a big data cloud platform, you need a chatbot framework for building the engagement and the UI and all that. You need, obviously, machine learning and deep learning capabilities, open source. We're looking at a completely open source stack in the middle there for all the data. You need, obviously, things like TensorFlow for deep learning, which is becoming the standard there. You need things like Spark for machine learning, streaming analytics, and so forth. You need all that plumbing to make it happen. But in terms of ERP, of course, you need business applications. You need to have a business application stack to infuse with this capability. And there's only a hard core of really dominant vendors in that space; we've named them all.
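The "chat wired into the APIs" idea called table stakes earlier reduces, at its simplest, to intent matching that routes an utterance to a back-end call. This is a toy sketch with made-up endpoint names; real stacks use trained NLU models, not keyword rules, which is exactly why the training question matters so much.

```python
# Toy illustration of a conversational UI wired into application APIs:
# map a user utterance to a (hypothetical) back-end endpoint. Real
# chatbots replace the keyword rules with a trained intent classifier.

def route_intent(utterance: str) -> str:
    """Map a user utterance to an illustrative back-end API call."""
    text = utterance.lower()
    rules = [
        ("cash on hand", "GET /finance/cash-position"),
        ("early payment", "GET /payables/discount-offers"),
        ("inventory", "GET /inventory/levels"),
    ]
    for keyword, api in rules:
        if keyword in text:
            return api
    return "fallback: hand off to a human"

print(route_intent("Should I take the early payment discount?"))
# "GET /payables/discount-offers"
```

The fallback branch is the part practitioners obsess over: a conversational UI is only as good as its handling of utterances the model was never trained on.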
But the precious commodity seems to be data. Yeah. Right? The precious commodity is data, both to build the algorithms and, on an ongoing basis, to train them. See, the thing is, training is just as important as building the algorithms, because training makes all the difference in the world in whether a predictive ML algorithm actually predicts what it's supposed to predict or doesn't. Without continual retraining, the algorithms will lose their ability to do predictions and classifications and pattern recognition. So the vendors in the cloud arena who are in a good place are the Googles and the Facebooks and others who generate this data organically as part of their services. Google's got YouTube. And YouTube is a mother lode of video and audio and so forth for training all the video analytics, all the speech recognition, everything else that you might want to do. But also, very much, when you look at natural language processing, there's text data, social media data. I mean, everybody is tapping into the social media fire hose to tune all the NLP on an ongoing basis. That's very, very important. So the vendor that can assemble a complete solution portfolio that provides all the data is in a strong position. And also, very much, this is something people often overlook: training increasingly involves labeling the data, and labeling needs a hard core of human resources, increasingly crowdsourced, to do that training. That's why companies like CrowdFlower and Mighty AI, and of course Amazon with Mechanical Turk, are becoming ever more important. They are the go-to solution providers in the cloud for training these algorithms to keep them fit for purpose. All right, Rebecca, what are your thoughts as a sort of newbie to Infor and Inforum? I'm a newbie, yes, and, well, to be honest, yes, I'm a newbie, and I have only an inch-wide, inch-deep understanding of the technology. But one thing that has really resonated with me. You hide it really well.
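Jim's point that models degrade without continual retraining is usually operationalized as drift monitoring: track accuracy on fresh labeled data and trigger retraining when it falls below a threshold. A minimal sketch, with illustrative names and a made-up threshold:

```python
# Minimal sketch of the retraining discipline described above: watch a
# deployed model's accuracy on recent labeled examples and flag it for
# retraining once accuracy drifts below an (arbitrary, illustrative) bar.

def needs_retraining(recent_outcomes: list, threshold: float = 0.9) -> bool:
    """recent_outcomes: True where the model's prediction matched reality."""
    if not recent_outcomes:
        return False  # no fresh labels yet, nothing to judge
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy < threshold

# Right 85 of the last 100 times: below the 90% bar, so retrain.
print(needs_retraining([True] * 85 + [False] * 15))  # True
```

Note that the hard part in practice is the input to this check, namely the stream of fresh labeled outcomes, which is exactly why labeling services like the crowdsourcing vendors Jim names have become important.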
Well, thank you, I appreciate that, thank you. What I've really taken away from this is the difficulty of implementing this stuff. And this is what you hear time and time again: the technology is tough, but it's the change management piece that trips up these companies, because of personalities who are resistant to it and just the entrenched ways of doing things. It is so hard. Yeah, change management, yes, I agree. There are so many moving parts in these stacks. It's incredible. If we just focus on the moving parts that represent the business logic driving all of this AI, that's a governance mess in its own right, because what you're governing, I mean version control and so forth, is both traditional business rules that drive all this application code, plus all these predictive algorithms, model governance and so forth. I mean, just making sure that you're controlling versions of all of that, that you've got stewards managing the quality of all of it, and that it moves in lockstep, so when you change the underlying coding of a chatbot, for example, you're also making sure to continue to refresh and train and verify that the algorithms that were built along with that code are doing their job, and so forth. In other words, there's all this metadata and all this other stuff that needs to be managed in a unified way within what I call a business logic governance framework for cloud, data-driven applications like AI. And in companies that are so big and where people are so disparately located, these are the biggest challenges that companies are facing. Yeah, you're going to get your data scientists in, let's say, China to build the deep learning algorithms and probably to train them.
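The "lockstep" versioning Jim describes, where a chatbot code change forces re-verification of the models shipped with it, can be sketched as a simple release registry. Everything here (the names, the approval set) is hypothetical, meant only to make the governance idea concrete:

```python
# Illustrative sketch of code/model lockstep governance: a release ships
# only if its exact (code version, model version) pair was validated and
# approved together. Names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    code_version: str
    model_version: str
    validated: bool  # did this model pass its checks against this code?

def deployable(release: Release, approved: set) -> bool:
    """Ship only exact code/model pairs that were approved as a unit."""
    return release.validated and (
        (release.code_version, release.model_version) in approved
    )

approved_pairs = {("chatbot-2.3.1", "intent-model-7")}
# Same code with a newer, unapproved model is blocked until re-verified.
print(deployable(Release("chatbot-2.3.1", "intent-model-7", True), approved_pairs))  # True
print(deployable(Release("chatbot-2.3.1", "intent-model-8", True), approved_pairs))  # False
```

The design point is that neither artifact is governed alone: promoting a new model, or new code, invalidates the pair until both are validated together again.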
You're probably going to get coders in Poland or in Uruguay or somewhere else to build the code, and over time there will be different pockets of development all around the world, collaborating within a unified DevOps environment for data science. Another focus of mine, by the way: DevOps for data science. Over time, these applications, like any application, will see year after year after year of change. The people who are building and tuning and tweaking this stuff now probably won't be the people who built the original five years from now, as this stuff gets older, so you're going to need to manage the end-to-end lifecycle, documentation and change control and all that. It's an ongoing DevOps challenge within a broader development initiative to keep this stuff from flying apart from sheer complexity. So, I don't know, Jim, if you can help me answer this, this might be more of an Infor issue, but when we heard from the analyst meeting yesterday, Soma, their chief technical guy, who's been on theCUBE before in New Orleans, very sharp dude, two things stood out. Remember that architecture slide they showed? They showed the slide of Xi and the architecture. And obviously they're building on the AWS cloud. So their greatest strengths are there; in my view, anyway, the Achilles' heel is here. One is edge, so let's talk about edge. Edge to cloud: it's very expensive to move data into the cloud, and that's where we heard today all the analysis is going to be done. We know you're really only going to be moving the needles, presumably, into the cloud. The haystack is going to stay at the edge, and the processing is going to be done at the edge. It's going to be interesting to see how Amazon plays there. We've seen Amazon make some moves to the edge with Snowball and Greengrass and things like that. But it just seems that analytics are going to happen at the edge, otherwise it's going to be too expensive. The economic model doesn't favor edge to cloud.
That's one caveat. The second was the complexity of the data pipeline. We saw a lot of AWS in that slide yesterday. I mean, I wrote down DynamoDB, Kinesis, S3, Redshift, and I'm sure there's some EC2. These are all discrete, sort of one-trick-pony platforms, each with a proprietary API, and that data pipeline is going to get very, very complex. So here's... You called them flywheel platforms, I think, when you were talking to Charles Phillips. Well, yes, but when you talked to Andy Jassy, he said, look, we want to give access to those primitive APIs because we don't know what the market's going to do, so we have to have control. It was all about control. But that said, it's this burgeoning collection of at least 10 to 15 data services. So the end-to-end question I have is this: Oracle threw down the gauntlet in cloud. They said they'll be able to service any user request in 150 milliseconds. What is the end-to-end performance going to be as that data pipeline gets more robust and more complicated? I don't know the answer, but I think it's something to watch. Can you deliver that in under 150 milliseconds? Can Oracle even do that? Who knows? You can if you deliver more of the actual logic, the machine learning and code, to the edge. I mean, close to the user, close to the point of decision, yes. Keep in mind that the term pipeline is ambiguous here. On one hand, it refers, in many people's minds, to the end-to-end path of a packet, for example, from source to target application. But in the context of development or DevOps, it refers to the end-to-end lifecycle of a given asset, code or a machine learning model and so forth. In the context of data science, much of the training, the whole notion of training a machine learning model, say, for predictive analysis, doesn't happen in real time, in line with the actual executing application.
That happens, but it's not in the critical path of the performance of the application. Much of that will stay in the cloud, because that's massively parallel processing of TensorFlow graphs and so forth. It doesn't need to happen in real time. What needs to happen in real time is that the trained algorithms, the TensorFlow models, get pushed to the edge, and they'll execute on increasingly nanoscopic platforms, like your smartphone and like the smart sensors embedded in your smart car and so forth. So most of the application logic, probabilistic machine learning, will execute at the edge. More of the pipeline functions, like model building, model training, data ingestion, data discovery, will not happen in real time, but will happen in the cloud. It need not happen at the edge. Kind of geeky topics, but still ones that I wanted to bring up and riff on a little bit. But let's bring it back, Rebecca. And this is the thing, there's going to be a lot more to talk about. We love geeking out here, Rebecca. We apologize for that. We do, we do indeed. It's okay, it's okay. Dave indulges me, he humors me. No, you love it too. Of course, no, I mean, I learn every time I try to describe these things and get smart people like Jim to help unpack it. And we'll do more unpacking tomorrow at day two of Inforum 2017, where we will all return. Jim Kobielus, Dave Vellante, I'm Rebecca Knight. We will see you back here tomorrow for day two.
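The cloud/edge split Jim lays out, heavy training in the cloud and only the trained model pushed down for local inference, is worth making concrete, since it's also Dave's economic point: raw data stays at the edge, and only the "needles" cross the wire. This toy sketch uses a trivial threshold "model" standing in for real training:

```python
# Toy illustration of the split described above: "training" happens
# centrally on historical data; only the resulting model (here, a single
# threshold) is pushed to the edge, where cheap local inference decides
# which readings (the "needles") are worth sending to the cloud.

def train_in_cloud(history):
    """Toy 'training': learn a threshold from (reading, label) pairs,
    where label 0 = normal and 1 = anomalous."""
    normals = [x for x, label in history if label == 0]
    return max(normals) * 1.1  # flag anything well above normal range

def edge_inference(reading: float, threshold: float) -> bool:
    """Runs on the device; True means 'ship this one to the cloud'."""
    return reading > threshold

threshold = train_in_cloud([(1.0, 0), (1.2, 0), (0.9, 0), (5.0, 1)])
readings = [1.1, 1.0, 6.2, 0.8]  # raw haystack, stays local
needles = [r for r in readings if edge_inference(r, threshold)]
print(needles)  # [6.2] -- only the anomaly crosses the wire
```

Swap the threshold for a compiled deep learning model and the shape is the same: expensive, batch-oriented learning in the cloud; lightweight, real-time scoring at the point of decision.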