From the Corinium Chief Analytics Officer Conference, Spring, San Francisco, it's theCUBE.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the Corinium Chief Analytics Officer event in San Francisco, Spring 2018, about 100 people, predominantly practitioners, which is a pretty unique event. Not a lot of vendors, a couple of them around, but really a lot of people that are out in the wild doing this work, and we're really excited to have a return guest. We last saw him at Spark Summit East 2017. Can you believe I keep all these shows straight? I do not. Alfred Essa, he is the VP of Analytics and R&D at McGraw Hill Education. Alfred, great to see you again.

Great being here, thank you.

Absolutely. So last time we were talking it was Spark Summit, it was all about data in motion and data on the fly and real-time analytics, and you talked a lot about trying to apply these kinds of cutting-edge technologies to education. What a concept: you use artificial intelligence and machine learning for people learning. So give us a quick update on that journey. How's it been progressing?

Yeah, the journey progresses. We recently had a new CEO come on board, started two weeks ago, Nana Banerjee, very interesting background, a PhD in mathematics, and his area of expertise is data analytics. So it just confirms the direction of McGraw Hill Education, that our future is deeply embedded in data and analytics.

Right. It's funny, there's an often-quoted fact that if somebody came in a time machine from, let's just pick 1849, and he was here in San Francisco, everything would look different except for Market Street and the schools, right? The way we get around is different. The things we do to earn a living are different. But the schools are just slow to change. Education, ironically, has been slow to adopt new technology.
You guys are trying to really change that paradigm and bring the best and latest and most cutting edge to help people learn better. Why do you think it's taken education so long? You must see nothing but opportunity ahead of you.

Yeah, I think there was a sort of productivity paradox in the 70s and 80s when it came to IT, and I think we have something similar going on. Economists noticed that we were investing lots and lots of money, billions of dollars, in information technology, but there were no productivity gains. And so this was somewhat of a paradox: why are we not seeing productivity gains based on those investments? And it turned out that the productivity gains did appear, but they trailed the investment, because investment in technology by itself is not sufficient. You also have to have business process transformation. So I think what we're seeing is that we are at that cusp where people recognize that technology can make a difference, but it's not technology alone. Faculty have to teach differently. Students have to understand what they need to do. So it's a similar business transformation in education that I think we're starting to see occur now.

Yeah, it's great, because I think the old way is clearly not the way forward. That, I think, is pretty clear. So let's dig into some of these topics, because you're a super smart guy. One of the things we're talking about is this algorithmic transparency. There's a lot of stuff in the news going on. Of course, we have all the stuff with self-driving cars, where there are these black-box machine learning algorithms and artificial intelligence, or augmented intelligence. A bunch of stuff goes in, and out pops either a Chihuahua or a blueberry muffin. Sometimes it's hard to tell the difference. So really it's important to open up the black box, so you can at least explain at some level what was the method that took these inputs and derived this output.
But people don't necessarily want to open up the black box. So what is the state that you're seeing?

Yeah, so I think this is an area where not only is it necessary that we have algorithmic transparency, but I think for those companies and organizations that are transparent, it will become a competitive advantage. And that's how we view algorithms. Specifically, in the world of machine learning and artificial intelligence, there's skepticism, and that skepticism is justified. What are these machines? They're making decisions, making judgments. Just because it's a machine doesn't mean it can't be biased. We know it can be. So I think there are techniques. For example, in the case of machine learning, what the machine learns is the algorithm, and those rules are embedded in parameters. I sort of think of them as the gears in the black box, or in the box. And what we should be able to do is allow our customers, academic researchers, and users to understand, at whatever level they need and want to understand, what the gears do and how they work. So fundamental for us is that we believe the smarter our customers and users are, the better. And one of the ways in which they can become smarter is by understanding how these algorithms work. We think that will allow us to gain a greater market share. What we do see is that our customers are becoming smarter. They're asking more questions. And I think this is just the beginning. So we definitely see this as an area where we want to distinguish ourselves.

So how do you draw the lines? Because there's a lot of big science underneath those algorithms, to different degrees. Some of it might be relatively easy to explain as a simple formula. Other stuff maybe is going into some crazy statistical process that most laymen or business stakeholders may or may not understand. So is there a way you slice it?
Are there kind of orders of magnitude in how much you expose, and the way you expose, within that black box?

I think there is a tension. Traditionally, I think organizations think of algorithms like they think of everything else, as intellectual property: we want to lock down our intellectual property, we don't want to expose it to our competitors. And we do need to have intellectual property. However, I think many organizations get locked into a mental model which I don't think is the right one. We can, and we want, our customers to understand how our algorithms work. We also collaborate quite a bit with academic researchers. We want validation from the academic research community that, yes, the stuff you're building is in fact based on learning science, that it has warrant, that when you make claims that it works, yes, we can validate that. Now, based on the research that we do, the things that we publish, and our collaboration with researchers, we are exposing and letting the world know how we do things. But at the same time, it's very, very difficult to build, engineer, and architect scalable solutions that implement those algorithms for millions of users. That's not trivial. So even if we give away quite a bit of our secret sauce, it's not easy to implement. At the same time, we believe it's good to be chased by our competition; we're just going to go faster. And being more open also creates excitement and an ecosystem around our products and solutions, and it just makes us go faster.

Right, which brings us to another transition point, which we talked about: the old mental model of closed IP systems.
And we're seeing that just get crushed with open source, not only open source movements around specific applications, like we saw you at Spark Summit, which is an open source project, but even within what you would think for sure has got to be core IP, like Facebook opening up their hardware spec for their data centers. And I think what's interesting, because you said the mental model, I love that, is that the ethos of open source, by rule, is that all the smartest people are not inside your four walls. There are more of them outside the four walls, regardless of how big your four walls are. So it's a significant mental shift to embrace, adopt, and engage that community for a much bigger cumulative brain power, rather than trying to just hire the smartest and keep it all inside. So how is that impacting your world? How is that impacting education? How can you bring that power to bear within your products and solutions?

Yeah, I think you were in effect quoting Bill Joy, one of the founders of Sun Microsystems, saying that no matter how many smart people you have in your organization, there are always more smart people outside of it, right? So how can we entice, lure, and collaborate with the best and the brightest? One of the ways we're doing that, around analytics and data and learning science, is that we've put together an advisory board of learning science researchers. These are the best and brightest learning science researchers, data scientists, learning scientists. They're on our advisory board and they help set, and give us guidance on, our research portfolio. And that research portfolio is not blue sky research; we're not Google and Facebook. It's very much applied research. We try to take the known knowns in learning science.
We go through a very quick, iterative innovation pipeline where we do research, move a subset of those results to product validation, and then another subset of that to product development. But this is under the guidance and advice of, and in collaboration with, the academic research community.

Right, right. And you guys are at an interesting spot, because people learn one way, and you've mentioned a couple of times in this interview using good learning science as the way that people learn. Machines learn a completely different way, because of the way they're built and what they do well and what they don't do so well. And again, I joked before about the Chihuahua and the blueberry muffin, which is still one of my favorite pictures; if you haven't seen it, go find it on the internet, you'll laugh and smile, I promise. But you guys are really trying to bring together the latter to really help the former. So where do those things intersect? Where do they clash? How do you meld those two methodologies together?

Yeah, that's a very interesting question. I think where they do overlap quite a bit is that, in many ways, machines learn the way we learn. What do I mean by that? In machine learning and deep learning, the way machines learn is by making errors. There's a technical concept in machine learning called a loss function, or a cost function. It's basically the difference between your predicted output and the ground truth. And then there's some sort of optimizer that says, okay, you didn't quite get it right; try again, make this adjustment. So that's how machines learn. They make lots and lots of errors, and there's something behind the scenes called the optimizer, which is giving the machine feedback. And that's how humans learn: by making errors and getting lots and lots of feedback. So that's one of the things that's been absent in traditional schooling. You have a lecture mode and then a test.
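The loss-and-optimizer loop described here can be sketched in a few lines of Python. This is a minimal illustration of the general idea (gradient descent on a squared-error loss), not McGraw Hill's actual code; the data and parameter names are made up.

```python
# Learning by making errors: a loss function measures the gap between the
# prediction and the ground truth, and an optimizer nudges the parameter
# to shrink that gap, step by step.

def loss(w, x, y_true):
    """Squared error between the prediction w*x and the ground truth."""
    y_pred = w * x
    return (y_pred - y_true) ** 2

def gradient(w, x, y_true):
    """Derivative of the loss with respect to the parameter w."""
    return 2 * (w * x - y_true) * x

# Ground truth relationship: y = 3 * x. Start from a wrong guess and let
# repeated feedback correct it.
x, y_true = 2.0, 6.0
w = 0.0                  # initial (wrong) parameter
learning_rate = 0.05     # size of each corrective adjustment

for step in range(100):
    # The "optimizer": try again, make this adjustment.
    w -= learning_rate * gradient(w, x, y_true)

print(round(w, 3))  # converges toward 3.0
```

Each pass through the loop is one round of "make an error, get feedback, adjust," which is the parallel to human practice-and-feedback drawn in the conversation.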
So what we're trying to do is incorporate what's called formative assessment. This is just feedback. Make errors, practice. You're not going to learn something, especially something that's complicated, the first time. You need to practice, practice, practice, and you need lots and lots of feedback. So that's very much how we learn and how machines learn. Now, the difference is that, technologically and in the current state of knowledge, machines can now do many things really well, but there are still some things, many things, that humans are really good at. So what we're trying to do is not have machines replace humans, but have augmented intelligence: take the things that machines can do really well and bring that to bear in the case of learning, along with insights that we provide to instructors and advisors. So I think this is the great promise now of combining the best of machine intelligence and human intelligence.

Which is great. We had Garry Kasparov on, and it comes up time and time again: the machine is not better than a person, but a machine and a person together are better than a person or a machine alone, to really add that context.

Yeah, and the dynamics of how do you set up the context so that both are working in tandem, in combination.

Right, right. All right, Alfred, I think we'll leave it there, because I think there's not a better lesson that we could extract from our time together. So thank you for taking a few minutes out of your day, and great to catch up again.

Thank you very much.

All right, he's Alfred, I'm Jeff, you're watching theCUBE from the Corinium Chief Analytics Officer event in downtown San Francisco. Thanks for watching.

Thank you.