Hey, welcome to Analytics Unleashed. I'm Robert Christensen, your host today. Thank you for joining us. Today we have three quick wins that drive big gains in enterprise workloads. We have Olaf with Ericsson, John with ORock, and Dragan with DXC. Welcome, and thank you for joining me, gentlemen. Yeah, good to be here. Thank you. Good to have you. Hey Olaf, let's start with you. What big problems are you trying to solve today that drive those quick wins? What's top of mind? Yeah, when we started looking into microservices for our financial platform, we immediately saw the challenges we would face, and we wanted a strong partner. We had a good relationship with HPE before, so we turned to HPE because we knew they had the technical support we needed, the capabilities our platform required to fulfill our requirements, and also the reliability we would need. So tell me, I think this is really important: you're moving into the digital wallet space, is that correct? Yeah, that's correct. We run a financial platform, so we span the world delivering financial services to our end customers. Well, that's not classically what you hear about Ericsson diving into. What really started you down that path, and specifically toward these big wins around digitization? What we could see early on was that we have mobile networks, so we have a strong user base within those networks. And we started in the emerging markets, where you normally have a lot of unbanked people, and those people were the ones we wanted to target. So instead of going out and using cash, for example, to buy your fruit or pay your electricity bill, you could use your mobile wallet. That's how it all started. And now we're also turning to the developed markets, the Western part of the world and so on.
That's fantastic. And I want to talk to John here. John's with ORock, and he's one of the early adopters of container platforms in the United States federal government. Tell us a little bit about that program and what's going on with it, John. Yeah, sure, absolutely, I appreciate it. So with ORock, what we've done is develop one of the first FedRAMP-authorized container platforms, running in our Moderate and soon-to-be High cloud. Building on the Ezmeral platform gave us the capability of offering customers, both commercial and federal, the flexibility of running their workloads in an as-a-service model that they can customize. Typically, customers either have to build it internally, or if they go to the cloud, they have to take whatever resources are available and tweak their designs to fit. In this architecture, built on open source and on our own infrastructure, we offer very low cost and zero-egress capability, but also the workload processing they need to run data analytics, machine learning, and other types of high-performance processing as we move forward in this computer age. So John, you touched on a topic I think is really critical: you mentioned open source. Why is open source a key aspect of this transformation we're seeing coming over the next decade? Yeah, sure. We shifted early on in the company to open source only, to preserve flexibility. We didn't want to be locked into one particular platform. So when we built the cloud infrastructure, we went with open source as an open architecture that we can scale and grow within. Because of that, we were one of the very first FedRAMP authorizations built on open source, not on a specific proprietary platform.
And what we've seen from that is increased performance, as well as the flexibility to add components that you typically don't get on other platforms. So it was a good move, and one the customer will definitely benefit from. And that's huge, because performance leads to better cost. I'm just super, super happy with all the advanced work you all are doing there; it's fantastic. And Dragan, you're in a space I think is really interesting. You're dealing with what everybody likes to talk about, and that's autonomous vehicles. You're working with automobile manufacturers, and you're dealing with data at a scale that is unprecedented. Can you open that door for us and talk about these big, big wins you're trying to get over the line with these enterprises? Yeah, absolutely, and thank you, Robert. We approach leveraging Ezmeral from the data fabric angle. We have fully integrated the Ezmeral Data Fabric into our Robotic Drive solution. Robotic Drive is a game changer, as you mentioned, in accelerating the development of autonomous vehicles. It's an end-to-end, hyperscale machine learning and AI platform based on the Ezmeral Data Fabric, and it's used by some of the largest car manufacturers in the world for developing their autonomous driving algorithms. I think all of us in technology are following the same kind of news and research across the globe in this area, so we're pretty proud to be one of the leaders in providing hyperscale machine learning platforms for car manufacturers. Some of them I cannot talk about, but BMW is one of the manufacturers we provide these kinds of solutions to, and they have publicly spoken about their D3 platform, the data-driven development platform.
Just to give you an idea of the scale, as Robert mentioned: daily, we collect over 1.5 petabytes of raw data. You say daily? Yeah, daily. The storage capacity is over 250 petabytes and growing. There are over 100,000 cores and over 200 GPUs on the compute side. Over 50 petabytes of data are delivered every two weeks into our hardware-in-the-loop testing. And we have thousands of engineers and data scientists accessing the relevant data and developing machine learning models on a daily basis. Part of it is simulation. Simulation cuts the cost as well as the time of developing autonomous driving algorithms, and simulations take up probably 75% of the research being done on this platform. That's amazing, Dragan. The more I get involved with this, and I've been part of these conversations with a number of the folks involved in it, the computer scientist in me, my geekiness, my little propeller head, starts coming out, and it just blows my mind. So I'm going to pivot back over to Olaf. You're talking about a global network of financial services. Yeah, correct. And with the flow of transactional data, typically non-relational transactional data flows, with actual transactions going through, you have issues of potential fraud, you have issues of safety, and you have multi-geographic, regional problems with data and data privacy. How are you addressing that today? To answer that question, today we have managed to solve it using the container platform together with the data fabric. But as you say, we need to span different regions. We need to keep the data as secure as possible, because we have a lot of legal aspects to look into: if our data disappears, your money disappears too. So security and reliability of the platforms are a really important area for us.
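The scale figures Dragan quotes lend themselves to a quick back-of-envelope sketch. The inputs below are the numbers from the conversation; the derived retention and throughput figures are illustrative arithmetic (assuming decimal petabytes, 10^15 bytes), not claims from the interview:

```python
PB = 10**15  # decimal petabyte, in bytes

daily_ingest_pb = 1.5   # raw data collected per day
capacity_pb = 250       # total storage capacity (and growing)
hil_delivery_pb = 50    # delivered to hardware-in-the-loop testing...
hil_period_days = 14    # ...every two weeks

# Days of raw ingest the current capacity could hold at that rate
retention_days = capacity_pb / daily_ingest_pb
print(f"~{retention_days:.0f} days of raw ingest fit in 250 PB")  # ~167 days

# Average sustained throughput implied by the hardware-in-the-loop deliveries
hil_gb_per_s = hil_delivery_pb * PB / (hil_period_days * 86_400) / 10**9
print(f"~{hil_gb_per_s:.1f} GB/s sustained into the test rigs")  # ~41.3 GB/s
```

In other words, the quoted capacity holds only about five and a half months of raw ingest, which is part of why simulation, tiering, and a distributed data lake matter at this scale.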
So that's why we went this way, to make sure we have a strong partner who can help us with this. Just looking at where we are: we're deployed in more than 23 countries today, and our systems currently process more than 900 million US dollars per day. So there is a lot of money passing through, and you need to take security seriously. It's a very important point, right? It really is, it really is. And so, John, you're obviously dealing with a lot of folks in government agencies that go by three-letter acronyms, spanning various degrees of security. When you say FedRAMP, could you articulate why the Ezmeral platform was the one you selected for that FedRAMP-compliant container platform? I think that speaks to the industrial strength of what we're talking about. Yeah, it all comes down to being able to offer a product that's secure, that customers can trust. FedRAMP has very stringent security requirements, including monthly POA&Ms, plans of action and milestones, reviews and updates that need to be done if not on a daily basis then on a monthly basis. So there's a lot that goes on behind the scenes that customers don't see. And in selecting the HPE Ezmeral platform for containers, one of the key strengths we looked at was the Ezmeral fabric. It's all about the data: securing the data, moving the data, transferring the data. From a customer's perspective, they want to be able to operate in an environment they can trust, no different than turning on their lights or having water in their utilities. Containers with the Ezmeral platform, built on ORock's infrastructure, give that capability, and FedRAMP provides the security regime tied to the platform that we're able to follow.
So it's government guided, which includes NIST and hundreds of controls that customers typically don't have the time or the capability to address. Our commercial customers benefit, and our federal customers, which you mentioned, are able to follow along and check the boxes to meet those requirements. The container platform gives us a capability where we can move files through the Ezmeral fabric, then run the workloads in the containers themselves with isolation. And the security element of taking Ezmeral through FedRAMP gave us the ability to make that environment FedRAMP authorized, so customers benefit from the security. They have confidence running their workloads on their data, and they can focus on the core job at hand instead of worrying about their infrastructure. That's the fundamental requirement, isn't it? That isolation between compute and storage, going up a layer in a way that provides them a set of services where, I wouldn't say set it and forget it, but they really have confidence that what they're getting is the best performance for the dollars they're spending. John, my hat's off to the work you all do there. Thank you, we appreciate it. Yeah, yeah. And Dragan, I wanted to pivot a little bit here, because you are primarily the operator of what I consider one of the largest data fabrics on the planet. And I just want to talk a little bit about the openness of our architecture, and all the multiple protocols we support that allow people who may have selected a different set of application deployment models and virtualization models to plug into the data fabric. Can you talk a little bit about that? Yeah. In my mind, to operate such a data fabric at scale, there were three key elements we were looking for, right?
And we found them in the Ezmeral Data Fabric. The first was speed, cost, and scalability. The second was the globally distributed data lake, the ability to distribute data globally. And the third was certainly the strength of our partnership, with HPE in this case. If you look at the Ezmeral Data Fabric, it's fast, it's cost effective, and it's certainly highly scalable, because, as you just mentioned, we stretch the capabilities of the data fabric to hundreds of petabytes and over a million data points, if you will. And what was important for us was that the Ezmeral Data Fabric eliminates the need for multiple vendor solutions that would otherwise be required, because it provides an integrated file system, database, and data lake, with data management on top. Usually you would need to incorporate multiple tools from different vendors. And the file system itself is so important when you're working at a scale like this; honestly, in our research, there are maybe three file systems in the world that can support a data fabric of this size. The distributed data lake was also important to us. The reason is, you can imagine that these large car manufacturers are testing, and have test vehicles, all around the world. They're not just doing it locally around their data centers. So collecting the data, this 1.5 petabytes a day in BMW's example, is really challenging unless you have the ability to leverage the data in a distributed-data-lake fashion, where data can reside in different data centers globally, or even in on-premises and cloud environments. That became very important later, because a lot of these car manufacturers have OEM partners, right?
They would like to get either portions of the data, or access to the data, in different environments, not necessarily in the data center. And truly, to build something at this scale, you need a strong partner, and we certainly had that in HPE. We got comprehensive support for the software, but more importantly, a partner that really understood the criticality of the data fabric and the need for fast response to our clients. Jointly, I think we met all the challenges, and in so doing, I think we made the Ezmeral Data Fabric a much better and stronger product over the last few years. That's fantastic. Thank you, Dragan, I appreciate it. Hey, so we're going to wrap up here. Any last words, Olaf, you want to share with us? Just that we're looking forward, from our perspective, to helping out with the COVID-19 situation we have: enabling people to still participate in the market without actually touching each other, maybe without even leaving home to go to the actual market, and still doing those transactions. That's great, thank you. John, a last comment? Yeah, thanks. Look for a joint offering announcement coming up between HPE and ORock, where we're going to be offering a sandbox as a service for data analytics and machine learning, where people can actually test-drive the environment as a service. And if they like it, they can move into a production environment. So stay tuned for that. That's great, John, thank you for that. And hey, Dragan, last words? Yeah, last words. We're pretty happy with what we have done already for car manufacturers. We're taking this solution, in terms of the distributed data lake capabilities as well as the hyperscale machine learning and AI platform, to other industries, and we hope to do that jointly with you as well. So thank you very much, everybody. Gentlemen, thank you so much for joining us. We appreciate it. Thank you very much.
Thank you very much. Hey, this is Robert Christensen with Analytics Unleashed. I want to thank all of our guests here today and we'll catch you next time. Thank you for joining us. Bye-bye.