Good morning, everyone. I'm going to give a quick introduction to EdgeX Foundry. How many of you have worked with EdgeX Foundry before? So it's really going to be an introduction: what it can do, what the current status is, where we want to go, and how the roadmap is currently looking. Take a seat. A little bit of background, then: how it works, where we are, and what we've done from Dell Technologies as an investment in EdgeX Foundry.

Who am I? I'm Jeroen Makenbach. I'm a lead system engineer for EMEA at Dell Technologies, for IoT and Edge Compute. I've been working for 20 years in embedded and industrial computing. I wrote my first assembly at the age of nine and probably haven't done any coding after that, but I've been heavily involved with people who have, and helped them debug their code.

Good. EdgeX Foundry. What is EdgeX Foundry? EdgeX Foundry is an open-source, vendor-neutral project. It's more of an ecosystem, and it consists of microservices. They're loosely coupled and they sit at the edge. The main design target is that it's hardware- and OS-agnostic: whether you run Windows, Linux, whatever, it can run. The goal is to enable and encourage the growth of IoT solutions, and especially in my territory we see an explosion of growth there.

Now, if you look at the IoT market, there's huge diversity there: in the skill set that is needed to actually get IoT solutions to market; in connectivity, with the number of protocols involved; in the application environment, where we see Java, Python, R, Go, C, so there's not a single language used there; and also in terms of operating system, where it's mostly Linux, but spread across a variety of distributions. So specifically at the edge, we see so many different varieties of that fragmentation, all the way from the edge up to the fog and then into the cloud. So what is EdgeX Foundry in principle?
It's a collection of a dozen microservices written in various languages, and they work together to make the application. So we have a data flow, and I will show you later how that works. Basically, sensor data is collected by the device services; that's brought on to the core services, and the core services bring it on to the export services. From there it can be fed back to do some analytics on it, or sent into the cloud to do further AI on it. Now, all these different microservices communicate with each other via REST APIs; that's the key to making sure they can coexist. And the microservices are deployed via Docker and Docker Compose.

So here we have an overview of the whole framework. What we see here, for instance, is a sensor detecting a temperature of 102 degrees centigrade; over a BACnet driver that is brought up to core data for logging, and from there it can go into distribution so it can later be sent into the cloud, or sent back to the rules engine to do some analytics. And then it's brought on to the command service, and that command service can, for instance, say: hey, I have a device that understands MQTT. Stop the machine. So that's basically the whole crux of what EdgeX Foundry can do. And you see those different layers laid out: the device services, the core services, the supporting services, and the export services. We typically call the device end the south side, and the export end the north side, exporting toward the cloud.

So here you see, again, the full overview of how these microservices are set up and how they communicate and interact with each other. It's just a full collection of services working together to make one unified system.

These are some of the performance targets we are currently working toward. Basically, the target was to run on a Raspberry Pi 3 with one gigabyte of RAM, a 64-bit CPU, and at least 32 gigabytes of storage.
It should start up in less than a minute post OS boot, it should have a latency of less than one second, and it should be agnostic to different OSs and hardware. I will go into the details of the actual roadmap later on, but the last step we took, this June, was the move from Barcelona to the California release. We made a huge step forward there, basically reducing the footprint from almost 300 megabytes to 42 megabytes, and, very importantly, reducing the startup time from 35 seconds to less than a second.

So these microservices can live where they want, and that's a good part, because they can run on the edge, in the fog, or even in the cloud. It depends on where you want them. When latency is important, you tend to move them further down, back toward the south. Also in terms of storage and cost: you might not want to send all the data over a 3G connection, because that's costly. You could also have disconnected nodes that only come online for a certain period. So the microservices can be adopted in several use cases depending on where you want them, with that extremely loose coupling.

And here you see an overview of how that could look. You could have a device communicating directly with the cloud; or a device communicating with a gateway, doing some analytics and data collection there, and then sending a reduced amount of data into the cloud. You could also move analytics into the fog, where you have a big client (I will show you some scenarios in the next slide) and can do analytics on high-end computing. Or you could have a fat client on the edge and do everything in one machine.

So here you see a typical building automation system as a sort of proof of concept. On the room level, we're just collecting data: we have the security layer, and we have some on-and-off situations that can be managed locally. Just the core services are running there.
Now on the floor level, you want to do some analytics, and maybe some data aggregation. If you move up to the building level, what you could do there is create some dashboarding, some more overview situations; think of the intelligent thermostat running in your house. And then it goes up to the cloud, and there you can do your deep learning and more advanced AI.

So currently the supported protocols and interfaces are HTTP, HTTPS, and MQTT. There's still considerable work to do on AWS and IBM Watson, but this is the current status as of now. On the south side, these are the readily available device drivers; you can just enable them and start communicating. In the Delhi release we're going to have Go and C APIs, which will enable you to write your own device drivers. That becomes interesting for integrating your own IoT setups, because you might well have a device speaking a protocol which is not in here, which is quite likely.

Good, the ecosystem and the current status. This slide is a little bit outdated. Actually, this week we had some big announcements: Intel joined us. The Intel retail group joined; they were very interested in joining us. There's a lot of movement going on all the time, and a lot of people are engaging at this stage. Here you see the actual project organization and the way it's set up. One of the founding members of this project, and actually the one who wrote line one of the whole project, is Jim White. He's one of my coworkers in the US, supported by our CTO, Jason Shepherd. He's also at this conference; he gave a huge session yesterday, a technical deep dive. And we're going to be holding an external event here for everyone to join. So if you want to participate, let us know. It's an event which is not here, but it's really close by.
And a lot of talks with the steering committee are happening there over the next three days.

So the California release was launched in June, and it's got all the security features in there. What we did is transfer all the services from Java to Go, with a corresponding decrease in resource consumption. There have been a lot of additions on the northbound connections, and we've included ARM64 support. The whole group is actually here in Edinburgh to discuss the latest on the Delhi release, and we're also discussing some of the future Fuji work. Currently there are more than 40 developers working on this: some within Dell, but a lot outside Dell contributing code.

So here you see the overview of the actual releases. We launched Barcelona, which was really huge, in 2017. California came in June. We're planning on releasing Delhi later this month. Edinburgh is going to be April 2019; it's not going to be launched here, despite the name. Fuji is October 2019, and there's a Geneva planned for April 2020. You also see some of the different new features. The 2019 releases are going to have certification included. The Delhi release will have a UI included, which is really handy, because at the moment it's pretty hard to actually interact with the services. You have to do that via the REST APIs. So yes, you can do it, but a UI is definitely something that's needed. In Fuji we're going to have multi-host support; that's going to be a big thing. And we're currently working on the whole security setup, which is specifically challenging in these kinds of distributed scenarios. There's a huge plan for that. I actually went through it; it's, I think, 60 slides of brain dump from the people involved on how to get that done correctly.

So here are some of the highlights of the Delhi release, which we've already gone over.
So these were the key accomplishments since 2017. We've been able to set a biannual release roadmap and have met the first two release dates. We've got 64 individual code contributors. We have refactored the whole code base to Golang. There's specifically good documentation on the whole project; actually, if you want to get started, everything is really well documented. And that's what I'm seeing in my daily work: a huge number of customers are getting involved in EdgeX Foundry and starting proofs of concept on it, which is good to see, and people actually get it set up.

Now, what have the Dell investments in this been? We've invested seven man-years of effort into the initial Project Fuse; that's how EdgeX Foundry was initially codenamed. We raised an IoT Solutions Division in October 2017, which is my division. On the Dell Technologies side, leadership of EdgeX Foundry sits with Jason Shepherd, my CTO. We have two members on the technical steering committee: we chair the core working group, with Trevor Conn, and the system management working group, with Jim White, who's part of the whole steering committee.

So what is our offer? Next to the EdgeX Foundry ecosystem, we of course have our Dell platforms, which are enabled to run EdgeX Foundry. We have VMware, which is also part of the Dell Technologies group, and which enables managing these IoT projects, software and hardware, in this ecosystem. We have Secureworks and we have RSA, which make sure it all gets secured. And we have Dell EMC on the north side, the infrastructure company that enables the distribution of core analytics and, for instance, projects like Nautilus and Worldwide Herd. We have Pivotal, the cloud platform company, which is also on board under Dell Technologies. And then we have Dell Boomi, which enables really easy cloud integration for workflow scenarios.
And that's basically a full ecosystem inside Dell that can enable a complete IoT solution with the products we deliver, in both hardware and software.

So what are the design goals with EdgeX Foundry, and where does Dell want to go with this? Basically, we want to accelerate this, because we think EdgeX Foundry is one platform, but it's also sort of a blueprint for doing IoT. I see customers struggling with IoT and with how to actually do it, day by day, and I think the foundation and the components EdgeX Foundry is built on are the way to go. It allows interoperability with partners; there's a lot of work with our partners, but also with our competitors and other vendors, and it's a good thing that we're neutral. It should be the center of the Dell Technologies software solution at the edge. We definitely see an increase of interest in the edge at the moment; it's more of an explosion. We want to provide a total Dell Technologies solution with Photon OS, a Linux variant open sourced by VMware; EdgeX Foundry; Project Nautilus; Pulse IoT Center and Liota, the Little IoT Agent, which is also open sourced; Worldwide Herd, one of the analytics platforms; Project Iris, which is the security setup; and our Dell gateways and our Dell core services.

If you want to get started yourself, here are some of the links. I will share my slides to make sure everyone can access that. Later today there will be a lunch session where I'm actually showcasing EdgeX Foundry, setting it up in two different flavors: one in Docker and one with a snap, so you see both versions running. You need to register for this. On Tuesday we have Jason Shepherd, my CTO, talking, and he will give a pitch on where we want to go as Dell Technologies; he has the latest and greatest on this. On Wednesday we have Jim White talking, and he will go deep into the distributed path for EdgeX Foundry. Questions?
Well, we have started with... I'll repeat the question for the audience. Oh yeah, sure. Why have we chosen Photon OS? What we've seen is this: we've done a lot of work with Canonical. Someone from Canonical was just talking to me here; we have a very strong relationship with them. All of our gateway products and most of our laptops run Ubuntu, and for the Dell gateways we've chosen Ubuntu Core and the snap setup. Now, what we've seen is that maintaining a platform over time is challenging, especially with security challenges, and with the way you need to update these systems, all the time. So having all these devices, and I mean millions of devices in the field, and having those updates managed by a third party such as Canonical would be very challenging for us. That's why we're looking to see if we can use Photon OS: because it's ours, it's VMware. If we can re-leverage that, we can use it as the container OS of choice to actually run EdgeX Foundry on. So that's the idea at the moment.

Other questions? I have a white paper on that, actually. The question was about the security. All the documentation of the whole project is open, and on the wiki there are all the meeting minutes, and there's also the documentation about the security. So you can find it there, and I'll be able to share it with you. More will be added to docs.edgexfoundry.org in the near future; right now some of it is at a separate location, but we're working on moving it all in.

No further questions? Oh, there's another one. The question is: can you elaborate a little bit more on device provisioning and device updates? At the moment our Ubuntu Core devices are dynamically updated over time. The firmware is also part of the LVFS, the open source firmware initiative, and it's basically dynamic: as soon as there is an update, it will update these devices. So that's for the security updates.
And with the snaps, you'll be able to deploy your application and update it through our snap store. So that's the whole ecosystem we have in place to make sure your device can be updated over time. The biggest challenge for IoT is indeed bringing these devices into place, having them auto-provision themselves, and then having them start working as the function they need. Now, there are a lot of initiatives going on, and I've actually been talking with a lot of CTOs from the companies we've been working with. There are different scenarios, and actually Intel and ARM are now working together on a new initiative; they showcased it last week. I forgot the name, but they basically have a framework and an SDK that makes sure that as soon as you attest the device in the field, it will actually provision itself and start working as it should. There's a lot of interest in that at the moment, because it's a tough job. We have things like cloud-init, but cloud-init is sort of: okay, that's your workload, right? That's going to work. But how do you do it in a secure way? How do you actually provision that device securely, with your TPM and your secure boot and everything else around that, to make sure the device is not compromised, including in the logistical chain? That's a challenge for a lot of customers. So we're working on that, and we have a lot of customers who are looking at how to do it, and I'm working together with those customers to actually set that up and make sure it's happening. The thing is that Dell has the tools to get it working, but we don't do it ourselves; it's the partners we work with who actually do that.

Yes? Are there any plans to run EdgeX as a service, so developers won't need to deploy their own solutions in the cloud? As a sort of... yes, a managed service. No, I've not heard of plans for that.
Maybe one of the requirements will get us there. Yeah, absolutely. That's the thing: especially with EdgeX Foundry, there's plenty of room to take what we have, start working with it, and commercialize it yourself. So there is still a lot of money in it? Absolutely, yeah, definitely. Good luck. Yes? Thread? I've not run into Thread yet. Oh, I had one partner asking for it. No, at the moment this is the planning, but since this is open source: join us. The SDKs are there to help you. So yeah, there's definitely time to...

Sorry, horizontally? Well, load balancing and failover are going to be part of the future roadmap; you saw that here. In the Fuji release we will have load balancing and multi-host. And since it's containers, that should be pretty doable; it's just a matter of how you're going to manage the starting and stopping of these services. So probably we need something on top of EdgeX, for instance Kubernetes, that will actually manage this in a decent way. No, at the moment it's not multi-node.

Any other questions? Very good. Then I say thank you very much, everyone. And yeah, come see us at some of the other talks. If anyone wants them, I'll set them up here at the table, because we don't have our conference booth here today. Good. Thank you very much.