Hi, good afternoon. My name is Jason Daniels, CTO for Hybrid IT. You can guess which company I work for: Fujitsu. We're here today to talk about artificial intelligence, but before we do that, can you just put your hands up, please, if you can tell me the current temperature in your house from your phone? OK, cool. Who can turn their kettle on from their phone? Really? That's geeky. Let's do a super-geek check. Who can tell me the current charging status of their Tesla car? OK, but that's the point I'm making, OK? We're living in a hyper-connected world where every device and every person is connected, OK? And that's pretty cool, right? Although it's not cool if you're an enterprise. You know, an enterprise is big, monolithic, not agile, can't move quickly enough to compete and keep up in the digital era. And this poses a big challenge, and it's a challenge that Fujitsu recognized some time ago, and it's a challenge that Fujitsu is here to solve with what we call MetaArc. MetaArc is Fujitsu's digital business platform. So you can't buy MetaArc as such. You can't say, "Jason, can I have some MetaArc, please?" What you can have from us is a collection of tools and methodologies that, when brought together, create a package to enable digital transformation, enabling these big enterprises to digitally transform and compete in the new world, OK? And that's pretty cool. Some of the tech involved in that business platform: IoT, next-generation blockchain services, big data, and of course, artificial intelligence. Now, all these great technologies need to be powered by something, right? And that power comes from what we call K5. K5 is Fujitsu's next-generation cloud service. K5 is built on OpenStack. OpenStack will power the digital business platform. It will power digital transformation for our customers, and it will power our future, putting people at the heart of what we do. And that's really important to us at Fujitsu, OK?
We're really lucky to have a member of the Fujitsu team here from our labs, Fujitsu Laboratories, and he's going to put the human intelligence into artificial intelligence, OK? If you have seatbelts, you should strap yourselves in now for this one, OK? He's going to talk about some pretty cool stuff, everything powered by OpenStack. Thanks. OK, then. Hi, my name is Roger. I work in the laboratories at Fujitsu. And today, I want to talk about some of our activity in AI and machine learning and how we are transitioning that work towards the cloud. Actually, part of that rings a bell for me, because look at the tools we've been using: OpenStack, Cloud Foundry, Docker, and of course Fujitsu K5. These tools are so great, and they've come on so fast in the last couple of years, five years maybe, that it's not a question of innovating in the lab on an individual machine and then going to the cloud. These tools are so good now that you want to use them right from the start. So when people in the keynote this morning spoke about Cloud First, I think it's the same thing. We're using these tools to help us throughout the whole innovation process, going from the labs and taking the work through. Think about some of the challenges in the past: setting up an infrastructure really quickly, or sourcing essential services like those OpenStack gives us, or deploying an application really quickly through Cloud Foundry, or setting up a consistent running environment. All these things used to take a long time and were quite frustrating, and now there are so many great tools to help us. So it's been a pleasure to work on K5; it's been a pleasure to use this technology. I'm going to tell you a little bit about some of the AI work we've been doing in London, and I'll motivate it with an example use case in signature analysis.
And then I'll tell you about an approach to rapid engineering of machine learning solutions, which we call imagification, and then a bit more on the journey to K5. So, signature analysis. This is a topic which, with the maturing of machine learning in the last couple of years, has become tangible, and we can work on it. The basic problem is: if you have a bunch of signatures, can you say whether a new signature is going to be an inlier or an outlier? Can you say that a signature is a good likeness based on the other asserted signatures? Our system allows us to do that, and we've deployed it on K5. So (I should move to the next slide, because it has a system diagram) we're deploying this service on our cloud offering, using the various underlying technologies. We have a signature repository, and we manage our signatures using Swift on OpenStack. The whole thing is wrapped by an API, so we can exercise the API from simple applications to see the service running. The kinds of things we want to ask here: is this a good signature for a person? Or, if you don't know whom a signature comes from, can we have an idea of who it belongs to? And can we say something about the consistency of signatures? In the middle of the box here we have a neural network; we have this box, which we call imagification classification. I'm going to go to the next slide and we'll talk about what imagification means, but the general principle of decomposing the system and getting it ready to run with these cloud technologies is important to us. And, yeah, finally, the API as well. I just wanted to mention that we drive the application through its API, but the bigger picture is that we're interested in putting together solutions which are a connection of all these different capabilities.
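The inlier/outlier question on signatures can be illustrated with a simple distance test on feature vectors. This is only a sketch of the idea, not Fujitsu's actual method: the `is_inlier` helper and its mean-plus-k-standard-deviations threshold are assumptions for illustration, applied to whatever feature vectors the network produces.

```python
import numpy as np

def is_inlier(known_vectors, candidate, k=2.0):
    """Flag a candidate feature vector as an inlier if its distance to the
    centroid of the known signatures is within k standard deviations of the
    distances observed for the known set (illustrative rule only)."""
    known = np.asarray(known_vectors, dtype=float)
    centroid = known.mean(axis=0)
    dists = np.linalg.norm(known - centroid, axis=1)
    threshold = dists.mean() + k * dists.std()
    return np.linalg.norm(np.asarray(candidate, dtype=float) - centroid) <= threshold

# A vector near the cluster of known signatures passes; a distant one does not.
known = [[0, 0], [0.1, 0], [0, 0.1], [-0.1, 0], [0, -0.1]]
print(is_inlier(known, [0.05, 0.05]))  # True
print(is_inlier(known, [5.0, 5.0]))    # False
```

In a real deployment the vectors would come from the network and the threshold would be tuned per user, but the decomposition is the same: repository of known vectors in, yes/no answer out through the API.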
So there's an ecosystem, and we are connecting all these capabilities together inside it. There are many applications as part of this ecosystem, running through it. So what is imagification? That's a key aspect of what we're doing, and essentially it means: turn any data problem into an image problem. The conventional, or often the common, approach to training a neural network is to develop a neural network for every single problem. You have your source data and you train your network according to that source data, and this is something you repeat for every single problem you meet. In our approach, which we've applied to signatures, to driving, and to some other things you'll see later, we have a fixed neural network, which we've trained for images. It's a general-purpose neural network trained for images, which means it doesn't even have to have seen a signature before. It might have seen cats and dogs, and that network can look at an image and produce a feature vector from it. So the middle part, the convolutional neural network, stays essentially the same, and for every application domain we have a new imagification box. You can see this here for driving (that video should have started). This is a representation. Driving is interesting because it's time-series data, in this case measurements from the accelerometer, and we produce this thing here, which is a visual representation of the acceleration data. Our technique leverages the fact that, as humans, we can look at a time series, say something about it, and compare it with other time series which come from the same activity.
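The imagification step itself can be sketched in a few lines: render a time series as a small 2-D image that a fixed, image-trained CNN could then turn into a feature vector. The `imagify` helper below is a hypothetical, minimal rendering (one column per sample, one lit pixel per column), not Fujitsu's actual encoding.

```python
import numpy as np

def imagify(series, height=32):
    """Render a 1-D time series as a 2-D grayscale image, line-plot style:
    each column is one sample, and the pixel row corresponding to the
    normalised value is set to 1.0 (row 0 is the top of the image)."""
    s = np.asarray(series, dtype=float)
    lo, hi = s.min(), s.max()
    norm = (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)
    rows = ((1.0 - norm) * (height - 1)).round().astype(int)
    img = np.zeros((height, len(s)))
    img[rows, np.arange(len(s))] = 1.0
    return img

# A rising series draws a diagonal from bottom-left to top-right.
img = imagify([0, 1, 2, 3], height=4)
print(img)
```

The resulting array would then be fed to the unchanged general-purpose network, exactly as a photo of a cat or dog would be; only this small rendering box changes per domain.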
So we produce these images, we feed them into the network, and we draw conclusions based on what we see. For driving, we have a different type of imagification, producing images based on the time series from the driver, and we can ask questions of the AI: what are the current activities of a driver? Maybe it's eating, or using a phone. In this particular scenario we could use this to encourage safer, less dangerous driving. Is it safe? Yes or no. And again, we are using the same service interface and decomposition of functionality. Another example is on 3D shapes. It's the same technique: we are producing images, in this case from multiple perspectives. If you had some laptops, or some mechanical device, a car or an engine, and you took it all apart, could you say what a particular piece in the engine is? Where does it fit? And in this demonstration we can say, for a particular shape, what the potential matches are among the shapes you have. Based on this shape here, we can say all these shapes are somewhat similar, again generated from this imagification approach. Again, we have a repository, and we can ask questions such as: what is this shape? Which shapes are similar to it? What is an appropriate manufacturing cost based on previous experience, and is it likely to fail? So there are three examples of this technique, and we're confident, actually, that it's a very human-centric approach to AI, because rather than getting into the hairy details of tuning a neural network for every single domain, you think of it from a human perspective, of how a human brain can see patterns in pictures. We are using a similar mechanism inside a neural network. So it's a very human-centric way of thinking about AI, engineering AI, and then deploying it to the cloud.
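The "which shapes are similar to this?" query reduces to nearest-neighbour search over the feature vectors the fixed network produces. As a hedged sketch (the `most_similar` function and cosine-similarity ranking are illustrative assumptions, not the actual system), the repository lookup could look like this:

```python
import numpy as np

def most_similar(query, repository):
    """Return repository indices ranked by cosine similarity to the query
    feature vector, most similar first."""
    repo = np.asarray(repository, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = repo @ q / (np.linalg.norm(repo, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)

# Toy repository of three 2-D feature vectors; the query is closest to index 0.
ranked = most_similar([1.0, 0.1], [[1, 0], [0, 1], [1, 1]])
print(list(ranked))  # [0, 2, 1]
```

The same ranking backbone could serve the follow-up questions in the talk, such as estimating cost or failure likelihood from the properties of the nearest previously seen shapes.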
So this is a journey which we are currently in the middle of: how we use OpenStack, how we can leverage these services so we get high availability, robustness, all those great qualities. We follow an API-centric approach and a decomposition of the various components, using Cloud Foundry as well. Actually, there's a breakout session tomorrow where we'll go into more of the details. That's it for now. Cheers. Thanks, Roger. So, a quick question before we close. Who uses artificial intelligence or has deployed AI on OpenStack? OK, so one person, right? It's a pretty new technology, with new demands on the platform that provides the power and the resource, right? So I think from an OpenStack perspective, this is a great use case to show the agility, the performance, and the scalability that OpenStack provides to enable us to deliver next-generation artificial intelligence to people. And it really is about human-centric innovation, putting people at the heart of everything that we do, OK? And we feel that K5 delivers that. K5 is now live in Japan and live in the UK, so that's four regions in total. We're due to deploy into more countries this year: twelve-plus countries and regions, twenty-four-plus availability zones by the end of the deployment. So it's going to be a large-scale public cloud offering, consumable by the enterprise and by anybody else that wants to consume our digital business platform. So from Fujitsu, thank you very much. It's been a pleasure. We're at stand A20, I think. Please come by; we've got an Oculus Rift you can have a go on, as well as talking about K5. So thank you very much.