[The opening of the recording is unintelligible; the transcription lapses into garbled Welsh. The speaker introduces himself as Jason Daniels of Fujitsu's hybrid IT organisation.] ...So everything is interconnected, OK, which is pretty cool for everyone as a consumer. But that actually poses a massive challenge in the enterprise space for companies that need to compete in the digital era. So a big enterprise, a monolithic bloc, slow to respond... how does a company like that actually compete in the digital age and innovate for the future? And that's quite a tough thing to do. At Fujitsu, we recognised this a few years ago and we created something that we call MetaArc. MetaArc is our digital business platform, so you can't buy MetaArc; it's a platform and ecosystem of tooling and methodologies, okay, which, when brought together, help deliver digital transformation for a company. Some of the technology within MetaArc consists of IoT, big data, analytics, security, and something called artificial intelligence. Why is that cool? Well, in this respect, and with OpenStack, it's really cool because everything I've just spoken about is powered by something we call K5, and K5 is Fujitsu's next-generation cloud platform, which is powered by OpenStack.
It's a global OpenStack deployment, okay. It's currently live in three regions in Japan; it went live in the UK earlier in the year, so that's four regions, all with multiple availability zones; it goes into another three countries by the end of this year, and around another eight next year. So it's on track to actually be one of the largest global deployments of OpenStack, which is pretty cool and pretty impressive. We're here today to talk, and if you were in a marketplace session yesterday you would have seen some of this, but we're going to go into a bit more detail. We're here today to talk about artificial intelligence and how OpenStack is a base platform that delivers the agility that's needed by a next-generation service like artificial intelligence, okay. So we're really lucky that we have a member of Fujitsu from our labs in the United Kingdom, who actually focuses on artificial intelligence technology. And lately, Roger and his team have been focusing on OpenStack to see how OpenStack, accompanied by next-generation technologies such as Cloud Foundry, Apigee, and the other usual suspects, can, when brought together, really deliver the agile and globally distributed platform needed for next-generation artificial intelligence. And that's quite a big thing for us. What's even more important for us is that K5 and MetaArc power the transformation of the enterprise, but more importantly power our future, power people; they power how we work today and how we will work in the future, putting people first, okay. And that's really human-centric innovation. So I'd like to hand over to Roger. Roger's going to talk to you about some really clever artificial intelligence stuff. The point here is that that's powered by OpenStack, okay. So some of the really clever processing from an AI perspective that you're going to see is delivered via an Icehouse distribution of OpenStack at the moment.
Soon we're upgrading to Kilo, and then from Kilo we will upgrade further. But like big service providers, we need to stabilise on a certain version first. So at that point, I'll hand over to Roger. Thanks. Hi there. Thanks, Jason. So yeah, my name is Roger Mende. I work in Fujitsu Laboratories. And Fujitsu Laboratories is, a bit like K5, a globally dispersed set of activities. We have activity in Japan, in China, in America, and in Europe. I'm working out of the office in London, and we also have an office in Madrid, so a couple of sites in Europe. So we have some AI research, some machine learning research, and this is running on K5, which of course runs on top of OpenStack. And I'm going to describe what our AI does in rough terms. It takes some data coming in, and it produces insight on the other side. The data could be timestamped data or non-timestamped data. It could involve data from sensors, data from social sources, or anything else. And we have, well, we'd like to think, a special approach with regard to what happens inside the box marked ML. So I'm going to bring that to life with three scenarios today. One is to do with signature analysis. The second one will be related to driving. And the third one will be 3D shapes, recognising 3D shapes. This is all a combination of work from the labs in London, various people working on these topics and working with the business in terms of cloud-enabling it. And at the end, I'll finish up with some summary thoughts about how K5 is helping us achieve this whole thing. So the first thing is signature analysis. You know, it's 2016, but even in 2016 we're still signing signatures, and we're still using signatures in workflows for people to assert and prove their identity. And, okay, machine learning has come on in the last five years, and it's made this a problem which is easier to solve.
But it's quite a challenge. Here's one way of expressing the challenge. If you have these three signatures, which you can see there, is that fourth signature considered to be a good likeness? I.e., is it an inlier or an outlier? Does it lie inside what would be acceptable, or does it lie outside? So the service we produce keeps lots of different signatures for various people, and you can provide a signature into the system. Here you can see someone inserting a signature and asking the system to see if it looks the same. In this case, it says it's an outlier, probably because there was a little bit of a bump in the line, in the middle of the signature there. So it works it out and gives you a suggestion as to whether it's an inlier or an outlier. Another way of using the system might be in a scenario where you don't actually know the identity of someone, but you have lots of signatures and you have a new signature. Can you say to which person this signature belongs? Or can you make some suggestions as to which person it belongs? In this case here, again, someone is going to put in some... oops, sorry, let me run this video again. The normal key I use to play the video doesn't work there, so I'll go to the next one. Can I have some assistance from the back, because I want to play that video? Normally I would click on it and it would work, so maybe you know the answer, Jason. You click there. It's because you're in presentation mode; press that. Is that right? Will that run there? Is that okay? Yeah, that was all right there, but I think I'm going to hit the next problem again. Maybe we'll just take it out. All right.
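To make the inlier/outlier idea concrete, here is a deliberately simplified sketch in Python. It assumes signatures have already been reduced to numeric feature vectors (in the real service that job is done by the convolutional neural network described later); the centroid-plus-threshold rule and the `factor` parameter are illustrative inventions, not Fujitsu's actual classifier.

```python
import math

def features_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_inlier(known, candidate, factor=2.0):
    """Toy inlier test: the candidate is an inlier if its distance to the
    centroid of the enrolled signatures is within `factor` times the mean
    spread of the enrolled signatures themselves. (Hypothetical rule; the
    talk does not specify the real decision procedure.)"""
    n = len(known)
    centroid = [sum(v[i] for v in known) / n for i in range(len(known[0]))]
    spread = sum(features_distance(v, centroid) for v in known) / n
    return features_distance(candidate, centroid) <= factor * spread

# Three enrolled signatures, as already-extracted feature vectors.
enrolled = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
print(is_inlier(enrolled, [1.0, 2.0]))   # close to the centroid -> True (inlier)
print(is_inlier(enrolled, [5.0, 9.0]))   # far away -> False (outlier)
```

A real system would use richer features and a learned decision boundary, but the shape of the question is the same: does this new point lie inside what would be acceptable?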
What happens in that video is that you put your signature in and it says, again, that it's an outlier; there's a subtle difference in the signature you insert, and it says it's an outlier, or an inlier in that case. The next one here, it says it's an inlier, so the signature which you send... definitely not right. I'll move on. Okay, so anyway, what you saw there, and this is a system image of what you saw, is essentially hosted on K5. It's wrapped up as a service, so it's driven by an API. The API is very nice because you can build clients very easily from an API. And of course, once you collect various parts of an overall ecosystem together, an API is really the fundamental thing which drives an ecosystem, so we're looking to have everything managed through an API. And of course, we like OpenStack in a similar way, so we use the OpenStack APIs to, for example, manage our inventory and repository of signatures or images. So APIs are very good. So we outsource to Swift for the storage of images, and we decompose this into smaller steps inside. As you can see, there are three boxes. One is called imageification, one is the convolutional neural network, and the other one is a classification post-processing step. So this is a good time to go into why our particular approach to AI is interesting and original and quirky and unique and useful. So I'm going to go into that next. We have this thing called imageification, and our interpretation of what that means is that we turn any data problem into an image problem. We've done this for many applications, and we have it for useful real-world applications, which we'll see. The regular approach, the common approach you see, is that you create a neural network for every application domain, and you configure and train it with data every single time, for every single application scenario you're looking at.
And what we do is we use a single general-purpose neural network, which is trained with image data, many different types of images, including cats and dogs and that kind of thing. And once you've trained that, and it's an intensive process but a one-off process, the neural network has an ability to see features and patterns in images. So then, once we can transform our input data into an image, we can put those two things together. We can say, for whatever problem we have, we have that represented as an image, and we use the general-purpose neural network to look at the features and patterns in that image. A term people use for this is transfer learning, which means that you might learn how to see cats and dogs and see features in pictures of cats and dogs, but later on you can use that to see features in signatures or any type of thing you might express as a picture. So, yeah, this essentially is what imageification does: it makes an image. So I'm going to give you the second scenario, which is to do with driving. Suppose we want to promote safer driving, which means we want to monitor the driving activities of someone, and maybe we'll use that in an insurance scenario where we can offer people an improved insurance offer based on the fact that they are not doing distracting things while they are driving, for example eating or using their phone. So we can reward people who show less of these bad habits. So I really hope this next video works. So here we have a driving simulation, and I'll talk you through this picture here, and then we'll play the video. Essentially it's an experiment to show this system running. The screen on the right-hand side shows my colleague Joe, who's driving, who coded this up and was one of the key inventors of this technique.
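The talk doesn't reveal which transform imageification actually uses, but a recurrence plot is one well-known way to turn a time series (such as the driving activity data in this scenario) into a picture that a general-purpose image network can inspect. A minimal stdlib-only sketch, with `eps` as an assumed tolerance:

```python
def imageify(series, eps=0.5):
    """Turn a 1-D time series into a binary 2-D 'image' (a recurrence plot):
    pixel (i, j) is on when the values at times i and j are within eps of
    each other. This is one common time-series-to-image transform; it is
    not necessarily the transform Fujitsu's imageification uses."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

# A toy activity signal: steady, then a burst (e.g. picking up a phone), then steady.
signal = [0.0, 0.1, 0.2, 3.0, 3.1, 0.1]
img = imageify(signal)
for row in img:
    # Runs of similar values show up as solid blocks of '#'.
    print("".join("#" if px else "." for px in row))
```

The point of any such transform is the same as in the talk: once the data is a picture, the pretrained network's ability to spot visual patterns transfers to it.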
So he's driving the car, and while he's driving, he is doing these things he shouldn't be doing, such as using his phone and eating. And in the left-hand screen, you can see the image which is generated based on that activity, which is measured. In this case, it's a physical device which measures his activity, but it could be a non-physical device. And so you can see in the picture here, there is some picture which is produced. I'm just going to wait for that to do something. It's spinning. Maybe it will go. Oh, there you go. Okay. That's it. So he's driving there, and as activity takes place, the picture of the time series, because essentially it is a time series, is being generated there. And at some point, that changes. Sometimes it's more curvaceous, and sometimes it has more colour in it. But the system knows what to look for, and just because it looks appealing to us, with lots of colour, doesn't necessarily mean that it's interesting to the artificial intelligence looking at it. But at some point it recognises what he's doing while driving. I think at 25 seconds, yes, it picks up the fact that the person is using a phone. Later on in the video, you can see he's on the phone call now. So the image is changing, and the particular patterns are being picked up by the visual recognition. So this part, yes, driving whilst using the phone. I'm not sure, I think maybe he does some eating later on. So, yes, that's a nice example showing imageification in action, showing how it generates the images and uses them, through this general-purpose neural network, to work out the activity, what's going on. The system image looks very similar to the last one, the one for the signatures, and that's actually the point. So there's a box. We're running it on K5. We're using the OpenStack APIs. We have a number of questions we'd want to ask based on the incoming data.
What are the current activities of the driver, for example, and is it safe? We can answer those questions as well. And, yes, the convolutional neural network part is actually fairly fixed. We can do some tweaking of that neural network to optimise it, but it's not a complete retrain; just some of the final layers can be tuned. And then the imageification and classification boxes, these are functions which you customise for your application. And they lend themselves to very nice stateless cloud functions as well. So you plug those in to support your different applications. And on that point of different applications, I've got another scenario here, which is 3D shape analysis. So suppose you dismantled a laptop or some form of machinery, and you were presented with a series of components from that machine. Could you take another shape and say, which shape is this? Can you show other shapes which are a bit similar to this? Maybe another question you might ask is, what is the appropriate manufacturing cost? I have a little video here to show; I hope it works. I'll just describe it quickly. This is a video of the demo. There is a camera here recording an image of some object you place in front of the camera, and then the system basically tries to recognise what it is, or tries to pull out what it might be. So in this case, the first shape that my colleague puts down is some sort of heat diffuser, and it recognises that fairly quickly. You can see there on the left-hand side it's recognised the heat diffuser. The second thing is kind of like a monitor dongle; he places that down and it picks it up and recognises it. Again, there is an intermediate step. Well, this is an image by itself, but there is some manipulation of the image into a richer image, which is the imageification step, and it places that into the system and recognises the shapes. I can give a little bit more demo here.
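The "show me similar shapes" lookup in the demo amounts to nearest-neighbour search over the feature vectors the network produces. A toy sketch; the shape names and three-dimensional feature vectors are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(library, query, k=2):
    """Return the names of the k library shapes whose feature vectors are
    most similar to the query, best first."""
    ranked = sorted(library.items(), key=lambda kv: cosine(kv[1], query),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical tagged shapes with (tiny) illustrative feature vectors;
# a real system would use high-dimensional CNN features.
library = {
    "heat diffuser":  [0.9, 0.1, 0.0],
    "monitor dongle": [0.1, 0.9, 0.1],
    "pin connector":  [0.0, 0.2, 0.9],
}
print(most_similar(library, [0.8, 0.2, 0.1]))  # ['heat diffuser', 'monitor dongle']
```

With tagged shapes, the same ranking that drives the visualiser also answers "it's a pin or some sort of connector".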
So these are all the different types of... it looks different on my screen to that. Okay, I won't go to that part. Yeah, so there is a visualiser here which, for any of these shapes, can show the shapes it's comparable to on the right-hand side. And if the shapes are tagged as well, which would be additional metadata, it can say it's a pin or some sort of connector, so you can figure out what it might be. Sorry, and you've got that turtle there on that side. Right, here we go. I'll go back again. What is this? I've got it, Jason, I think. That's my one. That must be... no, it's not. That's my one. No, it's not. I have to get it so it looks right on your screen. Okay, it's because you're in a different mode. Can you drag it up? Maybe I can just turn that off. No. If you get your window, get this shape window. Do you want to pull up your web page with the shapes on? No, I'm not going to do the shape one now; I think it's too complex. If you get the presentation again and then play it, probably hit that there, and it'll put it up behind you. Sorry, everyone, about that. I'm back again. I'm back in the room. All right. Okay. Yes, so there you go. I showed three examples there of the scenarios we've been looking at: signatures, shapes, and driving. We have this running, or we're working on running this all on K5. I've got some bullet points really about some experience points from how we've been looking at that, the things we like, and things we look forward to doing in the future. First of all, what OpenStack gives us is this API-first approach. Being able to programmatically interact with the underlying OpenStack environment is very powerful, because, for example, in the simple case of object storage, we can do this from our applications. In terms of pulling up and building a machine learning deployment, we use Heat, which is great.
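A Heat deployment like this is driven by a HOT template. Below is a hypothetical minimal example for a machine-learning environment of the kind described, with placeholder image, flavour and container names; it is not Fujitsu's actual template, just a sketch of the mechanism.

```yaml
# Minimal HOT template sketch: one ML worker VM plus a Swift container
# for the signature/image repository. All names are illustrative.
heat_template_version: 2013-05-23

description: One-click machine-learning environment (sketch)

resources:
  ml_server:
    type: OS::Nova::Server
    properties:
      name: ml-worker
      image: ubuntu-14.04          # placeholder image name
      flavor: m1.large             # placeholder flavor
      user_data: |
        #!/bin/bash
        # prepare the ML runtime (e.g. Caffe dependencies) on first boot
        apt-get update && apt-get install -y python-numpy

  image_store:
    type: OS::Swift::Container
    properties:
      name: signature-images

outputs:
  server_ip:
    value: { get_attr: [ml_server, first_address] }
```

Running `heat stack-create -f template.yaml ml-env` against such a template is the "one click" that stands up the whole environment.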
Just one click, and you've got the whole infrastructure there and ready to go. Beyond the APIs we use, there's the decomposition of the application into functions, and how that encourages the developer to ensure that their application is cloud-native and scales nicely, so we exploit the linear scaling that OpenStack enables, and we get that benefit as well. At the moment, we've decomposed it into two parts. The front part, the web part, the API side, is run through Cloud Foundry, which is very nice and which we like very much. It's quick and easy, and it gives you very nice, rapid deployment cycles. On the other hand, we did find with Cloud Foundry that some aspects we like from Docker, in terms of managing the underlying machine learning environment, weren't quite so mature in Cloud Foundry. So what we did is we have a Cloud Foundry aspect of our application running, and then we have an IaaS part, which is run using Docker to prepare the machine learning environment. And one of the benefits of our imageification approach is that, yes, this transfer learning is a one-off learning which can take place offline, and the actual execution of the model, which is pre-trained, isn't that intensive, so it gives us some flexibility in terms of deployment options. So in terms of deployment on K5, we spread it into two parts, IaaS and PaaS, which is nice in terms of the possibilities we have. Yes, a final summary from my side before handing back to Jason: this is a labs activity in London, and we have what we think is a very human-centric approach to engineering AI on K5.
It's human-centric because you don't get into the hairiness of training a neural network and all the configuration and weighting it takes in the general case, when you have to create one for every scenario. You do it once, and so from a human perspective you just have to draw a picture, essentially take your data and draw a picture from it. So it's a very human-centric approach to AI, and we've proven it in three cases: in signature analysis, in driving, and in 3D shapes, and there are more application areas to come. And we're at booth A20 downstairs. Should you like more information, please come and talk to us. Thank you, Roger. So I think what's really impressive with that is that everything Roger just spoke about is powered by OpenStack. Okay, so OpenStack gives Roger the agility he needs and the API ecosystem that he needs, allowing his application to be truly agile and respond to the demand that these very complex algorithms need. And that's pretty cool from an OpenStack perspective. Does anyone here use artificial intelligence or machine learning on OpenStack at all? Nobody? Okay, do we think that we may see an artificial intelligence or machine learning project from OpenStack at some point? Of course, we've got a Fujitsu guy there going, yes, we will. I truly believe that we will. Hopefully, Fujitsu will contribute to that: Fujitsu was the sixth largest contributor to Mitaka, and the fifth largest contributor to the Newton release. So we're really dedicated to OpenStack, dedicated to the evolution of the platform, and more importantly we hope we can provide some innovation as well along the way. Now, because the videos didn't play properly, we're slightly ahead of time, so we'd like to open up now for some questions, which I'm sure everybody will have, right? Yes, Mr Fujitsu, you can stand at the mic, please, I'm told. You need to... nothing hard, please, Droll. Now, I just wanted to ask about...
Droll, can you just let everyone know who you are, where you're from, and why you're here tonight? I work for the Fujitsu hybrid IT team, mainly focusing on the platform-as-a-service side, but I have a question regarding the artificial intelligence side, which is a hobby of mine. Regarding the training of the final layers of the network, rather than a full retraining of the entire model: is that a feature of Caffe, or something unique that we developed? Or more generally, how do you do that? Okay, so I can only give you a partial answer, Droll, but we use Caffe underneath. I believe it's a feature of Caffe that you can have a partially complete, trained model, and you can step in and do this last part at the end. So that's my answer on that. I believe it's something in Caffe. I'm not sure about the other libraries and whether they support that. I know there's a way to do it manually in TensorFlow, I think. That's cool. That's really efficient. Yeah, it works well for us. Yeah, and we have some numbers, I think, in other documentation that shows the additional percentage of performance we get from tuning the network for particular situations. Thank you, Droll, from Fujitsu. Does anybody else have any other questions? Yes, hi. Please take the stand. It's all yours. Hi, I'm Christian Riddler from SUSE. I'm working on the Cloud Foundry team there. My question would be: what aspects of Docker are you lacking in the Cloud Foundry system that you're using? And we can speak later. Yeah, I mean, okay, that's excellent. I would love to speak with you later, but I'll give you a quick answer, and if I'm wrong and have misunderstood some part of Cloud Foundry, well, I apologise. It's mostly to do with preparing the machine on which the Cloud Foundry application is pushed to. So if I needed to install the Caffe dependencies, for example, that was a little bit tricky for us to achieve.
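For reference, partial retraining of this kind is usually expressed in Caffe's prototxt format by freezing the pretrained layers with `lr_mult: 0` so their weights stay put, while a renamed final layer is (re)trained for the new task. The layer names and sizes below are illustrative, not taken from Fujitsu's model:

```protobuf
# Fine-tuning sketch: freeze an early layer, retrain only the final one.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0 }   # freeze pretrained weights
  param { lr_mult: 0 }   # freeze pretrained biases
  convolution_param { num_output: 96 kernel_size: 11 stride: 4 }
}
layer {
  name: "fc8_signatures"  # renamed so Caffe re-initialises this layer
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_signatures"
  param { lr_mult: 10 }   # only this layer learns, at a boosted rate
  param { lr_mult: 20 }
  inner_product_param { num_output: 2 }  # e.g. inlier vs outlier
}
```

Training then starts from the pretrained weights (`caffe train ... --weights pretrained.caffemodel`), and only the unfrozen layer is updated.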
And we read some stuff online, and we tried a few things. We had some success with a custom buildpack, pulling it in from a Git repository. And it was somewhat fiddly to work out which binary dependencies would be needed, and how I would then configure the buildpack to take those binary dependencies. Does that make sense? That answers the question, and I might come to you later and talk to you in detail. That's brilliant. Thanks very much. Thanks a lot. So can you just hold your place there for one minute? Quick question back to you: how much artificial intelligence do you see being executed, from a process perspective, in Cloud Foundry? Have you got much experience around that? Not that much. From my point of view, it's basically just another workload. So I personally don't care what you run on Cloud Foundry in that regard; it's just workload, and whether it needs scalability. Basically, with Cloud Foundry you don't have to see the OpenStack system below any more. So it's developer-friendly: you just push your code somewhere and it gets executed and you're done with that. So that's a big advantage, and that's why I'm curious; we're just starting on that in our team, picking up on what's there. But I'm trying to gather information, especially on use cases and problems that occur there. It'll be great if we could chat after then and hopefully resolve that one. Come to your booth later on. You can knock Docker off then, right? Okay. Thanks a lot. Okay. Excellent. Any other questions? No? Okay. Great. And by the way, everything Roger's just spoken about was actually developed within Fujitsu. So Roger and the team have created all this clever stuff from scratch. Yeah, obviously relying on some great open source, of course, but they used the open source components to create the artificial intelligence and machine learning.
And as you saw from Roger's demonstration, it's all very human-centric: around the driver, around what's safe and what isn't; and in manufacturing, I've got a component in my hand, where should it go to be analysed? What part is it? Where does it need to go? So putting people at the heart of everything we do is important at Fujitsu. Okay. How are we looking on time? Cool. Right. Yeah, a few minutes. So yeah, we're done. So thank you very much from Fujitsu. We're at booth A20. Please come and have a chat with us. Thank you.