So today we're going to talk about an IoT app. I'm going to show you how an IoT app finds its way to the cloud, basically. We'll show how the same app runs on localhost, then we move it to PCF Dev, then finally to full-blown Pivotal Cloud Foundry. We'll show some differences at each of these stages, and we'll also highlight some of the problems we ran into as we moved the app to the cloud. All right, so we've seen lots of hype around IoT. In the studies you see on the bottom of the screen, IDC and Gartner show that this year there's over $800 billion in IoT-related spending, and billions of connected things just this year. But these numbers will greatly increase: by the end of 2021, we will have over $1.4 trillion in IoT spending, as well as over 20 billion connected things. And of course, that poses lots of challenges as well as opportunities. All right, so we basically worked with our partners and set up some labs, and this is the architecture of the lab environment behind the app I'm going to show you. On the bottom, you see there are three labs, and we are adding more labs as well. Each lab focuses on different scenarios. The first one, the demo lab on the bottom left, is this one; that's for the abandoned-device scenario. The second one, we call it lab one room; that's for drone detection. So think of a surveillance scenario where there's a drone flying around trying to capture information, and we use AI to detect that. And the third one highlights a chemical-pollution scenario. We are also adding several other labs, which are not shown in this architecture. On the top, in the middle section, that's the data center for our environment. It's built on top of Dell IoT Gateways, and we also have Dell PowerEdge servers with storage arrays.
The hardware is set up in such a way that each lab has over a dozen sensors, and each sensor emits at least a record per second. All this data is collected from the labs using the standard MQTT protocol, as well as other protocols, and fed into this hardware environment. We also set up a virtualization layer on top. On top of that virtual layer, we have Hadoop for batch processing, including some of the Hadoop ecosystem tools. You also see the Nest cameras; you can see the video feed coming in, and the same video feed is also ingested into an object store. We also have MongoDB; that's the heart of the data-store part of the demo I'm going to show later. Then on top of that, we have two destinations shown in the picture. On the left is the on-prem environment, Native Hybrid Cloud; that's a solution we built for our customers for on-prem management in a cloud-native environment. On the top right are the public services, Pivotal Cloud Foundry and Pivotal Web Services. Then on the top left, we also support some of the public cloud services depending on the use case; for example, we used Microsoft Azure for a POC. One of the demos we built is a customer-360 profile with license plate detection. Imagine a bank scenario: a customer drives into the parking lot, the video feed is picked up by the surveillance camera, the license plate is analyzed, and that license plate is associated with a banking customer, so we can quickly recognize a high-profile customer. By the time he walks in the door, we can show him different promotions depending on his history with the bank. So that's basically the environment. All right, so to start: our data scientist originally created this app. It was running in our environment, and it uses Shiny, a very popular R visualization dashboard framework. And that dashboard was running basically on his laptop.
Eventually he got this app running on a Shiny server near the edge, but it was still a monolithic architecture. The data source was still basically just some CSV files he fed in to analyze the data. He changed it later to MongoDB, but it still had lots of the issues of a monolithic application architecture: it's slow to load and difficult to manage. So originally our plan was to use a customized R buildpack to push this app to Pivotal Cloud Foundry. There are many R buildpacks you can find on GitHub; we used the buildpack shown on the screen, and you can go to this link to check it out. But I'll show some problems as well as the benefits of using a customized buildpack on the next slide. First of all, these are some of the benefits of using a customized R buildpack, and not necessarily just R, but customized buildpacks in general: they're designed for containers, so you can easily move apps to the Cloud Foundry environment. It's friendly to R users. At least half of our data scientists use R for programming and analytics, so it's very friendly to them; they don't need to talk to the developers, they can figure out how to push the app to Cloud Foundry themselves. And finally, buildpacks like the one I showed previously are designed for containers, not just for Cloud Foundry but also for Heroku, so chances are these data science apps can run in both environments. That being said, there are a lot of challenges using a customized R buildpack as well. First of all, there are a lot of version compatibility issues: certain libraries don't work together, and it's difficult for the end user to figure that out until they push the app to the cloud. Then there's a lot of difficulty sorting out compatible versions; it's just a headache. Second, and this is an even bigger one, the staging time.
By default, I think the Cloud Foundry staging timeout is only 15 minutes. But when you use a customized R buildpack, the installation of the base R package and the installation of all the R dependency libraries are all included in the staging time. So chances are, most of the time, you will exceed this 15-minute staging threshold. Of course, you can export the CF staging timeout as well as the startup timeout, but it's just a pain; you have to remember to do that. Third, it results in a very bulky app, because even for a simple R dashboard app, you have to install the R package as well as all the dependency libraries. Even if you only use a few lines from a dependency library, you still have to install the whole thing, and some of the dependency libraries are pretty big. So a lot of times, after we finished staging the app, the droplet was over 1 GB. It's just bulky. Then lastly, certain technologies don't work with Cloud Foundry. One example is the JDK: Cloud Foundry doesn't support OpenJDK, period. You can work with a JRE on Cloud Foundry, but the JDK is a no-go. Second, we found another problem with OpenMP, the open multi-processing paradigm developers use to run processes in parallel. I think it's because on Cloud Foundry you can natively use cf scale to parallelize your apps; probably because of that, Cloud Foundry disabled OpenMP. So if your app uses OpenMP, or its libraries, or even the dependencies of those libraries use OpenMP, you will get an error when you do the cf push, saying something like the container doesn't work with OpenMP. And here we show some of the libraries that use either the JDK or OpenMP. You can see some of them are very popular libraries, like rJava on the top. Mallet is a very popular machine learning library.
Then RMongo, which is basically the go-to library if you want to access MongoDB from R. And on the bottom, markovchain, which is also a very popular library for building Markov chains for machine learning. So basically, because of these reasons, we decided we're not going to use the customized R buildpack, at least not at the production level. The other direction is to stick with the official CF buildpacks. The example app I'm going to show uses the Python buildpack. It works with Python 2, and you can change it to Python 3 easily. It also runs on localhost as a starting point, then we test in PCF Dev, then finally push it to on-prem PCF and the public Pivotal Web Services. All right, so in the first scenario, running on localhost, you can see in the table we use the language Python 2. You could also use Anaconda Python 2; either one is fine. If you want to change to Python 3, you create a runtime.txt in your app directory and point it to Python 3 so that the buildpack knows which Python version to use. Secondly, you can create a requirements.txt to explicitly list which libraries, and which versions, you want. This is highly recommended as soon as your app is handled by more than one person. Then for the endpoint, there's no login needed on localhost. For dependencies, I basically use an Anaconda virtual environment; you could also just use virtualenv for version control and separate Python virtual environments. And I also use a local Redis service for some of the caching, so you don't keep querying your MongoDB database. Then for the start command, I use Bokeh, a very popular Python visualization library. It can run as just a snippet that produces an image, or you can run it entirely as a Bokeh server. Then finally, the app URL is localhost. Second, in PCF Dev, we switch to the Python buildpack.
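Before moving on, here's a minimal sketch of those two files. The Python version and the library pins are illustrative assumptions, not the exact ones from the talk:

```shell
# Pin the Python version the buildpack should install (assumed version).
echo "python-3.6.2" > runtime.txt

# Pin the app's libraries so every push stages the same way (assumed versions).
cat > requirements.txt <<'EOF'
bokeh==0.12.9
pymongo==3.5.1
redis==2.10.6
EOF
```

Both files live at the root of the app directory, next to the app code.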
And you can see that the Python buildpack will handle all the dependency libraries as long as you define the requirements.txt file, so it's very easy to do the cf push. For the endpoint, you do the typical cf login to the local PCF Dev instance. Then for the dependencies, you have to create a service. First of all, you've got to make sure the Redis service is included; if you just go with the default PCF Dev, chances are p-redis is included in your instance. So you can just do a cf create-service with p-redis to create the service instance. We call it iot_redis; that's this part. Then one difference is the Procfile. You have to define a Procfile for PCF Dev as well as PCF. This is the file where you list the start command, so that when your app gets pushed, PCF or PCF Dev knows which command to run. In this Procfile, since we use Bokeh, we just use the bokeh serve command to start the server. And you can see there's some difference: in the localhost scenario, you give port 8080; in this part, you could keep it 8080, or you could just use $PORT. Then this one, the allow-websocket-origin flag, is an important part. I pointed it to the final URL this app is going to run at. Technically, you could switch this to just a wildcard, a star, but that would pose a security risk. When you run a Bokeh app, whenever you update data and update your application, it uses the WebSocket to do that. If you allow WebSocket connections from any URL, then your application is exposed to basically denial-of-service attacks. For this reason, I highly recommend you explicitly list your app's URL in the WebSocket origin. Then the address is whatever address you want your app to listen on. I give it 0.0.0.0, which means whatever IP address this container gets will be used for listening for requests.
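Putting those pieces together, a minimal sketch of the PCF Dev setup could look like this; the p-redis plan name, the app name, and the hostname in the origin are assumptions for illustration:

```
# create the Redis service instance on PCF Dev
cf create-service p-redis shared-vm iot_redis

# Procfile -- the start command the platform runs after the push
web: bokeh serve iotapp --port=$PORT --address=0.0.0.0 --allow-websocket-origin=iot-app-one.local.pcfdev.io
```

Note the origin lists the app's own route rather than a wildcard, for the denial-of-service reason above.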
So that's pretty much it for the second one. Then the third one is Pivotal Web Services. The major difference here is the port: you can see that I use this port over here, 4443. That's because Pivotal Web Services uses a different default WebSocket port; they changed it to 4443. So if you don't explicitly list the 4443 port, then your app basically won't work; it won't update data, et cetera. Another difference is that Pivotal Web Services uses Redis Cloud as the Redis service, so you have to change from p-redis to rediscloud, and you also have to provide a plan. In this case, I go with the free tier, 30 megabytes. So those are basically the two major differences. Another thing is that because I pointed the WebSocket origin at port 4443, you have to access the app at 4443 as well. All right, so with that said, I'm going to show you a demo. Hello? Sorry, I have to stand a little taller. Yes, so here I have my app. This is the directory of my local app one. Can you all see the screen, by the way, especially from the back? OK, great. So first of all, I'm going to go with this command, but I want to show you one thing before I start the server. You have to set these environment variables; you basically just set up some system environment variables. This is the Windows version I put in a .bat file, and the Linux version uses export. You have to set these variables to tell the app which MongoDB instance to connect to. Then inside your application, you access these variable names to get the data. So that's the localhost scenario. All right, so I'm going to execute this command. Now you see this app is running on localhost. Then here, let me go to the demo gateway, and you see all the devices available in the demo room. You can also go to this Nest camera to see what's really there; so this is the demo room. Let's see, for example, I go to this zone. This is another app we built just for the edge side.
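As an aside on the environment variables just shown: inside the app they might be read like this. The variable names and defaults here are hypothetical, not the demo's actual ones:

```python
import os

# Hypothetical variable names -- the demo's .bat/export scripts define their own.
mongo_host = os.environ.get("MONGO_HOST", "localhost")
mongo_port = int(os.environ.get("MONGO_PORT", "27017"))
mongo_db = os.environ.get("MONGO_DB", "iot")

# Build the connection URI a MongoDB client would use.
mongo_uri = "mongodb://{}:{}/{}".format(mongo_host, mongo_port, mongo_db)
print(mongo_uri)
```

Keeping the connection details in the environment is what lets the same code run unchanged on localhost, PCF Dev, and Pivotal Web Services.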
So the edge app gives us low latency. Let's say I go to the pilot light, the factory pilot light; in the video, you can see it's right here. Right now it's off. Now let's say I go to the same pilot light in my app locally, then I choose the control bits. Right now you see it's zero, because it's off. You can also see here, this is the app on the edge side. Let's say I change it to green. See, there's some latency, but you see the pilot light turns green right now. I'll give it a few seconds so that my app can pick it up. In this app, I update every 15 seconds, so there's a little latency. I intentionally chose 15 seconds so we don't access the database too frequently. Imagine that this app is going to sit in the cloud, away from the edge, so there will be some latency anyway; we don't want to update too frequently. But you see that after a few seconds it picks up that the control bit is now at four, and this is the same number here, color number four. Now let's change it to red. See, now it becomes red, and after a few seconds this app will pick that up as well; the red color, by the way, is one, so the number will change to one after a few seconds. And this is the Bokeh library I use for visualization, so it includes all the functionality of Bokeh. For example, I can do the wheel zoom, I can pan it, and if I'm fine with this, I can just save it into a static image that I can view later or share with my colleagues. Yeah, so basically you see this app has now picked it up. All right, so that's localhost. Now let's take a look at PCF Dev. I'm just going to play a video, because it takes a long time to start the PCF Dev VirtualBox image. First of all, you see this: I targeted the local PCF Dev, logged in with the user and password to the default org and space. Then I go to cf marketplace, and you see right now we have p-redis at the bottom of the list in the marketplace.
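One aside before the service setup: the 15-second refresh from the demo is essentially read-through caching. Here's a minimal sketch of the idea; the class, names, and TTL handling are mine, not the app's actual code:

```python
import time

CACHE_TTL = 15  # seconds -- matches the demo's 15-second refresh interval


class ThrottledReader:
    """Serve a cached value unless it is older than CACHE_TTL seconds."""

    def __init__(self, fetch):
        self._fetch = fetch          # stands in for the MongoDB query
        self._value = None
        self._stamp = float("-inf")  # so the first read always fetches

    def read(self, now=None):
        # `now` can be injected for testing; defaults to the wall clock.
        now = time.time() if now is None else now
        if now - self._stamp >= CACHE_TTL:
            self._value = self._fetch()
            self._stamp = now
        return self._value
```

In the real app, Redis plays the role of the cached value, so multiple app instances in the cloud can share one throttle instead of each hitting MongoDB.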
So I create a service with p-redis; I call it iot_redis, and you see the service gets created. Then I go to my app directory. I have a requirements.txt listing the dependencies, I have the manifest.yml, as well as the app directory. And this is the content of the manifest.yml; of course, I hid the login credentials and such, but once you replace them with the real credentials, you can do the cf push. This is the Procfile. As I mentioned, you can use $PORT; this is the WebSocket origin host, and this is the address the app is going to listen on for requests. After this is all done, we can take a look at requirements.txt and do the cf push. Finally, we just do a simple cf push, because we have the manifest. All right, so we call it iot-app-one. It will basically create the container and run through the entire process of building the app and pushing it. Now it stages, basically installing all the dependencies listed in requirements.txt. This part also installs all the dependencies of my dependency libraries. Then eventually it uploads the droplet, and now my app is running in the cloud. I can go to the web GUI, log in as admin, and check my app's status as well as the running app. So this is the same app; I'm not going to show the rest, to save some time. Finally, I'm going to show how we push the app to Pivotal Web Services. So I'm in my app two. This is a fancier app: I took the original app one and added some CSS styles as well as some JavaScript libraries. Then let's take a look at the manifest. This is the manifest for app two; it's very similar to what you've seen previously in the PCF Dev scenario. In my directory, first of all, I make sure I'm targeting the right instance; you see I'm targeting the Pivotal Web Services endpoint. Now let's basically do a simple cf push, because I already updated my manifest and made sure it works, as well as the requirements.txt. So let's do a cf push.
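While that push runs, for reference, a manifest.yml along these lines could look like this; the app name, memory size, and service name are assumptions for illustration, not the exact values from the demo:

```
---
applications:
- name: iot-app-one
  memory: 512M
  buildpack: python_buildpack
  services:
  - iot_redis
```

Binding the service in the manifest is what lets a bare cf push pick up the Redis instance without extra flags.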
OK, see, now we are creating this app, iot-app-two; that's the name, but the final app is going to run at iotapp2.cfapps.io, at this URL. You see this is very similar to PCF Dev in terms of pushing the app. A little difference is that you see the buildpack is accessing some temporary directory to get some of the base packages. Now that's the requirements.txt, those are the dependency libraries; you see it installs a lot more libraries, because a lot of them are defined as dependencies of my dependency libraries, so it goes through all the iterations. And now it's uploading the droplet. Yeah, it's now starting the app, and finally it's running. I can do cf apps to make sure it's listening, and you see now this app has started. And you can go to the URL I mentioned earlier; make sure to add the 4443 port number, and it's at HTTPS. So now here's app two, with some styles. And you can go to, for example, the demo lab. I'm going to go to the motor and check the RPM. Right now you see this is the motor RPM; it was 0, now it's 500. Let's go to the same one, this motor, the demo room motor; you see it's running. Right now it's at 500; I'm changing it to 250, and let's see what happens. You see now the speed will gradually decrease. So that's pretty much it. This is the customer 360 app I mentioned earlier. I'm not going to show the details, but basically we mimic the scenario of a customer walking into the bank; immediately information like the license plate is picked up by the bank to highlight who he is and what his spending habits are, so they can do targeted marketing. All right, so here are some of the next steps at my company. First of all, we will extend the IoT use cases. We're also creating deep learning use cases; one thing we're working on is GPU-as-a-service for Pivotal Cloud Foundry. And we're also working on edge compute.
If you come back tomorrow, I think my colleague Barton George has a talk on EdgeX Foundry, so you can learn what we have done and what's planned on the roadmap. And finally, of course, we're going to design the next-gen IoT solution. In fact, yesterday Michael Dell announced that at Dell we will spend $1 billion over the next three years just on IoT investment. All right, you can come talk to me after the session; I don't think we have time for Q&A, but make sure to talk to me if you have any questions. Thank you.