Man of the hour, for 30 minutes. 30 minutes? Oh, well, an hour. So what do I do with the other 25 minutes? I'm going to do a quick introduction to IBM Bluemix. That's IBM's play in the platform-as-a-service market. You know, all this wonderful confusion: software as a service, business process as a service — also called BS as a service — infrastructure as a service, and platform as a service. Just to clarify the whole thing: infrastructure as a service is your server, no longer sitting in your basement. Platform as a service is your server, with the admin tied to it, no longer sitting in your basement. And IBM entered that market very late. We started only last year in July with the official availability. We don't have the Singapore data center up and running yet; that's coming sometime this year. It's my job to push for it. We currently have the platform as a service running in Texas as well as in London, and scheduled to come up are Melbourne and Singapore. The big thing is, what you see there is the web UI; there's a command line as well. We do Cloud Foundry — we decided not to reinvent the wheel, because we've got the patent on the wheel anyway; IBM is old enough. And we also have containers. We're not allowed to say Docker, but this is basically IBM's version of Docker. One of the things we did beyond what standard Docker does: you check your definition into a version control system, and the backend builder builds your Docker image for you, including a vulnerability scan — and if you haven't patched it properly, it won't deploy it for you. And then the other thing: if you have to run a virtual machine, because you want some funny operating system like Windows, or BSD, you can. So what I'm going to show you quickly is what the whole platform is about. My dashboard is pretty empty.
I just have one IoT demo, which I'm going to use later. It uses a Cloudant database — that's a NoSQL database, also known as Apache CouchDB. And I don't have any other stuff. So when I want to create an application, this is really complicated: I say, create an application. And then it asks me, hey, what do you want to do, web or mobile? I don't understand mobile, so I do web. And then it says, OK, pick your poison. We currently have Java, Node.js, Go, PHP, Python, Ruby, ASP.NET. And the nice little thing is, if you have a buildpack — heritage of Cloud Foundry — you can deploy your own buildpack. My favorite buildpack is the null buildpack. The null buildpack doesn't do anything; it just copies the files you have. If you have Linux binaries, you can use the null buildpack to run them on Bluemix as well. OK, I refuse to do PHP. Pick your language: Potentially Hazardous Programming, PHP. Python. So I said, I want to do Python. And then it says, a Python application gets you up and running quickly, and blah, blah, blah. There is one interesting thing down here: 375 gigabyte-hours free each month. Last time I checked the manual of physics, there was no such thing as a gigabyte-hour. Patent pending, IBM — we invented a new physics measurement. This is the size of the runtime times the hours run. And if you're really good at math — who's good at math? — that's basically one 512 MB runtime, or two 256 MB runtimes, a whole month, free of charge, every month. Other than our friendly competitor, the big one with the A, we don't charge for network. There is no network transfer charge; there's only runtime for the runtime, storage for the storage, and API calls for the APIs, depending on what you use. OK, so we use Python. Then it asks me how I want to name my application. I was already a nice boy and preconfigured a few domains for myself.
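If you want to sanity-check that gigabyte-hour math yourself, here is a tiny sketch — the 730 hours-per-month figure is my own approximation, not anything from the Bluemix pricing page:

```python
FREE_GB_HOURS = 375      # free allowance per month
HOURS_PER_MONTH = 730    # ~365 * 24 / 12, an approximation

def gb_hours(instance_mb, instances=1, hours=HOURS_PER_MONTH):
    """Size of the runtime (in GB) times hours run, summed over instances."""
    return instance_mb / 1024 * instances * hours

# One 512 MB runtime running all month stays under the free allowance,
# and two 256 MB runtimes use exactly the same total.
print(gb_hours(512))                # 365.0 GB-hours, under 375
print(gb_hours(256, instances=2))   # same total as one 512 MB runtime
```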
Usually you start with mybluemix.net, and then you compete with everybody else over what the application's name is going to be. This is why I use dragontamer — my boys are born in the year of the dragon, so that's why dragontamer. What should I call it? Come on, I need to ask you. Ralph. Ralph? OK, Ralph. I paid him for that. No, they are Ernest and Anthony. As I finish, in that very moment, a Cloud Foundry build script kicks off in the background, and if I'm very lucky, the application will actually run — which it usually does. Then it says, OK, here are the two options you have. Since it's a Python application, it doesn't offer you Eclipse — real Python developers use what IDE? Notepad? Command line? Whatever. vi. vi, yeah, that's good. vi is less painful. No, it's as painful as poking yourself in the eye. If you know the colon commands, you're cool. That's all I ever learned. OK. So we have two ways. We have the command line interface — all the cf commands, I'm going to show that in a second. And, my personal favorite, we have full Git integration. OK, so when I go into the overview, the overview now says that you have one instance of Python running with 128 MB of memory. Python is actually quite frugal compared to the other languages — excellent choice. And it says your application Ralph is now running; it shows me here ralph.dragontamer. I have the activity log. And down here I can say, estimate the cost of the application — I'll come back to that in a second. And up here it says, add a Git repository. I said, OK, add a Git repository. IBM runs, as part of the Bluemix environment, under the name hub.jazz.net, a full-fledged Git server with a few little extras — and that's the build and deploy capability. And it says, OK, you want a Git repository? And I say, oh yeah, I'd love a Git repository. I click OK, continue, and bang, off it goes and creates a copy. Just before that step — yeah. What did we have?
Did we have a Django framework, or did we just have plain Python? Let's have a look — this is why I copied in the Git repository; we're going to have a look at the source, what it just did right now. If you have some ready-made code, you can use that as well. But before it copied to Git, did we have anything? There is — yeah, OK. So let me just show you the running application: ralph.dragontamer, and that's the Hello World one. Let me open that up. That was a copy to Git, not from it. To Git, to Git. So that's basically the Hello World application. To my best knowledge there's no framework in there. We do have boilerplates with Django — I'll cover the boilerplates in a few minutes. OK. So now I go back to my dashboard. I now have the Git URL and straight away the ability to edit code. Since I'm supposed to speak faster: this is how the repository looks. OK, I need to reload that, otherwise it doesn't show up. These are all the applications I have in there. I can then invite different people to participate. Where is it — Ralph? There you go. And one of the things we did, like I said: when you look at standard Git, it's just version control. What we added — does it run? Yeah — one of the things is Track & Plan, which is a little bit like our little bug tracker. And then my little favorite is Build & Deploy; I'm going to show you that in a second. So we used the Eclipse Orion project, which is a browser-based editor. I'm not sure who would do that in his right mind, but like I said, if you're in need somewhere and you bloody need to fix that two-line bug you found, you can go in there. And it will actually check out the complete repository — there's no live editing of running code involved. And then Build & Deploy — let me just show you that quickly. And there you see, there's just a server.py, so there's not much in it.
So we have a build stage and a deploy stage, and you can add a test stage. I said: once I stuff something into the Git repo, typically into the master branch, go run the build; if the build was successful, deploy it. And for the build — let me just go to the thingy, configure the stage — it says, okay, where does my input come from? Okay, it comes from the repository; this is the URL it came from; when anything goes into the branch master, please trigger the build. And then for the jobs, you can see the builder types, and that's the interesting part. We have basically the who's who of build tools: Gradle, Grunt, Maven, npm, or shell script. And then, what I mentioned earlier, the container service to build a Docker container in the background. So let's say this is a Docker project: you check your Docker definition into version control, and it gets built and deployed in the background. You don't need to wait. These are the capabilities. Whatever your fancy, there's always shell script if all else fails, okay? And that makes things very, very easy. So you happily deploy: check it into the respective branch, and it gets pushed out. You can also have different build stages, so that if I check into the dev branch, please deploy to the dev space, and if I check into the master branch, please deploy to the production space. But the real fun is — that's our Hello World; I think you'd believe me, if I edited the source code, that it runs. Let's go back to the dashboard. A Python server that runs and does nothing is not a good Python server. You need services, or APIs, and this is where the strength of Bluemix comes into play, over all the other Cloud Foundry-based implementations: you've got OpenShift from Red Hat running on AWS, you've got Heroku from Salesforce, a few smaller ones — I always forget them. No? Hadoop. No? Hadoop.
Hadoop is not a Cloud Foundry implementation. Okay, so I'm in my application, Ralph, and it says, what would you like? You have the categories here, and the level of support: is it an IBM-supported one, a community one, is it still beta, or is it third party? And the first one, of course, that I'm very proud of, is Watson — the cognitive computing. I call it teenage computing: you ask a question, you get an ambiguous answer. I ask my boy something — have you done your homework? Daddy, what exactly do you mean by homework? One thing: when you Google Watson Personality Insights, what you should not do is load Personality Insights, remove the sample text, and then, standing on stage in KL, paste Mr. Najib's last speech in there. It's not good. Luckily an AI cannot be sued for libel. Okay, so you get all these funny things. I like Tradeoff Analytics quite a bit. It takes away the necessity of making up your own mind whether you buy this phone or that phone. There's one hard-coded example: there's a soccer game, or a romantic movie you want to go to with your spouse — do you go to the romantic movie? One of the interesting things we were discussing about tradeoff analysis, very Singapore-oriented, is housing — HDB. One of the things I'll probably play around with is creating a demo on how to take all the different parameters for your HDB housing and, based on your parameters, say which HDB flat or which condo — estate, condo, landed housing, anything. That's something Tradeoff Analytics can actually help with. It goes beyond "got money, need condo". Yeah. Okay, then the more interesting things: for mobile services we've got the push notifications — iOS 8 made it a bit special, because the push notifications became actionable, so the API changed. Twilio is a third-party service that lets you send spam. No — SMS. Why do I say spam?
And my personal favorite is Mobile Quality Assurance, which logs for you which of the APIs were called and what happened in your application. You can have, say, a module where you shake your phone and give feedback straight away as the user. And a little secret one: you can use it as a mini app store, bypassing the whole publishing process — for 50-odd users. A little DevOps. Also one of my little favorites: Auto-Scaling. Your application is popular between 11 and 1, not before and not after. So before and after you run one instance, and between 11 and 1, once the demand picks up, you start new instances — and since we only bill you for running instances, you can tailor the bill to what the consumption actually is. That's one of my little favorites. And then you see there's quite a bit of third party — New Relic and BlazeMeter, they're quite popular in terms of monitoring and load testing. Web and application: from MQ Light to Session Cache, the Workflow service, the scheduler where you can say, run some task once every two hours, or on the third of each month — basically what you can do in a good cron job, you can schedule there. Then we have a lot of third party there: Memcached, geocoding from Pitney Bowes — I haven't checked how good they are in Asia. SendGrid. For geocoding? Nope — SendGrid for sending out all your spam. Session Cache, that's quite a thing. When you start developing cloud-based — this is probably not relevant for you guys because you're cloud-born — but I see it with enterprise developers. They're so used to having this one big web server, and then using a singleton to keep track of all the sessions. But when you have multiple instances, all of that doesn't work anymore. So you need a service that keeps your sessions alive, because you never know where the user is going to hit. It's almost like kung fu: you never know where you get hit. Okay. So down there, integration.
This is one of the things where we are pretty strong: cloud integration, Secure Gateway, and API Management. API Management is my current favorite, because my boss wants it to be my favorite. This is a layer between your application and your clients where you can throttle, measure, and bill your API usage. Our enterprise customers especially are very fond of that. They say, hey, if I suddenly have 10,000 requests per second, my backend can't handle that. I say, don't worry: we put API Management in front and say the open API can be called 10 times a second, not more. Your paying customers? They can call it 1,000 times more. And your super-special customers run their own hardware, so you can define what they get to do. And you get nice little graphics: which application, with which key, called what, and all that stuff. You know, you code all your REST APIs to properly do application keys and all that stuff. No, you haven't. API Management puts that layer on top for you, and you just need to — if it's a REST call, you're good. And then data management. Starting with "SQL Database", which I find very amusing: they didn't dare to call it what it is, a DB2. Then we've got Cloudant NoSQL, my little favorite, because, like I said, I know Damien Katz. Damien works for Salesforce now, ex-IBM. He's crazy — he wrote CouchDB in Erlang. Not Erlangen, that's a city in Germany: Erlang. And if you write Erlang, you need to be crazy. But it runs. Okay, then we've got MongoLab, that's quite popular, ElephantSQL, and ClearDB. So we basically have all the bases covered. And then big data. For somebody who, like I said, pretends to know something about data, dashDB is pretty cool. That is a database for analytics; works nicely with R. Or BigInsights, you know — if you can lift your data, it's not big data. Time Series Database: the artist formerly known as Informix.
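The throttling idea behind that API Management layer is simple to sketch. Everything below — the class name, the per-key limits — is my own token-bucket illustration, not the product's actual API:

```python
import time

class Throttle:
    """Token-bucket sketch of per-API-key rate limiting."""
    def __init__(self, limits):
        self.limits = dict(limits)              # key -> calls per second
        self.allowance = dict(limits)           # start with a full bucket
        self.last = {k: time.monotonic() for k in limits}

    def allow(self, key):
        now = time.monotonic()
        rate = self.limits[key]
        # refill the bucket proportionally to elapsed time, capped at `rate`
        self.allowance[key] = min(
            rate, self.allowance[key] + (now - self.last[key]) * rate)
        self.last[key] = now
        if self.allowance[key] >= 1:
            self.allowance[key] -= 1
            return True                         # let the call through
        return False                            # the real layer would answer 429

# the open API: 10 calls a second; paying customers: 1,000 times more
gate = Throttle({"open-api": 10, "paying": 10_000})
```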
Insights for Twitter is kind of a mix, because it's data storage and it links out to Twitter straight away. Then there's a bit of security, business analytics, Internet of Things, and custom services — and this one I built myself. That's with the API Management: I defined it in API Management, and it surfaced back into Bluemix. Okay, but the database. Let's add the database. I say, okay, I want a database. It tells me, okay, you want to be in the space "development", you want it for the application Ralph, it gets a fancy name, and it tells me 20 gig is free per month, and 500,000 reads and 100,000 writes, more or less. After that, 1,000 calls is roughly 3 US cents. I say create — bang, the whole thing gets created and added to my project. So then the big question is: how do I then get to that database using Python? It says, okay, I added something, do you want to restage? I say, okay, restage it. And now I see down here, environment variables. There's a wonderful environment variable called VCAP_SERVICES, and there you can read it — it's a JSON string. It tells you where the host is, what the URL is to get there, what your username is, what your password is. You shouldn't flash that on screen. They're going to delete this application after the show, so it doesn't really matter. Fine. You read that into Python. How do you do that? I don't know, I don't speak Python — at least I pretend I don't. It's under environment variables, so you can get to it from any Python program, just from the environment variables. By the way, if I'm not wrong it's VCAP underscore something — VCAP_SERVICES, there you go. Okay. So let's have a quick look at the Cloudant database. I could actually steal the URL straight away. Should I steal it? So the Cloudant database runs on its own infrastructure and allows me to — does it? It's looking it up.
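Reading VCAP_SERVICES from Python is only a couple of lines. The service label cloudantNoSQLDB and the credential field names below are what I recall from the Cloudant service binding, so treat them as assumptions and check your own app's environment panel:

```python
import json
import os

# On Bluemix you would read the real thing:
#   raw = os.environ["VCAP_SERVICES"]
# Here is a sample of the JSON shape so the sketch runs anywhere:
raw = """
{
  "cloudantNoSQLDB": [{
    "credentials": {
      "host": "example.cloudant.com",
      "url": "https://user:secret@example.cloudant.com",
      "username": "user",
      "password": "secret"
    }
  }]
}
"""

services = json.loads(raw)
creds = services["cloudantNoSQLDB"][0]["credentials"]
print(creds["host"])  # hand creds["url"] to your CouchDB/Cloudant client
```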
And then it allows me to have basically JSON documents stored, with a nice little map-reduce mechanism to get to the data. And we have an internal server error — also sweet. Okay, never mind — you know, TV kitchen: here's one I prepared earlier. Here's the Cloudant database. If you look very carefully, this one actually runs on localhost, so I can have it on my machine, and it replicates — sorry, synchronizes — with any other one. My little favorite feature — where's my Russian friend? — this one runs on a Raspberry Pi as well. We did a project where we used a bunch of locally deployed sensors sending their data to a Raspberry Pi, stuffing it into a CouchDB, and then synchronizing it back to the cloud. So we could do all the analysis in the cloud, where everybody can see it. But I don't have that here. And when you look into it — let me just open one of the documents and show you. You see, it's a pretty straightforward JSON document. The funny thing is, _id and _rev are mandatory; _rev is what's used for conflict resolution when synchronizing. And then the rest can be whatever you like. This one is quite interesting: this is a design document — the _design. This is a map function. It says: give me all documents where the document type is "question". So this is lovely. Just to show you: I have a view "by ID", and that is — no? Okay. Oh, I don't have queries in here, sorry. But that's basically what's in that design document. Very optional. Yeah. When do we get to the IoT stuff? We get to the IoT stuff in about 90 seconds. So, rough overview. Now the interesting thing is IoT. I was a bit lazy, so let me close this — I created an IoT app already. Since my hardware didn't work — you, Spark! — I have one little simulator. So I have an IoT sensor at 38 degrees right now, virtually — 36, 35 — and then I have an application I created, the IoT demo. We don't need that. Where's my IoT thing?
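For reference, the design document he is describing looks roughly like this. CouchDB map functions are written in JavaScript, so below the map is stored as a string inside the document, with a Python re-statement of the same filter; names like _design/questions are made up for illustration:

```python
# A sketch of a CouchDB/Cloudant design document. _id and _rev are the
# mandatory fields; the map function itself is JavaScript stored as text.
design_doc = {
    "_id": "_design/questions",        # illustrative name
    "views": {
        "by_id": {
            "map": "function(doc) { if (doc.type === 'question') emit(doc._id, doc); }"
        }
    },
}

def map_questions(docs):
    """What the map above does, re-stated in Python: emit (id, doc)
    pairs for every document whose type is 'question'."""
    return [(d["_id"], d) for d in docs if d.get("type") == "question"]

docs = [
    {"_id": "q1", "_rev": "1-a", "type": "question", "text": "why?"},
    {"_id": "a1", "_rev": "1-b", "type": "answer", "text": "because"},
]
print(map_questions(docs))
```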
To create this — I'll just show it. Where is it? That was the arrow. The easy way back is just to go to Bluemix; that's the fastest. What IBM is using for the IoT cloud is a tool called Node-RED. It sits on top of Node.js and escaped out of IBM Research — somehow they must have gotten IBM Legal drunk, because they were allowed to publish it under the Apache license. So when I go to the catalog, I have what we call the boilerplates. These are basically ready-made menus: Coke, fries, and a burger. And one of them is the IoT starter — they don't call them boilerplates anymore, they call them starters; they just deployed a new version last week. Yeah, I know, I suffered through that. The Internet of Things Foundation Starter: I click on this, it tells me what it is, and then I can go and say — yeah, credit card amounts; hey, I work for you, I don't want that. It says it's the SDK for Node.js, a Cloudant database, and then, here, Node-RED. Anybody heard about Node-RED already? A little bit? Okay, no. So Node-RED basically looks like this — this one's wide open, I hadn't closed it. Well, here we are. It looks like this: I have a series of elements on the left-hand side; I can drag and drop them as I need, and connect them to each other. And when you look at this, you see I have inject — that basically just sends data in there. I can connect to MQTT, HTTP, WebSocket, TCP/IP, MQ Light, and IBM IoT — I'm going to spend a little more time on this one. And then for outputs: debug, HTTP response, functions — run a function, run a template. I have switch and change and range and what have you. I can do input, output, email, Twitter, whatever, storage, and all the usual suspects. I have the sentiment analysis; I have all the Watson tools in there. Those you will not find when you install your own version of Node-RED — they are specific to Bluemix. And then, just to make it more fun:
For instance, this one is Node-RED running on my local machine. And there, when you look down here, you see this is a little hardware device that talks Bluetooth. You can say, okay, I run it on a Raspberry Pi, stuff a Bluetooth shield on it, and then use that to interact with this kind of device. But okay, back to our little example. So how do you get the LightBlue Bean extension into Node-RED? You just define it as a dependency in your package.json file. That's it. Which dependency? The LightBlue Bean contrib package, whatever it's called — it's on the LightBlue Bean page. Let me show the sample application first. So I have the IBM IoT node. There are two mechanisms: Quickstart, or the proper one using an API key. I use Quickstart. I only need a device ID, which I copied already from the other one. Of course, to make things a little more complicated, we have a bug in the software which requires that the colons go and that the letters be lowercase. It took me two embarrassing shows where it didn't work. Where does this device ID come from — is it given to you? The device ID comes from the device sensor; it's basically a MAC address. In a production environment you use an API key; you don't just use the device address. This is for demo purposes — we all know the road to hell is paved with demo versions. And then I say okay, and up here my deploy button lights up. I say, okay, deploy this. It goes and starts, and now I should be able to see messages from the device. Then I go and say, okay, take a function and extract the temperature from the payload. And then, based on the temperature, I have a temperature threshold: is it below 40, above 40, or exactly 42? I have three exit points, and then I can say, okay, here I have a Mustache template to output the data. And then, in this case for demo purposes, I simply use a debug node.
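The device-ID fix he mentions — drop the colons, lowercase the letters — is trivial to script if you're pasting MAC addresses by hand; a one-liner sketch:

```python
def normalize_device_id(mac):
    """Turn a MAC address into the form the Quickstart node wanted:
    no colons, all lowercase."""
    return mac.replace(":", "").lower()

print(normalize_device_id("AA:BB:CC:DD:EE:FF"))  # aabbccddeeff
```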
And then when you look at this — oops, I've played a little bit already — you see temperature 35, within safe limits. Let's play a little: 35, up, up, up, up — 42. I go back to my application, and for one thing, all questions of the universe are answered — that's here, this one. And since I said evaluate every condition, it also says temperature is critical. So I immediately have, without great ado, a sensor that can connect to the IBM IoT service in Bluemix to process my results. Now you can say, oh, this funny virtual sensor is no big deal, but I have real sensors — how do I do this? Any data size constraints on the transmitted IoT data? Of course, if you want results fast, you should keep your data small. What if it's once every minute? That's not a problem. By the way, one of the things you might want to do is pump it into the Alchemy API, and it will tell you which people were in the picture. Alchemy is within this? Within Watson. There is also a Visual Recognition API for Watson, which is another thing you can use. My favorite example: they have a wonderful picture of a tiger, and Watson looks at it and says, with 78% probability, that's a tiger. You know, I put my own picture in there to ask Watson, what is this? You know what it said? You're a shoe. A shoe? I'm 78% sure you are a shoe. Quite embarrassing when I was presenting in public: you are a shoe. Okay — on developer.ibm.com, or the IoT Foundation site, both URLs go to the same place, there are all the ready-made recipes for IoT devices. What I was using was a simulated device, but if you look, we have the ARM mbed in the little gallery. I was in Penang last Saturday, so I had a little chat with the Intel guys. They just took one out of the box, plugged it in, deployed the library, and it just worked. Which is pretty cool, because you can use the device simulator to try something first.
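The evaluate-every-condition behaviour of that switch node can be sketched in Python like this; the rule texts are paraphrased from the demo:

```python
def check_temperature(payload):
    """Evaluate every rule, like a Node-RED switch node set to check
    all rules: 42 also matches 'above 40', so two outputs fire."""
    t = payload["temperature"]
    hits = []
    if t < 40:
        hits.append("within safe limits")
    if t > 40:
        hits.append("temperature is critical")
    if t == 42:
        hits.append("all questions of the universe are answered")
    return hits

print(check_temperature({"temperature": 35}))  # ['within safe limits']
print(check_temperature({"temperature": 42}))  # two rules fire
```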
You can even create a page which emits data and play with it, so even if your hardware is not ready, you can already program. The Raspberry Pi, the Arduino Uno, the TI stuff, and so on. The Spark is an Arduino. So what's cool about this, for those people who have been doing software development: can you imagine, you're just mocking your device before you even have the device? Pretty much — you can simulate, and eventually you replace it directly when the device is actually available. So you can do all your testing and everything with a mock hardware device. I think from a software development perspective it's pretty useful: you don't have to build your hardware first and wait for it to be built before you do your software. The other thing is, there's a Node-RED extension for the Spark Core directly, from what they now call Particle.io. I didn't get my bloody Spark to work, otherwise I would have shown it. So basically that's all the stuff: if you say, this MQTT stuff is fine, but I really don't want to go into the deep details, you get the ready-made libraries there. And of course, as another example, anything that's able to do an HTTP GET or POST you can connect to the IoT service as well. There's a question here? Well, I have seen that, for example, if you simulate the Raspberry Pi, you have some graphical representation, some way to see the GPIOs, see the outputs, and simulate the — sorry! That's not what I said. What he means is not that you simulate the Raspberry Pi itself. I'm saying: think about building a piece of hardware — you know what it will send out, and you can simulate the signals already, not the device itself. We're not in the business of — we just simulate revenue. So I have a question: if I have the actual device, how do I get the actual device to connect? Click here and it brings you to the download page.
There's a small library for that: you just add it to your project, and you need network connectivity, with a Wi-Fi shield or a network shield. So this library will assist us in transmitting the data? Correct. Are there any data types that are already defined? No. Of course you have strings in there — basically, at the end of the day, when you look at MQTT payloads, most of them are string-based — but, like I say, nobody stops you from transmitting binary data. MQTT is perfectly capable of doing that, based on your network capabilities. The maximum message size in MQTT is 256 megabytes — that's the standard. So: SD card, Linux image copied to the micro SD card, connect your board, off you go. The funny thing is the Galileo uses the Arduino IDE — I find that quite amusing. So quickly back — where is my — oops, no — here we go. Then I use another example: the Twine. This one. The Twine can't do MQTT; the Twine only does HTTP. So you see here, temperature — let me switch off the simulator data first; this is like switching the debug on and off. And I've got the Twine here, so I shake it a little, and eventually you see temperature 25 degrees, orientation, and then the message comes over. That's the message the Twine network generates when I put the Twine flat. And when you look here, as an example, this basically simply says: okay, I want to have a POST to the URL /twine on my IoT demo, and it will automatically take a standard HTTP POST and transform it into a string for me, so I can process it. So that's another example of how you can connect an IoT-capable device. The libraries that we have, the recipes, use MQTT — so they use the IBM IoT cloud — but if you fancy talking MQTT directly, it's available here as well. And the HTTP node actually knows more than GET and POST: GET, POST, PUT, DELETE. And then the last one — so you don't really need to go through the IBM IoT platform, right? Only if you want to.
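If you'd rather talk MQTT yourself instead of using the recipes, a Quickstart publish boils down to three strings: a client ID, a topic, and a JSON payload. The exact formats below — d:quickstart:&lt;type&gt;:&lt;id&gt;, iot-2/evt/&lt;event&gt;/fmt/json, the "d" wrapper — are what I recall from the IoT Foundation recipes, so verify them against the recipe for your board:

```python
import json

def quickstart_publish_args(device_type, device_id, temperature):
    """Assemble the pieces of an IBM IoT Quickstart publish; the string
    formats here are assumptions taken from the published recipes."""
    client_id = "d:quickstart:{}:{}".format(device_type, device_id)
    topic = "iot-2/evt/status/fmt/json"
    payload = json.dumps({"d": {"temperature": temperature}})
    return client_id, topic, payload

cid, topic, payload = quickstart_publish_args("mysensor", "aabbccddeeff", 25)
# then, e.g. with paho-mqtt: client.publish(topic, payload)
```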
This is like the advertisement where next we'll tell you about the price and how expensive it is. When Daimler-Benz came out with their navigation system, they had this wonderful ad with a beautiful lady: with our navigator, you never have to ask for the way anymore — unless you want to. Same with the IoT cloud. You see down here I have Open Weather. I went in and said, okay, this is just a trigger that repeats in an interval, every one minute, and then it says, okay, please pull the Singapore weather for me, and then just send it to the debug node. And this one was one of the extensions available for Node-RED — so you don't have to go, okay, I need to make an HTTP call to this, and then parse things back; you can really encapsulate functionality into these modules. And this is one of the things I'd like to show you: there is red-nodes.org, the registry where all the extensions for Node-RED are published, and the installation is really, really complicated — for most of the packages, you just go to your package.json file and say, I depend on this package, and it will show up in your editor. That's all. And then you see MQTT extras, AT&T M2X, whatever that might be — there are 14 pages of additional modules that can make your interaction with the Node-RED environment much simpler. And last but not least — where was it — this one, that's my little — there is thethingbox.io, and that is for our Raspberry Pi fans: for the Pi 1 and Pi 2, an image with the Node-RED installation running on it. So if you say, okay, I've got my local little exercise running, I go and collect my data locally — for instance using the Bluetooth devices, the Punch Through Beans — and then either I do HTTP POSTs from there to the cloud, or I use the Cloudant database, or run CouchDB on the Pi and sync to the cloud — that's up to you. So you then have one level of environment where you can very rapidly assemble your stuff. And at the end of the day, the source code is all open. If you say, okay, I'm outgrowing the Node-RED stuff,
you can always steal the source code — the engine that runs under the graphical UI and processes all the data. That was roughly — okay, so was that okay? Now, there's a blog entry — not mine — about installing CouchDB on the Raspberry Pi; you have to compile it and put it on the side. Let me collect all the links and I'll put them on meetup.com. Okay, so: recipes, sample sensors, Node-RED — this one is local, and it has a LightBlue Bean. I said, okay, I want to — say, whenever the acceleration kicks in, please light up the Bean. As simple as that. And for the Bean, I just need to go and say which of the Beans it is and get the ID of the Bean in there, and voilà — this is as fast as you can go and program with this stuff, faster than the other stuff. You can actually process it and output it to an HTTP POST, so all your data and all your processing you can do here, and you can post it to your website. I can go outside and say, okay, out to HTTP, off you go — or to MQ, or WebSocket. Let's say, today I would probably rather use WebSocket than HTTP, but that's up to you. And you can actually say, I also want to put that into a WebSocket — oops, want to put that into WebSocket — and while I'm at it, let me tweet it. Of course, this is not magic: you need to have your Twitter credentials, so you authenticate with Twitter and it posts on your behalf — there's nothing irregular there. Can you sort of decompose them — particularly large, complicated flows? So you see here — you saw that on the other side already — I use different pages, and then there's also the latest thing we added, the ability to — somewhere — what was I looking for?
What am I looking for... subflows. You create a subflow: just press the plus button, and then you have a subflow with an input and an output, and I can call that subflow from one of my main flows. I think it should show up here. Where is it? There you go, subflow. And my weather source should go to the subflow first, and then from the subflow it goes back to the output.

How would you model, say, 10,000 sensors? That's an excellent question. If you go and look at the IBM IoT, let me just drag this one over: you would not go and use the quick start, you use the API key, and with the API key you say, OK, you group them by types. For instance, say you have 10,000 sensors, 5,000 temperature and 5,000 pressure. So you say, OK, the device type, you say tempxy or whatever, however they identify that, and then you collect all of them in one go. So you have some attributes, exactly: device ID, event type, and you can say what format, because maybe some of the devices send plain text. So you have this ability to group them up, so you don't end up with 10,000 inputs. That would be a little silly.

When I deploy the sensors, how does the sensor know about this? Is it something that I provide through the API configuration on the sensor? That's one of the things the libraries that we provide help you to configure. So, what I see in a practical way: this idea of a single sensor that straight away talks to the network is nice for simple use cases. If you start deploying armies of sensors, you have a collection hierarchy, that's very clear. Maybe you have the Raspberry talking to the internet, then you have a bunch of Arduinos talking to the Raspberry, and then the analog ports on your Arduinos serving a bunch of sensors. That would be a very typical hierarchy, where you easily have a few hundred sensors which end up in a single internet connection from a local aggregator like the Raspberry.
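A hedged sketch of how that grouping looks on the wire. To my recollection, the IBM IoT Foundation used MQTT topics of the shape `iot-2/type/.../id/.../evt/.../fmt/...` for application-side subscriptions; treat the exact scheme as an assumption and check the current documentation:

```javascript
// Hedged sketch of the IBM IoT Foundation application-side topic scheme;
// the exact shape is an assumption, check the platform documentation.
function iotEventTopic(deviceType, deviceId, eventType, format) {
    return ['iot-2',
            'type', deviceType, // e.g. "temp" for all 5,000 temperature sensors
            'id', deviceId,     // '+' is the MQTT single-level wildcard
            'evt', eventType,
            'fmt', format].join('/');
}

// One subscription covering every device of type "temp":
// iotEventTopic('temp', '+', 'reading', 'json')
//   yields "iot-2/type/temp/id/+/evt/reading/fmt/json"
```

The wildcard in the device-ID position is what keeps 10,000 sensors from needing 10,000 inputs.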
So this would be something I would provision when I deploy, before I deploy it or while I deploy it? Correct. Any questions? More questions?

So I understand you save all the coding of the low-level stuff, but I still think this is all the fun of coding something. You still can go to the boilerplate and say, give me Go, and then IoT. I totally agree with the approach. I understand that probably I can connect to it; I can easily create an API to connect to it if I have to do more complex logic based on, let's say, Python or Go, and I just connect to an API. But there's no way, once I create this logic here, to actually connect natively with it. I mean, if I want to run this on the actual Raspberry Pi, and I want to use all this logic without having to actually go out to Bluemix.

That one runs on a Raspberry. This thing is actually running, because what I see there, that's not on the local machine: this one is on Bluemix, and that one is local. So I have two installations. So I can have it all running locally and save all the hassle of having to write all the low-level stuff, but then I need two models, one for the Raspberry. It's not one model, exactly.

What I'm thinking is, my background is in the auto industry, and what they sometimes do is create models across different ECUs and then deploy them afterwards and say, this one runs here and this one runs there. Actually, they're called node packages, so the same node package with the same logic can run on two different devices. What you normally do for reusable modules is create one of these yourself, and this is a very simple node package, so even I could read the code.

That is very interesting, but it's not what I meant. I meant that you already have an end-to-end model, and then this part of the model runs here and this part of the model runs there. You can do that. How can you push updates in the
context of one node server? Exactly. Yes you can, yes you can. You just need to make that one leap of faith: a running node server is part of a bigger system, so you orchestrate that using a set of APIs.

I have a scenario, it might be what you just answered. A node server by itself doesn't span across node servers, but nobody stops a node server from going through an API to a fellow node server, so you can orchestrate that.

So I can't build a model in one screen here that has pieces running on one node and pieces running on another? That's right. But you could do that and say, you model it in one screen, deploy it on both ends, and the ends know which part to run. That's not quite the kind of thing that answers the concern we're talking about: do you have a tool here that deals with one deployed instance? The question is about a tool that deals with a deployed instance. So you see, like I say, I've got these four tabs in there. I can take this model and deploy it to all instances. So I have one model that I deploy everywhere, but as long as I use the input from only one, then only that one page will run, because the other pages don't get an input, so they won't process anyway. That's still answering a different question. Node-RED is not a DevOps tool. That's right, so that's what was being asked, I think.

So I have a scenario: if I deploy 70 Raspberry Pis in a mesh, in a village, any one or many of them can break down their connection to the internet. Can I still converge all the data in this Bluemix? It depends on how you design it. I'm just a user-space programmer; can I converge it? So there are two things that come to my mind. The first one is, like, say, MQTT is designed to have reliable message delivery; I would have a little bit of a question of how long it can work, how much buffer it would have before it breaks down. Because I have more of a background in databases than in networks, I would probably go and put a
CouchDB on the Raspberry, because, like, say, you put a 32 GB memory card in there and you can store a lot of data. And that one can say, OK, if I have a network connection, I do a synchronization and transmit the data; if there's no network connection, it just collects the data locally. So using CouchDB you can do that: the occasionally connected network.

The occasional... no? That's basically the scenario I described. My thinking is, in a place where the net can break down at any one point, but each machine can store and forward, can the data still all be converged? I call that the occasionally connected network: the network is always available, except at the moment you need it. You can use them as sources or you can use them as collectors: either they can be the source of information, or they can be collectors as well. So the Raspberry sits there, collects its peripherals, but it also takes in the data from over there and hands it on to the storage over there. So I don't see a problem with that. It's just, when all the Raspberry Pis have all the data, now the question is where you want to send it to, and how.

That's the question: whether this modeling programming environment can support that kind of store-and-forward messaging system. So this really is a case for one.

So are you doing any processing of the data in the Pi? There might be some pre-processing of the data, because you don't want to send out all the data; the unfiltered data is going to be a mess of a lot of information. So what you probably want to do is use this on the Raspberry Pi: each of them gathers the data, breaks it down, analyzes it a little bit, does something with it. Pre-process the data, make it a little bit smaller, so that you send out kilobytes rather than megabytes and multiples of megabytes. I see the pictures coming. My consultant answer: it's very important to have a rough idea of what your connectivity is; that will be the limit.
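The store-and-forward idea above can be sketched in a few lines. In the setup described, the local store would be CouchDB on the Pi and the flush would be CouchDB replication; here both are reduced to an in-memory queue so the control flow is visible. This is an illustration, not the talk's actual code:

```javascript
// Sketch of store-and-forward for the "occasionally connected network":
// always record locally first, forward only when a link is available.
class StoreAndForward {
    constructor(send) {
        this.send = send;  // transmits one reading; the link may be down
        this.buffer = [];  // stand-in for the local CouchDB on the Pi
    }
    record(reading) {
        this.buffer.push(reading);   // store locally, unconditionally
    }
    flush(online) {
        if (!online) return 0;       // no link: keep collecting
        const n = this.buffer.length;
        while (this.buffer.length) this.send(this.buffer.shift());
        return n;                    // how many readings were forwarded
    }
}
```

With CouchDB the same effect comes for free: replication to the cloud catches up whenever the connection reappears.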
How much data you can actually transmit will decide how small you want it, how much you want to compress it, or how much you want to process it.

It's just that the internet connection of any one of these Raspberry Pis might not be reliable, one or many of them. But also, what does "not reliable" mean? 90%, 95%, 30%? The occasional connectivity, I think... OK, I'm not sure I understand the concept, because if you said that this is internet connectivity, this is IP transit, then how do you mesh them? Suppose I had a river, and I had to deploy 70 Raspberry Pis along this river line. So it's not an internet connection; there are not many villages with an internet connection along this river line. How do you mesh them? I mean, the Raspberry Pis have one store-and-forward... I never mentioned Raspberry Pi. Oh, OK, sorry. I only used the word Raspberry Pi because we have used it as a concentrator gateway. If you have a mesh, then what's your mesh? It'll be computers like the Raspberry Pi, just a different computer. What kind of mesh network? Wi-Fi, so we have 300 meters maybe, who knows.

Sorry, I have a question. I don't want to monopolize the questions, but in the first part of the session you showed a Python session, and I think somebody asked about whether the framework is in place or not. The question is, how do I get libraries in there? How do I get my libraries to be in there? So there are two parts. The easiest one is: the very moment you add them to the version control system. You clone the repository, pack everything you need in there, and just deploy it back. So I can just use the repository: I'll write that on my own and get it in there, and then whatever I run there will pick it up from the repository. And there is a slightly more esoteric mechanism in the documentation: a certain set of standard libraries you can define as dependencies, and then the build process will add them for you.

The reason I ask is because it could be pretty dangerous, right, if you don't sanitize that. It is a big problem.
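Coming back to the dependency mechanism for a moment: for the Python runtime, "define as dependencies" is the Cloud Foundry Python buildpack convention of a requirements.txt checked in next to the app, which the staging step pip-installs during `cf push`. The package names below are only examples:

```
# requirements.txt at the root of the application repository;
# the build stage pip-installs these during staging.
flask
cloudant
```

The first mechanism (packing everything into the repository) needs no build support at all; this one trades that self-sufficiency for smaller repositories.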
But you have the challenge of where you sanitize whatever I put in there. So what are you concerned about in terms of sanitization? I assume it will be more your concern than mine, in terms of sanitizing what I put in there, since you're paying for the CPU time. No, you're not paying for the CPU time. Essentially, how Bluemix works is that you're only paying for the memory usage of your instance. CPU time, networking: you're not paying for those. We have a standard fair-use policy. So to answer your question: there is a standard fair use, and if you actually become a rogue application, say for example your application starts using lots of CPU and starts sending out lots of network traffic and stuff like that, we do have an internal kind of mechanism to identify which one becomes rogue, and we'll shut it down.

Sorry? Every application is a sandbox. Every application is a sandbox. Say, for example, he didn't show you, but down below we actually have an experimental MySQL server. You cannot connect to that MySQL server except from within the Bluemix platform. So everything is sandboxed. The only thing that's not sandboxed is your API, or rather your application that's exposed on HTTP port 80. But all database connections and everything else are closed off, unless it's a cloud service, say for example Cloudant, which is a different cloud provider, under Bluemix too, under IBM also. Everything like MySQL that's installed locally on the instance itself is firewalled within that sandbox, within that instance. See, the whole magic under Cloud Foundry does isolation, monitoring, and limiting of instances, so everything is sandboxed and runs on a virtual CPU, basically. Which is quite interesting, because, like, say, how is it different from a container? It's almost the same technology in the end.

Do I have a way to deploy that if I decide to put it into a container or a
virtual machine? So I have that choice. For what? For containers and virtual machines. And what's interesting is, when we started it was Cloud Foundry only, and then we realized that the distinction between infrastructure and platform is kind of an artificial one, because, like, say, where does it stop? If I have a pre-configured container, that's almost as good as a platform. So that's one of the reasons that only in January we added the containers and the virtual machines. They're still in beta, which is quite nice, because beta means we don't charge for them yet. And what will happen over time, I would say give it five years, is that every single IBM software offering will be available as a service on Bluemix, including the ability, like, say, to even provision bare-metal and virtual servers, what you today do in IBM's infrastructure-as-a-service offering, SoftLayer. So the idea is to have a single interface where you can pick and choose all your compute needs.

Hopefully it's not so depressing, that green-brown. I think one of the things is they're trying to change the color scheme and everything. Really? Yeah, they are.

What do you mean? Is this a completely separate approach from your infrastructure as a service? That's SoftLayer, right? So infrastructure as a service, SoftLayer: they're two different offerings there. The funny thing is, like I say, you can buy a virtual machine on SoftLayer and you can buy a virtual machine on Bluemix, and the difference is where the bill comes from. Bluemix goes into the SoftLayer infrastructure, so we make calls to their APIs there. And this is why, like I say, I teach my SoftLayer colleagues and I tell them: you're doomed, because you're going to be Bluemix in half a year. So the next question there will be: does Bluemix allow me to use other APIs, can I talk to Amazon? Yes, you can. So I can use Bluemix to access anything that has, you know, REST APIs? Everything, all of that stuff. And the beauty of
that is, you're not paying for the network cost, whereas on AWS you're actually paying for the network cost.

You still have to understand the API, the Amazon API. Do you have something on the back end for this already built in, let's say for Amazon? They are our competitors; we will not entertain them. There are sufficient libraries from Amazon available. Amazon has APIs for all the different languages, right, to get your job done.

I'm talking about the infrastructure service. If I'm going to deploy something as a platform here, I'm going to use the platform here, but I want this to run on an instance in Amazon. Then we have to be a little bit more specific. When you go to Amazon, you're talking about infrastructure; this is platform. The overlap is: OK, I can deploy containers on Amazon, and I can deploy containers into Bluemix. You then have to go and make the container deployable. When you go and use the Amazon platform as a service, what's it called, the database they're using... Elastic Beanstalk. DocumentDB? No, that was Microsoft. What is it, for storage? S3?
So Elastic Beanstalk would be the equivalent, but those are all proprietary APIs; they're not compatible with anything. I thought you had some sort of abstraction with your connector. Not yet, and that's not going to happen. It doesn't have to be Amazon or not Amazon, forget about Amazon. Let's say I run it in my data center, on my own OpenStack; I don't want to go and implement all this. If it's OpenStack, you can deploy Cloud Foundry on it. Sorry, Cloud Foundry. And I promise that Amazon won't implement it. I mean, Amazon provides the APIs, they're open APIs; as long as you implement the abstraction, you can launch it. I'm running out of time for follow-ups. You can plug in and implement against Amazon, but Amazon is not going to implement these APIs.

Just a question: one of the models is a bigger board. Do you have hardware with the same Texas Instruments chip? Sure, take that one, live, pretty much.

All right, so next up we have Nikolai. If you're saying "Lee": it's "LAY". "LAY"? Yes, actually.