And today I'm going to talk to you about accelerating your digital transformation. Let's see if I can just get my mouse pointer here working again. Before I get started, a little bit about myself: I'm Michael Dawson, and I work for IBM. I lead the team at IBM that's focused on Node.js, supporting both our internal projects that bundle Node.js and enhancing or adding features to Node, which we do out in the community. For that reason you'll see that I'm a contributor; I'm a member of the CTC and TSC, the boards that steer the direction for Node.js overall, and we're very active in a number of the working groups where a lot of the work on focused topics like benchmarking, long-term support, and so forth takes place. Our team has also done some interesting things like porting V8. If you know Node.js, V8 is the core JavaScript engine that supports it, and we've done the work to port it to the PowerPC and s390 architectures. It's much more than a regular port where you port it once and you're done: V8 includes a compiler, so basically every week we have to keep up with all the changes being made in that just-in-time compiler to keep those ports going.

In this talk I'm going to cover how people commonly adopt Node.js and what's involved in getting it to a cloud native deployment, for example in Cloud Foundry. As part of that I'm going to touch on the locality of data, because if you have an existing data set or existing applications, that's one of the things that's hard to just move out into the cloud. Once I've identified a number of the potential obstacles on that journey, I'm going to give you some insight into the work we've been doing out in the Node.js community to work around those obstacles or find solutions to them. And finally I'll end with a summary and wrap-up of what we've gone over.

So, some common adoption patterns. We often see Node.js almost coming in through the back door: a developer comes up with a nice solution, not necessarily an official project, but something that solves a problem for the business. They show it, it goes up the executive chain, and it's, yeah, we really want to do this. The decision is, well, okay, we're not going to re-implement that, and it actually ends up going into production. So it kind of sneaks in through the back door, and that initial success then leads to more and more use of Node.js within the enterprise. The next level we see is that people use it as a first step internally, exposing internal APIs or sites for internal use. That's a good way to become familiar with the technology and use it for the first time without having to worry too much about external exposure. The next level after that is that people start using it to expose data sets or existing systems: providing services that wrap something that already exists and offer an external or new way to get at that data, say through mobile or something else. And of course there's always greenfield development, where you just start with a whole new idea, say in a startup. That's obviously easier in that you don't have to deal with existing data and infrastructure and all that.
Most of the obstacles I'll be talking about are more likely in one of those first three cases, where you're starting with an existing enterprise: you have systems and data, and you want to continue to leverage those as you move forward.

Locality of data turns out to be one of the important issues, because moving the data can be hard. It may be sensitive, confidential data that you don't want to move out into the cloud. There may even be regulation that prevents you from doing that: if you work in the federal government or certain industries, your data may have to be located in a particular place, and you can't easily move it. You may also be using a legacy data store; even though Cloud Foundry is fairly rich in terms of the different databases it supports, maybe the legacy database system you're using just isn't available there. And finally, it just takes effort: if you have an established data store, the effort of moving it somewhere else may be a roadblock in itself.

The other thing is that if you're in a situation where you can't move your data, the locality of the data can affect performance. One of the things we did was put together a sample application using DayTrader, a benchmark that's often used to benchmark Java systems, and we added some new Node.js layers to that benchmarking infrastructure to provide a new way to get at some of that data. That's one of the common use cases we see: you have a system of record and you want to provide a way to get input through Twitter, or to show data on mobile through some new sites, so you use Node.js as a new view into that existing system. For this example we ran it both ways: first with Node.js on an external x86 system, so you had two systems, your legacy system and your new Node.js front end; and then with Node.js running on the legacy hardware as well. We measured the difference, and in this case we saw two and a half times better throughput by co-locating the Node.js application and the data, and the response times were 60% better. So being able to run Node.js co-located with your data can have a significant impact on the performance and effectiveness of your overall application.
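To make that pattern concrete, here's a minimal sketch of the kind of Node.js tier described above: a small HTTP front end that exposes a new JSON view over an existing system of record. The backend URL and response shape here are hypothetical placeholders, not the actual DayTrader setup; the point is that every request makes a hop to the legacy system, which is why co-locating this tier with the data pays off.

```js
// Sketch: a Node.js front end that wraps an existing backend service.
// BACKEND_URL is a hypothetical placeholder for the legacy system.
const http = require('http');

const BACKEND = process.env.BACKEND_URL || 'http://localhost:9080/legacy/quotes';

http.createServer((req, res) => {
  // Forward the request to the legacy system and reshape the response.
  http.get(BACKEND, (backendRes) => {
    let body = '';
    backendRes.on('data', (chunk) => { body += chunk; });
    backendRes.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ source: 'legacy', data: body }));
    });
  }).on('error', (err) => {
    res.writeHead(502);
    res.end(JSON.stringify({ error: err.message }));
  });
}).listen(process.env.PORT || 3000);
```

The fewer network hops between this tier and the data it fronts, the better the throughput and latency, which is what the DayTrader numbers above demonstrate.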
Having said that, let me go through some of the potential roadblocks if you're starting with one of those first three ways of adopting Node.js. If you have an existing environment and existing standards within your development organization, having to change all of that to get started is a barrier. It's important to be able to start using Node, experiment, and build up experience with the tools you're already familiar with. So if you're using a particular operating system and a particular set of hardware, being able to start on that hardware and operating system is going to help you get started on the journey, even if your end goal is a cloud deployment on something like Cloud Foundry.

The next thing is that for traditional applications we're familiar with using products and services that have a certain level of support. So you want good confidence that, in the case of Node.js, the versions you're using are going to be supported, you know how long they'll be supported for, and they're going to be stable. I'll talk about some of the things we're doing to provide that as well. Unfortunately, things will go wrong, so if you have applications in production, you want to know that when they do, you'll have the tools to investigate and resolve those problems; for monitoring and problem investigation it's important to have the infrastructure, frameworks, and tools you need. And then there's a baseline set of requirements that's just as important: for a real application we need internationalization, we need to know our applications will be secure, we want to know that the infrastructure we're relying on is high quality and stable, and finally we want it to be performant, so that we maximize the use of the hardware and infrastructure we're investing in.

Before I get into the things we're actually working on in the community to address those issues, a little bit about IBM's involvement in the community itself. We've been very active right from the beginning: IBM was one of the initial Platinum sponsors of the foundation, and we helped pull that together. We have nine collaborators, people who can commit code and review pull requests. We have two CTC/TSC members, which as I said are the boards that chart the direction for Node and resolve conflicts. And we're active in many of the working groups, so if you have an interest in things like benchmarking, APIs, or the build infrastructure, you'll see that we're actively involved there. These are the names and faces, so if you go to GitHub and you're interested in getting involved and you see us, it's a very open and friendly community; just say hi and get involved.

The first thing we focused on was the environment: we wanted people to have a choice of platforms when they came to use Node.js. We worked hard to make sure our IBM platforms are supported, so binaries for Linux on Power, Linux on Z, as well as AIX are now all available on the community download sites. We've shipped the IBM SDK for Node.js since 2013, going all the way back to Node 0.10, and we're currently working on support even for z/OS. So if you have data located on z/OS and you want to leverage it, there's already a tech preview, and actually this slide is a little out of date: we have a beta available now, so if you want to try it out, it's there. More generally, we've of course focused on our platforms, but we're also involved in supporting the broad platform coverage that Node.js has, including things like Alpine; some of our internal groups are saying, hey, we'd like to be able to deploy in smaller containers. So that's something we're looking to help facilitate, to make sure that when you want to get started, you can start with the existing environment and platforms you're familiar with.
The next thing is that once you start development, we want to make sure you have deployment choice. Maybe you want to start locally and then deploy to the cloud once you're ready. A few things we recommend on that front: develop for deployment independence. Don't tightly bundle in dependencies on local infrastructure or a particular logging mechanism; plan to leverage the services that the infrastructure provides for you. Environments like Cloud Foundry give you ways to do logging, load balancing, and scaling in and out, so don't build those into your application. If you plan to leverage that infrastructure, you can start internally if you want and then go external later on. The twelve-factor app methodology lists twelve good practices to follow if you want to fall into that nice pattern; there's a sketch of the idea below.

In terms of providing deployment choice, as a company, through Bluemix we provide a deployment option that supports Cloud Foundry and even Kubernetes and Docker type deployments, and you can get that in flavors which are public cloud, cloud dedicated to yourself, or even managed locally on your own hardware if you don't want to go out, as well as, of course, the serverless offerings.

We're also working on some tools to make it a little easier to get deployment independence. One of the things we recently worked on is pattern generators, which basically give you the scaffolding for the application you want to develop. Using the bx dev command line, which is an extension of the Cloud Foundry command line, you select a pattern for the type of application you want to develop; based on that pattern it asks you what kinds of services you want to plug in, authentication, databases, and it generates the scaffolding for an application you can put into production. The nice thing about the way it generates that is that you can take advantage of services locally, run and test it locally in a Docker image, or push it up to Cloud Foundry and Bluemix type installations. If you're interested, the link there goes to a very good blog post about the whole concept and how to get started.
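As a concrete illustration of that deployment-independence advice, here's a minimal sketch, assuming a Cloud Foundry style environment: configuration comes from environment variables and logs go to stdout, so the platform supplies the port, the service credentials, and the log collection rather than the application hard-coding them.

```js
// Sketch of twelve-factor style deployment independence.
const http = require('http');

// Cloud Foundry injects PORT; locally we fall back to a default.
const port = process.env.PORT || 3000;

// Service credentials come from the environment (VCAP_SERVICES on
// Cloud Foundry) rather than being hard-coded into the application.
const services = JSON.parse(process.env.VCAP_SERVICES || '{}');

http.createServer((req, res) => {
  // Log to stdout and let the platform's log service collect it.
  console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
  res.end('ok\n');
}).listen(port, () => console.log(`listening on ${port}`));
```

The same code then runs unchanged on a laptop, in a Docker image, or pushed to a Cloud Foundry installation, which is exactly the flexibility the pattern generators aim to preserve.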
The next obstacle I talked about was long-term support. If I have an application in production running on a particular version of Node, I want to know that I'll have support for that version and won't have to update, say, two months from now. So one of the things we've worked hard on in the community is the long-term support strategy. When the foundation came together and io.js and Node.js came together, there really wasn't a plan for how long the binaries we put out there would be supported and receive updates. We worked in the community to develop one: every six months a Current release is cut, and every alternating Current release becomes a long-term support release. Node 8 was announced just a couple of weeks ago as the next Current; six months from then, roughly in October, it will become the next LTS release.

So two years ago we had Node.js 4 in October, one year ago Node.js 6 in October, and we're going to have Node.js 8 in October. You get about 30 months of support: an initial 18 months of very active support, where you get lots of changes, fixes, and even some features going in, followed by a shorter 12-month maintenance phase that's really just critical security fixes and things that are truly important. The key thing here, and what I think we're proud of having achieved since we got involved, is that there are very regular and predictable releases. Every year you have a new LTS release, supported for 30 months, and we now have a couple of years of history of releases coming out on that predictable, consistent pattern, which as an enterprise is something I think most people are looking for.

The next thing is that once I have my long-term releases in production, when things go wrong I really want to know that I can dive in and figure out what's going on. I was involved with our Java team for 10 or 12 years, and we found there were really three key tools you needed to figure out a lot of the gnarly problems. The first is first-failure data capture: something easily consumable, easy to create, and human readable that gives you quite a lot of information about what's going on, so you can quickly look at it and often figure out the problem with just that one piece of information. If you're familiar with Java, that's called a javacore, and Node didn't have an equivalent. So our team got involved and we helped the community develop node-report, which I'll show you a picture of, and which serves the same purpose: you can easily generate it, and it gives you lots of information about what's going on, which is often enough to solve the problem. The next piece is heap dump generation. We're still working in the community to make it easy to generate heap dumps; it's something I'd like to see bundled right into the core Node runtime itself. And then core inspection: if your application fails in production, you can't really stop and attach a debugger. You want to just generate a core file, an image of the memory of what was running at the time, and then be able to look at that and figure out what the problem was. I'll talk a bit more about that as well.

Just on node-report: it gives you a human-readable summary of the state of your Node.js runtime and its application. It gives you a summary of the triggering event, the versions of Node.js and its components, and even some of the modules that are running inside. You get things like JavaScript and native stack traces, and it tells you things like your OS ulimit settings. So if somebody has set your ulimit too low for, say, the number of file handles, and your application can't open new files, that's a pretty good indication of what's going on. That one in particular, the ulimits, we've seen a lot: you look at those and can immediately figure out that somebody just hasn't configured something right. So node-report is something I'd recommend. It's bundled into the IBM SDK for Node.js, which is what's available in Bluemix, our product that supports Cloud Foundry, and we're working in the community on a track where it will be bundled into the core runtime as well.
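Here's a minimal sketch of what using the standalone node-report module looks like, based on its published npm API; by default, requiring the module also arms report generation on events like fatal errors and signals.

```js
// Sketch: on-demand report generation with the node-report module.
const nodereport = require('node-report');

// Generate a report programmatically, e.g. from an admin endpoint
// or a signal handler, while the process keeps running; the name of
// the written report file is returned.
const filename = nodereport.triggerReport();
console.log('node-report written to', filename);
```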
In terms of heap dump generation, there's a heapdump module, written by Ben Noordhuis, one of my coworkers, that lets you generate a heap dump programmatically. You can then open those heap dumps in the Chrome Developer Tools, which has some nice screens where, if you have multiple heap dumps, you can see the difference between two of them. So if you take three of them in a row, you can see: wait a second, I'm getting more and more of these particular types of objects, so I'm probably leaking objects of that kind. There are still some challenges. You need to modify your application to get those dumps, although node-report does give you an option to generate some of them on certain events like unhandled exceptions or out-of-memory, so there's a little bit of help there. Generation is generally slow, because if you're using a lot of memory it's going to take a long time to write a dump that captures all of it. And the content is somewhat limited. So we're still working on that front in the community, in the post-mortem working group, to make heap dumps easier to generate, consume, and so forth.
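As a sketch of the programmatic approach, here's roughly what using that heapdump module looks like; the interval-based trigger is just an illustrative choice for catching a slow leak, not a recommendation from the talk.

```js
// Sketch: take a heap snapshot every five minutes so successive
// snapshots can be diffed in Chrome DevTools to spot growth.
const heapdump = require('heapdump');

setInterval(() => {
  // writeSnapshot picks a default name like heapdump-<time>.heapsnapshot;
  // the callback reports the actual filename or an error.
  heapdump.writeSnapshot((err, filename) => {
    if (err) console.error('heap snapshot failed:', err);
    else console.log('heap snapshot written to', filename);
  });
}, 5 * 60 * 1000);
```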
And finally, core inspection: if you're really stuck and all you have is that core file, there have been a few solutions over time. MDB was a debugger available from Joyent, one of the early stewards of Node.js. It had a limitation in that you could only run it on SmartOS, so if you were running in production you'd have to take your core files from Linux and move them over to a different system to do the inspection. IBM had a product called IDDE that let you do similar things: take core files and open them up. Both of them understood the structure of V8 and Node internals, so you could say, print out this object, giving it a C pointer, and it would interpret the whole structure and tell you, okay, it's a string, and here are its contents. It could do things like find roots, and all sorts of very useful things a tool can do if it has enough knowledge of what's going on. Unfortunately, IDDE had some technology we couldn't open source, so we're currently working in the community to standardize on llnode. LLDB is an open source debugger, and llnode is a plugin that goes on top of it, so you can get those same capabilities in a solution that's standard and open source for all Node.js users. The end goal is that you'll be able to create your core dump on any platform and use LLDB with llnode to read in those core files. It reuses the metadata that's in Node.js describing the structures of V8 and the structures of Node, and it has commands in the debugger to print those out. We'd also like to build, and have started working on, a JavaScript API layered on top of that, so people can use JavaScript to write things that introspect the core files; we're hoping that will open things up to even more people getting involved in that end of things as well.

Of course, we want to bring these tools into the cloud, so this is a picture of one of the dashboards you can see, or will be able to see soon, through Bluemix. We want to make it easy to generate node-report reports and display them, save heap snapshots, or generate cores for your running applications.

Monitoring is also important, and if you go out to the booths you'll see lots of companies developing products to monitor your applications in production, Dynatrace, Opbeat, and others. We do believe, though, that there should be some level of monitoring you can get that's open source. So we've worked on an open source monitoring set of tools called appmetrics. It's out there on GitHub; you can go download it and use it. It's also packaged into the IBM SDK for Node.js, so you get it automatically if you're using Bluemix. There are a few components around it: appmetrics-dash lets you generate a very nice dashboard using the data that comes out of appmetrics, and if you're familiar with Java, Health Center is a client that shows similar data for Java, and you can use that same client to connect to both Java and Node.js instances to visualize the data. appmetrics itself is the data source: it's an npm module you install, and it generates data about what's going on, your CPU usage, your HTTP requests, really all sorts of things happening within your application. You can write your own consumers of that data: in JavaScript you can register for the events and then use them to, say, keep track of the GC information. One of the things we put together is appmetrics-dash: with one line in your application you can add a dashboard that shows up on a particular port, and you can just point HTTP at it to see what's going on. If you're interested, you can go to npm and install that package. And as I said, Health Center is another client that supports Java as well, so you can use it as a UI to connect to a running application and see a visualization of all the data appmetrics is putting out there. So if you want something that's a little lighter weight, or open source, as a way to get into monitoring your applications, that's a good place to start.
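Here's a small sketch of both styles mentioned above, based on the modules' published npm APIs: the one-line appmetrics-dash dashboard, and a custom JavaScript consumer subscribing to appmetrics events (the 'gc' event fields shown are from the module's documentation).

```js
// Require appmetrics early so its probes can instrument other modules.
const appmetrics = require('appmetrics');

// The "one line" dashboard: attach() serves it alongside the HTTP
// server(s) your application creates.
require('appmetrics-dash').attach();

// A custom consumer: monitor() returns an emitter for the collected data.
const monitoring = appmetrics.monitor();

// Keep track of garbage-collection activity as it happens.
monitoring.on('gc', (gc) => {
  console.log(`GC: type=${gc.type} duration=${gc.duration}ms used=${gc.used}`);
});
```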
The other thing I touched on is that as we put our apps in production, we really want to have confidence in the runtimes. The community has done a lot of work, which we've been involved in, to make sure the Node.js runtimes themselves are stable and you can have a high level of confidence when using them. The first thing you'll notice is that there are a bunch of different release types: nightly releases, the Current releases I mentioned, and then the LTS releases. The goal is that we don't want to trade off quality for speed. It's very important in the Node.js community to be able to innovate, change, and get things in, but at the same time we need high-quality runtimes for people using them in production. People often talk about those as a trade-off; we looked at it more in terms of how you maximize both at the same time. The different release types allow us to have nightlies, with the very latest changes that just went in; Current, which is updated very regularly, so every week or two you get a new release with the features that are in master, and if you want to live on the bleeding edge you can work with that; and then the LTS releases, which are the ones we recommend for production. As changes flow into the system, they're tested in master, people can try them out in the nightlies, and once they've proved themselves there for a few weeks they flow into the Current release, where people living closer to the bleeding edge can try and validate them, as some people like to do. Only after all of that happens do we pull changes back into the LTS releases, with an additional level of scrutiny, so that by the time changes get into an LTS release the chances they'll regress things are much lower, and we may not pull back the really high-risk changes at all. So by having the different release types, you can keep up good innovation while making sure you have solid releases.

We have processes like the enhancement proposal process, which lets us discuss and come to agreement on larger changes, so that bigger things are documented and get more discussion. There's a very strong push in the community on automation and testing: every change that goes in gets regression tested against all the platforms, and that runs in about 20 minutes, which is pretty good given the amount of testing that gets done. In addition to that testing we have Canary in the Goldmine (CITGM), which runs the functional tests of a number of modules that you're probably using. There are lots and lots of modules out there, more than 400,000, but we've chosen a much smaller subset that we know lots of people use and that are really important, and before every release goes out we run their tests to make sure the new release hasn't broken them. We continue to work on this: we're looking at building up stress testing and tests based on real development workflows to continually raise the bar on the quality we can deliver.

We've also worked in the performance and benchmarking working group to get canaries that tell us whether we're regressing performance. If you go to benchmarking.nodejs.org you can see charts: we run tests every day, and they graph how we're doing on some key metrics for Node.js, hopefully going up, or at least staying even. And finally, one of our focuses as a team at IBM is enabling the community to do things more efficiently. For example, we helped get code coverage generated and published every night: at coverage.nodejs.org you can see what the coverage is. That's actually been quite instrumental in focusing people on which tests need to be added, and in leveraging the large community of people interested in getting involved and helping out. Since we started that, the overall code coverage went from something that was already pretty respectable, in the mid 80s I think, to somewhere around 93 to 95 percent. If you're interested in reading more about the overall approach, you can find the "quality with speed" blog post that Myles Borins from the community and I authored, about the different things we're putting in place to achieve that balance.
The other thing I've been personally involved in, and think is quite interesting, is N-API. Native modules, and having to recompile them for every version of Node, have been a bit of a challenge. If you've written a native module, you know that today they're quite tightly bound to V8. V8 moves very quickly, each release is only supported for about six weeks, and that basically means every time you upgrade Node you're going to have to recompile your modules. You don't always want compilers in production, so this makes things more difficult. N-API is an interface that will be provided by Node itself; it wraps, and basically hides, V8 behind the scenes and breaks that dependency. It's a new set of functions you can use in developing native modules, so that once you compile against one release, the module will continue to work without recompilation as you move up to new versions of Node.

Internationalization is quite important to IBM's customers, and I know to lots of big companies, so Steven Loomis, our ICU lead within IBM, was instrumental in working with the community to get ICU bundled into Node.js. The other thing we're working on is error messages: today the internal error messages are just strings, which makes them very difficult to update, because if we find a typo we're reluctant to fix it, since that's potentially a breaking change for an application that depends on the exact string. So we're going through and introducing codes for every message, which will let us change the strings more easily, and also opens up opportunities like internationalization of those messages. That's another thing we're participating in.
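To illustrate why those codes matter, here's a small sketch, assuming a Node version where this particular core error already carries a code: checking error.code keeps working even if the message wording is later fixed, while matching on the message text does not.

```js
try {
  // A call that throws a core TypeError (illustrative example).
  Buffer.from(undefined);
} catch (err) {
  // Fragile: breaks if the message wording is ever corrected.
  if (/must be/.test(err.message)) {
    // handle the invalid argument...
  }

  // Robust: the code stays stable even when the text changes.
  if (err.code === 'ERR_INVALID_ARG_TYPE') {
    // handle the invalid argument...
  }
}
```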
In terms of security, there are two aspects. One is features: Bluemix needed a FIPS version of Node.js. The community had done some work to enable an option where you could compile it that way, and we helped out by making sure the full test suite actually passes when FIPS is turned on, and by adding that configuration to the nightly regression runs, so we know it will stay consistently green; we have a job that runs all the time. The other interesting thing going on is an agreement with the foundation to bring in a security vulnerability database covering modules and their known vulnerabilities. There's a security working group, which Sam Roberts from our team is participating in and helping lead, working to bring that data in and leverage it for the overall community, so we know when there are vulnerabilities in the packages we're using.

As I mentioned, there's benchmarking.nodejs.org. The approach there was to define the use cases, identify and build benchmarks, and run and capture the results. You can see the results we generate every night. It's still a work in progress: we need more benchmarks, and we need to add them to the set we run every night as part of building our safety net.

So, as a summary: we're involved because Node.js is a key runtime for polyglot deployments; we think Java, Node, and Swift are the three key ones we see in environments like Cloud Foundry. I touched a little on how people commonly get started on the journey to cloud native and some of the obstacles that might get in your way, and then I went into the work we're doing with the community to work around those obstacles and provide solutions. My last pitch is that it's a really open and welcoming community, so if any of these things are interesting to you, come meet us on GitHub, get involved; we'd love to see you there. So thank you very much. I think we have time for maybe one question if there's someone; otherwise Michael's hanging around and will be happy to chat with you later. Okay.