Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in downtown San Francisco at the Mission Bay Convention Center at Node Summit 2017. We've been coming to Node Summit off and on for a number of years, and the growth of this platform for development is pretty amazing. It has really taken off. There are about 800 or 900 people here, which is about the limit of the facility at Mission Bay, but we're really, really excited to be here, and it's not surprising to see Intel here in full force. Our first guest is Monica Ene-Pietrosanu, Director of Software Engineering at Intel. Welcome.

Thank you very much for inviting me.

Oh, absolutely. It's definitely exciting to be here.

Node is this dynamic community that grows in one year like others do in ten. So it's always exciting to be at one of these events and present the work we are doing for Node.

So, you're on a panel later on, "Taking Benchmarking to the Next Level." What is that all about?

That is part of the work we are doing for Node, and I want to mention here the word stewardship. Intel is a long-time contributor to open source communities and has assumed performance leadership in many of them. We are doing the same for Node: we are trying to be a steward of performance in Node.js. What this means is we are watching to make sure that every check-in that happens doesn't impact performance. We are also optimizing Node so it gets the best out of the hardware and runs best on the newest hardware we have. And we are also developing new measures, new benchmarks, which better reflect the reality of data center use cases.
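The per-check-in performance watching she describes can be pictured with a tiny timing harness. This is a minimal illustrative sketch, not Intel's actual benchmark suite; the JSON round-trip workload is a stand-in for a real hot path.

```javascript
// Minimal sketch of a micro-benchmark harness of the kind used to watch
// whether a check-in hurts performance. The workload (JSON round-trip)
// is illustrative only.
function benchmark(name, fn, iterations = 1e5) {
  // Warm up so V8's JIT has a chance to optimize the hot path first.
  for (let i = 0; i < 1000; i++) fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return { name, opsPerSec: iterations / (elapsedNs / 1e9) };
}

const payload = { id: 42, tags: ['a', 'b'], nested: { ok: true } };
const result = benchmark('json-roundtrip', () =>
  JSON.parse(JSON.stringify(payload)));
console.log(`${result.name}: ${Math.round(result.opsPerSec)} ops/sec`);
```

Running the same harness before and after a commit gives a first-order signal on whether that commit regressed the measured path.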
The way Node is getting used in the cloud and in the data center, there are very few ways to measure that today. With this fast development of the ecosystem, my team has also taken on the role of working with industry partners and coming up with realistic measures of performance.

Right, so are these new benchmarks that you're defining around the capabilities of Node, or are you using old benchmarks? How are you addressing that challenge?

We started by running what was available, and most of the benchmarks were, let's say, quite isolated. They were focused on a single node, one operation: not very realistic in terms of the measurements being done for the data center, especially as everything in the data center is evolving, right? Nothing runs on one single computer. Everything is impacted by network latencies. We have a significant number of servers out there. We have multiple software components interacting. So it's way more complex. And then you have containers coming into the picture, and everything makes it harder and harder to evaluate from the performance perspective.

I think Node is doing a pretty good job from the performance perspective, but who's watching that it stays that way? Performance is one of those things that you value when you don't have it, right? Otherwise, you just take it for granted. My team at Intel is focused on top-tier scripting languages. We are part of a larger software organization called the Software and Services Group, and we are optimizing and driving the performance of Python, Node.js, PHP, and HHVM: some of the top-tier languages used in the data center space. Node is actually our most interesting story in terms of evolution, because we've seen extraordinary growth there too. It's probably the one that has doubled over the past three years. The community has doubled. Everything has doubled for Node, right?
Even the number of commits, depending on which statistics you look at, has doubled or more. So it's very fast progress, which we need to keep pace with. One thing that is important for us is to make sure that we expose the best of our hardware to the software. With Node, that takes an interesting approach, because Node is what we call CPU front-end bound. It has one of the largest instruction footprints of any application we've seen. For this, we want to make sure that the newest CPUs we bring to market are able to handle it.

I was just going to say, they had Trevor Livingston on from HomeAway kicking things off today, talking about the growth. He said a year ago they had one Node.js project, and this is a big site that competes with Airbnb and is now owned by Expedia. Now he said they have 15 projects in production, 22 almost in production, and 75 other internal projects, in one year, from one. That shows pretty amazing growth and the power of the platform. And from Intel's point of view, you guys are all in on cloud. You're all in on data centers; we've all seen the ads. So you're aggressively taking on the optimization for the unique challenges of that special environment that is cloud, which is computing everywhere and computing nowhere, but at the end of the day it has to sit on somebody's servers, and there has to be a CPU in the background. So, looking at all these different languages, why do you think Node has gone so crazy?

Oh, I think there are several reasons. My background is C++ development and security. So coming into the Node space, one thing amazed me: only 2% of the code is yours when you write an application. So where is the other 98% coming from? It's already pre-developed. It's an ecosystem; you're just pulling in those libraries. In addition to the security risks that brings, it brings a fantastic time to market.
It enables you as a developer to launch your application in a matter of days instead of months. So time to market is an unbeatable proposition, and I think that's what drives this space, where you need to launch new applications and upgrades faster and faster. For us, that's also an interesting challenge, because our CPU roadmaps are not measured in days, right? So we want to make sure that we feed the developments we are seeing in this space back into the CPU roadmap. On my team, I have several principal engineers who work with the CPU architects to make sure that we are continuously providing this information back. One thing I wanted to mention: as you probably know, since you've been talking to other Intel people, we recently launched our latest generation server, Skylake. On this latest generation, some of the Node workloads we've been optimizing and measuring show a 1.5x performance improvement over the prior generation. This is a fantastic boost, and it doesn't come from hardware alone; it comes from a combination of hardware and software. We are continuing to work with the CPU architects to make sure that future generations also keep pace with these developments.

It's interesting, the three horsemen of computing, if you will, right? There's compute, there's storage, and there's I/O and networking. It's funny they brought up Ryan Dahl; we interviewed him back at Node.js, I think in 2011. It's still one of our most popular segments on theCUBE. We do thousands of interviews a year, and he's still one of the most popular. But to really rethink the I/O problem in this asynchronous form seems to be another real breakthrough, one that opens up all kinds of capacity in compute and storage when you don't have to sit and wait.
So that must be another thing you've addressed from both the hardware and the software perspective.

Yeah, you are right on the spot. I think Node, compared to other scripting languages, brings more of the whole platform into the picture. It's not only the CPU; it's also networking, and it's also related to storage. So it makes the entire platform shine if it's optimized for the right capabilities. We've been investing a lot in this. All our work is made available as open source; all our contributions are upstreamed back into the main tree. We also started an effort to work with the industry on developing these new workloads. Last year at Node Interactive, we launched a new workload benchmark for Node, which we call Node-DC, with its first use case: an employee information system simulating what a large, distributed data center application would be doing. This year at Node Summit, we will be presenting the updated version, 1.0 this time (it was version 0.9 last time), where we added support for containers and included several capabilities so it can run in a configurable manner, in as many configurations as needed. And we are contributing this back as well. We submitted it to the Node.js Foundation so it becomes an official benchmark for the Foundation, which means that every night, after the build system runs, it will be run as part of the regressions to make sure that performance doesn't degrade. So that's part of our work. And it continues an effort we started with what we call the Languages Performance Portal. If you go to languagesperformance.intel.com, we have an entire lab behind that portal in which, every night, we build these top-tier scripting languages, including Python, Node, and PHP, and we run performance regressions on them on the latest Intel architecture.
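The nightly regression runs she describes boil down to comparing each fresh score against a stored baseline and flagging meaningful drops. A hypothetical sketch (the numbers and the 5% tolerance are illustrative, not the project's actual thresholds):

```javascript
// Sketch of a nightly regression gate: flag any benchmark score that
// drops more than a tolerance below its baseline. Values are illustrative.
function checkRegression(baselineOpsPerSec, currentOpsPerSec, tolerance = 0.05) {
  const change = (currentOpsPerSec - baselineOpsPerSec) / baselineOpsPerSec;
  return {
    change,                              // fractional change vs. baseline
    regression: change < -tolerance,     // e.g. >5% slower => investigate
  };
}

const nightly = checkRegression(120000, 110000); // ~8% drop: flagged
console.log(nightly.regression ? 'REGRESSION: needs root-causing' : 'OK');
```

A gate like this is what lets a team "jump on" regressions the morning after a bad commit, rather than discovering them releases later.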
We contribute the results back to the open source community to make sure the community is aware if any regression happens, and we have a team of engineers who jump on those regressions and root-cause them.

So Monica, we're almost out of time. Before I let you go: we were talking before we got started. I love Kim Stevenson, I've interviewed her a bunch of times, and one of the conversations we had was about Moore's Law, and how Moore's Law is really an attitude. It's a way of doing things, more than the physical limitations on chips, which I think is a silly conversation. You're in the role of constantly optimizing and making things better, faster, cheaper. As you sit back and look at what you've done to date, and look forward, do you see any slowdown in this ability to continue to tweak, optimize, tweak, optimize, and get more and more performance out of these new technologies?

I don't see a slowdown. At least from where I sit, on the software side, I'm seeing only acceleration. The hardware brings a 30-40% improvement; we add on top of that the software optimizations, which bring another 10-20%. That continues to go on, and I'm not seeing it slowing down. What I am seeing is a growing need for customization. So when we design the workloads, we need to make them customizable, because there are different use cases across data center customers. They are used differently, and we want to make sure that we reflect reality, how these workloads run in the real world, so that our customers and partners can leverage them to measure something that's meaningful for them. In terms of speed, we want to make sure that we fully utilize our CPUs as we go to more and more cores and increased frequencies. We are also growing to more capabilities, and our focus is to make the entire platform shine.
And when we talk about the platform, we talk about networking, non-volatile memory, and storage, as well as the CPU.

So you're safe, Gordon Moore. Your law's still solid. All right, Monica, thanks for taking a few minutes out of your day, and good luck on your panel this afternoon.

Thank you very much for having me here. It was a pleasure.

All right, Jeff Frick, checking in from Node Summit 2017 in San Francisco. We'll be right back after this short break. Thanks for watching.