We're back here live in Santa Clara. This is the Velocity Conference by O'Reilly Media. This is SiliconANGLE and Wikibon, theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. We have been on a whirlwind. We were in Las Vegas for all the shows last week: HP Discover, IBM Edge. We were at the Worldwide Developer Conference in San Francisco, and at the GE Industrial Cloud Internet of Things event yesterday. Now we're here with O'Reilly Media at the Velocity Conference for two days of live coverage. I'm John Furrier, the founder of SiliconANGLE.com, and I'm joined by my co-host. I'm Dave Vellante of Wikibon.org. Stephen Ludin is here. He's the chief architect at Akamai, focused on site acceleration and security. Stephen, welcome to theCUBE. Thanks, good to be here. So we heard one of your colleagues giving a keynote this morning on Akamai.io. And you guys are obviously focused on some of the big problems on the web, whether it's routing data or securing data and so forth. So talk about your role, and then we'll get into some of those challenges and how they've evolved over time. Sure. At a super high level, Akamai is split into two main focuses. One is around delivery of media: video, large files, that sort of stuff, what you think of as a traditional CDN. The other side is the performance aspect: think commerce websites, web delivery, small objects, which is really what Velocity here is all about. So my role for the past number of years has really been working on how to push the bar faster and farther when it comes to that smaller-object delivery, to get to the point where everyone wants to be, which is instant page rendering. Yeah, so you've got a session tomorrow, and you're going to be talking about performance, particularly from the standpoint of what the user sees, right? Talk about that a little bit. 
So tomorrow's session, well, we have Mobitest coming out. I don't want to steal too much thunder from the actual announcement, but it's going to be about the advances we've made with Mobitest, something that we're giving back to the community to help with WebPagetest and with acceleration of your own properties. So maybe you could talk a little bit about performance in general. I know you don't want to give away, show too much leg on tomorrow, but talk about why web performance is getting so much more difficult. I mean, obviously complexity is on the rise, but what does that mean? Maybe unpack that a little bit for us. Well, there are a couple of things going on. One would think that with bandwidth getting bigger and bigger and more broadband, that problem would be going away. But what's happening is that everyone's expectations are ratcheting up as well. Look at a website from '93 or '94: that's going to be an instant-render site, but no one wants to go to a website from '93 or '94. So in order to produce the experience that people expect, the technological needs are huge. We have to deliver more, and we have to deliver it faster. In addition, the variety of networks you have to go over is huge. For example, we have traditional broadband to the home; that's easy. But that same website needs to go to someone's 3G handset as they're on a train traveling to work. So that's where the challenges come in, and trying to address those situations, to have what we call a situational performance solution, is so critical. And this is where all the front-end optimization technologies come to bear to help deal with that ever-changing internet. Stephen, you mentioned expectations. Talk about the two areas. Obviously there's diversity of traffic. I mean, traffic's like data, right? 
It's never going down, it's only going up, right? And you have issues around the borders of certain service providers and networks. You've got to deal with policies and different routing capabilities; there are network issues. And then there's also overall traffic diversity, mobile versus web, having certain content formats change based on the device. Then you have the internet of things, the edge of the network. So you have a mega trend of explosion at the edge, which is throwing off data and packets in certain form factors or containers, and then you've got traffic issues. Software-defined networking has completely turned the networking world upside down: east-west, north-south. So you have an explosion and two really big mega trends in your world: packet management, policy-based routing, load balancing, virtualization on networks. And then you've got the application edge. How are you guys making sense of all that? Because you're in the middle of it. You're at 35,000 feet, and you've got to change out the engine of the airplane at the same time. Right, well, that's a great question. And this is why we have a different focus on these very different traffic types now. In the end, it's a lot of the same software and infrastructure delivering both; it's how we apply those technologies to the different problems at hand. One example: when it comes to video, you want to figure out how to handle a constant bandwidth for a sustained period of time over these less-than-perfect networks we've talked about. When it comes to a website, you're talking about a huge burst of traffic that has to get down instantly, otherwise your customers walk. With those two different things, one of the consistent problems is that the edges of the internet, whether they be origin or consumer, are exploding in size, while that central, tier-one part of the internet isn't growing at pace. 
Congestion isn't getting better; it's getting worse. So anything that has to travel through that central point is going to have problems. This is where the Akamai idea of having points of presence close to end users, a network hop or two away, is so critical to delivering the sort of real-time experience that everyone expects these days. So web app guys like Facebook have really done a lot of pioneering work by building their own stuff, because they couldn't buy it off the shelf. But you're also a supplier to them on the network side. What are you guys offering folks on the ops side who are looking to tool up, who want to be the next Facebook? Well, it might not be that kind of scale, but take the same example: you've got a small, tight-knit group of people who have actually kicked some butt on the scale side of ops, managing a lot of apps and a lot of traffic. What are you offering those customers in the traditional IT or app space? Well, this is where you have to look at the content delivery space with two different lenses these days. You have the old-school lens of what it was back in '99, 2000, which was: give us your objects, and we'll deliver them for you. The new-school lens is really more about how I get a bit of data from place to place faster without any caching whatsoever, which is what we call acceleration. We got into the acceleration space back in about 2004 or so, so we've been doing acceleration for a long time, alongside what we call traditional CDN. When it comes to your IT folks, the ops guys, they usually have two tasks at hand. One: they have to deliver stuff, and it has to be fast, rapid, real time. The other part is the cost considerations they have to figure out: do I need two boxes to serve this, or do I need 2,000? 
And for them, one of the places Akamai comes in is providing incredible offload for anything that is offloadable. We talk about acceleration, we talk about caching, we talk about dynamic content. But despite all the dynamic content out there, there's still a huge amount of stuff that is cacheable. And that's where any solution for acceleration needs to have both a caching solution and pure acceleration, getting bits from A to B faster. Stephen, you alluded earlier to the challenges that mobile brings. We were at GE yesterday at this big event talking about what they call the industrial internet. How will the internet of things generally, and specifically this machine data, which is going to generate many, many petabytes of data, change some of the challenges that you face? Well, there are so many interesting things to think about there. I'll start with one part of a whole list of problems. First off, every one of these things needs a name and an address, and we're out of addresses. So this is where moving from IPv4 to IPv6 is so critical to making this internet of things a reality. Right now we're rushing forward in this direction without really a net to catch us. v6 is still out there, right over the horizon. There are a lot of people moving in that direction, but it's going to take some time. Right now I think Akamai sees about one or two percent of our traffic actually going over v6, which is great, but we need to get to the point where it's the other way around, where one or two percent of our traffic is over v4. The other thing around the internet of things is just the pure connectivity aspect. We're not talking about a world where there's an ethernet cable plugged into the back of your refrigerator or your car; we're talking about mobile. So the networks themselves are going to get more robust. They're going to get more solid and reliable. 
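The "we're out of addresses" point is simple arithmetic: IPv4 addresses are 32 bits, IPv6 addresses are 128 bits. A quick sketch of the gap (the eight-billion-people figure is a round assumption for illustration):

```python
# Arithmetic behind the IPv4 exhaustion point: a 32-bit address space
# cannot cover one address per person, let alone one per "thing".
ipv4_addresses = 2 ** 32    # ~4.3 billion
ipv6_addresses = 2 ** 128   # ~3.4e38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")

# Assume roughly 8 billion people: IPv4 gives each person about half an address.
print(f"IPv4 addresses per person: {ipv4_addresses / 8e9:.2f}")
```

Every refrigerator, car, and sensor needing its own name and address is what pushes the count past what v4 can ever supply, and why v6, with its effectively inexhaustible space, is the prerequisite here.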
I liken it a lot to back in '95, '96, when dial-up was the norm. Everyone said, well, how is this ever going to work, because this isn't usable. And sure enough, things changed. The same thing is going to happen in the mobile world: those networks will improve. There will always be a bleeding edge, and people like Akamai will be at that bleeding edge trying to make it better. But things are going to get better. Yeah, and as you've written, there's a lot that can go wrong between point A and point B. Indeed. All right, so beyond the announcement tomorrow that we're all eagerly anticipating, what do you see over the horizon? What's next for Akamai, some of the big challenges that you guys are tackling? God, there are so many challenges. Scale is probably the biggest one. The biggest challenges, of course, are the ones that we don't know about yet. What makes this industry so incredible is that, although a lot of us sit here and talk about what is going to happen six months from now, half the time, more than half the time, we're wrong. It's those surprises that catch us completely off guard that make it so interesting and fun to be in this space. Give me an example of something that really surprised you guys, that you responded to, and talk a little bit about how you responded. So, this is going back in time, but I was truly surprised by the explosion of TLS/SSL and people using the encrypted internet. And the reason is not that it was a good thing, but that it was really, really hard to do, and it's hard to do well. The whole infrastructure of it is a collaboration that mostly works, and there's a lot of cost in it. And so the fact that probably about 20 to 30 percent of the traffic Akamai does now is over TLS/SSL, and we're terminating all that traffic for our users, has been a huge growth that I personally did not expect. 
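A big part of that TLS cost is latency: the handshake adds round trips before any data flows, so total setup time scales with the round-trip time to wherever the handshake terminates. A rough sketch of that arithmetic, using the 150 ms versus 15 ms round-trip figures from the conversation and assuming, for illustration, one RTT for TCP setup plus two for a classic TLS handshake:

```python
# Rough sketch of connection-setup latency: round trips times round-trip time.
# The round-trip counts and RTT values are illustrative assumptions.

def setup_time_ms(rtt_ms: float, handshake_round_trips: int) -> float:
    """TCP connect (1 RTT) plus the TLS handshake's round trips."""
    return rtt_ms * (1 + handshake_round_trips)

for label, rtt in [("distant origin (~150 ms RTT)", 150.0),
                   ("nearby edge (~15 ms RTT)", 15.0)]:
    print(f"{label}: {setup_time_ms(rtt, 2):.0f} ms before the first byte of data")
```

The same handshake costs ten times less wall-clock time when it terminates close to the user, which is the logic behind the approach described next.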
Now, one of the things Akamai has done to adapt to that is what we call early termination. When you try to negotiate a TLS connection, the handshake, you have to go back and forth many, many times. You'll hear people at this conference especially talking about round trips, round trips, how many round trips. There are two things you can do: you can reduce the number of round trips, or you can make those round trips really, really short. So with early termination, what Akamai is doing is getting the handshake to terminate very close to the end user, so that rather than 100 or 150 millisecond round trips, you're talking about 15 millisecond round trips, and you just zip through them very quickly. Awesome. Okay, we've got to break. Thanks for coming on theCUBE. Obviously, Akamai has a storied history as one of the most successful early web companies, and content routing keeps evolving; it's never going to go away, it's kind of like data. Dave, I always joke about storage: it's the industry that never dies, because we need to store more and more things, and it's the same with your traffic business as it evolves. Congratulations on your success there, and app acceleration, app delivery is not going away, especially with mobile. Thanks for coming on theCUBE, Stephen, we appreciate it. We'll be right back with our next guest right after this short break. All right.