Welcome everyone to the XKL Virtual Press Conference. Thank you to the journalists and analysts who join us live today, and thank you to our viewers joining us on demand. I'm Jamie Scato-Gattaya of JSA, and it's my honor to host today's Virtual Press Conference with XKL founder and CEO, Mr. Len Bosack. Len is the founder and CEO of XKL and the visionary behind the e-velocity platform. Co-founder of Cisco Systems, Mr. Bosack launched XKL over 25 years ago, even then a forward-thinking company that engineers and implements optical networking solutions with the future in mind. And today, XKL continues to develop and release enterprise-class products that support the bandwidth needs of customers today and into the future. So Len, welcome, thank you, and congratulations on the very exciting launch of your new product suite, e-velocity. This is huge news. Could you start by setting the stage for us and explaining the basics behind the solution and what makes it unique among other products out in the marketplace?

Okay. Many times when you go to build small and medium-sized optical networks, you find that you've got a lot of last-generation technology sitting around. Currently, the most popular thing is 10-gig Ethernet; there's more 10-gig Ethernet sold right now than anything else in the enterprise and ISP space. We now have, let me be polite, acceptable coherent 100-gig solutions. It's taken even longer than I expected. A number of you may recall that at various industry events around 2011 it was, oh yeah, 2012 is going to be the year, 100-gig is going to take over right then. I didn't believe it. It's considerably harder than 10-gig, and some of you will recall how long it took to go from 2.5-gig to 10-gig. Part of that was the infamous party-like-it's-1999 bust, but still, the physics is fundamentally harder, and so it took a while.
Our problem with 100-gig is essentially that we have to build a computer system that can handle, for example, 56-gigasample-per-second streams on several channels. It takes a lot of power, and that's been most of the problem. So what e-velocity does is it lets you take a whole bunch of 10-gigs and statistically multiplex them up onto a 100-gig line side, potentially coherent, or we can use anything you can put in a CFP at the moment; that's the rear interface. So it works in a sort of old-fashioned way: it amounts to a statistical multiplexer. As a number of you are familiar, Ethernet traffic statistics are very, very bursty. People ordinarily consider, say, Poisson-distributed traffic to be pretty bursty, but with Poisson there is a certain amount of buffer beyond which you essentially don't lose traffic. Computer traffic distributions are much worse than that. We can't solve that problem; there's no finite amount of buffering that won't lose something, and that's something to get over. No matter how much buffer you put in your Ethernet switch, you still lose things occasionally. So we're prepared to do layer one, or maybe it should be called layer one and a quarter, statistical multiplexing, and further, to let you make decisions about who wins if there are short-term traffic conflicts. So you can set priorities in groups, and we will give you better average service to the groups you prefer, and numerically, you can choose how much everybody gets. It's our version of something like deficit weighted round robin, DWRR, if you want the magic acronym. And so here you are able to take a pretty large number of your legacy 10-gig channels and run them up on the line side in such a way that you have quite an efficient system. And you can choose: some people will choose to do no oversubscription at all, so they will do nothing more than simply add it all up, and there's never going to be a conflict when you do that. Of course, you will probably find your line utilization never goes much above 50%.
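As an editorial illustration of the buffering point (my toy model, not XKL's implementation): at the same average load and the same finite buffer, a smooth frame stream loses essentially nothing, while a heavy-tailed, "computer-like" bursty stream loses a noticeable fraction.

```python
import random

def drop_fraction(batch_size, n_batches=20_000, gap=3.75, buf=32, seed=7):
    """Frames arrive in batches every `gap` service times and drain at
    one frame per service time. Returns the fraction of frames dropped
    because the finite buffer (`buf` frames) was full on arrival."""
    rng = random.Random(seed)
    queue = 0.0
    arrived = dropped = 0
    for _ in range(n_batches):
        queue = max(0.0, queue - gap)   # frames drained since last batch
        batch = batch_size(rng)
        arrived += batch
        lost = max(0, batch - int(buf - queue))   # no room: frames lost
        dropped += lost
        queue += batch - lost
    return dropped / arrived

# Both sources offer roughly the same ~80% average load (about 3 frames
# per 3.75 service times), but the heavy-tailed source clumps frames
# into occasional bursts far larger than the 32-frame buffer.
smooth = drop_fraction(lambda r: 3)                            # fixed batches
bursty = drop_fraction(lambda r: round(r.paretovariate(1.5)))  # heavy-tailed
print(smooth, bursty)  # smooth drops nothing; bursty drops a real fraction
```

The point of the sketch is exactly Len's: no finite buffer saves you from heavy-tailed traffic, whereas well-behaved traffic at the same load is harmless.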
So you have a choice here. You can take a little bit of a risk and do, say, a two-to-one theoretical oversubscription. That'll get you higher line use, but not a lot of loss. And some people are actually quite comfortable with four to one; if you know your customers and what's going on, that's a pretty comfortable oversubscription ratio that people have used. You can find out empirically just by doing it and seeing what actually happens; we count everything under the sun. Let's say a few other things about it. It'll give you an alternative to simple A-to-B protection switching, in that you can choose to send some groups of traffic down each of your two 100-gig line sides. And if one of them fails, you fall back to the other side, using the same ability to statistically oversubscribe your line for your protection switching. So yes, you'll be oversubscribed, and you may start losing more things in bursts than you would lose in your production environment, but many customers perceive no difference at all. And here you have what amounts to very efficient protection switching. We have some other tricks involving how the multiplexing is done in somewhat more complicated networks; I won't go right into those unless somebody is really curious, but that's the overview. Now something else: when you look at the slides, you will notice, I think the press release explicitly says, 240 gigabits. That's a safe oversubscription ratio. Guess what? I think people can count, and you'll notice there are twice that many ports on the front. Well, watch this space; there's more coming. As I think everyone also knows, line-side signal processing is improving, and so we will give you more bits per second on the line side in time, but that's not today. What you see today is what we announced. But that's the future. I believe in the future, and we're aiming to be there.

You certainly are. And in layman's terms, if you will, what are the benefits for users of being able to scale that bandwidth on demand?
Well, it's primarily economic. You have the ability to come much closer to using your line-side capacity than you could otherwise, without losing a significant amount of traffic and without an enormous amount of buffer. We do the statistical decision-making at the full line rate; there's no averaging or anything of that kind. Well, obviously, there's a storage delay everywhere, so it's not that there's zero delay, but the actual instantaneous switching decision happens effectively at 100-gigabit speed. So it's making a decision every few nanoseconds, quite literally. But that's not where you get the delay; you get the delay because you do have to store things, just because that's how these systems have to work. But it's not tremendous. Some switches actually have a significant playout and decision delay; this one doesn't. It's not really a layer-2 switch. We do deal with frames rather than very small units; still, that's why I call it layer one and a quarter. It's not a switch in the traditional sense, but it is also not just a low-level bit multiplexer. It's not a muxponder: there is no one-to-one mapping of every input time slot onto every output time slot, no fixed relationship. We do permit any one channel to run at its full rate, and of course, nanosecond by nanosecond, that changes; who's running changes very rapidly. So the real benefit is economic. It gives you the ability to add things to your network where you're both saving money and delivering, for perhaps the next level of customer, some other economic benefits. One of the things you can do with this product, which I don't think we've talked much about, is you can build a robust, physically distributed multiplexer for a router port. So you can take your typical 100-gig router port, which, if you're running ordinary traffic mixes, rarely gets above 50% utilization in a couple-of-minute period.
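The economics Len describes reduce to simple arithmetic. A back-of-the-envelope sketch (my numbers, not XKL's; average client loads are assumed):

```python
def oversubscription(clients, avg_load, client_gbps=10, line_gbps=100):
    """Nominal oversubscription ratio and expected line utilization for
    `clients` 10-gig ports, each averaging `avg_load` of its capacity,
    sharing one 100-gig line side."""
    ratio = clients * client_gbps / line_gbps
    utilization = min(clients * client_gbps * avg_load / line_gbps, 1.0)
    return ratio, utilization

# No oversubscription: ten 10-gigs at 20% average load leave the line
# sitting at 20%. At 4:1, the same kind of clients fill it to 80%.
print(oversubscription(10, 0.20))   # 1:1 ratio, 0.2 line utilization
print(oversubscription(40, 0.20))   # 4:1 ratio, 0.8 line utilization
```

The statistical multiplexer's job is to make the 4:1 case behave almost as cleanly as the 1:1 case during short-term bursts.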
With this type of product, you could actually make that go up without a tremendous level of dissatisfaction on anybody's part. It's quite interesting to watch, when you get real traffic, how little loss you get even at an alleged 4-to-1 oversubscription ratio. So if you take 40 typical uplink 10-gigs and run them into a single 100-gig router port, you are very rarely losing anything. That requires a pretty good distributed front end, and that's what we're giving you. We could potentially be used as a distributed router-port front end. Again, a considerable economic improvement.

Could you tell me what innovative technologies, such as statistical multiplexing, were used to engineer e-velocity? And has this been done before, or could we consider this an industry first?

As far as we know, we're the first people to do this on Ethernet frames, but the idea is very old. People built character multiplexers that worked like this in the 1960s, as some of us remember. So the concept is not at all a new one; this has been around since the beginning of the computer industry. But making it go at these line speeds, as far as I know, we're the only ones who've done it. Pretty much everyone else has done deterministic muxponders, or something like a layer-2 switch. The layer-2 switches also tend to burn more power than we do; even with rather expensive integrated-circuit technology, they still burn quite a bit of power, because they solve a harder problem. We've picked a subset of the problems and solved them in a way that lets you get much of the benefit without anything near all of the complexity and cost.

I remember back in the day when I first started chatting with you, Len, and I asked, what does XKL do? You said, we solve problems, and certainly you're still doing that today in an amazing way. So this new announcement, what does it mean now for XKL? Are there any other technologies you're toying with internally that you can tell us about?
You kind of hinted; I was wondering if there's a story there.

There's actually a number of stories, but this is the product that we're starting to sell today. So there's going to be a future. Trust me on this one; we're quite sure that there is more coming. There will be more in the same product frame. This is primarily engineered on fairly advanced FPGA technology. There are some fixed-function ICs in there, but the FPGAs let us modify and improve what the actual hardware system does over time. Not to be too unkind to various parts of the optical component industry, but we build systems; we have to have systems that our customers use and rely upon. And so when we go and do long-term tests, we have to find parts that stay up. Internally, our goal is basically to have a system that you boot up and it runs for its entire economic lifetime. That's probably not attainable; most people aren't able to maintain the environment, power, and cooling for five or more years, but we would like to be able to try. And if that's your goal, when you look at several-month-long tests of various of the optical components, I regret to tell you that you will be disappointed. So there will be advances. We're slowly qualifying more parts, because we do sell systems, and a long-running system is something that we must provide. There will be more. There will be more in different client interfaces; that means faster, really, as well as different optical formats, whether you're looking at things locally or you need to go some kilometers from the client side, as well as, one hopes, lower-power but also higher-speed line-side interfaces. We're able to run things faster than what you see in these products. But these are products that we have qualified: all of the client and line-side components, our own switches, the power, the cooling, everything it takes to build a system. Trust me, it's harder than it looks. So there's going to be more.
But you just have to believe that there's going to be a future. I believe in the future; a number of you have seen that over a rather long period of time. Still, here we are. We can do this, I believe, interesting statistical multiplexing for sort of soft, ring-like failovers and things of that sort. It works remarkably well; once you give it a try, you get some confidence in it.

I always love chatting with you because you're so forward-thinking. It's like your brain is in a time tunnel, five years in advance, and you're reporting back to us, so I'm always in awe of speaking with you. I want to open up our lines here for any questions we might have from our audience. Here's one from Stephen Hardy, and I will go ahead and read it out to you. Stephen Hardy asks: Len, could you provide more details on the differences between the DQM-100 and the DQT-100? So what's the difference between the DQM-100 and the DQT-100, and which will prove more popular, in your opinion?

Good question, Stephen. I couldn't tell you which is going to be more popular. Both. We did them both because we think that you'll have some different applications. Some of the stuff that you get with all these products is, initially, let's imagine you already have a 10-gig-based system and you just need to get a little more bandwidth out of it. Alien wave injection is usually the solution you get there. But depending on how you want to arrange the optical plumbing, and we can do it either way, you have choices with both multiplexers. We make the general assumption that these are not simple point-to-point lines, so that you're part of a bigger system that's optically multiplexed by whatever technology is employed. So we have our own fully elaborated EDFA, hybrid EDFA-Raman system that will give you very long links if necessary, and pretty good individual spans.
A number of you may recall that we have our own 2,000-kilometer-plus actual test bed, made out of not very good fiber, and that was a deliberate choice to validate these things. In particular, one of the things that took a little while for the component vendors to get straight was the ability to mix 100-gig coherent and a lot of randomly assorted 10-gig in a 96-channel system. So having done all these things, and realizing that we're not usually at a greenfield installation, both of our system configurations will let you deal with alien waves in different ways with the terminal multiplexing. So if what's really going on is you're building something that amounts to a series of point-to-point links, it's probably easier with the DQT. I remember exactly how we did that with the optical plumbing. Part of my problem, by the way, is that when you mention a part number, I remember the optical plumbing in them and I see possibilities.

That's your visionary mind, you know; you look at it, you see the future.

The curse of knowing how things work. Anyway, both systems have good application depending on how your systems are now configured: whether you're just going to add a few hundred gigs by multiplexing some 10s onto a nearly fully subscribed system, or whether you're building something that is fundamentally going to be 100-gig, nearly greenfield, you'll find it easier with one version or the other.

And Stephen, I'm just going to unmute you here for a second. Did that answer your question?

I think so. Just to make sure I've got my acronyms straight: if we look at the slide that describes the two of them, the major difference between them is that one is compatible with DBC and the other one is compatible with DMD. I think I know what those mean, but maybe it would be helpful to spell those two acronyms out.

That's essentially a different physical deployment of your front-end multiplexer at the site.
I think, as with many people, we're going to the 24-fiber MPO connectors for all the high-count interfaces, and the way our front-end multiplexers ordinarily work is you get 8-to-12-channel blocks on a single MPO. And depending exactly on what you're going to plug us into, you might find some of the arrangements of those easier; the industry hasn't completely agreed on just how you number and distribute things. You might find it easier with one than the other, that's all. This truly is more a plumbing difference than anything else.

So while I'm unmuted, if I could pepper you with another question: are these platforms currently in customer trials, and if so, can you talk about whom? I would imagine you'd say you can't name them, so with that caveat in mind, could you perhaps describe the type of customers who are trialing this platform right now?

They're what you'd expect. We have internet exchange points. We have sort of medium-scale ISPs. We have some large universities; they aren't exactly enterprise customers, but they sometimes resemble enterprise customers in what they do. So you have people who have a variety of interests. Interestingly enough, the soft protection that I alluded to, the ability to not have a simple optical switchover, but rather use the statistical multiplexing to give you your protection when you get a line-side link failure, has drawn a whole lot more interest than we suspected at first. As far as I'm aware, we don't have anyone trying the distributed-core multiplexing trick, but I think that will change in time.

Do you also see people potentially using the statistical multiplexing capability to do things like energy management, you know, moving traffic together onto a smaller number of channels, say overnight, or that sort of thing?

I don't know how much that would change their power, but it could possibly let them ride through some maintenance intervals on some routers.
That would actually be a pretty slick way to do that. Can you explain how the deficit weighted round-robin algorithm works?

Good question. Okay, let me take a couple of passes at it. First, conceptually, there are four major groups. So the first thing you get to do is put sticky notes on your ports saying which of the major groups each of the client-side interfaces belongs to. Then you get to make some decisions about how the traffic is allocated among those groups. Within a group, it is purely round-robin if there's a conflict, if there's a sustained conflict. But after you've put your sticky notes on them, as it were, or colored them, however you want to visualize your labeling, you get to think about how the client side will deal with the traffic. There's both a priority order and a numeric allocation; it's not just a priority, because you get to set things numerically. That's not quite how the user interface is formulated, but let's imagine what you wanted to say was, you know, group zero is your real bread-and-butter traffic, and you wanted to let it use up to 80% of your line-side resources. Well, you can do that. And then let's imagine group one is something that doesn't have a lot of traffic but really does need, you know, reasonable real-time access. What you could say is that group one, although it's only allowed 5% of your total bandwidth when saturated, comes first; it'll get the very best real-time performance up to its 5%. I'm giving you an example that might seem a little complicated. But then for the other groups, again, you get to say the percentage that they use and what order they get satisfied in. Oftentimes people will have one of the groups, well, it's often group number three, where you say, okay, you just get what's left over.
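The labeling-and-shares setup Len walks through might be modeled like this. A sketch only: the port names, group assignments, and percentages are hypothetical, and, as he notes, the real user interface is formulated differently. Raw share requests need not sum to 100; there's only 100% of the line, so they get scaled.

```python
def normalize_shares(shares):
    """Scale raw per-group bandwidth requests so they sum to 100%."""
    total = sum(shares.values())
    return {g: 100.0 * s / total for g, s in shares.items()}

# Hypothetical labeling: assign each client port (the "sticky note")
# to one of four groups, then give each group a share and a priority.
port_group = {"client1": 0, "client2": 0, "client3": 1, "client4": 3}
raw_shares = {0: 80, 1: 5, 2: 25, 3: 10}   # sums to 120, gets scaled
priority = [1, 0, 2, 3]                    # group 1 is served first
shares = normalize_shares(raw_shares)
# Group 1 keeps a small slice (~4%), but sits first in the priority
# order, so its traffic sees the best real-time service up to its cap.
```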
And the amazing thing is how often that works just fine. You don't need to make everything add up to exactly 100%; in terms of the group numbers, one way or another there's only 100% available, so they'll be scaled. So the steps are: decide which group everybody's in, then decide how the groups will be treated when you're making traffic decisions among them. It works out pretty well, actually. The reason it's called deficit weighted round robin is that when you have long packets, you can't break anything; you can't break your Ethernet frames, at least not in a frame multiplexer. So we simply remember the tail. We will let the traffic finish, and if you have a long frame, you'll be a little oversubscribed. So next time, when the cycle refreshes and everybody gets their fresh allocation, you'll be dinged a little bit for having gotten a little more last time, and you get a little less the next time. A very smooth traffic flow results from this. I don't know whether I've exactly answered your question, but I tried. I know too much about it; that's the problem. So if there's anything specific, go ahead, ask again.

We always love when the folks answering the questions know so much about them, so that's a good problem to have. And if that answered your question, let us know.

I think it's actually easier than it might seem. You can read about it; there's regrettably a lot of text describing how mechanistic it is. The steps are what I described: decide who goes together, how much they get, and in what order.

Very good, we're clear on that question. Any other questions out there? My last question for you before we go, Len, is: what optical networking trends are you expecting to see in the next year or so? So again, getting out that crystal ball that I think you have in your back pocket all the time, what optical networking trends do you predict for the next year or so?

Well, it won't surprise anyone.
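Before moving on, the deficit mechanism just described can be sketched in a few lines. This is my simplification, not XKL's code; the frame sizes and quanta are made up. Each group gets a fresh byte allowance every cycle, an in-flight frame is allowed to finish even if it overshoots, and the overshoot is "dinged" against the next cycle.

```python
from collections import deque

def dwrr(queues, quanta, cycles):
    """Deficit-weighted round robin over per-group frame queues.
    queues: one deque of frame lengths (bytes) per group; quanta: bytes
    credited to each group per cycle. Returns the transmit order as
    (group, frame_length) pairs."""
    deficit = [0] * len(queues)
    sent = []
    for _ in range(cycles):
        for g, q in enumerate(queues):
            deficit[g] += quanta[g]        # fresh allocation this cycle
            while q and deficit[g] > 0:    # a frame always finishes,
                frame = q.popleft()        # even if it overshoots...
                deficit[g] -= frame        # ...and the overshoot is
                sent.append((g, frame))    # dinged on the next cycle
            if not q:
                deficit[g] = 0             # idle groups don't bank credit
    return sent

# Group 0 gets an 80% share, group 1 gets 20% (hypothetical quanta).
order = dwrr([deque([1500] * 6), deque([1500] * 6)],
             quanta=[4000, 1000], cycles=10)
# Group 0's backlog clears quickly; group 1 drains steadily behind it.
```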
We're trying to make it all go faster, take less power, and cost a bit less. The last two have been the real crunch points. We are able to make things go faster; yeah, it's getting harder and harder, and what that means is the power isn't going down as fast as I certainly would like, and the cost is not falling as rapidly as you might hope. We do see some things coming up in the next year that might let us build an economical metro-scale system, you know, 20-to-50-kilometer spans, with a good number of power-efficient hundred-gigs. But unless the folks who are building these things are so much better than their predecessors, we won't get components that we deem fully qualified until the end of 2017, which means our products have to come out a bit after that. You know, a tip of the hat to our component industry for trying these things, but as a system builder, it's a bit frustrating that many of them are almost working. So close.

Yeah, well, with IoT and data consumption on the rise, thinkers like you are so needed and required for us to really move and push forward. So thank you, Len, for everything you do, particularly your insight and leadership over at XKL. Thank you to our journalists and analysts; we adore and appreciate that you took this time to have this intimate chat with Mr. Len Bosack here with us. Again, if you have any other questions, feel free to just send them over: jsa underscore XKL at jamiescato.com, or OPR at jamiescato.com if that makes it easier; it gets to us both ways. We look forward to answering them and getting any further insights over to you. Thank you everyone for joining us, and we look forward to next time. Thank you all.