This is Stu Miniman with Wikibon.org and SiliconANGLE TV's live continuing coverage of the Open Networking Summit, here in the heart of Silicon Valley, talking about network transformation, and I'm excited to talk with a startup for this segment. Joining me is Emilio Billi, founder and chairman of A3Cube, which came out of stealth just a week ago. Emilio, thank you for joining us for this segment.

Thank you, and thank you for the introduction.

Great. So can you tell us a little bit about what A3Cube is and how you fit into the networking ecosystem?

A3Cube is a startup. The company was born in May 2012, so it is very new, but it is the result of more than five years of work by me and my team in stealth mode. And what have we done? A3Cube developed a new shared-address-space-oriented interconnect that aims to extend PCI Express from inside the box out across the data center, to create a real PCI Express fabric. And why did we do that? Because, as you know, there are already a lot of very good interconnects: Ethernet is everywhere, especially in the data center, and there are InfiniBand and PCI Express. The idea came to me, and I invested everything in it, because at a certain point I wanted to build a parallel scale-out storage architecture. That was the time when people were starting to talk about SSDs, and I wanted to use SSDs and extract as much performance from them as possible. I also realized that the new emerging analytics software, like Hadoop, uses a parallel file system to aggregate the distributed disks across the nodes. And there was one thing I realized immediately: HDFS, for example, uses TCP/IP sockets for its communication. Solid-state drives are really good because they lower the latency of accessing the disk, especially locally. But what happens when you connect multiple of these boxes, with the SSDs inside, in a parallel way? The network becomes the bottleneck, because the latency of the TCP/IP stack over the network becomes higher than the latency of the disk itself. So something there was not right.

Emilio, if I could just jump in there. We've been talking to a lot of people here about open networking, all of the open source projects and everything, and I think the general assumption is that everything goes on Ethernet; the old saying is it's Ethernet or EtherNOT. So what was the core deficiency that Ethernet couldn't solve, and why did you need to look at this alternative?

That was the key to our development, because we realized that everything is driven by TCP/IP; that is the key to Ethernet. But TCP/IP, as we know, has too much latency. Ethernet is a standard, so we have to think about the standard, but the latency of this standard is too high. So the idea was: okay, stop thinking about Ethernet. Change the transport medium, change the way the servers communicate with each other, but don't change the TCP/IP socket. That was our answer to Ethernet: join the old world of Ethernet, which is 40 years old, with a new approach coming from PCI Express, the new bus, without changing the programming model, but exploiting the power of PCI Express and memory mapping in a completely different way.

So performance is obviously a key driver here. What kind of performance do you get compared to a traditional network?

Let me explain a little bit of the mechanism so you understand how we achieve that kind of performance. PCI Express is a memory-mapped bus.
Most people don't think about that; they think memory mapping is a thing of the past. But really, inside our computers we have a memory-mapped bus that communicates very efficiently inside the box. So the idea was to take these mechanisms and use them outside the box, in a way that takes the operating system completely out of the communication path, and to use this mechanism to build a virtual TCP/IP that lets servers communicate with each other without going through the operating system kernel. The result: standard TCP/IP goes through the kernel, with very high overhead, so for a 64-byte message you get, in the best case, around 20 microseconds. With our approach, on the same standard benchmark, you can achieve 2.5 microseconds. So, more or less, ten times less latency without changing the programming model.

So let me make sure I understand this. Much better latency, much higher performance. Do I have to change my application? Does this limit the application set?

You don't have to change the application, at least if your applications are written for TCP/IP sockets.

Yeah, which just about every application is. That's great, so it's transparent to the upper-layer protocols. I guess one of the other things: I think this has been tried before; this is not the first time we've tried to extend the bus. And there's such a big ecosystem that has worked on Ethernet and other protocols for so long. How do you replicate things like security and reliability?

Okay, that was another step. In extending PCI Express outside the box, we realized that we needed something much stronger than a standard network, also because in-memory operations are more critical than standard message passing. So the idea was to take PCI Express and add features that come from avionics and military applications: for example, end-to-end flow control, traffic congestion management, and flow control at the link level. That way we can manage all the traffic inside the network very deeply, from two points of view. The first is the hardware level: a layer in hardware provides the basic flow control of the network. On top of that, because of the shared address space, we can access in software all the registers of the devices connected together, and we can change them in real time, so we can act like a software-defined network and change, for example, the network connections, or change the routing in case of congestion or in case of a link failure. And this raises the reliability of the PCI Express fabric to a level that can be considered carrier grade.
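To make the memory-mapped mechanism described above a bit more concrete, here is a minimal sketch; it is not A3Cube's actual software, only the generic Linux technique the idea rests on. A PCI device's BAR exposed through sysfs can be mapped into a user-space process, after which reading a status register or ringing a doorbell is a plain load or store, with no system call or kernel network stack in the data path. The device path and register offsets below are placeholders.

```c
/* Minimal sketch: user-space, memory-mapped access to a PCIe BAR on Linux.
 * Not A3Cube's driver; path and offsets are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_PATH "/sys/bus/pci/devices/0000:03:00.0/resource0" /* placeholder */
#define BAR_SIZE 4096          /* map a single page of the BAR */
#define DOORBELL_OFFSET 0x40   /* hypothetical register offset */

int main(void)
{
    int fd = open(BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* MAP_SHARED: stores land in the device's memory window, not a private copy. */
    void *p = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t *bar = (volatile uint32_t *)p;

    uint32_t status = bar[0];       /* register read: a plain load, no syscall  */
    bar[DOORBELL_OFFSET / 4] = 1;   /* register write: a plain store, no syscall */
    printf("status register: 0x%08x\n", status);

    munmap(p, BAR_SIZE);
    close(fd);
    return 0;
}
```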
Can you talk to us a little bit about how scalable this environment is with a bus architecture? I have to think there are some severe limitations on distance.

Of course, with copper the hardware is a limit, and we are now working on the optical interface. Imagine what you can reach without losing signal with standard PCI Express Gen2 or Gen3 SerDes: we can reach up to 5 meters on copper cable, and maybe 100 meters on optical. So we adopted a topology that comes from supercomputing to work around the limited cable length, and that topology is the 3D torus. Each card contains a micro switch inside, so you can connect the servers together with short cables, one to the next, like in a Cray supercomputer. So really, to cable one rack, or two or three racks, we don't need very long cables; the longest cable is only needed to go from one rack to the next, and that's it. We don't need a 100-meter cable, and this lets us scale with this topology even though we don't have support for longer cables yet: you can cable 10 or 14 racks using at most a 5-meter cable. Of course, by June, with optical, we expect to be able to scale up to 100 meters and so on. The maximum scalability of the network is driven by the address space, and we use a 16-bit address space, which means 65,535 nodes. That is the maximum scalability of the fabric. Of course that is a theoretical number, because you have to take many other things into consideration, but we can say that we can scale without any problem to at least 10,000 nodes or more.

That's great scalability. I guess the next question is, does this replace all of your networks, or do I still need a different network for some of my other traffic?

The idea behind this architecture was to create a very fast analytics and parallel storage architecture, so the idea was to act as a data plane. Imagine, for example, that you want to build a Hadoop cluster. A Hadoop cluster requires that the storage and the CPUs are merged together, and then you want to connect a thousand of these nodes. You can do that over your own data center network, or, using RONNIEE, you can create a connection between these nodes and build a very fast data plane between them: to aggregate the file system in real time with extremely low latency, if you want to extract SSD performance and so on, but also for the communication between the CPUs, like in a supercomputer. Then you can use your data center network, completely unmodified, to access this Hadoop cluster without changing your IP policies, without changing your firewall and so on. Because when you have two different networks, for example Ethernet and PCI Express, even if the programming model is compatible, the plugs are not compatible with each other. So how do you match them? The idea was this separation: a very high-speed network behind the scenes, and the standard data center network for whatever you have to communicate in terms of data access, reading, and writing.

So you've mentioned Hadoop clusters a couple of times. Is that kind of the first use case for your environment?

Yes, that is what we are focused on, because we have two main applications where we are really focused and where we can really extract the performance. The first one is an analytics machine, like Hadoop or Storm, and the second one is a very highly parallel NAS. So instead of putting the analytics inside, you put the computation outside, but you access a global file system concurrently, like in Hadoop, so it's a very closely related application. The reason we focus on that, and the reason we use a supercomputer-style network, is because we are convinced that in the coming years high-performance computing will become high-performance data; our lives will be driven by data.

Yeah, Wikibon CTO David Floyer has talked a lot about how the HPC model is really impacting lots of architectures, especially when you look at things like Hadoop, so we definitely see that proliferating.
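As an illustration of the two scaling points in the scalability answer above, the 16-bit address space and the 3D torus, here is a minimal sketch. The dimensions and node numbering are illustrative assumptions, not A3Cube's actual scheme; it simply shows how a 16-bit node ID maps onto torus coordinates and how each node needs only six short links to its nearest neighbours, with wrap-around on every axis.

```c
/* Minimal sketch of 16-bit node IDs laid out on a 3D torus.
 * Dimensions are illustrative: 32 x 32 x 64 = 65,536, the full 16-bit ID space. */
#include <stdint.h>
#include <stdio.h>

#define DX 32
#define DY 32
#define DZ 64

/* Map a 16-bit node ID to torus coordinates. */
static void id_to_xyz(uint16_t id, int *x, int *y, int *z)
{
    *x = id % DX;
    *y = (id / DX) % DY;
    *z = id / (DX * DY);
}

/* Map coordinates back to a node ID, wrapping around each axis. */
static uint16_t xyz_to_id(int x, int y, int z)
{
    x = (x % DX + DX) % DX;
    y = (y % DY + DY) % DY;
    z = (z % DZ + DZ) % DZ;
    return (uint16_t)(x + DX * (y + DY * z));
}

int main(void)
{
    uint16_t id = 12345;   /* arbitrary example node */
    int x, y, z;
    id_to_xyz(id, &x, &y, &z);
    printf("node %u sits at (%d, %d, %d)\n", id, x, y, z);

    /* The six neighbours, each reachable over one short cable. */
    printf("+x: %u  -x: %u\n", xyz_to_id(x + 1, y, z), xyz_to_id(x - 1, y, z));
    printf("+y: %u  -y: %u\n", xyz_to_id(x, y + 1, z), xyz_to_id(x, y - 1, z));
    printf("+z: %u  -z: %u\n", xyz_to_id(x, y, z + 1), xyz_to_id(x, y, z - 1));
    return 0;
}
```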
So what's your ideal customer? What market segment are you targeting? Is this kind of a large enterprise, a service provider? Where are you looking to find your first customers?

As a company we would like to stay vertical, so we target OEMs and system integrators that can deliver our solution. Of course, there are some markets whose needs are a really perfect fit for our approach: oil and gas applications, for example, genomics and proteomics, and in a certain way also traditional high-performance computing, for example fluid dynamics and so on.

It's interesting, because I've actually had some conversations here at this show that some of the SDN solutions are very much targeting oil and gas; it really is a low-latency application model. How much of your team is focused on the software piece? How does software fit into your overall solution?

Our team is divided between hardware development, in terms of designing everything inside our cards and so on, and software development, and the two are complementary. We are split almost exactly in half, and the software work is really focused on creating a very robust in-memory library that enables two things: the first is to support existing applications, and the second is to find new ways to use this network for more and more powerful applications in the future.

So Emilio, we're getting the hook. I really appreciate you coming on. Emilio Billi is the founder, chairman, and chief development officer of A3Cube, a very interesting technology startup; definitely go check them out. Thank you for finding the time, and we will be right back with our wrap-up of Open Networking Summit 2014.