Hello everybody, my name is Viktor Sovietov, I'm co-founder and CEO of Cloudozer, the company behind the Erlang on Xen project. Today we're going to discuss some thoughts we developed during our work on Erlang on Xen. We have formulated them in the form of commandments for the next generation of cloud software. We would also like to share a couple of ideas which could be useful for solving real-world problems. Before we start with the commandments, I have to describe our main project very briefly. Basically, Erlang on Xen is a new, built-from-scratch virtual machine that is able to execute Erlang code without an operating system underneath. Development started in 2009, but the most active development phase began only a year ago. This VM doesn't share a single line of code with Ericsson's VM; it's a completely new thing built to run in Xen paravirtualized domains. The VM is optimized for extremely low startup latency: currently we start executing Erlang code just 4-5 milliseconds after launch. We are also able to spawn new VMs quite fast, at least 10 new VMs per second on a single physical node. This is a very important property of the VM, and it's why we had to build Erlang on Xen from scratch. We will consider the importance of startup latency a bit later. Being just another runtime, Erlang on Xen is highly compatible with standard Erlang/OTP: we can run the vast majority of Erlang code without any modifications. However, because there is no operating system underneath, Erlang on Xen is incompatible with some parts of the standard library, which need to be properly ported to our architecture. The code of our runtime hasn't been released as open source yet, but our build service is available for free, and instances built with that service can be used for any purpose without limitations, so everybody is welcome to come and try it. Right now the VM is fairly functional.
The project site and all our demos have been generated by our public build service and are self-hosted now. Recently we have started to build commercial applications using our technology as a cornerstone, and this experience forced us to define seven architectural principles, which we would like to share with you now. Well, Cloudozer is a typical IT company started in a shed. It happens that one wall of our shed was clean, so we played the role of the pigs in Orwell's Animal Farm and used this wall to write down our own commandments, which you can see on this slide. Let's discuss them one by one. First: do not assume the presence of an operating system underneath. Well, we probably have to admit to ourselves that we should reconsider the operating system in the VMs which run in clouds. Having an operating system in every instance was important while physical infrastructure was migrating into clouds; it was an organic part of that process. But do we really need an operating system in every instance now, or should we treat it as a limitation? If we look at Linux or Windows, we can see that they were built to manage physical server hardware and share its resources properly between the applications running on a single physical host. We believe this model doesn't work anymore, at least for cloud computing. Now we tend to use many VMs for a single cloud application, and that requires a completely different kind of resource management than the one operating systems were built to provide. Working on Erlang on Xen forced us to understand that the real server for our applications is the cloud, and the cloud stack is our operating system. It isn't a new idea; there have been a lot of efforts to simplify the OS layer, including projects presented at this summit, I mean the brilliant Mirage project. We follow the same path, only in a slightly different manner: we interpret the runtime of a high-level programming language as the operating system.
It's only natural for Erlang, since the language itself gives us proper pre-emptive multitasking, memory management, etc. So we can simply forget that we need an operating system to run our cloud instances with Erlang inside. Second: software must be oblivious to the boundaries of physical nodes. If we treat the cloud as our computer, we should enjoy the elasticity of resources which the cloud can provide. If everything is virtual, there is no reason to keep physical nodes in mind, and no reason to limit ourselves in scalability. Relying on elasticity is a real paradigm shift, because it can change our vision of how we have to engineer distributed applications and how we have to provision our services. Now we really can scale our applications up and down automatically, even with current cloud stacks, and that paves the road to real horizontal scalability. We only need to stop thinking that parts of our applications should be executed on physical servers, which by their nature cannot scale at runtime. We can even introduce ephemeral services, which do not exist and do not consume any resources while they have no workload to execute. Unfortunately, current cloud stacks aren't ready to provide such functionality out of the box, but they have good potential for it. The only thing that really has to change is the API that cloud stacks expose: applications should be able to start and stop nodes from within, and that brings us to the next commandment. Third: all services must share the same auto-scalable fabric. For everybody's sake, we have to unify management and user tasks, even if it sounds a bit socialistic. There is a chance to simplify cloud management with this approach, and the potential advantages are countless. At the very least, we will be able to provide an autonomous, cloud-like ecosystem for applications, almost isolated from the external cloud stack.
Such an approach can potentially reduce deployment and maintenance costs significantly and improve resource scheduling as well. Here I have to remind you of the importance of startup latency and of the idea of ephemeral services I mentioned a minute ago. Now we have an opportunity to use far fewer pre-started services than we must have today. Let's look at an example. Basically, an ephemeral service is a service that doesn't exist while it has neither visitors nor any other work to be done. The demo shown on this slide runs a brand new Xen VM for every incoming HTTP request. There is no practical reason to do so in the real world, of course; it's just to show that service provisioning can be so fast that the user would hardly notice it. Such fast provisioning enables us to have no pre-started services at all, without sacrificing the usability of the services. And it means that we can utilize our hardware much better indeed. So, you can come to this URL and try it; it's always running on our side. Next commandment: run computations near the data they process. We have considered true elasticity and fast provisioning so far, and now we have to add the proper isolation of instances, which Xen gives us for free. With proper isolation, a big reform of input/output can be discussed. Input/output is quite a problem in data centers nowadays: it generates a lot of network traffic, and switching to diskless nodes in data centers will only make the situation worse. With our architecture, we can propose a solution, I think. The idea is simple: we could treat cloud storage as a part of the computing cloud, run our lightweight instances as close to the physical disks as we can, and use them to execute those parts of queries which could benefit from affinity to the physical disk, reducing upstream network traffic. In other words, if we're going to be elastic, we have to be elastic everywhere, even in I/O. Well, next commandment: child nodes get configuration from the parent only.
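To make the ephemeral-service idea concrete, here is a toy Erlang sketch of the "one instance per request" pattern the demo illustrates. In the real demo the per-request worker is a whole Erlang on Xen domain booted in a few milliseconds; here an ordinary Erlang process stands in for it, and the module and function names are illustrative, not part of the actual demo.

```erlang
%% Toy model of the "one VM per HTTP request" demo: every request is
%% served by a freshly spawned, short-lived worker that disappears as
%% soon as the reply is sent. A plain process stands in for a fresh
%% Erlang on Xen domain.
-module(ephemeral).
-export([handle_request/1]).

%% Spawn a fresh "instance", wait for its reply, and let it die.
handle_request(Path) ->
    Caller = self(),
    Ref = make_ref(),
    spawn(fun() ->
                  %% The ephemeral worker exists only for this request.
                  Caller ! {Ref, respond(Path)}
          end),
    receive
        {Ref, Reply} -> Reply
    after 5000 ->
        {error, timeout}
    end.

respond(Path) ->
    {ok, "served " ++ Path}.
```

Because nothing is pre-started, the service consumes no resources between requests; the cost is the spawn latency, which is exactly why the 4-5 millisecond startup time matters.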
Another painful problem for today's cloud software is configuration management. Many are pessimistic and call it Chef and Puppet hell, since managing big applications can be very difficult even with sophisticated tools. Surprisingly, Erlang can show a solution in this case. The standard Erlang/OTP library introduces a separation of processes into supervisors and workers; you can easily find the details on any Erlang resource. Simply speaking, Erlang introduces a hierarchy of managers. It allows Erlang programmers to manage millions of concurrent processes in a distributed environment without any significant effort. We believe that extending this approach to virtual machines will help us distribute per-node configurations in large distributed applications, at least semi-automatically. And we have already started to think about fully automatic provisioning of the configuration data. Next: avoid administration at all costs. Well, if we can distribute configurations between nodes without needing a human administrator, we can naturally consider running other administrative tasks automatically. There are a lot of disadvantages to involving human beings as administrators, even leaving aside the beer drinking. Humans are not fast enough, and expanding a team of administrators can be quite complicated because of human factors. We believe that with proper help from the cloud stack, many administrative tasks can be automated and executed inside the application. I mentioned that we could have supervisor nodes: virtual machines fully dedicated to performing management and configuration tasks. They are the natural, right places to execute administrative logic. Well, the last and probably most arguable commandment: SMP is an abomination of cloud computing. It's an arguable commandment in a real world which is trying to be as many-core as it can right now.
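The supervisor/worker idea above, scaled down from VMs to plain processes, can be sketched in a few lines of Erlang. This is a minimal illustration of "child nodes get configuration from the parent only", not code from Erlang on Xen: the parent is the sole source of configuration and pushes each child its settings, and all names here are made up for the example.

```erlang
%% A parent process spawns one child per configuration entry and hands
%% each child its settings by message. Children never read shared
%% state or fetch configuration on their own.
-module(parent_cfg).
-export([start/1, child/1]).

start(Configs) ->
    Parent = self(),
    %% One child per config entry.
    Pids = [spawn(?MODULE, child, [Parent]) || _ <- Configs],
    %% The parent is the only source of configuration.
    lists:foreach(fun({Pid, Cfg}) -> Pid ! {config, Cfg} end,
                  lists:zip(Pids, Configs)),
    %% Collect acknowledgements in order.
    [receive {ready, Pid, Cfg} -> {Pid, Cfg} end || Pid <- Pids].

child(Parent) ->
    receive
        {config, Cfg} ->
            %% Acknowledge the configuration received from the parent.
            Parent ! {ready, self(), Cfg}
    end.
```

In OTP proper, the same shape is a supervision tree where start arguments carry the per-child configuration; the talk's proposal is to repeat that shape one level up, with supervisor VMs configuring child VMs.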
But when designing new software, we try to abstract away from the physical nature of our hardware and make services as granular as possible. The reason to do so is not to save resources but to protect investments by making software completely ignorant of hardware specifics. In this case, Erlang can also teach us a lot. Using Erlang software in the real world has proven that the message-passing approach to concurrency can work even better than SMP. Also, we found numerous advantages in breaking an application into a set of single-CPU VMs. Such VMs are much easier and faster to migrate because of their size and requirements. I presume it's quite difficult to find a place for a VM configured to use, let's say, 16 cores to be instantly provisioned in a busy data center. Well, that was the last commandment, so let me pass the word to my colleague, Maxim Kharchenko. Hello, everybody. My name is Maxim Kharchenko, I'm the other founder of Cloudozer, and I'm the author of most of the Erlang on Xen code. In addition to Erlang on Xen, Cloudozer has a couple of other Xen-related projects. They are at much earlier stages of development than Erlang on Xen, but we decided to share some preliminary information about them with the community to solicit early feedback. The projects are Dom0 based on Erlang on Xen, and JavaScript in a Xen domain. The first project is about replacing Linux in Dom0 with Erlang on Xen. The idea came to our minds when we were implementing a fast spawning interface for Erlang on Xen. All standard interfaces were not fast enough. We were trying to achieve really fast speeds, so we tried to tap the lowest level of the stack possible, which is libxl. If you have a look at the libxl code, you will notice a lot of similarities with Erlang: libxl introduces events, spawns child processes, even uses pattern matching. On the other hand, libxl makes heavy use of OS primitives: it runs bash scripts, allocates pseudo-terminals, etc.
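The message-passing style the talk advocates over SMP looks like this in plain Erlang: workers share nothing and coordinate purely by messages, which is the same pattern the talk proposes to scale up to fleets of single-CPU VMs. This is a generic illustration of the technique, not Erlang on Xen code; the module and its names are invented for the example.

```erlang
%% Share-nothing parallelism by message passing: split a list into
%% chunks, sum each chunk in its own process, and combine the replies.
%% No locks, no shared memory.
-module(mp_sum).
-export([parallel_sum/2]).

parallel_sum(List, Chunks) ->
    Parent = self(),
    Parts = split(List, Chunks),
    %% Each worker owns its chunk and reports a partial sum by message.
    [spawn(fun() -> Parent ! {part, lists:sum(P)} end) || P <- Parts],
    lists:sum([receive {part, S} -> S end || _ <- Parts]).

%% Split a list into at most N roughly equal parts.
split(List, N) when N =< 1 -> [List];
split(List, N) ->
    Len = max(1, length(List) div N),
    case length(List) =< Len of
        true  -> [List];
        false -> {H, T} = lists:split(Len, List),
                 [H | split(T, N - 1)]
    end.
```

Because a worker touches only the data it was sent, the same code is indifferent to whether the workers live on one core, many cores, or, in the talk's vision, many small VMs.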
You definitely cannot build a fast system with this. It would be only natural to use Erlang on Xen in Dom0: all interfaces would be much cleaner, and it may even run forever, requiring no maintenance. Dom0 based on Erlang on Xen is a step in the direction of the autonomous clouds which were mentioned earlier by Viktor. Of course, we immediately stumble upon the necessity of providing hardware drivers, and the solution for this is to use a separate, privileged driver domain. The current status of the project is that we have the fast spawning interface I mentioned. It is implemented in Erlang and in C, and our most prominent demo, Zerg, uses that interface. The second project is JavaScript in a Xen domain. It's all about executing web scripts in a separate Xen domain. All rendering and interaction with the user and with the network happens in the browser domain, and all scripts are supposed to run in a special script domain. Of course, the performance of JavaScript will suffer in this setup; we estimate that the performance hit for JavaScript will be around 10-20%. But the neat thing is that we are not limited to JavaScript here: we can use any scripting language or even run native code inside the script domain. We essentially can put a whole operating system in the script domain. JavaScript engines are getting better, but there is still a 3.5x performance gap between native code and JavaScript, and this gap will not go away. This is especially painful for mobile devices. The Xen domain for web scripts lets us run native code inside the web browser. The project is similar to Google's Native Client, but we use Xen for better, more uniform protection. JavaScript is a frequent attack vector today, and if a hacker seizes the script domain, it will not help him get to the rest of the system.
The current status of the project is that we have a simple application that uses the SpiderMonkey engine encapsulated in a separate Xen domain, and we are working on replacing this application with a whole web browser. That is all about the two Xen-related projects we have. Your feedback is definitely welcome, and thank you for your attention. And of course you are very welcome to ask questions. Anybody have questions? Comments? Nobody? Ah, yes. Yeah, I have to repeat the question. The question was about the performance of the interaction between a web script in a different VM and the actual browser. So what would your comment be? Yes, this is a very natural question, and I expect that this kind of packaging of standard JavaScript will have at least a 10, probably 15 percent performance penalty. But at the same time this setup will allow us to run native code instead of JavaScript. So you may have a natively implemented Gmail application: no JavaScript, no JIT, everything statically compiled and downloaded as plain x86 object code. And this thing will run probably three times faster than JavaScript. So we will have a performance penalty on one side and a benefit on the other. Does that answer your question? Yeah, so you're saying you're still skeptical, but that's too much for me to repeat. Maybe we'll just have to see and run experiments, and then, of course, it will become a bit clearer. Yeah, we hope that it will be fast enough to make it all worthwhile. I guess one suggestion I would have, to get more community feedback and input, is to start posting some of these ideas on the users list, as well as, where appropriate, on the development list; that's, I guess, how you will be able to get input from a wider community. I guess that's a good idea. Do we have any more... there's another question. Do you maybe just want to come up rather than me repeating this?
The question is: do you have any open source patches yet, and if not, when do you think they will come? We don't have patches to the Xen core, but we have some software which talks directly to libxl, and we're sharing it with our partners who use Erlang on Xen. So it's kind of semi open source, but we're just not ready to release it, because the state of the documentation is a bit sketchy and it just needs a brush-up. Also, I would rather this effort got somehow synchronized with what is going on in the Xen project itself. Right now we're essentially replicating things and making some things obsolete, so it's not really a patch, it's a completely new thing, and we definitely need to work closer with the community to make a valuable contribution. Yeah, maybe we can have a chat about this sometime; we have processes and mechanisms in the Xen community for doing these kinds of things. Any other questions? No, not at this stage. Well, thank you for the talk. I think despite the remoteness this worked very well, and thank you.