Okay, it's 3 p.m., so I think we can start with a short introduction. Hello, my name is Ivan Janowsky, I'm the CEO of the company, and we're here to present the product we've been developing: where the ideas came from, the lessons we learned as we developed it, how it performs, and how you can participate in the project. Right now we're preparing the first release of what we've decided to build.

The project started from a request by one of our customers. They ran a huge web portal, it was hit by a lot of DDoS traffic, and essentially they were interested in mitigating those attacks as early as possible — and that's what we decided to do. So we started by looking at nginx and other open-source web accelerators, and we asked whether modern web accelerators are suitable for this job. The answer was: current accelerators are very good, but there are opportunities to improve them for our needs.

Basically, what we need is a fabric of a web accelerator and a firewall which can process each request as fast as possible and drop malicious traffic as early as it can. It must have a very fast HTTP parser, because we still need to parse HTTP to fight against attacks hidden in HTTP fields. It must have a very fast web cache, to be able to mitigate attacks which the backend host cannot absorb on its own.
Also, we need a very strong network stack, to be able to fight against attacks on the lower layers — attacks with many connections, very slow connections, and so on.

So we started from a hybrid of a web accelerator and a firewall, and eventually we went into kernel space. During this presentation I will go through the problems of user-space web accelerators and explain in detail why we moved to kernel space.

At first the product was just a hybrid of an HTTP accelerator and a firewall, since a DDoS mitigation box is essentially a firewall taking on an accelerator's job. As our development moved forward, we implemented more web-security capabilities: we added filtering rules to protect against web application attacks. We also needed SSL, integrated into the stack, because nowadays almost any product must speak HTTPS, plus data compression and connection management to round out the feature set.

Commercial products of this kind are usually shipped as appliances. You can find a similar mix of functionality in boxes like F5, which are proprietary and costly solutions, and our goal is to provide equal functionality and performance to these proprietary boxes as open source. The same layering shows up in deployments: if you don't get enough performance or protection from nginx or another web accelerator, you might put such an appliance, for example an F5 box, in front of nginx to absorb the attack. These appliances also come bundled with management systems, clustering options, and so on.

So we started by examining what is wrong with today's web-accelerator architecture.
And surprisingly, we found that HTTP servers spend more time servicing malicious requests than legitimate ones. In our tests we used very small static files, and you will see a somewhat different picture with dynamic content and other workloads, but either way it is not acceptable to spend that many resources on malicious requests, because there can be a lot of them.

A further problem of HTTP servers is their very limited module model. When you need to serve content and block malicious requests, you cannot just drop the connection: a module must send a proper error response to the malicious client it has just observed, and any additional logic must be implemented within that very limited module API. So the limited API means additional logic and additional complexity. Preparing and sending error responses to misbehaving clients takes time, and sometimes modules run heavyweight logic just to generate those responses. Meanwhile, the state-machine logic used for parsing HTTP looks like it is fifty years old — it is not fast, and sometimes people even recommend switching off full HTTP parsing for better performance.

One might think about an additional module which generates filtering rules — say, iptables rules — from what the accelerator observes. However, that couples a web accelerator with an external firewall, which is a whole story of its own, and it actually has a lot of integration issues.

And web accelerators themselves were designed to deliver content, in a quite different context. In most cases they deliver very good performance for you — but "most of the cases" means that your clients are innocent, good clients. Under attack, as we just saw, the parsing is not so fast, and the server is simply not a tool designed to defend against DDoS.
Also, all these servers were basically designed around the year 2000, when the challenge was to handle 10,000 connections on a host. Today it is not a problem to generate 10,000 or 100,000 connections for a DDoS attack.

And DDoS is always about the corner cases. If your system has a bottleneck — for example, the SSL handshake — then an attacker can aim exactly at your SSL handshake. If you are not fast at handling small packets, then an attacker can flood you with small packets. So an attack always probes the corner cases, and we designed the system to be viable and stable in those corner cases: you can see stable performance regardless of the number of clients, the workload, and the DDoS attack in progress.

One of the huge problems of today's architecture is copying. If you need to send file content to a client over an encrypted connection, you must copy the file content into user space, encrypt it, and copy it again to transmit it over the connection — the data travels across the kernel/user boundary both ways. Without encryption you can use the zero-copy sendfile interface to transmit the file content directly, but a TLS handshake and record encryption pull everything back into user space. That is one of the reasons why we moved TLS into the kernel.

So the question we asked was: can we take a web server, optimize it, push it down the stack, and develop it further, so that it delivers more performance and is capable of defending against DDoS attacks?
For the test we served a small static file over plain HTTP and profiled the server. We see that the most expensive part is the HTTP parser. We also see memory copies and other I/O routines high in the profile, and finally the logging takes its share too. This is the top ten of the most expensive calls, and you can see that the cost is spread out fairly evenly. It means there is no single hot point which we could optimize to make the server dramatically faster; to build a real solution we would need to remove the copies, remove the parsing overhead, and so on.

A further problem of user-space servers is system calls. They have become cheaper on modern systems, but they are still not free. A single request takes about nine system calls in nginx. By the way, so far I have been talking about nginx, but most of what I am saying also applies to a lot of other web accelerators — nginx is just the example we started from.

Next, the web cache. A lot of accelerators still use old-fashioned file-backed caches, and there are problems with such caches. They generate keys into the cache using a hash over the URI and host, and the hash is mapped onto file names, with a hierarchy of two directory levels. Those two levels exist to cope with slow file-system operations: finding a file in a directory containing thousands of entries is expensive, and the file open itself is a system call which requires a path walk. So nginx uses a cache of open file descriptors to speed up access to cached files. However, that cache is not protected in any way, so an attacker can simply iterate over all of your resources, one request each, and your open-file cache will be efficiently thrashed — every request falls back to the file system. That is a simple way to kill such a server.

So modern HTTP servers have started to do something more sensible and use in-memory structures to handle the web cache. We do the same.
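A minimal sketch of the file-backed scheme just described: hash the host and URI into a key and map the key onto a two-level directory hierarchy. The FNV-1a hash and the exact path layout are illustrative assumptions, not the scheme of any particular server:

```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a over host and URI: an illustrative stand-in for whatever
 * hash a real file-backed cache uses for its keys. */
uint64_t cache_key(const char *host, const char *uri)
{
    uint64_t h = 14695981039346656037ULL;
    for (; *host; host++) { h ^= (unsigned char)*host; h *= 1099511628211ULL; }
    for (; *uri; uri++)   { h ^= (unsigned char)*uri;  h *= 1099511628211ULL; }
    return h;
}

/* Two directory levels derived from the key, so that no single
 * directory accumulates thousands of entries. */
void cache_path(char *out, size_t n, uint64_t key)
{
    snprintf(out, n, "cache/%02x/%02x/%016llx",
             (unsigned)(key & 0xff), (unsigned)((key >> 8) & 0xff),
             (unsigned long long)key);
}
```

Every lookup ends in an open() on the resulting path, which is exactly the system-call cost the open-file-descriptor cache tries to hide.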
There is also the issue of secondary cache keys. The problem is that the same resource can have different generated versions of its content — for example, one for mobile devices and one for desktops. If you want to cache both versions under the same URI, you need some kind of secondary key to differentiate mobile users from desktop users. So you build a secondary key from, say, the User-Agent header, and the cache returns different content depending on that key. All in all, we need to handle cached web content with the same high performance while supporting such secondary keys.

The next thing is HTTP parsing. As we just saw, the HTTP parser is the hottest spot under load. And surprisingly, most HTTP servers use an old-fashioned switch-driven automaton to process HTTP: there is one loop with a switch statement, and a state variable that drives it. Look at what happens for each input character. We start from the state variable: we branch through the switch's jump table to the current state's handler — that is one jump. When the handler finishes, we assign the next state to the state variable and branch back to the head of the loop — a second jump. Then the loop re-enters the switch and dispatches again — a third jump. Keep in mind that at this point we take at least three branches per state transition: one into the switch at the beginning, one back to the loop when we assign the state variable, and one more for the next dispatch at the end of the loop. Then we move to the next state, say state two, and do it all again.

If you look at the generated code and try to understand what is happening, you see the CPU mostly spinning around the loop and the jump table. The reason is simple: with an indirect jump driven by a state variable, the CPU does not know where it needs to go, so it cannot simply fetch the next instruction, and the branch predictor suffers.
If we are in state one and know statically which state follows, we often do not need a dispatch at all: the next state's code can simply follow in the instruction stream, and when a jump is needed, it is a direct jump to a known address. Either way, we do not need the dummy state variable at all — we do not load it, test it, or dispatch on it — so we save branches. Our parser is built this way: we encode the states directly and use direct jumps between them instead of a slow switch-driven jump-table parser. So we apply a different architecture to make the jumps as fast as possible.

Well, HTTP is a text protocol — I am talking about version 1.1 — so you can easily get very long strings. It could be a cookie; we have seen cookies of about 65 kilobytes in size. It could be a header line, or a URI. So sometimes you need to process very long strings.

Moreover, HTTP strings are a special case. There are several points which make HTTP strings special, and they all matter when optimizing string processing in an HTTP parser. The first one is that we do not actually need zero-terminated strings: we know each string's length, so we should process strings by length rather than scanning for a terminator. Secondly, there are the special delimiter characters, and chunked input is a real problem for us: a string can be split at any byte between two received pieces of data, so we have to keep the parsing state machine's position between receiving the parts. Also, some bytes, such as trailing whitespace, have to be handled in their own way.

And in an HTTP parser you always have some constant strings compiled in. For example, there are the names of the HTTP headers which you want to process specially — the Host header, the User-Agent header — and these names are part of the parser itself, so you have constant strings. When you compare an input string against such a constant string, you do not need to handle the case of the second string.
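The switch-driven loop and the direct-jump alternative described above can be contrasted in a toy request-line parser. This is an illustrative sketch, not the product's generated parser; it only extracts the URI length from "METHOD URI ...":

```c
#include <stddef.h>

enum state { S_METHOD, S_URI };

/* Switch-driven automaton: every character re-enters the loop,
 * reloads the state variable and takes the switch's indirect
 * branch before doing any useful work. */
int parse_switch(const char *buf, size_t len, size_t *uri_len)
{
    enum state st = S_METHOD;
    size_t n = 0;

    for (size_t i = 0; i < len; i++) {
        char c = buf[i];
        switch (st) {
        case S_METHOD:
            if (c == ' ') { st = S_URI; break; }   /* back to loop head */
            if (c < 'A' || c > 'Z')
                return -1;                          /* bad method byte */
            break;
        case S_URI:
            if (c == ' ') { *uri_len = n; return 0; }
            n++;
            break;
        }
    }
    return -1;  /* need more data */
}

/* Direct-jump automaton: no state variable at all.  Staying in a
 * state is a fall-through or a direct jump to a known address, so
 * the CPU's branch predictor has far less indirect work to do. */
int parse_goto(const char *buf, size_t len, size_t *uri_len)
{
    const char *p = buf, *end = buf + len;
    size_t n = 0;

method:
    if (p == end) return -1;
    if (*p == ' ') { p++; goto uri; }   /* direct jump to next state */
    if (*p < 'A' || *p > 'Z') return -1;
    p++;
    goto method;

uri:
    if (p == end) return -1;
    if (*p == ' ') { *uri_len = n; return 0; }
    n++; p++;
    goto uri;
}
```

Both functions accept the same inputs; only the control flow differs, which is exactly the property the talk is about.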
You only need to convert the case of the input string: the constant can be stored in one case, say lowercase, so you save half of the case conversions compared with a case-insensitive comparison of two arbitrary strings.

If you need to validate an input string against a large alphabet, you might think about using strspn() to accept or reject characters. This is a bad idea, because strspn() spends quite a lot of resources compiling the accept/reject set on every call. But HTTP has only a handful of fixed alphabets — for the different HTTP headers, for the URI — and these character sets come directly from the HTTP specification, so they are known in advance. You can precompile a few static lookup tables for these alphabets, and the performance of such matching is very, very good.

The last thing about modern web accelerators is network I/O. Consider the example on this slide. On the left, a packet arrives containing an HTTP message. First the packet is handled in softirq context, and the softirq places it into the socket's receive queue. The queue is then drained by the user-space process, which wakes up, copies the data out of the socket, and does its work; when the process finishes with the first socket it moves on to packets on the second socket, and so on. The important point is the CPU cache: while the softirq handles the packet, the data is hot in the CPU cache. But the cache is not big enough to wait while the user-space process gets scheduled, parses the HTTP, copies the data and so on — by that time the lines may already be evicted. If instead we process the packet entirely in softirq context, all the packets are processed as soon as possible, while all the data is still in the CPU cache.

All the issues which I have just mentioned are addressed in our product, which is a hybrid of an HTTP accelerator and a firewall.
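The strspn() point above can be sketched with a precompiled lookup table. The table below encodes the HTTP token alphabet (tchar from the HTTP/1.1 grammar); the function name and table layout are illustrative, not the product's API:

```c
#include <stdbool.h>
#include <string.h>
#include <stddef.h>

/* One 256-entry table per HTTP alphabet, built once at startup.
 * Validation is then a single load and test per input byte, with
 * none of the per-call set compilation that strspn() performs. */
static unsigned char token_tbl[256];

void init_token_tbl(void)
{
    /* tchar: letters, digits, and a fixed set of symbols. */
    const char *extra = "!#$%&'*+-.^_`|~";
    memset(token_tbl, 0, sizeof(token_tbl));
    for (int c = '0'; c <= '9'; c++) token_tbl[c] = 1;
    for (int c = 'a'; c <= 'z'; c++) token_tbl[c] = 1;
    for (int c = 'A'; c <= 'Z'; c++) token_tbl[c] = 1;
    for (const char *p = extra; *p; p++)
        token_tbl[(unsigned char)*p] = 1;
}

/* Length of the valid prefix, like strspn(s, token_alphabet),
 * but working on a length-delimited (not zero-terminated) string. */
size_t token_spn(const char *s, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        if (!token_tbl[(unsigned char)s[i]])
            break;
    return i;
}
```

Real parsers push this further with SIMD, but even the scalar table already removes the repeated set construction.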
It is a firewall in the sense that it applies filtering rules at the IP layer and at the HTTP layer, and it ships built-in features to work against DDoS and web application attacks. As I told you before, we have a specially designed in-memory web cache optimized for modern hardware. We have a lot of features, and I will cover the main topics on the next slides.

I want to start with benchmark results. The details of how we ran the benchmarks can be found in our wiki, and the slide shows that we reach 1.8 million requests per second on four CPU cores. If you analyze the results carefully: this is roughly three times faster than a plain web accelerator, and as a DDoS mitigation tool it should be faster still, because with the integrated firewall we can drop traffic much earlier than nginx can, without spending a full request cycle on each client. However, it is very difficult to benchmark real DDoS mitigation — it is very difficult to emulate a DDoS attack in a test lab. We have three servers in our lab, and a real DDoS is simply not possible to emulate in that environment.

The next interesting result concerns popular user-space TCP/IP stacks. There is an HTTP server built on top of the Seastar framework, which does its networking in user space. Seastar's published benchmark shows about 1.3 million requests per second on comparable hardware, so in this comparison it comes out somewhat slower. Basically, we are proud that our performance holds up against user-space stacks.

However, user-space stacks also have an integration problem: since the whole stack is built into the application, the kernel's own tooling no longer sees the traffic, and the ecosystem support is not so good. We, on the other hand, are fully integrated with the rest of the kernel. For example, if you need to apply some iptables rules, traffic entering the system below our hooks is still handled by iptables — just go ahead and do that.
If you want to shape or prioritize traffic, you can use the kernel's traffic control to distribute it, and again, traffic passing below our hooks is handled by tc as usual. If you need such things with a user-space stack, you need a user-space reimplementation of filtering and shaping, which is extra work and additional overhead.

Our approach does draw criticism. Yes, we know that programming in the kernel is considered harmful: it is not good to push application-specific logic into a general-purpose kernel, and a crash can take the whole system down. But we accept that trade-off deliberately — we do not hide it, and we mitigate it with careful development. I also want to note that while there are a lot of good ideas in user space about how to leverage modern hardware, a lot of user-space TCP/IP stacks do not have mature TCP features — say, proper congestion control and other hard-won machinery. The kernel TCP/IP stack is good and fast, and we build on it.

So the processing pipeline is: when we receive a request, we first check it against our filtering rules; next we run HTTP parsing; then the classification module analyzes the already-parsed HTTP records, and if by our rules the request is malicious, we generate a new rule, push it down to the lower layer, and just drop the request. If the request is good, we pass it on — to the cache or upstream.

As I said, we use synchronous sockets living in the Linux kernel. We do not use file descriptors, we do not use epoll and so on — and there are no locks. The reason user-space HTTP servers need locks around socket handling — and the reason we still see very good numbers from them despite that — is an event-ordering problem that every real-world HTTP server must solve, and the issue is quite serious.
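The check → parse → classify → push-rule pipeline just described can be sketched in miniature. All names and the toy "long URI" rule are hypothetical stand-ins, not the product's real rules or API:

```c
#include <stdbool.h>
#include <string.h>
#include <stddef.h>

enum verdict { TF_PASS, TF_BLOCK };

struct request {
    const char *src_ip;
    const char *uri;
};

#define MAX_BANNED 64
static const char *banned[MAX_BANNED];
static size_t n_banned;

/* Cheap layer: an IP filter consulted before any HTTP work. */
static bool ip_banned(const char *ip)
{
    for (size_t i = 0; i < n_banned; i++)
        if (!strcmp(banned[i], ip))
            return true;
    return false;
}

/* Expensive layer: inspect the parsed request.  A trivial stand-in
 * rule: treat overly long URIs as malicious. */
static bool http_malicious(const struct request *r)
{
    return strlen(r->uri) > 128;
}

enum verdict classify(const struct request *r)
{
    if (ip_banned(r->src_ip))
        return TF_BLOCK;            /* dropped before any parsing */
    if (http_malicious(r)) {
        /* Push a new rule down to the cheap layer, so the next
         * packet from this client never reaches the HTTP code. */
        if (n_banned < MAX_BANNED)
            banned[n_banned++] = r->src_ip;
        return TF_BLOCK;
    }
    return TF_PASS;
}
```

The key property is the feedback loop: an expensive HTTP-level verdict is converted into a cheap lower-layer rule, so repeated attacks get cheaper to reject over time.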
In this example we have two sockets: the first is a client socket, the second is a server socket, and the HTTP proxy shuffles data between them. If you have a bidirectional connection — the client sends a new request while the server is still answering the previous one — then events for the client socket and the server socket can arrive in any order, and with several worker threads you have to take locks on the shared connection state in different orders, and you can easily run into contention or deadlock. This is something every user-space HTTP server has to solve.

Now look at how we use the CPUs. Consider one HTTP flow: the first packet is processed on the first CPU, where the client socket's softirq handling runs. There is no lock contention, because all processing for a given socket is bound to one CPU — no other CPU accesses that socket. So we fully process the request on the first CPU, and then hand it over to the server socket, whose softirq handles the actual connection to the backend server. There is no locking, and we get great performance on this traffic pattern with a lot of concurrent connections.

To show this, we ran a connection-establishment benchmark. You can see that the blue line — kernel sockets — establishes connections much faster in comparison with user-space socket handling, and the performance is also more stable. So that is the basic payoff of the in-kernel sockets.

A note on HTTP parser benchmarks: you can find the full list of parser tests in our wiki. Just a few numbers here. We are faster than nginx by 1.6 to 1.8 times on short HTTP requests. And if you go to long strings — for example, a long cookie of about one kilobyte, which is very common nowadays — then the same parsing can be 3 to 6 times faster relative to the user-space implementation.
We also use a dedicated web cache built specifically for NUMA hardware. Being NUMA-aware means we can pin data to particular NUMA nodes to serve requests for popular resources from the local node, or we can replicate data between nodes — whichever placement gives faster responses.

The cache index is based on modern research into cache-conscious data structures. The idea of a cache-conscious data structure is that each lookup step fits in one CPU cache line. If you want to find some string, we compute a hash value and use it to walk the index, touching as few cache lines as possible. Since the workload is append-mostly, we simply write new records into the current data page; if we find that a data page is full, we allocate a new index node and may have to split the data page — and this is the only place where we take locks. All other accesses to the data are lock-free.

We provide several rate-limiting features against DoS attacks. You can, for example, defend against the very slow HTTP attacks by specifying timing limits — which is basically very hard to do in user space, because there you do not control the network layer. We also have protection against several web attacks; for example, we correlate requests with responses. Our general rule is: in the kernel we do only very fast checks; if we need to do some complex processing — for example, something that inspects response bodies — we leave that work to user space. The point is that the fast path handles the bulk of the traffic with very fast checks.

Some of the classification techniques are well known — for example, the familiar rate limits. We are also applying machine-learning techniques: broadly, we analyze how clients send requests over time and build rate profiles from what we see.

Now I would like to show some configuration. We have sticky cookies to fight against DDoS bots that cannot handle cookies.
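The one-cache-line lookup idea can be sketched with a toy index whose buckets are exactly 64 bytes, so a probe touches a single cache line. Bucket size, slot count, and the FNV-1a hash are illustrative assumptions, not the real index layout:

```c
#include <stdint.h>
#include <string.h>

#define SLOTS_PER_BUCKET 4
#define N_BUCKETS 1024

/* Each bucket is one 64-byte cache line: 4 full hashes (32 B),
 * 4 data offsets (16 B), a counter (4 B) and padding (12 B). */
struct bucket {
    uint64_t hash[SLOTS_PER_BUCKET];
    uint32_t off[SLOTS_PER_BUCKET];
    uint32_t used;
    char     pad[12];
} __attribute__((aligned(64)));

static struct bucket idx[N_BUCKETS];

static uint64_t hash_str(const char *s)
{
    uint64_t h = 14695981039346656037ULL;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 1099511628211ULL; }
    return h;
}

int idx_insert(const char *key, uint32_t off)
{
    uint64_t h = hash_str(key);
    struct bucket *b = &idx[h % N_BUCKETS];

    if (b->used == SLOTS_PER_BUCKET)
        return -1;          /* a real index would split here */
    b->hash[b->used] = h;
    b->off[b->used++] = off;
    return 0;
}

int idx_lookup(const char *key, uint32_t *off)
{
    uint64_t h = hash_str(key);
    struct bucket *b = &idx[h % N_BUCKETS];

    /* The whole scan stays inside one cache line. */
    for (uint32_t i = 0; i < b->used; i++)
        if (b->hash[i] == h) {
            *off = b->off[i];
            return 0;
        }
    return -1;
}
```

Splitting on overflow is the single point where a real implementation would need locking, matching the append-mostly, lock-free-read design described above.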
We also support sticky sessions, which makes it much harder for simple bots to replay or share a session.

To finish, a few words about the state of the product. The code base is still very small — much smaller than most comparable servers — and we do not try to move everything into the kernel: the plan is to deliver the fast path in the kernel and hand off to user space, where an application-level firewall can do the heavier analysis. We use best practices in kernel development; you can find our sources and the tools we use online, and we will deliver packages, so you will not need to build the kernel and the firewall yourself.

So, as a summary: consider the product when you need a lot of performance, when traditional servers are not enough for you, and when you need efficient protection against DDoS attacks. The product has been powering our own site for nine months, and in that time we have not seen a single failure. You can find our releases online. So thank you for listening — we have just a few minutes for questions.

Question from the audience: is the code compatible with the mainline kernel, or how does that work? — The code is compatible with recent kernel versions, yes. And yes, our site is powered by our own product, obviously. Actually, we have a patch for the kernel — you can download it from our site — and a kernel module that does the main work. Hopefully we will find the time to get the patch upstream. Did I answer your question?

— Yes. — Good, so thank you very much. — Thank you, it was very interesting.