Normally, when we say DevOps, we think about servers and large-scale infrastructure, right? Lots of servers, how to scale up servers, how to scale up logs, how to scale up this and that. Is that how we normally think about DevOps? Agree, disagree? Okay. So here we have a speaker, Sriram, who runs a company called Wachi. Before explaining what Wachi is, let me ask you another question. When you go to your hometown, wherever that is other than Bangalore, with your 3G or 4G dongle, and you sit at home, does it drop to 2G speed? And it sucks. Skype doesn't work, Gmail doesn't load. How many remember loading HTML Gmail? A couple of hands, right? Okay. That is the problem Sriram wanted to solve: a live telecast of a village marriage to a relative of yours in the US. Is that even possible? BSNL doesn't work half the time, 3G never works, and here in Bangalore the internet doesn't work either. Interestingly, Sriram has built something very interesting for this. I don't want to spoil the excitement by sharing what he has done, so over to Sriram to share his experience of how he solved a common Indian village problem, getting good bandwidth in remote areas, through a very simple approach. Welcome Sriram to the stage.

So, this is going to be a very different session, because over the last day or so most of you were completely engaged with different techniques for deployment, automation for deployment, making the life of a sysadmin or DevOps engineer easier. Here, I am going to give you a perspective on a different kind of automation, one that is more relevant to a startup than an enterprise: when you have a very limited number of people working on your product and a very small timeline, in the sense that your budget doesn't let you survive too long. How do you automate things so that you can convert a concept into a product and start selling it? That is what I am focusing on in this session.
So what is automation? What is the purpose of automation? It is not just related to deployment or the cloud. Automation in computer science, I believe, has two main purposes. One is to reduce the complexity of a task: you delegate most of the decision making to a system and your life becomes simpler. The other is to put a human in control of a situation. If there is a notification flood, which one do you prioritize and handle? If something tells you these are the high-priority notifications, take care of these first, then it helps. One and a half years back, a whole data center region went for a toss and Netflix was affected. They had the whole system automated, but it is in such eventualities that human intervention is actually needed, and the human intervention there was composing a regret email to all the customers. Those are the critical things a human should do; the rest we can delegate to a system. So, for a startup. Before going into this, let me introduce myself a bit more than what Shikha said. I'm the CEO and founder of Wachi Technologies, a small startup. What we do is try to improve your internet speed. There is a nice little box, I can show you a picture later, which takes up to seven wireless connections, 2G, 3G and 4G, combines the bandwidth, and gives you one single high-speed internet connection to your laptop or network device. The use cases are media broadcasting, event managers, news coverage, any live video stuff. Now, we had the idea that we wanted to do this, but we didn't have a clue how to go about it. We had bits and pieces. We developed the software on an x86 platform, but when you deliver an appliance, it has to be portable and run on battery, so it had to move to an ARM platform. So we were doing a lot of R&D. What is the right ARM processor?
What is the right memory size? What is the right USB controller? We do a lot of things over USB, so we need a very stable USB controller. What is the right kernel version? What are the right driver patches? We had so many things in mind to automate, and the first thing we automated was the R&D part of it. This is a typical desk, where you can see six or seven different boards. Some of them are in development, some we are evaluating: some from TI, some from Freescale, some from Allwinner, a Chinese company. We are evaluating different SoCs in parallel at the same time, trying to see which gives maximum performance, which has the lowest power drain, which is more stable, which has better thermal capacity, and, on top of that, which is cost effective while giving the maximum performance. Now imagine we are building all of these. For every single variation, we have to take a lot of measurements: CPU, RAM, power, voltage, how it works over a long duration, whether there is a memory leak, and so on. So there is a huge matrix we started building, and then we realized it is not possible; at that time we were a three-member team, and it is not possible for a three-member team to do all this. So we thought, let's change the philosophy. Let's not do a POC, then ramp up a team, then build the automation. Let's automate the whole process of building even the POC. So we automated this whole step, with Jenkins and Robot Framework working together, building the different firmware images for us, for all these different versions with different patches and kernels. Then all we need to do is quickly connect the cable, certain tests start running, and we get the metrics. We spend most of our time looking at the metrics and saying, OK, this one is good, this one is bad, now let's try these combinations, and so on.
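As a rough sketch of what driving that evaluation matrix looks like (the board, kernel, and patch names below are hypothetical placeholders, not the actual hardware list), you can enumerate every combination and hand each one to a Jenkins build job:

```python
from itertools import product

# Hypothetical evaluation axes -- stand-ins, not the actual boards/kernels.
boards = ["ti-am335x", "freescale-imx6", "allwinner-a20"]
kernels = ["3.4", "3.10"]
patch_sets = ["vanilla", "usb-stability"]

def build_jobs():
    """Yield one firmware build job per (board, kernel, patch-set) combination."""
    for board, kernel, patches in product(boards, kernels, patch_sets):
        yield {
            "name": f"fw-{board}-{kernel}-{patches}",
            "board": board,
            "kernel": kernel,
            "patches": patches,
        }

jobs = list(build_jobs())
print(len(jobs))  # 3 boards x 2 kernels x 2 patch sets = 12 builds
```

Each job dictionary would then be passed to a parameterized build; the point is that the combinatorial fan-out is generated, not maintained by hand.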
Another primary thing is: how do you do field testing? Field testing for us is very critical. This is not a server appliance; it is not going to sit in one data center where it is enough to test in that one environment. I'm sure even servers need to be tested across different data centers, but imagine an appliance that has to travel. It can be anywhere: it can be here, in a village, in a train. How do you test in different scenarios? We were sending engineers to different places to do the testing, and some of the results were good while others were complete nonsense. Later we realized these people were testing near a women's college, so there were a lot of distractions. We cannot always rely on human nature to do proper testing, and we cannot take the risk of wrong measurements. So we automated the field testing. You just go and set up the device. Basically, this is our device, the white box. Here it is connected to seven 3G connections; this is in Chennai, and it was serving around 12.5 Mbps at that place. We need to go out into the field and take the measurements, so we automated that process: we can just ask Robot Framework to run the measurement tests and even point out anomalies. And of course, have you ever done a long-duration test? I don't know how this maps directly to a cloud deployment perspective, but here we don't have tier-one, tier-two, tier-three support; it's all the developers. Once it is deployed, the box has to perform 24x7. Well, maybe not 24x7, but we guarantee at least 72 hours. So we have to run 24-hour, 36-hour, and 72-hour tests and make sure the temperature in the box doesn't go so high that the chip gets fried, and that the RAM doesn't spike because of a memory leak over time. Sometimes the memory leak appears only after two or three hours of usage.
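A minimal sketch of the kind of long-duration check this implies. The linear-trend heuristic and the threshold below are illustrative assumptions, not the actual analysis the framework performs:

```python
def slope(samples):
    """Least-squares slope of periodic memory samples (MB) over sample index."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def looks_like_leak(samples, mb_per_sample=0.5):
    """Flag a steady upward drift in RAM usage as a suspected leak."""
    return slope(samples) > mb_per_sample

steady = [200, 201, 199, 202, 200, 201]   # noise around a flat line: fine
leaking = [200, 210, 221, 233, 240, 252]  # steady climb: suspect
print(looks_like_leak(steady), looks_like_leak(leaking))  # False True
```

In a real long-duration run the samples would be collected every few minutes for 24 to 72 hours; the drift only becomes visible over that horizon, which is exactly why a human can't babysit it.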
So definitely a human cannot sit and measure this continuously; it has to be automated. Internally, we decided to change our philosophy to a different approach. How many of you are developers here, or have been developers? Great. So most of you must know what TDD, test-driven development, is. The philosophy of test-driven development says: if you are going to write a function, a piece of logic, first write the unit test case, and then implement the logic. Never write the logic first. Say you have an addition of two numbers. You write the unit test cases first: 5 plus 5 should be 10, 5 plus minus 5 should be 0. Then you write the logic, and as long as the logic satisfies the unit test cases, your logic is correct. That's the concept of TDD. Now, how do you do the automation? We said: instead of creating manuals and instructions for a human to read and execute on a target device, whether that's your laptop, a server, or our appliance, why not create an instruction for a computer to handle the target device? That was the philosophy. Now, what is new about this philosophy? Nothing. People have always written instructions for a computer to handle a target computer or target device. But there is one catch. Say we write something for a computer to handle our box, and it fails: our box with seven 3G dongles doesn't give 15 Mbps, so we consider the test a failure. Then a human has to take the same instructions, run them manually, and see where it is going wrong. Now, a computer instruction can be written in any scripting or programming language, but the human who picks it up can be an intern, an outsourced consultant, or a busy person with only a couple of minutes to look through it. He doesn't necessarily have to go through all the programming stuff. He doesn't have to be a programmer.
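To make the TDD idea concrete, here is the textbook flow in Python (the `add` function is just an illustration, not part of the product): the assertions are written first, and the smallest logic that satisfies them comes second.

```python
def add(a, b):
    """Implementation written only after the tests below existed."""
    return a + b

# Tests first (TDD): these assertions are the specification.
assert add(5, 5) == 10   # 5 plus 5 should be 10
assert add(5, -5) == 0   # 5 plus minus 5 should be 0
assert add(0, 0) == 0
print("all tests pass")
```

As long as the implementation keeps these assertions green, it is considered correct; the same first-write-the-check discipline is what gets lifted up to whole-device automation below.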
He can be a user expert or a domain expert. In that case, how do you make sure the automation is described to the computer in a language common to both a human and a computer? Basically, you have to describe it in a natural language. That is the only catch. You write something in natural language and let the computer automate it. If it fails, a human takes over, reads through the natural language, understands it because it's plain English, and then tries to reproduce the same scenario. So the philosophy is: tell a computer what to do, not a human. Sometimes this might not work. When it doesn't, what we do internally is treat that as a bug, a moderate or severe bug. We raise the bug first, then implement a hack, writing a script or something like that, and make sure the bug is fixed sooner or later. We don't raise it as a feature; we raise it as a bug. So that is the philosophy part of it. Then we looked for the right tool for this. There are plenty of automation tools out there, from AutoIt to Sikuli to QTP, a lot of standard enterprise and open source tools. But which one supports our philosophy? There is a concept called automation-driven testing, and we hacked this concept into doing automation for everything. It need not be applied only for testing after writing code; it can be applied to anything: building, R&D, doing anything to a target system. Now, I will not go through KDT and DDT in depth, but briefly: KDT is keyword-driven testing, which has a set of keywords, and a keyword is nothing but a statement in natural language, plain English. Data-driven testing is a matrix on top of keyword-driven testing. I'll just leave this up for a minute; you can go and read about it later. What I am more interested in, and what we have adopted, is behavior-driven testing. Have you heard of Cucumber?
So Cucumber is a behavior-driven testing tool. It got quite popular three or four years back, in the 2008 time frame. At that time I was with Tektronix. I proposed and developed a test automation system for Tektronix, which Tek is still using. You know Tektronix? Popular for oscilloscopes. I was working in the MPEG division, on video analysis. That framework adopted a similar philosophy, and when I came out of Tek, I was looking for a similar open source product. That is when I came across Robot Framework. If you're a Rails engineer and you're hellbent on BDT, you might also try Cucumber. OK, so here are a few tidbits about Robot Framework. That's the site; it has pretty much all the information, well documented and self-explanatory. Of course, it's open source. If you're using Python, it is a natural fit, but that doesn't mean it doesn't fit anything else; it can easily be extended to different languages. We have extended it to UI testing tools: it has a standard library for Selenium, and we have extended it for Sikuli. Have you used Sikuli? Yeah, it's a wonderful tool. If you haven't, you should try it; it might help you do some kinds of automation at the cloud level too, depending on your creativity. It can also be integrated with Jenkins and other plugins. What I am most interested in showing you is a demo of how Robot Framework can go hand in hand with Jenkins, with the whole system automated. So let me take you there. Robot Framework is quite mature in the sense that it has multiple editor supports and multiple UIs; this is Sublime. Basically, Robot Framework is quite Pythonic in that it follows a plain ASCII representation. This is how you write a test case, and the representation is heavily borrowed from Markdown. Don't get bogged down by this format.
Once you go through the syntax, it's pretty simple: it's all tab-separated stuff. I'm not going to go deep into the syntax; my point in showing you a Sublime plugin is to illustrate that there is already built-in support for Vim, Emacs, Sublime, any popular editor a programmer might like to use. It also has a dedicated UI called RIDE, which is, of course, written in Python. In it I have a few sample test cases, so I can give you an overview of how a test is written. This is taken right from our own setup. You see the notations on top; they can be any string, but this notation, specific to us, says this test case uses seven Reliance network dongles. And this is how a typical data-driven test case works. I feed that combination a link, and I say: this is the expected time, 10 seconds, within which you should download; this is the maximum CPU it can reach; and this is the maximum RAM it can take, in MB. You can build a whole matrix of this: different links from different sites, measured against the metrics, and you can keep expanding it. If you look at it, this matrix is essentially calling a single keyword, which I wrote here, called file download test. So I have associated a keyword with a particular matrix; it is enough to understand that. Now let's go deeper into the keyword, which is what I'm trying to illustrate here. This keyword is actually implemented in natural language. If for some reason this test goes for a toss, anybody can open this and go through it: it is self-documented natural language. You can write it in any flexible way; it's not even a DSL, a domain-specific language. You can alter the steps, you can add new steps, and the editor helps you along.
It has auto-suggestion: if a keyword already exists, it gives you suggestions and tells you what it means. So here, can you read this? Is it visible? I tried to zoom to the max. It says: start timer, monitor CPU for max value, monitor RAM for max value, download a file, with the file link passed in here, then stop timer, stop the other monitors, and check whether the values match. Let's run this test now. I have connected my tablet, which is tethered to this. This is a download test, so I'm going to run it. What's happening is that it tries to download the file and verify it, and it has failed. That is good, because I can show you a failure log as well. So I open the log; the log file came up here. The test case failed because the CPU crossed the limit of 90. Let me go back to the test case quickly; don't bother about the exact notation we are using, you could also write that the CPU is allowed to be greater than 90, that's fine too. I'll come to the magic of how this works in a moment; for now let's say the CPU can be 100, not a problem, though I think it should really be fixed. Good. Now I can show you the pass log also: all test cases passed. If you want to drill down, you can go in and see what each keyword in that particular test did and what its output was. Now, where is the magic happening? We wrote something in English, it started executing, and we had this test successfully run. The magic is in the philosophy of behavior-driven testing. These test cases use what we call keywords. In our case, the top-level keyword is file download test, and it says start timer and so on. A keyword can be defined as a collection of another set of keywords, so it can nest down like that. Finally, the base keywords are the ones actually implemented in whatever language you prefer.
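Robot Framework can load a plain Python class as a keyword library, exposing each method as a keyword (`Start Timer` resolving to `start_timer()`, and so on). The sketch below is a hypothetical illustration of how steps like "start timer" and "download a file" could bottom out; the class and method names are made up, and a local file read stands in for the real HTTP download and CPU/RAM monitors:

```python
import os
import tempfile
import time

class DownloadKeywords:
    """Hypothetical base-keyword library; each method becomes a keyword."""

    def start_timer(self):
        self._t0 = time.monotonic()

    def stop_timer(self):
        """Return seconds elapsed since Start Timer."""
        return time.monotonic() - self._t0

    def download_file(self, path):
        # Stand-in for the real HTTP download: read bytes from a local path.
        with open(path, "rb") as f:
            return len(f.read())

    def value_should_be_under(self, value, limit):
        """Assertion keyword: fail the test if the measured value exceeds the limit."""
        if value > float(limit):
            raise AssertionError(f"{value} exceeded limit {limit}")

# Demo: time a "download" of a small local file and check the time budget.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payload" * 1000)
    path = f.name

kw = DownloadKeywords()
kw.start_timer()
size = kw.download_file(path)
elapsed = kw.stop_timer()
kw.value_should_be_under(elapsed, 10)  # expected time: 10 seconds
os.unlink(path)
print(size)
```

The natural-language test case stays readable by a non-programmer, while the bottom layer stays ordinary, debuggable code.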
Now, there is a philosophical change in this. It gives the freedom of writing test cases to a non-programmer. You need not know any coding; you can write a test case in natural language and start reusing keywords. As soon as someone writes a keyword, it gets stored and starts being offered by the auto-suggestion, and they can extend those keywords for other test cases. So somebody can sit overnight and write all 100 test cases, and the next day you will have the base-level keywords. They may not be 100; they might be 20 or 25 base-level keywords, because they keep getting reused. Then you decide what the right language is to implement those base-level keywords. In our case, we implemented them in Python, but they can also be extended to many different languages using a simple Python wrapper. Then you start using it. So you are isolating the test-case-writing phase from the development phase, while assuring that automation is going to happen right from the beginning. Now I'm going to show you a small video clip; do I have time for it? It's less than a minute, and it shows the integration with Jenkins. Try to get the context, if not the details; that's fine. So we have a test case here, and we ran the test case, and this is how we configure it as a Jenkins project. It can be a build project, or you can directly say, why don't you do a field test? It doesn't always have to be triggered after a code build; you can configure it to do many different tasks. You can also specify what Jenkins should consider a success: say, if 50% of the test cases pass, consider the activity a success. So the build happens, and then the result is integrated into Jenkins, and you can use Jenkins' notification features, the emails and alerts, to forward it.
And you know this is happening. Let me also share some of the nice hacks we did. If you watch, the result graph is integrated into Jenkins, and the test case details are also embedded within the Jenkins dashboard. And we did one thing in a particular way: for example, there is a test suite where you put all the Reliance network dongles into the box and test it. After that suite is done, you want to replace them with the Tata Photon dongles. How do you do that? What we made is this: Robot Framework runs all the test cases, then sends us an email. One of the team members sees the email; Robot Framework tells him the next combination of dongles to connect. The person goes and manually connects those dongles, then comes back and replies "done" to the same email, and Robot Framework continues its tests from there. So it is continuously running the tests in the background. I hope that has given you a context of what Robot Framework is and how it can be used by a small team. If you put in your creativity, you can use it to automate anything, not just testing. Thank you. Wonderful. If there are any questions, do I have time for questions? Is there one interesting question anybody wants to ask? Anyone in the back? Yeah, there you go. So basically, this will connect to different types of OSes ultimately, right? So then there must be some firmware running on your device? Not necessarily. Earlier, somebody was talking about Ansible, right? It's a similar logic: you just need to be able to reach a server running on the device, and everything is done over that connection. OK, OK, thanks. So the appliance or server doesn't need any special daemon running on top of it to listen to Robot Framework. Wonderful, sir. Now you know what to do if there is a wedding in a village: call him and livestream it.
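The email-pause hack amounts to a keyword that blocks the suite until a human acknowledges a physical action. A minimal sketch of that pattern, with a flag file on disk standing in for the email reply (all names here are hypothetical; the real setup used the test rig's outgoing and incoming mail):

```python
import os
import tempfile
import time

ACK = os.path.join(tempfile.gettempdir(), "dongle_swap.ack")

def wait_for_operator(prompt, poll_s=0.05, timeout_s=5):
    """Block the test run until a human acknowledges.

    In the real setup the acknowledgement was a reply to an email sent by
    the rig; here the appearance of a flag file stands in for that reply.
    """
    print(prompt)  # e.g. "Connect the next set of dongles, then acknowledge."
    deadline = time.monotonic() + timeout_s
    while not os.path.exists(ACK):
        if time.monotonic() > deadline:
            raise TimeoutError("operator never acknowledged the swap")
        time.sleep(poll_s)
    os.remove(ACK)  # consume the acknowledgement so the next pause works

# Demo: pre-create the acknowledgement so the wait returns immediately.
open(ACK, "w").close()
wait_for_operator("Swap in the next set of dongles, then acknowledge.")
print("resuming suite with new dongles")
```

The suite stays fully automated between the pauses; the human only does the one thing automation can't: physically swapping hardware.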
Wonderful round of applause.