So, what we have done here is come up with a possible stack for building this sort of matching engine. Of course, there can be multiple ways to build a matching engine, but this is what we thought would be a good start. As far as the stack is concerned, we will go with a Windows-based OS, Windows 7, and since we have been talking about Python all along, we will use Python as the programming language. There are many integrated development environments you can use to develop Python code; we have decided to go with Spyder, which comes along with the Anaconda distribution, since it also provides a lot of modules out of the box that we do not have to install separately. And then we will use a very minimalistic, low-footprint database called SQLite, which can be used in-memory as well as file-based. In this case, we will be using it in file-based mode. So, this is what we thought could be a reference architecture for the matching engine. Again, this is very minimalistic. There might be a lot of bells and whistles that come with an actual production-grade matching engine, but this is just to drive the point across. So, we will have an order service, which will be responsible for taking all the orders from the external world for the matching engine to operate on. By orders, I mean buy orders and sell orders, and different flavors of orders like market orders and limit orders. We would typically communicate with this order service over HTTP, so it will be a RESTful API. And as the order service keeps getting orders, it will push those orders into an orders queue. Again, we will not get into the semantics of that orders queue; we are explaining this at a very logical level right now. So, the orders queue will start getting these orders.
And as soon as the orders queue starts getting filled up, the matcher, which is observing the orders queue, will start matching the different orders that are placed. So, this is like an observer pattern, where the orders queue is being observed by the matcher. And then at the bottom of the architecture, you see a database. As soon as an order comes in, the order also gets placed into the database, in addition to getting placed on the orders queue, which is in memory. And when the matcher finds a match based on the orders in the queue, it will also record that fact in the database. So, we will move on now. I don't know how clearly it is visible, but this is a very simple class diagram that we thought could be used for the different entities and actors in this matching engine. We have the order class, which represents an order placed by the customer. We have the matcher, and the orders queue, which is shown as a class again over here. And as the orders start getting matched, trades start getting created, so we have a trade class as well. So, moving on. Okay. Now we'll talk about a case study. If you have to build a matching engine, these are the minimum requirements that it should satisfy, and we'll talk a little bit about those requirements. Once this session is over, and once this entire program is over, the expectation is that all of you who have been participating in these sessions for so many weeks now will get an opportunity to actually build a matching engine of your own and submit it to the course organizers. So, this is the problem statement: build a matching engine to match orders that are placed for buying and selling stocks, with some of the features that are listed here.
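For a rough idea of what the order and trade classes from that diagram might look like in Python, here is a sketch; the actual attributes from the slide are not reproduced here, so these field names are illustrative assumptions based on the description.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Order:
    # Illustrative fields only; the real class diagram may differ.
    order_id: int
    trade_type: str      # "BID" (buy) or "ASK" (sell)
    order_type: str      # "LIMIT" or "MARKET"
    stock_code: str
    quantity: int
    price_limit: float
    customer_id: str
    placed_on: datetime = field(default_factory=datetime.now)

@dataclass
class Trade:
    # A trade records which buy order matched which sell order.
    buy_customer_id: str
    sell_customer_id: str
    quantity: int
    stock_code: str
    price: float
    buy_order_id: int
    sell_order_id: int
```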
So, we'll support two kinds of orders: limit orders and market orders. As I explained earlier, a limit order dictates the price and the quantity at which the order needs to be traded. A market order has no such restriction; whatever is the best available price, that is the price at which the stock is traded. And then there are certain flavors that come with these orders as well. For example, there might be a specification that says that partial fills should be allowed. Which means that if, let's say, I am looking for 50 shares of stock X and somebody is willing to sell only 20, then if allow partial fills is enabled, this trade will go through, and I will be able to buy just those 20 from the seller. But there is another specification, all or none, in which case, if the entire order cannot be fulfilled, I don't want the trade to go through at all. That is what this particular specification controls. And then I could also specify a minimum quantity. So if, let's say, allow partial fills is specified, but I want to say that at least this much should be traded: my total order is 50 stocks, partial fills are allowed, but I don't want to go below, say, 20. So then another variant comes into the picture. As I said, this is a very minimalistic matching engine. There are many other flavors out there, but we don't want to make it too complicated, which is why we thought we would just go with these variants and flavors. Again, to reduce the complexity, we will have only intraday trades. We will not go across business days, because then we would have to start getting into end-of-day and close-of-day processing and all that. So, we thought we would keep it simple.
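As a rough sketch of how those flavors interact when deciding how much of an order can trade, something like the following could work. The function name and signature are my own, not taken from the prototype.

```python
def fillable_quantity(wanted, available, allow_partial, min_quantity=1):
    """Return how many shares can trade under the given flavor, or 0 if none.

    wanted        -- quantity the order still needs
    available     -- quantity the counterparty offers
    allow_partial -- True for "allow partial fills", False for "all or none"
    min_quantity  -- smallest acceptable fill when partials are allowed
    """
    if not allow_partial:
        # All-or-none: trade only if the entire order can be fulfilled.
        return wanted if available >= wanted else 0
    qty = min(wanted, available)
    # Partial fills allowed, but never below the stated minimum quantity.
    return qty if qty >= min_quantity else 0
```

So an all-or-none order for 50 shares against a seller offering 20 returns 0, while the same order with partial fills allowed and a minimum of 20 returns 20.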
When you start developing this matching engine, you may decide to mock the database calls as well. You might decide to do everything in memory and return the output of the matching that happens within memory. There is no stipulation that you have to write to the database; of course, you are free to do so, but it is not a constraint as such. Okay, so the program or matching engine that is developed should have the capability to accept an input file containing the orders, and then produce an output file containing the trades, or matches, for the given orders. That is how the program should operate, and that is how it will be evaluated: on the basis of an input file and the expected output produced by the program. So, we'll move on now. This is the input file format. It is a comma-separated file where each record represents an order, and there can be multiple such records, signifying multiple orders. We'll just go through the format of each record, which is comma-separated. The first field is the trade type, and there are two possible values: it can either be a bid, which means it's a buy, or an ask, which means it's a sell. Then the price limit, which is a floating-point number. It is applicable only in the case of a limit order, which we'll come to later; in the case of a market order it has no meaning, so it can be anything, zero or more than zero, and it will not be looked at. In the case of a limit order, this is the price limit at which the order should go through. Then the quantity, which is an integer, and after that the stock code; for example, RIL is one stock code.
Then the customer ID of the customer who is placing the order, which is a string, and then the order type. The order type determines whether it's a market order or a limit order, so there are two possible values here: market or limit. And then the flavor, which we talked about on the earlier slide, which is whether you want to allow partial fills or all or none; again, these are the two options allowed here. And then the minimum quantity, which is the minimum quantity at which the order should go through. We have given an example at the bottom of the slide, so we can go through that. We can see that this is one order being placed. It's a bid order. The price limit is 13, and it's a limit order, a limit bid order. The buyer wants to buy 10 stocks of GOOG, and C1 is the customer ID. Allow partial is specified, and the minimum quantity is one. Next is the output file format. This is what we would expect from the program as it matches orders and generates the trades. We'll go through a record of the output file format. Again, it's a comma-separated file. The first field is the buy customer ID, the customer who is buying the stocks, and the second field is the sell customer ID, the customer who is selling the stocks. Then the quantity that was traded between the buyer and the seller, the stock code, the price at which the trade happened, and the buy order ID and the sell order ID. When an order gets placed, it is assumed that the system will automatically generate an ID for each order, and that ID serves as the identifier for the order here. Again, at the bottom, we have an example of the output file. We can see that C001 is the buyer, C002 is the seller, and the quantity of stocks being traded is 50.
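To make the two record layouts concrete, here is a sketch of parsing an input order record and formatting an output trade record. The field names and flavor values are assumptions based on the description above, not the prototype's actual code.

```python
import csv
import io

def parse_order(record):
    """Turn one input CSV record (a list of strings) into a dict,
    following the field order described above."""
    (trade_type, price_limit, quantity, stock_code,
     customer_id, order_type, flavor, min_quantity) = record
    return {
        "trade_type": trade_type,           # BID or ASK
        "price_limit": float(price_limit),  # ignored for MARKET orders
        "quantity": int(quantity),
        "stock_code": stock_code,
        "customer_id": customer_id,
        "order_type": order_type,           # MARKET or LIMIT
        "flavor": flavor,                   # e.g. ALLOW_PARTIAL or ALL_OR_NONE
        "min_quantity": int(min_quantity),
    }

def format_trade(trade):
    """Turn a trade dict into one output CSV record."""
    return [trade["buy_customer_id"], trade["sell_customer_id"],
            str(trade["quantity"]), trade["stock_code"],
            str(trade["price"]), str(trade["buy_order_id"]),
            str(trade["sell_order_id"])]

# The example bid order from the slide, as a CSV line:
sample = "BID,13.0,10,GOOG,C1,LIMIT,ALLOW_PARTIAL,1"
order = parse_order(next(csv.reader(io.StringIO(sample))))
```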
So, basically, C002 will sell 50 stocks of that stock code to C001 at the rate of 17.3 per share, and the next two numbers are the order IDs. With that, we will now go into a prototype that we have developed for the matching engine. This is a very simple prototype; of course, we are not trying to satisfy all the requirements in the case study, but most of them. So, this is Spyder, the IDE that we spoke about earlier. We can see the project that has been created; the project's name is matching engine, and this is the package structure. We have EDU, and within EDU we have the main package called matching engine, which has most of the code. There is a folder for tests as well, which has some of the test cases. Before we get into the design of the code and its workings, we will just run through the application so that we get an idea of what exactly it is doing. This is a very simple web page that has been developed. It shows the orders that are placed and the trades that result from those orders. Currently there is nothing in it, because no orders have been placed and no trades have happened. Next, this is the curl command, which is used to send requests to RESTful APIs. We are sending a request which, as you can see here, is a bid request. The customer ID is CC001, the stock code is GOOG, the quantity is 10, the price is 18, and it is being sent to this endpoint to place the order. As soon as I execute this command, the order will be placed. If we now go to the browser, we can see that the order has been placed here. This is the order ID, which is automatically generated by the system, the trade type, the price limit, and so on. So, this is the bid order. Now, let's look at another order that we can place.
So, now we can see that this is an ask order. I will change the customer ID; the stock code is the same, the quantity is 50, and the price limit is 17. So, the buyer from before is willing to buy at 18, and this seller is willing to sell at 17. Because the condition is satisfied, the trade will go through: the buyer is willing to pay a maximum of 18 for the stock, and the seller is willing to receive a minimum of 17. So, in this case, the match will happen. As soon as I place this order, we can see that a trade is executed over here. The trade has been executed, the order has been placed, and it has been executed at 17. The buyer will always benefit: even though he said 18, the trade happens at 17, because that is the minimum the seller is willing to receive. So, this is how the matching engine works. I just wanted to show its workings before we go into the actual code. So, now we'll move on to the workings of the code. Okay. Let's start from the beginning. The entry point of this application is the RESTful service. If you remember the reference architecture, we had an order service; this is that order service. Internally, it calls another service, but you can think of this as a gateway into the application, which deals with the message transport and the communication with the outside world. We are using a library called Flask, which allows us to create RESTful APIs. This is the method that we use to place the order. As you can see, this is the endpoint: whatever the URL of the application is, you just put slash order at the end, and that will be the endpoint for placing an order. It's a POST request, and it takes a request in the form of a JSON string. As we could see in the curl command, we were sending a JSON string to the RESTful API.
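A minimal sketch of what such a Flask endpoint might look like follows; the prototype's actual code is not reproduced here, and the call to the order service is stubbed out, so the field names are illustrative.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/order", methods=["POST"])
def place_order():
    # Parse the incoming JSON body into a plain dict.
    payload = request.get_json()
    # In the real prototype this would build an Order object and hand it
    # to the order service; here we just normalize and echo it back.
    order = {
        "trade_type": payload["trade_type"],
        "stock_code": payload["stock_code"],
        "quantity": int(payload["quantity"]),
        "price_limit": float(payload["price_limit"]),
        "customer_id": payload["customer_id"],
    }
    return jsonify(order), 201
```

A client would then POST a JSON body to the slash-order endpoint, exactly as the curl command in the demo does.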
And once the request comes in, we basically create an order object. If we again recall the class diagram, we had an order class, so we create an order object and populate it based on the parameters coming from the request JSON. Then we call the order service to actually place the order. So, now we'll go to the order service; this is where the order gets placed. As you can see, this is the method that was being called, place order. Internally, since we have gone through the different ORM frameworks during the coursework, we are using the SQLAlchemy ORM framework that is available for Python. I'll just show that. These are the entities: the order entity and the trade entity. The order entity is mapped to a table called orders, and similarly, the trade entity is mapped to a table called trades. These are the various attributes of the order class and the trade class. What we are doing in place order is taking the order that is coming in and enriching it, because there are some things which do not come in from the external world. Like the order ID: as I said, the order ID is auto-generated. Then the status is set to placed, along with the time the order was placed. And the quantity pending, which is initially set to the quantity being requested. Then we create a session, a SQLAlchemy session, add that object to the session, and commit it. As soon as we commit, whatever is in the session gets flushed to the database and persisted. Once it gets persisted, we read the placed order back, fetching it from the session by firing a query, and then we return that placed order. Because the placed order will now be enriched, right?
Because we have done a lot of things that were not there in the original request. So, this is how an order gets placed. And now multiple orders are getting placed: buy orders are getting placed, sell orders are getting placed. While all this is happening, the matcher has to kick in. If you again reflect back to the class diagram, the design said that the matcher always observes an order being placed. So, how does it observe it? Here we have something called a decorator; Python supports the concept of decorators. This decorator has been placed on this method, and we can see that the decorator is defined over here. As soon as a call comes in for place order, the decorator kicks in, and once the method call has executed in its entirety, it does some additional work at the end. This is what that additional work is: here the actual decorated method, the place order method, is called, and as soon as it completes, the decorator does some additional stuff, which is basically putting the order that was placed into the orders queue. So, we'll look at the orders queue as well. As soon as an order is placed, it is pushed into the orders queue, which maintains a map of different stocks; for each stock, it maintains two lists, a list of buy orders and a list of sell orders. This is how it facilitates the matching done by the matcher. So, once the order is placed over here and pushed into the orders queue, we invoke the matcher. This is the matching algorithm, the find matches method over here.
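The decorator-plus-queue mechanism just described might be sketched roughly like this; it is a simplified stand-in (the prototype's real place order also persists to the database), and all names here are illustrative.

```python
import functools
from collections import defaultdict

# One queue entry per stock: ([buy orders], [sell orders]),
# mirroring the map-of-two-lists structure described above.
order_queue = defaultdict(lambda: ([], []))

def observed_by_queue(func):
    """Decorator: after an order is placed, push it onto the orders queue."""
    @functools.wraps(func)
    def wrapper(order):
        placed = func(order)          # run the real place-order logic first
        buys, sells = order_queue[placed["stock_code"]]
        (buys if placed["trade_type"] == "BID" else sells).append(placed)
        return placed                 # a real version would now invoke the matcher
    return wrapper

@observed_by_queue
def place_order(order):
    # Stand-in for the real service: enrich the order and "persist" it.
    order["status"] = "PLACED"
    return order

place_order({"trade_type": "BID", "stock_code": "GOOG", "quantity": 10})
```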
It finds matches between the different orders that have been placed so far. This happens as soon as an order is placed, instantaneously, because we do not want to wait; if we wait even for a short interval, it's quite possible that we might lose a potential trade. That is why it all happens instantaneously. So, this find matches method is basically the heart of the matching engine, the algorithm which goes through all the bid orders and the ask orders and tries to find a match. I will not get too much into its details; we already know what the requirements are, so we can understand how it operates. Once it finds matches, a match list is built for all the matching orders. That match list is then scanned and trades are created: we can see here that for each match in the match list, a trade is created by calling the create trade method. Again, it is all very object-oriented. As I said earlier, there are classes that represent each and every entity in the matching engine, and everything gets persisted into the database. After a trade is created, we need to update the quantity in the original order, because now some stocks have moved from customer one to customer two, and the order has to be updated to reflect that fact. So, that's it on the matching engine. Now, with respect to the assignment that you have to do: what I've developed here is a simple program, but for the assignment, it is not necessary for you to develop a RESTful service. It is not necessary for you to even persist the data into a database; all that can be mocked. What we are expecting is that you will develop a program that is callable from the command line.
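To make the matching step concrete, the find-matches loop described earlier might look roughly like this for limit orders only. It is a simplification of the prototype's real method: partial-fill flavors, market orders, and database persistence are left out, and the dict field names are assumptions.

```python
def find_matches(buys, sells):
    """Match limit bid orders against limit ask orders for one stock.

    A trade happens whenever a bid's limit >= an ask's limit, and (as in
    the demo) it executes at the ask price, so the buyer benefits.
    Orders are dicts with customer_id, quantity, price_limit, order_id.
    """
    trades = []
    for bid in buys:
        for ask in sells:
            if bid["quantity"] == 0 or ask["quantity"] == 0:
                continue  # this order is already fully filled
            if bid["price_limit"] < ask["price_limit"]:
                continue  # prices do not cross; no trade
            qty = min(bid["quantity"], ask["quantity"])
            trades.append({
                "buy_customer_id": bid["customer_id"],
                "sell_customer_id": ask["customer_id"],
                "quantity": qty,
                "price": ask["price_limit"],   # the seller's minimum
                "buy_order_id": bid["order_id"],
                "sell_order_id": ask["order_id"],
            })
            # Update the pending quantities on the original orders.
            bid["quantity"] -= qty
            ask["quantity"] -= qty
    return trades
```

With the demo's two orders (a bid for 10 at 18 and an ask for 50 at 17), this yields one trade of 10 shares at 17, leaving 40 shares pending on the ask.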
It will accept an input file path and an output file path. The input file will be the comma-separated file containing the different orders. I'll just go through a sample input file, which I have here. This is a sample input file, a CSV file. Again, we can see that there are different orders being placed here: four bid orders and four ask orders. Once these orders are placed, the program is expected to find matches between them and write the trades to the output file. I'll show a sample output file as well. We can see that for this particular input file, this is the output file, and there are two matches that happened, effectively two trades that were executed. So, again, to stress the specification: what is expected of all of you is that you develop a main program, along with as many other files as needed that are callable from that main program, to accept the orders and do the matching. I will just run through this program as well to show how it works. It is doing nothing but calling the order service that we saw earlier through the RESTful API. The only difference is that it calls the order service, gets the matches once the orders are placed, and then writes them to an output file. Just give me a minute while I restart this. While that is restarting, I wanted to show some more things. This is the database where it is all getting persisted, the SQLite database, and we have also enabled some logging. As Archana mentioned during our previous session, it is extremely important that we do a lot of logging in our code so we can troubleshoot subsequently. So, this is where it all gets logged. We are using a module called logging, which is available in Python; I'll show that as well.
So, this is the logging module. It's as simple as that: we just do a config, specifying the path of the log file and the level of the logging. The lowest, most granular level is debug; of course, we can raise it when we move to higher environments. And this is how we do the actual logging: we just say logging.debug and whatever we want to log. Okay, so the console has come up again, the kernel has come up, and now we can start running the main program. Before that, I will just clean up the database to start afresh. This is how I'm cleaning it up: basically, we drop all the objects in the database and then create them again. So, now the database has been cleaned up and we can start the program. Ah, a syntax error; like we say, bugs can creep in anytime. I'm just commenting out the problem lines; they are basically just print statements. Okay, so the program has run, and it has created the output file based on the input file. We can see that the output file was created at 5.7 PM. Again, for this input file, these are the two trades that went through. So, I think we are done with the demonstration of the prototype, and we will now open it up for any questions that you might have on this application and on the case study that all of you have to work on. With this, we will open the floor for Q&A. Thanks so much for that, Saurabh. Is there a hand raise? No hand raises yet. That means either everybody has understood or nobody has understood; both of them are very dangerous. I think we can go college by college now, since there are no questions. DQTS College, any questions from your end? Okay, we have some questions from Walton College. Hello. Hi, we can hear you. Can you hold the mic a little bit closer? Yeah, great. How to achieve synchronization when we are buying a share?
Hello. Yeah, much better. Audible? Yes. My question is, how to achieve synchronization when we are buying a share and its value frequently changes? And its value frequently changes. I think we have lost the battery on your mic. Okay, so what I was saying was that since Python has this concept called the global interpreter lock, it never allows multiple threads to operate at the same time, because there can be contentions and so on. So if you have a multi-threaded application, you can be assured that at any given point in time, only one thread will actually be operating on your shared resource. Since the order queue that we saw is more or less a singleton, it's a shared resource; there is only ever going to be one copy of that order queue in the entire application. And even if multiple threads are trying to vie or compete for access to that order queue, only one thread will ever get access to it. Now, in computer science, there are different ways you can achieve synchronization over frequently changing values, and one of the popular ways is locking. So whenever you are about to do a match, you can still have new orders coming in and old orders getting matched or traded, but the matching engine will always work with some specific data, and that data will be locked. That means that once an order has entered the matching engine, the customer will not be allowed to change the quantity or the price and so on. The only exit from that condition is a proper trade after a match has happened. In environments where this global interpreter lock is not there, for example in Java and so on, you'll have to do this locking manually; you'll have to write code for it. The other way around locking is to use something called STM, which stands for software transactional memory.
There, just like you do transactions in a database, where you can do commits and rollbacks and so on, the memory that your software or matching engine is actually operating on can be made transactional. So either the entire match happens and gets into memory, or none of it does. Software transactional memory is very heavily researched in computer science, and there are programming languages, like Haskell, that support it. But in other languages, things are a bit dicey; support may or may not be there. Next question: what are the factors behind stock market failure? So, what do you mean by stock market failure? See, from a failure standpoint, failures can happen for many reasons. Some reasons are logical, some are technological. From a technology standpoint, failures can happen because, let's say, the power goes, or even the backup power generator fails, or the network is down. Any kind of failure that can happen in a real-life computer system, the same kinds of failures can happen, at many levels, for any matching engine or any stock market. Having said that, there are also certain failures that happen for logical reasons. Every stock market has something called a circuit breaker. If the price deviates very badly, if the market is falling very sharply in a short time, they understand that something is wrong and it needs to be stopped. So that acts like a brake: before the losses spread far and wide because of some bug, or some faulty trading algorithm somebody has implemented, the market applies the brakes, and you can also look at that as a kind of failure.
These are the kinds of failures we can see in real life, and these events have happened. If you follow the NSE or BSE, or any of the popular marketplaces in India, maybe once every three, four, or five years you'll find that the circuit breaker was applied because the price was going down too sharply. Next question: is it possible to make a hybrid combination of an SDLC model and agile? By SDLC model, which model do you mean? Waterfall and agile, or spiral and agile? Yes. Or scrum? You can actually do that. For example, within agile methodologies, you can be inside a scrum; you can follow the scrum methodology and, let's say, you don't have much time to test. As a hypothetical example, let's say testers are more expensive than developers in the market, and your project manager has not given you many testers, but developers are cheap. So can we test the software while it is being developed? You can actually combine scrum and XP. In XP, you use pair programming. Just like we are here, paired up as buddies: if he makes a mistake, I correct him; if I make a mistake, he corrects me. So while the software is actually being written, it goes through a set of two brains, and if both of us are really working, and not whiling away our time, the bugs will get reduced. So that's one way: you have combined scrum and XP while developing. It's up to you; everything depends on the kind of resources you have at hand and what kind of timeline you're looking at. Even when you're doing your own graduation or engineering projects, you can pair up with someone, and it's a good practice, because not only does the number of bugs go down, but sometimes your friend might know something that you don't, and when he writes code about it and you ask, what's going on?
He explains something that you didn't know, and you get to learn as well. So pairing is something that you can put into any kind of SDLC pattern. Yes, next question: a matching engine using Python versus Java, which is more efficient? This is a tricky question. If the code is the same, I would say that the Python engine will be much more predictable, because Python has the GIL. If you just translated all the code that we have shown you into Java, Java doesn't have a GIL; you might find many threads running in parallel, and our code does not have locking, so things might go bad in Java. In Java, we would have to write more code for locking, and for certain assertions to make sure that not too many threads are running at the same time for the same process. So the Java code might be lengthier; but when it comes to speed, we can't say, because we have not tested it. People say that Java applications run as fast as C, but having said that, it also depends on how you have written the code and which algorithms you have used. For example, we have used queues. Queues are simple to understand, but they run slower; if you use trees, the matching engine can be substantially faster, and so on. Just to add to what Amal said: we should not look at performance only from the point of view of the running application. You should also look at the productivity that you get from the choice of language that you make. If you go with Python, you might get more productivity and deliver the application earlier than you would deliver the same thing in Java. So that is also a consideration that you should keep in mind.
And of course, since hardware is not too expensive these days, you can also solve the problem by having load-balanced components in your application and addressing performance that way. That is not to say we should develop code that underperforms; I am in no way advocating that. But again, just to give you a different perspective from Amal's: performance should not be looked at only from the point of view of the running application. Next question. My question is on... can you hold the mic a bit closer? If we make a machine learning model for stock trading, then what are the main factors... Did you get the question, sir? Can you hold the mic closer? See, hold the mic like this, a little closer. Indian Idol style. My point is: which factors should be considered the most in the machine learning process for a stock trading app? Okay, so he's trying to combine machine learning with stock trading. That's a brilliant question. I can only give you a generic background, because I'm not allowed to get into specifics. You must have heard of something called HFT, or high-frequency trading, where machine learning algorithms look at the patterns in the stock market and figure out what is a profitable trade. They will buy stuff, sell stuff, and hedge stuff, maybe in other kinds of markets like derivatives or futures. So yes, you can do machine learning to figure out what the optimal strategy to buy or sell a stock should be. You can make your machine learning algorithms more sophisticated by looking at other stocks in the industry, or other commodities, and so on. So it is possible, but for reasons of time, and other requirements placed on us by our company, we cannot be very specific on this.
But if you research high-frequency trading, machine learning, and making profits on the stock market using machine learning, you'll find a lot of resources. Also, moving forward, if you ever want to do machine learning on stock markets, you are going to need a humongous amount of historical data, data that ranges back 30, 40, 70 years. And once your machine learning model is ready, you will also have to back-test. A back-test asks: if I had run this algorithm, let's say, 10 years ago, would I still have made profits? And let's say the back-test says that you would have made a certain amount of profit; then, if you find the same patterns in today's market that happened 10 years ago, there is a good chance you'll make a profit today as well. So yes, it's possible, and the sky is the limit. You are going to need a lot of data, a lot of data analysis, and a lot of computational power for doing this. And there are companies that do this. Sir, what are the most important factors we should keep in mind when planning that? On the business side, what are the main factors to consider when machine learning algorithms are trading? Okay, so there can be many factors. One of the most important is obviously profit: since you are running a business, you are running it for profit, so you should not go into a loss. And second is the money that you are putting into the stock market to generate that profit; that money should be as little as possible.
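The back-testing idea described above can be sketched in a few lines: replay a candidate rule over historical prices and see what it would have earned. Everything below, the price series and the threshold rule, is a made-up illustration, not a real strategy:

```python
# Hypothetical price history for one stock (made-up numbers).
prices = [100, 98, 97, 101, 105, 103, 99, 104]

cash, shares = 1000.0, 0
for price in prices:
    if shares == 0 and price < 99:      # rule: buy on a dip below 99
        shares = cash // price           # whole shares we can afford
        cash -= shares * price
    elif shares > 0 and price > 103:    # rule: sell on a rise above 103
        cash += shares * price
        shares = 0

# Mark any remaining shares at the last price to value the portfolio.
final_value = cash + shares * prices[-1]
print(final_value)   # above 1000 means the rule was profitable on this history
```

A real back-test would add transaction costs, slippage, and out-of-sample validation, but the replay loop is the core idea.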
And as you know, stock markets are always volatile, and bets can go wrong; some trades can go wrong. So you also have to minimize risk. Say that for generating X amount of profit, one algorithm spends one lakh rupees. And say a second algorithm generates the same X amount of profit while spending 80,000 rupees. Then the second algorithm is obviously better. But then again, if the first algorithm is less risky, it will probably be a tie, and you'll have to figure out what to use when. So there is no single answer to your question, but the primary goal should always be profit, at lower investment, at lower risk. Good evening, sir. Good evening. Sir, my question is from cloud computing. In disposability, what is the difference between a graceful shutdown and a forced shutdown? Okay, so think of a mechanical device, say a scooty you ride to college. Every time you park your bike, you release the accelerator and turn the key, and the bike can be started again. You can think of that as a graceful shutdown. And what's a forced shutdown? Unfortunately, you land up in an accident, or something goes wrong, or your petrol runs out. The machine is going to shut down, but it's not a graceful shutdown; it is a shutdown because of failure. So keep these two examples in mind, and let's take them to the software world. Say that in our matching engine the trading day is over, so I want to shut the matching engine down so that no more orders are accepted. That will be a graceful shutdown: I will make sure that all my order queues are completely empty.
All the matched trades have been committed to disk, there is nothing stray in memory, and then I shut down. But say my system fails, or I find out that there's a bug in my matching engine and millions of orders are getting matched wrongly; then I'd want to kill it. That will be a forced shutdown. I will take some time to figure out how to fix those bugs and get it back up again, but at that point it is a forced shutdown. Thank you, sir. IS University, Jaipur. Hello, sir. Namaskar. So my question is: what is the difference between cloud computing security and fog computing security? Fog computing? I'm sorry, I have not heard much about fog computing. The question is foggy, of course. So, put your question on the forums and we'll ask around and figure out who inside the company can answer it best. The next question is: what is the difference between Scrum and Agile? Okay. So Agile you can think of as a generic term, and Scrum is one way to do Agile, just like XP is another way to do Agile and Kanban is another way to do Agile. If you know object-oriented programming, you can look at Agile as the class and Scrum, XP, and the rest as objects: different instances of the Agile methodology. So there are different ways of implementing the Agile methodology. Agile is the umbrella term, and how you implement it in your own organization depends on the resources, the cost constraints, and the staffing constraints you might have. You might follow XP, you might follow Scrum, you might follow Kanban, you might follow FDD.
Depending on your constraints, you will realize the concepts of Agile using a process that is more attuned to your organization. So one is an umbrella term and one is a specific term. Namaskar, sir. Which Agile methodology is popular other than Scrum? Okay. So there's this trend in the industry that every few years something new gets popular. For some years XP was popular, then for some years Scrum, then for some years Kanban. As of now, Scrum and Kanban are the two most popular models, and they can be mixed and matched; only, in the Kanban style you don't have definite sprints, and things get over when they get over. But Scrum and Kanban are the popular ones. Thank you, sir. Hello, sir. In the context of cloud computing, what are horizontal and vertical scalability? Okay, I think we should use the notepad. So, vertical scalability: let's say you have a machine, a computer, running some software applications. And let's say this machine is not able to keep up with the load; many applications are running. So what do you do? You buy a bigger machine. If you have computer-gaming friends, they will understand this problem very well: new versions of games keep coming out, they require a better graphics card, more RAM, more CPU, and you keep buying bigger and bigger machines. At some point you realize, boss, I cannot keep on improving this one machine. So what you do instead is have many small machines, and each of them will run something, some program or the other. It could be the same program or different programs; in a microservices architecture, they could be different programs. And this is what you call horizontal scaling. So basically, in vertical scaling you focus on one machine and increase its power as your demand increases, and in horizontal scaling, as your demand increases, you focus on increasing the number of machines.
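The horizontal side of this contrast can be sketched: many boxes behind a load balancer that spreads requests over the fleet. A minimal round-robin balancer in Python, where the machine names and request count are made-up placeholders:

```python
from itertools import cycle
from collections import Counter

machines = ["box-1", "box-2", "box-3"]   # horizontally scaled fleet
balancer = cycle(machines)                # naive round-robin load balancer

# Distribute six incoming requests across the fleet, one at a time.
assignments = [next(balancer) for _ in range(6)]
print(Counter(assignments))               # each box serves an equal share
```

If one box dies, removing it from `machines` still leaves the rest serving all the traffic, which is the resiliency argument for horizontal scaling.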
So this could be generic hardware. Vertical scaling was the era of mainframes, and horizontal scaling is our current x86 era. That's what you do on the cloud: if you find that your load is increasing and you're not getting enough TPS, or your customers are not getting served as they were, you can just start another box, put the same kind of software on it, and connect them all to some kind of load balancer, which will distribute the load to the right box. And you can scale horizontally like this. The advantage of horizontal scalability is that if one of my machines dies, or something bad happens, or a disk gets corrupted, you still have so many other machines to service the load. But in the case of vertical scalability, if that one machine dies, nothing works; not even 1% of your clients get serviced. So from a resiliency point of view, horizontal scalability is good and vertical scalability is not so good. Generally in a cloud environment, if your demand is low, you start with one machine, and when your customers keep growing, you can just replicate that machine and keep serving them. That's horizontal scalability. And this happens in nature also. Dinosaurs became extinct because they were really big: they had to eat a lot, their requirements were huge, and after that meteorite strike, or whatever the current theory on dinosaur extinction is, they went extinct. But ants were around when dinosaurs were around, and ants are small, their requirements are less, and ants are still around while dinosaurs are extinct. So this model works in nature too: the most resilient species are tiny, they make group decisions, and they are basically horizontally scalable. Hope that answers your question. Thank you, sir. Good evening, sir.
My question is: what is the difference between SDLC and STLC? So you are saying software development life cycle and software testing life cycle, right? Okay. So it's a life cycle of life cycles; how do I put it? When you develop software, the software testing life cycle can be a part of the development life cycle. Once the code is developed, or as the code is being developed, you will test that software. You will probably find some bugs, and once the bugs are found, you will feed them back to the developers: we tested it and we got so many bugs, please sort them out. Then they will follow the SDLC again, and after they have followed it again, the tester will follow his STLC. So if we can come back to the whiteboard: my drawing is very bad, so you need to forgive me for it; this is not my daily job. Let's say that this is an SDLC cycle where you plan, build, test, and maintain, just like Archana showed. So this is your SDLC. Within your test phase, you can have another small cycle like this: whenever you are in the testing phase, you will test, then maybe you will find bugs, and those bugs can then get reported and sorted out in the next version or some other version. So you can think of it as a cycle of cycles. And if you already have pre-built software, software that is already developed, let's say you have Windows and Microsoft hands it to you for testing, then you can just follow this testing cycle on its own and then go and report: these are the buggy areas of the software, please do not use them for now, they will get fixed in the next release, and so on. So each cycle can be followed individually, and they can also be followed as parts of each other.
Another dimension to what Amol said: in addition to the iterative nature of the software testing life cycle, there can also be other components in it, like various types of testing. Once the code is delivered to the testing team, they would do some sort of module testing, then a full-blown system testing, and there could also be integration testing, where you integrate with a lot of other systems and make sure the software also works in conjunction with them. So that is another dimension. The iterative nature is always there within the software testing life cycle, but these are other things that can be introduced into it. Yes. So what Saurabh is saying is that in this test phase, you can have unit testing using TDD, integration testing, system testing, performance testing, security testing, and so on. This test phase could be any of these. And the next question is: what is the difference between the waterfall and V-models? Waterfall and? V-model. Let's get the expert here. Ashwin, just give me two minutes. Hi. Yes, so see, the V-model is an extension of waterfall. Waterfall follows very sequential phases, correct? But with the V-model, we start with requirements, then design, then coding, and after coding come the test and release phases; so the phases form a V. If I put it on the whiteboard (how do I rub this?): this is requirements. When we do the requirements phase, we also work with the customer to understand the acceptance criteria, which falls on the other side of the V. So one arm goes from plan and design down to code, the code lies at the bottom, and the other arm is the testing cycle.
So against this piece sits the test plan, and against requirements what we would have is the acceptance criteria: as you are carving out the requirements, also figure out what your acceptance criteria for those requirements are. The next phase is design. Design can have high-level and low-level parts, but for the sake of simplicity: to check whether the design is correct or not, we would have integration tests here. Then comes the code, where we are also looking at unit tests. So at the module coding stage, we carry out the unit tests; when we are doing the design, we figure out the integration or business tests we would want to do; and for the requirements, we carve out the acceptance criteria, which would be end-to-end. So these phases go hand in hand: as you do requirements and design, the corresponding test and review planning goes hand in hand with them, and in the middle of the V you do the coding. That is how the V-model works. But with waterfall, everything is very sequential: you do requirements, followed by design, and you are not able to revisit the stages; it goes on like that through design and implementation, your code, and so forth. Is this answering your question? Yes? Okay. So just before we take the next question: there was a question on fog computing, I think from the same institute. The question was, what is the difference between cloud computing and fog computing? So fog computing is nothing but an extension of cloud computing. It does not replace cloud computing; it leverages it. Now, let me give an example.
Let's say we are trying to do certain analytics based on sensors placed on our periphery to tell who came in, who went out, et cetera. To give a very basic example, say it's a vehicle tracking system: vehicles are passing certain sensors, and we are trying to capture the data and generate analytics. And this is a cloud-based application. So where is the data being captured? On the ground, with vehicles moving around past the sensors. Now, if I want to do analytics with pure cloud computing, this data needs to be sent all the way to the cloud instances, and the computation needs to happen in the cloud. The cloud applications themselves may be very far away from the origin, the source, of the data. So with just cloud computing, the analytics would happen, but with some latency, because the data travels from its origin all the way up to wherever the applications are hosted in whichever cloud environment, the computation happens there, and the response comes back. The extension to this is fog computing, where we say certain types of computation can happen at the source while still leveraging cloud computing capability. So it's an extension to cloud where we may have certain devices, like smart devices or edge computing devices, which can do some basic, short-term analytic computation at the source. When they need long-term compute capability, they still pass the data on to the cloud to do a wider computation and leverage the wider compute capabilities there. But short-term analytics can be carried out at the source, and that is fog computing, in other words also called edge computing. But it is an extension to cloud computing. Does that help? We cannot hear you, sorry.
I think we have lost the voice. Could you please repeat your question? Sorry, I think the question is on cloud computing, but we are not able to hear; maybe the battery in the mic is low, which is why the audio is breaking. We will get back to your institute; we'll go to the other institute and come back. So this is Sagar Institute. Good evening, sir. Good evening. Sir, I want to ask about the matching engine code you showed us. Is it audible, sir? Yeah, it's audible. Sir, if we use the Python 3.6 version and a MySQL database, what changes will that matching engine code need? So you want to use a MySQL database, that's what you're asking? Yes. Just a minute, I'm going back to the code. Okay, I hope you can see the screen now. So your question was: if you want to use a MySQL engine, what change do you need to make? The only change is that in this create_engine call, you would need to give a MySQL URL instead of a SQLite URL. That is the only change. Each database has its own way of specifying the URL for communicating with it, so in the case of a MySQL database, you just have to replace this line with the corresponding MySQL URL. You might also want to make sure that the database exists beforehand, because creating it requires some admin privileges. And to connect to that database, you will also need a username and a password; in that connection string you'll have to specify the host name and port, or a particular socket, through which to connect to MySQL.
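The change described here can be shown concretely. These connection strings follow SQLAlchemy's URL format; the driver (`pymysql`), host, credentials, and database name below are placeholder assumptions, not values from the course code:

```python
# File-based SQLite, as used in the course: just a path, no credentials.
sqlite_url = "sqlite:///matching_engine.db"

# MySQL equivalent: dialect+driver, then user, password, host, database.
mysql_url = "mysql+pymysql://user:secret@localhost:3306/matching_engine"

# With SQLAlchemy installed, only this one line would change, e.g.:
#   from sqlalchemy import create_engine
#   engine = create_engine(mysql_url)
print(sqlite_url)
print(mysql_url)
```

The rest of the ORM code stays the same, since SQLAlchemy abstracts the database behind the engine.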
On SQLite, you just need access to the file. Hello. Is that okay? Sir, I want to ask: is any connector driver required for MySQL to connect to our application? Yes, you would need the MySQL driver that is provided for MySQL. And when you use SQLAlchemy, SQLAlchemy already does database abstraction for many popular databases and supports their drivers out of the box. So beyond the driver itself, I don't think there will be much additional software for you to install if you want to use Postgres or MySQL or something of that sort. Sir, and I also want to ask about the cloud: how is the cloud secure? So, like Ashwin mentioned, cloud security depends on a lot of factors. First of all, the place where you are hosting your application has to be secure. Second, your application itself has to be inherently secure. And what is being seen in practical, real-life examples is that your application's security matters more than your cloud provider's security. A simpler example: think of buying a house in a building, and look at the building as the cloud. Whatever security the building has, your house will still have its own lock and key. So regardless of what kind of setup the provider has, you yourself have to be secure from your application development and application hosting point of view. Building security, your cloud platform's security, is the later part; the most important thing to consider is how your application is inherently secure. But if the cloud provider gives us security, then why do we need to secure our application? Exactly. So one is security that you can control; the other is security that you cannot control. What if the cloud provider's security systems fail? Then what will you do?
Then your application is wide open, right? It's like saying, in my building complex the security guard is downstairs, so I won't put a lock on my door. Then what stops your neighbor from stealing? The same is possible with applications: neighboring applications, applications on the same server, could steal your data and so on. That's why we keep saying that your application has to be inherently secure. Cloud security and all these other aspects you can look at once your application is secure; the primary lookout for you as a developer is that the application has to be secure. Then you can look at secure cloud hosting facilities and so forth. Thank you, sir. Okay, any more questions? We have one from Walchand as well. Okay, and we are running out of mic batteries. Yeah, Walchand, go ahead. Why was MongoDB not introduced in this course? Okay, so MongoDB is a document-oriented database system, and the rules for using document-oriented database systems are a bit different. Document-oriented databases do not lend themselves very nicely to applications where ordering is important, or a clean query syntax is important, or where your data is very structured. In our stock trading case study, the data is very structured: we know the data is going to be of the same format each and every time. So the RDBMS concept was very attractive and superior here. We could have introduced MongoDB, but it would have been material with no relevance to your case study. So we did not introduce MongoDB, but as far as I am aware, there are other courses that will soon be introduced by IIT professors and faculty. One of them will be about machine learning, and one will be about NoSQL databases like Cassandra and MongoDB.
So just wait around and look out for the notifications; I'm sure you'll be able to find that course. Good evening, sir. What approach do developers follow when requirements change continuously in an Agile methodology? Okay. So, requirements changing continuously: there is a degree to which you can tolerate changing requirements; you cannot develop anything in a state of flux. So first of all, the primary goal should be to get the changing requirements under some kind of control, to control the number of changes. That should be your first agenda. Second is how you pace those changes, how you sequence them across the various sprints. You cannot make a foundational change late in your sprint cycle. Let's say your client comes and says, I need this matching engine, please deliver it as fast as possible, and then Saurabh over here runs off, starts his sprints, and starts developing his matching engine in Python. And on the day it is ready to deliver, the client comes and says, no, we want Java. You cannot tolerate those kinds of foundational changes. So as much as possible, try to get the requirements fixed, and once those things are fixed, build on top of that. If changes happen, they have to be controlled and introduced in the right cycles so that they can be made. But coming back to your question: when the requirements are changing rapidly, it's not a good place to be in, and you'll have to educate your client and make him understand that this is not good: it's not only us, no other software developer will like to work with you, so let's do it in a proper manner. Sir, you just discussed fog computing and said it is used for time-sensitive applications. Apart from time-sensitive ones, are there any other applications? Let me call Ashwin back. So, on fog computing:
the first point was around latency, which we called out. But the other piece is compute: if you need high compute capability, your smart devices on the edge of the network won't have it, and for that you would still need to leverage the cloud computing applications. So those are the two things I can call out very clearly when you are deciding whether your fog computing capability will suffice or not. But fog computing devices are a bridge between two networks: they sit on the edge of the network, much closer to the source of the data than the cloud is, which is why they can deliver that short-term analytics very quickly to whoever needs it on the ground rather than passing everything on to the cloud. Has any research happened on the scheduling part of fog computing? I don't have a lot of details, but the one thing I do know is that Cisco has been talking widely about fog computing and has come out with some devices for it, so you may want to look up Cisco's smart edge devices on the internet. Thank you. Okay, thank you. What precautions are taken against system failure in a matching engine? So, as I mentioned earlier, this is just a prototype that we have developed, so we have not really thought too much about taking a lot of precautions. But of course, if you have to develop a real matching engine, then yes, you need to think about those aspects as well. One of the most important things, as was mentioned in an earlier question, is ensuring that the order queue does not get corrupted, because multiple threads might be coming in and placing orders while the matching engine is matching based on the orders that have been placed.
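One minimal safeguard for this situation, several threads placing orders while the matcher reads them, is Python's `queue.Queue`, which takes an internal lock on every put and get. The order tuples below are made-up placeholders, not the course's order class:

```python
import queue
import threading

orders = queue.Queue()   # internally locked: safe for many producer threads

def place_orders(side, prices):
    # Each producer thread enqueues its orders; put() is thread-safe.
    for p in prices:
        orders.put((side, p))

producers = [
    threading.Thread(target=place_orders, args=("BUY", [100.0, 101.0])),
    threading.Thread(target=place_orders, args=("SELL", [102.0, 99.0])),
]
for t in producers:
    t.start()
for t in producers:
    t.join()

print(orders.qsize())   # all four orders arrived without corruption
```

A matcher thread would then drain the same queue with `orders.get()`, relying on the same internal lock.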
So you have to have some sort of a critical section in place which ensures that access to the orders queue is serialized, so that multiple threads do not operate on it at the same time. That is one of the precautions that should be taken in a matching engine. Then, of course, there are peripheral things which need to be handled, like authentication and security, which are true for any other application as well. And if we talk about a production-grade matching engine, performance also plays a major role: we have to make sure that we have sized the hardware appropriately and can handle the incoming load, because we may be talking about thousands or tens of thousands of orders coming in every day, and the hardware must withstand that kind of load. One more thing to look out for is proper logging, so that if some error creeps in, we can easily react to it, troubleshoot the problem, and get the business up and running again. So these are some of the precautions that can be taken. IS University. What is a hypervisor in cloud computing? Okay, I think we do not need Ashwin for this; this one I can answer on my own. So if you look at how the system is built, first of all you have your hardware: your CPU, RAM, disk, network, and so on. Now on this, you want to host many different small operating systems. So what you use is some kind of a layer, which you call the hypervisor. And on top of that, you can put Linux in one container and Windows in another. So the job of the hypervisor is threefold. One is separation.
One container should not be able to figure out what is happening in another container unless that is intended. Second, divide resources fairly. Say my total memory is 20 GB, and I give 8 GB to one guest and 10 GB to another. The remainder could be just the 2 GB left over when everything is fairly divided, or it could be more: if one of those machines is down, its share frees up, and if both machines are down, the entire 20 GB of memory can be given elsewhere, and so on. So I can size each guest up or down depending on the contention. That is dividing resources. And the third job is to be fast and reliable: if your hypervisor is slow, then everything running on top of it will slow down. So basically, you can think of the hypervisor as an operating system for operating systems. The hypervisor looks at your hardware, chunks it up according to how you have configured it, virtualizes your base hardware, and makes it appear like two, three, four, five different computers, according to the guest systems you are going to run on your machine. Namaskar, sir. My question is: what is the role of the development team in Scrum? Okay. Sir, do you want to take this? Okay. So of course, one role of the development team is to keep track of the progress being made as part of the Scrum cycle. As Ashwin mentioned earlier, in Scrum we have the different sprints that we are running, and there is a Scrum master, who holds meetings, maybe every day or once every two days, to understand how progress is being made.
So the primary responsibility of the developers is, of course, to develop the code based on the requirements taken from the customer. But within the Scrum ambit, the developers have multiple roles: one is to make sure the code gets developed, and another is to report their progress to the Scrum master reliably, so that the Scrum master can decide whether we are going in the right direction or whether some course correction needs to be made. So it's basically two-fold: they do the day-to-day work, which is their bread and butter, and they also report correct progress to the Scrum master so that any course correction can be done. Do you want to add to that? Thank you, sir. Namaskar, sir, my question is: how do neural networks help in pattern recognition applications? So, we use neural networks specifically for matching patterns. The relationship between neural networks and pattern recognition is like that of the human body and the eyes: they go together, they are inseparable. The reason we use neural networks is to do pattern recognition, be it patterns in data, in image files, in video, in audio, you name it. The primary purpose of using neural nets is pattern recognition. Now, the other part of the question is how neural networks understand and recognize patterns. Unfortunately, I am not an expert on that, and while there is a good amount of research out there, that research is still primitive. Even for how and why neural networks work, there is no complete mathematical or scientific description of why a neural network will or will not work. All we know is that it just works: that X somehow maps to Y, that X recognizes a pattern using neural networks.
There aren't any fully elucidating papers, or a decent shared understanding that we as a community, as a species, have arrived at in this area. So as of now, how neural networks work and how they actually see patterns is a hot topic of research. There have been many guesses, but there is no final answer. Thank you, sir. No more questions, sir. Thank you. So, any more questions from anywhere else? We're almost out of time; we can take one last question. I take it as a no. And thank you very much. Thank you for being with us, for being patient with us, for putting in the hours to come on every weekend outside your study schedule, and for listening to us and asking us questions. It has been a great joy for all of us here to help you in your growth and to make you more aware of what's happening outside. We are very honored by this privilege, and that's all you'll be hearing from us for some time, I guess. Any final words? No, thanks a lot. Thanks, everyone. Thank you.