My name is Narsimha Muthi. I am the co-founder and vice president at ZeOmega, where I lead product management and development. We will be talking about how we built an enterprise product with Python as the language for the business logic. ZeOmega started in 2002, using Python as its primary language from the beginning. We started with a small product with Python as the primary language. From 2002 to 2006, that small Python application worked for us and served our business needs. From 2007 to 2012, we extended the same Python web application, adding a load balancer and memcached, and we were able to serve mid-size business needs. From 2013, we had grown so much, and our business needs had extended, that we needed to scale and create an enterprise product.

So, looking at 2002 to 2006: we started small. Our web application was Zope, which gave us the view, model, and controller on the server. The controller logic we wrote in Python, and the backend was an RDBMS; it connects to any RDBMS. On the front end we had Apache. This was our stack from 2002 until 2006. Today you can start small without Zope: you can also start with Django, Pyramid, or CherryPy. What did starting small give us? We were able to successfully support 100 to 200 concurrent users, using server-side Zope page templates, server-side Z SQL methods, and relying heavily on the Python business logic. We were able to serve 100 to 200 users on three mid-size servers. That is how we sustained the business from 2002 to 2006, and then the business need grew: we needed to extend our product to serve more than 200 concurrent users, which meant a need for 400 to 600 concurrent users. How did we go about it? We could not really start fresh; we needed to extend the existing web application, and we also had a need for ETL without buying a commercial product.
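As a rough sketch of this "start small" shape (not ZeOmega's actual code; the function and field names below are illustrative), the whole product is one web application with the controller and business logic in plain Python. A minimal WSGI version, which any of Zope, Django, Pyramid, or CherryPy would manage for you, might look like:

```python
# "Start small" sketch: one WSGI app, business logic in plain Python.
# Names here are illustrative, not the real product's.
import json

def business_logic(params):
    # Stand-in for the Python business layer that stays stable for years.
    return {"greeting": "hello", "user": params.get("user", "anonymous")}

def app(environ, start_response):
    # Controller: parse the request, call the business layer, render JSON.
    user = environ.get("QUERY_STRING", "").replace("user=", "") or "anonymous"
    body = json.dumps(business_logic({"user": user})).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve, e.g.:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8080, app).serve_forever()
```

The point is only that the business layer is ordinary Python functions, so it can outlive any particular web framework in front of it.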
We wrote our own ETL product using the same Python as the core primary language. We kept the same Zope application and Zope server. We wrote our own workflow engine in Python, and we wrote our own rules engine in Python. The rules engine is heavy: it has to support synchronized events, actions, and criteria, and the logic is quite complex. We extended the same application with memcached so that we could serve more requests and more users. We used the same RDBMS layer, and we can support any RDBMS; we deploy with whatever the customer needs. It can be MS SQL Server, Oracle, MySQL, or PostgreSQL. The choice of RDBMS is the client's, but we support all of them. We also added a load balancer in front of the web application, which had started with Zope; we used both Pound and Apache in our deployments to serve the 600 concurrent users. The ETL is mostly Python code: extract and transform are entirely in Python, and for the loader we used SQLAlchemy. That is where we added an ORM, again Python based. We also used memcached. So the stack became Apache, Python, the same Zope, plus SQLAlchemy and memcached, with Python as the core language for the workflow and rules engines. By extending from small to mid-size, we supported 400 to 600 concurrent users just by adding memcached, the ETL, and the rules engine, behind the load balancer. We used the same server-side Zope page templates and Z SQL methods, and the same business logic in Python. We developed our own workflow, rules engine, and ETL process in Python using SQLAlchemy. To sustain the business at 600 concurrent users, we also converted the front end to JavaScript with Ajax calls using YUI, which was very popular at that time.
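As a small sketch of the extract/transform/load idea (an assumption on my part: the table and field names below are made up, and SQLite is used only so the sketch is self-contained), extract and transform are plain Python, and the load step goes through SQLAlchemy so the same code can run against MS SQL Server, Oracle, MySQL, or PostgreSQL:

```python
# ETL sketch: plain-Python extract/transform, SQLAlchemy Core loader.
# Table/field names are illustrative; SQLite keeps the example runnable.
from sqlalchemy import (Column, Float, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()
claims = Table("claims", metadata,
               Column("id", Integer, primary_key=True),
               Column("member", String(64)),
               Column("amount", Float))

def extract():
    # In production this would read CSV/XML feeds; here, inline rows.
    return [{"member": " ALICE ", "amount": "120.50"},
            {"member": "bob", "amount": "80"}]

def transform(rows):
    # Normalization rules written in Python, like the rest of the logic.
    return [{"member": r["member"].strip().title(),
             "amount": float(r["amount"])} for r in rows]

def load(engine, rows):
    # SQLAlchemy makes this step portable across RDBMS backends.
    with engine.begin() as conn:
        conn.execute(claims.insert(), rows)

engine = create_engine("sqlite://")
metadata.create_all(engine)
load(engine, transform(extract()))
```

Because only the engine URL changes per backend, the client's choice of RDBMS does not touch the transform logic.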
We also added one more server for the ETL process. That brings us to where we are today: since 2013 we have been building a large-scale Python application, and we will continue to extend this stack to support a minimum of 30,000 concurrent users. So far we have benchmarked and tested 3,000 concurrent users with the same model. The core business logic is still the same Python: the same logic, rules engine, and workflow, everything is the same. On the front, we added a middle layer that acts as the service bus. We adopted the open-source WSO2 products: the Enterprise Service Bus is WSO2, and the API Manager is WSO2. That lets us take any input, from mobile, from real-time interfaces such as APIs with JSON, XML, or SOAP, and from batch processes. We are able to consolidate and convert all of these inputs, XML, CSV, JSON, and so on; in the middle layer we convert everything to one format, and that format is JSON. We used RabbitMQ to scale both synchronous and asynchronous processing. We extended the same business logic using RabbitMQ Python workers, and those can scale however we want. The back end stays the same: the same business logic in Python, the workflow, the rules engine, and the model. The model, which we had written earlier in SQLAlchemy, is where we consolidated down to one model used by both the application and the middle layer. On the front end we used partly Zope and partly Pyramid, but we plan to replace Zope there as well. The technology for web browsers and front ends has changed: we converted the web application front end from server-side Zope page templates to AngularJS, which uses HTML templates and depends heavily on client-side performance. Today there are many modern browsers with the technology to support client-side template languages.
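The RabbitMQ worker pattern described above can be sketched roughly as follows. This is not the production code: the stdlib queue.Queue stands in for RabbitMQ so the sketch runs anywhere, and the rules-engine logic and field names are invented for illustration. In production each worker would be a separate consumer process on a real broker (for example via the pika client):

```python
# Worker-pool sketch: producers put JSON messages on a queue, Python
# workers pull them and run the same business logic. queue.Queue stands
# in for RabbitMQ; the payload fields are illustrative.
import json
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def rules_engine(payload):
    # Stand-in for the Python rules engine / workflow invoked per message.
    return {"id": payload["id"], "approved": payload["amount"] < 100}

def worker():
    while True:
        body = task_queue.get()
        if body is None:          # sentinel: shut this worker down
            task_queue.task_done()
            return
        result = rules_engine(json.loads(body))
        with results_lock:
            results.append(result)
        task_queue.task_done()

# Spin up workers; with RabbitMQ we can add more as load grows.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i, amount in enumerate([50, 250, 99]):
    task_queue.put(json.dumps({"id": i, "amount": amount}))
for _ in threads:
    task_queue.put(None)
task_queue.join()
```

The scaling knob is simply the number of workers consuming the queue, which is what replaced the sequential processing bottleneck.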
It is not just AngularJS; there are many options available in the open-source community. If you look at the middleware and the web application, traffic coming from mobile uses the APIs, which are developed on the same business layer, and our business logic can be extended to third-party external users as APIs. The change here is that it is the same Python logic, rules engine, and workflow; we converted it into RabbitMQ Python workers, and we changed the front end to AngularJS. The middle layer lets us support any format and extend our business logic to external vendors as well.

What did we gain by adding all this technology and the middle layer? Throughout our history we have used the same business layer developed in Python. It is heavy, complex logic written in Python, and we cannot really take the chance of replacing it. We have to continue with the same Python logic we have been writing since 2002. We extended it with Python in the rules engine and workflow, and we are able to deploy the same functionality and reuse the same components across the platform. Using this enterprise deployment, we are able to scale, and we are able to handle security by connecting the web application and the middle layer to an identity server that is common to the entire platform. We do not have one security setup for the web application and another for the middle layer, the API manager and service bus; we use one security layer for all connectivity. We were also able to build a service-oriented architecture, again through the middle layer, which can convert any format to any format. For our core business logic we converted everything to one internal format, JSON, and we are able to scale using RabbitMQ. And we converted to the latest technologies on the client side instead of a server-side template language.
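The "one internal format" idea can be illustrated with a short sketch. An assumption up front: in our stack this conversion happens in the WSO2 middleware (Java), not in Python, and the record and field names below are invented. But the shape of the normalization, any inbound XML, CSV, or JSON becoming one canonical JSON document before it reaches the business logic, looks like this:

```python
# Normalization sketch: XML, CSV, or JSON input becomes one canonical
# internal dict (JSON) before the business logic sees it.
# Field names ("member", "amount") are illustrative.
import csv
import io
import json
import xml.etree.ElementTree as ET

def normalize(raw, fmt):
    """Return the canonical internal record for one inbound document."""
    if fmt == "json":
        rec = json.loads(raw)
    elif fmt == "xml":
        root = ET.fromstring(raw)
        rec = {child.tag: child.text for child in root}
    elif fmt == "csv":
        rec = next(csv.DictReader(io.StringIO(raw)))
    else:
        raise ValueError("unsupported format: %s" % fmt)
    # Canonical shape, whatever the source format was.
    return {"member": rec["member"], "amount": float(rec["amount"])}
```

Because everything downstream consumes the one canonical shape, adding a new inbound format only touches this conversion layer, never the rules engine or workflow.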
People are moving to client-side template languages, and it took us more than a year to convert our server-side templates to AngularJS. We are able to scale with our current deployment at the minimum configuration: tested with four servers, we can support up to 6,000 users. We are confident that we can support 30,000 concurrent users, but that still has to be tested, with the full sizing of hardware resources and a plan for how we grow horizontally and vertically. With the enterprise product, the middle layer, RabbitMQ, and the queues, we are able to scale vertically and horizontally without any issues.

What is ZeOmega's experience with Python? In simple words, we started using Python in 2002 with a small web application. Today ZeOmega has grown to build large-scale enterprise applications and products, and we have grown from 20 Python developers to more than 300 Python developers over the last 14 years. We started small, but we kept Python as the primary language and have added a lot of Python developers over the last couple of years.

Why are we using Python, and who uses Python? We know that it is easy to learn. You have seen how we maintained the same Python logic for more than 10 years: it is easy to maintain. Many of us think Python is just a scripting language; that is not always true. You can build enterprise products using Python. It is also cross-platform; you have seen from our experience that we built an enterprise product with the same Python logic as the primary language. And it has good concurrency support. We all assume Python lacks concurrency support, but we have proven in our product that it supports concurrency. It is all about how you write your code, how you optimize it, and how you deploy and plan your platform around the same Python logic.
It is all about writing good code. Who is using Python? YouTube is heavily developed in Python; I believe around 90% of the code is in Python, and they may have more than 100 Python developers working on it. There are also eBay, PayPal, and Google, and on the commercial side, Bank of America uses Python as one of their core technologies, with more than 5,000 Python developers. These are some of the companies I am aware of that use Python as a core technology. And if you look at the younger generation today, a lot of people are selecting Python as their language for coding challenges. This data is from CodeEval, a coding-challenge portal: Python is at 31% as of this year. A lot of people are adopting, learning, and writing challenge programs in Python on CodeEval, and it is not just CodeEval; other challenge portals are available, and all of them show that Python is one of the most popular languages among developers. Overall, we have been able to use the same Python logic we started with in 2002, to keep using it until now, and to scale it.

Any questions?

The business logic sits behind the middleware. The middleware is not in Python; it is the WSO2 product, which is Java-based middleware. We use the middleware to convert input formats like XML, CSV, and JSON. Whatever format comes in, we are able to accept it and transform it using the middleware, which again is Java technology, not Python. But once it is transformed into JSON, we are able to use the same Python logic on the back end, through RabbitMQ and the JSON format. That is how we are able to scale.

From the audience: what about Facebook? Facebook is using Hadoop to track all their customers, their posts, their videos, everything, and Yahoo also.
I do not want to take any chances talking about other companies, but from what I know or have learned, YouTube is using CPython. They may have legacy C code, but they were able to bring it into CPython and use Python as one of their primary languages. As for Facebook, yes, they use Hadoop, but they use Python for all their scripting.

From the audience: basically, I was wondering how Python is applied at the enterprise level. I am not comparing it with Java, but how does Python find its place in these distributed and big-data applications?

For big-data applications, we still have R&D to do, but it is mostly about writing a lot of Python scripting around the business logic. Data comes in, you use Hadoop to process it all, and you use Python for the complex business logic, which can then be converted to whatever format you need for visualization or data analysis. Thanks a lot.

Hello. My question is: how do you bring the asynchronous nature of message-oriented middleware on top of the web application, which you described as a synchronous request-response system? From the web application we are able to use both synchronous and asynchronous modes. If you want a backend process, you can even have a synchronous queue that the front end waits on. We are at the junction of transforming completely to an event-queue system, so that the web app no longer talks to the business logic directly. We want to remove that lower layer where the web app calls the business logic directly; we are in the process of moving everything to an event-queue system, which can be synchronous or asynchronous based on our business needs.

I don't know the logic you built, but my question is: why was Apache chosen when there are other web servers like Nginx or uWSGI? Why Apache? Again, it comes down to our experience, because it is a commercial product deployment.
It is not that we limited ourselves to Apache: we have used F5 and we have used Pound. We have not really tried Nginx, but we have used F5, and for that matter we have even deployed behind an IIS server. For the front end, our choice reflects our experience: we are most comfortable with Python behind Apache, and that is why we have continued with it.

Excuse me, which Python interpreters are you using other than CPython? Other than CPython, none; we use standard CPython. We have not used PyPy or any other interpreter.

What are the biggest performance bottlenecks you have faced with the Python stack? The main bottleneck, if you look at the previous diagram, was that processing was sequential. That is where we hit the wall: we were not able to scale in sequential mode. That is why we moved to RabbitMQ workers. We can spin up more workers as we grow, and we can extend the workers. The remaining limitation is that we do not have middleware in Python. There are open-source options like Mule and WSO2, which is very popular nowadays, but again those are products developed in Java. Still, we are able to scale without the middleware too: if we just remove the middleware, we can scale using RabbitMQ, or ActiveMQ, or any message queue. It is about how you write your code and how you optimize it; we spend a lot of time reviewing code and refining it.

Your workflow is suited to a queue-based architecture. Hello. My question is about that queue-based architecture: how did you prioritize the queues? If I get a request over FTP and, by comparison, a request from mobile, the mobile request has to be served much faster than any FTP request. And you mentioned your application uses AngularJS, so responses from the server have to be much faster. How did you manage that?
The way we managed it: the web app, which is heavily used by all the users, has dedicated controllers and dedicated queues, and it listens only to the queues dedicated to the web application. Then in the middleware, where we have the API Manager and the API gateway, there is the flexibility to configure how mobile and real-time traffic should be handled: what the priority is, what the business need is, and what kind of client it is. If a client has purchased capacity for only 25 requests at a given point in time, we are able to control and configure that through the API Manager and API gateway. The queue priorities themselves are managed right in RabbitMQ, from the middleware.

Where was the infrastructure, like the load balancers, placed? This seems a bit cluttered to me, because Zope and Pyramid are directly calling the controller, and then we have the third-party integrations going through events. So where do the load balancers sit? The load balancer remains the same; I included it in the previous diagrams but could not carry the same detail into this design. The load balancer sits in front of Zope and Pyramid, so it remains the same, and we also have load balancing, control, and configuration in the middleware, which again is the WSO2 product. So we are using both: the same Apache continues in front of the web application, and the middleware has its own load balancer.

Another curious question from the previous slide: why would you have the Apache web server in front of the load balancers? Wouldn't Apache have its own request limit? Because we use Apache as the public-facing server: it is the only server accessible from the outside world, and everything else stays within the network. It is about security and control.
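To illustrate the earlier queue-prioritization answer, here is a rough sketch of dedicated, prioritized channels. An assumption: the channel names and priority values below are invented, and the stdlib PriorityQueue only stands in for what RabbitMQ provides natively (dedicated queues per consumer, or the x-max-priority queue argument):

```python
# Prioritization sketch: web/mobile traffic is served before batch/FTP.
# queue.PriorityQueue stands in for RabbitMQ; names are illustrative.
import queue

PRIORITY = {"web": 0, "mobile": 1, "ftp": 2}  # lower number = served first

dispatch = queue.PriorityQueue()

def submit(channel, payload):
    # Route each message with its channel's priority attached.
    dispatch.put((PRIORITY[channel], payload))

def drain():
    # Workers always pick the highest-priority pending message.
    served = []
    while not dispatch.empty():
        _, payload = dispatch.get()
        served.append(payload)
    return served
```

For example, submitting an FTP job, then a web request, then a mobile call would be drained in web, mobile, FTP order, which is the behavior the dedicated-queue setup gives the interactive traffic.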
Nothing else is exposed to the outside; the only server exposed to the public is Apache. But you could remove Apache: you can start with the load balancer server itself. It is not really required to use both; you can use either one, or continue the same way. Thank you.

What are your thoughts on Python 3 in general? That is our next step. We got to this stage using Python 2, but we need to plan and migrate from Python 2 to Python 3. That is the next step, the next milestone we have.

For a new application, would you use Python 2 or Python 3? For a new application, I prefer Python 3 over Python 2. But Python 2 has a lot of help and community, and a lot of products available, and I am not sure the same level of support, community, and third-party packages is available yet for Python 3. That is one of the things we need to compare and evaluate before we start using Python 3. All the open-source community modules we use are on Python 2; once those modules are available for Python 3, that is when we will start using it.

Finally, you are not limited to what we designed or what we used. You can think about how to start a small web application, or any application, with Django, Pyramid, CherryPy, or any of these open-source products. Keeping Python as the core language for the core business logic, you can replace anything from the middleware to the front end. The web application with AngularJS can be replaced with other front-end open-source products; apart from AngularJS there are other template languages, and even Django has its own. So you can think about how you would develop an enterprise product. This is just to give you an idea that Python can also scale, and that we can also use Python in an enterprise product.
It is all about how we optimize and plan the sizing and deployment process. Thank you very much. My colleague Baiju will be giving a talk about web deployment and how web applications can be used; he will continue with his presentation. Thank you very much.