I hope you've all had a good dinner. I was too nervous to really enjoy mine, actually. Okay, let's begin. Today I'm going to talk about WordPress scalability. Let me introduce myself first. My name is Wen Yan. I have only been a WordPress developer for a few years, but I have been with PHP for a very long time. I think it was about 20 years ago, back when PHP was still PHP 2. Okay, I think that introduces my background to you as well. I come from a company called iVideoSmart. We do everything with video: we do video streaming, video hosting, video transcoding, video sites, and we also do video monetization. So if you want to find out more, you can talk to me and my colleague over there during the tea session. Now, why WordPress? I think all of us here love WordPress, so I don't need to sell it to you. These are just some of the reasons for using it: it has plugins for everything, it has themes for any kind of presentation, and it is very easy to use. My mother, who doesn't know tech, knows how to use it, so it really is that easy. And of course, it is the most popular CMS.
We also know that WordPress gets hacked, and I think we had something on security earlier. By now we should all know better how to secure a WordPress site, but WordPress has so many plugins, and a lot of plugin developers focus on getting the job done without really considering security. I think we all know this: install one badly written plugin and you end up with one more hacked CMS. We all know that. And the last thing: the typical WordPress setup is out of the box. We install everything in one box: the database, WordPress, everything in one box. Or, if we are a bit more adventurous, we run WordPress in one box and connect it to an external database. And many of us use shared hosting, where many WordPress sites are hosted on a single cluster. This kind of implementation is very hard to scale. Have you ever experienced a time where your WordPress site goes down? Especially when your WordPress site is very, very popular; then you realize that when there's a spike, everything goes down, everything starts crawling. We have experienced so many of these hard-stopping moments. And it took quite a lot of pain before we realized that WordPress, if you just install it and use it out of the box, cannot really take much load.
So, what are the ways to overcome this problem of WordPress not being able to take much load? You can scale upwards. You can use the most powerful server and pay for it, and then maybe you can scale. But the issue with this is that your WordPress site might not be using all of your CPU's power all the time. There will be times when people are sleeping. If, let's say, you're serving a certain region of the world, those people will be sleeping, and your high-cost box will be sitting there idling, not doing anything. So this is one of the problems with scaling upwards. And for a typical single-box setup, in our experience, you cannot take more than 50 concurrent requests. I'm not sure whether you agree or not, but that was our experience when we set up an EC2 instance in Amazon Web Services. With 20-plus concurrent requests, our CPU starts hitting 100%. But when we looked into the problem, we realized that the PHP part is actually okay; it is the database server that is thrashing. The database server is the one that cannot take the load. I'll need to explain a little bit about the database server later as well. So, can we do this instead? You will see that there are three PHP nodes. Can I have a show of hands? How many of us have actually implemented this? Well done. We implemented that as well, but we went through a lot of pain. So in this session we wanted to share with you, for those of you who have not implemented this before, that this is possible. But I also want to share the things to look out for, because we went through a lot of pain to do this. I think the gentleman over there will recognize that. What is the issue with this implementation? PHP is session-based. WordPress is session-based. WordPress uses a lot of sessions, cookies and all that, and these are stored in sessions. So if you have multiple PHP nodes, you realize that each node will store its own PHP session. And when the load balancer...
Okay, I think I'm getting ahead of myself. Let me go to the next slide and talk about the problems with a WordPress cluster first. The PHP session is one of the issues, because sessions are stored on different nodes, so when the next request comes in, it will not come back to the same session. The next thing is the uploads folder. It's common knowledge for all WordPress developers: there's an uploads folder where you store all your media files. If you spread your load across different nodes, when you upload a media file it will go to one of the nodes, but the two other nodes won't get your file. The next time someone requests that particular asset, the asset won't be there. So there's this problem as well. And of course, you need the load balancer to balance out the load across the three nodes. Okay, we wanted to share with you how we implemented this WordPress cluster platform. What we did is, of course, use a load balancer to round-robin the requests across multiple nodes. And we made use of Nginx and PHP: each of those nodes actually runs Nginx as the web server interface, with PHP as the app logic. Then, to solve the problem of having a different uploads folder on each of these nodes, we used a centralized file system, and the file system we use is NFS. We tried several things like GlusterFS and SSHFS, different implementations. We wanted to use what Amazon has, which is the Elastic File System, I think that is the name, but it was not readily available in Singapore at the time. So we tried several of them and we found that NFS works best. If you want to do something like this, NFS is a workable solution. We are using it right now, and it's serving us very, very well. And then, to store the PHP sessions, again we centralized everything and stored them in a Redis server. I will be showing you how this can be done. Next, the pros and cons of this implementation.
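The talk uses an AWS load balancer, but the round-robin idea above can also be sketched with an Nginx upstream block. This is a minimal illustrative fragment, not the speaker's actual configuration; the node addresses and names are hypothetical.

```nginx
# Hypothetical load-balancer sketch: round-robin requests across three
# identical WordPress nodes (round-robin is Nginx's default policy).
upstream wordpress_nodes {
    server 10.0.1.11:80;   # PHP node 1 (Nginx + PHP on each node)
    server 10.0.1.12:80;   # PHP node 2
    server 10.0.1.13:80;   # PHP node 3
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://wordpress_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With plain round-robin like this, consecutive requests from the same visitor land on different nodes, which is exactly why the sessions and uploads folder have to be centralized.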
The pros: we have been using this for the past year plus and there has been no issue at all. There was one time when we scaled this up to 50 nodes. Instead of the 3 you see over there, we scaled up to 50 nodes running on a single database server, and we were able to take 1,000 concurrent requests. From 20 concurrent requests, we increased it to 1,000 concurrent requests. Yes. What I want to share with you is that the database is still thrashing at this point. But because we managed to scale out the PHP nodes, each PHP node is able to take the load and it waits. At least the PHP doesn't thrash. Okay. When you have a single node and your database is thrashing, you know what happens? All the PHP requests coming in will queue up, and your PHP server starts thrashing too. Your database is thrashing, yes, but because your PHP server is also thrashing, you start seeing the HTTP 500s. Your customers start seeing HTTP 500 server errors. Whereas when you scale it out, your website still works, because the PHP is not thrashing. It just waits for the database to be available, and when the data comes back, your PHP will serve the web page. The page is slower, but it won't thrash. So that's our experience. There was one time we had about four nodes, and we never expected the traffic to spike so quickly, because one of our partners was running an event and everybody started to access the website, and it started thrashing. We saw that the database was thrashing, and then the whole PHP side was thrashing. The good thing about AWS is that you can just create new nodes, new EC2 instances; we spread the load across them and immediately the traffic went back to normal. In fact, it helps.
In fact, the database doesn't thrash that much anymore; it doesn't reach 100%. So it does help to scale out the PHP nodes in this manner. And what we experienced is that it's very easy to scale out or scale in. Is that what you call it, scale in? You can either remove or add nodes without any issue at all. We use AWS for this, so we just start up an instance and add it into the load balancer as one of the nodes in the round robin. It just works like that, without issue. It's very easy. And whenever we find that the load is not that high, we just take nodes out, because every node costs us money. So when the traffic is not as heavy, we take out nodes to save cost. That is very easily done. So what about the cons? What is the disadvantage of this implementation? The implementation I've shown you here doesn't take into account the single points of failure: the centralized file system, which is the NFS, the database, and the Redis server. If any of these fail, the whole site goes down. But our motivation was not HA; it's not high availability, but scalability. We want to be able to take the load. Having HA is a totally different scenario altogether: you have to make sure that your NFS, your database and your Redis server are all HA. That is another talk. But I think doing that is not that difficult; you just need to get some experts to help you, and I'm not an expert on that. But to let you know, we have been running this for a year plus with no instance of the NFS, the database or the Redis server failing. Not a single one. Whereas we can scale up and down very easily, and that helps us a lot. I think now you will be interested in the gory details. But I have to warn you, you may not appreciate this, because it's a lot of gory details. Okay, I'm going to go into what we did. Can you see that? I'm so sorry. This is wrong.
Okay, what I'm trying to show you here is the Nginx configuration needed to spread out the PHP nodes. Maybe let me change the color, if you don't mind. Let me change the color of the fonts. Give me a moment. Is that okay? Let me go back to the slide. It's better, right? Because I changed it to black. I will let you have some time to soak in the whole implementation. I think the gentleman over there can appreciate this. This is very, very sysadmin-level stuff, and we went through a lot of pain to discover it; well, not really to discover it, but trial and error, and then we finally got it to work. This is the exact configuration we have on most of our Nginx PHP nodes. Basically we are saying that when a request comes in and Nginx sees it is a PHP request, it will just call PHP to compile and serve out the request. Right? As simple as that. This is a very usual setting in PHP. Okay. Are you able to see this? This is the part you need to take note of. The PHP session; this is the crucial part. If you look at a normal php.ini file, you will see that sessions are saved to local files. Now we just change this: instead of saving to local files, we save them into a Redis server. A two-liner. There are four lines over here; the third and fourth lines are commented out, so it's just the first and second lines. So instead of saving sessions to local files, we store them in the Redis server. A two-liner, but it took us quite some time to discover this. Just this. Okay. And then the NFS. Each of these PHP nodes mounts an NFS folder, which in this case is the uploads folder, as an NFS mount. You mount the folder on the NFS drive, so all of the nodes are mounting the same folder on the NFS server. Over here. Then of course, on the NFS box, what we do is install an NFS server and designate a shared folder to be mounted by all the PHP nodes.
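The "two-liner" php.ini change described above could look like the fragment below, assuming the phpredis session handler is installed. The Redis address is a hypothetical example, not the speaker's actual value; the commented-out lines mirror the file-based defaults being replaced.

```ini
; Save PHP sessions to a shared Redis server instead of local files,
; so every node in the cluster sees the same sessions.
session.save_handler = redis
session.save_path = "tcp://10.0.1.20:6379"

; The defaults being replaced (the commented-out third and fourth lines):
;session.save_handler = files
;session.save_path = "/var/lib/php/sessions"
```

With this in place, whichever node the load balancer picks, the visitor's session is fetched from the same central Redis server.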
By doing things this way, you are able to centralize all your files in one location, so all your PHP nodes see the same files, the same assets, the same PHP files. But one thing to note is that this NFS setup does have a delay in updating the nodes. For example, if this node over here uploads an asset file into the uploads folder, it takes a few seconds for the rest of the nodes to see it. So if your website needs everything to be updated very quickly, NFS may not be the solution for you; there are other solutions that I'm not aware of right now. This works for us because our website is not that kind of high-demand website, so we can take that hit of a few seconds' delay. Right, we'll leave questions for later. Let me go through this; we're near the end. Even with this implementation, we found that we can take about 1,000 requests. But the issue is that every request that comes in still hits the database, so there are a lot of database calls in and out of the MySQL server. And during the peak period, the painful experience we had is that the engineers had to be there watching for a spike. When there's a spike, they quickly add more nodes. At one time we added up to 50 or 60 nodes; that solved the problem, so we were able to take 1,000 to 2,000 concurrent requests coming in. So we were always on standby, especially every weekend, because the weekend is when everybody accesses our website: add more nodes when the spike goes up, and when the spike comes down, remember to take the nodes out. If not, we would be racking up our costs. So we were always sitting there adding and removing nodes. And, at the risk of doing a bit of selling for AWS here: AWS does have this thing called auto-scaling.
With auto-scaling you can configure the system to scale automatically. But the issue is that when it adds a node, it takes up to 20 minutes for the node to become active. By that time the spike has already gone down, so we couldn't use that. We had to add the nodes manually; that's a lot faster. So that is what we did back then. But we still had this problem: even though we were able to scale out and scale in, we still had to sit there adding nodes. Of course, we could add 100 nodes and just leave them running. It would just run, but we'd be paying so much that it's not feasible. So the next thing we did: we realized that our website mostly serves static content. We don't really serve a lot of people who need to be subscribers, log in and see their own content. Every time we serve the landing page, or whatever page, a category page, it's always the same page. So what we did is caching. You see that? We used FastCGI caching, the FastCGI cache within Nginx. We love Nginx. Nginx is everything we need to serve out a site that can take high load. So we use FastCGI caching. You will see from the diagram over here: when a request comes in, we fetch from the database and then store a copy on disk; the next time the same request comes in for that same page, it is served from the cache. And you know what? With this, we are able to handle 10,000 concurrent requests, and our CPU was running at 6% with 10,000 concurrent requests. So caching is really the way to go. But the disadvantage with caching is that once you cache, you cannot have dynamic data. For example, you can't serve a page that says "Hello John", because if you cache that "Hello John" page, the next person that comes in will also see "Hello John". So that's not feasible.
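A minimal sketch of the Nginx FastCGI caching described above; the cache path, zone name, lifetime and PHP-FPM socket are hypothetical examples, not the speaker's actual values.

```nginx
# Define an on-disk cache zone for rendered PHP responses.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:100m
                   max_size=1g inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        # Serve repeated requests for the same page from the disk cache
        # instead of hitting PHP and the database every time.
        fastcgi_cache wpcache;
        fastcgi_cache_valid 200 60m;

        # Bypass the cache when a PHP session cookie is present, so
        # logged-in (dynamic) pages are never served from cache.
        fastcgi_cache_bypass $cookie_PHPSESSID;
        fastcgi_no_cache $cookie_PHPSESSID;
    }
}
```

The bypass directives illustrate the trade-off the speaker mentions: cached pages are identical for everyone, so any request that must stay dynamic has to skip the cache.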
So if you're running an e-commerce website that needs dynamic data, this solution doesn't work for you. We are currently working on another solution that takes advantage of caching while also injecting dynamic information into the page. That is another talk again; we are still working on it. Of course we invite comments; if you have any comments that could help us, we would be very happy to hear them. Okay, I think that is all I have to say about WordPress scalability. Basically, today I shared with you what works for us. What works for us. I'm very sure there are better solutions out there, but this works for us and we are sharing it with you. Yes, it's a bit more system administration than WordPress, but I hope we can all appreciate it and know that WordPress actually is scalable. You just need to do some thinking. So, should we have Q&A? Q&A, yes. Any questions? I may not be able to answer every question, but whatever I know I can share with you. Whatever works for us. Yes? A question about the nodes that we switch on and off: AWS will still cost you for nodes that are switched off unless you terminate them, so do we have to set them up all over again before switching them on? That would be a lot of work. That's a very good question. At one time we actually set up 50 EC2 instances but turned them off, just shut them down. When a node is shut down, you're not paying for the node. You may be paying for the storage, but the storage cost is minimal, because our storage is on NFS; each node only has about 8 GB. So there's very little cost to us even if we keep 50 to 100 nodes around; we're hardly paying anything. We can just leave them there, and when there are spikes, we just attach the EC2 instances to the load balancer to round-robin the requests. That's what we did.
But after we added the caching, we're now only running 3 nodes, and it's taking 10,000 concurrent requests at the spike, at about 10% CPU, and the database doesn't thrash. So that's the beauty of this caching. Next question: some people set up WordPress with Heroku; would that be an alternative to this, and what would be the pros and cons? Sorry, can you repeat that? With Heroku, the company purchased by Salesforce, you can build an app on top of it. Oh, I'm not so familiar with that. Is it a web hosting company? No, they provide platform as a service, so you can easily host your app there. I would like to share my knowledge if I can, but not that much: Heroku would manage these things for you, but it doesn't go that deep into this kind of infrastructure. For a mid-level user Heroku could be a solution, but not at this level. I would put that off; you need to be able to very quickly provision more servers, well, not quite on the fly, but very quickly provision more servers and attach them to the system. Next: I would love to know more about your infrastructure, what PHP version you are using. PHP version? Oh, we were using PHP 5.6, but now we have upgraded to 7.1. A lot of people have been saying PHP 7 is a lot faster, but it's not that much faster for us; with this implementation we actually don't see any performance gain in PHP. And have you tried MariaDB against MySQL, any performance optimization or benchmark? MariaDB? We mostly use MySQL. Is MariaDB faster? We have found that MariaDB is better. It's an open source fork of MySQL, and we have seen that it's at least 1.5 times faster than MySQL. Thank you so much. Actually, we are looking at optimizing MySQL as well.
What we did when we provisioned the RDS server was just use whatever defaults were there. But lately one of our colleagues has been looking at how to optimize MySQL. Actually, with MySQL it depends on what you want to do: if you want to optimize it for reading, there's one type of optimization, and for writing, another. So we are looking at that, at optimizing the MySQL server. So actually, it's not that WordPress cannot scale. In fact, if you are using Drupal or whatever other CMS, they won't scale any better out of the box; you have the same issue. But you can do the same thing here, not just for WordPress, but for anything else that is PHP. As long as you take care of the sessions and the file system, centralize them, you can scale. Any PHP application can scale with this implementation. We have used this implementation for another of our apps, developed using Laravel, and it scales as well. It just works, this thing. It works. Do we have more questions for Mr. Cog? If not, thank you so much, Mr. Cog. Thank you very much.