Hi, I'm Joseph, and I'm here to talk about deploying JavaScript. I'm a software engineer at GDS, GovTech Singapore.

Someone once put it to me like this: if code has been written but never deployed, was it ever really written? So you can see it's one big idea.

So, story time. I used to be part of a music start-up. We used our platform to connect musicians with people who wanted to hire them, and we took a commission on each booking. So one night, a deployment was needed, and off we went. We had around 100 on at the time. The deployment went through and everything, then, boom, about ten seconds of downtime. And five seconds is more or less an eternity on the internet. So don't be like me. The good thing is that a deployment methodology pushes us to avoid exactly these kinds of problems.

Okay, I also looked deployment up on LinkedIn, and here are some numbers. For deployment as a skill, there are 13,000 jobs and 281,000 people listing it. So all things being equal, and assuming everyone is telling the truth, that's only about a 4% chance of getting a job with it. So then, going back to the search, let's look at the skills against the jobs, and you can see roughly 50,000, over 49,000.
So, comparing the skills against the jobs, that's the gap. I hope that motivates you. And yes, one more thing: we're hiring too.

Okay, so, on to the first topic: deployment. It's about getting code out to users, and it's a concept in, sorry, it's a concept in agile development. Agile emphasises getting working software into users' hands: tell the story, ship it, gather feedback, and adapt as requirements change.

Okay, so for us engineers, deployment is how the work we do reaches the users. A deployment takes your code and turns it into a running state, so you can think of it as taking a snapshot of the code and getting it running. So, what is it, and what is it not? First, a deployment contains a snapshot of your code, which you can identify with a hash or a version.

It also configures the behavior of your application, so think of it like your Node environment, NODE_ENV, your database connections. And it targets specific stakeholders for your application. So let's say developers may access it at dev.myapp.com, while product owners may access it at the user acceptance testing deployment. And lastly, it defines the infrastructure and availability. So for example, what base system are we using? Is it CentOS? Is it Ubuntu? How many VM instances should be running? How are we going to update our applications? So what it looks like is something like this.
So you get your services and your applications, which are handled by the deployment manager, which abstracts away the VM layer. Each of these circles is one application instance. So if you think about it, when a request comes into the load balancer, the load balancer routes it to one of these instances randomly.

So for the updating which I mentioned earlier, blue-green deployments: you can take the blue as the old version of the code and the green as the new version. Why deployments matter here is that they allow us to have zero downtime. So let's say for example we have eight instances at the start. When we are updating our code, we can take down four of them while bringing up another four, so we always have eight available to accept requests. This goes on in the same fashion until we reach the fully updated state.

And what is a deployment not? It's tempting to think of deployments as environments, but don't do that. Don't conflate your environments with deployments, because environments define the product behaviour; deployments are much more. And we don't just deploy to production. For example, in our team we actually deploy to development after a successful build, to the quality assurance environment after unit and system tests pass, and so forth.

So here are some common tools that we are actually using: for example, for VMs there's PM2, and we use Nodemon to monitor our applications. You can access my slides at the URL which I'll show you later; these are all links you can check out. A bit tight on time.

So how did we do it? I think it's something that has worked, and hopefully continues working. Our product architecture consists of two applications, one front end and one back end. Our front end is a ReactJS application and our back end is an Express application with Swagger.
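As an illustration (not the speaker's actual tooling), the zero-downtime arithmetic of the blue-green rolling update described above can be sketched like this, assuming eight instances updated in batches of four:

```javascript
// Illustrative sketch of a rolling (blue-green style) update: bring up a
// batch of new-version instances, then retire the same number of old ones,
// so the total available count never drops below the target.
function rollingUpdate(total, batchSize) {
  let oldCount = total;
  let newCount = 0;
  const steps = [];
  while (oldCount > 0) {
    const batch = Math.min(batchSize, oldCount);
    newCount += batch; // up new (green) instances first...
    oldCount -= batch; // ...then down the same number of old (blue) ones
    steps.push({ oldCount, newCount, available: oldCount + newCount });
  }
  return steps;
}

// Eight instances, batches of four: availability stays at eight throughout.
console.log(rollingUpdate(8, 4));
```

The point of the sketch is only the invariant: at every step, old plus new instances still equals the original capacity.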
So developers develop locally, and then they push to the CI pipeline, which automates everything up to the deployment. Nectar is our own in-house government deployment engine, which actually stands for Next Generation Container Architecture. So you can see that first we run the unit tests, then we do continuous integration, deploy to a QA environment, and thereafter run integration testing. Finally we deploy to a staging environment, and if that's fine, we go on to the production deployment.

Okay, so now I'm going to move on to some practical tips born from mistakes that we made. We failed pretty badly at the start, so these are tips to help you not make the same mistakes we did. Our objectives were to allow developers to push often, deploy easily and frequently, and reduce time to production.

Okay, so how do we write deployment-friendly code? First: one codebase, many deploys. Always keep your applications separate, each with its own CI pipeline. This reduces the time it takes to go through the pipeline, and it avoids a single point of failure. So if our back end had some issue, our front end can still get deployed if its checks have passed.

Another tip is to isolate your dev dependencies. Inside your package.json there are actually a dependencies and a devDependencies property, so use these to separate your production and development dependencies. When you do that, you can lazy-load your development dependencies and exclude them from the production build. I think most JavaScript tutorials will actually teach you to eager-load, which is to put all your requires first. What we found was to actually put them inside the code blocks, inside development-specific code blocks, so that we could package our apps without the development dependencies. For example, things like webpack take up a lot of space, so we exclude it by doing it like this.
So next: use a lockfile. npm 5 comes with one, Yarn comes with one, and it helps us avoid "works on my machine" kinds of problems.

Environment management: keep the configuration out of the code. Use your process.env. It might be tempting, for example, to listen on different ports in different environments by hard-coding it like this, but don't do that; let the deployment handle it. The application should only define the behavior, not how it happens. Okay, so we can do this by having a .env file, via the dotenv npm package, for the local development environment. For actual deployments, use Docker Compose or Kubernetes; they allow you to inject the environment, and I'll show you how later. Okay, so an exception is for development tools. Let's say you want webpack middleware: it doesn't make sense to put that outside of the code block.

The next tip is to minimise the disparity between development and production. This is about keeping the behavior the same, and we do this through the use of adapters. This is just one example, and you can see that the code always remains the same no matter where it runs; the adapter is defined by the environment. So for example we can choose a MySQL adapter or a Postgres adapter, require it, and create the database connection from it. Configuration is done by the environment.

And for persistent data, one way we handle this is through database migrations. You can think of database migrations as small, incremental changes to the database schema, written in code. They can be put in your repository, which means they are versionable. So if you have a version 1.0.0 with three migrations, we can be very certain the database looks the same anywhere those three migrations have been run. Migrations generally come with up and down functions: up would be something like creating a table, and down would be something like dropping the table.
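A sketch of such an up/down pair, applied here to an in-memory stand-in for the schema so it is runnable as-is (a real migration tool would run these against the actual database; the table name is an assumption):

```javascript
// Sketch of a migration with up and down functions. "schema" is an
// in-memory stand-in; a real tool applies these changes to the database.
const migration = {
  up(schema) {
    schema.tables.push('users'); // e.g. CREATE TABLE users (...)
  },
  down(schema) {
    schema.tables = schema.tables.filter(t => t !== 'users'); // DROP TABLE users
  },
};

const schema = { tables: [] };
migration.up(schema);
console.log(schema.tables); // → ['users']
migration.down(schema);
console.log(schema.tables); // → []
```

Running the same ordered list of migrations from an empty schema always produces the same result, which is what makes the schema versionable alongside the code.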
So the current tools of the trade for JavaScript are Knex and Sequelize. Knex is a query builder and Sequelize is an object-relational mapper, and you can see that we can actually define what a table looks like in code.

Okay, so the next tip, for process management: keep the application stateless. Avoid things like PID files and log files, and avoid multiple service connections, so that your application can shut down cleanly when it receives, for example, a SIGTERM. Okay, so how we can do this is through a singleton pattern: if the connection is not yet instantiated, instantiate it; if it is, just return it, and keep the connection count at one. For authentication, prefer JSON Web Tokens over sessions, and if you really need to use sessions, put them in an external service. This means that your application can be shut down and updated very quickly.

And the last tip for development is to log everything, and log it to standard output. This avoids file-based logging, so if your application needs to shut down suddenly for updates, it can shut everything down at once.

Okay, so in summary: one codebase per application; isolate and lazy-load dev dependencies; use a lockfile; keep the configuration out of the code; version your data; use stateless authentication; and log to standard output.

Okay, so next: building. Include what's needed, and let that be enough. So the first tip is to build in an encapsulated environment. We had the issue where we built something on Mac and then ran it on Windows and the whole thing just broke down, so our solution was to use a Dockerfile, because you can actually specify a certain operating system there, for example Alpine or CentOS, and you can trust that the binaries will be the same.

So next is to version your dependencies. This one happened because our builds were taking very, very long: one commit easily took like 20-30 minutes for our CI runners to complete running the job.
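The singleton connection pattern mentioned under process management, as a minimal sketch (the connection object is a stand-in assumption; a real app would create, say, a database client there):

```javascript
// Keep exactly one backing-service connection per process: create it on
// first use, return the same instance on every later call.
let connection = null;
let connectCount = 0;

function getConnection() {
  if (!connection) {
    connectCount += 1;
    // Stand-in for creating a real client, e.g. a database connection.
    connection = { id: connectCount };
  }
  return connection;
}

console.log(getConnection() === getConnection()); // → true
console.log(connectCount); // → 1: only one connection to close on SIGTERM
```

In Node, placing this in its own module gives you the same effect for free, since `require` caches the module; either way, shutdown only has one connection to tear down.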
So we solved this by actually hashing the lockfile and then trying to pull an image, using the hash as a version, from a Docker registry. If it exists, we pull it and use that Docker image to build our eventual application, and only if it doesn't exist do we run the dependency build, because npm install, I believe, takes 2-3 minutes for our application. Yeah, so this is what it looks like lah: for the dependencies you just copy in your lockfiles and build it, and then for the actual application build you pull from this image over here, copy in the code, and it's good to go.

Okay, so because we are using React, we are using webpack, and we use UglifyJS for code compression. This actually strips out things like unreachable code paths; yeah, I'm not going to show exactly what other optimisations it does.

Okay, so next is bundle compression. This allows you to specify how the file is compressed as it travels over the network. The example here is gzip, but there's this new compression called Brotli. I haven't used it, so I'm not going to recommend it yet.

Okay, so next will be code splitting. Code splitting results in faster page load times. For example, if you have 5 pages and the user does not always go to all 5 pages, you split the bundle into 5 pages and the user loads them as they come along. This improves your initial page load time. Also split the code by dependencies versus application code, because dependencies don't change often, so by splitting them out, that one file can actually be cached. Yeah, so we split out the code for node_modules, for example, and the reason for doing this is that we can cache it using a client-side cache. So there's this pretty awesome tool, sw-precache by Google, which lets you cache your front-end assets.
Yeah, so what the webpack plugin looks like is this, and you can specify whatever files you want to cache. If any file within this block changes, it will cache-bust and your users will download the new versions. And the front end is simply three lines of code: you just register the service worker.

Yeah, so in summary: version your dependencies; minify and optimise the code; compress it; split the code by change frequency; split the code by features; and implement client-side caching.

Okay, automated testing. We'll do this pretty quickly because I think I'm running out of time. So yeah, automated testing is required to prevent things from breaking too quickly. First, static analysis: we do this with ESLint, and we use the Google config for our back end and Airbnb's for React. The next thing that ESLint can actually do is scan for security vulnerabilities in the code, via the ScanJS ESLint config, and it's all set up in a simple .eslintrc which you can place in your project root.

Okay, so next will be unit and system tests. I think this is pretty standard: the Mocha framework. For the front end we use Karma; for the back end we use the standard Mocha runner. And integration tests: these are actually happy-path tests. We initially used CodeceptJS but eventually switched to Robot Framework. CodeceptJS is purely JavaScript, and it looks something like this, written from a very human point of view. So let's say your user came in from Facebook: you can say that he should be on this page, UTM campaign Facebook, and he should see "Login via Facebook".

For non-functional verification we have Gatling for load testing, and for security vulnerabilities we are using Nessus, actually, but a good open-source alternative is w3af. So automated tests make sure your code stays maintainable and secure: write your unit and system tests, let them run inside the CI pipeline, along with your integration tests, load tests and penetration tests.

Okay, so on releasing: use semver, because everyone else uses it, so most people should know what it means. Patch version for bug fixes,
minor for non-breaking changes, and major version for breaking changes.

So next is to avoid versioning through package.json. This was tempting, and we actually did it, and it became very troublesome once our team scaled, because every push to the CI pipeline resulted in another commit, just for that one version change, which required us to pull it back again. We can view the versions with this command; it's pretty simple. And I've actually written a package to help with this process: you can initialise your repository, get the latest version, and then iterate through the patch, minor or major versions.

Okay, so next is using Docker for immutability. We need to keep the package immutable, so we create it using a Dockerfile and then we push it with the version as the tag. Okay, so in summary for releasing: use semver; avoid versioning through package.json and use git tags to version instead; and package your application immutably.

And the last one is deployment: getting it out there and ensuring it stays up. Okay, so the first thing to note is to implement your infrastructure as code, because this empowers developers. It helps to cultivate DevOps and creates shared responsibility. So for example, using Kubernetes we can deploy with just one command; all our infrastructure is defined inside there. Okay, so for the base system we are still using Docker, and thereafter we deploy using Kubernetes. So you can see that we are actually pulling from a Docker registry, and we can also specify environment variables inside our spec files. Okay, all of these are actually trimmed for brevity; the real files are much longer in between the lines.

Okay, and yeah, this is for scaling. For example, we can see here how many instances can go down at any one time, so at any one point in time our application keeps 15 instances up. Yup, so this keeps it up with no downtime. And lastly, expose your application via port binding. For example, if your application listens on port 3000, just leave it as 3000 and let the deployment handle the job of implementing SSL.

Okay, so service management. There are three
types of services. First are the application services you're writing; then backing services, like your databases; and administrative services, so things like migrating the database or updating it with new data. For your application and backing services, we can deploy using the spec files seen above, and as for admin services, you can use things like a Job spec file, which allows you to specify a certain interval at which to run the code.

Okay, so one thing to note is to configure for quantity, because Node is single-threaded, so we implement concurrency by scaling outwards, by having as many instances as possible. And we do this in order to let them die on memory leaks, because it's not a matter of if but when. Okay, so setting memory bounds looks something like this: you can see the CPU set to 0.15 and the memory limit of 100 over here. What this means is that when it spins up, it requests this amount, and once it reaches this amount it will get killed and then start up again, which is why you need to keep your application stateless.

Okay, so in summary: define the base system with Docker; define infrastructure with Kubernetes; we use Deployments for apps and backing services, Services for exposing the Deployments, and Jobs for admin services; focus on quantity when scaling, and let memory leaks die. And one last thing: don't deploy on Fridays. We did that once and it was a terrible experience. Okay, so thank you, that is all.

Yes? So you mentioned a little bit about accessibility testing, and you said some tool you were using, but you went really fast so I couldn't catch it. Yeah, the time was running out. Which slide was that? Which slide was that, you tell me; it's like 500 slides ago, hahaha. It was in testing, is it? Yeah, this one? Would you tell me more about that? The penetration testing or the load testing? Sorry, I thought it was accessibility. Oh right, okay. So the question was: what does penetration testing do? So what this does is actually run automated scripts, automated scripts to try and hack
your site. So for example, stuff like cross-site scripting. Yeah, so it will try to inject SQL commands into all your fields and everything, automated. Yeah, and the load testing was actually to determine, remember earlier, the resource limits. We actually use this load testing to see how much load each instance should take. So we check things like spikes, how far the memory spikes, and then we set that as the limit, because we don't want our applications to shut down unexpectedly during spikes.

Yes? You didn't mention, do you use anything for monitoring CPU usage? Oh, that's actually... okay, so the question was: do we use anything to monitor CPU usage? That is actually part of the Kubernetes dashboard. I can't access it because this is my personal laptop, yeah, I don't have it here. But Kubernetes actually has a nice dashboard where you can see the memory usage of your applications, how many network requests there are, and how much CPU and how much memory each is using as well.

Yes, at the back? It's actually part of the CI/CD pipeline. So we actually run through the application as a human, we note down what API calls are made to the back end, and then from there we simulate those calls by writing code for them, and that is one user. Then we specify, let's say we run the test for 50 users: we take this workflow and run through it 50 times concurrently against the server, and see how the server performs. So I think some metrics would be how long it takes to respond and what the memory looks like; I think it's like this.

Any other questions? We still have time. Yes? Is your Kubernetes deployment in-house? In-house, we have to use in-house, yeah. Do you deploy your database in Kubernetes, or do you isolate it in some partition? So the question was: how do we do the database, where is our database stored? Our application is deployed on Kubernetes. In our testing environment we are actually using RDS, Amazon RDS, and inside Nectar, which is our internal deployment environment, they actually provide
something like what AWS provides, so we have an internal sort of RDS, yes.

But how does the stack look? Okay, I think... the application stack? Okay, I don't have anything on that, so what do you want to know about our stack? Okay, so because we have two applications, we treat this as, so to speak, service-oriented rather than microservices for now. This was done with microservices in mind; in future we'll actually get there, and I'll come back and give another talk on it. So right now it's just two applications: one back-end API and one front-end website. Did that answer the question?

So for your databases, do you have a separate stack for each environment, so that in your QA you have a separate QA database? Yes, that is correct. For each deployment there will actually be a different database instance that it connects to.

Where do you store your application logs? So the question is: where do we store the logs? Our logs are actually streamed to the console, so basically we just do a console.log and we leave it at that. What actually happens when we have that many instances is that each of these instances, okay, so these instances will be put onto VMs allocated by the deployment manager, which in our case is Kubernetes, and from there, there's actually a central log collator for all of these. Correct.

Are there any other questions? Okay, thanks for listening.
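The load-testing workflow described in the Q&A (record one user's API calls, then replay them N times concurrently) can be sketched like this. The `fakeUser` workflow is a stand-in assumption; a real run, e.g. with Gatling, would replay the recorded HTTP calls:

```javascript
// Replay a recorded user workflow for N concurrent "users" and measure
// how long the whole batch takes.
async function loadTest(workflow, users) {
  const start = Date.now();
  const results = await Promise.all(
    Array.from({ length: users }, (_, i) => workflow(i))
  );
  return { users: results.length, elapsedMs: Date.now() - start };
}

// Stand-in workflow: pretend each user makes one API call taking ~20 ms.
const fakeUser = () => new Promise(resolve => setTimeout(resolve, 20));

loadTest(fakeUser, 50).then(summary => {
  // Users run concurrently, so elapsed time tracks one call's latency
  // rather than 50 times it.
  console.log(summary.users); // → 50
});
```

In the real setup, the interesting outputs are the response-time and memory-spike measurements, which then inform the Kubernetes resource limits mentioned earlier.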