Head office eventually got a real address in Sydney, and now we're in Singapore; this is actually our head office here. We're in nine offices, six countries, four continents. We have data scientists in New York and in California, dev offices over in The Hague in the Netherlands, and engineers scattered around the world. We're not a large company, we're about 50-odd people. We have been larger, we have been smaller, and we've progressed through time as technologies change and we react. We used to be heavily into mobile engineering, lots of mobile developers when mobile was hot back in 2010. Libby and I have been working together for over 11 years and we've had other companies. But Sakura is focused on three things: cloud, data and security. On the cloud side, we were an Amazon partner; today we're a Google Cloud partner, so we are actively in-market working with cloud on a daily basis, and I'll talk about our implementation. Security-wise, we do cyber security and digital forensics. We also help people like the local police track down cyber criminals and things like that, so we do some pretty cool stuff. Today we do somewhere around 200-odd penetration tests, across many of the banks you use today as well, so we're quite active in the security space.

In the API space: as a modern technology company, it's unlikely that you're not using APIs, right? You've got to be using APIs, because that's what it's all about. Orchestration has always been a common word and a common concern. Who's actively developing APIs today? Hands up. What's your biggest problem? Authorization? Yeah, authorization. I find multiple data sources and multiple clients being the issue: having to redevelop authentication for multiple different clients, having to distribute it and keep it in sync, and all sorts of stuff.
So for me, when I first got into this space, I loved being able to plug pieces in, drop in a plugin very quickly and get going. So that's what attracted me. Sorry, I should have introduced myself first: I'm actually the CTO of the group. My background: I graduated from uni and went straight into engineering, and I've been in engineering ever since. I used to do some sessional lecturing at NTU, so I've been coming to Singapore for 20 years now, and I've been living here this time around for two years. I've focused mainly on architecture, API development and security. I really love that stuff.

All right, so, case studies. Both of these case studies are Kong-based. We're currently working with a Singapore-based, MAS-regulated entity in the FinTech space. It's a small company; we're talking tens of thousands of users, nothing particularly scary. What they have is a machine-learning investment bot in the background and visualization on top of it, so we have multiple clients. We're also white-labeling the product: we've had to white-label it for Thai-based banks, we've had to white-label it now for American-based banks, and all sorts of things. So we have to deploy it quickly, we have multiple clients connecting, and we also have multiple API back-ends that we have to control. We have in-house Django, Flask, and Ruby on Rails API back-ends, and being a startup, the company didn't want to reinvent the wheel. So an API gateway was by far the best and fastest way to go. We had to deploy across private data centers as well as public data centers: we're currently deploying to Amazon, and we're also deploying into bank data centers. So we had to have a solution that could be deployed anywhere, and we had to be able to support different teams with a common language.
So for us, and this environment in particular, we had to have a common solution. How we deployed it was actually fairly simple: we just went with a basic Kong-on-Postgres deployment. Anyone running on Postgres today? A few, yeah, cool, okay. When you're looking at a scaling solution, Postgres is much harder to scale than, say, Cassandra; both require good planning. So when you look at scalability, having a scalability plan for growth is essential, and getting the data store right at the beginning is something you need to do. For this company, because we had to deploy many times, with fairly small deployments, having an easy-to-manage Postgres database was very simple. So we just ran our own: we're not using Lambda, we're not using Google Cloud Launcher. We just installed Kong, whacked it onto the back of Postgres, and deployed the plugins that we needed.

The solution, and I think I've already said some of this: we had to reduce fragmentation, we had to get governance in there, all in Amazon Web Services. We wanted a single point of convergence, with the gateway handling authentication calls and a few other things. So the gateway is actually the main pillar here. The front end is the main UI, but we also have server-to-server consumption, edge-based consumption, and some internal APIs as well.

The second one, a quick test of your knowledge: this is a big advertising company with global reach. The company's head office, I'm not sure, I think it's London, maybe New York. The CIO and CTO of the company are in New York, but a lot of the finance side comes out of London. They are the world's largest media company by billings, and this product is being run out of Singapore. They have offices in the US, UK, Hong Kong, Vietnam and Singapore. We've got thousands of users, and we had to bring together a whole series of different APIs.
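To give a feel for how little configuration that first deployment needs, here's a minimal sketch of a Postgres-backed `kong.conf`. The host names and credentials are placeholders, not the client's actual values:

```
# kong.conf - minimal Kong-on-Postgres setup (all values are placeholders)
database = postgres
pg_host = 127.0.0.1
pg_port = 5432
pg_user = kong
pg_password = kong
pg_database = kong

# Keep the Admin API on localhost; expose only the proxy
admin_listen = 127.0.0.1:8001
proxy_listen = 0.0.0.0:8000
```

With the database settings pointed at an existing Postgres instance, `kong migrations up` followed by `kong start` is all a small deployment needs, which is why repeating it per white-label client stays cheap.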
Their jobs were taking eight hours to process, so we actually got our data scientists involved. We had to find a solution, again, to bring those APIs together, as well as scale it and make sure things were performing well. We've had to go from eight hours of processing time to a target of five seconds per run. So it's a big difference, and every millisecond of delay is a concern. We have many clients to consider: AngularJS, Ruby on Rails, Python with Django, even Microsoft Excel. Many different APIs, different security patterns required: JWT, OAuth, things like that. Some public, some private. A lot of issues to consider and get right. Where are we at? Are we at five seconds? We're probably down to two minutes; we still have to get to five seconds. The original APIs were written in Visual Basic .NET, and we've brought that all across to Python. We've custom-written the entire algorithm in Python, and we've actually been able to scale Python out on the back end, with Kong out front handling the traffic.

So what did we do? We're with Amazon Web Services, multiple regions. Three regions were in scope: Singapore, the US and Germany. In each, we use Kong. We have the request coming in, Kong at the edge, and Cassandra as the data store behind it. Then we have all of our APIs on the back end, and that's basically replicated throughout the regions: Kong, Cassandra, all of the APIs, in every region. Cassandra uses a gossip protocol between nodes; if you've used Serf or Consul or things like that, it's the same kind of technology, and it's very easy to scale. So across regions and zones, Kong and Cassandra scale out in parallel.
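The multi-region side of that is mostly a Cassandra replication decision in Kong's configuration. A sketch of what the Cassandra-backed `kong.conf` might look like, with contact points and data center names invented for illustration:

```
# kong.conf - Cassandra-backed, multi-region (all values are placeholders)
database = cassandra
cassandra_contact_points = cass-sg-1,cass-us-1,cass-de-1
cassandra_keyspace = kong

# Replicate Kong's configuration across all three data centers
cassandra_repl_strategy = NetworkTopologyStrategy
cassandra_data_centers = singapore:3,us-east:3,germany:3

# Reads and writes only need agreement within the local DC
cassandra_consistency = LOCAL_QUORUM
```

With `NetworkTopologyStrategy` and `LOCAL_QUORUM`, each region's Kong nodes serve traffic from their local Cassandra replicas while gossip keeps the three data centers converged in the background.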
Kong is stateless, and it handles cache invalidation very well. If you make a configuration change in Kong, it will refresh and replicate that change throughout all the different Kong nodes in the cluster and maintain that state, while keeping things in memory for performance. So that's the Kong cluster on its gossip-based replication. We are using Kubernetes; we are very, very active in Kubernetes work, and we do a lot of package management with Helm, if anyone's using Helm. We're also a Cassandra partner, so we do a lot of Cassandra work. Among our big clients in that space, we currently manage a 20,000-node cluster with 80 million users; that's our largest Cassandra production deployment, from memory. So that's part of the solution that we're working through.

Lessons we've learned doing these things, what worked: it's the features that count. Obviously, you don't want to rebuild the wheel every time, and if you have multiple APIs and you can bring those different APIs together behind a nice, fast, scalable front end, that matters. I've found the Kong documentation really easy to use. Amazon Web Services produces great documentation; Google is still producing some documentation that needs work. Get your analytics and reporting right: make sure logs are shipped into your favorite reporting tool, that's essential. Use the plugins. We had a quick slide earlier on the sorts of plugins out there, but pulling in a plugin is really simple, and I'd encourage you to get in and have a play. If you want to look at launching Kong yourself to play with, I might actually finish off with Cloud Launcher and show you how easy it is inside Google Cloud. Also, think of Kong as a nice little piece of middleware that gives you features when you need them.
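"Pulling in a plugin is really simple" looks like this in practice: a few calls to Kong's Admin API. This is a sketch against a local Admin API on port 8001; the service name, upstream URL and rate limit are illustrative, not from either case study:

```shell
# Register an upstream API as a service (names and URLs are illustrative)
curl -s -X POST http://localhost:8001/services \
  --data name=reports --data url=http://reports.internal:9000

# Route traffic on /reports to that service
curl -s -X POST http://localhost:8001/services/reports/routes \
  --data 'paths[]=/reports'

# Drop a rate-limiting plugin in front of it: 100 requests per minute
curl -s -X POST http://localhost:8001/services/reports/plugins \
  --data name=rate-limiting --data config.minute=100
```

Because Kong replicates configuration through its datastore, those three calls against any one node take effect across the whole cluster.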
People probably don't like the word middleware, but to me that's really where it's at. It's really fast, you can build your own plugins, which we can discuss, and you get the entire lifecycle of a request and a response: you can do rewriting, you can do all sorts of things. Then there's your data source: you've got to pick the right data store from day one. Scaling Postgres can be a pain; scaling Cassandra is a lot easier, and it helps to decide early. And look at the enterprise support. To give you an idea, today I'm also working on a CA-based API gateway product for a big bank here in town, and I won't dig into CA too much, but it's a very different thing to work with.

In terms of how easy it is to launch: anyone here using Google Cloud? Anyone know this existed? Yeah, great. Google Cloud has a Cloud Launcher product. You can easily go in, and if I type it right, Kong comes straight up. You can literally launch from here: you pick the instance size, it spins all of this up, you just click Launch, and you can have Kong up and running in about 10 minutes. That's obviously billed through Google: the instances are Compute Engine instances, and on top of that you've got storage and everything like that to consider. By default, this is a Bitnami image, and by default the Bitnami build only takes Admin API requests from 127.0.0.1, so you're going to need some SSH port forwarding to work with it. For production you can open all that up properly; if you just want to see it alive, just tunnel in and work with it.
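For that Bitnami image, the SSH port forward is one command. The instance name and zone here are made up; substitute whatever Cloud Launcher created for you:

```shell
# Tunnel the Kong Admin API (8001) from the Cloud Launcher VM to your machine.
# "kong-vm" and the zone are illustrative; use your own instance details.
gcloud compute ssh kong-vm --zone asia-southeast1-a -- -L 8001:127.0.0.1:8001

# In another terminal, talk to the Admin API as if it were local:
curl http://localhost:8001/status
```

Everything after the bare `--` is passed straight to `ssh`, so `-L` sets up a standard local port forward and the locked-down Admin API stays bound to the VM's loopback interface.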
So, any other tips or tricks in that space: just launch it, it's really easy. Play with it, look at the documentation, work through it. There are plenty of tips up there, and Kong holds a lot of webinars as well, so get yourself online and keep up with what's going on. There you go, that's two use cases: a small company at moderate scale, and the world's number one media company at much larger scale, doing billions of dollars of business through Kong in terms of data. So, very happy. Any questions on those cases? Cool. "Are you guys using Kong Enterprise right now?" We launched initially, during this period, with Kong open source, but we're moving to Kong Enterprise. The small startup, they probably won't look at Enterprise until maybe March or April; the media company will be on Enterprise, I think, quite soon. Yeah, so it's coming in fast. And our impression is: it's easy to work with, easy to play with, and easy to set up.