Yeah, my name is Sung, so nice to see you today. Today I want to talk about the next cloud architecture in the near future. "Near" means about one or two years from now, okay? I have some pictures. Can I start? Okay, yeah. So, I like this picture when I talk about changes. Let's look at it. It's a New York City street in 1900. It's full of horse carriages; there's actually no car here, only one, okay? But 13 years later, there are no carriages at all. That is change. Nowadays we cannot actually predict our industry one or two years ahead, especially in the IT area, right? But I want to talk about our next-generation architecture, and please understand that this is my personal opinion, not an official one, okay? So let's start from a simple, standard architecture. Maybe you have a microservice architecture, and an API server layer, and maybe you use an API gateway. There are some microservice modules and application layers, and maybe some cache layers, and definitely you are using RDS or something, right? There are streaming services like Amazon Kinesis, or you may run Kafka by yourself, and there is some stream processing, and so on. So yeah, let's start from here. Okay, the first topic is serverless. Here's a question: what is the best way to avoid cloud lock-in? There are many answers, right? Okay, my answer is this one: own your code, not the environment. To do this, we have many services and technologies. The typical one is Lambda, as cloud computing, cloud code actually. There is also declaration, and the important concept is infrastructure as code, right? So the more you use Lambda, the more you can avoid lock-in, actually.
Because you just own your code; you don't have to own the environment at all. But here's the problem: usually you should integrate your serverless architecture with infrastructure as code. That is a constant problem. You have a Lambda function or some code, but you should manage the environment too. How can you do that? The simple way is SAM. To resolve this situation, we support SAM, the Serverless Application Model, on top of CloudFormation. So yeah, definitely you can use these kinds of technologies to avoid lock-in in your infrastructure. You can also create a CI/CD pipeline in a do-it-yourself way, and you can use other serverless frameworks, but I strongly recommend you use SAM. So basically, declaration is a really, really important technique when you want to avoid lock-in. The SAM model is really important in the definition of your serverless code. SAM is a specification that defines a serverless application, and I strongly recommend you use SAM together with CloudFormation and your other infrastructure as code. The spec is very simple: it's a YAML-style template file, with function definitions, function execution policy rules, environment variable declarations, and event source mappings. That is a typical example of SAM. And when you develop with Lambda, I have some recommendations, some Lambda design tips. Well, definitely use CloudFormation and SAM; plan to break your application functionality into decoupled modules and embrace asynchronicity; and never keep application state in your Lambda function code. That's very important. If the server becomes stateful instead of stateless, then you cannot use Lambda or those kinds of things in your cloud service architecture.
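The "never keep application state in your function code" tip above can be sketched in a few lines. This is a minimal illustration, not real AWS code: the event shape (`user_id`, `amount`) and the discount rule are invented for the example, and in a real system any persistent state would live in an external store such as DynamoDB, not in the function.

```python
import json

# A minimal sketch of a stateless Lambda-style handler, following the design
# tip above: everything the function needs arrives in the event, and nothing
# is kept in module-level variables that survive between invocations.
# The event fields ("user_id", "amount") are a made-up example.

def handler(event, context=None):
    user_id = event["user_id"]
    amount = event["amount"]

    # Pure computation: no globals are read or written, so any concurrent
    # container can serve any request interchangeably.
    charged = amount * 0.9 if amount > 100 else amount

    return {
        "statusCode": 200,
        "body": json.dumps({"user_id": user_id, "charged": charged}),
    }
```

Because the handler touches no shared state, scaling out is just running more copies of it; persistent data belongs in the data layer, which is exactly the next topic.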
And the final tip is very important: use the right data store. You should ask yourself this question: do you really need an RDBMS in your workload? That's a very important question, and I'll come back to it later. The next thing is: maybe you need ECS, a container orchestration service, definitely. Simply speaking, the more you use dockerization, the more advantages you get. So we have ECS. You will use the combination of Lambda, ECS, and Application Load Balancer more and more in your infrastructure architecture. So let's take a look at the changes. Here you can see there are ECS services and many Lambda functions, and even in your application workload the cluster will be replaced by AWS Lambda. The next thing is new database technologies. I asked this question before: why do you need an RDBMS in your architecture? That's why we have NoSQL solutions. Based on my experience, I think 80% of your DBMS workload can be replaced by NoSQL. NoSQL technology is growing fast, and we have a NoSQL service named DynamoDB. But when you use DynamoDB, you usually face one problem; I think you have faced it at least once: the hot key problem. The hot key problem is when traffic hits some partition intensively. That means your keys are not generated evenly, or your error-handling code does not follow exponential backoff logic. The hot key problem is a common problem when you use DynamoDB, and you should avoid it. But the one thing you should know is the formula for calculating the number of partitions. This is the formula: each DynamoDB partition supports up to 3,000 read capacity units and 1,000 write capacity units, and a single partition stores about 10 gigabytes of data. So the calculation is very simple.
If there are 8 gigabytes of data, with 5,000 RCUs and 500 WCUs, then the number of partitions for throughput will be 3, but the number of partitions for size will be 1. We take the bigger number, so the partition count will be 3. Capacity and size are evenly distributed across partitions, so each partition will have about 1,666 RCUs, 166 WCUs, and 2.66 gigabytes of data. When you increase these capacity numbers, every capacity is still evenly distributed: the number of partitions increases according to the formula above, and each partition gets equal capacity. But the problem actually occurs in the decreasing case. In that case, the number of partitions does not decrease; only the capacity units in each partition decrease. That means throttling errors can occur more often, yeah. So DynamoDB is a really great service, but there are a lot of good alternative NoSQL solutions in the world. One of my recommendations is Couchbase. Couchbase actually supports many great features: in its architecture, all nodes are equal, which means it is easy to scale your cluster, and there is no single point of failure, and it supports data replication across nodes. I don't want to go into Couchbase's architecture here. So my recommendation is this: you can easily integrate Couchbase with Elasticsearch. Couchbase provides plugins to integrate with Elasticsearch engines. That means you can easily build real-time search and analytics into your architecture. As a consequence, you get a huge advantage in performance, scalability, and availability. But maybe you still need your RDS, right?
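The partition arithmetic above can be written out as a small helper. This is a sketch of the formula as stated in the talk (3,000 RCUs, 1,000 WCUs, and 10 GB per partition), not an official AWS API; the function name is invented.

```python
import math

# Per-partition limits as stated in the talk.
RCU_PER_PARTITION = 3000
WCU_PER_PARTITION = 1000
GB_PER_PARTITION = 10

def partitions(rcu, wcu, size_gb):
    """Number of DynamoDB partitions implied by the formula in the talk."""
    # Partitions needed to serve the provisioned throughput.
    by_throughput = math.ceil(rcu / RCU_PER_PARTITION + wcu / WCU_PER_PARTITION)
    # Partitions needed to hold the data.
    by_size = math.ceil(size_gb / GB_PER_PARTITION)
    # DynamoDB takes the larger of the two requirements.
    return max(by_throughput, by_size)

# The worked example: 8 GB of data, 5,000 RCUs, 500 WCUs.
n = partitions(5000, 500, 8)   # ceil(1.67 + 0.5) = 3 vs ceil(0.8) = 1 -> 3
print(n)                       # 3
print(5000 / n, 500 / n, 8 / n)  # ~1666 RCUs, ~166 WCUs, ~2.66 GB per partition
```

Note how the per-partition share falls as `n` grows: that is exactly why halving your provisioned capacity, which leaves `n` unchanged, makes each partition's throttling limit so much easier to hit.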
Then please use Aurora. Aurora is a great cloud RDS service we created. Actually, in my personal opinion, there is no reason to use Oracle anymore. Yeah. If you have any questions, please ask me later. Okay. And there is another new database technology. You can add one more new database technology to your infrastructure: graph databases. Let's take a look at this. There are nodes and relationships, and there are so many relationships between the nodes. If you express that kind of information in a traditional relational database, it can be hell, right? Like the famous SQL join problem. SQL joins actually require a lot of computation, and they are executed every time you query the relationship. And the problem is that an RDBMS basically uses a B-tree algorithm. That means if you add more data to your database, the number of leaf nodes increases hugely, and it makes your search performance degrade very, very fast. So we need another architecture for handling this kind of information: graph databases. The famous one right now is Neo4j, right? Neo4j actually keeps its data in memory, in a cache layer. And it does not compute relationships at query time; instead, it stores pointer addresses that designate where the related data is stored. That means it does not need index lookups, and that makes it so fast. You can see it here: if you look at these tables, you can compare the performance of MySQL and Neo4j. It's much faster than MySQL. And this one is a code example to query Neo4j. So you can easily distinguish where you should apply Neo4j: if there is more connectedness in your data set, then I think Neo4j is much more appropriate than the traditional one. Now, let's take a look at the infrastructure changes.
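The query example on the slide isn't reproduced in this transcript, but the "pointer instead of join" idea can be sketched in plain Python: in a graph store, each node holds direct references to its neighbors, so a friends-of-friends query is a walk along stored edges rather than a join. The social-graph data below is invented purely for illustration.

```python
# Index-free adjacency, sketched: each node keeps direct references
# (here, dict entries) to its neighbors, so traversal follows stored
# edges instead of executing a join on every query.
friends = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice"],
    "dave":  ["bob"],
}

def friends_of_friends(graph, person):
    """People exactly two hops away: follow edges, no join, no index scan."""
    direct = set(graph.get(person, []))
    result = set()
    for friend in direct:                 # hop 1: follow stored edges
        for fof in graph.get(friend, []): # hop 2: follow edges again
            if fof != person and fof not in direct:
                result.add(fof)
    return result

print(sorted(friends_of_friends(friends, "alice")))  # ['dave']
```

The cost here grows with the number of edges actually touched, not with the total size of the data set, which is the property the talk attributes to Neo4j for highly connected data.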
There's a huge change: you can take the Elasticsearch and Couchbase integration, and in that case you don't need cache layers, because Couchbase itself supports a cache layer in each node. You can also use graph databases, and you will use the Aurora database engine too, and you will still use DynamoDB too, I think. Yeah, that is the change in the data layers. The next topic is analytics. I think you are already very interested in AI and deep learning. There are many applications; we already have Lex, Polly, and Rekognition. But if you want to apply your own deep learning training modules, then you should create your own training model yourself, usually by using MXNet or TensorFlow. Yeah, in this case, well, deep learning as a concept is actually very simple, and the theory started around the 1950s or '60s, right? Maybe you know that the circles represent neurons and the lines represent synapses. There are input nodes and output nodes, and this area is the hidden layer. We don't know how many layers are involved in this part; that's why we call it deep, a deep neural network. But the basic calculation of a neural network is very simple. Let's take a look at this. The most common neuron model nowadays uses the sigmoid function. You can see many numbers here; okay, let's look at all of them. There is a number, and you multiply this number by this number. This number is what we call a weight. So we get some number: in this case, with weights 0.8 and 0.2, we get 1 at the neuron. We apply the sigmoid function to this number, and we get 0.73. That is the feed-forward way, what we call forward propagation, and there is also backward propagation.
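The single-neuron calculation above can be written out directly. One assumption: the slide only shows the weights 0.8 and 0.2, so inputs of 1.0 each are assumed here to make the weighted sum come out to 1, matching the 0.73 on the slide.

```python
import math

def sigmoid(x):
    # The sigmoid activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights):
    # A single neuron: weighted sum of inputs, then the activation function.
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

# The slide's example: weights 0.8 and 0.2. Inputs of 1.0 are assumed,
# so the weighted sum is 0.8 * 1.0 + 0.2 * 1.0 = 1.0.
out = neuron([1.0, 1.0], [0.8, 0.2])
print(round(out, 2))  # 0.73
```

A whole feed-forward network is just this computation repeated layer by layer, with each layer's outputs becoming the next layer's inputs.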
Backward propagation is actually the recalculation of the weight factors, to minimize the margin of error of the outputs. So to train a neural network model, we repeatedly calculate the feed-forward and feed-backward passes, and finally you get the optimal weight numbers. That result is the training model, the learned model. That means you require a lot of compute resources when you create training models. So one of our recommendations is using AWS Lambda. If you use Lambda, you can calculate your training models in parallel and easily get the result. A very typical example is digit classification. You take many handwritten examples, then you can train and predict the digits in handwritten pictures. Yes, there is an input layer, a hidden layer, and an output layer. Basically, you train your model and then you have your own trained model. There are so many applications: face recognition, sentence translation, and so on. And there is a deep learning Lambda workflow: if you use Lambda, you can easily calculate the training model. I will show you the Lambda example. Here is a performance comparison: if you launch only one instance, it takes four minutes; comparatively, if you use Lambda, it took only 47 seconds. We are using many applications of this, actually: fraud detection, churn prediction, customer support. So finally, we get this architecture. You are going to apply many AI and deep learning applications in the analytics part of your infrastructure. So that is my opinion, my personal opinion, on the next cloud architecture you can get in the near future. Thank you. Yeah. Any questions? Yeah.
[Answering an audience question] In this case we don't use a GPU, just the CPU; yes, that comparison is based on CPU calculation. Yeah. And we support P2 or G2 instance types if you use the... pardon? P2 instances? No, not yet, yeah. Any questions? Okay. Thank you.