I'm Aleksandra. I work at Red Hat on a debugger team, and I'll tell you why you would want to use Valgrind and GDB together and what I've done to make that easier. So I'll be talking about GDB, about Valgrind, and about how to use them together. I think it's not well known that you can connect to Valgrind from inside GDB and ask Valgrind about things. Before some new features came to Valgrind, you used to need two terminals: you would run Valgrind in one terminal, you would run GDB in another terminal, and you would connect them together using vgdb (a sketch of that workflow follows below). I've written an article about how to do that and what advantages it has. I'd also like to talk a bit about how these two debugging tools differ. GDB as a debugger is interactive: you run your program under GDB, you can stop it, you can inspect variables, and so on. But Valgrind is not. When you are running a program under Valgrind, you can't change anything on the fly, and Valgrind gives you its diagnostics at the end. When using Valgrind and GDB together, it's different. Valgrind has a GDB server embedded in it, and from GDB you can connect to that server. When Valgrind has a problem with your code, for example an invalid write happens, or there is a conditional jump that depends on an uninitialised value, or anything else, Valgrind will stop your program and hand control to GDB. At this point, you can send commands to Valgrind; you can ask Valgrind, for example, whether a variable is defined or not. The current Valgrind release brings several new features which make connecting to Valgrind easier. With vgdb multi mode, you don't need two terminals to connect to Valgrind anymore: you run GDB, and you can launch Valgrind from inside GDB. The other new feature is a Valgrind Python script, which allows you to ask Valgrind about things in a very convenient way. Both of them are already in Fedora 38, and they work out of the box together with debuginfod. I'd like to say a few words about debuginfod. Debuginfod is an HTTP-based server which serves debug info packages, and various debugging tools can fetch them from it automatically. On Fedora 38 it's enabled by default: Valgrind will automatically fetch any missing debug info packages, and GDB will ask you whether you want to do that. In some cases, when you have a lot of debug info packages missing, this might take a long time, which can be inconvenient, so Aaron Merey is working on on-demand downloading. Now a few words about vgdb multi mode. First of all, what is vgdb? vgdb stands for "Valgrind to GDB", and it's a small tool which serves as an intermediary between Valgrind and GDB. Valgrind's GDB server starts only together with the program, so we need vgdb to set up the initial communication. For that, we use GDB's remote protocol: GDB communicates with vgdb over the remote protocol and passes arguments to it, and then vgdb launches Valgrind. Why would you want this? For one thing, you don't need two terminals to communicate with Valgrind anymore: you run GDB, and you launch Valgrind from it. It brings other advantages too: you keep your breakpoints, you have your GDB history, and it simplifies scripting a lot. But this feature is slightly experimental, and it has some limitations I want to talk about.
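For reference before going further, the classic two-terminal workflow mentioned above looks roughly like this (standard vgdb usage; ./prog stands in for your program):

```
# terminal 1: run the program under Valgrind with the embedded gdbserver,
# waiting for GDB and stopping at the first reported error
valgrind --vgdb=yes --vgdb-error=0 ./prog

# terminal 2: attach GDB to Valgrind through vgdb
gdb ./prog
(gdb) target remote | vgdb
(gdb) continue
```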
So one of the limitations is that the vgdb communication uses standard input and standard output, which means that your debuggee can't read from standard input and can't write to standard output: stdin and stdout are pretty much eaten by the GDB-vgdb communication. You can avoid this by running vgdb in a separate terminal and connecting them using ports, but that makes things more complicated again; you do still keep the advantages, though: breakpoints, GDB history, and so on. I'm working on addressing this: I'm trying to redirect GDB-to-gdbserver communication in general to use sockets instead of standard input and standard output. The other limitation of this feature is that while you can run Valgrind from inside GDB, it's not that simple yet: you need to perform some preparatory steps. We are working on implementing a "target valgrind" command, which would hide all the preparatory steps and make it easy to just run Valgrind. I also forgot to say that this vgdb multi-mode feature was designed by Mark Wielaard, and I worked with him on implementing it. So how do you actually run Valgrind from inside GDB? First of all, you run GDB as usual. Then you need to do these preparatory steps. You need to set the remote exec-file to your debuggee. Then, because the remote protocol (the GDB-to-gdbserver communication) assumes it's doing remote debugging while we are actually doing things locally, we need to set the sysroot to /. And this command, "target extended-remote | vgdb --multi", is the command that launches Valgrind. Here you can see --vargs: those are arguments passed through to Valgrind. Here we are using -q (quiet), because we don't want Valgrind to talk too much. If you want to debug your program repeatedly, you can do all the preparatory steps in one go by putting the commands in a file and loading it with GDB's -x option. And as I said before, we are working on "target valgrind", which should be here, I think, soon. The next feature, implemented by Philippe Waroquiers, is the Python extension for monitor commands. Monitor commands are the way to communicate with a GDB server: they are special requests that GDB sends to the GDB server, and the GDB server responds to them. There is no deeper integration; it's just text. Valgrind has its embedded GDB server, and Valgrind has a specific set of its own monitor commands. This feature allows better integration. First, it brings help: before this, you had to look up what the commands are, but now you can type "help valgrind", and GDB will tell you which commands are available. There is also auto-completion, so GDB will complete the command for you, and you don't have to remember it, even partway through typing it. And what I think is best: GDB will evaluate the command arguments. The thing is, say you have some possibly undefined variable and you want to inspect it; you want to ask Valgrind whether it is defined. Valgrind doesn't know anything about variables, it only knows about addresses, so before this feature you had to ask GDB to evaluate the address of the variable for you, and then you would feed that to the monitor command. But now GDB will evaluate any argument for you, which is very convenient. So how does it look in real life? You run GDB on some program, then you perform the preparatory steps: you set the remote exec-file, you set the sysroot to /, and then you use this command to run Valgrind.
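Collected in one place, the preparatory steps just described look like this (a sketch; ./prog and the -q argument are placeholders for your program and your preferred Valgrind arguments):

```
gdb ./prog
(gdb) set remote exec-file ./prog
(gdb) set sysroot /
(gdb) target extended-remote | vgdb --multi --vargs -q
(gdb) start
```

To avoid retyping, the same commands can go into a file, say valgrind.gdb (a hypothetical name), and be loaded with gdb -x valgrind.gdb ./prog.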
With some arguments; you can use other arguments, of course. You can specify which Valgrind tool to use, for example --tool=callgrind, or anything else. Then you can set a breakpoint at some point, or do anything else you want to do in GDB. You can see here that the Valgrind monitor script was loaded; it is only loaded when Valgrind's GDB server is used. You can also type "help valgrind", so let's do that. If you type "help valgrind", GDB will list the possible Valgrind tools you can use. Note that Memcheck is the default tool for Valgrind, but you can use any other. Memcheck deals with memory issues, while Helgrind, for example, deals with threading, and Valgrind has a bunch of other tools. Here you can see that you can get specific help for any of the tools, and it will list the commands that are possible to use. Another interesting thing that this Python extension brings is an option to change Valgrind options dynamically. Normally, when you run Valgrind, you set some options up front and you cannot change them on the fly. But when Valgrind is connected to GDB, you will stop at some point, for example at a breakpoint, and there you can add or change some options: you can add verbose, you can set trace-syscalls to yes, or change any other dynamically changeable option. Now I want to show how you may use monitor commands. The other thing I want to show is that you continue in GDB, and then you get interrupted by Valgrind, and its output is interleaved with GDB's output. Here Valgrind says there is a conditional jump or move that depends on an uninitialised value. And this SIGTRAP signal is a signal generated by Valgrind to stop your program; GDB then shows the line of code where Valgrind found the problem. There are two flags here, and one of them is uninitialised. To determine which of them it is, we can use the memcheck get_vbits command. Here you can see that we can feed memcheck the variable itself, and the Python script will evaluate the address of the variable for us. get_vbits reports which bits are defined and which are not: ones refer to undefined, and zeros refer to defined. So this flag is undefined; it's the one which was uninitialised. Another interesting memcheck command is who_points_at: it lists the references that point at a given address, so if you think a pointer might have leaked, it can help you. Another interesting Valgrind feature is stopping at exit: normally, when your program ends, Valgrind shuts down, but you can ask it to stop at exit, and at that point you can still use the monitor commands. I don't have an example with leaks, sorry; here I ask memcheck if there is any memory leak, and Valgrind says nothing was leaked. So, to summarize: using Valgrind and GDB together can be incredibly useful, but it tends to be a bit tricky to set up; you need to know how to set up the two terminals, and I think not many people even know it's possible. With the new features Valgrind brings in the current release, it's much, much simpler, but there are still some problems we are working on addressing. Here I'm linking my work in progress: I'm trying to redirect the GDB-to-gdbserver communication, when the GDB server is run locally over standard input and standard output, to sockets instead. I also wrote some articles about how to use Valgrind and GDB together; I have a code example there with intentional bugs, and I give examples of various monitor commands.
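As a concrete sketch of the monitor commands mentioned above, here is the classic syntax, where you pass a literal address and length yourself; with the new Python extension you can instead write commands like "memcheck get_vbits my_flag" and let GDB evaluate the argument. The addresses and the exact output formatting below are illustrative:

```
(gdb) print &flag2
$1 = (int *) 0x1ffefffd4c
(gdb) monitor get_vbits 0x1ffefffd4c 4
ffffffff                     # all ones: these four bytes are undefined
(gdb) monitor who_points_at 0x1ffefffd4c
(gdb) monitor leak_check full
```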
Also I have a personal blog, and sometimes I blog about my projects there. I'd also like to thank all the people who have brought Valgrind and GDB closer. Aaron Merey is helping implement debuginfod support for Valgrind and GDB. Philippe Waroquiers is the author of the Python monitor commands, and he was also the one who integrated the GDB server into Valgrind. Mark Wielaard designed the vgdb multi mode, and we worked on it together. And Andrew Burgess is helping me with my stdout-to-sockets redirection on the GDB side. Thank you for listening. Questions?

Feel free to ask anything; just say your question and Aleksandra will repeat it into the microphone. Okay, then I will make it short. So Valgrind is a simulator, right? And basically the program gets translated into some intermediate code. So my question is: when you are debugging a program using this mode, do you still have access to stepping through instructions, through the assembly code? And do you see your program as if you were debugging it directly? Yeah, I have to repeat your question: when connected to Valgrind, do we still have access to GDB stepping, and do you still see your program transparently? Yes. The only extra thing you see is your program's output that it writes to standard output. Other than that, it might contain some bugs, because it's a completely new feature; you might try it and you might find bugs, but stepping should be possible. So the question is: is it possible to pass both Valgrind and GDB to GCC? I wasn't aware of this feature, so I haven't tried; sorry, I'll try it. Thank you for the question. Yes. I have to say that I haven't used that combination myself; I was using Memcheck when developing this, so I don't know, I would have to try. Can this be adapted to microcontrollers? I mean, you can use GDB with microcontrollers, so if there is Valgrind for the target, sure: if you have enough memory to run Valgrind, I can see no reason why it won't work, but I haven't tried it on microcontrollers myself. It should work; you can try. Great, thank you. Yeah, I'm not sure at this point, sorry; I would have to look up the documentation. So the question was: if Valgrind stopped because there was a use-after-free, how exactly would we ask Valgrind, using the monitor commands, to tell us about it? I'm not sure; I don't know every monitor command off the top of my head, I would have to look up the documentation. Yes? Yeah, I haven't tried. So the question was: is it possible to integrate Valgrind with LLDB? And my answer is: I haven't tried, but I think it's an interesting thing to try. Thank you for the question. This one? Yes. Can we define which errors we want to stop at? Okay, so the question is: can we ask Valgrind to stop only at specific places? I think we would have to use suppressions; there is a suppression mechanism that you can use with Valgrind for those cases. Yes, I think I would use suppressions. Yeah, that sounds great; maybe we should start working on that.

I would like to remind you that we have a Matrix room for this specific room where you can ask questions, or you can of course ask questions at the end, or during the talk if that's okay. It's okay. And without any further ado, we are going to start here with distributed SQL transactions.

Thank you. Thank you for coming. I'll be talking about how to make your own little transactions across multiple servers using just Python and Postgres. So let me introduce myself.
My background is cheminformatics; I worked on software for drug discovery. Then I co-founded a machine learning startup. Then I worked for Twisto, a financial startup; I was there from when it was about a 20-person company until it was sold, and it scaled technically across multiple countries. And now I work for a company called Monitora. We do media monitoring. Again, it's a small company, mostly Czech Republic and Slovakia, and we are working on building a worldwide product. So that's what I mostly do: I help companies scale up from a small startup to a major company on the technological level. And with my wife, we also organize courses, mostly for people who want to learn programming but who do not want to become programmers. So what do we do in Monitora? We scan the internet. We watch for news: printed news, online news, social networks, and so on. So we can answer the question "what do they write about me?" if I'm a big company, a politician, whoever; we can give you the top news from today; we can perform analysis on the articles. On the back end, it means we need to do a lot of crawling. We do OCR, a lot of interesting tech. The deduplication of articles across the web is an interesting problem. We do a lot of machine learning to recognize what an article is about and to be able to summarize it. We search large amounts of text. And what is interesting in this company is that we are a small team at the moment, six people on the back end, and we want to remain a small team as we build a worldwide product. Currently we are looking for one DevOps person, if anyone is interested. What scale are we talking about? At the moment we cover mostly Czech Republic and Slovakia, partially Germany and other countries, and we have more than six terabytes in a Postgres database; we are aiming for 50 or 100 times more, to be able to scale it to the whole world. So for this we use a distributed database; we want to scale horizontally, because one big server would be possible but a bit unwieldy. We chose Citus. It's a Postgres extension, and you can put only some tables into the distributed mode, so you have a clear migration path from your existing Postgres database. But some of the more complex joins between tables which live on different servers might be limited. And I like to understand how the technology I use works on a deeper level, so here we will be doing something similar to what Citus does, but implementing it ourselves. So let's build a toy example: a piggy bank. We will have some users with their accounts, each account has a balance, and we want to be able to safely transfer money between the accounts. But because we are a piggy bank, we do not lend money; we do not allow overdrafts. So we want to be able to move money and not lose any. And we will make it distributed across multiple servers. First, we need to decide how we will distribute the data themselves. For this, we can use the sharding approach and just create the same table structure on every worker server. So we have two servers here, and each of them has a table of accounts with their balances, but each server stores only half of the data. And how do we decide which server should store which row? We simply take a hash of the account name and apply some operation that deterministically decides where to put it, for example the hash modulo the number of shards. In Python, we can do it this way: we define which shards we have and which workers they are placed on, and we define two helper functions, sketched below.
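A minimal sketch of those helpers, assuming the two-worker layout from the slides (the names worker_for_account and sql match the talk; the connection strings and the psycopg2 driver are assumptions, and the code linked at the end of the talk is the authoritative version):

```python
import hashlib
import psycopg2  # assumed Postgres driver

# which worker (Postgres server) hosts which shard
WORKERS = {
    0: "host=worker0 dbname=piggybank",
    1: "host=worker1 dbname=piggybank",
}

_conns = {}  # one persistent connection per worker

def _conn(worker: int):
    """Open (once) and return the connection to a worker."""
    if worker not in _conns:
        _conns[worker] = psycopg2.connect(WORKERS[worker])
        _conns[worker].autocommit = True  # we issue BEGIN/COMMIT ourselves
    return _conns[worker]

def worker_for_account(name: str) -> int:
    """Deterministically map an account name to a worker/shard ID."""
    digest = hashlib.sha256(name.encode()).digest()  # stable across runs
    return int.from_bytes(digest[:8], "big") % len(WORKERS)

def sql(worker: int, query: str, params=()):
    """Execute SQL on the given worker; return rows if the query produced any."""
    cur = _conn(worker).cursor()
    cur.execute(query, params)
    return cur.fetchall() if cur.description else None
```

Creating an account is then just routing an INSERT to worker_for_account(name), and the bank-wide total is a loop over WORKERS summing SELECT sum(balance) FROM accounts.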
The first one we call worker_for_account: it takes the name of an account and returns the ID of the worker, i.e. the ID of the shard. And there is another helper function, called sql, which takes the ID of a worker and SQL commands to execute on that worker; it will come in handy later. I'm not showing the full implementation of the second function; there will be a link to the code at the end if you want to play with it. So first, we want to put some money into an account; we want to create an account for a person. That's easy: we just need to know which worker manages that account, and we route the INSERT query to the right worker. We know the hash function, we know which worker it is, so it's a matter of a single server. Now we want to do an operation which spans multiple servers. For example, we want to know how much money people have at our bank, i.e. how much money we might be forced to pay out if they come and ask for it. Again, that's kind of easy: we just list all of the workers that we have, go across them, ask each for its sum of the balances, and report the total. That's fine. Okay. So that's fine; case closed, we can go home. We have our big bank across multiple servers. But what if one of the servers fails? I will be talking mostly about what happens when something fails, because that's why we do transactions. So one of the workers went kaboom and we can no longer connect to it. That's bad: we lost half of our data, because we have two workers. How do we deal with that in Postgres? We can do replication, a standard tool for a Postgres database. We replicate data from one server to another, and we do it independently for each of the shard workers. So when one of the servers fails, we have some mechanism to route the traffic to the other one. It can be a proxy, it can be a virtual IP that moves, it can be a DNS name: something that allows us to say "this is worker one; worker one is now on a different server". There are standard tools for Postgres that do this. We use Patroni; if you use Kubernetes, there is the Crunchy Postgres operator, or Patroni also has Kubernetes support. This is something that you'll probably want to do, because if you have multiple servers, there is a higher probability that at least one of them will fail. But it's a problem that can be managed. A short diversion: the CAP theorem. Just raise your hand if you have never heard of CAP. Okay. It's a theorem which says that in the case of a network partition, when some of the servers cannot reach the other servers (maybe the network failed somewhere, maybe they are just too busy), you have to choose between either consistency of your data or availability. Because if you cannot reach the other part of your cluster, that means that either you stop and wait until you can reach it, because it may have newer data than the part you can reach, or you say: oh well, we know some, maybe stale, value of the data, so we can serve it from the part of the cluster we can see. So there are a lot of trade-offs and a lot of places where you can decide whether you want to lean more towards the consistency or more towards the availability side of the problem. For example, when we are doing replication, we can do it either synchronously or asynchronously.
If we make the replicas synchronous, that means we will need to wait until all the replicas have confirmed that they have the data which we are writing. That means if a replica fails, we need to wait until we know that it either failed for good or has recovered; during that time we cannot continue writing. That puts us more towards the consistency side. Or we can make the replicas asynchronous, and we are more available. If you are interested more in this topic, and in these systems and their trade-offs, there is jepsen.io. It's a company by Kyle Kingsbury, a person who is very knowledgeable and has very accessible talks and blog posts about this topic. So we made a replica, and now we can add an assumption or two to our piggy bank. In order to be able to build the distributed transactions, we will make the assumption that a node, a server, doesn't lose the data it committed: once you have received confirmation of a commit, a database commit, it will never be lost. And if some node fails, it will recover eventually, maybe from a replica, but it will recover. So we will rely on these assumptions. Okay, so this is our function to transfer money from one account to another. The accounts may be on different servers, so we need to find the worker for the destination account and add the money that we transfer, then find the worker for the source account and subtract the money that we transfer. It's fine, right? What could go wrong? Nothing, no... unless the server fails. So yeah: we added money to one account, and the other account didn't get debited, because we got an exception in our Python code; but the first write is already committed, so we do nothing and just lose money. This could also happen when we try to transfer a larger amount of money than the balance covers, because in the bottom part there is a database constraint saying that the balance must never go negative, so the second update would fail. Here, again, we would lose money. So how can we do it better? In plain Postgres, with just one server, you would insert BEGIN and COMMIT to wrap those two commands in a transaction, and it either happens all the way or doesn't happen at all. And this is something that we want to do across the servers. Luckily for us, Postgres has a feature that supports exactly this use case: two-phase commit. First, we prepare the transaction across all the servers. We want to know that it will succeed: the servers guarantee us that they have the data, that they will not lose it, that the transaction doesn't violate anything, and that it will safely commit. Once we have this confirmation from all the workers that are part of the transaction, we commit them all, again sequentially. So the transfer function would now work like this: for each of the servers, we create a transaction, we execute the SQL code relevant to that particular server, and we append this transaction to a list of transactions to commit in the second step. So it's fine, right? It works: if we didn't get any exception, we go over the list and commit everything; if something went wrong, we roll back those transactions that we have prepared. Fine, right? What could go wrong? Something can still go wrong. What if we fail? Remember, we have a list of transactions in memory, and the computer where we run this Python code can fail at any time; there may be a bug in our code, the computer may catch fire. So these are four discrete steps, sketched below.
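Here is a sketch of those steps in Python, reusing the hypothetical helpers from above. PREPARE TRANSACTION, COMMIT PREPARED, and ROLLBACK PREPARED are real Postgres two-phase-commit statements (they require max_prepared_transactions > 0 on the workers); the surrounding code is illustrative and, for brevity, assumes the two accounts live on different workers:

```python
import uuid

def transfer(src: str, dst: str, amount: int):
    # global transaction ID shared by all workers; generated locally, so safe
    # to interpolate (utility statements cannot take bind parameters)
    gid = str(uuid.uuid4())
    steps = [(worker_for_account(dst), +amount, dst),   # credit destination
             (worker_for_account(src), -amount, src)]   # debit source
    prepared = []  # workers where phase one succeeded
    try:
        # phase one: do the work, then PREPARE on each involved worker
        for worker, delta, account in steps:
            sql(worker, "BEGIN")
            sql(worker,
                "UPDATE accounts SET balance = balance + %s WHERE name = %s",
                (delta, account))
            sql(worker, f"PREPARE TRANSACTION '{gid}'")
            prepared.append(worker)
    except Exception:
        # something failed before everyone prepared: undo what was prepared
        # (a real version would also ROLLBACK the in-progress transaction)
        for worker in prepared:
            sql(worker, f"ROLLBACK PREPARED '{gid}'")
        raise
    # phase two: commit everywhere, sequentially
    for worker in prepared:
        sql(worker, f"COMMIT PREPARED '{gid}'")
```

If the coordinator dies between the two phases, the prepared transactions survive on the workers and keep holding their locks, which is exactly the failure mode discussed next.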
So we have three places where we can fail, and because the database guarantees us that the prepared transaction will safely commit, it still holds the locks. For example, if we changed a balance and called PREPARE TRANSACTION on one server, then we cannot modify that account from any other transaction at all. So if we fail after the first prepare, that account on that server will be locked forever, and we will not be able to touch it until an admin comes and unlocks it for us. So what do we do? How do we fix this problem? Well, we, the coordinator who manages the other servers, have to be a database as well: we have to remember what we did, so that at each of these steps we are able to recover and continue. So, yeah, we extend the assumption that the nodes don't lose committed data and don't fail forever to include the coordinator node, i.e. us. Even though we are not storing any data for the piggy bank itself, we need to store some data regarding the distributed transactions. Yeah, this first part is something that I have been talking about already: the transaction may take a longer time than you are used to. From the moment you have made some change on one of the servers until the moment all the servers have done their work and you are committing the prepared transactions, all the locks held inside those transactions stay held, so everyone else has to wait for them. And if the coordinator does not serialize the access, or does not do something clever with regard to the transactions, the changes on the different servers may become visible non-atomically. Because, as you see, when we have these steps where we commit the prepared transactions, there is no way to do them all at one single moment. We can guarantee that they will either be applied all or none of them, but we cannot guarantee that if someone goes directly to a worker server and asks it for its state, it will be synchronized with the other workers. And again, this is where we touch the CAP theorem. We can let the clients do this without synchronization, and they may get a non-consistent view of the database; or we can handle this somehow in the coordinator, make everyone go through the coordinator, and somehow serialize the access.

So, this is all for the talk. Thank you for listening. There is a link to the example code that I was showing; you can play with it. I can do a short demo now, and if you are interested in more, find me here in the hallway after the talk, or here in this room, as there will be no talk after this one, and I can go through it with you in more detail. So, I will just show a small demo of how it works. Actually, are there any questions at this moment? I may answer the questions now. Okay. Yeah. So the question is: we are handling sharding at the application level and replication at the database level, so why not delegate sharding to the database level as well? Actually, this is what we do with Citus. There is this tool called Citus, the extension to Postgres, and you interact with it in plain SQL. It creates tables that look like they are there, but they are just empty, and it parses the SQL queries and passes them on to the other servers. That's what we are doing in production. But somebody had to write it; the database is an application as well. So here I am just making a toy example with Python of how we would do it, but what we are actually using in production looks like a database.
But there is application code doing exactly that. So, the question is... I'm not sure if I understand the question, so I'll try to repeat it. The question is: how does one shard know about the other? How do I make a transaction happen at all if the shards don't know about each other? Yeah, that's where the coordinator comes into play. The coordinator knows about the shards, and the coordinator is the one who says: okay, this shard, please subtract the money; okay, this shard, please add the money; okay, everything is fine, this shard, commit it, this shard, commit it. There was a question. So, the question is how to get a consistent view of the database across all the transactions. I'm not sure how this actually works in Citus. It definitely takes some locks by itself, so it isolates some transactions from the others; I am not sure at which level exactly. At the most strict level, you would have to serialize all the transactions one after another, but you can be more clever and wait only for the transactions that conflict with each other, or you can just let it be and accept that at some point some of the data might be inconsistent. I am not sure how exactly this works in Citus, but as an engineer developing this kind of system, you have the possibility to make different trade-offs and decide for yourself, depending on whether the application can tolerate it or not. So, yeah, if you leave it to the developers, there may be a ton of bugs, yes; but if you are architecting a system, you need to know what kind of inconsistency it can tolerate, because consistency at all costs is slow, very slow. For example, in our case with the online articles, we do not care that much about consistency between the different articles themselves. We do care about the consistency between an article and its related metadata. So we care about consistency in some pockets of the database, and for us, being consistent within one shard is actually usually okay for most of the operations. In our case, as a media monitoring company, we do not usually need consistency across the shards; for some cases yes, but they are rare. Yeah, we prefer to be able to... Yeah, I would be happy to discuss it with you; we avoided several things. Okay, any other questions? Uh-huh. Yeah, yeah, this is exactly how it works. The coordinator remembers what it did, stores it in a database table, and if it fails and boots up, the first thing it does is look at the transaction log, clean up the transactions that should not be there, and commit the transactions that should be committed. Yeah, so, uh-huh. The question is whether you have one access point, one coordinator, for all the databases. You have to go through some coordinator. Again, depending on your trade-offs, on how much consistency you need, the coordinator may itself be distributed: if you do not need that much consistency and you want higher throughput, you can have a distributed coordinator, but if you want more consistency, you go through one coordinator. This is not about Citus; this is about the problem space in general. If you want more throughput, you can get it with more coordinators. You need to have some place which knows about the shards and is able to coordinate them, but it doesn't have to be a single one if you do not need to synchronize a serializable point of view over the whole database.
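To make the recovery behaviour just described ("look at the transaction log, clean up, commit what should be committed") concrete, a sketch of what the coordinator might run at startup; load_txn_log and mark_done are hypothetical helpers backed by the coordinator's own transaction-log table:

```python
def recover():
    """Resolve in-doubt prepared transactions after a coordinator restart."""
    for txn in load_txn_log():  # hypothetical: unfinished entries from the log
        if txn.state == "all_prepared":
            # every worker acknowledged PREPARE, so the decision was to commit
            # (a real version would also tolerate an already-committed worker)
            for worker in txn.workers:
                sql(worker, f"COMMIT PREPARED '{txn.gid}'")
        else:
            # we never reached the commit decision: roll back whatever prepared
            for worker in txn.workers:
                try:
                    sql(worker, f"ROLLBACK PREPARED '{txn.gid}'")
                except Exception:
                    pass  # this worker never prepared the gid; nothing to undo
        mark_done(txn)  # hypothetical: mark the log entry as resolved
```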
[A question from the audience, partly inaudible, about maintainability: making every developer think about how to make consistency work seems like a big problem.] So, the question is about maintainability: you said that this approach is not maintainable. Yeah, I agree with you that this approach is not maintainable, and this is not Citus. What I was showing, the Python code, is my demo here for you: it's about a 100-line Python script that implements distributed transactions with two-phase commit. It's on GitHub and you can play with it. This is not Citus. Citus is something that you install into your Postgres, and for the most part the tables behave the same as they were behaving before. It has some limitations: it cannot process some of the SQL queries, and it cannot enforce some of the foreign key constraints; it cannot, for example, enforce a unique constraint across shards. But for the application developers, it looks like any other SQL table. So, the next question is whether we should not use a lock for locking the user balance. Not really, because when we use this kind of SQL query, where we say balance = balance + something, this is atomic, and as soon as you do it, the row is locked; this is how Postgres behaves. This is why we are doing these two phases. At first, we try to do the operation at all of the servers; we do not commit, we just prepare. If we do just PREPARE TRANSACTION, then, if you run Postgres at the read committed isolation level, the change is not visible to the others yet; it becomes visible only after you COMMIT PREPARED. But the COMMIT PREPARED, because you do it across multiple servers, is itself not atomic; you cannot do it atomically, there is no way. So, if you need to maintain a consistent view of the balances across the whole system, you have to serialize it in the coordinator. As for the coordinator, well, here in the case of this small demo, the question was whether we use an event sourcing approach or something different. You can look at the code: there is no event sourcing, we just log the transactions. If you look at the code, there are four Python functions, they are just called directly, and as you go across these steps, you just log the state, so you are able to reconstruct it. So, yeah, you could say it's event sourcing in some way. If you are interested in more, I'll be here for some time, because I do not want to compete with the lunch queue, so I'll be able to show you the demo itself. Okay, so, what is the question? Yeah. So, the question was whether we thought about using Mongo or something that is horizontally scalable by design, as opposed to Postgres, where the horizontal scalability is bolted on. Actually, yes, we thought about using Elasticsearch as the primary source for the data, but there are two problems with that. One is that we already have quite a big application which is written with Postgres in mind, several years old and so on. But the bigger problem is that these databases, like Elasticsearch, are not well optimized for relational data. Originally I thought, let's move everything to Elastic, because, yeah, an article has some structure; it's a document, after all. Not really.
You have authors, you have news sources, you have a lot of metadata, which is highly relational. It surprised me, actually, how relational it is: we have about 500 tables. And it's much easier to work with a SQL database. So, instead, we evaluated Citus as the extension of Postgres, and two Postgres-compatible solutions which are horizontally scalable from day one: CockroachDB and YugabyteDB. All three are cool. If anyone has any more questions, you can find me outside, right? So, thank you. Thank you.

So, thank you for coming. I hope you had a good lunch. Before we start, I want to remind you that we have a Matrix room where you can ask questions if you don't want to ask them out loud; we'll read your questions and then we'll answer them. But, of course, you can ask questions out loud as well, right? Yeah, that's correct. So, let's get to it. We have Tomáš and Anton, and this is the OpenSSL journey.

Yeah, welcome. Welcome, everyone. So, this is the OpenSSL journey. We have Tomáš and me representing the OpenSSL Software Foundation and the OpenSSL Services company, which I will talk about in a bit. We will start with the governance and a bit of community, and we will finish with the technical updates. Well, let's go. About OpenSSL: OpenSSL is a cryptographic toolkit which I think all of you are knowingly or unknowingly using, directly or indirectly. So, we will be happy to answer any questions you may have, and we will be specifically happy to find the people around who are using the OpenSSL toolkit as developers. This is basically what it will be about. And let's go. Yeah, what is OpenSSL? It is a robust, commercial-grade, full-featured toolkit, which is available to everyone; it is open source. There are a lot of companies around the world using it for doing big business, small business, et cetera. It's also a 25-year-old library, so there is quite some experience behind it. And a lot of companies like to rely on us in their work with governments: we provide shortcuts to getting governmental certification for the cryptography needs a company may have. Yeah, so this is what I already mentioned; this is what we represent. The OpenSSL Software Foundation is a non-profit entity which represents the project in legal capacities and basically takes care of the trademarks, the copyrights, and managing the donations which are coming to our project. And OpenSSL Software Services is everything commercial. It's a for-profit organization; this is basically the organization which pays us to do the work. It's also the entity which is used to sign contracts with the customers for paid support, and it is also the vendor of record with NIST for the FIPS compliance certification. Speaking of governance: we are basically a small organization, and I wonder how big people think we are, but we are just eight people. It's six engineers, right? Yeah, six engineers plus me, doing the management work, and we have a lady who takes care of the business operations. So, speaking of the governance: some of the people who are paid resources of OpenSSL form the OpenSSL Management Committee (OMC), which is a small group of people who make the management and strategic decisions, everything about business, financial, and governance decisions, and basically maintain the project resources.
The OTC, the technical committee, which is much more interesting, is also represented by paid resources of OpenSSL, but they are working hard to get more people on board: representing communities, representing customers, representing the world, let's say. And we have quite a straightforward process for getting into the OTC, and we're looking for people who are working with OpenSSL so that they can potentially join the technical committee. It is very important for us to extend our community outreach and have a good representation of OpenSSL developers and users outside of our small group of engineers who work on the project. The OpenSSL Technical Committee is the technical voice of the project: it maintains the engineering processes, makes the technical decisions, decides on the roadmap, et cetera. The working group is a new entity which was created recently, where we tried to put OMC and OTC resources together, but it is limited to the people who are working for OpenSSL directly, at least for now; we have invited an OTC member who never joined, but we may think of extending it. The working group assembles every week, and that is where we tackle the difficult problems, where we discuss what is the best way to do the community outreach, where we should go and talk, how we should do presentations, et cetera. And this is where we come to another interesting point. Earlier this year we decided to come up with a mission statement and values for the project. I won't be reading it; take your time to read it. The idea was to actually understand who we are and what we are doing, so that we have good guidance for our decisions. It turned out to be a very important milestone for us: to actually know how we want to develop in the future, what we want to accept into the project, who we want to accept into the project, how we want to work with the communities, and how we need to rework the policies we have to make them more open. Every decision and every discussion we have nowadays, we try to see through the prism of these values. We shared these statements and values with the wider community, we got very good feedback, and this week we adopted them. So from now on we live by these statements and values, and I hope they will serve us well. So, this is what I partly mentioned: we are trying, as a project, to do much better than we did in the past by reaching out to the communities, showing our roadmap, and talking about the priorities of the project. We want to hear back from pretty much everyone. We are working hard right now to make our roadmap public, which will happen this year. You're very welcome to follow OpenSSL on LinkedIn or our blog on openssl.org; there will be more and more updates as soon as we reach certain milestones in our openness. Part of the things we are changing is the way we do releases. The picture is not exactly final, it's under discussion, but what is important from it is that we're going to do releases frequently. We are switching to a time-based release schedule: we would like to release every April and October, which is a much more regular cadence than in the past 25 years. Yeah, and the idea is that time will prevail over the feature sets. We will of course have a discussion about what we would like to have in upcoming releases, but not everything may make it into the release. We will see how it will go.
We are going to start working with a release definition phase: this is where we will define what we would like to have in an upcoming release. We're going to have a discussion, and somewhere in the middle of the phase we will share our plans on the website and on every social media resource we can touch. So, yeah, we're really, really looking forward to anyone who is working with OpenSSL in any capacity actually giving us feedback. And this is you. So, now to the more technical part of the talk. Let's start with what was in OpenSSL 3.1. The release was done in the middle of March, and it was a very small release, because we were already working for a long time on the QUIC support, but then we decided we needed one more release in between, basically to support FIPS 140-3. So that was added there. And it was also decided that we should have something more, and we decided that, yeah, the performance problems of the 3.0 release could be improved. But some of the changes were quite invasive, so we decided not to ship them as bug fixes for 3.0 but to do it in the form of the 3.1 release. Yeah, basically all the pluggability and flexibility that was added in the 3.0 release, like the added support for library contexts, which you can use to almost completely isolate different users of OpenSSL within a single process, comes at some cost, especially with multi-threading: you have to have some quite big, quite pervasive locking there and so on. And it was also designed from the start with some maybe not that good decisions around this flexibility. So there was one big change, done by Hugo, who is over there in the audience, which basically made the library contexts fight much less over locks and so on. That was one of the big invasive changes in 3.1. Yeah, on the performance improvement side there is still a lot to improve, but we are getting better, and hopefully future versions will be even better. In some cases it's already in the master branch: there are some changes that make some jobs done with OpenSSL even faster than 1.1.1, but yeah, these are exceptions. So 3.1 is released, and the FIPS 140-3 module is in validation. Of course, because FIPS 140-3 is a new standard, there are many new things in there; it will take about one year from the submission before 3.1 is officially validated by NIST, so it's more like sometime next year when you will get 3.1 validated. Now about the 3.2 release. As some of you probably know, the main focus of 3.2 was adding QUIC support, and that's quite a huge thing, of course, but now we are in the late stages of its development. The implementation is almost feature complete in terms of the QUIC client; the server was not targeted for 3.2. And what we think is the biggest advantage, or the good thing for our users, is that the API that you can use for QUIC very naturally extends the existing API. As I already said, there are further performance optimizations in 3.2, and there are a lot of other small features, some of them not that small even. There are things like certificate compression, and, basically, implementations in OpenSSL can now use multiple threads: the internal code of OpenSSL can spawn threads, and the application can limit the number of threads OpenSSL will spawn by itself.
That was required, for example, not only for Argon2 but also for some of the features of the QUIC implementation. It's not mandatory, but especially some of the users of QUIC in OpenSSL can take advantage of it. About the QUIC API: this QUIC API is nothing completely new; it builds on the SSL API that's already there. It should be very simple to write a blocking QUIC client, and you can access QUIC streams from multiple threads simultaneously: the access to the internals is properly locked and so on. So you will be able to, for example, read from one stream from one thread and write to another stream from another thread, and that should all work nicely. You will use the familiar calls, like SSL_connect to establish the QUIC connection, SSL_read to read from streams, SSL_shutdown to close the connection, and so on. But of course there are some new APIs which are needed for having multiple-stream support in QUIC: you create a new stream initiated by the local side by using SSL_new_stream, you accept streams from the peer with SSL_accept_stream, and SSL_stream_conclude is there to indicate the end of a stream. And that's basically it. Before we open for questions, one more important thing: as a company, we would like to grow. We have a lot ahead of us. If you are working with OpenSSL and you have questions, you're welcome; and if you would like to explore the options of what we can offer in the company, we'll be happy to talk to everyone. Any questions? And also, most importantly, I have a bag full of t-shirts as a motivator to ask questions; I think we should be able to provide a t-shirt to everyone who asks one. If we don't have enough time here for the Q&A, find us outside; we are available for any question.

Sorry about that, I did not explain that. So QUIC is a new protocol which basically replaces TCP plus TLS. In some cases it's faster, but above all it provides those multiple streams which don't block each other, so you can avoid the so-called head-of-line blocking. The typical usage is the web, where you have multiple streams of data flowing to you from the server, like pictures, scripts, whatever, and you get all those independent streams of data independently. You can transfer them independently, and if some packets are lost which contain some of those streams and not all of them, then you can basically continue downloading the others; only the blocked data waits until the packet is retransmitted, and it's not like everything is blocked. If you have touched HTTP/3, you have probably touched QUIC as well: QUIC is basically the connection layer behind HTTP/3. But QUIC is general; it's not directly tied to HTTP/3. How is testing done in OpenSSL and OpenSSL releases? There is CI, basically. We have CI, which is mostly on GitHub, the GitHub CI, and we also have some internal CI, which is based on Buildbot. It's basically nothing special, but within the Buildbot we are building on a lot of operating systems, and with a lot of other options, because there are so many ways you can build and configure OpenSSL; there are hundreds of jobs that run on GitHub. Yeah, so basically he's asking about FIPS mode in OpenSSL: what's the difference between the 3.0, 3.1, and 1.0 FIPS modules, and so on.
The difference between the 3.0 and 3.1 FIPS modules... I would not call it FIPS mode, because "FIPS mode" is something that's more of a Fedora and Red Hat-centric thing. The 1.0 module actually had a FIPS mode, where you switched the library into the FIPS mode, and that basically meant that you were calling the implementations from the FIPS module. But in 3.0 we added the so-called providers, which are basically those algorithm implementations, isolated from the rest of the library. One of them is the FIPS-validated implementation, which is a loadable module, a shared library, which you can load. If you set up the library to call algorithms only from this module, then you are basically in the FIPS-validated mode. And the difference between 3.0 and 3.1 is the targeted FIPS version: the 3.0 version targets the FIPS 140-2 version of the standard, and 3.1 targets FIPS 140-3. That's a very long topic; we can talk about it later. Yeah, it's a good question: what's the legal status of those old extended-support releases? We are doing so-called extended support releases, which contain new fixes for security issues or other critical issues that our customers, premium customers, report to us. But those are still open source. It's very similar to what, for example, Red Hat does, because it has paid support for open source software. The difference is, of course, that we don't release the code of these new patches to the public. We give it only to customers, and we ask them not to share it with others publicly. They are not bound by the license to not share it, and they are not bound by the contract, but we are asking them not to do it, and we can, of course, terminate the contract if they do it. So they won't get the future ones if they share the patches. The next question was about the security process of OpenSSL; I think it's actually not that new. Let me repeat the question: what are we doing as security practices, as development practices? We use static analyzers; we use Coverity. We use fuzzing; we are tested regularly under OSS-Fuzz. We have a coding guide for how to properly write code for the OpenSSL source, and we have CIs; we use ASan, TSan, and the memory sanitizer in the CI jobs. So, yeah, I cannot tell which of those practices is the most important for us. There are a lot of new tools, and companies trying to reach out and say: let's do more coverage, let's do this fuzzing with another tool; and it's kind of a problem for us to understand everything and to understand whether it's useful or not. But in general, the perception is that we're doing all right at this point. Yeah, and as I said first, we have a pretty strict code review process when we are accepting patches; that's also very important. How much time do we have? That's mostly about the future plans. Yeah, the question was when the server-side QUIC support will land in OpenSSL. It's not finalized yet; we would probably like to have it in 3.3. Not sure if that's realistic or not, but hopefully; at the latest, 3.4. Yeah, so the intention for the future is to do the most, actually everything we can, in public, to make it public.
So, the problem at this moment is that we're migrating from one tool to another tool to actually manage the roadmap. It's a question of how much we would like to share with the OpenSSL community; there is nothing to keep secret, with the few exceptions of the embargoed security fixes we do. But to answer your question: right now we are doing almost nothing to be really open, and we are doing our work right now to change that. And if we're failing, reach out to me and tell me that we're failing there. I can add something more: the OTC meeting minutes are public; they are in a repo on GitHub. Maybe nobody knows where to get them, but they are there. And of course, the meeting minutes are just the most important things from the meeting; it's not everything that everybody said. Yes, the question is basically whether it's possible to integrate the QUIC API into event loops like epoll. Hugo would maybe be better placed to answer that, so you can talk with him, but yeah, it's possible. That's a very open question; we can have a discussion outside. Last question, okay? Yeah, the question was a more generic one: how does testing work in such a complex project like OpenSSL, or in projects which integrate OpenSSL; how do you test, how do bugs escape? Nice to meet you, by the way. The problem is that it's impossible for us to do application testing, basically, because there are so many applications. So we depend on... basically every widely used open source software has this problem: we depend on users to actually try it and test it before the release, before the final release, and report bugs. If they don't do that, I don't think we have a chance to catch it. OpenSSL has an extremely huge API; the legacy API is so big that it's impossible to really envision what the problems in new releases can be. The 3.0 release was the most problematic in this regard; in 3.2, if we have some bug in QUIC, it's a new thing, but 3.0, which was mostly refactoring, that was the problem. Thank you, everyone, for coming. So everyone who asked a question, or who was asked a question, welcome: you will get your t-shirt. If you have more questions, come find us.

This time we have Arati Kumar, from Red Hat. Hey everyone, good afternoon. Welcome to DevConf.CZ 2023. Thank you for joining; we are really excited to have you all here today. I'm Arati Kumar, and I'm a software engineer at Red Hat. Before starting the presentation, let me ask you: are there any developers here? Great. Are there any front-end developers, or UI/UX designers? Also, are there any students or freshers? Okay. Yeah, as developers, we have the power to create digital experiences that are inclusive of all users, regardless of their ability or disability. In this presentation, we will dive into the world of web accessibility and understand what it means to be accessible to everyone. Trust me, the session is going to be really simple. I request you all to be mindful, sit back, relax, and let's learn together how to make the internet a more inclusive place for all users. Before moving on to web accessibility, let's understand what accessibility is. Accessibility simply refers to creating products that are usable by everyone.
This means any product, from this water bottle to this laptop: everything should be accessible to all users. Now, web accessibility. Web accessibility is the inclusive practice of ensuring that there are no barriers that prevent interaction with, or access to, websites on the World Wide Web by people who are differently abled. This means people who have a visual impairment, a cognitive impairment, a hearing impairment, et cetera; these are permanent impairments. Then there are people who are temporarily or situationally differently abled. This can be people who are unable to use the application because they have a broken arm, or because their eyes are covered after an eye surgery, or, for example, simply because the mouse or touchpad doesn't work and they have to navigate through the application using the keyboard; this is an example of situational accessibility. The third category is people who are socioeconomically restricted in speed and bandwidth: users who live in a remote area or who are on a low-bandwidth connection. Make sure your application is light enough that it will load even in remote and low-bandwidth areas. Now let's understand the why. Why should we create accessible applications? We know that creating an accessible application involves time, effort, and money; then why should we create it? According to the World Health Organization, around 1.3 billion people currently live with a significant disability; that means one in six of us. These users are unable to use our applications only because we created the applications without keeping their requirements in mind. So isn't it unfair that we ignore the requirements of this many users? There is a business impact as well. For any product that is on the market, or that is launching to the market, there is a set of potential users. For example, say I have a product on the market, an online food delivery application, and let's take a sample of 100 potential users. Of those, 60% are my current active users and 40% are not using the application. Of that 40%, 10% chose not to use the application, and 30% are unable to use it only because my application is not accessible. So if we make the application accessible, we can get at least 20% of these people to use our application. When these 20% can use our application, it benefits them, as they can perform their tasks, and 20% more people using my application benefits my business: the first 60% plus this 20% make 80% of my potential users, which will positively impact my business. So creating accessible applications is a win-win for both the business and the customers. And always keep in mind that when we implement an accessibility feature, it not only supports the people who are differently abled, it benefits all users. So that is why we should create accessible applications. Now we will understand how to create accessible applications. In this module we will have a look at the different guidelines that we should follow to create accessible applications, the different methods that we use to enhance the accessibility of an application, and an easy method to create accessible applications even if you are not a pro JavaScript or UI developer. The first one is the accessibility guidelines.
There is a set of guidelines, or rules, created by the accessibility experts of the World Wide Web Consortium for building accessible applications in a methodical way. These are called the Web Content Accessibility Guidelines (WCAG). There are 13 guidelines under WCAG 2.1, organized under four principles, known by the acronym POUR: P for perceivable, O for operable, U for understandable, and R for robust. All of the guidelines under these principles have success criteria, at three levels, where level A is the minimum and level AAA is the maximum. We will look at the success criteria later. The first principle is P for perceivable. The perceivability principle states that content should be presented in a way that can be perceived by all users. This means that even if the user is visually impaired, he or she should be able to perceive the application using a keyboard and assistive technology such as screen readers. That is the first principle; now a few guidelines under it. First, text alternatives. When we add images, videos, or audio to a website, people who can't see or hear won't be able to understand that content. For these, we need to add a text description of the non-text content; these descriptions are called text alternatives. For example, if I need to add this image to the website, you can see in the example that the alternative text is "a red apple sitting on a wooden table", so the user can understand it when the screen reader reads the alternative text aloud. So for any non-text content, we should add a text alternative. The second guideline under perceivability is color contrast ratio. There should be enough contrast between the text color and the background color: at least 4.5:1. Take the first example: the text is black on a white background, we can see the text really clearly, and the contrast ratio is 21:1, which is really good. Now take another example where the text is light blue on white: we cannot see the text clearly, and the contrast ratio is 1.8:1, which is not good. So whenever you add text, make sure the contrast ratio is sufficient; and the application should clearly work even in grayscale. I will show a grayscale demo at the end. Now, operable, the second principle of WCAG. Operable means the user should be able to interact with and navigate through the website using the input devices of their choice. The first guideline under operable is keyboard compatibility: the whole application should be navigable using the keyboard, and make sure there are no keyboard traps which prevent the user from navigating onward from a specific point. The second guideline is input mechanisms. For example, if a user has a motor or cognitive difficulty, they might use a mobile phone or a tablet to navigate the website, so make sure your website is navigable using a touch screen. In short, make sure your application is navigable using multiple input devices. That's all about operability.
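To make the text-alternative guideline concrete, here is a minimal HTML sketch; the file names and wording are illustrative, not taken from the actual slides:

    <!-- Informative image: screen readers announce the alt text -->
    <img src="red-apple.jpg" alt="A red apple sitting on a wooden table">

    <!-- Purely decorative image: an empty alt tells screen readers to skip it -->
    <img src="divider.png" alt="">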
Now we move to the third principle: understandable. Understandable states that content should be organized and presented in such a way that the user is able to understand the purpose and functionality of the page. Now the guidelines under understandable. The first one is input assistance: the user should get help to avoid mistakes. For example, when the user is filling in a form, there should be proper guidance provided in the form, and proper error messages shown, so the user doesn't submit a form with errors. That is input assistance. The second one is focus. Make sure you add a clear, distinguishable visual indication of where the current focus is. It can be a border color or a background color, but it should be clearly visible even in grayscale, so the user can always tell where the focus is. So that is understandable, and now we move to the last principle of WCAG: robust. Robust means the website should be built using technologies that are widely supported and are accurately interpreted by assistive technologies. The first, and only, guideline under robust is compatibility: the content should support current and future tools. When we create a website, it should support the current versions of all browsers, a few previous versions, and a few future versions. Also make sure your website is responsive and supports all resolutions. That is robust. So: perceivable, operable, understandable, and robust. These are the four principles of WCAG. Now we move on to the conformance levels, the success criteria I mentioned before. The first one is level A, the basic accessibility criteria. This means things like using the proper semantic structure: using a button, and not building a button out of a div. Level AA, the mid range, is more advanced accessibility; this includes adding text alternatives for non-text content, maintaining a proper color contrast ratio, and so on. Level AA is the recommended one: your website should be at least level AA compliant to be called accessible. Then level AAA, the highest. It is really good to have your website be level AAA compliant, but it is a little difficult to implement all the criteria. For example, level AAA includes sign language interpretation: for any pre-recorded video content, we have to add a sign language interpretation so the user can understand the content. So that is the conformance levels, and that is a wrap on WCAG. Now, assistive technologies. People who are differently abled use assistive technologies to navigate through and use applications. An assistive technology can be any device, software, or equipment that helps people who are differently abled perform tasks. Assistive technologies include screen readers, voice recognition, word prediction, screen magnifiers, bookmarks and history, and so on. I will show you how a screen reader works at the end of the presentation. Now let's move on to Accessible Rich Internet Applications, or ARIA: a set of attributes used to enhance the accessibility of an application. There are three kinds of attributes under ARIA: roles, states, and properties. The first one is roles.
Roles are used to define the purpose and meaning of elements which are not native HTML elements, or which are HTML elements that are not properly supported by browsers. For example, say we need to create a tab list for our website. That is not a native element, but we still have to build it, so we create it out of buttons. Then how does the assistive technology, the screen reader, understand that this is a tab? We add role="tab" on the buttons and role="tablist" on the container, so the screen reader will announce the tab as a tab and not as a button. That is why we use roles. Next, states. States describe the current state of an element. Here, in the tab list, we have added aria-selected="true" on one tab and aria-selected="false" on the others. So when the screen reader reads it, the user can understand that the first tab is selected and the others are not. There are multiple state attributes, like aria-disabled, aria-hidden, and so on. Now, properties. Properties are also used to enhance accessibility. For example, aria-label: in this case there is a button with no text, only an image or icon. When the screen reader reads it, it will just say "button" and won't read any name, so the user won't understand the purpose of the button. So I have added aria-label="Search", and the screen reader will read it as a search button. That's all about the ARIA attributes used to enhance the accessibility of an application; I'll show a combined sketch of all three right after this part. Now, design systems. I mentioned before that we can create an accessible application from scratch even if we are not specialized developers, and for that there are design systems. A design system is a collection of components which we can use to create accessible applications. There are multiple design systems available, like Google's Material Design or the IBM Design Language. Here at Red Hat we use the PatternFly design system. If we build a website out of accessible components, the whole website in turn becomes accessible. PatternFly is an open source design system managed by Red Hat. Now a small demo of PatternFly. This is the PatternFly design system, managed by Red Hat: we have different components, charts, layouts, and everything inside it. As I mentioned, it's open source, and we can go to its accessibility section: everything about accessibility is clearly documented, and you can look through the features there. Now the components inside it: there are a lot of them; this is an accordion, for example. Almost every component we use is in this design system, and for each component there is a React version, an HTML version, HTML demos, design guidelines, and accessibility notes. Everything is described in detail so you can understand how the component works. That's the components; now the charts. This is patternfly.org, you can have a look at the design system; there are different charts available, and all of these components are accessible and easy to use. And there are multiple layouts used to lay out an application, like gallery, grid, flex, and so on.
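Here is that combined sketch: a hand-rolled tab list using a role, a state, and a property, plus a visible focus style for keyboard users. This is a rough illustration; the IDs, labels, and colors are made up, not taken from PatternFly:

    <!-- Role: tells assistive technology these buttons form a tab list -->
    <div role="tablist" aria-label="Product sections">
      <!-- State: aria-selected marks which tab is currently active -->
      <button role="tab" aria-selected="true" id="tab-overview">Overview</button>
      <button role="tab" aria-selected="false" id="tab-details">Details</button>
    </div>

    <!-- Property: aria-label names an icon-only control for screen readers -->
    <button aria-label="Search"><img src="search-icon.svg" alt=""></button>

    <style>
      /* A clear focus indicator that stays visible even in grayscale */
      button:focus-visible { outline: 3px solid #0b5fff; outline-offset: 2px; }
    </style>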
Now, as I mentioned, PatternFly is an open source design system, so you can contribute; there is a GitHub link provided if you are interested. Next, let's understand what an accessible application should offer. We now know why we should create accessible applications and how to create them; now, what should they offer? Here I have taken the customer portal of Red Hat, which is access.redhat.com, and I will show you how it is an accessible application: how the page responds to keyboard navigation, what the Lighthouse score of the page is, how the page behaves in grayscale, and how a screen reader reads it. Let's have a look. So, this is keyboard navigation on the website: you can see the navigation happening with the Tab key, the arrow keys, the Enter key, and so on. From the top, focus moves down as we navigate with the Tab key; the Enter key activates the focused button, and you can see the visual focus indicator as well. It is clearly visible, and this is how keyboard accessibility should work in an accessible application: we should be able to navigate completely using the keyboard. Now I will show the Lighthouse report. If you don't know it, Lighthouse is a dev tool with which we can find the accessibility score of any application. This is the accessibility score of access.redhat.com. For a good accessible application it should be at least 85, and we should try to get it to at least 90 so the pages are really accessible. So 94 is the accessibility score of access.redhat.com, the customer portal of Red Hat; that is pretty good, and the application is accessible. Now let's see how the application behaves in grayscale. Users who are color-blind or have low vision may see the application in a black-and-white view, so the application should be clearly usable in grayscale, and even the visual focus indicator should remain visible and everything should stay readable. This is how it looks in grayscale; I have used a Chrome extension to convert the page to grayscale. It is really easy to do, and you can check how your own application looks in grayscale. Next, how a screen reader reads the application. Yes, there are different types of screen readers available; I have used macOS VoiceOver, and you can adjust the speed and everything. There is a time restriction, so I cannot show it in detail. And for some of you this might not make sense, like why we should use it. If you have doubts, just close your eyes and try to use any application; then you will understand the importance of screen readers and assistive technology in general. So now we understand why to create accessible applications, how to create them using the guidelines, accessibility enhancements, and design systems, and what an accessible application should offer. Now let's move on to the Q and A. Any questions? Yes? Right. The tool I used is Lighthouse; if you're not aware of it, it's a Chrome dev tool in which we can find the accessibility score of an application. For some icons, or if we are using certain buttons, it may report some unwanted errors.
For example, if we are using a component from an external library, it may have everything needed to be accessible, but Chrome can still flag it; the score here is showing 94, and if your application has issues, Lighthouse will list all the errors. If we have time, I can show a demo of that at the end. Before that, any other questions? Yes? I have used Chrome's Lighthouse; like he mentioned, there are multiple tools available, a lot of them. He was pointing out that there are other Chrome extensions we can use to check the accessibility of an application, and that's right: there are other tools available, like axe and others, which you can use as per your convenience. Any other questions? Yeah. Thank you, sir; what he said is really right. Yes, ma'am. So she was asking whether I have any experience interacting with people who are differently abled, and seeing how they use applications. I do not have that direct experience, but I have worked on a lot of accessibility issues that came out of usability testing, and I have fixed a lot of them. So I have that experience, but I have never interacted with such a customer directly. And what he said is really correct: on some websites, navigating with the keyboard is a little difficult. But the thing to keep in mind is that HTML is accessible by default; all the native HTML components are accessible. We make them inaccessible by creating custom components and stripping away their native semantics. So make sure that whenever you use a component, you use the native one directly where you can. Any other questions? Can I show a demo of how Lighthouse works? Okay, yes. So this is the customer portal of Red Hat, and I will show you the accessibility score and how it is calculated. We go to Inspect to open the dev tools, and here we have Lighthouse. In Lighthouse we can check multiple things: performance, accessibility, best practices, and so on. Here I am checking accessibility only; it will calculate the score and list all the issues, if there are any, in a detailed way. Yes, this is the accessibility report of the Red Hat customer portal: see, it is saying that these few tabs are missing their roles; that is why points are deducted from the score. Now I will show another example, Google.com. Google is supposed to be accessible, and we can check its Lighthouse score: for Google.com it is 89. Google is accessible, we know that; so there are cases where even Lighthouse can flag things that reduce the score unnecessarily. Go through the report, correct anything you can, and always try to get it above 90; that is a really good target to follow. And now, if you have any more questions, please feel free to reach out to me through my LinkedIn, or you can email me. Please feel free to reach out.
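By the way, the same audit can be run from the command line with the Lighthouse npm package, for example like this; a sketch, and the URL is just an example:

    # Run only the accessibility audit and open the HTML report when done
    npx lighthouse https://access.redhat.com --only-categories=accessibility --view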
Also, in the presentation there are a few images used; these are from Freepik, a free website where you can get good pictures, so you can use it if you want. And these are the references: most of the content is taken from w3.org and WebAIM, so you can go through those reference links as well. Thank you so much. Hello, do you hear me? Is it okay? So, we will start. Is the text a problem? I will make it a little bit bigger. Is it okay? Can I start now? But this doesn't work; it's probably broken. Can I start? Yeah, just tell me. What? I have it connected, but I have to say there is a wrong link; it's so old that it doesn't work. I will try again. Okay. So, a few little technical issues. Will I get some additional time? Okay. So, hi everyone. I'm Wikia Poczek. I work as a software engineer at MoroSystems, which is my favorite software development company, in Brno. I code my applications mainly in Spring. Who uses Spring here? Okay. And also in Kotlin, which is my preferred language; I would probably never go back to Java. Who uses Kotlin? Great. And do you test your applications, your web applications? Okay, some of you. And do you want to? So, let's have a look at it. Sorry about that, some technical issues; I will not use this one. So, back to the past: how does a typical web application look? Nowadays we usually have at least a frontend in React, Angular, or Vue, and backend services we write in Spring. They are in the cloud, so they communicate with other services: some managed services or third-party services. So, how to test those? You have probably seen the ideal testing pyramid pattern, which says you should have a huge amount of unit tests and only a small portion of end-to-end tests. You have probably also seen the anti-pattern, which says having a huge amount of end-to-end tests is wrong. Okay. So, what do you usually test in your web application? What do I usually test? I write unit tests mainly only for validators or some algorithms, because what else is there to unit test in a complex Spring system? I also write tests for database queries, because we use Postgres and we try to use it to the max, so we write very complex queries. But the main part of my application is providing a REST API to the frontend, and that can be covered by the acceptance criteria from the analysts; for example, "it is possible to add, edit, and delete customers' branches", and so on. So I write a lot of end-to-end tests for the REST API, because I have acceptance criteria which the REST API should meet. Which means the anti-pattern. Is that wrong? Is it a problem? Kent Beck doesn't think so. He said in a book: if something works, keep doing it. So I try to. He also came up with the test-driven development methodology. Are you familiar with that? Okay. And do you do it on a daily basis, for a long time? Great. It's definitely possible. But what I learned at school was: hey student, create the database design first, then provide services and DAO logic for all the functionality. There is a lot of space for unnecessary features in that approach, and as a software house you don't want that. And when you are working with a REST API, how do you test? Continuously debugging with curl or Postman? And what about tests? "Maybe later; I will fix those when I'm done." But usually you're not done. And Kent Beck said: now, people, try it a different way. So, how do you develop an API according to TDD? You have acceptance criteria.
You can turn them into tests, you can provide the REST API for the tests, and after that there is time for the business logic, and after that for creating services and the database, if needed. And continuously you can run the tests to check that everything you have done is correct. I like that. But how do you do that, and also enjoy it? You have to keep your testing framework clean, because your tests are also code: give some time to structuring them, use proper tools, and encapsulate whatever is possible. So which tools can help you? Here is the application, and my tests almost simulate the frontend behavior. I write them in Kotest, which is a great testing framework for Kotlin; but it doesn't matter, the principle would be the same in any language or framework. I use IntelliJ IDEA for running the tests continuously. When I use mocking, I usually mock whole services or adapters; I use MockK for Kotlin, but I prefer to use a real database and real managed services when possible, because my complex queries, or for example some Elasticsearch behavior, cannot be covered by mocks. And if you have third-party APIs, there is a solution for that too. So let's have a look at them; everything is glued together with Spring, which can help you a lot. Why do I like Kotest? Maybe you are familiar with JUnit; the main difference is that in Kotest, tests are created by running ordinary function calls, they are not annotated methods like in JUnit. That means you can structure them: you don't have to read the code here, but you can see the structure; here I can prepare some data, here other data, here is a test, and I can create whole groups of tests. When I run them in IntelliJ IDEA, it looks like this, and I can run a single test, TDD-style, or a whole group of tests when I want to. And this is the result. Great. What about mocking? MockK is great for Kotlin, because with Mockito or PowerMock you cannot mock Kotlin-specific things like ordinary top-level functions or extension functions. But I would say I don't mock services by writing a copy of the implementation, like I have seen in many companies. I don't believe those tests, because they are fragile. When I have a service, I never do the thing where you create a mock, pass arguments, call it, and expect the same call back; I would not do that. When I mock services, I use @MockkBean, which allows me to replace a bean in the whole Spring context easily; so I can mock a database service for a while, or an adapter to other services. When I have a third-party API, WireMock can create a test double on the HTTP level: you just configure what it should return, and your ordinary REST API client will use it. I also said we try to use our Postgres to the max, so I appreciate tests against a real database. In my data access objects I test complex mappings to DTOs, and if I have complex queries, like "get all clients that currently have at least one active office" or "delete all logs older than 30 days that have not been opened", it's quite hard to write them right on the first try. If I have a test, I can run it continuously, but I need a real database. So Testcontainers is the library for me: in my tests, it runs a Docker container based on a configuration I can specify programmatically; or, if I use Spring, I can just provide a different database driver and it will start the container on this port, with this password, connected to Spring. It's as easy as that.
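Pulling the Kotest and MockK pieces together, a rough sketch of what such a test can look like; the repository interface and names are invented for illustration, assuming the kotest and mockk libraries are on the test classpath:

    import io.kotest.core.spec.style.StringSpec
    import io.kotest.matchers.shouldBe
    import io.mockk.every
    import io.mockk.mockk

    // A hypothetical collaborator; in a Spring context you would replace it with @MockkBean
    interface ClientRepository {
        fun nameOf(id: Long): String
    }

    class ClientServiceTest : StringSpec({

        val repository = mockk<ClientRepository>()

        "client name is read from the repository" {
            // Stub only what this test needs
            every { repository.nameOf(1) } returns "ACME"

            repository.nameOf(1) shouldBe "ACME" // infix matcher reads like a sentence
        }
    })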
You don't have to read all of that; there will be a demo. But where are those end-to-end tests? Again, Spring: with a single annotation it can run a real server with a fully functional REST API, and you can inject one of the clients that are already connected to it. I prefer WebTestClient, the most modern of them, and you can see its API is quite comprehensive. But you don't want to write those method chains directly in your tests, so in Kotlin, or even in Java, it is really easy to encapsulate them. I encapsulate the parts that are reusable across calls, and then it luckily looks like this. But I try to go further and create methods for each endpoint and interaction, like this extension method that creates a partner and gets some data: you can see it calls the URL, passes a value, and deserializes the response body into a partner detail. If I do that, my tests are short and the code is reusable; it takes time, of course. I also encapsulate data creation, like you saw on the previous slide: I just pass the data, so I can have a method for that, use defaults, and override only what's needed. What if I need to prepare some data in the database: when and how can I do that? I can do it with REST calls, create one and check it's there, or I can create the data directly in the database, and I often do that. But I strongly discourage you from doing it once before all the tests: do not have one single file which holds all the data, because the tests will become tightly coupled and will interfere with each other. Rather do it before the test class, or even better, before each test. And do not forget to clean your database after the tests: in a transaction for database tests, and by deletion in end-to-end tests, because the test itself creates the data. How to prepare the data? Similarly to calling an endpoint, I prefer to encapsulate it. This is jOOQ; you don't have to understand or read it, just imagine a function which calls your favorite entity manager and creates the data. And Kotlin lets me provide a really nice API here, because I have defaults, and an apply-style function with a receiver which allows me to redefine only the fields I want; the row is stored afterwards. But what about speed? End-to-end tests are supposed to be slow, but they don't have to be, because you can speed them up. Do not restart your containers when doing TDD: use the reuse option in your Testcontainers configuration, which lets the containers start during the first test and be reused afterwards, so the tests become really fast. And what about your builds? In my Gradle build we generate a lot of code, for example for jOOQ, we update our schema, and there are a lot of other actions; some can be excluded, or you can guard them with a flag, and in IntelliJ IDEA you can configure your tests to use that configuration so it runs the tests with it. I would also say that setting the log level to info will increase test speed, if you don't need the logs in tests.
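A sketch of that per-endpoint encapsulation idea; the endpoint path, DTO, and helper name are invented, assuming Spring's WebTestClient:

    import org.springframework.test.web.reactive.server.WebTestClient

    data class PartnerDetail(val id: Long, val name: String)

    // One reusable extension method per endpoint keeps the tests themselves short
    fun WebTestClient.getPartner(id: Long): PartnerDetail =
        get().uri("/api/partners/{id}", id)
            .exchange()
            .expectStatus().isOk
            .expectBody(PartnerDetail::class.java)
            .returnResult()
            .responseBody!!

A test then reads in one line, something like: client.getPartner(42).name shouldBe "ACME".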
Okay, let's do some live coding here, because I have talked too much and I would like to prove that it works; I hope there will not be another technical issue like before. I have a really small application which, because we don't have much time, already has some logic implemented. There is a small database with clients, which have offices, and my task is to meet some acceptance criteria: I need an endpoint for getting a client; when an office is created, it should be returned with the client, as you see in the JSON; and when an office is deactivated, it's removed from the client's office list. Quite easy, but I think it's good for a demo. I hope this is only a technical issue. So, IntelliJ IDEA, hello. Here is an ordinary Spring application: you can see there is a data source configured, there are some migrations created, and I have an office controller already implemented for creating and deactivating offices. My task will be to implement the client controller. There is already a client DAO, some types and objects with methods; great, that will be my work, and I have some details. So how to start? When I don't know how to start a complex task, I start with a test. So, first test: "client should get existing client", and I would like to get it. I have an accessor here, and I told you that I can inject this client here, and here is my entity manager; let's use them. "This client", and I would like to have a method like getClient; oh, it's here. Of course, we don't have much time, but you can imagine how I'd write it; that was the preparation phase I told you about. It calls the URL and returns a client detail. Okay, if it's a detail, there should be a name, and I would say it should be "first". If you've never seen this syntax: it's like calling a method, it's called an infix method, and Kotest is based on that, because I can easily extend it with my own matchers if I need to. So I will run the test, and it will be red, of course, because I don't have the endpoint yet; but I need to have a red test before making it green, so I can believe it. TDD says: do it in the easiest way possible. Okay, I will just return it; do not laugh, I will have a working REST API after that, and the next phase is refactoring. So if that's green, the REST API works. Great, I think I can continue. Here is a data access object, and I would probably want it to look like that. But it will not work, because it's not implemented; maybe it's time to show you the mocking of services, which is quite easy: @MockkBean, mm-hmm, and just stub it to return a client for every call. If it works, I know. Okay, hello live coding. So that's the possibility of mocking services or adapters, but I would remove that; rather, I will implement it directly. Just a blink, you don't have to understand it; it's just reading from the database. And it will fail. Why? I don't have data in the database. I told you there is a DSLContext, so I will create a client; and let's have a look at the method. I said that in Kotlin I can easily override values using context functions. This could be considered a real test: green, so I know the service works and meets the acceptance criteria. "When an office is created, it can be read with the client": this will be the complex one. Okay, so: "office should be returned with the client". And I showed you that there is already a create-office helper taking this client ID; I have a helper function here, so I can use that. It's created, and I expect that the client's offices list should be filled with one of those. Okay, when I run this, it will create the office, but it's not returned with the client. So what is the easiest way to implement that? Okay, in the controller I will do a messy thing: I will call the client DAO, there is some method, and I will test that. Great, that works, but of course I don't want to do it like that. Let's have a look into the client DAO; whoa, whoa, I don't understand jOOQ. I would rather test
it first. As I told you, I have the client data access object here; please, some magic. And here is a test for the offices: I have one office, I read it, and I expect it to be in the list. But I can see it's not there; the collection is empty. But I have a test, so I can implement it the same way as before: offices, assigning; we rerun the test, and it works. But a colleague in review told me: hey, that's Hibernate style, don't do that, rather use this. I am not so experienced and I don't understand it properly, but I have tests, so when I rerun them I can see it's green. Great. So rerun all the tests, even with the controller: green. And back to the last acceptance criterion: when an office is deactivated, it shouldn't be in the list. Quite easy to implement, even in the database, but I would like to have a test: "should not list office when deactivated". I can refactor it a little: test client; the deactivation method was already here, I told you, so I just pass the office, and after that I expect the count to be zero. I shouldn't run all the tests, just this single one, it's faster. And it should be red, because I can still see the deactivated office in the list. I don't understand jOOQ, so first I will modify the client DAO, as in the client DAO test: I will add one last office, an inactive office; I will use an extension method. Okay, and after that I can refactor a little; that's why I like structuring the tests: "client from DB, offices should not contain inactive office ID"; it's a lambda here, inactive, okay. Yeah, of course it should be red; I was too fast in my mind. And here I can try to modify the query. It's green. I will rerun all the tests, and my acceptance criteria are met. Even better: I have tests, so I can refactor the internals of the application, because the REST API is my main contract. But do you believe that the application works? You probably do not, so I will run it. I have it here; it will start on localhost:8080 as usual. I can have a look into the database, where I have a client and some offices, and I have Postman: when I send the request, I can see the JSON here, and when I set one of the offices to false, I can see it disappear. So everything works. Back to the slides; what to take away from my talk: give TDD a chance, even for REST calls; encapsulate your API calls; use a real database or service wherever possible; speed up your builds; and try it, enjoy it. I will give you some links, they are on this QR code, and thank you for your attention. Ask me anything; I have some swag here, and for your question you can choose a coffee package or a dotted notepad, or at least disagree with me. Yes: what's the difference between @MockBean and @MockkBean? It's that @MockkBean uses MockK inside instead of Mockito, and it's designed for working with Kotlin; if I use MockK, I usually exclude Mockito from Spring, that's why. Coffee or dotted notepad? No? I have plenty of them; not plenty, but some. So, next question, for coffee or a notepad: do I have any alternative for Testcontainers? Yeah, earlier in my career I just did TDD against a locally configured Docker; that also works. But we use Testcontainers even on continuous integration, that's why. Do you have some problem with them, or some reason not to use them? And I'm not sure, but I would say Testcontainers also works in .NET; it's probably multi-language, I would expect something like that.
Or you can just use plain Docker for developing your application and run it locally, but not on CI. But I would say those tests are there to make me a faster developer, not to check a box. Coffee or notepad? Okay, next question. Yes: I'm always trying to add just what the test needs, so if there is a need for some predefined data in the database, I will add it to the function, to the encapsulated creation of the data. For example, when a keywords column becomes mandatory, I put a default there, and it usually doesn't break; and I try to fix those before making the changes. Coffee or... okay, next question, please, if we have time. Yeah, you can use mocks. No question? But you can have a notepad if you want. Sure, thank you for your attention. I was recently at FIT in Prague, just giving a lecture: when I connected the Mac, it would take about five minutes and then be completely dead. It has also happened to me that it was just the projector: "it works for me here, it works for me here." Let me have the legendary switching. One, two, three. Good? Good. Yeah, one, two, three. Yeah, we can start. Hi everyone, how are you doing today? Great, are you ready for the party tomorrow? They didn't call it a party, but it's, how to say, a networking party. Social, yeah, a social event. I'm looking forward to it. So, today's topic: we'll discuss how you can observe your APIs with the help of API gateway plugins. Before we get started, let me introduce myself. My name is Babur, or you can call me Tiger, which is the translation of Babur into English. And my surname could be directly translated as Livermore; like, Tiger Livermore means a tiger who lives long. That's a direct translation. If you have a problem with the pronunciation of my name, another version: you can call me Bob. That's what they write. I'm a software engineer, an open source contributor at the Apache Software Foundation, and a developer advocate for the project called Apache APISIX. If you have questions regarding my session, or would like to discuss more about API management and APIs, feel free to reach out to me on these social channels; I will always be super happy to have that conversation. Before we get started, first things first: for my Instagram, I need to take a selfie, right? If you don't mind, from that side, I think it looks great. Smiles; I don't see smiles, I can't tell you about these. So my job is done, I can go home. Thank you. But today I actually have quite an interesting topic to bring. We're going to start by talking about what API observability is, and then I will explain how you can use an API gateway as a central observation point for your APIs, and shortly introduce one of the well-known open source projects, Apache APISIX. Then we'll dive into the three pillars of observability: logging, metrics, and traces. And then I will show you a small demo of how you can use these plugins to observe your APIs. So, what is API observability? Does anybody know what API observability is? Are you monitoring your APIs? Some people are still coming in. So, what is an API? Okay, API observability is a very hard question; what is an API? It is an acronym, right? It stands for application programming interface, right?
Because, as we all know, APIs are used by every service and application nowadays, and API observability is just the next evolution of API monitoring. Traditionally we monitor APIs using logs, metrics, and traces, and traditional monitoring mostly focuses on tracking known unknowns: we know exactly what we are measuring, like the number of requests per second or the number of errors per second, we just don't know in advance what the values will be, for example how many requests will hit our APIs after we add a resource. That's monitoring. Observability, by comparison, focuses on measuring the unknown unknowns: metrics we didn't anticipate as developers, like network latency, an API version incompatibility, or other issues related to your environment; there can be separate metrics and logs for the dev, staging, and production environments. That's observability; we shouldn't confuse it with plain monitoring, and I'm going to explain later how to understand what observability exactly means. And nowadays observability is (welcome! thank you) part of every development team, because it solves many problems related to API consistency and reliability, and how fast and how frequently we deliver new features to our customers. For example, as a product manager, I can use API observability to understand the consumption and usage of my API. As a developer, I can use it to refine the requirements I should build these APIs against. A security team can use API observability to detect and protect against possible threats to your APIs. If you're a business manager, or working on the business side, you can use API observability to understand the real business value of your API, or use it for monetization. Different teams use API observability for different purposes. Once we understand that, another question comes; you can see the answer already: how do you easily observe your APIs? There are several ways: you can use platforms, tools, SDKs, other solutions, but one of the simplest ways, as I see it, is using an API gateway. Why? Because nowadays we are building multiple microservices, APIs, serverless APIs, gRPC services, GraphQL, all bringing some functionality to your client applications, on the left side. The client application can be a mobile, web, or desktop application, but behind the scenes you don't have just one microservice or one service; a lot of services are running. And in this scenario, as you can see, the API gateway can act as a central point that routes incoming requests from your client applications to their intended destination. And that intended destination, as you can see, can also be a database, or some serverless API, whatever is on the backend. You can build this backend with your own stack: Spring Framework, ASP.NET, or a Go or Python library. But in this picture, the API gateway acts as a single layer that catches all the requests. Why is that useful? Because usually, if you have multiple microservices, you expose a lot of URLs, endpoints, and the client application would have to know all of those endpoints to get the functionality it needs.
But what if there is another tool, the API gateway, that gives the client application a single entry point, so it doesn't need to understand what's happening behind the scenes in your service network? In this case, you can use the API gateway to say: okay, "read customers" requests should go to the customer service, "read orders" requests should go to the order service, and so on; the API gateway can map that sort of request. Is it clear what an API gateway is now? Who is using an API gateway? Yeah, you too, okay, very good. Which API gateway is it? All right, nice. And now that it's clear what an API gateway is, let's understand what plugins are for an API gateway. A gateway is just a door, right? I can open the door and go inside; if the door is not open, I cannot. That means it offers some security features too: you can do authentication and authorization with the help of plugins, because a plugin is just an additional component that can be plugged into your API gateway to add some extra power. You can control the traffic, you can transform your API requests or responses; a lot of responsibilities come with API gateway plugins. Also, as I mentioned at the beginning of the talk, you can use them for observing your APIs. Because, as we said, the gateway sits in the center and knows all the moving traffic, all the logs, all the metrics; at the API gateway level you can collect this data easily and derive further useful metrics, without spending time or effort on finding frameworks, using another tool, or building your own. You can use plugins, pre-built connectors, to easily connect to well-known observability tools, like Prometheus. Who's using Prometheus, by the way? Oh, we have many people. Grafana, who's using it? Okay, I'm also using Grafana and Prometheus together; Datadog is in the list too. You can always connect them to your API gateway without any additional coding or extra time; that's one of the benefits of API gateway plugins. So, the API gateway I'm representing today is called Apache APISIX, one of the open source projects of the Apache Software Foundation. Do you know the Apache Software Foundation? No? Do you know Kafka, Cassandra, Tomcat? These are all its open source projects, and APISIX is one of them. Why did I bring this example? Not only because I contribute to the project, but because I like some of its features. For instance, it's originally written in the Lua programming language, on top of Nginx, plus maybe some other frameworks written in Lua. But if I don't know Lua, I can use ChatGPT, right? And if ChatGPT can't give me ideas, I can use my own skills: I'm a Java developer, so I can use the Java plugin runner to create my own plugin, because the existing plugins can't always solve your requirements or your users' needs. Or if you're a Go developer, you can do the plugin development in your favorite programming language. Another feature: if you don't like writing code, if you're lazy enough, you can use the dashboard, low-code style, to connect one or multiple plugins together. This is also free, and our contributors are still working on the dashboard to enable more advanced ways of using these plugins. So let's get back to the observability topic, now that you understand what an API gateway is.
Observability: why it's important, and what, let's say, APISIX is. Now let's talk about the observability pillars. There are key areas we look into when we talk about observability: metrics, logs, and traces; three pillars. You usually start with logging, because it's the most trivial and the easiest to instrument and use. Everybody uses logs, right? We use logs for debugging, for auditing, for understanding events in real time by timestamp: events coming into Kafka or other systems. And there are logger plugins, not only in APISIX but in other API gateway providers too, like Kong, for example. You can use these plugins to collect your logs: the HTTP logger, for example, to send requests to your log server automatically from the API gateway without implementing any logic, or the TCP logger, whatever logger you want to use. The next one is metrics. Metrics are just measurements, data measured over time. You can use metrics to understand what happened over time, aggregate this data further and use it in a distributed system, maybe Elasticsearch; you can analyze your metrics there, and based on the metrics you can also switch on some alerts to take action. That's what metrics do. With APISIX you can use the Prometheus plugin to collect your metrics and show them in Grafana; I'm going to demo soon how you can connect Grafana and see your metrics visually. The last observability pillar is tracing. Who knows tracing, or uses it often? Okay, which tools are you using? Yeah, Jaeger; okay, OpenTelemetry also. By the way, my colleague Nikola is here; on Sunday he's also talking about how you can use OpenTelemetry, from a developer's perspective, to understand your tracing. It's going to be interesting, and that's why today I will skip the tracing part and only give you an idea of how to do metrics and logging. There are different plugins for this as well; you can use the Zipkin plugin, for example, to collect traces and report them to Jaeger or other platforms. So, enough story; we can jump into the quick demo I have for today. If you want to try it yourself, you can scan the QR code after this session; it brings you to a GitHub repository with a couple of examples of how you can manage traffic for your backend. In this GitHub repository, as you will see soon, I'm using Docker Compose to create some containers: a container for APISIX, because we need to use plugins, and containers for Prometheus, Zipkin, and Grafana. And then I'm also using .NET. Is there any .NET developer here? Now you will kick me out. I'm not actually a .NET developer, I'm usually in Java, but I also sometimes write .NET; it can be any backend service, this is just an example in .NET. You can use your Java skills to build the backend, because the API gateway usually doesn't care about the backend implementation: we don't have to pull in NuGet packages or Maven dependencies to use an API gateway, because it's an independent instance. So, if everything is clear, let me show you some interesting stuff here. Let me stop my presentation here.
I am talking about this repository, as you can see. It demonstrates how you can manage your traffic, and if you navigate to the different branches, there are different showcases: if you go to the API observability branch, you can learn how to enable all the observability plugins; if you open Circuit Breaker, how to use the API gateway for circuit breaking, fault handling, and so on. We will start with API observability. Let's assume I'm using a Windows machine. Who is using a Windows machine? Hi guys, I wanted to do like this; sorry, I worked for Microsoft at the time, I didn't have a choice. I'm still using a Windows machine, but in my case with WSL, an Ubuntu subsystem, installed, so there's no issue. But, as you can see, I have one container; this is actually the second container, the APISIX .NET Docker example. I cloned the project and I'm going to run it using this command, but you can do docker compose up as well. I'm going to start my containers; they will slowly start. Then, as you can see, I have one backend service: the product API. It's running locally, and if you navigate to /api/products, it's a quite simple API: it just takes a GET request and returns a response with a list of products. I have two products, for demo purposes. And now, let's start to observe this small, simple, tiny API. If you open the project in your favorite editor, in my case VS Code (I use IntelliJ IDEA too, but here it's VS Code), and navigate to the commands folder in this repository, there are a couple of steps showing how to achieve all three: logs, metrics, and traces. First thing: in any API gateway, you need to create and register your backend service. Is the font big enough, can everybody see? So, you need to register your backend service, in my case the product API. In APISIX, to register my backend API, I need to create a new upstream object: just one single object, telling the API gateway, please create a single node with one backend service; we will use it later for plugins and routes. Let's register my backend service, the product API with its list of products. I will register it just by running this curl command; but if you prefer, you can use the dashboard, as I said earlier, or even Postman, if you hate curl commands. There we go: we have now registered our upstream, the product service. The next step: I will start with logging, by enabling the logging plugin. If I open the next terminal example, the curl command, what I am doing is creating another object called a plugin config. In a plugin config, you can register all the plugins you want to use for your upstream, for your routes. In my case, I want to use the HTTP logger plugin to send the logs collected by the gateway to a log server. Let's assume that I have a log server running on mockbin.org. Let me show you: if I go to this mock server, you will see some information; yeah, as you can see, it says it's your log server. It's my mock log server.
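Roughly, those two Admin API calls look like this; this is a sketch only: the admin key, the ports, the node address, and the mockbin bin ID are placeholders, and the exact admin port depends on your APISIX version and Docker setup:

    # 1. Register the backend as an upstream (a single node: the product API)
    curl -X PUT http://127.0.0.1:9180/apisix/admin/upstreams/1 \
      -H "X-API-KEY: <admin-key>" \
      -d '{"type": "roundrobin", "nodes": {"productapi:80": 1}}'

    # 2. Create a plugin config that ships access logs to the mock log server
    curl -X PUT http://127.0.0.1:9180/apisix/admin/plugin_configs/1 \
      -H "X-API-KEY: <admin-key>" \
      -d '{"plugins": {"http-logger": {"uri": "http://mockbin.org/bin/<bin-id>"}}}'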
In reality, it can be your own log server running in the cloud or on your own on-premises machine. And if you go to /log, it shows all the logs I generated before the demo: 20 hours ago, and a few seconds ago, when I ran some API commands; it received some requests, as you can see. So, let's enable this plugin to send my API logs to that server. I create my plugin config with the single HTTP logger plugin, and as you can see, it's enabled. Next step: let me send some requests to /api/products to generate logs. Let me run this curl command once again; there we go, I can do it one more time. As you can see, it should be reflected here; sometimes it takes time, depending on the network and so on, but usually the logs come through. As you can see, a log arrived a few seconds ago, and if you open it, the APISIX log adds additional headers, for example headers for rate limiting, how many times you can send this request, and so on; you can also customize these logs as you wish. That's how the HTTP logger works; I simply enabled it. By the way, achieving this kind of logging from Spring Framework is also easy: in application.properties you can enable logs as well. So now it's enabled, we can send some requests and see the logs. Is the HTTP logger clear, or any questions? I see some faces; is something not clear, or is everything crystal clear? Good. The next step on my list is to create a route. So far I created the upstream and the plugin config; now, the route. In API gateway terms, a route specifies policies and rules for how my requests should be forwarded to the backend service. Because we skipped it in the previous step, I'm going to explain the routing step now. What I'm saying to the API gateway is: please match every GET request to our domain whose path is /api/products, and enable these plugins; in our case, that was the HTTP logger plugin. Once those plugins have run, forward the request to the backend product API. That's how the matching mechanism works in an API gateway. If that's good, next let's try the Prometheus plugin and see how to enable it in easy steps. I can, for example, update the plugin config by running a PATCH request, adding the Prometheus plugin to my plugins list. There you go: if I run this curl command, my Prometheus plugin is enabled as well. How can I see it working? If you go back to Docker, you'll remember a Prometheus instance was also running. Let's check that Prometheus instance and see whether any metrics are available on its dashboard. Usually the metrics sent by APISIX are prefixed with apisix_, for example HTTP status, and you can see other types of metrics available too. I think it's not here yet, but I can generate some by sending another request, because until we send a request, this plugin has nothing to report. Let me send some requests, and then maybe I can see something here: apisix... was it status, Nikola? Do you remember? Status, let me change it; or HTTP, let's see. We can look at the graph as well; or it's not coming yet. Maybe we can use a curl command; sometimes this view doesn't update quickly, so we can use a curl command instead of the dashboard.
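For reference, the remaining steps might look roughly like this; again a sketch, with the same placeholder admin key and ports, and the metrics exporter address assuming APISIX's default export port of 9091:

    # 3. Route: match GET /api/products, run the plugin config, forward upstream
    curl -X PUT http://127.0.0.1:9180/apisix/admin/routes/1 \
      -H "X-API-KEY: <admin-key>" \
      -d '{"uri": "/api/products", "methods": ["GET"], "plugin_config_id": 1, "upstream_id": 1}'

    # 4. Add the Prometheus plugin to the existing plugin config
    curl -X PATCH http://127.0.0.1:9180/apisix/admin/plugin_configs/1 \
      -H "X-API-KEY: <admin-key>" \
      -d '{"plugins": {"prometheus": {}}}'

    # 5. Check the exporter endpoint that Prometheus scrapes
    curl http://127.0.0.1:9091/apisix/prometheus/metrics | grep apisix_http_status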
Because my metrics are now automatically exposed on that endpoint, at /apisix/prometheus/metrics, I can use a curl command and see the metrics there. There you go. Good, I think it was this one: apisix_http_status. We can now search for it back in the dashboard and see how it came in; yeah, this is the same request, sent to Prometheus. The idea is very simple: to implement the same thing in other frameworks you would do similar steps, but in our APISIX Docker Compose example everything is already set up, including a Grafana configuration, which is sometimes hard to configure yourself. For example, if I navigate to Grafana: our contributors have already created this beautiful dashboard. Is it easy to create a dashboard in Grafana? No, right? Yeah, it's true; that's the point: this dashboard is ready-made, so you can at least see some operational views of your APIs, like requests per second, or latency, how much latency your APIs are adding, and later on the traces with OpenTelemetry. This is how Grafana and the Prometheus plugin actually work together, automatically. So that's the logger and Prometheus plugins. If you want to see the other plugins we have, you can open the official website, the list of observability plugins. Let me show you: here, under observability, it's structured by category. For tracing we currently support Zipkin, SkyWalking, and OpenTelemetry; for metrics you can use, for example, Datadog; for logging you can use, for example, the Google Cloud logger, and we also support ClickHouse, and we keep adding more extensions to APISIX. This is open source, right? You need to find contributors who can do the integration part. So, we can slowly close my presentation; before that, let me finish with the takeaways. What were the takeaways of what I presented or tried to explain? First, you can use an API gateway if you want an easy way into monitoring and observability, without using SDKs, libraries, or extra tools; we achieved it easily just now, and I spent less than five minutes. You can also extend the API gateway to connect to other platforms you want to use, by adding additional plugins that support those platforms. And you can use these plugins without any code on the application side; we didn't write any application code, whereas usually you'd need to write something in the application to ship logs to the gateway. And it's not only about API observability: you can use the API gateway for other cross-cutting functionality too, like transformation, as we said, or load balancing, and other features. So, that was my talk. There are some references if you are interested in contributing to the open source project; you are more than welcome, and you can get this sort of t-shirt, which is kind of motivating, right? Now we can jump to questions, if you have any. No questions? Yep, there you go. Yeah? Oh yeah, it's a tricky question, thank you for that.
Do you want to answer, or should I? I have a question. Oh yeah, let me take a few questions. There you are: my colleague was asking, what is the key reason to switch from the API gateway they are currently using to APISIX? That's what we are trying to explain now. I am giving the stage to my colleague, Nikola; he would like to understand which alternative you are currently using. API Management, because your workload is already in API Management on Azure, right? Infrastructure, okay. In that case, if you're using a combination of different platforms, you can also use APISIX together with API Management. The idea, for me, is performance first: it is quite fast, if speed is in first place. Also, you can use some other features that maybe Azure API Management doesn't provide. Am I missing something, Nikola? Yeah, and which cloud providers are you using: Azure, plus anything else? Yeah, I get it. And one more thing: we are talking about API gateways a lot, right? You can also use an ingress controller. APISIX can work as an ingress controller for Kubernetes. That's a different topic; that's why we have separate sessions for it. If you're interested, you can check it out, because I know around 60% of our contributors from different companies are using the ingress controller instead of the API gateway, because everything is inside Kubernetes, right? This is a solution. I think this is also a selling point of APISIX; I'm not selling, but it is a selling point. I have a t-shirt for you, the same as this one. I have two more t-shirts. Sorry, once again, it's a little bit far. Yeah, good question; I will repeat it. My colleague is asking whether there are any performance indicators we check to understand an API's performance. There's additional configuration needed for that, but there is a benchmark analysis we did in the past; maybe I can send you the link, specifically for APISIX, so you can have a look, right? And if your question is more about performance testing of one specific product's APIs, we honestly don't have such a tool; testing those parameters requires additional setup. Does that answer the question, or maybe we can talk after? Okay, we'll talk after, right? One more question? Do we have one more? Because I have one t-shirt left, so I think I can give it to the next question. Any question? Or do I need to show you: this is a very high-quality t-shirt, but it's black, so you cannot wear it in the sun; I had trouble with that today. Thank you in this case for your attention. If you want more talks like this, we have two more talks, tomorrow and on Sunday with Nikola. Thank you.

Hello, hey, welcome here, right? One, two, three. Oh yeah, one, two, three. If I hold it like this, it's much better. I see some faces already from the previous session, all right? Did you attend the previous session? You guys, no? Okay, in that case, I think this will be more interesting. Hi everybody, welcome. I am super happy to see you all and to be speaking here. Today's topic is how you can power your AI solutions built on, let's say, ChatGPT using API management, or one core mechanism of API management: the API gateway. But we'll start by understanding ChatGPT. So let me introduce myself. For those who weren't in the previous session, my name is Babur.
You can call me Tiger; I mentioned this before. My colleagues sometimes call me BI for short; that stands for Babur International. The idea is that, as Babur International, I can easily find a common language with any nationality around the world, wherever I am. If you have any questions, feel free to reach out to me on the social channels: Instagram, Twitter, LinkedIn, and so on. Today, we will start by shortly introducing what ChatGPT is. Who is using ChatGPT? Ah, so many people. If I ask the other way around: who is not using ChatGPT? Yeah, you're not using ChatGPT? Come on, you should use it from today; after my presentation, you'll understand why, okay? I will also explain what the OpenAI API is versus ChatGPT, and what an API and API management are, right? You'll also understand the benefits of integrating ChatGPT, or the OpenAI API, with API management, and how we can use existing plugins to enhance these AI solutions in our applications. At the end, I have a demo to show you how these features can be helpful for an AI solution. So, APIs, right? By now, we are all familiar with this term, because we are living in an increasingly API-centric world, right? Every service we use today either uses APIs or is an API, right? Even ChatGPT uses APIs; I asked ChatGPT itself whether it uses APIs: yes. And you can consume ChatGPT programmatically: for example, through the platform called the OpenAI API, by writing some code in a client or by directly calling the REST APIs, right? And ChatGPT, or OpenAI, recently announced another cool feature where you can create your own custom plugins to provide AI solutions in your application, right? Or you can use some existing plugins; there are plugins already available if you are using a ChatGPT Plus paid account, right? I had a free account, but I migrated to paid to show you this presentation. Yeah, welcome. These plugins are actually why APIs are important, because they use APIs under the hood. When you chat and ask ChatGPT, "book me this flight and hotel for tomorrow," it uses, let's say, the booking.com API to fetch the data about which bookings are available, and books that flight or hotel for you automatically. It uses both your API's data and its own data to deliver a complex solution. Clear, right? Custom plugins, ChatGPT, the APIs. So let's talk now about the OpenAI API. Is it the same as ChatGPT? In ChatGPT you use an interface, right? I ask a question and it answers. The API means you programmatically ask the AI to do something. I can ask anything, right? I think they now provide a REST solution. You can do code completion or text generation, as you will see soon. For example, I can ask about some text: what is the DevConf conference in Brno? The AI can answer if it has up-to-date information; I think it only has data until 2021. Or it can help me write some code or find some bugs, right? Or it can generate images, or variations of images, and so on. This is what the API does, without the interface. And how can ChatGPT help developers? This is the most important question everybody asks themselves. Any ideas? How can ChatGPT help developers? Yeah, go ahead.
Yeah, that is actually a use case. For which programming language? TypeScript. TypeScript, okay, right. But it's helping, right? Making you at least, like, 50% more productive, right? For me, it also helps with generating code. For example, I am a contributor to the APISIX project. You can write plugins for it, right, in the Lua programming language. I don't know Lua, honestly. I don't want to learn it; I know other languages, I don't have to. I can use ChatGPT and ask: can you write me some Lua code? It actually gets the Lua code about 95% correct for me. I was able to create a new plugin called File Proxy; you can check the link I will provide. It is not the topic of today, but what I'm trying to explain is that it can really help you generate code. And also with writing test cases. I had a chance to talk to some companies; they are now using ChatGPT to write integration tests, because usually for integration tests you need input data, right? If some user clicks this button, then something should happen, and so on. In this case, ChatGPT can create input data for your programs. It's really good for testing, and for generating documents. If you, as a developer, are writing some documentation, you don't have to write it all on your own. You can just ask ChatGPT: please explain this API, what it does, and write the Swagger definition, or sorry, the OpenAPI definition. It can create an OpenAPI definition. Does that reduce the time? Right. As for further use cases, I'm gonna skip ahead; this was just to give you the idea. You can observe it yourself: if you're using ChatGPT, you know what its capabilities are. I'm not promoting ChatGPT; I don't work for Microsoft. And these are the differences. As I said, the ChatGPT interface is user-friendly: you can ask questions. The API exposes these AI models through REST, and you can interact with the AI inside your application. This is one advantage of using the API; you will see how it can be beneficial in my demo. And my statement here: okay, we understood that ChatGPT uses APIs through custom plugins, and custom plugins are able to call APIs, right? Again, API usage. And we also have a lot of services using ChatGPT through the API. In this case, my API should be secure, first of all. And second, it should be scalable: I should be able to scale my APIs, right, by running different instances of the same API, so that if one instance fails, it always falls back to another instance, and it's available 100% of the time. How to achieve this? How to make our API secure, performant, and scalable? That's actually a great question; I asked it too, and there are a lot of solutions available. These are about API success indicators, right: performance, availability, and security. And one of the answers was: use an API gateway. If you want secure, performant, and scalable APIs, you can use an API gateway. Why? As I explained in the previous talk, the API gateway sits in the center, right, at the heart: it knows all the APIs exposed by your backend services. That means it can control security, right? It's like a front door: I cannot just go inside; my friends are inside, but to see my friends, I need to show a pass or, how to say, get access, right? The API gateway is likewise a front door that accepts requests and secures your applications.
It can also scale. Your backend services can scale, but the API gateway can scale too, in such a way that, for example, you have one API gateway per set of services; there is a concept called backend for frontend, right? You can use a separate API gateway for mobile applications and a separate API gateway for, let's say, web applications, and so on, to show different information in different applications. My colleague Niko is also here; he has a very popular talk called Backend for Frontend, right? You can talk to him; he can explain a lot about the usability of these backend-for-frontend solutions. And of course the plugins, another important feature of API gateways: you can use plugins to achieve these cross-cutting concerns, like observability. I gave a presentation just before this one on how you can observe APIs using an API gateway, or how you can do authentication like single sign-on, or how you can use identity providers like Google or others in the API gateway. So, APISIX, let me introduce it one more time; it is on my t-shirt. It's an open source API gateway you can use for free. It has a lot of good features, like plugin hot reloading. What does that mean? Let's say, as we said, our API should be performant, right? You don't have to stop your currently running server to enable a new plugin. Without any downtime, you can remove or add a plugin. That's one of the capabilities of APISIX: no restarts, right? Other capabilities include plugin development: you can create your own plugin in your favorite programming language. As I said, I don't know Lua, I hate Lua; I can use the Java plugin runner to create my plugin in Java, or ask ChatGPT to write some Lua code. And of course the dashboard: you can use the dashboard to achieve the same things without writing any configuration in your local environment. These are some of the cool features, and why I like APISIX. The next topic is how to use this API gateway and AI together, because my problem statement was: for ChatGPT, we need secure APIs. In this case, I can use the API gateway to secure my API calls, right? I can enable JWT-token-based authentication, or even basic authentication with just a username and password. Imagine you want to implement this authentication within your own code: if you're using the Java Spring Framework, you need to use Spring Security, right? You need to do some configuration, a lot of stuff. Even if ChatGPT helps you, it takes time. But with the API gateway, you can just use one plugin to enable your security feature. It's faster and less time-consuming, because it's already been tested by the community. Another thing is performance. With ChatGPT, for example, if you're using the APIs to ask the AI something, it takes some time, and it's also not free: you pay each time you ask. In this case, you can use the API gateway's caching mechanism to cache some responses so that you can reuse them later. That improves performance too, right? And you can use other plugins, like rate limiting: for example, you can limit API access per some amount of time. I can say only three users can request my API for free, or 100 requests per minute; if they use more, I ask them to pay for the extra usage or change their pricing tier, like pay-as-you-go, and so on. There are other features you can use with AI as well.
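As a minimal sketch of that rate-limiting idea, assuming APISIX's limit-count plugin and placeholder numbers of mine, the relevant fragment of a route definition could look like this:

    # sketch: rate limiting on a route (the numbers are placeholders)
    plugins:
      limit-count:
        count: 100           # allow 100 requests...
        time_window: 60      # ...per 60 seconds...
        rejected_code: 429   # ...and answer 429 beyond that
        key_type: var
        key: remote_addr     # counted per client IP

No application code is involved; the gateway enforces the limit in front of the backend.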
I'm gonna demo it soon, as in this picture. As you can see, this picture illustrates: let's assume I have one client application. I am building a startup project, maybe a mobile application; you will see the application. This application talks to the API gateway, and the API gateway has a single endpoint, /ask-me-anything. We can ask anything, right? It then applies some plugins to handle authentication, rate limiting, security, and so on. And under the hood, it calls the /ai-chat endpoint, which is exposed by my Spring Boot application. I built a Spring Boot application that interacts with the AI; you will see. So basically I have mywebsite.com/ask-me-anything, an endpoint on my own domain, and I am calling another domain, the Spring Boot backend service, to get the data. Clear, right? This is the architectural diagram. Simple, yes. And next: when the Spring Boot application receives a request on /ai-chat, it can call the /v1/chat/completions endpoint of OpenAI. That's an existing API exposed for ChatGPT; you can use that endpoint to ask anything. Is this part also clear: how the request flows to the AI and comes back from the AI? If it's clear, we can jump to the demo session. Here in the presentation, you can also scan the QR code; it brings you to the GitHub repository with my demo, if you want to try it yourself after the session, if you're interested enough in AI solutions. Who is a Java developer here? Oh, I am also a Java developer, because in this demo I used Java and the Spring Framework. Let me show you. If you open this QR code, it brings you to this APISIX Java ChatGPT OpenAI API repo. And I have some branches, depending on what you want to achieve. One branch, the main branch, for example, shows the old way, how you can use curl commands to enable some plugins. You can also check the APISIX documentation; let me switch to it. To use the API gateway via the documentation approach, you need to, for example, create an upstream, a route, plugins, and so on, right? That's one way of configuring the API gateway. There is another way: using the standalone version. Let me switch back. I am gonna demo with standalone, because with standalone you can just write one single YAML file with all your routes and plugin configuration, and you don't have to restart anything. If you change it, it's picked up automatically, because APISIX does hot reloading, right? And if you open this branch, there are some folders. One folder is the OpenAI API: it's a simple Java application. It has a pom.xml file, right? If you open the pom.xml file, there is one dependency; it's actually just spring-boot-starter-web. And I am using one community library that wraps the API calls to ChatGPT in Java. You can also build your own SDK like this; it's not difficult to interact with the OpenAI API, because the OpenAI API documentation is actually good. You can see the API references; they teach you how to authenticate, right? For example, to authenticate you need an API key, and if you want to use some endpoint like, let's say, chat completions, you need to request that endpoint. If you remember from the architecture diagram, I had this /v1 call; I'm gonna call this chat completions endpoint now, without Java, without the API gateway. Let's do that. I assume that I have an API key here.
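As an aside, a minimal sketch of what such an /ai-chat controller could look like; I'm calling the REST endpoint directly with Spring's RestClient (Spring 6.1+) instead of the community library used in the talk, and the class name, model choice, and environment variable are my assumptions:

    // Hypothetical sketch: the demo app uses a community OpenAI client library;
    // here the chat completions REST endpoint is called directly instead.
    import java.util.List;
    import java.util.Map;
    import org.springframework.http.MediaType;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.client.RestClient;

    @RestController
    public class AiChatController {

        private final RestClient openAi = RestClient.builder()
                .baseUrl("https://api.openai.com")
                .defaultHeader("Authorization",
                        "Bearer " + System.getenv("OPENAI_API_KEY"))
                .build();

        @PostMapping("/ai-chat")
        public String aiChat(@RequestBody String question) {
            // Forward the user's question to OpenAI's chat completions
            // endpoint and return the raw JSON response to the caller.
            return openAi.post()
                    .uri("/v1/chat/completions")
                    .contentType(MediaType.APPLICATION_JSON)
                    .body(Map.of(
                            "model", "gpt-3.5-turbo",
                            "messages", List.of(
                                    Map.of("role", "user", "content", question))))
                    .retrieve()
                    .body(String.class);
        }
    }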
So, I have an API key; I have a paid account. I paid $10 to make this demo. But please don't use my API key; it can get overused. Let's do one request just to show you how this AI works. I have some curl command examples prepared here. I'm gonna request the api.openai.com chat completions endpoint with my authorization key, and I am just saying: you are a helpful assistant. Let's see. If I send this request; let me open my terminal here in a separate tab. There you go. And we can run it one more time here so it's visible. Press enter. You see? It's ChatGPT responding to me now: thank you, how can I assist you today, right? Very simple, right? I can choose the model, which AI model I want to use; right now I'm using the GPT-3.5 Turbo version. You can have a look in the documentation and experiment with which model offers what kind of power. But what if I don't want to use curl commands, because I want to add some functionality, because I'm building an application, right, to ask anything? For that, I built my Java application. As we saw at the beginning, under the OpenAI API folder: I have a simple Java application with a single controller class, with a single endpoint called /ai-chat, which uses a community library to send the same request we just did with curl, right? For example, I can ask anything from this chat. And I have a Docker Compose file that builds APISIX, my Java application, and Appsmith, which you will see soon, because I'm building a full-stack startup project. Appsmith is the UI solution; it's also open source, you can check it out. You will see on the screen what I did. First of all, let me show you the end solution, and then I will explain it piece by piece. The end solution is: let's run this Docker Compose file with two containers, APISIX and the OpenAI API app. Once my containers are up and running; I'm using Docker Desktop. I have 10 minutes; sorry, yeah, I'm going fast, don't worry. If I open this Appsmith UI, you will see my UI application here, on localhost. There you go. Oh, now it's not opening; let's try one more time. Yes, Appsmith is starting. It's a UI builder framework: you don't have to learn front-end technologies like JavaScript or HTML and CSS to build this UI. You can just use a dashboard with widgets, ready-made components, and put these components together to build applications. You will see now; once it has started, I have one application already built. A little bit of loading; here we go. My first AI application, built using Appsmith. If I launch it, it has two pages; let me bring it up one more time. Here we go; from the editor I can also launch the preview version. Here we go. It has two pages: a login page and a main page. Because we need security for the application, right, first of all, so that my users can log in and then use the AI solution. For the login page, I use a simple UI, right, but under the hood it's using APISIX. APISIX enables JWT token authentication: when I, for example, register my user in APISIX, they can use their email to log in, and then APISIX sends back the JWT token, and the Appsmith application saves this token to its store. Subsequent requests then go through directly, using the authentication header. Let's go. If I do login: login successful, I am on the main page. I'm not gonna ask about Microsoft Azure. You can ask any question here. What do you want me to ask? How old are you? Let's ask it.
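That prepared curl command looks roughly like this; the API key placeholder is mine, while the model and system message are the ones from the demo:

    # sketch: the chat completions request from the demo (key is a placeholder)
    curl https://api.openai.com/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "system", "content": "You are a helpful assistant."}]
      }'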
It actually reloaded the page automatically, right? You see the response coming from the AI, this one, because on the first load of the webpage it responds. How old are you? No, so it cannot respond to that specific question. Let's see: I am an AI, blah, blah, blah, I probably don't have the answer. Okay. Do you have a question? Do you want me to ask a question here? How is it pronounced, this one, Dion? Correct? Like this? Exactly. What is this? In English. Yeah. Let's ask. Maybe it doesn't know, huh? "Dion is a comprehensive online tool." No, you are not right. Actually, this is old data, right, from 2021; that's why the AI's answers are outdated. What I'm trying to show here is actually not the UI or how the AI responds, but how the security features work. I didn't spend much time on this demo: in 15 minutes, I built this JWT-based application with the API gateway, right? For example, I can log in, I can log out, I can do registration, without spending much time. If I log out, APISIX just sees an invalidated JWT token and I need to authenticate again, and the same path repeats, right? Then you can ask questions and get AI answers to those questions. So, that was the demo, a simple application. But under the hood, what's happening? First of all, I had one APISIX setup here. As you can see, it's a simple YAML file, like the ones you use to configure Kubernetes. I am configuring my backend application, my Java backend, right, which is running on Docker. Yes, five minutes. There is the Docker OpenAI API service, and I have a route called ask-me-anything. If you remember from the diagram: ask-me-anything. And when ask-me-anything is called from the external Appsmith application, we use proxy-rewrite, because this /ask-me-anything endpoint should be rewritten to the Java URI, which is actually /ai-chat, not /ask-me-anything. As you can see here, this is /ai-chat, and APISIX is rewriting the URI. For the login, on the /login endpoint, I am also using a single plugin, called jwt-auth. And I have one API consumer; for the consumer, if you remember, I used an email address here, an Appsmith gmail.com address. How easy is that, right? I created a consumer with a single YAML file, and that's all: APISIX is ready, my Java application is ready, and with just two pages, my Appsmith UI is ready. And now my question to you: who would like to invest in this startup project? I mean, at least it can do something, right? Of course, sorry, that's a new feature requirement; I will add it. Yeah, sometimes that's the point of why ChatGPT added custom plugin development. In this case, if I know the data exists somewhere, I can add it to my own database, to my Java application, for example, and then I can combine the two responses together. That way, I can empower my AI solution, right? So how much do you want to invest? Do you have a question? Yeah, if you're interested, please, go ahead. Yeah, as you can see in my demo, only the OpenAI API is paid; the rest is, of course, free. I don't remember exactly; I think OpenAI also provided $18 of free credit. I used it for five months; it actually lasts long, you can use it until you build something real, like my application.
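To recap the config part: a rough sketch of what that standalone apisix.yaml could look like; the upstream node, consumer name, key, and secret are placeholders of mine, not the demo's real values:

    # sketch: APISIX standalone config (values are placeholders)
    routes:
      - uri: /ask-me-anything
        plugins:
          jwt-auth: {}
          proxy-rewrite:
            uri: /ai-chat          # rewrite the public path to the Java endpoint
        upstream:
          type: roundrobin
          nodes:
            "openai-api:8080": 1   # the Spring Boot container
    consumers:
      - username: appsmith_user
        plugins:
          jwt-auth:
            key: appsmith-key
            secret: change-me
    #END                           # standalone config files end with this marker

Because standalone mode watches this file, editing it takes effect without restarting the gateway, which is the hot-reloading behavior mentioned earlier.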
Yeah, I will repeat the question: what if we want to implement additional features, right? I wanted to implement some features, but I couldn't because of the restrictions of the OpenAI API; yes, there are a lot of restrictions. It's the right question. One restriction we found is the old data: we cannot use current data. Another restriction is that we cannot give ChatGPT private information related to the company, right? In that case, I should use my own storage. One of my friends, who is actually not in IT, asked a really good question: okay, we're using ChatGPT, but I want to use my own data inside ChatGPT, and I should also be able to save the responses in my own storage for the future. That's not really possible now. Maybe there is a plugin that makes it achievable, but it's not production-ready. Storage like this, yes. There is a plugin, if I remember correctly, to achieve this. And I remember Azure announced a ChatGPT flow, a separate service on Azure, where you can specify a flow: I can say one output should be the input to another ChatGPT step, so that you can have complex flows. Yeah. Are there any features I want to implement but haven't yet? Yeah: I want to implement, for example, my own custom plugin that shows all the discounts around the city, but the limitation is that I haven't got access yet to custom plugin development. We can discuss other limitations later. So we can stop here, because we are running out of time. Thank you all for your attention. You can check out this demo; AI is great, let's see what the future brings. Thank you.

So we can stop. Okay. Hello everybody. Thank you for coming to this talk. My name is Dimitri, and today I'll be talking about modern strace. Strace is a Linux system call tracer that works from user space, with a long history. So what is modern strace? Well, it depends. For the purpose of this talk, modern strace is all the features accumulated since the previous talk I gave at DevConf, which was also called Modern Strace. So if you are interested in the things that used to be modern in those days, you can have a look at that talk. But now we'll be talking about the really new features, since 2019; quite a lot of them have accumulated over these several years. There are several groups of features: those that affect the tracing process itself; most of the features are about the tracing output, what you would like to see; a few features are about filtering, what you don't want to see; and a few features are about tampering, how you'd like to change what you see. There is also one summary option and one funny option that makes strace show you some tips. So let's start with the feature which is probably the most impressive of all. The example you see is an infamous one, used to demonstrate how slow strace could be, how slow the programs it traces could be. And eventually we have a feature that turns all of this upside down: strace no longer slows things down that way. By installing a seccomp BPF program, the traced program runs almost as fast as an untraced one. As you can see from these statistics, it's exactly the same example people used to show to demonstrate how slow things are. You see maybe a 10% difference when the I/O-heavy process is under strace with seccomp-bpf enabled, compared to being something like 40 times slower before.
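In command form, enabling it looks roughly like this; the traced program name is a placeholder:

    # sketch: seccomp-bpf mode; -f (follow forks) is required for it to work
    strace -f --seccomp-bpf -e trace=%file ./io-heavy-program

The installed BPF filter lets syscalls that strace isn't interested in run at full speed, and only the filtered ones trap to the tracer.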
It looks too good to be true, right? However, it's really that good. But there are some limitations. Maybe these limitations are not that important for you, but you should know about them, just to choose the right tool. Because of the nature of seccomp programs, which you can install but cannot remove, you are required to use follow-forks mode, because the seccomp program is inherited across forks. So unless you specify follow-forks, this --seccomp-bpf option has no effect. Also, it is not compatible with any option that detaches strace, for essentially the same reason: you can't remove this program. And what would happen if the program keeps running but the tracer is no longer there? These seccomp stops, which are used to communicate with the tracer, turn into ENOSYS errors, which effectively disables those syscalls. And this is probably not the thing you want the program to do. Like in the following example; it's a kind of artificial example, but from it you can see how badly this could go. We're just tracing the exit_group syscall. When there is no seccomp, we just detach strace and nothing happens. But in the case of seccomp instrumentation, there is no way for the tracee to work normally after its tracer has detached. It actually segfaults, because at the moment of exit_group it can't perform this system call, and then it falls through to a trapping instruction, something that depends on the architecture, and segfaults. So it affects the behavior of the process. So while seccomp instrumentation in strace is really fast, there are cases where you can't use it; you should just be aware of this. This is the reason why we can't, for example, enable it by default. There is also a quite old feature that enables strace to daemonize itself. By default, when you run a program under strace, strace forks and the traced program runs as its child process. But in some cases, you don't want strace to be visible in the process tree. So things are turned upside down: the traced program's process is the parent, and strace runs as a child, or actually as a grandchild, because it forks twice so as not to be the direct child of the tracee; that way it isn't visible. It's quite an old option; it has existed since 2011, I think. So, for example, if you run something under timeout, there is a clear difference, because timeout sends its signal to the process, and otherwise strace detaches too early; here you can see the output. So why am I talking about a feature which is more than 10 years old? Because it's not enough. Simple daemonizing is not enough, as you can see in this example: when, for example, you send SIGKILL, it kills strace anyway, because the signal is sent to the whole process group. So we added an option to also move the daemonized strace to a separate process group. Actually, you can go further and specify triple -D to move strace to a separate session if you need that, but for this timeout example, it's enough to move it to a separate process group. This way, strace is not affected by signals sent to its tracee. There are some ideas about maybe enabling this daemonizing mode by default, daemonizing with moving to a separate process group. But the previous behavior has existed for too long; we don't know, maybe somebody is relying on the traditional behavior, and we are quite picky about backwards compatibility. So we decided we would rather add this option.
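A minimal sketch of that timeout scenario, assuming the flag spellings I remember (-D daemonize, -DD separate process group, -DDD separate session); the traced command is a placeholder:

    # sketch: timeout signals the command's process group; with -DD, the
    # daemonized strace sits in its own process group, survives the signal,
    # and prints the full trace.
    timeout -s KILL 3 strace -DD -e trace=nanosleep sleep 10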
Another very recent option that controls strace behavior is the ability to stop tracing after a specified number of system calls, counting those that are not filtered out. So if you are tracing just a few syscalls, only those you're tracing are taken into account. People suggested this could be useful for some automated testing scenarios: attach to some running process, capture whatever number of system calls you are interested in, and detach. And in this quite artificial example, I demonstrate how I sometimes use it: when I want to attach to a process that generates a lot of system calls, but I want just a few of them, I attach, grab these few syscalls, and detach. For that kind of purpose, it's very handy. And here is a very simple feature: if you are developing a multiplexer program like kmod or BusyBox, that is, a program that has several names and whose behavior depends on the name it's invoked under, this is a very easy way to test the program, or affect its behavior, without installing it. Because when it's installed, you can use its regular alias names, but when it's not, this is very simple. Moving on to features that control various aspects of strace output. These new features allow you to see what's behind process IDs: for example, you can see the program name behind the process IDs you see in the output. The option is called --decode-pids=comm, as in /proc/pid/comm; that's where the name came from. It also has a short alias, -Y (capital), because the decode-file-descriptors option has the alias -y, so it's analogous. When you are tracing programs that create PID namespaces, sometimes you want to see not just the process ID as visible to the traced processes, but also the process ID as it looks from the strace process's namespace. Why could this be useful? Because otherwise you wouldn't easily see which process is which. For example, as you can see here, the process with PID 2 and 3 is actually the process you see later in the left column. So you can actually tell which process is which. And you can combine both of these, displaying both command names and PID namespace translation, and this way it's even more visible: you can see the program name the process comm contains, the PID in the target namespace, and the PID in the strace namespace, all clearly visible. In this example I used the option --decode-pids=all because it's more handy. And maybe someday we'll have the alias -YY; I don't really know why we don't have it yet. It reminds me of -yy, which corresponds to decoding file descriptor information, which is exactly what I'll be talking about now. We have one more feature to decode file descriptor information: the information associated with signalfd file descriptors. It's quite handy: when you see a system call accepting some signalfd descriptor, you can see right away the signal mask associated with it. Looks nice. Also, you can see SELinux contexts associated with process IDs, with file descriptors, and with file names. Here is exactly the same example with and without this information; as you can see, it might be quite handy if you use SELinux. This is the short form and this is the full form; it's so lengthy. By the way, all these strange-looking arrows you see are not produced by strace; so far strace doesn't produce funny-looking arrows. Unfortunately, strace output with full SELinux context is very lengthy.
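To recap this part in command form: hedged sketches with the flag spellings as I recall them from recent strace releases; the program names and PID are placeholders:

    # sketch: show command names plus PID-namespace translation for PIDs
    strace -f --decode-pids=all -e trace=%process ./spawner
    # sketch: detach after the first five matching syscalls
    strace -p 1234 --syscall-limit=5 -e trace=write
    # sketch: run a multi-call binary under one of its alias names
    strace --argv0=ls ./busybox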
As you can see, the lines get very long. Another feature, which is probably really important for those who are debugging strange SELinux-related errors, is showing SELinux context mismatches. Like in this very artificial example: assume there is a file whose SELinux context doesn't match the context database; the database says it is unconfined, while it actually has a different one. So strace in this --secontext=full,mismatch mode shows the difference. You can see how long these lines can be, but if you're using SELinux you won't be surprised. Anyway. You can also show syscall numbers, which is kind of strange: why would you need syscall numbers? You need system calls, not system call numbers. But here's the example: there is one dying architecture called x86, which used to have, and still has, a few multiplexing system calls like socketcall. And there are still libc libraries that use this system call for backwards compatibility. So if for some reason you want to know exactly which way this socket call was invoked, via a direct system call or via socketcall, you can use this. Okay, let's talk about filtering now. We have a feature that was announced many years ago as the -z option. It was announced but never worked, and only in 2019, with proper return status filtering, did it actually start working. So you can filter system calls by their exit status: you can show only successful syscalls, or only failed ones, or some combinations. By default it of course shows everything, but you can see how this can be useful from this example. If you want to see only successful syscalls, you would probably use the short option, because it's really short. But if you want to see something less common, like those system calls that don't finish, you would use the long option. One less obvious consequence of using this status filtering is aggregation. For the obvious reason that strace doesn't know whether it should print or skip a particular syscall until it has finished, it prints it at the moment it decides whether it should be printed or not. So it no longer prints all this popular "unfinished" and "resumed" stuff, which could clutter the output. As you can see in this example: this is without aggregation, and this is the same thing with aggregation. But be careful, it could confuse you: from this output you could think these nanosleep system calls were issued in this particular order, but you remember from the previous slide that they were invoked almost simultaneously. So the consequence of this aggregation is a kind of reordering of the output. But there is no other way if you want to see whole lines and don't know in advance whether they will be shown or not. It's probably the only way; or aggregate afterwards. We actually have an aggregating program, but it's not modern, it's from the previous talk, sorry. You can also filter system calls by file descriptor numbers, so strace shows you just those system calls that are performed on the specified set of file descriptors. Like in this small program, just a regular cut program. The idea is that you can filter by path, by the path to a file, but for some descriptors there is no path at all: if it's, I don't know, a signalfd file descriptor, it doesn't have any path. So you can use this instead. Maybe that's on another slide.
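For reference, hedged sketches of those filters with the spellings as I recall them; the traced commands are placeholders:

    # sketch: only failed syscalls (short forms: -z successful, -Z failed)
    strace -Z cat /nonexistent
    # sketch: only syscalls that never finished
    strace --status=unfinished ./program
    # sketch: full SELinux contexts, flagging mismatches with the database
    strace --secontext=full,mismatch -e trace=%file cat /etc/passwd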
Every now and then, we add more system call filtering classes, because people cannot and shouldn't have to remember which particular system call names exist on this or that architecture. So there are groups; for example, we added two more groups for filtering system calls, one of them related to credentials. Okay. Poke injection is nice. We have had various kinds of system call injection for quite some time, but so far we didn't inject anything into memory. This new feature allows you to inject not just into the exit status of a system call, but right into the memory referenced by system call arguments. In this somewhat artificial example, I substitute the second argument of the openat system call by changing the string itself, from /etc/shadow to some different name. So the system call succeeds. But unfortunately, it's not that easy to tell what the file name is, because it's a hex string, and you probably can't read a hex string that easily. It's a pity, but this is the interface we use. And you can also inject into memory after the system call exits. When injecting the return value is not enough, you can inject the actual data that the system call would have written if it had really been called. So in this example, not just the return value is injected, but the actual data. And in this case, you can see what this hex is about: it's the string read by the reading program. But maybe it's a good idea to add some interface to strace poke injection that accepts actual strings, so it would be a bit more readable. Maybe. Don't know. We have one more option to control the statistics output, because over the last few years we added a few features for gathering statistics; we can gather more different information about system calls. By default, we show this; this is what we show by default. But if you look into the man page, you'll see there are more columns, and you can specify these or whatever other parameters you are really interested in. So, okay, the last but not least feature is called --tips. It makes strace show you various tips, tricks, and tweaks. It was made initially as an April Fools' joke, but it was too good to be kept just as a joke, so it became an integral part of strace. In the beginning you had to see some actual strace output before seeing a tip, but in the latest release you can just see the tip without tracing anything. You can specify which particular tip you would like to see, but by default it shows a random tip, which is kind of nice. So let's have a look at how one of the funniest tips looks. Okay, tip number 31 says: medicinal effects of strace can be achieved by invoking it with the following set of options. Medicinal effects of strace. What? Actually, this phrase was coined by somebody who really used strace this way and requested a feature: how to use strace to make programs that don't work actually work. Because sometimes buggy programs don't work in the regular way but, for some reason, because they are too buggy, they work just fine under strace. So that person wanted this to be documented. And now he or she, I don't know, should be happy: it is documented. There is something to this idea, because what strace does in this mode is no printing at all; all it does is make the tracee stop twice on every system call. This affects the order in which programs execute their system calls. So: fewer races, slower execution, and some bugs don't manifest themselves.
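To recap the tampering and summary options in command form: hedged sketches, with the injection payload and column names being my assumptions based on the man page as I remember it:

    # sketch: rewrite openat's path argument (arg2) on syscall entry; the hex
    # payload spells "/tmp/fake\0", a made-up target path
    strace -e trace=openat \
           -e inject=openat:poke_enter=@arg2=2f746d702f66616b6500 \
           cat /etc/shadow
    # sketch: pick non-default columns for the -c summary table
    strace -c -U name,calls,errors,min-time,max-time ls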
Okay, maybe the last thing I wanted to say is that tomorrow you can attend a few more talks in this kind of unofficial mini-conference about strace. Tomorrow, Eugene will be talking about the current state of netlink decoding, and Renaud will be talking about using strace to troubleshoot issues. This is going to be in room G202. I don't know where this room is, but please find it. So thank you, and I'm ready to answer your questions. It's not that easy to tell, because... yeah, okay, great, thank you so much. So the question was how many tips strace currently has. And the answer is that it's not specified, and it's subject to growth. You can't easily tell, because if you specify a number greater than the maximum, it just gets rounded to an existing tip number. So you can't easily tell that way. If you watch the tips one by one, you will notice at some point that you've already seen one, and that probably means you've seen all the tips; or maybe your strace version is not fresh enough, and the next version will add some more. These numbers are kind of stable, but we don't promise this; with every new version, you should check all the tips once more. Any more questions? Yes, please. The question was what would happen if --tips were given the parameter none. It's the way to turn tips off, I think; print no tip, yeah. That's it, very simple. This means that you can specify --tips, then --tips=none, then --tips again, and it will honor the last one specified. That's the way it works. Eugene wanted to add that we follow this behavior with most of the options, or maybe all options, for the reason that if you use some alias or a wrapper that invokes strace, you can always specify something on top and it will override. We have a few more minutes for questions. The question was: can strace filter for a specific terminal? I don't think so, no. But I don't know why; it makes sense. Maybe nobody has come up with a patch yet. Could be. Sorry, I didn't get the question. What do you want to replace with what? Okay, so, it doesn't matter, okay. Yeah, so the question was whether poke injection or any other injection could change system call arguments themselves. We don't yet have this, for some reason; maybe for the same reason: nobody has contributed it yet. Eugene says he believes it's not possible on every architecture, but we don't have to support a feature on every architecture. For example, seccomp BPF programs are probably not supported on some architectures, so there we don't support seccomp BPF. So this shouldn't be a big problem. But we'll have to come up with an interface, and somebody will have to implement it, I think. Or unless the feature implements itself; I don't know. Okay, any more questions? No? If there are no more questions, then we probably can say thank you.

So, can you hear me? You can hear me? Okay, cool. Welcome. This is the last talk of today; I hope you enjoyed the morning talks. I just wanna start by telling you a story about an experience. Not about developer experience; this is simply a user experience. This was eight years ago in Berlin. I was traveling with this ticket and I got caught by a cop while carrying this ticket. Do you know what the reason was? Reduced fare. What is the reduced fare? Nowhere does it mention that it's for children.
Yeah, my understanding, when I was new to Berlin, first time, was that there's no information that it's for kids. And I had traveled with this ticket a couple of times; I thought it was just for short distances of two stations, not for long ones. I traveled and I was like, okay. But I got to know, when the cop caught me, that, hey, you're traveling with a kid's ticket. This is the kind of experience we see every day, here and there. So I'm gonna talk about developer experience, and also how community can play a crucial role in impacting the developer experience. I'm gonna show you five techniques which I learned from my recent roles and projects, so that you can take them and apply them in your day job next week. And if you have any feedback, let me know; I'm happy to discuss. And there's another nice example. During the COVID time, in the train stations, they were showing you what a 1.5 meter distance is: it's the equivalent of, like, a horse. How about this? This is so cool, right? To give you 1.5 meters, this is the distance you need to maintain. I really like this one. Yeah, an elephant as well. And I also want you to take a quick look back at your morning until now. Can you think about your experience at this conference? Maybe just share with your neighbor, just for 10, 15 seconds: what was your biggest takeaway? Yeah, just talk to your neighbors. This is a community talk, so it's important that we talk to each other and help. What are your biggest takeaways? What are the positives, what are the negatives? Thank you. Thank you for sharing your experience. And for me, it's the great organization. If you look, when you enter, there are signboards, and there's also information on where to get Wi-Fi, where the check-in is, all this information: without you noticing, it's helping you. Also, I had a problem today: 30 minutes ago, the volunteer organization team helped me figure out the Wi-Fi. So good; thank you very much, organizing team. How cool is this? I really like this; this is a great experience, whether you notice and appreciate it or not. And I also had a great community event today; thanks, Buriana, for organizing this. We had a great localization community meetup in Brno, and we had a lot of really nice conversations; we got to exchange a lot of great ideas. That's the power of community. And a little bit about this DevConf organization, the committee itself: how cool is that, supporting speakers? I really felt that they're supportive; I really felt like I could also contribute. This is a great initiative, and I've not seen it elsewhere; I've spoken at many conferences, and I've not seen this. Maybe some support something like this, but this is great. And if you have seen the app, you can also see who is attending, and what their backgrounds are, so I know what my talk should cover. I can also add presentations, and I can interact with the audience, interact with the community. So this is a great platform, and I felt this was really well done. You can also give feedback: is my talk bad or good, what can I improve? That way I can also improve. And this is all community effort. And I'm also gonna share, like I said before, stories from my recent projects at Lokalise, Contentful, and commercetools.
I'm gonna share a couple of stories about how I approached things and how the community helped me. I'm based in Berlin, since 2015. I run several meetups, including design and front-end development related communities. There's a saying in Berlin: if you are in Berlin, you don't go to a restaurant for dinner, you go to a meetup. We have a meetup for pretty much everything: every tech topic you can think of, every front-end framework, design, marketing, product development; we have meetups for everything. It's such a great community place, and I learned a lot by organizing meetups and running workshops. How many of you think user experience is the same as developer experience? Okay, good that not many do. So let's take a look. I applied user experience techniques to developer tools to improve the developer experience. I straightaway applied what I knew. For example, I was building a design system, UI components for building UI applications, at Contentful. We have a framework called Forma 36. It's just UI building blocks; you can think of it like Lego blocks for building a UI. And as a product manager, I had a big backlog of items to work on: improving the documentation, improving our components; we had a bunch of items. But this information was based on my user interviews: I talked to a lot of our customers, and I also talked to a lot of our community developers. I asked: hey, which components are you struggling with? I got a set of feedback. But we also have a community Slack channel at Contentful, where I started answering questions. I started understanding which problems they were struggling with while building UI applications. I started helping them; I just started answering them. And slowly, I realized that the insights I got from the user interviews were totally different from the insights I learned from this community interaction. And I felt that if you just ask your users or your developers, they're just gonna say, as in the Henry Ford story, that they want faster horses. You're not gonna get what you truly need. And there is a great quote from Amy of Sales Safari: if you want to know what a person really values, what they really suffer from, what they really do, don't listen to their words; observe their actions. I literally followed this. If you ask me how: since I was answering a lot of questions on the Slack channel, I slowly started building trust among those community developers, and they started reaching out to me in direct messages on Slack. And for me, it was then very easy to say: hey, I'm working on something, I would like to do pair programming with you, let's build this together. I want to see what your environment looks like. How do you approach things? Do you look into the documentation? What does your code editor look like? It gave me a lot of insights which I didn't get during the user interviews. And one of the best insights I got was about TypeScript. We were supporting TypeScript in the UI library, and what I noticed with one of our community developers is that he never looked into the documentation. For him, the documentation was the types in the code editor itself: you could understand what the arguments are and what the argument types are. I never saw him looking into the documentation. But our backlog was full of improving the documentation and adding new components.
But after this, we changed our strategy to also make sure the types are top quality, so that developers can get their job done easily. Another point is just an analogy: what we think of as user experience is that we just want to deliver a burger; hey, as a customer, you get delivered what you ask for. But actually, developers don't want a burger; they want a meal kit. They need proper instructions on how to prepare the meal, and during that preparation they go through a step-by-step guide, and there's going to be a lot of troubleshooting involved. This is my analogy for why developer experience is not the same as user experience. User experience is for consumers; I have a restaurant analogy: you are at a restaurant, you sit at the table, you order, you get your food. That's the burger you get, right? Whereas developer experience is for makers. Developers are makers; developers are builders. They want to build things; they want to customize things. They don't just want to have a burger; they want to customize it. That's why the tools you provide to developers, and the way they interact with them, are different from the techniques you apply to user experience. And I saw the same thing at commercetools, building UIKit, another design system, for our customers. I also did pair programming sessions with our solution engineers. That is about internal community; I'm going to talk about why it's very important to build an internal community first and how you can get a lot of benefits from it. Moving on: okay, we understood there's user experience and developer experience. But how can I improve the developer experience? How many of you know the jobs-to-be-done framework? Okay, one hand. Okay, so I can give you a quick look. For example: people don't want your quarter-inch drill; they want a quarter-inch hole. They just want to get their job done; they don't care which tool they use. They want to get their job done. To understand what this means, I have a couple of examples. For example, 20 years ago, if I wanted to buy a train ticket, I would go to a physical store; I would have to stand in line. Today I don't have to do that; I can just get it from the mobile app. Same job: purchasing a train ticket. But the way I get it done changed: I can do it sitting at home, I can do it here; it's much faster. The tools change, but the job, buying a train ticket, remains the same. The new tools just get your job done much faster. Another example: Netflix. When Netflix started, they were shipping DVDs. But now nobody watches from discs; everybody watches on their mobile phone or their computer. You're still watching movies, but the tool you use to watch them has changed. How about coding? You, as a developer, now have Copilot, for example; this is just an example. Copilot can help you as a companion to get your job done more easily, but your job is still to build useful software. Another AI example: there's a documentation site where you can search what's in the documentation. But here is a README documentation where AI can also help you: what do you want it to do, what are the steps? It can be more advanced; it can understand what you are trying to achieve, and it can provide you more details.
And for example, Stripe did this with ChatGPT: they have a documentation experiment where you can ask something, and it gives you a more detailed, concrete example of what you have to do, so you get it done much faster. And this is another example; since we are talking about community: this is at an airport, before COVID, before corona. They were asking, near the restrooms: how was your experience? But now they're cleverer: after COVID, you can just scan a code and share your feedback or concerns. Still the same job, feedback, but done differently. So think about how you can enhance developer experience with the tools you're building: how their fundamental job can be done in a better way. And there are a few examples. For example, this is from Xata. Xata provides you a queryable database, and it gives you TypeScript and JavaScript code, so you as a developer don't have to write it. Depending on which query you want to perform, it gives you the code; you just have to copy-paste it. So you get the same job done much faster, because you have the code snippet and it's easy to consume this data. And looking at the interfaces: compared to user interfaces, there are certain interfaces we need to consider when we are building for developers. Those are, of course, the code editors, the command line, APIs, SDKs, and documentation. And while thinking about all these interfaces, there will be troubleshooting; there could be errors. This is a typical journey of a developer consuming your tool, your product, right? Getting started, reading docs, trying to use your tool. But there could be some troubleshooting, there could be some issues. How are you gonna provide those paths? Make it easy for them. And this is a great example from Cypress, an end-to-end tool for UI automation: when there is an issue, they also show you how to fix it. This is a great way to help your customers if you're building developer tools. And you can see here: this is where the documentation goes into detail: what's the problem, how to fix it. So: making their life easier when fixing these issues. And another viewpoint, before we go into the community part: developer efficiency. What is developer efficiency? Let's take a simple example. As a developer, I'm building some UI application. How quickly can I make changes in my editor, and how quickly can I see the changes in the UI? The faster I can see the changes, the higher the efficiency. Let's say I'm working on a monolithic application: I make some changes, and it takes one or two minutes to build, deploy, and show the local preview; that's low efficiency. Similarly, if you are working on a unit test and it gives you instant feedback, that's high efficiency; if it takes minutes, it's low efficiency. I'm only talking about those two examples, but the same applies to validating changes in the local environment, validating against standards, security standards, and deploying and shipping to production. A high-efficiency environment is one where your tool is efficient, making it easy for developers to build faster. At the end of the day, you're helping them get their job done much quicker. So always aim for your tool to enable a high-efficiency environment. And there are more details about what low effectiveness and high effectiveness look like.
And now the last part of this talk: how can we improve developer experience through community? We've understood what user experience is and what developer experience is, but how is community going to help you?

The first point is: how open are you to feedback? Are you ready to receive it? If not, your users will just give up and leave. If your system isn't ready to accept feedback, they struggle in the middle and walk away, and you never get the feedback. What are some things you can try? For example, this is from the Next.js documentation: as a user, you can enter your email (previously the email was even optional) and just send them your feedback. We did a similar thing at Localize. We have a developer portal, and I created a simple Google Form replicating the same thing: an optional email field plus "what's the problem?". And you know what, I get so much feedback every month, and it is so useful; that's how our documentation stays up to date. So the first tip is: be open for feedback. Do you have forms? Do you make it clear you're ready to hear feedback, in the places where your customers already are? That's one point.

The second, very important point, and if you take one takeaway from my talk, it's this one: provide value to the community before expecting returns. What do I mean by this? Take this example: a tour guide explains and provides value, shows you all the historical monuments, and at the end of the tour you give them a tip based on how happy you were with the guide. Similarly, here's what I've seen a lot of product managers do, including me before I had a Slack community: reach out directly to people who ask questions or have problems. "Hey, I'm working on a new project, I'd like some feedback, do you want to be a beta tester?" No, they will not come. Why would they come and help you? It's not their problem, it's your problem. I have no credibility, no trust with them; how can I expect them to help me? So the big lesson I learned is: build trust. How can I build trust? By helping them, by trying to solve their problems.

Also, recognize community contributions. One great example is Contentful's Forma 36 open source repository. Even if I make a small documentation contribution, or just open a bug and suggest a fix, my picture and the type of contribution are listed there. This is a great way to recognize contributors: "hey, I contributed to this project", and I can really feel good about it. So recognizing community contributions is another thing you can definitely do.

And as I was explaining, building trust is not going to happen in a couple of days, or a couple of hours; it might take a couple of months. But it's fundamental to getting valuable community feedback for your product.

Here is another very interesting pyramid: you can expect about one percent of your community to be contributors. Say I have 1,000 community members: around 10 people might be your true contributors or champions who genuinely want to contribute. About 9% will engage more lightly: they might comment, they might add something, they might read and say "hey, this could be better".
Say we have a Slack channel or Slack group: 1% are the passionate champions talking about your open source tool and its challenges, and 9% might be commenting, interacting, liking, and sharing. The remaining 90% of your community is just observing; don't expect them to interact, they're simply consuming the content. The same applies to social media: only 1% share content regularly, around 9% interact, and 90% just consume, they just scroll.

You can also learn interesting tactics from social media influencers. They always follow the pattern: "hey, please let me know in the comments, please let me know what you think". Those comments, that feedback from the community, is exactly what helps them decide the next big piece of content to create. I've seen the same thing with LinkedIn polls, used to collect what topic to write about next. Again, it's getting ideas from the community and prioritizing what to focus on.

And this 1%: it's very important that you identify who those 1% of contributors are, and that you reward them. As developers, we love swag: conference swag, open source tool swag. It's up to you to identify those 1% contributors and reward them according to the type of contribution they're making. You can go a step further if you have more time and want to do it more systematically: Balsamiq, for example, has a customer advisory board, where you can lay out a plan for how members can best contribute to your roadmap and what your community contributors will get in return. And there are ambassador programs from various developer tools where the benefits are spelled out clearly. That way you're rewarding your 1% of community contributors and also inspiring them to create more such content and contribute more.

Okay, I'm going to wrap up with two more points. Once you have those 1% contributors and you have trust, you have the chance to run deep-dive user testing sessions with community developers. That's where you can do a pair programming session, maybe even longer than an hour, and build something together with your community. You'll understand the pain points, where they get stuck, and what you can improve; asking them to think aloud also teaches you things you won't get from user interviews alone.

Another tip I can give you is sharing observations. It would be great if your product manager and your designer also joined as passive observers. That speeds up the feedback and keeps everyone aligned: you don't have to discuss and argue "hey, I don't agree", because if they've observed the session themselves, the feedback and the conclusions come much faster. That's what I try to convey here: we all have different views, but once we attend these pair programming sessions, we start to feel that the pain is real, and afterwards all the team members agree that the problem exists and we need to fix it.

And I can recommend two books
if you want to run user testing and pair programming sessions: they cover how to do that and which questions to ask, favoring open-ended questions. I also use automation tools like Calendly and Zoom. Zoom works perfectly well for me: as soon as somebody books a user testing session, I get a Zoom link and a calendar entry by email. It works perfectly, and I can record the session and watch it later for research purposes.

Another tip: physical events are a great place to get feedback. That's where you find developers; if you're hosting or sponsoring an event, that's where you can talk to your developers and ask for feedback. For example, Auth0 ran a Dev Day event and did user testing of their website at the event itself. It's a great place to find a lot of developers; I highly recommend that if you're sponsoring an event, you use that opportunity to meet as many developers as possible.

On internal community, I'll try to wrap up. Another very important tip is dogfooding: using the tools your product company builds inside the company itself. I don't have much time, but I'd like to show this: great advice from Kelsey about Kubernetes, how they had Kubernetes core engineers install Kubernetes on their own systems, how they struggled, and how Kubernetes improved afterwards.

Then there's the friction log technique: when somebody joins your company with a fresh mindset, they can log all the good and bad experiences of using your product. This technique is used by Stripe. That way you know where the pain is and where people are struggling, because new joiners have a fresh perspective that you, the developer building the product, might have lost.

I also use this for marketing, because I work in the marketing team at Localize. If I want to send an email to a developer audience, I do a very quick test with our internal engineers. It doesn't have to be 30 minutes; it can be a five- or ten-minute session. I just ask, "hey, what do you expect from this email, what do you understand?", and we can then make the email shorter and more developer-focused.

With that, I'll conclude my talk. Here are my takeaways for you. Be open to feedback: find the channels where you can get feedback, and always ask for it. If you have a Slack community, keep saying you're looking for feedback. Some companies even go one step further and post a calendar link so a community member can book a call and talk to you. Provide value to your community members before expecting returns: always help solve their challenges and build trust. Once you have trust, run deep-dive user testing sessions; that way you understand their pain points. Participate in in-person events and talk to lots of developers; that's a great chance to get feedback for your tool, especially if you're sponsoring the event. For internal community: do the dogfooding, use your internal engineering resources, and use the friction log technique; new developers joining your company are a great source of feedback for your product, because they view it from a fresh perspective. And do a lot of quick user testing with your own developers
whenever you're making developer-related improvements, because starting with your internal community is much easier than looking for developers outside: that takes time, and building trust takes time. But you can start today by talking to your internal community and building an internal community of developers. With that, I'm going to finish my talk. Thank you very much. If we have time, I can take a couple of questions. Any questions? Thank you very much.