All right. Hello, everyone. Thank you for attending my talk today. I know it's the end of the day, so I really appreciate you all being here. I'll try to keep it quick, but hopefully informative enough that you walk away with something you can learn about data-driven operations. In this talk I'll mainly cover a few things: what exactly DevOps is, why it's important, and how measurement fits into the entire DevOps lifecycle; what gave rise to the data-driven development we're currently seeing everywhere in this data-enriched era we're living in; and finally, how you can get started with metrics wherever you are on your DevOps journey, and how to think about incorporating metrics in whichever phase you might be in. I'll close with a quick demo, or an example, of how we try to do this at Red Hat. I hope you find it interesting and walk away with some new ideas. With that, let me introduce myself. I'm Hema Viradi, and I work in the Emerging Technology Group, which is part of the Office of the CTO at Red Hat. I work out of Boston, Massachusetts; I'm originally from Bangalore, India, and I came here to do my master's at Boston University. Since then, I've been working as a software engineer at Red Hat, and I'm now part of the Emerging Technology Group. To dive in, let's start by understanding what exactly DevOps is. Something I found interesting is that DevOps traces back to the Agile manifesto, and the principle I think is most relevant is this: our highest priority is to satisfy the customer through early and continuous delivery of valuable software. In the data-driven world we're in now, this is changing into the continuous delivery of valuable insights from the data you have at hand.
So now that we understand what DevOps essentially is and what it can lead to, let's look at how this DevOps culture came into existence and what's critical to building a strong DevOps culture foundation in whatever part of the organization you're working in. There are three major components that build the foundation of a strong DevOps culture. The first is the people. This is mainly about fostering increased collaboration and shared responsibility. The more closely you understand each other and work together as a team, the more ideas you generate collectively and the more coherent your team becomes, and that makes your DevOps culture as strong as it can be. The second pillar is the process. In every part of your DevOps lifecycle there are workflows and processes taking place, and you're trying to figure out the most effective way to build those processes and effective ways to automate some of those workflows; that's one of the major components involved in the entire lifecycle. And finally, what binds all of this together is, of course, the technology, and that is something that's continuously evolving and changing. As part of the Emerging Technology Group at Red Hat, this is also something we're highly focused on and passionate about: there are so many new ways to do different workflows, and things are always changing along the way, whether that's in security, in automation pipelines, in quality assurance pipelines, or even from a data science perspective.
Having these three interconnected is what makes your culture as strong as it can be to begin with. But sometimes it's hard to adopt this DevOps culture. It may not exist yet in a lot of companies, and you might face challenges in adopting it, especially if the people on your team aren't aware of what the end goal or the final outcome is. So how do we make this transition easier? How do we help adopt some of these principles? One way is, of course, with the help of data, powered through metrics. Data is at the heart of everything right now, and ideally, the more data you have, the better the knowledge, the more ideas you can generate, and the more effectively you can work towards creating insights out of it. Once you have this shared data, the idea is to build shared knowledge out of it. That's where the people pillar comes in: you're trying to make the data a shared commodity, a shared entity, and have ideas generated out of it, which in turn influences your processes, the second part of the lifecycle. And finally, this helps you take the necessary actions and make the required decisions around your policies, your systems, and your services, and it essentially helps the organization make critical decisions too, whether that's stopping a project because it's not going in the right direction, or noticing that some of the KPIs you defined earlier have diverged and you're no longer looking at the right things. Being able to take the right actions at the right time is critical, and it's what most of our developers, architects, and so on are looking for. That's what gave rise to this entire data-driven development.
A good data-driven development practice should not just collect a lot of data and make it available; you also need to start aggregating all of that information in meaningful ways. There are tons of data sources, and oftentimes they tend to be kept in silos, but you need to make sure you aggregate them and can draw correlations between them, so that you're tapping into all the right sources. Once you do that, it helps you convey things in much more meaningful ways, and not just for your business leaders or project stakeholders, but also for the ICs, the individual contributors, because they're the ones driving these efforts. It gives them an overview of how their work supports the overall goal or mission of their team: how is this one PR I made actually being utilized in the wider organization, the wider picture of it all? And then, much like test-driven development, you want to be able to answer questions really quickly. You want to be able to go back in time and look at historical data, zoom in and zoom out as much as possible, and look at different types of visualizations. You no longer want only static reports; you also want interactive dashboards, whether you're prototyping them in Jupyter notebooks or in enterprise or open source dashboarding tools for a higher-level view of the work you're doing. These are some of the things that led to this data-driven approach, why it's needed right now by most of the teams we're looking at, and how we're working with it.
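To make the aggregation point concrete, here is a minimal sketch (all data and field names are hypothetical) of joining two siloed sources, CI build stats and GitHub activity, on a shared repository key so they can be correlated instead of analyzed separately:

```python
# Two hypothetical, siloed data sources keyed by repository name.
ci_runs = [
    {"repo": "project-a", "avg_build_minutes": 12.5},
    {"repo": "project-b", "avg_build_minutes": 4.2},
]
github_activity = [
    {"repo": "project-a", "open_prs": 7},
    {"repo": "project-b", "open_prs": 2},
]

def join_on_repo(left, right):
    """Join two lists of records on their shared 'repo' field."""
    by_repo = {row["repo"]: dict(row) for row in left}
    for row in right:
        by_repo.setdefault(row["repo"], {"repo": row["repo"]}).update(row)
    return list(by_repo.values())

combined = join_on_repo(ci_runs, github_activity)
for row in combined:
    print(row)
```

In practice this join would happen in a warehouse or a pandas DataFrame merge, but the idea is the same: once the sources share a key, you can ask correlated questions like "do repos with more open PRs have longer builds?".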
With all of this, now that we know we have all this data at hand, the one missing piece, the secret ingredient that ties everything together, is your metrics, your measurement. One analogy I think is easy to relate to: working with metrics is a lot like exploring and experimenting when you're cooking. With the right metrics you can leverage all the insights available to you and change course quickly. Like we said, you can make quicker decisions on the go, and when you don't see a benefit in something you've built, you can validate that against the metrics you've defined. However, one thing to keep in mind is that everybody is at a different pace, a different phase, in their DevOps journey. And just like cooking, it's a continuous learning process, a continuous iterative process. You'll probably not be good at it at the beginning, but the more you experiment, the more you explore, the more combinations you try out, the better you might get at identifying what these metrics are. So don't be discouraged if you don't find the right metrics on the first pass, because they're not going to stay fixed forever; you're going to keep iterating and evolving. Everybody's at a different stage of this measurement journey, and continuous learning is, I think, the key to making any of these metrics work. When people start working on metrics, they often ask: what is the one right metric I should integrate, the one right metric for DevOps? Unfortunately, there's no easy answer, because everybody measures things at different levels. Some measure at a very high level, some at a very detailed, granular level. There's no one-size-fits-all solution.
But one way to think about the right metrics is to categorize them. Again, similar to the cooking or food analogy: just like you have all types of food to choose from, your dairy products, your vegetables, and so on, you should think about your metrics and group them into different categories. So how do you start grouping them? One way is to first think about the outcome: what exactly do you want to achieve by having these metrics? As an example of an outcome, I've put something like improving customer satisfaction. Say you have a service or a product, and one of your goals is to gauge customer satisfaction with that product. That is your outcome, the goal you want to achieve. Then, to come up with metrics that achieve that outcome, you have inputs. These are the things that are actually under your control, things you can tweak and improve, which will help you get to the outcome of improved customer satisfaction. Some examples: how do I improve the response time? If it's an app, are the requests a user makes fast enough? Is there any lag or latency? Or, if you're more interested in the user experience, do I need to improve the UI? And are all of these metrics helping me get to the outcome of improving customer satisfaction? Keeping these two keys in mind, your outcome and the inputs that drive that outcome, will help you narrow down what kinds of metrics you should start analyzing and incorporating into your services and applications. And one framework for doing this is the concept of VALET metrics.
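The outcome-versus-inputs grouping just described can be sketched as a simple structure (the outcome and metric names below are hypothetical examples, not a prescribed list):

```python
# Group metrics by the outcome they drive: the outcome is the goal,
# the inputs are the levers under your control that move it.
metric_groups = {
    "improve_customer_satisfaction": {
        "inputs": ["avg_response_time_ms", "ui_error_rate", "page_load_time_ms"],
    },
}

def inputs_for(outcome):
    """Return the input metrics that drive a given outcome."""
    return metric_groups[outcome]["inputs"]

print(inputs_for("improve_customer_satisfaction"))
```

Even a lightweight catalog like this forces the useful question: for each metric you collect, which outcome is it actually an input to?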
VALET was a concept introduced by Google's SRE team; I believe they had a use case with Home Depot where they were trying to figure out how to measure a particular application or service. What I found interesting was that they identified five major things you should look at when you come at it from an operations perspective. So what does VALET stand for? The V stands for volume: how much business volume can my application or service handle? If you think of an Android or iOS application you download from an app store, the volume would be the number of times the app was downloaded. For an open source project, the volume could be something like the number of stars on the GitHub repository. These are indicators that gauge the traffic for a given application or service. Then we come to A, availability: is the service up and running when I need it? Is it easily accessible when somebody logs in? These would be metrics like uptime, or, for example, when a process is submitted, how often does it get completed on time? Then comes L, latency: does the service respond quickly when I use it? What is the average response time for the service to return what the user asked for? Then we have E, errors. There will of course be cases where requests end up in failures and errors, so how many times has that happened? That again helps you gauge where you can improve and where the gaps are, so you can avoid errors happening on a frequent basis. And finally, T, tickets: whether the service requires any manual intervention.
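Taken together, the five VALET categories can be sketched with toy numbers (all field names and values below are hypothetical, just to show one metric per category):

```python
# A toy request log for a hypothetical service.
requests_log = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 340, "status": 200},
    {"latency_ms": 90,  "status": 500},
    {"latency_ms": 210, "status": 200},
]
manual_tickets = 1                       # T: issues that needed a human
uptime_seconds, window_seconds = 86100, 86400  # A: observed uptime in a day

valet = {
    "volume": len(requests_log),                                    # V
    "availability": uptime_seconds / window_seconds,                # A
    "latency_ms": sum(r["latency_ms"] for r in requests_log)
                  / len(requests_log),                              # L
    "errors": sum(1 for r in requests_log if r["status"] >= 500),   # E
    "tickets": manual_tickets,                                      # T
}
print(valet)
```

In a real system each of these would come from a different source (request counters, uptime probes, latency histograms, error counters, a ticketing system), which is exactly why the categorization helps you see what you are and aren't collecting.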
Nowadays you try to automate everything as much as possible, but there are still places where the service can't recover on its own and a person has to step in and, say, manually re-spin a pod or increase the resources. There are many other things that can happen which require manual intervention, and these are again metrics you can capture. How many times did we create an issue for this in the repository? Or, if you're using a ticket management service like ServiceNow, for example, a lot of SRE teams triage their issues as tickets: how many of those tickets relate to errors and things users couldn't figure out by themselves? That's another way to measure your operations and improve your product or service. So that's one example I found very useful for thinking about what kinds of metrics to start off with. And now that you know what your metrics are, again taking the cooking analogy, you also need to identify the right ingredients, the right data sources. Depending on your outcome (here, the ingredients on the slide look like they're for some kind of Italian dish), you're trying to figure out the right combination of data sources to get you those metrics. For example, most of our data is collected through Prometheus. I'm curious, how many of you are aware of Prometheus or have at least used it? The majority, yeah. Prometheus is now one of the most widely adopted tools for any operations, DevOps, or SRE team.
Everybody has Prometheus, the reason being that it provides the right time-series-based metrics to help you understand how your service is doing in terms of things like requests, availability, and so on. This is an example of what the Prometheus UI looks like. As you can see, the nature of the data is all time series: timestamps associated with the values of these metrics. There's actually a demo Prometheus instance available for anybody to look at some metrics and play around with the interface. Another aspect of Prometheus is that it supports multidimensional data collection, and it helps you analyze and diagnose problems. For example, if an application or a service was down for a couple of hours, you should see gaps, or a sudden drop in values; these are aspects that can guide you in terms of a metric's behavior over time. Prometheus also has its own query language, PromQL, which is similar in spirit to SQL. It stores all of the data in a time series database, and PromQL lets you query and aggregate your metrics. You can do that right in the UI and play around with queries like sum over time, count over time, and so on, to get a better picture of the different metrics you're looking at. Once you know your metrics and have identified your data sources, keep in mind that Prometheus is only one example; there are various other data sources you should be looking at, especially for an open source project. Another data source you'd be interested in is GitHub, since that's where most of your project tracking, project health, releases, version control, and everything else happens.
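To make the PromQL mentioned above a bit more concrete, here are a few illustrative queries of the kind you could paste into the Prometheus UI (the `job` label and the `http_requests_*` metric names are hypothetical; `up` is the standard scrape-health metric):

```
# Fraction of scrapes in the last day where the target was up (availability)
avg_over_time(up{job="my-service"}[1d])

# Requests per second over the last 5 minutes (volume)
rate(http_requests_total{job="my-service"}[5m])

# Number of samples observed in the last hour
count_over_time(http_requests_total{job="my-service"}[1h])

# 95th-percentile request latency from a histogram metric
histogram_quantile(0.95,
  sum by (le) (rate(http_request_duration_seconds_bucket{job="my-service"}[5m])))
```

Queries like these map fairly directly onto the VALET categories: availability, volume, and latency respectively.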
So GitHub is another data source you could explore, but here I'm taking Prometheus as the simpler use case. Once you have these metrics identified, the next step is to actually start exploring them. PromQL is great, but it gives you a fairly high-level overview of how the metrics behave. One way to get better insights and explore the metrics in more detail is something called the Prometheus API client. This is essentially a Python wrapper around the Prometheus HTTP API, and it provides additional ways of wrangling your data beyond the PromQL-based analysis. It's available as a PyPI package, so you can easily pip install it. A couple of colleagues on our team have worked on this project very closely, and we're trying to add a lot of new features where you can easily aggregate these metrics and store them in data frames. Data frames are the format of choice for most data scientists and data analysts who want to understand the structure of this time series data. The reason we came up with this API client is that, since Prometheus metrics are all time series, we also saw scope for doing AI or machine learning on top of them: capabilities like detecting failures over time or predicting anomalies on these timestamped values. So it was easier to interact with Prometheus metrics through this Prometheus API client, and like I said, it's easy to use inside your Python scripts or on top of Jupyter notebooks, giving you a closer look at how you can correlate one or two metrics together and how to visualize them.
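A small sketch of working with Prometheus data from Python follows. The commented lines show the typical shape of the `prometheus-api-client` package's API, which needs a live Prometheus server; the runnable part instead flattens a sample response in the standard Prometheus HTTP API "matrix" format into plain rows (the metric values shown are made up):

```python
import json

# Typical usage of the prometheus-api-client package (needs a live server):
#   from prometheus_api_client import PrometheusConnect, MetricRangeDataFrame
#   prom = PrometheusConnect(url="http://localhost:9090", disable_ssl=True)
#   data = prom.custom_query(query="up")

# A sample response in the Prometheus HTTP API range-query ("matrix") format.
sample_response = json.loads("""
{"status": "success",
 "data": {"resultType": "matrix",
          "result": [{"metric": {"__name__": "up", "job": "demo"},
                      "values": [[1700000000, "1"], [1700000030, "0"]]}]}}
""")

def matrix_to_rows(response):
    """Flatten a range-query response into (labels, timestamp, value) rows."""
    rows = []
    for series in response["data"]["result"]:
        for ts, value in series["values"]:
            rows.append((series["metric"], ts, float(value)))
    return rows

for labels, ts, value in matrix_to_rows(sample_response):
    print(labels.get("job"), ts, value)
```

Rows in this shape drop straight into a pandas DataFrame, which is essentially what the package's `MetricRangeDataFrame` helper does for you.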
The Prometheus UI does have a small visualization: when you query, you can see the time series graphs. But say you're more interested in seeing a count over time, or you want to compare two or three different metrics, or you want a pie chart or a histogram, and so on. It's easier to do that with your Python libraries, and that's why we explore metrics more closely using the Prometheus API client. And once you've explored and analyzed it, just like when you're done with your final dish, you need to make the outcome presentable and visually appealing, so that other folks can understand it and easily digest all the insights you've made. For that we look at different visualization tools, and our team in particular looks at two. The first that comes to mind is, of course, Grafana. Grafana is very compatible with Prometheus; it's essentially a monitoring and operational dashboarding tool, and the users here would be your developers, your operations team, your SRE teams. It supports a lot of other data sources apart from Prometheus as well: PostgreSQL, InfluxDB, and so on. Superset differs in that it's more of a BI tool. For example, the GitHub data you're collecting to understand project releases, issues, PRs, and things like that is easier to visualize inside Superset. It's an open source visualization tool from Apache, and it's compatible with Trino, PostgreSQL, Presto, and other databases. The users here would be your data analysts and data scientists, and more of the business leaders, like project owners or stakeholders, who might be more comfortable looking at these higher-level, interactive dashboards.
That's why we typically switch between these two types of visualization tools. With that, I'll quickly show a use case we did for our team, for a given application or service. This is essentially a Jupyter Book, which is nothing but a static HTML rendering of our GitHub repositories. What we did here: instead of keeping all your documentation inside a GitHub repository, sometimes it's easier to also host it as a standalone website. We have a service called Meteor, and what Meteor does is let you provide the URL of a GitHub repository you want to build a Jupyter Book for. Jupyter Book doesn't just render standalone HTML web pages; it also helps you spin up a Jupyter notebook environment. Jupyter notebooks are the common interactive environment that most data scientists develop their work in; it's supported through Python, and we use it as our main coding environment. So if your repository has a bunch of notebooks, this service also lets you publish those notebooks inside your standalone HTML web page. Once you submit the request, it takes a couple of minutes to go through the builds and so on, but eventually it shows you the Meteor that was created. It shows you the repo, and as you can see here, you can either open it as a JupyterHub or open it as a website. The website is the HTML rendering we saw; the JupyterHub, which I won't go into too much, essentially pulls up a Jupyter notebook server with your notebooks. If you've added notebooks to your repository, it will also build that notebook environment for you up here. And what we did was look at an example application, a service we had within our team, called the Kebechet tool.
Kebechet is a SourceOps bot that essentially automates updates of the dependencies in your Python projects. Most Python projects or GitHub repositories list all of the packages they need inside a Pipfile or a requirements.txt file. This project was done by one of our related teams, the Thoth team, and they created this bot, which automatically opens a PR against your Python repository to say that the packages your project is using have been updated to such-and-such versions, so you need to update your project's pins as well. It's a service that essentially runs against all your repositories, looking at the entire PyPI index to search for these different Python packages and trying to update to the latest versions, so that your project stays up to date with the latest dependencies. And they wanted to visualize how Kebechet was performing. They wanted to know: how many repositories are we maintaining? How long does the entire workflow trigger take for Kebechet to actually update all of these dependencies? When we were working with them, this is the architecture diagram they shared with us. I don't want to go into too much detail, but essentially, to understand the service a little more and to actually come up with metrics, we wanted to know the various components involved in the service: how they're interrelated, how they talk to each other, so that we could better recommend where and what metrics could be tracked. We got an idea of their architecture, some of their workflows (they have a bunch of different sequences of workflows based on how they do a package release, a Python dependency release, and so on), and the ways they were already monitoring the service.
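This is not Kebechet's actual implementation, but a minimal sketch of the idea behind a dependency-update bot: compare the versions pinned in a requirements file against the latest known releases and report what needs a bump. The `latest_versions` map here is hard-coded pretend data; the real bot consults the PyPI index and then opens a PR:

```python
# A pinned requirements file, as the bot would read it from a repository.
requirements_txt = """\
requests==2.28.0
flask==2.2.0
"""

# Pretend PyPI data; the real bot would query the package index instead.
latest_versions = {"requests": "2.31.0", "flask": "2.2.0"}

def outdated(requirements, latest):
    """Return (name, pinned, newest) for every pin that has a newer release."""
    stale = []
    for line in requirements.splitlines():
        name, _, pinned = line.partition("==")
        if pinned and latest.get(name, pinned) != pinned:
            stale.append((name, pinned, latest[name]))
    return stale

for name, pinned, newest in outdated(requirements_txt, latest_versions):
    print(f"{name}: {pinned} -> {newest}  (would open a PR)")
```

The interesting operational questions, as the talk notes, are around this loop: how many repositories it runs against, and how long each update workflow takes.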
From all the tooling they have, we identified the service components. They have components like Kafka, the streaming platform they're using; Argo, which is what they use to run all of their workflows, doing all of their GitOps work essentially; and Ceph as their S3 storage system. They also have a number of monitoring components: they're using Prometheus, of course, to track all of their metrics, and there are various other pieces involved, like a metrics exporter, a SQL metrics exporter, and so on. These were the different components they'd integrated with their service. Once we understood that, recall from the earlier slides the way to go about thinking of these metrics. We understood their objective: the bot's aim is to update all the package dependencies in various Python projects. Now, how do we think of metrics for this? One way is to start grouping, to start thinking of categories for your metrics. What we did here, keeping the concept of the VALET metrics in mind, was to also look at the different personas who are involved in understanding the workings and the performance of Kebechet.
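The persona-based grouping can be kept as a small catalog, sketched below with a few of the candidate metrics from this exercise (the structure and names are illustrative, not the team's actual schema):

```python
# Catalog candidate metrics by the persona who cares about them and by
# their VALET category, so you can prioritize what to build first.
metric_catalog = [
    {"name": "active_repositories",   "persona": "product owner", "valet": "Volume"},
    {"name": "avg_workflow_duration", "persona": "operations",    "valet": "Latency"},
    {"name": "component_uptime",      "persona": "operations",    "valet": "Availability"},
    {"name": "failed_workflow_runs",  "persona": "operations",    "valet": "Errors"},
]

def metrics_for(persona):
    """List the metric names relevant to a given persona."""
    return [m["name"] for m in metric_catalog if m["persona"] == persona]

print(metrics_for("operations"))
```

A table like this also records, per metric, which architecture component it can actually be fetched from, which is the next step described below.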
There are a bunch of people who would be interested. Some of the personas would be a product owner, a project manager, your technical architects, your operations team, your analysts or data scientists, and then the end users who are actually using the service. Once we identified these personas, we started prioritizing which of them are most important for us and which of them are looking for what type of metrics, and we tried to categorize those under the VALET concept. For example, from a product owner perspective: since Kebechet maintains and watches a lot of Python projects on GitHub, one metric that would be relevant for product owners is the number of active users, which in this case is the number of active repositories maintained by Kebechet. How many Python dependency sets is it looking at? Out of all the repositories we have, how many have actually enabled the bot? That falls under the V of VALET; that's the volume you're trying to capture here. There are a lot of other related metrics like this, such as daily or weekly active users (DAU, WAU), which are very relevant from a product owner perspective. Then, thinking about it from an operations perspective, we tried to identify both the persona and the component from which each metric can actually be fetched. For example, we looked at the average time taken by Kebechet to run the workflows and the average time taken to process requests; these fall under the latency aspect of your VALET metrics. Errors, for example, are another thing you want to capture. And availability here covers all the components we saw in the architecture diagram: you need to find out the availability of all these
components. Finding out the uptime for each of these components gives you more metrics you can start working with and experimenting on as you analyze these different types of VALET metrics. These are some ways to get started; of course, the more personas you target, the more specific you can get in terms of the application you're looking at. Once we had jotted down all of these metrics, we identified how to fetch, aggregate, and collect them, and so on. These are some of the steps we followed to export these Prometheus metrics and to make the required calculations on top of them. Once we had all of this, we connected it to a Grafana dashboard, again categorizing the dashboard. The overview row here captures only the uptimes: out of all the components, how many are up and how many are having issues. Then, from an operational metrics point of view, you can look at the different workflows here: how many succeeded, how many ended in an error status, and so on, and you can track that over time. For things like latency, or the average or response time, you want to bucketize the response times and look at them from a histogram perspective. What you do here is that each bucket is nothing but the time taken, in seconds or minutes, for a particular workflow to complete; the service has a lot of workflows, and among those, you count how many fell into each of these duration ranges. And then, looking at it from more of a product owner lens, like we talked about, there are the active users, the active repositories it's maintaining. Kebechet currently maintains approximately 200 repositories, I think, where it's observing and maintaining all these Python dependencies, and so
on. There's also the number of stacks it's maintaining. And like I said, when you're thinking about data-driven development, you want to make decisions quickly; you want to go as far back as you can, drill down, zoom in and zoom out, and that's what you can do with the help of your dashboard. You can go back and ask: how did my metrics look fifteen minutes ago compared to a week ago? How did they behave? That's one part of the approach of drilling down into the different granularity levels of the metrics you finally want to end up with. The other visualization tool I was talking about is Superset. Some of my colleagues in the previous session walked through an example where they were tracking metrics for GitHub projects, and another colleague of mine will be presenting tomorrow, looking more closely at the community health aspect of metrics. Similarly, we tried to do this for one of our teams, one of our initiatives; I'll pull up the dashboard if it comes up. It has some errors; oh, I should log in, okay, thank you. While it loads and logs in: Superset here is another instance that we've deployed on our Operate First cluster, hosted on OpenShift. Superset, again, is a BI tool, so you run SQL queries and create interactive charts and dashboards. Let me give it one last try; if not, it's fine. Yeah, so this is how the UI looks. You can see a list of dashboards here; you have options to connect databases, connect various data sets, and upload CSVs. You can also explore SQL queries, so you can open up a tab to run queries and look at past and saved queries and so on. And these are some dashboards our team created. It's intuitive, like any other dashboard you'd work with; it has a bunch of ways to
You can look at the metrics you want to track. For a GitHub project, for example, if you're interested in a particular repository or a particular organization within GitHub, you can drill down. Here are some repositories our small team manages, so we wanted to look at some of them and understand a few key metrics that track the behavior and activity in each repository. Some examples would be the number of open issues, the number of closed issues, the mean time to close issues, and the ratio of open to closed issues, that is, of all the issues that were opened, how many were actually closed out. Then, at a higher level, there are things like who the main contributors to the repository are, how many PRs we were creating over time, and the different types of activity that go on in the repository: events like how many people commented on an issue, reviewed an issue, opened a pull request, or reviewed a pull request. These are all GitHub events captured inside any GitHub repository.

We also wanted to see the contributions made by each team member. Like I said at the beginning, you want to provide insights not just to senior leadership but also to the ICs: you want to know what work you're doing and where you're contributing the most. So we have each of our GitHub handles here, not to pinpoint anybody, but to keep things transparent and accountable in terms of who's working on what project and making what contributions where, so that people also feel encouraged to take on more projects as they come up. That was for just one repository, but if we unfilter it, you should see a wider distribution across the different organizations and repositories they're working on.
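The repository metrics listed above (open versus closed issue counts, mean time to close, and the open-to-closed ratio) are straightforward to compute once you have the issue data. Here is a minimal sketch over hand-made issue records; the field names loosely mimic, but are not guaranteed to match, what the GitHub API returns:

```python
# Hypothetical sketch: computing a few repository health metrics from
# issue records. Field names loosely follow the GitHub API; the data
# is fabricated for illustration.
from datetime import datetime

issues = [
    {"state": "closed",
     "created_at": datetime(2022, 6, 1), "closed_at": datetime(2022, 6, 3)},
    {"state": "closed",
     "created_at": datetime(2022, 6, 2), "closed_at": datetime(2022, 6, 8)},
    {"state": "open",
     "created_at": datetime(2022, 6, 5), "closed_at": None},
]

def issue_metrics(issues):
    closed = [i for i in issues if i["state"] == "closed"]
    open_ = [i for i in issues if i["state"] == "open"]
    # Mean time to close, in days, over closed issues only.
    ttc = [(i["closed_at"] - i["created_at"]).days for i in closed]
    mean_ttc = sum(ttc) / len(ttc) if ttc else None
    return {
        "open": len(open_),
        "closed": len(closed),
        "mean_days_to_close": mean_ttc,
        "open_to_closed_ratio": len(open_) / len(closed) if closed else None,
    }

print(issue_metrics(issues))
# → {'open': 1, 'closed': 2, 'mean_days_to_close': 4.0,
#    'open_to_closed_ratio': 0.5}
```

In practice you would feed this from the GitHub events you collect, but the aggregation step itself is no more complicated than this.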
For every user, you should see a bucketization over the different SIGs. That's how we categorized it: we have different special interest groups, and based on those you can see which contributors are working on which type of SIG, and so on. As you can see, these two visualization tools serve very different use cases: Grafana is more operational, while Superset is more high level and project-owner specific, and it's easier to aggregate that kind of data inside Superset. So that concludes the demo.

Lastly, I just want to briefly mention Operate First. Operate First is what we used to build most of these things: all the dashboards, the Superset instance, the notebooks for exploration, and the storage and databases we're managing are all hosted on Operate First. You might have heard about it in some of the earlier sessions at this conference, but the concept is that we wanted to create an open source community cloud environment, so that we can open up the operations and show how you can run your software or services in a production-grade environment. Typically, when users get any software or service, it has gone through a bunch of CI/CD checks; in the topmost part of the diagram here, we see that whenever users face issues with the software they have, they usually try to reach out to the upstream development teams, or they work with vendors who are more closely associated with these upstream development communities. The Operate First community cloud allows users to play with the different versions of the software you're providing and test them against this community cloud environment, and it also allows them to mirror some of the way we are operating and look at
the dashboards and metrics we're building, and not just learn from the community cloud but also contribute back to it. They can of course still contribute upstream, they still have all the access to do that, but this is an easier way for them to get faster access to very much a production-grade environment that they can test against and use to influence some of the work we eventually want to integrate into all the applications and services. If you want to learn more about how you can be part of this initiative within Red Hat, we also have a website you can go to. It has a lot of resources: a Slack community, a YouTube playlist with a lot of demos from the people who have worked on this project, and a very active community that's trying to improve the various services we provide, enabling more users to replicate this kind of data-driven operation and open up these practices and the best ways to monitor your applications and services.

With that, these are some references for some of the material I covered: Operate First, the Kebechet application that manages all the Python dependencies, and the repository where we track all these metrics and build the standalone Jupyter Book rendering. These are some URLs to get started with all of this work. Thank you for attending my talk; feel free to connect if you have any questions, but I'm happy to answer anything now as well. Any questions? If not, thank you all for attending one of the last sessions for today, and I hope you enjoy the reception if you're attending it. Thank you!