Well, Argo is enabling us to secure Elastic. I'm Angel Reels, the team lead for the security engineering team at Elastic. I'm based in Massachusetts, and I've been at Elastic for three and a half years. Prior to Elastic, I worked in financial services, where I was responsible for security operations, threat intelligence, and identity and access management. I have a bachelor's in information technology.

Hey folks, I'm Chris Coutaier. I'm based in Malta. I've been a security engineer with Elastic for the past two and a half years. Prior to Elastic, I worked in InfoSec in the payments industry.

So we're going to go through what we do at Elastic. We'll give you a high-level overview of the InfoSec environment and the data sources we feed into our platforms, but most importantly the Argo story: our journey, how we found Argo, how we're using it today, and where we're going. We'll deep dive into some of our use cases, give you a quick walkthrough of how we've developed the internal community and how Argo is being used and expanded across Elastic, and cover how all of us can continue supporting the Argo project and joining the community.

For background, the security engineering team is responsible for InfoSec's infrastructure and services. We host everything in Kubernetes. Our main job is to keep the services up and build infrastructure, but also to give the detection team visibility into what is happening in our environment. We do this by using our Beats and connectors to feed data into the stack, so our detection team can build those detections. Most importantly, we are also the main feedback loop to the product team. We test build candidates before they're released, so we can use those features in a real-life setting. We use real production data, we make sure the product hits scale and the features do what they're supposed to do, so hopefully we find the issues before our customers do. And we provide feedback on how we can make the product better, and try to get new features onto the roadmap that customers could eventually use.

Our InfoSec organization is made up of several different teams, as most InfoSec organizations are. Our main goal is to make our data completely transparent and allow other folks to leverage everything we collect. We don't want to operate in a vacuum; we're trying to decentralize security, because it's everyone's responsibility to protect an organization. So we take the high-level roadmaps and plans of each of the teams, and it's all built into the stack. The stack is at the heart of everything we do, and we're going to give you a quick example of how we do that.

For folks who are not familiar with the Elastic Stack: at the core of everything we do is Elasticsearch, which stores your data and allows you to search it. Below that, as I mentioned earlier, we have many connectors and integrations to get your data into Elasticsearch: Filebeat, Metricbeat, Auditbeat, et cetera. There are several of them; go to our page and you'll see all the various ways to get data in. On top of Elasticsearch you have Kibana, which allows you to explore and visualize your data and take action on it. The beauty of the stack is that you have one central platform with multiple solutions, Security, Observability, and Enterprise Search, that let you slice and dice your data as you need. And it's all already in Elasticsearch, so it's really powerful to just leverage the stack natively.
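To make the ingestion side concrete, here's a minimal sketch of what shipping data into Elasticsearch with one of those Beats looks like. This is a generic Filebeat example, not our actual configuration; the host, paths, and credential names are hypothetical placeholders.

```yaml
# Minimal Filebeat sketch: ship local logs into Elasticsearch.
# Host, paths, and the API key variable are made up for illustration.
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log

output.elasticsearch:
  hosts: ["https://elasticsearch.internal.example:9200"]
  api_key: "${FILEBEAT_API_KEY}"   # expanded from the environment
```

Kibana then sits on top of the same cluster to search and visualize whatever the Beats deliver.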
So, as an InfoSec team, do we have challenges? I think all of us are familiar with that. We are globally distributed by design: we have over 3,000 Elasticians in 42 different countries. So one of our challenges is that we don't have a central network that everyone connects into. We need to gather all the different data sources so we can make sure the folks connecting in are the appropriate Elasticians, and we have to collect the various data sources from the different cloud providers and feed them into the stack. So we have quite a bit of data feeding into our stack, and our detection team ensures they're taking action on anything that might be malicious.

As you can see here, our operating procedure is to run dedicated Elasticsearch clusters for different workloads. We try to keep it that way to make it a little easier to scale and to keep track of which cluster is which. But the beauty of Elasticsearch is a feature called cross-cluster search, which lets one Elasticsearch cluster act as the main search cluster: it can connect into all the various Elasticsearch clusters you may have and return the data. Our detection team uses this to build all the detections. We also use it for all of our metrics, because it can reach every individual cluster. You no longer have to go into each cluster to search, say, Heartbeat data; you can go to one place and search it all.

So here is how we used to maintain that environment. Behind the scenes, as I mentioned, everything is in Kubernetes, and we have eight different clusters. In the past, we would go into each cluster, deploy the values file using Helm, and kick it off, eight times over: rinse and repeat. The issue was that if problems came up, we would have to go back into that cluster. And I've never done this myself, but I've heard of someone deploying the wrong configuration to the wrong cluster, so we built guardrails to protect against that. The beauty of Argo is that instead of going into the individual clusters to deploy, we developed a workflow to deploy changes through GitHub. Someone from the security engineering team reviews the change and merges it, Argo automatically knows when a project is out of sync, we can easily visualize that in Argo CD and kick off the sync, and Argo takes action and upgrades our environment.

As I mentioned earlier, we're constantly upgrading. A lot of what we do is upgrades and configuration changes, because we're constantly testing our products early on, which can lead to some unexpected behaviors. The beauty of having Argo is that we know when those issues occur, and we can easily go in and isolate them, resolve them, or rebuild a cluster; rebuilding a cluster is very simple with this deployment methodology. We have Argo alerts, in that back corner over there, telling us what action was taken and whether the deployment synced correctly or failed. Those alerts, along with alerts from Kibana (we still use the stack as an observability tool), all go into the Slack channel where we monitor them and take action.
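To illustrate the GitOps flow just described, here's a minimal sketch of what one per-cluster Argo CD Application could look like, with a Slack subscription for sync notifications. All names, repos, and URLs below are hypothetical, and note the absence of an automated sync policy, since the sync button is still pressed by a human.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: elasticsearch-cluster-1            # hypothetical name
  namespace: argocd
  annotations:
    # Route sync results to the Slack channel we monitor (channel name is made up).
    notifications.argoproj.io/subscribe.on-sync-failed.slack: infosec-argo-alerts
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: infosec-argo-alerts
spec:
  project: infosec
  source:
    repoURL: https://github.com/example-org/infosec-k8s.git   # hypothetical repo
    targetRevision: main
    path: charts/elasticsearch
    helm:
      valueFiles:
        - values/cluster-1.yaml            # per-cluster Helm values, reviewed via PR
  destination:
    server: https://cluster-1.k8s.internal.example            # hypothetical API server
    namespace: elastic
  # No syncPolicy.automated: a human still reviews the diff and presses Sync.
```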
So this is our Argo journey. We started off with an initiative called self-healing infrastructure. The main goal was to take high-fidelity alerts generated out of Kibana, alerts telling us that something was wrong, and have a mechanism automatically take action. A good example: we had a shard that went from green to yellow. If we could have a mechanism automatically go in, run a sync or an adjustment, and fix that shard, why not do that instead of manually running an API script? So we started down that path. We realized that Argo had the ability to manage Kubernetes resources, and also the ability to take commands from a webhook and act on them. So we started experimenting with Argo Events. We resolved the case I was just talking about, the unhealthy shard, but we took it a step further and asked: could it take action on other resources?

Next thing you know, we started to see if Argo could help us with our certificate management, where we had expiring certificates. Since we're running Elastic Cloud on Kubernetes, our certificates are handled automatically, but when something was about to expire, it used to be a manual effort for us to kick off the renewal. We decided to use Argo, moved that to production, and it started handling and fixing some of our production workloads. Then we thought: since it can take action on our resources, why not let Argo help us with all of our upgrades and configurations? So we made a change to have Argo kick off upgrades as we merge code. I'll take that back: it's not fully automatic. There's still a sync step, because we haven't had the faith yet to let Argo kick it off itself; that's on our project roadmap, as we'll discuss a little later. But Argo is managing all of our clusters, it tells us whether they're in sync or out of sync, and we're getting a lot of Argo notifications about the actions it's taking. And as I just revealed, we're using Argo Notifications; in combination with Kibana alerts, this gives us full visibility into what's happening in our environment. So I'm going to turn it over to Chris, who is going to do a deep dive on some of the use cases I mentioned above.

Hey folks. Here we're going to do a deep dive on different use cases, using different parts of the Argo suite. The first one, as Angel mentioned, is the cross-cluster search certificates. Our cross-cluster search configuration is very important to us, especially for the incident analysts, who get one single place to go into and search across clusters rather than having to go into different clusters. So it's important for us to keep it available 24/7, 365 days a year. And on a yearly, or sometimes more frequent, basis, there is a certificate that needs to be refreshed for the head cluster to stay connected to the remote clusters. All our endpoints are monitored using Heartbeat and Kibana Uptime, and prior to Argo we had an alert whereby, once the certificate was going to expire within 15 days, we'd get a Slack notification saying: hey folks, your certificate is going to expire within 15 days, please update. It would also create a GitHub issue for us, so we could take that action into our sprint.
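For context on why that certificate matters, the cross-cluster search setup looks roughly like this on the search-head cluster. The aliases and hosts below are made up; the TLS trust between the head cluster and the remotes is what the expiring certificate secures.

```yaml
# elasticsearch.yml on the search-head cluster (aliases and hosts hypothetical).
cluster:
  remote:
    detections:
      seeds:
        - detections-es.internal.example:9300
    audit:
      seeds:
        - audit-es.internal.example:9300
```

An analyst can then query remote data as `detections:logs-*` or `audit:heartbeat-*` from the one head cluster instead of hopping between deployments.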
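The automation Chris describes next leans on that webhook-driven side of Argo, which is Argo Events: an EventSource exposes the webhook that Kibana (or, later, Tines) calls, and a Sensor reacts by submitting a workflow. A minimal sketch, with every name hypothetical:

```yaml
# EventSource: exposes an HTTP endpoint that the Kibana alert can call.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: kibana-alerts                   # hypothetical
spec:
  webhook:
    cert-expiry:
      port: "12000"
      endpoint: /cert-expiry
      method: POST
---
# Sensor: when the webhook fires, submit a renewal workflow.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: ccs-cert-renewal                # hypothetical
spec:
  dependencies:
    - name: cert-expiry-dep
      eventSourceName: kibana-alerts
      eventName: cert-expiry
  triggers:
    - template:
        name: renew-cert
        argoWorkflow:
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: renew-ccs-cert-
              spec:
                workflowTemplateRef:
                  name: renew-ccs-cert  # hypothetical WorkflowTemplate that rotates the cert
```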
So what we've done, and what Argo gives us the flexibility to do, is automate the whole creation and upgrading of this certificate. Using an Argo workflow and an event, once the certificate is about to expire, Kibana triggers the Slack alert but also triggers Argo, which in parallel creates the new certificate and updates all the clusters with it. On top of that, we went a step further: we also integrated Tines, a no-code automation platform, and created a bot we call Kenji bot. Very creative. With a basic slash command we can call the Tines bot, and Tines triggers the same Argo webhook that the Kibana alert triggers. So now we have two places from which we can trigger this, and we only need the Argo Workflows UI to watch the status lights, make sure the workflow ran successfully, and, if not, see what went wrong.

Moving on to the next use case: within InfoSec we're minded to be as transparent as possible. One use case that everyone who works in InfoSec has heard of is the customer security questionnaire: a questionnaire with various security questions that a prospective client sends to an organization. What we've started to do is make all our questions and answers as accessible as possible across Elastic. So we have an internal search service that makes this data available to all Elastic folks, and behind the scenes it's all managed using Argo to keep it up to date.

The next slide shows how it works. Basically, everything is in GitHub. Once someone in InfoSec creates a PR and it's merged, Argo is triggered via a GitHub webhook. Argo then downloads the latest version from GitHub and, via a Ruby script built around the repository published by the CSA, the Cloud Security Alliance, automatically pushes the new content to Elasticsearch and also to App Search, which powers the service we described on the previous slide. On top of that, we're using the CAIQ Excel sheet, also published by the CSA, to be a little more proactive: once the data is updated, Argo creates and updates the Excel sheet via Ruby, pushes it to a cloud storage bucket, and refreshes the self-service compliance portal where we publish that Excel sheet for all Elastic sales folks to share with prospective clients.

The next, and I think the most important, use case for us is Argo CD for Elasticsearch deployments and configuration upgrades. Again, as Angel mentioned, we're not going into the detail of how Argo CD works: everything is public, the documentation is very thorough, and it wouldn't be useful for us to do a deep dive here. Basically, the process is that a security engineer raises a PR, which includes either an upgrade or a config change for a cluster; that PR is reviewed by a second engineer; and once it's merged, Argo notices there's a new version of the configuration. At this point, for any config changes or upgrades within the clusters, we still push the sync button. We have a side project that's using automated sync, which works impressively, but for now, for this critical infrastructure, we're still syncing manually. And once we trigger the sync, it fires an Argo notification to our Slack.
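All of those per-cluster Applications, and the Slack notification routing, come out of a single ApplicationSet, which Chris picks up next. Here's a minimal sketch using a list generator; the cluster names and URLs are made up, and in practice the cluster list, IPs, and Helm values all live in GitHub as he describes. Each generated Application looks like the single-cluster sketch shown earlier, so adding a ninth cluster becomes one more list entry in Git.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: elasticsearch-clusters
  namespace: argocd
spec:
  generators:
    - list:
        elements:                       # one entry per cluster (values are hypothetical)
          - cluster: security-prod
            url: https://sec-prod.k8s.internal.example
          - cluster: detections-prod
            url: https://det-prod.k8s.internal.example
  template:
    metadata:
      name: 'elasticsearch-{{cluster}}'
    spec:
      project: infosec
      source:
        repoURL: https://github.com/example-org/infosec-k8s.git
        targetRevision: main
        path: charts/elasticsearch
        helm:
          valueFiles:
            - 'values/{{cluster}}.yaml'  # per-cluster values file in the same repo
      destination:
        server: '{{url}}'
        namespace: elastic
```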
All of this is driven by an ApplicationSet. Basically, all our configuration is again in GitHub, from the cluster definitions, IPs, names, et cetera, to the Helm charts. Within this ApplicationSet we also specify which Argo notifications we want sent to our Slack channel. We're not using the full-fledged alerts, because Kibana already provides quite a bit of monitoring data for us, but we wanted a little more, and Argo CD gives us the opportunity to get full visibility of the infrastructure. And the final Helm chart deployment, again, everything is within GitHub. The added benefit is that where we previously had to go into each cluster to check the logs, we now check everything within the Argo CD UI: for all clusters, we check the logs, debug when something goes wrong, and do everything within Argo CD. It's a one-stop shop for us.

The next two use cases, again, are Argo Notifications, which fill the gap in the monitoring data that Kibana doesn't provide. We're now also monitoring Argo itself, because we noticed that sometimes Argo is under-provisioned and doesn't keep up with all the changes, and Argo notifications let us notice that too. So these are some of the use cases that we have.

Within Elastic we also have a process whereby, on a weekly basis, each team provides feedback or an overview of what happened over the week, not just within InfoSec but across teams, which again reflects the full transparency we work with at Elastic. Over the weeks that we kept mentioning Argo CD and Argo Events, one of the biggest teams within Elastic, cloud engineering, started to reach out to us: hey folks, what do you think about Argo and Argo CD? How did you implement it, et cetera. Out of that collaboration between our two teams, and other teams within Elastic, we started the Argo CD Project Guild, an internal community where on a monthly basis we meet up, share ideas, share knowledge, and share what we've done, what we're doing, and feedback, to expand our Argo CD knowledge internally, especially with cloud engineering, which has a much bigger roadmap for expanding their Argo usage.

Yeah, so as Chris mentioned, it was great to partner with the cloud engineering team. They're going to start leveraging this, and they have a lot more infrastructure to deal with than we do, so it will be good to get their ideas and see how they're using it. That's going to be part of our future roadmap: working together, understanding how they're using Argo, and continuing to build on those use cases.

But anyone familiar with Kubernetes probably knows that secrets are not the easiest thing to manage. So one of the things we're trying to do is integrate the Vault plugin, so we can automate that process, because right now it's quite tedious: we're still using a manual Makefile process to manage secrets. Another important thing we want to do is continue building out our Slack bot. That's going to help us with a lot more of the automated tasks; we want to be able to just go into Slack, say take X action, and have Argo do it for us. A good example is what Chris mentioned earlier: saying delete this certificate, and having it deployed to all the clusters automatically.
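On the Vault piece of that roadmap: the usual pattern with the Argo CD Vault Plugin is to keep only placeholders in Git and have the plugin substitute real values from Vault at render time. A hypothetical sketch, with made-up paths and key names:

```yaml
# Secret manifest stored in Git: no real secret material, only placeholders.
# The avp.kubernetes.io/path annotation and the <password> placeholder follow
# the argocd-vault-plugin convention; the path and key here are invented.
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-credentials
  annotations:
    avp.kubernetes.io/path: "secret/data/infosec/elasticsearch"
type: Opaque
stringData:
  elastic-password: <password>
```

At sync time the plugin would replace `<password>` with the value of the `password` key at that Vault path, which is the sort of thing that could retire the manual Makefile step.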
We have other use cases that we're going to start using Argo for, so we're going to continue building on that. And I think most importantly, once we do additional testing and build up our comfort with automatic syncs, we're going to enable them, make sure we have the monitoring in place to catch any issues, and tie it back to our self-healing infrastructure: if there are issues, hopefully we can have Argo remediate them. And a big one for us, too, is the Argo community. It's great: when we were building some Sensors, folks chimed in and gave us immediate support, and we can't thank everyone enough for that. I'm hoping we can keep that going and continue to contribute. So join the channel, raise issues, and submit PRs. Let's make Argo successful, because it surely has made us successful, and we're very thankful for that.

But yeah, that completes our session. If there are any questions, we'd be happy to take them. If we don't get to your question and you want to reach out to us, feel free: our contacts are up here. And again, thank you for spending 30 minutes with us, and thank you for your time. Thank you, folks.