In the last few months, in terms of Google Cloud, we have added support for Cloud Run. That was one thing. And for Azure, there were some baking improvements. Typically, when you deploy to a VM, there is a bake and then a deploy. In Azure, there was a limitation that you had to have a bake stage in the pipeline for it to deploy; you couldn't deploy with a pre-existing bake. There were also some performance improvements. So those were some of the big changes in terms of Google Cloud and Azure. For AWS, launch templates were another thing that was implemented. With launch templates, you can have a combination of spot instances and on-demand instances together. So those were some of the changes.

And which version of Spinnaker are you currently using?

Which version of Spinnaker? The latest one is 1.30. We are on 1.29.

1.29. Yeah. Oh, okay. Yeah, so I think David was the one who was driving the releases, the builds and releases. Matt, please introduce yourself.

Yeah, sure. I'm Matt. I'm on the TOC for Spinnaker, working at JPMC. So a mixture of PCF, Cloud Foundry, Kubernetes on-prem and public cloud, starting to move towards ECS as well, as well as Lambda.

And one of the changes Matt's been working on, for people who target AWS: there have been some plugins for deploying to Lambdas, and he is doing the shift to get those into the main code base, so they're easier for people to deploy. Plugins are a good mechanism for lots of things, but for things that are pretty fundamental to the functionality, we just decided it was easier to have it in the code base. It's easier to maintain, and we're moving in a more monorepo direction anyway. So it's true that Spinnaker, being multi-cloud, can bloat pretty easily, but I don't know, all the cool kids are doing Lambdas. Lambdas are a thing that people want.
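For context on what launch templates enable here: in AWS, a launch template plus an auto scaling group's MixedInstancesPolicy lets you set an on-demand base capacity and a percentage of the remaining capacity that stays on-demand, with spot instances filling the rest. A minimal sketch of that arithmetic, assuming those semantics; the function name and signature are illustrative, not Spinnaker's API:

```python
def split_capacity(desired: int, on_demand_base: int, on_demand_pct_above_base: int):
    """Mirror the MixedInstancesPolicy arithmetic: a fixed on-demand base,
    then a percentage of the remainder on-demand, the rest as spot."""
    above_base = max(desired - on_demand_base, 0)
    on_demand_above = round(above_base * on_demand_pct_above_base / 100)
    on_demand = min(on_demand_base, desired) + on_demand_above
    spot = desired - on_demand
    return on_demand, spot

# 10 instances, a base of 2 on-demand, 25% of the remaining 8 on-demand:
print(split_capacity(10, 2, 25))  # → (4, 6)
```

The point of the feature in the AWS provider is exactly this blend: steady on-demand capacity for the baseline, cheaper spot capacity for the burst.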
So we figured we would try to make it easier. Anyway, I have some pull requests to review.

But it's great that it's going to get easier. Is there a time frame you're expecting for that Lambda plugin release?

Yeah, so it's in kind of a weird state right now, where all the deployment logic is already in Clouddriver, and the plugin literally just adds some stages that use that code. So I've got pull requests open for moving the stuff over, just a lift and shift with some minor code-smell fixes. There's some pretty weird stuff it's doing, like forming Clouddriver URLs by hand and calling them with its own OkHttp clients, which is a bit painful. I'm not sure I want to refactor all of that at the same time as moving it over, in case I break it. So that's a debate I'm having right now: whether to fix it before it goes in, or get it in and then commit to fixing it over the next couple of weeks or months. But it's pretty substantial, the amount of refactoring that would have to be done.

I'm totally fine to move it and fix it later, for what it's worth. Have you run into anything, something you wish you could do, something you could target with Google Cloud that Spinnaker doesn't do, or an AWS or Kubernetes function where you feel you could use a new feature, or where you've run into a bug, something like that?

For now, we are interested in the Cloud Run target, but we haven't done anything with it yet. Just GCE and GKE so far, of course.

Yeah, there have been quite a few requests related to Cloud Run recently, so I think OpsMx have been working on that. So I guess that's coming soon, if not already released. I think the base stuff got into 1.30, and there are some more PRs that didn't make it, which will probably be in 1.31. I don't actually know the date. You might know better than me, but it's in the next month or so.

Yeah, the changes for GCP would be coming next month.
To go back to your original question about the timeframe: for Lambda, the pull requests are open, so...

Oh, as soon as they get reviewed?

So for Lambda, in the 1.31 release you'd then have Lambda and the rest of the changes. We also usually go through the list of open issues on GitHub and see which ones we need to prioritize, things like that. I don't know if you want to do that here, but are there any issues or concerns you have with respect to the functionality you are seeing now?

We tried to use a secrets engine, Google Cloud Secret Manager, to store the values as Clouddriver secrets. We faced some issues, but we didn't dig into the GCP side there.

So Clouddriver Secret Manager support was part of 1.31. You're saying you tried it and you had some issues?

Yeah.

So did you raise tickets for the issues?

I see that there is an open ticket.

Oh, there is. But just for AWS Secrets Manager, I think.

I see. Yeah, okay. Yeah, I think we have to run through the GitHub issues. The last time we looked under the GCE label it wasn't there, but later on I found that some of these are not labeled correctly. We had to go through those issues and label them properly so we can identify which are the Google issues and assign them to the right owners.

Spring Boot upgrades?

Yes-ish. So I have a talk tomorrow. There's a TOC panel; tomorrow we'll all be on that panel talking more generally about what's been going on, and then I have a talk tomorrow afternoon about some specific stuff we've done at Salesforce. And yeah, we're pushing to do a lot of boring stuff. We've been working on Spring Boot upgrades, we're working on a Gradle and Groovy upgrade, and then we're going to maybe get to Java 17. I want to talk to Matt about that, because maybe we can coordinate.
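For reference on the secret-store integration being discussed: Spinnaker secret references follow an `encrypted:<engine>!<param>:<value>!...` pattern that Clouddriver and the other services resolve at load time. A minimal parser sketch, assuming that format; the engine name matches the AWS Secrets Manager style, and the region and secret name are made up:

```python
def parse_secret_ref(ref: str):
    """Parse a Spinnaker-style secret reference of the form
    encrypted:<engine>!<key>:<value>!<key>:<value>...
    Returns (engine, params) or None if it is not a secret reference."""
    prefix = "encrypted:"
    if not ref.startswith(prefix):
        return None
    engine, *parts = ref[len(prefix):].split("!")
    params = dict(p.split(":", 1) for p in parts)
    return engine, params

print(parse_secret_ref("encrypted:secrets-manager!r:us-west-2!s:my-db-password"))
# → ('secrets-manager', {'r': 'us-west-2', 's': 'my-db-password'})
```

The idea is that a config file never holds the plaintext value, only a reference like the one above, and the service resolves it against the configured secret store.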
We've done a bunch of stuff internally at Salesforce that I just haven't had time to push upstream, including a bunch of improvements or new features to the AWS provider that I feel like there have been open issues about for a million years, and other stuff about launch templates. And unfortunately it's sitting in a stack of about 100 PRs, and it's a lot easier to get them up in order as far as conflicts and such. And I've just been stuck: every week or month, it feels like every day, we have more customers, which is great, but then we hit some other scale issue. So that's also part of what I'm going to talk about tomorrow. So if people are using, or want to be using, launch templates with the AWS provider... I mean, it's maybe not the best way to manage my task list, but if enough people say, hey, I really need this thing, and I know we have it internally and it's just a matter of moving it over and making the PR, even if it's out of order, then if people talk to me about it and it's something we've already done internally, that'll motivate me to get it out there.

Yeah, that's a good thing, though, right? You have a lot of changes that you haven't upstreamed yet.

Yeah, I wish we didn't. I mean, I gave a talk at the last summit about some cool stuff about targeting GovCloud, the GovCloud partitions in AWS, and there are other people making PRs about FIPS compliance of this or that, and anyway, we did it already, but I haven't had a chance to get it upstream. And this conference, it's the next year, it's 2023, but it's only six months after the previous one. Maybe the next one will actually be a year from now and I'll get to catch up a little bit.

You mentioned FIPS compliance. So one of the issues with the current Clouddriver with AWS is that it's using AWS SDK v1.

Yes.

And for true FIPS compliance, it has to use v2, because v2 has some additional things in there.
You know, I guess I should start by saying I am not a FIPS compliance expert. What I think I know is that at least part of FIPS compliance is hitting the right endpoints: you can go to these AWS web pages and they list the regular URL and the FIPS URL, across a bunch of different regions and a bunch of different AWS services. And what I think I know is that the code we have in Clouddriver at Salesforce is hitting the FIPS URLs. And we're also still using the same version of the AWS SDK as upstream; it's 1.12.whatever, 1.76 I think is what it is today. So the fact that we're hitting those URLs is a big part of it. And the fact that the communication actually works means we're using the right ciphers or encryption algorithms that those endpoints accept. So I don't know what anybody else wants; I thought that was it, but again, I'm not an expert, so I don't know. And I will say that we have a very special little hunk of code, actually written by an AWS support person, that builds those URLs. It's not rocket science, but instead of being ec2.aws.com or whatever, it's ec2-fips.aws.com. That's of course not the real thing, but you get the idea. So it might be that to get those URLs out of the box with the AWS SDK, you do need to upgrade to something newer. We found another way.

So that is the main one. In AWS SDK v2, there's a flag for FIPS, and it ensures that that's the only...

Right, so in v1 you have to do it with your own two hands, and now there's a Spinnaker config flag, and I want to say we did it per account. So you get to decide: if you're targeting this account and you want it to use FIPS endpoints, you say use FIPS endpoints, true, and then it uses them. And there's another little flag that I added, something like log endpoints, and you set that to true, and then you can actually see in the log: hey, it's really working, check it out.
Because otherwise, right, I'm also not on the audit team at Salesforce. So somebody who wants to go to our AWS panels, you know, dashboards, CloudWatch, whatever, and prove that we're really using all the right URLs all the time: that's not how I spend my day. But if I can look at a log message in Spinnaker and match it up with an AWS web page that says this is the FIPS endpoint, then I get to check a box and move on to the next thing.

So, all right, people are interested. Good to know.

Yeah, the ones who are catering to the federal government absolutely have to have some of these things.

Right.

Hey, I have a bunch of open issues here. One of the things we've been trying to do is triage these issues and put them in the right groups, so at least we know which cloud environment is having a problem. And we know Cloud Run is something that we are doing; the AWS stuff you guys can try out quickly. And I think the Apple folks are also doing AWS, so you guys are as well. And I think they've been more involved with the secrets stuff too. I don't know, does somebody from Apple usually come to the regular cloud SIG meetings?

Yeah, it's Ben, actually.

Yeah, so I don't know if Ben has been writing that code with his own two hands, but somebody on Ben's team has. I don't know if 10 a.m. Pacific time is a time you guys could meet. It's a regular meeting; the Spinnaker governance repo has a Google calendar with all the SIG meetings, and the cloud SIG is on there. Anybody can join whenever they want. And there are some SIG-related channels in Spinnaker Slack. I wouldn't say they're super high traffic, but if you have a question, somebody will notice. And if nothing else, I'll probably notice and I'll ping Ben so that he sees it if he doesn't see it himself. And Ben's boss is also here at the conference; you should hit him up. His name's Dadecey. He's a good guy.
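The FIPS endpoint naming described above can be sketched like this. The real AWS hostnames are region-qualified, e.g. ec2-fips.us-east-1.amazonaws.com (a few services deviate from this pattern), and the `use_fips` flag here is illustrative of the per-account Spinnaker setting, not its actual config key:

```python
def resolve_endpoint(service: str, region: str, use_fips: bool = False) -> str:
    """Build a regional AWS endpoint, swapping in the -fips host
    when the (per-account, in Spinnaker's case) flag is set."""
    host = f"{service}-fips" if use_fips else service
    return f"https://{host}.{region}.amazonaws.com"

print(resolve_endpoint("ec2", "us-gov-west-1", use_fips=True))
# → https://ec2-fips.us-gov-west-1.amazonaws.com
```

Pairing this with a log-the-endpoint flag, as described above, is what lets you match a log line against AWS's published FIPS endpoint list and check the box.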
That's actually all I wanted to do.

Can we do that in the cloud SIG, if you don't want to go through the details here? It's a lot of issues, I think. If you're a regular there, then that makes sense; for a summit meeting, this makes sense.

Yeah, I mean, this is probably something to do offline, but there is somebody from AWS, an actual AWS employee, who has spent a bunch of time in the code base for the AWS provider. She came back from parental leave not too long ago and was like, hey, David, are there any issues I need to work on? And I pointed her at a couple, but I don't know them all. So if there are some AWS issues, send them to me or send them to her. I don't know if you guys know Pratibaba. Anyway, I can connect you.

Yes, the issue with the user data when cloning a server group, that one is still open. She was working on that.

Yeah, I think it might have gotten auto-closed, but...

It reopened.

Oh, good. I've tried to avoid diving into that because it's complicated. But it's great if she's looking at it. And if she's not, maybe...

I mean, she's back, she's looking at issues. I don't know if she is looking at that specific one. She might need a reminder.

I do think that as we get more Spinnaker users who use these other clouds and platforms, we'll identify more opportunities for integration points, adding more cloud providers or improving what is there. It is hard for us as a community: if we're not using it, it's hard to understand where the pain points are. And one thing I've seen in the past is folks will come up with workarounds for their organization. And that's great, because you get your work done. But it's good to pass that back to the community, to say, hey, this tool didn't do this thing, so I had to work around it in this way. That's a really good insight for us, because if everyone's trying the same workaround, maybe we should just fix it in the product. Fix it in the project, and it's a win for everyone.
I guess one other plug: at the end of it all, we're software developers and we're writing Java code for a living anyway, so if there's a bug in Spinnaker, it's just some more JVM code, at least: Java, Kotlin, Groovy, something. So it's maybe a little intimidating to dive in the first time, and certainly if you're trying to debug a UI issue, then you're also in the universe of JavaScript in addition to whatever's happening in the backend. But IntelliJ is a pretty powerful tool; you can click around and find out what's happening, typically fairly quickly. And I guess this is just the reality of Spinnaker: it's hard because it's got its little tentacles into all these different parts of people's systems, their internal Docker registries, their credentials for Kubernetes clusters, their LDAP servers for identity. So if you have a problem, it could be that the cosmic rays are all aligned and it happens only to you, and it's very difficult for somebody in a different environment to reproduce it. So I know that it's hard to ask for help and hard to get help, and sometimes it is actually easier to help yourself. And I'm happy to talk to people about how to set things up on their machine, or what I do every day. I never run Spinnaker on my local machine. I never run all of Spinnaker on my local machine. I never run even one Spinnaker service on my local machine. I never do it. I figure out everything I need to figure out some other way. So people worry about having big expensive machines, and even if you are just running the automated tests for Clouddriver, it takes a lot of juice. So I won't say that you can do it on a 1984 Mac, but it doesn't have to be that bad. You could be a Spinnaker developer tomorrow, just like on any other Java project.

There's a presentation on Spinnaker's YouTube where I go over a plugin workshop.
And as part of that workshop, I'm running, I forget which service, but I'm running a service locally. And I show how to use Telepresence to run Spinnaker in a cluster and then run your one service locally, and you can hit debug breakpoints on your local machine while using the Spinnaker UI. It extends even to your plugin's code; you can debug into that as well. It's pretty slick. I should probably write up how to do that and post it on the Spinnaker blog as step-by-step instructions, so it's a bit more consumable than a two-hour video. I'll take that as a to-do item.

Yeah, about some of the issues: recently there was one that said, hey, I've upgraded to 1.30, I'm getting a 400 error from Gate on pipeline save, and they attached a log from Gate. But Gate is not the service that's actually raising the error; it's actually coming from a different service. So maybe the best practice for us is to publish how this flow works and how to chase down a problem, because these are distributed microservices: you actually have to look at the other services too, to see where the actual exception is coming from. So maybe we could do some of those things as well.

Yeah, I mean, there are endless documents to write. This is actually something else we've been focused on; every time we find one of these, we try to fix it. We ship all our logs to a log aggregator, and when some pipeline fails, we go to the log aggregator and query by the pipeline execution ID, and you hope that every log message that Spinnaker emits for that execution has that as a tag, but sometimes it doesn't, because there was a bug in the code where that correlation ID didn't get percolated all the way through. This is something that probably goes silently unnoticed from 1.28 to 1.29 to 1.30. But I think it is generally getting better as the days go by, and I've been reasonably good about getting those fixes upstream.
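The correlation-ID tagging described above, making sure every log line for an execution carries the execution ID so a log aggregator can be queried by it, is typically done with a logging filter. In Spinnaker's Java services this would be MDC-based; here is a minimal Python sketch of the same idea, with a made-up logger name and execution ID:

```python
import io
import logging

class ExecutionIdFilter(logging.Filter):
    """Stamp every record with the current execution ID so a log
    aggregator can later be queried by that ID."""
    def __init__(self, execution_id: str):
        super().__init__()
        self.execution_id = execution_id

    def filter(self, record):
        record.execution_id = self.execution_id
        return True  # keep the record; we only annotate it

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(execution_id)s %(message)s"))

log = logging.getLogger("orca.sketch")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(handler)
log.addFilter(ExecutionIdFilter("01ABCD"))

log.info("stage starting")
print(stream.getvalue().strip())  # → 01ABCD stage starting
```

The bug class described above is exactly when some code path logs through a logger that never got the filter attached, so the ID is missing from that one line and the aggregator query silently misses it.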
I mean, what I meant here was: because it's distributed microservices, you need a log aggregator to really look at the errors; simply looking at Gate is not sufficient. It's a best practice to have a log aggregator with a search tool.

Yeah, I guess. We all make a lot of assumptions. Most people these days are running Spinnaker on Kubernetes, and most people know they have to ship their logs to a log aggregator somewhere, whether it's Splunk or Loggly or wherever. But people are at different stages of whatever DevOps curve, and it's true that to troubleshoot something, we really need all the logs from all the places. There are some reference diagrams on the architecture part of the website that go over the life cycle of a deployment. It shows Deck talks to Gate, Gate talks to Orca, Orca will hit Clouddriver, and they'll query back and forth and poll, so you can see that interaction. And it's remained pretty stable over time.

That's helpful, but I think the log aggregator is a really good solution, so you can actually see what's causing the issue.

I know people aren't exactly shopping for ways to contribute, especially not a feature as gigantic as this, but something like Zipkin or OpenTracing, which we don't have in Spinnaker, would be really sweet. And it's something else that takes another level of operational maturity, because once you generate all these traces, they need to get shipped off somewhere too, and I would say fewer companies have that set up than have log aggregation. But we would be making the world a better place if at least the code was set up for it by next year.

So please do come to those meetings with any of these requests you have, in priority order. Raising an issue may not be as effective as showing up at a meeting and saying, hey, I have this issue. Let's call it a day and go for your drinks. Thanks, everyone. Thanks. Bye. Is it over? Yeah.