Hello, everybody. I was really tempted to say good morning, but I am on East Coast time and I don't know where all of y'all are, but welcome once again to day two of DevConf.US. Just a reminder, the DevConf.CZ CFP is now open, and thank you for coming to our intern showcase. So it's a little bit constrained this year. We have some members of the community whose projects we want to show off, but we don't have a lot of time and we have a lot of projects. So we're going to jump right into it with a program from someone named Fred Martin, called SoarCS, and I'm going to put up their cool logo and slide while we chit-chat about it a little bit. Hi, Langdon. Hi, everyone. So my name is Fred Martin. As Langdon said, I'm associate dean for teaching, learning, and undergraduate studies at the Kennedy College of Sciences at the University of Massachusetts Lowell, and I'm also a faculty member in the computer science department; that's where my appointment is. Three years ago, with support from Red Hat, we created a summer bridge program to increase diversity in the CS department here at UMass Lowell. This is our third year running the program. We have six students who are going to be sharing their projects with you. These are all projects they did in a four-week period. This year we were hybrid, so we had some students who were fully online and other students who were able to come in one or two days a week, meet with us, and do some in-person things. The thing I wanted to share with Langdon and with the whole audience is that there's a whole range of prior experience that students had coming into the program. Some had lots of experience and have been coding for years; for others, this was their first time coding. Our goal of the program is really to welcome everyone and support everyone. And I'd just like to thank Langdon and Heidi Dempsey at Red Hat. Red Hat has provided financial support, which matters a lot.
We used it this summer to pay our undergraduate staff members, who really helped make the program happen; without them, we wouldn't have a program. And then also moral support and encouragement for these activities. Thank you so much, Langdon, Heidi, and Red Hat. Thank you. Yeah, we really like doing it. It's a lot of fun. I think next year, when we're back in person, it'll be even more fun. Fred, I'm not sure how many people at the conference have been to a live one, but it's certainly a lot more fun when we're able to bring everyone on stage and everybody can cheer and all that stuff. So I really look forward to doing this again next year, but, you know, the virtual is still pretty cool, and I think we have a session later today where, if you want some more details about any of these projects, you can come by and actually talk about them or get a demo or whatever. But let's get rolling. All right. Thank you. Yeah, thank you. So first up, we have... sorry, I have very bad notes. So, Emmanuel, and then do we have... is Zach able to make it? No, he wasn't, right? Unfortunately, no. OK, so this is Emmanuel, and Emmanuel is going to tell us a little bit about his project, as you can see on the screen. And that is a much cooler graphic. I definitely appreciate the change in screenshot. So why don't you tell us a little bit about what you did? Yeah, sure. Hi. Again, I am Emmanuel. And together we created a 2D shooter game, like Galaga, using primarily the Unity game engine. Awesome. That's really cool. And so what would you say the biggest challenge of doing this project was? The biggest challenge, I think, was learning the programming language C#. It's primarily used by Unity, and I had to learn it to create this game. Right. Yeah, minor detail, you know. It's my old joke: at least it wasn't Java.
So one of the things that we talked about when we were preparing for this, right, was that one of the interesting challenges is having the different elements communicate with each other, but that logic having to live in the individual entity. Can you talk a little bit about that? Oh, 100%. As you can see, I put up a little piece of source code right there. You have scripts for Unity, and you work on them using C#. And then they communicate with the Unity UI, and you can alter them from within the UI itself. Oh, that's cool. That's really cool. All right. So yeah, thanks so much for coming. And like I said before, I know you wanted to show some demos of the actual game, so anybody who wants to see those, you should definitely come by and check it out. There is a GitHub link to it, and maybe someone can drop it in the stage chat so that you can go and check out the source code for yourself. Thanks so much, Emmanuel. No problem. Thank you very much for the opportunity. I appreciate it. Yeah, no, it's great. So next we'd like to invite to the stage Jen and Maddie. And I think they're both here today, right? Yeah, awesome. So, Jen, why don't you quickly introduce yourself, and then we'll have Maddie introduce herself, and then we can talk about the project you actually worked on. All right. Hi, my name is Jen. I am a freshman at the University of Massachusetts Lowell, majoring in computer science. And I'm really excited to be here. Maddie? Yeah, so I'm Maddie. I'm basically the same as Jen: a freshman, majoring in computer science, and I'm also excited to be here. Cool. Awesome. So why don't you tell us a little bit about your project? First, I'm going to ask Jen: why did you think this project would be interesting to work on?
Well, we were inspired by the game slash toy Bop It, and we wanted to create our own Bop It game with this little thing called a micro:bit. At the beginning of the program, we were given a micro:bit, which has a display of 5 by 5 LED lights, two buttons, and a sensor on top. And if you play the video, you can see me playing the game. You see an arrow going to the right, and then I press the button on the right, and then when it displays that symbol, I push the button on top. And at the end of the video, you can see me intentionally messing it up and showing the game over screen. That's always a depressing point in any video game, right? So Maddie, what did you think was the most important thing you learned from this project? Honestly, this was really my first experience with any sort of coding. So I think it was just a good experience, a fun experience, to get into it, and I really enjoyed it. That's cool. Yeah. So do you think, and either one of you feel free to answer, do you think it was more interesting to be able to do something that involved hardware, you know, the micro:bit, rather than straight software? Yeah, for me, we did some other things that were fun, but I found this so much more fun to do, because you could actually see in person that you've done something. Oh, yeah. Yeah, that's totally cool. So my daughter actually has done some stuff with the micro:bit as well, and she was very impressed with the things that she had built. So it's kind of a lot of fun. And, you know, not to mention that Bop It is a good game. Jen, what do you think you're going to do with computer science? Have you thought about it at all? What's your long-term goal? I have not really thought about it, but I've always been really interested in the creative side of computer science, like video games and stuff.
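Jen and Maddie's game boils down to a simple prompt-and-response loop: show a random symbol, require the matching input, and end the game on a miss. Here is a minimal sketch of that logic in plain Python; the prompt and input names are illustrative, not the real micro:bit API (on the device this would typically be MicroPython using the display, button, and accelerometer objects):

```python
import random

# Each prompt maps to the input the player must give. These names are
# hypothetical stand-ins for the real micro:bit display/button/sensor APIs.
PROMPTS = {
    "arrow_right": "button_b",  # arrow pointing right -> press right button
    "arrow_left": "button_a",   # arrow pointing left  -> press left button
    "shake_icon": "shake",      # shake symbol         -> trigger the sensor
}

def play_round(rng, get_input):
    """Show one random prompt; True if the player's input matches it."""
    prompt = rng.choice(sorted(PROMPTS))
    return get_input(prompt) == PROMPTS[prompt]

def play_game(rng, get_input):
    """Play rounds until the player misses; the score is rounds survived."""
    score = 0
    while play_round(rng, get_input):
        score += 1
    return score  # the game-over screen would be shown here

def make_player(correct_rounds):
    """Simulated player that answers correctly for N rounds, then misses."""
    state = {"rounds": 0}
    def get_input(prompt):
        state["rounds"] += 1
        return PROMPTS[prompt] if state["rounds"] <= correct_rounds else "miss"
    return get_input

print(play_game(random.Random(0), make_player(3)))  # a 3-round run scores 3
```

On real hardware, `get_input` would poll the buttons and accelerometer with a timeout, which is where the "too slow" failure mode of Bop It comes from.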
So I definitely do want to, you know, do more things like this Bop It game. But if that doesn't work out, I have connections to people in the cybersecurity industry. If you've heard of a company named Splunk, my father works for them. Oh, cool. That's nice. Yeah, I will say there's a lot of crossover between mobile video games in particular and cybersecurity, so you could probably marry the two pretty easily. So Maddie, what about you? Have you thought about what you want to do with CS at all? Yeah, no, I'm just going with the flow. Whatever opportunities come up, I'm going to take them. I really don't know yet. Speaking from experience, you know, I spent a long time as a software consultant, then I ended up at a product company, and now I've ended up as faculty at Boston University. So yeah, I know exactly what you mean. More power to you. Computer science is a lot of fun with a lot of flexibility, so I hope you all stay with it. So why don't we move on to our next guest? Which... sorry, I do have two of the same slide. Why is this? Okay, y'all should see Dylan and David's project. Sorry, I'm not sure what I was doing wrong, but, you know, controlling slides is not exactly my forte. So yeah, Dylan, thank you for coming to present your project. Can you tell us a little bit about what you did? Of course. Sadly, my partner couldn't make it. But basically, when we were told to make some big project, we both definitely wanted to make a game. So we started thinking about things, and we were both really heavily inspired by the classic arcade game Asteroids, where you fly around and shoot down asteroids before they hit you. So we decided to make this game in Python, which was interesting because neither of us had a lot of experience with Python, especially the Pygame library. So a big part of this project was actually learning how to code with Pygame and Python. But we both made our own sprites for this game.
You can see them at the bottom of the slide. They're quite small, so the quality isn't great. But the whole gist of it is you just move left and right and fire at the asteroids before they come down and hit you. We did have many different iterations of this game, but we wanted to make it really personalized, so the product on the right is what we ended with. Nice. That's pretty cool. Yeah, by way of interest: Red Hat actually built a few open-source games that were used at various conferences, and one of them is actually an Asteroids-like game. If you wanted to compare notes, you should check it out. It's a lot of fun to play. I kept it open for an entire day and kept randomly playing it. It was pretty funny. So I'll definitely check out yours. I do like that game; I don't know why. So why don't you tell us a little bit about yourself? How did you get to this point? During SoarCS, one of the main technologies we learned was Python, so I was able to dip my toes in a little bit. I did have about a year's worth of experience, which in the programming world definitely is not a lot. I didn't know too much about it, but I've always been interested in game development, so I just dove right in without knowing much about Python, and I think in the end it was definitely worth it. I learned a ton, and I'm really happy with what we came up with. Cool, that's awesome. Yeah, by way of interest, there have been a few games that came up on OpenShift TV, which is one of the Red Hat Twitch channels. There's actually a regular show about game development, so you might be interested in checking that out. So what do you think you want to do with computer science as a major in the long term? Right now, I took cybersecurity as an option for computer science, and I definitely want to stick in that field, but I've been trying out a ton of different things. I did a little bit of programming, obviously, with this program.
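The core mechanic Dylan describes earlier, move left or right and fire at asteroids before they come down and hit you, can be sketched without the Pygame rendering layer. All names and numbers here are illustrative, not taken from their actual code:

```python
from dataclasses import dataclass

HEIGHT = 480  # illustrative screen height; the ship sits at the bottom row

@dataclass
class Asteroid:
    x: float      # horizontal position
    y: float      # vertical position, 0 = top of the screen
    speed: float  # pixels fallen per frame

def step(shots, asteroids, hit_radius=10):
    """One frame of game logic: asteroids fall, shots destroy them, and an
    asteroid reaching the bottom ends the game. `shots` is the set of x
    positions fired this frame. Returns (surviving asteroids, game_over)."""
    survivors, game_over = [], False
    for a in asteroids:
        a.y += a.speed
        if any(abs(a.x - sx) < hit_radius for sx in shots):
            continue            # shot down
        if a.y >= HEIGHT:
            game_over = True    # it reached the ship's row
            continue
        survivors.append(a)
    return survivors, game_over

# One frame: the near asteroid is shot down, the far one keeps falling.
remaining, over = step({100}, [Asteroid(100, 470, 5), Asteroid(300, 0, 5)])
```

In a real Pygame version, a loop would call something like this once per tick, read the keyboard for movement and firing, and then draw the sprites.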
I've also been looking into data science, so right now I'm at that point where I'm not completely sure, but I definitely want to try and stick with cybersecurity. Not that I'm biased or anything, but, being a faculty member in the data science department, I would definitely recommend data science; cybersecurity is cool too. That's awesome. Thanks so much for being here, and we're really sorry we couldn't have David as well, but why don't we move to our next project, and we can chat about that a little bit. Thank you. Thank you. All right, so the next project we have... I'm going to probably get this wrong, but Prathma, right? Awesome, I win. And Michael, thank you for coming on to talk a little bit about your project. So this one's a little bit different from the prior projects. What was it you worked on over the summer? So the technology we used is called MYR, or LearnMYR, and it was actually developed by former UMass Lowell students. It's basically a website where you can freely model anything that you choose, and in that picture you see right there, we tried to stay faithful to our north campus: that's Southwick Hall. And if you go into the project, you can see about a third of the campus, including Kitson, Falmouth, and Dandeneau, where the computer science department and most of its classes reside. Nice. So why don't you, or actually Michael, why don't you tell us a little bit about yourself and maybe how you got into the program?
Yes, so I got into the program because I had actually taken AP Computer Science in high school, and I really enjoyed it. Then, when I got accepted to UMass Lowell, they invited me to the SoarCS program, and one of the main things that they were showing us was MYR, which is a coding-based program, as Prathma said, mainly developed in JavaScript. It was very enjoyable, and I wanted to try making a project that people could relate to, considering all the people in the program are going to UMass Lowell, so me and Prathma decided to make North Campus. Nice, that's really cool. Prathma, why don't you tell us a little bit about yourself, how you got here? Well, COVID hit me pretty hard. I had no coding or programming experience prior to this. I was supposed to take introduction to programming my senior year, but since everything went remote, they decided to cancel the course for the year. So this summer was actually my first time coding, and I actually wanted to challenge myself, since it was practically me stepping my foot into the water for the first time. And did you feel like it was a challenge? Yeah, it was a challenge. Over the past years I used to develop some games on Roblox, an actual game engine that some people know, and in Blender, so 3D modeling wasn't that bad for me. I enjoyed it thoroughly, but this was my first hands-on experience actually manually coding models. Right, right. Yeah, that's cool. So if you were going to do the next piece of this project, what do you think that would be? The next piece, I think honestly, would probably be to expand the campus more and try to fill in more of the missing void that we have.
Just since this is only about a third of our campus, and that's only North Campus, and then we have a whole other building, and we have the Merrimack River right next to it, and then we have River Crossing, and there's another hall, because North Campus has about a dozen halls. I don't even know how many there are. All right. Cool. Yeah, that'd be neat. I can imagine using a model like this to actually find my classroom in it, and have some idea of where it is without having to go hunt it down, which might be kind of fun. Michael, what did you see as the next phase of this project? So for the building, we had I think mainly four halls, like Southwick, as Prathma said, and Kitson, Ball, and Falmouth, but there's still an extension to that building that we didn't get to, which would be Perry and the other part of Ball Hall. So we could probably start there; that's probably where I would start for the extension. Yeah. Cool. Awesome. And as I've been asking some other people, Michael, what do you think you want to do with computer science? For me, I'm taking the general option currently, but I'll just see where the wind takes me. It's all cool stuff, so I'll see what sticks. Nice. Nice. Yeah, just keep in mind, right, you don't have to stick with it; you can change things around and do different things. It's a lot of fun. What about you, Prathma? As of now, I'm not quite sure either, but I'm looking more into data science; that's what's piquing my interest as of now. Cool. Yeah, I just taught my first intro-level class in data science. So it's kind of interesting.
I highly recommend, if you're interested, checking out the Data 8 program at Berkeley, which is an openly available online course, if you wanted to get your feet wet and watch some videos. So, awesome. That's really cool. So let me... I think what we can do is invite Fred back to the stage and say thanks to all the students. I'm going to stop sharing now. But I'll reiterate: definitely come and check out the session later on today. Maybe somebody could throw in the stage chat what time that's at. And Fred, did you have any closing remarks you'd like to make about the program? I'm just really proud of all of our students, and being part of this event, I think, is a great professional development opportunity for them. We had 45 students in our program this year. It's interesting how the virtual format allows us to bring in more students. The program spanned four weeks from when we started meeting with them and supporting them in coding. Before that, we also had about three weeks of getting to know each other over Discord, and that was really valuable. To me, it was really important for the students to get to meet one another and bond with each other, both over the web and in person. I hope it sets them off really well for their careers at UMass Lowell and beyond. Awesome. That's really cool. All right. Well, thanks so much for being here. We really appreciate it. And maybe we can welcome Sally to the stage. Thank you, everybody, for learning about our students, and thank you, Red Hat. Thank you. Yeah, thanks so much. Hey, yeah, should we jump right into it? Do we want to give everyone a 10-minute break and come back in 10 minutes? I could use a cup of coffee. But our fearless leader tells us that we should continue. We should continue. And I also have my mocktail from the morning cooking show: ginger beer, non-alcoholic, with lime and grapefruit. It was delicious. That sounds really good.
With our chef Phoebe, I created a bagel. That's really cool. I've never created a bagel before, especially in such a short amount of time. That was what I was most impressed by. I had a vegan frittata, because I don't like eggs. And what else? Oh, and the mocktail was so good. Yeah. That's cool. That's really awesome. Yeah, so moving along. Our next guest: Elana. I have to look at my notes, I'm sorry. You all know Langdon, because he is our co-organizer, so I am going to skip introducing him again. But I do want to remind everybody that he is now a Boston University professor and no longer a Red Hatter, sadly. But it's the only place that I'd be happy he left us for, so I'm okay with it. And Elana, we're really honored to have her come on stage with us today. She's going to talk about her work. She is a principal software engineer at Red Hat. She works on the node team with OpenShift, but she's also really active upstream: she's a member of the SIG Node group, and she was an SRE and a technical lead on the Azure Red Hat OpenShift team. She's a chair of the upstream Kubernetes SIG Instrumentation group. Also in the wider FOSS community, she's on the Debian Technical Committee, and she is a Python Software Foundation fellow. So she's got a lot going on, and fortunately, she is taking a few minutes out of her day today to talk with us. We're going to talk about Kubernetes. Most of you are working with Kubernetes in some way or another, so let's get to it. Maybe Elana can come right on the stage now. And there she is. So, as mentioned, I'm Langdon White, and we're going to talk to Elana about Kubernetes. But let's start the conversation. Elana, what brought you to Kubernetes in the first place? That's a great question.
I got into the Kubernetes community when I took a job at a company where they were running these very large on-premises Kubernetes clusters, and they needed some SRE folks for the team. You know, it was relatively difficult at the time to find people who already had Kubernetes experience to run these large-scale clusters. I had a bunch of systems administration experience from previous work, and so they said, well, hey, come here, learn this Kubernetes thing. And we really need to monitor our clusters, so can you figure out the monitoring situation? We've got this Heapster thing, but we're not really sure that we have enough observability for most of our needs. And so I sort of began this open source journey, wherein I ended up looking for all sorts of open source tooling in order to instrument these really large clusters that were not in the cloud. So they lacked a lot of the nice benefits of being in the cloud; you know, we didn't have cloud-based monitoring systems and whatnot. So I ended up getting involved in this thing called SIG Instrumentation, the instrumentation special interest group. And I was trying out various pieces of software, like this thing called kube-state-metrics, which talks to the API server and gives you state information about your cluster: how many running pods there are, services, namespaces, that kind of thing. And I was like, wait, it says it's always supposed to use this much RAM, but I have this 250-node cluster and it's using like 10 times that. I'm not sure. And it keeps getting OOM-killed by, you know, this thing that says it should only use so much. So I've messed around a little bit with this. Is this expected? Do you know about this?
And so I ended up working with a bunch of folks in that community, including, at the time, a number of Red Hatters, who were like, yeah, please test this thing in your giant clusters, because we don't have giant on-prem clusters that we could just do that in. So I was like, sure, send me various beta editions and whatnot, and I will test them and I'll benchmark them and I'll let you know the stats. And thus my Kubernetes journey started. So that was really fun. And when I got the opportunity to join Red Hat, I was able to move into a role with the Azure Red Hat OpenShift team, where I ended up becoming a tech lead and sort of a regional escalation contact for the Americas, as well as working with two other regions, EMEA and APAC, where most of our team was based in Australia. And it was really exciting: I worked in partnership with Microsoft, because this was running on Azure, to be able to administer all of these customer environments and build an even larger-scale platform as a service. Right. Yeah, that seems like a really small thing to have done, right? But what I wanted to mention was that I actually interviewed you for a Twitch show on OpenShift TV called Kubernetes by Example Insider a couple of months ago, where we talked about Kubernetes in general. What we wanted to focus on here today is that you get a lot of questions about Kubernetes, I'm sure, on a regular basis, and one of the things that you keep thinking to yourself when you hear these questions, as we talked about before, is: it ain't magic, right? So what does that mean? Let's start there for anybody, particularly non-English speakers. What does that mean to you when you say it ain't magic? So I put out a straw poll on Twitter about a month back, and I asked folks: what surprised you the most when you started using Kubernetes, or what confused you when you started Kubernetes?
And there were all of these things. And to some extent, I think that people might get a little bit caught up in some of the hype around Kubernetes. They're like, it can do this and it can do that and it can do all of these things, like, you know, happy magical rainbows. And it's not magic, though. Kubernetes is just an open source API that can be used for workload management. I mostly work on the Linux side, so I like to think of it as an open source, distributed abstraction over running containers on Linux. That gives you the magic of container orchestration, but under the hood, you still have a Linux machine. It walks like a Linux machine, it quacks like a Linux machine, it will fail like a Linux machine. And folks are like, but I expected these things to magically work. And I'm like, but you still have to set up the network for the Linux machine, you can still fork-bomb the Linux machine, you still have to secure the Linux machine. There's no magic. It is a knowable, tangible thing. Yeah, it's funny. So, you know, Red Hat, of course, has actually made some t-shirts to indicate that containers are Linux. But I think that magic is a common problem whenever we have a new technology that's getting a lot of traction, right? And everybody's really excited about it. So why don't we delve into a couple of your examples there? If we talk about networking, where is that, quote unquote, I hate to say bad assumption, but where is that assumption in particular, the thing people think is going to happen with networking that really you don't just get magically? So, I'd say I don't consider myself a networking expert, although I certainly have the fundamentals down.
And I would say one of the things that surprised me myself is, you know, you go and you look at the Kubernetes documentation, and you read about these things called services. And I'm like, that seems kind of like a load balancer, except it turns out it's not a load balancer. It's more like declaring a load balancer to Kubernetes, but the load balancer has to already exist. And you can implement the networking stack in Kubernetes in any number of different ways. These days, we have what's called the CNI, or the Container Network Interface, so you can plug and play various ways to set up your network. If you're in the cloud, a lot of people don't realize that a huge benefit of what the cloud gives you is just really easy software-defined networking on demand. It's incredibly complex, but it's platform, it's infrastructure, so it's beneath the surface and people don't realize how much complexity is beneath them. So to some extent, the cloud, as far as I'm concerned, absolutely is magic. And Kubernetes just makes it easier to, you know, mix and match and move around between clouds. It makes it easier to build technologies such that you can run them on hopefully any arbitrary environment, whether that's a cloud or on premise. In the networking case, I remember working on premise, and the way that we implemented services, you know, we didn't have something like OpenShift's OVN. Rather, we set up BGP ourselves and then set up the BGP network on our cluster, such that a service was basically just getting BGP-routed to whatever we happened to declare within the cluster, which was very different, but a totally workable thing. Similarly, if your network dies, if you have a network partition, Kubernetes is not necessarily going to be able to fix that for you.
It will ensure that your applications are somewhat resilient to that, so it can handle those network partitions relatively well. But if you need network connectivity, it's not going to make the network come back up. That's still, you know, beneath the level of Kubernetes abstractions; you need to have that set up and working. So regarding that, though, do you think that Kubernetes could or should manage that network stuff for you? Is that a future plugin? No, it's not a future plugin. So I think that one of the exciting things about Kubernetes, at least from my operational perspective, is that it has a really, really excellent architecture. My favorite thing about Kubernetes is the architecture. Fundamentally, if you have a Kubernetes cluster, I know where to find things when something breaks down, because the Kubernetes architecture allows you to have this really accurate mental model of what's going on in a cluster. Namely: everything talks to the API server. The API server is the arbiter of state in the cluster. The state is stored in etcd. And everything in the cluster that talks to the API server is a controller, and each controller has a sync loop and works on a particular kind of API object. And if I just follow this flow, I can find what I need to find. Similarly, the way that a node is architected, the way that the Kubernetes kubelet works, is intended so that the data plane will be resilient to control plane failure. So if my kubelet goes away, like it gets hit with a bolt of lightning, the containers on that machine are going to be fine. They're going to keep running. And when we revive or resurrect the kubelet from being zapped with a bolt of lightning, it will come online, it will account for those containers, and it will be like, yep, everything's great.
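The controller pattern Elana describes, a sync loop that drives a particular kind of object toward its declared state, can be sketched in miniature. This is plain Python with dicts standing in for the API server and etcd, not real Kubernetes client code:

```python
def reconcile(desired, actual):
    """One pass of a controller's sync loop for a hypothetical
    deployment-like object: make `actual` replica counts match `desired`.
    Returns a list of (operation, name, count) actions."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    # anything running that is no longer declared gets cleaned up
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))
    return actions

def apply(actual, actions):
    """Apply the controller's actions to the simulated cluster state."""
    state = dict(actual)
    for op, name, count in actions:
        if op == "create":
            state[name] = state.get(name, 0) + count
        else:
            state[name] = state.get(name, 0) - count
            if state[name] <= 0:
                state.pop(name)
    return state

desired = {"web": 3, "worker": 2}
actual = {"web": 1, "old-job": 1}
print(apply(actual, reconcile(desired, actual)))  # {'web': 3, 'worker': 2}
```

The key property is the one she relies on operationally: each loop pass is idempotent and converges on the declared state, so you always know which controller to look at when an object is wrong.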
So it can handle relatively serious interruptions without skipping a beat. If we add more and more complexity to what the Kubernetes control plane manages, we won't be able to use the boundaries of that abstraction anymore to figure out what's going wrong in the cluster. Right now, I mean, it's not super straightforward. It requires learning, it requires expertise, and it definitely requires a lot of just trial and error. But I can figure out pretty quickly what thing is broken in a Kubernetes cluster. When the Kubernetes cluster starts doing more and more things, we lose that nice separation: I've got this thing responsible for this, this thing responsible for that. Kubernetes sits on top of this thing, the Linux kernel, or a Windows box perhaps, and here is where the kernel is actually the thing breaking down; here is where I need to care about disk performance; here is where I need to worry about memory QoS. Right now, it's easy to separate those things because of Kubernetes' architecture. The more things we try to pull into core Kubernetes, the more we dilute that value. So I would say we should not do that. Kubernetes should not magically try to be all of the things; there are other things that already do this stuff better. Right. So I think that's a really common failure, or tendency, of big projects of any kind, right? Whether they're proprietary or open source, the temptation to expand to cover all the things is very, very high, and that's why I wanted to ask your perspective on that particular subject. You know, the Linux kernel, I think, is a great example of this as well. They've done a pretty good job of not doing all the things, right?
They just do what they do. And I think Kubernetes has the opportunity to do the same thing. And it's important that we control how far it expands if we want to keep that. And I really like your point about, you know, once you get the hang of it, keeping a mental model of Kubernetes is, "easy" is not the right word, but it's doable, right? It's something that you can accomplish and maintain. It takes a little while, but it is something that is doable. I can definitely name a lot of other software where it's really not, where the complexity is so deep and so cyclical and convoluted that it's very, very hard. So I appreciate that as well. So kind of moving on to one of your other topics. We talked a little bit about the quota and resource tracking problem, right? So what have you found that people expect Kubernetes to be doing that it's not, on that subject? That is a great question. So, and this is pointing no fingers at any particular person: if I had a dollar for every time this happened, I would not be able to quit my job, but I would be able to buy myself at least a nice dinner. The number of times that I've seen somebody be like, my MongoDB isn't working. And I go and look, and I say, okay, share your Kubernetes spec. And they've got this cluster with, I don't know, let's say 10 MongoDB instances or something like that. And they send me the spec and I'm like, oh, you know, you don't have any resource requests or limits set on that thing. And you're running with like four-gigabyte nodes in your cluster, and there's only two of them, or maybe three of them. And you have 10 of these things, and MongoDB needs a lot of RAM. So your nodes are kind of going out of memory. Like, have you thought about that? Have you benchmarked that?
And they're like, oh, no, we didn't realize that we needed to set requests and limits. Okay, let's go do that. And so they go and set requests and limits, and the same problem is still happening. And, you know, you go look at the requests and limits at the end, and it's like, so you have the limit here at 150 megabytes, but Mongo needs more RAM than that, right? Kubernetes does not magically make MongoDB require less RAM. I thought it would actually go to Amazon and buy you more RAM chips and install them. Are you saying that's not the case? I need to download more RAM. No. So there are a lot of misconceptions about how, for example, horizontal pod autoscaling works. It just looks at some very simple metrics that you plug into it. You could have it do more sophisticated stuff, but out of the box, horizontal pod autoscaling looks at very basic metrics, and then it's like, okay, I'll spin up more replicas. In terms of vertical pod autoscaling, so for example, a pod being able to say in place, oh, I'm using more memory than this, I should request more memory, that hasn't landed yet. That work has actually been in flight since the 1.22 release, now going into the 1.23 release; I reviewed some of the kubelet changes. In the last release it was not quite ready, but I'm very excited about that sort of thing. But folks think, oh, it's magically autoscaling. I can just throw a thing in there and it'll just work; I don't have to think about it at all. You do have to think about it, because as far as Kubernetes is concerned, it doesn't have any knowledge of the inner workings of your application. It's just like, it's a container, I'm going to go run it on a machine somewhere. And the only thing it looks at is, when you give it the requests and limits on your application, the scheduler says, okay, I see you've requested these things.
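The "very basic metric" behavior of out-of-the-box horizontal pod autoscaling boils down to one documented formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch:

```python
# The Horizontal Pod Autoscaler's default algorithm (per the Kubernetes
# docs) is just a ratio of the observed metric to the target metric.
# It has no insight into the application itself.
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 replicas averaging 90% CPU against a 50% target -> scale up to 8.
print(hpa_desired_replicas(4, 90, 50))  # 8
```

If the metric you feed it doesn't actually track your app's bottleneck, the scaling decision will be wrong, which is exactly the "you do have to think about it" point above.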
So I'm going to schedule things based on that request. It doesn't know anything about how your app will actually run. It's just working on what you've told it: the request says this, so the scheduler will pack things based on that request. And then when it's actually on a node, the kubelet and cgroups and all of this Linux kernel machinery say, okay, the limit is such-and-such, so if it exceeds that limit, I will kill it. But beyond that, there's not a lot of magic going on. There's no, well, the kubelet introspects the thing and sees it's using this much stuff, and then it modifies these things. That's not happening. I think there's a misconception that that might be happening, but that's not happening currently. But that, of course, at least for me, begs the question of, how do I know what to put in that request as an operator or software developer or whatever? Like, if it doesn't know, how do I figure that out? I'm sorry, I'm going to tell you the answer that I tell everybody, which is: the same way that people have been doing this for the past three decades. Run the app on a machine, collect telemetry, see what its resource utilization looks like, and then tune based on that. So I did a lot of this in previous jobs, like, okay, well, this customer is not really sure how many resources this thing is going to be using. So let's go and look at the data that we've collected in Prometheus. What has cAdvisor said? cAdvisor is the agent that the kubelet currently uses to collect container statistics. And what do the overall cluster statistics say? Do I have enough space on this node, and so on and so forth. And in a previous talk that I gave at SREcon, I talked about how we also have the opposite problem, where people are like, I'm just gonna throw some limits on there, and they set really big limits.
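The point that the scheduler works purely from declared requests can be illustrated with a toy first-fit placement. This is a deliberately simplified sketch, real Kubernetes scheduling is far more sophisticated, and the pod and node names are invented:

```python
# Toy first-fit scheduler: places pods onto nodes using only the declared
# memory request, exactly as written in the spec. It knows nothing about
# what the app will actually use at runtime -- if real usage exceeds the
# request, the node can still run out of memory.

def schedule(pods: dict, nodes: dict) -> dict:
    """pods: name -> requested MiB; nodes: name -> allocatable MiB."""
    free = dict(nodes)
    placement = {}
    for pod, request in pods.items():
        for node, avail in free.items():
            if request <= avail:
                placement[pod] = node
                free[node] -= request
                break
        else:
            placement[pod] = None  # unschedulable: no node fits the request
    return placement

pods = {"mongo-0": 2048, "mongo-1": 2048, "mongo-2": 2048}
nodes = {"node-a": 4096, "node-b": 4096}
print(schedule(pods, nodes))
```

Note what the sketch cannot express: nothing here models actual memory consumption, which is why under-requesting leads to the out-of-memory nodes described in the MongoDB story.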
And their apps don't use anywhere near those limits. And then, if you're working in a cluster that has quotas enabled, they're like, I'm at my quota, I need you to increase my quota. And, you know, you're only using like 10% of your quota; have you considered perhaps reducing your memory requests and limits? And they're like, oh, I didn't realize that. Where could I find that? And you point them at the right dashboard or the right statistics. So there's a learning exercise to be done there. But I mean, I think the fact that these sorts of conversations are happening, to me, that's very exciting, because it suggests that someone who is writing this application, who has no knowledge or understanding of the underlying machinery, is now able to just go ahead and push it to production and run it somewhere. That used to basically be impossible. That's not a thing that would happen, right? You'd throw your source code over the wall to the operations team and they would do their dark magic to it. But now somebody who has no understanding of what's really going on underneath the hood on a Linux machine can be like, I'm going to run my MongoDB. And they can write a few lines of YAML and hit kubectl apply, and voila, they've got a running Mongo somewhere, assuming that the cluster has been configured correctly. That's very exciting to me. So it's, you know, more problems, but these are cool problems. And it also makes me wonder, did we set the abstraction levels at the right layer? With people using Kubernetes directly like this, we expect people to kind of have some of that knowledge. We've made it so easy, but we're revealing a lot of really scary, complex stuff behind that relatively simple entry point.
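The "collect telemetry, then tune" advice can be made concrete with a small sketch: take observed memory samples (for example, from cAdvisor via Prometheus) and derive a request from a high percentile plus some headroom. The 20% headroom and the sample values are arbitrary illustrative choices, not a Kubernetes recommendation:

```python
# Derive a memory request from observed usage: sort the samples, pick a
# high percentile, and add headroom so occasional spikes don't hit the
# limit. Right-sizing like this avoids both the OOM-kill problem and the
# "using 10% of my quota" problem.

def suggest_request(samples_mib: list, percentile: float = 0.99,
                    headroom: float = 1.2) -> int:
    ordered = sorted(samples_mib)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return int(ordered[idx] * headroom)

# Ten memory samples (MiB) with a couple of spikes.
samples = [310, 290, 305, 500, 320, 298, 315, 480, 300, 295]
print(suggest_request(samples))  # 600
```

The exact percentile and headroom are workload-dependent judgment calls; the point is that the numbers come from measurement, not guesswork.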
And I think back to a workshop or a talk that Kelsey Hightower gave, where he said, Kubernetes is not a platform; Kubernetes is a platform for building platforms. And that rings true. You know, Kubernetes in and of itself, I don't run that anywhere except when I'm testing or doing local development. I might run OpenShift or some other distribution that has all of this other stuff baked in. It's not enough just to have Kubernetes alone. Yeah, yeah, we were actually talking about that in reference to the Linux kernel, right? It's like, you don't just run the kernel, right? You usually run some sort of distribution, in the vast majority of cases, right? Because you want stuff like, you know, a terminal. So, yeah, I definitely agree with you there. One of the things I was going to ask you about, though, is that in some of the groups you're involved with, you're trying to improve the information that I can get out of Kubernetes to inform my resource requests, right? Can you talk a little bit about where you're doing that, and what groups to follow within Kubernetes to know when the next stuff that's going to land about understanding my applications would be? Yeah, so a lot of that work happens in instrumentation. I co-chair SIG Instrumentation now, after having been involved in that SIG since sometime in 2018. And there are some misconceptions about instrumentation: we do not, in fact, own all of the instrumentation; that would be far too much work. Rather, we sort of own setting the technical direction and the guidelines for various components, as well as maintaining common libraries. And then we kind of just let everybody, you know, go wild with that sort of thing. So, for example, in the kubelet, there are a bunch of metrics. Those metrics aren't owned by SIG Instrumentation.
SIG Instrumentation will provide reviews on those metrics, to try to be a subject matter expert on demand. But it's ultimately SIG Node, who owns the kubelet, that owns those metrics in the kubelet. Incidentally, I'm also involved in SIG Node. So, instrumentation deals with, some people talk about the three pillars of observability, and I know that framing is becoming a bit dated: logs, metrics, and traces. Under logs we sort of include events, because events are kind of a funny kind of log. And all of our KEPs, a KEP is a Kubernetes Enhancement Proposal, a big feature request slash implementation plan for something upstream in Kubernetes, everything that we deal with sort of hooks into one of those subject areas. So on the logging side, for example, you may have seen that Kubernetes dumps out a lot of logs. And you may have also seen that, other than the audit log, none of them are structured logs. And I feel that this was one of Kubernetes' original sins. Like, I can't believe that in, you know, the year of our Lord 2017 or whatever it was, I mean, the project has been around since I think 2014 or 2015, but even back then structured logging was kind of the standard, at least in the development circles that I was working in, and yet the Kubernetes components currently still, for the most part, do not offer structured logs as an option. And why would you want structured logs? Well, one, you can parse them with computers. That means, two, you can index the fields much more easily, which means, three, you can do aggregation and correlation much more easily on your logs. If you don't do that, you have to go and parse a bunch of syslog, and you have to use a bunch of regexes, and it gets kind of ugly.
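The parsing difference is easy to demonstrate. Both log lines below are invented examples for illustration, not real Kubernetes output:

```python
# Why structured logs matter: a JSON log line parses into named fields with
# one call, while a classic free-form line needs a handwritten regex that
# silently breaks whenever the format drifts.
import json
import re

structured = '{"ts":"2021-09-02T14:00:00Z","level":"info","msg":"pod started","pod":"web-0"}'
unstructured = 'I0902 14:00:00.000000 kubelet.go:123] pod started pod="web-0"'

# Structured: fields are immediately available for indexing and correlation.
record = json.loads(structured)
print(record["pod"])

# Unstructured: scrape the value back out with a regex.
match = re.search(r'pod="([^"]+)"', unstructured)
print(match.group(1) if match else None)
```

Both print the same pod name, but only the structured path generalizes: every field in the JSON record is queryable without writing another regex.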
So one of the things that SIG Instrumentation ended up spinning off into a whole working group of its own is the transition in Kubernetes to structured logging. In 1.21, we migrated the entire kubelet to structured logging. So now you can enable the JSON format. It changes what the text logs look like as well, and honestly, it probably makes it easier if you're doing the regex matching thing. But yeah, going forward, it would be great if we could enable this for everything in Kubernetes. And, you know, we're still trying to figure out schemas and whatnot, because we've got to get components to migrate to this, and we've got to get people using it. And then we've got to figure out, okay, well, does this key-value pair make sense? Is there a particular space that we want to limit the values to here? And so on and so forth. So that's one thing. Tracing is also in flight. We just landed alpha API server tracing in the 1.22 release. A trace basically allows you to look at an HTTP request, or perhaps some other protocol, and look at each of the steps along the way of that request and see how long it took at each point. It's almost like a flame graph, but for request latency; a flame graph being for CPU profiling, seeing how much time did I spend in this function call versus that function, and so forth. So, also very exciting. Do you want me to talk about metrics land too? No, I think that was good. I was actually going to move to a slightly different subject, which is: our examples thus far, and the discussion we've had thus far, have been about the magic being that Kubernetes does a ton, right? But, you know, when we were chatting before, you also mentioned that sometimes people have the opposite kind of reaction, that Kubernetes is so complex that I can't use it, right? Because it does everything, it's actually going to be not a good fit for my environment. And I think one of the comments you had is that, no, not so much.
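The "flame graph for request latency" idea can be shown in miniature: a request broken into timed spans, with the breakdown revealing where the time went. The span names and durations below are invented for illustration; they are not real API server trace output:

```python
# A trace in miniature: each step of a request gets a timed span, and the
# per-span breakdown shows where the latency went -- like a flame graph,
# but for request time rather than CPU time.

spans = [
    ("authn/authz", 2.1),
    ("admission webhooks", 14.7),
    ("etcd write", 6.3),
    ("response encode", 0.9),
]

total = sum(ms for _, ms in spans)
for name, ms in spans:
    share = ms / total
    print(f"{name:20s} {ms:6.1f} ms  {'#' * int(share * 40)}")
print(f"{'total':20s} {total:6.1f} ms")
```

Even in this toy form, the value is obvious at a glance: one span dominates the latency, which is exactly the question a trace exists to answer.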
And I was wondering if you could elaborate on that. Yeah, for sure. So long ago, when I was early on in my Kubernetes journey, I went to a workshop that Kelsey Hightower was teaching. He asked, what's the simplest Kubernetes cluster? And everyone kind of stares at him, stumped. He's like, come on, you run Kubernetes in production. And people just kind of stare at him blankly. Because the simplest Kubernetes cluster is an API server in front of an etcd. There are no nodes; there is nothing else. You've just got the API of the cluster. You can talk to the API server, you can hit it with kubectl, you can hit it with another Kubernetes-compatible client. And that's it. And so, kind of at its heart, I mentioned at the very beginning of the talk, I think of Kubernetes as this set of open source APIs for managing distributed Linux, and really that core API, that heart of the cluster, the API server and everything talking to it, that's really the essence of it. People get really bogged down, like, oh, there's so many details, and there's so much stuff in this YAML. What's a pod, and what's a job? What's a daemon set? What's a deployment? What's a replica set? Ah, there's so much stuff. What's going on with services? They make no sense. What's an ingress? There's all these things. This is very true. But if instead you try to think of, okay, I have an API, I can do things with it, I want to run a thing, and you start with the most simple cluster, the least possible surface area, and work your way up, I think you'll find that, well, it's certainly complex, there's certainly a lot of code. I have read through a lot of code in kubernetes/kubernetes, that being the name of the GitHub repo that most of the code lives in. But you can trace it through, and eventually you can get a really good idea.
It helps to have a lot of experience with Linux; it helps to have a lot of experience with whatever application you're running. But you can trace this through the entire cluster. So, you know, submitting a pod through the API, that pod getting scheduled onto a node, that node picking up that pod, that pod running, that pod dying, the pod never coming back, because pods are mortal, contrary to popular misconception. It's knowable. You can do the thing. So just don't get distracted by all of the things. It's hard to untangle on the services side, especially when you're dealing with software-defined networking, but... Oh, sorry. I was just gonna say, I think you bring up a great point with, you know, Kelsey's point about the smallest Kubernetes cluster, right? It's just the API server. And I think that's one of the key parts of it there, too: you can kind of grow from there. It doesn't have to take over everything all at the same time, right? Which I think is the experience you have a lot of the time with quote-unquote enterprise software: if you don't have all the pieces put in place immediately, it doesn't work well. Whereas with Kubernetes, "gradual adoption" is not quite the right term, but it doesn't just take over the world, because, to some of the earlier points, it's not doing all of the things. It's not magic, right? So what I want to do, kind of on that note of it's not magic, is thank you so much for coming and making day two of DevConf US a great start. We really appreciate you coming by. Cool. And I think we just kind of want to say, you know, enjoy the rest of the day. There's a bunch more sessions.
And make sure you join us for the closing and the trivia later today. I believe it's at four o'clock Eastern, but you can always look at the schedule. As anybody who knows me knows, I'm terrible with both dates and times, so I always look at calendars. So please keep that in mind. And again, thank you so much for coming. We really appreciate it, and we'll see you next time.