Welcome, everyone. This is a panel discussion, so we're going to try and keep it as interactive as possible. It's better for us and better for you if you ask us stuff and we bounce off you as well, rather than just doing a monologue between the four of us. Just before we get started and do introductions, to gauge the audience a bit: can you put your hands up if you run machine learning workloads on Kubernetes? OK, so yeah, a good chunk of the audience. For people online, that was about 60% of the audience. OK, and then of those people, how many of you are running stuff in production? OK, probably most of them as well, so that's good. So about half the room is running production ML workloads on Kube. That gives us an idea of where to start.

I'm curious, of the people that raised their hand, are you also training models? So raise your hand if, as well as running in production and managing the infrastructure, you're also training the models. OK, cool.

Cool, right. So what we'll do is kick off with a quick introduction to each of us and maybe a little bit about what machine learning looks like in your organization. Yudre, do you want to start?

Sure, thank you. My name's Yudre. I'm the team leader of the data science runtime team at Bloomberg. We provide on-prem, bare-metal machine learning infrastructure. We provide multiple products through our platform, including notebooks, training, model serving, Spark, etc. And we have many very interesting internal use cases, such as machine learning for financial products, for our news products, and so on.

I'm Ed, Ed Shee. I'm head of developer relations at a company called Seldon. We're slightly different from Bloomberg and Spotify in that we build machine learning software that we then give to other customers to deploy their stuff on. We do the last mile of machine learning: the deployment, monitoring, and explainability side of things, but across all sorts of use cases, which is kind of fun to see.

Yeah, hey, my name's Zikashi. I'm an engineer on the ML platform team at Spotify. ML is in almost everything we do at Spotify. Teams use machine learning for music recommendations and to help people discover new music. There are all kinds of applications for ML, ranging from simple supervised learning on tabular datasets all the way to very complex deep learning on graph neural networks. So our team is building a centralized ML platform that allows different teams to do their ML ops on our platform: they can do feature engineering, data validation, model training, and evaluation. And if everything looks good, they can deploy the model to production.

My name's Keith Laban. I am the manager of the cloud native compute runtimes team at Bloomberg, so just to add on a little bit more to what Yudre was saying about Bloomberg. A few years ago, I think I gave a talk, maybe in 2018 or so, about building a data science platform using custom resources. Since then, we started solving machine learning-focused problems to address a lot of the data problems at Bloomberg, from natural language to market data, across a diverse set of teams at the company. And we ran into a lot of infrastructure-related challenges in building our ML solution.
And along the way, we also realized that the tools we were building (training tools, inference tools for going into production, tools for data transformation using Spark, workflow tools to solve machine learning problems, Jupyter notebooks for data exploration), yes, they're useful for machine learning, but they actually have a lot of other applications too. And so for us, over our journey, trying to find the right interface has been a real challenge, because there are so many different types of people using our platform. Have you found that in your machine learning journey?

Yeah, I think actually I'm going to ask another question to the audience quickly here. Can you put your hands up if you are a data scientist, if that's your title? Yeah, like one person in the whole room, two. OK, one, yeah. So I think that's kind of part of the problem here. Unsurprisingly, we don't have tons of data scientists here at KubeCon, but ultimately we're building these platforms for data scientists.

Right, so: who here is an ML engineer? Oh yeah, ML engineers, OK. What about ML ops engineers? Getting there. What about ML software data engineers? Yeah, everyone has a different definition, I think, right? I see a lot of people that want to raise their hand, that want to self-associate with that skill set. I've seen plenty of these polls this week and I see the same kind of engagement. And so as we're building these platforms, at least for us, we've been trying to define those personas and what that interface is, and how abstract it's going to be.

I can also talk a little bit about that. When we think about the interface, we also think about what's the right tool and interface to give. Is it a REST API? Is it an SDK? Is kubectl the right thing to give to data scientists? We also did a lot of soul-searching about what the user group is. Of course, it's primarily data scientists and machine learning engineers who want to run large compute workloads and serve models. But at the same time, especially for notebooks, there are a lot of non-engineering use cases as well: wanting to do a report, do some data analysis, kick off Spark to do some ETL. So that's something we have discussed a lot internally. I wonder how your companies think about it.

Yeah, that's definitely true at Spotify as well. The ML problem at Spotify is a long-tail problem: all kinds of different ML applications. We have very diverse ML practitioners using ML in very different ways. People need notebooks, people need pipelines, people need a reliable way to run ML continuously. So when we design the platform, we really need to think in layers. Can we design an API that streamlines a standardized process, so that when users get onto our platform they can very quickly build a prototype and build an ML application in a few steps? But if their application really goes deep and needs a lot of customization, the platform also needs the flexibility to let them use the low-level APIs, use Kubernetes, use the compute power directly, so they can build a really good application.

Yeah, and do you use anything like that at Bloomberg? Do you have different levels that people can enter the stack from, depending on what they want to do?
Yeah, yeah, that's an interesting way to put it, levels. So my org is called the Runtimes team, and we've been really focused on exposing things up to sort of the technology layer of the runtime. So if it's Spark, it's: how do you run Spark on Kubernetes as well as possible? There might be a number of products that you expose using Spark, because Spark turns out to be really good for reporting functionality and analytics tools, but it also happens to be an extremely important part of the ML lifecycle, the data engineering, right? And so you might want to have Spark as part of a data science platform in one UI, but at a sufficiently large company, you probably have plenty of other engineers somewhere else who also want to use Spark for much, much different things. And so the direction we've taken is: how do we build something flexible enough that other platforms can platformize on top of us without having to reinvent the wheel from the ground up?
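As a rough illustration of the Spark-on-Kubernetes entry point described above, here is a minimal PySpark sketch. The master URL, container image, and data path are hypothetical placeholders, and a real setup would also need the Spark container image, service accounts, RBAC, and client-mode networking configured.

```python
# Minimal sketch: pointing PySpark at a Kubernetes master so executors run
# as pods. Assumes images, RBAC, and client-mode networking are already set
# up; the URL, image, and data path below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.example.com:6443")  # hypothetical API server
    .appName("reporting-etl")
    .config("spark.kubernetes.container.image", "registry.example.com/spark:3.5.0")
    .config("spark.executor.instances", "4")  # executors scheduled as pods
    .getOrCreate()
)

# Toy aggregation standing in for a reporting or ETL workload.
events = spark.read.parquet("s3a://example-bucket/events/")  # hypothetical path
events.groupBy("event_type").count().show()

spark.stop()
```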
I'm curious: in the audience, who thinks they have too many controls? Too much to configure, not enough abstraction? No hands, OK. What about too little abstraction? Who thinks the tools they're using are really hard to use, that there are no abstractions at all? Oh, OK. Anyone else? I think I see a few hands come up. Did I miss any? No. I think it's last-session-of-the-day vibes, right? We're trying to keep you awake.

We do actually have a stooge planted with a mic, by the way. So at any point, if you want to chip into the conversation, please just stick your hand up and you can get involved. We actually have a question over there, so yeah, do you want to go over there?

While you're doing that, I was going to say: for the deployment platform we build, we tend to see three types of users. There's your "I don't know what Kubernetes is, and I don't need to know what it is" type of user, and for them there's a lovely UI you can just walk through, drag and drop, and get your model deployed. Then there's your Python user, who just wants to use the SDK and have it deployed with a few config params. And then there's your super user, who understands Kube and wants to go and play with the container spec and do all sorts of things underneath. We feel we have to account for all of those, right? And actually, the more customers use us, the more they end up in that super-user category. Do you find supporting all those types of users is a challenge?

Yeah, definitely. At Spotify, support is a big thing for the platform team. Since we have all kinds of users using all kinds of tools, how do we provide different levels of support to enable them, to empower them to build ML applications? It's very challenging. Our platform team has a so-called user engagement team; they basically bootstrap ML applications in the feature teams from zero to one. They go out, collaborate with different teams, and teach them how to use our tools to build ML applications. Then, hopefully, the engineers on those teams can learn from that expertise, and eventually they take over the product and run with it from there. Yeah, how do things work for you?

That's interesting, yeah. I mean, we don't have a user engagement team, but I kind of wish we did. We put a lot of energy into documentation and internal training, and we've been quite successful with it, but again, like you said, there's sometimes a misstep in where the level of abstraction actually is, and covering all of those bases sometimes gets really hard. So I imagine your user engagement team can step in and do some solutions engineering with them.

Yeah, definitely. Docs are very important, but docs can only help to a certain extent. If a user really needs to go lower level, say, "hey, how do I build a custom component using a different framework?", then you really need people with the expertise to help them understand the low-level libraries and the low-level SDKs.

Yeah, I mean, you'd think as a software vendor this bit should be easier, but if anything, it's almost harder, right? Because being on top of Kubernetes, you have this wealth of cloud native tooling you can use, all these fantastic CNCF projects that probably all of us here are using all over our organizations, but the support and education window suddenly becomes absolutely enormous. You're using a service mesh as part of your product, and people are then asking you, "oh, I have an issue with that, it's conflicting with something else on my cluster." Is that now our job to help them with? It becomes really difficult, all of these kinds of things. So it's a hard one to answer, but what we do is try to educate for as many of the common use cases as possible and then just be really flexible where power users need help.

I'm curious, do you build specific tooling to help with support, like debugging tools?

No, we don't have a specific tool. I guess, again, one of the great things about being a vendor is that at least we have a paid support option, and people can go through that to reach dedicated people who have skill sets in all these CNCF tools that we use. But there's no magic "run the debugger" type of thing, unfortunately.

Yeah, we definitely feel the same. One thing we always ask ourselves is: what's the boundary between user responsibility and infrastructure responsibility? I think that's always a huge challenge for managed infrastructure, basically any type of managed infrastructure. And we ask ourselves: do we provide enough monitoring? Do we have enough metrics, enough logging, sufficient dashboards, so we can offload some of that support and turn it into self-debugging tools and documentation? But we always find it's never enough, because the user base is very diverse. For users with a stronger engineering background, having CPU metrics and memory metrics is very helpful. But for a different group of users, maybe focused more on data science or data analysis, that kind of tool isn't the right tool for self-debugging.

I think we have a question now, here.

So, I come from a relatively mature hardware-software company, and we've recently begun doing ML on our kinematic data, on our vision data, on our video data. I guess the question is: how do you establish a culture where management understands the importance of ML ops and DevOps? Because right now our pyramid is inverted, where the data science analysis comes first and happens on a small scale. But when you try to scale it over your entire fleet around the world, as most people here probably know, you run into difficulties.
So how do you establish a culture, both in management and within the group? You mentioned documentation is a big thing at Bloomberg; I'm sure that culture didn't come about overnight, and others have mentioned things at your respective companies that have been established. How did you go about establishing them, building that relationship with upper management and the C-suite, and having that trickle down to the rest of the company?

Before we answer that, could I ask how you define ML ops?

Supporting and scaling models, supporting and scaling training, building tooling for the data scientists; basically all of the things you mentioned is how we're defining it. Like you said earlier: ML software, data engineer, ML ops, DevOps, whatever hat we have to wear. But yeah, thank you.

I'm gonna take a stab. At Spotify, we have what we call the ML Golden Path, which is kind of a standard way to do ML, and our platform is built around those standard ML ops. If you wanna do feature processing, there's a component for you. If you wanna do data validation, there's a component for you. Our platform also requires some standardization of the feature schema and the evaluation metrics. So we try to standardize the whole ML workflow into a sequence of components. Then when users run those workflows, when they've finished prototyping and move the workflow to production, they can follow those steps and deploy the model to production.

Yeah, I was actually gonna talk about a standard we use; you mentioned that having those standards is really important. Something we use is called the V2 inference protocol, or Open Inference Protocol, or whatever. I know that Bloomberg uses it as well, right? And that's despite using a different deployment engine underneath. That's something that's really cool, because it means if I wanna interact with a machine learning model that's been deployed on Seldon, it's the same interface I'd use if I suddenly started interacting with Bloomberg's models. Standardizing those things makes it easier not just for the people who have to deploy these things, but for the people who have to work with them as well: the software engineers and the application owners who are integrating with those APIs. So yeah, that kind of thing definitely helps.
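For readers who haven't seen it, here is a minimal sketch of what a V2 / Open Inference Protocol call can look like. The host and model name are hypothetical, but the payload shape follows the protocol's standardized /v2/models/{name}/infer endpoint, which is what makes models portable across compatible serving stacks.

```python
# Minimal sketch of a V2 / Open Inference Protocol inference request.
# The endpoint and model name are hypothetical; the payload structure is the
# protocol's standard shape, regardless of which V2-compatible server
# (MLServer, Triton, and so on) sits behind it.
import requests

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[5.1, 3.5, 1.4, 0.2]],  # toy feature vector
        }
    ]
}

resp = requests.post(
    "http://models.example.com/v2/models/my-model/infer",  # hypothetical endpoint
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["outputs"])  # standardized list of output tensors
```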
I can mention one more thing I feel is helpful for establishing a culture. Oh, sorry, can you hear me? Yeah. One thing that we within Bloomberg are trying to establish, to streamline things for our data science platform, is defining our escalation process better. We have more than 1,000 people just in our support chat, and we have 100 teams onboarded onto the platform. So defining a way to escalate issues, so we can prioritize faster on our side and really address the highest-priority issues, and streamlining the process so it gives us enough information to shorten the overall investigation time: that's also something I've found really powerful.

Yeah, I think a lot of it, documentation, golden paths, all that stuff, is really great and something we think about as well. But like you just said, really providing transparency to your upper management and senior management into all the work you're actually doing is extremely hard, right? And finding ways to do that; there's not one answer for how to do that. And then, I don't know where you are in your journey towards this, whatever you want to call it, but there's also the subtext of the general cloud native mentality in your company. If you're in a fairly mature company, just shifting to cloud native, before you even introduce the ML stuff into the conversation, is potentially a challenge too. So it's gonna be a mixture of grassroots efforts and finding people who can advocate for you and are really good at talking to C-level people.

Cool, we have a question here.

Yeah, I had a question. So, our company does a machine learning platform for PhD researchers. One thing we have trouble with is that when we upgrade some of the components, we wanna make sure we don't break the code for, I guess, the users. My question is two things. First, how do you communicate what will break? And then, how do you test and make sure whatever you're doing won't break or interrupt the machine learning side of things, or whoever's using the infrastructure, whether it's a CUDA upgrade or any of the smaller components?

Cool. I can take a stab at this while you guys think about it. From our perspective, we're about to go through this, actually: we have a major release of our platform coming that's gonna change slightly how people create their custom Python models. I won't go into the details, but the important thing is, first, we plan to support the previous way of doing it for a long period of time. I think that's really important, because even if there are tons of benefits to doing something a new way, the fact that you've broken the way they did stuff means you have to give them time. And then we provide as much education as we can: firstly, on why they should move to the newer method, but also to make it as easy as possible. And the great thing is, that's actually my role at the company: to create videos and tutorials and blogs and things like that to make it as easy as possible. So it's like: OK, you're upgrading from this version to this version, your custom Python model is gonna change a little bit, this is exactly what you need to do, don't worry too much, et cetera. Yeah.

I think setting expectations is so important. That's something the Kubernetes community as a whole has done a really good job of, in terms of labeling things as alpha, beta, and so on, and being really clear about what that actually means in terms of interfaces, stability, things like that. So following those guidelines as a whole, you wanna pass some of that through to your user base internally. And then we set SLAs and the like on our own internal offerings, and plan out our failure domains carefully, so that we can roll out upgrades in a way where we know we're not gonna take out everyone all at once, and we can triage issues as they come in before taking out too much.

Yeah, for us, I guess communication and collaboration are the keys. Whenever we do big upgrades, we let users know months ahead and tell them to expect breaking changes. We also put a lot of effort into writing migration docs, so users can follow them step by step to update their pipelines and ML workflows. And we also have a migration task force; basically, we'll help them migrate their pipelines, to a certain degree.

We had a question here.
So, I'm curious what your experience has been at your various companies with maturing a machine learning organization into something that's sane and operational. At our company, AI/ML is a relatively new thing. I know at a lot of companies it's been around for ages and they already know how to do it, but for us, we've acquired companies that have an AI component. And so you have a bunch of data scientists who have systems that they built themselves, their pets. They don't wanna let go of them, they don't wanna play with anyone else, they don't wanna hand them over to IT or to a DevOps or SRE organization. You mentioned things like golden paths as a way of building that culture, but do you have any war stories, or just advice, for how an IT or infrastructure organization interacts with teams like that to help bring them on board?

That's a very good question. I would say, especially in a large company, I've found it's hard to have one solution fit all. I've definitely seen teams that want to build some of their own infrastructure, or maybe part of it, and then run another part on our platform. I think that's actually common, because I believe there's no one solution that fits all. But as some features mature, and we find out there are multiple teams that need the same feature, eventually that kind of feature gets merged into the more mature platform.

As it matures; I think that's a great way to put it. The problem gets better over time. If you find the right set of users in the beginning, the ones in the company who are gonna be strong advocates for the platform you're building, eventually you might get some of the rest of that market share, and you might not. It was definitely a big challenge when we first started. Everyone had their own metal, big iron machines, GPUs, whatever, that they could do whatever they wanted on. And then when it broke and there was no one there to fix it, they realized: oh wait, I'm not doing data science, I'm not doing ML research anymore; I'm actually now an SRE for some machine. And so once that sort of resourcing dried up for those types of people, and there was more messaging towards "hey, there's another platform you can use where you don't have to think about these types of issues anymore," then you start picking up some of that market share. But it definitely takes a long time to build a mature system where people actually have the faith that they can move over to it.

Yeah, and we also rely on open source tooling and new technology. For example, Spotify is investing in Ray, a general-purpose distributed processing system for ML, to complement the opinionated ML workflows we have for production systems. That means ML researchers and data scientists can access the broader ML ecosystem, use more libraries, and do experiments on Ray. So that's another aspect: offering a more flexible underlying platform so people can do more things by themselves.
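As a toy sketch of the kind of flexible, self-serve compute Ray offers alongside a more opinionated pipeline stack (the function and data here are placeholders):

```python
# Toy sketch of Ray's task API: fan work out across a cluster, or a local
# runtime as here, without writing pipeline definitions or Kubernetes config.
import ray

ray.init()  # connects to a configured cluster if present, else starts locally

@ray.remote
def score_batch(batch):
    # Placeholder for feature processing or model scoring.
    return sum(batch) / len(batch)

batches = [list(range(i, i + 10)) for i in range(0, 100, 10)]
futures = [score_batch.remote(b) for b in batches]  # tasks run in parallel
print(ray.get(futures))
```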
I think we had a question over here; I don't know where the mic's gone. Yeah, over here, hello. So, given that most of us in the room are probably platform engineers, we may have varying levels of understanding of the needs of ML engineers, data scientists, et cetera. To be frank, for me, it's very little. I guess my question is: where should someone like us get started in terms of understanding the needs of data scientists and ML engineers on a conceptual basis? Whether that's the technical needs or the expertise; I'm just asking for general directional guidance, especially from the perspective of someone who doesn't really know those things very well.

That's a tough question, right? Because I think it slightly depends on the size of your organization. If you're a small company where you're one of the only platform engineers and you just have to support these ML workloads, you might have to be a bit of a hero and learn some of this stuff yourself, working closely with your data scientists, ML engineers, whatever you're calling them. If that's the case, the good news is that there are now tons of online communities around ML, ML Ops, and ML engineering, with loads of resources and people who can share best practices and experiences. Because I think in this ML Ops world, whatever that means, we're kind of where DevOps was 10, 15 years ago, though obviously it's accelerating much more quickly because we can reuse all the best practices from DevOps. But there are still a ton of things that are different: if you're a DevOps engineer, you probably won't have heard of drift detection, or why you need it for a machine learning model. And, this is where I was gonna go, if you're a bigger organization, you can probably hire people to do ML Ops engineering, or machine learning engineers who are closer to the infrastructure side and can understand both sides of the fence, and they can bridge the gap between the two teams. So it's probably one of those two approaches, I'd say. I don't know if any of you wanna add anything to that.

Well, I was gonna go back to something you said earlier, actually, about the CNCF landscape as a whole; you were hinting at that. That's just a great resource to begin with, right? There are so many platforms; I'm sure you're all here going to all the talks on Metaflow and Kubeflow and all the different flows, and then there are all the SaaS providers as well. So there are tons of solutions you can look at that provide some out-of-the-box functionality, so maybe you don't actually need to solve all those problems yourself, and you can focus more on the ML engineering side of it, more on the research side, and you don't need to hire those other types of people. On the other hand, at Bloomberg, we're all on-prem and we have really sensitive data, and we're a larger organization, so we tend to build a lot of it ourselves. But that also takes a lot of time and energy and investment.

Yeah, Spotify runs everything on Google Cloud, so we get a lot of benefits there. We bootstrapped our ML platform on TensorFlow Extended (TFX). It's a component-based approach: you define standard ML ops through components, and you can run entire ML pipelines on Kubernetes. That helps a lot to ensure users can design a workflow and make sure the workflow has everything they need to develop a successful ML application.
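A minimal sketch of that component-based TFX style, for readers unfamiliar with it. The data path and trainer module are hypothetical, and a production pipeline would typically run on an orchestrator such as Kubeflow Pipelines rather than the local runner used here.

```python
# Minimal TFX sketch: each workflow stage is a standard component, wired into
# a pipeline. The data path and trainer module file are hypothetical.
from tfx import v1 as tfx

example_gen = tfx.components.CsvExampleGen(input_base="data/")  # ingestion
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs["examples"])  # dataset statistics
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs["statistics"])  # inferred schema
example_validator = tfx.components.ExampleValidator(  # data validation
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_gen.outputs["schema"])
trainer = tfx.components.Trainer(  # model training
    module_file="trainer_module.py",  # hypothetical user-provided training code
    examples=example_gen.outputs["examples"],
    train_args=tfx.proto.TrainArgs(num_steps=1000),
    eval_args=tfx.proto.EvalArgs(num_steps=100))

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo_pipeline",
    pipeline_root="pipeline_root/",
    components=[example_gen, statistics_gen, schema_gen,
                example_validator, trainer])

tfx.orchestration.LocalDagRunner().run(pipeline)
```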
Great, we have another question, I think.

Yeah, so I wanted to go back to something you were talking about earlier with debugging, specifically about what we expect people to know about Kubernetes. It seems like we want to have a lot of abstractions, and maybe a data scientist doesn't even need to know what a pod is, right, at some point? But on the other hand, we've been thinking a lot about debugging and how to expose that to the data scientist so they better understand their system. And sometimes the errors look like "the proxy couldn't connect to the pod because the pod got OOMKilled," so they need to understand what a proxy is. Or maybe their revision's not ready; so, what's a ReplicaSet? These things end up leaking into the errors we expose to them. So my question is, specifically: what do you expect data scientists to know at this point about Kubernetes details? And how does that leak into your debugging platform and the errors that you show and expose to them?

I mean, yes. Since no one else wanted to go first, I'll try. Yeah, that's a really hard one. For our platform, we talk a lot about error surfacing and what the right level is to bring up to a person. I think it ties back to the personas we talked about earlier. If they're that "doesn't need any Kubernetes knowledge, self-service, just wants to deploy a model and doesn't really care how it runs" type of person, we need a way to surface simply that there is an error. If it's something like "you selected the wrong model type; you gave us a TensorFlow model and it was PyTorch," that's something we can easily tell them. If it's something much lower level, we probably need to tell them there's an error, but then have a way to surface it to the right person: the platform owner or the team lead who runs those deployments. That's the challenge we have there. I think you have to think about it from those personas and see what's the right level of error to surface to them, and if it's not for them, can we send it off to someone else? Yeah, I don't know what you do there, or do you just give them all the logs?

Well, we try to abstract them. And thanks, Alexa, by the way. That's Alexa; she has a great podcast called Alexa's Input, if you don't know her. Check it out. We try to set up these abstractions, right? That's why we were talking about personas and building the right product for the right people. But then you forget something, and some log leaks through, and you're trying to figure out what the expectation is, and whether we have the time to radically change our assumptions about how we built the product to fix that specific type of error message in a way that has some clear remediation for an end user. And the truth is, these tools are built on an extremely complicated stack; you hinted at this earlier. Are you using a service mesh? Are you using Istio, Linkerd? Or are you using KServe, which also brings Knative into your stack? There are just a lot of tools in the middle to provide to an end user what's a fairly trivial thing ("I just wanted to deploy my model"), but it turns out there's a whole lot of infrastructure that has to go on behind that. So a little bit of it, I think, is setting expectations. And that's where ML ops comes into the equation: do you have people sitting with the ML engineers who are productionizing the models who can speak the language of Kubernetes a little bit better? Or is that on the runtimes team, on the platform team, to completely abstract all that stuff away? I think it's a really hard question to answer.
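As a hedged sketch of the error-surfacing idea being discussed here: read raw pod status with the official Kubernetes Python client and translate a few well-known low-level failure reasons into persona-appropriate messages. The namespace, pod name, and message wording are all illustrative.

```python
# Sketch: translating low-level pod failure reasons into user-facing messages,
# using the official Kubernetes Python client. Names below are examples only.
from kubernetes import client, config

# Hypothetical mapping from Kubernetes-level reasons to friendlier text.
FRIENDLY = {
    "OOMKilled": "Your model server ran out of memory; try a smaller batch size or request more memory.",
    "ImagePullBackOff": "The model runtime image could not be pulled; contact the platform team.",
    "CrashLoopBackOff": "Your model server keeps crashing on startup; check your model's load() logic.",
}

def explain_pod(namespace: str, name: str) -> str:
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    pod = client.CoreV1Api().read_namespaced_pod(name, namespace)
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting
        terminated = cs.last_state.terminated if cs.last_state else None
        reason = (waiting and waiting.reason) or (terminated and terminated.reason)
        if reason in FRIENDLY:
            return FRIENDLY[reason]
        if reason:
            # Unknown low-level reason: tell the user there is an error, and
            # route the detail to the platform owner instead.
            return f"Deployment error ({reason}); the platform team has been notified."
    return "Pod looks healthy."

print(explain_pod("ml-team-a", "my-model-predictor-0"))  # hypothetical names
```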
OK, I think we're nearly out of time, so we'll do one final question here and then wrap up after this one, yeah.

I was wondering how you deal with multi-tenancy issues. Because as a platform engineer, it feels like we have plenty of tools to deal with multiple teams of developers in one big cluster. But once we get into the ML space, we're basically just handing out large chunks of hardware to different applications, and we don't really know what's going on in there or how to add value from a platform perspective.

So, we actually use Kubernetes namespaces for user isolation. We chose to build our clusters in a multi-tenant way; that's a very important piece of underlying infrastructure for us. We isolate different users' credentials, identities, resource quotas, this kind of thing, and allow them to basically own a share of our clusters and just kick off their workloads. I believe Spotify also has multi-tenant infrastructure?

Yes, we did it a similar way. Every team has their own namespace. The way it works, we have our own controller: a team submits a config file defining what service accounts they use and who the owner of the namespace is, and our controller automatically sets up those permissions for them and makes sure all the pods running in that namespace have access to data and to the GCP services running in the different user projects.

Cool. Yeah, we do a little bit of both, actually. We have namespaces, and then we also have an abstraction called projects over the top, which allows things like models registered in a catalog to be shared between teams without them necessarily having to have access to the same namespace as someone else, things like that.
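A minimal sketch of that namespace-per-team pattern, again using the official Kubernetes Python client. The team name and quota values are hypothetical, and the GPU resource name assumes the NVIDIA device plugin is installed.

```python
# Sketch: carving out a team's share of a shared cluster with a namespace plus
# a ResourceQuota. Team name and quota values are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

team = "ml-team-a"  # hypothetical team name

# One namespace per team, isolating workloads, credentials, and quotas.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=team)))

# Cap the team's slice of the cluster, including accelerators.
core.create_namespaced_resource_quota(
    namespace=team,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{team}-quota"),
        spec=client.V1ResourceQuotaSpec(hard={
            "requests.cpu": "200",
            "requests.memory": "800Gi",
            "requests.nvidia.com/gpu": "16",  # assumes NVIDIA device plugin
        })))
```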
Yeah, I think we're pretty much out of time, but before we go, I'm gonna put everyone on the spot here and ask you all to give one piece of advice to everyone in the room. I'll go first so you get a bit of thinking time. I'm actually gonna go back to the question someone asked earlier about how to find the right resources for learning machine learning if you're coming from the Kube platform engineer side. And I'd say: if you're interested in the space, try as best as you can to learn both. Because whilst it's a hell of a lot to learn, and you'll never learn everything in this massively growing field, if nothing else you'll make yourself incredibly employable, because people are always looking for people who have skills on both sides. So that's my one hot tip. I don't know if you have one.

Yeah, I would definitely suggest embracing the community. I started working on ML infrastructure about three years ago, and I had no idea what Kubernetes was. Throughout the past three years, being involved in the open source community, being a contributor, or sometimes even just contributing very small things, picking up a small GitHub issue, having a discussion, I've just learned a lot. And along the way, I've gotten to know many, many brilliant people. I feel like I know some of the smartest brains in the whole world and get to absorb their knowledge, and that's what has gotten me, and I guess also my team, this far in providing the data science platform.

Yeah. If you wanna build an ML platform, I really like the design pattern of progressively exposing complexity. You need to offer some streamlined process, like an ML golden path, that allows people to quickly start an ML workflow. Then, if they have more needs, they can dive into the details, look at the low-level APIs and low-level docs, and really customize their applications.

I'll give a, what did you say, a tip? A piece of advice? As an infrastructure provider, I guess: be really intentional about what you're building, and set expectations for your users, especially in the ML space. In the early days of building our platform, it was extremely easy to spread yourself too thin, because the amount of stuff you need to build just to get even a small ML project off the ground is basically, well, go to the Solutions Showcase; it's like something from everyone. And I would really advise you to think hard about what you're actually going to support in the future if you deliver that. So be really intentional about how you approach that problem.

Great. All right, well, thank you all for coming, and we'll be around afterwards if you have any other questions you want to chat about. Thank you very much.