Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to the first ever Data Services office hour featuring data scientists. Holy smokes, what's going on? Smart people early in the morning. I'd like to welcome two new faces to the channel: Sophie Watson and Chris Chase. Sophie's favorite algorithm is linear regression, and Chris Chase is in love with dogs, apparently, and has treats for his dog in his desk. I've never thought of doing that, but I might in the near future. So please introduce yourselves for the audience. Thanks, Chris. Hey, I'm Sophie. I'm a data scientist, and I've been working at Red Hat for the past four years or so. We've been thinking about how we can make data scientists' lives easier and how we can make the Kubernetes and OpenShift experience fantastic. I'm Chris Chase. I'm an application developer by training, and for the last couple of years I've been trying to take all the good work that data scientists do and incorporate it into different applications using lots of different methodologies. My dog just decided to strut in, so now she's laid down in the doorway. Never mind, sorry. For folks that don't know, my dog is a German Shepherd-corgi mix, and people always want to see her when I say that. So we were talking before the show, and one of the topics we were covering was this: there's a lot of AI, there's a lot of ML, there's all kinds of things happening out there, but those models struggle to get into applications, or actually running in production, as I call it. I've spent some time in research land in my career in DevOps, and I know the struggle of the researcher trying to get their model into an application and actually working. So the idea here is to help you, the audience, whether you watch this live or later.
You'll better understand how you can integrate your models into your applications and get it all working together, so that your actual work can go to work for your customers or your organization. Right. Yeah. Talk more, Sophie, please. You know, the thing that we've got to remember is that data scientists aren't application developers or infrastructure experts or Kubernetes experts. We don't get excited about running things on X platform versus Y platform. We just want to go off and train that machine learning model, explore that data, and do what we're comfortable with. And although there's programming involved in that, it's certainly not application-developer levels of programming mastery. In general, data scientists work in a way which is not conducive to getting that machine learning component into a larger intelligent application. So that usually means I just do my bit: yeah, I've got a machine learning program, I've trained a model, I'll just throw my stuff over the fence to Chris, and he gets... Yeah, and usually when I get it, and this is not Sophie, by the way, but sometimes from a data scientist who hasn't worked in a software engineering firm before, someone who stays in that Jupyter notebook all day, every day, and that's kind of their world. They say, hey, I've got this awesome working model that does XYZ, here you go. It's a notebook. It totally detects whatever I want, it totally works, it's fantastic. And then the first time I go in there and try to get it working... Sometimes their cells don't run in order, so it's like, oh, you ran cell six, and then you go up to cell four, and then you run it again, and it'll load properly.
And then, usually, for the app all I want is the prediction. I'm not going to train in my app; that's a long process, right? I just want to know whether this five is a five, or, with this linear regression, what it's going to spit out, what the prediction is. So I just want that prediction and its dependencies. I don't need all this exploration and training and all this other stuff; I just need to find that one function and its dependencies, and that's not what I get. I have to tease it out, and it's really hard to do that. And dependencies is a good one as well, you mentioned that, Chris. Like, what dependencies would come with a trained model? So, talk to me. My favorite is when a data scientist has been working on their local machine and does a pip freeze, like, here are your dependencies, and it's literally every dependency on their computer. No virtual environment. Yeah. So for the data scientists that are out there, I mean, Sophie, I think you've maybe learned this working here, just through your normal career, that you can't just hand over an entire environment to an IT team or to somebody that's running production and say, here you go. There's a process we have to follow to get it into prod, and downloading the internet's not usually part of it. Yeah, I think that's true. I don't want data scientists to look at this and think, oh gosh, I've got to change everything about how I work, and I'm the problem here. Because it's just a completely different skill set, and a different appreciation for what it takes to put something into a larger application. And I think at the end of the day, as data scientists we want to find the insights in the data, we want to share that with people, and we want to be as useful as possible.
So the way that we've been approaching it here at Red Hat is to try to teach some DevOps best practices to data scientists: still allowing them to work in Jupyter notebooks, not assuming that we're trying to turn data scientists into application developers, but educating on what it takes to put something into production. No one teaches you about dependencies when you go to data science school. Right, data science class is like, "this is the log-loss function for the third layer of your 7,000-layer neural net, and if you integrate this, then you're good." And you never use that in a day job. I'm sure someone somewhere does, but the practical engineering best practices are what I think we as data scientists can learn a lot from. Yeah, I mean, just something like putting it in a container is a huge win, normally. So if you tell a data scientist to just put this in a container, you have to get them there first, but also answer the why: why should they care whether or not it's in a container? Well, if it's in a container it's more repeatable, it's more portable, it's more scalable; all of the benefits of containers translate beautifully to machine learning and data science models and workflows and pipelines. But at the end of the day, I don't want to have to learn all this stuff just to write a Dockerfile. Right. And that's where IT teams and your DevOps folks or SREs, all the people in the functional parts of your pipeline for getting applications to production, come in and say, hi, let's work together on this so that we can make something usable for all. Right.
Teamwork makes the dream work, in a sense. Sorry to be cliché about it, but in my experience working with data teams, it is all about: hey, I can build you this thing to help make this more portable, just run it like this, and you can put anything in it you want and it'll be fine. Giving them the context to do that helps tremendously, as well as any guiding points, and being their help desk, for lack of a better term. That is definitely necessary. Yeah, and I don't think we're asking too much from the data scientists. We don't even really need them to understand containers, right? If they're just able to separate out their functions, write a Python function, which they know how to do very well, and understand their dependencies, which they understand way better than the app dev does anyway, like, oh, I need this version of TensorFlow and this version of that. That's really all we're asking: just enough for that app dev to take your work and take it to the next level. Yeah. Cool. So we do have something we can show off, I'm assuming, right? Yeah, we're up for it. We can show you what this process of getting machine learning into an application looks like at the moment on OpenShift. We can use Red Hat OpenShift Data Science for the data scientist environment, so you'll be able to see that. It's got the tools a data scientist likes; it's the environment they like to work in. And then I think Chris is going to impose some structure on my work. Yeah, we can start off with a little project here, I guess, if you want to see what I would need from you to do your work so that we can take it to the next level. Yeah, let me show a little sample project, I guess. Is that all right? Yeah, whatever you want. Cool.
So here is the most dead-simple thing I can think of, because in OpenShift we have S2I and the data scientist has her notebooks and stuff. There are a bunch of really complex methods of getting data science work deployed, and I think we can cover those in the future, with things like CI pipelines and model servers like Seldon and KFServing. But really, the two of us could still get our work done just by using OpenShift, Jupyter notebooks, and the OpenShift Data Science platform. So here you can see this little template, which makes that easy for a data scientist; it's got some notebooks here for her to get started. And when she's done there, she can just take out her prediction function, put it in a Python file, and take out the necessary requirements for her prediction function and put them into the requirements.txt. If she does that for me, I can take this, and it already builds, because it's already got an application file in it; really, a very small application file. So we could just use this template and create a new project out of it. And then she's ready to go. I'll give this Git repo to her, and the repo basically functions as storage plus version control, and it'll notify OpenShift that, hey, you need to rebuild and redeploy. So it's doing some heavy lifting there, but it's still simple. And there's almost nothing to it: as an application developer I'm already used to using S2I on OpenShift, and she's already used to using her notebooks and writing Python functions. Right, so it's dead simple, and I would give that to the data scientist to go ahead and get started writing. And I think starting with a discussion between the data scientist and the app dev, like, "right, this is how we're going to serve the model, so this is what we need your work to look like, this is the end goal," is nice.
It makes me think that maybe my model will deploy this time. Well, yeah, and pride in your work is everything, right? Going back to Deming, having pride in your work means, if you're a data scientist, you want to see your models out in the world; that's going to give you the most pride and make you feel fulfilled, which we all want, to feel like we're being productive at our jobs. And having that desire to get your models out there, I understand that, and I feel that for a lot of people, because I often feel there's this wall of confusion that work sometimes gets thrown over and somehow just doesn't work out. And we're trying to educate people so that it can work out. The show is for our ops people and our data scientists and our app devs, all together, all in one. So cute. Okay, so I will go ahead and show you. Let me see if that works. You're seeing it? Sweet. So I'm in Red Hat OpenShift Data Science. This is a managed service running on top of OpenShift Dedicated, and it's an environment that has all the data science things, so I don't have to go to the OpenShift console and look at all those dashboards and find a "root", or a "route" as you North Americans say, and expose it and then go. You can say "dirty Americans", that's fine. So this is Red Hat OpenShift Data Science. You can see that the only thing we've got enabled here at the moment is JupyterHub, because that's all our admin has put in our cluster for us at the moment, but there's a load of other stuff that you can add in here. Like Chris talked about, there are other ways of serving models; we could be using a pipeline, or maybe we're using Seldon Deploy to serve, manage, and monitor those models. So that could all be integrated. I think later we're going to hook up to OpenShift Streams for Apache Kafka, right, Chris? Maybe. Sure, yeah, if we have time we can do that.
So I'm going to launch JupyterHub, and we can see all these Jupyter notebook options we've been talking about. It's completely self-service: as a data scientist I can get my environment, I can set my container size, I've got my AWS secrets in there so I can access my data, and I'm going to start my server. Suspense. All right, so now I'm in a standard JupyterLab environment. I can launch a notebook and start coding in here, but I'm not going to, because Chris set me up with that Git repo. I actually forked it and already put a load of stuff in it; otherwise this would be a very slow, very boring hour for everybody whilst I typed some slow Python code. So I'm going to copy my fork and clone it in the environment. All right, so we've been talking about Jupyter notebooks. Maybe you haven't used them before, or maybe you haven't had the absolute delight of being sent one and trying to get it to run. This is what they look like. It's essentially like a real notebook: you can jot your thoughts down, it's a combination of prose in markdown and code, and you can actually execute that code inline, with the output printed below. So here you can see we've got these standard Python imports, and then we're asking it to print out the TensorFlow version. So I know we've talked a bit about dogs. Chris, you've got dogs as well, right? Yes. And other Chris, you've got dogs as well, right? I do, Franklin's here, actually; maybe we'll try out your model on this dog here. So what we're going to do today is use a pre-trained model for object detection. And the thing I want to emphasize here is that a lot of machine learning models have already been trained for you and are good enough for the job at hand. There are some use cases where you're doing something completely new and pushing the boundary, but in general, there are things out there.
And we want to create a cute little dog detector today, so that we can point our camera at dogs and confirm that they are dogs. And I know there were questions earlier about whether dogs are actually humans. I mean, mine does act like a legit human. She has human-like needs and the whole nine yards. Yeah, several words that she understands in the English language and everything. She's ignoring me, though. That's typical. Hopefully we get to see her later. Hopefully, she might come in here. She's right outside the doorway, to the point where it's obnoxious to go grab her. So cute. So yeah, we're going to put together an object detection app where you can show it a photo of a dog and hopefully it will tell you, yes, there is a dog, and you can show a photo without a dog and it will say there are no dogs here. So, in order to start and test this out, I'm going to need a photo of some dogs. I'm using Max and Margot here today. This is Max. This is Margot. They live in Bristol, England. They're very good dogs. All dogs are. That is true. And I'm using TensorFlow as well. TensorFlow is a standard library that's often used for image processing, which is what we're doing here, because machine learning models can't actually take in a photo; they don't know what to do with that. So we're using TensorFlow to turn this into a tensor, which is something our model can understand. I said I'm using a pre-trained model. These image detection models are good, and they're very big. To train one yourself, you'd need tons of data, tons of time, and loads of compute power, and there's just no benefit when someone's already trained a model on dogs for me. So we're using this SSD MobileNet model, and it's been trained on Google's Open Images dataset, so it's got 600 types of objects it can detect. We just want dogs in there, so we'll deal with that later.
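For readers following along, the "turn it into a tensor" step is mostly about shaping: these detection models take a batch of images rather than a single one. Here's a minimal sketch using NumPy as a stand-in for the TensorFlow call (the notebook itself would use TensorFlow, likely something like `tf.convert_to_tensor` with an added batch axis):

```python
import numpy as np

def image_to_tensor(pixels: np.ndarray) -> np.ndarray:
    """Add the leading batch dimension that object-detection models
    such as SSD MobileNet expect: (H, W, 3) -> (1, H, W, 3)."""
    return pixels[np.newaxis, ...]

# A stand-in 480x640 RGB image in place of the photo of Max and Margot.
image = np.zeros((480, 640, 3), dtype=np.uint8)
tensor = image_to_tensor(image)
print(tensor.shape)  # (1, 480, 640, 3)
```

The model then runs once over that batch of one and returns its detections, which is what Sophie looks at next.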
But yeah, we load in the model, we pass it our tensor here, and we can ask it to detect, and it spits out all of this output, which is essentially telling us the objects that it's seen in this image and how likely it thinks it is that it's seen them. So this one is 85%, this one 79%, and these are much smaller, so it's not very confident that it's seen whatever object it thought this might be. Then there are classes corresponding to the type of object, and detection boxes: this model is going to put a box around the thing in the image to show us where it is. And finally, the moment of truth, the things that it's actually predicted: dog, dog, footwear. And for each of these we've got a detection box, so we're going to be able to impose those back onto the image. So we can go ahead and draw bounding boxes on our image containing those predictions. And okay, we've got a dog here, we've got a dog here. The font isn't very clear, but I can tell you that it thinks their paws are footwear, which I think is really adorable. That is adorable. It is. But for the purpose of today, we just want to detect dogs, not footwear. So there are a couple of things we're going to do. We saw that, first off, the model was making predictions it wasn't very sure about, right? So we're going to filter all of those out. We're going to put a threshold in and say, hey, if you're not at least 30% sure about this, then don't wake me up in the middle of the night to tell me there's a dog outside that I can go and pet. I don't want to get out there and it's actually, you know, a raccoon. Yeah, awesome. I like the gnarly stuff. Or a coyote. Could it tell the difference between a coyote and a dog? That would be fun. I don't think I know what a coyote is; I'll have to look it up later. It's more than Looney Tunes, I'll just leave it at that. All right.
And then the other thing we're going to do is filter out any predictions that aren't dog, because we only want dog. So we're only going to get this app to identify things that are a dog and that we're sure are a dog. We go ahead and encode that here: if label equals dog and score is greater than 0.3. Nothing too extreme going on. And now we've just got our dogs and no footwear. Fantastic. So, you know, Chris, I'm basically done, right? I'll just throw this over the wall to you and tell you to put it in an application. Right, do the thing. So right now is actually usually where I end up getting it, but that's why we had that definition earlier of, hey, could you separate out your prediction function into its own Python file? Because you don't want me fishing through that notebook trying to figure out what is important, because I don't know; I'm going to break her code. I have no idea which function is important. And the same thing goes for dependencies: I don't know if TensorFlow is just for her training or if the prediction is going to use it too, and there's a bunch of those. Matplotlib is probably not going to be used by my prediction function, right? So she puts the requirements that I'm going to need in the requirements.txt, and she puts that prediction function in the prediction file. That keeps me from screwing up her code, and I'll be able to incorporate it much more easily. Nice. Stuff like boto3, the standard Python library we use to connect to S3, which is where my dog image was stored: Chris doesn't need that, because Chris is going to be predicting on new images taken in the app; he's not going to be connecting to S3 buckets. Let's start with the requirements. Chris, tell me about these first three. So I added those to the template. You know what, I should have put a comment in, like, this is mine and this is yours. That's my bad; I probably should have done that.
But the six below are her dependencies for the data science stuff, and up there were my application dependencies. So we're keeping them separate, and that'll allow me to change the application and work on it while she's changing her data science code, without messing with each other too much. And then there's this notion of putting the things that we need to make the prediction in the prediction.py. Our notebook had everything in it: let's look at an image, let's see how we turn an image into a tensor, let's look at all the output from our prediction, let's plot all the bounding boxes, including footwear, let's now amend the code a bit, okay, now we can detect just the dogs. So there's a lot of stuff going on in there. It doesn't look like Python code that you would want to share with the class, I guess. It's a communication tool; it's a nice story. You can see how my brain was working and what we did to get to the result. I think it's really good for teaching, learning, and understanding, but it doesn't make for an application. So in this prediction.py, we've just got all of the things that are important for doing the prediction. We've got three functions in here, and that's all you need, right, Chris? Yeah, and that is so much easier for me to read and get started with than going through all that exploration stuff in the notebook. I can just pick it up and go. Yeah, I can show you that, actually. All right, let me see here. What do I have? So let's see. Are you finding the mythical tab that you want? I am not finding the mythical tab. I'm just terrible at this. It's okay. Can you see the dog detector service screen? Yes, we do. Okay, so you can see this is her project that she uploaded. It's got her new prediction file in it, which is what we wanted, right? It's got her requirements in it. My application file was ridiculously simple.
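A rough sketch of what that separated prediction.py might look like. The transcript only says the file holds the three functions needed for a prediction; the function names and the stubbed-out model call below are illustrative assumptions (the real file would load SSD MobileNet with TensorFlow), but the "label equals dog and score greater than 0.3" filter matches what Sophie described:

```python
# prediction.py (sketch): only what the app needs, with its own imports,
# none of the notebook's exploration or plotting code.
# The three function names here are assumptions for illustration.

DETECTION_THRESHOLD = 0.3  # "at least 30% sure", per the threshold above

def preprocess(pixels):
    """Shape the raw image into the batch-of-one the model expects."""
    return [pixels]

def run_model(batch):
    """Stand-in for the real SSD MobileNet call, which would return
    detection classes, scores, and bounding boxes."""
    return [("dog", 0.85), ("dog", 0.79), ("footwear", 0.41)]

def predict(pixels, threshold=DETECTION_THRESHOLD):
    """The single entry point the application imports and calls:
    keep only detections labelled 'dog' above the threshold."""
    return [(label, score)
            for label, score in run_model(preprocess(pixels))
            if label == "dog" and score > threshold]

print(predict([[0, 0, 0]]))  # [('dog', 0.85), ('dog', 0.79)]
```

The point is the shape of the handoff: the app dev imports one function with a small, explicit dependency list, instead of untangling a notebook.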
This is kind of a dummy project; it just calls her prediction function. And this is a buildable S2I project. It started off buildable: she didn't have to do anything, and I could have built it right away, so we were already ahead of the game. If you don't know what S2I is, it's source-to-image, which is really nice. So I'll show you what it does: OpenShift can just read from that Git repo and create an application from it. Let me go to the developer screen and show you how easy that is. I mean, this is really good for development. I wouldn't use it for production systems, obviously, but it's really nice for iterating: if she pushes up a change to Git, or if I push up a change to Git, our service gets automatically redeployed and we can take a look at it. This here is my little app that I'm going to connect to her service. So let's go ahead and create her service here: create a new S2I app from Git. We'll put in the Git repo, and it's going to automatically detect it as a Python application, because that's what it is. We'll go ahead and create a new service for this, the dog detector service. And then my app will call into the dog detector service and request some predictions. There it is; it's building now. It's going to automatically build from Git, create a containerized deployment, and put that in OpenShift for my application to call, and my application is already configured to know where that is. While that's building, because it takes a couple of minutes: I wanted it to automatically rebuild every time she pushes up a change to her model, because her model is going to get changed quite a bit as we test it out in the real world and see that coyotes are coming back as dogs. We're going to want to retrain that model to leave out coyotes, or give you some special super alert.
So that's really easy, since we're using Git and S2I. We'll just copy the URL here for the GitHub webhook, and we'll go ahead and create a webhook into the dog detector service, with the content type set to JSON. And that's pretty much it. So every time she pushes up a new model, that build will go ahead and rebuild and redeploy, and we'll get a newly deployed service that my app will call. Every time she updates the model, we'll see that. So let's see how that app works real quick. Let's see. All right, so here's my little app. Come here, buddy. Another thing they don't teach you at data science school is how to use Git. I had never used Git before I started working at Red Hat, and I remember that first week was quite something. Well, if it makes you feel better: my first time, and this was a long time ago, switching to Git from other version control, I swear I screwed things up for six months. I think I force-pushed to master, and I never heard the end of that for the rest of that job. Okay. Yeah, I mean, the thing about this is that because you set up that repo for me, all my files were already there; I just had to edit them. It feels like it makes it a bit neater. And then the benefit of working with an application developer is that if I do something tragic in Git, which doesn't happen anymore, but it definitely has, then I can just call you up and say, Chris... Although I will say, no matter what happens, whether it's Git or something else, I do hope that data scientists save versions of their old stuff. I have been in that situation: oh, hey, can we get that model? And he's like, I don't have it anymore; it was in my JupyterHub notebook over there and I don't know what happened to it, I can't get it working again. That has been frustrating for me too.
Yeah, and for me it's not just the model. We talk about the model as being the entity that matters, but it's really a whole pipeline: it's what data trained that model. If we think about getting this stuff into applications, then you've got to add that extra discipline of, you know, pretending that everything you do might be audited. Hey, which model were you running on that date? Yeah. And the notebooks are super important for that too, because the notebook is usually what they used to train that model, so they can reproduce that model as long as they still have that notebook. So, yeah, saving that stuff is super important. It's true. It's also the case sometimes that with notebooks you're never able to reproduce the model, because, as we were talking about earlier, cells get run out of order, so you have to impose some discipline to know what happened, where things came from, and how to keep going. But by separating out your prediction and your requirements, I could totally change that app without making you do anything. Right. So, Chris, do we have time to show Kafka? I guess we could change that app to use a Kafka queue instead. Yeah, let's go for it. Okay. How's it going? Yeah, sorry, everyone, Chris is having internet issues right now, so I'm not sure when he'll be back, but please continue the conversation. All right, we'll pick up where we left off. So right now this app is fun, but it only works when you're clicking on it. So what if I want a monitoring service, like if I wanted to monitor something outside for coyotes or for dogs or whatever, so that if one shows up on its own, I get notified about it? So let's see. Let's go ahead.
So instead of doing a single snapshot and uploading it, let's put a constant stream of images onto a Kafka queue, read that Kafka queue, and then do the predictions on it. So I went ahead and made a Kafka queue called object detection, and it's going to take the images, do the prediction, and put the objects on this other queue. Then I can read from that objects queue and see if anything was detected; I can read those predictions. So hopefully what we'll get is: we can put up a camera, and it'll take that stream of images and run the prediction on those images. And like I said, you've already done your part. My goal is to redo this app without you having to do anything. So I'll go ahead and see what I can do, and open up a JupyterHub notebook of my own. Just like you did with S3, I'm going to go ahead and talk to this Kafka queue, so I need to get the connection information, and that will go in as one of my environment variables, which I'll use in there. So I'm going to go ahead and spawn my notebook. And this is just to show how easy it is to work with Kafka; you can do it from inside a notebook, and then we'll change this app up a little bit. Right, so we'll go ahead and do a new project and a new service, the Kafka consumer, which we'll test from inside JupyterHub, and we'll see what that looks like. So this is a Kafka consumer which will read from that second queue, so we'll go ahead and get it working here. You can tell I've installed the kafka-python library, I've read in my server information, and this is me creating a consumer that's reading from just a notebook test queue, to see if it works; so it's subscribed. And then here, in another notebook, we go ahead and send some messages for it to read. This is a producer, so this is going to put messages on that queue.
And so my app is going to read the images from one queue and then produce a new message on another queue. You can tell this one is sending some hello messages, and this one is receiving and consuming those. So we've got a consumer and a producer worked out. Let's go ahead and change the app: instead of the REST service we did before, we'll create a consumer to consume those images that the app is feeding into Kafka, and then we'll produce the predictions: just do a prediction and stick it on that other queue. And your prediction hasn't changed, it's still exactly the same; your requirements are exactly the same. Mine I just changed to use kafka-python, because it's a Kafka consumer written in Python. So we can go into the app again, and we can create a new service the same way we did for the dog detector service. We'll go ahead and create one from Git, the same repo. And then I'm going to have to put my Kafka information here in my deployment environment variables; let me go ahead and hide that on my screen, because I don't want to show you those. And there you go. So now, instead of just talking to the REST service, I can talk to this Kafka consumer. Sorry, the Kafka consumer is going to be talking to Kafka, and it's going to be adding messages onto that queue, which my app is going to pick up. And once that's done building, the same way as the other one, really the same thing, we can take a look at what it does. All right, let's see this work. This is on video mode; it's going to push up an image every so often, and we'll see if that works. All right. All right, there she is, walking along by my flower bed. So the nice thing about this is that the app totally changed, but Sophie hardly had to do anything at all. Right.
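The consume-predict-produce loop Chris describes can be sketched as below. To keep the sketch self-contained and runnable anywhere, Python's standard `queue.Queue` stands in for the two Kafka topics; in the real service these would be kafka-python's `KafkaConsumer` and `KafkaProducer`, and the topic names are assumptions:

```python
import queue

# In-memory stand-ins for the two Kafka topics. In the real app these
# would be kafka-python KafkaConsumer/KafkaProducer instances pointed
# at the images queue ("object-detection") and a predictions queue.
images_topic = queue.Queue()
predictions_topic = queue.Queue()

def predict(image):
    """Stand-in for Sophie's unchanged prediction function."""
    return {"image": image, "detections": ["dog"]}

def consume_and_predict():
    """Read each image message, run the prediction, and produce the
    result onto the second queue for the app to pick up."""
    while not images_topic.empty():
        image = images_topic.get()             # consumer side
        predictions_topic.put(predict(image))  # producer side

images_topic.put("frame-001.jpg")  # the camera app produces frames
consume_and_predict()
print(predictions_topic.get()["detections"])  # ['dog']
```

Notice the prediction function itself is untouched: only the plumbing around it changed from a REST call to a queue, which is exactly the point Chris makes next.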
And just by that little bit of structure upfront, that lends itself to me being able to change everything without having to bother her over and over again. She's got her prediction, she's got her requirements, and it's really easy for me to pick it up and move with it. So in all this transition from data scientist to app dev, I feel like if you just talk a little beforehand to get a little structure, find out what I'm going to need and what she can do, it just makes everything so much easier. The shared understanding, the shared knowledge, right? That's what DevOps is all about: helping others succeed. That's everybody's role in DevOps, and in ops in general, especially application development; we should be helping each other. And your dog's still in the background looking at you like, why aren't you letting me out? Just want to point that out. Shout out to my dog. So yeah, this was an amazing handoff process that we just witnessed: Sophie did the work, trained the model, handed it over to Chris with some shared understanding about what things are needed, and Chris implemented it, for lack of a better term. So this is the dream, right? And it's not prescriptive; this is just what we did, and it's a really easy way to do it between an application developer and a data scientist. There are lots of great ways, with CI and pipelines and stuff, to go a little bit further if we have a little bigger team, but this is drop-dead simple. Exactly. I think that's the advantage of it.
Right, and there are also so many other frameworks that we could build into this. Chris, you were talking about pipelines and serving models? Yeah, I think we should cover some of that in the future, because the service I used here was a custom app, and trust me, there are a lot of people deploying models in custom Flask apps; that seems to be the norm. But you can use something like Seldon, which gives you drift detection and all kinds of monitoring, and what else does it give you, Sophie? It's slipping my mind. Yeah, things like explainability services. And it kind of automates the rollout of the models: you can route some of your traffic to new models and do canary deployments and stuff. Yeah. Yeah, and it's pretty easy: you can just tell it where your serialized TensorFlow model is and it'll be able to deploy it. So it's real nice and easy; we can cover that. And then for the building of the model, sometimes you want it to build automatically, or build outside of a notebook, and so people will use a pipeline of some sort, whether that's Tekton, Kubeflow Pipelines, Argo, whatever it is; people build those things in all kinds of ways. I think there are all kinds of possibilities, whatever's right for your team, but definitely the transition from data science to app dev is really nice if the data scientist knows upfront what to make. Awesome. If somebody had just watched this and, like Sophie, they're training models and such, do you feel like you've given people enough to go with here, to take to those app development teams and say, hey, we can work together?
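For a sense of how little the data-science side changes on the Seldon route: Seldon Core's Python server wraps a plain class that loads the serialized model and exposes a predict method, and Seldon handles the REST/gRPC serving, rollout, and canary routing around it. A minimal sketch, with the class name, model path, and placeholder echo being assumptions for illustration:

```python
class ObjectDetector:
    """Minimal model class in the shape Seldon Core's Python wrapper expects."""

    def __init__(self):
        # Called once at startup: load the serialized model here, e.g.
        # self.model = tf.saved_model.load("model/")  (path illustrative)
        self.model = None

    def predict(self, X, features_names=None):
        # Seldon routes incoming prediction requests to this method.
        if self.model is None:
            return X  # placeholder echo until a real model is wired in
        return self.model(X)
```

The appeal is the same handoff story as the Kafka demo: the data scientist owns this class and the serialized model, and the serving layer, with its monitoring and traffic routing, is configured around it rather than inside it.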
Like, somebody has to reach out and build that bridge, and it's not always going to be the operations team or the app dev team, because oftentimes they have their heads down. But if the data scientist needs the model in prod, how would you build that bridge? I mean, I think you can work that process out ahead of time. Yeah, right: at some point you need to have a sit-down with representatives from all sides to say, hey, this is the goal we're working towards; how can we facilitate that as a team? Right, and it also depends on skill set: how much is that data scientist willing to take on? There are some data scientists who feel pretty comfortable in Flask and this, that, and the other thing, and they don't mind understanding some of it, and there are others who don't want to touch it. So I think there's got to be an agreement between teams: hey, this is what we can produce, and you can pick it up from that point onward. Right, like a handoff, a takeover point. Yes, absolutely. And then you can try out different products and see what works for you, whether it's Seldon or Kubeflow or custom apps. Yeah. And I think it also takes a real understanding, from the ops side and the app dev side, that a lot of this is outside of a data scientist's comfort zone. If there are data scientists watching this who are just starting out, or in school: in data science, nobody seems to teach you version control, package control, library control; you just work on one computer and install every Python library into the same environment, and no one tells you otherwise. The architecture and engineering stuff is definitely a completely new level.
And I hope people see that you can learn as much or as little of it as you want; you don't have to become an expert. We're not trying to make data scientists engineers. Right. Yeah, that's the biggest thing: we've got to keep people close to their disciplines, but still integrate, and that's where the teamwork comes in: figuring out where the boundaries start and end for each individual team or person, whatever it may be. But get everybody in the room. Everybody needs a seat at the table, everybody needs that kind of equal footing in the room, as it were, to say: this is what our capabilities are, and this is where we need to be. Figuring out how to get there together is the key, I feel like. I feel like the containerization helps a lot too, especially with running those notebooks, because instead of everyone having their own set of things on their local machine, everyone has that common starting-point platform for where they're working, in Jupyter or whatever. And me being able to look at that containerized image, even if it's just a Jupyter thing, and see what dependencies are in that container helps a lot towards production; it helps me pick up and understand what's going on. I thought I heard my dog bark, but no, it was something else. All right, awesome. So despite the train wreck of my home internet and all that, I appreciate everybody coming on and joining us today. Is there anything else we want to show off before we sign off today, or talk about future shows? Maybe we'll be back in two weeks, or four weeks. You tell me, how often does this repeat? Let's see if I can remember how we've split it up. Every four weeks each. Okay. But basically, every two weeks the data services office hour will feature either storage or data science. Right.
So, when you think data: we're going to alternate every other week between how to set up the storage and build the environment to do these things, and then having the actual data science process happen on the show. So if you're looking for Chris Blum and Michelle, they will be here every other week, and Chris and Sophie will be here on the alternating weeks. So the show runs every two weeks, but each time it's a different crew. Does that make sense? I probably explained that in the most awful way possible just now, because I'm so flustered, but anyway: each crew is on once every four weeks. Right. I think for the next shows, we've got some stuff that people want to see. I think we can look at some data exploration from inside notebooks and really nifty tools to do that. I think we can show people some of the training pipeline stuff that people use to build their models. And I think we can show them some of those serving things I was talking about; we can show off Seldon, KFServing, and so on, just to show people the different options, because this was the drop-dead simple option. Like I say, if you're willing to go into those other offerings, I think there's a lot of value there, and they're fun to use. Correct, they are fun, and I wish I was a data scientist, because it seems like a lot of fun. So, thank you Sophie, thank you Chris, and thank you Bobby for helping out today; Bobby the intern, saving me yet again. I don't know what we're going to do when you're gone, Bobby; I'm going to be sad. But tune in in two weeks for more data services office hours. Later today we're going to have In the Clouds with Kirsten Newcomer and hopefully Andrew Clay Shafer, talking about DevSecOps and shifting left. We're also going to have DevNation on the air this afternoon.
We're going to have Intuit coming on to give us a deep dive into Argo. And we're going to wrap things up with another series premiere, the StackRox community office hours, today, so stick around, stay tuned, and subscribe to the calendar if you're not already; I'll drop it in chat. Sonny's back. Hey, Sonny, come here, buddy. Come here. Everybody wants to see the dog. Oh, she looks terrified; I'm holding her wrong. It's okay, buddy. You want to go get Julie? Okay, go. All right, so now that everybody's seen Sonny, I'll sign off for the day, and we'll see y'all later. Stay safe out there, folks. Thanks, Chris. Thank you.