Welcome everyone. As you know, I'm from Germany, and small and medium-sized enterprises are the backbone of German industry. We only have a few really big ones, but plenty of small ones. I just want to tell you some stories, some things I learned. And I'd like to open with a question: who has ever heard of the cooling house experiment? Okay, let me explain it to you. The cooling house experiment comes from a book by Dietrich Dörner. He's a professor of theoretical psychology, and he examines human behavior in decision making. The book is shown down there. Unfortunately, it seems never to have been translated into English. I looked it up, I'm very sorry, but it was one of the best books I've ever read. The experiment is the following. It's a simulation: you are the manager of a supermarket, and the automatic climate control in the cooling house is broken, so you have to control it manually. It cannot be repaired before the next day, so you have to manage it yourself. It sounds like a simple task, just pushing these two buttons down here, plus and minus, to raise or lower the temperature, right? So what happened? What's your guess? Easy experiment, everybody's happy, easy solution? Or, as my question is already implying, probably not. So what actually happens? People do this, and once it gets too cold, they tend to push plus a lot. If it gets too hot, they tend to push minus a lot. So they never really manage to stabilize at four degrees, which is the target temperature. And in the end people say: this is a totally different experiment, you're fooling me, this is broken, this is wrong. People get really mad and crazy about it. And why is that? Because we humans tend to maximize cooling or heating, and only turn it off once we're exactly at four degrees.
And why does this not work? Because we humans are sometimes not really good at understanding complex problems, and because temperature is a slow process. It takes time to go up and down, and you don't have to maximize heating or cooling when you're already very close to your target temperature. So you actually see the erratic behavior I described earlier, and I want to emphasize how erratic it got: people were trying lucky numbers, their birthdays, and other magical numbers. That's what happened here. And why am I starting with this? Because I think it's not only cooling houses. Companies are complex organizations and systems. There's a lot going on, there are a lot of stakeholders involved, everybody wants to bring some extra value to the table, or not, and people interact a lot, they chat a lot. So very often it's not a very orderly process when you implement new things. If you're wondering, like me, how you can actually make data-driven and AI technology happen at enterprises: this is the talk, this is your talk. I'm Alexander Hendorf, and I'd like to tell you a little bit more about what I learned in the field. I'm a managing partner at the digital boutique consultancy Königsweg, located in southern Germany. My company is very dedicated to the Python community: we run the local PyData meetups in Frankfurt and Südwest, and we're also very involved in the German PyCon DE and PyData Berlin conference, which, as you can see, looks like fun. We were also very happy to be, I think, the first conference that came back after the Corona pandemic; this is the Berlin experience. My work, as we say, is to transform to work smarter. My job is to enable our clients to make data-driven and AI happen. And these are the ingredients for success: you need the strategy, you need the means, you need the skills, you need the culture.
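To make the cooling house dynamics concrete, here is a toy simulation in Python. It is only a sketch under assumed dynamics (the delay, the drift rate, and both controller rules are my own illustrative choices, not Dörner's actual simulation): the temperature follows the control setting only after a delay, which is exactly what trips people up.

```python
# Toy model of the cooling house task: the temperature reacts to the
# control setting only after a delay, so "maximize and wait" overshoots.

TARGET = 4.0  # target temperature in the experiment

def simulate(controller, steps=200, delay=10, k=0.2, start=10.0):
    temps = [start]
    inputs = [0.0] * delay          # pending settings model the lag
    for t in range(steps):
        inputs.append(controller(temps[-1]))
        # temperature drifts toward the *delayed* control setting
        temps.append(temps[-1] + k * (inputs[t] - temps[-1]))
    return temps

def maximizer(temp):
    # what people do: full cooling or full heating until the target is crossed
    return -10.0 if temp > TARGET else 20.0

def gentle(temp):
    # small corrections proportional to the remaining error
    return TARGET + 0.5 * (TARGET - temp)

oscillating = simulate(maximizer)   # keeps overshooting, never settles
stable = simulate(gentle)           # converges close to four degrees
```

Run with the maximizing controller, the temperature swings far past four degrees in both directions; with gentle proportional corrections it settles near the target, which is the behavior the participants never found.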
And of course, you also need patience, which is probably my angle back to the cooling house experiment, because innovation is complex. It's not just: we decide, we introduce something new. There are so many stakeholders involved. And I would argue the solution for handling complex tasks is actually to establish a culture. We say: we can help you introduce new technology, but we would also like to look at the culture of your company. Startups usually establish a healthy, very cooperative culture with flat hierarchies, because the people are young, and that's basically already the right way to go. But larger enterprises have cultures established over decades. Many industries come from steep hierarchies where people were not allowed to talk to the superiors of their superiors. And this is something we also have to face, because larger enterprises are not startups; only a few startups are big enough that you could call them a large enterprise. So you also have to work on the culture, probably not fix it, but introduce a new culture, when you want to make innovation work. And this means not only announcing "we have flat hierarchies now". Is it just on paper, or do you actually live it? Did you just try it in a first phase, and are you aware the whole process might reverse? Because humans are creatures of routine as well, and sometimes we fall back into old routines by default, especially when we're packed with work and there's just a lot to do. So one of my questions is: has your organization established a modern company culture already? Do you have an agile mindset? Do you work transparently? Do you have a culture of asking questions openly, and not just in back rooms?
Of course change is constant, but working agile also doesn't mean you change things just for the sake of change. So let's talk about teams and flat hierarchies, because what happens very often? People come in with great ideas and the mission to change things, in this new world of new work. And what gets introduced? Flat hierarchies, hybrid teams with the departments: the developers and the departments work together, on the same level. So we have developers, data scientists, everybody at one table. You work agile, Scrum, with regular retros to ask: is everything all right? What can we improve? You collect points on product quality and collaboration. And here's the reality check, things to keep in mind. Part of my job is very often to point out: okay, this is not moving in the right direction, you're moving backwards again. Because flat hierarchies and hybrid teams are very often only a message. Unfortunately. And I wouldn't even say people do this deliberately; they just fall back into their old routines when they're busy. And the Corona pandemic did not help with working together either, because I think working together means meeting in a room, and online you still meet from time to time, but different social bonds are built. So it's even easier to fall back into these old routines if you're disconnected, not in the same building, because you're busy. And the whole process needs time. You cannot just say: let's do this, everybody's on board, done. That's not the solution. You have to live it, establish it, and recheck whether change is really happening. And change also requires professionals to guide and help through the process.
This, for example, is something we don't do, because we are on the tech side. We collaborate with other consultancies, trained psychologists, who give feedback: who's a good team leader, who's a toxic team leader, all these things. And of course, it all takes time. What I also have to point out: we have all these methodologies now, like Scrum. So who works with Scrum? Yeah, about half the room. And what's your sprint length? Two weeks? Three weeks? Four weeks, anyone? No? And is there a retro in every sprint? Yes. So I wonder how you feel about those retros, because very often I see a retro happening only because the sprint changed and "we have to do a retro". And especially if you have a very small team, there's often nothing to retrospect on. I've attended retros where everybody said: hey, everything's in order, we have the right rhythm, the sprint goal, everything's fine. So why not just say: great, we did the retro, we have nothing essential to discuss, let's get back to work, because we're still busy and have a lot to do? But no, the retro gets done anyway. And let me ask you an open question: what happens if you ask a developer or an engineer for problems? Will he ever say no? Or she? Or them? Or would you rather expect: yeah, I can give you five to ten problems? Because engineers, I argue, are problem solvers. If you ask us for a problem, we have plenty, because we like problems, we like to solve them. And here is also the reference to the cooling house experiment: if you put a small problem into a retrospective and it's the only problem in the retrospective, suddenly it's a big problem, because it's the only problem in the retrospective. So change is not a purpose in itself, and sometimes it's really good to address that.
Okay, we like Scrum, we like agile, we like retrospectives and everything, but sometimes there's nothing to change right now, everything's just fine. And that's a good message, though it's not easy to deliver, because we also don't want to give the impression: there are no problems, don't talk about problems. We want to keep an open culture of addressing problems. So it's not easy. You need to be sensitive and really see whether people have real problems, or whether they're just surfacing something small because they feel they should. So this was the patience part. Now, about talking to customers, let me tell you another story, to demonstrate how executives think. Any executives in the room? Oh, lucky. Yes, okay, welcome. So, a few executives, not many. I met an old friend who had changed jobs; she was in a C-level position. We met at the train station and I said: hey, great to see you, what are you doing now? She told me she had this great new job she was really looking forward to. And she asked: hey, what about you? Well, we founded this digital boutique consultancy; we advise clients on data science and AI. And she said: oh, that's great, I also have to buy AI in my new job. And I had this picture in my mind: how does she, in this moment, imagine buying AI? By the box? By the piece? And she's a brilliant person. So it's just a lack of knowledge, because many people think you just go and buy some software. And that's not the case. It works totally differently, because I would say you cannot buy AI. You have to build AI, and you have to integrate it into your whole ecosystem: starting from the data, talking to the departments, who are of course the domain experts, talking to the people running the infrastructure, and many more across the whole organization.
So it's not just some software or service you can buy; you have to build it. We like to build it with open source, of course; that's why we at Königsweg are so connected to the community. But sometimes open source is not the only solution. You can also mix in proprietary software, or use a commodity cloud service for one task, for example OCR on documents: Google offers that, we don't need to reimplement it with open source, and we already get a good result. Other things, like natural language processing with specialized needs, do require open source. So we have to build it. Building AI. Let me redefine the term AI here a bit, from the enterprise perspective. For customers, it very often includes technologies that we experts wouldn't agree are AI. It's understood differently: AI is basically everything around digital transformation, digitalization, robotic process automation, all this stuff. Very often the terminology is a little bit gray, the terms are flexible. And you also have to address this. Take robotic process automation: I very often say it's great, you already have a project saving your employees time, and not wasting time is always good. But you also have to understand it's just an intermediary technology, basically a hack. Having these robotic process automations in between is not a real solution for your data flows and processes, and many people are not aware of that; they think: oh, it's a great technology, I can automate things on my screen. And I also want to point out that it's important to do this in a respectful way, because we have to understand that we are the experts. I get tons of input from all the conferences I attend, but other people have other interests and other tasks in their jobs. Then somebody tells them: hey, this is great, and of course it looks exciting.
I mean, when I built my first web scraper, I was also super excited back in the day. So it's very important to tell people: hey, this is a technology, this is the taxonomy, this is how things work together, and these are the things you should look into, or not. And very often the whole analytics space ends up in the same bag as AI as well, and the borders between AI and modern analytics, which is very often already driven by open source, are a bit blurry. Very often I say: okay, we start this AI project; the first thing is to bring your data in order, the next step is to see which technologies we can apply. And I always say up front: likely 70% of the value we get will just be business analytics, because your data needs to be right first. If the data is not right, you cannot do AI. And I think it's very important to communicate this very openly: 70% classical analytics, maybe 20% machine learning, 10% deep learning. And that's fine. We don't try to do 100% deep learning just because we love doing that kind of stuff; we always focus on what's best for the customer and what enables the customer. So let me tell you about the lighthouse problem. What are lighthouses? Lighthouses are prototypes, proofs of concept, driven by a single very motivated person or group. They're very often, basically all the time, based on open source and blog posts, and they produce a reasonable result in a very short time. Stakeholders are impressed. So let me introduce a stakeholder and a proof-of-concept driver. The driver could be an intern, or a working student, who says: hey, we have this need, and I found this on the internet, a GitHub repo. You start with some boilerplate code from scratch, and in a very short time you can build something very impressive. From the stakeholder perspective it's: oh, we need new software for analytics.
And usually they expect a full process for procurement and everything, taking three to five years to implement. And then there's just an intern saying: hey, look, I've been working on this last week. Magic. Everybody says: this looks great, there's a big driver, everybody gets excited. And what happens next? Everybody gets so excited they throw in new ideas. And because there's already a very motivated person there, they're very happy to pick up on these ideas and build even more stuff into the prototype. Add more, add more, add more, until what finally happens: the whole thing falls apart. It stops working. It only works on one machine. Of course, it's built on the happy path: the database is always available, all the services are always available. And this is what happens then: there's a bunch of code. So what do you think happens next? Do we recycle the project, does it go to waste, or can we cure it? What's your guess? Who thinks recycle? Who thinks trash? Who thinks curate? I see, very many very experienced people. Very often you can just say: trash it, rebuild it, pick up the ideas, keep the good parts. And here's a sketch: a colleague of mine did an analysis of one of these lighthouse projects, of what was happening in the prototype and what was missing, and he drew this sketch. The analysis. There you go. So what's the best practice here? Are prototypes bad? Are lighthouse projects bad? Not at all. I think the best practice is: have good ideas, try things out, but know when to stop. Know when to develop further, when the right time is to move it to production level. And it's very important to tell everybody that moving a proof of concept or a prototype to production level is not easy.
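To make "built on the happy path" concrete, here is a tiny hedged sketch in Python. A prototype typically calls the database directly and assumes it always answers; production code has to expect failure. The injected `fetch` callable and the retry policy are illustrative assumptions, not anyone's actual project code.

```python
import time

def fetch_with_retry(query, fetch, retries=3, backoff=1.0):
    """What the prototype skips: the database is NOT always available,
    so wrap the call with retries and exponential backoff."""
    last_error = None
    for attempt in range(retries):
        try:
            return fetch(query)                 # the happy path
        except ConnectionError as exc:          # the path prototypes ignore
            last_error = exc
            time.sleep(backoff * 2 ** attempt)  # wait longer each attempt
    raise last_error
```

A prototype would just call `fetch(query)` and fall over the first time the service is down; and error handling is only one item, since monitoring, tests, and the paper trail are still missing on top of it.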
It takes time; it will probably take months to integrate everything at a production-ready level, because production needs to be stable, production needs to be monitored, production needs to be tested: unit tests, integration tests. And if we move something to production in finance, for example, there's also a paper trail for regulatory reasons. So it's not just "okay, let's take this notebook to production". It's very important to communicate this, so that everybody is aware at an early stage: try things out, but know when to stop, and when you want to move it to production, when you really want to get the extra value into the organization, know the measures to take. And as an enterprise, you also have to ask yourself: do we actually have the skills and the resources in-house? Because very often people have a different background: a student comes in, knows Python from university, but does the internship in a department of power Excel users. So look at the resources: what do you actually need? Do you want to hire people who can do this? Do you need external help? Do you want a hybrid: bring new people in, but also bring in external help, people with experience, to help build everything? And you have to involve all stakeholders. It's not just the one department; you basically need to include everybody who's interested in bringing extra value to the company. So this involves a lot of management as well. Building lighthouses is not a strategy; it's basically building a bunch of new silos, and it's very important to know when to stop. This brings me to the final questions we very often ask when we start a project, or when a project is in discussion with a client. We actually ask: do you have a strategy, or a bunch of ideas? Of course we don't ask it that way.
So basically we just rephrase it like this: is going data-driven part of your long-term strategy? What do you think happens if they say: no, we just want to try some stuff? We politely say: thank you, call us later, this is not what we're looking for. Because for us this is very important: the times of just playing around with data and AI, I say, are over. You need to move now or you are being left behind; the whole process has been underway for more than ten years. So for a good project it's really important to say: okay, we are serious, we have the budget, we have the means, we really want to get things moving, and everybody is prepared to ask: what do we need to make it happen? And this includes the technical updates, the technical change, but also culture change. The next question that comes to mind, once it's in the strategy, is: do people already think end to end, or just in single solutions? Because very often people think: I buy a software, I buy a solution; I need something, so I'm looking for software solving this problem. But if you really want to bring the best value to the table working with data, you need data, you need processes, you need software and tools, to make the most of it. So basically you need end-to-end thinking: where does the data start, and where is your customer? I think some companies have done this brilliantly in the past; I'm still impressed. I was really lucky to learn how end to end really works in 2003, when Apple introduced the iTunes Music Store, because they had end to end from the very start, when they launched a completely new service. It seemed natural that way, but if you look at how many companies struggle to update their systems, it's quite impressive.
They already had this; they thought that far ahead. You could also get data from the store about who purchased what, anonymized of course, and it was very accessible. And this is basically still the best way to think: end to end. Where does it start, where does it end, what's in the middle? You have to include external suppliers, you have to take your customers into the process, and everybody also has to work together data-wise. So of course the question here is: should data follow AI, or AI follow the data? What do you think? The second one, yes, correct. Because going data-driven requires a strategy, and the best way, if you don't have an overview of your data yet, is to build a data directory. Very often we start from a point where there's not much there, so we look: where is data? Very often people work with a lot of data and are not aware of it; they sometimes forget it's data, because they use it every day. You need to qualify which data is actually suitable. You need to see how you can prepare the data to have an AI-centric data approach, because it's easier if you can just pull data and have it ready for training, instead of always having to collect things with many queries and make it fit. Of course, you need to identify the suitable machine learning and AI technologies based on which data you have, and which data is accessible now or later. And very often the choice is: okay, where do we put value first? Based on that, you also need to question the infrastructure: what is already in place? Do we have everything? Are we living in a Windows world, and suddenly the AI tools and services require Linux, because they have different requirements? All of that needs to be defined: what infrastructure do we need? Because many people think data and AI is an IT project, and that's also something you sometimes have to point out.
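As a sketch of what such a data directory could look like in practice, here is a minimal Python version. The fields and the example entries are my own assumptions for illustration; a real directory would track whatever your organization needs.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    # Illustrative fields -- adapt to what your organization actually tracks.
    name: str
    owner: str                  # which department maintains it
    location: str               # system or path where it lives
    fmt: str                    # "SQL table", "CSV export", "PDF scans", ...
    machine_readable: bool      # can we pull it programmatically?
    training_ready: bool        # qualified as suitable for model training?

directory = [
    DataSource("orders", "Sales", "erp.orders", "SQL table", True, True),
    DataSource("invoices", "Accounting", "archive/scans", "PDF scans", False, False),
    DataSource("tickets", "Support", "helpdesk.export", "CSV export", True, True),
]

# Qualify: which sources could feed a model right now, and which need prep?
ready = [s.name for s in directory if s.training_ready]
needs_prep = [s.name for s in directory if not s.machine_readable]
```

Even this toy version makes the qualification step concrete: the sources you can pull and train on today, versus the scanned PDFs that need OCR and preparation first.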
So for us, the ideal data- and AI-ready reference architecture looks like this. Sorry, it's in German, but I think you get the idea. You have all your systems, and the data lake is just a concept: you don't need a literal data lake where you put your data; your data just needs to be accessible in an organized fashion. Then you can scale: when you want to do statistics, when you want to build machine learning models, when you want to introduce MLOps, or just get data for your BI tools. The important part is to have your data ready, so you can scale experiments and try new things. If you start from the other side, "oh, I have this idea, I always wanted to do something with natural language processing", then yes, of course you will find some text data and you can do an impressive project, but it's not really scalable; it's just another lighthouse, to come back to that. So I say the ability to experiment and to qualify is a key point of making data-driven happen. And don't forget: you cannot buy AI, you have to build it. This is the end of my talk. Do we have time for questions? No? Maybe? Yes. All answered, then. Thank you very much.