This is the Product Insider podcast. We are live on YouTube and LinkedIn. So let me ask you guys this: you can see us live, so please comment in the chat on both LinkedIn and YouTube so that we know you're here. Looks like we are live, so just say hi. What's your name? Where are you from? Today we have something very, very exciting, which is a talk about how to build a successful AI product from scratch, with a head of AI with many years of experience in the AI space. I'm very excited to have Dr. Eva Agapaki with us today. All right, so let's do this. Let me do a quick intro on what Product Insider is so that we can get started with today's talk. Hey guys, I'm Dr. Nancy. I'm passionate about all things product, career growth, and non-profit. I'm on a mission to help people create amazing products that impact millions of people's lives while getting the work-life balance they deserve. Welcome to Product Insider. Shy away from the real talk? No way. We cover everything from imposter syndrome to people management to embracing the leaders of tomorrow. We're joined by FAANG-level product leaders to get the lowdown on what's hot right now. And I'm your host, Dr. Nancy. I moved to the US with $800 in my pocket and became a director of product in four years. Now I run Product Manager Accelerator courses that make product management careers accessible to everyone. So are you ready? Welcome to Product Insider. Today we are going to cover a very hot topic in AI and demystify the AI product development process. Specifically, we'll cover how to build a successful AI product from scratch. Let me welcome our guest, Dr. Eva Agapaki. She has years of experience as a professor in the AI space, has shipped several products, and has served as head of product for AI companies as well. OK, so welcome, Eva. How are you? Oh, good. Thank you so much, Nancy, for introducing me.
And I'm so excited to be here with everyone today. I would be more than happy to talk about this topic that I'm so passionate about. This is awesome. So Eva, can you quickly introduce yourself to the audience? Yes. Hi, everyone. I'm Eva Agapaki, and I'm an experienced product leader, specifically in the AI space, with a PhD in computer science from MIT and the University of Cambridge. Let me tell you about my journey, how I started, and where I am today. Right now I'm the founder and CEO of Hatch Labs, a consulting and education service for AI product management based in New York City. I started with a civil engineering background, which is a very unusual path into product. I come from Greece originally, with an engineering education, and I was originally designing houses to be resistant to earthquakes. So I started my journey in California, then moved to the UK, then back to Boston with my PhD, becoming a professor in AI, and then leading companies and startups in this space as head of product. In my journey, there are a couple of milestones that were really important. First of all, my education in becoming a PM: Nancy's program, PMA, the Product Manager Accelerator, has been very important for me in understanding product strategy and everything you need to know from a product management perspective. So thank you, Nancy, for putting all of this together. Absolutely. And being part of this active community, which is really important. And then being in industry and corporate environments and crafting my path across a very diverse spectrum, from being a professor to leading AI initiatives. I can talk more specifically about the products we've built over the years, scaling teams, and so on. Awesome. So, Eva, let's start with this. Given your years of experience in AI, let's start from your past experience building AI products.
Can you give us an overview of the specific AI products you have launched in the past? We know that you were a professor, you went to MIT, and you started from a very interesting background, designing earthquake-resistant houses, and now you're doing AI. Tell us about the different kinds of AI products you have launched. Yeah. It's been a long journey with lots of products under my belt, but the two that are really pivotal and that I'm really proud of are, first of all, an AI cognitive service that I launched from scratch at PTC, a software company for manufacturing, and secondly, my first startup, Tana Twin, which revolutionized the space in terms of, again, 3D cognitive recognition services. So I can discuss those more specifically, if that's OK. Yeah, I would love to learn more. Tell us about the first one: how do you use AI in the product, what's the functionality, and what problem is the product trying to solve with AI? Yes. So all of this happened before AI became the buzzword that we see today in every domain and field. Four years ago, I participated in an innovation hackathon, and these are really important for companies that want to scale and accelerate AI development. I saw an opportunity in an industry that has been dominated by legacy software, most of it on-prem. Through a collaboration with Microsoft, I saw an opportunity to launch a similar service, but fine-tuned specifically for the manufacturing sector: how we can recognize parts when machines fail. This is a billion-dollar industry with lots of initiatives in the space.
So I started from a pure idea, did a lot of research on what was available in the landscape, which at that time was Azure Cognitive Services, found an opportunity, scaled a team, was awarded best innovation product in 2021, and then launched the product. That process is not easy, especially in a medium-to-large company. There are lots of phases, not just research but development, and scaling from a research POC, a proof of concept, to an actually scalable product that can sell. How long did it take you to launch the product from beginning to end? More than two years, I would say. And I still think that's a short time compared with other industries, right? We both got PhDs, and of course you went to MIT for yours. Best place ever, I'm very jealous. I remember the first time I had to decide whether to stay in academia or go work in corporate America. My PhD was in material science, and I asked my professor: how long does it take a product to go from concept to execution, from research all the way to someone actually using my research results? He said 15 years. Oh, my kid will go to high school, and 15 years later, your mom has an achievement: she created something being used by your classmates. The field of material science is crazy, but I still think two years is pretty fast compared with other industries, and that's for an end-to-end launch, right? When I launched my own AI product, which used machine vision to help cities reduce car crashes, the first MVP took only six months, but making it mature, with continuous improvement, the go-to-market strategy, all those different things, takes longer. I still think two years is a relatively good amount of time, at least much faster than in other industries. Yeah, that's true. For example, in the oil and gas industry, with my other product, Tana Twin, getting a product to market could take even 10 years.
Yeah, and there are a lot of aspects to that, and I'm sure it's the same in material science, because these are critical areas with data privacy and safety considerations. Especially with these AI products, it's not just that we launch and focus on the user experience and the go-to-market strategy. There are all these other aspects and industry-specific practices, legal aspects as well, that need to be in compliance before we are given the green light to actually launch. So it's a long process. Exactly. Tell us more about the second AI product. What problem is it trying to solve? So that one was originally developed during my PhD, with years of research at Cambridge, and then about four more years to actually go to market, and I'll explain some challenges towards the end about that. The problem was that these industries, oil and gas and heavy manufacturing, rely on drawings. Believe it or not, someone in these oil and gas facilities will not have an accurate understanding of what exists in their factory. Where is this valve that might crack or cause a major accident? We don't know. This knowledge lives only in the heads of the specialists and the field engineers, so if something happens, you can understand the major implications. There was a technology that had been developed, laser scanning, which is very specific to this industry, and I saw an opportunity there. During my PhD I developed a proof of concept that would take this data, which is very specialized, and build the 3D understanding. If you have an iPad, you can create a 3D visualization of your space. So I developed this POC to help factory managers see their space in real time and understand what's going on in there. How is it related to AI? How is AI used in this process?
Context understanding. For example, you have these large datasets of raw data that you cannot easily understand. I mean, you could, but it would take so much time to figure out where everything is in the 3D space. AI, in particular deep learning models for 3D data, can help you contextualize that information, so you know that you have valves and other specific objects in the space, and you automatically get a catalog of your inventory from an AI service that can do that. That was the product I developed for that industry. This is awesome. So right now it's being used by oil and gas companies, right? Is that the main industry? It was in early stages, but I'll discuss some challenges later; the market was actually not ready for it, and that is important to understand. Yeah, cool. Which leads to my next question about the end-to-end product management lifecycle for AI products and how it's different from traditional product management for tech products in general. So tell us more: what are the differences, and what is the end-to-end product management lifecycle for AI products nowadays? There are four main stages, if we can categorize them. The first one is the research and iteration phase, as I call it, which, like we discussed before, can take years, but of course in corporate environments we cannot afford years. This is where AI product managers come into the picture to help research teams accelerate product execution. And it's not only technical research: there's lots of customer discovery, legal compliance work, defining the product data collection strategies, and all the initial research into which tools we can use and what services are important to solve this industry's problem, this customer's problem.
Once all that is finished, we go to the pre-launch phase, where there will again be a lot of experimentation, validation of the concept, technical feasibility studies, user engagement studies, and one very important area that is particular to AI products: AI governance. There are lots of issues we need to consider in terms of ethics. In some industries these considerations are critical and will change people's lives if something goes wrong: responsibility, explainability, and the safety of the product. All of that is encapsulated in evaluating these models and setting expectations that are reasonable for customers. Once all that is finished, we go to the production phase, which is again an intensive process: the gold standard of A/B testing, user adoption, and another aspect which is important because not all organizations are ready for this AI change. AI PMs need to help with the organizational change. They need to educate the rest of the teams to set expectations and explain what this product launch is about, and also drive alignment between customer expectations, business goals, and the technical direction of the product. And lastly, it doesn't finish here: there are the post-launch considerations. Tracking performance, handling any issues with new model developments that happen, like we saw this week with OpenAI's DevDay, lots of new developments, so the product always needs to be up to speed, and this is an iterative process. I like an analogy for this whole process, these four steps: an hourglass model that I apply. What is that? Tell us more. Sure. You start broad, you cast your nets wide across all the opportunities, customer interviews, everything around the business feasibility and technical feasibility, then hone down, like an hourglass, into: this is the product we're developing.
And then again you go broad: how we can scale, grow, and sustain this product. That's how I like to simplify this process, which may seem long and which, like we said, can take years to actually flourish. So let me summarize a little bit, because we want to compare this with traditional product management, right? Lots of traditional product managers want to become AI product managers: people working for Uber, people from Google Ads. I think AI is way more sexy than selling ads to people, of course. They always ask, what new things do I need to learn for AI? From the product management lifecycle perspective, the key differences I heard are: in the first step, the research phase is more AI-focused, right? And the pre-launch phase, your second phase, has a lot to do with AI governance, the ethics, making sure it's working, that customers are going to adopt it willingly and ethically, and knowing how to really control the power of AI. Am I right that those are the main key differences? The other two phases, like go-to-market, are mostly the same, just more tailored to the GTM strategy for people who use AI, right? Yes, I would say this is a really good summary. It's really about honing in on the AI aspects in the early research stages and understanding all the technical complexities, understanding even the language, because as product managers we tend to speak the customer's language and think from the business perspective, but now, for the first time, I would say we need to really speak the technical language as well when we are embedded in these AI product development processes. Awesome. This leads to a very great insight.
So regarding the technical language, let's be specific. Years ago, back in 2016, before AI was hot, I was among the first working on AI, using machine vision to reduce car crashes. At the time, I learned so much about how to improve detection accuracy, the typical challenges people face, everything about machine vision, and eventually I saw very similar technology being used in the self-driving car industry, so I learned a lot about smart cities and self-driving cars in the same space. Now, people often ask, what else do I need to learn in the AI space? Sometimes they have PM experience, and sometimes people, even fresh graduates, have no experience at all but want to become AI PMs. I always tell them two things: first, you need to learn traditional product management, what product management is; and then you learn the technical knowledge of AI, which overlaps with what you said earlier about needing to be part of the AI development. So can you give people specific examples of the AI language, the technical AI concepts, that they can learn ahead of time in addition to product management, which we teach inside PM Accelerator? Yeah, go ahead. Yes. When we talk about AI these days, it's all about generative AI. Like you said, even 10 or 15 years ago, with what I talked about, machine vision, computer vision, reinforcement learning, someone who has worked, for example, on Netflix recommender systems knows the lifecycle of launching what is, again, an AI product. But now things are different with gen AI and its specific technical knowledge. I'll give some technical terms, like understanding how large language models work.
And this is an ongoing research area, by the way, because with OpenAI's models, Bard, all the Llama models and so on, it's still evolving. So: understanding the inner workings of LLMs and transformers, at least at a high level, and finding information online. That's why I started a newsletter, which I'm happy to share with everyone in the audience, where I explain all these technicalities in layman's terms, not from a research perspective with all the mathematical details. Also, how to set expectations for the outputs of these models, because you will hear terms like hallucinations. What does that mean? These models don't know what they don't know, and they will make up stuff, which is crazy, and they sound so confident, right? There was actually a 60 Minutes interview about hallucination where even the Google CEO talked about Bard describing something that never existed, but sounding so confident that it seemed real, and it fooled the journalists during the live interview. The AI output looks so impressive in the moment, and then two days later, when you actually verify what the AI told you, it turns out pieces of that information never existed. Those are the problems we are facing in the AI space. Exactly. And actually I have two more examples. One is about AI governance and ethics, because AI-generated content needs to be what we call watermarked. There will be lots of problems, like who is responsible if the content is not copyrighted, for example, so we need to set guidelines there. This is not fully in place yet, but there are government initiatives: we saw the executive order from the White House, with national labs responsible for setting guardrails, specifically watermarking strategies.
The same with the AI Safety Summit that happened in the UK last week. This is important and will keep evolving, and there will be lawyers responsible for reviewing these things, working together with PMs to understand when we are ready and safe to actually launch these products, because we cannot just go live, especially in large organizations. With a startup it's different, but even there, we now see a lot of AI law coming up and a lot of initiatives in that space. So that's one aspect, watermarking. The other aspect is fine-tuning. This is something we hear a lot these days: understanding how user feedback can help these models improve. This is the key to unlocking opportunities, applications, and products in lots of industries that need them. We can talk about ChatGPT and the GPT models as the generalists, but then we have the more specialized models for industry-specific use cases: for healthcare, for legal compliance, for policymakers. We need these specialists, and fine-tuning enables them; there's a lot of content around RLHF, reinforcement learning from human feedback, where the feedback we get from users helps these models actually work for specialized use cases. Awesome. So if anyone wants to break into AI PM, what key technical concepts should they learn right away? Prompting and modeling, I would say, right? But AI governance, I think, is a very generic term; it's not technical, right? Not technical, but it's about understanding the laws, keeping up to date, complying with what is safe to develop, and knowing the strategies we need to be aware of before launching a product. So it's important to have everything summarized and, of course, to keep up to date with the latest news, because this will keep evolving.
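To make the fine-tuning discussion concrete, here is a minimal Python sketch of how thumbs-style user ratings could be turned into the pairwise preference records that RLHF-style fine-tuning pipelines typically consume. The field names (`prompt`, `rating`, `chosen`, `rejected`) are purely illustrative, not any specific vendor's schema:

```python
def to_preference_pairs(events):
    """events: [{'prompt': str, 'response': str, 'rating': int}] with rating
    on a 1-5 scale. Pairs the best- and worst-rated response per prompt into
    a (chosen, rejected) preference record, the shape used for reward-model
    training in RLHF-style pipelines."""
    by_prompt = {}
    for e in events:
        by_prompt.setdefault(e["prompt"], []).append(e)
    pairs = []
    for prompt, es in by_prompt.items():
        if len(es) < 2:
            continue  # need at least two responses to form a preference
        es.sort(key=lambda e: e["rating"])
        worst, best = es[0], es[-1]
        if best["rating"] > worst["rating"]:
            pairs.append({"prompt": prompt,
                          "chosen": best["response"],
                          "rejected": worst["response"]})
    return pairs

# Toy feedback log: two rated answers for one prompt, one for another.
events = [
    {"prompt": "p1", "response": "a", "rating": 5},
    {"prompt": "p1", "response": "b", "rating": 2},
    {"prompt": "p2", "response": "c", "rating": 4},
]
print(to_preference_pairs(events))  # [{'prompt': 'p1', 'chosen': 'a', 'rejected': 'b'}]
```

The point of the sketch is the data shape: real pipelines add deduplication, rater-quality weighting, and much larger volumes, but the chosen/rejected pairing is the core idea.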
Yeah, so at a high level, I recommend that everybody tuning into the show right now start by taking AI 101 courses, large language model or natural language processing courses first, and then keep getting updated with industry knowledge. In the description of this video and of our live podcast, we have a download of the top 10 recommended technical courses for learning the basics of AI, different kinds of technical 101 courses. Each takes about an hour, and you can go to the link in the description to learn more later on. We'll also add it to the show notes. Now let's continue our AI journey. Everyone can see the trend in the industry if you look at the job descriptions online. It's crazy: for example, it's public news that Netflix is hiring AI PMs paying up to $900,000 per year. And if you look at the roadmaps of many different large companies, everyone is hiring AI product managers or heads of AI. For example, at Cisco, we just discovered yesterday with my students that the chief strategy officer has a brand-new senior AI product manager role, helping him create the AI roadmap and strategy and figure out what kind of products they can build. This leads to a great question: now everyone wants to do AI, but how do they know? How do they determine whether AI is the best solution to address a specific pain point and problem? Is there a specific framework or guideline product managers can use in this scenario? Yeah, that's a great question, and it's actually the reason that I started Hatch Labs. That's the main motivation and the opportunity that I saw: there is so much going on on the technology side.
Everything that has happened with LLMs and the developments of recent days is a testament to how quickly research scientists can come up with a solution that actually works, or, in general terms, put guardrails and success metrics in place and solve some use cases. But what I haven't seen is a tested framework that can actually take products from an idea to launch in this space and make them successful. I haven't seen that yet, and I know from all the conversations I'm having with product leaders in this space that even in big companies it's an evolving area. That's one of the things I'm working on with Hatch Labs: standardizing a framework that can actually help accelerate this process. The way... Tell us your preliminary findings right now! Everyone wants to add AI to their brand. It's so funny, everyone's like, oh, we have AI, we have AI. Take Notion. Notion may go public; they're always saying they'll go public, I don't know when. Someone probably said, maybe we can add AI to our brand, so they become Notion AI, so maybe they can go public faster or sell at a higher price. And then when I use it, it's just a little more organization of your notes, and now they're an AI company. So how do you know whether AI really solves the problem or not? Give us some preliminary findings and ideas. Exactly. What I see as the main problem is that right now everyone is rushing to implement AI just because it's cool and will generate more sales. It's an area that works for salespeople to sell products faster and generate some traction. But it should be the reverse: what are the data points that we have, and what do the customers need to actually solve their problem? Is it an AI solution? Maybe it is, maybe it's not. It's not like AI will solve everything and everybody else's products will be out of the market right away. It doesn't work like that.
It shouldn't be that we launch products just because AI became so cool and everybody is adding AI to everything. That is not the right way. What I've seen with my early research data points so far is the importance of ingesting and integrating customer feedback more directly into the product roadmap. What do I mean by that? I created this generative AI flywheel, where we have data, we have AI models, and we have user feedback. This flywheel needs to be like the North Star. Every time we talk about these products: do the requirements for data meet the model considerations? Is user feedback giving us the right insights to actually proceed with this product? If these three are not in alignment, that means we have a problem, and we need to reconsider our strategy, go back, and see how we can change the process, or understand whether other tools could solve the problem. If one of these three is not on par with what we want to develop from a business perspective, then there is a problem, because these products are very dynamic and very user-intensive; the users are part of the product. We cannot neglect the data, especially for these models. And we can discuss a bit more about data strategies and the AI model lifecycle, particularly for LLMs, and how everything will change the landscape. So: making this process seamless and easy to follow, and talking with everyone in this cross-disciplinary landscape of launching these products. Awesome, so let me dive a little deeper. Basically three things: number one, the large language model, and you need to be able to construct this model. Second, you need to be able to create or, well, source, "create data" sounds bad, source enough data to train this model: build the model first and have enough data. And then you have the customer pain point. All three need to exist, right? So now let's dive deeper into the data. How much data do you think you need?
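The three-part flywheel Eva describes can be sketched as a simple go/no-go gate. This is just an illustration of the alignment idea, not an actual Hatch Labs tool; the readiness scores and the 0.7 bar are invented for the example:

```python
def flywheel_check(signals, bar=0.7):
    """signals: {'data': 0..1, 'model': 0..1, 'feedback': 0..1} hypothetical
    readiness scores for the three flywheel components. Returns (ready,
    components to revisit): if any component misses the bar, the strategy
    needs to be reconsidered before proceeding, per the flywheel idea."""
    blockers = [name for name, score in signals.items() if score < bar]
    return (not blockers, blockers)

# Data and model look healthy, but user feedback is weak -> not ready.
ready, blockers = flywheel_check({"data": 0.9, "model": 0.8, "feedback": 0.4})
print(ready, blockers)  # False ['feedback']
```

The value of framing it this way is that the weakest component, not the average, gates the decision: one misaligned leg of the flywheel is enough to send the team back to strategy.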
That is, enough to train your large language model, especially now that we already have very successful ones that OpenAI provides as the foundation for everyone to use. So how much more? Maybe there are use cases or examples you can give us. Mm-hmm, yeah. There are no golden standards developed yet in this space, but the example I can give, for GPT-3, is that the number of training examples was about 60 times the number of parameters, and GPT-3 had 175 billion parameters, so you can imagine how much data that was. This is just an illustrative example; there's not an industry standard developed just yet. Whereas for other machine learning models, for other types of problems, the rule of thumb is usually about 10 times more data than model parameters. We usually compare against the model's parameters and make sure that we always have more data than parameters. Gotcha, so it sounds like a more dynamic testing process, right? It has to be more than the model parameters: you build a model first, you have more data than the parameter count, but how much more, 10 times, 100 times, 1,000 times? Of course the more the better, but in reality it's very costly. Exactly. Yeah, and we actually have many heads of AI inside the PM Accelerator community. Another student of mine is a senior director at an AI company, mainly helping shops and e-commerce companies reduce churn, using AI to predict which customers are going to churn quickly, things like that. She brought up a very important concept: data is the fuel of AI. That's the fuel everyone will be fighting for, because you can build a model, you can have a smart PhD from MIT, thank you, and build a model, but you don't have enough data, right?
So data becomes the fuel, and you just keep growing. But in real life, let's say you do want to build, using her case, a model to predict e-commerce customer churn. You build your original model, say it has 200 parameters. Do I start with 10 times that amount of data and test how good my model is? How do you even know when to stop and when to continue collecting more data to train your model? So in research papers there are lots of curves related to that: performance versus data size, and performance in terms of training data size and parameters. You would start with a smaller dataset, because that has been the recipe for ML pipelines in the past, and you plot, using some technical terms, your loss function, which tells you how successful your model is in terms of performance during training. Then you check your data size and your model's parameters, you continue the process by adding more data, and you see how your model's performance is impacted. Once you create these data points, 2D plots for example, which data engineers would normally produce, then as a PM you collaborate with the engineers by asking the right questions: why are we doing this? Why do we need that data? How do we generate more data? Do we need to use synthetic data? Do we use user feedback to collect more data? So there are lots of data strategies, and lots of data labeling companies that help orchestrate and streamline the process. Yeah, years ago in 2016, a long time ago, with one of the first AI products, I remember there were companies like Mighty AI that generate labeled data for you to train on. Even right now, we have students working for self-driving car companies, and they have suppliers of simulated driving environments to train self-driving cars.
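The learning-curve process Eva describes, add data, re-measure the loss, decide whether to keep collecting, can be sketched as a toy stopping rule. The curve values and the 2% improvement threshold below are invented purely for illustration:

```python
def still_improving(curve, rel_threshold=0.02):
    """curve: [(dataset_size, validation_loss), ...] sorted by size, e.g. one
    point per doubling of data. Returns True if the most recent data increase
    still cut the loss by more than rel_threshold (keep collecting data),
    False once returns have diminished."""
    if len(curve) < 2:
        return True  # not enough evidence yet: keep collecting
    (_, prev_loss), (_, last_loss) = curve[-2], curve[-1]
    improvement = (prev_loss - last_loss) / prev_loss
    return improvement > rel_threshold

# Toy learning curve: loss shrinks quickly at first, then flattens out.
curve = [(1_000, 0.90), (2_000, 0.70), (4_000, 0.55), (8_000, 0.54)]
print(still_improving(curve[:3]))  # True  -> the last doubling still helped a lot
print(still_improving(curve))      # False -> diminishing returns, stop collecting
```

In practice teams plot the whole curve and fit a power law rather than comparing only the last two points, but the PM-level question is exactly this one: did the last batch of data buy enough accuracy to justify the next batch's cost?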
Self-driving cars, of course, need many real-life driving scenarios on the road, but a self-driving car cannot go out and endanger lots of people to prove it's safe, right? So they have all these simulated driving experiences and environments for self-driving cars to drive in. That's how they collect and create data right now. Another tip for everybody listening in: years ago, when I created my first smart cities product, where we used machine vision to reduce car crashes, the way we collected data was very funny. The product works like this: you take a video stream and run AI on it in real time to detect when cars show dangerous behaviors, such as near misses where a car almost hits a pedestrian, running a red light, speeding, all these different scenarios, so you can capture them. You run AI to understand the data, make predictions, and send alerts to police officers before car crashes happen. The way we did it at the time is so funny: we just recorded the street. Almost every street has a police camera, right? We recorded everything from the police cameras, and then we hired interns from Europe to mark, to label, all the activities on the street, and we fed those labels back into our training pipeline to continuously improve our detection accuracy. Our North Star metric for deciding whether we needed even more data, 100 times or 1,000 times more, was accuracy, because, same as with preventing car crashes in smart cities, you can't wait for many real car crashes to train on, right? Eventually what we decided to do is look at whether the detection accuracy is high or low against an industry standard, let's say 95%.
Well, we were only at 85% regardless of how much data we had at the moment, which meant we needed more than what we had before to even hit that basic industry standard of 95% detection accuracy for car crashes, or a car almost hitting a pedestrian, those kinds of object detections. That's how, at the time, we used North Star metrics to drive data decisions. And right now I think people use different methodologies based on the scenario, but you're right, you constantly use new data to keep training and get better, to the next level. Yeah, and you made a really important point there about thresholds and the North Star metric for the quantitative performance of these models, which for LLMs is still ongoing research. Whereas in the past, for object detection, for example, or computer vision tasks like image segmentation, there was so much established work, but it took decades. A fun fact from my own experience: I had to hire a research team to help me label all these complex point clouds, which took almost a year, while the model development itself was done in less than a year. So it's really important to understand this, and also to define which metrics to track, because there are so many of them. That's something to really understand: which are the right metrics, which are the ones that drive the business value for the product, and so on. Yeah, exactly. Awesome. We used the same methodology. Very labor-intensive: we hired some interns to label cars, pedestrians, and car crashes from police cameras. That works. So now let's talk more about collaboration, team collaboration in AI product management, right?
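The decision rule described above, comparing measured accuracy against an industry-standard bar to decide whether to keep collecting and labeling data, can be sketched roughly like this. The 95% bar, the function names, and the toy evaluation numbers are illustrative assumptions, not real figures from the product discussed.

```python
# Rough sketch: use a North Star accuracy metric to gate data collection.
# predictions/labels would come from running the detector on a held-out,
# human-labeled evaluation set (e.g. the intern-labeled street footage).

INDUSTRY_STANDARD = 0.95  # assumed bar for detection accuracy

def accuracy(predictions, labels):
    """Fraction of evaluation examples the detector got right."""
    assert len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def data_decision(predictions, labels, bar=INDUSTRY_STANDARD):
    """Return the metric plus the resulting data-strategy decision."""
    acc = accuracy(predictions, labels)
    if acc >= bar:
        return acc, "ship: metric meets the industry standard"
    return acc, "collect and label more data, then retrain"

# Toy evaluation: 17 of 20 near-miss clips detected correctly -> 85%.
labels = [1] * 20
preds = [1] * 17 + [0] * 3
acc, decision = data_decision(preds, labels)
print(f"accuracy={acc:.0%} -> {decision}")
# accuracy=85% -> collect and label more data, then retrain
```

In practice the evaluation set, not just the metric, is the hard part: the labeled clips have to cover the rare, dangerous scenarios the product exists to catch.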
So can you give an example: as an AI product manager, how do we work with our entire team, such as data scientists, ML engineers, data engineers, all these different people? I actually made a YouTube video about this, a day in the life as a product manager; everyone can check it out. In it I cover how I developed my AI product, and how, hey, I have my own data science team, my own machine learning team, different things. I think that's a new construct of collaboration in the world of product management. So can you tell us more? Give us some examples. What does it look like if you're an AI product manager? Who do you work with? Tell us more. Yes, that's a really important question. So again, because these products are so research-driven right now, the role of the AI PM as I see it is that of a translator between the engineers, who have the technical expertise and the hands-on execution of the product, and the research scientists' vision, because they're so visionary, they really want to have a great impact, and they have all these bold ideas and new models and all of that. So you really need to bridge the gap and translate the engineers' needs against the research scientists' vision, and of course bring that in line with the business goals. From my personal experience, the way I've done that is a three-step process. First, partner with the ML engineers and the data engineers, because after developing our PRD and aligning on the business goals, we need to translate that business value into a specific and concrete engineering outcome, so that these teams can tell me, for example, I need a different model because this one doesn't work, it doesn't quantitatively achieve the North Star metric that I set for product delivery.
For example, I had some projects doing bridge inspections in Florida, and my team had some different ideas about what data we needed, and the models were not the right fit. So I had to collaborate with the team and understand what was really needed, so that I could then talk to the customers and adjust the requirements based on what was technically feasible and what they needed in terms of understanding damage in a bridge, because the customer will say, yeah, you can develop an AI model that can find all my damage types in any type of environment. Of course, we need to set expectations. So that's the first step: partnering with the data engineers and the ML engineers. Once that's defined, we go on to the research partnership, which is the second step. It's really important, and this is where the main difference between a traditional PM and an AI PM lies: you need to be deeply involved as an AI PM in the process, asking the right questions, understanding the findings of the research scientists, and being able to align their expectations and their experiments with product milestones. And this varies; of course, there are huge research teams at frontier companies. What I've seen in general from my discussions with them is that research teams are quite independent and really want to take the lead. So PMs really need to be more educated on the technical side, to be on par with the technical expertise of the researchers. And lastly, the orchestration: this is where the magic happens. Once all these discussions and alignments are achieved, how can we actually synchronize that business value and carry it into the development phases of the product?
And like I said in my example, the bridge engineers really wanted to solve every type of damage, to capture in their inspection process every type of damage they would see on a manual inspection, to have everything automated. But that's not how we can expect the product to work, especially in an early phase, in the pilot or beta phase of the product. So we need to set expectations, understand the technical complexities, and then finally orchestrate the deliveries so that our customers are happy. All right, Ava, now you're making people scared about becoming an AI PM, because that got really technical. So tell us, in real life, do people need to know how to code to become an AI PM, or do they really need to be data scientists to become AI PMs? What's the reality? I have my personal opinion; I want to save it until after you give yours, okay? So tell us, demystify it a little bit. A lot of people are just scared: oh gosh, it's so terrible. But I still think there are many ways to become an AI PM even if you're not a data scientist and don't know how to code. So tell us more, what's your opinion? Yeah, so my personal opinion is that there are many different types of AI PMs. There are roles that are deeply technical, where I think someone with a technical background and hands-on experience, as a software engineer, data scientist, or ML researcher or engineer, can actually have an edge in understanding the technical complexity. Of course they need to develop their business skills, but they will have an edge on the technical side of the picture. On the other side, I think there are ways to learn these skills.
That said, I wouldn't say don't even touch any type of coding. There are so many tools nowadays; even with ChatGPT you can create little code snippets to understand some basics. Because we are in the software lifecycle, I would say at least get some basic coding experience, because it will be harder to be fully engaged in conversations if someone has absolutely zero coding experience. But this is something you can learn by taking, like you said, some courses online, or by using the tools we have these days to augment the learning of these skills. Because AI PMs, in my opinion, really need to master both technical and business skills, and this is the difference from the traditional PMs we had in the past. On the other hand, if someone just wants to keep growing on the growth or marketing side of PM roles, which will still be AI PM roles in the future, deep technical understanding is not so needed there; just high-level expertise in that case. So it really depends on the role someone wants to take on their career path. But as I said, there are ways to augment the skill set in order to enter these fields, and some of it is a must-have. Yeah, so let me add something to that, based on observing hundreds of people become product managers, many of whom became heads of product at AI companies. We literally have someone with a PR and marketing background who became head of product at an AI company. She literally doesn't know how to code, for sure. But she took a class about AI. So I believe here's what's going on. For example, my PhD was in materials science. I don't know how to code; I just know how to mix chemistries together. That's my PhD.
And to this day, I still don't know how to code, but I've launched AI products, cloud products, edge computing products, very technical products. I actually hurt my brain launching those products because they were so technical, and I learned lots of things. But here's the demystified process of becoming an AI product manager: it's about how you communicate with engineers, how you ask smart questions, and how you identify when the engineers have set up the wrong technical parameters, whether they need to retrain the model, whether the size of the dataset is big enough. If so, we move to the next step; if not, where can we find data? Those are the kinds of technical conversations, and knowing how to make decisions. And for that, I believe you don't need to be someone who actually writes the code, but you need to be able to ask the right questions and make technical decisions. You can make technical decisions like myself: to this day I have no idea how to code, but I'm very good at making technical decisions, and sometimes calling out the BS from some of the engineers, like, no, you told me it takes a year to do this; trust me, I can prove to you it takes three months. And then my engineers have full respect, like, yeah, you're right, this can be faster. Yeah, cool, let me send you these resources. It's more about how much you learn so that you're able to make technical decisions. And we have different kinds of AI 101 classes and data model classes; I can include them in our free resources, which we can send out later on. And actually tonight we have another private alumni panel, where someone who became head of product at an AI company is teaching people how to land a head of product job at an AI company. So there are different ways to make it happen, but I don't want people to feel discouraged, saying, oh, I don't know how to code.
I cannot become one. It's more: are you smart enough to ask smart questions? I think that's the key. And willing to learn. Exactly, yeah, exactly. And it also depends on the product you're launching; for example, if it's a deeply technical product, like launching the next LLM at OpenAI, then that's a different story in terms of how technical you need to be. Yeah. For that, they need a bunch of PhDs from MIT, and then give them an MBA class, and then they become AI PMs, something like that. Cool. So now let's switch angles to AI tools. We talked about different kinds of AI tools earlier. What are some of the AI tools you use as a product manager in day-to-day life, or that you recommend other people use? What impact have they made on the product management space? Mm-hmm. Yeah. So I already mentioned one of them. I use ChatGPT literally every day, and it has indeed improved my productivity in many aspects, acting as my assistant when I generate content. And then of course there need to be a lot of edits afterwards to stay authentic, because I'm sure everyone has seen, I mean, it's great content, but you want your own authenticity. Style. Exactly. Your own style. In terms of PM tools, what I've used recently is Jira Product Discovery. I started using it when it was still in beta, and I loved it for gathering product requirements, assessing early-stage products and solutions, and generating all these pre-made templates that really made my life easier: templates for user requirements, retrospectives, feature launches, and all of that. So I really recommend it. And then for meetings, because I'm sure everybody as a PM has lots of meetings, what I really find important are these AI assistants you can have in meetings.
Of course, you need to ask consent from the participants, especially if it's a customer, when gathering notes with these AI assistants. And I've seen a real improvement in the kinds of notes that are collected, like summarization. I was actually mentoring a startup in that space last year as well, helping launch a product there. Read AI is one that I've seen recently and want to start using; it collects and summarizes notes, and gives insights, sentiment analysis, and lots of things like that. And lastly, design tools. I use Adobe Firefly a lot, and they actually watermark, they include copyright information inside the images generated with their tools. And Canva is another tool I use for design. So are you saying that Adobe now has a tool to create images for you using AI? Yes, and it's really nice. You specify how you want the image, what style you want, you just write a prompt, and it's called Adobe Firefly, and it creates lots of images for you, and you can refine them as well. This is very cool. I'll definitely try out these design tools, because I'm not good at design at all, unlike the other tools we already use. If something can help you at least speed up your learning process, that's great. Cool. Now let me ask you this question, the last question. What are some of the challenges you face as a product manager when launching AI products nowadays? There are lots, but I'll summarize my top three.
So the first one is actually what I was saying about my startup and the product I launched out of research. This is a challenge I've seen: our potential customers being super excited and thrilled about the technology, this wow effect, as I call it, of these super cool technologies, but then the market is not actually ready for it, for lots of different reasons. I've seen that in other startups as well: not understanding the market well and early on, spending all of their resources on technical product development, even with customer interviews and all that, but missing the insight that the market and those end customers are not ready to adopt the solution. So that's a very important challenge to understand, and you need a strategy there to avoid misalignment on AI product-market fit. Yeah. This also speaks to something that resonated with me from a different market a year ago: the crypto market, right? Now the crypto market has crashed. Years ago, people tried to add everything to crypto because it sounded cool, but I didn't see customers ready to adopt it, right? Now, in this AI era, I think AI is more legit compared to crypto, given the stage we're at right now, but it's a similar pattern. You cannot just add AI for the sake of AI when your customers aren't ready. And I don't think a customer will come in and say, what's my pain point right now? Oh, I want to use AI. I don't think they will say that in a customer interview. They will tell you their true pain point, like, a mom has these ABCD challenges, and then you can think about whether AI is going to solve problems for that mom. A mom will never tell you, my pain point is there's not enough AI in my life. Same thing years ago: not enough crypto in my life, so let's use crypto. It doesn't work that way.
But I'm still a big fan of Web3; we can talk about that another time. The point is that people tried to make money in the crypto space just for the sake of starting something; the incentive was different. I hope people do not make the same mistake when launching AI products nowadays. Exactly. Yeah, that's super important, because everyone, like we said earlier, gets super excited when they hear the buzzword: we're launching this super AI product, are you interested? Well, that's not a good sign. Definitely not. Exactly. So step away if you hear that. Yeah. Awesome. So let me ask you a final question as we wrap up this talk. What's your opinion on the growth mindset, meaning investing in yourself and growing? I'm a big believer in having a growth mindset in your career, but people define growth mindset differently. So tell me, what do you think a growth mindset is, how important is it, and whether and how should people invest in themselves? What do you think? I love that question. So I'm really fond of learning as a lifelong skill, and of really investing and pivoting when necessary toward things you're truly passionate about, not feeling trapped, like, this is not for me. Like we discussed earlier, someone who has never coded before sees that AI PM work is really great and says, I want to try it out. And yes, invest in yourself; you don't need to go back to school and get a PhD. Well, unless you really want to, and that's also an option. Four years, guys. A four-year investment. Yeah. But it depends, and this growth mindset changes across someone's career and life, with lots of different internal and external factors affecting it. But investing in yourself is something no one can give you.
I mean, it's something that will pay off in the long term: really positioning yourself and honing in on what you love and what you're passionate about. There are so many resources nowadays. So many. It's just a matter of picking the right ones, really showing up and squeezing, as we say, the value out of them, and positioning yourself for how you want to see yourself in the next couple of years, or decades. Yeah. Awesome. I love how you describe this as lifelong learning; it's a journey everyone needs to embrace, regardless of your learning methodology. You can get a PhD, you can take some AI classes; we also have AI product management classes coming up, and there are many opportunities for people to learn. But continuous learning as a lifelong learner is such an important tip, and also a good summary for today's podcast. Awesome. Thank you so much, Dr. Ava Agapaki. Yeah. Thank you for joining us, and everybody else who joined us today. If you liked the insights we shared, make sure to give a like on our YouTube Live and LinkedIn Live, and once we post this podcast on Spotify and Apple Podcasts, make sure to leave a five-star review, because we spend lots of time creating amazing content like this to educate everyone on the hottest trends in the tech industry, and especially today with AI. Awesome. So, another quick announcement: anybody who is part of our Inner Circle, you are going to have a private Q&A with Ava right after this public event. In your registration email this morning, you will have already received the Zoom link where you can join us for the private Q&A, to ask even more in-depth questions specific to you. Okay.
So everybody who has the link, join us on Zoom. And for everyone who doesn't know what Inner Circle is, go to my website, darknancy.com; it's linked everywhere, on LinkedIn, everywhere. One of the dropdowns is called Inner Circle. Click there and you'll see lots of information. You can join Inner Circle, it is free, so check it out, and also join the private discussion with Ava right after this. Awesome. Okay. Great. Thank you for joining us, Ava. Thank you. Nice meeting you. Yes. Thank you everyone for joining. Thank you, Nancy. It was great. So great to have you. Awesome.