Nathan. It's nice to be here. So we're going to talk about AI. I would say it's the biggest technological shift since the internet, and we're at the stage right now where mature technologies cross-fertilize. But AI is the underlying force. As economists, when we study technological paradigm shifts, we always want to identify what it is whose cost has been drastically reduced. Historically it was the cost of production, then transportation, then information, and now prediction. That creates enormous value for individuals, society, and organizations. We know that. Yet there are challenges, and that's why my initiative and research is on AI sustainability: long-term gains, avoiding short-term pitfalls.

So AI sustainability is a purpose-driven approach to AI. It's based on Nordic values. We're not saying we're the best at ethics, but we have a history of putting ethics and values at the top of the agenda and combining them with sustainable business models. It's a form of updated CSR for this data-driven AI era. It also means we have to demystify AI. We need to govern and measure how it scales in a broader ethical context.

So I want to talk about why AI is different. Why is it so hard for CEOs to govern AI? Why are AI skills among boards so low? It is different because decisions are reproduced by self-learning algorithms on constantly refreshed data, and in most cases they're targeted towards goals such as profit and increased sales. Because when it's applied, the profits are seductive, and I want to hear your examples on this, Jonas, later. The profits are seductive, but the ethical pitfalls are hard to detect, and the regulatory framework is lagging; there's a regulatory blind spot here. We see China and the US investing in just one side of the coin: the engineering side of AI. I say we also need to build skills and competence in humanistic AI, that is, in how it scales and affects us. So Jonas, do you want to give your perspective on how AI is different?

Well, I totally agree with you. I think machine learning is fundamentally changing society as we see it. For me, it's one of the biggest revolutions since the internet. It will dramatically change the way we think about delivering customer value and customer products. And it's so seductive because it scales so fast. I think that's the challenge when you come at it from an ethical perspective: it's very hard for top management, executives, and boards to understand the scaling and how fast this scales. So what started as a very seductive, beautiful way of earning more money really gets put into perspective: how do you code in your values? How do you do that? Because if you code things, you need to have that line of code. Before, you could have a gut feeling: this feels right, this feels wrong. But if we're sitting in the basement coding it, we need to say, okay, what are the values? I need that code line.

That's like an ethical lens, right?

Yes. And that, of course, is difficult. But I take the example of the Chinese emperor and the grains of rice: the peasant who asked for just one grain of rice on the first square of the chessboard, then two, then four. If you put that sum together, that's more rice than China would produce in nine years.
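To put a number on the chessboard story, here is a quick sketch in Python; the per-grain weight is an assumed round figure, not a measured one:

```python
# Doubling one grain of rice per chessboard square: 1, 2, 4, ..., 2**63.
total_grains = sum(2**square for square in range(64))  # = 2**64 - 1

# Assume roughly 0.025 g per grain of rice (an illustrative round figure).
GRAMS_PER_GRAIN = 0.025
total_tonnes = total_grains * GRAMS_PER_GRAIN / 1_000_000  # grams -> tonnes

print(f"{total_grains:,} grains")            # 18,446,744,073,709,551,615
print(f"roughly {total_tonnes:.2e} tonnes")  # on the order of 10**11 tonnes
```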
Because it's so hard for an individual to understand exponential growth. And that is the same thing I see when we're building propensity models and doing supply chain work. It dramatically changes the way we do things, and people are caught a bit off guard. We just had a meeting today where the first reaction was: can we get rid of the data? No one wants to touch it. Oh no, what if we go down this path. But it's seductive, and companies need to make money. But you also maybe need to backtrack: how do you code your values?

Yeah, that's the ethical lens. Nathan, what's your perspective on how AI is different?

Yeah, I would generally agree with what's been said. I think the nuance is: how do you properly express human values in code, especially if you have a very complex value chain? These models are often trying to learn very complex representations of data where, as humans, we're not necessarily sure what we're looking for. So how do you properly express that in a function that a machine can optimize? I think this is best displayed in the games world, where you can use a function that says doing well in this particular game just means scoring very highly. But then you observe the behaviors the agent has learned in that environment in order to optimize the score, and it often displays things that humans wouldn't really do. That toy example shows you that encapsulating the value function is very important. We have to study how best to express it, or how to build rails, build safeguards around these systems that create bounds within which they can make certain decisions or not.

Because some critics argue that AI is an experiment where you don't bother to look at the result; you just look at the targeted goal variable. And I think in academia, as well as in organizations, we work in silos. You have the engineers, but then you have to educate the engineers: how do you detect gender bias? I believe the core of AI ethics and AI sustainability should lie in the engineering schools, because they're the ones who are going to solve the problems; they're the ones who are going to have algorithms to detect biases. But they need to cooperate in a multidisciplinary way with the gender researchers and with the academics in the law schools. We really need to get away from the silos, in academia as well as in organizations.

Yeah. I think the general view is to regard building AI systems as not a purely engineering problem. Building the system itself might be finding the best variables and the best model and the best data, and so on. But these systems are interacting with people, and the most exciting part of building AI is that it's starting to impact very consequential decisions in people's lives, like health and financial products and logistics. So it's about making sure the teams are working on more than just benchmarks, which academia has largely been built around: what is your performance on ImageNet? For a long time that was all people cared about in computer vision. If we start to change those benchmarks, that's a forcing function on where research ends up flowing.
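As a minimal illustration of the kind of bias check the engineers could run, and of evaluating on more than one headline number, here is a sketch that audits a hypothetical model for demographic parity alongside accuracy. All of the data is made up:

```python
# Hypothetical audit: check a model's positive-prediction rate per group
# (demographic parity) alongside overall accuracy. All data is made up.
records = [
    # (group, true_label, model_prediction)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

accuracy = sum(y == p for _, y, p in records) / len(records)
print(f"overall accuracy: {accuracy:.2f}")  # 0.75

for group in sorted({g for g, _, _ in records}):
    preds = [p for g, _, p in records if g == group]
    rate = sum(preds) / len(preds)
    print(f"group {group}: positive-prediction rate {rate:.2f}")
# A: 0.75 vs. B: 0.25. A gap like this is a red flag worth investigating
# even when the headline accuracy looks fine.
```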
Yeah, I would even take it a step further. I think AI may fundamentally change your business model and your customer offering. So everyone out there needs to look themselves in the mirror and ask: if we implement this, will it change our customer offering? If it changes your customer offering, then you're either enhancing that offering or not, but it puts you in a different place. For me, that's a very strategic move, and then it needs high governance at the board and top-executive level, because you're actually moving to a new product, and that needs to include all parts of the company. Until now you've been able to drive business by doing this in the basement, coding it in silos, because you had a very clear business model. But if you conclude that this will change your business model, then you also need to involve many more people to rethink what you're serving.

I think you're spot on, and it reflects the discussion on many stages here: from the beginning, you need to decide where you want to be on the scale between maximizing your profits and being super ethical, right? Because you want to drive profits. But I want to talk about the pitfalls, because the pitfalls are often unintended. There is discrimination, manipulation, fake news, privacy-intrusive applications, faulty recommendations, and so on. In my research I try to frame them as four groups of unintended pitfalls. The first is misuse of data: even if you are GDPR compliant, your data can be used, or AI can create intelligence, in ways that are privacy intrusive, and I think we have a false sense of security today due to GDPR. The second is the bias of the creator: the programmer intentionally or unintentionally programs in his or her own values, without knowing how they will scale in a broader context. The third is immature data; you mentioned, Nathan, how face recognition doesn't work across the whole population. And the fourth, where research has gone the furthest, is bias in data: when the data doesn't reflect reality, or the preferred reality. But who's to say what the preferred reality is? But I want to hear from Jonas about the pitfalls, because you've been so hands-on, getting to the stage where you have to lift questions to the boards, pushing them against the wall and saying: you have to decide where you want to be on that scale.

Yes, I've built quite a lot of these systems and implemented some of them from a consulting perspective. The interesting thing is that when clients come to us as builders, they say: okay, we want to drive revenue, drive profit, or take cost out. That's often the reason they want to do AI. Then you start doing that, and what people often see today is propensity models and upsell, which is great, and you very quickly turn it into profit. But that creates the complexity of: okay, how do we rethink? There is an enormous number of pitfalls here, because now you're only pushing against one variable, and that is profit and cost take-out.
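One hedged way to see that one-variable point in code: a sketch that picks a policy on profit alone versus profit with an explicit cost attached to an unwanted side effect. All policies and numbers here are hypothetical:

```python
# Sketch: pick a policy on profit alone vs. profit with a cost attached to
# an unwanted side effect. All policies and numbers are hypothetical.
candidate_policies = {
    # name: (expected_profit, harm_score, e.g. a complaint rate)
    "aggressive_upsell": (120.0, 0.30),
    "moderate_upsell":   (100.0, 0.10),
    "no_upsell":         ( 80.0, 0.02),
}

def profit_only(policy):
    profit, _harm = candidate_policies[policy]
    return profit

def profit_with_guardrail(policy, penalty_weight=200.0):
    # Encoding a value as an explicit cost: the "line of code" for your values.
    profit, harm = candidate_policies[policy]
    return profit - penalty_weight * harm

print(max(candidate_policies, key=profit_only))            # aggressive_upsell
print(max(candidate_policies, key=profit_with_guardrail))  # moderate_upsell
```

The penalty weight is itself a value judgment; the sketch only shows that optimizing a single variable silently answers the ethical question for you.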
And I think we should go over to Nathan.

Yeah, a lot of the discussion at the moment relates to businesses that are obviously internet-based, enterprise software, or transactional. But another layer of the ethical discussion that I think about a lot is where there's already proof that machine learning systems can perform better than humans on certain tasks that are currently performed very poorly. For example, in breast cancer screening and mammography image analysis, in the clinic today there are up to 30% false positives, and false negatives as well, in the diagnosis of cancers in X-rays. So if we already have experimental evidence that machine-learning-based systems are better at diagnosing these kinds of conditions, and the same applies to optical coherence tomography for the eye, then don't we have some kind of moral imperative to accelerate the implementation of these technologies, rather than waiting and trying to retrofit them into the existing FDA regulatory framework, which they don't really fit because they're autonomous decision-making systems as opposed to rules-based ones? I think that's the flip side of the ethical discussion.

We have a responsibility as well, yeah. Are you back, Jonas?

Am I? Yeah, okay. No, I agree, and I think the leapfrogging of technology can put this in very different places. One of the discussions we've had around biases is: would you let a computer employ someone? People say, no, of course not. And I say, of course, because I think the bias there is much better than the human bias. I would be much more confident having a computer employ someone than a human. So it's about what perspective you take, and in the end, what kind of code you write and how you build it.

So I want to talk about transparency, because at the EU level they're talking about standards around the corner for AI ethics, and they talk about transparency, accountability, and predictability. Applying all of this would really slow down innovation and growth, I would say, but not doing it would lead to discrimination and pitfalls, so you really need to calibrate it. From my point of view, the level of transparency must differ between sectors based on the risks and the costs, both the financial cost and the social cost. But what's more interesting is explaining the algorithms. GDPR contains the right to explanation: here's your algorithm, here's how we figured that out. But at this moment we actually cannot explain what in the data caused the algorithm to take a certain decision. So what do we do with the explainability stuff?

The explainability stuff, yeah. It's complex. I think you're right, and it depends on the fault tolerance of the user. In areas where there's low fault tolerance, where people don't want to see a mistake that isn't fair, I think we definitely have a duty to explain, at least to a professional degree, as a human would, why a system made a certain decision. You can do this by choosing models that are interpretable, but you can also design your system to have intermediate steps that you can peel back in order to unveil some of the decision-making. There was another good example in the eye-scanning work: in theory, you can take an eye scan and predict whether it shows a severe condition or a non-severe condition that you would treat differently. You can do this fully end to end with one big black box, if you will. But it turns out that in the clinical setting it's a lot better, and performs equally well, to have an intermediate step where you take the raw image and label each pixel with a specific layer of the eye. That actually helps the doctor understand: this particular layer looks weird, this particular layer looks okay, and that combination means it's severe or not severe. Just having that little intermediate step provides a certain level of explainability, which the clinician would otherwise work out anyway; it performs the same way, has no drawbacks, and is actually a positive because it's more transferable across different systems. So I think that's one level.
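A minimal schematic of that two-stage design, assuming both stages are trivial stand-ins (the layer names, scores, and threshold are all invented):

```python
# Schematic of the two-stage design described above: a human-readable
# intermediate representation between the raw scan and the decision.
# Both "models" are trivial stand-ins, purely to show the structure.

def segment_layers(raw_scan):
    """Stage 1: score each layer of the eye for abnormality.
    (Stand-in for a segmentation network; values are invented.)"""
    return {"layer_1": 0.05, "layer_2": 0.92, "layer_3": 0.10}

def classify_severity(layer_scores):
    """Stage 2: map the intermediate representation to a referral decision.
    (Stand-in for a classifier; the threshold is invented.)"""
    return "severe" if max(layer_scores.values()) > 0.8 else "not severe"

scores = segment_layers(raw_scan=None)  # the step a clinician can inspect
print(scores)                     # {'layer_1': 0.05, 'layer_2': 0.92, ...}
print(classify_severity(scores))  # severe
```

The design choice is that the hand-off between the two stages is legible to a human, so the explanation comes from the architecture rather than from post-hoc analysis.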
Will it hamper or slow down development? Are you afraid of standards just slowing things down?

I would address it differently. The whole transparency perspective is a double-edged sword, of course, especially when it comes to consumers, because companies need to decide, if you have dynamic pricing models and differentiated discounts, how do you display that? Should it be public or not? So one perspective is: what do we define as public to our end consumers?

Could you explain to the audience how that works?

Well, the airline industry is really good at putting dynamic pricing on your flight ticket, and we have accepted that as consumers. The same thing is happening basically all over industry: dynamic pricing based on recommendations, loyalty, and so on. And I would say that if you were fully transparent about it, people would be very fast in gaming the system. If you knew exactly how the airlines price their tickets, people would start purchasing against it. So it becomes difficult, from a consumer perspective, to have it transparent if you believe in price discrimination. On the other hand, from a company perspective, the executives need a very clear view of how they're selling. And I think this also applies to healthcare, where it's going to be very difficult for governments to be transparent about when you give different kinds of care, and who gets what kind of service in hospitals, because there is a gaming element to all of these things. So the people building it need a very clear view of how they're building in transparency, but I'm not 100% convinced it should be 100% transparent to consumers. The other challenge is that it's very difficult to understand exactly what is happening, because there are so many layers and so many things happening constantly. It's more like: oh, if we show a surfing video, then we can sell more coffee, because the correlations the computer is constantly playing with come up with ideas we could never have figured out ourselves. It just happens, it tests it, and it shows it's positive. Wrapping all of that up and making it transparent becomes super complicated in a boardroom.

But who's going to educate the regulators? Who's going to educate our politicians?

That's your job, Anna.

I'm trying, doing my best.
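As a toy illustration of the dynamic pricing described above, here is a sketch where the quote moves with demand and loyalty. The formula and all of the numbers are invented:

```python
# Toy version of the dynamic pricing described above: the quote moves with
# demand and loyalty. The formula and all numbers are invented.
def quote_price(base_fare, seats_left, total_seats, loyalty_tier):
    load_factor = 1 - seats_left / total_seats   # fuller flight, higher price
    demand_multiplier = 1 + 0.8 * load_factor
    loyalty_discount = {0: 0.00, 1: 0.05, 2: 0.10}[loyalty_tier]
    return round(base_fare * demand_multiplier * (1 - loyalty_discount), 2)

print(quote_price(100, seats_left=150, total_seats=180, loyalty_tier=0))  # 113.33
print(quote_price(100, seats_left=10,  total_seats=180, loyalty_tier=0))  # 175.56
print(quote_price(100, seats_left=10,  total_seats=180, loyalty_tier=2))  # 158.0
```

A consumer who saw this exact formula could time purchases against it, which is the gaming concern raised above.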
But okay, so let's go over to what a startup can do. How can a startup apply an ethical lens from the beginning and scale in a purposeful, conscious way?

I think a very simple way is just to be extremely clear about your data capture, data processing, and use cases, and how they flow into your products, and to state this very simply on your webpage. There are a couple of companies out there that demonstrate this very explicitly, in a way that is easily digestible, not a ten-page, eight-point-font, indigestible piece of information. I think that's important in this GDPR era, because it builds far more trust that the business you're interfacing with actually cares about protecting your privacy and not selling your data to third parties. So communication and openness, being very clear about what you're doing, is far better than not. There are countless examples of companies not doing this and really suffering a setback.

Is that what's going to spur this kind of transparency and explainability?

Yeah, I think consumers vote with their feet, and there are any number of companies out there that can serve essentially the same value proposition. So I don't think you can afford to really screw up on that mark, because there's always another.

And I also want to talk about trust, because I think that's very important. There's a study on the paradox of how you perceive your own integrity: you can say you have high integrity, yet be active on social platforms in total contradiction to where you say your integrity lies. And the trust issue is that you're not going to read all the terms; even though GDPR made it easier for us, it's still too hard to grasp what AI can combine and create. So it's about which companies you trust. For an individual consumer, it's actually a balance between giving away your integrity and receiving convenience, time savings, or efficiency. And that balance is not based on reading all the GDPR text.

No. Going back to the question about startups: be very, very clear about your value proposition. What are you offering the customer? What are you selling? Nearly all startups need some kind of data feed, so just don't get seduced by short-term profit. Keep your eyes on the horizon and really care about the product you want to build and be proud of, because it's so easy to go down a slippery slope, making short-term cash, with investors and others breathing down your neck to show better profitability. So have very clear standards: this is what we want to deliver. Try to keep that and remind yourself of it, because it becomes really difficult to keep track of when you employ a lot of people, going from three people to thirty doing a lot of things in combination. If you have a very clear perspective on what you're offering, what you stand for, and what you believe, that should be your guiding star, and then technology comes second.

Yeah. So I'm wrapping up here, but we just received a question from the audience that I love. It says: from a research perspective, how can we bring together social and computer science to better understand the impacts of AI? Well, that's what I'm doing in my initiative. I'm bringing together academics from multiple disciplines, regulators, politicians, organizations, and startups to create a regulatory framework for responsible AI.
And I think what attracts the social scientists to this area is that they know their sector or research area, and in the long run the business cases based on their research, are going to have AI implications. We all need to understand that. And coming back to where I started: the solutions are in the engineering field, but as we go along, the pitfalls will only increase if we don't build up the other side of the coin, the humanistic side.

I agree, and I think we need to think the same way you talked about silos. We cannot have only the computer scientists trying to solve this. We need to get together with the legal people and the philosophers: how do we work with leadership teams? How do we work together on defining a new code of conduct for how we deploy data? It's by bringing everyone to the table and spending the time needed to figure out how we solve this exponential problem. Because it's not always a yes or no; there are a lot of edge cases. How do we handle this one? There are different ways of solving it; what are we solving for? And this technology is actually pushing us to take a more ethical stance, to make the decision and live it through.

I would even say you need to define the boundaries: how do you encapsulate things, and what do you think is right and wrong? And that, for me, is a business question. You're once again enabling new technologies to drive new business, and you need to have values and standards, as has always been the case: what kind of quality do you stand for? So for me it's basically a business-model question.

Okay, so Nathan, it's up to you to create hands-on algorithmic toolkits for purpose-driven AI. It's a bold plan. Okay, so thank you all for listening and have a great night. Thank you. Thanks so much. Bye. Thank you.