And our first speaker is Seth Dobrin. He's VP of Data and AI and Chief Data Officer for Cloud and Cognitive Software with IBM. Seth is not physically with us today, but he has recorded his presentation, so we're going to play that recording momentarily. And because he's not here, we won't have an opportunity to do a live Q&A at the end of his presentation, so just be aware of that. So without further ado, we'll go to Seth's presentation. We're in the midst of an AI revolution, but people are still too suspicious and too fearful of AI. And this is the time for us to fully unleash the potential of AI to reinvent our future. But to do that, beyond the advancements in research and technology, what we really need is for people to trust the algorithms that impact their lives. But actually, let me start from here. The biological circuits of neurons help to explain a popular machine learning technique at the heart of deep learning: artificial neural networks. Yes, today, as the IBM Global Chief AI Officer, I'm here to talk to you about machine learning, algorithms, and AI, but not from the perspective you might expect. Trust is the real driver of AI adoption, and it's critical to begin to approach this technology from a human perspective by taking a step back. This image is an analogy from the field where I started my career: human genetics, specifically psychiatric genetics. And I bet by now you might be wondering how a human geneticist became the first-ever Chief AI Officer at one of the world's biggest companies in the realm of AI. In fact, this is precisely what a lot of people ask me. Believe it or not, there's a lot in common between being a human geneticist and leading data and AI transformation for enterprises, even if it doesn't seem like it. For instance, the late 1990s, the time of my early academic career, was also the time of the founding of what we know today as big data, which is not a term I like, but one that everyone knows.
And if you think about it, the origins of big data come from hard sciences such as astrophysics and genetics. Additionally, the tools that are so popular today for solving math problems, like R and Python, rose to prominence during this time. And throughout my entire career, from human genetics to plant genetics, from academics to startups to multinational corporations, it's all been rooted in pushing the boundaries of advanced automation methodologies at scale, leveraging data and math, all with the aim of creating better, faster, and cheaper solutions that solve real human problems and can be trusted by people across the organization. In fact, trust is crucial when it comes to scaling AI. Trustworthy AI and AI ethics are among the most discussed topics today, and there are many frameworks, guidelines, assessments, and regulations being provided and proposed. And I know what companies have to deal with. It's pretty complex. And if we're honest with ourselves, at best, it's a mess. But people tend to forget that ensuring the design and development of trustworthy AI systems that are fully integrated with a company's business processes requires much more than technologies to mitigate bias or address regulations. The matter is much more complex than that. Operationalizing trustworthy AI at scale starts from growing and maintaining an open and diverse culture that fosters inclusiveness, a culture where AI is human-centered. It's the culture of an organization that allows it to set up proper data and AI governance that ensures the development of AI solutions grounded in ethical principles. And the path towards trustworthy AI is long. It requires a human-centered approach, rooted in the value of diversity, that uses AI to solve human problems and generate real, tangible value.
And I've realized, with time, that almost every initiative I've led from the beginning of my career has laid the foundation for the creation of this human-centered culture I'm talking about. A culture that is essential for putting trust at the core of AI technologies and generating value for organizations. And it's through this lens of my journey in data and AI that I'm going to talk to you about how to set the basis for instilling trustworthy AI technologies in an enterprise. And to be honest, making the transition from hard sciences like genetics to heading AI transformations is not as much of a leap as it may sound. Taking a big step in a totally different direction, that's the challenge. And this is precisely where the perceived leap comes in. If you think about what's at the heart of data science and AI in the enterprise, it's basically applying the scientific method, using math and computer science, to solve business problems at scale, all with the goal of creating new value. And ultimately, with a slight step to the side here and there, I've brought the application of the scientific method to a field very far from genetics by completely changing industries when I joined IBM. This is how innovation happens: by putting ourselves at the intersection of different fields and making an effort to connect the dots along the way to draw a bigger picture. In my case, that picture was cross-functional, data-driven solutions that helped companies reinvent their business models. That's why one of the first initiatives I led when I joined IBM was the creation of a new team, the Data Science and AI Elite team. I designed this team as a pool of experts with deep skills in machine learning, deep learning, data engineering, data visualization, optimization research, and data journalism to help IBM and our customers successfully execute, operationalize, and scale AI in the enterprise.
Through my journey, I've seen how companies invest heavily in software, hardware, and time hiring top talent. And yet despite all this effort and money, many of these companies get back little to no value. This primarily happens because they spend all their resources on too much experimentation and on projects with no clear business purpose set up front, which as a result don't align with the organization's strategic initiatives. With the Data Science and AI Elite team, I wanted to turn data science from a science experiment into a source of value by showing enterprises how they need to turn their data science programs from research endeavors into integral parts of their business processes. The experiment of the Data Science and AI Elite team resulted in huge business success. The team acquired more than 110 client references and has completed more than 260 client engagements. The way the team operates has profoundly changed, even within IBM, the mindset of how to approach data science and AI and how to use it to generate new business value. A year after the creation of the Data Science and AI Elite team, we started another team designed to help accelerate AI-driven value creation. This is a team of domain-specific chief technology officers who also have deep expertise in a given industry sector. These CTOs helped tie AI to business outcomes in order to accelerate the generation of value for their specific industries through AI. But the actual strength and immense success of these teams truly dwells in their diversity. Think of a menu. Do you eat everything on it? Absolutely not. You pick and choose, right? This is yet another analogy that comes from my wife: an analogy between a menu and a job description. Long before joining IBM, while working at a previous company, I was trying to find simply one good data scientist to add to my team. One, not more than 100 as it was with the Data Science and AI Elite team.
So not only was I having trouble finding qualified candidates, but I also realized there was zero diversity in the candidate pool. Tired of my frustration and complaints, my wife helped me realize that the job description I had written contained more than two dozen qualifications. To frame it in her words, it was too much of a wish list, and she was right. Most of the criteria were not skills necessarily required for success. She suggested reframing the job description as a short menu of desired attributes. And that's when it clicked: there's what I want, and there's what I need. For instance, I don't need someone with Python, R, Julia, Scala, Java, and JavaScript skills. I need someone with Python or R plus one other language. I don't need someone with an understanding of every database known to humankind. I need someone with experience in three of the following. I shouldn't care about the number of years of experience. What I really care about is the demonstrated level of mastery. Well, after doing that, I suddenly had twice the number of qualified interview candidates, including women, people of color, and other underrepresented groups. There's an entire body of research out there explaining why diversity increased so much with this approach. And when it came time to begin assembling the IBM Data Science and AI Elite team, I adopted the same strategy to ensure that it was highly diverse. And with some of my then colleagues, our HR partner, and our talent acquisition partner, we reevaluated the entire hiring cycle, from writing the job descriptions to making the candidates an offer. The main rule we established was no conducting interviews until we had a diverse pool of qualified candidates, an approach we drew confidence in from many studies and much research in the field.
Following these practices, we minimized bias during the hiring process and created a diverse team of highly technical and highly talented individuals from 21 different nationalities, speaking 27 different languages, of which more than 40% were women. I did this not only because it was, and still is, the right thing to do, but because diverse teams drive better business outcomes, and this is proven by top academic and real-world research. There's ample evidence that diversity is good for business. A McKinsey report found that companies ranking in the top quartile for racial and ethnic diversity were 35% more likely to have financial returns above their respective national industry medians. But diversity is not just about hiring diverse talent. It's also about establishing forward-thinking policies and processes to nurture and preserve diversity within an organization. With this belief in mind, I drove a number of initiatives at IBM to support different communities and grow talent in data science. For instance, together with several female senior leaders, we co-founded a diversity network called GROW, Guidance, Resource and Outreach for Women, in IBM to advocate for women. And to retain and grow talent in data science, I started leading the IBM Data Science and AI Profession Board, of which I'm the executive sponsor. Through the board, we've provided a clear career path for data scientists through a skills-based assessment and qualification program inside IBM as well as via The Open Group. Also, as part of the Data Science and AI Elite team, I started the first-ever apprenticeship program in data science to give people with zero background in this discipline the opportunity to learn a high-demand skill and develop a career in tech. Along with these initiatives, I also tried to change the old-fashioned way of mentoring diverse communities, because mentoring is important, but it must be backed by sponsorship and advocacy. Diversity and inclusiveness spur innovation.
Innovation generates concrete business results. A diverse and inclusive team is about more than male or female, Black, brown, or white, or differences in sexual orientation. It's also about differences in backgrounds: socio-economic, cultural, language, skills, training, cognitive processes, and so on. In fact, multidisciplinarity was one of the facets of diversity and inclusiveness that we built into the Data Science and AI Elite team. Prior to joining IBM, while leading one of our digital transformations, I remember taking part in a project about implementing a mobile experience to interact with farmers in India through an app. To everyone on the team, it sounded like a brilliant idea. Everyone was really excited about the project until, after a design-thinking workshop, we discovered that the farmers, who were the end users, could only use the SMS service on their phones, meaning an app was worthless. If we had pursued what we thought was a great idea without thinking about the user first, we would have wasted time, resources, and money. Approaching AI from a human-centered perspective is fundamental. We only provide tangible business value to organizations when we create solutions that solve real human problems. Remember, AI is only a means to an end. The ultimate goal is to help and to augment humans. In fact, another choice I made was to equip the team of data scientists with data journalists and data designers in order to radically change the traditional data science practices applied to business problems by rooting them in a human-centered approach. Throughout my journey from data scientist to business leader, I've learned several lessons. One of them is that by bringing a diverse cross-section of wild ducks together with purpose, we create innovation. And innovation has indeed been the objective of my entire career, in an effort to bridge the gap between business, AI, and human needs.
And the focus on humans brought into the Data Science and AI Elite engagements led to the adoption of design-thinking workshops in data science, which were even more radically innovated by adapting them to data and AI in order to better serve both the end users and the data scientists. In addition, the human approach revealed the importance of adequately communicating AI outcomes to the end user. And this prompted the adoption of data storytelling to transform algorithms into actionable knowledge and concrete business results. Ultimately, the attention to the users uncovered the need for a well-defined data and AI strategy to craft an overarching story that helps to envision a new AI scenario and explain how this scenario may play out. The experience of the Data Science and AI Elite team made me realize that what I had accomplished throughout my career has always been directed towards the creation of a human-centered approach for data and AI business practices. In fact, to lead a portion of my former company's digital transformation, I created a framework I called the Decision Portfolio to identify the company's strategic initiatives in data and AI based on the users' needs and to assess those initiatives by aligning them to business value. Yet once I joined IBM, while leading the digital transformation of the Cloud and Cognitive Software organization, I realized that the framework I created couldn't really help me, as it didn't scale. I needed a way to shorten the time it took to gather user requirements on the data they needed to run the business efficiently. I also needed a way to make sure we developed solutions that would make a real impact. Therefore, the team started adopting design-thinking practices and reinventing them through the lens of this Decision Portfolio. We used design thinking to identify business problems from a human perspective through a data-driven approach. This was groundbreaking.
We used it to identify real user problems, to uncover all the critical business questions users have, and to use those questions to extract the information they need. We then turned that information into data sources the data scientists could use to design and develop solutions that solve the users' problems. By approaching digital transformation from the human needs and unknowns hidden in the business strategy, we provided the business a tool carefully designed to transform data into actionable insights, explaining the business dynamics and answering all the critical questions that users have, through trusted data at their fingertips. Through an IBM-wide effort across finance, sales, design, development, and product teams, we were able to create a management system that today is IBM's single source of truth for the business and is adopted by every senior executive in the company. This methodology changed the way data is used for business by teams and accelerated IBM's digital transformation. By re-engineering the Decision Portfolio through the lens of a human-centered approach using data-centric design, we sparked an innovative collaboration between IBM Design and our organization. The outcome of this work is what is today called IBM Design Thinking for Data and AI. This tool, IBM Design Thinking for Data and AI, is a human-centered methodology that illuminates how to employ data and AI to build an AI strategy and an execution plan for trusted AI solutions. This is done by bridging the gap between strategy and execution. And this framework enables teams to design and build solutions that provide concrete business value to their organization while solving human-centric problems. It puts up front who will be using the data or AI, how they will be using it, what they will be using it for, and why they even need it in the first place.
With this framework, which is divided into a strategy session and a series of technical sessions, teams translate business intent into concrete action. In addition, through the use of specific AI ethics and AI explainability activities, this methodology enables the design of an AI strategy that puts trust at the core of every solution that is ultimately implemented and operationalized in the company's business. In fact, this is one of my missions as IBM's first-ever Chief AI Officer: helping companies design an AI strategy that aligns with their business needs and puts trust at the core. Once companies achieve a mature digital transformation, it's time to focus on a strategy to industrialize data and analytics to enable AI to reinvent the core of the modern enterprise. Trust is vital in this process. We started the journey from here, from this image about the three pillars of trustworthy AI, but as you've probably gathered at this point, trustworthy AI is the result of different integrated processes, tools, and technologies that cross the cultural fabric of an organization. The growing maturity of enterprises in the AI realm, with the operationalization of AI models, has raised the need to trust AI's outcomes. And this is possible by creating a data and AI governance framework through which to monitor the entire AI lifecycle and ensure that models are compliant with AI ethics principles. For us, these principles are: transparency, meaning AI is easily inspected; explainability, because people are entitled to understand how an AI arrived at a decision; fairness, because AI should always help people make fairer choices; robustness, as AI has to be robust enough to handle exceptional conditions effectively and minimize security risks; and finally, privacy, because AI solutions must preserve the privacy and security of users' data.
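To give a flavor of what checking one of these principles can look like in practice, here is a minimal sketch of one common fairness check, the disparate impact ratio. This is an illustration only, not IBM's actual tooling: the model decisions, group labels, and the 0.8 threshold (the widely used "four-fifths rule") are all assumptions made for the example.

```python
# Illustrative fairness check: the disparate impact ratio, i.e. the rate of
# favorable outcomes for the unprivileged group divided by the rate for the
# privileged group. All data below is hypothetical; a real governance
# framework would run checks like this continuously across the AI lifecycle.

def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical model decisions (1 = loan approved) for two groups.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.2 / 0.8 -> 0.25 here
if ratio < 0.8:  # the "four-fifths rule" threshold, an assumed policy choice
    print("Below the four-fifths threshold: review the model for bias.")
```

A real deployment would compute metrics like this on every model scoring run and surface violations to the governance process, rather than as a one-off script.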
These five principles guide the development of trusted and responsible business outcomes, driving more value for companies by making them more profitable, more efficient, and more secure. In fact, companies cannot generate business value through AI unless they can trust the outcomes of their AI systems. Think about the recommendations or the information we provide to our customers when we employ AI. If AI is not trustworthy, the link between business strategy and AI strategy inevitably breaks. Because of the urgent need for companies to define what trustworthy AI represents for their organizations and to integrate that knowledge into actual business operations, we started an IBM-wide effort around creating a set of trustworthy AI solutions. Through these solutions, we help customers deliver trustworthy AI systems and put AI ethics principles into practice through an execution model focused on four different areas that reflect the different needs of organizations today. These areas encompass a mix of tools and IBM technologies, as well as the expertise of our services organizations. First, AI governance frameworks: we help companies build, from scratch, the conditions to develop trustworthy AI systems. Second, monitoring of the full AI lifecycle: we partner with customers to plan, build, deploy, and manage new AI solutions while ensuring the trustworthiness of the AI. Third, the ability to assess, audit, and mitigate risk: we offer guidance and tools to help customers assess, audit, and mitigate risk in their AI solutions. And finally, education and guidance: we provide best practices for building trustworthy AI solutions through education and guidance for data scientists, developers, and decision makers, as well as standard courses and certifications. Now, this was quite a journey, but it doesn't have to end with this talk. Actually, I hope it's only the beginning of a new conversation.
There is still a lot to do to make AI responsible and fair. That's why it's crucial that experts in this domain make sure to focus their work on changing the narrative around AI by putting a genuine effort into creating trustworthy solutions that can truly help human beings to reimagine a better and fairer future. Thank you.