It's important to open this session with one of our key sponsors, IBM. So I'm briefly going to introduce Rahul from IBM, who leads the worldwide Data Science and AI technical sales at IBM. Rahul first got involved in open source communities more than 15 years ago and has contributed to numerous open source projects and industry standards. He's an elected member of the Apache Software Foundation, which in itself is very impressive. In the related space, Rahul has done standards work with the World Wide Web Consortium, where he represented IBM in multiple working groups, and he's an Open Group certified Distinguished Architect. Rahul is also an IBM Master Inventor with a dozen research papers and issued patents. Rahul really brings us closer to AI, because this session features the latest of IBM's announcements: bringing AI capabilities to data wherever it exists, in the cloud or in hybrid data environments, and infusing the trust and transparency needed from an industrial AI perspective into any business process with confidence. So Rahul, over to you. Thank you.

Thank you. First of all, I want to welcome you all from IBM, and thank you all. When I was told that there was a chance for me to speak at the Lifelong Learning Institute in Singapore to an audience that is interested in open source and artificial intelligence, it seemed like all of my key interests had come together, and it was nearly impossible to say no to this opportunity. So I am thrilled to be here, and I'm thrilled to be speaking to an audience of technologists and developers. For a long time, we have been underappreciated in many ways, but this venue itself is symptomatic of the changing landscape for developers, right? It takes me back to when I was teaching graduate school students: by virtue of just the layout of this room, we are forcing all of the speakers to look up to the developers. So this is just an indication of how the world is changing, and how more and more the developers and the technologists are leading the way.

I want to talk to you about IBM and IBM's contribution to AI, and if there's one thing that you take away from this talk, I hope it's that IBM is essentially synonymous with open source and its adoption for the cloud, for data, and for AI. So given all of the powerful open source capabilities, particularly in the artificial intelligence landscape, where does IBM stand? Why do clients need IBM to participate in this particular landscape? One of the ways to clearly depict that is through what we refer to at IBM as the AI ladder. This is the prescriptive guidance that we give to our clients in terms of adopting artificial intelligence. In computing, we often use the term garbage in, garbage out, right? Artificial intelligence, in some sense, is only as effective as the data, the training data, the models, and the approaches that we take to it. So you don't get to artificial intelligence as a Boolean state; it's a transition from the current state of a large enterprise or a large client. There may be a lot of data. You start with collecting the right data and making it simple and accessible. Then there's a governance model that has to be layered on top of that: who can access the data? How is it cataloged? How do we get the data scientists, the data stewards, and the application developers to collaborate and work together to decide what the right business outcomes are?
And then there's the analytics that sits on top of that. And finally, there's infusing the artificial intelligence into existing business processes and workflows. You could have the best possible predictive model for a certain business case, but if you don't apply it in the right workflow, to the right end user, in the right user context, it often won't be used as effectively. So the AI ladder really helps large companies with large data estates transition up through higher value to get to artificial intelligence.

And we believe in a multi-cloud world. Towards the right-hand side, what that graphic is meant to illustrate is that for most large companies and large clients, the reality is that your data doesn't reside in a single cloud and doesn't all reside on premise. It's often a mix, and it's scattered, and we help them bring it together.

There are two fundamental fallacies that we discuss with clients very commonly, and we use a lot of open source capabilities to help with them; I'll talk about that through the rest of the talk. The first one is what I call moonshot or nothing. This is where you have a lofty goal in terms of the AI capabilities that you want to bring, but you forget that there's an iterative, collaborative journey to get there. You have to work hard with your data and with your cloud infrastructure to deliver the AI in the right form. That journey is often overlooked, and so we help clients with that journey. The second fallacy, which is equally important, is what I call one and done: just because you built a predictive model that works on the data set you trained it on doesn't mean you're done and can declare victory on your artificial intelligence initiative. There's a care and feeding of those systems that has to happen consistently. The model has to be retrained; it has to be kept up to date with the real world. And we have a lot of those capabilities, all built on top of open source tooling and libraries, and we bring it all together.

A fundamental, central tenet, part of the core principles of IBM's approach, is that it stands on open. And this is not a chart that was created for this particular presentation; this is part of IBM's strategy, and open is central and fundamental to it. I'll give you a few data points around that. Those of you who have followed some of the work that IBM has done in the context of natural language understanding and artificial intelligence are familiar with our grand challenge, the Jeopardy grand challenge, where the machine competed with two of the best players at Jeopardy. But the important point that I want to make as part of that story, and again this is almost 15 years ago at this point, is that even some of the work that contributed to that grand challenge was based on fundamentally open principles and software that was available in the open source community. The Unstructured Information Management Architecture, or UIMA, was something that we had donated to Apache, and it was one piece of the puzzle that went into the development of the Jeopardy initiative. So we've been leading with contributions to open source just as we've been consuming a lot of open source software.
This is a very high-level view of our information architecture. From an IBM perspective, the way we talk about artificial intelligence and openness is focused on all of these layers. We believe that the future relies very heavily on open communities and open source, and you'll see that in each layer of this architecture: the cloud layer at the bottom, where we're focusing on open infrastructure, containers, and Kubernetes; the data layer, where we use a number of NoSQL databases and other open source data technologies; and the artificial intelligence layer, where we leverage a lot of the open source toolkits and libraries. Our entire stack, from the cloud onwards to delivering AI, is based on open principles and a lot of open source software and tooling. Of course, the IBM differentiation is to provide, on top of that, the services, consulting, and support capabilities that some of the largest companies and clients really need. It's hard for a very large enterprise to just pick up a few open source libraries and manage the entire life cycle of taking AI into their products. There's the support that has to follow beyond that, and real help in making the best use of some of the open source capabilities, and that's where IBM helps with best practices, support, services, and consulting.

If you look at the top workloads that IBM is involved in on the IBM Cloud, particularly workloads that have to do with artificial intelligence capabilities, so predictive models, deep learning models, and decision optimization models, you will see that for all of those workloads IBM leads with developing open communities and leverages the open source software that exists. Going clockwise from the top right: there's blockchain, of course, where the Linux Foundation and the Hyperledger project were some of the biggest contributors. There's the data fabric, where I talked about NoSQL and other types of databases, a lot of which have very vibrant open source communities. When we talk about cloud-native applications and modernizing applications into microservice architectures, we're talking about containers and container orchestration projects like Kubernetes. Even though VMware is proprietary, when we talk about embedding AI into these processes, there are a lot of open communities that we are involved in. And of course, there are the core machine learning and AI capabilities. Finally, IoT is another space that is very vibrant in terms of open communities. So IBM plays a pivotal role in some of these communities, making these contributions and, again, helping our clients with the adoption of open source in the context of cloud, data, and AI.

This is just a simple illustration. I'm a member of the Apache Software Foundation, but there are a lot of other contributors within IBM, and you'll see that within the data and AI space in particular we really focus on making some of these contributions widely available to the community. There are some statistics at the bottom from the Linux Foundation, Kaggle, and Gartner, and you'll see how open source is really fundamental to the rapid acceleration and adoption of these technologies.
This will be preaching to the choir, so I'm not going to actually talk through this chart, other than to say that we all know that the development of AI, of predictive and deep learning models, involves a very sophisticated and iterative process. There's a lot of due diligence you have to do to make sure that the models you're building are fair and explainable, so that you can show, in a regulatory or other context, why certain decisions were made, and all of that has to be governed in a very well-orchestrated fashion.

That lifecycle holds regardless of which tooling or libraries you're using; where IBM helps is with that end-to-end governance for AI. We provide a set of tools, including Watson Studio; if you haven't heard of it, I would certainly suggest you check it out. You can think of it as your IDE for AI. It allows you to bring together a variety of open-source capabilities, as well as IBM proprietary capabilities, for the development of your predictive, deep learning, and optimization models, and it allows you to do that in a collaborative fashion where you can work with your data scientists, your data stewards, your application developers, and your business analysts to deliver the right outcome. Now, when you're just getting started hacking together some AI models, interacting with all of these different personas might seem very far off, but when you talk about large companies and large enterprises, this is very much a real problem. Most of those are not single-person or two-person shops. There's very much a collaborative model that has to be put in place, and we provide end-to-end tooling that allows for that.

Once you use Watson Studio to build your models, you can use Watson Machine Learning, or any number of other model-serving engines, to serve and deploy those models so that they can start making predictions, either in the cloud or on premise. And there's obviously a cycle of continuous learning that has to happen once your model is deployed. Part of that is in Watson Machine Learning, and part of it is in what we call Watson OpenScale. What OpenScale does, at runtime, not at build time when you're training the model, but when the model is deployed and making predictions, is monitor the runtime performance of that model and give you indications as to when some part of that model might have drifted and you need to retrain it. It also allows you to explain the decisions the model is making, and to show that your model is fair across all of the feature spaces involved in making that prediction. These are fundamentally important principles for any enterprise adopting AI, and we provide the tooling, built on top of all of the open-source capabilities, to do that.
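To make the idea of drift concrete, here is a generic illustration of the kind of runtime check such monitoring automates. This is not the Watson OpenScale API, just a minimal sketch using a two-sample statistical test; the income feature and the 0.05 threshold are placeholders chosen for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: True means the live data likely
    no longer follows the training distribution."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train_income = rng.normal(loc=50_000, scale=10_000, size=5_000)  # feature as seen in training
live_income = rng.normal(loc=58_000, scale=10_000, size=500)     # shifted live scoring traffic

if has_drifted(train_income, live_income):
    print("Feature drift detected: consider retraining the model.")
```

In production, a monitor like this would run per feature on a sliding window of scoring payloads, which is the kind of bookkeeping OpenScale handles for you.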
This is a slightly different view of the same picture, but this view talks a little bit about the IBM portfolio. IBM Cloud Private for Data is essentially our data infrastructure layer, and it sits on top of any cloud. So you can take this entire stack to Azure, you can take it to Amazon, you can take it to the IBM Cloud, or you can take it on premise on your own hardware. It allows for that sort of portability in terms of the deployment infrastructure.

Sitting on top of that, I just talked about Watson Studio, Watson Machine Learning, and OpenScale and the value they provide, but in the spirit of transfer learning and pre-trained capabilities, we also provide a lot of pre-trained APIs that can really accelerate the development of AI-based applications, particularly if you're looking at things like natural language understanding. I did a project where we used some of those AI capabilities to really understand the tax code. That's a pretty enormous piece of work, but we were able to do it because we could train our natural language understanding capabilities using custom statistical language models, built with our tooling, that understood what a deduction means in the context of personal income tax, or what a move means in the context of personal income tax. So it really has a lot of potential in terms of accelerating how you get to end value. Those are the services that sit on top. All of these, of course, are available as APIs. You can try them out today on the IBM Cloud; you can subscribe, there's a 30-day trial, and you can try out all of these services. There's also a three-day hackathon planned as part of this event, and I would certainly welcome you to participate in that.

And finally, on top, we provide what we essentially call applications, which are sets of capabilities focused on delivering AI against certain use cases. Watson Assistant, for example, allows you to very easily build conversational AI solutions, where you're essentially having a dialogue or a back-and-forth with your end users. And Discovery allows you to search and explore large document corpora, which would otherwise be very hard for you to make sense of and extract insights from.

Now, the entire Watson AI suite, or the IBM Data and AI suite, leverages all of these open-source capabilities through Watson Studio. So it's not a matter of either using the proprietary capability or the open-source capability; it's a harmonious relationship, and with our tooling like Watson Studio, it comes together. If for your use case you can use the open-source libraries and that's all you need, that's great. If you want to leverage some of the Watson APIs, those are available to you. And there may be some cases where all you use is Watson Assistant or some of the Watson APIs, because that's what you're interested in. All of these models of interaction are fully supported by the tooling.
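Before turning to what we give back, here is a minimal sketch of calling one of those pre-trained Watson APIs, Natural Language Understanding, from Python. It assumes the `ibm-watson` SDK is installed; the API key, service URL, and sample sentence are placeholders, and the exact SDK surface may vary by version.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, KeywordsOptions)

# Placeholder credentials from an IBM Cloud NLU service instance.
authenticator = IAMAuthenticator("YOUR_API_KEY")
nlu = NaturalLanguageUnderstandingV1(version="2019-07-12", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")

# Analyze a short passage for entities and keywords.
response = nlu.analyze(
    text="Moving expenses may be deductible on a personal income tax return.",
    features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions(limit=5)),
).get_result()

for keyword in response["keywords"]:
    print(keyword["text"], keyword["relevance"])
```

The tax-code work described in the talk would go further, adding custom models trained on domain vocabulary, but the calling pattern is the same.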
Now, I've mentioned how our tooling is based on open source and how you can leverage Watson AI capabilities alongside open-source capabilities. But what about give-back? What are we doing to really advance the state of the art in artificial intelligence and the way open communities look at artificial intelligence? We donate a lot of our technology in various open-source fora, and these are just a few examples, the ones on the screen, of libraries that we have made available as open source.

The first one is the AI Fairness 360 Toolkit. I talked about model fairness; this is really critical. A lot of AI today is making decisions that affect people's lives in fundamental ways. You may have heard about AI that makes decisions about whether people get some type of mortgage or loan. You may have heard about AI that is involved in deciding whether somebody who is incarcerated gets out on parole. You may have heard about AI that advises an oncologist on what possible treatment options may be available to a particular person with cancer. All of these are really important decisions, and we cannot underplay the aspect of fairness around them one bit. That's what the AI Fairness 360 Toolkit addresses (a minimal usage sketch follows at the end of this transcript).

The Adversarial Robustness Toolbox is about how you protect your machine-learning and predictive models against malicious attacks. All of you have probably seen the LIME example, where you're trying to classify whether something in an image is a wolf or a dog, and you find out that the image recognition algorithm is really focusing on whether there is snow in the picture: if there's a lot of snow, the image is deemed to be a wolf. If someone finds ways to extract these types of deficiencies out of a model and then exploit them, that can obviously lead to some very unsavory outcomes in the real world. So we provide this toolbox to help mitigate against those types of attacks.

And then there's our Fabric for Deep Learning. We want to democratize deep learning. The development of deep neural networks is still hard, because it's hard to access the hardware and infrastructure capabilities needed to train and run these models at scale. We're making all of that available on the IBM Cloud, and indeed other clouds, and we're making the toolkit available as a fabric in an open source library.

One of the last things: we created a repository of models, the Model Asset Exchange, to allow for sharing of predictive models. Almost always, you'll ultimately have to use a fit-for-purpose model that's trained on your own data, with the right algorithm, perhaps as an ensemble model for your purpose. But you can get started quickly with this model asset exchange.

So I'll end on this chart. Again, I hope that having seen some of this, when you think of IBM, you think of the entity that is synonymous with bringing open source to the enterprise in the context of cloud, data, and AI. That is a fundamental, central tenet of IBM's mission. And there's a lot of opportunity for you to engage with us more deeply beyond this talk. We have a number of speaking sessions. We have two hands-on workshops, which I would very much recommend; those workshops focus on the aspects we have talked about. And finally, I did mention the hackathon. The hackathon is something that we are supporting over a period of three days; there's an interesting contest and some prizes at the end of it, so I would certainly ask that you check it out. Thank you so much for your time. It was great to be here.
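As promised above, here is a minimal sketch of the kind of check the AI Fairness 360 Toolkit supports. It assumes `pip install aif360` and the bundled German credit dataset files downloaded per the toolkit's instructions; using `age` as the protected attribute follows the toolkit's introductory tutorial, and API details may differ across versions.

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Credit-risk dataset shipped with the toolkit; 'age' is a protected attribute.
dataset = GermanDataset()
privileged = [{"age": 1}]
unprivileged = [{"age": 0}]

# Measure the difference in favorable-outcome rates between the two groups.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Mean outcome difference before mitigation:", metric.mean_difference())

# One of the toolkit's pre-processing mitigations: reweigh training examples
# so that favorable outcomes are balanced across the groups.
reweighing = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = reweighing.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Mean outcome difference after reweighing:", metric_transf.mean_difference())
```

A value near zero after reweighing indicates the favorable-outcome rates for the two groups are now roughly balanced in the training data; the toolkit also offers in-processing and post-processing mitigations beyond this one.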