So, thank you for the introduction. We are going to talk about AI, of course, and I am going to ask a lot of questions during the talk. The first thing I wanted to do with these presentations is to always start with "the end of the world is coming." This is typical. As you can see in this video, a lot of companies will disappear in the next 10 years, and a lot of people will be replaced by AI. So more or less that is it: the end of the world is coming, as you can see in all the messages. What is the conclusion of this? Why are we explaining this? As you can see here, right now seven of the ten biggest companies in the world are technology companies. And this could be impressive, but I think it is even more relevant that most or all of these companies didn't exist 15 years ago. And I want to emphasize one thing: what do we call these companies? Technology companies. They are companies that, just with technology, are disrupting different markets. For example, when you think about Amazon, most of you are thinking about a retail company. But whenever an interviewer asks Jeff Bezos what Amazon is, he always says the same thing: Amazon is a technology company. In fact, it is a company that is disrupting different markets with technology. They started with books and retail; now they are in the media sector, making fantastic series and movies. So really, they are disrupting different sectors with technology. Carmen yesterday pointed out that in England, where I was last week, it was all over the TV news that Google had entered banking, the financial sector. Why? Because now you can open a bank account with Citibank using your Google account. So they are jumping from sector to sector, disrupting with technology. But the question is: how are they using technology to have such a breakthrough, to have this edge in any sector they jump into? This is one of the questions I want to answer during this talk.
So more or less, this is the situation for any traditional company. When I say traditional company, I don't mean a bad company. Traditional companies are very good companies; they have been making money for the last 20, 30, 40 years, and that is not easy. But they have history, and because they have history, they have this technology. If you think about a bank, a retailer, a telecom, a utility company, they normally have technologies from three generations. Banks have their core banking on an IBM host, a mainframe. That is technology from the 70s and 80s, so we are talking about 40-year-old technology. Then they are also using applications built with Java and Oracle data stores, from when the Internet was born, in the 80s and 90s. So you have the combination: the very old one, the old one, and then came the new generation: distributed technologies, small computers, small machines working together. And traditional companies, because they have history, added this technology on top of the other two. So what is the situation of a traditional company? They have three generations. It is very complex, the cost is very high, and they are very slow. How many of you are working in traditional companies, with a combination of two or three of these generations? Okay. I was raising my hand just for empathy, although of course I work with the last one. Well, this is normal. And technology companies, what are they using? They were born 10 years ago, so they are only using the third one. So what is the edge? The edge is this: 10x. They are 10 times more competitive in any sector, or they should have this edge in any sector they jump into. Ten times better, not 10%, 30%, 50%. That is the reason Google goes into the financial sector. Why?
Because they know they are going to have this 10x edge. And how are we helping traditional companies? How are we helping you? With technology, right? Well, that is wrong. With technology, we are not winning, you are not winning, and you are not going to win. It is impossible. Why? Because you are trying to compete with technology against technology companies. And what is the edge? This is the edge: how many tier-one experts in technology do you have in your company? I mean the best ones worldwide, the best engineers. Of course you are the best engineers, that is the reason you are here. But how many do you have in your company? Ten? Five? None? A hundred? A hundred is a lot. Even for the biggest banks, utility companies, telecoms, whatever, a hundred is a lot. And how many tier-one engineers, technical people, does a technology company have? Amazon, Netflix, Google: in the thousands. More or less any of the companies I mentioned has more than 50,000 tier-one engineers. So do you think that on this playground, technology, you are going to beat them, or recover some of the market share you have lost, or the share you are going to lose? No way. So, Sun Tzu: choose your playground. If you want to win any battle, choose the playground. This is what I am going to try to explain. We started, as I said, with "the end of the world is coming and you have no hope." But now we are going to see that, yes, there is light at the end of the tunnel. How? Well, for you to change this situation, there are three disruptions you have to achieve. The first one comes with something that, in big questions like this, is very easy and very natural to think of first: data. But is any data good enough to recover some of this market share? No. What is the main problem, in our opinion? Trust: trusted data. Normally the data you have is untrusted data. So this is the classical definition of trusted data.
When we think about trusted data, normally we are thinking about data with quality. So it is more or less just the first word: clean data, data with quality. That is not trusted data. That is clean data, but it is not trusted. Trusted data means that things like privacy or clear access guidelines travel with the data, no matter where the data is. So data in a relational, SQL data store, with metadata, access control lists, and security, is not trusted data. As soon as you move it, as soon as I export it into an Excel spreadsheet, anyone can access it. Where is the clear access guideline then? That is not trusted data; I can do whatever I want with that data. So trusted data is something really particular. With this definition, how many of you have or are using trusted data in your company? Really trusted data. How many? Okay, I should say yes, but so-so. How many of you have untrusted data: bad quality, bad security, and so on? Well, again, empathy. This is the normal situation, and this is the approach to the market. If you were at the Google talk yesterday, the first one in the morning, we were talking there about application development, data analytics, and AI. So, application development. This is the first part; at the top, you can see there the operational applications. When you build an application, what is the focus: the code, the programming, or the data that you are generating? The programming, being honest. Data is just, well, we will put it in some SQL data store. The data is there, and I focus on programming the code, because that is what I really like, because I am a programmer. And because the focus is not data, what is the situation when we build an application? All of that is generating untrusted data, because we don't care. What we really care about is the code, and so that is untrusted data. Well, I would say it is very, very untrusted data.
Not just in access guidelines or security; even the quality is very bad, and the data is not consistent. Why? Because if I develop this application, maybe I change the address of a customer and I do not synchronize the address with this other application that also has the address. So the quality is bad, the data is not consistent: very untrusted data. How many of you are application developers, programming in Java, .NET, whatever? And how many of you are working in data analytics, as data scientists or data engineers? Okay. The first conclusion is that there are a lot of people here who are not working. So if you are an application developer, if you think about what I just explained, being honest, you are generating untrusted data, because you really don't care about it. So what was the solution? We invented analytics. We move data from those untrusted sources to analytics, and we clean it, secure it, and then we can do things, because it is more trusted. So more or less this is the situation. But in this situation, because of the legacy technology I explained at the beginning, and this application development that does not care about data, the real effect of this division, of this untrusted data, is that traditional companies are very slow to use AI. You have to move the data from the sources, which are untrusted, to data analytics, clean it, do a lot of things; then you are able to build the AI model; and then you have to move the AI model back to the applications that are going to get the benefits from it. So we are super slow and very expensive, and 50% of the AI models you create will never be used in the business applications, because it is difficult to integrate them back. What are the technology companies doing? What is the difference? If you are going to remember just one message, this is the message.
What they are doing, if you think about any of these seven companies, is generating and collecting a lot of data. But not just any type of data: trusted data, with good quality. And with this data they can do things, and they are doing things: they use it massively. So they collect trusted data, and they use it to rule, to be the brain of, every business process of these companies. Netflix: if I am watching a series or a movie, they will use that to recommend another one. Google: if I click on any advertisement, they will keep showing me tennis things or golf things, because the ad was about golf or tennis. So they learned. They are collecting trusted data and using it inside their applications so that the applications become smarter. This is the real difference: with trusted data, they use AI inside their business processes to rule, to be the brain of, all their businesses. Every second, every time we use any of these companies, they become smarter. Your normal traditional bank or electricity company: when you are checking the bill or doing something with the bank's mobile application, are they becoming smarter? Are they learning something about you? No. In fact, that application is not generating trusted data from your interaction. So this is a huge difference. You cannot compete with a company that is getting smarter by the second while you are not. So how can we solve it? The first approach is: okay, let's do a big bang. Let's get rid of all that technology, all those business applications; let's refactor and rebuild them with AI inside, generating trusted data. Hooray. Who wants to do that? No one. Very risky, a long time, a lot of cost. So the approach in the market right now is what we call a trusted data fabric. And the first disruption for a trusted data fabric is this one: virtualization. You will hear a lot about virtualization. Why? Because your data is not trusted.
All the data you have in those business applications is generated data that is not trusted. With virtualization, you can self-discover all the data you have generated during the last 20, 30, 40 years, and then you create a data catalog. And in that data catalog, you can start to transform that data from untrusted to trusted. But the first step is to discover that data. I will give an example: GDPR. We have to comply with GDPR in continental Europe, and the first challenge was to know where your customer data is. In this mess, the real problem is that you don't know where the customer data is. How can you be compliant with GDPR if you don't know where the customer data is? So, virtualization: self-discover the data, and with this, try to transform data from untrusted to trusted. And because you have transformed that data to trusted, you can build the AI models, and with virtualization you can move the AI models back to the legacy applications you have built, which will get the benefits of the AI. So this is like a shortcut. When we talk about virtualization in any product, we are talking about something like this: because your data is a mess, we virtualize it with a data layer, in public cloud, private, or hybrid, to transform that data into something that is useful. So this is the first disruption: discover your data and make it useful. Okay, we have some benefits. Second disruption. The second disruption is really, really important. For trusted data to be real trusted data, you need meaning. Does your data have meaning right now? Does it relate to business terms? I will give a very easy example. Have you ever seen in a table a column with a very "descriptive" name, a column or a table called A023? Is that meaningful? And why do we find, in some business applications, when you are developing and using a SQL data store, names like A023 or whatever?
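As an aside, the discovery-and-catalog step just described can be sketched in a few lines. This is only an illustration of the idea, under my own assumptions; the class and field names here are hypothetical and do not belong to any real virtualization product.

```python
# Minimal sketch of a virtual data catalog: physical columns discovered
# across data stores are registered once, so a question like "where is
# my customer data?" (e.g. for GDPR) becomes a catalog lookup instead
# of a manual hunt. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatalogEntry:
    store: str                           # physical data store, e.g. "oracle_core"
    table: str
    column: str
    business_term: Optional[str] = None  # assigned later by ontology matching
    trusted: bool = False

class VirtualCatalog:
    def __init__(self):
        self.entries: list = []

    def discover(self, store, table, columns):
        """Register every column found while scanning a data store."""
        for col in columns:
            self.entries.append(CatalogEntry(store, table, col))

    def find(self, business_term):
        """Locate every physical column mapped to a business term."""
        return [e for e in self.entries if e.business_term == business_term]

catalog = VirtualCatalog()
catalog.discover("oracle_core", "CLIENTS", ["A023", "NAME", "ADDR"])
catalog.discover("mainframe", "CUST01", ["ZIP", "CNAME"])

# Later, matching assigns meaning to cryptic physical names:
catalog.entries[0].business_term = "postal_code"   # A023
catalog.entries[3].business_term = "postal_code"   # ZIP

print([f"{e.store}.{e.table}.{e.column}" for e in catalog.find("postal_code")])
```

Once every column is in one catalog, "find all customer data" is a single query, wherever the data physically lives.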
Because normally, when projects are in a rush, the first thing you are going to do is take shortcuts, and data is the second-class citizen in the equation. So you are going to use names that are not meaningful, and the metadata is not going to be good. So normally the first problem for trusted data is that your data has no meaning, no business meaning. How are we trying to solve this? We are trying to solve it with people. Is it possible to solve it with people? Think about it: we are trying to move to AI. If we are trying to move to AI, why not use AI to solve this? How are you solving it with people today? For sure you have in your companies a head of data, a chief data officer, data owners, data stewards, some initiative for data governance. Paco Nathan gave a fantastic talk about data governance; I recommend everybody listen to it. It is a trendy thing, and it is kind of boring. But why is every company worried about data governance? Because you have to give business meaning to your data. Well, with people that is not possible, or it is very, very difficult, and it is not cost-effective. So how are you solving this? What we are trying to do is help with AI. So what is the point here? As you can see at the bottom, in your companies you have a lot of people, data stewards, data owners, chief data officers, building the technical dictionaries, and then the business glossaries, the business terms. And then these people, in some way, are matching the business terms with the data they discovered. What is the cardinality of this problem? A big traditional company has, in its technical dictionaries, something between 100 and 1,000 data stores, 10 to 100 tables each, and 10 to 100 columns per table. So, between 1 million and 10 million columns in total. And how many business terms do you have?
In financial, telecom, utility, whatever, you have between 20,000 and 50,000 business terms. You cannot match 20,000 business terms against 1 million columns with people. People can only manage about five factors, and that is very smart people; normally it is between three and four. When we are talking about data governance, we have to manage something between 20 and 30 factors. So with people it will take a long time, it is very slow and very costly, and when you finish, because the data has changed, you have to start again. It is not sustainable. That's it. Does this sound familiar? Are you doing something like this, and it's taking years? I think yes. So how to solve it, how to help? Well, if you want to move to AI, use AI to solve the problem. The solution was: from the business terms, you create ontologies. An ontology is a formal definition of a business term. What is the most universal business term? Customer. So for customer, in the formal definition, you have attributes: the name, the address, the postal code, all those are attributes. And for the postal code, you have quality rules: five digits, and the digits have to be numbers. So what we are doing with the ontologies is building a vector that encodes the quality rules of an ontology. You can then put this vector against the data you have self-discovered in this hybrid virtualization, because in the hybrid virtualization you have already mapped all your data, untrusted and trusted. All your data is mapped here. So you put the quality vector here, find the data that is close, and you have a probability for that data to be a postal code. We are doing an automatic matching between business terms, ontologies, and the data you have. And with this, with AI, we are giving meaning to your data. And giving meaning to your data is the first step to transform that data from untrusted to trusted. Why?
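Before going on, the matching idea just described can be made concrete. This is a toy sketch under my own assumptions: each ontology attribute becomes a set of quality-rule predicates, and a discovered column's match score is the fraction of sampled values that pass all of them. The rules and the threshold are illustrative, not any real product's logic.

```python
# Toy ontology: each business term maps to its quality-rule predicates.
ONTOLOGY = {
    "postal_code": [
        lambda v: len(v) == 5,   # exactly five characters
        lambda v: v.isdigit(),   # all characters numeric
    ],
    "email": [
        lambda v: "@" in v,
        lambda v: "." in v.split("@")[-1] if "@" in v else False,
    ],
}

def match_score(values, rules):
    """Fraction of sampled column values passing every quality rule."""
    passed = sum(1 for v in values if all(rule(v) for rule in rules))
    return passed / len(values)

def classify(values, threshold=0.9):
    """Return the best-matching business term for a column, or None."""
    scores = {term: match_score(values, rules)
              for term, rules in ONTOLOGY.items()}
    term, score = max(scores.items(), key=lambda kv: kv[1])
    return term if score >= threshold else None

# A sample from the cryptically named column A023:
sample_a023 = ["28001", "08021", "41011", "28045"]
print(classify(sample_a023))   # -> postal_code
```

The column called A023 now has a probable business meaning, postal code, without a person ever looking at it.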
When we were talking about trusted data, I said it is not only quality rules, but quality rules are a good step. Once you know that this data is a postal code, you can apply the quality rules, give a score, and then also solve some of the quality problems this data has. So you can transform the data once the data has meaning. This column or table that was called A023: now it is not A023, it is postal codes. You apply the quality rules and you clean it. And then you apply the access and security rules for postal codes, because you know that A023 contains postal codes. So you have clear guidelines about access to the data, and you can transform data from untrusted to trusted. But in order to do that, the first step is to give meaning to the data you have. So this is what we are doing. With virtualization, you map the data you have. With ontologies, you match business terms with the data you discover. And once you have assigned business meaning to the data you have, you can solve most of the problems and transform the data you have been generating for the last 20, 30, 40 years from untrusted to trusted. Fantastic: we have trusted data. We are closer to these technology companies, because they really care about data and they are collecting trusted data, and we are transforming our history from untrusted to trusted. Is this enough? Of course not. What we were generating here was not only a way to transform data from untrusted to trusted; it was that business data layer. A business data layer is a data layer built on semantic ontologies. Semantic ontologies give meaning to the data, and relations. With this business data layer, once you have created it, you can change not only the data you have but the way you are doing IT: transform your IT. At the beginning I explained that when you are building an application, you don't care about data.
So in the applications that you are developing, you will generate untrusted data. After hearing this, maybe you will try to generate trusted data, but as soon as the project is in a hurry, your data will become untrusted again. So for us, the solution is that the applications you develop, the sources of the data, have to generate trusted data. And what is the situation in the market? The situation in the market is this one. Programming and code have evolved like this: most of you are working with object-oriented or functional programming. Object-oriented and functional programming have more business meaning, but you don't care about data or data stores, just programming. So in code we have moved towards business meaning. What about data? In data, we are still there: accessing data, cleaning data. We are more or less at the zeros and ones. So what is the evolution for data in companies? What should the evolution be? Objects and business meaning. Objects and business meaning in data is a semantic ontology layer: with ontologies, you are giving meaning and relations. So this is the natural evolution of data, but we are way behind. You are using data at the level of zeros and ones: raw data, cleaning data, without meaning. That should evolve. So how do you evolve really fast and make sure that evolution will help? With the business data layer. Once you have created the business data layer, a semantic ontology layer with business meaning, you can develop your applications against the business data layer. Yesterday in the Google talk, talking about AI, the easiest definition for AI should be this one: solve this goal, maybe risk scoring, with this data set. But because in data we are working at the zeros and ones, the current tools and technology are awful and we have to do a lot of things: clean the data, give meaning to the data, feature selection. And only after that can we apply algorithms. Because we are working with zeros and ones, it is not as simple as this statement.
What happens if you are using a trusted data fabric? With a trusted data fabric, you have a business data layer: a business-meaningful layer with semantic ontologies. So automated machine learning tools can do just that. Let's see it working. This is an example in which we are going to use the business data layer, with business meaning, and something called automated machine learning. Automated machine learning means generating AI models with minimum human intervention. So in reality, what we are doing is this: solve this goal with this data set. Is this possible right now? Is the technology able to do that? Yes, it is. The first thing: we are not working with your technical data, with that column or table called A023. We are working with a data set that has business meaning and ontology relations. And because it has business meaning, the automated machine learning can see the relations between the data and do, in an automated way, everything that a data scientist or data engineer normally does. So this is the challenge: with a business data layer and automated machine learning, we can beat, in five minutes, the work of any data scientist working for weeks. We will improve your model. And now some of you are thinking, okay, I am unemployed. No: we will help accelerate your work, and then you can fine-tune. What we are seeing here is just that: collecting the data, data with meaning, making feature selections. These are some of the things that take weeks for data scientists. Here we are doing feature selection automatically; the pink ones are feature combinations. And with all this, the system will apply 200 algorithms to find the best combination of features and algorithms, to get the best result. Why can we do that? Because the data set has meaning, and the data is trusted. So we can do it in five minutes and we can improve the work of any person. So this is not just a concept.
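The loop just described, try feature combinations against candidate algorithms and keep the best, can be sketched in miniature. This is my own toy illustration, not the tool shown in the demo: two trivial predictors stand in for the 200 algorithms, and for brevity it scores on the training set, so the nearest-neighbour predictor trivially wins.

```python
# Toy automated-machine-learning loop: enumerate feature subsets, fit
# each candidate model, keep the combination with the lowest error.
from itertools import combinations

def fit_mean(xs, ys):
    """Baseline: always predict the mean of the targets."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_1nn(xs, ys):
    """1-nearest-neighbour: predict the target of the closest row."""
    def predict(x):
        i = min(range(len(xs)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(xs[j], x)))
        return ys[i]
    return predict

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

def automl(rows, target, models):
    """Search over (feature subset, algorithm) pairs; return the best."""
    feature_ids = list(range(len(rows[0])))
    best = (float("inf"), None, None)   # (error, subset, model name)
    for k in range(1, len(feature_ids) + 1):
        for subset in combinations(feature_ids, k):
            xs = [[row[i] for i in subset] for row in rows]
            for name, fit in models.items():
                err = mse(fit(xs, target), xs, target)
                if err < best[0]:
                    best = (err, subset, name)
    return best

rows = [[1, 9], [2, 7], [3, 8], [4, 1]]   # two candidate features per row
target = [10, 20, 30, 40]
err, subset, model = automl(rows, target, {"mean": fit_mean, "1nn": fit_1nn})
print(model, subset)
```

A real tool adds cross-validation, hundreds of algorithms, and hyperparameter search, but the shape of the search is the same, and it only works cheaply when the data set already carries its meaning.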
We don't just build the business data layer and give it meaning; we build it and we use it. Because with the ontology relations, any machine learning, deep learning, or AI model that I build will be better, and we can automate most of the work. So what is the difference between doing this with people and doing it with AI in a trusted data fabric? The trusted data fabric does not give meaning and business terms just to document your data. Most of the data governance initiatives in the market are the most expensive documentation projects ever, because they document the data, but you cannot use the business terms. So what is the benefit? It is just a shiny object you are chasing. What we are talking about is: build a business data layer, give meaning, and use it. Your workflows, your data intelligence models, your AI: you will not build those with the technical data, you will build them with the business data layer. So you change the way you are doing IT. I will jump ahead to the third disruption. Now you have a business data layer giving meaning, semantic ontologies with relations over your data. And then you can change the way you develop applications: not only data intelligence models, applications. So how could you build applications? Well, as you can see, it changes. You can use different tools, your tools, but you will use the business data layer, the semantic ontologies with relations, to build those applications. I know you like to program; so do I, in Java or whatever. You don't care a lot about data. Okay: the business data layer will take care of it. In fact, most of you right now are developing distributed applications, so you are using containers and microservices, and this is super cool, but it is very expensive and it takes a long time; building a distributed application is really complex.
So with the business data layer, everything about the domain of that new application, the data with meaning and the access to the data, is solved by the business data layer. And because it has the ontologies and the meaning, every time you generate a piece of data, like an address, the business data layer will apply the quality rules automatically. I will repeat this: you don't have to program the quality rules. Also because you don't like doing it. The business data layer will execute them on every generation of data, because we know that the data you are generating is the address or the postal code or the name or whatever. The business data layer has the meaning and the quality rules, and will apply them at execution time. So it will force any application you develop to generate trusted data. So with this hybrid virtualization building a business data layer, you are able to transform your untrusted data into trusted data, and the applications you build using the business data layer will generate trusted data. It doesn't matter if you are generating awful code with bad quality: the business data layer will not allow it. That is the reason it has a middleware, and based on the business term it will apply the quality rules, the access control lists, the security, and all that metadata will move with the data. Maybe you are generating the postal code in this application and in another application, but because the business data layer maps that data to postal code, the access control lists, the security, and the quality rules will travel with the data. It doesn't matter where the data is. Even if the data is in an Excel spreadsheet, because the business data layer knows that it is a postal code, it will know who can access that data. And this is the only way to have trusted data. There is no other way, because otherwise the security, the meaning, and the quality are lost as soon as you move the data.
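The enforcement idea above can be sketched as a small write-path middleware. This is a hypothetical illustration under my own assumptions, not the product's actual mechanism: every write is intercepted, the column's business term is looked up, and that term's quality rules run before the value is stored, so an application cannot persist untrusted data even if it wants to.

```python
# Sketch of write-path quality enforcement by a business data layer.
# All names, rules, and mappings here are illustrative assumptions.
class QualityRuleViolation(Exception):
    pass

# Quality rules per business term (from the ontology).
RULES = {
    "postal_code": lambda v: len(v) == 5 and v.isdigit(),
    "email": lambda v: "@" in v,
}

# Physical column -> business term, produced by the ontology matching step.
MEANING = {("CLIENTS", "A023"): "postal_code"}

class TrustedWriter:
    """Middleware: every write is checked against its term's rules."""
    def __init__(self):
        self.store = {}

    def write(self, table, column, value):
        term = MEANING.get((table, column))
        if term and not RULES[term](value):
            raise QualityRuleViolation(f"{value!r} fails rules for {term}")
        self.store[(table, column)] = value

w = TrustedWriter()
w.write("CLIENTS", "A023", "28001")        # valid postal code: accepted
try:
    w.write("CLIENTS", "A023", "2800X")    # rejected automatically
except QualityRuleViolation as e:
    print("rejected:", e)
```

The application never mentions postal-code rules; the layer applies them because it knows what A023 means.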
So: virtualization, business meaning, mapping all the data. It doesn't matter where the data is created or where you move it; that is the only way to have trusted data in a sustainable way. So we have seen that to do AI it was something close to this business data layer; what about building an application? Is this just a concept? No, we can do it. This is a real application, digital onboarding, and you are generating a lot of components, transactional and so on, because you are using the business data layer. The development cost for five countries is only the development cost for one country, because once you develop the application, the business data layer will match the data of the other countries. So a single development can be reused in five different countries. Reusability with a business data layer is something you can achieve, and the same with time to market: you can go five times faster. Everything changes with this. Why? Because the business data layer decouples your development from the technical data stores, countries, or business units you are using. And it is the only way to really reuse components or complete applications. So this is one video, but I will go fast: the business data layer is generating the data objects, the microservices, the access to the data. And once you access or create any data, it applies the data quality rules. It generates the business domains; anyone working with distributed applications and microservices knows how important the business domains and these best practices are. So it is generating everything automatically, and at the end it is applying the quality rules. And you don't have to care about the quality rules, because the system knows that the data you are generating is postal codes and will apply the quality rules for postal codes. So you will generate trusted data; even if you don't want to, it will force you to. So this is a real example from one big bank.
They wanted to develop a global product for five countries: 41 million, 36 months. Do we have three years to launch a product to the market? No. So with a trusted data fabric, they developed the product for one country, and the same development was used in the other four countries; they only had to adapt it to local regulation. So it was 8 million and more or less nine months. And as you can see, when you have meaning and trusted data, everything changes, and because the application is generating trusted data, you can use AI just like the technology companies. So with this, how are we changing the situation? This is your situation: with a trusted data fabric, you have business meaning. So who can help? Your business experts. Your business experts can have an active role in IT. In fact, with automated machine learning and the business data layer, the business-meaningful data layer, they can even build the data intelligence models, the AI models; they can help you build applications. So you move to IT for business. Your business experts will have this active role in IT, and what we get is this: change the playground, just as I explained at the beginning. On the technology playground, can you win? No way. On the business playground, how many business experts do you have? Hundreds, thousands. With a business data layer, they can have an active role in IT and evolve IT exponentially, and on that playground you can win and you will recover some of the market share you are losing. So this is the way. And this has already been done in B2C. Yahoo was managing the Internet with folders, with people classifying the websites into folders: for example, Ferrari in the folder cars, sports cars, Italian cars, with people. And then Google said no, no, no: we will self-discover the Internet's websites, we apply AI, PageRank, to classify those websites in an automatic way, and then we govern the Internet. And this is history. So with a trusted data fabric, you can do the same for B2B.
And all of that: we are transforming the way we do data analytics and the way we develop applications, changing untrusted data into trusted data, to bring this to B2B. So don't just believe us. We are convinced that in 10 years, in all your companies, you will build applications and data intelligence models with a trusted data fabric, business data layers, business-meaningful data. But you can start doing it now. In our booth, you can go and build a component with a business data layer right now, with the tools we have seen. This is a video of some of the things that people who are here were doing just yesterday, using these tools, the business data layer, the trusted data fabric. You choose to create a data intelligence model or an application, and just with this, in 10 minutes, you create the component, you launch it, and you can reuse it. These are some of the numbers from just one day. In eight hours, working with a trusted data fabric, with a business data layer that gives business meaning to the data, we created hundreds of data intelligence components and business applications, each in less than 10 minutes. So this is the summary for the first day: 151 assets, components created with the trusted data fabric, 34 companies, and the average, I think, was something between 10 and 15 minutes. So don't just trust us: please go, use it, and see the way you can change your IT, because in 10 years everybody will be working like this, and everything is changing again. So thanks a lot.