Good morning. My name is Uljan Sharka. I'm the founder and CEO of iGenius. I'd like to start with my takeaway from the conference so far: ROI. Everyone wants it, but can you actually have it? So why is that? We have consumed so much content, so much great content, during these days, and for me integration is the main challenge. We all know that. And adoption, of course. We're building amazing dashboards, we're building amazing NLQ experiences, but we're not getting the adoption that we're expecting from these products.

Almost 400 years ago, the philosopher John Locke said that knowledge comes from experience alone. I think this connects very well with what's happening today in BI. We have invested heavily over the past 20 years to create amazing datasets, and these datasets have become a goldmine for the company. So what's going wrong? What's holding back the potential is not the data and analytics investments. It's that business users are used to experiences that move at a completely different speed from what we have in the enterprise today. Think of connected experiences that are mobile and personalized: you go home and you speak with your coffee machine, and then you come back to the office and have to deal with a lot of complexity. This gap is killing the value we're trying to bring out there.

This is not about innovation; there is a lot of innovation everywhere. But leaders are asking themselves if this innovation is enough, and some of them actually fear they're going to go out of business because of AI. So how can you solve it? We need a knowledge upgrade. To John Locke's point, we need to make experience with AI safe and simple in order to make a difference. This is what we have been trying to do at iGenius since we started back in 2016. We're trying to contribute by combining data science, machine learning and advanced conversational AI to build a private AI brain that makes data human. We call this GPT for numbers. It starts with connecting your structured data sources, combining them with business logic, and then training through generative models to build a safe, private brain that business users can speak with.

We started one year after OpenAI. They went the way of text, image and video, working on large language models. When we were approaching language models, we had this session at the company where we asked: can a company run without a P&L, without payroll? Numbers are at the heart of the company. But how can we democratize these numbers for everyone, to unleash that potential? The way we're doing it is with a completely opposite approach to large language models. We call these small and wide language models. They start from an empty-box technology that you can control and trust, because you install it within your own IT. By connecting your data and inputting business logic through a self-service console, you can create this private brain and get the benefits of the future of AI today. This is designed for business use cases from the very beginning. When we think of language models, we think they are going to transform everything, but they are a very small component of the end-to-end complex system we need to democratize information. This is not just about making information accessible; it is about certifying it.
It is about making it comfortable for business users to use it anywhere without thinking about trust. We believe this needs to include at least three components. The first one, which we call business modeling, is a no-code experience for connecting the data sources and combining them with business logic to create a cocktail of metadata that generates the consumerized experience. Then you can literally talk with it and have a conversation with your data. But that's not enough, because we have been making so much investment in data science, and now we can have that in the mix as well. Imagine business users consuming data science models conversationally. That's huge. Data scientists, data analysts and data teams can focus on building value and let AI democratize that value at scale.

This is how we are closing the gap; it is one way of doing it, of course. We think it should be conversational, personalized and AI-powered, which means not just having users ask questions but also having the AI make the first step. When we think of prompt engineers, that is not democratizing AI, because some users will be able to get the most value out of it while others are left behind. This is why we think generative AI should be employed in different ways, decoupling the data analytics model in a way that empowers both user personas.

So let me show you how easy it is to speak with data. I'm going to pull out my phone now and ask a question to Crystal, which is our GPT for numbers. This is connected to Salesforce. "Can you show me sales versus margin for the last three months?" "Here's what I found for the last three months: total margin had a variation of minus 2.78% and total sales minus 0.88% over the previous year. Let's check out the comparison." So this is a multimodal AI. I can start analyzing my data over coffee in the morning, and when I'm done, my phone is back in my pocket and I reach my desk or an online meeting, it synchronizes in real time and, with the same level of user experience, makes that data accessible equally for everyone. And of course it's safe, because these are results that have been certified within the private AI brain.

So how do we do this? First of all, we start by not changing the way things work. Again, you have made so much investment over the past 20 years; we believe that with AI we can augment it. We elevate complexity to a virtual layer. We have a data fabric that sits alongside your infrastructure, no matter whether it is fully cloud-based, hybrid or private on-premises, and with a metadata-only approach we can integrate all of your data models. We go for real-time queries first, and then offer adaptive caching options to handle performance as you transition towards a more scalable data structure. If you have that in place, it's plug-and-play. Certifying the data and providing you with governance features is part of the data fabric mix as well, and everything else is generated completely automatically.

So this private AI brain for your business is designed to augment your existing investments, and what makes it special is a technology we handcrafted at iGenius, which we call the business knowledge graph. This is not a generic knowledge graph technology: the way it works is that it abstracts, through metadata, all the key components the conversational AI needs in order to turn information into conversation.
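To make that concrete, here is a minimal sketch of the kind of metadata "cocktail" business modeling might produce: one data source combined with business logic and the vocabulary users speak. This is a sketch under assumptions; the class names, the Salesforce table and the expressions are illustrative, not iGenius's actual format.

```python
# Illustrative sketch of "business modeling" metadata: a data source,
# business logic, and user vocabulary bundled into a conversation-ready
# unit. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str                  # how business users refer to it ("margin")
    expression: str            # the business logic behind the number
    synonyms: list = field(default_factory=list)

@dataclass
class Topic:
    """A conversation-ready KPI: source + business logic + vocabulary."""
    source_table: str          # queried in real time, never replicated
    metrics: list
    dimensions: list           # attributes users can group and filter by

sales_topic = Topic(
    source_table="salesforce.opportunities",   # hypothetical source
    metrics=[
        Metric("sales", "SUM(amount)", synonyms=["revenue", "turnover"]),
        Metric("margin", "SUM(amount - cost)", synonyms=["profit"]),
    ],
    dimensions=["region", "product_line", "close_date"],
)
```

Metadata like this is what would let the system answer "sales versus margin" without ever moving the underlying rows.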
The graph starts with what information we're going to connect: which department, which application we want to democratize, because we have that in place. Then, who is going to use that information? Who are the user personas? And last, what content do they need in order to get answers to their business questions? This cocktail of metadata is then sent to a model we call GPT for numbers, which explores that information multi-dimensionally through metadata and lets you connect these neurons through natural language. As you ask questions, the graph adapts to your questions while keeping the information certified. And as you consume information, it keeps learning from the end user, in a similar fashion to what happens with Netflix as you consume movies, or with TikTok and other consumer apps. It keeps generating metadata privately within your dedicated tenant, growing over time, so you can enjoy the power of transfer learning, incremental learning and all the cutting-edge techniques in AI. And this is not just about learning from the way users consume knowledge; it also learns from how the data is changing in the background, because that signal is another important one we can use to adapt the experience in real time.

So this composite AI approach is multimodal by design and by default. This is how it works. When you ask a question, we query the knowledge graph that was built automatically (I'm going to show you how we do that). Then, in real time, we query precisely the information needed to answer that specific business question. And we have a dynamic data representation engine that makes sense of that query and presents the result to the user in a form they can consume from anywhere (there's a small sketch of this flow below). This requires conversational intelligence, and NLP and NLG are key to enabling it.

But when we think of adoption, it's not just about making it simple; that alone won't get people to use it. We really need to make data proactive. We really need to have AI make the first step in the conversation to achieve that adoption, but most importantly also to achieve high engagement rates with our data. So we've built these proactive and contextual components, which include a recommendation engine that adapts in real time to how you consume information. Think of how it happens with Netflix today: you watch a movie, you close it, and when you come back it shows you the most relevant movies for you. We do this with data.

This case study involves a complex CRM data model connecting data across customer information, employee information, contracts, assets and more, as you can see on this slide. With that unified single source of truth, it is able to serve different user personas. Upper management at the top can measure the performance of the business. Managers can plan their operations. And financial advisors, in this case, are able to deliver an exceptional customer experience, because five minutes before meeting with a customer they have the data they need at their fingertips, whether from their mobile device, a tablet, or within Microsoft Teams (we have an integration for that as well). It's so easy. And we call these multimodal use cases, because they started from the CRM and are now expanding, for example, to finance and HR.
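Going back to the composite pipeline for a moment, here is a rough, self-contained sketch of the question-to-answer flow, with a toy dictionary standing in for the knowledge graph and a simple shape-based rule standing in for the dynamic data representation engine. Everything here is an assumption for illustration, not the production logic.

```python
# Toy question-to-answer flow: resolve the question against graph
# metadata, then pick a representation from the shape of the result.

GRAPH = {  # stand-in knowledge graph: user vocabulary -> certified metric
    "sales": "SUM(amount)",
    "margin": "SUM(amount - cost)",
}

def resolve(question: str) -> list:
    """Map words in the question to certified metrics in the graph."""
    return [m for m in GRAPH if m in question.lower()]

def represent(metrics: list, has_time_range: bool) -> str:
    """Dynamic data representation: choose a form from the result shape."""
    if has_time_range:
        return "time_series_chart"
    return "headline_number" if len(metrics) == 1 else "comparison_table"

question = "Can you show me sales versus margin for the last three months?"
metrics = resolve(question)                     # ['sales', 'margin']
print(metrics, "->", represent(metrics, "months" in question))
```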
So basically we make it possible to keep injecting knowledge into this knowledge graph, and to have business users, depending on their personas and permissions, consume it. How do we set this up on the back end? The data fabric I showed before starts with connecting and linking your data sources; we have connectors for that. Immediately after, we have a system that links permissions, so we know who is going to access the system, and then we create what we call topics. Topics are an advanced take on the KPI, designed for conversation. So within a four-step process, we start by connecting our data sources, then inject business logic through the no-code self-service platform, and then we tailor the experience to certify that information and make sure the answers provided by the AI are safe. This can be done in minutes. Of course, in the real world it will take days, but we have had no project that took more than 30 days to deliver end-to-end, potentially to thousands of users, if that is your goal.

Just to recap: business modeling is key here. We need to start from the integration. As much as we love AI, integration is breaking things today and holding back potential. We think generative AI should be used as much for data teams as it is for the frontline information consumer. No code here is a game changer, because teams can collaborate to inject that knowledge and then let generative AI turn it into a virtual private brain. Then we can consume it, and with the consumer apps we can really transform the way people work, because this is not about data. Remember, this is about people. No matter whether you have 10 lines in an Excel file or billions of rows in your data warehouse, it should work the same. This is how we deliver the iPhone-like experience for the business user.

And we need to get data science models out there. We have so much value, whether in predictions, clustering, correlation, risk management, you name it. All of those models today are synchronous. By decoupling this and putting a knowledge graph in the middle, we make possible something we call asynchronous data science. A business user can use natural language to inject parameters asynchronously into a data science model, and the model will automatically do some magic in the background: for example, extracting a time series, feeding it into different models, having those models compete with each other, and then, in a very simple way, sending a push notification to the end user to consume the result (see the sketch below). Every business user can do this on their own, while the data teams keep creating value and keep perfecting their data science models to deliver impact.

So when we think of ROI, let's think of ROI for both sides. We want and need ROI for the data teams as much as we need it for the business teams. These are some insights from customers that are using our products in production and have shared the impact they are getting. By a 70% increase in productivity, we mean that people can now free up time; we give people back time to focus on the quality side of their work instead of looking for information. Accelerating existing investments means that, through no-code and generative capabilities, all of that tedious work of training the systems, or tailoring them for a specific business case, gets automated. This is not magic; it's just automation.
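To picture asynchronous data science, here is a minimal, self-contained sketch: parameters arrive from a conversational request, two toy forecasters compete in a background job, and the winner's result is pushed back to the user. The models, the scoring and the notification are illustrative assumptions, not the actual implementation.

```python
# Toy "asynchronous data science": competing models run in the
# background, the best one wins, and the user gets a notification.
from concurrent.futures import ThreadPoolExecutor

def moving_average(series, window=3):
    return sum(series[-window:]) / window

def naive(series):
    return series[-1]          # "tomorrow looks like today"

def forecast_job(series, notify):
    """Score each candidate on the last known point; keep the winner."""
    candidates = {"moving_average": moving_average, "naive": naive}
    train, actual = series[:-1], series[-1]
    scores = {name: abs(fn(train) - actual) for name, fn in candidates.items()}
    best = min(scores, key=scores.get)
    notify(f"Forecast ready: {candidates[best](series):.1f} (model: {best})")

# A time series extracted from a natural-language request; the user
# moves on while the job completes and "pushes" its result.
sales = [120, 130, 125, 140, 138, 150]
with ThreadPoolExecutor() as pool:
    pool.submit(forecast_job, sales, print)
```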
So there is a new kid on the block, and we believe our GPT for numbers is going to be the next big thing in generative AI. And we're just getting started. What we have in production today is GPT N2.5, and we're working hard to release GPT N3.0 in Q3 of this year and GPT N4.0 in 2024. And yes, we love numbers; that was no coincidence. If you want early access to these features, you can scan this QR code and we'll make sure to include you in the early adopters list. Thank you so much. And let's open it up for questions now, as we have some time. We have someone with the microphone going around, so feel free to raise your hand. Otherwise, for those in a hurry who want lunch, we're giving you back 10 minutes.

There's a question there. Absolutely. Great question. And integration here is key. As much as we showed the front-end user experience, we have done a lot of work to make integration easy. The way we do this is by virtualizing complexity through metadata to map the different data assets, no matter where they are. As you connect, for example, different tables, which in a more traditional fashion you would join, doing all of that work yourself, here you're injecting entities into a graph. We have a tool that extracts these entities, the unique components of your data, the most atomic level of your data. If you think of a CRM data model, you're going to have 20 of those, even though you might have a lot more tables. Then we enable you to augment and certify those entities, both at a schema level and at a content level. This can be done in many ways. With no code, it means adding digital twins and adding metadata, so that we can figure out and translate requests to the data sources in real time as business users consume them. We also have a low-code component, a Python SDK, where you can do more. So there are some accelerators, and the short answer is yes, we've done work on that. Thank you.

Any more questions? There is a question there. Can you please? "How do you handle security? How do you show the right data to the right people?" So the question is how we handle security and how we show the right data to the right people. We do this by working on three different layers. First of all, this is a private, dedicated tenant, with the technology delivered as a whole to each single customer. It starts as an empty box, which means we're not sharing information across tenants; that is actually how the generative model works. We're not doing it just for security; we're doing it because the system is designed to work like that. We have this physical safety in the virtual world, because you can literally install it offline, without an internet connection, and start training from there. Then we federate that tenant with your IDP, so we understand your existing permissioning system, and we leverage that to map it to the content. This is how we make it safe in terms of who has control of the data, and how we make sure that only the right people can see the data they have access to. Then we certify it with a no-code experience, by extracting the entities I mentioned before and injecting business logic into those entities, so that every time the tool uses that information, it complies with the business logic we set.
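As a rough illustration of those certified entities, here is a self-contained sketch: many CRM tables collapse into a few atomic entities, and each entity carries schema-level and content-level rules that any answer must comply with. The table names, entities and rules are all hypothetical.

```python
# Toy entity extraction and certification: dozens of tables map to a
# handful of atomic entities, each augmented with rules the AI obeys.

CRM_TABLES = {  # table -> the atomic entity it belongs to (hypothetical)
    "contacts": "customer", "accounts": "customer", "addresses": "customer",
    "employees": "employee", "teams": "employee",
    "contracts": "contract", "contract_lines": "contract",
}

def extract_entities(tables: dict) -> set:
    """Many tables, few entities: the most atomic level of the data."""
    return set(tables.values())

def certify(entity: str, schema_rules: list, content_rules: list) -> dict:
    """Attach schema- and content-level rules to an entity."""
    return {"entity": entity, "schema": schema_rules, "content": content_rules}

print(extract_entities(CRM_TABLES))   # {'customer', 'employee', 'contract'}
customer = certify(
    "customer",
    schema_rules=["email must match contacts.email"],
    content_rules=["report active customers only, unless asked otherwise"],
)
```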
So we're basically creating these certified neurons, and from there the content you get is certified as well. And you know the saying with AI, right? I'm not going to say the first word, but basically, if you put in the wrong data, you're going to get the wrong answer. This is why we augment the data and make it right first; then we're able to deliver not 99.9% certified data but 100%, because when a request does not comply with the business logic, the system will come back to you and disambiguate, asking, for example, "Did you mean this or that?", or it will simply say there are no records for you, or that you don't have access. All of that is logged, which means the data teams and IT can audit it at any time, so we have that explainability component embedded in the system as well.

"I understand that metadata is very important. Is it possible for your tool to ingest or read the metadata from existing BI and metadata tools, like TAPTO, Informatica and things like that?" Yes, we do that with the low-code SDK. We're able to connect to these existing semantic layers and metadata layers, and we have done integrations that we can show you live as well. Maybe we can meet at the booth later on, or if you have some time, leave us your contact and we'll send you more information on that.

There is a question here. Jonathan, can you please? "My question is, can this work in on-premises environments, or does it have to be online only? And second, how securely can it be handled? Thank you." Absolutely. I'm not sure I got the first question, sorry. "Does it work on-premises or online?" Absolutely. This works as a cloud tool in the sense that we serve it as a SaaS application, managed by us by default, and it can connect via VPN to all of the data assets you have without replicating the data, because it queries in real time. But we do have customers that prefer to install it within their own private infrastructure, as a Kubernetes stack or a Docker container; we provide both. So we went for the full model: we let IT and compliance decide whether they want this to be completely managed by us, whether they want a flexible approach, or whether they want to manage it 100% on-site. This makes it private and safe to start with. Then, on the security level, we have lots of options, such as federating with your existing permissioning system, the IDP, which can be Azure, Google, or OAuth authentication via API. We support all of those enterprise features in a single-sign-on fashion, so that you can control who is seeing the data, and the application is transparent to that. If you don't have that model in place, we have built no-code tools that let you map the security and access levels at a row level, within the application, with a few clicks.
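As a closing illustration of that last answer, here is a minimal sketch of permission-aware querying: a role coming from the federated IDP maps to a row-level filter appended to every query, and anything out of scope is refused and logged for audit. The roles, filters and log shape are assumptions for illustration only.

```python
# Toy row-level security: map an IDP role to a row filter, apply it to
# every query, and log each decision so IT can audit it later.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

PERMISSIONS = {   # hypothetical role -> row-level filter
    "advisor": "region = :user_region",
    "manager": "team IN (:user_teams)",
    "executive": None,            # no row filter: full visibility
}

def scoped_query(base_sql: str, role: str):
    """Append the caller's row-level filter, or refuse and log."""
    if role not in PERMISSIONS:
        audit.info("denied: unknown role %r", role)
        return None               # surfaces as "you don't have access"
    row_filter = PERMISSIONS[role]
    audit.info("allowed: role %r, filter %r", role, row_filter)
    return base_sql if row_filter is None else f"{base_sql} WHERE {row_filter}"

print(scoped_query("SELECT SUM(amount) FROM sales", "advisor"))
```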