Okay, so thanks, everyone, for making the time to join this presentation today. During this session, I'm going to introduce one of the applications we have implemented for one of our customers. The application makes use of text analytics and artificial intelligence to reduce the risk of GDPR breaches. But before diving into that, let me introduce myself. My name is Filippo Sassi. I am a senior software engineer. I've been working in the industry for quite a few years now, in companies like IBM, Concentrix, and obviously Version 1, which I joined in 2014. In my career I have covered a number of different roles: .NET web developer, Scrum Master, tech lead. Since 2019 I have been part of the Version 1 Innovation Labs, where I am now one of the leaders. Version 1 is an IT consultancy firm driving customer success through over 20 years of market leadership and innovation in IT services. Version 1 believes in modernizing, innovating, and accelerating our customers' business transformation. Our greatest strength is the balance in our efforts to keep growing on all three sides of our strategic triangle. The first side is customer success: making a real difference through long-term, outcome-focused relationships. The second is empowering people: selecting, empowering, and trusting people who are wired to deliver customer success. And the third side is a strong organization: a high-performing, financially strong organization of the highest integrity. We believe that this is what makes Version 1 different. And more importantly, our customers agree. On this slide are some stats about Version 1. The interesting thing, I suppose, is the fast growth rate in some of the figures. I'm not going to lie to you: to create this deck, I reused some of the slides from a previous presentation we ran in October 2020. At the time, this slide showed just over 1,300 employees; in just two quarters, we're already reaching 1.5K.
I think that, more than any other number, this demonstrates how Version 1 is growing while staying committed to our core values. DAFM is the Irish Government Department of Agriculture, Food and the Marine. DAFM's vision is to be an innovative and sustainable agri-food sector operating to the highest standards. DAFM is one of Version 1's oldest customers, and Version 1 provides many teams dealing with different DAFM schemes, applications, and more. One of these teams is the BPS team. BPS stands for Basic Payment Scheme. BPS is the largest payment scheme run by the department: specifically, the scheme is responsible for issuing grant funding to the value of €1.2 billion to 120,000 farmers, in line with European Union regulations. The team handles applications and payments of farmer grants through the BPS application, which can be accessed through modern digital channels, making the customer journey easier with less administrative overhead. In the last couple of years, DAFM has invested heavily in the OpenShift Container Platform. This choice was primarily justified by one of the key strategic aims of the department: to provide the capability for fast, flexible application deployment while remaining responsive to changing and emerging needs over time, all while focusing on small products that can be designed quickly, iterated on, and released often. In particular, the OpenShift Container Platform was a suitable choice for the project I'm shortly going to introduce, because there was real concern about using public cloud services to scan and analyze documents which might contain personally sensitive information. This solution reaffirmed the department's belief that the investment in the OpenShift platform would provide long-term strategic gains.
In line with the public service ICT strategy, DAFM is focused on digital transformation, including both front-end and back-office transformation, to deliver services for citizens, businesses, and the government. From May 2018, the General Data Protection Regulation (GDPR) came into effect, requiring businesses to protect the personal data and privacy of European citizens for any transaction that occurs within the European Union member states. In line with this regulation, one of DAFM's priorities for transformation was to protect personal data not only for DAFM's customers, but also for the customers of the public service as a whole. In particular, we considered the following use case. To receive grant payments, farmers must upload various documentation through the department website. These documents often contain personally sensitive information (PSI), which might not be indicated by the user. There is a checkbox on the form that indicates that the document contains PSI; if it is ticked, only certain grades of staff can access the document. However, very often the end users don't set the option correctly, and this leads to a situation whereby department staff read documentation to which they should not have access. Another challenge arises when agents acting on behalf of the user sometimes upload their own documentation, and this leads to approximately 60 major GDPR breaches every year. Whatever the source of the breach, both scenarios could lead to privacy violations and GDPR breaches due to staff accessing documents without sufficient clearance. These breaches require significant effort to address, and they are obviously taken very seriously by DAFM. The department wanted to understand how technology could be applied to assist, and to answer this question, the Version 1 on-site team at DAFM contacted the Version 1 Innovation Labs.
The Labs are a value-added service that Version 1 provides to its customers to explore disruptive technologies. A couple of points to note here. First, we work only with Version 1 customers: whatever we do, we do it for clients who are already within the Version 1 customer base. For them, we are a value-added service, so we are free of charge. That doesn't mean we are free of cost. Indeed, we expect to use their data. We expect to use their resources, which will have a particular impact on costs if we decide to go to the cloud. We expect to interview their employees to better elicit their requirements. We expect them to test the POV. And finally, we expect at least one person from the customer side to play the role of the product owner and to actively collaborate with us, almost on a day-to-day basis, to implement a proof of value. A proof of value is the same thing as a proof of concept, basically a fully working prototype; we just applied a semantic shift to highlight that what we do actually brings value to the customers' businesses. So far, we have implemented at least one POV in all the technological areas shown on the slide, the only exception being IoT. Some of those POVs were quite cool. I remember one of the first ones I worked on when I joined the Labs was a proof of value for a virtual reality application using an Oculus headset. For the same customer, we immediately implemented another POV, this time using augmented reality on an Android tablet, just to show them the different experiences. Both POVs were very well received by the customer, but we understood that to push this forward, to move it into production, and to provide the client with the wow factor they were looking for, we simply didn't have the right capabilities within the company.
That's because these technologies are quite niche and they require very advanced graphical skills, especially 3D graphics skills, almost at the level required in the gaming industry. So from 2020, we decided instead to focus on those technological domains where, first, we have plenty of expertise within the company, and second, we think our customers would benefit the most. Those domains are machine learning, artificial intelligence, and robotic process automation. The innovation engagement process with DAFM was exactly the same standard approach that any Version 1 customer follows when engaging with the Labs. The process is the following. It always starts with ideation: we are constantly talking with our customers to understand whether they are facing business problems which are not solvable by standard day-to-day technology. When we identify one of those problems, we start researching. We look for academic or industrial resources, and we run brainstorming and design-thinking sessions until we find a technology that could help solve the problem at hand. When we identify such a technology, we start experimenting with it. When we are happy enough, when we think we have found a potential solution, we formalize it into an innovation canvas. The canvas acts like a contract between us and the customer. The document contains information such as the problem we are trying to solve, the proposed solution, the people who will make up the development team, a timeline, and the metrics that will be used at the end of the project to determine its success. When all of this is agreed and the canvas is signed, we start the actual implementation. We follow an agile, iterative, and incremental methodology, namely Scrum. We take up to six bi-weekly sprints to implement the POV, but we won't do six just for the sake of it: if, at the end of a sprint during the sprint review, the customer agrees that we have solved the problem under investigation, we consider the value of the technology proven.
At that point, we get in touch with the rest of the Version 1 delivery teams to define a roadmap for moving the POV live. This is exactly the same process DAFM followed when engaging with us on this particular use case. And the outcome of the whole process is Smart Text. Using best-of-breed open source technology, Smart Text provides text analytics capabilities to extract meaningful insights from unstructured data: documents, images, PDFs, et cetera. These insights are the features later used for artificial intelligence modeling to ultimately classify whether or not a document contains personally sensitive information. Obviously, this is just one of many possible applications; Smart Text could be used in many other scenarios, and we will shortly see some examples. But for now, let me dive a little deeper into the components of the solution. The first one is OCR. OCR stands for optical character recognition, and this component extracts the textual content from the unstructured documents. This textual content is then utilized to derive useful metadata attributes via the other Smart Text components, which are sentiment analysis, topic modeling, semantic search, regular expression extraction, and named entity recognition. Each of these components is exposed as a separate API, ensuring loose coupling and easy recombination. The APIs use cutting-edge open source libraries with appropriate customization for these and other use cases. As an example of customization, we are currently retraining the open source machine learning models with a specific set of documents to make the models domain-specific. The Smart Text solution in DAFM is deployed on-prem, but all the components are deployed as containers to ensure portability of the deployment to the cloud too. From a deployment perspective, we already said that DAFM made a significant investment in an on-prem OpenShift Container Platform.
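As a small aside, the regular expression extraction step mentioned above can be illustrated with a minimal sketch. This is not the actual Smart Text implementation; the pattern names and the patterns themselves are hypothetical stand-ins for the kinds of personally sensitive identifiers such a component might flag in OCR-extracted text.

```python
import re

# Hypothetical patterns for illustration only; the real component
# uses customized, domain-specific extractors.
PATTERNS = {
    "ppsn": re.compile(r"\b\d{7}[A-W][A-IW]?\b"),       # rough Irish PPS number shape
    "iban": re.compile(r"\bIE\d{2}[A-Z]{4}\d{14}\b"),   # rough Irish IBAN shape
    "phone": re.compile(r"\b0\d{1,2}[ -]?\d{5,7}\b"),   # rough Irish phone shape
}

def extract_psi(text: str) -> dict:
    """Return the matches per category found in OCR-extracted text."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}

def contains_psi(text: str) -> bool:
    """Flag a document if any sensitive-looking pattern matches."""
    return any(matches for matches in extract_psi(text).values())

doc = "Herd number 123, applicant PPSN 1234567T, IBAN IE29AIBK93115212345678."
print(contains_psi(doc))  # True
```

In the real solution, outputs like these become features, alongside those from named entity recognition and the other components, for the downstream model that classifies whether a document contains PSI.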
As a consequence, we wanted Smart Text to utilize the power of the platform to demonstrate its value. And that turned out to be a great choice, as the OpenShift platform helped us solve some of the issues we could have faced otherwise. For instance, the Smart Text solution was designed to take advantage of the Python machine learning libraries, but this architecture was not supported in the DAFM infrastructure. The OpenShift platform allowed for the secure build and deployment of Red Hat-published containers, which would have been impossible otherwise given the available budget and time. Likewise, building our test and production environments for the project would normally have been another large cost, but this was easily overcome with OpenShift and image streams. The solution is currently live, actively mitigating GDPR risk for farmers and agents by flagging potential errors during document upload. This has enabled the department to switch from a reactive to a proactive approach: identifying data breaches, isolating them, and preventing them from occurring. This obviously reduces the administrative overhead and the lost business hours of employees having to resolve any potential breaches, and it also reduces reputational damage to DAFM. The project demonstrated that the department is leading the way in using cutting-edge open source technology such as OpenShift and natural language processing libraries. As far as the Labs are concerned, we were able to demonstrate our credibility in the areas of text analytics, machine learning, and artificial intelligence. The Smart Text solution is now a key piece of the Smart Action Suite that we are developing. We will shortly talk about the Smart Action Suite; here I would just like to say that since we implemented the solution, we have been having many conversations with our customers, and Smart Text has generated real interest.
We immediately understood that the ability to extract valuable insights and metadata from unstructured documents, be they forms, handwritten letters, images of documents, or whatever else, would be hugely valuable beyond the initial use case. For instance, for one of our customers in the UK, we have recently been implementing a document summarization tool, whose goal is to provide the key pieces of information from a set of documents to the end users without them having to read any of those documents. At the core of this solution there is Smart Text. We have also recently demonstrated it to many other clients, both in Ireland and in the UK. All in all, we think that this project is an excellent demonstration of how open source technology can be utilized and augmented to develop solutions which are comparable to those of the major cloud vendors. Indeed, we commissioned a report to compare the Smart Text solution with similar technologies from Azure and AWS, and this report showed that the performance figures for Smart Text are very much comparable to those of Microsoft Computer Vision and Cognitive Services on one side and AWS Textract and Comprehend on the other. Within DAFM, the Smart Text solution was the first application deployed on the OpenShift Container Platform, and as such it surfaced all the usual technical challenges of deploying onto a new platform. I was not directly involved in the original development, so I won't spend too much time here on the technical challenges and the subsequent learnings. However, talking with one of the main developers, I found it particularly interesting that one of the weakest points of the original implementation was the central role of the orchestrator component in the original architecture. Because of the orchestrator, that architecture was highly coupled, working through a set of well-defined steps to be executed together.
Being so, the orchestrator needed to know everything about everything else, making it a single point of failure: if the orchestrator goes down, everything goes down with it. So we looked at more modern architectural approaches, and in the end we went for a reactive architecture, which makes the individual components responsive to relevant changes in the data. The benefits of this architecture are many: responsiveness, resilience, elasticity. I previously mentioned the Smart Action Suite, so before concluding this presentation, please allow me to quickly introduce it to you. Earlier we looked at the standard innovation journey our customers follow when engaging with the Innovation Labs. The journey goes from ideation to the successful implementation of a POV. However, over time we noticed that many of our customers were facing similar problems, so instead of reinventing the wheel every time, we decided to start productizing our existing POVs and build what we call the Smart Action Suite. This is a suite of components which can be used either in isolation or, like LEGO bricks, combined together in different numbers to build many solutions applicable to different use cases and scenarios. Some of the components, like Smart Text and Smart Data Capture, have already been developed; the others will be implemented in the near future. The overall idea here is to provide our clients with a hyper-automation set of apps which empower their employees, allowing them to make better and more efficient decisions in a shorter time. In a nutshell, the key components are shown on the slide. We already talked about Smart Text, so I will just introduce a couple of the others. One is Smart FAQ, our smart bot providing organizations with an always-on, 24/7 service answering FAQs and customer queries. Another is Smart Data Capture, an app to support enterprise data capture requirements.
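Stepping back for a moment to the reactive architecture mentioned earlier: the core idea can be sketched as a minimal publish/subscribe event bus, where each component reacts to data changes it cares about, with no central orchestrator that must know every step. This is a simplified illustration under my own assumptions, not the actual Smart Text implementation; the topic names are hypothetical.

```python
from collections import defaultdict
from typing import Callable, Any

class EventBus:
    """Minimal pub/sub bus: components subscribe to topics and react
    to events, instead of being driven by a central orchestrator."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Each subscriber only knows its own topic; if one component is
        # down, the others keep working on the topics they handle.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
results = []

# Hypothetical pipeline: an OCR-like step reacts to uploads, a
# downstream consumer reacts to extracted text.
bus.subscribe("document.uploaded", lambda doc: bus.publish("text.extracted", doc.upper()))
bus.subscribe("text.extracted", lambda text: results.append(text))

bus.publish("document.uploaded", "scanned text")
print(results)  # ['SCANNED TEXT']
```

Because components communicate only through events, adding or removing a step means adding or removing a subscriber, which is what gives the architecture the responsiveness, resilience, and elasticity mentioned above.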
Then there is Smart Search, a solution providing intelligent document search, where a user can pose queries in conversational language and the right references from the documents are returned; Smart Automation, a best-of-breed automation tool to deliver hyper-automation through a combination of RPA and AI; and finally Smart Process Advisor, which is designed to guide staff through organizational processes, advising them at each step of the way. And that was all I wanted to share with you today. I hope you found it interesting. Thank you very much for your attention. If you have any questions, you can enter them in the chat below.