Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of healthcare field engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities, including data exploration, business intelligence, natural language processing, and machine learning directly within the fabric makes it faster and easier for organizations to gain new insights and power intelligent, predictive, and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries, from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare-focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about; that will be in our third segment. But for now, let's turn it over to Jess. Jess, good to see you.

Hi, yeah, thank you so much for having me. For this demo, we're really going to be bucketing the features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements, and analysis, and we'll see those throughout the demo as we go. So without further ado, let's jump into the demo, and you'll see my screen pop up here. I actually like to start at the end of the demo, so I'll begin by illustrating what an end user is going to see, and don't mind the screen, because I gave you a little sneak peek of what's about to happen. Essentially, what I'm going to be doing is using Postman to simulate a call from an external application. We talked about being in the healthcare industry; this could be, for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care, or some other kind of application. So we might be pulling information, in this case, from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software, or adverse reaction warnings from a clinical risk grouping application, and so much more. So I'm really going to be simulating a patient logging in on their phone and retrieving this information through this Postman call.

What I'm going to do is just hit send. I've already preloaded everything here, and I'm looking for information where the last name of this patient is Simmons, and their medical record number, their patient identifier in the system, is 32345. As you can see, I have this single JSON payload that showed up here of relevant clinical information for my patient whose last name is Simmons, all within a single response. So fantastic, right? Typically, though, when we see responses that look like this, there is an assumption that this service is interacting with a single backend system, and that single backend system is in charge of packaging that information up and returning it back to the caller. But in a smart data fabric architecture, we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background.
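For reference, a call like the one Jess sends in Postman can be reproduced in a few lines of Python. This is only a minimal sketch: the host, path, and parameter names are hypothetical stand-ins, since the demo doesn't show the exact URL.

```python
import requests

# Hypothetical endpoint for the patient data retrieval API shown in the demo;
# the host, path, and parameter names are illustrative stand-ins.
BASE_URL = "https://fabric.example.org/api/patient-data"

response = requests.get(
    BASE_URL,
    params={"lastName": "Simmons", "mrn": "32345"},  # the patient from the demo
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# One aggregated JSON payload, assembled from every connected clinical system.
summary = response.json()
print(list(summary.keys()))
```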
What you're looking at here is our mission control center for our smart data fabric. On the left, we have our APIs that allow users to interact with particular services. On the right, we have our connections to our different data silos. And in the middle here, we have our data fabric coordinator, which is going to be in charge of the refinement and analysis, those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service, and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases, and to that end, we also support full lifecycle API management within this platform. When you're dealing with APIs, I always like to make a little shout-out on this: you really want to make sure you have a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In the IRIS platform, which we're talking about today, we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what context.

Can I just interrupt you for a second? You were showing, on the left-hand side of the demo, a couple of APIs. I presume that can be a very long list. What do you see as typical?

You could have hundreds of these APIs, depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here.

So my question is, obviously security is critical in the healthcare industry, and API security is a really hot topic these days. How do you deal with that?

Yeah, and I think API security is interesting because it can happen at so many layers. There's interaction with the API itself: can I even see this API and leverage it? And then within an API call, you have to deal with which endpoints, or which kinds of interactions within that API, I am allowed to do, and what data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So the way that we handle that is, like I said, the same thing at different layers. There is access to a particular API, which can happen within the IRIS product, and we also see it happening within an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So that role-based access control becomes very important in assigning roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of security.

And that's been designed in. It's not a bolt-on, as they like to say.
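To make that layered idea concrete, here is a minimal sketch of role-based checks at two layers: one gating access to the API itself, one filtering which fields a consumer may see. This is purely illustrative, not the IRIS security API; the role and field names are hypothetical.

```python
from functools import wraps

# Illustrative role model (hypothetical, not the IRIS security API):
# which roles may call which services, and which fields each role may see.
SERVICE_ROLES = {"patient-data-retrieval": {"clinician", "patient"}}
FIELD_ROLES = {
    "demographics": {"clinician", "patient"},
    "clinical_notes": {"clinician"},  # consent-sensitive: clinicians only
}

def requires_role(service):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_roles, *args, **kwargs):
            # Layer 1: may this consumer interact with the API at all?
            if not user_roles & SERVICE_ROLES.get(service, set()):
                raise PermissionError(f"no access to {service}")
            return fn(user_roles, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("patient-data-retrieval")
def get_patient_summary(user_roles, record):
    # Layer 2: filter the returned fields by the caller's roles.
    return {k: v for k, v in record.items() if user_roles & FIELD_ROLES.get(k, set())}

# A patient sees demographics but not clinician-only notes.
print(get_patient_summary({"patient"}, {"demographics": "...", "clinical_notes": "..."}))
```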
Okay, can we get to collect now?

Of course. We're going to move on to the collection piece at this point, which involves pulling information from each of my different data silos to create an overall aggregated record. Commonly, each data source requires a different method for establishing connections and collecting this information. For instance, interactions with an EMR may require leveraging a standard healthcare messaging format like FHIR. Interactions with a homegrown enterprise data warehouse may use SQL. For cloud-based solutions managed by a vendor, you may only be allowed to use web service calls to pull data. And my session is about to log out, so I'm going to keep it going here. So it's incredibly important that your data fabric platform has the flexibility to connect to all of these different kinds of applications and data sources, in all of these different kinds of formats, over all of these different kinds of protocols.

So let's think back on our example here. I had four different applications that I was requesting information from to create the payload we saw initially. Those are listed here under this operation section. These are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is that it has an embedded interoperability platform, so there are all of these native adapters that can support the common connections we see for different kinds of applications. Whether it's REST or SOAP or SQL or FTP, regardless of the protocol, there's an adapter to help you work with it. And we also think of the formats that we typically see data coming in as in healthcare: we have HL7, we have FHIR, we have CCDs. Across the industry, JSON is really hitting the market strong now, along with XML payloads and flat files. We need to be able to handle all of these different kinds of formats over all of these different kinds of protocols.

To illustrate that, if I click through these, when I select a particular connection, on the right-side panel I'm going to see the different settings associated with that connection that allow me to collect information back into my smart data fabric. In this scenario, my connection to my ChartScript application communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application, I'm using a SQL-based connection. When I'm connecting to my EMR, I'm leveraging a standard healthcare messaging format known as FHIR, which is a REST-based protocol. And when I'm working with my health record management system, I'm leveraging a standard HTTP adapter. So you can see how we can be flexible when dealing with these different kinds of applications and systems. It then becomes important to be able to validate that you've established those connections correctly, and to be able to do it in a reliable and quick way, because if you think about it, you could have hundreds of these different kinds of applications built out, and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications and put in, for instance, my patient's last name and their MRN, and make sure that I'm actually getting data back from that system. It's a nice little sanity check, as we're building out the data fabric, to ensure that we're able to establish these connections appropriately.
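As a rough illustration of what "the same lookup over different protocols" means in practice, here is a sketch of two of the four connection styles: a FHIR REST search and a SQL query. Endpoints, table names, and drivers are hypothetical stand-ins for whatever the turnkey adapters are actually configured against.

```python
import sqlite3  # stand-in for the real SQL driver (e.g., pyodbc) behind the adapter

import requests

def fetch_from_emr_fhir(last_name, mrn):
    # The EMR exposes a FHIR server: a REST search returning a FHIR Bundle.
    # The base URL is a hypothetical placeholder.
    url = "https://emr.example.org/fhir/r4/Patient"
    resp = requests.get(url, params={"family": last_name, "identifier": mrn}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # FHIR Bundle as JSON

def fetch_from_risk_grouping(last_name, mrn):
    # The clinical risk grouping application is reached over a SQL connection;
    # table and column names are illustrative.
    with sqlite3.connect("risk_grouping.db") as conn:
        cur = conn.execute(
            "SELECT warning FROM adverse_reactions WHERE last_name = ? AND mrn = ?",
            (last_name, mrn),
        )
        return [row[0] for row in cur.fetchall()]
```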
So turnkey adapters are fantastic, and as you can see, we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So why don't we go one step further and talk about doing something custom, something innovative. It's important for users to have the ability to develop and go beyond an out-of-the-box or black-box approach, to build things that are specific to their data fabric or to a particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you not only get an opportunity to view how we're establishing these connections or how we're building out these processes, but you have the opportunity to inject your own kind of processing, your own kinds of pipelines, into this. As an example, you can leverage any number of different programming languages right within this pipeline, and so I went ahead and injected Python. Python is a very up-and-coming language; we see more and more developers turning towards Python to do their development, so it's important that your data fabric supports the kinds of developers and users that have standardized on these kinds of programming languages. This particular script, as you can see, actually calls out to our turnkey adapters. So we see a combination of out-of-the-box code, provided in the IRIS data fabric platform, combined with organization-specific or user-specific customizations that are included in this Python method. It's a nice little combination of how we bring the developer experience in and mix it with the out-of-the-box capabilities that we can provide in a smart data fabric.
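The demo doesn't show the script itself, so the following is only a guess at the shape of such a method: custom Python wrapped around a platform-provided adapter call. The `adapter` object and every field name here are hypothetical.

```python
# Hypothetical shape of a Python method injected into a fabric pipeline.
# "adapter" stands in for a platform-provided (turnkey) outbound adapter;
# everything around it is organization-specific custom logic.

def enrich_transcription_notes(adapter, record):
    # Out-of-the-box piece: the turnkey adapter makes the downstream call.
    raw = adapter.send_request(record["mrn"])

    # Custom piece: organization-specific refinement layered on top.
    notes = [n for n in raw.get("notes", []) if not n.get("draft")]
    record["clinical_notes"] = sorted(notes, key=lambda n: n.get("date", ""))
    record["notes_source"] = "transcription-system"
    return record

# Stub adapter so the sketch runs outside the platform.
class StubAdapter:
    def send_request(self, mrn):
        return {"notes": [{"date": "2023-01-05", "text": "..."},
                          {"date": "2023-01-02", "text": "...", "draft": True}]}

print(enrich_transcription_notes(StubAdapter(), {"mrn": "32345"}))
```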
Wow. Yeah, I'll pause; there's a lot here. Actually, if I could, I just want to play that back. We went through the connect and the collect phases, and we're going into refine, so it's a good place to stop. Before we get there: we heard a lot about fine-grained security, which is crucial. We heard a lot about different data types and multiple formats. You've got the ability to bring in different dev tools. We heard about FHIR, which is of course big in healthcare.

Absolutely, it's the standard.

And then SQL for traditional kinds of structured data, and web services like HTTP, you mentioned. So you have a rich collection of capabilities within this single platform.

Absolutely. And I think that's really important when you're dealing with a smart data fabric, because what you're effectively doing is consolidating all of your processing, all of your collection, into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges, and you've got to pack that platform with as many of these features as you can to consolidate that processing.

All right, so now we're going into refine. We're going into refinement, exciting. So how do we actually do refinement? Where does refinement happen, and how does this whole thing end up being performant?

Well, the key to all of that is this SDF coordinator, which stands for Smart Data Fabric coordinator. What this particular process is doing is essentially orchestrating all of the calls to all of these different downstream systems. It's collecting that information, aggregating it, and refining it into that single payload that we saw get returned to the user. So really, this coordinator is the main event when it comes to our data fabric. And in the IRIS platform, we actually allow users to build these coordinators using web-based toolsets to make it intuitive. We can take a sneak peek at what that looks like, and as you can see, it follows a flowchart-like structure. There's a start, there's an end, and then there are these different arrows that point to different activities throughout the business process. There are all these different actions being taken within our coordinator; you can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have this sync call at the end that is in charge of making sure that all of those responses come back before we package them together and send them out. This becomes really crucial when we're creating that data fabric. And this is a very simple data fabric example, where we're just grabbing data and consolidating it together, but you can have really complex orchestrators and coordinators that do any number of different things. For instance, I could inject SQL logic into this, I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. Like I said, we have a very simple process here that's just calling out, grabbing all those different data elements from all those different data sources, and consolidating them. We'll look back at this coordinator in a second, when we make this data fabric a bit smarter and introduce that analytics piece to it. So this is in charge of the refinement, and at this point we've looked at connections, collections, and refinements. Just to summarize what we've seen, because I always like to go back and take a look at everything: we have our initial API connection, we have our connections to our individual data sources, and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload.
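The fan-out-then-sync pattern the coordinator implements can be sketched in plain Python with asyncio. This illustrates the pattern only, not the IRIS business process tool itself; the four source names are just the demo's systems.

```python
import asyncio

async def call_source(name: str, mrn: str) -> dict:
    # Stand-in for one outbound adapter call to a downstream system.
    await asyncio.sleep(0.1)  # simulate network latency to the data silo
    return {name: f"data for patient {mrn}"}

async def coordinate(mrn: str) -> dict:
    # Fan out one action per data source, like the arrows in the flowchart.
    sources = ["emr_fhir", "chartscript", "risk_grouping", "health_records"]
    calls = [call_source(s, mrn) for s in sources]

    # The "sync" step: wait for every response before packaging them together.
    responses = await asyncio.gather(*calls)

    # Refine: merge the per-source results into one aggregated payload.
    payload = {}
    for r in responses:
        payload.update(r)
    return payload

print(asyncio.run(coordinate("32345")))
```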
As you can imagine, there's a lot going on behind the scenes of a smart data fabric; there are all these different processes interacting. So it's really important that your smart data fabric platform has really good traceability and really good logging, because you need to be able to know, if there was an issue, where that issue happened, in which connected process, and how it affected the other processes related to it. In IRIS, we have this concept called a visual trace, and what our clients use it for is basically to step through the entire history of a request, from when it initially came into the smart data fabric to when data was sent back out. So I didn't record the time, but I bet if you had, you'd see it was this time that we sent the request in, and you can see my patient's name and their medical record number here, and you can see that it instigated four different calls to four different systems, represented by these arrows going out. We sent something to ChartScript, to our health record management system, to our clinical risk grouping application, and to my EMR through its FHIR server. Every outbound application gets a request, and we pull back all of those individual pieces of information from all of those different systems and bundle them together. And for my FHIR lovers, here's the FHIR bundle that we got back from our FHIR server. So this is a really good way of validating that I am appropriately grabbing the data from all of these different applications and ultimately consolidating it into one payload. Now, we change this into a JSON format before we deliver it, but this is those data elements brought together. This screen would also be used for seeing things like error trapping, errors that were thrown, alerts, and warnings; developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one-stop shop for understanding what's happening behind the scenes with your data fabric.

That's your: who did what, what went where, what did the machine do, what went wrong, and where did it go wrong?

Exactly, right at your fingertips. And I'm a visual person, so a bunch of log files is not the most helpful for me; being able to see that this happened at this time, in this location, gives me the understanding I need to actually troubleshoot a problem.

This business orchestration piece, can you say a little bit more about that? How are people using it? What's the business impact of the business orchestration?

The business orchestration, especially in the smart data fabric, is really that crucial part of being able to create a smart data fabric. Think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data. It's bringing data in, it's analyzing that information, it's transforming that data into a format that your consumer is going to understand, and it's doing any additional injection of custom logic. So really, your coordinator, that orchestrator that sits in the middle, is the brains behind your smart data fabric.

And this is available today? It's all available today?

Yeah, it all works, and we have a number of clients that are using this technology to support these kinds of use cases.

Awesome demo. Anything else you want to show us?

Well, we can keep going; I have a lot to say. But really, this is our data fabric, and the core competency of IRIS is making it smart. So I won't spend too much time on this, but essentially, if we go back to our coordinator here, we can see that original pipeline, where we're pulling data from all of these different systems, collecting it, and sending it out. But then we see two more steps at the end here, which involve getting a readmission prediction and then returning that prediction. So we can not only deliver data back as part of a smart data fabric, but we can also deliver insights back to users and consumers based on the data we've aggregated as part of a smart data fabric. In this scenario, we're actually taking all of that data we just looked at and running it through a machine learning model that exists within the smart data fabric pipeline, producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days, which is a typical problem that we see in the healthcare space. What's really exciting about what we're doing in the IRIS world is that we're bringing analytics close to the data with integrated ML. In this scenario, we're creating the model, training the model, and then executing the model directly within the IRIS platform. So there's no shuffling of data, there are no external connections to make this happen, and it doesn't really require having a PhD in data science to understand how to do it. It leverages really basic SQL-like syntax to construct and execute these predictions.
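To give a flavor of that SQL-like syntax, here is a sketch of the create/train/predict flow run through a generic Python DB-API cursor. The table and column names are hypothetical, and the statements follow the documented style of InterSystems IntegratedML rather than anything shown on screen in the demo.

```python
# Sketch of the SQL-like create/train/predict flow, in the documented style of
# InterSystems IntegratedML. Table and column names are hypothetical; "cursor"
# is any DB-API cursor connected to IRIS.

def build_readmission_model(cursor):
    # Define a model over historical encounters (hypothetical table),
    # predicting a 30-day readmission flag.
    cursor.execute(
        "CREATE MODEL ReadmissionModel PREDICTING (WillReadmit30d) "
        "FROM HistoricalEncounters"
    )
    # Train it in place: no shuffling data out to an external ML service.
    cursor.execute("TRAIN MODEL ReadmissionModel")

def readmission_score(cursor, mrn):
    # Execute the model inside the fabric pipeline to score one patient.
    cursor.execute(
        "SELECT PREDICT(ReadmissionModel) FROM CurrentEncounters WHERE MRN = ?",
        (mrn,),
    )
    return cursor.fetchone()[0]
```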
So it's going one step further than the traditional data fabric example, introducing the ability to deliver actionable insights to our users based on the data that we brought together.

Well, that readmission probability is huge, right? Because it directly affects the cost for the provider and the patient. If you can anticipate the probability of readmission, and either do things at that moment, or as an outpatient perhaps, to minimize that probability, then that's huge. That drops right to the bottom line.

Absolutely, absolutely. That really brings us from that data fabric to that smart data fabric at the end of the day, which is what makes this so exciting.

Awesome demo. Thank you. Fantastic. If people want to get in touch with you?

Oh yes, absolutely. You can find me on LinkedIn, Jessica Jowdy, and we'd love to hear from you. I always love talking about this topic, so I'd be happy to engage on that.

Great stuff. Thank you, Jessica, appreciate it.

Thank you so much.

Okay, don't go away, because in the next segment we're going to dig into the use cases where data fabric is driving business value. Stay right there.