Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for joining the latest installment of the DataVersity webinar series, Data Science and Analytics, brought to you in partnership with First San Francisco Partners. Today, Kelly O'Neill and John Ladley will discuss big data as a gateway to knowledge management. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. To open the chat and Q&A panels, go to the bottom middle of your screen to find those icons; the Q&A can also be found by clicking the icon that looks like three little dots. We will be collecting questions via the Q&A panel in the bottom right-hand corner of your screen. Or, if you'd like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag #DSAnalytics. If you'd like to chat with us or with each other, we certainly encourage you to do so; again, just click the chat icon in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar.

Now let me introduce our series speakers, Kelly O'Neill and John Ladley. John is a business technology thought leader and recognized enterprise information management authority. His 30 years of experience include planning, project management, implementing information systems, and improving IT functions. John writes and speaks on a variety of topics and enjoys sharing his expertise on strategic planning, data governance, and practical technology applications that solve business problems. Kelly is the founder and CEO of First San Francisco Partners, an information management consulting firm. She is a veteran industry leader, speaker, author, and trainer. Kelly is passionate about helping companies leverage the value of data, empowering them to derive insights that inform decision-making and improve results. And with that, I will turn it over to Kelly and John to get today's webinar started.

Hello and welcome. Hello. Good morning, good afternoon, good evening. I hope everyone had a wonderful Halloween yesterday, the best holiday of the year. John, did you? It was the perfect kind of Halloween I like to have. Very good. We'll leave it there.

Well, today is a follow-up to our September webinar on advanced databases and knowledge management. We're extending that to talk specifically about how big data and knowledge management work together, and how data science and artificial intelligence techniques can be used to automate a process that has traditionally been highly manual. Pundits are saying that knowledge management might be one of the areas that is truly transformed by big data, AI, and machine learning. So that's what we're going to talk about today. It's really exciting to continue to dive into this subject.

We'll start with a very brief overview of what knowledge management is. In 1998, Gartner defined knowledge management as "a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously uncaptured expertise and experience in individual workers."
So I wanted to start by reading that, because I think that definition is important, and I think the fact that it dates to 1998 is important too. We're going to talk about that a little bit in our presentation: how knowledge management has changed over the years, and the scope of current knowledge management capabilities and technologies. That will lead us into use cases that are driven by analytics and big data. Then we'll explore how companies are trying to experiment and innovate, and what we see coming in the future around knowledge management and the usage of those techniques. And we will, of course, wrap up with best practices and takeaways. We will monitor the Q&A and the chat throughout the webinar; as appropriate, we can take questions during the webinar, and we can also take questions at the end, to make sure we get through as many as possible.

Today we are going to follow an interview style, similar to what we did with George Juhas in our October webinar. It seemed to go very well, so we thought we would repeat it today. I will be the interviewer, John will be the interviewee. Does that work? Works for me. Fantastic.

Okay. So I started with a definition from Gartner in 1998, and knowledge management was a buzzword back then. It was getting a lot of attention from the big consulting companies and the thought leaders; we see Davenport and Prusak on the slide here. So maybe give us a little background on what was happening in the 90s, and the early aughts I should say. Why was knowledge management really a buzzword then, getting so much attention?

It was kind of an offshoot of understanding the upcoming capabilities of large amounts of data, and the data warehouse movement was in full force; in fact, it was getting ready for an upheaval into a second generation. And Ikujiro Nonaka, that's the name down there, had written a bunch of seminal works about organizational learning: organizations being able to learn, and matching up strategy and what you know with what you want to accomplish. Larry Davenport took that and wrote the seminal business version of that work. And it was all around: what don't we know, what should we know, we need to capture it. The drivers behind that were very legitimate. We were starting to see that data volumes were going to become overwhelming; the first books about how to deal with lots and lots of data were coming out at this time. There was also the understanding that 80% of the content in any organization hadn't been addressed at all during the data warehouse movement, because it was unstructured. So that was starting to be kicked around. There was a very deep concern, which has been delayed but I think is still relevant, that as the boomers retire, a lot of expertise walks out the door. It's been delayed because they all leave on Friday and come back on Monday as consultants, but sooner or later that will come home to roost. And then the big brass ring is reusing what you know: don't repeat lessons, don't climb the same learning curves over and over and over again. So that's where it was all coming from.
It was also considered for a little while to be the next big thing, because a lot of money was made off of data warehousing and BI. You know, we had data, then we got information, so obviously the next thing after information is knowledge, and that was going to be a really, really big market as well. So a lot of people paid a lot of attention to this in the late 90s, and that's where it became a relatively common word.

Great. Sorry, searching for my mute button; there we go. Well then: knowledge management was really focused on organizational learning and collaboration, pulling information from people's heads and trying to get at that tacit knowledge. But here we're looking at how this transitioned into business intelligence. So can you talk a little bit about why business intelligence became the capability that extended the use case for knowledge management?

Yeah, and I'm sorry, earlier it wasn't Larry Davenport; it's Tom Davenport, with Larry Prusak. We'll cover Professor Davenport's reasoning here in a little bit. When knowledge management burst on the scene, the solution areas covered human capital, which is capturing what people know and organizations getting smarter; working collaboratively, and as we've said before in this series, collaboration is not the same as cooperation, it is an actual learned behavior; and then, where is the stuff you know? And then it was: well, BI tells me this much, but there's so much more possible out there. Even back then, in work I was doing in organizations in the early 90s, we were doing predictive and descriptive analytics, so those things aren't new, but we were beginning to see this bright light shining through the cracks, right? And we had the technology to start to bring in unstructured data, because the initial ideas and concepts that ended up becoming Hadoop were getting kicked around, and there were several attempts at text mining long, long before Hadoop and multiple-format handling came along.

The other thing was, with BI we were getting reports through the data warehouse, but we weren't working any better. We were getting better reports, but all of the promised benefits weren't exactly happening. And with some exploration that I and some of my peers did back then, we discovered that "actionable" was a lot more than just delivering a really excellent scorecard or something like that. You had to actually look ahead and say: how are you going to behave once you get this data? And as you learn more and more from this really excellent data you're supposed to get, how do you keep that insight? A lot of terms were kicked around, like "collaborative BI" and things like that. And that, of course, means we've got to track and manage it. And then, of course, someone says: well, wait a minute, we're going to have an algorithm, and that algorithm will say every time these conditions exist, we do this. So let's close the loop. What we now call machine learning, pattern recognition, and AI, we were just calling "closed-loop agents" back then, and it was of course a lot cruder than what we have now. But the whole BI world really had that stuff in it, so it was able to grow. And again, that's our whole topic here.
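To make that closed-loop agent idea concrete, here is a minimal sketch in Python. It is a hedged illustration, not anything the speakers showed; the rule, the metric names, and the thresholds are all invented. The point is just the shape of "every time these conditions exist, we do this."

```python
# A minimal sketch of a "closed-loop agent": condition-action rules
# evaluated against a metrics snapshot. All names and thresholds
# here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # test against a metrics snapshot
    action: Callable[[dict], None]     # what to do when the condition holds

def alert_procurement(metrics: dict) -> None:
    print(f"Reorder triggered: stock level is {metrics['stock_level']}")

rules = [
    Rule(
        name="low-stock-reorder",
        condition=lambda m: m["stock_level"] < m["reorder_point"],
        action=alert_procurement,
    ),
]

def run_agent(metrics: dict, rules: list[Rule]) -> None:
    """Close the loop: whenever a rule's condition holds, take its action."""
    for rule in rules:
        if rule.condition(metrics):
            rule.action(metrics)

run_agent({"stock_level": 12, "reorder_point": 50}, rules)
```

Modern machine learning replaces the hand-written condition with a learned model, but the loop itself (observe, decide, act) is the same.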
So you can see there's a natural entry point, a natural intersection, between knowledge management as we defined it almost 20 years ago and our advanced-analytics-type technologies.

Yeah. And I wanted to highlight this concept of identification, tracking, classification, sorting and pulling together this knowledge, which is one of the really difficult parts of this, right? It's not just getting to the knowledge; it's how do I classify it in a way that it can be consumed and, like you said, actioned.

Well, we're in clients all the time where a big part of the data governance being implemented is not doing duplicate reports and duplicate scorecards and duplicate tables and things like that, because we don't know what we have. And short of some brute-force process or procedure, an extension of what we do in the world of BI gives us technologies where we can now track what we've done and let people access that landscape of information assets.

Great. And if we now start looking at future usage, where this is headed: the role of knowledge management was to create this capability for an organization to establish situational awareness, right? In the quote from you here, "all those brains at the end of the computer," this collective know-how, it's this situational awareness of doing something within a context and making the right decisions. So how do you see metadata as a critical component of that?

Metadata is the critical technology for this. That's the one place, again, where we have an intersection of analytics, advanced analytics, artificial intelligence, and knowledge management, because both of those are metadata-driven disciplines. Right? When you talk about collecting what you know, well, that's metadata. The old definition of metadata is just a card-catalog type of thing: where is it? The next part of that metadata is the term context, and this is where we had the lead-in from our September event, right: graph databases, and being able to have almost infinite relationships defined. Take those two powerful capabilities together, and now I have context. Now I can, in a crude way, start to overlay this tacit knowledge and relate how things are related. I'm sorry, that was a poor choice of words: connect the dots and create the relationships that are embedded in people's minds. It's still not elegant, but we now have the ability to do that, and that is all metadata-driven. As important as the values are in the rows and the columns, and the information buried in the documents and the web clicks, it is this information about your information. That was the heart and soul of knowledge management, and, as we'll see in a minute, it might have been an obstacle. Now it is the heart and soul of AI and advanced analytics, and it is totally embraced. I don't think many people argue about that. That's absolutely right.
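As a concrete illustration of that card-catalog-plus-context point, here is a minimal sketch of metadata held as a graph, so relationships can be queried instead of living in someone's head. It assumes the networkx library; all the asset and people names are invented.

```python
# A minimal sketch of "metadata plus context" as a graph. Assumes
# the networkx library; asset and people names are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edge("orders_table", "q4_revenue_report", relation="feeds")
g.add_edge("q4_revenue_report", "finance_team", relation="used_by")
g.add_edge("maria", "orders_table", relation="stewards")

# The card-catalog question: where does this asset go, and how?
for _, target, attrs in g.out_edges("orders_table", data=True):
    print(f"orders_table --{attrs['relation']}--> {target}")

# The context question: everything upstream of a given report.
print(nx.ancestors(g, "q4_revenue_report"))  # {'orders_table', 'maria'}
```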
And I think the other thing, as we look at the quote on the right-hand side and think about knowledge management and future usage, is that it's moving beyond the internal assets of an organization: looking at external information as well, and incorporating external information into the ability to have knowledge. Wouldn't you say?

Yeah. It goes back to some of the academic works that came out in the 80s and the 90s. There was this sense that we should be able to really understand what we know, because of this growing, seemingly infinite scalability of technology. Rather than just crudely storing stuff, we should really be able to store what we've learned from it. And of course, that means the entire big picture: as we go through our daily lives, we're an amalgam of concrete facts we know, but also impressions and experience of what we've done. That's exactly the kind of thing we want to capture in an organization as well. And that's why it's a 360-degree view of everything that influences how anyone does anything.

But that sounds kind of hard, John. Right. Yeah, it's really hard. Oh boy, did we get a smack in the face. Some of these reasons will sound familiar to those of us here. It was too hard to change behavior. You know, the culture change, right. That plagues anything new, because people don't like to change no matter how exotic the concept seems to be. And it required an awful lot of discipline to change gears, and we're going to see that with AI, when the machine says go left and everyone else in the room feels they should go right.

Then the technology: everything devolved to technology. And we have that same problem with data science and analytics now, where everyone says we can do really cool things with the data, everyone spins up a data science area, and a bunch of stuff is cranked out. Usually the first initial things show benefit; if you're lucky, you have sustainable benefit, and if you're not, you have a pretty familiar song that we're hearing a lot lately, that it's just not fulfilling the promises. At the same time this was happening, there were technologies coming to the forefront that everyone thought had solved the problem, at least at an uninformed managerial level. And those were SharePoint and Google. SharePoint is an excellent collaborative engine. Everyone hammers on SharePoint, but that's just because it's implemented poorly, period. It has a great deal of power, and features that very few organizations even scratch. Google, of course: you sit down, type in a few words, and boom, there's everything you think you need to know. So: oh, that's knowledge management. So that was a big thing there. And while Google was doing this and SharePoint was doing that, you were trying to catalog your knowledge, and the tools coming up for that were really crude and labor-intensive.

The last thing, and this is a hard one that kind of closes the loop back to behavior, is that when we learned things back in the late 90s, using crude analytics on smaller data volumes, or on large data volumes that took literally weeks to process, and we came out with big conclusions, nobody ever did anything about it. They were one-time things. There are a few examples in aerospace, and we'll actually talk about one here in a little while, where somebody did. But very rarely did anybody learn something and then build it into their business. Culturally, it just did not happen. So with all of that going on, people just lost interest. And so Tom Davenport wrote this article in 2015.
There are still big organizations that have knowledge management departments. There are still associations out there, there are still knowledge management conferences, and there are still a lot of people who believe in this. But it has changed considerably from the academic pursuit it was, and has realigned itself with the technology that can deliver things we can start to talk about embedding in our organizations. And of course, that's analytics and machine learning. So that's how we ended up where we are now.

Absolutely. And as we look forward: one of the challenges identified on the previous slide is that everything devolved into technology, and of course we saw Google on the previous slide as part of how knowledge management got into difficulty. Well, here on this slide we're talking about technology again, and of course Google is at the forefront. What's different now? What's actually changed so that this technology landscape can make an impact on knowledge management?

Well, it's really interesting. When I was doing the research for this, I went through some old notes from the late 90s and early 2000s and some old presentations I'd done, and none of the vendors I mentioned in any of those were still in existence, or they had been absorbed into other technologies. So we have a whole new cast of characters. But if you take a look at this cast of characters, and at what we've been talking about in this series for the last couple of years, you'll see some familiar names. Because a lot of the really cool ideas, like going through the analytics and then using machine learning to detect a pattern and training some type of repetitive behavior, are in fact a type of organizational learning. And then, of course, we need the metadata to go with that, and we need the collaborative tools, and we need to manage the unstructured content and the digital content as well.

Now, none of these is an integrated knowledge management suite; you can't say "I'm going to go out and buy one of those." That doesn't exist. But you have an awful lot pointing in that direction right now. Now, do you want to say: we have an AI program, we're doing big data and analytics, we don't have all the popularity we had three years ago when it was the next big thing, so we're going to call it knowledge management and be the next big thing again? I don't think that's a brand that is worth your trouble. But you do have an awful lot of technology pointing in the right direction. You've got Collibra, which I put there because of really good, solid workflow; people actually have to use workflow to manage what they know about what they know, their metadata. Alation uses AI, and Watson and OpenCV are really, really smart learning technologies that will tell you what you need to do without any human intervention whatsoever. And then, of course, you have to manage it all, with all the different types of technologies; we've got some examples there as well, and other examples for document and content management that are all connected. If you take a look at, say, Confluence, which is in the JIRA/Confluence Agile universe, you go: wow, there's all kinds of workflow around the technology work we're doing now.
And all of this is being captured. All of these technologies allow you to capture your interactions and build enormous databases of work, so you can actually study how you did your work as well as the data that was produced by the work. So we're really getting close. It's still quite a collection, but boy, is it a lot more powerful than when we first dipped our toes in the water with this.

Got it. So what's changed is that the technology now supports the process in a much better way; in fact, it's optimizing what were manual processes through things like AI, machine learning, and linguistic analysis, for example. And knowledge graphs, right, what we talked about in September.

I looked at my first graph database in 2001 or 2002, and I'm like, wow, this is really great, but it couldn't hold more than a few thousand relationships before it got too slow, which is just like where relational was in 1984. We just weren't ready. I participated in a business venture in 1998 or '99 where we tried to develop a collaborative tool that collected work, and we couldn't find anything to hold all the data; overnight we overwhelmed ourselves with the amount of data. So our brains were 20 years ahead of our capabilities, but now the capabilities are there. A lot of what was kicked around back then, you can really get serious about that stuff now.

Right. And so Google is using knowledge graphs and text mining and machine learning, translation services, speech recognition, image recognition, things like that, and I think we're starting to take advantage of them as just a given.

A cool technology just popped into my head, Kelly. I thought it was science fiction, I thought it was a joke, but there is actually a little Bluetooth device that you stick in your ear, and it translates a conversation in another language. That came from The Hitchhiker's Guide to the Galaxy; it was called the Babel fish, right? That might be what the technology is called, I don't know, I'm not in marketing. But that absolute raw power is what you need for this, and we've got it. So now it's: what do we do with it? How do we go?

So let's talk about how. We've been talking about the what and the history; now let's explore some of the use cases for analytics and big data that are specific to knowledge management, and let's talk specifically about how AI and machine learning can automate these. Two of the biggest challenges we talked about before, the challenges that impeded widespread adoption, were things like extracting, categorizing, and representing knowledge, right? We talked about that on the previous slide in terms of classification: getting it out of people's heads through the understanding of properties. So what are we seeing now in terms of leveraging big data to do that for us, if you will?

Well, when you start to talk about practical use cases and big data, obviously you're dealing with all kinds of different formats and different types of content. And what you need to understand, though, is that it's not just a big lump of stuff in the lake. That can get you in trouble.
The first thing you need to do is understand the spectrum of where content is coming from; you're looking at our little chart there. Content goes from individual content, things we do with our personal knowledge bases, our calendars, our personal spreadsheets, all the way to global, universal, community-type content. And there are different layers of structure and complexity within that, too. If you're going to say, "well, the technology is pretty cool, I'm going to go after this," you first have to have this picture in your head. You have to understand how you're going to get visibility across all these categories, classifications, nooks, and crannies of data. You're going to be looking at external stuff: competitive intelligence, public knowledge bases such as a library, which has been a public knowledge base for thousands of years, all kinds of systems and things. And then there's the human capital, which is your tacit knowledge: you still have to figure out a way to put the veneer of human experience on top of all this. Now, AI will to a certain extent start to replicate that type of learning, because machine learning will look at the patterns and you'll build repeatable models and things like that. So using AI and all these technologies to process these massive varieties of content now gives you a case for, quote-unquote, knowledge management. There's a hook here with the AI, though, which we'll get to soon: is AI and machine learning going to replace tacit knowledge or supplement it? And if it supplements it, how do we capture it? Those are some of the things these use cases have to deal with as well.

Well, actually, a question came in that I thought was really good; maybe we can take it here. In terms of AI using this information to make sense of implicit and tacit knowledge, the question is: does knowledge management require some good data stewards who really know the data? How does this concept of quality come into leveraging AI for knowledge management?

There's a sideways answer to that. Data quality is really important for AI to be correct. We are overlooking, and this is the scary part of AI, the fact that machine learning fed bad data will come up with a Frankenstein recommendation, and it will not be what you want your organization to do. Data quality is extremely important. So from that standpoint, you need to get the data quality to a position where a learning model can utilize it, and that means removing the bias from it. We're not just talking about fat-fingered numbers; we're talking about built-in bias, cultural bias in data, because that bias will come out on the other side of the AI. That's where some type of human awareness, and the ability to correct it, comes to bear.

As for the specific part about requiring a data steward who really knows the data: you're not going to like the answer. The reason KM came around is that we thought it was a way to shortcut having so many smart people having to know so much about the data. There's a certain twisted, evil purpose to KM, which was: get rid of this embedded department expert on everything. We were relying on them too much.
So when you say "know the data," you've got to define what it is about the data you need to know. Quality and contextual-type behaviors of the data, from a caretaking standpoint, versus "oh, you want this data? I know where it is. You want that data? I know where that is, too." Those are two different things. The latter example, where someone has the data landscape in her head, is what we're trying to get rid of. KM is explicitly trying to get rid of that. But we still need what's between people's ears, because someone knows that, well, in the fourth quarter that number is never really accurate, because of some end-of-year issue or something like that. That's the stuff we don't capture, and that's where some type of expertise, subject matter experts, is important. Whether I'd call that a steward or not is up for debate. But that's how I'd answer that question. It's a good question; I could go on about that one for two hours. That'll be another webinar. Yeah, great.

And then there was also a comment on the previous slide to mention other technologies, such as Alteryx. I should preface that previous slide by saying it's not meant to be comprehensive; it's meant to be representative, and to call out some of the names we're seeing as leaders in each of the spaces. Alteryx is absolutely a leader within its space as well. So the idea was just to pepper in the technologies.

Well, what we were trying to do is show how the BI and analytics stuff is pushing in. And there is KM-specific tooling, like Alteryx as a KM-exclusive tool, but we didn't have time to get into that one. So that's a good comment.

Yes, absolutely. And as we move into these use cases: today's knowledge extraction, using machine learning and linguistic analysis, is actually becoming cheaper and more broadly used as a mechanism to capture some of that domain expertise you were just talking about, that departmental expert knowledge. But I think the other thing that comes up here is this continuum, and the fact that the data is generated on a regular basis too. You've got all of this data feeding the potential for AI and machine learning continuously, versus something that was choppy in the past.

Oh yeah. Well, you've kind of got the hierarchy of needs here on the right. You've got to have a well-managed data supply chain; that's absolutely first and foremost for any use case of this. Then you apply the AI, and machine learning is a subset of AI, and when all of that happens, now you can start to talk about knowledge management. Understanding all of it kind of ties back to the question that was submitted: you need to take to heart what people know about that data. There are data quality and usefulness issues that can't be detected by any type of model; there are data movement or contextual issues, like the seasonality example I mentioned, that affect the results. So do you still need to apply what people already know? Maybe I'm being a stick-in-the-mud on that, and there's some PhD in AI out there saying "I'm going to eliminate people altogether." Fine and dandy, but right now, not a chance. Okay, just not a chance. So: what's tacit? Where is that tacit knowledge, and where is it important?
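That fourth-quarter example is the kind of tacit knowledge that can be written down once it's surfaced. Here is a minimal sketch of the idea, assuming pandas; the field names and the caveat text are invented for illustration.

```python
# A minimal sketch of turning tacit knowledge ("that Q4 number is
# never really accurate") into an explicit, machine-readable
# annotation. Field names and the caveat text are invented.
import pandas as pd

revenue = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "amount":  [110.0, 120.0, 115.0, 180.0],
})

# What the subject matter expert knows, captured as data.
caveats = {
    "Q4": "Year-end accrual adjustments inflate this figure; "
          "confirm with finance before modeling on it.",
}
revenue["caveat"] = revenue["quarter"].map(caveats)

# A training pipeline can now act on the caveat instead of relying
# on someone remembering to mention it in a meeting.
trainable = revenue[revenue["caveat"].isna()]
print(trainable)
```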
And everything else still needs to be accessible, navigable, and contextual. You just can't spit out the result of machine-learning pattern recognition, say "this is gospel," and make it so. People need to be able to look at it, understand it, understand its effects, and understand the context that the model was generated within. And if you behave yourself along all of those lines, you are getting up this ladder; you are getting into knowledge management, which is pretty cool stuff.

Right. So big data and data science have given us some tools to get a little further up that ladder, if you will. We can leverage the ability to identify and classify information in an organized way via AI and machine learning, but applying the contextual analysis around it is still more of a human-intervention requirement. It's helping us along that chain. Okay, well, as we dive into the how, this is one of my favorite slides. It's a great representation that you put together to show what this looks like and how it fits into a process. So maybe, John, why don't you walk us through it, because I think it's a really great way to make this real for the folks on the webinar.

Yeah. So you've got big data, and that's the big disk symbol over there, and it has subsets within it: you've got structured data, which we call data, and content, which might be digital media or a document or an email or something like that. Starting from the top and working our way down to the bottom: in the beginning, you have your data, you do BI and reporting, you look at the report, and you know what it means; the individual knows the context. From this I get new information and new experiences, and that needs to go into the future knowledge base. I need to store an insight as to what happened in response to the information, and enable the action; I need to capture the work that happened as a result of this report and the report output. Those arrows going into the knowledge base are a new type; I say it's new information. It's not what's on the report or the scorecard; it's a combination of what I've done with the report and recordings of the actions taken. For example, if I have a workflow engine and I have a report, I say "hey, look at this number" and it goes into the workflow, and then someone else says "oh my, that's quite a number," and in the workflow it's all tracked and traced. Help desk software is a perfect example of that type of metaphor.

Now take the bottom: you have content, and it's very, very similar. We do analytics on the unstructured data (it used to be called text mining or something like that; now we just say analytics) and we have the same meaning and context. But because the stuff is already going through an algorithm, I can actually tag it now, and because of the form the document is in, I can actually get some sense of knowledge and learning from it. But I will also have new information here, because someone is going to use the result of that text analysis, or of studying the web clicks or something, and do something, and we've got to put that somewhere as well.
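Here is a minimal sketch of that capture step: record what was done in response to a report as a knowledge-base entry of its own, distinct from the report's numbers. The structure and field names are invented for illustration.

```python
# A minimal sketch of capturing "what happened in response to the
# information" as a knowledge-base entry. Field names are invented.
from datetime import date

knowledge_base: list[dict] = []

def capture_action(report_id: str, observation: str,
                   action: str, outcome: str) -> None:
    """Store the action taken on a report, not the report itself."""
    knowledge_base.append({
        "report_id": report_id,
        "observation": observation,
        "action": action,
        "outcome": outcome,
        "recorded": date.today().isoformat(),
    })

capture_action(
    report_id="weekly-sales-scorecard",
    observation="West region dipped 12% week over week",
    action="Opened a workflow ticket and launched a regional promotion",
    outcome="Dip recovered within two weeks",
)
print(knowledge_base)
```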
Then, of course, in the middle is our new (relatively new) world of advanced analytics, and out of it comes brand-new insight: wow, we didn't know this before, that every time the sun comes up over here, this happens over there. All right, so I have new context and new information, and I have to ascribe (I use the word ascribe; that's the right word for it) the meaning and context of these new insights. And because it's algorithmic, I can tag them, and I can take the experience from the work we're doing with them and load that into the knowledge base as well. So the ultimate output, the brass ring, is insight. And that can then be accessed through a knowledge map. I have a very crude image of a knowledge map there, and there was no intent for it to look like mouse ears. I really just noticed that; there's no intent there at all. That's funny.

If we take an example of managing human capital, human interaction, managing work: all right, we're in a very sophisticated organization, Kelly, and we're doing work that First San Francisco is familiar with: product management, product MDM. There's a lot of work that goes into creating a new product and creating the packaging around the new product, and it happens again and again and again in a sophisticated consumer products company, right? And what often happens is they do a really great product launch, and two years later they have another one, and, can you guess what happens, Kelly? They lose all the information they learned in the process. They have to do it all over again, and someone upstairs, looking at the cost of launching a new product, goes "what the heck, I'm going to start a knowledge management project, and we're going to capture the way everyone works." That's how some of this got started.

So as products are set up, we go with something that's maybe more collaborative, and there are specific software products now that intersect workflow with product management, and you build a metadata layer there. I have my emails and such, so that's my content I can analyze. I have maybe external data from marketing on what products work and what don't, and activity on the website, and things like that, so I can do some blended analytics there and come out with some new insights on products. And then I can do some plain old grinding: what sold last year, what didn't sell last year, etc., etc. All of this goes into that knowledge base, and using analytics and the insights of AI and things like that, I can start to put ideas together, like: if you do this product this way at this time of year, it probably won't sell too well. Or: if you don't get this product out in a certain amount of time, you probably won't sell as many as you wanted.

And at some point, this type of structure and the knowledge base you've built allow you to do that "closed-loop agent" thing we used to talk about, but now it's really machine learning. You can have AI take over a decision point in your business, take over a business process, where it starts to say: based on all this data and all these patterns, this product is an awful lot like a product we did five years ago; here is the workflow for that product; I recommend we do the product this way. Or somebody sits down to do a new product, types in "we're going to do a new product, and it's kind of like this and kind of like that," and the AI matches that up with the old workflows, finds something similar based on all the data and research it's got, and recommends a process to move this new product to market.
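Here is a minimal sketch of that matching step: compare a new product brief to past launch descriptions and surface the workflow that went with the closest match. It assumes scikit-learn; the briefs and workflow names are invented.

```python
# A minimal sketch of the "find a similar past product, recommend
# its workflow" step. Assumes scikit-learn; all product text and
# workflow identifiers are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_launches = {
    "citrus sparkling water, summer seasonal, single-serve cans":
        "workflow-2015-sparkling-launch",
    "protein snack bar, year-round, multipack cartons":
        "workflow-2016-snackbar-launch",
}
new_brief = "lemon-lime sparkling beverage for the summer season"

corpus = list(past_launches) + [new_brief]
tfidf = TfidfVectorizer().fit_transform(corpus)

# Similarity of the new brief (last row) to each past launch.
scores = cosine_similarity(tfidf[len(past_launches):],
                           tfidf[:len(past_launches)]).ravel()
best = scores.argmax()
workflow = list(past_launches.values())[best]
print(f"Closest past launch -> {workflow} (similarity {scores[best]:.2f})")
```

A production version would draw on richer features and the accumulated work records John describes, but the retrieval idea is the same.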
That's what comes of running all these things we used to think of as distinct (past data, big data, and content) together, and then putting it into the knowledge base. And remember what else is in this content: in terms of experience, a couple of those lines are tracking what people are actually doing, how they're interacting with various systems, measuring their actual work. That all goes into the knowledge base too. You get it all in one big pile and, wow, you can do really awesome things with it.

Absolutely. So that was a process lens. If we look at it through an architecture lens, at our reference architecture here, that knowledge base is really what we see on the right-hand side, represented as a knowledge lake, or even a knowledge graph; isn't that how we're viewing it here? It's the circle and, like you indicated, the closed loop: you're not only pulling data from a big data environment, you're also using the big data to be that knowledge lake, not just a data lake.

Absolutely. A knowledge lake, if you think a data lake is sophisticated (and really it isn't; it's just a big place you put stuff), has a lot more agents in it. It has the things you've learned, the behaviors and responses; basically, rules are created to respond to certain scenarios. The knowledge graph, again, uses graph technology and hyperbolic trees and all those cool interfaces this stuff has; that's how I would navigate it. Because remember, as a user I need to find stuff on both sides of the lake. I might want to look at source documents, I might want to drill all the way back to data sources, I might want to drill all the way back to the old ERP system on the vintage side of this thing; hence the red box, the abstraction engine. This is something that is not on our normal reference architecture, but it's an arbitrator and translator between what is structured, unstructured, and explicit knowledge, and it positions all of it in a way that makes it useful to the AI. Because again, if you just take the raw data and push it into the AI, you're likely to induce bias into those models, and you've got to have some way to balance that out.

Also notice we have some things appearing on the vintage side. People are still doing work on the vintage side, but now we're going to try to actually capture that vintage work; especially if they're doing it on an intranet, with web technologies, we can capture that work. Also, collaboration now enters as a formal technology for using data: instead of just output and data usage, I now have the actual work being done on the data, and I'm capturing that actual work. So there are a lot of extra things coming into your architecture here. Is anyone going to throw the switch and do this tomorrow? Probably not. But this is a bit of a target if you're starting to think about these things.

And metadata comes back into play here, right? We started the conversation at the beginning with how metadata is a critical component. So maybe also talk about how metadata fits into this, John.

You know, when we were drawing this slide, I didn't know whether I'd have two metadatas, an old metadata and a new metadata. But really, it's not that. And don't let the one red box indicate it's one metadata product there; this is a metadata capability.
Right, and I think "capability" is probably the right word for it. It is a capability of knowing where everything is, who touches everything, and what the context of things is. It is the driving engine, the heart of all your data and content management for an architecture like this. In fact, it should become so ubiquitous that at some point we don't even talk about it. When we get into a car, we know that when we turn the key, this thing under the hood called an engine does something. We're getting to the point now where we don't really care whether it's hybrid or gas or diesel or all-electric; we just turn the key or push a button and we go. At some point, metadata is going to be like that: the engine that drives all of this, and invisible. But for now, you're going to be blending several products to get those capabilities.

Absolutely. And it needs to be explicit. Let's start to wrap up; we want to be sensitive to time here. Let's look at how this fits into the operating framework. Circling back to some of the slides we saw in the beginning, where we looked at how knowledge management enabled certain aspects of the organization, and then at what's changed in knowledge management: what's new about these supporting programs on the right-hand side, and the way they can support things like human capital?

Yeah, I'll go through this one pretty quickly. Basically, when I look across areas that say they're doing knowledge management, or that are doing something like KM and not calling it that, they're supporting innovative efforts and trying to remember how they are innovative: how do we be innovative? They're trying to capture those smarts, and there are lots of cool things happening there to do that; you can all read those on the slide. They're also supporting conventional efforts, which might be more complicated, or really large in terms of impact to the organization, content, or disruption. A disruptive regulation like GDPR is a classic case where a KM department, if it was in place, could really help an organization.

But KM areas (and I call them areas because they can't really find a place to live; I can't find a consistent place where you can say KM is always under the CIO, or always under human capital, they're all over the place) basically offer a service to help you manage your large initiatives, offer a service for content management and content tagging, offer a service for searching for things (kind of an internal Google), or offer services for finding the expertise to help you with your project. We know what we know; we're going to let you find out where it is, so you don't have to go find it yourself. And all of this interacts with the supporting programs on the right. You've got your basic enterprise architectures (data, process, and capability architectures) and you can read all the other types there; the big one at the top is organizational change. Again, there was a question that came through about influencing behaviors: culture kills KM, it kills governance, it kills MDM, it kills EIM. The soft issues are really important. That's a very abstract view of an operating framework. Okay, absolutely.
So we're going to wrap up here and start to summarize: ultimately, this is about supporting organizational learning and human capital development. We just saw this slide back in September, but the blue on this slide is the net-new, right: what's available now, based on the technologies that can do some of this very sophisticated analysis. Did you want to add anything to this slide? If not, we can go to our...

The question mark is: what do you do with what you've learned? That is the real cultural aspect. Until human beings become cyborgs, there's going to be something between somebody's ears that's really important, and how do you manage that? That's key, and that's the $64 billion question. Exactly. That's right.

So then: how do we take advantage of this, because we aren't yet cyborgs, and how do we make this work within your organization? Most of the companies on the phone have probably started down a data science path. Some have jumped into the lake 100%, and some are just dipping their toes into the data lake. The idea is that, as you go down this path, you take a look at what you're doing and see whether you can leverage some of those data science techniques for knowledge management as well. For example, if you're doing linguistic analysis somewhere else in your data science organization, could it be applied to an internal process that generates knowledge? This is being done in decision support systems and in trouble ticketing in IT, for instance. So can you apply linguistic analysis to support calls and thereby improve the self-service component of your IT help desk? Where else are you seeing this become practical, John?

Yeah, there are some good examples that have run for years. Anyone who watches television has seen the little commercial where Watson is in the elevator and says you've got to fix the elevator because it's going to break in two days, right? That is mean-time-between-failures analysis, which has been going on in complex aerospace and the military for decades. It's sophisticated statistical analysis, a lot of number crunching. But it's a classic example of learning from things, then adjusting behaviors in the organization and closing the loop, because you're telling someone to replace a perfectly good, operational part, and it saves you a ton of money when you do. So that's one really good example. You mentioned help desks; also healthcare, obviously, with drug interactions and things like that. Those are going on now. So there are organizations that absolutely require knowledge management, whether they call it that or not. But anyone in any organization can find something to benefit from. If you're talking data science and you're starting to whisper AI, you need to study this topic.

Absolutely. And just a reminder to get those questions in as we go to our final slide, our key takeaways. Essentially, and this is a theme you always hear from John and me, it's pragmatism: leveraging big data, analytics, and AI to create a pragmatic gateway to knowledge management. Leverage what you're already doing in your analysis process, leverage the data that already exists in the lake, and see if there's a way to apply it internally to improve the understanding and efficiency of your processes.
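Here is a minimal sketch of the help-desk idea raised above: cluster support tickets by their language so that recurring problems can be turned into self-service knowledge-base articles. It assumes scikit-learn; the ticket text is invented.

```python
# A minimal sketch of clustering support calls so recurring issues
# can become self-service articles. Assumes scikit-learn; the
# ticket text is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    "cannot connect to vpn from home office",
    "vpn connection drops every few minutes",
    "password reset link never arrives by email",
    "need my password reset, email not received",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(tickets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

for label, text in zip(labels, tickets):
    print(label, text)
# Each cluster is a candidate knowledge-base article: one around
# VPN connectivity, one around password resets.
```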
So, John, why don't you also go through and highlight a couple of these.

The learning organization as academically presented in the 80s is a long way off. Luckily (I'm one of the folks who's worried about AI) HAL is a long way off too. And if you're a young person and don't know who HAL is, just ask the older person in the next cubicle over. I think it's still a long way off, but that doesn't mean AI isn't beneficial and useful right now. It is a tremendous supplement to what people know; it is a foil. So Bob over here in the corner says we've got to go this way, but the model says we've got to go that way. That is powerful in its own right, having those two options in front of you. So again, we're far away from knowledge management in its pure academic view. And I see it now not so much as a department or a program or a system; I view it more as a capability that the entire organization moves toward, the same way that, in its ideal sense, data governance is a capability that should be embedded, with the separate area going away. So those are some of the takeaways.

A couple more questions came in here; one just popped in, so read it and we'll take a run at it. Yeah, absolutely; I think this is a really interesting question. This one is: how do we represent knowledge? Any standards or reference architectures?

Yes, there is a standard. Okay, go for it, John. It's called the Dublin Core. What's that? Well, have you ever been to a library? You see those numbers on the side of the books? That is a knowledge standard for how to categorize and find knowledge; it's called the Dewey Decimal System. A lot of people don't know this, but the Dewey Decimal System is licensed, proprietary intellectual property, and every library in the country pays for it. It is the first knowledge management product. It goes back to the 19th century, and it is still a standard reference architecture for finding things topically. So that's the first thing. Then, of course, there's the OMG, which years ago (and I don't know, is the OMG still around, Kelly?) had many, many expressions of knowledge reference models based on industry data models; they applied some XML-type approaches to them and have since applied some graph-type concepts as well. Those are sitting out there. Is there one that everyone's going to use, like UML or something like that? No, there isn't. We have a trends talk next month, and one of the trends we're going to talk about is what this might look like in a year or two. Something's going to happen there, but right now there isn't one. That's really cool, though. Dublin Core can be reviewed on the web; it's very, very cool. You'll look at it and go, dang, that is really neat.

So we've provided a reference architecture for how some of these repositories, if you will, could fit together, but there really isn't a metadata standard for knowledge at this moment. There are lots of different metadata standards and definitional standards and ontology standards and things like that, but from what we've seen, it hasn't quite yet made its way to knowledge management. And I think with that, we are at the top of the hour, aren't we? Yeah, we are. I just want to say: if you're diving into this area, one other thing to study is ontologies and taxonomies. Those are going to matter a great deal. Great. Well, thank you, everyone. Shannon, over to you.
Thank you, Kelly. Thank you, John. As always, another great presentation, and thanks to our attendees for being so engaged in everything we do; we just love all the questions that came in for this session. And just a reminder: I will send a follow-up email by end of day Monday with links to the slides and the recording. Again, thanks, everybody. I hope you all have a great day. Thanks, Kelly. Thanks, John. Bye bye. Take care.
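A closing footnote on the Dublin Core standard John mentioned: it defines fifteen plain-language elements (title, creator, subject, description, date, and so on) for describing any resource. Here is a minimal sketch of a record using a few of those elements; the element names are the real Dublin Core terms, while the values are invented.

```python
# A sketch of a Dublin Core record for a knowledge asset. The keys
# are genuine Dublin Core elements; the values are invented.
record = {
    "title":       "Q4 Product Launch Retrospective",
    "creator":     "Product Management",
    "subject":     ["product launch", "lessons learned"],
    "description": "What worked and what failed in the Q4 launch.",
    "date":        "2017-11-01",
    "type":        "Text",
    "format":      "application/pdf",
    "identifier":  "km-doc-00042",
    "language":    "en",
}
```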