Good morning, everyone. I work for Moody's Corporation. I'll start at the beginning, with reading and writing in NLG. I was in the midst of a three-year content transformation at Moody's. I was about a year in when they came to me with a project they wanted us to do in addition to the very large transformation that was corporate-wide, across the entire agency, roughly a thousand people. We were going, in many ways, for a content revolution: to produce content in a better way, and to drive the efficiencies and capabilities the firm needed to get to the next generation.

So I got this little project, quite small. Someone in one of our business lines, an avid baseball fan, had read an article about how papers are not really sending writers to a great deal of college baseball games anymore. They were using the box score to write up, essentially, what happened at the game; the box score gives you the game inning by inning. Some PhD students picked up on that use case, and a number of NLG products came to market. There are four dominant commercial products in the space, and we picked one of them. That really began our journey, and at that point I knew essentially nothing about natural language generation.

If you go to the next slide, before I get into what we did, I'll talk a little bit about the business case behind it. The reality is that, particularly in financial services and in other businesses, you don't transact often off a number alone. Even if someone says the stock is a buy, or you should go see this movie, you don't go see it off the 86 Rotten Tomatoes points. You read some of the commentary.
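The box-score use case works roughly like this: structured, inning-by-inning game data goes in, and a templated recap comes out. A minimal sketch, with invented team names and fields (the actual commercial products are far richer):

```python
# Minimal sketch of box-score-to-recap generation, the use case that
# kicked off the commercial NLG products described in this talk.
# The box score (structured game data) is the only input; the recap
# sentence is assembled from it. All names and fields are illustrative.

box_score = {
    "home": {"name": "State", "runs": 7},
    "away": {"name": "Tech", "runs": 3},
    "innings": 9,
}

def recap(game):
    """Turn one box score into a one-sentence game recap."""
    home, away = game["home"], game["away"]
    winner, loser = (home, away) if home["runs"] > away["runs"] else (away, home)
    margin = winner["runs"] - loser["runs"]
    how = "edged" if margin <= 2 else "beat"   # arbitrary wording rule
    return (f"{winner['name']} {how} {loser['name']} "
            f"{winner['runs']}-{loser['runs']} in a {game['innings']}-inning game.")

print(recap(box_score))  # State beat Tech 7-3 in a 9-inning game.
```

The point of the use case is that no writer ever attends the game: everything in the sentence is recoverable from the structured data.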
We need more than that. In financial markets it's no different: the insights, in many ways, are the nugget, more than the rating or the score. When you look at transactional businesses that are data-driven, what's behind the data is in many cases as important as the data itself.

What that means is that if you're in a business where coverage is important, you've got to cover the market, the entire thing, so that you can make very specific points about what you're talking about. Very few people want to read about one AFL team; they want the entire season, the whole perspective. If coverage is an important part of your business, you have to be able to talk about everything. And as most of you can figure out, that's resource-heavy, and resources are constrained: you can't ultimately do everything with people. That's where the machines come in.

When we first started, this was simply a project: there's this new technology, we're going to try it, we're going to see what we can do, and ultimately we're going to deliver it, our business will be happy, and everybody will be super excited. Some of that's not exactly true, but we'll talk about that in a minute. What you slowly begin to see, if you watch the news and understand where things are going, is that Stanford is coming up with stuff in this space every day, and so are MIT and a host of other universities. The market is changing daily. But the commercial products are based on PhD projects from probably a decade ago.
So we're seeing a bit of stagnation in the market itself. But at the end of the day, what we want to do, we can't go buy. So as a by-product of that, we're building a person-on-the-moon, or Mars, or whatever you want to call it: something that potentially no one has done before. And we think that's important, because it's ultimately a very small world. Large rating agencies, financial services: it can be quite small when you talk about who has market leadership. And honestly, if we don't do it, someone else will. Jeff Bezos is going to do it in the next five years; he's been very clear about his intentions in the journalistic space. The last thing we want is people buying ratings on Amazon. Ultimately, there are probably only five men and women in a garage somewhere trying to figure this out. So we believe the potential level of disruption is quite high.

So what we did, as I indicated before: we used TOGAF to understand what it was we were trying to do, because honestly, from a business perspective, we didn't exactly know. On the other side, we had a package that did a very specific thing, but it didn't do the whole thing. So we went through a process of understanding, from a business perspective, the capabilities we needed, which kept evolving through the project. By the same token, we wanted to understand how we could use this package, and how we could evolve it, to see what we were going to be able to do. What we realized was that we had to acquire a set of data that we felt described what we were going to write about. We had nine pieces of data over five years, literally 45 data points, that produce a four-page piece of content.
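That ratio of inputs to output, nine metrics over five years feeding a multi-page report, can be sketched as a simple data model. The metric names and values below are invented for illustration; they are not Moody's actual nine data points:

```python
# Hypothetical sketch of the underlying data model: nine metrics, each
# a five-year series (45 data points in total), from which every
# sentence of the report is derived. Metric names/values are invented.

metrics = {
    "revenue":       [380, 400, 410, 440, 460],
    "ebitda_margin": [18.0, 18.5, 19.1, 19.0, 20.2],
    "leverage":      [4.1, 4.0, 3.8, 3.9, 3.6],
    # ... six further metrics in the real model
}

def trend_sentence(name, series):
    """Describe the five-year trend of one metric."""
    change = series[-1] - series[0]
    direction = "rose" if change > 0 else "fell"
    return (f"Over five years, {name.replace('_', ' ')} {direction} "
            f"from {series[0]} to {series[-1]}.")

# One trend sentence per metric; the real engine layers many more
# rules on top of this to reach four pages of prose.
report = " ".join(trend_sentence(n, s) for n, s in metrics.items())
```

Everything the report says traces back to one of the 45 numbers, which is what makes the writing automatable at all.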
The machine writes about those nine pieces of data and how they trend over a period of time. TOGAF ultimately helped us get there; what it really did was help us zero in on what we were trying to do. It forced us to write the perfect report manually, and then look at what the elements of that perfect report were that we needed to be able to write.

With that done, we went to market. In the space we were serving, it was highly successful. But there was an enormous amount of criticism of the leap forward: people really didn't get how this would help in their businesses, and in many of the business lines, even among the people who ultimately paid for it, the question was whether this was a one-and-done niche thing. Many of us felt that, no, this was going to be something pervasive for the company. So we started to roadshow it and try to sell it. Fortunately there were some visionaries, including the CEO of one of our businesses, who was highly excited about it. We got a lot of attention off of it, and we kept going.

What this slide is telling you is that if you have these nine pieces of data and you spend seven figures, you can write content at scale: thousands, tens of thousands, hundreds of thousands of reports based on this small model, and I think there are no limits. It's very rule-driven: a very large decision tree, reworking the content around the particular company's values and performance, and ultimately you get a somewhat unique report. If I feed the same nine data points in ten times, I get ten different reports. It has a very strong synonym dimension, so it can express itself in very unique ways. And honestly, that causes a lot of problems, because what happens is the machine writes it once, you want to edit it, and then you want the machine to rewrite it.
The rewrite is a completely different report, so you can't edit it effectively. But what we came out of this with was a good view of what the tool does and why it does it: insights that did not come out of the initial architecture and that became important later.

What we realized, looking at the platform itself, and this was one of the gaps we had to plug, is that you have to go figure out where your data is located and how consistent it is. When you do the process manually, and you're a highly trained financial expert looking at the data, you can interpolate your way through holes in the data. The machine, at this level, is not interpolating; you can't get that from it. So we started to disassemble things. When we built the first-phase architecture, it had a lot of stuff built into the NLG engine that really shouldn't be there, a point the gentleman who spoke before me also made: we were building charts and graphs in it, we were putting corrections to information in it, figuring out error correction, doing a bunch of stuff. So we used this as an opportunity to disassemble all of that and take everything we had added in the first phase out of the core language-writing capability. We didn't fundamentally change the engine, other than disassembling what they call the synonym dimension, because we wanted users or business lines to be able to create synonyms in their own language: written by one team it would sound like X, written by another team it would sound like Y. So we were able to disassemble it, we added more industries to it, and again it was pretty successful. On the writing side, it's about an 80% reduction in human effort.
Writing becomes more of an editing or review process than a construction process, so I think we were able to accomplish a lot with that. But we were still having adoption problems. Adoption within the firm was difficult: people were struggling to see how you take the level of complexity in human writing and human thinking and hand it over to this very rule-driven machine. And honestly, it breaks. We were knowledgeable enough, or aware enough of our own limitations, a lot of which was driven by the TOGAF work we had done across the entire architecture, to know we were at about the limit of this.

But what we knew was that we had a world of data, across 24 countries. What was instinctive to us was that we wanted a machine that could be self-learning and self-aware, that could look at a set of data and say: I've seen this kind of thing before; if I apply these rules and these rules, could I get to a point where I don't have to hardwire the rules to the data? Because that hardwiring is ultimately the way most of these commercial products work. So we started thinking, and doing more work around capabilities and where we wanted to go, and we went to the next level. We started breaking down our models. We looked at market share. We did an exercise where we went out to the business and said: we'll write anything you want, we won't charge you anything, and we'll see how much we can flex this. So we had people ask us about market share, about time series, about product mix, about how revenue is generated, and we just kept writing, doing more and more projects. Each one was iterative; we followed the TOGAF principles around a four-week iterative process. Ultimately, we found the boundaries of the technology; we found where things started to get repetitive.
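The rule-driven engine described above, a decision tree plus a synonym dimension, can be sketched roughly as follows. Both mechanisms are illustrated: the same data points yield differently worded reports on each run, and per-team lexicons, pulled out of the engine, make one core sentence "sound like" the team that owns it. All vocabulary, team names, and thresholds here are invented; the commercial product's internals are not described in this talk:

```python
import random

# (1) Synonym dimension: alternative surface forms for each word slot.
SYNONYMS = {
    "rose": ["rose", "increased", "grew"],
    "sharply": ["sharply", "significantly", "markedly"],
}

# (2) Per-team lexicons, externalized from the engine, so each business
# line's output has its own voice.
TEAM_LEXICON = {
    "team_x": {"REVENUE": "Revenue"},
    "team_y": {"REVENUE": "Top-line revenue"},
}

def describe(current, prior, team, rng):
    """Tiny decision tree plus synonym choice for one metric."""
    change = (current - prior) / prior * 100
    verb = rng.choice(SYNONYMS["rose"]) if change > 0 else "fell"
    sentence = f"{TEAM_LEXICON[team]['REVENUE']} {verb}"
    if abs(change) > 10:                    # arbitrary materiality threshold
        sentence += f" {rng.choice(SYNONYMS['sharply'])}"
    return sentence + f" by {abs(change):.1f}%."

rng = random.Random()
# Same two data points, ten runs: several distinct but equivalent sentences.
variants = {describe(460, 400, "team_x", rng) for _ in range(10)}
```

This also illustrates the editing problem mentioned above: because the synonym choice is re-rolled on every run, asking the machine to regenerate after an edit produces a differently worded report.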
But for each of these pieces we had to build a separate data model, a separate logic model, and ultimately a separate product. As we did this, we started to realize how much inefficiency there was in the construction process, and how much it would cost to maintain all of it. We had gone from being bleeding edge to being just another system that required a lot of support and effort. We wanted a different way to look at NLG.

What that required us to do was look at the four major commercial products in the space, all of which had big financial-services clients, cosmetics clients, different things, but they were all doing the same thing we were doing. Almost all of them were iterating along the same pipeline. Some of them were getting to the point of saying: I'll write a draft for you, and if you don't like the draft, you can modify some of the drivers and I'll write a new draft. But at the end of the day it was the same engine, the same value chain. So we were struggling, and we took a break from the commercial products.

At about this time, one of the things that happened for me personally, and I think for my team, was that we had carved out this space at Moody's in kind of a strange way and taken over this section of our innovation agenda, and lots of new opportunities were availing themselves to us. We were getting invited into a lot of machine learning work and a lot of natural language processing work; we somehow convinced the firm to give us money to fund research performed by the top natural language universities in the country. So we started to develop a broader context and to think a little bit bigger than this one technology. If we go to the next slide: what drove us to think about things differently was that we started thinking about BI
for the first time; we started thinking about data modeling. What if a reporter is writing about something that supports the point we're making? Should we try to connect that external content? What about our analysts, who write all day long in unrelated research? Should we be able to grab that and pull it in? Can we take machine-based insights and human-based insights and combine them at a higher level? All those kinds of questions opened up to us. Next slide.

So we started looking at hydrating NLG with NLP, with machine learning, and with NLU, natural language understanding as well as processing, and we said: okay, let's try to build something that is self-aware. Thanks to an institution with PhD students trying to solve big issues, we were encouraged to look at the sentence itself, to think about sentence structure and how you actually write a sentence. You have the metric, which is what you're essentially going to write about: this ratio, this data point. Then you have the ontology, which is the subject matter and how it influences the outcome. And then you have the semantics, which is what we historically called synonyms, but now brought into all the different aspects of the sentence itself. This ontology approach did not exist in any of the commercial products. We kept asking the question: why do I have to build a model? I've written about this thing before; why can't you figure that out? So now what we're doing is breaking all these pieces out, and we wanted to build technology stacks around each of them. That's where we ended up going. Next slide. We let each of the pieces do what they do, and we let them do it well. So the modeling exercises
were important in terms of drawing out these insights, but the insights are not narrative-oriented; the insights are within the data itself. We then use ontology to look for properties within the data set, because not all industries are the same. Company A's revenue might be more of a driver if its market is large and it owns a large share of that market, versus company B, which is in a smaller market and maybe a mainstream player. So we looked at that. And then we looked at building out, and we're still in this process, our style of writing: the consistent style of writing we want to drive through the firm. We look at things like copy editing; we're building a product based on open-source projects that are just being published now, and we're trying to blend all of that together.

When you think about doing all of those things, you're really solving 50 or 60 problems all at once, with different teams focusing on them. To understand how close you're getting to that man-on-the-moon end state, you have to have a framework to describe how close you are, when you'll be ready to pull it together, and when to slow the teams down to start the integration process. TOGAF has been important, probably the most important part of how we do that, because we know where we are with data, we know where we are with capabilities, and we know where we are with our ability to manage and, in many ways, crunch this. This slide is an example of how it all lays out: the point I was making about revenue, whether I'm talking about hardware revenue or software revenue. All of this work is ongoing, and because the firm is doing these things for the business itself while trying to move us forward, particularly in the modeling space, we are not really a driver
there. In semantics we are a driver; in data and ontology, on the data side we're not such a driver, but on the ontology side we are. Like I said previously, you need a way to figure out where these are going. All of these have three-to-five-year roadmaps, so we need to figure out at which points in the process we can pause and see how we're doing against the overall roadmap we would like to see. And we are seeing now, for the first time, machine-written content that is sort of ours.

The only way you can do that is by looking at how our brains work, how we think about it. We do analytic work; in order to do that analytic work we do modeling work; we set up models, we think about those models, we think about how it all holds together. Semantics is a sort of unknown. Most of us, except for people in the editing business, don't really think semantically; writers typically don't. There's a certain stream of consciousness for some writers; some writers write in an outline form. So there isn't really one way to do it. But what we can do is what the human brain does: create connections between what we're writing about and what we're exploring and researching. That's what ontology really does for us. It's a super-powerful way of interconnecting everything in the external world, the way we would interconnect it in our minds when writing a sentence.

What we have found, and I think what research has found, is that most organizations are not so atypical: there is only a finite number of ways you can express yourself. When you look at a sentence, the core sentence is decorated by the original language, but most people, or most companies, express themselves in about 300 uniquely structured sentences. We think we're a little closer to 2,000. So it's a matter of looking at that sentence and figuring out how to
build that out, and then how to flavor it with rich language, with insights coming from the market and from other places within your enterprise, assembling the sentence the way you might have flavored it in the past. Machine learning can say: okay, this can be expanded in the following ways; here is how you might do transitions across sentences. Machine learning is very good at that, so we're seeing a lot of development in that area. Next slide.

What we are attempting to do now, in our third generation of this capability, is what they're trying to do with driverless cars: you have to know what's in front of you, what's behind you, and what your speed is relative to the quality of the road. There are a number of problem statements you're trying to solve. In the 24-month cycle from the beginning of this year to the end of next year, we have a number of milestones, and we've already accomplished some of them. To give you an example: the ability to figure out whether a sentence is active or passive, and if it's passive, how to make it active, which is important when you're writing externally facing material.

We have been very fortunate, and I want to compliment the European vendors we work with. I can't quite figure out why, and I don't want to put down American vendors, but the European vendors seem much more able to take the intellectual challenge we give them and come up with new things. We chose an NLG provider here in Europe, and we came to them and said: look, your product, as it is, isn't going to make it. We wrote our own manifesto of what we thought natural language generation should be and should do, and over the course of maybe two or three meetings we were able to convince them, and they're
going to fundamentally change their product. The documents we showed them were very TOGAF-centric, and that allowed their PhDs and thinkers to get inside our heads and what we were trying to do. And if Moody's is successful, this vendor will likely own the market. So it's been an amazing exercise for them as well. What they didn't even know was that they had PhDs working for them who had worked on these very interesting problems at university, and had been forgoing them in order to work on the product in front of them. So the thinking has evolved not only in our internal organization; it's evolved there as well. This is our goal, and hopefully in a couple of years I'll get to tell you whether we accomplished it.

Question from the audience: As you would expect with this audience, we're interested in the TOGAF side. I know there will be more on that tomorrow, but one specific thing you mentioned was a four-week iterative process using TOGAF. One of the things people say about TOGAF, and in product work generally, is: isn't that slow, isn't it waterfall-oriented? Clearly there are people who know how to do it in a more agile way. How are you doing it?

Well, in order to do this kind of work, you have to recognize that every time we do one of these, we want to do something different, something we haven't done in the past. When you look at target-state architecture, you say to yourself, oh, this will cover the world; but the real world is out there, and we have to have a way to look at the real world in a very lab-oriented exercise. We have two or three going on every day, and we go in four-week increments. It wasn't so difficult, but it was difficult if you come from the old legacy mindset: where's my business requirements
document? Where's my functional requirements document? You have to start thinking more conceptually, thinking more about why this is not like everything else we've done.

Question: Do you have plans for adapting content to a reader's viewpoint, for example, different articles written for different audiences?

The ultimate end game is to be able to have a conversation with the machine. If you look at web design today, responsive design is really the most important thing. The best end game would be an interface I can converse with, right out of 2001. Being able to curate a conversation based on the feedback of the person you're conversing with is the ultimate goal for the machine.

Question: Can you say a little bit about the individuals who work on these projects? Their backgrounds, their skills? What kind of folks are involved?

It's an interesting question. When we first started, the business was not really interested, and we had a difficult time. Because of the work we had done, we had a very good network of good thinkers in the business, and they were helpful, but I would say most of it was IT in the beginning. Somewhere along the line the light bulb went on. We started to get a little bit ahead in finding our own requirements and our own capabilities, and then there was a bit of a good conflict that resulted in the business really taking over. So in the beginning we didn't have a lot of strong business interest, but it's completely different now; the business is driving more. Obviously data matters: having the data, financial data, ratings data, is fairly big for me. The other thing is academia. I've spent a lot of time with academia, a lot of time networking with people who are doing things. There was a gentleman in Colorado named Zach Taut who
wrote the fifth book in the Game of Thrones series through machine learning. Just to give an example of how seriously we take this, I hired him for two weeks to come in and talk to us about what he had achieved. He came in and talked about his models; we shared academic partners. There were certain things that were honestly not very well thought through, but there were things he accomplished that made for an interesting test. We are constantly monitoring, and that monitoring process requires some special skills; we are always looking for some kind of edge. Then clearly our architecture folks: we have both enterprise and solution architects, and we have Peter Havelin, who has been very involved. But really it's more about people who can change the way they work and the way they think: we were doing it this way, and now we want to do it that way. Adaptability, not getting locked into a set of facts, because the set of facts is always changing.

Question: When applying NLG in an enterprise, what are the key barriers to growing it, and how do you prepare for them?

People don't like to be replaced by machinery; that's the thing. We don't really look at it that way. Honestly, it's the same thing that was said when ERP was big in the '90s: we're spending too much time on the things we shouldn't be spending time on. So I try to do a couple of things. One: I'm a big believer in product. You've got to get product out there; every four weeks we produce something that's good. And I like to show it around the building: have you seen this? Talking to MDs all the time, trying to get people to realize what we're doing and how we're advancing. The other side of it is that we've been able to find people in the business who have an idea but aren't being heard, and we've been able to match them with technical capabilities. I think we've become as much
of a matchmaker, or even like an internal venture capitalist, trying to do things like that.

Question: How long does it take to develop a new cognitive model these days? Months? Years?

My counterpart in the back will cover that tomorrow, but I'll tell you, it's very quick. We did a model just recently; we've done a couple where we brought people in. What we like to do is bring more people into it, because what happens in technology is you get a certain acceleration in one area and everyone else is left behind; there are fights over budgets and all kinds of things. So we encourage all the other teams to get involved. We had a woman who was a BA; I asked her to write up some work we had done through machine learning around figuring out weighting factors, what really drove things. She did that, we shopped it around, we went all the way up to the CAO. Then, unbeknownst to me, two weeks later she came back and said: I've been exploring some of these things myself. She had set up her own account and was looking at things by rating class, at a finer level, at cities under a million in population, applying that same technology. She built a model in two weeks, and it impressed me enormously. She's not a developer; she's an analyst. The accessibility of the technology matters in such a big way. I know people like to talk about the millennials, and we have some very good and gifted developers, but an equal number of accomplishments has come from our 50-plus crowd. It's very interesting: the 50-plus people are more disruptive. They're like, I'm tired of doing it this way; I don't want to tell this story anymore.

Last question: How did you obtain the data models and the technology experts that you did?

We have a fairly large data team, so that was less of an issue. And if you really think
about what we've done, we've spent the last 50, 60, 70 years building out models to help us do what we're doing, so the modeling was not such a problem. But the technology side really influenced our model. A couple of years ago we came in and started looking at the use cases for Mongo, at what we'd been able to do in Mongo, and that started our thinking around the technology. We've ultimately gone in a slightly different direction there, but we were influenced by what we saw in the market in the NoSQL space. We'll let people get coffee and we'll leave it there.