Okay, hello everyone. Good morning. I'm Richard Lurig, and I'm the senior vice president for the Innovation Development Center at CoreLogic. We are a company that I'll get into in a little bit of detail in a moment. One of the things that I wanted to highlight for everyone is that we will cover Q&A towards the end. We have a very strong contingent of people here from CoreLogic, running the gamut from architecture, software engineering, infrastructure and DevOps. So I welcome everyone, in the spirit of Cloud Foundry Summit and collaborating and sharing information, to meet with these folks. Several of them are down here in the front row: Brando from DevOps, William and Eric from the infrastructure, architecture and security areas, and Tim Steele, one of our senior architects and software engineers who's been involved with Cloud Foundry since we started the project on February 4th of last year with our first product developed on Cloud Foundry. So a lot of people always ask, what is CoreLogic? We are somewhat of an enigma to most people outside of the industries where we serve up our products, data and analytics. Our vision is to deliver unique property-level insights that power the global real estate economy. What that really means for us is discernible differences in analytics, data, platforms and data-enabled solutions that serve the mortgage lending space, capital markets, the real estate space, specifically realtors and multiple listing services, as well as insurance and, to some degree, energy and oil and gas in our spatial businesses. So, the background: CoreLogic looks at everything from a property-level viewpoint. We have characteristics, analytics, information and risk modeling around properties, all things related to a property.
Everything from how you pay your real estate taxes: every year we pay $111 billion in real estate taxes, on behalf of mortgage lenders and their borrowers, to 23,000 real estate taxing jurisdictions across the United States. We also serve the insurance industry by delivering underwriting information on construction value costs for a property. So if you own a house and you have to insure that property for reconstruction in case of a fire, we maintain, at a zip code level, all of the information required to reconstruct a certain type of house in a certain zip code on behalf of insurance companies. We also manage the mobile and platform areas for the multiple listing services in the United States: 65% of the multiple listing services in the United States use either our mobile platform or our hosted service platform from an application perspective. So if you've bought or sold a house, the multiple listing service has probably had your house on that listing service, usually through CoreLogic. If you've bought a house, you've probably received information on the houses you were going to buy, and it was from CoreLogic. That's a lot of data and a lot of information, and what we do is wrap that data and information with applications, technology platforms, but primarily analytics, in order to deliver a unique type of experience for our customers. So about two, two and a half years ago, we embarked upon a journey to look at what our next-generation platform was going to look like. The first thing we did is we looked at the landscape of all of the applications that CoreLogic had. And as you can see, like many large enterprises, we had a number of applications and a number of technologies. In fact, we found while we were surveying the landscape that there wasn't a technology we didn't like.
Almost every database, every operating system, everything that you could possibly imagine had grown up organically over time from a number of acquired companies. We had grown through acquisition, like many of you probably have, and so we ended up with a lot of different technologies. And we said, well, this is kind of a mess, so we have two alternatives. We were trying to build new products that were creative and innovative; we were using innovation as a technology enabler for our business in order to bring new products to market much faster. The problem was, burdened as we were with all of these technologies and all of the cost associated with them, building new products efficiently was very, very difficult for us. So we said, okay, what do we want? Well, the first thing we did is we looked at the industry and said, things have changed quite a bit. Everything-as-a-service is the norm now. These big mainframes, iSeries, big monolithic applications, that's not the norm. We had to look at data services for our business. How do we serve up data on a property to a product without having to go to 10 different databases or 10 different applications that we historically have had? How do we build services platforms so that we can have common services like storing a document? We store literally billions of documents and images, based on the businesses that we're in, but we do it about 50 or 60 different ways. We do security authentication on our web portals 128 different ways. So we said, okay, we're going to do it differently. What are people doing in the future? And I coined the phrase, when we started talking to different vendors: we don't want five years ago, we want five years from now. Where are things going? How can we develop this as a service? How can we focus our developers on the secret sauce of CoreLogic, and less on this behemoth monolithic infrastructure that we have?
So we looked at some design principles and we said, what are we trying to build for? Well, we want our developers no longer focused on tweaking settings in WebLogic or understanding how Red Hat Linux works, or OS/400, or DB2, or any of those myriad other things that we had. We want a user experience framework where our users have a better experience consuming products from CoreLogic, especially in their businesses, because our customers are predominantly other businesses, not consumers. We wanted components separated from the application. When you look at this, it's kind of like a service-oriented architecture, but we landed upon microservices as the new paradigm we wanted to move to. We looked at the flexibility enabled by standard technologies: if you're developing with standard technologies, developing new products will be quicker. We looked at reusability and all of the -ilities. We wanted resiliency, from every perspective, built in from the very beginning. You can imagine, with this legacy of applications, every application did not have DR, did not have business continuity, and was not elastic to any of our demands from a compute perspective. We wanted that built in, and we wanted to run on multiple infrastructures-as-a-service. We're primarily a VMware shop, but we wanted the opportunity to move things to AWS if we wanted, or any other new infrastructure-as-a-service that came about. We basically created an architecture model. There's a lot more detail behind this architecture model, in a reference-architecture type of framework, but from a user perspective, this is how it looks. We wanted engagement with our users through mobile and through web to be common. We wanted common services underlying that, where we were able to order all products through all portals, through all B2B connections, or through all mobile devices.
Underlying that, we wanted the CoreLogic data repository, or data lake, that allowed CoreLogic data products to be consumed by our products without having to figure out where the data for that product might come from. And then we wanted it to run on our Dell dedicated cloud. We had outsourced our data centers, and when we started this we were in the process of moving to a Dell dedicated private cloud environment running on VMware technology. We wanted to make sure that this would run on that, but also run on other infrastructures-as-a-service. We engaged a number of technology experts and we looked at a number of things. I'll have to tell you, there were some legacy technology vendors that had a pretty good story and were able to do POCs on applications that we wanted. But when you pulled back the covers and looked at what those technologies were, they were five- or ten-year-old technologies. They couldn't run in cloud, they couldn't elastically scale. It was a very, very big problem for some of the legacy vendors we talked to. We also talked to some of the emerging technology vendors: Google, Salesforce and what Force.com had to offer. With AWS, we talked about how we could use AWS and AWS services to build this platform. We stumbled across Pivotal, quite frankly. We didn't quite understand why EMC was contacting us when we embarked upon this two years ago, because they didn't really have a good, what I called, elevator pitch. They said, trust us, you want to talk to us, and we said, okay, we may waste a day, but we'll talk to you. And then we met the Pivotal guys, and then we went to Pivotal Labs. We looked at Cloud Foundry outside of that paradigm, really just Cloud Foundry as a standalone PaaS, and we started to find that very interesting. We were also a very big Red Hat customer, so we started to look at things like OpenShift and OpenStack from the Red Hat perspective.
And we looked at the whole thing, including market adoption, and to be honest, none of the emerging technologies at that point had huge market adoption. We ran POCs, and they were very close between Red Hat and Pivotal. But we felt like there was a 360-degree view that Pivotal brought to bear on this. This was more than just a technology transformation; it was a business transformation. It was how we could deliver products faster, and Cloud Foundry was a huge part of that. This is Cloud Foundry Summit, and we'd be remiss in not saying that without Cloud Foundry underlying what we're doing, a lot of the things we've been able to do in the last year wouldn't be possible. We also engaged Pivotal from a Pivotal Labs perspective and a Pivotal Big Data Suite perspective. We embarked upon our first project, our first product, on February 4th of 2014. Originally, using our legacy technologies, that product was going to go to market about right now, using twice as many software engineers and an entire QA team. It was going to take until this day for us to launch the product and get revenue in the door. Instead, we started February 4th of 2014 and went production-live July 1st, 2014. We used six software engineers, paired. We used one user experience designer and one product manager. Those user experience and product managers were also paired at times as they were learning the Pivotal methodology. I think a big enablement for that was Cloud Foundry, and not getting bogged down in our legacy infrastructures. We built CondoSafe, a condominium underwriting application for the mortgage lending space; that was the first application. Then we went on to build Leasing Manager, which is a product used in multi-tenant, multi-family to manage an apartment workflow.
From when you basically pick an apartment that you want to rent, to the time that you move in, to eventually the time that you move out, and everything in between. It will eventually be a full life-cycle product; we've launched part of that Leasing Manager product now. Then LoanSafe Connect, which is a fraud portal that helps mortgage lenders through the fraud remediation process as you're going through the origination process. All three of those have been completed. We're building two new products now that will go to market soon, probably by mid-year, and we're building two of the big components of our architecture on Cloud Foundry. The biggest of these is our engagement services through MyCoreLogic. What we're doing is looking at personas of the individual customers that use our systems. We're trying to get from 27 different portals and over 100 different security authentication methods down to a single portal, but with a unique experience for every single customer that logs on. The first version of that is actually being built right now on Cloud Foundry. From an architecture perspective, it's a little hard to read, but we redacted any useful information from this that you might use. In all seriousness, we have found some learnings along the way with Cloud Foundry. Some are really, really good and probably exceeded our expectations; other things still need some improvement. From an architecture perspective, you see at the top our Apache environment. We originally thought that we were going to run a Cloud Foundry foundation on the front end in a DMZ-type environment. We're a very, very secure company. You can imagine, working with underwriting, you don't want your information lost, so we were going to create a DMZ tier using a foundation. Instead, we've gone to an Apache type of DMZ environment to basically isolate our Cloud Foundry applications from the Internet.
We are still using Oracle RAC, a big debate inside of our company. Oracle RAC, as you know, is not a Cloud Foundry service. You can't dynamically provision Oracle RAC, and we still have the encumbrance, from an agile perspective, of Oracle RAC on the back end. But we have some very, very high-volume transaction databases. We are in the process, and have been under development, of moving a lot of things over to HDFS, using a polyglot of database technologies for our data lake, and that will be coming out this year. But any of our high-volume transaction processing workloads are still running on Oracle RAC at this time. From the Dell perspective, we're running the infrastructure in the Dell VMware environment on our Dell private cloud. It has been a little bit of a unique experience because we had outsourced to Dell. So we had Dell involved with trying to figure out how to put Cloud Foundry on; we had, of course, the people from Pivotal who were helping us; and then we had our team coming up the learning curve, about this time last year, on how we were going to deploy this into a Dell environment. All of that we've done successfully. In fact, our first application, CondoSafe, we developed in Pivotal's hosted environment, and then we moved it over without changing the application once we had put Cloud Foundry into the Dell environment. So, lessons learned. There are a number of them here, and I'm going to go through them really quickly and highlight some things. For the positives, it's all the things you can imagine: ease of application deployment and scalability. We can literally scale up CondoSafe right now at the push of a button. There's support for configurability of frameworks pre-packaged in the buildpacks; of course, we're trying to err on the side of standardizing and keeping the buildpacks without any customization. And Pivotal support for Cloud Foundry has been outstanding. They've been with us side by side every step along the way since we first went live.
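That push-button deployment and scaling comes down to a couple of CLI commands. A minimal sketch, assuming a hypothetical app name (`condosafe-demo` is illustrative, not CoreLogic's actual setup) and a `cf` CLI session already logged in and targeting an org and space:

```shell
# Push an app with two instances and 1 GB of memory each
# (the app name and buildpack here are placeholders)
cf push condosafe-demo -b java_buildpack -m 1G -i 2

# Scale horizontally to four instances at the push of a button
cf scale condosafe-demo -i 4

# Or scale vertically by adjusting memory per instance
cf scale condosafe-demo -m 2G
```

Changing the instance count takes effect without redeploying the application, which is what makes demand-driven scaling so quick; changing the memory allocation restarts the app's instances.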
We literally had mobile phone numbers and other direct contacts from Pivotal. It was just a very, very good experience, and it continues to be. I do wonder what it will be like, though, when we have 1,000 or 10,000 people on Cloud Foundry, but I'm hoping it will stay the same. From an upgrade perspective, upgrades literally are push-button and very straightforward. The hardest thing we're finding, though, is that we have to push them to the whole foundation at the same time, and we're running multiple applications. So it's a little bit scary when you're used to an environment where you do rolling or staged upgrades across an application portfolio. There has been easy integration with a lot of the tools that we use, including Splunk and AppDynamics. On the areas for improvement: recoverability is very good, but we still have some concerns, and we had a very lengthy discussion about some of the things that we found. Our applications automatically restart when they have a problem, which is great, but sometimes that doesn't work as well as we would expect. We would like a health dashboard that looks at the entire ecosystem; a lot of people have developed those on their own, but we'd like it as part of the product. Backups are another good one to touch upon. The documentation says do backups often, do lots of backups, and then there's a lengthy process that, if you followed it, would take half a day. So we automated that process and we're able to do it in a very short period of time, but it really doesn't come out of the box with the product or with Cloud Foundry at this time. We need finer-grained controls around security. For our industry and our business, it's just too broad right now; we have to give too broad a remit to a number of the people working on the system today.
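The Splunk integration mentioned above is typically wired up in Cloud Foundry through a user-provided log-drain service. A minimal sketch, where the drain URL and app name are placeholders standing in for a real Splunk syslog endpoint and a real application:

```shell
# Create a user-provided service whose only job is to forward app logs
# (hostname and port are placeholders, not a real endpoint)
cf create-user-provided-service splunk-drain -l syslog://logs.example.com:514

# Bind it to an app; the platform's log aggregator then streams
# that app's logs out to the drain
cf bind-service condosafe-demo splunk-drain
cf restage condosafe-demo
```

One drain service can be bound to many apps in the same space, which keeps the forwarding configuration in one place rather than inside each application.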
And the admin function does not allow us to look at the details at the level we want in an easy and seamless way. So security needs to come up the learning curve a little bit in the Cloud Foundry world. The other thing is environment variables: sometimes, because of the way the system and the PaaS work, they are inadvertently exposed to tools like AppDynamics. So things that we don't necessarily want in logs, or out in an AppDynamics world, are actually exposed there in the clear. So those are areas of interest for us. Very quickly: we continue to look at how to balance a Cloud Foundry world of agile development and quick deployment with all of the enterprise standards we have, the CAB, the change process. We're trying to converge our change process, and what's required by our industry and by our customers, with this agile notion, this Cloud Foundry notion of quick development on a platform-as-a-service and quick deployment. And, to be honest, we have a really, really difficult time telling our business users right now how much an application is going to cost. That's because of the elastic compute, the ability to scale up our applications, but also because they're sharing a lot of the same environment. Before, it was very costly, but we had individual environments for all of these applications. Now we're putting all of these CoreLogic products into kind of the Cloud Foundry bucket, and doing business cases and articulating what the allocation is, how to spread the money out, is still very difficult and very immature in this environment for us. Network planning took us probably the bulk of the time, as it always seems to, from a DNS and security perspective. It continues to be something that we're learning about.
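The environment-variable exposure is easy to see for yourself: anything set on an app is readable by anyone with access to that app, and by agents running inside its container. A minimal sketch with a hypothetical app name and credential:

```shell
# Set a credential as a plain environment variable
# (a common but risky pattern; the value here is obviously fake)
cf set-env condosafe-demo DB_PASSWORD s3cret-placeholder
cf restage condosafe-demo

# Anyone with access to the space can now read it back in the clear,
# and in-container agents such as an APM agent see it too
cf env condosafe-demo
```

Note that moving secrets into bound services narrows but does not eliminate the exposure, since `cf env` also prints the `VCAP_SERVICES` credentials block in the clear.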
And then, managing multiple foundations is still very difficult, because you have to interact with each foundation individually, and our team can elaborate on any questions about that, without being able to see the big picture all at once. And then, obviously, one of our big concerns is keeping the buildpacks as standard as possible, to avoid any kind of future upgrade problems, which would get us back into the world that I outlined at the beginning. The one thing I wanted to note here is that we're now looking at taking the legacy applications and moving them over to Cloud Foundry. That's not where we started, but it's where we are now. We're also starting to deploy Cloud Foundry into things like AWS and vCloud Air and have a hybrid environment for our applications. That's a big focus for us in 2015. What's next? For the CoreLogic platform, we're going to continue to evolve our DevOps environments; automation and monitoring continue to be a huge focus for us; and, as I said, we're going to continue to look at the migration of existing applications, looking at the legacy stacks and seeing what will come over to Cloud Foundry. And then the data services and the data lake that I talked about will be the big thing that we roll out in 2015; that will be the backbone behind all of our new products as well as the MyCoreLogic product suite. So from that perspective, I think I got through and left seven minutes or so for Q&A, and we have all the experts from CoreLogic here. So, the question was: why are we looking at a hybrid cloud? Why are we looking at AWS in addition to the Dell environment? Primarily because we believe it's more cost-effective and more efficient, especially when you look at dynamic computing, or the elastic compute environments that we have.
Dell, even though we're under a managed service, is still a very static way to look at infrastructure, and we want a much more dynamic way. We do believe that ultimately, in the next three to five years, public cloud computing will be more the norm than this private cloud environment, and it will eventually be adopted by the mortgage lending side of things, where we're heavily regulated. So, the question was about the ESB in our environment; it shows an ESB up there. Actually, we've been working very closely between the Pivotal Cloud Foundry guys and TIBCO. We have a lot of legacy applications, and the need to interact with non-Cloud-Foundry-based microservices in the CoreLogic environment, especially as we aggregate other third parties. So what we believe is that there's room, or in our case the need, for both kinds of environments: how we interact with the legacy CoreLogic service-tier environments, and then how we blend that over to the Cloud Foundry environment. We still believe in the microservices infrastructure on Cloud Foundry, but we're interfacing with a lot of third parties and a lot of downstream legacy applications, so we believe there has to be a blended approach to the two. It can't be all one or all the other. And believe me, there are religious camps on both sides, and they're all in here from CoreLogic, and I'm sure many of you have them too. We seem to be on the side of, yeah, we're going to have to do both, and the reason why is that we've looked at the various workloads and it won't all work in a microservices-type environment for us. So, I don't know if you guys want to take that, but basically Oracle RAC just operates as a standalone static environment, in a clustered RAC configuration, and the applications are not using a standard Cloud Foundry service to access Oracle RAC; they're using just the standard Oracle interface, really. Right, Tim? JDBC. Yes.
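Even when a database like Oracle RAC sits outside the platform and the apps reach it over plain JDBC, the connection details can still be injected the Cloud Foundry way through a user-provided service, so credentials stay out of the code. A minimal sketch; the JDBC URL, service name, credentials, and app name are all illustrative placeholders:

```shell
# Wrap the external Oracle RAC connection info in a user-provided service
# (host, service name, and credentials below are placeholders)
cf create-user-provided-service oracle-rac \
  -p '{"jdbcUrl":"jdbc:oracle:thin:@//rac.example.com:1521/SVC","username":"app_user","password":"changeme"}'

# Bind it; the app can then read the credentials from VCAP_SERVICES at startup
cf bind-service condosafe-demo oracle-rac
cf restage condosafe-demo
```

This doesn't make the database dynamically provisionable, but it does decouple the app from hard-coded connection strings, which helps when the same app moves between foundations or environments.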
Okay, so the question was: are they greenfield applications, or were they transitioned over from a legacy environment? All of the applications I listed were greenfield new products for us, written entirely on Cloud Foundry, but in each case they interfaced with backend services and other things that already existed for us and that they had to interoperate with. As I said, this year we're going to look at taking actual legacy applications, without major rewriting, and trying to move those over to the Cloud Foundry environment. But our goal was speed to market of new products and being more competitive, so for us it was building new products, not just a cost rationalization play. We're trying to go down both paths this year. Okay. To version three? Brando? Yeah, I don't think we've determined when we're going to do that. Yes, we're keeping up with the upgrades that Pivotal Cloud Foundry releases, so when those are available we'll start testing them. I think as we grow the Cloud Foundry application portfolio, there is still ongoing review of the infrastructure, and I don't know, Eric, if you want to talk about that, but from a bottleneck perspective we're pretty confident in what we're doing right now. We've added the three products in and they're being used in production. But Eric, do you want to elaborate at all? We don't have any real concerns about scalability. One thing we would like to see is a continuous feedback loop between the underlying IaaS and the PaaS, so that it can self-provision. The reality is that scaling things up requires intervention, so we'd like to see that as part of the evolution. We're also likely to be running more than one foundation in the future, primarily to segregate sensitive data from less sensitive data.
Also, because we are trying to build enterprise-class applications, and really a PaaS on top of the PaaS (authentication and entitlement services, document storage), we anticipate running multiple foundations within each data center. I think I have time for one more question. I don't believe we're using Docker at all right now. Well, we're using a little Docker: for our continuous integration environment, we're starting to use it. See, the DevOps people are so agile and nimble, they're using stuff I didn't even know about last month. One more, back there. We not only see the need for those tools, but for more. Eric, do you want to cover that? You definitely need to augment it with monitoring. You get a JMX endpoint, but you have to have some place to send that, and AppDynamics gives us visibility into the applications that we wouldn't otherwise have. So you absolutely need to build a monitoring framework around Cloud Foundry. Anyway, I appreciate everyone. Like I said, there are a lot of people here from CoreLogic. This is all about collaboration and sharing information; we've got every aspect of our stack here from a resource perspective. We're very excited about what we're doing, and we're very excited to hear about what all of you are doing. So thank you very much, and have a great summit.