 We figured out the technical difficulties. Okay, great. I wanted to thank everybody for joining us. Yeah, it's a lot more effective with a microphone, I think. And, you know, as always, the most valuable thing you ever share with us is your time. And it's, you know, evening, and it's a beautiful night outside, and we have a great reception lined up for after this, even up on the rooftop, so we can all enjoy the weather outside. We are so excited to have you here with us today on this really important announcement and launch of our Watson Project data works. This is really some fantastic work by the team. I'm really fundamentally thinking and rethinking about how we're using all sorts of advanced artificial intelligence, cognitive capability to rethink making data easy and putting data to work in your enterprise, no matter how big that enterprise is or no matter how small it is. And really those are the fundamental key points, and I think you're really going to see that come to life with the live demonstrations and the discussions we have about how we really put that to work and how we've even rethought an engagement model and methodology to really make any organization of any size successful with the important opportunities ahead of us. So if you haven't read it already on Bob Pitchiano, I have the great privilege of being the Senior Vice President of Analytics for IBM. And as I was reflecting on earlier today, I've been in this space for 29 years. I started as a research developer in Yorktown Heights working on relational databases. And so I've had the great pleasure inside of IBM of dedicating my entire career to the field of information management and analytics. And I've been so fortunate that IBM has always been on the forefront of innovations in this space, have continued to make great investments both organically as well as inorganically to add to our portfolio and to deliver value for our clients. And as great as the time has been for the last 29 years, I think in many ways we're just getting started. And today is really the dawn of a whole new day in this space. And I really want to talk about why I think that is. As we ushered in this whole capability around the cognitive era, there was really a bigger thing that was underlying all of this. And I think it's a fundamental inflection point on how people think about business, how they think about value, how they think about serving their clients, how they think about seizing opportunity. So let me rewind a little bit. You know, we think about how we started applying information technology to business, right? We call that the programmatic era. It was really all about how do you help an organization really codify some business process that they were doing into the form of an application so it can be repeatable, it can represent the attribute to the brand, and they could scale it. So when you think about that, it's how you hire an employee, how you open an account, how you process a claim, how you report your financials. All those things relate to business processes. And that codification of business process allowed organizations to really think about scaling into regional and national and international operations. And it served business extraordinarily well, and it served IBM and many other companies extraordinarily well. 
Now, all along the way of creating those applications and codifying the business process were little pools of operational data, and then along the way people started thinking about, well, I got to ask questions of that operational data, but it was always this post-facto view of what was going on in the business. And the fundamental way people thought about the value of IT was, you know, how did it allow them to scale? And they were preoccupied with things like Moore's Law, I need to have more throughput, I need to be able to go faster, I need to scale more. Now I think we're at a whole new different space in the market. And I think the inflection point really is not about how well we can run, automate, or scale business process, but it's how well we can really scale insight, knowledge, and really raise the level of expertise in the companies that we serve, and in the way that people serve their clients as real experts. I call this the shift from a process economy of IT to an insight economy of IT. And I think that transformation that occurs at that inflection point, let me be clear, it's an inflection point, it's not a shift per se, it's the additional value and additional capability that comes on top of what we've already established in the IT marketplace to really unlock the power of information and insight to really help businesses compete better and really serve their clients at a whole new level. And I think this is, like I said, just getting started. And in order really to unlock the potential of that, we need to rethink everything about the way we used to work with information. We need to rethink about the data science, we need to rethink about the collaboration of how roles worked across that space, we need to think about the way that application developers work, we need to think about the way people interact with analytics, and we need new platforms to be able to allow that insight economy to scale, to usher in the revolution around building a cognitive enterprise, a cognitive business of any size, and really delivering keen insights. Now, there are problems associated with this notion of unlocking the data because much of the data that organizations are now struggling with is the unstructured data, the data that represents the human journey, that's a natural language, and also data that isn't necessarily resident inside of their company, but exists outside of their corporate firewalls, it's the exogenous data, and it's mixing that data up or data from other trusted partners with the data inside their firewall to really unlock that value. And we have been doing this with our clients for many, many years. As you know, IBM as a leader in the analytics space, with over 17,000 analytics consultants who've done over 45,000 engagements with clients, every time we sit down to help a large client, we may be working with hundreds and hundreds of data sources. We know how to do this, but it's a complex task. It's our mission to really make it simple and really allow that simplicity to put data to work for all sorts of organizations and put the insight to work in the form of new insight products, the data to work in the form of new data products, and this is an entirely new space and a new opportunity. Now, in an effort to understand how you make it simple, you also have to recognize who you're making it simple for. And in the cognitive area that we're living in now, there are new roles that are transformational to helping us lead this revolution. 
There are the Magellans, if you will, or the Galileo's, if you will, of sort of leading organizations into digital transformation, cognitive transformation, digital disruption, and seizing the opportunity that is hidden inside their data, because it's been so difficult, both to glean insights from and to aggregate in a meaningful way. And those new roles are people like the data engineer and the data scientist that we talk quite a lot about and we're going to show you how we're building new service platforms, new capabilities to be able to really use cognitive capabilities in AI to propel them forward, but across all these roles, create a cognitive platform to be able to serve them and allow them to collaborate better with one another. Because every organization, large or small, has these roles and sometimes maybe the data scientist and the data engineer are the same person, maybe the citizen analyst is a sales operations leader, not a dedicated business analyst who needs to be able to use capabilities to unlock the power of insight on their data. But one thing is for sure, they're all collaborating around a business opportunity or a business problem or serving a client. And most platforms today don't allow for effective collaboration across that space, and we're going to show you something that we believe answers that requirement and delivers on that promise. Now, along with these new roles, obviously, are a whole bunch of new capabilities that have come out into this space. And at IBM, we have been very, very dedicated on innovating at the speed of open source, embracing not just the lineage of what has been in that process-based economic era of IT and the great capabilities that delivered on that promise but also marrying it with the new capabilities that are open source-based, happening in the open environments where the standards are defined because of collaboration around great ideas and building those into our capabilities so that we're liberating them to interoperate with the rest of the IT space and really continuing to deliver where there's an abundance of skills, where there's an abundance of promise and new capabilities like Spark to fundamentally rethink the way we solve problems, and you'll see how integral that is into what it is we're doing and how it is we're doing it. So we're very, very excited to announce here today IBM Watson DataWorks project. And, you know, this really is the first platform that's a cloud-based data analytics platform to support the requirements of building a cognitive organization of any size, large or small. We are super excited around this. This is a collaboration not just of the IBM team but the IBM team and our partners and our clients. Many times it's a chief data officer or a chief data scientist who are talking to us about their requirements and challenging us with the audacious goal of addressing the needs of the market so they can be more effective. And so to let you see what this actually is and really gets you to who the star of the show is which is Watson DataWorks, I'm going to ask Ritika Gunnar to take the stage who's our Vice President and Product Manager for IBM Analytics. Thank you very much. Well, thank you. What an exciting crowd tonight. You know, to take to heart our mission of making data simple and accessible to all, we engaged thousands of organizations around the world and almost every industry to really understand how they are making use and valuing data. 
And, you know, when it comes to data and analytics, I will tell you the rules of the games have changed and we believe that they've changed forever. And we are going to reinvent that. You know, value generation in an organization now is not just about where that data is stored. It's not just about the operational database the data warehouse, the Hadoop system. Value is now moving to how that data is accessed and how that data is accessed in a very trusted manner and used across that organization. Value generation from data actually stems now from teams being able to collaborate in and around that data so that organizations actually have faster outcomes from that data. But that has a really fundamental implication if you think about it. That means that you need to be able to break down the silos that exist within an organization. Whether those silos are the tooling that exists between the organization and the different professionals that are using that tooling, whether that be the organizational boundaries that exist in your organization between the different people and how they bring value to data, but more importantly, the cultural aspects of those boundaries that have to be broken down. And if you look at that, sustaining a longer-term value now is about being able to create an advantage out there based on the latest and greatest technologies out there. And so you see an emerging need to be based on open standards and open source type capabilities so you can embrace that speed of innovation that's happening out there in the market. And finally, for as much data that is being created today, you can't rely on one analytical technique on that data. You really have to be able to apply many different blends of types of analytical tooling, algorithms, machine learning, cognitive type capabilities to ensure that your organization gets the best out of all the data and the most value in your organizations. You know, I would say when we are looking at the shift, we are already leading that. And it actually started with our investment in Apache Spark. You know, if you recall in June of 2015, we made an investment, not just an investment, a commitment to Apache Spark as the simplified way to be able to access and to work with data. And through our Spark Technology Center, we are now the leading contributor to some of the most important packages like machine learning and PySpark and many others. More importantly, we're helping hundreds of clients on their journey to adopting Apache Spark as a way to be able to have access to that data and to be able to work with that data. And we're also taking well over 35 of our production ready IBM applications and offerings building them in and around Spark. So if we've claimed Apache Spark as the analytics operating system, what do you guys think every operating system needs? Well, it needs an IDE or an enterprise application built around that to be able to gain more value from it. And in June, we launched the Data Science Experience, an application built on Apache Spark to really help the data science community learn about what it means to become a better data professional in their area, to be able to create data products regardless of the type of language that they want to be able to learn it, whether that be R, Python or Scala, and to be able to collaborate within their own community. 
Well, I'm excited to announce that since our announcement in June, we have over 10,000 users on our Data Science Experience and tonight it is fully open for every single one of you to experience that same ease to be able to learn, create and collaborate. And of course, we included a thriving community of partners that really embrace open standards and communities to be able to provide you with innovations to solve your most challenging needs. So if you take a look at what it means to be able to have an analytics operating system, an ID or applications built for that operating system and a thriving community, what do you think you need next? That's the reason we're all here, guys. You need a platform. And so I'm excited tonight and thrilled to be able to give you guys a first look into IBM Watson Data Works. Now, is that a great platform to integrate all data types for AI-powered decision-making? The platform automates the intelligent deployment of data products on the IBM Cloud using machine learning, cognitive capabilities, and Apache Spark. So the ecosystem, which enables partners who embrace open source to easily snap into Data Works and a method that really helps and provides the expertise and a game plan to ensure you get the most value from data. Look at the platform over here. It's comprised of individual user experiences that are designed for each user's skill levels, yet it's bound together to allow rapid collaboration and founded on a comprehensive set of data and analytics services. The DNA of the platform is based on being open. Open standards, open source, intelligence infused at every point of the platform founded by machine learning and our cognitive-assisted capabilities and encompassing data from any location, anywhere, effectively, that hybrid component that we're talking about. But instead of talking about it, are you guys ready to take a look? Are you guys ready to see it in action? Yes! To the stage, my partner and development executive for the Data Works project, Steve Astorino. Steve, where are you? Welcome to a company called the Great Outdoors, which is experiencing sales declines in its areas, and coins for Data Works. First, we're going to show you how we connect all users to a variety of type of data. Second, we're going to show you how we discover new opportunities and possibilities based on the intelligence, automation, and machine learning capabilities that are built into the platform. And finally, we're going to show you how we can accelerate those insights by being able to operationalize those and being able to deploy those models into production and being able to embed those insights into the Great Outdoors web application. So, let's get started on... Ouch! That's yours, I think. I brought the house down. One of the inhibitors to being able to get value from data is actually being able to find and access data that they're enabled to be able to access, so that the Data Works platform addresses. So, Steve, can you show us through the demo how Data Works addresses this? Sure. So, as you first get into the Data Works platform, this is what you see. So, let me go jump right into our catalogs. So, we created a concept of catalogs, and what it does... So, I'm going to jump into Great Outdoors, so we created this. And what we've done here, is we've created some data. So, if you can see on the left, I have a search bar. We're calling this Shop for Data Experience. 
All you have to do is come in here, type in a tag or a keyword, and look for something you're looking for. So, in this case, I'm going to say Outdoors. And within a fraction of a second, I can see results on data assets that are now relevant to what I'm looking for. It's pretty amazing and transformational. If you think about what it would take in a traditional environment, whether it's different data stores, permissions, approvals, you name it. This is amazing on what it can do, and how quick it can do it. And I'm assuming this is both internal and external data state, right? It is. And as we're bringing data into the DataWorks platform, we're building a catalog of metadata and bringing in insights, our analytics insights, to be able to classify data and get an understanding of what we're putting into the catalog. So, it's pretty amazing. Let me show you also a dashboard for the data engineer and the CDO to be able to come in here and take a look at what's going on. So, as we're bringing data into the DataWorks platform, we're classifying it, as I said. So, you can see here business types, for example. Country code, city, US zip code. We're understanding what's in the data and we're showing that to you here. It's pretty cool. Also, the catalogs and statistics about how many assets, how big the data lake is, or the catalog is, and how many have been accessed, and you can filter on all this information. So, it's pretty amazing. I want to pause here for a second, because when we talk about one of the core differentiators being intelligence at all points of the platform, here we show an example that at the point of access, as we catalog that data, we're actually determining the data type. We're classifying it, we're cataloging it, and we're cleansing it. Extremely differentiating in an example of intelligence infused into the platform. Steve, now that you showed us how to find assets using DataWorks, how do you share and collaborate amongst users of different skill sets within DataWorks? Let me show you a couple of things. One is access control. Right into the catalog, we know this is about protected access to data. In the catalog, you can now come in and add collaborators. That's how you get access to that data in the catalog. It's very simple. I click on a button, I can select a user, and I can give them access to their editor, an auditor, or an administrator. It's as simple as that. But more importantly, let me take you to a project space that we've created. I have a project here called New Sales Campaign. Think of this as a sandbox, the collaboration space for where all the work happens to be able to solve a business problem. The first thing I would do when I create a project, I would go into collaborators. I need to add a data scientist, a business analyst, an application developer and give them different permissions. That's where I would do this. You can see here on the right, the different permissions we're providing to the user. This is where we're selecting who can contribute to this project to help solve that business problem. The next important thing, obviously, is I want to be able to share data assets. Actually, what we've done here, we've brought forward the same shop for data experience that you saw in the catalog before. I can do exactly the same thing if I can type it in right. All of a sudden, I see the assets that I have access to from that catalog. 
All I need to do here, again, I click on the file on the asset and I say add, and now those collaborators have access with the different permissions to that data asset. Very similar to the notebooks, not only the data scientist needs to work on the notebooks, but also collaborate with the other personas. The idea of being able to share is enabled through projects where you have collaborators, people of different types, the assets themselves, and the notebooks or the actual models that can be shared regardless. Can you show how we can discover new opportunities? For example, if you're a business analyst and then potentially as a data scientist? Yeah, so one of the things we've done here, if I look at the data assets, I can basically click on these little three dots on the right and say, hey, I want to open this in Watson Analytics. I want this analyst to be able to come in here, access that data asset that we just shared in our collaboration area, be able to bring it in into Watson Analytics and do that initial exploration and hypothesis of what the prison is problem that they're trying to solve. So in that case, what we're actually doing is taking that same shared asset, being able to leverage that in something like a Watson Analytics and being able to leverage that in terms of the data scientist, the data science experience as well. That is pretty powerful and effectively really improves what it means in terms of the faster time to delivery on those insights. So what are you showing us here, Steve? So I click on the button on Watson Analytics and I can see that the data asset is being processed. So what we're going to do here once it's finished processing, we're going to use natural language recognition to be able to understand what the solution to our problem is. So what I'm going to do here, I'm going to say what drives product line and I'm going to take its first recommendation and what it's going to do is it's going to tell me the top three fields that have an impact to the decline in sales that we're seeing. And what we're going to do with that is the business analyst is going to share that information with the data scientist. So we have weeks of work for the data scientist to be able to identify that information. So as you can see here, profession, gender and age are the top three things that are told us that we should be looking at. Well, how would this look now to a data scientist in a tooling that is very familiar to them? So let me go back to the project area. Again, our collaboration area. And now this is where the data scientist would come in into the notebooks One is for completing the data analysis and the exploration and the other one is to apply machine learning and building models to be able to predict and understand what our decline is and being able to build the target sales campaign around that problem. So let me go into the first one and what you're going to see here is the data science experience that Riddica talked about earlier that we launched in back in June and we've added a lot of capability to this and for data scientists, this is heaven by the way, and we've added a bunch of libraries to make life easier. One of them called Pixie Dust. So, yeah, Pixie Dust is actually pretty monumental to any of you Python programmers out there. Pixie Dust enables within these Jupyter notebooks as a developer you to access those same spark libraries that other programmers in Scala etc can use. 
So effectively what we're doing through the use of Pixie Dust is being able to make Python a first class citizen within Apache Spark and this is pretty monumental. Additionally, what we've done is we've added additional capabilities to allow you to easily graph and visualize capabilities as Steve is showing here. Effectively things that took you hundreds of lines of code before, you can now do in one line of code and this capability is readily available in the data science experience or as an open source project you can get that from GitHub today. Right, so as you can see here with this one line of code we can generate this visualization so it's pretty awesome and we've done that throughout a few more times but let me jump over to the machine learning piece and show so this is based in R and what we're doing here is we have two models that we're building. One is called an association model and the other one is called a classification model sorry. So let me just jump to the first one and I'm going to point out here one more thing, Brunel, so this for those of you who might know it's another visualization library again, open source by IBM and this chart that you see here one line of code again to be able to visualize and generate it. Again it's pretty awesome but what we're going to showcase here is really if you look at outdoor protection goods we knew there was a decline from the data exploration that the business analyst has done and we're now seeing an association with camping equipment so this is telling us that we want to concentrate on that and build our sales campaign. So let me scroll down to classification model that we built and with this you can see now that all the blue areas are the area where we know our consumers are spending the most money for the great outdoors company. So we've classified that and built a profile so that we know that anyone under the age of 44, a specific profession married status over 29 we want to target that user type or the consumer type. So Steve can you show me how I can then operationalize and actually put this model this classification model into production and eventually even operationalize that in the great outdoors store front. Sure. 
So here's a model that we created so what I'm going to show you now is a quick sneak preview of what's coming and you can see here there's a schedule jobs and deployments so what we're going to provide the capability of is being able to schedule those notebooks to run on a regular basis and retrain those models and then the other one is to actually deploy those models on a blue mix platform so making it very simple you can schedule both of them and you can make them deploy them immediately just just through this screen that's coming but let me show you an application that we built so this is called the great outdoors it's built on a blue mix platform it's using the blue mix and analytics services it's using cognitive Watson to understand natural language recognition so as you come into the store or online we want to be able to ask a few questions in English to understand the profile of that consumer so in this case I'm looking for a tent what kind of tent are you looking for and I'm going to say I'm looking for a tent that's less than $250 for how many people I'm going to say myself and my wife and what color red where do you usually camp in Acadia and which month let's say June alright so here we go now it understood all those answers that I provided and that's using cognitive and then it correlated the location it's giving me weather information it's showing me results of the tents that fit that profile including the price but more importantly than that if you look on the right I have a coupon now that's given discount on outdoor protection goods so based on the machine learning model and tying it in together with the application developer they built a sales campaign for that consumer profile to be able to turn that sales decline around the power of an integrated cognitive platform guys what did you think Steve let's switch to the rest of the presentation I thank you for that it was great to have you to be able to do that demonstration so what you saw in the demo was a whole new way to be able to experience data regardless of what background you have and to be able to get organizations to really get behind working around data what you saw was the ability to be able to access all data based on spark and based on the cataloging capabilities you saw the ability to be able to collaborate in and around that data where users use their individual tool sets yet work together you saw the power of intelligence infused all across the platform and the ability to be able to operationalize those in a production like environment this is a cognitive based platform that helps you work with data in a very natural and unobtrusive way so how do you get started well you know it's a cloud based model there's no maintenance and we kick start your progress with a week long workshop you can start with a very easy self service model that starts for as little as $50 a month and be able to start with a slice of data works or we have an enterprise data works plan that combines all the aspects of all the experiences and the data and analytics services that you saw two easy ways to get started with data works today so when we talk about the ecosystem one of the principles of data works is that it's open built on open standards with well respected organizations including Apache Software Foundation ODPI and now we're getting numb focused to that it's built with open communities in mind including Apache Hadoop R, Python and Spark some of the ones you already saw in the demo and many more and lastly we have open 
partnerships meaning those partners that integrate with data works and embrace open source in a very non proprietary way now openly participate with other partners and other clients that are part of that data works platform so now that you know what data works is do you know how you can deliver more value from data within your organization you know we talked about technology not being enough and that the broader challenge really is about bringing along the people the processes and the cultural shifts that are required tonight we're excited to announce IBM Data First Method our professional services which are here to be able to help you with your strategy bringing our expertise and helping you build a game plan to be able to get the most value from data you can start at one of the many tracks that we have and we will bring our capabilities and our expertise through our over 17,000 consultants that we have within IBM and key partners like Galvanize from who you'll hear more in just a moment so we want every one of you who are thinking about how to get information in your organization to engage with us in the data first method so I in closing am really excited to have been able to share with you the IBM Watson data works platform the platform the ecosystem and the method you know this has been built with that simple mission of keeping data simple and accessible for everyone so that each one of us can really get the data to work for ourselves for our organizations and for society as a whole so to be able to share more about his experience I would like to invite Jim Dieters the founder and the CEO of Galvanize to talk to us about how he helps others embark on that data first mission I was going to high five high five welcome Steve Vertica I think they kicked what Bob as well thanks for having me you're going to give my clock timer some time here to keep me honest there we go a pleasure to be here my name is Jim Dieters I'm the founder and CEO of Galvanize and Galvanize is a very interesting place it's a place where most of us at least myself would have loved to have gone to school had it existed when we were coming of age and learning skills essentially imagine walking into a beautiful cafe where you can't tell who's a distinguishing engineer who's a venture capitalist who's an aspiring data engineer and you're hanging out with some of the best and brightest minds in the world where you're there to learn you're there to get the skills that make you successful in the economy and imagine that part of that learning experience is integrated with some of the best data scientist and best companies in the world that want to hire you that's what Galvanize is all about a first century school, a learning community to modernize existing engineers and data scientists and organizations we have one of the largest data science faculties in the world with campuses in San Francisco Seattle, Austin, Denver Boulder we're building a very big campus here in Soho right now and Phoenix will open up this fall so to talk a little bit about where we came from and to tie it into this inflection point that Bob talked about we are in the midst of an entrepreneurial renaissance we're in the midst of a transformation where whether any company likes it or not they're a software company and they're a data company whether you're a 117 year old financial services firm a 124 year old industrial company or an 87 year old insurance company every company needs to adapt and they must use the tools and technology but equally important 
they must enable the human capital in accelerating that transformation a little bit about Galvanize I spent my entire career basically organically building skill sets and talents alongside a lot of my IBM colleagues in the room I recognize the trend moving to a skills based economy and I recognize the trend of the granular technology and skill sets that were coming of age and we had this crazy idea that we would build a school based on competencies and not just credentials but by specifically knowing exactly how well do you know R how well do you know Node how well do you know Java or Scala or command line or HTML5 and how do we actually build a school that allows you to acquire those skills that's more immersive so our flagship program was a a six month program to learn how to be a software engineer and we launched a 12 week data science and data engineering program where we take people from very diverse backgrounds to apply mathematics degrees from a great school or they were a poker player or literally a clerk at Best Buy we take anybody with the aptitude drive and determination and give them the skills to be successful in the 21st century to be successful on products like data works on offerings like Spark and be successful so along those lines and I would offer to all of you as you look to deploy technology and organizations you need to take a close look at your human capital and look for ways to identify return on investment learn how to find real world examples to apply solutions and work your way through the maturity model of how you deploy and become a data product organization essentially one of the things we help organizations do is help them learn how to think and act and iterate like a startup along those lines we wanted to point out that if you are in one of those organizations data is a leadership problem this is a sponsorship and commitment to be a transformative company and to not just find how to reduce cost or store your data but how to innovate and drive new business models from your data to get us going in that regard one of our first offerings that we announced tonight with IBM and part of the data first methodology is we have a workshop a two day workshop where we work with your executive team to help them understand how to find problems to work to find sponsorship as well as to help look at opportunities with your analysts to modernize them on the open source toolings on the IBM products and methodologies that actually will make them successful I'm going to keep us on track and it was a pleasure to be in front of you my name is Jim at Galvanize and my pleasure to introduce one of our other partners that's a leader in the open source community and has a wonderful product built on top of Python and that's Travis Oliphant from Continuum that was great you got me going, ok thanks Bob, pleasure, thanks Riddica Bob for having me here it's a real pleasure to be here as mentioned I'm the CEO and co-founder of Continuum Analytics Continuum Analytics is about bringing Anaconda and really bringing Python and the open data science ecosystem to the world we're really excited about the potential in data works and ensuring that the full capability of the thousands of open source projects that are becoming part of the Anaconda ecosystem are part of your day-to-day work the open data science program is an open all-inclusive movement that really connects data analytics and compute it's something that we've been a part of for a long time I've been working in open source for almost 20 
years I know I look like I'm getting a little old but I've loved that journey because it's enabled me to connect with thousands of developers all across the world and see firsthand the innovation that's taking place and what it's done is really revolutionized the availability of innovation that's why people are downloading projects doing innovative things quickly much faster than anything possible that's this open source community ecosystem that really brings innovation and it's going to continue with us for a long time so how do we leverage that and take advantage of that is that Python has been really good at is connecting all these people together if you ever go to a Python conference it's a mismatch, it's a pastiche of all kinds of different people all kinds of different perspectives which is wonderful because then they build solutions but it's not the only one there's other languages as well R and Scala and SQL, Java they all bring people together to solve these problems and then Python again becomes a glue between all of those as well and so we use this to create Anaconda Anaconda is an open data science ecosystem Big Ten philosophy bring all of it together make it easy to use in your organizations we've had lots of downloads lots of people excited about what we're doing with Anaconda over almost 8 million downloads of Anaconda since its inception it's really become the lingua franca of how people do data science the Python community is huge the R community is huge what we've noticed as my friends in the Spark community who I've worked with they tell me that they see more than 50% growing to 75% of users of Spark are using it through Python so how do we make that really amazing how do we make that experience amazing I'm so thrilled that IBM and Continuum can work together through Anaconda to make that experience for everyone in data works and beyond really amazing now with Anaconda we've seen all kinds of use cases one of the things I love about my job is being a part of the open source ecosystem and being a part of NumPy being a part of SciPy I've been able to connect with people doing real problems real world solutions throughout the world in large investment banks, global consumer product manufacturers and oil and gas all of them have real problems to solve real things they want to get done and we've been making tools to make that possible I'm not going to be able to go into detail on all of these use cases but it gives you a feel whether it's a risk assessment that you want to deploy to thousands of nodes whether it's you're trying to figure out whether that products don't go bad on the shelf or in transit and you want to put together a workflow to watch a project in transit and watch the weather and incorporate it all together or build a machine learning algorithm or connect your data all the sensor data that's becoming available how do you connect that sensor data in real world well Python brings that together and makes it possible because it fits in the head of people people like me, I was a scientist when I got started in this world I was a machine learning scientist I taught school for many years I was doing the work and I found Python made it easy for me to do that work and not be a developer at the same time I could still have access to all the powerful tools developers give so it's a beautiful combination and folks around the world have found this same combination to work I'm excited that we're also being able to do things like this is a study we did with the tax brain 
association big data is everywhere including our government how do we make that data accessible to people here it was exciting because they were using SaaS is a great tool but it's kind of old and getting ancient and it's hard to find SaaS developers it's also hard to deploy and this tax brain project we were able to actually translate their SaaS code to Python code and not only was it 100 times faster making use of our compiler strategy technology but it also was much easier to deploy on simple systems so this is the kinds of projects that are possible and with Anaconda Anaconda has a promise of accelerating time to value connecting data analytics and compute together in one and empowering data science teams many of the same things that data works is trying to accomplish so I'm so excited that with Anaconda and data works together we can make Spark and Python and all the other tools that are part of the open data science ecosystem really give superpowers to you all and others who change the world thank you very much the science community is obviously important and we talked about that I would now like to introduce Dr. Samesh who is our global our global CDO within IBM and he's going to talk to us about the role of bringing a data driven culture in our organization welcome so I report actually to the CDO the first CDO of IBM in the Parliamentary I'm responsible for data and information governance and Jim I thought you were talking to me when you said 100-year-old pharmaceutical company and 75-year-old insurance company that those are exactly two things I worked for all my life in healthcare and for the longest time I was a customer of IBM technology and in fact IBM's commitment to analytics and data is what brought me here you know you pointed out I think one of the slides said that data is a leadership challenge and indeed that's very true and I think it's really thanks Bob Pitchiano's vision that he early on realized that you know to be successful in analytics you really need a chief data office and an organization that can help bring all of this data together so what is this modern CDO my boss in Nepal likes to call it a craft it's somewhere between art and science you're trying to bring data together from a variety of different sources the old adage garbage and garbage out is even more relevant in the environment of big data you really do need to make sure that the data is highly curated appropriately provided meets all the standards around security, privacy, etc but really more importantly you work as an internal network of experts who understand what each element of that structure data is how do you integrate unstructured data how do you bring it all together how do you bring it in the analytics environment and that's really the role of a CDO it's an evolving role we came out of a very very successful CDO summit just this week earlier this week sorry last week in Boston just think about the numbers there were only about a 30 or so CDOs in a small room three years ago went up to about 70 or so last year about 70 CDOs from top all of the top US companies and the number just keeps on growing so this role is evolving this obviously profession or craft as Indipal likes to call it evolving and one of the things that Indipal gave me a tip and it's a tip I'll pass on to you that if you find an organization that wants to hire you as a CDO but they don't want you to focus on data governance there's a lot of connotations that come with it don't take that job I think it's very very critical that 
governance plays a central role in your data strategy so we're shifting from data governance now to building a culture of data savvy professionals and part of what we're doing in IBM is not only create this central chief data office but in fact create a network of seeded all across IBM businesses and they bring their challenges to us and we together solve them through a central framework and that allows a lot of leveraging to occur global business services can learn from GTS, Watson can learn from analytics, you can have new things happening in the cloud research could be coming up with a brand new opportunity or a set of technologies that could allow establishment of a trust platform and all of that sort of finds a place an ecosystem where everything can thrive and you can learn from each other not quite unlike the data first ecosystem that you're talking about here so if we continue what are some of the key elements that are needed to become data driven it's pretty important to create an environment where all of this data can come together certainly you're seeing that with data first we are obviously eating our own kitchen by creating this environment internally for IBM by having that hub and having people who understand the data that's really very foundational as you develop more and more deep data expertise and partnerships you really are working with business it's really a business issue are you solving business problems on any given day or are you simply going around the circles trying to take care of the data with no end in sight what's the real ingredient if you try to do governance for all of your data assets to the same level of stringence that'll be boiling the ocean but if you are able to drive from business question one of the first things we did as we established CDO was to go around all the business unit as we talked about what's the role of a business unit data officer what's the role of CDO how do you create data and how do you share that journey it became clear that businesses had a lot of questions of their own those questions fed into our strategy which was finally approved by all of the SVPs so I do feel that it's Bob used the word inflection point it truly is it's bringing a lot of information together like it has never been brought together before and the technologies that we are building such as the one here a data first is what will make it happen yet proof of concepts and pilots and ability to scale rapidly across through a very scalable cloud based platform regardless of what problem you are trying to solve so let me try to give you a sense of how we are going to you know eat our own cooking so to say I came from an environment in healthcare where I use IBM technologies a lot of structured and unstructured data to do things like predicting diabetes predicting who is likely to be hospitalized predicting who is going to get complications of the disease that are going to be addressed now that need to be addressed now who can benefit from home care or remote monitoring it's a classic IOT case if you think about the data that's coming and who can be whose hospitalization can be avoided so those are the kind of problems that I solved before coming here but when I came here to IBM there's a bigger problem at stake which is that the foundational you look at these three projects and you'll say what's there you know everybody has client 360 I'm sure many of your companies are not one but it doesn't such initiative is running around there's a difference the difference is 
that when you add the word cognitive which is possible now because of all the platforms that we have they take on another light so client 360 is simply taking a single view of all of your customers data in one place but it is actually using that data together with all the external data that's coming and imagine your sales person is able to walk in because something that has just happened a week ago emerged at an acquisition perhaps something else regulatory climate change which now has a completely different meaning for that person who he's trying to target or she's trying to talk to or she's trying to deal in cloud or predictive modeling and so forth this is the kind of work that generally was done by human our boss in the power tries to use four ease educate these are systems that educate themselves they're not programmed they express you saw some of the examples natural language so they you interact with them in a natural manner through voice through text etc they build expertise like that of a human just like we've talked about oncology figuring out the right cancer treatment and they evolve so they share a lot of human characteristics so imagine if you were to put cognitive and all of that client 360 becomes you know from a fairly routine exercise to collect a lot of data to really become a differentiator for your business digital business you know we are used to in the traditional world brick and mortar world being able to walk up to the buildings or companies talk to different business owners try to figure out what are their business problems how do we help solve them when someone shows up on your digital property digital enterprise well you don't have that interaction how do you create a cognitive assistant who actually can act by looking at all the data so when a person logs into your website you exactly know what their needs are you know where they're likely to go oh by the way such and such news came out or a weather pattern is going to change and that was another type of interaction to occur so these are essentially intelligence systems that allow you know the challenge of essentially maintaining your digital life cycle and take it to another level because you really understand them well beyond chatbots and so forth right it's really an intelligent interaction cognitive contracts this is something I didn't realize how big a deal it was millions of contracts we all have in our major companies tens of millions in IBM right everything comes up not that small in Europe there is a new sweeping law that has just been unleashed EU GDPR privacy will be seen in very different ways you will need not only to be able to answer what kind of data you have on Samesh but you will have to be able to say now I have a right to be forgotten most companies can't even figure out where their data is think about how to address this there's only two years to do so once you start looking at contracts in a different way you can actually say well which contracts really are at risk where do you have exposure where can we do something to improve our ability or change parts of the contract to become more compliant when I started talking to our procurement people they wanted to know which contracts are at risk of either not getting fulfilled or having delays or having escalations or other types of risk can we do something to make our contracts more tight deliver a better customer experience so I think what you will find that by adding this new set of technologies as you add more and more cognitive dimensions 
the entire enterprise become more intelligent it becomes self learning and that's really where the differentiation occurs so I would like you to think about what all can you make cognitive interval has challenged us to make CDO cognitive we are working on 10 years. We are working on a new process to convert them into cognitive processes. I would like to now turn it over to Nick to continue the journey here. Thank you. Ladies and gentlemen please lower your voices so we can hear the presenters. Ladies and gentlemen please lower your voices so we can hear the presenters. Ladies and gentlemen please lower your voices so we can hear the presenters. So like I said we are the parent company for two banners food lion and brothers and for that banner data brand and how we go to market. So being in the grocery business, we've learned a lot about our customers, and this is a unique part of our strategy and a part of the journey that we've been on from a data perspective. Our strategy for Foodline is all about easy, fresh, and affordable. And for a lot of people, that may sound very superficial on the surface, but for us when we think about who our customer is, it's extremely important. A lot of our customers are people who may be on government assistance or who may be trying to figure out ways to make sure that their family can be fed in a nutritious manner, and they're trying to stretch their dollar as far as it can go. And so for us, when we think about our technology strategies, how do we enable our banner to ensure that it can achieve its business strategy? So it starts with the idea, oh, we went backwards, okay. There we go, our journey, okay. So when you think about where we were as a company about three years ago, we really weren't treating data as an asset. We really didn't see it critical to our strategies and critical to how we move our business forward. And what we've done over the past three years is really put data at the center of our strategy. Where that started was getting a deeper engagement with our customer. So when you go back to this idea of easy, fresh, and affordable, one of the things that we wanted to do was start to revolutionize our digital experience with our customers. And how do you go about doing that? Well, for us, it was about creating a new web presence, which now we're moving out to mobile. And I know for a lot of people, the first website that you want to go to is the website of your grocery store, that's okay. But what we're doing there is really engaging our customers in a different manner. And what we've done in this engagement, in this specific implementation, we leveraged the cloud and no SQL technology that no one in our organization was familiar with. This was one of the things where we had a project underway to redo our digital experience, and at this time, no one realized that we actually needed data to help run this site. So you talk about a project that was running in an agile manner, and the team reached out to me and the rest of my team to say, halfway through the project, we forgot we need data, a critical component of this architecture. And so when you think about how do we get speed to value, and this was essentially a crisis mode, because for us, this was about engaging our customers differently. We had media buys. We had all kinds of things that we were rolling out in association with this website. So for us, we leaned into our partnership with IBM, and that was really my first introduction into cloud within the enterprise. 
And we implemented, there are no SQL offering. We were able to implement on time, on budget, and it was a phenomenal experience, because for us, we were implementing a web-ready, mobile-ready database that no one in our organization had experience with. That to me is the value of going to the cloud and leveraging data platforms within the cloud, because you start to remove the entire engineering component out of the process and out of your projects. Allows you to get to value extremely quickly. As we moved along this journey, so now we've stood up our new web presence. We are starting to engage our customers a little bit differently. We have a loyalty program, and that's a program that we've had for many years. But you can ask anyone within our business, how well have we leveraged that data, and we haven't. And so one of the initiatives that we led was really starting to rethink data warehousing. And after our experience with the no SQL platform in the cloud, I said, why don't we move our data warehouse platform to the cloud as well? The intent behind this implementation was really to give our business users access to data that they had never had before at a scale that was unprecedented for us as a business. And when you think about some of the unique challenges of an organization like ours, we are a traditional retailer, a grocery store. Technology is not what we do. However, we need to innovate in order to remain competitive, in order to continue to adapt to where our customers are asking from us. And so we looked at this platform and said, how can we do something quickly? How can we change how we go about managing data at the enterprise level? And when we thought about our implementation with no SQL and we saw the offering for dash DB, we said this was a great pairing for us. So for us, we started stitching together kind of our vision of this data works platform. So it's great to be here because for me, I feel like we've been on this journey with IBM all along. And it's great to see kind of where things are going and how they've evolved. And so for us, it was about getting our point of sale data, which as you saw at the beginning of the presentation, lots of transactions, billions of transactions. This wasn't a trivial undertaking and moving this to the cloud. We had to convince a lot of people within the organization that this was the right thing to do so that way we could scale this platform in order to allow business users from around the organization to have access to this data to drive insights. Think about linking this back to our strategy around easy, fresh, and affordable. When we think about affordable, how can we make sure that we're giving you the right promotions for the products that you find most valuable to ensure that you're getting access to nutritious products, products that your family desires at a price point that you can afford? That becomes such an important part of how we go to market and it becomes how we differentiate in the marketplace. The next part of our journey and this is one that we're currently undergoing is really leveraging big insights. And this is about looking at how do we optimize our network from a supply chain perspective. So you might say to yourself, again, what does this have to do with this easy, fresh, affordable? Well, one, okay, I think I have an email message, all right? 
So when we think about big data and supply chain for us, and network optimization, every penny that we can pull out of the supply chain, everything that we can pull out of our COGS, we can transfer to our customers. I know for a lot of us, on a daily basis we aren't concerned about where our next meal is coming from or how we're actually going to get to the grocery store. These things aren't problems that we're very familiar with, but for our customers, they are a reality on a daily basis. If you think about having stockouts in the store, on the surface that sounds like, well, lost revenue. But take that to a deeper level; think about easy, fresh, and affordable. Think about being a mother, a single mother of four; you're working two jobs, whatever the case may be, and you have to ride the bus to go to your grocery store. And they're out of product. That for us is a very serious problem that we never want to have our customers exposed to. Their day-to-day life is filled with chaos, and we really see our position in the marketplace as the one place where they can go, they can be treated with respect and dignity, they can get product at a price point that they can afford, and we ensure that their experience is seamless and that we have the products available at a price they can afford. So for us, when we think about our data strategy, it's truly about enabling our business and ensuring that we can go to market and engage our customers the appropriate way.

This platform is, again, like I said, one that we're rolling out, and for us, this is new technology that we're not familiar with. And that gives us really the ability to leverage the cloud in a different way. We don't have to spin up an engineering team. We don't have to make a huge capital acquisition in order to acquire software and hardware. We can really start to learn and iterate in an agile manner, move extremely quickly, and respond to business needs rapidly. This is a platform that we definitely see expanding and taking on a much larger footprint within our data platform.

So when you think about some of the lessons learned from this journey, there are a couple of things for us that have been very transformative. One, how do we think about implementations? We go fast now. We start to remove some of the overhead and the headache of trying to stand up complex data platforms. From a finance perspective, we totally changed the conversation. There's no more huge capital acquisition. You're starting to move more to your operating expenses and understand what your costs are on a regular basis. And how can you link those costs in a very transparent manner to the outcomes and to the business value that you're trying to achieve? For the first time, we really have an understanding of what our data platforms cost, and then how do we link those to the business outcomes to ensure that we're getting the appropriate ROI? One of the things that we've also had to adapt, and this is really about the culture: as an organization, we've really started thinking data first. Now, as a company, when we think about business strategies, I immediately get called into the conversation. We start thinking about what the implications are from a data perspective. What are we gonna collect? How are we gonna use this? How are we gonna think about answering the questions, measuring the strategy, and improving the strategy with data?
And so for us, this partnership with IBM, and going through this journey and seeing where we both are at this point, has been phenomenal. I really appreciate having the opportunity to talk with everyone here, and I thank you.

With that, Nick has been a phenomenal partner and customer, and we have really enjoyed embarking on the journey with Delhaize and what they're doing. I'd now like to invite to the stage Shiv Sehgal, a partner who's gonna talk to us about how they're solutioning a lot of their capabilities around our data and analytics solutions. Shiv, welcome to the stage.

Appreciate it. Good. Thank you. So good evening, everyone. My name is Shiv Sehgal. I'm the solutions architect and product manager for RSG Media's Media Mantra platform, and I wanna quickly thank IBM for their support and the ability to share our story and how we're leveraging IBM's Watson DataWorks platform. So let's get down to brass tacks. Who is RSG and what do we do? RSG is a software solution provider. We provide software for the world's leading media and entertainment companies. And what do we do? Well, our business is quite simple. We help our clients make more money. And how do we do that? That's what we're gonna get to in just a second.

So let's start with Media 101. And before I do, one thing to note: in the media and entertainment industry, there have been massive, massive shifts, whether it's the cable and broadcast networks like the Viacoms, AMCs, and Discovery Channels of the world; the MSOs, which are your cable operators like the Comcasts and Time Warners; the studios; or the over-the-top players, which are the new folks like the Netflixes and Hulus of the world. So there have been a lot of new players in the market, and that's what I'm here to talk to you a little bit about. So let's start with Media 101, and we have a bunch of people here, so I think this might be interesting. Just a quick show of hands, how many folks have a traditional cable subscription still to date? All right, a large, overwhelming majority. In terms of folks who don't have a cable subscription and have just Netflix or just Hulu, a quick show of hands, how many folks are there? All right, nice. So this is actually what's happening right now in the media and entertainment industry. Traditional TV viewing is down. And the numbers you're seeing here: there was about an 8% drop this past year, and two years ago there was about a 6% drop-off in traditional TV viewing. Now, this is me myself sitting with my family in the living room watching TV. Now there's been an explosion of platforms, an explosion of content, and the traditional business model is being shaken. Right now, what we're seeing is a drop in PUTs, persons using television. The number of eyeballs watching traditional television in the living room is going down. It's down about 12% overall, but for millennials, the number right now is a 20% decrease in traditional TV viewing. Now, what's going on here? Well, like I said, the TV business is all about eyeballs and understanding where the eyeballs are going. Right now, the average consumer watches about 10 hours of content a day. So if they're not watching traditional TV in the living room, what are they doing? Well, now they're streaming content on their mobile phones, on their tablets. They have access to more content, whether it's through YouTube or Twitter. So the eyeballs are being scattered across platforms, and that's what's happening today.
We have not only more programming created; there's been an explosion in the types of programming that we have. Now we have the SD version, the HD version, the broadcast version, the director's cut, and so on and so forth. So there's been an explosion in the variations of content that we have. There's also been a diversification of media and entertainment platforms, where now there are more platforms: you have your traditional TV, your smart TV, your tablet, your mobile phone, and more viewing platforms for subscribers to access content. And on top of all that, being able to deliver the content to all these various platforms just adds another layer of complexity.

So how do we help our clients? It starts off with a very basic set of questions. How is my content performing across platforms? Not just on linear television but, for example, on demand. We want to understand how the Wizard of Oz, not the 1930s version of the Wizard of Oz but the James Franco version, and not the SD version but the HD version, how did that perform on linear TV and then on demand? And then how many clips and trailers were watched? Now, this is the basis for companies today to be able to understand not just how their content is performing, but how to take this information and strategize a little bit better: not just how is my content performing, but what content should I be acquiring so I can better engage our audience base for content, advertising, and marketing. And as I mentioned, this is just the baseline. We need to understand how our content is performing across platforms and what advertisements are being delivered across platforms. We need to understand what content is driving viewership, what viewers are engaged in, what's the top performer, what's the bottom performer. So this is just one of many questions that our clients have, whether it's in the cable and broadcast space, the MSOs, or the studios.

So how do we help our clients? Well, like I said, our business is all about eyeballs, and very simply, we want to help our clients make more money. And this is all about data. And I love this phrase right here: data first. It's all about understanding data, whether it's getting the Nielsen data feeds, or the Hulu, Twitter, Crackle, and YouTube feeds. It's getting all this data in-house under one roof and then being able to interact with it and analyze it, whether through algorithms or through interesting applications, such as the schedule optimization application that we have. It's again powered by IBM's Watson DataWorks platform. So what we're seeing here is two quick screenshots. One: how did my content perform on linear TV? And we were talking about eyeballs before. Where are those eyeballs going? For those folks who are watching TV in their living room, 30 minutes before they watch the program, what did they watch? 30 minutes after they watch that program, where do they go? These questions seem simple, but ultimately this whole application is helping programmers and schedulers make sure that, for that time period, for that day part, on that network, they know what viewers are watching and how to keep those viewers engaged so they don't go to competitor networks. So this is what you're seeing here, where we have, and it's a little blurry and fuzzy, I apologize, impressions, which is synonymous with eyeballs.
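That lead-in/lead-out question (what did a program's viewers watch 30 minutes before and after?) translates naturally into a self-join over tuning records. A minimal PySpark sketch of the idea follows; the input path, the schema (household_id, ts, network, program at minute granularity), and the target program are assumptions for illustration, not RSG's actual pipeline:

```python
# Minimal sketch: lead-in audience flow with PySpark. Where were a
# program's viewers 30 minutes earlier? Input path, schema, and the
# target network/program are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("audience-flow").getOrCreate()

# Minute-level tuning records: household_id, ts, network, program
tuning = spark.read.parquet("/data/tuning_minutes")

viewers = tuning.filter(
    (F.col("network") == "NET-A") & (F.col("program") == "Show X")
).select("household_id", "ts")

# Join each viewing minute back to the same household 30 minutes earlier.
flow_in = (
    viewers.alias("v")
    .join(
        tuning.alias("w"),
        (F.col("v.household_id") == F.col("w.household_id"))
        & (F.col("w.ts") == F.col("v.ts") - F.expr("INTERVAL 30 MINUTES")),
    )
    .groupBy("w.network", "w.program")
    .agg(F.countDistinct("v.household_id").alias("households"))
    .orderBy(F.desc("households"))
)
flow_in.show(10)  # top lead-in sources for Show X's audience
```

Flipping the sign of the interval gives the lead-out side of the same analysis: where the eyeballs went after the program ended.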
And we can see, program by program, how many eyeballs were actually engaged and watched that content. And the second view of behavior, this is the audience flow analysis: where are those eyeballs going? Whether it's on linear or cross-platform, how do we take this information to provide content that is a little bit more personalized and make sure that that's what your audience base watches. Now, once you get the data in-house, it's all about the tools and how you are able to engage with that data. And this is where the Spark and Python algorithms come into play. After you understand how your content is performing, well, how about some recommendations for what I should be scheduling? And that's the basis for schedule optimization, some of the screenshots we were just seeing before. What we strive to do is to understand: how did our content perform historically, and how is it going to perform in the future? This what-if analysis is the predictive models, for example, to understand what the rating for a show will be: if I switch it from the morning day part to the afternoon day part, will our ratings increase or decrease? This is the goal of some of our predictive algorithms: not just to forecast that rating, that impression, that audience share, but to provide recommendations to schedulers to say, hey, at 4:30 on Thursday, your viewers are going to another network; this is the set of programming that might help keep those eyeballs on your network instead.

So how is this all possible? Again, we're leveraging IBM's Watson DataWorks platform, and it's all about the data. So I'm gonna focus on the bottom layer first. Whether it's structured data, unstructured data, Nielsen data, or Twitter data, we have to have a pipeline that manages our content and gets it not only from A to B, but marries and fuses those data sets so we can understand, for that one asset, the James Franco HD version, how it is performing across platforms. So we're using Spark, we're using a bunch of technologies, and even matching algorithms, to make sure that we can expedite the time to analysis. Because just like in the finance industry, information is key, but timely information, that's the strategic advantage. So rather than manually having to manipulate data and confirm the matches, we can do this in an automated fashion, where it's a key strategic advantage to understand how yesterday's show performed, and then, was there a digital lift on Hulu, for example? You can't figure out the answers to those questions unless you have the tools to manage the data. And that's the bottom layer. The next layer, we were seeing some algorithms earlier; well, that's the Spark and Python layer, and this makes the interaction with the models a little bit more engaging. If we wanna do some what-if analysis, if we wanna understand how schedule A compares to schedule B, we need to make sure that our clients have the tools to do this what-if analysis and not wait 24 hours just to get the output. Within minutes, we wanna be able to run various iterations, and that's what the modeling layer is. And the best part is the cognitive side. This is IBM's Watson DataWorks platform, and we wanna make sure we can nudge the schedulers and the programmers to say, hey, take a look at this day part; this is an opportunity to exploit. And that nudge was never around before.
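The what-if modeling layer just described can be sketched as a small Spark ML pipeline: train on historical airings, then score the same programs under alternative day parts. Everything below (the path, the schema, the random-forest model choice) is an illustrative assumption rather than RSG's actual model, and it assumes the candidate day parts already appear in the training history, since the indexers only know labels they were fitted on:

```python
# Minimal sketch of a day-part what-if model in Spark ML: fit on
# historical airings, then predict the rating a show would earn in a
# different day part. Path, schema, and model choice are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

spark = SparkSession.builder.appName("daypart-what-if").getOrCreate()

# Historical airings: program, network, day_part, week_of_year, rating
history = spark.read.parquet("/data/airing_history")

pipeline = Pipeline(stages=[
    StringIndexer(inputCol="program", outputCol="program_ix"),
    StringIndexer(inputCol="network", outputCol="network_ix"),
    StringIndexer(inputCol="day_part", outputCol="day_part_ix"),
    VectorAssembler(
        inputCols=["program_ix", "network_ix", "day_part_ix", "week_of_year"],
        outputCol="features"),
    RandomForestRegressor(featuresCol="features", labelCol="rating"),
])
model = pipeline.fit(history)

# What-if: how would each show rate in the morning vs. afternoon day part?
scenarios = (history.select("program", "network", "week_of_year").distinct()
             .crossJoin(spark.createDataFrame(
                 [("morning",), ("afternoon",)], ["day_part"])))
model.transform(scenarios) \
     .select("program", "network", "day_part", "prediction").show(10)
```

Because the fitted pipeline can be applied to any candidate schedule in one pass, iterating on schedule A versus schedule B takes minutes rather than a 24-hour turnaround, which is the whole point of the modeling layer.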
There's a lot of investment in prime time, but there wasn't the ability to analyze the rest of the schedule. And to be able to do this in an automated fashion, in a quick and fast process, that's a key value add that programmers and schedulers never had before. So, for a few quick sound bites: what's the value? What's the ROI of our products? Well, we can proudly state, using IBM's platform, that on the advertising side, we can create a lift of up to $50 million by playing the right ad at the right time, on the right network, for that right specific audience. This is all about hyper-targeting and making sure that we understand the business and contractual constraints. For example, don't play Coke after Pepsi. We have to guarantee X amount of eyeballs; if we over-deliver, that's not good, and if we under-deliver, that's not good. We need that Goldilocks zone, and that's where the algorithms come into play. We were talking earlier about schedule optimization, and what we can proudly state, using IBM's platform, is that we were able not only to boost viewership but to boost revenue. That's a 4.13% lift in eyeballs watching your content and a nice little $6 million boost in revenues. And lastly, on the promo side, we can help drive more viewers to your programs while using less inventory. And that's exactly what happened in our partnership with Viacom, where we were able to boost viewership by 17.5% using 30% less inventory. And that makes a bunch of advertisers happy, because marketing and advertising typically go head to head, and if you can use less marketing inventory, that's more ad sales revenue. So I thank you guys for your time. I appreciate being able to share our story, and if there are any questions, just let me know.

You guys like to make money. Who likes to make money? We announced the Apache Spark Makers Build contest three months ago. And part of that was really to drive the community, you guys, to build and innovate in and around Apache Spark, and to provide a hackathon in partnership with Devpost, Netflix, Tesla, the Silicon Valley Lab, and Silicon Valley Data Science. This ran for three months and was actually sponsored by our VP of development, Rob Thomas, who is also the founder, if you will, the brainchild and the father of our investment and our commitment within IBM to Apache Spark. So I'm gonna bring Rob on the stage. And we had people that actually entered the contest. We had 23 entries that we actually evaluated, and from those 23, we narrowed them down to some finalists, and all of these were phenomenal. Quite honestly, it was such an amazing thing to be able to judge all of these. You can visit all of the entries at apachespark.devpost.com to see all of our final submissions. And Rob actually sponsored over $100,000 in prize money for the winners. And we're gonna go through some of the winners today. You'll see most of the winners actually downstairs on the expo floor, and you'll see a lot of their entries and what they were able to do and build in and around Apache Spark. So are you guys ready to see who the winners were?

The first app was actually a Yelp recommendation engine by Sareka Kalye. This was a Yelp recommendation engine based on user preferences, built using Apache Spark and MLlib. Sareka is not here today, but she will get, for the best student app, a $1,500 check. That's not it, that's not it, that's the next person. This person actually will get a check.
So, Radek Ostrowski actually had an extremely nice project that demonstrated how beacons can be used within stores, together with the power of Apache Spark within the IBM Data Science Experience, to discover knowledge in Yelp data sets and build recommendation web services which empower local businesses to personalize their customer experience. So Radek, welcome to the stage. And Radek, your prize is a $2,500 check. So Radek, thank you. Radek, you actually had a couple of submissions, if I recall, right? I had two. You had two, so congratulations. Thank you. Thank you. You can take this home on the phone.

There is an honorable mention, and he happens not to be here, but I thought this was a great entry. It was Amelitix, and Amelitix is actually an application that tracks your customers' faces while they're shopping, and the data is then used to train models with Apache Spark to give meaningful insights to the business establishment. So congratulations to Abhishek. Abhishek is actually not here today, but he will be receiving a check for $10,000 for his award.

Oh yes, and it only gets better. We are excited to award Bolteo. Bolteo was submitted by Dylan Hall, Shima Rahid, and Nathan Hall. Bolteo is an investment platform that lets users build a custom portfolio, tailored to their investment priorities, around stocks that they already own. This was a real-world application, so I'd like to welcome Dylan, Shima, and Nathan to the stage to accept their award of $12,500. Congratulations.

Next was Dr. Spark by Zahid Alars, and this one came all the way from the Netherlands. Dr. Spark is an ultra-fast, low-cost, personalized DNA analysis and diagnostics offering built with Apache Spark, and this was a phenomenal use case of how they're using it to evaluate DNA. So, Zahid, please come up on stage, and we are excited to award you a check for $25,000.

Our grand prize winners actually created Sense AG. Great demonstration, check it out downstairs. Sense AG helps farmers make smart irrigation decisions by using analytics to provide insightful and timely information. So, Mike and Anthony, for your entry and winning the overall prize, you earn the grand prize. I asked Rob if he'd fund another one, and he did say yes. These are the innovations that are happening in and around the Spark community. So, I want to congratulate these entries and all of the 523 entrants that were part of our Apache Spark Makers Build contest. It was a phenomenal success, and we're looking forward to seeing more.

So, finally, in closing: I'd like you to join the movement that we created tonight, to make data first and to bring more value to your organization from data. I want you to come join us on DataWorks, become a user of the Data Science Experience, sign up in our ecosystem, or invite us to share more about our data-first method with you. So, guys, I want you to have a great time. Let's go up on the rooftop or downstairs to see more demonstrations. Thank you for everything tonight.