So Chris, if you're ready, can I ask you to start the presentation?

Well, thank you Simon, and welcome everyone to this Open Group webinar on SOA and Open Platform 3.0 enabling big data. I'm Chris Harding, the Open Group Director for Interoperability, and that includes managing and supporting our work on Open Platform 3.0. I'll describe what that is in a little detail in a moment. But the purpose of this webinar is to talk about how SOA can enable big data, which is one of the key constituent technologies of Open Platform 3.0. We're lucky to have as panelists today three experts in these topics: Trinette Sorals of IBM, Helen Sun of Oracle, and Nikhil Kumar of Applied Technology Solutions. And I should say that the ideas we are presenting here are by and large due to the participants and to their companies. They are not at this point consensus standards of the Open Group; they are input to our work rather than published output. But it's a great set of ideas.

So if we can move to the next slide. What we're going to talk about in this webinar: I'm going to give a brief overview of Open Platform 3.0; we're then going to talk about how an architectural approach based on SOA can enable the management and handling of big data; and then we'll go through a few industry use cases to illustrate that. Moving to the next slide, I've already introduced you briefly to the speakers. This slide gives you their details and the email addresses by which you can contact us if you wish to.

Okay, let's move to the next slide. So I'm going to give a brief overview of Open Platform 3.0, and on the next slide we can start that. The objectives of Open Platform 3.0 are that enterprises should be able to create, evolve, adopt, and use solutions based on current and future emerging technologies to achieve business value. The basis for this is that analysts such as Gartner and IDC are forecasting that technologies like cloud computing, big data, mobility, and social media form a nexus that is going to drive the way business uses IT over the next few years. In fact, the bulk of the growth in spending on information technology will be on these technologies, while spending on traditional technologies will stagnate. These technologies are changing the way business uses information technology and giving companies new ways to achieve business value.

So we in the Open Group want to help companies develop solutions that use these technologies, and these solutions should enable Boundaryless Information Flow: the proposition that information should not be locked up in IT silos but should be available within and between enterprises as and where needed. They should allow users to mix and match technology products and services, so that we have portability and interoperability. And they should support the business innovation that enterprises want to make, including the new styles of business operation that we are seeing.

So if we move to the next slide. The technologies that the platform should enable include mobile, social, cloud, big data, and the Internet of Things, that is to say networked sensors and controls. It should also be adaptable and extensible to include other new technologies as they emerge. The platform should enable development and deployment of open and interoperable solutions that use these technologies, by business people as well as IT specialists.
We understand that it is a major requirement that business people are now getting smarter in their use of technology and want to use it more directly. Support for identity, entitlement, and access management is going to be a key part of this, to preserve and enable enterprises to build value in the information that they exchange. Measurement, tracking, and control of quality of service and quality of data will be a crucial requirement: as enterprises mix and match different services and take data from different sources, they will need to understand the quality of what they are using. The platform must provide support for governance, to enable compliance officers and architects to ensure that systems comply with external standards and regulations as well as enterprise architecture standards. It must provide portability. And last but not least, it must be easy to use, and it must make the technologies easy to use.

So if we look at the next slide. This is the vision that we have for the platform: it should enable enterprises to take input from social media, mobile devices, and sensors in the Internet of Things, as well as from their internal systems — for example, their retail systems and others. It should enable them to manage and process that information, possibly using a cloud infrastructure, and to analyze, display, and visualize it in a form that promotes understanding. That is our vision for the platform.

So let's move on. We started the initiative back in April at our Sydney conference. In July we started the development of a TOGAF business scenario to build our understanding of the requirements, and we published that just before the October conference in London. We're now working on a snapshot of what the platform should look like. We hope there will be a published standard a year after that, and then probably incremental versions of that standard in future years.

So moving on. Okay, we're now going to look at SOA-enabled big data architecture. I'm going to speak to the first slide of this — I see there are some good questions coming in — and then hand over to Helen. So if we can look at that first slide. This is an illustration of the Open Group SOA Reference Architecture, version 1, which is really our understanding of how SOA works. It was developed by the SOA Work Group, and Nikhil in fact was the co-chair of that project and is a co-chair of the follow-on project that's developing a new version of it, aligned with standards work in ISO and the IEEE. This reference architecture shows how you can build services out of your operational systems, orchestrate business processes using the services, and interface them to consumers in various ways; how you can integrate all this together; and how you address security, management, and other quality-of-service aspects, as well as information management and governance. This has been, and continues to be, a very influential standard, and in fact one of our most popular standards downloads. So I will now hand over to Helen to talk about how this kind of service-oriented architecture can be the basis of a big data architecture.

Okay, thank you, Chris. Can everybody hear me okay? So good morning, afternoon, or evening, wherever you are. Thank you again for joining the WebEx. I am Helen Sun, with the North America Public Sector organization at Oracle.
My key areas of focus at Oracle include big data, information architecture, and enterprise data management. As the lead author of the Oracle Information Architecture Framework and its development process, I have published various white papers and webinars on these topics. I'm also a co-author of the Oracle Big Data Handbook, published in September last year.

Chris has just provided a very good overview of Open Platform 3.0, and that sets the context and background for this webcast. Before we go through the rest of the slides on SOA enabling big data, I'd like to ask a question: why are we talking about big data in the first place? Earlier this year, Gartner identified the top 10 strategic technology trends for 2013. Four of them pertain to data and information: strategic big data, actionable analytics, in-memory computing, and integrated ecosystems. And just this month, IDC predicted that the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall IT market. As Chris has already pointed out, big data is reshaping the IT industry.

Simon, can we move to the next slide, please? Thank you. There are some overlay issues with the diagram — apologies for that; this typically happens when we move from a Mac environment to a Windows environment, and we can get it fixed very easily. So as we know, big data is characterized by the three Vs — volume, velocity, and variety — and it calls for new capabilities to manage it effectively. Before we delve into those new capabilities, we want to first take a look at how big data fits into the overall information architecture. What you see here is the information architecture framework. As you can see, it is composed of two key elements: the data realms and the management capabilities. Simon, could you also forward one so the animation comes out? Oh, no, I'm sorry, can you go back? Sorry. Yeah.

The data realms are the black slices in the middle of the diagram. These are the different types and classifications of data, including master data, reference data, metadata, transactions, analytical data, documents and content, and big data. The second element, depicted in the outer circle of the diagram, is the capabilities we need to manage the different aspects and varieties of those data types and classifications. There are nine level-one management capabilities as we see them, including data integration, data sharing and distribution, infrastructure management, business intelligence and data warehousing, data governance and metadata management, data security, master data management, and the enterprise data model. If you want more details about those capabilities, the maturity model, and the architecture development processes, please feel free to contact us — you have the contact information at the beginning of the slide deck — and we'll be happy to answer your questions or share more resources with you.

Now, the reason I'm showing you this capability model and information framework is that it is important to avoid looking at big data through a siloed lens. Rather, we need to find out how it fits into your existing information architecture, so that you can build out new capabilities incrementally and maximize your existing technology investment. Next slide, please. So here is a holistic view of an integrated big data conceptual capability model.
As various data comes in and is captured, it can be stored and processed in a traditional RDBMS, in files, in HDFS, in a NoSQL database, or in a streaming event model. Architecturally, one of the critical components that links big data to the rest of the data realms is the integration and data processing layer in the middle here, under the Organize tab. This integration layer needs to extend across all the data types and domains and bridge the gap between the traditional and the new data acquisition and processing frameworks. While there is clear benefit to a centralized physical data warehousing strategy, more and more organizations are looking to establish a logical data warehouse, composed of a data hub or data reservoir that combines different databases — relational or NoSQL — and HDFS. Data federation capabilities, including a canonical data model, schema mapping, and conflict resolution, as well as autonomy and policy enforcement, are very important for that.

The analytical layer contains advanced analytics capabilities such as various data mining and predictive analytics algorithms and spatial and text analytics, in addition to an event processing engine to analyze streaming data in real time. The next bar is data virtualization, which provides metadata services for abstraction and seamless data access across distributed data sources. Moving down to the consumption layer, your BI layer will be equipped with information exploration and discovery and advanced visualization, on top of your traditional BI components such as reports, dashboards, queries, and scorecards. And towards the bottom, governance, security, and operational management cover the entire spectrum of the data and information landscape at the enterprise level. Next slide, please.

So we've covered the big data capabilities at a high level. Now, why does SOA matter to big data? Many recent analyst predictions call for the need to build out smaller, nimbler analytics capabilities, and this is exactly where SOA plays a key role. As Chris mentioned earlier, big data can leverage core SOA capabilities to acquire, organize, analyze, and consume information. At the same time, services can be created to expose big data capabilities at many levels. Now let's take a look. Next slide, please.

Here's what we call a summary "SOAP" for big data — SOAP here stands for "services on a page" — and we've organized the services based on the big data capability map. Starting with acquisition: services can be utilized for real-time data integration. For organization, the integration infrastructure can be exposed to provide data enrichment and aggregation as a service. In the analysis area, SOA-based event processing can facilitate event correlation and analysis. Moving on to consumption, the BI infrastructure can be exposed to provide reports, graphs, and advanced visualization as a service; content search, publishing, and subscription capabilities can also be exposed as services. For governance, the business rules infrastructure can be exposed to provide business-rule decisions as a service — we'll cover this in more detail in a later slide. The security infrastructure can be exposed to perform authentication, access control, auditing, data protection, encryption, decryption, and so on as a service. Last but not least, schema mapping, abstraction, and virtualized data access services can be created based on the federation and virtualization layer. In the next few slides we'll drill into each of those areas in more detail. Next slide, please.
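To make the schema mapping and canonical data model services just mentioned more concrete, here is a minimal sketch in Python. The two source schemas, their field names, and the mapping tables are invented for illustration; a real federation layer would drive this from a metadata repository rather than hard-coded dictionaries.

```python
# A minimal sketch of a schema-mapping service behind a canonical data model.
# The "crm" and "erp" source schemas and their field names are hypothetical.
from typing import Dict

# Per-source mappings from source field names to canonical field names.
SCHEMA_MAPPINGS = {
    "crm": {"cust_id": "customer_id", "nm": "name", "tel": "phone"},
    "erp": {"CUSTOMER_NO": "customer_id", "FULL_NAME": "name", "PHONE_1": "phone"},
}

def to_canonical(source: str, record: Dict[str, str]) -> Dict[str, str]:
    """Translate a source record into the canonical model, dropping unmapped fields."""
    mapping = SCHEMA_MAPPINGS[source]
    return {canon: record[field] for field, canon in mapping.items() if field in record}

# Records from two different systems arrive with different schemas...
print(to_canonical("crm", {"cust_id": "42", "nm": "Ada", "tel": "555-0100"}))
print(to_canonical("erp", {"CUSTOMER_NO": "42", "FULL_NAME": "Ada L.", "PHONE_1": "555-0100"}))
# ...and both come out with the same canonical keys: customer_id, name, phone.
```

The design point is that consumers program against the canonical keys, so individual sources can change their schemas without breaking every downstream service.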
So we'll start with data acquisition. On the left you see the architecture detail based on the different layers of the SOA Reference Architecture that Chris just mentioned. On top you have the consumer interfaces; moving down, business processes supported and enabled by services, which are composed of service components; and the last layer is the operational systems that provide the backbone for everything above. Those operational systems include the different data store mechanisms and the middleware systems. Three high-level categories of services are included here in the data acquisition space: data movement services, for bulk and incremental movement and replication; data processing services, for batch and real-time as well as streaming processing; and storage and information lifecycle management services, for archiving decisions and retention management. There are three other categories of services you'll see throughout the rest of the slides in the service layer — common utilities such as logging services, metadata services, and federation and virtualization services — which we believe are applicable across all of those big data processing steps. Next slide, please.

Now, IDC has predicted the cohabitation, for the foreseeable future, of traditional database technologies — RDBMS systems — with the newer Hadoop ecosystem and NoSQL databases, concluding that in the short term information management will become more complex for most organizations. The data organization capabilities, as we mentioned, need to cover the entire spectrum of velocity and frequency, handle extreme and ever-growing volume requirements, and bridge and federate the variety of data structures. Various organization services play a critical role in reducing this complexity. The categories of services that we as a group identified include, but are not limited to: data enrichment services; data aggregation services; data joining and merging services for the different types of merging (cohabitation, identical, tertiary, primary, secondary); data model services for canonical data modeling, schema mapping, and conflict resolution; and integrated metadata services, which are certainly critical to providing a common language for data residing across different data storage models. Next slide, please.

Now let's move on to analytics as a service. Event correlation services are getting more and more mainstream for smart grid, fraud detection, healthcare, cybersecurity intelligence, financial services, and public safety use cases. Other categories of analysis services include data mining, predictive analytics, and spatial and text analytics. Typical data mining services expose algorithms to perform classification, pattern matching, anomaly detection, attribute importance (determining which attributes play the more important roles), prediction, association rules, and clustering. On the predictive analytics side, regression is one of the most common forms; others include time-series analysis, case-based reasoning, neural networks, and multilayer perceptrons. Next slide, please.

So moving on to consumption. Various consumption services include the traditional query, report, distribution, and alerting services. More advanced consumption services have surfaced in recent years. One example is services that provide federated search, automatic clustering of data, data taxonomies, cross-analytics, and context-based faceted search. Another example is virtual document services that can dynamically build a consolidated view based on the access and interest points of the consumer. And there are new types of exploration and visualization services that combine tabular views, tag clouds, guided navigation, maps, charts, and statistical visualization packages — from R, for example — to send striking messages and discern data relationships that simple tables and numbers just can't provide. Next slide, please.
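To illustrate the predictive analytics services mentioned above, here is a toy sketch in Python of the most common form, regression: an ordinary least-squares fit exposed as a reusable predict function. The data and variable names are invented; a real analytics service would wrap a proper library and expose this behind a service interface.

```python
# A minimal sketch of "predictive analytics as a service": fit a simple
# least-squares line and return a predict() callable. Data is hypothetical.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, returned as a predict function."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# Hypothetical usage: monthly transaction volume vs. observed fraud alerts.
volumes = [10, 20, 30, 40, 50]
alerts  = [3, 5, 8, 9, 12]
predict = fit_linear(volumes, alerts)
print(round(predict(60), 2))  # extrapolated alert count for a higher volume: 14.0
```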
So, data governance. Going back to the IDC prediction we talked about earlier: with the rise of machine learning, data mining, and other advanced analytics, IDC predicts that decision and automation solutions utilizing a mix of cognitive computing, rules management, analytics, biometrics, rich media recognition software, and commercialized high-performance computing infrastructure will proliferate. IIA also predicts a continued move to machine learning and automation to keep pace with the speed and volume of data; as companies strive to operationalize analytics but encounter challenges with the over-automation of decisions, they will focus more on the optimal mix between human and machine capability and judgment. So data governance in the context of big data will focus on services to determine when automated decisions make sense and when human intervention and interpretation are required. These services will use business rules and can be embedded into business processes, leveraging the human workflow capabilities that the software provides. Services to synthesize analytical models and to establish and measure analytical quality will also emerge, as we see it, to provide effective governance for business decision-making.

Next slide: security. We noticed there is a question from the audience about security; that is certainly one of the very important capabilities we need to establish for any data management solution, especially for big data. The security services, we believe, fall into two main categories: services to ensure the right level of data access, such as authentication, authorization, and auditing — the famous triple-A security we often talk about — and services to provide data protection, such as encryption, masking, and redaction. Next slide, please.

So we come to the conclusion. This is the summary slide for the big data as a service blueprint. Through integrated data and shared services, the main objectives of big data as a service, in our view, are to provide data interoperability, integrated analytics, and an agile data platform that allows you to establish quick time-to-value capabilities while also enabling you to continue to embrace open-source innovations. We have covered all of those service areas in the previous slides. Starting from the bottom, they are acquisition services, organization services, data federation and virtualization services, analysis services, and consumption services, with security and governance services on the two sides. So our view is that SOA and big data are integral to each other. The purpose of this blueprint is to help you understand and organize your big data vision, prioritize the capability requirements and gaps, and establish a roadmap to incrementally build up those capabilities in a holistic and thoughtful manner. With that, I'm turning it over to Trinette and Nikhil to walk you through some big data use cases and demonstrate the role that SOA plays in enabling those solutions.
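Picking up Helen's governance point about determining when automated decisions make sense and when human intervention is required, here is a minimal Python sketch of such a business-rule decision service. The confidence and impact thresholds and the record fields are assumptions for illustration, not drawn from any product.

```python
# A minimal sketch of a "business rule decision as a service" that automates
# only high-confidence, low-impact decisions; thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    label: str         # e.g. "approve" / "deny"
    confidence: float  # model confidence in [0, 1]
    amount: float      # business impact of the decision

def route_decision(d: ModelDecision) -> str:
    """Apply governance rules: automate, or escalate to a human reviewer."""
    if d.confidence >= 0.95 and d.amount < 10_000:
        return f"AUTOMATED: {d.label}"
    return "ESCALATE: route to human reviewer"

print(route_decision(ModelDecision("approve", 0.99, 500.0)))   # automated
print(route_decision(ModelDecision("approve", 0.80, 500.0)))   # escalated: low confidence
print(route_decision(ModelDecision("deny", 0.99, 50_000.0)))   # escalated: high impact
```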
Thank you, Helen. Hello, I am Trinette Sorals. I work for IBM as a worldwide executive architect supporting IBM's big data initiative. Today, Nikhil and I will be talking through a few industry use cases that we believe are nice examples of SOA enabling big data. We'll provide an understanding of the business need for incorporating big data into business processes; we'll talk through some of the key requirements for addressing those business needs; and we'll discuss some of the key components, from both an SOA perspective and a big data perspective, that will significantly impact the business and provide business benefit. We'll also talk through what the integration points are between SOA and big data, and how utilizing SOA enables the solution. So to start off — oh, sorry, can you forward two slides, please? Thank you. To start off, Nikhil is going to talk through a few of our industry use cases within the healthcare and life sciences industries.

So welcome, and thank you, Helen — or rather, Trinette. This is the area we're going to talk about: healthcare and life sciences. Big data is starting to play an enormous role in these two fields. In the case of healthcare, there's a huge amount of change going on, driven by multiple factors. One is the evolving presence of personalized medicine, with genetic data starting to play a role. The second is electronic medical records, an area which has started to stabilize and has seen a large amount of adoption — I think 75% of providers, which is to say the hospital systems, in the United States have completed some level of EMR adoption. And social media and wellness are driving factors moving forward in the healthcare space. So the use cases we're going to look at address two critical scenarios, scenarios which are really at the heart of a lot of chief medical officers and healthcare organizations on both the payer and the provider side — in the terminology, payers are the insurance companies and providers are the hospital systems.

For a little background on me: I was a co-author of the first version of the SOA RA, and I'm co-chair of the SOA RA version 2 project. I've been working in the SOA, enterprise architecture, and cloud space for almost a decade and a half. In the context of healthcare, I work actively in the personalized medicine space; I'm a trustee of one of the larger health systems in the U.S., and I'm actually rolling out product to manage and integrate personalized medicine into the healthcare space. So that's just a little bit of background.

The first scenario we're going to look at is using outcomes for comorbidity identification and wellness. Comorbidity is when two or more diseases interact together, and we're discovering that two or more diseases interacting together can be a predictor for chronic diseases. Most of our key diseases today in the Western world are driven by chronic scenarios — diabetes, obesity, Alzheimer's. These are the diseases of our times. So if you look at why big data plays a role here, we can see it from the flood of information coming from EMRs and from new standards such as ICD-10 and new releases of another standard called SNOMED, which is a clinical terminology standard.
What that is doing is increasing the options and the amounts of data by orders of magnitude. Forward-looking organizations are already looking at that information and conducting big data analytics to help improve outcomes, for example by reducing hospital readmissions. The other major change that's occurring is that patients are starting to communicate with their providers — with their doctors — through social media, and some of the hospital systems have already started mining this data to look for better treatment plans and protocols for their patients.

So what's the solution? Natural language processing and structured searches of unstructured and structured data are starting to be rapidly adopted, and that's a major feature of big data, because that's one of the ways we leverage the big data that has been captured. The other major development is the leveraging of predictive models against structured and unstructured data. What this means in practice is that we have large volumes of data coming in and we're able to start taking proactive decisions. The poster child is typically oncology — cancer — but it's starting to spread to areas such as obesity, and type 1 and type 2 diabetes are also being actively pursued. We're rapidly using this data — combining EMR information, ICD-10 for diagnosis codes, SNOMED for clinical terminology, and different forms of genomic data — to define and create improved outcomes for patients who are identified with cancer early on. The Angelina Jolie case is the poster child for that, but other scenarios are evolving: diabetes is really an area where things are changing, and there is the epigenetic impact in the space of Alzheimer's disease.

What are the requirements, and what are the solution components? Data warehouse appliances become critical where information needs to be processed really fast: in scenarios where we have very large volumes of information and could potentially predict a comorbidity outcome for a patient, the ability to use these appliances and basically do the processing in silicon provides another level of capability to hospitals and provider organizations. Stream computing facilitates effective streaming of data, so that information can be parsed and routed to the right providers as well as processed and viewed from a big data analytics perspective. There's the ability to store this data consistently — the "organize" portion of the model that Helen spoke to. The ability to support security and encryption for patient privacy becomes extremely important from both perspectives: research on this information, and patient privacy from a provider perspective. HIPAA and 21 CFR Part 11 both apply to this scenario, and the ability to support adaptive informed consent is an important field which is evolving in this space. Finally, ontology association — the ability to associate ontologies such as SNOMED, or LOINC for lab codes, with the actual electronic medical records and claims — helps improve wellness and treatment.
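As a toy illustration of the ontology association point, here is a Python sketch that tags a free-text clinical note with ICD-10 category codes from a small lookup table. The ICD-10 codes shown are real categories (E11, I10, E66), but the lookup table and the substring matching are deliberate simplifications; production systems would use NLP and full terminology services such as SNOMED CT.

```python
# A minimal sketch of ontology association: tagging clinical text with codes.
# The lookup table is tiny and the matching naive, purely for illustration.
ICD10_LOOKUP = {
    "type 2 diabetes": "E11",          # type 2 diabetes mellitus
    "essential hypertension": "I10",   # essential (primary) hypertension
    "obesity": "E66",                  # overweight and obesity
}

def associate_codes(note: str) -> list:
    """Return ICD-10 codes for conditions mentioned in a free-text note."""
    text = note.lower()
    return [code for term, code in ICD10_LOOKUP.items() if term in text]

note = "Patient presents with type 2 diabetes and essential hypertension."
print(associate_codes(note))  # ['E11', 'I10'] -> a comorbidity pair for risk models
```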
What's the business impact? Firstly, with changes in the legal system across the world, and especially in the United States, payment is being linked to the wellness of the patient, so there's a direct correlation from a fiscal perspective; where healthcare organizations are seeing 30 to 40, sometimes 50, percent losses in revenue, this becomes really important. Proactively targeted care management is critical if you want to improve wellness: for those who work in the healthcare space, we're seeing a shift from fee-for-service, transactional healthcare to episodic healthcare, and that's a major opportunity and a major change from a business perspective. The next thing that comes in is risk management and resource allocation. We are capturing information for patient profiles from a population health perspective: if I were to look at the population of, say, downtown Detroit versus the population of suburban San Francisco, you would see a lot of difference in the kinds of diseases, the genetic profiles, the age profiles, and the demographic information that we're capturing. So how does a physician practice or a health system determine how to invest, what risks are there, and how to manage its prioritization in what is really a very dynamic healthcare environment? That is where big data is starting to play a big role: the ability to correlate this information, to get it from all sorts of disparate sources and with different levels of trust. Then there's the integration of translational medicine — medicine which is shifting from theory into practice is the way I couch it — and that's becoming extremely important because the healthcare space is changing altogether from a care perspective. It's moving from our traditional perspective of anatomical and physiological medicine into more of an "omic" perspective, with genomic, epigenetic, and other factors being used to determine how we treat patients. This huge change, as we start understanding protein metabolism, is really fundamental, and because the amount of data taken into consideration to determine diagnostics and treatment paths is evolving very rapidly, this is a very large aspect. I think we can go to the next slide, Simon.

I'm going to go over this slide a little faster, because I've already discussed many aspects of it. Another key area in the healthcare use case space is early detection of medical events. That's important to a large extent because we're able to start taking rapid decisions to reduce patient mortality — in particular in scenarios such as prenatal care, where decisions have to be taken very fast, a huge number of different attributes have to be combined, and we need to associate them with decision points. So what do we see as the key solution aspects? Again, you'll see a pattern emerging. We need to be able to capture vast amounts of information, much of which is going to start coming in from new sources such as the Internet of Things and machine-to-machine sources. There's a large amount of change in the standards used in the industry, and there's the advent of social media and image data in the decisioning process — for example, in the case of cancer patients.
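As a toy illustration of early detection over streaming data, here is a Python sketch that flags a reading deviating sharply from a sliding window of recent values, using a simple z-score rule. The window size, threshold, and sample stream are assumptions for illustration only — this is not a clinical algorithm.

```python
# A minimal sketch of early event detection on a streaming vital sign using a
# sliding-window z-score; window, threshold, and data are hypothetical.
from collections import deque
from statistics import mean, stdev

def detect_events(stream, window=10, threshold=3.0):
    """Yield (time, value) for readings that deviate sharply from the recent window."""
    recent = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield t, value  # candidate medical event: alert the care team
        recent.append(value)

heart_rate = [72, 74, 71, 73, 75, 72, 74, 73, 72, 74, 140, 73]
for t, v in detect_events(heart_rate):
    print(f"possible event at t={t}: heart rate {v}")
```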
One of the big things in this particular use case is the need for secure, encrypted data connections. Take, for example, a prenatal scenario just prior to birth: you're going to have remote telemetry and remote monitoring by the doctor, and the ability to look at this information has to be extremely secure and extremely reliable. Simon, can we move on to the next slide?

The life sciences space is another area where, frankly, there's been enormous disruption, largely because of the shift in the healthcare sector to personalized medicine and genomics. The entire space has changed. Traditional genomic activity was based on analysis of relatively structured information gathered from PCRs and microarrays, and we basically came to conclusions by comparing codons. That started to evolve very rapidly in the last four to five years, as we started to talk about different information sources, and those information sources are increasing very rapidly. Standards are changing very rapidly, and to add to that, because of the volume of big data coming in, researchers are trying to look at information from hospital providers. So an interoperability factor is coming into this, and that's part of where the SOA aspect and the big data aspect come in.

So what's the solution? Metadata interoperability; predictive modeling; again, huge amounts of physiological and genomic information; the ability to access providers; and integrative omics. That's a big thing now: a shift from just traditional genomics to including all the other forms of omic information, such as proteomics — there's a whole large space in that area. Why did I focus on genomics in particular for personalized medicine and life sciences? Because that's the future of life sciences: in some of the large pharma companies, up to 90% of new drug development investment is focused on that space. Another area which is going to be really important is virtual clinical trials.

So, solution components. Data warehouse appliances become more important here, but so do commoditized resources to run large analytics — against, say, a comparison of codons to identify Alzheimer's-related information. Stream computing facilitates effective streaming of data. Again, security and encryption for patient privacy matter, because we're looking at patient data in a lot of these scenarios; 21 CFR Part 11 as well as HIPAA start applying, and life sciences companies are covered by both. Metadata repositories are becoming extremely important, because you have so many different forms of information that interoperability is required to really leverage it. Where does the big data and SOA alignment come in? It comes back to the conversation we've had about encryption, transport, and integration, and from the fact that big data in particular has become the basic decision point for determining where and how new drug discovery is done, how it's monitored, and how pharmacovigilance is going to be done after drugs are released. Again, this is a larger topic, so I'll be glad to take questions afterwards, but this gives a high-level perspective.
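One way to picture the metadata repository and informed-consent points Nikhil raised is a small registry that records format, terminology, and consent scope per dataset, and surfaces only the datasets whose consent covers a requested purpose. The dataset entries and policy names in this Python sketch are invented for illustration.

```python
# A minimal sketch of a metadata repository with consent-aware dataset lookup.
# Dataset names, formats, and consent scopes are hypothetical.
METADATA_REPO = {
    "rnaseq_batch_7": {"format": "FASTQ", "terminology": "SO", "consent": {"research"}},
    "ehr_extract_q3": {"format": "FHIR", "terminology": "SNOMED CT", "consent": {"treatment", "research"}},
    "claims_2013": {"format": "X12", "terminology": "ICD-10", "consent": {"treatment"}},
}

def datasets_for(purpose: str) -> list:
    """Return datasets whose recorded consent covers the requested purpose."""
    return sorted(name for name, meta in METADATA_REPO.items()
                  if purpose in meta["consent"])

print(datasets_for("research"))   # ['ehr_extract_q3', 'rnaseq_batch_7']
print(datasets_for("treatment"))  # ['claims_2013', 'ehr_extract_q3']
```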
At this point, Trinette, I'd like to hand over to you — and Simon, we probably want to move to the next slide.

Thank you, everyone. Again, this is Trinette. I'm going to talk through a couple of use cases, and the first is in the automotive industry — more specifically, the connected vehicle. There are numerous statistics floating around out there about the volume and variety of information within the automotive industry and the connected vehicle. For instance, 25 gigabytes of data is generated by a plug-in hybrid vehicle in just one hour — that's just one vehicle, every hour, generating 25 gigabytes of data. Areas where customers are interested within the connected vehicle are the driver assistance systems, where the basic need is assisted parking — we've seen that frequently with a lot of vehicles — progressing into slowing or stopping the car while driving when the proximity between cars reaches a certain point, and in the future, as many of you have seen with the Google car, the entire driving experience being managed by the car instead of the driver. Another area is giving the driver and passengers real-time information on weather, traffic, and other insights to enhance their driving experience. And last but not least is determining problems with car components earlier than ever before, by feeding component performance, health, and other statistics to the maintenance teams.

On that note, given the amount of data from the maintenance logs and car components, along with the dependencies, there is a need for reference data for each of those parts. What we find is that 80% of that data is unstructured — from the traffic and component maintenance logs, and the documents associated with the car components. Master data management and reference data management are required, along with text analytics to extract the detail from that unstructured information, to support the analytics and anomaly detection for the maintenance crews. This will provide insight into the conditions around the vehicle components and their usage, taking into consideration the environmental conditions of the car for each component, to give better insight into what could be deteriorating or breaking earlier than expected. To support the safety of passengers, a response of less than 40 microseconds to millions of events per second is required. This calls for low latency, which we address with stream computing to facilitate the inbound streaming of information, and with MQ Telemetry Transport for sending real-time alerts to the driver and back to the vehicle for the safety response. To provide a lot of that insight, we need the warehouse appliance to analyze the massive amount of data being received from the maintenance logs and from all of the events occurring every second within the vehicle. The SOA integration is in the device connectivity — the vehicle connectivity, where I talked about the MQ telemetry transport that we're utilizing in many of the implementations — and also in the vehicle security.
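Here is a minimal sketch of receiving connected-vehicle telemetry over MQ Telemetry Transport, assuming the paho-mqtt 1.x Python client API. The broker address, topic layout, payload fields, and alert rule are all invented for illustration; a real deployment would add TLS, authentication, and a stream-processing pipeline downstream.

```python
# A minimal sketch of vehicle telemetry over MQTT (paho-mqtt 1.x API assumed).
# Broker, topics, and payload fields are hypothetical.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # A low-latency safety check before handing off to the analytics pipeline.
    if event.get("brake_temp_c", 0) > 600:
        client.publish(f"alerts/{event['vin']}", "CHECK BRAKES")  # alert back to the vehicle

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker
client.subscribe("vehicles/+/telemetry")    # wildcard: one topic per vehicle
client.loop_forever()
```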
Think about the privacy of the information from the end user's perspective — the passenger, the driver. For instance, if you use Waze, it provides information on which police officers are in the area and what the road conditions are, and that is being integrated directly into the car. So there are security integration components that are required from a SOA perspective. Okay, going on to the next one. If you can... Chris, can you forward? Thank you.

The next area is the utilities industry. What we're seeing is a lot of customers wanting to know how much throughput of utilities is going into their house and what their consumption really is. We also need to provide real-time allocation, now and for the future, to be able to dynamically reroute the paths of the utilities in case of storms, or of high consumption at certain times of the year. And we need to provide more efficiency and reliability in the utilities being consumed. So what we are implementing is more of a two-way communication of power consumption between the end user and the power company, so that you can see what you're actually utilizing and also turn the appliances in your house on and off. That provides more management from an end-user perspective, gives insight into who is using more of the actual energy, and drives power-consumption feedback. What we need to do is more forecasting and more integration into the city networks, to provide the reroute capabilities and to ensure that the power delivered to each of the areas is being utilized at its best capacity. What we need to provide from a SOA perspective is interoperability between each of those platforms, along with the information from the SCADA devices and the social media integration — being able to provide actual services from a utilities company. Many of the utility companies are now being deregulated, so they're selling services and are not just power companies any longer; they have to be on top of their game in what services they provide to customers and how they provide them.

The last industry is the finance and insurance area, and this is similar in a sense to the connected vehicle, from a telematics perspective. One of the areas for cars, as many of you have seen, at least in the States, is that we're starting to sell insurance based upon your driving usage, and that goes along with price model management. From a financial perspective, an insurance company also wants to reduce loss development and improve accuracy, so that the actuarial risk is reduced; that provides a better internal reserve and gives the organization better availability of its cash and investments. To provide this, we need real-time data feeds from various sources. For instance, on the car insurance side, we need to bring in the telematics usage information in real time, so that the pricing can be changed based upon the usage.
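To make the usage-based pricing idea concrete, here is a toy Python sketch that adjusts a monthly premium from telematics features. The base rate, weights, and feature names are invented and do not represent an actuarial model.

```python
# A minimal sketch of usage-based insurance pricing from telematics features;
# base rate and weights are hypothetical, not actuarial.
def usage_based_premium(miles: float, hard_brakes_per_100mi: float,
                        night_fraction: float, base: float = 80.0) -> float:
    """Monthly premium adjusted by driving behavior from the telematics feed."""
    premium = base
    premium += 0.02 * miles                 # exposure: more miles, more risk
    premium += 3.0 * hard_brakes_per_100mi  # harsh braking signals risky driving
    premium += 25.0 * night_fraction        # share of driving done at night
    return round(premium, 2)

print(usage_based_premium(miles=600, hard_brakes_per_100mi=1.5, night_fraction=0.1))  # careful driver
print(usage_based_premium(miles=600, hard_brakes_per_100mi=8.0, night_fraction=0.4))  # riskier profile
```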
When someone wants to take out insurance, we also want to be able to immediately pull in all of the disparate data sources outside the organization, as well as internal ones if they have other insurance policies. We want to be able to check their ratings and see what other social information is out there that might provide a better risk assessment of the person requesting that insurance. You also want to provide a single view of the customer across all of the different business areas that may be providing service to that customer, and to associate all of the telematics information, the external information about that end user, and any social media information that may provide a better understanding of that customer and how to serve them better. We also need to give business users the capability to run hypotheses and what-if scenarios. To do this, we need that single view of the customer, but we also need all of the relevant data from across the organization, in order to do real-time risk assessment and provide the best answer back to the person who is creating the policies or determining the overall risk to the organization. To provide that real-time, hypothesis-driven, what-if capability, you need the catalog we talked about earlier, holding the metadata references for all of the data, so that a data scientist or a risk-management profiler can pull in all of the information based upon the type of information they want. The catalog integrates with the SOA services to provide that information back to the data scientist, so that they don't have to know the actual location of the information or the transformations required; it provides the interface for the end user, and the service capabilities provide the information back. The other key area for SOA alignment is the audit and compliance required to protect the sensitive information of the customers. We need to be able to provide the information to all of the individuals working within the company, based upon their roles and responsibilities, and also be able to anonymize the information for scenario building.
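As a final illustration of the role-based access and anonymization point, here is a minimal Python sketch in which the same customer record is served differently depending on the requester's role, with identifying fields replaced by stable pseudonyms for scenario building. The roles, fields, and masking rule are assumptions for illustration.

```python
# A minimal sketch of role-based masking for scenario building; roles, fields,
# and the masking rule are hypothetical.
import hashlib

SENSITIVE_FIELDS = {"name", "ssn", "address"}

def view_for_role(record: dict, role: str) -> dict:
    """Return the record as the given role is allowed to see it."""
    if role == "underwriter":
        return dict(record)  # full view, subject to audit logging
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            # Stable pseudonym, so analysts can still join records across datasets.
            masked[field] = hashlib.sha256(str(value).encode()).hexdigest()[:10]
        else:
            masked[field] = value
    return masked

customer = {"name": "Ada L.", "ssn": "123-45-6789", "state": "MI", "claims": 2}
print(view_for_role(customer, "underwriter"))     # full record
print(view_for_role(customer, "data_scientist"))  # anonymized for what-if modeling
```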
So those are the use cases, and thank you very much for listening to us today. Chris, do you want to...

Okay, if we move through to the next slide, because I think we're now pretty much at the hour and it's time to wrap up. The slides and a recording of the presentation will be available after the event; it may take a day or two for the recording to be posted and for us to assemble answers to the questions and post those with the slides. As we are at the hour, I think we will cut the verbal Q&A session, but if anyone has any questions, could they send them to Simon — is your email address posted? "Yeah, that's fine. If anyone sends questions to my email address, I'll happily act as the go-between for those questions." So send your questions to Simon, and the panelists will try to provide answers to those, along with the unanswered questions from this session.

Finally, I'd like to wrap up by thanking Trinette, Helen, and Nikhil for sharing their insights with us today, thanking Simon for organizing the webinar, and thanking all of you who participated for your attention. If you do wish to participate in the Open Platform 3.0 work within the Open Group, this final slide explains how to do it, and I look forward to working with those of you who decide to do that. So thanks very much everybody, and goodbye.