Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of DataVersity. We would like to thank you for joining this DataVersity webinar, Empowering the Data-Driven Business with Modern Business Intelligence, sponsored today by Google Cloud. Just a couple of points to get us started: due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A panel, or if you'd like to tweet, we encourage you to share your questions via Twitter using hashtag DataVersity. And if you'd like to chat with us or with each other, we certainly encourage you to do so. Just note that Zoom defaults the chat to send to just the panelists, but you may absolutely change that to network with everyone. To find the Q&A or chat panels, click the icons in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and any additional information requested throughout the webinar. Now let me introduce our speakers for today, Alex Worm and Manoj Gunti. Alex is a senior analyst at Nucleus Research covering the data analytics and data management markets. Manoj is in product marketing at Google Cloud. He leads the go-to-market strategy for BigQuery at Google Cloud, with over 10 years of experience helping organizations build and scale their data-driven transformation programs. And with that, I will give the floor to Alex and Manoj to get today's webinar started. Thank you so much, Shannon. Hi, everyone. Welcome to the webcast. My name is Manoj Gunti. Thank you so much for joining us today. In this webcast, we will cover some of the data and analytics challenges that organizations are facing today, and we will learn how Google Cloud's BigQuery is helping businesses overcome these challenges.
We see organizations going through quite a few challenges, especially given the current macroeconomic conditions. So we're going to address some of those, and then I would like to pass it over to Alex to share some of his findings from recent research and customer interviews. With that, let's get started. We are currently in a very dynamic market, with inflation near a 40-year high, a very tight labor market, and supply chain disruptions. Many organizations are under immense pressure to deliver profitability. Some companies are doubling down on their core competencies, while others are looking to move beyond established patterns and ideas that are no longer adding value to their business. But broadly, we see that almost all organizations globally are moving to optimize their costs and gain maximum value from their investments. At Google, working with tens of thousands of customers, we see that by combining existing data with artificial intelligence, organizations are able to rapidly drive profitability. And they're doing this in two ways: one, optimizing their existing business processes; and two, unlocking new growth strategies, which could be around monetization of data or sharing data with external users. Alex, do you want to add anything in terms of what you're hearing from organizations globally? Thanks, Manoj. We have also been seeing a greater emphasis on cost reduction here at Nucleus. This is especially true in the data analytics market, as legacy infrastructures are failing to keep up with growing data volumes and becoming cost prohibitive at scale. The continued shortage of technical staff has also accelerated adoption of modern analytics, with many organizations prioritizing managed services to do more with less. Back to you, Manoj. Thank you, Alex.
On the other hand, what we are also seeing, primarily due to the eruption of digital services during the pandemic, is that organizations are experiencing unprecedented growth in the volume of data they capture. But it isn't just the volume or growth of data that's the problem. The problem is the unpredictable nature of the data. Take a simple example: data sharing and analytics exchanges. There is a tremendous opportunity in augmenting your data with partner data and industry data. But today, not many platforms can provide a limitless and frictionless data and analytics infrastructure that lets you scale without running into things like capacity issues, data operations issues, or even administration issues. So the bottom line here is that data is effectively limitless, but most data platforms aren't. Even platforms that are easy to set up and demo ultimately run out of steam and become very expensive to manage at scale. At Google, we are really fortunate to work with many organizations and learn from them. What we are seeing is a theme around the challenges getting in the way when organizations try to close the data-to-value gap. The number one challenge we see is that data is big and comes in multiple formats. It comes in all shapes, forms, speeds, and sources. For example, text files, images, video, and audio all need to be ingested at some point. They can be ingested all at once, or, in many cases, there is an increasing need for data to be ingested, transformed, and worked on in real time. And finally, we have also evolved into a multi-cloud world. We find ourselves at the center of a multi-format, multi-source data ecosystem that the typical monolithic and expensive architectures of the past are just not able to scale and manage. Secondly, the way we work with data has also evolved. It's no longer just about SQL.
It involves everything from machine learning frameworks to programming languages, handling multiple different formats, real-time stream processing, and data-rich, data-intensive applications. And third, your data is reaching everyone, not just within your organization, but also outside it. It's no longer just the data team that is using the data being collected and processed today. Employees, customers, partners, and suppliers all get the data in some form or another. It could be raw data, it could be analyzed data, or it could be data in the form of business intelligence reports. Today, data reaches beyond the boundaries of your organization. So these are some of the challenges. I wanted to ask Alex if you are hearing any other challenges from the customers you're talking to. Thanks, Manoj. I think you are spot on with your evaluation. I just have one to add regarding data reliability and tracking. With the mass accumulation of data, organizations often contend with poor data quality and experience challenges tracking data lineage as well. This is especially true for organizations relying on multiple siloed systems. Thanks, Alex. I think you bring up a good point about data quality; that's definitely a big challenge that we are seeing, so thanks for highlighting that. All right, so this is where BigQuery comes into play. From storage to compute to networking, BigQuery is designed to bring all of Google's innovation and resources to the service of your queries. BigQuery stands on Google's global infrastructure, which handles replication, recovery, distributed management, and so on, so there is no single point of failure. Google's cluster management system automatically routes processing tasks across our highly scalable infrastructure. But the most important thing is that BigQuery's query execution engine is completely stateless.
What this means is that if a node, or even an entire cluster, goes down, running queries will not be interrupted at all. And when you combine all of this with Google's high-performance networking infrastructure, you will find that BigQuery is almost 10x more reliable than alternative cloud data warehousing products. Let's delve a little deeper into this. BigQuery provides unique capabilities to help you get the most out of your data across workloads in a way that helps all kinds of users. Moreover, this is done in a way that is cost effective, highly productive, security focused, compliant, and open. BigQuery is completely serverless. It massively simplifies the process of working with your data and enables limitless scale. The unique architecture of BigQuery takes full advantage of Google's infrastructure, which sits underneath some of the world's largest websites and applications. But what does being truly serverless mean? It not only allows you to easily manage various aspects of your data and analytics, it also means you don't have to worry about capacity planning, cluster sizing, optimization, administration, or any of the other limits you typically face with legacy or VM-based data warehousing products. Next, organizations are looking for a platform that supports data of all types, at any scale, and on any cloud. For example, BigQuery has some of the widest and deepest support for all types of data, be it structured, semi-structured, or unstructured. Any type of data can be analyzed across cloud environments in BigQuery. In addition to the data that you bring, Google Cloud has a data marketplace that includes exclusive, rich datasets, like geospatial data, that you can use to augment your own data. You get all of this natively, without integrations into different tools and services.
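To make the serverless point concrete, here is a minimal sketch of what "just submit SQL" looks like in BigQuery: there is no cluster to size or provision before running a query. This example queries one of the Google-hosted public datasets (a real, commonly available table, though availability and schema can change); no names here come from the webinar itself.

```sql
-- Sketch only: run in the BigQuery console against a public dataset.
-- No capacity planning or cluster setup is needed beyond having a GCP project.
SELECT
  name,
  SUM(number) AS total
FROM
  `bigquery-public-data.usa_names.usa_1910_2013`
WHERE
  state = 'TX'
GROUP BY
  name
ORDER BY
  total DESC
LIMIT 10;
```

Billing in the on-demand model is based on bytes scanned by the query, not on any pre-provisioned infrastructure, which is what the "truly serverless" claim above refers to.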
The last thing: almost every conversation we have with customers today is never complete without bringing up the topic of AI and ML. However, we see that organizations today are at different stages of their AI adoption journey. So to meet you where you are, BigQuery offers built-in AI and ML that you can use to have a production-ready model trained, deployed, and predicting in just a few steps. The beauty of it is that you can leverage the power of AI straight within BigQuery itself. And that's why we see that almost 80% of our top customers are using BigQuery ML today. Lastly, we are bringing data to every persona through built-in business intelligence: Connected Sheets, Data Studio, and Looker. I hope this provides a good overview of BigQuery. Now let me turn it over to Alex to share insights from his recent research and interviews with BigQuery customers. All right. Thank you again, Manoj. Hi, everyone. I'm Alex Worm, senior analyst at Nucleus Research covering the data analytics and data management markets. Thank you for joining me today to hear how organizations are using BigQuery to drive more performant analytics and promote cost efficiency. Over the past few years, organizations have invested heavily in cloud data warehouse modernization. Today, I will highlight a few experiences and show some actual benefits from customers who selected BigQuery. Here is a brief overview of the research we will discuss today. First, I will give you a quick introduction to my firm, Nucleus Research, and share some aspects that differentiate our approach. Next, we will go over some typical challenges that have motivated organizations' data warehouse modernization initiatives, as well as the benefits users have experienced from their BigQuery deployments. These benefits include 75% faster processing, 25% reduced administrative costs, and improved customer satisfaction.
We will also see some unquantified benefits that drive organizations to select BigQuery in competitive evaluations. Afterwards, I will present two enterprise deployments to show how customers are actually achieving these benefits. This will include the challenges that encouraged modernization with BigQuery, the strategy involved in the implementation, and the benefits seen post deployment. Following these customer stories, I will also go over some key takeaways, including lessons learned and best practices. Now, let me tell you a bit about my firm, Nucleus Research. Nucleus delivers independent research and advisory services focused specifically on ROI and the financial value achieved from technology deployments. Rather than focusing exclusively on the technology, our research targets the office of the CFO and aims to connect every reported benefit with a tangible result. With our research, Nucleus gives end users a better understanding of what product to select and supports its analysis with relevant customer examples. As a matter of fact, all of our research is sourced directly from end user interactions, and we cannot make a claim without independently verifying it with an actual customer. Since our founding in 2000, Nucleus has published over 1000 case studies and has established itself as a leader in quantitative ROI and value based research. Now that you understand our perspective and approach, let me share some of our credentials. To start, Nucleus is the only research organization certified by the National Association of State Boards of Accountancy. This means that all of our research follows the exact same accounting processes used by financial professionals. Furthermore, all of our research is published directly to the Bloomberg Terminal without external review with an average monthly readership of 2800 investment professionals. Finally, Nucleus work products are fully independent, ensuring that users can trust our research and analysis. 
With that in mind, let's take a look at how Nucleus analysts like myself evaluate the success of a technology deployment. Although benefits can take multiple forms, to translate them into financial value we try to fit everything into three categories that either directly or indirectly impact an organization's profit and loss statement. The first is impacts to revenue and profit. This is frequently seen as a result of analytics deployments, as organizations can better utilize their data to drive efficient operations and capture new customer segments. The second is costs that an organization either fully eliminates or partially cuts as a result of its deployment. This usually involves any subscription or license costs related to prior solutions, as well as any maintenance or administrative work. The third and final category involves any effects on user productivity that impact an organization's budgets. This can either be a direct benefit by avoiding additional headcount or an indirect savings by improving the efficiency of existing processes. Before getting into some of the benefits that BigQuery can enable, I think it is important to review some typical challenges that organizations face with their legacy infrastructures prior to cloud modernization. The first is compute limitations. To illustrate with a more traditional example, I recently spoke with a large biotechnology company that experienced multiple days of latency running weekly reports across its drug development, manufacturing, and commercial delivery business units. This delayed insight delivery, and individual business units often lacked control over their execution timelines. Compute limitations have also become increasingly relevant as organizations have embraced machine learning workflows and real-time data processing. I saw a great example of this recently when I interviewed one of America's largest telecom companies.
This organization uses a classification model for real-time fraud detection and had strict latency requirements of under 50 milliseconds per transaction. In this case, the organization's compute limitations did not extend latency, but rather restricted the number of features it could integrate into its fraud detection model. The second challenge I've commonly seen is high maintenance and administrative costs, which often become unreasonable as data operations scale. I recently spoke with a large pharmaceutical company that spent over $2.5 million per year maintaining and managing its legacy infrastructure. This legacy infrastructure was also approaching the end of its upgrade cycle, which would have required an additional investment of a million dollars and spurred its decision to modernize. Another common challenge I see in my day-to-day involves having multiple data silos. About a month ago, I spoke to an organization that maintains six different on-premises data management systems. This not only scaled administrative costs, but impaired the reliability of data and analytics. As a result, users often encountered inconsistent metrics built on data from different points in time and levels of quality. Now let's see how organizations addressed these challenges with BigQuery, using some aggregate benefits customers have reported from their deployments. One of the largest benefits involved over 75% accelerated data processing. This was achieved using BigQuery's scalable and elastic architecture to bypass the compute limitations of legacy systems. Organizations were also able to facilitate faster processing using dynamic resource allocation, enabling business units to assign compute resources as necessary to manage execution timelines without scaling costs. Organizations also leveraged auto-tuning capabilities to continually adjust their underlying infrastructure and optimize performance.
This is especially relevant going into 2023, as data volumes continue to scale and compute-intensive use cases such as machine learning and real-time processing have become increasingly integrated into traditional business processes. Another significant benefit mentioned by organizations adopting BigQuery involved 25% administrative cost savings. This was largely driven by BigQuery's fully managed approach to data warehousing, allowing organizations to offload many security and maintenance tasks, among others. This enabled organizations to do more with less as they scaled data utilization. Furthermore, organizations have noted significant productivity savings as a result of their BigQuery deployments, with capabilities such as auto-scaling and auto-provisioning saving an average of 20 to 40 hours per month. This is crucial in today's market, as many organizations are struggling to find and retain technical workers and encounter cost constraints when looking to expand technical teams. Solutions like BigQuery allow organizations to maintain agile IT and administrative teams and scale data operations without increasing headcount. Personalized and improved customer experiences were cited as another key benefit of BigQuery deployments. BigQuery enabled organizations to position data faster for their customers, driving improved satisfaction with highly personalized experiences and improved data access. This is important as many organizations are increasingly looking to differentiate their offerings with data and analytics to reduce churn and extend customer lifetime value. Some organizations are also using data and analytics to offer upgraded services at higher pricing tiers and create brand-new revenue streams that would otherwise be unavailable. There were also a few unquantifiable benefits that led organizations to prioritize BigQuery in competitive evaluations.
First, BigQuery offers support for popular BI tools such as Looker, Tableau, and Power BI, allowing analysts to continue leveraging the tools they already use and know. Additionally, many organizations understand that analytics infrastructure is a significant investment that can strongly influence the success of their data strategy. For this reason, organizations often took a longer-term view and prioritized a solution's future prospects. Specific to BigQuery, organizations valued its R&D budget, roadmap, and pace of innovation, realizing that its value proposition would scale over time. Let's look at some enterprise deployments that illustrate how organizations actually achieve the benefits we discussed. The first organization is one of America's largest providers of security services, with products for home security, safety monitoring, and smart home automation, serving over 6 million customers. Before adopting BigQuery, the organization relied on legacy data warehousing and experienced latency in data processing. This limited the reach of data throughout the business internally. This latency also made it hard to provide customers with relevant analytics within their mobile applications. To address these challenges, the organization decided to adopt BigQuery as part of its data warehouse modernization initiative. The organization spent 11 months implementing BigQuery along with various new data management tools and set up the necessary data pipelines and front-end connectors. Following deployment, the organization achieved over 90% faster query processing with expanded storage and compute thresholds. This allowed the organization to position data faster for all of its customers and create personalized experiences to differentiate its security services and drive customer satisfaction. Now, BigQuery is a central component driving the success of the business, and analytics delivered within customer-facing applications have improved customer retention and lifetime value.
Now for the second enterprise deployment. This organization is one of Canada's largest telecom companies, offering internet service, entertainment, healthcare, video, and television products to more than 70 million clients. Before adopting BigQuery, the organization maintained multiple legacy data warehouses, creating disparate silos and a degree of redundancy from inefficient data management. To improve data management and consolidate its data infrastructure, the organization chose BigQuery for modern cloud data warehousing and implemented the solution over 14 months alongside various data integration and data management solutions. Once deployed, the telecom company noted 30% storage cost savings by deleting its redundant data and using BigQuery as a single source of truth. This centralized data enabled the organization to access data faster and achieve more reliable analysis. The organization also reduced analytic processing latency by over 80% using dynamic resource allocation. This enabled individual business units to better control their own budgets and execution timelines, with the ability to assign compute resources as necessary to optimize for cost and latency. Additionally, the organization noted ongoing administrative savings of over 25% by maintaining one cloud data warehouse rather than multiple siloed systems. This future-proofed the organization's analytics strategy and enabled it to scale without significantly expanding technical headcount. To wrap up, let's discuss some key takeaways and best practices drawn from these enterprise examples. First of all, data and analytics are becoming increasingly important not just for internal utilization, but also to empower customers and elevate their experiences. Whether it's a security services provider giving users a better understanding of the safety of their home, or a telecommunications company helping customers find the plan that is right for them, data and analytics are becoming a key differentiator.
Additionally, we found that organizations contending with multiple data silos and legacy systems stand to benefit the most from BigQuery deployments. This can include direct cost savings from retiring multiple systems, as well as indirect cost savings from simplifying maintenance and administrative tasks. Analysts also benefit from improved data access, which is critical given that most analyst teams spend a majority of their time on data wrangling. We also saw the importance of serverless capabilities and managed services to keep costs in check as data utilization scales. This is especially important for larger organizations with multiple departments or business units that require individual ownership of budgets and execution timelines. With that, I will pass it back over to Shannon for some Q&A. Thank you so much for these great presentations. If you have questions for either of our speakers, feel free to submit them in the Q&A portion of your screen, and we'll answer the most commonly asked questions. Just a reminder, I will send a follow-up email for this webinar by end of day Monday with links to the slides and the recording. So diving in here: how does BigQuery help analyze data across clouds? That's a good question. BigQuery offers a capability called BigQuery Omni. Using BigQuery Omni, customers can analyze data currently residing in Google Cloud, AWS, or even Azure without actually moving it. So BigQuery Omni provides a single pane of glass through which you can not only query, but also analyze data across different formats and different clouds. All right. And Google offers both BigQuery and Vertex AI. How should we go about using these? That's a good question. BigQuery offers built-in AI and machine learning capabilities that customers can use to get started with building models and running predictions. And at the same time, Google Cloud also offers Vertex AI.
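As a concrete illustration of those built-in capabilities, here is a minimal BigQuery ML sketch showing the "train, then predict, all in SQL" flow the speakers describe. The project, dataset, table, and column names are hypothetical placeholders, not anything referenced in the webinar.

```sql
-- Sketch only: train a logistic regression model directly in BigQuery ML.
-- `my_project.my_dataset.*` and the column names are illustrative placeholders.
CREATE OR REPLACE MODEL `my_project.my_dataset.churn_model`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['churned']
) AS
SELECT plan_type, monthly_usage, support_tickets, churned
FROM `my_project.my_dataset.customer_history`;

-- Then score new rows with the trained model, again in plain SQL:
SELECT *
FROM ML.PREDICT(
  MODEL `my_project.my_dataset.churn_model`,
  (SELECT plan_type, monthly_usage, support_tickets
   FROM `my_project.my_dataset.new_customers`));
```

This is the sense in which a model can be trained, deployed, and predicting "in just a few steps": no separate ML infrastructure is set up, and the model lives in the dataset alongside the tables.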
So what we would recommend is: if you are planning to get started with AI, you can start building models with BigQuery ML. And as you progress in your AI adoption journey, to where you are using AI at scale and have a lot of models that need to be managed, that's when we would move those models from BigQuery ML into Vertex AI for much more scale and ease of management. So what is Google BigQuery's approach to sharing data inside and outside of the organization? Good question. Specifically around BigQuery, Google Cloud offers something called Analytics Hub; you can find more information on the website. Analytics Hub is our platform for sharing data both within and outside the organization. It comes with a lot of capabilities around managing security, governance, packaging data into more consumable products, and things like that. Can you confirm whether BigQuery supports Bitbucket Git connections? BigQuery does support Git connections. After the webcast, I can share links to documentation where you can learn how it works and how to get started. Thank you. That's all the questions I have right now, but I'll give everyone a quick moment to submit their questions. So while we give people an opportunity to type, Manoj, what's the aha moment that people have when they adopt Google Cloud? What's the thing that they didn't expect that customers are really excited about? I would highlight two factors. One is the scale that BigQuery is able to handle. We have several customers who are analyzing terabytes of data within minutes, and the way BigQuery is able to scale and manage that seamlessly is a big aha moment for our customers. The second thing is just the ease of use.
Everything in BigQuery can be done through simple SQL, so our data users don't necessarily have to go through any sort of training or additional effort to get up to speed on things like AI/ML. Sounds great. And does BigQuery support automation, and are there resources on that as well? I would like to know a bit more specifically what automation the question is referring to. It's a bit of a broad question at this point, but if there are any specifics, I'm happy to help. Oh, there is an additional note here: scheduled queries plus event-driven triggering. Yes. Scheduled queries are definitely available in BigQuery. Event triggering happens in two ways. We do have products called Dataflow and Dataproc which help stream events in real time as well as in a scheduled manner. But event triggering is definitely a capability that BigQuery supports. What are the competitors? Well, we are a vendor-neutral space, so I don't want to ask you to list your competitors or their weaknesses, but what makes Google BigQuery stand out? What are the reasons that people are selecting BigQuery? Sure. Maybe I could jump in on this, given my experience with the market itself. One thing I've really seen jump out with BigQuery is the adjacency benefits of some of these solutions: having Looker integrate very well, as well as the AI and machine learning capabilities that Manoj was discussing earlier. Thanks, Alex. That gives good insight from a third party to validate what BigQuery's differentiators are. Just to add to what you said, one key differentiator of BigQuery is the ecosystem that we bring in, not just from a partner perspective or an SI perspective: Google Cloud offers a data cloud, not just data warehousing technology.
So we have all the peripheral products required under one roof: data extraction, data processing, querying, building AI and ML, and, as Alex said, business intelligence with products like Looker, etc. So that's one of the key reasons why a lot of customers work with BigQuery. Very nice. Well, that is all the current questions that we have. Alex, let me turn it over to you for a second: anything else you want to add before we give everyone a brief moment to submit any additional questions? Sure. Yeah. Like I mentioned earlier, I think a lot of the value that businesses are really seeing is just consolidation. So consolidating not just data warehousing, but really the whole analytics strategy, whether that's using BigQuery, Looker, or those additional capabilities for AI and ML. This allows customers to really centralize their expertise, which allows for more agile IT teams and lower costs going forward. Perfect. And just another question came in here: is there any integration of Python Jupyter notebooks with BigQuery? Yes. As I mentioned, BigQuery supports multiple programming languages, not just SQL. We do this in different ways; one of the ways is user-defined functions. So Python is definitely supported within BigQuery. Perfect. And, as was mentioned earlier, Looker is now part of Google Cloud. That's correct. All right. Well, that is all the questions that we have for today. Alex and Manoj, thank you so much for this great presentation, and thanks to Google Cloud for sponsoring today's webinar. Thanks to all of our attendees. Again, just a reminder, I will send a follow-up email by end of day Monday with links to the slides and the recording from today. Thanks, y'all. Thanks, Alex. Thanks, Manoj. Yeah, thank you, Shannon. Thanks, everyone. Bye.