Hello everyone, thank you for joining this webinar, where we'll talk about best practices for building and deploying embedded analytics. As we're all gathered here, you probably know that in today's market, any organization that strives to grow and stay relevant needs access to data. Data that is accurate, real-time, and accessible to all stakeholders, be it your internal teams, customers, or business partners. International Data Corporation recently predicted that the amount of data produced in the next three years is going to surpass the amount produced within the last 30 years. Considering this, it is no coincidence that 98% of IT executives surveyed stated that shifting from simple data collection to insights consumption is among their top priorities for the upcoming years.

So, how can companies quickly adapt to what is coming? One part of the equation is to consider employing embedded analytics, whether in your SaaS application or web portal, which will make data-driven decision-making possible for anyone. And that's exactly what we're going to cover today. We're going to focus on three main areas. First, we will offer a brief introduction and explain what we actually mean when we say embedded analytics. Then, we will address the question of how to approach embedded analytics today, before looking at integration options and scalability.

My name is Anya and I'm a product marketing specialist at GoodData. GoodData is on a mission to deliver growth through analytics to companies around the globe. Serving over 140,000 global businesses, we believe that it is time to move beyond the monolithic nature of existing BI tools and instead leverage the unique opportunity the data industry has been given today. Our focus is on building data products that generate revenue and help any organization become truly data-driven by delivering data as a service. We believe the data-as-a-service infrastructure is the future of analytics, delivered through three sets of important capabilities. First, a headless BI analytics engine that provides and manages the semantic model and ensures the consistency of the computed measures. Second, an open API that allows any application or tool to consume the analytics engine's results programmatically. And third, composable UI widgets that can be embedded into any mobile, web, or on-premise application, including embeddable self-service analytics tools. An analytics solution of this type, with robust dashboards, customer insights, and unmatched governance options, can be easily integrated with any other product or software, as we will see today.

We have discussed why you should care about easily accessible data analytics in the first place. But what exactly do we mean when we say embedded analytics? Embedded analytics represents a set of analytical tools seamlessly integrated into your existing product in a way that the user will see it as an inherent part of it, even though it runs as a third-party analytical platform in the backend. It is about the integration and management of all the aspects that make interactive and accurate data visualizations possible, including connections to various data sources, fast updates and changes, and, importantly, guaranteed data privacy and security. On top of that, its ability to be fully customized, be it in terms of data ingestion or UI options, has given it unprecedented popularity among many large organizations.
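To make that third capability, the composable UI widgets, a bit more tangible, here is a minimal sketch of what embedding a saved insight into a React application could look like, assuming the GoodData.UI React SDK. The hostname, API token, workspace, and insight identifiers are hypothetical placeholders, not values from a real deployment.

```tsx
// A minimal sketch of embedding a saved insight as a UI widget, assuming
// the GoodData.UI React SDK. The hostname, token, workspace, and insight
// identifiers below are hypothetical placeholders.
import React from "react";
import tigerFactory, { TigerTokenAuthProvider } from "@gooddata/sdk-backend-tiger";
import { BackendProvider, WorkspaceProvider } from "@gooddata/sdk-ui";
import { InsightView } from "@gooddata/sdk-ui-ext";

// Connect to the analytics engine through its open API.
const backend = tigerFactory()
    .onHostname("https://analytics.example.com") // hypothetical host
    .withAuthentication(new TigerTokenAuthProvider("API_TOKEN"));

// The widget renders inside the host application like any other component.
export const EmbeddedSalesWidget: React.FC = () => (
    <BackendProvider backend={backend}>
        <WorkspaceProvider workspace="customer-workspace-id">
            <InsightView insight="weekly-sales-overview" />
        </WorkspaceProvider>
    </BackendProvider>
);
```

The point of this pattern is that the headless engine handles the semantic model and computation, while the host application only decides where and how the widget appears.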
Let's dive deeper into this topic and some of the best practices that we've accumulated over the years of working with companies from various sectors, such as software firms, healthcare providers, financial organizations, educational institutions, and many others.

Let me start with the first area of today's webinar, which is the question of how to approach analytics today. You heard me mention a third-party application, and you may be wondering: why a third-party application? Can't we build it ourselves? The answer is you probably could, but you'd need to carefully weigh up what is at stake. What you would need to consider first is the time needed to develop an analytics tool from scratch. Second comes the question of resources. And here we're not talking only about monetary resources, but also about the number of developers you have at hand. Do you have enough of them? Do you need to hire new team members, which will ultimately impact your finances as well? Let's say you managed to build your first analytical application. What happens next? Who takes care of it down the road, and at what cost? After all these calculations are done, teams, especially product ones, quickly realize that relying on an external vendor who already has experience delivering embedded analytics is going to be more convenient. This way, your teams will get a chance to focus on your company's core competencies and stay relevant on the market.

There is another crucial element in this "should I build or should I buy" equation, which is the question of compliance and security, one we often see companies neglecting when evaluating which path to take. You'll need a solution that meets various levels of security requirements and regulatory standards that are constantly evolving. And yet again, product teams and decision-makers will have to think about where they want their company to be in a couple of years. Most executives would like to see their organizations enter new markets and new regions, and it is in these cases in particular that the probability of bumping into compliance requirements you don't yet meet increases. Meeting the necessary standards is a long and arduous process, and faced with it, you may struggle to stick to the original roadmap of your core business.

So what would the ideal scenario be, then? The best practice is to partner with an expert in the field of data processing and analytics, one that offers transparency, strong governance features, and reliable risk management. These attributes can be identified as part of three security layers. First would be cloud and physical security, which relates to regular monitoring of the analytics platform and its underlying components. Then there is data security, allowing for the secure, physical isolation of hundreds and even thousands of customers. And finally, operational security, which includes strictly controlled access to the production environment. So even with the best intentions, your company may simply lack the resources, capacity, and team to focus on building a strong, security-compliant analytics solution while at the same time managing your core business.

Okay, so we've spoken about a key element, which is the question of the actual approach in terms of build versus buy. As I mentioned at the beginning of this webinar, we're going to look at two more key areas surrounding best practices for embedded analytics. Let's now look at integration.
Generally speaking, integration can be divided into two parts: backend data integration and front-end UI integration. We'll start with data integration first. And the reason I want to address this first is that we often see companies interested in deploying embedded analytics approach us and ask questions like: What will happen to my existing tech infrastructure after I partner with a third-party data analytics provider? What about the investments I have already made? What if I lose control over my data? These are all understandable concerns, and it is exactly because of this that I'm going to share a couple of tips with you now, helping you understand what to look for in an embedded analytics provider.

An ideal partner for your embedded analytics is one that will utilize and support your existing data infrastructure. This, in fact, shouldn't be optional, but really a must with respect to your company's future development. Not only will this eliminate the risk of abandoning your existing tech stack, but it will also lead to a quicker and more economical implementation, as well as more reliable data analytics. Regarding tech infrastructure, you'll need to make sure that your data analytics partner of choice can support various types of data sources. These can range from cloud data warehouses such as Snowflake, Redshift, or BigQuery to simple CSV file upload, whether as a manual upload from your computer or through a more sophisticated approach using storage buckets such as Amazon S3, as in the short sketch at the end of this part. Apart from these options, you can go for an end-to-end solution that utilizes pre-built connectors to different systems, apps, and platforms. With this last option, what we recommend is making sure your data analytics vendor of choice can store large amounts of data from various sources. They should offer high-performance operations to store the full data history, with the ability to extract all the data quickly when needed.

Now, let's say you chose your preferred data source, or even multiple data sources, and you are set on that. What comes next is the question of the actual data ingestion and the automated data distribution process. A well-functioning solution that will be able to support your business in the long run is one that allows for easy adjustment of your source data structure to the data model in your analytics platform. This way, the process will require as little of your time and as few manual adjustments as possible. This will give you the ability to fine-tune the data pipeline for every customer or business partner, so you'll be able to meet their bespoke needs. More importantly, your customers will always have their data up to date, so that the insights that will later serve as a basis for their decision-making are as reliable and accurate as possible.
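Before we move on to the front end, here is that S3 staging sketch: a minimal example of pushing a CSV extract to a storage bucket for the analytics platform to pick up, using the AWS SDK for JavaScript v3. The bucket name and object key are hypothetical placeholders.

```ts
// A minimal sketch of staging a CSV extract in an S3 bucket so the
// analytics platform can ingest it; uses the AWS SDK for JavaScript v3.
// The bucket name and object key are hypothetical placeholders.
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function stageCsvForIngestion(localPath: string): Promise<void> {
    const body = await readFile(localPath);
    await s3.send(
        new PutObjectCommand({
            Bucket: "analytics-staging", // hypothetical bucket
            Key: `uploads/${Date.now()}-orders.csv`,
            Body: body,
            ContentType: "text/csv",
        }),
    );
}

stageCsvForIngestion("./orders.csv").catch(console.error);
```

In practice, a scheduled job like this replaces the manual upload and keeps the analytics side fed with fresh data automatically.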
Now that we have spoken about data integration, let's move to perhaps the more fun part: the front-end integration options, the ones that will actually be visible to your customers. So, let's say we live in an ideal world. What would your ideal embedded analytics solution look like? It would probably be seamlessly integrated, respecting your branding and styling options, and it would allow you to create custom visualizations and dashboards. On top of that, it would be easy to use for your business users, meaning non-technical users, and it would offer them rich out-of-the-box tools. So, you may be wondering why you would need to give your end users access to these tools instead of serving their needs yourself.

Consider that each of your customers is different and has various business needs, requirements, and expectations. In order to retain your customers, you need to be able to react quickly to their demands. If we take a step back, your engineers and analysts would have to meticulously keep track of all the requests, which, if you have thousands of customers, will soon prove to be unmanageable. And what's even worse, your customers may end up with broken reports they can't trust, and potentially with profit loss. Let me mention one more added benefit of giving your end users self-service capabilities before I move on to showing you an example. What you have to keep in mind is that you won't only have to meet the ad hoc requirements of individual users, but also those of entire teams. By giving them the ability to create their own insights, they get an invaluable opportunity to share information with their team easily, anytime and anywhere, even when building reports from scratch.

Now, let me illustrate what we've covered so far in the form of a short demo. We are now in a demo environment, and what you see is the data analytics interface of a fictional company called Shopboard. Shopboard is an e-commerce platform that helps online retailers manage their marketplaces. They decided to add value to their SaaS application by offering an analytics product as part of their platform. They use a third-party data analytics platform, in this case GoodData, which is embedded directly into the Shopboard app. What you see on the screen now is a private data analytics environment for a single merchant that sells technology gadgets through Shopboard. Here, they can view their sales-related data, information about their buyers, the popularity of individual products, as well as their marketing data. All of this is provided to these merchants and users so that they can make better-informed decisions.

Let's use a regional manager as a sample user of the Shopboard platform. What this user can access immediately after they log in are different insights provided as basic visualizations by Shopboard. As you can see already on their homepage, this regional manager can get an overview of weekly sales information, the top five rated products, total sales, information about their users, and so on. They are also able to filter by product brand or even by product category. Within this data analytics product that Shopboard offers, a regional manager can also access more detailed dashboards available under the KPI tab, or create their own ad hoc insights via the Analytical Designer.

Let's look at the KPIs and alerts first. What this contains is a library of dashboards that can be grouped by different categories to display individual information available from your data, as well as individual KPIs. You can see these KPIs at the top of this dashboard; they are here to display the most important information immediately. You can also think of dashboards as a dynamic presentation layer for your data analytics, which allows a wide array of adjustments, such as filtering or drilling, so that your users can dive deeper into the information that they see. Based on their customers' needs, Shopboard provides a couple of baseline dashboards, such as this overview dashboard, where we can see a more detailed sales overview compared to what we've seen on the homepage.
There's also information about monthly inventory and returns, an engagement analysis, which displays things like page views, product reviews, ratings, and so on, and there's also a profit margin analysis. Let's say a regional manager is interested in monthly returns and wants to go beyond what Shopboard has provided. He is interested in seeing not only the return quantity and the price, but also the types of products that are most often returned. This is where the self-service capabilities we mentioned earlier come in handy, and it's at this point that we'll look at the Analytical Designer section.

The Analytical Designer is an environment for data discovery and visualization, which contains the measures and attributes coming from all of the underlying data. It serves for the creation of insights and different visualizations, as well as the slicing and filtering of the data. It can be used by technical users to create complex visualizations from scratch, but also by business users, such as our regional manager, to examine data ad hoc without having to wait for data analysts to respond to their requests. What we see in the left sidebar can be imagined as a data catalog that contains all the measures available in your logical data model, and I will return to the logical data model shortly.

We said that our manager wants to examine which types of products buyers return most often in a given time period. So how can he do this? By first looking at the data catalog, we can identify which measures we have available for returns. And since we want to see the quantity in a given category, I am simply going to drag the measure and drop it where I want to see it. But before I do that, you will notice that the platform is offering me dynamic recommendations as to how I can analyze the data. This happens thanks to the powerful underlying semantic model, which makes sure that only the pieces of data that can actually be analyzed together are analyzed together. To be more specific, it prevents business users from making calculations and insights that wouldn't make any sense, and it hides combinations that can't work together, ensuring consistency in the data and how they are viewed together.

Let's now finish the creation of this insight. So I have my return unit quantity selected, and now I want to see it by category. I am going to select product category and place it under "view by". I can see that it already gave me some information, but because it's not that easy to read yet, I am going to display it as a bar chart. And I also want to see just this quarter. Now, I could leave it as it is, because it already gives me more insight into returns. Or I could go one step further and compare it to the same period last year, which will allow our regional manager to further investigate trends in the returns. Now that I have my insight, I am going to name it "returns by category" and save it so that I can access it easily. I am also going to enrich my inventory and returns dashboard by adding it there. Again, I am simply going to edit this dashboard. All the insights I have created are already available here; so far, it is just this one insight. And again, I'm just going to drag it and drop it where I want it to be, and I'm going to expand it so that it's easier to read. Once more, I can name it "returns by category" and save.
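If your developers wanted to recreate this same insight in code rather than through the drag-and-drop UI, here is a minimal sketch of what it could look like, assuming the GoodData.UI React SDK and the backend and workspace providers from the earlier sketch. The measure, attribute, and date dataset identifiers are hypothetical placeholders for this Shopboard demo.

```tsx
// A minimal sketch of the "returns by category" insight built in code,
// assuming the GoodData.UI React SDK. The measure, attribute, and date
// dataset identifiers are hypothetical placeholders; the component must
// be rendered inside BackendProvider/WorkspaceProvider (see earlier sketch).
import React from "react";
import { newMeasure, newAttribute, newRelativeDateFilter } from "@gooddata/sdk-model";
import { BarChart } from "@gooddata/sdk-ui-charts";

const returnUnitQuantity = newMeasure("fact.returns.return_unit_quantity");
const productCategory = newAttribute("label.product.category");
// "Just this quarter": a relative date filter covering the current quarter.
const thisQuarter = newRelativeDateFilter("dataset.date", "GDC.time.quarter", 0, 0);

export const ReturnsByCategory: React.FC = () => (
    <BarChart
        measures={[returnUnitQuantity]}
        viewBy={productCategory}
        filters={[thisQuarter]}
    />
);
```

The same semantic model that powers the drag-and-drop recommendations also backs this component, so the measure-attribute combination is validated by the engine either way.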
You saw me reuse existing attributes and metrics to create a new insight, and you might be wondering whether it is possible to do so with more complex metrics as well, or even to build new ones from scratch. The answer is yes, it is. And again, it is thanks to the power of the underlying semantic layer. This ensures that attributes and metrics are context-aware, meaning that if something changes in a metric's definition, the change will be automatically propagated to all of Shopboard's users without breaking their individual customizations. New metrics can be created using MAQL, which is GoodData's proprietary query language for creating metrics and aggregations over the underlying data.

What you have seen here comes from the actual GoodData platform running behind the scenes and is embedded without using any code, via an iframe. So this is the original GoodData environment, and as you can see, it contains the same types of data available in the embedded product. What this type of embedding gives you is the ability to include either only certain pieces of analytics in your application, or entire dashboards and editing capabilities. For more complex needs, your developers should also be able to create customized embedded visualizations. In the case of GoodData, this is offered by providing your developers access to REST APIs, SDKs, and a pre-built JavaScript library, accessible under the GoodData documentation. As I said, it contains source code from which your developers can build these custom visualizations.

So why am I showing you this underlying GoodData application? The reason, simply, is that it's intended primarily for administrators and your technical users, as it offers access to additional features such as the metric editor. From here, your technical users can view how the metrics are written, edit their definitions, or even create completely new ones from the underlying data in the logical data model. It is also from this section that your technical users can access the logical data model itself. And the reason I'm mentioning the logical data model is that it is another crucial part of the semantic layer. Called the LDM for short, it is what makes the data available in the Analytical Designer's data catalog. The LDM represents relationships between data objects in your customers' environments, as we can see here. It is this definition of relationships that gives your end users the ability to easily explore their data without the need to write complicated SQL queries. On top of that, it allows administrators to change the structure of the source data models without any implications for the content that is already available within the Shopboard users' environments.

All of this offers the benefit of the collaboration options that I mentioned earlier. The insights and dashboards that you create can easily be shared with your colleagues, either as PDF files or as regularly scheduled emails. As you can see in this demo, Shopboard and GoodData are tied together so seamlessly that end users really aren't able to notice that the underlying analytical engine is not actually from Shopboard. This is made possible by seamless data integration, the LDM and semantic layer, as well as the embedding options offered on the front end, which also include white labeling and single sign-on. All of these allow for smooth access to data analytics.
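To make the no-code iframe option concrete, here is a minimal sketch of mounting an embedded dashboard from the host application's side. The dashboard URL, container element id, and the illustrative MAQL snippet in the comments are all hypothetical placeholders, not taken from the demo.

```ts
// A minimal sketch of no-code iframe embedding; the dashboard URL and
// container element id below are hypothetical placeholders.
//
// For the metric side, an illustrative (hypothetical) MAQL definition
// might read:
//   SELECT SUM({fact/return_unit_quantity}) WHERE {label/product_category} = "Gadgets"
const iframe = document.createElement("iframe");
iframe.src =
    "https://analytics.example.com/dashboards/embedded/#/workspace/customer-workspace-id/dashboard/inventory-and-returns";
iframe.style.width = "100%";
iframe.style.height = "800px";
iframe.style.border = "none";
// Mount the embedded dashboard inside the host application's page.
document.getElementById("analytics-container")?.appendChild(iframe);
```

The trade-off is the usual one: the iframe route ships fastest, while the SDK and REST API route from the earlier sketches gives developers full control over look and behavior.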
If you currently have only a couple of customers, or are thinking about distributing analytics to only a couple of internal teams, self-service capabilities may not seem like a pressing issue to you. But the reason we discussed them in such detail goes hand in hand with scalability, which brings us to our next topic.

Oftentimes, product teams involved in the evaluation process fail to look at their future state and focus only on the present. This is actually not entirely their mistake, as most data analytics software vendors focus on delivering the desired solution fast while offering rich visualization and integration options. This often leads to neglecting an important aspect: the ability to scale and offer a solution that will evolve with your company's development and customer base, instead of you having to adapt your business to the capabilities of the analytics platform. Such an approach not only has the potential to result in lengthy processes and complicated product iterations, but it can also impact the viability and profitability of the product in the long run. To avoid any such problems, we recommend that you discuss with your potential data analytics vendor the automation tools available to take care of your change management and release processes. These tools should give your teams the ability to manage and release new versions of the data analytics part of your offering in accordance with updates in your core product, specific customer segments and tiers, or specific geographies. On top of that, such automation should work in parallel with the self-service capabilities, keeping your solution consistent. The individual customizations that your customers make should be protected, and the iterations your product and development teams make shouldn't overwrite what they've built for themselves.

Another important aspect of these automation tools and scalability is the dynamic provisioning and deprovisioning of individual users or entire customers. You'll need to make sure that adding new customers, changing user access, and refreshing data is fully automated. This is important not only because of the operational processes and costs, but also because it removes the burden of having to manually keep track of and deprovision users or customers in order to keep the data safe and avoid any leakage. And we spoke about this at the beginning of this webinar.

Hand in hand with the tools allowing for effective scaling come the profitability and costs associated with your data analytics solution. And this is the last topic that we are going to briefly discuss today. Commonly, you'll encounter pricing models that charge per end user and per query. It goes without saying that such a pricing structure lacks transparency and can negatively impact your margins and ROI. Instead, your ideal vendor should offer you a customer-based pricing model with a, quote unquote, menu on top of that. And here's what I mean by that: your base price should be determined by the number of customers using the data analytics software, where, as we already said at the beginning of this webinar, a customer is either an entire organization, a business partner, or a specific internal team. The menu is then a list of additional tools and features that you can pick and choose from at any time, based on your company's needs.
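As a rough illustration of what such provisioning automation can look like from your side, here is a sketch of creating an isolated analytics workspace for each new customer through a vendor's REST API. The endpoint, payload shape, and token are hypothetical placeholders, not a documented GoodData API.

```ts
// A hypothetical sketch of automated per-customer provisioning through a
// vendor's REST API. The endpoint, payload shape, and token are
// placeholders, not a documented API of any specific vendor.
interface CustomerTenant {
    id: string;
    name: string;
}

async function provisionCustomer(customer: CustomerTenant): Promise<void> {
    const response = await fetch("https://analytics.example.com/api/v1/workspaces", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            Authorization: "Bearer API_TOKEN", // placeholder credential
        },
        // One isolated workspace per customer keeps their data separated.
        body: JSON.stringify({ id: customer.id, name: `${customer.name} analytics` }),
    });
    if (!response.ok) {
        throw new Error(`Provisioning failed for ${customer.id}: ${response.status}`);
    }
}
```

Wiring a call like this into your customer onboarding flow, with a matching delete call on offboarding, is what removes the manual provisioning and deprovisioning burden mentioned above.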
All right, with that, we have reached the end of our webinar. So, to sum it up, we focused on the best practices for embedded analytics, and we identified three main areas that need to be taken into careful consideration when making data analytics an inherent part of your product. First, we spoke about the best approach to building an analytics solution, which revolved around the topic of building a solution in-house versus partnering with a dedicated data analytics provider in order to save time and resources. Second, we talked about the complexity of data integration, both from the perspective of data ingestion and management, as well as front-end integration. With regard to that, we mentioned that your ideal data analytics vendor should have vast experience in both of these areas and should offer extensive embedding and integration options so that the products blend seamlessly. And lastly, when talking about embedded analytics, the question of scalability has to be addressed, as the solution you ultimately choose will have to remain flexible, with the ability to grow together with your business.

In case you want to explore this or any other data analytics related topic further, I would like to point you to our website and Learn page, where you'll find free on-demand resources and from which you'll also be able to join open office hours sessions with GoodData engineers to discuss any questions you may have. Finally, I really want to thank you all for your time in attending this webinar. And in case you'd like to experience the power of embedded analytics first-hand, using your company's data, don't hesitate to apply for a GoodData free account at gooddata.com. Thank you all very much.