Let me introduce our next speaker, the third of this first day at the attic. We are all familiar with cloud infrastructure, with giants like AWS and Azure, and with the application cloud provided by Salesforce and Workday that add the front office to the cloud. Some of them were mentioned already. Our next speaker believes that we are now seeing the emergence of a third layer, one devoted to data analytics and cross-cloud collaboration. He is the senior director of international product marketing at Snowflake. So please, let's welcome Ross Pérez. Ross, welcome, how are you? Thank you so much. I'm doing wonderful. Ross, let me tell our audience about a little problem we had with our previous speaker: the questions arrived a bit late. So guys, remember, Ross has around a 35-minute talk, and in the last five minutes we have the Q&A. Ricardo and Luis sent lovely, interesting questions for the previous speaker. Ricardo and Luis, sorry, I received them on the iPad, but it was a bit late. So if you want to speak to Ross, if you want to ask him questions, don't hesitate and don't wait. Do it as soon as possible, so I receive them immediately. In the previous case, we missed those questions. So, Ross, remember, there will be questions from the audience, so remember to think about them and remind everyone every once in a while, okay? We're looking forward to listening to you, Ross. All yours whenever you're ready, welcome. Thank you so much. I'm excited to be talking about our vision for data in a multi-cloud world. Let's get started. I think most people on the call are probably already familiar with the application cloud. This is where the front office is being run. Tools like Workday, Salesforce, and ServiceNow are helping us run our businesses in the cloud. 
On the other side, of course, we have the infrastructure cloud, where companies like AWS, Azure, and GCP are helping organizations actually run their products and their businesses in the back end. Now these two clouds obviously are hosting an enormous amount of data. And many organizations are looking at the data that they're storing in the application cloud and infrastructure cloud and realizing that this data is really meaningful and valuable when it can be combined. And in addition to having these two separate silos of information in the application and the infrastructure cloud, we're also seeing that data silos are appearing throughout the organization. It's more than just these two different clouds that we're dealing with. It's also perhaps one part of the organization that has deployed its own database to serve its own needs, while other parts may have their own databases, and then you have spreadsheets and different files lying around. The bottom line is that it's becoming ever more difficult to combine these different data sources into a unified view for analytics. Data silos are really the key issue preventing organizations from getting at those really valuable insights they need from their data. And it's not simply, like I said, the different clouds that organizations are using. We see it organizationally across many different parts of the companies we're working with. We see it across different technologies, whether applications or databases. And the bottom line is that when you have all of these different silos separated out, it's just very difficult to combine them and get a unified insight. Now, I hope you'll forgive me for giving a real-life example that's a little bit marketing focused. I'm a marketer myself. But I think this is something that most people will be able to understand. So what am I talking about? 
Well, data silos in real life can look a lot like this. Let's say that as a marketing organization, our goal is to send an email, get people to click through on that email, arrive on the website, and then perhaps sign up for an offer on the website. Now, most organizations are using an email automation tool to send those emails out to their customers. And that data, in many organizations, can be stored in Marketo, as an example. Now, on the other side, it's quite common for web log and clickstream data to be stored in a data lake on AWS or GCP or Azure. In this case, let's say that it's sitting in S3. Now, having these two valuable data sets on their own, we can analyze them and understand, independent of one another, the effects that they're having. As an example, I can see how many people clicked through the email that I sent in Marketo. And on the other side, I can see what the effect was of people landing on the website and signing up for different offers. But the real question is, and I think the higher degree of value that can be offered to the organization comes from combining these two valuable data sets together. In other words, being able to say not just how many people clicked through on an email and how many people signed up for an offer on the website, but which specific people, and which piece of the email they were interacting with, in order to get to the website and actually take up that offer. That's where we're able to increase not only the value that each individual email is providing to the organization, but also optimize the website, and do so for specific, individualized people. So how do we do that? How do we actually get that unified insight? Well, I would argue, and Snowflake's argument is, that there's a place and a huge need for a third cloud, the data cloud. 
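The email-and-clickstream join described here can be sketched in a few lines of Python. Everything below is hypothetical and purely illustrative: the field names, addresses, and campaign labels are made up, and this is only the idea of the join, not any vendor's implementation.

```python
# Toy sketch of the unified-insight idea: joining email click data
# (as exported from an email tool like Marketo) with web signup data
# (as landed in a data lake). All names and fields are hypothetical.

email_clicks = [
    {"user": "ana@example.com", "campaign": "spring_promo", "link": "offer_a"},
    {"user": "bo@example.com",  "campaign": "spring_promo", "link": "offer_b"},
]

web_signups = [
    {"user": "ana@example.com", "offer": "offer_a"},
]

def attribute_signups(clicks, signups):
    """Match each email click back to whether it led to a signup
    for the same offer, giving per-person attribution rather than
    two separate aggregate counts."""
    signed_up = {s["user"]: s["offer"] for s in signups}
    return [
        {**c, "converted": signed_up.get(c["user"]) == c["link"]}
        for c in clicks
    ]

for row in attribute_signups(email_clicks, web_signups):
    print(row)
```

On its own, each data set only yields an aggregate count; joined on the user, it tells you which email link converted which specific person, which is the contextual insight being described.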
It's a place where you can combine data from the application cloud and from the infrastructure cloud into a single unified location that you can use to analyze and find those contextual pieces of information that are so much more valuable, just like I was saying before, where you're able to see not just who clicked on an email and how many people ended up signing up for an offer, but which emails were most effective at guiding the most interested customers to signing up for the right things on the website. Well, what would we need in order to actually bring all our data together, to unify it and then to network it together so that we could get these unified insights? Well, there are a few things that I think are very important. The first one would be that this layer, this data cloud, would need to be completely cloud-agnostic, certainly in the sense of being able to combine data from the infrastructure and application clouds, but also in being agnostic to which infrastructure cloud you're using. So if you're using AWS or Azure or GCP, you can take data from any one of those infrastructure clouds and bring it together and network it together no matter where you're coming from. The second piece is unlimited performance and scale. Having all of that data in a unified location is certainly a wonderful goal and something that can offer a lot of value. But if you can't analyze the data at the speed and cadence that your organization expects, then it's not going to solve the problem that you're actually going after, which is getting those next-level insights to everyone in a much more efficient fashion. The third piece that is incredibly important, I think maybe the most important piece of the data cloud, is being able to seamlessly and securely share data not just within your organization, but with trusted partners and customers outside of your organization. 
If there's a secure way of sharing data, I think that opens up a lot of opportunities for additional contextual data sets that you can bring in, and certainly for added value that you can offer to customers and partners who want valuable data sets from you. The last piece is that this needs to be secure and governed. Having this enormous amount of power and unified data that you can analyze on the fly and share with anybody you see fit to grant access to is really more dangerous than anything else if you're not able to secure and govern it. So of course, we need that additional element. Now, we believe at Snowflake that Snowflake's data platform can support the data cloud with our innovative solution that enables you to bring all of your data into one unified location on the cloud and then run multiple workloads on top of it. So Snowflake can help you with data engineering, data lake, data warehouse, data science, data applications, and data sharing, all in one location on the cloud. And you can connect Snowflake to your OLTP databases, your enterprise applications, your third-party data, your web and log data. And on the other side, of course, you can serve your data monetization, reporting, ad hoc analytics, and real-time analytics needs. Now, this may seem like a relatively large number of things that the Snowflake platform is able to solve. And the reason why Snowflake is able to do this is architecture. So very simply, you can take Snowflake's cloud-agnostic layer as the first step. We enable you to run Snowflake on any one of the infrastructure clouds that you choose. So whether it's GCP or Azure or AWS, Snowflake can run in those clouds, and not simply in the cloud of your choosing, but also within the region of your choosing. 
And on top of the cloud-agnostic layer, Snowflake utilizes the cloud storage systems of our infrastructure providers to enable you to store unlimited amounts of data in the cloud. On top of that, you can use Snowflake's multi-cluster compute to spin up as many compute clusters as you need at any given time to serve different use cases and workloads within the organization. This is, in part, how Snowflake is able to serve such a diverse array of workloads, everything from data lake and data warehouse to data science. You can't do all of those things at the same time without a single centralized store of data with independent compute running on top of it. And that's really the key to Snowflake providing this wide diversity of workloads access to data at the same time. The third piece is cloud services. So this is all run as a service, with the optimization, management, and transactions taken care of for you by Snowflake. Now, let's look at how Snowflake can fulfill the needs of the data cloud as we discussed before. The first one, of course, is being cloud-agnostic. Snowflake is available not just on the different cloud infrastructure providers, but in many regions worldwide. So if you're an organization that is looking at a multi-cloud strategy, or you work with partners that are on a different cloud than you, or perhaps you are looking for a globally available data platform, then Snowflake can certainly support you with our worldwide coverage. And it's not simply being available in any one of these individual regions that makes Snowflake so powerful. It's also being able to network this all together. This is really the key to the data cloud: being able to replicate not only across regions, say from AWS Oregon to AWS Virginia, but also across continents, from AWS Virginia to AWS Dublin, or across clouds, from AWS Dublin to Azure Amsterdam. 
This is simply an example, but it could be true across a multitude of different regions and cloud infrastructure providers. The bottom line, of course, is that you're able to bring together a unified view of data worldwide. This is a concept that we call the global data mesh. So having all of these different regions, with data residing in different ones, and being able to bring it together is really where the global data mesh comes in. And it's an incredibly powerful thing to be able to do. Now, the other piece that we're going to need for the data cloud is unlimited performance and scale. So having all of your data in one location, being able to network it all together across different regions with the global data mesh, is very powerful. But how are we going to actually get at that data and serve the diverse needs of the organization? Well, in the past, it was very difficult to do so because you were dealing with limited, fixed resources. Generally, if you had a data platform, you installed a database on a server, and that was the amount of capacity that you had to work with. That means that if you had multiple people querying the database at the same time, they'd often conflict with one another, and it would create a concurrency issue. You'd also have workload issues: let's say that you're loading data at a particular time, say Monday morning. Well, if you have people querying that data at the same time with a business intelligence product, you'll often run into a situation where you have workload contention and those two conflicting workloads run into one another. In the past, people would often try to tune their way out of these performance and concurrency issues, even though it was really only a palliative way of dealing with the problem. 
But people would tune and index and partition and turn knobs to try to improve performance, and it was, of course, very time consuming and difficult to do, but it didn't really solve the problem. And this tuning, of course, would lead in the long run to manual upkeep, where you have to re-index and repartition and continually tune in order to arrive at a good situation. And of course, many traditional data platforms and databases quite simply needed to be installed on-prem, and that had its own manual upkeep along with it. Snowflake's approach to performance is significantly different. I talked about the architecture already, where you're able to isolate different workloads on different compute clusters, and that is, of course, the basis of the immense power that Snowflake can give to your organization on top of unified data. So this workload isolation can work in multiple different ways. The two most powerful ways are, first of all, helping you to separate the loading of data and the querying of data into two separate operations, while at the same time you can be running data science or other workloads like that on top of the same shared and unified data. The second way that Snowflake can help you deal with these performance issues is by enabling you to scale to different numbers of people. So if you've got a Monday morning rush of a lot of people trying to query data in their BI tool, well, Snowflake can expand to serve that use case. And to make it even more powerful, it's delivered as a service. So it's really intended to be very simple, self-tuning, and automated, so that you don't have to spend all of that time manually tuning and re-tuning your data platform to arrive where you need to. Earlier I mentioned that seamless and secure data sharing was perhaps the most important piece of the data cloud. 
And I'd like to go into a little more detail about how Snowflake helps you support this. Traditionally, sharing data has been very difficult to do. And it really comes down to the ultimate truth of data sharing, which is that if you need to give somebody else data, then usually what you'd be doing is physically taking a copy of that data and sending it to them, whether via ETL pipelines or data marts or APIs or FTP or cloud buckets. You're sending a file to someone else to give that data to them. And the issues that brings are quite a few. So for instance, it's difficult to copy and move the data; it takes time. It's costly to maintain these pipelines and data marts and tools that you're using to share data. It's certainly error prone, in the sense that once you share data, anything could happen with that file, which is also why it's a bit insecure. And of course, because you're physically sending a file, your data is going to be delayed by definition. You're sending a snapshot of the data that you're working with. Now, Snowflake's multi-cloud and cross-cloud data platform provides a much better way to share data. Because all of your data is networked together in Snowflake, we can do something very interesting, which is enabling you to share access to data, as opposed to copying a piece of the data and sending it to someone else. So instead of sending a file, you decide to give them secure and governed access to the data that's sitting in your Snowflake instance. And they can view that with their own Snowflake instance. What this means is that there's no copying or moving. Again, you're querying data directly from someone else's Snowflake instance, with their permission, of course. This means that the data is live, with no delays. 
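The contrast between copy-based sharing and access-based sharing can be sketched with a toy in Python. This is only an illustration of the concept under made-up names; it is not Snowflake's API or architecture.

```python
# Toy contrast between copy-based sharing (a stale snapshot file)
# and access-based sharing (a live, revocable view into the
# provider's data). Purely conceptual; all names are hypothetical.
import copy

class Provider:
    def __init__(self):
        self.table = [{"sku": "A", "stock": 10}]

    def export_snapshot(self):
        # Copy-based sharing: the consumer receives a frozen copy,
        # like a file sent over FTP. It never changes again.
        return copy.deepcopy(self.table)

    def grant_access(self):
        # Access-based sharing: the consumer receives a reader
        # that queries the provider's current data each time.
        def reader():
            return list(self.table)
        return reader

provider = Provider()
snapshot = provider.export_snapshot()
live = provider.grant_access()

provider.table[0]["stock"] = 3  # the data changes after sharing

print(snapshot[0]["stock"])  # 10: the snapshot is delayed by definition
print(live()[0]["stock"])    # 3: live access sees the current data
```

The snapshot is out of date the moment the source changes, while the granted reader always reflects the current state, which is the "live, with no delays" property being described.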
You can share very personalized and secure views across a multitude of different customers or partners or people that you'd like to share data with, in a very automated fashion. And because this is done in an automated fashion, it's governed, and you can revoke this access when you want to. And it's globally available across the global data mesh I was talking about before. Now, taking this a step further, what if you want to share data programmatically across a large number of customers or partners that you're working with? Well, that's where the Snowflake Data Exchange comes in. This is a private exchange of data where you can put certain data sources that you think are valuable up for people to access. And of course, you can control who has access to all of this data. This means that you can publish and create a catalog of the data that you think other people, your partners and customers, may find valuable, and they can connect to that data so that they can use it within their own analytics. And on the other side, because the data exchange works both ways, if you have a customer or a partner that you would like to receive data from, you can certainly do so through the Snowflake Data Exchange, using data sharing in the back end. And again, taking it one step further, we have the Snowflake Data Marketplace, where you're able to publish data publicly, so that you have live, ready-to-query data available to anyone in the world, if you so choose. Again, this is not something that is mandatory. It's something that, if you want to, or you have the need, perhaps a compliance need, to share data with everyone in the world, well, you can certainly do so with the Snowflake Data Marketplace. On the other side, again, it's a two-way street. 
There's certainly a large number of organizations that Snowflake is working with that have published data on the Data Marketplace that you can bring in, and I'll talk about that in just a second. For instance, we have Weather Source, which has made weather data available on the Snowflake Data Marketplace. So if you're an airport operator and you would like to know why particular days have been worse for operations, well, you can use Weather Source data to see if weather was a factor, and bring that in with live access to data through the Snowflake Data Marketplace. We also have a partner, Starschema, who has published on the Snowflake Data Marketplace a comprehensive data set of COVID-19 data worldwide. So you can actually connect to the Starschema data within Snowflake and dig into worldwide data on COVID-19 infections and the work of governments to combat them. The last piece we need, beyond being cloud-agnostic and enabling you to host all of your data from multiple different clouds, having unlimited performance and scale to serve all of your users, and enabling seamless and secure data sharing with customers and partners and people within your organization, is security and governance: all of those capabilities are only powerful if we're able to secure and govern them. And certainly Snowflake security is incredibly well thought out. We have SOC and PCI and FedRAMP certifications and, of course, RBAC for accessing and logging into the product, along with SAML and multi-factor authentication for enabling you to connect with your organization's authentication protocols and ensure that only the right people have access to your Snowflake data platform. I won't go into every single piece of this slide, but of course the idea, and what I'm trying to share, is the in-depth approach that Snowflake has taken to security. Data governance is a different angle on the security of your data and the value of your data. 
And it's one that I'm very interested in personally. In fact, tomorrow I'll be speaking on the panel on data ethics, and I think data governance is a big piece of this, where you're asking questions about who in the organization should not have access to data, but also who should have access to data, and how we can ensure that the right people have access to the right insights at the right time. Of course, governance goes beyond just control of data. It's also about understanding your data and what's inside of it, where potential issues may come to light, and being able to combine your governance and your control of that data across the entire organization. And governance is particularly challenging when talking about data silos. If you have data in multiple different silos and multiple different clouds in different locations, it's very difficult to govern it. That's why Snowflake can provide a significant amount of value in the data cloud, by putting your data into a location that you can control more easily. At the end of the day, having these four components can enable some real collaboration inside of the data cloud. And of course, Snowflake is what makes that possible. Let me talk about a real example. Instead of just talking about Snowflake in the abstract, I'd actually like to share something much more interesting, which is this in action: how our customers are actually using the data cloud to power their analytics and their own customers. So Atheon is a very interesting company. They're a leader in analytics of the flow of goods, and their customers, Atheon's customers, are suppliers of consumer packaged goods, companies like Carlsberg and Heinz. Their goal, of course, is to enable these suppliers of consumer packaged goods to understand when products may be out of stock and ensure that they never run out of product on the shelves. Now I'll talk a little bit more in a second about how Atheon actually uses Snowflake to do so. 
But if you think about it, Atheon's situation is relatively complex. They have multiple customers; they are working with a multitude of consumer packaged goods companies, and they all need to see data at the same time. And these customers are sitting on different clouds as well. So you have this situation of working with lots of different organizations that are in different clouds, and what's the most effective way to handle that? At the same time, Atheon has multiple partners who need to provide data on multiple clouds to Atheon. These are the supermarkets, the places where the consumer packaged goods companies are selling their products. And because of the multitude of different customers and partners they're working with, they're also dealing with a very large volume of data that's bi-directional. So they're getting data from their partners, the grocery stores where people are actually buying these goods, and they're sharing data back to their customers, the companies that are selling the consumer packaged goods. At the end of the day, Atheon's goal is to ensure that no product is ever out of stock in the grocery store. Their product is called SKUtrak, and SKUtrak, of course, is powered by Snowflake. It works relatively simply, at least in this view: Atheon is taking data from multiple different retailers, combining it in their SKUtrak automated data transformation system, and providing it through dashboards and their own Explorer product to their customers, the consumer packaged goods companies like Heinz and Carlsberg. In the back end, Snowflake is powering this through our data sharing capability, and they're using reader accounts for each of their multiple customers to give a very fine-grained slice of data back to each specific customer. 
What's interesting about this is that Atheon can create a single database and then create customized views automatically for each one of their customers, which they can then provide these powerful D3 and Tableau visualizations on top of. Essentially what this means is that they can automate the entire process, from the ingestion of data from their partners, to the storing of that data, the sharing with their customers, and then of course the display and distribution of the data on the back end. Because this is all unified, live, and cross-cloud, it means that everybody's looking at the same single number. Well, what do their customers have to say? Peter Hall, who's a UK sales director for Kraft Heinz, said that SKUtrak means they have a single view of their performance across all of their retailers. They can understand their data much faster and then speak to the retailers in their language about what to do, ultimately resulting in fewer empty shelves. It's an incredibly powerful thing that it's enabled them to do. Of course, this gives them a much better view into their business. Alex Rowell, who's the head of convenience for Carlsberg, said that equipped with the data from SKUtrak, he was able to drop a note to the Co-op supply chain team, and subsequently they made a significant order of around 2,000 hectoliters, which is around 400,000 pounds in terms of sales for Carlsberg. So, an incredibly powerful insight that they were able to draw from the data they're receiving from Atheon. And this is a good example of the contextual data I was talking about before, where you're able to not just understand what the orders were, but, using past data, see when the orders may have run out and when you can go in for an additional sale and create more revenue for the organization. Now, I'm coming close to the end of my presentation. 
So if you have any questions, I want to remind you right now that this is a really good time to send your questions in, so you can be sure we get to them. I want to make sure to get to all of the questions that you may have about the data cloud, or about Snowflake, and of course about our wonderful customer Atheon. Now, of course, it isn't just Atheon that Snowflake is working with. We have thousands of customers worldwide, including Fortune 10 companies like McKesson, large organizations in Europe such as Sainsbury's, and everything from technology organizations to insurers. So really, the Snowflake platform and the data cloud is something that can be useful not just to organizations like Atheon, but to a broad range of organizations in different industries as well. Now, we wouldn't be as powerful as we are without being able to work alongside the other tools that organizations are using. This is everything from BI tools like Tableau and ThoughtSpot, to the ETL tools that you're using every day like Fivetran and Matillion, to of course the SI partners that you're using to implement all of these technologies in a really effective way. If you bring it all together, it really points to the rise of the data cloud, where this all-important third layer for data is starting to emerge in the cloud, where you can take data from the application cloud and infrastructure clouds, in multiple different regions, across multiple different cloud infrastructure providers, into a single unified location that can help you serve those contextual and relevant insights back to your organization and to your customers and partners, as we saw with Atheon. It's an exceptionally powerful idea, and one that we're seeing in action all across the world, and certainly within EMEA. In fact, Snowflake will be hosting our own summit, the Data Cloud Summit, this week. 
And so if you're interested in learning a bit more, just look up the Data Cloud Summit and please join us for that event occurring this week. So that brings me to the end of my presentation, and I would love to take questions. Hopefully I gave enough of a warning there so that we could get the questions. I'm not sure about that, because we're a bit slow here, I don't know why, but it was fantastic, Ross, having you talking and giving us the overview about Snowflake. Do you know that snowflakes in winter will never be the same for anyone again? Every time we see a snowflake, we'll think of you. So thank you to Ross Pérez, who doesn't just define himself as a B2B marketer, but as a data guy. And it was a very clear overview of the three clouds that you mentioned and I mentioned in the introduction: the application cloud, the infrastructure cloud, and the data cloud, of course. And you told us the secret is the architecture, the whole architecture of Snowflake; that was fantastic. So in that sense, let me see the questions; we don't have much time, but you did a good job. They're asking you, regarding the access to data you mentioned as a way of sharing data, despite being a two-way private process, could you explain more on the security side, the authentication you were mentioning? Could you give us more details about that security? Yeah, of course. So the data sharing that Snowflake enables is done through a secure view. You can think of it this way: in the back end, the data is being stored in Snowflake very similarly to a relational database that you may have used in the past. So it's all sitting in a database, in tables with rows and columns. And within those rows and columns, you can use a secure view to basically say, for any given customer or partner that you want to share data with, exactly which rows and columns are available for them to view. 
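The row-and-column restriction behind a secure view can be sketched as a toy in Python. Real Snowflake secure views are defined in SQL; the data, column names, and grantee labels below are all made up for illustration.

```python
# Toy sketch of the secure-view idea: for each grantee, expose only
# the rows they own and the columns they were granted. Illustrative
# only; not how Snowflake implements secure views internally.

orders = [
    {"retailer": "retailer_a", "sku": "A", "qty": 12, "margin": 0.31},
    {"retailer": "retailer_b", "sku": "B", "qty": 7,  "margin": 0.22},
]

def secure_view(rows, grantee, columns):
    """Return only the grantee's rows, restricted to the granted columns."""
    return [
        {col: r[col] for col in columns}
        for r in rows
        if r["retailer"] == grantee
    ]

# retailer_a is granted sku and qty, but never sees the margin column
# or any other retailer's rows.
print(secure_view(orders, "retailer_a", ["sku", "qty"]))
```

The provider keeps one table; each consumer queries through a view that filters rows and projects columns, so sharing with a new partner means granting another view rather than exporting another file.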
And in fact, Snowflake can also enable you to hide certain pieces of data, or obfuscate them, so that they can be a little bit more secure, depending on how you want to share data. A really good example of this is: you might want to share the country code of the phone numbers that your customers or partners might be using, but you don't want to share the actual phone number itself, because that's, of course, something that's personally identifiable. And so you can use Snowflake to mask that data when you're sharing it. So there's really a lot of fine-tuned capability to be very specific about not only who you're sharing data with, but what you're sharing, and which specific pieces, down to the cell, are visible to those people. Okay, well, at the end of the day, what they're asking you, and I'm sure this question comes up all the time, is about the security: how secure is it? Is it safe? Which is the million-dollar question that comes up repeatedly when you talk about data, which is worrying. And we have all heard examples of not-so-secure environments, but obviously that's not your case. They're also asking you about governance. I'm not sure, Ross, if you're going to speak about this tomorrow afternoon in the panel you're having at the lounge, but could you explain a bit more about governance: how do you decide who in the organization should and should not have certain data? How does Snowflake deal with that? Well, I'm so glad you brought that up. This is actually something I'm super passionate about. My background is actually from the BI world. I used to work at Tableau, and you'd see lots of customers trying to spread insight throughout their organization. So I'm a big fan of actually sharing data with a lot of people in the organization, not just a select few, but I think it's about doing it with the right questions. 
So in other words, you're starting from the question of: what data do people need to be able to answer the questions they need to answer in order to do their jobs correctly? And we should be providing data to people in the organization fairly broadly. However, there are certainly pieces of data that are going to be highly sensitive. Those things need to be protected, they need to be secured, they need to be significantly more tightly governed. But I like to approach governance from the perspective of who does need access to data, rather than immediately saying that everybody should be blocked, because otherwise what's the point of having all this wonderful data in the organization? Exactly. We still have a bit more time for another question. Alex asks: thank you for the clear presentation; can you please expand on the benefits of using Snowflake for supply chain businesses? Thanks. Of course, yeah. And the Atheon example is a really good one. I think the supply chain use case is perhaps one of the most interesting for the data cloud, because you have such a diverse network of people that you're working with. So an example of where this adds a lot of value: if you're an organization that's working in the supply chain at any level, chances are that you're working with dozens or potentially even hundreds of organizations. Now, if you need to share data with them, that means you could be creating dozens or hundreds of different files to send the same data to different people, or, even more complicated, sending different data to different people. With Snowflake's data sharing technology, as in the Atheon example, what's really powerful is that instead of doing all of these individual file sends, you can use that secure view that I mentioned before.
And literally with one command, you can give access in an automated fashion to all of these different organizations, but you can give them, like I was saying, a very restricted view of the data, which I think is so powerful. The other thing that we see lots of supply chain organizations running into is complexity in different types of data. They may receive data from customers or from their partners. So let's use the Atheon example, right? They're working with Sainsbury's, they're working with Co-op, they're working with all these different companies. And when they're receiving data, they may receive it in different formats from all of these different suppliers. So what's so important is being able to take JSON, Avro, XML, or Parquet data, transform it and flatten it if they want to, or perhaps just store it in its native format, and put it in Snowflake, where they can then, in a unified way, transform the data into a single picture that they can share back. So taking that complexity and turning it into an asset is really what the data cloud is all about for supply chain organizations. It is such a complex and difficult world, but it doesn't have to be quite as hard as it has been in the past. Well, Ross, thank you so much. We ran out of time, unfortunately; that's the bad news. The good news is that if you want to see Ross very soon, well, he mentioned that Data Cloud Summit they're organizing this week, right? But if you can't wait until then, you can see Ross again and talk about Snowflake tomorrow at the lounge. They have a panel on data ethics. Is that right? So I'm sure we're going to hear a lot more interesting issues, and there'll be more questions coming up. So watch out for that. If you want to speak to Ross, if you have questions pending, meet him tomorrow at the lounge at 6:15 pm (18:15). Ross, it was lovely having you. I told you, snowflakes will never be the same again. We'll see you tomorrow at the lounge. See you then.
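The flattening Ross describes for semi-structured data can be sketched in Python: nested objects become dotted column names, turning differently shaped supplier feeds into one tabular picture. Snowflake does this internally on its semi-structured data types; the helper below, with invented sample data, just illustrates the idea:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested JSON objects into dotted column names."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

# Hypothetical supplier record; each supplier might nest fields differently.
raw = json.loads('{"supplier": {"name": "Acme", "country": "UK"}, "sku": "A-1"}')
print(flatten(raw))
# {'supplier.name': 'Acme', 'supplier.country': 'UK', 'sku': 'A-1'}
```

Once every feed is flattened to the same column naming scheme, the records can be combined and shared back through the same secure views described earlier.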
Thank you so much. And for the rest of the audience, don't go very far because in a few minutes we're gonna meet our last speaker, the last keynote of today. So see you at the attic in a minute. Bye-bye.