So let me start. Good morning, Vancouver, with four minutes left in which I can still wish you a good morning. I will come to why you are listening to me, but first, to the question of how I came up with this talk: a few months back I was talking to the technical co-founder of a company that is quite invested in open source. They were about to embark on re-architecting everything. This is a digital-native company; they had already been building on cloud, but as they reached their growth stage they were really looking to re-architect for the future. The main topic of discussion was how they should continue to retain all the benefits of open source while also taking advantage of what the cloud was offering them. Through that discussion, the idea for this talk came to me: if we look at how the open source and open data ecosystem has evolved over the years, it gives us valuable information we can use when deciding what our future architecture should look like. So that's where the main motivation for this talk came from. Now, a bit about me, and why you should be sitting here listening to me. I work as a data analytics specialist within Google Cloud, based out of London. While I travel across EMEA fairly often, my main focus area is the UK and Ireland market. We are quite fortunate to have a reasonably vibrant digital-native and startup ecosystem in London, which helps me get a good view of what these companies are thinking. As for how I'm linked to all of this open source work: back in 2009 I was working as a software engineer at Yahoo.
That was about the time when Hadoop was being developed within Yahoo; it was just coming out of Yahoo Labs and into the production environment. I was on a product team that was one of the first to build on top of the Hadoop platform, back in 2009 when the world still didn't know that much about Hadoop. It was mainly known within companies like Yahoo, LinkedIn, and Twitter, and of course Google as well. From 2012 onwards, for the next six years, I worked in a personal capacity as a consultant with a few of the very big enterprises in the UK, building their clusters and helping them on their journey onto the Hadoop ecosystem, so mainly around Apache Spark, Apache Hadoop, and those sorts of systems. Before I proceed any further: I have a bit of a bad throat today, which is why I chose to take the hand mic, so if my throat misbehaves I can quickly spare you the pain and move it away. Please excuse me if I don't always sound right. Back in 2018 I joined Google Cloud, and that's when I started appreciating and understanding from the inside what the cloud was really providing us. From 2020 onwards I have been mainly focusing on digital natives, which gives me a wide view of the enterprise world, the digital-native world, what really matters to them, and how the open source systems have been evolving alongside the cloud. So what you will be seeing and hearing today are my experiences and the lessons I have learned over the last 10 to 12 years. That's the agenda: we will go through an introduction and then cover the early evolution of open source big data analytics.
Then, as part of the challenges of the changing world, we will look at what the digital natives think of what happened before, or whether they really care. Then we will briefly go through the cloud journey, because interestingly enough, as the open source data analytics ecosystem was developing, the cloud was developing around the same time, somewhat in parallel but with a lot of crossovers as well. Today the cloud is everywhere, so it is important to understand how the two interact. Finally, we'll end with the key takeaways from these two. So, looking back at the history: why is this important? Why should we even care about what happened, say, 20 years back? The reason I thought it would be good to understand is that there are a few good reasons why Hadoop became as big a thing as it did, and if we understand why, that helps us design for the future so that we can be aware of the pitfalls we might fall into. Here I've taken a graph from Google Trends, basically the search trends. This is a proxy, by no means absolute data, but what I wanted to show is the popularity, in terms of search volume, of various data technologies, as a proxy for how popular they are in the wider developer community. A few interesting facts here. In the late 90s and early 2000s, open source itself was getting quite popular. Most of us who were aware of the open source world know that Linux played a very big role in that, and in particular played a huge role in making open source mainstream amongst the enterprises.
This was happening in the late 90s and early 2000s, and then around 2004 Google published the MapReduce paper, which gave rise to a few attempts to move into the data analytics space and really make industrial-quality data analytics products using open source. Hadoop came out of this, roughly around the 2006 to 2007 timeframe, and it was shortly followed by a number of other systems: Apache Storm, Cassandra, HBase, Flume, a number of different systems which came out of different companies around this time. It was also shortly followed by Apache Spark, which is quite ubiquitous today, but for the first good few years Spark really didn't spark that much interest in people. My belief is that it was the integration with the Hadoop and HDFS ecosystem that really led to Spark taking off as the big thing. Some of this we can actually see here. Now, interestingly enough, I have picked Databricks as one of the examples here. There is nothing specific about Databricks, but it is one of the companies that have built a business out of open source. We can't say that everything coming out of Databricks is open source, but it is one of the companies that have really built their business on Apache Spark. The interesting fact here is that while the trend for Apache Spark is roughly stable, or maybe slightly downward, Databricks has a very strong upward trend. We will come back to this when we discuss the benefits of cloud: Databricks is one of the companies that are, I would say, quite digital native in providing their product on top of the clouds, and this is where the crossover between cloud services and open source can be exemplified.
Now, there is no specific reason why I picked these four products; the key reason is that before the advent and popularity of Hadoop, these four products were mainly the systems catering to the use cases that Hadoop catered to later on. All of these are very good products, no doubt about it, and since this is search-trend data it says nothing about the quality of the products. But it does show that, from a search point of view, the popularity of these products went down over the same period in which the popularity of the open source data products went up. So let's look at what led to Hadoop being so popular, and put some context around what was happening in the early 2000s. This was a time when, after three to four decades of dominance, the major database players were also providing the data analytics systems, the analytical databases and analytical systems. Data analytics itself was emerging as a separate stream, away from the online transactional systems and relational databases. This was also a period when disk prices were going down and companies were able to store a lot more data in a cost-efficient manner, so suddenly they did not have to throw away all that data. The other piece of context is that this was just after the internet boom, so there was a lot of user data being generated around this time. While previously it had been quite challenging, as well as expensive, to store all of this data, for commercial reasons that was no longer a big deal.
But there was another thing: all of this data being generated was not quite in the form that the existing databases expected, so there was also a long chain of transformations that needed to be done to get this data into those data systems. That was roughly the context. Now let's look at the problems the industry was facing around this time. First, "extortionate" usage fees. I put that in double quotes because extortion is of course a perception, and different companies can see it in different ways. But there was one key thing: as we saw, there were only a handful of vendors providing products even close to capable of handling the analytical workloads, and the customers of these products had no alternative to go to. So whatever the fees were, high or not, they didn't really have much of an alternative. That feeling of not having the freedom of an alternative is what gave rise to the feeling that the fees were extortionate. Whether open source really reduced or completely did away with that is debatable, but at least that was the feeling in those days. High hardware cost was definitely a factor, because most of these analytical systems had a very monolithic structure and needed very high-spec machines to run on. Quite often they would need specific machines from specific hardware vendors, and these were usually quite expensive. Scalability was another issue: again, because of the monolithic structure, there were physical limits on how much they could scale, how many CPUs you could actually put into these machines, and hence how much compute you could get out of them. High migration cost: again, I think this comes down to the fact that most of these systems were proprietary.
So if a company had to move from one to the other, there was a lot of migration involved, in both the data and the workloads on top of it. Business continuity, disaster recovery, and high availability were each quite complex, so the enterprise world had to design complex architectures to take care of them. And finally, integration nightmares: there were some attempts to standardize, in terms of JDBC and SQL compliance and so on, but there were still a lot of proprietary details in there. In practice, integration between different systems, unless you were within the closed ecosystem of these vendors, was a big problem. So how did the open source movement help? Hadoop was built on four major principles, and I think that was one of the best things to have happened in the industry. First, it was open source; even before Hadoop was an Apache project, it was an open source product within Yahoo. Second, it was built on the principle of running on low-cost commodity hardware. Third, it was designed for horizontal scalability, without any technical limits on how far it could scale. And finally, it was designed to be fault tolerant in itself. The significance of these four is that Hadoop, being the very first system built on them, set the benchmark quite high for every system to come. We know that every new system since, whether streaming, batch, or analytical, has been built on these four pillars. So let's look at how these four key principles alleviated some of those issues. Extortionate fees: suddenly, because Hadoop was open source, there was freedom, and freedom means customers had a choice. Having a choice immediately gives you the sense that you can get things done in your own way.
That immediately alleviated the feeling of getting bound into a particular ecosystem and getting locked in, as we say. High hardware cost: yes, because of the ability to run on commodity hardware, there was no need to buy very high-end machines. There have even been attempts to run Hadoop on Raspberry Pis, which did run, at least. The point is that you can run it on machines of any size to suit your budget. And finally, it was massively scalable; there were no technical limitations. The limits would usually be the size of the data center itself, or quite often the network bandwidth, which again is a hardware limitation, but in general the software would scale quite seamlessly. Now, for some of the other issues, my view is that Hadoop was only able to partially address them. High migration cost: Hadoop came with open file formats, which allowed customers to directly migrate files from one platform to another, and hence alleviated some of the migration cost. Having said that, my view is that there is still significant effort in migrating even between open source systems, so I wouldn't say we have completely solved that problem. Next, business continuity, disaster recovery, and fault tolerance. High availability is one thing that has been resolved by the fault-tolerant nature of Hadoop; at least at a local level, we don't need to think about high availability. However, business continuity and disaster recovery are still challenges we need to consider. And finally, integration nightmares: integration is still an issue, probably less so than before, though some might argue otherwise.
I would say the ecosystem of different products that interact with Hadoop has grown over the years, and this gives us a larger ecosystem to choose from; hence, in my view, we have reduced those integration nightmares to an extent. So, challenges of the new order. What's the new order? By that I mean the digital natives. We spoke about the issues that the traditional data warehouses had before. But then came this group of companies, the digital natives or cloud natives, who were not even aware of those issues because they were born in an age where Hadoop was already the standard. So what did they see when they came in? Through their eyes, this was a very inefficient system. A Hadoop cluster takes a lot of effort and a lot of expensive engineering resources to design and build, and it needs the continuous employment of expensive engineering resources to manage and maintain the underlying infrastructure. There was a time when Hadoop ops roles, the people who managed and built clusters, were the highest-paid people in that ecosystem of developers working in Hadoop roles. And only after all of that was done could the business application developers actually develop on it. So there was a huge cost associated with managing and maintaining Hadoop clusters, with some value delivered on top of it. Next, continuing with the inefficiencies: scaling pains. Yes, Hadoop was horizontally scalable, technically. In practice, there were weeks to months of delay when development teams needed to scale up, because they had to wait for budget to be approved and for the new hardware to go through its own procurement process.
Then you needed security clearances and deployment of that hardware into the data center before the actual scaling could happen, which, depending on the size of the company, could take weeks to months. So in practice, scaling up was quite slow and painful. Utilization pains were there too, because with self-managed Hadoop clusters you have to do capacity planning when designing a cluster. You would either plan based on an optimistic view of how fast the company would grow, and hence over-provision, or you would find the business growing much faster than the provisioned cluster, so utilization would quickly climb very high and the systems would slow down. Utilization was either very high or very low; it was never quite right. Finally, it came with a high upfront cost, because all of that hardware and the data center itself needed to be paid for, and hardware upgrades were also painful. If a new class of CPU or GPU came out, it was difficult to adopt it, because maybe just a couple of years back you had procured a set of new hardware that was still being amortized and you didn't want to just throw it away. And then, at the software level, there were inefficiencies. Even though there was this big ecosystem of software interoperating in the Hadoop world, each piece was a different project with its own plans. So at the software level we had library mismatches and version mismatches, what we call dependency hell, to deal with. We had API incompatibilities between different versions, and performance issues arising from all of this. These were the pain points the digital natives saw in what had been a huge relief from the previous world.
Now, as I said, it's important to understand the cloud as well, because the cloud was developing very close in time to the Hadoop ecosystem and the open source data analytics systems. Back in 2006, I think, Amazon released their first cloud product, followed a few years later by Microsoft and Google. Today, of course, cloud is the first choice, so it's important to understand how the cloud developed so we can join the two together and figure out the best way forward. The very first version of cloud was focused on infrastructure, basically built on economies of scale: bringing a large number of use cases together onto large clusters and distributing the cost across them. The key advantages were, of course, freedom from data center management for the end customers, and moving from an ownership model to a tenancy model. You don't own the hardware anymore, which means, first of all, your capital expenses go away; you don't need a high investment to start with. Secondly, when you want to upgrade, it's an easy process; you don't have to think about previous sunk costs and so on. This also resulted in shorter turnaround times for upgrades and for scaling clusters. Next we had cloud 2.x, the next generation of cloud, where beyond just infrastructure we saw the evolution into databases, web servers, and clusters, even fully managed clusters. This gave rise to a few more advantages: we immediately saw better utilization of infrastructure, and models like pay-as-you-use came into existence. Infrastructure upgrades and the like were no longer required, because the cloud provider would do that; we could just provision a database and get on with it. Examples would be products like BigQuery, Dataproc, or Redshift.
Then we had cloud 3.x, which I would say is where we probably are right now, and still maturing; I wouldn't say we have fully got there yet. This is the age of software as a service, or SaaS, which is mainly focused on business needs: can we come up with fully managed business solutions instead of building blocks for business solutions? These are serverless: we don't see servers, we don't see infrastructure, we don't see any software behind it. There are no operations, and we pay by use. Here we completely do away with anything to do with owning the software itself; we just pay a subscription charge and use it. Examples would be Salesforce, HubSpot, Looker in the BI space, or Slack in the chat and monitoring space. Finally, developing all through this were the open data standards. One thing we saw was big support, I would say, from the cloud providers in developing some of these data standards as well. We had formats like Parquet and Avro, which had existed in the Hadoop world and were well adopted in the cloud world too. More recently we saw the development of open table formats: Iceberg, Hudi, Delta. That itself gave us a big shift, in terms of not being tied down to a specific data warehouse for performance or future benefits, and standardizing those capabilities into the storage format itself. And then we had the standardized information exchange protocols, like wire protocols and protocol buffers, to name some popular ones.
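To make the open table format idea a bit more concrete: formats like Iceberg, Hudi, and Delta keep table metadata (schema, snapshots, lists of data files) alongside plain files stored in open formats, so any engine that understands the metadata can read the same table, including older versions of it. The following is only a toy sketch of the snapshot idea in plain Python; the structure and names are purely illustrative and do not match any real format's specification.

```python
# Illustrative only: real table formats (Iceberg, Hudi, Delta) define a
# precise metadata spec; this toy "table" just tracks immutable snapshots
# of data-file lists, which is the core idea behind time travel.

def new_table(schema):
    return {"schema": schema, "snapshots": []}

def commit(table, data_files):
    # Each commit appends an immutable snapshot instead of mutating
    # files in place; readers choose which snapshot to see.
    snapshot_id = len(table["snapshots"]) + 1
    table["snapshots"].append({"id": snapshot_id, "files": list(data_files)})
    return snapshot_id

def read_files(table, snapshot_id=None):
    # Default to the latest snapshot; an explicit id "time travels".
    if not table["snapshots"]:
        return []
    snap = (table["snapshots"][-1] if snapshot_id is None
            else table["snapshots"][snapshot_id - 1])
    return snap["files"]

if __name__ == "__main__":
    t = new_table({"id": "long", "name": "string"})
    commit(t, ["part-0001.parquet"])
    commit(t, ["part-0001.parquet", "part-0002.parquet"])
    print(read_files(t))                  # latest snapshot
    print(read_files(t, snapshot_id=1))   # first commit
```

Because the metadata and the data files are both in open formats, the "table" is not owned by any single engine; that is the lock-in protection these formats offer.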
Finally, we have the domain-specific data schemas, which are still evolving, I would say. Things like the FHIR protocol for health data exchange, and also the open banking APIs. I don't know if you're aware of the open banking APIs; they are quite big in Europe and have revolutionized the banking industry, with the advent of a large number of new banks now. Similarly, NASA has published open APIs as standardized formats for space and Earth observation data. So these standardized open formats are also developing. With this context, how can we bring all of this knowledge together and look at how to develop for the future? This is, I would say at a personal level, my key learning of the last few years, based on my earlier work in the open source world as well as, currently, a few years within a cloud service provider. If you look at any software architecture, whether it's data analytics or not, we can always break it down into which bits are really the gold for the business, what's really useful and what the business wants to protect, which we can call the digital intellectual property, and which bits are fluff for the business, all the scaffolding you need around it because it is required to run it. If we can separate those things out, we can make use of this value-and-cost pyramid, because I think the value is in protecting what's valuable for the business, and this is where we should use everything we have learned about the evolution of open source and the problems the open source industry has solved for us.
In the future, if we want to protect our digital intellectual property, if we want to retain the value from it, then we should continue to invest in ensuring that this value is kept in open standards, so that we are not overly reliant on a particular vendor or a particular technology. Everything else, the fluff, we don't really care about, because what we really want is to run that top layer. If there are cloud service providers, or other service providers running on whatever cloud we operate on, who can give us that at the best commercial deal we can get, then that's all we care about. To be honest, I don't think we should be investing a lot into running those pieces ourselves on open source or managing our own systems, because we don't really care about them. With that principle, here are a few ways we can actually protect value. I won't focus on the lower bits of the pyramid, because for those, as I said, I don't care. How can I protect value for the top part of the pyramid? That's where I would recommend using open formats and open standards, and trying to insulate ourselves from deep dependencies on any technology that is not open. Now, this is one area where I think we are still evolving. We can use the open formats, the open table formats, the APIs, and everything we have spoken of, but one risk we are still running is something called technology lock-in, and there I don't think we are yet at a place where we can completely insulate ourselves. What do I mean by that? If you refer back to the very first slide I showed on the trends and how systems developed, you can see that every technology has its time.
It grows and then it withers away, and being overly dependent on any specific technology builds in the risk of technology lock-in, which is not too dissimilar to vendor lock-in. If we define lock-in as the cost of coming out of a particular system, then we can look at both vendor lock-in and technology lock-in as the cost of migrating away from it. And this is something where I personally feel we are not quite there yet, even in the open source world. There are technologies where we can get entrenched in a particular system, build quite widely on top of it, and as that technology itself goes out of favor, we will be faced with a big challenge of migrating onto something else. Now, how can we insulate against this? I think a few ways of addressing it are developing. We are recently seeing a separation, an abstraction, between the definition of the actual application, that is, defining what transformations or operations you want to perform on the data, and the underlying engine that executes those operations. This abstraction, I think, is important in order to insulate us from technology lock-in. As of now, there are two examples I can think of that are steps in the right direction. One is a project called Apache Beam, I don't know if you have come across it, which really separates the definition from the execution. With that approach, it is possible to keep your definition, which is where your business logic sits, and if the underlying technology that executes that business logic goes out of favor, you have the ability to move onto a new engine without having to do a migration, simply because your business logic is not hard-coupled to the execution engine.
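The definition-versus-execution separation that Beam embodies can be sketched in plain Python. To be clear, this is not Beam's API; it is a toy illustration, with invented names, of the principle that business logic defined once can be handed to interchangeable execution engines.

```python
from typing import Callable, Iterable, List

# The "definition" side: a pipeline is just an ordered list of
# transformations, written with no knowledge of the engine that
# will eventually run it. This is the business logic to protect.
Transform = Callable[[Iterable[int]], Iterable[int]]

def build_pipeline() -> List[Transform]:
    return [
        lambda xs: (x * 2 for x in xs),       # scale each record
        lambda xs: (x for x in xs if x > 4),  # keep large values
    ]

# The "execution" side: interchangeable runners. Swapping one for
# another requires no change to the pipeline definition above.
def local_runner(pipeline: List[Transform], data: Iterable[int]) -> List[int]:
    for step in pipeline:
        data = step(data)
    return list(data)

def chunked_runner(pipeline: List[Transform], data: List[int]) -> List[int]:
    # Simulates a distributed engine by processing the input in
    # two halves with the same pipeline definition.
    mid = len(data) // 2
    return local_runner(pipeline, data[:mid]) + local_runner(pipeline, data[mid:])

if __name__ == "__main__":
    pipeline = build_pipeline()
    print(local_runner(pipeline, [1, 2, 3, 4]))    # [6, 8]
    print(chunked_runner(pipeline, [1, 2, 3, 4]))  # [6, 8]
```

In Beam the same idea appears as a pipeline built against the Beam SDK and then submitted to a chosen runner (Flink, Spark, Dataflow, or the local direct runner), so the logic outlives any one engine.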
Having said that, Apache Beam is a project that came out of a collaboration between Google and the Flink community, so Flink still has very big support for it, and I think Apache Spark also supports Beam as an execution engine. But the reason I say it's still early days is that adoption is still quite low. Another feature, in Spark, that I recently came across is called Spark Connect, which tries to abstract out the two layers to an extent; at least from a thinking point of view, I think it's a thought in the right direction. But this is one of the areas where we will still need more development within the community before we can say we have a solution to the tech lock-in risk. So, to wrap up what we saw today: open source data analytics can be efficient or inefficient depending on the way we use it, and we should be aware of the inefficiencies it can bring. Cloud services can optimize cost, so that we need not worry about the fluff. And open data and open source, everything within the open source community, can help us retain and optimize value, and that's what we should be looking at for protecting the gold within our business. That's it. Thank you very much.

For questions, if you can please take the microphone so that your question gets recorded.

So in open source formats, something kind of weird is happening: there are open source formats like Delta, but then the vendors are pitching their managed offerings, and the managed offerings are better than the open source ones. For instance, if you use Delta, there are problems with things like compaction, and they say, oh, if you use the managed version, that's taken care of. That seems to be the trend, where all these open source formats have a managed vendor.
And in my opinion, it doesn't seem like there's a true open source community in the data space, because there is an incentive for the vendors to create a managed offering. I'm just curious about your perspective on this.

Yeah, actually, that's a very good point, and something I have noticed as well. I think we have seen this from companies that have built their businesses on open source, because once you have an open source technology, how do you actually monetize it? That has always been an issue. There was a time when we had companies like Hortonworks and Cloudera where the main monetization was based on services: they would build their technology on open source, but they would say, we will manage it for you, and that's where the money comes from. But we saw a gradual evolution where these open source companies built up a layer on top that was not open source, where they would provide some more optimizations, and that is effectively their way of finding out how to survive. So I think that's the continuous friction, or tension, you can see: open source being something good for the developer and technology community, while those companies also try to survive within a commercial ecosystem. How do we continue to do that? One thing I would point out is that even when we think of open source, there are different kinds of open source. Personally, I think a very good example of real open source success is Linux. If you look at the contributor list of Linux, there is a very wide range of contributors.
However, if you look at many of the common open source systems in the data analytics space, a lot of the contribution comes from a few companies, which means that even though the source is open, the technology is still controlled by a fairly narrow community. And that is typically where the problem arises: because of this dependency on commercial companies, who are also struggling to return value to their own investors, we see this sort of thing. To be honest, there is no clear answer to what we can do about it, apart from hoping that the open source community grows larger and larger. A very good example is actually generative AI. Generative AI had been behind closed doors for so many years, and with OpenAI coming up with ChatGPT and GPT-4, suddenly all of those closed doors had to be opened, which caused an immense leap in how much development is happening in the open. I think that is what will happen here too. So what you are saying is right, but I think we are somewhere in the middle of that evolution, and hopefully, in the best case, the open source community will catch up with these things.

I just wanted to give a completely selfish plug. I'm the developer relations lead for Trino, an open source query engine that supports Delta Lake, Iceberg, Hudi, and Hive, as well as the various proprietary and open source relational databases and Elasticsearch. It is a SQL query engine that is completely open source, supported by multiple vendors, including Starburst, who I work for, as well as Apple, Netflix, Bloomberg, and many others, and used by places like Salesforce. It is an open source alternative that aims to be that kind of platform that allows you to scale, so I invite you all to try it out.
Thank you. Yes, thanks for your talk. I have a question about open data sets for non-commercial usage: things like research, policy making, things that are not traditionally commercial. There is something of an oligopoly among a handful of cloud vendors who have the capacity to build and maintain these very large-scale, sometimes global, data sets. How should one think about making these data sets more democratized, open, available, and accessible? In terms of vendor policy, or choosing which vendor, you know, Amazon versus Google versus IBM versus Azure, there is all this internal jockeying going on, while these data sets are really important for global access and, to some degree, free access. How should one think about that use case versus the commercial use case, where there is a clear path to commercial benefit?

Yeah. So here, my perspective is that you are right: only very few companies have the resources to manage and maintain these data sets, and we can even generalize that to cloud service providers overall, because providing cloud services is a very resource-intensive business, and we have not seen many small companies emerge as cloud service providers for this very reason. The good thing, at least as far as I can see, is that much of this data is still being stored in open formats, either CSV or Parquet or similar. As long as that is the case, I do not think we run the risk of these data sets getting locked in. And to an extent, at least in the UK, I have seen a push from the regulatory bodies and the government bodies that any public data set has to be in an interoperable format.
This is particularly true in the health care space, which is not very well known for adopting open formats. The country is making regulation that any new product or any new company necessarily needs an interoperability strategy if it is to be part of any government procurement. So there is definitely public support through these regulatory bodies, and that is definitely one relief, I would say. But we cannot shy away from the fact that the requirements are so big that only a few big commercial organizations, or maybe a few big universities, have the capability of doing this. And my personal view has been that data locked inside a university is even more difficult to get our hands on than data in the commercial space. As for cloud providers hosting this data, I can see that for much of the public data sets we have, there is a commercial model through which cloud providers can monetize, or get a return on, the money they are spending on hosting it. Hence, at least for the foreseeable future, I think we should be fine with that. Again, that is my view.

Okay, thank you very much. I am aware that I am five minutes over time. So thanks for bearing with me, and have a lovely time in Vancouver. Thank you.