I just landed a flight from Los Angeles. Hey, what's up? Sorry, how are you? Hey, do you want to sit here? Oh, no, no, I'm just kidding. Nice to see you. Nice to see you. I just immediately wanted to say hi. Sorry, I was... Did you see this thing with the NVIDIA 72-core? It looks cool, no? This Arm thing. I didn't see it at all. They launched something, some kind of Arm solution, so it's almost an Arm-compatible server part. I just flew in from Los Angeles this morning. I was filming at Display Week. They were showing displays. It was really cool. Yeah, it's like 4K VR resolution, a 16K light field display, so each field is 4K, so it's super clear. Tons of displays and stuff. Quantum dot, micro-LED stuff. Wait a second. Hi. They're called displays. OK, cool. I'll tell you about it. OK, cool. All right. So we're live streaming, and I'm going to check on my phone. For the first time, I'm trying the new bonding system. I can see that it's live. So let's share it to Elon Musk. Let's try to speak it out. Live streaming from the front row. Live streaming. This is live streaming. OK. Hi, Tonsong. Where is it? China? So it's going to be exciting. I'm guessing they're going to be launching some highly optimized, super-high-performance, server-dominating Arm chip design. Let's roll. Everybody here is OK with being filmed. They have no choice. Dope. Yup. But if I had dope in my luggage, I would have been in here for like 30 years. Maybe it's pork, I'm not sure, but in Taiwan they're very strict about pork or something. I'm not allowed to bring a ham sandwich. If you bring a ham sandwich, they make you pay a million Taiwan dollars. [Announcement] Ladies and gentlemen, welcome to this afternoon's event. The keynote will be starting in five minutes. Please take a seat. Thank you. We also provide simultaneous interpretation services. If you need one, you can borrow a device at the media reception desk.
[Announcement] At the same time, please switch your phone to silent mode. Thank you for your cooperation. Ladies and gentlemen, welcome to the Arm CEO keynote speech. Our event will be starting in five minutes. Please take your seat. Today, we offer a simultaneous interpretation service. If you need it, you can get an interpretation device at the media reception desk. Thank you for your cooperation. And don't forget to put your phones on silent mode. Thank you. [Streamer] Good point about the silent mode. I don't want my awkward phone tone going off. Thanks a lot for watching my Display Week coverage. I have something like 30 more videos to upload. I mean, a bunch of them are uploaded. I was just on an airplane, and I couldn't really get on the Wi-Fi. They didn't have Starlink, even though I was on Starlux. Such a cool flight, direct from Los Angeles to Taipei, and not too expensive. Sorry if you can't hear me well. I think I'll see you soon. All right. Good morning to people who are in morning time zones, good night to people who are in night time zones. Everyone's getting so excited. Everybody knows what the Arm CEO is going to announce. You can spoil the news in the chat, you're welcome to. And if anybody understands what people are saying, you can translate in real time in the chat. You're welcome to. The Arm CEO really would know about the future of computing. [Emcee] Today is a very important occasion, and we're very happy to welcome a most important guest. We're very, very honored. Let's welcome the Chairman of TAITRA, Mr. James Huang. Mr. Huang, please come up to the stage. [Mr. Huang] I'm delighted to welcome you all. Thank you so much for coming to this event and kindly participating in the keynote. Today, we are honored to host our dear friend, an old friend of Taiwan.
And this is his first appearance here in some time, which is very exciting. As the world's leading semiconductor IP company, Arm is globally recognized for its cutting-edge technology that powers a vast variety of devices and applications. Arm's contribution to the industry has been immense and transformative. Arm and Taiwan share a long-standing history of collaboration. Arm's advanced technology and strong partnerships with Taiwan's foundries, IC design houses, OEM and ODM companies have been instrumental in the innovation and growth of Taiwan's semiconductor industry. So we can't wait to hear the exciting news and updates that Rene will bring. Ladies and gentlemen, please join me in giving a warm welcome to Rene Haas, the CEO of Arm. [Video] Arm is defining the future of computing. A future built upon the most successful technology ecosystem in the world. We fueled the smartphone revolution and will power every technology revolution moving forward. We're now defining the next universal mobile experience and redefining what's possible with IoT and automotive. AI can't compute without CPUs like ours. A history of trust and security and a legacy of energy and power efficiency bring hope for a sustainable future. We changed the world once. Now we have a chance to change it again. It's built on Arm. [Rene Haas] Thank you for the introduction. When you say an old friend of Taiwan, I don't know exactly how to think about that, but I have been to Taiwan so many times. I think this is, I don't know, my 17th Computex that I've participated in at some level. My first one was 2006, in person. And now here we are, live again in 2023. It is fantastic to be back. It's great to see such a full room of people. And I'm very excited today to talk to you about our company, where we're going, and what the future looks like. To say the least, a lot has happened.
A lot has happened with Arm since we last saw you in 2019, a little bit of fun, a little bit of drama, but we're very, very pleased to be here, independent, looking forward, and it's really wonderful to be back in Taiwan. So one of the little stories I get asked a lot about is: how did Arm get started? What is the genesis of the company? What is the DNA of its roots? And I think it's a fascinating story in terms of how the company originally got started, how our technology was born, where we ended up, and why we are where we are today. Thirty-plus years ago, there was an effort to create a brand-new microprocessor. And at that time, the mobile revolution was just getting started. One of the tasks the early Arm team had was to figure out how to design a microprocessor and put it into a plastic package. Now, this seems a little antiquated given today's packaging technology, but 30-plus years ago, the state of the art for microprocessors was putting them into a ceramic package. And one of the tasks the early architects had was to get the cost down, to get the power down: let's figure out how to put this inside a plastic package. And so they did. They created the ARM1, the very first Arm CPU, and they were able to fit it into a plastic package. And when they first began to measure the current, and we're going back in time in terms of the instruments they had, when they measured the power of that microprocessor, lo and behold, what they found was that it was running, but it was drawing no current. It was actually operating off the leakage currents from the rest of the circuit board. Pretty amazing, actually, when we look back in time. But there it was: the birth of the Arm CPU. And I think that's very important to understand, because the DNA of this company was born around power efficiency, running off a battery, not plugged into a wall.
And when we think about where computing is going, the insatiable need for more and more performance, we don't have unlimited energy resources. So I think our forefathers at Arm were incredibly insightful and had a lot of foresight to develop such a product. Now, that first CPU went into one of the very earliest mobile products, for those who remember it, and I'm old enough to not only remember it but to own one: the very first Apple Newton, a PDA that was way, way ahead of its time. Handwriting recognition; it didn't have a modem; it kept a contact list; but the CPU inside it was Arm. And that is a bit of how we were born. So we are a company whose DNA was really built around power efficiency. And boy, 30 years later, what a ride it's been. It is now what we call the world's most pervasive computing architecture. In our first 10 years, between our silicon partners, over a billion chips were shipped using Arm. But that was just the beginning. In the next 12 years, to 2013, we went from one billion to 50 billion, huge, astronomical growth. And it's really just accelerated from there. Over the next number of years, we got to 200 billion cumulative chips. And we're now looking at, since the inception of Arm, over 250 billion chips shipped using the Arm architecture. There is no other computer architecture ever invented that's even close to this number. And that has a lot of benefits for the ecosystem, a lot of benefits for developers, but most of all, it has a lot of benefits for users, to have such a broad architecture with such amazing support. Now, what drove a lot of this growth, probably five to ten years ago, was the ascent of the smartphone. When the smartphone started to grow, that really fueled our growth. I think that story is well known. But as we advance into new markets, what we start to see is a lot of products beginning to resemble the smartphone.
High compute, complex software, and a need to run off a battery. And now what we are seeing, across the board, is an insatiable demand for compute resources: whether it's data centers, whether it's automotive, and obviously around artificial intelligence, which has had incredible buzz over the last number of years, and now even more over the last number of months given everything going on with generative AI. The need for compute resources is insatiable. I remember coming to a Computex, I don't know, maybe 2009 or 2010, just after the financial crisis. At that time, we were starting to work on tablets and things of that nature, but there was a general feeling of, gosh, what's the next killer app? What's going to be new beyond tablets? People were starting to run out of ideas for what would need all this compute. And my gosh, 10 years later, how wrong we were, because the digitization of our lives has changed everything, and the need for compute is going up beyond our imagination. You know, in the data center, you look at the growth, 260% over the last number of years. But equally, these data centers are now consuming a significant amount of energy, around 1% of global electricity depending on where you look at the numbers, and in some countries it's larger than that. And that is obviously a huge concern in terms of what we do around sustainability. In autonomy, whether we're talking about robotics or autonomous vehicles or anything that runs off an onboard brain, the level of TOPS required for these autonomous systems continues to increase, up to 4,000 TOPS required for Level 5. And as we move more and more to vehicles that are EVs, you want to reserve as much of the battery as possible for actually moving the vehicle. So this computer on wheels, which needs a huge amount of compute capability, also needs to be sustainable.
And then when you get into AI, my goodness, the models are just exploding relative to what's required to run some of these operations. And it's only going to get more and more complex. More and more training requires more and more inference, which drives more and more training, which drives more and more inference. All of that is going to require computing resources like we've never seen before, and it requires a sustainability answer. So in terms of demand for computing resources, and as you mentioned before, I'm an old friend of Taiwan, I'm an old person in this industry, I've never seen it like this. Honestly, I think we are in a golden age in terms of computing and what's required to change our lives. It's amazing. Now, that being said, there are certainly some challenges relative to how to solve this. Scaling and building chips is becoming more and more complicated. Moore's law, and Gordon Moore, one of the icons of our industry, just passed away this year, his famous axiom about the doubling of transistors happening every couple of years and increasing computing performance, is coming up against a wall. We know that. As transistors get smaller and smaller, design becomes more and more complex. And with that, these complex AI systems require innovative design. How do we solve this problem? It is a very, very tricky problem in terms of the technology nodes and the processing that needs to take place. Now, the cost and complexity of these designs has just gone up massively over the last number of years. If you look at seven nanometers and going north through the advanced geometries, you can see what the numbers look like in terms of the associated cost. It's no mystery that this is a technology that takes a tremendous capital investment to build these complex semiconductors. At the same time, as we go to narrower and narrower geometries, these are the most valuable transistors on the planet.
These transistors need to be used for the most important tasks, which is essentially compute. That's what needs to get done on the advanced nodes. So we're having a tremendous amount of tension relative to solving this problem. Additionally, a lot of the cost goes into software development and verification. So even when people start talking about, oh, ChatGPT is going to design the next level of circuitry: not so fast, this is incredibly complex work. And at the same time, it's a heavy load, a heavy load. These are some huge dynamics for the industry. These designs are getting harder and harder. Here in Taiwan, we're in the land of the world's most advanced semiconductor fabs on the planet, where TSMC lives. Everything is getting more and more complicated relative to building the chip. The compute complexity I talked about earlier: you need more and more transistors to solve it. Yet the cycle times for building these products are getting shorter and shorter, in that you need to develop a product in a year, a smartphone needs to come out every year. Yet the cycle times in terms of getting process nodes to deliver products are getting longer. So this is the conundrum. And AI only compounds this problem, in that these designs get more and more complex. So is this an unsolvable problem, where we're looking at a fork in the road relative to where products are going? We actually believe we are on the cusp of something very, very innovative, but at the same time, we think we've seen this movie before, around building systems on chip. So again, I started my career, it feels like 500 years ago, when a motherboard had a northbridge, a southbridge, a memory controller, all kinds of different components, and all these components shrunk into an SoC. And as they shrunk into an SoC, we had space benefits, we had cost benefits, we had market benefits. And then there were many companies that went away.
There were a lot of companies building core logic around PC chipsets and other areas of the motherboard, and they all started to fall away as everything went onto a system on chip. But now the system on a chip is starting to bump up against a wall. So what happens when all this integration starts to bump up against a wall? Well, just like fashion, where things that didn't look fashionable years ago tend to come back, the SoC is now disaggregating. The substrate becomes the motherboard, and that substrate is now housing all these complex chiplets. One of the immediate benefits of this is that you can apply some of the most expensive transistors on the planet, the advanced nodes, to the compute. But for other areas of the chip, communications, say, or some of the I/O, things that may not iterate as fast as other technologies, you don't have to use those very, very expensive 3-nanometer or 2-nanometer transistors. And that is a huge inflection point that people have talked about, but it's now real. It's now moving to reality in terms of the systems being built. And at Arm, we have embraced this. We have traditionally been an IP supplier. As an IP supplier, we deliver a configuration; we work with our partners, and they take our IP and integrate it with their own sets of IP. And over the last couple of years, given the issues I just talked about, we started to see tension with this, in that it was just taking longer and longer for our IP to be integrated with the other IP and developed into an SoC. So we have moved into what we call subsystems. The benefit of these subsystems is that we take the products we build, the CPU, say, the memory subsystem, the compute block, and we integrate it, we configure it, we validate it, and we can deliver a full subsystem design.
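The economics behind this disaggregation, reserving advanced-node transistors for compute while keeping I/O and slower-iterating blocks on mature nodes, can be illustrated with a toy cost model. All figures below (wafer prices, yields, die areas) are illustrative assumptions for the sketch, not real foundry numbers:

```python
# Toy chiplet cost model. Illustrative numbers only, not real foundry pricing.
def die_cost(area_mm2, wafer_cost, yield_rate, wafer_area_mm2=70_000):
    """Approximate cost of one good die: wafer cost spread over good dies."""
    dies_per_wafer = wafer_area_mm2 / area_mm2
    return wafer_cost / (dies_per_wafer * yield_rate)

# Hypothetical wafer prices: the advanced node is far more expensive.
ADVANCED = {"wafer_cost": 17_000, "yield": 0.60}  # a 3nm-class node (assumed)
MATURE   = {"wafer_cost": 3_000,  "yield": 0.90}  # a 16nm-class node (assumed)

# Monolithic SoC: 600 mm^2, all built on the advanced node.
monolithic = die_cost(600, ADVANCED["wafer_cost"], ADVANCED["yield"])

# Chiplet version: 300 mm^2 compute die on the advanced node,
# plus a 300 mm^2 I/O die on the cheaper mature node.
chiplet = (die_cost(300, ADVANCED["wafer_cost"], ADVANCED["yield"])
           + die_cost(300, MATURE["wafer_cost"], MATURE["yield"]))

print(f"monolithic: ${monolithic:.0f}, chiplet: ${chiplet:.0f}")
```

In reality the effect is even stronger, since smaller dies also yield better under defect-density models, which this sketch deliberately ignores; even so, moving the I/O onto the mature node makes the disaggregated version clearly cheaper.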
And the benefits, as you can imagine, are very significant for the end user, because of all the issues around how hard it is to develop products. But this is real; this is not a concept. We've rolled this out to a few different customers, and we've seen significant benefits already. We have seen one customer go from the time we did a kickoff to the time they did a tape-out in 13 months. What used to be multiple years has been shaved down. And it's just massively beneficial in terms of getting products out, keeping customers happy, delivering solutions. Additionally, it's a huge, huge resource saving: in one case, over 80 man-years, say. Now, these aren't jobs that go away; these are man-years that can be used for other pieces of work, but it is a significant benefit to the end customer and to the market. And again, this is largely driven by the fact that we have reached a new paradigm in how difficult it is to build chips. They are just so complicated, and AI, which continues to drive even more and more innovation, adds to this. Now, a very interesting example I'd like to share with you, and I think it was probably talked about a few hours ago by NVIDIA, no coincidence there, theirs was first, mine was second, is an example of how these subsystems can be used. NVIDIA has developed a product called Grace, a CPU, what they call a superchip. Underneath the hood, when you peel it back, you have two blocks of Neoverse, which is our product line for the data center: two dies of 72 cores each on one superchip, 144 in total. Now, this gives an incredible amount of capability relative to performance per watt, if you just look at the numbers there. We are well known for our efficiency, going back to the birth of the company.
And it's really important to recognize that, because when the DNA of the company is all about power efficiency, when the DNA of the company is all about thinking about how to run off a battery, that's hard-wired into how the engineers think. So I've gotten lots of questions since I took over as CEO about why Arm is now winning in the data center, why Arm is winning in automotive, what's the significance, what have you done differently. It's a combination, obviously, of product development, but it's also a sensibility about efficiency that's just innate to the products. Which is incredibly important. So we were thrilled to partner with NVIDIA on this product, which Jensen was talking about earlier today. And when you combine that CPU complex with the GPU complex, you have an incredibly innovative AI engine. But this is really just an example of what I talked about earlier. This is a real-life example of where scaling is an issue, where time to market is an issue, where complex sub-blocks need to be integrated and delivered, and here you see a real instantiation of it. And it's going to be a very, very significant product, we think, going forward in terms of AI. But as you can imagine, there are a lot of different examples to talk about. Now, as you've probably seen in the press, we are in a quiet period. So I would love to talk to you about all kinds of things coming down the pipe, but there are so many cameras in this room that I am afraid to go much further. But since NVIDIA talked about this earlier today, this was one of the ones they talked about, and there's a real end application to this that is hugely exciting. It's really around an edge data center that can run a virtualized RAN network and, at the same time, a generative AI engine.
So think about the concept of taking a single compute block and, with that, being able to provide an industry-standard block that looks a bit like a server, to run industry-standard software. It will basically run all the Linux stacks, et cetera, et cetera, but at the same time it can run generative AI. And this is one small example of something that is going to be moving forward. You can see that this is a combination of ourselves with NVIDIA, and it's a concept that SoftBank is now moving forward with. Now, the area we are also really thrilled about, in terms of where things are going with the data center, is the fact that if you do generative AI inside the data center, all of the CUDA stacks, CUDA being the proprietary platform that NVIDIA uses for programming its GPUs, all run on the Arm CPU. What that basically means, at a top level, is that developers can develop their code, they can generate their models, and the stack, everything that runs through the entire software suite, can decide whether that runs on the Arm CPU or the GPU. The bottom line is, when you're running AI, AI is going to run great on Arm. Now, that's an application that is very significant in the data center. But actually, one of the things I am most excited about is the endpoints. All of this training that takes place in the data center around generative AI needs to run somewhere, right? Your training has got to connect to something on the other end, and that's where I think the gigantic opportunity is: at the endpoints. We are seeing an explosion of AI at endpoints. And this is a virtuous cycle, where the more intelligent the models get, the more work gets done in the cloud, and that's going to drive the need for AI inference at the edge. And what is required at the edge for these complex AI algorithms? You need, A, a lot of compute, and B, power efficiency. That's a given.
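The "write once, let the stack decide CPU or GPU" idea described above is roughly the dispatch pattern that AI frameworks expose. Here is a minimal, stdlib-only sketch of that pattern; the `gpu_available` probe and the two `matmul` kernels are illustrative stand-ins, not any real runtime's API:

```python
# Minimal sketch of runtime device dispatch: the model code is written once,
# and the stack decides whether it executes on the host CPU (e.g. an Arm
# Neoverse core) or an attached GPU. Both "kernels" are plain-Python stand-ins.

def gpu_available() -> bool:
    # Stand-in probe; a real stack would query the driver (e.g. CUDA runtime).
    return False

def matmul_cpu(a, b):
    # Reference matrix multiply on the host CPU.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul_gpu(a, b):
    # Placeholder for a GPU kernel launch; falls back to the CPU path here.
    return matmul_cpu(a, b)

def matmul(a, b, device: str = "auto"):
    """Dispatch one operation to whichever device the runtime selects."""
    if device == "auto":
        device = "gpu" if gpu_available() else "cpu"
    return {"cpu": matmul_cpu, "gpu": matmul_gpu}[device](a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # runs on the CPU path here
```

The point of the pattern is that application code calls `matmul` without caring where it lands; the same source works whether the host is x86 or Arm, which is what makes the CUDA-on-Arm story portable for developers.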
And we have a huge, scalable solution that can run, again, the overall software stack. We have the Cortex-M class at the very bottom; we move up through our smaller Cortex-A class and then up to our largest CPUs. So we have been working on this for a number of years, and as we've seen this opportunity exploding relative to all the AI algorithms that are going to be required at the edge, we've delivered many of these solutions to partners already. We have these in the marketplace, and this is driving significant opportunities for companies going forward. But it's not just a product roadmap. There are lots of real examples and applications where it's taking place today. One is the Google Pixel 7, which does live translate and live caption and has a virtual assistant. I don't know if you've ever played with this product, but through the Tensor engine, working with the Arm Cortex-X1 in a multi-core CPU block along with the GPU, this product is able to do a number of different innovations: translating what you're talking about, making predictive analysis on captions, and all of that is AI at the edge. Those are workloads now running locally that were trained in the cloud and have to run in the palm of your hand. And oh, by the way, it can't consume any more power. That is the reality for these products. Amazon Echo. The Amazon Echo uses Amazon's own silicon and will essentially handle all the voice commands relative to translations, ordering, anything you want to do in your interactions with Amazon. All of that takes place on the Echo, running on an Arm-based CPU, and it provides an incredible experience with an AI background.
And again, this might not be something you'd think of as AI, but it is AI, because it's taking commands and learning and training based on what your voice is telling it, and at the same time it's doing learning. One of the really cool examples, and this actually runs on an M-class controller, is an intelligent traffic monitoring system. It actually embeds into the traffic lights. What the lights are able to do, based on learning, based on patterns of traffic, is actually modulate the traffic lights. This is done in Korea. And by the way, I've always noticed that in Taiwan the traffic lights are really long. I'm not sure why, but you have the longest traffic lights I've ever seen in the world. Hopefully AI can find its way to shorten those 90-second cycle times. Because what this product does, on a real-time basis, is modulate the lights, and it does it not just based on a previous day or a previous week; it's doing real-time active monitoring. You can imagine, in the past, did a traffic light even have a CPU? I didn't know that, but I guess it does. Now it's going to need a CPU that is intelligent, that can actually run algorithms based on training, training done in the cloud each night, which sends information back down to the lights. A very real-time example. And again, it needs to be very power-efficient, because you're not adding anything new to the electrical subsystem. But it's a very real application of where we see growth. Another fairly cool example that you would not intuitively think of as an AI piece is a smart appliance. What would AI have to do with a smart refrigerator? Well, when you think about the life cycle of the compressors and everything involved in that subsystem, having some level of AI that can monitor, track, and reduce power is a huge deal.
So this company, which I think is based in, I'm going to say the Czech Republic, though I'll probably mix it up, had an interesting innovation here where they implemented Arm technology, doing AI around compressors. But this is not your stereotype of a smart refrigerator that has a display and you talk to it and it makes ice for you. This is a real-life example of saving power, saving energy, which, if you think about what's required for the planet, is really significant. And lastly, one that I didn't know about until we started talking through the different examples ahead of Taiwan, and this one I think is really fascinating. First off, you were probably not aware that the bee population has been decreasing by about 30% a year. Yeah, I didn't know that either. And bees are incredibly important to the world's ecosystem. The way that beekeepers who manage and run bee farms work is very hands-on. The folks go out to the crates, they go out to the hives, they monitor what's going on by hand, and then they come back and try to make fixes. So there's a company called Beewise, and I would recommend you Google them and take a look at their video online afterwards. They have amazing technology that has now moved completely to automation. There are robotics involved in changing out frames; there are robotics involved in monitoring the health of the bees. And there's a huge amount of work going on relative to the health, the life cycle, the feeding. And guess what? This is all based on Arm: a combination of Arm technology, Raspberry Pi, and also some work that was done with NVIDIA. But the point being, when we think about everything going on with AI, we think about generative models, which get a lot of hype.
There is a fantastic number of examples that will be very real for us, that will help our communities and help our planet. They need a high level of compute and they need power efficiency. And that's a great place, a great place for Arm. Now, none of this can happen without software and a software developer ecosystem that drives the applications. I've been in this industry my whole career. I've worked on lots of CPUs, and I've worked with a lot of CPUs that are now in the CPU graveyard; I've worked at a few companies that are in that graveyard. The value and power of a CPU comes down to the software. It comes down to developers. It comes down to the software ecosystem, to the people writing code for that platform. It is a virtuous loop: the more developers working on your platform, the more hardware gets created; more hardware lends credence to more applications, which lends credence to more people writing software. And there is no software ecosystem on the planet that's even close to what Arm has. That comes from 30-plus years of working across a broad number of areas, a broad number of developments. And we've just seen this accelerate over the last number of years. You can look at the number of developer hours that have been spent on Armv9 compared to Armv8; we've had a huge increase. You look at the number of developers who write software on Arm; it's a number unlike any other. And this is needed to develop great products, and to get great products to market sooner. Because in this world of insatiable demand for compute resources, and with the fact that it takes longer and longer to build these systems, software is everything. We're very, very pleased to be able to say that the developer community has been so strong, and so thankful to all the folks who develop on our platform. We are at an amazing time, really, in our industry.
And you can see, by the number of folks in the room attending, there's obviously a huge amount of interest in everything going on with our technology and with compute. As I said, I have never seen a time quite like it in our industry. So much so that when I think about predicting the next five years in terms of what's going to be required in the technology landscape, it's pretty hard to imagine what the next generation of products is going to look like. But I think we do know a few things. They're going to be extremely compute-intensive. AI is going to be everywhere. And they're going to get harder and harder to build. For that reason, we think the future of computing is built on Arm. Because everything, everything now is a computer. Your coaster is a computer. Your car is a computer. Your washing machine is a computer. And building those computers is getting harder and harder and harder. And when you layer AI on top of it, my God, there's a lot of work to do. But I'm very, very confident in Arm, together with a developer community like no other, and we have fantastic partners here in Taiwan that we've been working with for decades, whom I thank deeply for everything they've done for us. The future could never be better. It's been an interesting last few years for Arm, but I am personally very happy to be standing on this stage. I think the future for our industry and for Arm has never been better. Thank you so much. [Emcee] Please stay on stage; it's time for a group photo. Once again, we welcome the Chairman of TAITRA, Mr. James Huang, to come up on stage and take a group photo with Rene. Let's have a group photo. Thank you so much. Please look straight at the media. Yeah, the big photo.
The big smile. Okay, thank you. Thank you, everyone. Thank you for being here, and please return to your seats. Thank you, Mr. Chairman. [Emcee] Welcome, everyone; don't forget there is an intermission, and the intermission refreshments are for you. Thank you for coming today. [Streamer] Right, so that was... I guess it was no announcement, right? No announcement? I wonder why they had such a huge setup and had everybody show up. The only announcement was, please like and subscribe, do that kind of thing. Yes. So I'm not sure if I can do any interviews. I'll try asking around, but I guess it shouldn't be on the live stream. Yup. I'm going to get back to my Airbnb. And I might check if there's another event; I don't know, I didn't really look it up. I might go back to my Airbnb to get 20 more videos published from Display Week, hopefully before tomorrow, when Computex is starting. So I'll be live streaming the whole day tomorrow. This setup seems to be working, so I'll be live streaming as I go around interviewing all the exhibitors at Computex. Hopefully I can get some views on some of these videos. Right, so thanks a lot for watching. I don't really like this bubble tea; I'll try something else.