to Aaron. Today I'll be talking about machine learning for everyone. So who am I? Well, I am Aaron Ma. I'm 12 years old and I've had a tremendous passion for computer science ever since I was five. I'm a hardcore software and hardware developer and I'm the world's youngest TensorFlow contributor. I love robotics, machine learning algorithms, TensorFlow, Python, C++, and much more. I'm also the world's youngest graduate from Udacity, so I'm trying to start an engineering program and much more. And you can follow me on Twitter at aaronhma. So for those of you who've seen me talk before, hello there. For those of you who are new to seeing me talk, hi there. So today we'll be going on a journey to the moon. Will we come back? Let's find out. Oh wait, we're at blastoff. So hold on tight and get ready for blastoff. So now let's start talking about machine learning core concepts. So first of all, machine learning is not a joke. You're laughing? No. Very unacceptable. Let's begin. So machine learning is the study of complex algorithms and statistical models. It has the ability to learn and improve from its own experience without being explicitly programmed by any human intervention or assistance. The goal in machine learning is to find the algorithm, along with the weights and biases that come with it, by tweaking the input hyperparameters that we as humans give it. So machine learning is not magic or a black box. It's using tools and technologies to answer questions based on data. So here's the machine learning pipeline. On the left-hand side, we gather our data and apply preprocessing techniques to the data to make sure it's in a suitable format for machine learning. Then on the right-hand side, we apply a learning algorithm to the data set and then train our model. If our model has a pretty good accuracy, we might want to deploy it to the real world. If not, we might want to update our model and retrain it in hopes of getting a better accuracy. 
Don't worry if this doesn't make sense. We'll cover all of this in my talk. So let's look at machine learning in a human-friendly way. Let's meet John. John loves new music. He loves music with a fast tempo and the genre of rock, but he dislikes music with a slow tempo and the genre of pop. So we can plot the music he likes and the music he dislikes on an x-y axis graph, as you can see here. The x axis is the music genre, from pop to rock. The y axis is the music tempo, from relaxed to fast. So let's say John was listening to a new song, Master of Puppets by Metallica. Where do you think the point will go? If you guessed it was around here, you're correct. How about Bad Guy by Billie Eilish? Where do you think the point will be? Now some of you might be saying John would probably like this song, and some of you might be saying John would probably dislike the song. Well, here we can use a machine learning algorithm known as k-nearest neighbors. What we do here is set a K parameter. If K was four, then we draw a circle with four data points inside. Now we can go with the majority and say that John would probably like the song. So let's take a look at a few more scenarios of machine learning in the real world. For example, Twitter. I'm pretty sure most of us here use Twitter. Have you ever wondered how Twitter gives you recommended tweets? Whenever you follow someone, there's always an algorithm watching you to bring the content to you. But how? Well, whenever you follow someone, the algorithm automatically gets tweets from that user and then calculates the probability that you will like each tweet or not, and then shows the tweets from best to worst in your Explore tab. How about Uber? Well, Uber uses machine learning in a wide variety of their products. For example, in the rideshare app, they can predict the number of sales they're going to have in a day, and also supply and demand. 
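Going back to the k-nearest neighbors example for a moment: here's a minimal sketch in plain Python. The songs, their (genre, tempo) coordinates, and the like/dislike labels are made up for illustration, not taken from the talk.

```python
from collections import Counter
import math

def knn_predict(points, labels, query, k=4):
    """Classify `query` by majority vote among its k nearest labeled points."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    top = [labels[i] for i in order[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical songs plotted as (genre, tempo) on a 0-to-1 scale:
# genre 0 = pop, 1 = rock; tempo 0 = relaxed, 1 = fast.
songs = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7), (0.1, 0.2), (0.2, 0.1)]
likes = ["like", "like", "like", "dislike", "dislike"]

# A new song lands near the rock/fast corner; vote among the 4 nearest.
print(knn_predict(songs, likes, (0.75, 0.85), k=4))
```

With these made-up points, three of the four nearest neighbors are "like", so the majority vote says John would probably like the song.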
Also for their support, they have automated chatbots to help users quickly get the support they need. There's also account personalization, dynamic pricing, and also detecting fraud activity. All of this is based on machine learning. So let's learn machine learning today. There are three fields of study in machine learning. The first field of study is artificial intelligence, or AI. This is the broad discipline of creating intelligent machines that can think for themselves. Machine learning, or ML, is a subset of AI. It's a system that can learn from its own experience without any human intervention. Deep learning, or DL, is a subset of AI and machine learning. It's a system that can learn from experience on humongous data sets. So when we're talking about machine learning, we're actually referring to three different things. The first thing is called supervised learning. It's task driven: we take the data as the input and the labels as the target, and through training, the model will find some correlation between the input and the target. Supervised learning is best at solving regression and classification problems. Regression is where we need to predict a continuous value. For example, what is the price of Apple stock going to be on a particular day? Classification, on the other hand, comes in the form of a yes/no question. For example, is this picture a cat? Yes or no? There's also unsupervised learning. This is data driven, which means there are no labels and the data is not structured. We can only solve clustering problems, where we need to group similar things together. Finally, there's reinforcement learning, and it's algorithm driven, so it learns from its own experience. If it does something good, it gets a reward. If it does something bad, the reward is taken away. Supervised learning is what we're going to focus on in this talk. 
Unsupervised and reinforcement learning are beyond the scope right now, and if you want to learn about reinforcement learning, don't forget to go to my website at aaronhma.com and click on view my previous talks, and you can see my talk on reinforcement learning where you get to build your own self-driving car in the browser. How cool is that? So let's take a look at a simple supervised learning example: classification. Have you ever wondered how Gmail's spam folder works? Well, here's how it works. There's a bunch of emails marked as spam and not spam. We feed them into a computer, and the computer learns the relationship between emails marked as spam or not spam. So when an email comes in, it can automatically categorize it as spam or not spam. But why should you use machine learning here? Well, let's take a look at the traditional approach. In the traditional way, we would study the problem first and write a bunch of rules. For example, if the text contains phrases like "awesome for you" or "free" many, many times, it's probably going to be spam, and if not, it's probably not going to be spam. Then we evaluate this approach, and if it's pretty good, we'll probably just launch it. If it's bad, we'll want to write some more rules. But what if a spammer changes the O in "awesome" to a zero? Sorry, customer, we forgot to block that spam email. This is why we should use machine learning. Once we study the problem, we can train our machine learning algorithm based on the data. If it's pretty good, then we can launch it, and if not, we might want to update our model a little bit. The real reason we use the machine learning approach is that it can be automated. The user reports an email as spam, and as long as we get new data, we can go ahead and train our model again to get even better results. How about clustering in unsupervised learning? Take a look at these images on this slide. What if there were a million of these images? 
If I then asked you to sort them into three different groups, it could take more than 20 years just to do this. This is where unsupervised learning shines. Using an unsupervised learning algorithm, we'd like to identify which items are similar to each other. In this case, the triangles will go together, the cubes will go together, and the circles will go together. In total, there will be three groups. How about reinforcement learning? Take a look at this maze. Pretty easy, right? But how can we teach this to a computer? Well, to do this, we need to better understand what reinforcement learning is. The mouse in this maze is the machine learning agent, which will play in our environment, which is the maze. In the first learning epoch, the agent might just keep moving forward until it gets electrocuted and dies. But after many, many learning epochs, it will eventually get to the desired output, which is the cheese, and it will get there quickly and efficiently. How about neural networks? Neural networks are the foundational building blocks of machine learning. A neural network is computer software that was inspired by the brain, made up of units, like the neurons in our brain, that work together to solve a problem. So a neural network is basically a stack of layers. Each layer is made up of units. For example, in our input layer, we have three units. You may also notice that each of these layers is fully connected. We can call this a dense layer. You may also notice that in the hidden layer and the output layer, we have Ws and Bs. These are the weights and biases, which are the internal variables of our model, and they are going to be updated during training. And by model, we're basically referring to a neural network. So here, literally, is the simplest neural network. All we do is take an x and a y, and if you remember from your algebra one classes, a function which multiplies x by y to get the output. So if x was three and y was seven, the output would be 21. So here it is in Python. 
Basically, this is the exact same thing, except we're just multiplying 8.5 by 0.1, and we have 0.85. But that's just silly. In today's modern world, there are lots and lots of hidden layers. So let's take a deep dive into deep neural networks. But hey, congratulations on passing the most critical part of the rocket launch. So let's say we're going to build a neural network to classify a square, a triangle, and a star. We'll feed these images into our input layer. Then we'll pass those images from our input layer to our hidden layers, which perform the required computation for our model. Finally, the output layer will hold our outputs. There will be three neurons in our output layer. The number of neurons in the output layer of a classification problem like this one depends on the number of classes that you're feeding into your model. In this case, three: a square, a triangle, and a star. Let's say we're going to feed an image of a triangle to the neural network. This image is 28 by 28 pixels, or in total, 784 pixels. So there will be 784 neurons in our input layer, one for each pixel. Then we'll pass those into our hidden layers through the lines, called channels. Each channel has its own number, known as the weight, and each neuron in the hidden layer has its own number, the bias. Thus, we can formulate the equation y equals wx plus b, where x is the data in the input neuron, w is the weight, and b is the bias. Once we have that number, we need to pass it through an activation function. In this case, we'll go through a sigmoid function, which gives us a value from zero to one, so we can use it for predicting probabilities as the output. And it's called sigmoid because if we plot it on a chart, we get the shape of an S, thus sigmoid. This is calculated at every step of our hidden layers. Now, if we go to the end, our output layer, we can see we have a wrong prediction. But this is only half of the story. This is called forward propagation. 
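The y equals wx plus b step followed by the sigmoid can be sketched in a few lines of Python. The numbers here are arbitrary, just to show the shape of the computation for a single unit:

```python
import math

def sigmoid(z):
    # Squashes any real number into (0, 1): the S-shaped curve from the talk.
    return 1 / (1 + math.exp(-z))

def neuron(x, w, b):
    # One unit: weighted input plus bias (y = w*x + b), then the activation.
    return sigmoid(w * x + b)

# With x=0.5, w=2.0, b=-1.0 the weighted sum is exactly 0, and sigmoid(0) = 0.5.
print(neuron(x=0.5, w=2.0, b=-1.0))
```

A real hidden layer does this same computation for every neuron, with each neuron seeing every input, which is what makes the layer "fully connected".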
Right after forward propagation, we see how much we need to improve. Here, we can see our machine's prediction, and also our errors, and also the targets. The one beside the triangle tells us that that is the desired output. Now for the other half of the story: back propagation. Now that we know how to improve our model, we go through back propagation, where we adjust the weights and biases in hopes of reducing the error. After many epochs of forward propagation, back propagation, forward propagation, and so on, we eventually have a correct prediction. And you can see that after many iterations of training, we have reduced error and increased accuracy. So now you might be thinking, how long does this training take? It can take anywhere from minutes to hours to even days. So let's take a look at the machine learning process. So here it is, blah, blah, blah, blah. Let's take a look at it in an easier way. The first step is to gather our data. Machine learning depends largely on our training data, so we need a large amount of high-quality data. So where can you find a large amount of high-quality data? Well, you can find it from one of these sources: for example, Google Dataset Search, Kaggle, the UCI Machine Learning Repository, HackerEarth, Amazon, Microsoft, etc. The next step is to preprocess our data. This basically involves selecting your data, filtering it, transforming it, and also visualizing it to get a better sense of it. Basically, we're just cleaning our data to make it suitable for machine learning. Next, we choose an algorithm. This algorithm will be used to train our model, so make sure you choose wisely. If you choose an algorithm that is not suitable for your data, then you'll have very bad accuracy, so choose an algorithm that's suited to your data to get high accuracy. You can get algorithms from popular libraries like scikit-learn and Keras. 
The most commonly used machine learning algorithms include linear regression, logistic regression, decision trees, k-means, etc. The next step is to build and compile our model. Here, we'll build our model layer by layer, and each layer will have its own weights that correspond to the layer. And also don't forget to add activation functions. So here, we build a model by first defining a Sequential model, which basically defines a linear stack of network layers. Then we can add layers to our model using the model.add API. And here we can see that we're adding a dense layer, which is basically a fully connected neural network layer, with the ReLU activation function. Then we need to compile our model with the optimizer, which is how we're going to improve our model, the loss, which calculates the error of our model, and the metrics, like accuracy. The next step is to train our model. We make a prediction based on the current state of the model, and then calculate how bad the prediction is. Then we go through back propagation and update the weights and biases to minimize this error and make the model better. We train our model using the model.fit API, with the training data and the training labels. Then we also set the number of epochs, the batch size, and the callbacks. Make sure that when choosing your epochs, you choose a good amount. If you choose a very high number of epochs, you'll run into something called overfitting, which basically means your model has memorized the training data and does horribly on the testing data set. Underfitting is where your model didn't learn anything during training, so it also performs badly during testing. The batch size sets the number of samples we step through our data set with in an epoch. And a callback is, for example: if we reach a specific accuracy or loss, we might want to stop training in order to not overfit. 
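As a rough pure-Python picture of what one of those fully connected (dense) layers computes: real code would use a library layer such as Keras's Dense, and the weights and biases below are made-up numbers just for illustration.

```python
def relu(z):
    # ReLU activation: pass positives through, clip negatives to zero.
    return max(0.0, z)

def dense(inputs, weights, biases, activation=relu):
    # One fully connected layer: every output unit takes a weighted sum of
    # every input, adds its bias, then applies the activation function.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical layer mapping 3 inputs to 2 units: one weight row per unit.
W = [[0.5, -0.2, 0.1],
     [0.3,  0.8, -0.5]]
b = [0.0, -0.1]

print(dense([1.0, 2.0, 3.0], W, b))
```

Stacking several of these calls, each feeding its outputs into the next, is exactly the "stack of layers" picture from earlier in the talk.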
The next step is to test our model. We see how well our model learned the data set during training, and we make predictions using model.predict along with the testing x data. And then we repeat this process. So if machine learning is this easy, who uses it? Well, there are researchers, data scientists, machine learning engineers, developers, and me. So congratulations on making it this far. Now we're at the second stage separation. Okay, let's talk about traditional software development versus machine learning. Make sure you pay attention to this part, because this is a very commonly asked interview question. In fact, if you go to a machine learning interview, this might be the first question that they ask you: what is the difference between traditional software development and machine learning? Well, let's take a look at that. So let's play a game. Well, not really a game. In traditional software development, we already know the input and the algorithm, and we just write a function that gets the output. In machine learning, on the other hand, we take pairs and pairs and pairs of input and output data, and we create a model that will figure out the algorithm. In machine learning, we focus more on how the data is represented, while in traditional software development, we focus more on our code. So let's say we're solving the Celsius-to-Fahrenheit problem using traditional software development. Here, we take our input, which is Celsius, and use the algorithm, which is Celsius times 1.8 plus 32, to get the output. But in machine learning, we take pairs of input and output data, and we create a model that, through iterations, will figure out the algorithm. So let's take a look at that in a simple scenario. For example, how different languages say hello. Hello, hello: there are so many languages to say hello in. In traditional software development, based on the text, we'll say what language it is. 
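Going back to the Celsius-to-Fahrenheit contrast for a second, here is the whole idea as a runnable sketch. The learning rate, epoch count, and sample pairs below are my own picks for illustration; a real version would use a framework like TensorFlow rather than a hand-written loop.

```python
# Traditional software development: we already know the algorithm.
def c_to_f(celsius):
    return celsius * 1.8 + 32

# Machine learning: start with w = 0, b = 0 and let iterations over
# (input, output) pairs figure out the algorithm for us.
pairs = [(0.0, 32.0), (5.0, 41.0), (10.0, 50.0)]
w, b, lr = 0.0, 0.0, 0.005

for epoch in range(50000):
    for c, f in pairs:
        err = (w * c + b) - f   # forward pass, then measure the error
        w -= lr * err * c       # back propagation: nudge the weight...
        b -= lr * err           # ...and the bias toward less error

print(round(w, 2), round(b, 2))  # should approach 1.8 and 32.0
```

The loop never sees the formula, only data, yet it recovers the same 1.8 and 32 that the traditional function hard-codes. That is the difference in a nutshell.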
If the text is hello, then the language is English. Hello in French is bonjour, so if the text is bonjour, the language is French. Hello in Spanish is hola, so if the text is hola, the language is Spanish. But what language is this in? It's Chinese. Oh no, our program has no idea what that means. The problem is that in traditional software development, we have to explicitly handle every single possible thing that might happen. By the time you're done with that, you'd look like this. And then, oh no, there are so many other phrases that you'd have to write rules for in traditional software. For example, bye. What are you going to do about that, and so much other stuff like that? Well, let's use the magic wand of machine learning. First, we'll gather our data set, then train our model, which will learn the relationship between the data and the answers to figure out the rules. So let's try it again. What language is this in? And as expected, the language is Chinese. Much better. So if you're lazy like me, you can use something called BERT, or Bidirectional Encoder Representations from Transformers. This is a state-of-the-art natural language processing model that was rolled out by Google AI. It's a mechanism that allows you to learn the relationships between words in a text. And now we're at the third stage ignition. We're almost at the moon. Let's talk about the history of machine learning. Nowadays, as you probably know, we have these fancy neural networks, and also scary water-cooled TPUs. But let's just hold on and rewind to the year 1940. In 1940, the idea of machine learning was introduced by the Allied powers during World War II. The Allied powers thought, if we can have this machine learning thing, we'll definitely win World War II and beat the Axis powers. But sadly, there wasn't enough computational power for this to happen. 
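The rule-based language detector from a moment ago looks roughly like this in Python, and it fails exactly the way the talk says: any phrase we never wrote a rule for falls through. The "unknown" fallback and the Chinese test string are my own additions for the sketch.

```python
def detect_language(text):
    # Traditional approach: one hand-written rule per phrase we thought of.
    rules = {
        "hello": "English",
        "bonjour": "French",
        "hola": "Spanish",
    }
    return rules.get(text.lower(), "unknown")

print(detect_language("Hola"))  # covered by a rule
print(detect_language("你好"))   # no rule written, so the program is stuck
```

A trained model sidesteps this because it learns the text-to-language relationship from examples instead of relying on an ever-growing pile of hand-written rules.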
Then, 11 years later, the first neural network learner was built by Marvin Minsky from MIT, who created a machine made using a neural network. Then just one year later, the first reinforcement learning agent was made by Arthur Samuel. Then people were like, this machine learning is so good, it can definitely match the ability of humans. So people started pouring funding into machine learning. But when that didn't pan out, in 1974, the first so-called AI winter started. Then, out of the blue, in 1996, came IBM's Deep Blue and the defeat of the world chess champion, Garry Kasparov. Just four years later, back propagation for neural networks was introduced, allowing neural networks to train faster and more accurately. Then in 2014, DeepMind joined Google. Just two years later, DeepMind's AlphaGo beat the world's best Go player. And in 2030, AI robots will be taking over our jobs and daily tasks. Oh, did you hear that? You're not going to have a job in 2030. So how do you still have a job in 2030? By learning TensorFlow. TensorFlow, TensorFlow: it's the one thing you've got to learn to still have a job in 2030. Okay, so what is this TensorFlow thing? Well, TensorFlow was originally titled DistBelief, which the Google Brain team started in 2011. Then they needed some more help, so they decided to open source it, rebranding the product as TensorFlow in 2015. Then in 2017, TensorFlow 1.0 was announced, and it became the world's most popular machine learning library. It also included Keras, which allowed people to develop their models more quickly and easily. Then TensorFlow 2.0 was released last year in September. So what's new in TensorFlow 2.0? Well, eager execution is enabled by default, and instead of importing Keras separately, you can use tf.keras. Also, tf.data is a simplified API that lets you read your training data through an input pipeline. Also, the tf.function decorator automatically translates your Python programs into TensorFlow graphs for you. 
So you no longer have to use tf.Session, and you can still run TensorFlow 1.x code in the 2.0 release. But if you're a professional, don't do it. The architecture of TensorFlow 2 is actually pretty easy. We start by reading and preprocessing our data. Once we have that, we use tf.keras or a premade estimator to build and compile our model. Then we can train our model using a distribution strategy, which will take full advantage of our CPUs, GPUs, and TPUs. Once it's done training, we can use SavedModel, which lets us deploy our model to the cloud, on a phone, in the browser, and to other TensorFlow language bindings. So consider this your first hello world application in TensorFlow. Here we import TensorFlow as tf, and then we can print out the TensorFlow version. And here you can see I'm on the latest TensorFlow version, 2.2. Now, there are so many machine learning Python libraries out there, so why should you use TensorFlow? More broadly, which machine learning library should you use? Well, I've got you covered, friend. The top three most popular machine learning libraries as of today are TensorFlow, PyTorch, and scikit-learn. TensorFlow, by far, has the most GitHub stars, it's backed by Google, and under the hood it includes Keras. PyTorch, on the other hand, has 40k stars, it's backed by Facebook, and it's built on top of Caffe2. Finally, scikit-learn has 41k GitHub stars, and it's backed by the open source community. Let's take a look at the statistics. Statistics never lie. Clearly, over the past 12 months, worldwide, TensorFlow has been the most popular search term in the machine learning category. So the winner of the machine learning Python library contest is, drum roll, please: TensorFlow! Yes! Everyone loves TensorFlow. So now we're at the third stage burnout. Now let's check out some other tools for machine learning. The first one is Jupyter Notebook, which allows you to quickly write Python, Markdown, and other language bindings from the comfort of your own browser. 
There's also Google Colab, which is basically Google's copy of Jupyter. Here you can see a simple Google Colab, but ha ha ha, I've hidden the rest of the Colab. What I think is great about Colab is the free stuff. Oh yes, it's completely free. There are free GPUs and TPUs for you to use. How about NumPy? NumPy, or Numerical Python, is an open source library that you can use for scientific and numerical computing. It was built using Python and C, and it allows efficient computation on arrays. There's also pandas. Pandas is a high-performance data manipulation and analysis tool. It basically allows you to load your data, clean it, and also plot it. Here you can see that in under three lines, you can import a data set from the internet. How cool is that? Let's take a look at that in a human-friendly way. Here we're importing pandas as pd. Okay, we imported pandas. Now it's reading our pandas.csv file. Okay, we have our CSV file. Now let's clean our data using df.drop. Okay, we're left with shreds of bamboo. And now we have a great meal. So beautiful. Now let's take a look at Matplotlib. Matplotlib allows you to create beautiful data visualizations with high-quality graphs, charts, figures, and much more. Here you can see some of the Matplotlib graphs you can create. Now for your first project: Fashion MNIST. So now you might be a little sad because it's project time. But coding? Yes. Let's take a look at it. Okay, so here I'm in the Colab. Today we'll be taking a look at a Fashion MNIST demo, where we will build a neural network to classify images of clothing. First, we're going to import packages and load our data set. Here we basically import everything and print out the TensorFlow version. Next, we'll load our data set from Keras Fashion MNIST, and here we're actually going to split it into the training set and the testing set. 
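Back to the pandas aside for a moment: if you have pandas installed, the load-then-clean pattern looks roughly like this. The CSV content here is made up and read from memory instead of the internet, so the sketch stays self-contained.

```python
import io
import pandas as pd

# A stand-in for a CSV file downloaded from the internet (hypothetical data).
raw = io.StringIO(
    "name,bamboo_kg,notes\n"
    "Po,12.5,hungry\n"
    "Mei,9.0,\n"
    "Ling,,sleepy\n"
)

df = pd.read_csv(raw)            # load the data
df = df.drop(columns=["notes"])  # clean: drop a column we don't need
df = df.dropna()                 # clean: drop rows with missing values

print(df.shape)  # rows x columns left after cleaning
```

From here, a single `df.plot()` call would hand the cleaned frame off to Matplotlib, which is the load, clean, plot workflow the talk describes.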
The training set is what I'm going to use to teach my model, and the testing set is what I'm going to use to see how well my model has learned the data set. Next, let's explore our data set. We actually have 10 different classes inside our data, from t-shirts and trousers all the way to ankle boots. In our training set, we have 60,000 images, each 28 by 28 pixels. And our testing set has 10,000 images, each 28 by 28 pixels. Here we can see a sample image, and it's an ankle boot. But there's one thing we need to do first: scale the pixel values, because our model works best when the inputs are between 0 and 1 instead of 0 to 255. Here in Python, we can do this pretty easily. We just divide by 255. So here you can see all our images are now scaled. Now we'll build our model. First of all, in our input layer, we'll convert our data to a one-dimensional array. Then in our hidden layer, we'll have 128 neurons, and we'll be using the ReLU activation function to introduce non-linearity to our model. And our output layer has 10 neurons, because there are 10 different classes that are going to be fed to our model. Then we're going to compile our model using the Adam optimizer, which will find individual learning rates for each parameter. The sparse categorical cross-entropy loss will measure the similarity between the predicted class probabilities and the observed class labels. We'll also calculate our metrics, and here we'll be using the accuracy metric. So now let's train our model. We can train our model by passing the x data, which is the images, and also the training labels. And here we set the number of epochs. We could also pass verbose so we don't see any output, and also callbacks, but here I'm just going to set an epoch number and our data. So now let's test the model. 
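Pulling the build-and-compile steps just described together, the model definition looks roughly like this in tf.keras. This is a sketch of the demo's structure, not a copy of the notebook: I feed one fake random image only to check the output shape, since the real code would call model.fit on the Fashion MNIST arrays instead.

```python
import numpy as np
import tensorflow as tf

# The demo's shape: flatten 28x28 -> dense 128 (ReLU) -> dense 10 (softmax).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One fake "image" already scaled to 0..1, standing in for real data.
fake = np.random.rand(1, 28, 28).astype("float32")
probs = model.predict(fake, verbose=0)

print(probs.shape)  # one row of 10 class probabilities
```

With real data, the next call would be model.fit(train_images, train_labels, epochs=...), exactly as the demo walks through.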
And we can see we have an 87% testing accuracy. Hmm, I think we overfitted a little bit here, because in our final epoch we have 98% accuracy on the training data, but in our testing accuracy, we only have 87%. So you can see that we have overfitted a little bit. So here's a challenge for you: try playing around with my model, and also with the number of epochs, to see if you can get an accuracy that's greater than 95%. If you need any help, feel free to reach out to me on Discord. Now we'll take a look at our predictions. You can see here that our prediction is nine. Remember our class names, right? The nine is an index inside our array, and you can see it's an ankle boot. So let's take a look at the correct label, and it's also nine. So this is a correct prediction. Let's plot out that prediction, and you can see that we got it correct. The blue means that we got it correct, and the red means we got it wrong. So our model got sandal wrong here. Let's plot everything. Our model is actually doing pretty well here, all except for the sneaker. So congratulations on solving your first project in machine learning. And now we're at the moon. Congratulations. Now for my grand finale, drum roll please. What if there was a way to create machine learning models with no code required and no machine learning experience required? Introducing Teachable Machine. Teachable Machine is a web-based tool that makes creating machine learning models fast, easy, and accessible for everyone. Teachable Machine is great for kids, but more importantly, it's something for everyone. Teachable Machine can currently handle images, sounds, and body poses. Here's how it works. First, you gather all of your data and train your model. Then you can test your model to see how it performs, and also export it for your own project. Let's take a look at a live demo. 
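One small detail from the prediction step above: a prediction is just 10 probabilities, and the predicted class is the index of the largest one, which is what np.argmax computes. The probability values below are made up; the class names are the standard Fashion MNIST labels.

```python
# The 10 Fashion MNIST classes, index 0 through 9.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# A hypothetical model output: one probability per class, summing to 1.
probs = [0.01, 0.00, 0.01, 0.00, 0.00, 0.02, 0.01, 0.03, 0.02, 0.90]

# Index of the highest probability = the predicted class.
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(predicted, class_names[predicted])
```

That is why seeing "nine" as the prediction and looking it up in the class-name array gives us "ankle boot".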
Here I'm going to create a new image model. Today I'm going to have it classify images of me holding up three fingers and five fingers, so I'm going to create two classes: three and five. Here I'm going to upload images of me holding up three. Okay, I have my data for three, now five. Now let's train the model. You can see we train for 50 epochs, with a batch size of 16, and the learning rate will be 0.001. Okay, now it's trained, and you can see I can test my model. Here I show three, and it's pretty good. How about five? Pretty good. But what I think is great about Teachable Machine is that once you have this, all you need to do is click export model, and you can put it on your website, load it from TensorFlow.js, and also put it in your own mobile app. So what just happened there? Well, what just happened was transfer learning. Transfer learning allows you to take a model trained on some input data, then change the input data and retrain it again with about the same accuracy. An example of this is neural style transfer: here we have a content image and a style image, and we get a generated image. The generated image is the content image repainted with the style image on top. So here you can see the painting style of Van Gogh applied to the final generated image. A more advanced example of this is StyleGAN, where we're basically generating faces of fake people based on faces of real people. So congratulations on making it this far. Now for some takeaways from my talk. Machine learning is everywhere, and machine learning is for everyone, which means you, yes, you sitting in your chair, can become a machine learning engineer with a little help from Google TensorFlow. We're back on Earth. You survived. Congratulations. Now everyone, listen to this quote from the founder of Coursera and deeplearning.ai, Andrew Ng: "AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. 
The rocket engine is the learning algorithms, and the fuel is the huge amounts of data that we feed to those algorithms." That's Andrew Ng. What he's saying is that to build a really good machine learning model, you need a good algorithm and a ton of data. The more data we have, the better. So here are some next steps in your machine learning journey. You can install TensorFlow from the official TensorFlow site. I also highly recommend you check out Google's free machine learning crash course, where they cover most of the machine learning algorithms, and also Awesome TensorFlow, which contains great resources for learning TensorFlow, and also TensorFlow Playground. Yes, TensorFlow Playground. It allows you to build a neural network in your browser. And don't forget to check out Machine Learning Zero to Hero on YouTube, by my good friend Laurence Moroney on the Google TensorFlow team. Now give yourself a round of applause for learning the basics of such a hard concept. Thank you for listening. I'm Aaron Ma. Don't forget to check out my website at aaronhma.com, and also don't forget to send me an email at hi@aaronhma.com. I'm available 24 hours a day, seven days a week. And don't forget to follow me on Twitter at aaronhma, where I'll be posting the slide deck, plus tips and tricks for machine learning. Thank you for listening and have a great day. Goodbye. Hey, that's great. That's a very, very good talk. It's like one of the best talks I've ever seen at EuroPython, to be honest. And actually, there's one question for you, because your talk is so amazing and all the things that you do are so interesting. Do you find the things in school a bit boring for you? I would say it's actually pretty boring, to be honest. It's not that fun, because they basically follow the basics of Common Core. They don't teach a lot. I think it's pretty boring. 
Yeah, so you want to just stick with machine learning and all this AI stuff, right? Yeah, pretty cool. That's really an unlimited thing to explore. But I think you can maybe do both, because the school stuff is so easy for you, you can do everything, actually. Yeah, that's great. Or maybe you can use machine learning to finish the school work. Yeah, okay. So yeah, I think now is almost the time for the closing session, so that will happen in the Brian room. So thank you so much, and we would love to see you again, and maybe you can show us something exciting next time. Yeah, so if you have any more questions, you can continue the chat on the Discord server. There's actually a dedicated room for this talk, so you can continue the discussion there. And so that's it for this room now. The closing session, again, is in the Microsoft room, so I'll see you all there. Okay, bye.