Great. I'd like to welcome everyone to this webinar on privacy and AI model building. A little bit of additional information before we get started: I work over at doc.ai as the head of edge infrastructure. doc.ai is a Silicon Valley based digital health company. We build mobile health products that work with AI technologies, algorithms, and privacy-preserving infrastructure, and we work with a variety of organizations across the board, including very large healthcare organizations. Part of what I've done over at doc.ai is start the federated learning and AI privacy teams and build up the initial teams around them. I also focus on infrastructure-related work to help healthcare companies adopt zero trust and cloud native environments, and I do a lot of work in the open source space, particularly with Network Service Mesh, a CNCF sandbox project that helps with lower-level networking, zero trust networking, and policy, and which is seeing early adoption in both telecom and enterprise. I've also worked in a variety of other areas across infrastructure. So let's get started. The agenda: we're going to run through a survey that starts with AI model-building technologies, then looks at infrastructure technologies, and then we'll finish with a short example that uses some of these technologies. Now the problem statement. When you look at what artificial intelligence is, and specifically how neural networks work, they learn to predict their outputs based on their input features. If you look at the graph on the right, a set of input features comes in, runs through a network, that network learns things about the input dimensions, and then it tries to make predictions at the output. This is generally a good property: as you learn information about the input, you can make better decisions about what the output should be. But it also has an impact on privacy, because you're literally learning to identify features of your inputs in order to predict your output. So if you're training a model that has access to sensitive data, what you're effectively doing is training the model to recognize properties of the training set. The reason this happens is an effect called overfitting. When you have a given input, the system is not immediately capable of generalizing right up front, and while there are techniques you can use to help with generalization, whatever input you send is what the model learns to identify. If you're building an animal detector and all you put in is cats, you're probably not going to be able to detect dogs with it. That is a form of bias, and it is a form of overfitting. To make matters worse, overfitted models encode too much information about the individual inputs themselves. What this means is that if a model were to escape your environment and end up on the black market, you have no way to remove it from there, and people can analyze the model and discover information about its inputs, which could be very sensitive data. 
To give you a visual example, we have two lines in the center, each representing a very simple model. The straight line is a more generalized predictor, and the dotted line is something that has been overtrained. The question is: predict where the squares are and where the circles are. If you look at how the dotted line squiggles around, you can see where it reaches out to encompass a circle and where it reaches out to encompass a square, and if we were to populate this with the initial training set, you could actually see how this particular model has overtrained. The solid, linear version is still giving you information about the overall data set while giving you less information about any individual. So the very first thing you should do to protect your information is reduce your overfitting. There are several techniques you can use; this is not a full survey of how to reduce this form of bias, but these are tools that are very well used throughout the industry and can help you get started. The first thing to realize is that smaller networks learn less information: a network with 10 or 100 nodes is going to learn a lot less than a network with a million or 10 million nodes. So the first step is to work out the smallest network that still gets you a good prediction, but isn't so large that it learns too much about the specific entries in your data set. There are other techniques such as dropout, which shuts off random nodes within your graph so that each node learns less information and generalizes a little better. You can also train on more data: if you train on a small data set you will learn details about that small data set, but if you train on a large data set you will, depending on the bias within the data of course, generally learn less about any single entry. There are regularization techniques you can add that help suppress some of this overtraining, and there are things like data augmentation, the ability to add additional noise into the system to help reduce some of the bias. These are not the only things you should do, and we'll get into more rigorous techniques shortly; a small sketch of a couple of these ideas follows below. Another problem we have is access to sensitive data. Most models are developed on centralized data sets. Imagine you're creating a model to try to detect some form of cancer. Typically, people gain access to data through hospitals or research organizations that have collected this information, and they centralize all of it into a central repository. That central repository, which could contain sensitive information, is then provided to a small group of researchers who run their AI model training loop against it. You can see in the diagram on the right that the centralized data and the AI model training loop are coupled closely together within the training environment. 
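To make the overfitting-reduction techniques mentioned above a bit more concrete, here is a minimal sketch of a small network with dropout and L2 regularization, assuming a TensorFlow/Keras setup; the layer sizes, rates, and feature count are illustrative assumptions, not recommendations.

```python
import tensorflow as tf  # assuming a TensorFlow/Keras environment

# A deliberately small network: fewer parameters means less capacity to
# memorize individual training examples.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                              # 20 input features (made up)
    tf.keras.layers.Dense(
        16, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),   # L2 regularization
    tf.keras.layers.Dropout(0.5),                             # randomly drop nodes during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=10)  # hypothetical training data
```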
One of the questions that comes up is: what if the parties who own the data do not want to share it with a central group or central authority? One technique for dealing with this is federated learning. Federated learning allows you to develop a model without having direct access to the data; instead, you coordinate with lots of other agents so that each of them trains on its own data and sends you back updates. If you look at the diagram on the right, there are three stages. First, there is a central model with some initial predictor that gets pushed out to a remote set of agents. Those agents each have data they can train on; each runs its local training loop and produces a new model. The updates to those new models get sent back to your repository, where they are aggregated: they can be ensembled together, they can be averaged out, and there are various techniques for deciding how to join them, but the end result is that the models get joined into a single model or a single set of models. This helps with the issue where multiple organizations have sensitive data they don't want to share directly with one another, for a variety of reasons. It still allows you to train across multiple organizations without owning the data and without ever having access to the original data. But even that is still not enough. One of the core questions in this problem is: can we model a population without modeling the individuals in the population? Because if you recall, you still have the problem that the model itself can expose information, and federated learning by itself doesn't solve that, although it is a step toward removing access and preserving privacy. So, phrasing the question again: can we model a population without modeling the individuals of the population? Suppose you have a sensitive question, something like: do you have any history of mental illness in your family? Answering is complicated, because there may be legal and social implications. The same may be true of past drug use, criminal history, and so on. For these types of sensitive questions, if you ask a person directly, they may choose not to answer, so you get bias toward people who are more comfortable answering; or they may lie, because they want to participate and feel that being seen not to participate gives away information about themselves, so they choose to lie instead and add additional bias into your system. One of the techniques we can use to help with this is called differential privacy. Differential privacy can be used even without machine learning, and the idea is that we add noise to both the inputs and the outputs. Here is a very simple example. Take that same question from before, about whether you have a history of mental illness in your family, and suppose you ask it as part of a written survey. If you just ask the question directly, you'll get bias. So what we can do instead is put the person in a private room with an unbiased coin. 
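Before walking through the coin-toss protocol, here is a rough sketch of the aggregation step from the federated learning flow described a moment ago. It assumes simple, unweighted averaging of the agents' weights and made-up layer shapes; real systems typically weight by dataset size, secure the channel, and handle agents that drop out.

```python
import numpy as np

def local_update(global_weights, local_data):
    """Stand-in for an agent's local training loop: start from the global
    weights, train only on the agent's own data, and return new weights.
    Here we just perturb the weights so the example runs end to end."""
    return [w + 0.01 * np.random.randn(*w.shape) for w in global_weights]

def aggregate(agent_weights):
    """Combine the agents' results by simple averaging, layer by layer."""
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*agent_weights)]

# One round with three agents and made-up layer shapes.
global_weights = [np.zeros((20, 16)), np.zeros((16, 1))]
local_datasets = [None, None, None]    # each agent's private data stays local
updates = [local_update(global_weights, d) for d in local_datasets]
global_weights = aggregate(updates)    # only weight updates ever leave the agents
```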
Maybe the user even provides the coin themselves, so they know it has not been tampered with. The instructions say: toss the coin. If it comes up heads, you toss the coin again, which hides the first result, and then you answer the question truthfully. If the first toss comes up tails, you toss the coin again: if it's heads you answer yes, and if it's tails you answer no. What this does is add plausible deniability to the answers. By plausible deniability I mean that if you go to the person who filled out the survey and say, hey, you answered yes, that means you have this condition or this history, the person can say: no, I just answered the coin toss. It gives them the ability not to reveal information about themselves directly, while still giving you the ability to learn something about the overall population, because you know the probability of the coin toss and you can model that into your system. It turns out you can also apply this technique to machine learning. You can apply noise to the input, you can apply noise to the output, and you can cap the learning rate so that you don't learn too much information in a single pass. You are then able to quantify how much information has been encoded into the system, and it also turns out to help generalize your model, because you're not learning about any individual user at the same level. Another related technique we use is secure multi-party compute. Secure multi-party compute is for the case where you have parties who want to collaborate but do not want to share information. What happens is that the data is split into multiple component shares: A gets turned into A1, A2, and A3, and B gets turned into B1, B2, and B3. In the diagram on the right we have three separate organizations: one organization owns all the circles, another owns all the squares, and a third company, a trusted third party, is the diamond. The shares A1, A2, and A3 are produced from A and each piece gets sent to one party respectively, and the same happens with B. All the parties then perform the same computation on their shares. The party in the center performing the function F is not able to reason back to A from A1 or to B from B1, and the same is true for the square party and for the diamond. But once you combine the partial results back together, you get the result of the computation without anyone having shared the original information itself. Now, there are some limitations to this technique. In short, and this is more for the mathematicians in the room, if you have anything that is nonlinear, you need to be very careful: the technique tends to break. From an AI perspective, you're able to perform the initial work on the graph, the multiplications and the additions, but if you throw in something like a tanh, your output is likely to break. Predictions still work, though, which means you can run your predictions as long as you stick to operations that are linear. 
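As a small illustration of the coin-toss protocol described above, here is a sketch of randomized response and of how the true population rate can still be recovered from the noisy answers; the population size and the true rate are made up.

```python
import random

def randomized_response(true_answer: bool) -> bool:
    """First toss heads: answer truthfully. First toss tails: answer at random."""
    if random.random() < 0.5:          # first coin toss
        return true_answer             # heads: tell the truth
    return random.random() < 0.5       # tails: second toss decides yes/no

# Simulate a survey where 30% of people truly answer "yes".
population = [random.random() < 0.3 for _ in range(100_000)]
responses = [randomized_response(p) for p in population]

# Observed "yes" rate = 0.5 * true_rate + 0.5 * 0.5, so invert it:
observed = sum(responses) / len(responses)
estimated_true_rate = (observed - 0.25) / 0.5
print(f"observed={observed:.3f}, estimated true rate={estimated_true_rate:.3f}")
```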
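And here is a minimal sketch of the additive secret-sharing idea behind secure multi-party compute: each value is split into random-looking shares, each party only ever sees one share of each input, yet combining the per-party results reconstructs the answer. The modulus and the function (a simple sum) are illustrative assumptions; real protocols need much more machinery, especially for anything nonlinear.

```python
import random

MOD = 2**61 - 1  # arbitrary large modulus for the share arithmetic

def share(value, n_parties=3):
    """Split a value into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Two organizations secret-share their private inputs A and B.
A, B = 42, 1_000
a_shares, b_shares = share(A), share(B)

# Each party computes the same linear function F on its own shares only.
partial_results = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

# Combining the partial results reveals F(A, B) = A + B, but no single
# party ever saw A or B in the clear.
print(reconstruct(partial_results))  # 1042
```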
This linearity requirement does limit the techniques you can use, but secure multi-party compute is still powerful: when your problem fits this space, it is a tool in your box that you can reach for. Moving on to some of the infrastructure work, there is an environment called a trusted execution environment. If you look at traditional containerization techniques, you have virtual machines and the Linux containers that have come up, and all of them are centered around protecting the host operating system from the guest. Guest one would, in most scenarios, have limited to no capability to influence guest two or the host. A trusted execution environment protects in the opposite direction: it protects the guest from the host. Effectively, trusted execution environments are containers that protect the guest from snooping by the host. The way this works is that there is new hardware in the latest generations of chips — Intel, AMD, ARM, and so on — that can encrypt the memory of a process. You're able to create what's called a secure enclave, deploy software into it, and keep it separate from the host, because the host and each of the guests are encrypted with different keys. The performance impact is minimized because the encryption and decryption path is hardware accelerated, and the keys are typically stored in the processor itself or in a trusted chip alongside the processor. What this allows, assuming you can attest that the guest you're deploying to is in fact running in a trusted execution environment, is that you can ship a sensitive workload to a cloud environment and have some protection against the host inspecting what's inside. There is another technique that's coming around as well. If you look at how systems are defended today, the security model tends to be something called perimeter defense. The idea, if you look at the top graphic, is that you have a trusted network with a workload inside it, and this trusted workload needs to communicate with a second workload in a different network. What typically happens is that a secure connection is established between the two networks, defended by putting a firewall between them, or there may be a VPN that allows one network to communicate with the other. One of the problems is that if an attacker enters the trusted network, they're typically able to access systems within that trusted network with very few limitations. We see these types of attacks quite often, and there have been some very famous ones where the operator has a web service exposed to the internet, the attacker breaks into that web service, and because the web service straddles both the internal and external networks, they can then scan the internal network, find the databases that exist within it, and start extracting information. 
I won't give names here, but some very high-profile attacks fitting this pattern have led to very significant breaches throughout the industry, both within healthcare and outside it. What zero trust does is offer a different way of thinking about security, where the idea is to shrink the perimeter to the smallest thing possible for a given set of workloads. Rather than saying there are two trusted networks, we say: I have one workload that needs to talk to another workload, so let's limit the communication to exactly the workloads involved, regardless of which network they're in. The first company to implement this at scale was Google; if I recall properly, after they were attacked by an advanced persistent threat, they decided to move to this approach. The way it tends to work is that every workload receives some form of cryptographic identity. You can think of it like the certificate a web server presents when you visit your bank — the same type of cryptographic identity, but assigned to an internal workload. When two workloads communicate, they have to prove to each other who they are by showing their certificates before a secure connection can be established. Once that secure connection exists, we can control the communication between them using declarative policy. Declarative policy basically describes which workloads can talk to which other workloads and which messages they can send across the wire, as opposed to a more imperative approach that says these IP addresses can talk to those IP addresses over this port. The imperative approach is how people do it today, but it becomes difficult to manage at scale when you're dealing only with IPs, because the relationship between an IP address and an identity is implicit rather than explicit. Decoupling identity from the IP allows us to work in edge and cloud native environments, things like Kubernetes, where workloads spin up and spin down quickly, the IPs they receive change on a regular basis, and those IPs get rotated among different workloads. It lets you base identity on what the workload actually is rather than on an underlying detail. Part of the reason this matters for preserving privacy is that there is a whole chain of things that need to hold. When you're building out a model, you need to know: is the place I'm grabbing the data from a trusted environment? Am I applying the right kind of privacy to it — am I adding things like differential privacy? What kind of communication can I have if it's, say, a federated learning setup? How do I know that the remote entity I'm talking to is one I trust? With these kinds of systems you can't focus on just one layer; you have to cover all of the layers, from the hardware and what's running on it all the way up to the actual processes and what's running on top of them. 
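Coming back to the workload-identity idea for a moment, here is a hedged sketch of one workload opening a mutually authenticated TLS connection using Python's standard ssl module. The certificate file names and the internal hostname are hypothetical, and in a SPIFFE/SPIRE style deployment the certificates would be issued and rotated automatically rather than read from static files.

```python
import socket
import ssl

# Trust bundle for verifying the peer, plus this workload's own identity.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="trust_bundle.pem")
ctx.load_cert_chain(certfile="workload_cert.pem", keyfile="workload_key.pem")

# Both sides present certificates; the connection fails if either identity
# cannot be verified, regardless of which network the workloads sit in.
with socket.create_connection(("reports-db.internal", 8443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="reports-db.internal") as tls:
        tls.sendall(b"hello from an identified workload\n")
```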
All of this means you have to have cooperation from your data modelers, from your infrastructure team — the people building out the pipelines — and from the vendors selling you the hardware this stuff runs on. In terms of privacy, we've applied these systems to several of our products, but I want to focus specifically on the example at the bottom left, Passport. Passport is a secure application with an employer-facing dashboard that is designed to help teams get back to a shared physical space while simultaneously preserving their privacy. The way it works is that there is a series of questions that are sensitive but necessary in order to protect the population. These questions get pushed to the phone, and the answers are evaluated on the phone. We never send the result of any given question back to the central location; the sensitive information stays on the phone. If the person has answered all the questions in a way that indicates they are a safe candidate to come back in — or rather low risk, I should say, instead of safe — then the phone reports back only that the answers indicate the person is low risk. This gives us just enough information to report back to the employer and approve the person, while not giving the employer any detailed information about the individual answers. It's about sharing the minimum quantity of data back; the sensitive information itself is never transmitted. The end result is a cryptographically signed QR code on the phone that can be shown and scanned, which effectively says: yes, this person answered this set of questions, and we attest that they are low risk. On the backend we also make use of a zero trust environment, and each of the connections in the internal system is based on SPIFFE and SPIRE. SPIFFE is a CNCF project that specifies how you give out these identities and how you rotate them, and SPIRE is the reference implementation of the SPIFFE specification. Both are CNCF projects — the same organization that manages Kubernetes — so they're all sibling projects. SPIRE does the actual work of handing out identities to workloads and verifying information about them: what images they're running, what systems they're running on. Open Policy Agent is then a system in which you can write declarative policy that is both human readable and machine readable. You can say: this application is allowed to talk to this database, I want them to prove who they are using the identities received from SPIRE, and they are allowed to send these messages. You specify the interactions in Open Policy Agent, and Open Policy Agent reads the input from each request. 
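To make that a bit more concrete, here is a hedged sketch of how a workload might ask a locally running Open Policy Agent for a decision over its REST data API. The policy package name, the input fields, and the SPIFFE identity string are assumptions for illustration only.

```python
import json
import urllib.request

# Ask a locally running OPA whether this request should be allowed.
# The "authz" policy package and the input fields are hypothetical.
query = {
    "input": {
        "source_identity": "spiffe://example.org/passport-api",
        "destination": "reports-db",
        "action": "read",
    }
}
req = urllib.request.Request(
    "http://localhost:8181/v1/data/authz/allow",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    decision = json.load(resp)
print(decision)  # e.g. {"result": true} if the policy allows the call
```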
Open Policy Agent then gives you back a decision — yes, allow this, or no, don't allow this, and here's why — which gives you explainability. Finally, the information that is necessary to report back to the employer is sent to a metrics and logging infrastructure, where the employer can look at a dashboard. So as an example, Passport tries to reduce the total amount of information sent, and it separates out components: the policy is separate from the application itself and is applied uniformly as something you can control, with SPIRE giving out the identities. The application cannot ask for a specific identity; it gets assigned an identity based on its properties. With that, as a recap: we want to make sure we collect less data overall. When you're building an AI system — or any other system that collects sensitive data; it doesn't have to be ML or AI, these techniques also work with other approaches — collect less data overall and try to collect less sensitive data if you can. Exercise good processes for the data that is collected. Part of this is not just about the technology; it's also about your internal processes. If you have good discipline in your processes, that discipline will be reflected in your technology and in your choices, and it will help you identify clean lines of ownership and clean contracts. Applying good software engineering techniques will help you on this path. In the same way that you have your other features, consider privacy a feature. Don't treat it as a secondary bolt-on thing you add later; it is a feature in and of itself, and it cuts across other features, so design for it and prioritize it in your architecture. And importantly, do not assume that AI models or datasets derived from sensitive data are inherently privacy preserving. One really famous example is the Netflix Prize. A dataset was released with the idea of providing enough information about what movies users had watched, and how they rated them, to build better predictions. The dataset was de-identified, and by itself it was pretty innocuous, but when you paired it with information from Twitter, from Facebook, from other social media platforms, it turned out there was enough information in there to start personally identifying people and the movies they watched. A person might say, I watched this movie tonight, or this is my favorite movie and here's how I rated it, and all of that can be paired with other data. So if a model that is supposedly private is extracted and leaked, assume that people are going to try to pair it up with other datasets to chip away at the privacy. 
The differential privacy we discussed earlier helps a lot in this space, because the plausible deniability built into it makes it very difficult to work out whether some spike in your data is real or not; it reduces that privacy leak through that property. Also, think about privacy across your whole chain. You have the models, you have the pipelines you build, and you have the infrastructure, which people tend to separate into three parts: compute, transport (the network), and storage. All of them have to be designed to work with each other. They're not isolated; they're all part of one solution, one chain. Make sure things are designed so that they interact cleanly and can help defend each other. For example, you may have secure storage and secure transport, but if you have a bug in your compute or in the actual hardware, maybe some information can be exfiltrated on the side — things like Heartbleed, or similar techniques that may come out in the future. So how do you defend the compute? Maybe you only run processes from a specific user on a specific processor that has been dedicated to them for a specific period of time, and you don't co-mingle certain types of processes. Think about these kinds of problems. If you're working on data that's not sensitive, it doesn't matter as much, but if you're working on very sensitive data, start asking these kinds of questions: what can go wrong in my compute? What can go wrong in my model? How can they defend each other and cover each other's weaknesses? And finally, I strongly recommend that you engage with the open source communities who are focusing on privacy in this area. There are certainly multiple organizations within the machine learning and AI space starting to focus on this, and they are the tip of the spear. I'm also hoping that the Linux Foundation, IEEE, and other similar groups start to invest heavily not just on the security side — there is a new security organization within the Linux Foundation — but also on privacy, because security and privacy, even though they're related, are not the same thing. Security is very often focused on who is doing what: how do I make sure only authorized users get access, how do I authenticate and authorize a user and defend things. Privacy is about how I avoid learning or sharing sensitive information. The two are tightly coupled and have implications for each other, but they are separate concerns. With that, I want to thank everyone for joining and listening in. If you have any questions, feel free to ask here in the webinar while it's on. You can also reach out to me — my name is here on the CNCF Slack, and if you're not on the CNCF Slack, there's a link there on how to gain access. For information about doc.ai and its products and what we're doing, or if you want to join us in building some of this, you can send an email to info@doc.ai and we'll be happy to answer any questions. Thank you very much. Okay, thanks everyone for joining us. 
We're going to go ahead and have Frederick answer a few questions here while I share my screen. In terms of questions, let me go ahead and read some of them. One question is: what do you recommend to assure compliance of AI models with privacy regulations like the GDPR in Europe? GDPR is a very interesting scenario. I'm not an expert in GDPR, so I want to be a little careful — please take my answer with a grain of salt and get expert advice in this area. My understanding, from the privacy side, is that if you have data that originates from a user, you have to be able to track it and to delete that user's information in specific ways, and there are rules on who can perform actions as a processor of that data. In some ways there are similarities with how HIPAA tends to work within the United States. With HIPAA, if you want to share sensitive information to have some processing done, or you want someone to work on sensitive data in a specific way, you generally have to get them under what's called a BAA — I believe it stands for Business Associate Agreement. This means you can audit the organization (and may have to, in some scenarios), you have certain requirements on how the data is passed, and a legal framework gets established between the companies. I think that with GDPR, if you want to play it safely, it's not just about the technology, it's also about the people. Establish those same kinds of things: if I have sensitive data and I'm going to provide it to you so you can provide a service, set up the legal framework so you can ensure they protect the data as well, and so that if they breach the agreement they take responsibility for it — they don't just palm it back off on you and say, oh, we were just the processor, when the techniques they used were not particularly good. I hope that answers the question properly. Another question: have we experimented with homomorphic encryption? That's a fantastic question. I have experimented with homomorphic encryption. One of the things that bothers me a little about it is that when you start to do things at scale, it tends to have significant slowdowns, and as far as I know this is not really a solved problem. There are some excellent use cases for homomorphic encryption when you have smaller quantities of data, and I would recommend you make use of it when the problem fits. But when you start training on very large quantities of data, homomorphic encryption starts to have some problems, and my hope is that in the future we have advances in the industry that make this a more tractable technology. That being said, there are certainly areas where it can work out, so if you experiment with it and see that it fits your problem, then by all means use it. 
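To give a flavor of the kind of small-scale experiment I mean, here is a minimal sketch assuming the python-paillier (phe) package; the numbers are made up, and the point is only that an untrusted party can add and scale ciphertexts it cannot read.

```python
from phe import paillier  # python-paillier package (assumed available)

public_key, private_key = paillier.generate_paillier_keypair()

# Encrypt two sensitive values; the ciphertexts can be handed to an
# untrusted party for computation.
enc_a = public_key.encrypt(12)
enc_b = public_key.encrypt(30)

# The untrusted party can add ciphertexts and scale them by plaintext
# constants without ever seeing the underlying values.
enc_result = enc_a + enc_b * 2

# Only the key owner can decrypt the result: 12 + 30 * 2 = 72.
print(private_key.decrypt(enc_result))
```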
There's another question, about secure multi-party compute: why is the function F not labeled F1, F2, and F3? Thank you for asking that — it's a fantastic question. We want to run the exact same function across all three datasets. We're not varying the function; we're varying the data that is sent to the function across the three organizations. And there's actually a really important point here, because most people focus on the sensitivity of the data — but what if the algorithm itself is sensitive and can give away information about what you're trying to do? In that scenario, secure multi-party compute may not be the right choice for you, because you're effectively giving your algorithm to two or three other organizations. There you probably want to focus on a trusted execution environment instead, where you can send a workload to a secure enclave, bootstrap it from there, pull your algorithm in using your PKI — your public key infrastructure — and then run it inside that trusted environment. That would allow you to vary the function itself. Thank you for that question. In terms of the slides: yes, we can post the slides. The Linux Foundation has access to them, so we'll make sure they get posted. Let's see, there is a question about Passport. Melissa or Marina, maybe you want to answer this particular one. Basically, the question is about what information the employer would be able to gather from the employee's responses — would it be anonymous, and what gets sent back? My recommendation is to send an email to info@doc.ai to get an accurate response, because I'm not able to properly answer that specific question. Thank you for posting it. With that, I think that answers all of the questions. Other than that, I don't know when the webinar recording will be posted; I'll work with the Linux Foundation to get that out, and we'll put a message on our Twitter when it's posted as well. I can jump in here, Frederick — yes, the slides and the presentation will be posted to the Linux Foundation YouTube page, and you can look for that in the next week. Okay, and I'm actually going to switch the host over to Marina; she's going to answer the Passport question. Oh, as I was just saying, that question would probably best be answered by email; we can follow up in a bit more detail on Passport specifically. The email to contact us at is info@doc.ai. Great. Are there any other questions? Okay. If we have no more questions, we will close out for today. Last call for questions. Okay. Thanks, everybody, for joining us, and we look forward to seeing you next time. Thank you.