So we can start. Welcome everyone to the second seminar of this quarter. Our speaker today is Dr. Malini Bandaru from VMware, and she's going to talk about Internet of Things devices deployed in a smart grid. Before we introduce the speaker and start the seminar, there are some rules I'd like to go over. Everyone will be muted upon entry to reduce background noise, and there are three features you can use during the presentation. The first is the chat feature: if you have any technical difficulties, you can use it to communicate with us. In the past we have had people unable to hear the speaker or see the slides, so please use it for that. The second feature is the raise-hand feature: if you need a quick clarification — for example, a definition you would like the speaker to clarify — you can use it. The third is the Q&A feature: if you have any in-depth questions, please type them in the Q&A section, and they will be addressed at the end of the presentation. I also want to remind you of the schedule for this quarter: our next seminar is next week, next Thursday, and the speaker is Burak Ozpineci from Oak Ridge National Lab. He'll be talking about charging of electric vehicles, if you're interested in that one. Now I'll let Professor Ram Rajagopal introduce our speaker. Hi everyone. Welcome to the second smart grid seminar. It's a pleasure to introduce Dr. Malini Bandaru. She leads the open source IoT and edge efforts at VMware. She has had a long career: she has her PhD in AI and machine learning and has worked at companies such as Intel, Verizon, and Nuance, on a range of things from autonomous vehicles to open source cloud software, and also a lot of power and performance management for processors. 
Besides her work at VMware, she has been engaging with Stanford in a collaboration around IoT that has been very exciting. So I'm very happy to have her presenting today, and I'm eager to listen to her talk. Thank you so much, Malini. You can share your slides. Okay, thank you everyone. Let me get to my slides. There we go. Okay, can you see my slides? Yes, we can. Thank you so much everyone for having me here today; I'm super excited to be here. Malini, before you start, the bar from your Mac is appearing. Okay, how do we get rid of that? Did the bar go yet? Not for me. Let me flip it in one second. Can you see my — oh wait, you're seeing the speaker view then. Hang on. Sorry guys, technical difficulties. Swap. Slide show. Perfect. Everything good now? Yeah. Okay, cool. So thank you everyone for having me here today. I'm super excited to be here, though I do miss doing this in person. Before we get started, I just want to tell you who our collaborators on this project are. We have Diana and Svetomir from our Bulgaria team, and Nikola, also from here in the US, the Palo Alto office. We've also worked with folks from Camus Energy: James, Cody, and Armando. We have a capstone project, and we're working with them on that. With that, we don't need this slide — Ram has already introduced me. Just a few words about VMware. It is a company that believes in sustainability and in being a force for good. It has joined an effort to plant many trees to reduce carbon dioxide in the atmosphere, and it's also working towards having a microgrid, which is why I'm talking to you today. The picture at the very bottom here shows you two buildings — in particular, one over here that has solar panels on top, and there's another one. We have some EV charge stations, and that's how this whole microgrid effort got launched. 
We're also proud of our virtualization software, which helps you consolidate multiple servers on a single physical server, reduce your energy footprint, and take advantage of the latest processor performance improvements. That's pretty much how I got started on my whole cloud and open source journey. So what are we going to talk about today? Renewables and the duck curve, IoT and the smart grid, and the VMware smart grid POC. It's still a work in progress, far from complete, and open source — I'm from the Open Source Technology Center, so you'll see a lot of open source flavor here. Renewables are definitely on the rise: you see more and more windmills, and especially if you're in California, more solar panels — we're enriched and endowed with a lot of sunshine, so we want to make use of it. But those are distributed energy sources, and that brings in complexities: they're weather dependent and time-of-day dependent. And that brings us to the duck curve. You might have seen this graph before, multiple times and for different days of the year. Essentially, what it captures is that as the sun comes up, your solar panels collect energy, or your windmills collect wind power — but the time when you have that energy might not be the time when you want to use it. At the peak solar time of day, at the bottom of this curve here, you don't have too much usage. So you have to do something about that; otherwise you're still going to depend on the grid to meet your needs later at night, as demand peaks and people come home and use their ovens, dishwashers, television sets, and what have you. So what are we really looking for? At the peak time of day, when you have all this energy coming in from your solar panels, you'd like to store it and use it at night, or at some other time when you need it and it isn't coming in plentifully. 
Another question is: does all that workload you run at night really have to run at night, or can you shift it? So we're really looking at two pieces of this puzzle: where and how can you store energy, and how can you shift your load. Addressing those will help us better use our natural, renewable resources. By doing so, we can reduce the need for nuclear power plants and coal power plants, or at least how much they output. Those are what's still driving the traditional legacy utility companies, and we can't let go of them if we want all the resiliency and availability that we have today. So how can you save and store energy? That's pretty much how I got involved with Ram and Stanford. There's a lot of technology coming up and becoming cheaper with respect to storage. Your traditional storage, such as hydroelectric facilities and dams — you can't have them everywhere, they're expensive, and they're hard on the environment. We have improved battery technology, and something that's really coming up is electric vehicles, which are also mobile batteries. So what about shifting load? Can we make people use their power-hungry devices when power is cheaper? Can we move load to a different time of day when power is cheaper, or keep watching the price of power, and things like that? But what is this all telling you? We can't have a human say, "Oh, now it's cheap, let me run my dishwasher," or "Now it's cheap, or plentiful, let me start charging my car." It would be nice if all of this could be automated. So this is telling us that we need some kind of control mechanism, some software to control all this — and that already brings you along into this IoT journey, the Internet of Things. So with that, what are the typical things we have in any IoT kind of system? 
You want to be constantly sensing and monitoring to see what's coming in, in terms of power and in terms of load usage, and take historical data into consideration for prediction purposes. What happened in the middle of October, or in the middle of summer when people were turning on their air conditioners? Some of it is seasonal and some of it is time of day. So we want to take those predictions in, and if you can give about a 24-hour prediction to your utility company, it helps them for planning purposes. There are all kinds of other issues, but before we get to improving the whole utility system and reducing our carbon footprint, we have to start measuring more. All of that is IoT, and it's very data intensive. So you want to be processing your data closer to its source — not all of it is interesting. Can you do some high-level analytics there? Can you build some models there, and then share them with your utility company, or your neighbors, your neighborhood, and beyond, so that you can do better planning? And with all that, some control. So what's the smart grid? I'm sure all of you have an opinion. The way I look at it is that we still have to have everything we had in the past: we want safety and reliability, it should be economical, and supply should meet demand. And we want the future, which is less carbon, so we have fewer greenhouse gases and therefore less global warming. We want it to be flexible, always available, and resilient — it should be able to pick itself up and manage itself. This also tells us another thing: if we're going to have these solar panels and windmills everywhere, and not just those few hydroelectric plants and nuclear and coal power plants, we're talking about distributed systems, and they're mostly renewable. They'll have to be connected. 
We want to monitor them, manage them, and maybe even sell some excess power. So, wanting to get towards that smarter grid — like any software, like any system, you need a test bed: a little environment you can control without causing damage, where you can play with your algorithms and your sensing. Am I sensing often enough? Am I modeling adequately? Do we have good algorithms in place to decide who should give power to whom, who should charge which car right now, and who needs how much charge for their car? The best kind of test bed for this sort of process is a microgrid, and that's pretty much why VMware decided to be part of this whole smart grid effort and build its own little microgrid. This really comes top-down from our company: our CEO, Pat Gelsinger, really believes in sustainability, and the Office of the CTO is very involved and invested in sustainability. So where are we with respect to this proof of concept? What we really want to have in place is something that monitors and manages our electric vehicle chargers. We have about 178 charge ports. Five of those stations — that's about 10 ports — are ones where we have paid extra for API manageability support. Through those 10 we can do things like see who's charging right now, curb them, uncurb them, and so on. Then something very dear to VMware's heart is our data centers — that's our bread and butter; that's how we got into this whole cloud business, through our virtualization software. And then anybody who has a physical office might want to control it: what temperature have you set for the heating, can I move it one degree up or down? If you're running an air conditioner, you can save a lot of power just by tolerating one extra degree of warmth. So those are the three main things here, and then we have two buildings with photovoltaic cells on top. 
They have 125 kilowatts of capacity. We also have two batteries on campus, one megawatt / one megawatt-hour. What we're looking at is making this an island, being able to control this island and, if necessary, disconnect from the grid in case of some brownout or blackout like our recent fire scenarios — maybe even contribute power back to the grid or to the City of Palo Alto, or, together with Stanford and our campus, help the City of Palo Alto or other surroundings. It's really, really early; I have nothing much to show here except some comments. The first comment is: are we looking at the right things? We mentioned the electric vehicle chargers, the data center, and the buildings, so let's see if we're even focusing on the right things in our microgrid. And the answer is: I think so. Why? Because even though with COVID right now not many people are buying cars and not many people are driving around — we did have a dip in car sales, and a dip in EV sales too — it's anticipated that over the next 30 years this only rises. In fact, in 20 years we're expecting about 60% of all car sales to be electric vehicles. And if an electric vehicle such as the Tesla you see here has a 75 kilowatt-hour capacity, that's a pretty significant battery. A few of these, even just in the City of Palo Alto and its surroundings — that's a lot of cars, so it really adds up. What about data centers — are we looking at a significant problem there? Yes. Why? At a talk we had on the Stanford campus some months back, maybe seven to nine months ago, a Google person came along and said that back in 2015 they used about 5.8 terawatt-hours globally in their data centers. By 2018 this had already nearly doubled, and who knows what it is in 2020. 
And if you look at somewhere like China, their data centers across the country are already using as much power as the capacity generated by the Three Gorges Dam, the largest hydroelectric project in the world. What about buildings? We have global warming, people are getting wealthier, and technology is becoming more affordable. So in just another 20 to 30 years, it's anticipated that the number of air conditioning units alone is going to go from 1.6 to 5.6 billion, contributing about 12.7% of total energy usage across the globe. So we are looking at the right problems. Now that I've told you, yes, we have a microgrid POC in mind, let's look at where we are with the implementation and what we're doing in open source around it. This is the year of COVID-19 — it always makes me nervous to say COVID-19, because who knows what's coming up in '20 or '21 — but yes, we are here in COVID times. We were pretty thrilled when we figured out that we needed to spend a little more money to get the management API for our EV charge stations. Our vendor is called ChargePoint, and we had to pay about $200 per station, so we did that. And what does that management API provide? It gives us the load at a group of stations, at an individual station, and at the level of a single port on a station. We can make as many calls as we want to this API. We can shed load, and the shed can be a percentage — say I'm using 100, I can shed it by 10% — or it can be an absolute maximum: I need you to be within 27 units of power, or whatever. And when you're done with your power crunch, you can say: clear all those settings, free-for-all, go ahead and charge. 
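The shed-load semantics just described can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical wrapper around the ChargePoint management API — the function and parameter names here are illustrative, not the real SOAP operation names:

```python
# Illustrative sketch of shed-load semantics (hypothetical names, not the
# actual ChargePoint SOAP operations).

def allowed_load_kw(current_kw, shed_percent=None, shed_absolute_kw=None):
    """Load cap a station group must stay under after a shed command.

    shed_percent=10 means "reduce current load by 10%"; shed_absolute_kw=27
    means "stay within 27 kW of power"; neither set means the shed has been
    cleared (free-for-all, go ahead and charge).
    """
    if shed_absolute_kw is not None:
        return min(current_kw, shed_absolute_kw)
    if shed_percent is not None:
        return current_kw * (1 - shed_percent / 100.0)
    return current_kw


print(allowed_load_kw(100.0, shed_percent=10))      # 90.0
print(allowed_load_kw(100.0, shed_absolute_kw=27))  # 27
print(allowed_load_kw(100.0))                       # 100.0
```

In the real API, the same cap can be applied at a station group, a single station, or a single port, as described above.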
How were things before COVID struck? Our charge stations were always fully occupied. VMware gives an hour and a half of free charging, so anybody who had an EV would come in and charge, and if they didn't get off the system they would actually be billed for the extra time. The typical vehicle would charge for between an hour and a half and about five hours. Why is this important for us? It tells us how far our drivers — our employees — may be driving, whether they're coming from far or near, or just stopping off saying, hey, it's free, I'm coming into the office anyway, let me just fill up. And what else did we see? In this little data sample I shared — with all the IDs obfuscated — you can tell when a car is getting full: when it's at its maximum charging level and then dropping, you'll see a dip like this, from 5.8 it'll drop to three-something, and then finally it'll go down to zero. That's called trickle charging. So from data like this, even if the driver isn't sharing where they live or how much they typically drive in a week, you can tell when a car is getting full. And over time you can build some analytics and historical data saying: somebody like Malini typically drives maybe only 22 miles a day — I live about 11 miles from campus. That can tell me, in situations such as a crunch: hey, if I give Malini enough charge to get home, she's going to be pretty happy, and that way I can satisfy my whole employee base even under limited conditions. Or, if we have a crunch and I need to run the air conditioners, maybe Malini can sell me power from her car. 
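The "car is getting full" pattern described above — full-rate charging, a drop into trickle charging, then zero — is easy to detect from the port's power readings. Here is a toy sketch of that idea (purely illustrative; this is not VMware's actual analytics, and the threshold is an assumption):

```python
# Illustrative sketch: label each power reading of a charging session as
# 'bulk' (full-rate charging), 'trickle' (battery nearly full, draw has
# dropped well below the session peak), or 'done' (zero draw).

def charging_phase(readings_kw, trickle_frac=0.6):
    """trickle_frac is an assumed cutoff: readings below this fraction of
    the session's peak draw are treated as trickle charging."""
    peak = max(readings_kw)
    labels = []
    for kw in readings_kw:
        if kw == 0:
            labels.append("done")
        elif kw < trickle_frac * peak:
            labels.append("trickle")
        else:
            labels.append("bulk")
    return labels


# A session like the one described: steady around 5.8 kW, a dip to
# three-something, then down to zero.
session = [5.8, 5.8, 5.7, 3.1, 0.5, 0.0]
print(charging_phase(session))
# ['bulk', 'bulk', 'bulk', 'trickle', 'trickle', 'done']
```

The first 'trickle' label marks roughly when the car got full, which is the signal the analytics above would accumulate per driver over time.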
I don't have an electric vehicle just yet, but maybe I could sell back to the grid, or back to our campus, and be self-reliant there under a power crunch, as long as I have enough charge left to get home. That's the sort of thing we wanted to get to. But then March 16 came, we all had to go home, and our EV charging facilities have been pretty much idle. So that brings us to Project Kini. When we first got that API in place, we just did curl commands against the API and got the values from a SOAP call, and that worked. The next thing you want to do is do it programmatically — and remember, COVID hadn't happened yet, so we had all these visions of our microgrid. So we started building out a Python library, basically a client to this ChargePoint SOAP API server, and we could control the vehicles. With COVID, we decided the next thing we needed was a simulator, because nothing was happening — and we had also been told, please don't muck with the cars too much; maybe you can turn the power off for five or ten minutes and nobody's going to miss it too much. So what does the simulator do? It's a Go simulator that exposes the same SOAP API as the ChargePoint API. It lets you say, I want five stations versus 10 stations versus 50 stations, so you can mimic different installations, whether it's an office or a mall or something. We can also say what the location is, so you can mimic different vehicle behavior: if it's an office, with 1.5 hours of free charging, we can put all that in our model; or we can say it's a home, so the timing is from 6 pm to 8 am. 
The vehicle will be there overnight, but that doesn't mean you have to charge it instantly, especially during peak power; and if this is the standard behavior, you can say when it connects and disconnects, and we put some randomization on top — somebody goes home early, somebody leaves early or comes late, that type of thing. Being the open source team, we decided to do this in the open. The vision we have for Kini — this is a project and work we did with Camus Energy, a startup also local to the area — is that we support the ChargePoint API today, but there are other EV charging providers: Tesla, Nissan, maybe the one at Stanford. If we can wrap all these APIs behind an abstraction layer, then it doesn't matter who the provider is: we know how to talk to the uniform interface, and we thought that would add value to the community. Right now we only support ChargePoint; we haven't gotten all the other people involved and interested, because COVID happened too. What else can we say about this? Camus Energy has since gotten a customer, and they were very happy to use the software — they demoed it, and I think it's going to be useful in a commercial environment now because of that. What else? Our team — as I mentioned earlier, Diana, Svetomir, and myself — has been involved with an edge IoT framework called EdgeX Foundry. The essence of this project is that it provides you southbound connectivity across multiple protocols, like Zigbee, Bluetooth, LoRaWAN, etc. Right now this ChargePoint work just needed IP connectivity — it was making SOAP calls, no big deal — but down the road, if we have different kinds of devices, as we look at buildings and wind turbines and so on, this sort of system will be valuable for communicating with different kinds of end devices. The project is about four years old. 
We have made contributions to it on the security front — I was in fact the Security Working Group co-chair for a year. It has enough security built in to check who's trying to come in: there's a proxy server with a secure firewall there. We have best practices in place for cloud-native development, so your containers will not run as root, and things like that. It has a rules engine, so you can collect data, do some local analytics, and if you need to send an alarm up to a higher-level monitoring station, you can. Those are the benefits of EdgeX Foundry, and it was therefore one of our obvious choices for our microgrid project. I've provided some details here: it's open source under the Apache 2.0 license, hardware agnostic, and operating system agnostic. So please feel free to try it, ask questions, or contribute. That's one place we have invested. So the vision we had in that earlier picture, brought down to using EdgeX Foundry for this connectivity, the local analytics, and making some decisions and control — this is how we view it. We might have input coming from a weather sensor; a data center, because that's a big power consumer; a charge facility; maybe some grid input saying, hey, I'm overloaded right now; solar panels; and batteries. And it doesn't have to be one edge node that takes everything from a campus — if you have a lot of them, you can have a hierarchy of these, and then across the town you can have multiple of these, one per campus, one for the hospital, one for whatever, and then have town-level processing and control. So this is our vision, and EdgeX does only the local piece. But what else would you need for a real IoT application? You need to know that all your edge nodes are functioning. So you want to monitor and manage those edge nodes, and push application software updates, security patches, etc. 
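The hierarchy just described — edge nodes doing local analytics per campus, with a town-level node working only on their summaries — can be sketched as follows. This is a toy illustration with assumed names, not part of EdgeX Foundry itself:

```python
# Toy sketch of hierarchical edge processing (illustrative names only):
# each campus edge node reduces its raw readings to a summary, and the
# town-level node aggregates summaries rather than raw sensor data.

def campus_summary(name, readings_kw):
    """Local analytics at one edge node: reduce raw readings to a summary."""
    return {
        "campus": name,
        "peak_kw": max(readings_kw),
        "avg_kw": sum(readings_kw) / len(readings_kw),
    }

def town_rollup(summaries):
    """Town-level processing sees only the compact summaries."""
    return {
        "total_peak_kw": sum(s["peak_kw"] for s in summaries),
        "campuses": [s["campus"] for s in summaries],
    }


vmware = campus_summary("vmware", [40.0, 55.0, 30.0])
hospital = campus_summary("hospital", [20.0, 25.0])
print(town_rollup([vmware, hospital]))
# {'total_peak_kw': 80.0, 'campuses': ['vmware', 'hospital']}
```

The point of the design is that most raw data stays near its source, which is exactly the "process close to the data" argument made earlier in the talk.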
So essentially any such IoT application will have two components: a domain component, and a monitoring and management component that is common to all IoT applications. There is an open source project for the monitoring and management too; it's called Open Horizon. We haven't worked much on it yet, but once we get our local piece done, that will be something we look at next. Okay, now let's talk briefly about the data center. We said it has a big energy footprint, so how can we do something there? Are all your jobs super important? Do they have to run right now, when you have a blackout or brownout scenario? Obviously not — just as you don't have to charge your car fully if you can get home; in an emergency scenario, that's plenty. One of the things VMware has been doing for a while is working with virtual machines: we have the ability to see which virtual machine is overloaded or idling. If a bunch of the virtual machines on a host are idling, we have the ability — through software called DPM, Distributed Power Management, and another piece of software called vMotion — to move these virtual machines off a host, if there are only one or two rather than the hundred or whatever you'd expect, onto another host, and then turn that machine off. And why is it important to be able to turn off a machine? Because the power used by a server doesn't go linearly with its level of busyness or idleness; it ramps up very quickly to about 80%. So you really want to knock the workloads off it and then power it off. So there is such an ability. But now let's look at the jobs being submitted to your data center or cloud. We need to know: is this job essential, or is it a best-effort or batch job — you're just running some reports — or is it your website? 
Maybe instead of having hundreds of instances across a load balancer, one or two is okay. It cripples the service a little, but that's okay under these sorts of conditions. So what we're working on with our USF students — Pranay and myself — is in Kubernetes; it's the modern king of cloud-native orchestration, and that's why we chose it. We look at the jobs being submitted and determine what kind of job each one is — we may have to change the API to let users provide this additional insight — and then, under power constraints, take remedial actions like dropping duplicates, migrating workloads from one data center to another, and finally turning off machines. This is still a work in progress; in a month or two we should have something nice to demo. The students have their first demo on Monday, so it has made a lot of progress. What else in this microgrid? As I mentioned — and I'm being very honest — it's still a work in progress. We do have our solar panels in place, we have our batteries in place, but we are still looking at some other pieces. We have a contract that is evolving to transmit battery data to something called an IBIS system, so I still don't have enough of that in place, and I don't yet know how to work with it; it will come online, I think, in mid-December. We have the building load data already available — and something funny: after COVID, the consumption hasn't dropped much, which definitely shows you it's not a smart building. We have people from Vanderbilt University who have analyzed that data and built some machine learning models, using long short-term memory (LSTM) networks, to see how to adapt that building load. They'll be publishing a paper, but it hasn't been integrated into the VMware microgrid — not yet, at least. So what are we thinking in terms of future work? 
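The DPM and vMotion behavior described above — migrate the last few VMs off a mostly idle host, then power that host down — can be sketched in miniature. This is a greatly simplified toy model (the real products use much richer placement and admission logic), and the power numbers are assumptions chosen only to illustrate why an idle server is still expensive:

```python
# Toy sketch of DPM/vMotion-style consolidation (simplified, not the real
# algorithms). The power model reflects the point made in the talk: server
# power does not scale linearly with load, so an idle host still draws a
# large fraction of peak power, and powering it off saves real energy.

def host_power_w(utilization, idle_w=120.0, peak_w=200.0):
    """Assumed power model: even at 0% utilization the host draws idle_w."""
    return idle_w + (peak_w - idle_w) * utilization

def consolidate(hosts, idle_threshold=0.1):
    """hosts: {host_name: [per-VM utilization fractions]}.
    Moves VMs off nearly idle hosts onto the busiest host ("vMotion"),
    then reports which emptied hosts can be powered off ("DPM")."""
    busiest = max(hosts, key=lambda h: sum(hosts[h]))
    powered_off = []
    for name in list(hosts):
        if name != busiest and sum(hosts[name]) < idle_threshold:
            hosts[busiest].extend(hosts[name])  # migrate the stragglers
            hosts[name] = []
            powered_off.append(name)            # now safe to turn off
    return powered_off


hosts = {"h1": [0.40, 0.30], "h2": [0.02, 0.03], "h3": [0.01]}
print(consolidate(hosts))   # ['h2', 'h3']
print(host_power_w(0.0))    # 120.0 -- an idle host still burns power
```

In this model, powering off h2 and h3 avoids roughly two hosts' worth of idle draw, which is the whole motivation for consolidating before powering down.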
Very near term, we want to develop some analytics with this simulated data: who drives how much, and how long are they connected — and why does connection length matter? If everybody in the neighborhood has an electric car and they all want to charge at the same time, you can overload the system. But if you know they're going to be connected for eight hours, you could say: I'll finish neighbor one, neighbor two, neighbor three, and then it'll be Malini's turn. Or we can all be given a lower charge rate and slowly charge overnight. So there are different options, and things like connection time, length of connection, and typical drive distance will all be useful for developing smart algorithms. We want to extend the device discovery piece in EdgeX Foundry. Why is that important? Maybe you don't want to hand-code that the VMware campus has 150 stations and so many ports and so on. The moment you say "get load", all that information is coming through that API call; we can populate it and create structures from it. So we need to do a little more extension on the EdgeX Foundry front for that, and I think it will be useful for more people than us. We also want to start adapting the data center workload to a power signal: if we're told "curb yourself 10%" or "curb yourself to a certain limit", we want to check whether we're within that budget. If we are, there's nothing to do; if we're not, we start dropping workloads and try to fit ourselves into that budget. Those are our near-term interests. The mid-range future work we're looking at is connecting this to those other things I mentioned, like VMware DPM and its extensions. 
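The "fit the workload into a power budget" step just described can be sketched as a simple admission pass. This is a hypothetical illustration (the names are made up; this is not a real Kubernetes API): workloads are tagged essential or best-effort, essential jobs always run, and best-effort jobs are re-admitted only while the curbed budget allows.

```python
# Hypothetical sketch of shedding data center workloads to fit a power
# budget (illustrative names; not the actual VMware/Kubernetes work).

def shed_to_budget(jobs, budget_kw):
    """jobs: list of (name, estimated_kw, essential). Returns (kept, dropped).

    Essential jobs are always kept; best-effort jobs are re-admitted,
    largest estimated draw first, while they still fit under the budget.
    """
    kept = [j for j in jobs if j[2]]                 # essentials always stay
    spare = budget_kw - sum(kw for _, kw, _ in kept)
    dropped = []
    for job in sorted((j for j in jobs if not j[2]), key=lambda j: -j[1]):
        if job[1] <= spare:
            kept.append(job)
            spare -= job[1]
        else:
            dropped.append(job)
    return kept, dropped


jobs = [("web", 3.0, True), ("reports", 4.0, False), ("batch", 2.0, False)]
kept, dropped = shed_to_budget(jobs, budget_kw=6.0)
print([j[0] for j in dropped])  # ['reports']
```

Under a "curb yourself to 6 kW" signal, the essential web service keeps running, the batch job still fits, and only the reporting job is shed — the same priority-aware shedding idea described for the Kubernetes proof of concept.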
And we want to get this microgrid pretty much in place, working and functional, as our collaboration with Stanford progresses. We're looking at something we call trusted DER — meaning we have distributed energy resources and we want to trust them. We want to be able to look at our batteries as one logical entity — not Malini's battery versus somebody else's battery, each with a little more or less charge — but a central point from which you can get it. So it's more reliable and resilient, just like virtual disk storage. Our microgrid, we hope, will be a useful, valuable test tool for all the work we'll be doing with Stanford. Further out, as things become more controlled — EV adoption picks up, COVID goes away or at least is more controlled — we would like to have, in open source, a layer of abstraction so that you can work with vehicles from different vendors and still manage things sensibly. Further down, as these distributed energy resources become more viable and plentiful, and there's enough that you have a tangible presence towards the grid, the grid can say: maybe I can reduce one utility company's coal-fired plants. Then we can have energy sales and use things like blockchain — and even in this space VMware is being very energy conscious. Why? Because blockchain traditionally had a piece that was energy inefficient: determining whether you can attach to the chain. We have a more energy-efficient chaining process, and we'd like to leverage that. So that's pretty much our thinking on future work. We'd love to see more use of these open source projects, and we'd love to see contributions, so I've put up these two projects. You have my email; I can connect you to our team, so if there's any interest, please come and talk to us. And thank you. Hi Malini. 
Thank you very much for your presentation. Now we will open the Q&A. I'll start by reading off a few questions that were submitted in the Q&A to get things going. The very first question, from an anonymous attendee: does Project Kini support simulation of EV charging and discharging under different, varying campus topologies? So the campus topology didn't matter for us; the API lets us get to the whole collection. But you can define a station group, so if you did want something topology-related, you could say, for example, all the charge stations on the hilltop — VMware has a few sites: the top of the hill has a few stations, and there's something at the bottom near the creek. You could do things like that to capture the notion of topology, and once you have a group, you can get the load and curtail the load for that station group, or even for an individual port. Does that answer your question? Yeah, I don't see a follow-up in the chat here. Okay. Since there was no follow-up question, I think it does answer the question. The next questions — there were three — were submitted by Soulmaz Nazar Ali Zadeh. She's asking, first, about time delays in communications: how is that taken into account in IoT? And how is processing time, or processor time, considered in your platform? Very, very good question. IoT is about latency: you want to be close to the data source, close to those sensors and actuators, so you can analyze your data and respond in a timely manner. That said, Soulmaz, it's very important to think about what your actual application is, even when you talk about real time. What is the latency you're really interested in? Is it milliseconds, seconds, hours? With these EV charge facilities, if I give a signal saying curb, or uncurb, or clear the shed, it takes about 30 seconds or more to get a response from the system. And that's not bad. 
So if you're okay with minute-scale latency, that's still great in our scenario. If it were some other kind of application, say an industrial application where you're riveting or soldering something, or even an AR/VR kind of thing, then you're going to look for millisecond response latencies. That also dictates where you put this facility. An important thing in this EV charging case: I'm not really talking to that EV charge point and the power line over there. I'm really talking to their API service, so there are several layers of indirection: the charge station API service is getting its data, and then I'm getting it from that entity. If we really needed to get closer, we would have to go to the actual physical device and have another source of control there.

Then, to summarize her other two questions: she's wondering how IoT can aid in the stability of the grid in terms of voltage and frequency, and even the assessment of stability. This is an area I don't know much about; I'm hoping to learn much more from Ram and Abbas about battery voltage, phase, and so on. I'm just picking it up right now. But one thing we see here is that you may need real-time operating systems and control at a much finer granularity than my one-millisecond type of thing. Again, this is an area I have to learn more about: are we looking at sub-minute stability? It will be problem-based, in how we attack the problem.

Okay, very good. The next question is from Emmanuel Balogun. He's curious about the efficient blockchain methodology that VMware will be using. Can you expound on this?

Absolutely, Emmanuel. Please search, in your browser, for VMware Project Concord blockchain. We have a whole bunch of blogs. That's Project Concord.
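As a toy illustration of the energy argument behind permissioned blockchains (a sketch of the general principle only, not Concord's actual protocol, which as I understand it replaces proof of work with more efficient consensus): the smaller and more trusted the membership, the lower the proof-of-work difficulty you need, and the less compute you burn.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty`
    zero hex digits. Higher difficulty = exponentially more hashing."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A trusted group can run at low difficulty (or skip PoW entirely);
# an open, untrusted network needs high difficulty, hence more energy.
easy = proof_of_work("tx-batch-1", 1)
hard = proof_of_work("tx-batch-1", 3)
```

`hard` always requires at least as many hash attempts as `easy`, since any hash with three leading zeros also has one.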
You can understand how it works, and then explore it if you want to use it, and talk to the engineers. That's one thing about open source: you can think of us as friendly people, as lonely people, as eager people who want to share. Those engineers, on their Slack channel or over email, would be more than happy to talk to you, help you use it, and would love for you to work with it and leverage it.

One of the ways you reduce power consumption in these blockchain systems is through a sense of trust: you let in folks that you believe and trust, and therefore the amount of work you have to do to check that they're doing the right thing, or being honest, can be reduced. The larger your net is and the more unknown people you have, the harder it is to trust, so you have more stringent checks and proof-of-work type mechanisms. But in smaller communities where you trust the entities, your proof of work reduces, so your compute cycles reduce, so you're more power efficient. That's the philosophy behind these systems.

Okay, very good. So Alan Chow has a question: if we want to implement what has been learned in a microgrid setting in a larger-scale power grid setting, what do you see that needs to change for the large-scale power grid?

So Alan, think about it as a hierarchical system. Whatever you do in your little microgrid, think of it as a local thing, and then, say, in the city of Palo Alto you have your Palo Alto VMware campus, maybe the hospital is one campus, the Stanford campus is another. You have to basically work in a distributed fashion and have a hierarchical chain of command. And when you're responding to a signal from the grid, you have to play games: maybe the hospital and its needs are more important if there is a crunch.
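One way to sketch that "hospital first in a crunch" idea (a hedged illustration; all names and numbers are made up): satisfy each node's minimum in priority order, then distribute whatever budget remains, up to each node's maximum. This assumes the budget at least covers the minimums.

```python
def allocate(budget_kw: float, nodes):
    """nodes: list of (name, min_kw, max_kw), ordered by importance.
    Grant every minimum first, then pour the remaining budget into
    maximums in priority order. Assumes budget >= sum of minimums."""
    alloc = {}
    for name, lo, _hi in nodes:
        alloc[name] = lo          # everyone gets their floor
        budget_kw -= lo
    for name, lo, hi in nodes:
        extra = min(hi - lo, max(budget_kw, 0.0))
        alloc[name] += extra      # surplus flows by priority
        budget_kw -= extra
    return alloc

# The hospital gets topped up before the offices see any surplus.
campus = [("hospital", 400.0, 600.0), ("offices", 100.0, 500.0)]
```

With these made-up numbers, `allocate(800.0, campus)` yields `{"hospital": 600.0, "offices": 200.0}`.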
So a device at the lowest leaf level might be your EV charger or your building, but at the next higher level the device is really a composite: it's the VMware campus, or the Stanford campus. You then have notions of minimum and maximum. Say the Stanford campus is in a crunch: the minimum power might be just for the hospital, and the maximum is the whole campus, all the offices and buildings. So we'd have to have a notion of these composite loads in terms of minimum requirement, maximum requirement, and importance, no different from the way we think about workloads in the data center. That allows us to reason and say, hey, in this crunch, how am I distributing power to these different sub-entities? And that's how you get to scale: not by having the top level directly control all the little devices across your whole city top-down, but by fanning out through a hierarchy.

Very, very good. Thank you, Malini. There was that Vanderbilt question. So yes, Vanderbilt also has a DOE grant; they're working on a system called RIAPS. I can find that information and send it; if you send Ram an email, we can get back in touch with you. But they analyzed the building's historical data; it wasn't live, because all that IBIS and Johnson Controls equipment is not yet in place, but we had the historical data, monthly, yearly, and so on.

Is IoT working with a model-free approach, where you receive just data? Ah, so in IoT you can define your models; in EdgeX we have something called a device profile. So you can have one profile for a light bulb, and then you can have multiple instances of it, which might differ just in terms of state, like: is it on, is it off?
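The profile-versus-instance split just described can be sketched with hypothetical Python classes (EdgeX's real device profiles are declarative documents; this only mirrors the concept):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DeviceProfile:
    """Describes a *kind* of device and the state it exposes."""
    name: str
    resources: tuple  # resource names, e.g. ("on", "location")

@dataclass
class Device:
    """A concrete instance; many devices can share one profile."""
    device_id: str
    profile: DeviceProfile
    state: dict = field(default_factory=dict)

bulb = DeviceProfile("light-bulb", ("on", "location"))
hall = Device("bulb-01", bulb, {"on": True, "location": "hallway"})
porch = Device("bulb-02", bulb, {"on": False, "location": "porch"})
# Two instances, one profile: they differ only in state (on/off, location).
```

A composite device like an EV charge facility could likewise be one profile whose instances wrap many ports, so you can address the whole facility without modeling each port.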
Or its location. It's free-format: you define these profiles, and once a profile is defined, a device can be discovered from the data it sends. It says, hey, I'm a light bulb, and then you know how to treat it. We can have a composite device, such as this EV charge facility; you don't need to model each port. You can say, hey, EV charger, I want you to do something smart: I have a power budget, you decide which car to feed. You can have levels of abstraction if you want, or you can go right down to each port on the charge facility.

Eric has a question about smart grid technology standards and regulation: are they being established, and what are the bodies and regulatory agencies?

Very good question. So, Eric's question, like Ram was saying, is what's happening in the land of standards, regulation, and standards bodies. One thing is that standards bodies are good, but they can be slow, and in today's software environment, code comes first: if it works, you modify it; if it doesn't, you iterate; and once it feels like the right thing, the code then becomes the standard. In that context there is the Linux Foundation: it's a neutral body, companies pay money into it and say, hey, I don't own it, we all own it, and that's the whole open source ethos; there are other foundations like it too. Under that umbrella is LF Energy, and they're trying to develop standards, but it's not there yet in the smart grid world; only a few utility companies are involved. So it's really our space if we want to start putting things in, and that's why, when I conceived of this Project Kenny, I didn't even put it under the VMware company umbrella; I said let's put it under the Camus Energy umbrella, because at least it's an energy company.
And it's not our core business, though we're interested in it from a sustainability standpoint. So we can bring the Kennys of the world, projects like that, into LF Energy; we can contribute to that umbrella. And as more and more people start adopting them and feeling the need for them, we'll have code through open source, and that can eventually become a standard. So LF Energy is definitely one of the bodies. For the IoT side, there is the Eclipse Foundation, and the Linux Foundation has LF Edge, because we know you can't just have your IoT devices connect straight to a cloud. It comes back to an earlier question, what's the latency in getting these signals, so you start doing more edge processing closer to the data source.

Okay, I don't see any additional questions here, so maybe I will ask a couple of questions, Malini. Go ahead. Can you comment a little bit on the challenges you see for IoT in terms of security, when you're dealing with grid applications?

Very good question. We definitely want our grid reliable; we don't want people hacking into it. With IoT, security is a bigger deal, because it's not like you have everything in a data center with guards, multiple doors, and multiple locks. If anybody has been to a data center, one interesting point I've heard is that when you go in you're weighed, and when you come out you're weighed. So if you were to take in even a little USB stick drive to introduce a bug, or leave one behind, it would get caught. With IoT we don't have so many protections in place. So one thing that is possible is using a hardware-based security solution on the device, whether it's your Raspberry Pi, your Intel NUC, or a server. These are called TPMs, Trusted Platform Modules. There's also an alternative solution from Arm, and there's an open source project called Parsec.
Parsec abstracts this implementation layer, whether you're using a Trusted Platform Module from Infineon alongside your Intel or AMD chip, or you're using Arm's solution. With that layer abstracted, you can say, hey, please encrypt what I'm going to store, so that if this disk gets stolen, there's no loss; nobody can see it. And part of this hardware security module is identity: you want reliable identity. If I get a message from Ram saying the seminar is cancelled today and I don't show up, it could be very damaging; it's like a denial of service. How do I make sure it's really Ram? So with these Trusted Platform Modules, there are three components. One is an ID that can only be set by the manufacturer; we're talking Intel or Infineon setting that ID. A second-level ID comes from the OEM, the one who put it all together and sold you the box, which could be Dell. The third level is when you take ownership: then you set a key. So let's say today I owned it, and tomorrow I gave it to Ram; then he owns it and he resets everything. At that point you have three keys that are unique to this device, so you know whether that's the device you expect. When I send it out from the factory and say, hey, I'm going to put this in a Starbucks, or somewhere in my electric grid, I have that ID; at that point you know the manufacturer, the OEM, and the owner, in this case Ram. So I have proof of a known identity, and from that point on, no private keys ever leave this hardware security module; everything is encrypted, and to really break in would take so long that by then the device is useless. That's how you can save configuration information, keys, passwords, and so on, and have a legitimate ID; and once you have that ID and those keys, you can do encrypted communication. So that's possible. And another very important thing, in addition to security, is reliability.
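The three-level identity chain just described, manufacturer then OEM then owner, can be mimicked with a toy key-derivation sketch. Real TPMs use hardware-bound asymmetric keys that never leave the chip; everything below, including the secrets, is invented for illustration.

```python
import hashlib

def derive_id(parent_id: str, secret: str) -> str:
    """Derive a child identity from a parent identity plus a secret."""
    return hashlib.sha256(f"{parent_id}:{secret}".encode()).hexdigest()

# Level 1: set by the chip manufacturer (e.g. Infineon).
mfg_id = derive_id("root", "fab-secret")
# Level 2: set by the OEM who built and sold the box (e.g. Dell).
oem_id = derive_id(mfg_id, "oem-secret")
# Level 3: set when the current owner takes ownership.
owner_id = derive_id(oem_id, "owner-secret")

def verify(claimed_id: str) -> bool:
    """Recompute the full chain; a mismatch at any level fails."""
    expected = derive_id(derive_id(derive_id("root", "fab-secret"),
                                   "oem-secret"), "owner-secret")
    return claimed_id == expected
```

Reselling the device corresponds to re-deriving the third level with a new owner secret, which invalidates the old `owner_id`.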
You don't want your edge node to be a singleton. You should think about at least two nodes there, so if one falls apart the others are alive and working before you can roll a truck: your battery is still online, the house still gets power, the hospital still gets power. So one of the things we're thinking is, even though EdgeX Foundry says its whole collection of microservices can run on a single node, if that node dies you're in very deep water. So we think of these things as edge clouds, so that you have resiliency and high availability. It's also good for software updates, coming back to the security topic: say you see a new vulnerability, like Heartbleed or the ILOVEYOU virus. You can put one node in maintenance mode and update it while the others keep running, then move your workloads. Remember how we said we can do vMotion for virtual machines? The same way, containers can be relaunched on another machine. And that's how you get security, reliability, and so on.

Thank you. There is time for maybe one more question. This one asks how IoT provides multiple services, stability, reliability, and resiliency for extreme scenarios in a large-scale system.

So you don't even need to think large scale. You need at least two nodes at minimum: one node goes down, another one is there; if you have three, it's even better. How does that give you reliability? When you're talking about things like cloud applications or virtual-machine applications, look at the project called Kubernetes, a cloud orchestrator of containers. There's a process there, and again, for reliability there will not be one copy of a service; there will be three of them, and there's leader election. These are standard software practices at this point.
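The relaunch-on-failure behavior described here, which an orchestrator like Kubernetes provides at scale, can be sketched as a tiny supervisor; the function name and restart policy are my own invention, not any real orchestrator's API.

```python
import subprocess
import sys
import time

def supervise(cmds, max_restarts=3):
    """Launch one process per argv list; relaunch any process that
    exits, up to max_restarts total relaunches. Returns that count.
    A real orchestrator adds leader election, health checks, etc."""
    procs = [subprocess.Popen(c) for c in cmds]
    restarts = 0
    while restarts < max_restarts:
        for i, p in enumerate(procs):
            if p.poll() is not None:                  # replica exited
                procs[i] = subprocess.Popen(cmds[i])  # relaunch in place
                restarts += 1
                if restarts >= max_restarts:
                    break
        time.sleep(0.05)
    for p in procs:
        p.terminate()
        p.wait()
    return restarts

# Example: supervise one short-lived replica, allowing two relaunches:
# supervise([[sys.executable, "-c", "pass"]], max_restarts=2)
```

The same watchdog loop is what lets an edge cloud survive a node dying before a truck can be rolled out.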
So it watches all the processes that are running, says oops, this one died, and relaunches it. That's how you get resiliency and reliability: watchdog-like monitoring processes watch for these failures. Then, coming back to your question, that gives you stability, because once a service is there, it stays running, and if you have more load, you can launch another instance of it, so you can do some kind of load balancing. These are all the supports we have nowadays in modern cloud applications.

Great. And maybe one more question here. Emmanuel is asking: how are you protecting the microgrid from cyber attacks, how is information masked, and are any redundancies considered?

The redundancies I mentioned are the multiple nodes. A cyber attack is not IoT-specific; you can have a cyber attack on the servers in your data center, so the best practices there you would also use in your microgrid. Everything you do in your data center you can do at your IoT device; the point is always the money, how much you're going to invest in it. If you're doing transmission, like a telco cloud, which is also an edge kind of application, then you're going to put some beefy processing power at that edge, Xeon-server type of hardware. It can even have something called QAT, QuickAssist Technology, so it can do very fast crypto. So it depends what kind of demands you put on it, but otherwise the technology for encryption and so on is pretty standard. Things like denial of service, and checking for that, are pretty standard too.
And then if it's a very high-value device, you can even have machine learning models there watching for anomalies, or watch what packets are coming in and do deep packet inspection. That body of literature and technology is well known and available; it's just the price point. If it's a high-value IoT edge, like a utility edge, you'll do it. But if it's just your home's panels, it depends how much you're willing to spend: a Raspberry Pi at about $70 to $100 might be enough. One battery going offline is maybe not the worst thing, but hey, if it's your home office, you need it online and working, and you'll maybe be ready to spend $1,000. So I think it will be a matter of configuring based on your needs.

Oh yes, and I'm absolutely available to contact. You can search for Malini Bandaru on LinkedIn and connect with me there; just say, hey, Stanford Bits & Watts. And the slides will be available through Canvas. Perfect, and my email is there as well, so that's another mechanism to reach out.

Awesome. Okay, I think there are no more questions, and they were wonderful questions. Thank you. Okay. Thank you very much for your presentation, Malini, and thank you everyone for participating in the webinar today.