Hello, this talk is about electric vehicles, green energy and what happens when mixing them with EdgeX Foundry, an open source platform for IoT and edge computing. Who are we? I'm Diana Atanasova, a software engineer in the Open Source Technology Center at VMware, Bulgaria. I primarily work on an IoT-related project called EdgeX Foundry, which creates a common open platform for IoT edge solutions. Before VMware, I designed and developed a network management system offering the full cycle of network deployment, from planning to managing, monitoring and maintaining services. Later, I worked on a management and orchestration system responsible for zero-touch configuration of uCPE devices and onboarding of virtual network functions. My hobbies include singing in a choir and snowboarding in the winter. My name is Tzvetomir Stoyanov and I am also a software engineer in the Open Source Technology Center at VMware, Bulgaria. I work mainly on the new kernel tracing infrastructure and the user space ecosystem around it, but recently I joined a team that works on various IoT and edge-related open source projects. This is a completely new area for me. Before joining VMware, I was part of a small company designing and developing various network devices, routers and switches. I have a rich background, more than 20 years in system and low-level network software design and implementation. In my free time, I like to practice all kinds of outdoor activities: skiing during the winter, swimming and diving in the summer, trekking and climbing in all seasons, or just camping with my friends. What will we talk about? First, we will say a few words about green energy sources, electric vehicles and the smart power grid. What are the problems with the legacy power infrastructure and green energy? What is a smart power grid, and how can we leverage it to connect various power sources and consumers in an efficient way?
After that, we are going to talk about the Kinney project. This is an open source project which aims to build a proof-of-concept solution for balancing a power microgrid. As our solution is based on EdgeX Foundry, we will do a short introduction to EdgeX Foundry. This is an open source framework for building IoT and edge applications. Green energy has been growing rapidly in recent years and it is likely to continue growing even more dramatically in the coming years. Just 10 years ago, wind turbines and solar panels were an exotic sight. They were very expensive and inefficient. Today, they are everywhere. Their price has dropped significantly and their efficiency has increased. Now they are part of the landscape. On top of huge warehouses or in isolated solar and wind farms, they are cheap enough to be found even on the rooftops of small village houses. Just a few years ago, electric vehicles weren't taken seriously as a solution for transport decarbonisation, at least not for the near future. They were too expensive and had a limited driving range. But now they are in mass production and the customer demand exceeds the factories' capacity. It looks like this growth will continue rapidly. There are a few important factors driving this increase. Rapid improvements in renewable technology have led to increased efficiency. Mass production has led to a huge drop in prices. Today, you can buy one kilowatt-hour of battery capacity for $100; 10 years ago, the same capacity cost more than $1,000. Friendly government policies and regulations are another factor. Today, almost all countries have targets for renewable energy adoption. There are also increased corporate commitments: more than half of the Fortune 500 companies have committed to a renewable energy strategy. But are we ready for this rapid growth? Can our old legacy power infrastructure handle this new type of energy? The legacy power grid was designed more than 100 years ago as a centralised, unidirectional grid.
Over the years, it became more complex, large and interconnected, but the idea remains the same: a unidirectional grid from a few huge power producers to a lot of small power consumers. Usually, the energy is produced by coal or nuclear power plants that are designed to run constantly, 24 hours a day, seven days a week, with a constant output. At the other end of the wire are factories, domestic houses or offices, which are supposed only to consume energy. That's why the grid is designed that way: electricity flows constantly and in one direction only. There is a solid backbone of high voltage power lines designed to transfer electricity over long distances with minimal losses, because usually these huge power plants are located far from the consumers. This grid has been operating without any significant design changes until now. But with the rise of renewable energy, this design is no longer adequate. The grid has to evolve into something more flexible and smarter. What would the requirements of a smart power grid be? What should such a grid do? It should track the energy production in real time. While traditional power producers are predictable, this is not the case with renewables: solar and wind energy production is volatile. The smart grid must track how it changes every minute and make decisions on how to react in real time. It should also track the energy consumption in real time and make fast decisions on how to balance the grid. It should monitor weather conditions, sun and wind, and predict how the renewable energy production may change in the near future. And it should predict the load: how the load could change based on the time of the day, the day of the week, the season, the current temperature and so on. Machine learning and artificial intelligence algorithms are a perfect fit for solving these kinds of problems. So, how will this modern smart electric grid look? It would be a decentralized mesh. It will connect a huge number of nodes. It is bi-directional.
Every node could produce and consume energy at the same time. More likely, it will be interconnected meshes of almost self-sufficient microgrids, which will only occasionally need external energy. The transition from the legacy grid to this microgrid concept cannot happen in one day. It will be an evolutionary process. With the advancement of technology and the dropping of prices, building such microgrids will become more popular. The main energy will still come from the huge legacy power plants and will reach these microgrids through long-distance, high voltage power lines, at least at the beginning of this transformation. But when the microgrids and the interconnections between them reach a critical mass, the demand for energy from these huge power plants will drop, and one day they may disappear. Let's see how the power grid load has changed over the last decade. 10 years ago, power was produced mainly by coal, nuclear and gas power stations. All of these produce a constant amount of energy; it is neither easy nor efficient to stop and start nuclear or coal power stations. But is the power consumption constant during the day? There is a peak in the morning, usually around 6-7 a.m., when people wake up and start the day. They take a shower, prepare breakfast, turn on the TV. Then later, before noon, there is a small drop. You can see how the curve looked back in 2012. At the end of the day, there is a steep jump, usually around 6 p.m., for the evening hours. This jump is a big challenge for the power companies and power grid managers: they have to balance between constant power production and volatile power consumption. But look what happened with that curve last year, how it has changed since 2012. With the boost of green energy, the drop during midday became significant. From the grid manager's point of view, it looks like a drop in demand, but the power consumption is the same: the users consume solar energy instead of the traditional one.
Every year, with the increased installation of solar panels, the curve changes. The shape looks like a duck; that's why it is known as the duck curve. The drop in demand causes problems with the current power infrastructure. The evening ramp becomes steeper: when the sun goes down, the production of the solar panels stops, but exactly then, in the evening hours, the energy demand rises. So the traditional power plants have to rapidly increase their production, but this is hard with the legacy power infrastructure. The coal and nuclear power plants are most efficient when running at a constant load. If such a power plant has to be turned off and on each day, it will ruin its economics. That's why these kinds of power stations usually have contracts to produce a constant amount of energy. This brings the problem to the operators of the power grid: they have to solve much bigger imbalances. They cannot temporarily cut off a nuclear or coal plant, so the target is the renewables. If there is too much solar energy, they have to switch off some solar panels in order not to overload or even damage the grid. What happens is that this excess of solar energy is wasted. This oversupply is one of the challenges that must be resolved in order for solar energy to move forward. If we want that kind of energy to power almost everything, we have to find a way to flatten the duck, to shrink its belly, by shifting energy supply and demand. There are two general approaches for doing that. The first one is to store the oversupplied solar energy during the day and to use it after sunset. The second one is to modify the users' energy usage: encourage the consumers to use electric power when it is plentiful and cheap. Both approaches can be combined to achieve the goal. The most popular method for storing spare energy is pumped hydroelectric energy stations. They have been used for years to store the excess electricity from renewables and nuclear plants and to balance the power grid.
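The demand-shifting idea can be illustrated with a tiny numeric sketch. All the hourly values below are made up for illustration; the point is only that moving a block of flexible load (such as EV charging) from the evening peak into the midday solar "belly" flattens the net load the traditional plants must follow:

```python
# Illustrative hourly values, in megawatts (not real grid data).
demand = [70, 65, 80, 100, 90, 120]   # demand at 6:00, 9:00, 12:00, 15:00, 18:00, 21:00
solar  = [ 5, 30, 60,  40,  5,   0]   # solar production for the same hours

def net_load(demand, solar):
    """Load the traditional plants must cover: demand minus solar."""
    return [d - s for d, s in zip(demand, solar)]

def shift_flexible_load(demand, solar, flexible_mw):
    """Move a block of flexible demand from the peak hour of net load
    to the hour with the deepest midday belly."""
    load = net_load(demand, solar)
    peak = load.index(max(load))    # evening peak to shrink
    belly = load.index(min(load))   # midday belly to fill
    shifted = load[:]
    shifted[peak] -= flexible_mw
    shifted[belly] += flexible_mw
    return shifted

before = net_load(demand, solar)
after = shift_flexible_load(demand, solar, flexible_mw=20)
print(max(before) - min(before))  # ramp before shifting → 100
print(max(after) - min(after))    # ramp after shifting  → 65
```

The total energy consumed is unchanged; only its timing moves, which is exactly what both the storage and the behavior-change approach try to achieve.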
The idea is simple: during periods of energy excess, use that power to pump water from a lower to a higher point. When extra energy is needed, use the water to produce electricity in a hydroelectric station. The main advantage of this method is that it can store a huge amount of energy, but it is not applicable everywhere, and usually there is a lot of electricity loss: the power needed to pump up the water is bigger than the electricity produced by the same amount of water. A common method for short-term storage of solar energy is using batteries. They can be used everywhere and there is no electricity loss, or at least it is negligible: you can store and get back almost the same amount of energy. Building huge battery storage stations has been gaining popularity. They can be isolated or co-located with solar or wind farms. They offer extremely short start times and are used to handle peak loads of the grid for up to a few hours. Expensive and inefficient in the past, they have become attractive with the technology progress and the drop in battery prices in recent years. Today there are battery stations from 20 megawatts up to 400 megawatts which can supply energy for up to eight hours. One other very interesting solution which has been gaining popularity recently is vehicle-to-grid. Electric vehicles have batteries usually from 50 up to 80 kilowatt-hours. The number of electric vehicles is increasing rapidly and the trend is expected to continue. We already know how to control the charging process of electric cars in a way that allows the charging current to be increased or decreased when needed, or even to be stopped. But let's look at the car batteries as electricity storage: storage that could be used not only by the car itself but by the grid as well. The energy could be pushed back into the grid to flatten the curve. There could be rewards for users that allow their car batteries to be used as grid balancing elements.
The users could be encouraged to charge their cars during off-peak hours and discharge them during peak hours. This has a cost, of course, as usually every charging cycle shortens the battery life, but with the technology advancement and the battery price drop this can be overcome. And the batteries are already there: a big number of batteries that will rapidly increase in the near future, with a huge amount of storage capacity. And most of them are idle most of the time. All that is needed is a little improvement in the current charging infrastructure and some smart IoT solutions that can transparently and efficiently control this process. All the players should sit together and make a step in this direction: power grid managers, car and battery manufacturers, governments. And this process has already started. The other approach to flatten the curve is to change the users' power usage behavior. Time-of-use pricing is an old technique that has been used for years, even before the green energy boost. Usually the energy is cheaper during the night, when the demand is low. The electricity prices for these periods are known in advance and are rarely changed, usually a few times per year. That way the users can plan their power consumption around the cheaper energy. But this method is not suitable for the green energy use case, where the conditions and the price may change every hour. Such real-time pricing cannot be used by users to plan their power consumption by hand. This area is a huge opportunity for automated IoT solutions that intelligently control the power consumption, even using some machine learning or artificial intelligence algorithms. In our project we are focusing exactly on this use case: balancing the grid by intelligently controlling the power usage. Now, for the more technical part of our presentation, we are going to describe Project Kinney, our microgrid proof-of-concept solution.
You will see what happens when connecting EdgeX Foundry and electric vehicle charging stations, and how this can help balance the grid. A quick introduction: VMware has a long-standing commitment to driving innovation in sustainability. The collective impact of virtualization technology has avoided the emission of hundreds of millions of metric tons of CO2 into the atmosphere. The sustainability commitment incorporates VMware's main campus in Palo Alto, where there are rooftop solar panel arrays, large batteries, multiple electric vehicle charging stations and smart office buildings. Less than a year ago, a collaboration between VMware and KAMS Energy, a startup company developing an open source smart grid management solution, launched an open source project called Kinney. The purpose of this project is to build a smart grid proof-of-concept solution. Behind the scenes, project Kinney uses another open source project called EdgeX Foundry. Before diving into the explanation of how we approached the creation of our microgrid proof of concept, let's talk a bit about the challenges IoT applications commonly face. One of the biggest challenges in IoT is that it accommodates a lot of knowledge. We could say that IoT is a collection of knowledge gathered over the last few decades. It includes everything we know about network devices and protocols, edge and cloud computing. It is a mix of platforms, applications, domain-specific knowledge, intelligence and things. Getting this diverse set of devices, usually constrained in different ways (CPU, RAM, storage, network capability and even availability), all connected into a single IoT platform is a challenging task to achieve. And because it is all about the data, processing and analyzing the big, constantly growing amount of data produced at the edge is hard. Traditional cloud computing networks have been highly centralized, meaning the data produced at the edge goes all the way from the edge to the main servers for processing.
This is why a more hierarchical network, with devices reporting their data to edge nodes close at hand, and those nodes being able to apply local intelligence and immediately trigger actions back to the devices, could meet the requirements of IoT. Processing the data closer to the data source brings a lot of benefits: it greatly reduces the response latency, it lowers the network bandwidth, and it even preserves privacy. This is a high-level design overview of our Kinney project. We preserve the hierarchical IoT architecture, having edge nodes between the cloud and the edge devices. In the data center, there will be two main applications. One is responsible for monitoring and management of the remote edge nodes: this application will monitor the health and availability status of the edge nodes, and it will take care of edge node provisioning and applying configuration changes. The second application will be responsible for improving the energy efficiency, balancing the electric power demand and supply, and minimizing the electricity loss. This application will communicate with different energy sources (nuclear, coal, all kinds of renewables) as well as with the edge nodes. Based on the information gathered from all of them, this application will make decisions and will execute commands in order to keep the grid efficient and balanced. The edge nodes, located close to the energy consumers and producers, will be responsible for monitoring them in real time. They filter the data, apply local intelligence, trigger actions back to the devices, and report to the cloud only the most valuable information. The edge nodes, as a middleware between the north and the south, will communicate in both directions: they will receive commands from the cloud and will control the devices. Their view scope will be limited only to the attached devices, while the cloud will have a global view of the entire grid. We chose to build the edge part of our project using EdgeX Foundry, an open source platform for IoT and edge computing.
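The "report only the most valuable information" idea can be sketched very simply. This is a hypothetical illustration (the function and field names are ours, not part of Kinney or EdgeX): an edge node collapses a window of raw readings into one summary, and forwards individual readings only when they cross an alert threshold:

```python
# Hypothetical edge-side aggregation: many raw readings in, one small
# summary plus any threshold-crossing readings out.
def summarize(readings, alert_threshold):
    summary = {
        "count": len(readings),
        "avg": sum(readings) / len(readings),
        "max": max(readings),
    }
    # Only readings above the threshold are worth forwarding individually.
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

# A window of per-second power readings (kW) collapses to one summary
# and a single alert, instead of five cloud-bound messages:
summary, alerts = summarize([10.2, 10.4, 10.1, 42.0, 10.3], alert_threshold=30)
```

This is what keeps the bandwidth and latency benefits mentioned above: the cloud sees the trend and the anomalies, not the raw firehose.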
So let's do a quick recap of this platform. EdgeX Foundry is a common open platform for IoT edge computing whose main goal is to unify the IoT edge and thus accelerate the development and deployment of IoT solutions across a wide variety of industrial and enterprise use cases. It creates an ecosystem of plug-and-play components ready to be used or customized. It is a loosely coupled microservices framework written mainly in Go. The project is under the Linux Foundation Edge umbrella. Its license is Apache 2, which means it is business friendly: you can take the code, fork it, modify it, do whatever you want without asking for any permission. Of course, contributing back to the project is the main point of open source: together we can build and develop bigger, faster, and with higher quality. The project is software and hardware agnostic: it can run on any kind of hardware and operating system, on ARM or Intel for example. The diagram represents the EdgeX Foundry architecture. As you can see, there are multiple boxes on this diagram; each one represents a microservice. This collection of microservices is layered. At the bottom, we have the so-called device services. These services are closest to the devices and are responsible for the communication with them. Each device service speaks a different network protocol. They transform the data coming from the devices into a format well known to the rest of EdgeX; we could think of this process as a normalization of the data. In the core services layer, the core data microservice persists the data for a specific period. Data persistence is an optional feature: it all depends on the specific use case you're working on and the storage capability of your device. There are some reasons why you may want to keep data persistence present on your system. The first reason could be limited or intermittent connectivity, something that commonly happens at the edge. The data can be saved locally until the connection is restored.
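The normalization step a device service performs can be sketched like this. To be clear, this is a hypothetical illustration in Python, not the actual EdgeX data model or SDK (EdgeX device services are written in Go or C against the device SDKs); the field names `device`, `resource`, `value` and `origin` are ours:

```python
import time

# Hypothetical sketch of device-service "normalization": a raw,
# protocol-specific register value becomes a common reading structure
# that the rest of the platform understands.
def normalize(device_name, resource, raw_value, scale=1.0):
    """Turn a raw protocol value into a normalized reading."""
    return {
        "device": device_name,
        "resource": resource,
        "value": raw_value * scale,        # e.g. raw tenths of kW -> kW
        "origin": int(time.time() * 1e9),  # timestamp in nanoseconds
    }

# A raw register value of 73 (tenths of kW) becomes a 7.3 kW reading:
reading = normalize("charge-station-01", "PortLoad", raw_value=73, scale=0.1)
```

Whatever the southbound protocol (Modbus, BACnet, a vendor REST API), the rest of the services only ever see this one shape, which is what makes the layers above protocol-agnostic.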
Once the connection is okay, the data can be forwarded to a cloud system. Another reason for storing the data could be the desire to enable historical queries: you may want to know what the temperature was an hour ago or, in our case, what the electricity load was two hours ago. The core command microservice exposes the device commands in a common, normalized way to simplify the communication with the devices, while the device services are what actually know how to speak with those devices. The next microservice, metadata, is the brain of the system. It keeps different kinds of metadata as well as the inventory of all the devices in the system. The next microservice in this layer is configuration and registry. This microservice provides centralized management of the configuration of all EdgeX Foundry microservices, as well as their current status. The layer above is the supporting services layer. This layer encompasses a wide range of duties such as logging, scheduling and notifications. The next layer, application services, is meant to get data from EdgeX Foundry to external systems. The data can be filtered, transformed in some way, encrypted or compressed. The endpoints supported out of the box today include HTTP and MQTT, but they are going to be extended. To summarize, we could think of EdgeX Foundry as a double transformation engine that applies local intelligence. The first transformation happens in the device services layer, the second in the application and export services. This transformation goes in both directions, from the south to the north and vice versa. One of the main goals of the project is to stay easily extendable and interoperable no matter how much it grows and how complex it becomes. Each microservice has a well-defined API that ensures it can easily be replaced with a custom one. The most customizable components of the framework are the device services and the application services.
The replacement is actually facilitated by the provided SDKs, which greatly simplify this process. This is how the edge part of our Kinney project maps to the EdgeX Foundry architecture. The goal is to run it as close as possible to the edge devices, periodically collect the data from the local segment of the microgrid, then aggregate that information and send it to the data center somewhere in the cloud. The EdgeX Foundry project already has all we need to achieve this goal. It also has the flexibility to cut off any additional services that we do not need, in order to meet the hardware limitations that edge devices typically have. We only need to implement the device-specific layer and the logic responsible for communicating with the data center. Writing a custom device service for EdgeX is easy: there is a device SDK which handles all the mandatory communication with the other services, so we can focus only on our device-specific logic. The first device that we attach to our edge is the electric vehicle charging station. In the VMware Palo Alto campus, the EV charging infrastructure is based on ChargePoint stations. These stations have a SOAP-based API which can be used to remotely monitor and control the charging process. The chargers do not support returning energy from the car's battery back to the grid, so we cannot implement and test the vehicle-to-grid use case. What we can do is the following. There are APIs to retrieve the attached charging stations and the properties of each charge port; we use them to implement auto discovery and health status checks of the stations. There is an API for getting the current power load and the car or driver currently using the port. There is an API for shedding the charging power of each port: the charging can be limited to a maximum amount of power or to a percent of the current load. And there is an API to remove any charging limitations from a port.
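The two shedding modes described above are simple to state precisely. This is our own minimal sketch of the arithmetic, not the actual ChargePoint API (the function names here are ours):

```python
# Sketch of the two load-shedding modes: cap a port at an absolute
# maximum power, or at a percentage of its current load.
def shed_to_max(current_kw, max_kw):
    """Limit a port to an absolute maximum power (kW)."""
    return min(current_kw, max_kw)

def shed_to_percent(current_kw, percent):
    """Limit a port to a percentage of its current load."""
    return current_kw * percent / 100.0

# A port drawing 7.2 kW, capped at 5 kW or at 50% of its current load:
capped = shed_to_max(7.2, max_kw=5.0)        # 5.0 kW
halved = shed_to_percent(7.2, percent=50)    # 3.6 kW
```

The percent mode is what our balancing commands map onto most naturally, since the cloud expresses curtailment as "reduce by N percent".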
There are of course a lot more APIs, notifications and alarms, but for the first stage of our proof of concept we limit our efforts to these APIs only. The ChargePoint infrastructure is organized hierarchically. There is an entity called an organization. In each organization there are multiple charge groups, in each charge group there are charge stations, and each charge station can have multiple charge ports. The APIs can work on each of these levels: the charging load can be limited per group, per station or per port. This flexibility allows us to define different relations between the EdgeX device and the charging device. We can have a device service for each port, for each station or for each charge group, depending on the use case scenarios and algorithms that we want to test. At the first stage we decided on a device-service-to-charge-group relation, so from the EdgeX point of view the device is a whole charge group. The ChargePoint APIs are secured: they are encrypted and they require authentication in each request. We implemented our device service and started to collect statistical data from the charging stations. But then came the COVID virus and broke our plans. There were almost no people on the campus, no cars and no charging data. We decided then to put our efforts into creating a simulator that emulates a ChargePoint device. It does not need to support all the APIs, only those that are used in our device service. It should emulate the whole ChargePoint infrastructure with its hierarchy (organizations, groups, stations and ports) and should behave like a real charging device. The easiest approach was just to replay the collected historical data. But replaying this data is not enough: we cannot emulate the load shedding logic, which is mandatory for our algorithms. The simulator evolved from a simple repeater of collected data to a completely random simulator. Currently it can work in multiple modes.
It can work as a repeater of real historical data; working with this data is valuable, as it reflects a real charging process with real electric vehicles. It can work as a generator of a full charging infrastructure with random organizations, groups, stations and ports, where the count of these entities can be random or fixed in a configuration file. And it can work with a fixed charging infrastructure defined in a configuration file. All the electric vehicles being charged are randomly generated. There are a few probability parameters which allow modeling different behavior of the simulator: the probability of a port being occupied by a charging vehicle, and the probability of a vehicle being unplugged before its battery is fully charged. When the simulator works with randomly generated vehicles, it supports the logic for shedding the load per port. It turns out that this COVID situation actually sped up our project, as we no longer rely only on the few charging stations that we have on the campus. Now we can emulate different charging infrastructures with different car behavior: we can emulate charge stations at offices, at homes or at malls, all of which have different typical charging sessions. This allows us to test more complex algorithms in our microgrid. The next part of the puzzle was to implement a custom application service, using the provided application service SDK. We have created a custom application service that can communicate with the cloud management system and with the edge devices. Based on the periodically reported data coming from the device service, our application service monitors the current power load of the microgrid segment the edge node is responsible for. Because of our desire to make the solution applicable to broader use cases, we decided to group the charging stations according to their location, like home, office or shopping center. So when a new charge group appears and is registered in the edge node, it reports the location type it belongs to.
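The simulator's probability parameters can be sketched as a tiny state machine over the ports of a charge group. This is a hypothetical minimal version of the idea, with parameter names of our own choosing, not the actual simulator code:

```python
import random

# Sketch of the simulator's core loop: each tick, a free port may be
# occupied by a newly plugged-in vehicle, and a charging vehicle may be
# unplugged early.
def tick(ports, p_occupy, p_unplug, rng=random):
    """Advance the simulated charge group by one step.
    ports: list of booleans, True = occupied by a charging vehicle."""
    return [
        # occupied port stays occupied unless the vehicle unplugs early
        (rng.random() >= p_unplug) if occupied
        # free port becomes occupied with probability p_occupy
        else (rng.random() < p_occupy)
        for occupied in ports
    ]

rng = random.Random(42)  # fixed seed so runs are reproducible
ports = [False] * 8      # an 8-port charge group, initially empty
for _ in range(10):
    # Office-like profile: ports fill fast, cars rarely unplug early.
    ports = tick(ports, p_occupy=0.3, p_unplug=0.05, rng=rng)
```

Tuning `p_occupy` and `p_unplug` per location type is what lets the same loop mimic an office lot, home chargers or a mall, as described above.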
Our application service monitors the load of the entire grid segment and the load per location type. When an electrical shortage appears in the grid, the main electric management system in the cloud sends commands down to the edge nodes, telling them what percentage of the current load should be reduced. These commands are received by the application services in each edge node, and they are responsible for determining the charge groups to be curtailed and the capacity they should be curtailed with. They then drill this command down to the target end devices. Our initial algorithm is based on importance weights assigned to the charge groups, depending both on the time of day (morning, evening, night) and the location type they belong to. If it is 5 p.m. and there is a shortage, the cars that are already at home and the cars in the shopping center are assigned a lower importance weight than the cars in the office: most probably, the people in the office will want to get home soon. That's why the curtailment process will start with the cars at home and the cars in the malls. Because it is more likely for one edge node to work with devices that belong to only one location group type, our future work will most probably propagate this labeling idea to a higher level. So, to summarize what we have so far: Kinney is still in its infancy. We developed it as our idle-time task and couldn't spend much time on it; that's why currently we have more ideas than implemented features. There is a fully working prototype of the edge node based on EdgeX Foundry. We have a custom device service that talks to ChargePoint electric vehicle chargers, and there is a simulator that understands the basic ChargePoint APIs and can emulate charging facilities and the EV charging process. We have a custom application service with logic to curtail the power of the chargers using some simple algorithms. This is a good foundation that can be used for building a more complex smart power grid proof-of-concept solution.
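The importance-weight selection above can be sketched in a few lines. The weight values and location labels here are illustrative assumptions of ours, not the actual numbers in Kinney; the point is only the ordering they produce:

```python
# Illustrative importance weights: (location type, time period) -> weight.
# Lower weight means the group is curtailed earlier.
WEIGHTS = {
    ("home",   "evening"): 1,
    ("mall",   "evening"): 1,
    ("office", "evening"): 3,  # office cars still need charge for the commute
    ("home",   "morning"): 3,
    ("office", "morning"): 2,
    ("mall",   "morning"): 1,
}

def curtailment_order(groups, period):
    """Sort charge groups so the least important come first in line
    for curtailment during the given time period."""
    return sorted(groups, key=lambda g: WEIGHTS[(g, period)])

# At 5 p.m. (the evening period), home and mall chargers are curtailed
# before the office chargers:
order = curtailment_order(["office", "home", "mall"], "evening")
```

The edge node would then walk this list, shedding load group by group until the percentage requested by the cloud is reached.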
Here comes the interesting part, our ideas for future work and enhancements. There are a lot of things to do before having a full smart grid proof-of-concept solution. First of all, more devices. We are planning to attach more power devices to the edge node and write custom device services for each of them. In the VMware campus in Palo Alto, there are solar panel arrays, battery packs and smart buildings. This will introduce power producers and storage to our smart grid, and more power consumers with different usage patterns. We will also attach a weather sensor device; its data will be used in the balancing algorithms to predict the green power production and the power usage. After adding these new devices, new algorithms with more complex logic for balancing the grid will be developed in the application service. We want to design a hierarchical edge architecture to ease the scaling of the grid, and to connect the edge node to the cloud: send filtered and aggregated information from the attached devices, and receive commands and apply them to the devices. We also want to implement an electric management system in the cloud, a group of services that will maintain the state of the entire microgrid and apply logic to balance it, leveraging machine learning or artificial intelligence algorithms. That's it. If you are interested, we welcome you to join us. Here are the links to the EdgeX Foundry and Kinney projects' GitHub repositories. Thank you for being with us. If you have any questions, feel free to ask; we would be happy to answer them.