Okay, everybody, the music's come down and I'm told that's the beginning of showtime. Don't be afraid to come up front; I'm feeling lonely up here. Today we're going to talk about smart city networks, partly because I believe, and the panel believes, that they're really a network of the future, and it's where we're going to see and have to do a tremendous amount of innovation. That innovation is going to require open source software to make it happen, and it's going to require the insertion of lots of new technologies, some of which I don't think we've even thought about yet or even know we're on the cusp of. That's where machine learning and artificial intelligence techniques come in. As for myself, I'm Cliff Grossner, and I'm part of the technology unit at IHS Markit. We're an analyst firm that spans 140 countries and has over 50,000 clients, and I'm happy to be here today with all of you.

With that, I want to spend just a few minutes sharing some research about why we believe this is one of the directions that's going to be very important in shaping the next 10 to 15 years. Then I'm going to ask one of our panel members, Ritch, just over here on my right, to talk to us a bit about smart city networks. The reason I asked Ritch to do that is that he's involved in some very large initiatives around smart cities in Canada and can share with us how those developments are likely to go. After that, the rest of our panel will introduce themselves; they are the experts in the machine learning and AI piece of this title. Then we'll ask them some questions about where we think smart city networks are going and how they're going to adopt AI and machine learning techniques.

So with that, let me jump in and share a slide that everyone has probably seen in some form before, but I'm going to ask you to think about it in a slightly different way. Sure, we all know digitization is disrupting everything; that's not really the purpose of this slide. My purpose is to ask: in the context of a smart city, where we want things like smart buildings and potentially shared infrastructure, what is this going to mean? It's not just traffic; it's about how that traffic is going to be used by the different segments of the people who live there, 24/7. That means nonstop computing. It's going to mean self-driving cars, automated buildings, automated elevators, automated factories, none of which can tolerate even five to ten seconds of latency. I was talking with a contact at a major cloud service provider, and they told me how they actually put one node of their data center on a factory floor because they needed to get the latency down to under a couple of milliseconds for gathering data from the machinery. They couldn't take the data across to their data center without doing some preprocessing. So think about that.

The next thing I wanted to share with you: as an analyst, people always ask me how much is hype and how much is reality. So part of what I like to do is say, okay, here's how fast we think it's going to go. This is a chart from some of our research on connected devices.
As you can see, for 2021 (I don't like to go too far out; beyond that, who knows, right?) we're projecting over 30 billion connected devices. I also wanted to share that there are two segments that look like high-growth segments, and guess what: both of them are going to be driving traffic on smart city networks. One is industrial, so all those new sensors going into factories and into the objects we use every day. The second is consumer, and there are a lot of consumers in smart cities and big cities. So those are two segments we really have to think about as we move forward.

The other thing I want to share: now that we have sensors and we have data, we'd better be computing with them; otherwise there isn't going to be any machine learning or AI. So here is our forecast for data center compute, for servers. We can talk offline, because I don't have time here, about what a server is going to look like five years from now and what's going to be in it; in fact, we'll touch on that with the panel when we talk about silicon. We project $57 billion in revenue for 2021, but the important thing is the return to growth: we see a decline in compute spending and then a return to growth. Why is the growth happening? We went back about 20 years in time and looked at worldwide demand for compute and how it has changed. What we saw was a huge spike around the time smartphones went viral, and since then the growth rate of worldwide compute demand has been declining. It has now troughed at about 50% year-over-year growth, and we believe that by 2020 it will start to go up again because of IoT. That, coupled with the fact that server virtualization is now saturating, means we're going to need more physical servers, not to mention that we're seeing less of a performance uplift from each successive generation of CPU silicon.

The last thing I want to share before I turn it over to the panel is our forecast for cloud services. We forecast the off-premises cloud services market at near $350 billion out to 2021. What's really important is the two segments in the middle; those are the high-growth segments. You can see they go from near zero to a pretty large portion of the market. They represent cloud as a service and platform as a service. Cloud as a service is something we call out that makes us unique (we all have to differentiate our products too): it's an orchestrated cloud provided by a cloud service provider, versus going and putting your credit card down and getting a CPU instance. So look at the high growth in the orchestrated-cloud element. That could be an OpenStack-orchestrated cloud supplied by a cloud service provider, or it could be Kubernetes-based, but it's in that camp. The other segment is platform as a service, where cloud service providers offer pre-built components such as machine learning and AI. Those are the two high-growth segments. The reason I'm bringing this up is that right now the major cloud service providers are making it easy for designers and developers to incorporate machine learning and AI into their applications.
And I believe that's just the first step; we'll then take that and bring it into the network and into end-to-end compute. So with that, I'm pretty comfortable that machine learning and AI will happen, and I'm very confident they're going to happen in smart city networks. I'll leave you with these ideas: the transformation is going to bring disruption; IoT traffic is the next big wave, and now we know how big it might be; and because cloud service providers make machine learning and AI easy, we're going to see them in smart city networks. And that's me, by the way. I'm going to ask Bob to introduce himself.

All right, my name is Bob Ghaffari. I'm with Intel; very happy to be here, and thanks for inviting me, Cliff. When you look at what Intel has been focused on, we've been quite involved in a lot of the open source activities. Linux has been a major focus, and OpenStack of course. For things like DPDK, Intel was really one of the major contributors, bringing it into the open source community to help with packet processing on the network side and to create a framework of solutions that the broader industry can use to build networking solutions. And when you look at what Intel has been doing recently in AI and analytics overall, we've been quite active in acquiring companies, Nervana being one of the latest acquisitions. What we're really trying to create is a framework of solutions that can be easily used by the broader community and that can address needs no matter what you're doing, including things that are latency-sensitive and require networking capabilities. So for things like VR and AR, where there are latency constraints, we want to make sure you can do that kind of processing at the edge as well as back in the data center. Quite happy to be here, and looking forward to the panel.

Thanks, Bob. I think next we have Sumeet. And Sumeet, I believe you were just acquired by Juniper. Yes, I was, and thank you for having me. I'm Sumeet Singh, and I'm at Juniper Networks. Juniper, of course, has a long history in building routing and switching and essentially powering the internet, and it also has a very long history in building automation. If you look at the surveys, the use cases, and the customers, what Juniper is pioneering at this moment is the notion of self-driving networks. At AppFormix, what we were pioneering was the notion of the self-driving cloud, and late last year we merged with Juniper. Now we are building toward that future where clouds are going to be increasingly distributed, as we talk about smart cities or the growth in industrial and even end-user IoT. Our belief is that networks, like computing, get more distributed, closer to the end users. What we truly want to do is build the automation that helps us run all of those environments efficiently, and then use AI behind the scenes to ensure there are never any faults; to ensure, the way I like to put it, that as an operator we are able to sleep at night, trusting that everything will keep running. Thanks, Sumeet.
And we're just going to skip over Ritch for a second and ask Steve to introduce himself.

Hi, I'm Steve Vogelsang, head of strategy and CTO for Nokia's IP and Optical Networking division. This area of machine learning and artificial intelligence and its applicability to networks is something we've been investing in pretty heavily recently. One of the big challenges we see is that the internet is becoming more complex and more distributed: the topologies become more highly interconnected, a much higher number of applications run across those topologies, and then with IoT and smart cities we begin to connect many, many devices of very different types. We saw a big challenge in, first, understanding how the network is performing; secondly, once you understand that, what can you do to make it perform better; and the third big area is security: what are the security implications of all these devices coming onto the network? So we recently acquired a company called Deepfield. What Deepfield had figured out is that you can get that intelligence without going down the traditional path of putting probes all over the place, but rather by tapping into data streams that are already available on the nodes themselves. Starting with flow data and adding dimensions to that data ultimately gives you a picture of what's running on the network. Once you understand that, you can begin to use machine learning to isolate abnormalities. A big example is a DDoS attack, perhaps originating from IoT devices: how do you zero in on it and understand exactly what it is, where it's originating, and what it's attacking? This is one of the big areas where we're focused on applying machine learning and artificial intelligence.
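To give a feel for what "starting with flow data and adding dimensions" can look like, here is a minimal sketch that flags a possible DDoS target by comparing each destination's fan-in of distinct sources against a learned baseline. The record layout, names, and threshold are illustrative assumptions, not Deepfield's actual pipeline:

```python
from collections import defaultdict

def flag_ddos_targets(flows, baseline, ratio=10.0):
    """Flag destinations whose fan-in of distinct sources far exceeds
    their typical baseline.

    `flows` is an iterable of (src_ip, dst_ip, byte_count) records, the
    kind routers already export; `baseline` maps dst_ip to its usual
    number of distinct sources. Purely illustrative, not Deepfield's
    actual pipeline.
    """
    sources = defaultdict(set)
    for src, dst, _ in flows:
        sources[dst].add(src)      # one added dimension: distinct fan-in
    return [dst for dst, srcs in sources.items()
            if len(srcs) > ratio * baseline.get(dst, 1)]

# 500 distinct sources hammering one host vs. a baseline of ~20:
suspects = flag_ddos_targets(
    [("10.0.0.%d" % i, "203.0.113.5", 512) for i in range(500)],
    baseline={"203.0.113.5": 20},
)   # -> ["203.0.113.5"]
```

The point of the sketch is that the raw flow records already exist on the routers; the "intelligence" is a matter of aggregating them and comparing against a baseline, with no extra probes deployed.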
Okay, and now we have Ritch Dusome.

Hi, good morning. My name is Ritch Dusome. I'm the CEO of an organization called CENGN, based in Ottawa, Canada. CENGN stands for the Centre of Excellence in Next Generation Networks. We're funded in part by the federal government of Canada as well as by large industry members; in fact, all of the companies on this panel are members of CENGN. So what do we do? Basically, we've created an environment, an OpenStack cloud (which is a connection back to this group), that we're going to be expanding across the province in the next 12 months. Effectively, we're going to link all of the innovation centers across the province of Ontario, and then our goal is to extend that across the rest of the country. We're very tied in with all levels of government: municipal, provincial, and federal (provincial being the equivalent of state here in the US). From a smart city perspective, we've been asked to participate. The federal government of Canada is very serious about adding the technology capability to make sure the country remains competitive; we haven't been on a great trend in the last couple of years, and we hope to get that back.

In terms of smart cities, I am a network person, so I believe in laying the groundwork to build a proper infrastructure. Let's face it: if someone is designing a water system, electricity, buildings, HVAC, or public safety, and those are not connected somehow, it doesn't make sense, right? Effectively, it's not smart. They need to be built with proper security, of course, and be able to interconnect amongst the various organizations. And this is not technology for the sake of technology; no one, government or industry, has any interest in that. There needs to be a real problem to solve, and I think this is a significant one. Just to show how serious the federal government is in Canada, they put out a $300 million smart city challenge to get cities to compete with one another and show how collaborative and innovative they can be, and they've put up another $2 billion to kickstart that investment. So I guess my message is that Canada is open for business; please come and see us.

On the right-hand side of the slide is probably the kind of diagram you've been used to seeing this week. My technical team is here; with a very small team, we run a pretty interesting infrastructure that involves a number of our member companies, including the four largest service providers and, as I mentioned, folks like Juniper, Cisco, Nokia, Intel, and so on. What we're doing, effectively, is building this OpenStack cloud to let all of these innovation areas share the same infrastructure. Whether it's precision agriculture, connected vehicles, what have you, they all need to share a common infrastructure, and we're going to provide that for proofs of concept and industry validations. We also host students from the various universities and colleges, as well as small Canadian companies doing innovative programs; this is a demonstration area for them to show off their technologies. One area the provincial and federal governments have asked us to look into is a concept we demonstrated last year with Juniper and a company called Inocybe, whom you may know as very active in the open source OpenDaylight space. I don't know what it's like here in the US and elsewhere, but in Canada we have great internet in the cities; go just five miles outside a city and the internet is terrible. So we've been asked to look at an architecture that would be open and standards-based. That's what we demonstrated last year and something we want to continue over the next several years, to get both wired fiber access and 5G access technologies interspersed into an open broadband network.

Thanks, Ritch. I had one question for you about smart cities: is the networking infrastructure likely to be public, shared infrastructure like waterways and utilities, or how do you see that requirement unfolding for smart cities? Well, I think each city probably has a view. The city of Toronto, the city of Ottawa, and so on believe they would put in the infrastructure, because they're responsible for it; they're accountable to the citizens of that particular city. That being said, they're not technology people, so they would need to rely on experts from companies such as those here to help them. What we're providing at CENGN is effectively a think tank that gives advice on the best open, standards-based way to do it, but ultimately I believe the cities would be responsible for the infrastructure. Who puts it in and who manages it, that's another discussion.
Okay. So we see that we're going to have this large shared infrastructure that has to respond to all the requirements of the IoT traffic flowing across it: potentially shared for business purposes, shared for community interests, and shared by devices that need low latency and cannot tolerate any downtime whatsoever, because the results could be catastrophic. And the question is how we get there, because none of us today, I think, live in a world where networks have that level of reliability.

So with that, I'm going to open up the questions, and in a few minutes I'll take some from the audience; there are mics here if you want to ask the panel members anything. I just have one or two questions myself, so get ready. My first: let's take a look at AI techniques. When I say AI techniques, by the way, my original dissertation work was in AI, way back when. We worked on things like neural networks, rule-based systems, and semantic networks, and there are more AI techniques today; we've advanced in how we do machine learning and how we put layers of systems together to learn better than they did in the past. But the question I have is: in smart city networks in particular, how are these techniques going to play a role in achieving the requirements we just talked about? I'll open it up to anybody on the panel who wants to take a crack at it.

Well, I guess I'll take a crack at it. As I was mentioning before, a big area we're focused on in applying AI and machine learning is building a baseline, so that we understand what a healthy operating environment looks like for the network: what state do you want to see as applications flow across the network? You're always going to see abnormalities, say a spike in traffic, and the key is to get to the point where the response to that type of event is fully automated. This is where we see machine learning as a key tool. When you see one of these abnormalities, you take some action; initially those might be manual actions taken by a network operator, but the machine learning system is watching, seeing the result: did it fix the problem or not? If it did, you use AI to learn that, yes, this looks like a fix if I see something resembling that previous abnormality. What this allows us to do over time, if you imagine we now allow the software itself to make changes to the network, is automate via software: you have a system that's continuously learning, both when its actions fix the network and when they degrade performance, and over time it just gets better and better. So that's an example of how we're applying the technology, in this case for network optimization, really to keep the network in a very good operational state.
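To make the closed loop Steve describes concrete, here is a minimal sketch, assuming a single scalar metric, a z-score baseline, and a hypothetical action table; it illustrates the idea, not Nokia's actual implementation:

```python
import statistics

class ClosedLoopController:
    """Learn which remediation action tends to fix a given anomaly.

    A real system would use far richer state; this sketch keeps a
    rolling baseline of one metric and a success count per
    (anomaly, action) pair.
    """

    def __init__(self, window=100, threshold=3.0):
        self.history = []           # recent samples: the learned baseline
        self.window = window
        self.threshold = threshold  # z-score that counts as an anomaly
        self.successes = {}         # (anomaly, action) -> times it fixed things

    def observe(self, value):
        """Return an anomaly label, or None if the metric looks healthy."""
        label = None
        if len(self.history) >= self.window:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            z = (value - mean) / stdev
            if z > self.threshold:
                label = "spike"
            elif z < -self.threshold:
                label = "drop"
        self.history.append(value)
        self.history = self.history[-self.window:]
        return label

    def best_action(self, anomaly, candidates):
        """Prefer the action that most often fixed this anomaly before."""
        return max(candidates,
                   key=lambda a: self.successes.get((anomaly, a), 0))

    def record_outcome(self, anomaly, action, fixed):
        """Feed back whether the action restored a healthy baseline."""
        if fixed:
            key = (anomaly, action)
            self.successes[key] = self.successes.get(key, 0) + 1
```

Initially the candidate actions would be carried out by a human operator, exactly as described above; the controller only records which action restored the baseline, and automated execution is enabled once the success counts earn trust.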
Sounds like Sumeet wants to say something. Yep. At AppFormix, when we went down this path of using machine learning and AI to, let's say, improve performance and improve reliability, what we started to see was that the number of sensors is growing exponentially. The volume of data being produced today is ten times more than yesterday, and perhaps tomorrow it will be ten times more than today. And the volume doesn't just come from the number of sensors; it's also how frequently we need to query them. If you're trying to build, say, the self-driving car, you can't wait minutes to make your decision; you want to make decisions instantaneously. So we took all of that science, looked to the future, and said: it's all got to be distributed, it's all got to be pushed out to the edge. You've got to think about edge computing, and I think that's truly where things will go as we look at this world of IoT, smart cities, and self-driving cars. Decision-making needs to move closer to the edge; we need to provide more computing closer to the edge, and for the closed-loop automation we want to do, the decisions need to be made quicker. That's where we at AppFormix have been focused, and perhaps where more of us will be focused. And if you keep following that logic, the next thing you realize is that there's so much data, coming at you so quickly, that you can't even store it. That becomes another challenge, so you've got to start thinking about processing everything in memory using streaming algorithms. Soon enough you get to a pretty sophisticated system, and you're able to do that automation.
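As one example of the kind of in-memory streaming algorithm Sumeet means, here is a minimal sketch (the names are ours, not AppFormix's): Welford's online algorithm keeps running statistics per sensor in constant memory, so the raw samples never have to be stored or replayed.

```python
class StreamingStats:
    """Welford's online algorithm: running mean and variance of an
    unbounded sensor stream in O(1) memory."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0


# One tiny accumulator per sensor instead of terabytes of raw samples.
stats = {}

def ingest(sensor_id, value):
    stats.setdefault(sensor_id, StreamingStats()).update(value)
```

However many readings arrive, the per-sensor state stays three numbers, which is what makes querying sensors at very high frequency tractable at the edge.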
Maybe I can add on to that. I agree with Sumeet's comments; there's going to be a lot happening at the edge, especially with the explosion of IoT. Ultimately, when you're looking to deliver solutions, it comes down to what you're trying to solve and the requirements of the capability you need at the edge. There's definitely a clear requirement to be much smarter and much quicker, and to process things much faster, especially for the low-latency applications at the edge, so that is quite important. The other thing is that you've got to look at the economic model. If you can push things into the cloud and you can afford the latency, you'll probably want to do it there, but there are clearly applications that need that computational capability at the edge. So when we look at this distributed architecture, we need to look at the whole thing, from the IoT device all the way into the cloud, and make sure you're making the right decisions about where to land these workloads so that it ultimately makes economic sense. Thanks, Bob.

I actually want to second something I heard from the panel: the fact that today we have such a wealth of data coming in really quickly. I will tell you that when I was looking at AI 30 years ago, one of the reasons I took my career in a different direction and gave up on AI was that I looked at it and said: these systems are so brittle, they just don't have enough data to do any reasonable inferencing, and I don't see anything happening in the next 10 years that's going to change that. So I changed my career direction. Now, finally, things have changed, and I agree: we now have the data we need, and that's making a difference.

Now I want to ask a question about one of my pet ideas. We've all talked about compute moving to the edge, and we're using the word compute. But does compute mean CPU? Does it mean GPU? Does it mean TPU? What does it mean? What do we need to do, in terms of how we compute, to really get the parallelism required for AI and machine learning? I'll turn it back over to the panel.

Well, I'll jump in here, I guess, and probably my colleague from Intel has something to say about what kind of compute. Simply put, the way I think of compute is that when you start with a new application, you probably run it on a CPU, because it's the most general type of processor and has good performance. Moving on to a GPU, TPU, or whatever optimized processor is more a function of how big the workload is. If we find we're doing something over and over at very large scale, you almost certainly want to look at whether you can optimize it into a piece of silicon that's really designed for that specific problem; call it workload-specific compute. But to me, the workload comes first. We need to know the workload and scale it; once we know it, we can optimize it into dedicated silicon.

Yeah, I'll add on to that; I agree with what Steve said. When you look at the history of workloads, generally a lot of things end up running on a compute infrastructure everybody's familiar with. Then you get smarter, the algorithms get more challenging, and you find bottlenecks. You want to solve those bottlenecks, and there's a variety of ways to do so; historically that has been PCI acceleration cards of some shape or form, which could help with packet processing or with deep learning applications. What we've fundamentally thought through is that, in the end, it comes down to the algorithms and solutions you're trying to run, and we want to create a framework that makes it easy to figure out how to run them most optimally. So for certain applications, for example, we've given the option of running either on a standard compute infrastructure or, using consistent APIs, on some kind of offload mechanism. We've always thought through how to offer that acceleration capability if customers want it, and we continue to do that, especially with machine learning and deep learning, where other products can provide capability that extends beyond the Intel Xeon line.

And Bob, I think Intel made a few acquisitions in that area in the last little while. Yes, we've definitely made a huge investment in machine learning and deep learning, really taking on some of the deep learning applications we think are coming up quite a bit. So we're pretty excited about how we take what we have and extend it: for the highest-performing parallel processing applications out there, we want to provide the kind of consistent framework we've normally offered.

OK, I think we have about 10 minutes left, and I wanted to open the floor to a question or two. We have a gentleman here who's keen to ask one. Yes, I'd love the panel to share, from their experience, which are some of the smartest cities in the world, what are the top applications, and what makes them so smart. Maybe I can start.
So I'm chuckling a little bit, because I actually helped write a chapter in a book on smart cities around the world recently. Of course, every country has its favorite city or cities; what we submitted was the city of Ottawa, and by doing that we singled out, and kind of ticked off, 10 or 12 other major cities across the country. That being said, I think we're just at the infancy, at least from my viewpoint, of what we're capable of. We're doing some really cool things, but it's a race. You see the headlines: smart city here, smart city there. A lot of folks are doing proofs of concept, doing "hey, what if". Unfortunately, most of them are not dealing with a greenfield; they have to take what exists today and modify it. From that standpoint, there are the usual ones you hear about, like Barcelona and Bristol, and there are lots of cool things happening all over the place. And I think there's nothing unique about a city: whatever you do in Ottawa, why is it different from Boston? There might be a few differences; it's a little colder in Canada, right? But other than that, not a lot. So the good news is that if the work being done is based on open standards, it can be replicated elsewhere, and why reinvent something that's actually pretty darn good?

Anybody else care to weigh in on that, or shall we open the floor for another question? OK, does anyone else want to ask the panel a question? We have someone here.

Hello. My question is regarding the integration points, because all this smart infrastructure requires a lot of integration. Also, as you mentioned, this is a race, so everyone is trying to push their own platform; for everyone coming in, it's like a platform revolution. In your opinion, what would be the first step for a city: establish the platform first, or learn the ecosystem so they can build their own platform? Which is the right approach?

Well, the approach we're using, and I don't know if it's the right one, to be honest, is to create an environment where innovators can come and demonstrate what they have: universities, colleges, large multinationals, small companies; they're the innovators in this space. And ultimately the customer is the municipalities, the provincial government, the federal government, who are acting on behalf of the citizens. So I would say create an environment, which is kind of what we're doing, and then try the stuff out, kind of like the Linux approach: let's fail fast, let's try it. But the fundamentals would be having a proper infrastructure, making sure you have decent internet, thinking about security from day one, and having open data sets right from the get-go so you can share information. Those would be my key points.

Do we have another question over here? Yeah. Oh, I'm sorry, go ahead. So I've been working with this for a while myself, and I identify three different categories of analytics: real-time, near-real-time, and forensic analysis after the fact, because you didn't really know what you were looking for the first time. At least two of those imply some level of storage, the near-real-time for sure, and the forensic definitely, probably larger.
I'm just wondering what the take is on, as we move the compute out to the edge, where we're going to store the data. I assume you want to reduce the amount of data going across the network and try to keep it somewhere it can be used or reached, but not necessarily transferred to the center of the universe. I was wondering if you have comments on that.

Yeah, I'll go really quick. I definitely agree with that. It's going to come down to a data problem and where the data is. The challenge is going to be how you capture the right data, especially if you have a latency-sensitive application; otherwise you're probably going to put it in a big data cloud somewhere, compute on it there, and do whatever analytics you want. But if you can capture the amount of data that makes sense to store at the edge, and then also bring the compute to it, that's probably the architecture we're looking at.

There's actually one thing we forget when we talk about data: there's also a lot of context that goes with it. Let's say a sensor reports that it's currently 20 degrees; it's also good to know what time of day it is, and perhaps what the coordinates are, and things like that. So if we can process the data more quickly, perhaps we can even use more context. And if we can think of a hybrid architecture where you reduce the data to signals, and then store the signals for longer-term analysis, you could end up with a more efficient system.

I think a big aspect of this is that it's not so much about collecting the data and then figuring out later what to do with it. This is what we've seen in a lot of the initial big data systems applied to networks: you create a data lake, you pull the data in, you sort of stare at it and say, what do I do with that now? The key is really, as you're collecting the data, to process it, adding the dimensions that actually give you the information you need. So you've got to figure out how to process the data in near real time (it doesn't have to be real time) to provide the insight that ultimately drives the automation and the actions you take. Thanks.
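A minimal sketch of the "reduce the data to signals, keep the context" idea raised here, with an illustrative record layout (the field names are assumptions, not a real smart city schema):

```python
import time

def summarize_window(sensor_id, readings, lat, lon):
    """Reduce a window of raw readings to one compact 'signal' record.

    The raw samples stay at the edge (or are discarded); only the
    summary, enriched with context such as time and coordinates,
    crosses the network.
    """
    return {
        "sensor": sensor_id,
        "ts": time.time(),       # context: when the window closed
        "lat": lat, "lon": lon,  # context: where the reading was taken
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# e.g. hundreds of one-second samples become a single record per window
record = summarize_window("temp-042", [19.8, 20.1, 20.3], 45.42, -75.69)
```

Only the summary record crosses the network; the raw window can be held briefly at the edge for forensic queries, addressing the questioner's storage concern without shipping everything to the center.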
OK, we have three minutes left, so I think we have time for one more question, and we have one more questioner.

Just a fundamental question about applying AI to big cities and these problems. To me, the biggest problem with AI is how to define what you want your system to do, across your many sensors, your network, everything else, and your closed-loop automation; how to define what you want it to do, and how to define what you never want it to do. And then how do you go about making sure the training set you produce actually achieves that all the time, especially if the system is doing ongoing learning?

Yeah, that's a great question; that is exactly what made the original systems of 30 years ago very brittle. Anyone want to take a crack at it?

Well, you make a very good point. I think the first step is to pick the types of problems we can solve with intelligent software and intelligent machines. The way we look at this at Nokia, and personally, I think it's a ways off before we get to the step where software is fully automating and taking action on a continuous basis. We see several steps in between. The first is predictive analysis: using the data to understand whether a failure is likely to occur. Today that's a very manual process, very hard to do, but there's enough data that we should be able to automate it, so those are the kinds of things we start with. Then there's automating customer support and response: when there is a failure, collecting all the information and analyzing it to get to the root cause without a very long period where humans are trying to figure things out. That's another example where you can apply artificial intelligence and machine learning without taking the risk of, well, what if the machine decides to do something we don't want it to do? With those steps, we start to understand better what the inputs, outputs, and actions are, and we can ease our way into the types of configuration and automation we can apply in active networks, in the active environment.

Anybody else want to take a shot at that? It's a very interesting question, is all I'll say, right? Any time we talk about automation, we definitely have that conversation, and there are many different ways to think about it. Say a failure is happening: there is a cost to not doing anything about it, and are we willing to pay that cost? That's really how I'd approach the problem. Sometimes it just takes way too long for humans to figure out that there is a problem, and you can't wait that long.

So we're just about out of time. I want to thank you for your very hard question, and I want to restate what I think was a really good answer from the panel, in my own words: be smart about the scope of what you expect, and then you can succeed. Again, I'll go back to 30 years ago, when we first set out to build systems that basically replaced humans, and we didn't succeed because the scope of the problem we set was too big. Today, I think we are succeeding in applying AI techniques because we are smarter about setting a scope where we can have successes, and then we grow from there.

Okay, I want to thank the panel for the discussion today, and the audience for getting here at 9 a.m. on the third day of a conference, after the party last night. Hopefully you learned something, and if you have any follow-ups, we're still around to talk. Thank you.