Hello, everyone, and welcome to JSA TV and JSA Podcasts, the newsroom for telecom and data center professionals. I'm João Lima, and on behalf of the team here at JSA, thank you for tuning in to our latest virtual CEO roundtable. Our first 100 registrants for today's roundtable will receive a fresh lunch delivered to their door, or a gift card toward a meal of their own. Today, we are excited to share our JSA virtual roundtables on a new platform that includes the industry's first virtual networking experience, with a unique opportunity to talk face to face with other event attendees before and after the panel. So make sure you head back to the networking lounge after the discussion for live networking with the speakers and attendees of today's event. As a quick reminder for everyone who has joined us today, we look forward to your participation during this event. Please feel free to add any questions you may have into the chat, or request the mic to come on camera and ask your question to our panelists directly. If you have any questions about upcoming roundtables, such as how to register or how to participate, feel free to reach out to us through our website at jsa.net. And by the way, just as a reminder to mark your calendars, our next virtual roundtable will cover edge data centers, critical low latency solutions for big cities, and that will take place on July 15th at 1 p.m. Eastern time. Without further ado, let's get started. Our topic for today is disaster recovery and network resiliency. Downtime, as you all might know, can translate to substantial losses, making disaster recovery plans critical for business continuity. With that said, it is my pleasure to introduce our exceptional executive lineup, who are ready to weigh in on our topic.
Joining us today, we have Gil Santaliz, CEO of NJFX; Paul Scott, CEO of Confluence Networks; Isaac Mian, VP of Sales and Support Engineering at Redline Communications; Warren Rayburn, SVP of Sales and Marketing at Comstar Technologies; and Sean Farney, Director of Data Center Marketing at Kohler. First, I apologize if I butchered anyone's name, but let's go around the room and have you introduce yourselves instead of me talking about you for a while. Let us know what you do, what business you're in, whether it's networking or data centers, and where you come from. Who wants to go first? Otherwise, I'll just pick someone. Yeah, I can go first. Isaac Mian, Redline Communications, a Canadian manufacturer of hardened wireless networking solutions for mission critical industries. And right now I'm here in Toronto under a complete lockdown. Still in full lockdown. Warren, would you like to go next? Yeah, absolutely. Good afternoon, everybody. My name is Warren Rayburn. I serve as the Senior Vice President of Sales and Marketing at Comstar Technologies. We are on the network side of the equation for today's conversation. We are a full service voice and data integration partner for our clients. Our services range from IT and MSP to UCaaS, and also include audio, visual, and physical security needs. So thank you all. Thank you, Warren. Gil. Sure. Thank you for having me, João, first of all, and thank you to the JSA team for allowing NJFX to be part of this session today. So, Gil Santaliz, I'm the founder and the CEO of NJFX, and we're a carrier neutral cable landing station. We've got four subsea cables that land on our campus, a campus comprised of two facilities: the original facility built in 2001 by Tyco, bought by Tata Communications, with two subsea cable systems, TGN-A and TGN-B, as well as the Seabras cable. We've got a meet-me room in that facility.
Then we built a Tier III cable landing station next door, where we host the Havfrue cable, which has capacity and fibers going to Denmark, Norway, and Ireland. We're an interconnection hub, so we're a Tier III data center facility, but we really facilitate communications between subsea cables and terrestrial systems, creating an ecosystem for dynamic telecommunications, which has been so important over the last 14 months especially. Sean, would you like to go next? Sure, thanks. Hey, everyone, Sean Farney. I'm the Director of Data Center Marketing at Kohler. We've actually been making power gear for over 100 years, including, more recently, our four megawatt genset. Before that I built and ran data centers for Microsoft, and I'm coming to you today live from the wilds of rural Wisconsin, outside on the water. Thank you. And Paul, last but not least. Thank you, João, and an extended thanks to the whole JSA team for this opportunity. I'm the CEO and co-founder of Confluence Networks, which is a subsidiary of MasTec, a New York Stock Exchange player in the infrastructure space. We're developing a first-of-its-kind offshore Eastern Seaboard subsea, highly scalable network platform, meant to provide unmatched diversity for Eastern Seaboard north-south traffic and a step change in resiliency, performance, and security for users of high bandwidth applications. So we'll connect data centers, carrier to carrier, and the dozens of international subsea links that currently converge on the Eastern Seaboard into the domestic fabric. And hopefully we'll be co-locating with my friend Gil up there at NJFX as part of that expanded ecosystem. Okay, thank you, Paul. Before we dive into the more in-depth questions, maybe let's just establish the difference between resilience and recovery. Who wants to explain the difference between resilience and recovery to our audience? I'll take the first stab at it.
From my definition, resiliency means I'm not going to have to recover, right? If I do a good enough job of building a system that's resilient, my customer won't feel any kind of issue requiring a quote-unquote recovery. And I'm talking from the perspective of a facility providing power, cooling, and interconnection services to physically connect these networks together. It's all in the planning. We call it blue sky planning. If you build it right, and you prepare, and you have methods of procedure for any kind of changes, you really are providing resilience where you don't have to have a recovery. No, I think that makes absolute sense. Would anyone like to add anything else? Isaac? Yeah, sure. I'll just build on that. Resilience, from my perspective, is failure avoidance. And then recovery is: okay, what do you do? How soon can you recover once you have a failure? Because let's not forget that Murphy does exist. Anything that can go wrong will go wrong. Even if it cannot go wrong, it will still go wrong. Failure is going to happen no matter what, no matter how resilient the system is. So then it all becomes about how soon you are going to recover from that failure. What is at risk? What's the cost of a failure? And as the failure duration increases, that cost increases, so how can you minimize it? I come primarily from the electricity infrastructure industry, and in that industry, in Canada, if there is an electricity outage in the dead of winter, it's not just about money; it's about people's lives being at risk. So recovery becomes very important at that point. It can mean different things for different industries, but the key difference between the two is this: with resilience, you first try your best, designing and operating your system to make sure there is no failure; but then you recognize that failure is going to happen eventually, and you make sure you are prepared to recover from it.
In the networking world, we use terms like MTBF, mean time between failures, and MTTR, mean time to resolution. These are the kinds of metrics that are used. Yes, and we'll be touching on those soon as well. Sean, would you like to add to this? Yeah, thanks. In the data center space, it comes down to planning, and the design point was already mentioned. Resilience is a concern in the basis of design: that phase of designing your build is really where you address resiliency, right? Do you have parallel electrical and mechanical infrastructure? Do you build for concurrent maintainability? Those types of things, so that, as others have mentioned, you don't have an interruption of uptime. But you also do have to address recovery. If there is an outage, what do you do and how? And you have to practice that, and you have to train to it. Service providers obviously have to have both. They want to avoid downtime; otherwise, they're remunerating customers, many times due to contractual obligations. But if there is an issue and a problem, you do have to know what to do, and you have to practice that. And usually that's audit-driven. When I was at Microsoft, we had to have both, and we had to test the what-if scenarios at regular intervals. Okay. Well, that makes sense. Let's talk about when things go wrong, and paint a picture of why this is all very important. I mean, how bad can things get? Isaac, I think you already touched on the point here, almost between life and death. But how bad can things get, even on a business level? We used to have those reports, I think from Ericsson in the old days or something like that, where the cost of downtime in the data center space was huge. Every minute, every hour was a fortune being thrown out of the window. And the latest one we had was from 2014 or so, so nowadays it would be a lot more. How bad can things get? To be honest, I don't think there's a limit to that.
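[Editor's note: the MTBF/MTTR metrics and the per-hour downtime costs the panel mentions can be tied together with a quick back-of-envelope calculation. Here is a minimal sketch in Python; all figures are illustrative assumptions, not numbers from the panel.]

```python
# Back-of-envelope availability and downtime-cost estimate.
# All input numbers below are illustrative assumptions, not figures from the panel.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures
    and mean time to resolution/repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def expected_annual_downtime_hours(mtbf_hours: float, mttr_hours: float) -> float:
    """Expected hours of downtime per year (8,760 hours in a year)."""
    return 8760 * (1 - availability(mtbf_hours, mttr_hours))

# Example: a system that fails on average every 4,380 hours (about twice a year)
# and takes 2 hours to restore.
a = availability(4380, 2)
downtime = expected_annual_downtime_hours(4380, 2)
cost = downtime * 300_000  # assumed cost of $300k per hour of downtime

print(f"availability: {a:.5f}")              # ~0.99954
print(f"annual downtime: {downtime:.1f} h")  # ~4.0 h
print(f"annual cost: ${cost:,.0f}")
```

Note that halving MTTR halves the expected annual downtime cost directly, which is why the panelists frame recovery speed as the main lever once failures are accepted as inevitable.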
I'll give you a couple of examples. The BP oil spill was a failure of infrastructure, and its cost still continues. If you look at the complete cost, it's in the hundreds of billions of dollars, and it still hasn't ended: there's the environmental cost, and the communities are still dealing with it. And again, I come from a mission critical infrastructure industry, so I'll give you those examples. If you look at electricity infrastructure, the most prominent blackout I remember is the 2003 blackout; I think all of us here are old enough to remember that in the northeast part of this continent. The cost of that outage, if I remember correctly, was estimated at somewhere between $6 and $9 billion to the economy. That's how bad it can get. And as I mentioned, if this happens in the dead of winter, in a place like Canada or North Dakota, you're talking about human lives at that stage. I might add that we all recall the Christmas Day Nashville bombing, which took out a central office for an extended period: an example of how things can happen even in what appears on its face to be a resilient, multi-path environment where networks should stay up. Part of our case logic for building this subsea link up the East Coast, and this is coming from big enterprises, carriers, and service providers, is that we really need to think about improving our diversity.
It's one thing to have four or five multi-path fiber routes into your core data center, but with cloud adoption growing at an alarming rate, and with the move from on-prem to spinning up virtual servers in a distant environment, the best ability is availability. It starts with, and we beat this to death, feeling good about your core network architecture, so that when events happen there's a decent chance your uptime remains and your server environment is available. But when it fails, you need your disaster recovery plans, and they need to get adjusted, whether that's cycling generators or working with the teams on: if we face a real challenge, what is our action plan to tighten that mean time to recovery? And I mean, this is a really topical subject, because with everything that's been going on over the last 14 months, a lot of companies are jumping into cloud and adopting digital strategies, and sometimes this is a topic that gets relegated to an afterthought; then, when they're hit with something bad, things don't go according to plan, which is part of what this is about. But speaking of plans, for those who don't know, how often should a business test its environment, and what is the process like to test a business system? Yeah, I'll jump in there from our perspective, João. What we assist clients with is developing those business continuity plans, and as part of that interaction we outline their uptime targets, targets that also convert into their outward-facing service level agreements and otherwise. A lot of it, for us, goes right back to that design consideration. It's that mantra of really not if, but when. In preparation for today's dialogue, I was reading up on a few studies, and one interesting one referenced a survey of 500 global IT leaders: half of those surveyed stated that they had outages lasting 30 minutes or more at least four times through the year of 2020.
Given the reliance on cloud architecture and the security considerations that this pandemic thrust upon the market, it certainly gave people pause, and I think it pushed these BCPs, business continuity plans and initiatives, into the forefront of their thought process. It does come down to the testing aspect of it. Depending on the business, there are compliance considerations, arguably, but as a best practice we often start from a quarterly cadence. For us, it's a matter of defining the people and their responsibilities, and the testing elements all the way from the edge and the hardware up through and including connectivity to those cloud elements, avoiding overlap and otherwise, to make sure we have that resiliency accounted for. Would you like to add something? I would. If you think about disasters, we're still in a disaster. We have employees who aren't in office buildings. We're asking traders to trade from home. I don't know if there was ever a game plan for this: telling the entire world to go home and not go where the big pipes existed in those office buildings. We had data centers repurposing their data and access to the internet, and the internet became mission critical. This thing that evolved over the last 15 years called the internet is what kept us all coordinated and connected, and it's how the world works today. We wouldn't be having this conversation if it weren't for the quality of the internet. What makes me nervous, and Paul touched upon this, is that there was a quiet Christmas Day that just went by where one building in the middle of America had a problem caused by one individual, and four states lost the internet for a couple of days. That makes you think: how vulnerable really are we?
We have to be responsible as a community of telecommunication leaders and make sure we point out issues where we see the evolved internet developing pinch points, where all of a sudden one failure at one building built in 1920 creates a catastrophic effect for an entire region, potentially globally. And that comes down to planning your infrastructure and putting things in place that always work, no matter what happens. We can lose a downtown environment. We can lose an older building. Let's put assets in purpose-built facilities, not older buildings with underground parking that could be very susceptible. Going forward, the internet is no longer a nice-to-have for video games and doing your homework and looking at websites. No, it's how we're going to be communicating for the foreseeable future. And we need to make sure that as leaders in the industry, we're making the right investments to make sure this always works. Things like the cloud don't work without the internet. It's a great point, Gil. Within a matter of weeks, we found ourselves in a global business continuity exercise, right? One that's been going on now for more than a year, which is just amazing. So, back to the question around how much you should test and what you should test your recovery plans for: more testing, and for things we haven't even thought about. Look at what happened a few months ago down in Texas. Just due to a failure of imagination in the design of the electrical grid, the lights went out; and talk about mission critical, not to be morbid, but people's lives were involved, right? Everyone rolled over to generators. Knock on wood. Thank goodness for generators. They kept data centers up. They kept hospitals up. They kept healthcare and emergency response up. But no one even thought that could happen. And certainly no one thought this pandemic could happen. So for the crowd today: test often and test randomly.
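[Editor's note: the "test often and test randomly" advice can be sketched as a tiny randomized drill scheduler, in the spirit of chaos-engineering game days. Everything below, the scenario names and the helper function, is a hypothetical illustration, not a tool any panelist described.]

```python
import random

# Hypothetical tabletop/failover scenarios; the names are illustrative only.
SCENARIOS = [
    "utility power loss, roll to generators",
    "primary fiber route cut",
    "cloud region unreachable",
    "ransomware on the corporate VPN",
]

def pick_drills(rng: random.Random, per_quarter: int = 2) -> list:
    """Pick a random subset of scenarios to rehearse this quarter, so teams
    cannot train only for the one drill they expect at the audited interval."""
    return rng.sample(SCENARIOS, k=per_quarter)

rng = random.Random(2021)  # seeded only to make the example reproducible
for scenario in pick_drills(rng):
    print("tabletop drill:", scenario)
```

The point of seeding randomness into the schedule is exactly what Sean describes from Microsoft: audited intervals catch process drift, while surprise tabletop calls test whether the plan survives contact with an unrehearsed team.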
As I said, we had audit intervals for testing our disaster recovery plans, but we also had random tests. You'd get a phone call to do a tabletop exercise: imagine this just happened, how would you react? The industry needs to embrace that, because we are now living it. It's a brave new world. And we need to accommodate that and update our plans, for sure, whether that's bits on the wire and how BGP talks to other zones across network connections, or how you build data centers and how many generators and parallel infrastructure systems you put in. Things are much different now. Yeah. And especially with the advent of edge computing, this is going to be even more important, especially around building purpose-built infrastructure to cope with our digital lives. And if it wasn't for this industry... we were saying just before we came live how well this industry has actually managed to perform during COVID. You can just imagine what the world would have been like if this industry wasn't here, like it wasn't 100 years ago with the Spanish flu. It would have been a lot worse. So the industry does deserve a round of applause. And as was pointed out, we need to build purpose-built infrastructure to cope with today's world, not the world of 30 years ago, where one building going down can throw a few states, or even the world's internet, offline. But speaking of throwing a few states offline: how should businesses respond in a time of crisis? We are in a time of crisis; we've touched upon that. But even beyond the global pandemic: a hurricane, or the big freeze down in Texas a few weeks or months ago, I hope I got the right state. We've also had data center fires; there was a massive fire in a data center here in Europe a few weeks ago, which made headlines everywhere.
I mean, how should businesses respond in times of crisis? Isaac? Yeah, sure. So, first things first: if a business hasn't thought it through, and they've only just started thinking about how to respond to a crisis once it hits, it's already too late. At that point I'd almost say just let it be. You have to be prepared, and we've already discussed this: you have to do your regular tests, you have to plan it out. That, from my perspective, is the very first step. And you have to understand that it's not just about the past, or even the present pandemic we're in; it's the future we are stepping into. Everyone in these industries today is talking about the fourth industrial revolution, the digital transformation, and all these things. And it's good: it's good for innovation, it's good for economic development globally. It's going to be great. But that great achievement comes with certain risks. Today, we're integrating information and communication technology with the traditional industrial systems, and that is creating interdependencies and complexities that we don't understand yet. I have a doctorate in the field, and I'll tell you, I don't fully understand it when it comes to integrating ICT with energy infrastructure; and if anyone out there claims to understand all the interdependencies being created as we integrate ICT with the power system, I'd say they're lying. That is the first thing we need to acknowledge: as we digitally transform our infrastructures, we are creating complexities that we don't yet fully understand, which means that eventually a failure is going to happen. We need to think through in advance how we are going to react to these failures. That then depends on the industry and the business you're in. For something like a nuclear power plant, for example, that response is going to be very complex. Even testing that response puts the system itself at risk.
So you have to be very careful, and that testing strategy has to be worked through; usually you won't do it more than once a year. But the key focus, at a very high level, and I'll sum it up, is twofold. One is recognizing that we are creating interdependencies that we don't fully understand yet, so there is a risk out there that needs to be managed. And two, once there is a crisis, we should already have prepared for it, and the focus should then be on reducing the duration of that crisis. How fast can I recover? And how can I leverage the tools, the technologies, the people, and the processes, optimized in alignment with each other, to recover from the crisis as quickly as possible? And I'd probably add, on the marketing front, that it's important to be very open and honest with clients and consumers and keep them aware of what's going on. Warren, would you like to add? Yeah, I think that's a big key. We have found so much success in our partnerships with our clients, in expanding those partnerships and providing true value to their business. I mean, you look at the impact this pandemic had on the small business community, and you often find yourself being met with the challenge around budget. And you have to be prepared to speak to what a lack of a plan, and of readiness on the execution front, could lead to for the business. I think the key is interrogating the existing environment at all times. We've touched on the testing and the preparation and the planning. But I would only add that we have found many, many occasions where there are assumptions made, or poor pre-sales preparation from an incumbent provider, for example, whereby there are two different names or two different invoices servicing the client's internet needs.
But then we come to expose the fact that there's overlap in the plant route, shared fiber, for example, and wholesale relationships and other overlaps. So those things: I think we as service providers own the responsibility to expose those items for the benefit of our clients. And again, take it, as I said earlier, every step of the way, from the endpoint through the client edge, the carrier edge, and out into the PSTN and beyond to the internet. So "interrogate" is my keyword there. Warren, a question from me to you. As I think back on penetration testing and these things, it's easy for the CFO to say: do we really need that? That seems like an unnecessary spend. I have no idea if Colonial Pipeline performed a robust, rigorous array of protective checks, like penetration tests, to find those vulnerabilities. And when I think of my fiber world, it's one thing to have six fiber providers, three on each side of the road, all about three or four inches apart on the same bridge attachment. That happens; those are real-world examples, and not to point fingers. It's just the challenges we face across that full ecosystem of data centers, cloud compute, the network architecture, and so on. Absolutely. I couldn't agree with you more. And to that end, taking it beyond the plant in the field to the head end and the electronics and so on, all the items that go into supporting it: to your point, it just takes a level of interrogation that, again, I feel we owe to the end user at the end of the day, to illuminate those potential points of failure.
Those common points of failure, as you said. I was fortunate enough in the past to have a CLEC business, and we got literally burned by a fire at 401 North Broad Street in Philadelphia, which exposed the fact that while we had four different ISP connections out to the world, three of those ran through one specific manhole, and we had to ramp up our own efforts on the BCP front to make sure that never happened again. There's a lot more visibility into it now. I would just wrap up by saying that the tools available to the service provider community in 2021, thankfully, far outweigh what I had available when building that network 10 to 15 years ago. But great feedback. I'll say it: what Warren and Isaac both alluded to, and I think they're great points, is that we're at a point now in this fourth industrial revolution, only emphasized by COVID, where if you don't realize that technology is not just part of your business but is your business and your revenue stream, you need to reset at the board level and interrogate. I like that word. Interrogate your entire value chain, how you create products and services, and look at where there could be potential weakness. Again, this is around recovery and resilience: poke a lot of holes, because I don't think Colonial did, to answer that question. They didn't run that red team test, and look what happened. We are firmly in the 21st century data-centered economy, as I call it, and you'd better have enough generators and enough redundant routes, and you'd better be doing all the things around your technical infrastructure that make your revenue infrastructure work, or you'll simply be out of business in a few years. What I would add is that the enterprise customer, as correctly stated on this panel, has realized that technology is their business. That includes hospitals. That includes banks. Everyone is using internet technology and data transfer to make themselves more competitive and more agile in this marketplace.
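[Editor's note: the failure mode Warren describes, four nominally diverse ISP connections that secretly share one manhole, can be caught with a simple diversity check if you can obtain each provider's physical path as a list of conduit or structure IDs. A minimal sketch follows; the provider names and IDs are entirely hypothetical.]

```python
# Check claimed-diverse circuits for shared physical structures.
# Provider names and manhole/conduit IDs below are hypothetical.

from itertools import combinations

paths = {
    "isp_a": {"mh-101", "mh-117", "bridge-7"},
    "isp_b": {"mh-222", "mh-117", "duct-33"},  # secretly shares mh-117 with isp_a
    "isp_c": {"mh-301", "duct-90"},
}

def shared_structures(paths):
    """Return, for each pair of circuits, the physical structures they share.
    An empty result means the routes are diverse at this level of detail."""
    conflicts = {}
    for a, b in combinations(sorted(paths), 2):
        overlap = paths[a] & paths[b]
        if overlap:
            conflicts[(a, b)] = overlap
    return conflicts

for (a, b), overlap in shared_structures(paths).items():
    print(f"{a} and {b} share: {sorted(overlap)}")
```

The check is only as good as the path data providers are willing to disclose, which is exactly why the panel frames this as "interrogating" incumbent providers rather than a purely technical exercise.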
They too need to make security and privacy an initiative of focus. For example, we work with DE-CIX, and they've got this closed user group profile that allows even a small regional hospital, or a $10 billion hospital, to have a private peering relationship with its most trusted partners under a set of rules; and hospitals don't think to come to a facility and look for that. They've got to take the time and energy to start exploring what's available, to make sure that what happened to Colonial doesn't happen to them, because it could take them out of business and it could affect our economy. And those procurement teams that have been buried in the basement of the big corporate headquarters need to raise their hands and say: I had an answer for you, but you didn't listen to me three years ago. They've got to make themselves much more relevant to the organization and start proposing ideas such as diverse routes, unique ways of interacting with others, and technology to support their security, because it's the wild west we're going into. The internet is now critical. It's global. It's accessible anywhere in the world. Our employees are remoting into our critical infrastructure. So how do we create these barriers? How do we protect ourselves? How do we do that tabletop, blue sky planning to make sure that when the thing we never thought would ever happen does happen, we can move forward? You have to do a lot of planning. Procurement can no longer be a cost center; it should be a strategic initiative of doing the best job you can buying what you need. It's vital to the existence of the business at this point, really. Picking up on employees, and then I will only ask a couple more questions, because we've got quite a few good questions on the audience Q&A side of the chat: we've spoken about COVID, about hurricanes, big freezes, and fires, but often the problem starts with humans.
A lot of downtime is human-made, let's put it that way. How should businesses' disaster recovery and network resiliency plans account for potential staff wrongdoing? Who would like to take the lead on that one? That one's not an easy one, because you're talking about your family in some cases, right? The folks that you work with every day. How do you prepare? It's about having a process in place, creating a community of trusted employees that you know and work with. Trust but verify, we call it. Doing background checks is very important, and creating organizations that work with each other but also check each other at the same time. And it goes beyond your employees, because the industry is made up of technicians and vendors. We basically coordinate lots of vendors to create a product, and we need those vendors to also show their commitment by running those background checks and making sure they have a security process in place when they come to do work at our facilities or on the networks. It really is a collaborative effort of getting your customers, your vendors, and your partners to agree this is important for all of us, and having those checks and balances in place. Because in the US, insider threats are a concern at the moment; it's not just foreign adversaries, it's potentially insiders who aren't happy with the outcome of some event trying to take, quote-unquote, a New York City building down by doing something nefarious, going into a manhole; and the fire is not by accident, the fire is set on purpose with a can of gasoline, causing quite a bit of damage in New York City. So it's about thinking it through, coordinating, planning, and collaborating with others.
And I'd add, good point, Gil: not a plug for Comstar by any means, but sometimes it takes a third party like Warren to come in and stress test all of your code of conduct, how you run your administrative logins, how you refresh them, and all the rest of it. Again, that costs money, but an ounce of prevention can be worth a pound of cure in many cases. Very well said. Yeah, I agree. I think two key points, from my perspective, are to empower and educate. Get the investment from your team; empower them to positively influence what we're talking about here today, and their own experience, and the ownership they take on in terms of their service to the client, providing that best-in-class experience, taking pride in that personal brand and certainly the brand element with the company itself. So, Gil, to your point, getting that investment is so key. I also think, on the education front, one of the things we assist companies with is going through and providing examples: we'll put a spoof email campaign together, for example, and see how the staff responds. And the point is, it's not intended to be punitive; it's intended to be educational, doing a recap with staff afterwards to illuminate why we have to avoid these things and what the impact could be to the business. So, in closing for me, those are two key points and something we stress with our end users. I think the email example is quite interesting, because sometimes it's the CEO that opens the email, which has happened a few years ago.
Isaac? Yeah, just to add one point to that: the wrongdoing does not have to be deliberate, and at times it may not even have anything to do with cyber security or physical security. It can be just a lack of training in how to use the system; it can be an honest mistake by an employee. From my perspective, and in my experience at least, the way to deal with it is to go beyond just thinking about security and approach it as a proper, comprehensive risk management task. And the pandemic has shown us that it's not even about enterprise risk management anymore; it's now about integrated risk management that goes beyond your enterprise and even beyond your industry. It's having, and adopting, that high-level comprehensive approach that I think helps. Now, that makes sense. Look, I was going to ask more questions, but we do have some really good questions in the Q&A from our audience, so let me start with those now. The first one is: to what extent should governments be involved in regulating or assessing vulnerabilities in critical infrastructure, such as high concentrations of cloud locations, connectivity paths, and central offices? I think that is a big question. Who wants to take the lead on that one? Who wants to talk about governments?
We have a pretty good relationship with our US government, and we're open: we actually invite them to come in and do testing with us to make sure we have the resilience and the security levels we need. We're not afraid. But that invites regulation, right? And in our industry, data centers are not regulated. We position ourselves with certain accreditations, and we share those accreditations with our customers to make them feel comfortable. We go beyond the call of duty in doing this, but we do notice that a lot of carriers do not want to get into that space, because historically regulation means higher cost. The larger providers are especially disadvantaged in that case, because they have a lot more to disclose and a lot more to share. So it's a double-edged sword. What we do has become critical: the internet became critical overnight, and it's not regulated, and how we do our connections is not regulated. I think the best way to do it is to partner: show a plan, show commitment, and market your resilience as an operator to demonstrate that you have resilience in your network and that you're not one or two buildings away from a catastrophic failure that takes out an entire region. With the new public-private partnerships, I know our government has earmarked billions of dollars for critical infrastructure, and it's open to talking to the carrier community; we invite the carrier community to engage and at least start having those conversations. At the same time, we also don't want a lot of bureaucracy coming in and carving up the innovation within the sector.

Isaac?

Yeah, so as the Canadian on the panel, I'm going to be the proponent for government regulation. As I mentioned earlier, the introduction of technology into existing industrial systems and existing infrastructure does come with some risk, and I know that regulation at times can impede
innovation and even economic development, but societies should strive to achieve a balance between the two. Because, as I mentioned earlier, if you leave the power company to coordinate on its own with the communications service provider in the region, to understand the interdependencies that are created when they develop a smart grid system that leverages the communication infrastructure and integrates the two, without any government support, regulation, or push from the rest of society, that's too big a risk to take; you can't leave it to just two private organizations like that, in my opinion. And I'll be honest, that is an opinion. From my perspective, the interdependencies we are creating between different systems span multiple industries, and you need some kind of oversight to make sure the risk being introduced is managed; that's where government oversight and regulation come in, and they have a role to play. Yes, it has to be balanced, no doubt; we're not in an authoritarian regime, at least here in North America, but there is a role for regulation.

Yeah. Sean?

As an example, Kohler works hand in hand with the EPA in the US. They define a standard on emissions levels, called the EPA Tier 4 Final standard, and we manufacture products to that standard. They don't tell us how to do it or what to do, just a threshold to hit, and that works well for us for mitigating emissions on generators. So that's a partnership model that doesn't necessarily stifle innovation or add extra regulation.

Okay, thank you. We have time for one more question: when running a data center, what percentage of the power consumption goes to running the machines, and how much goes to cooling the rooms? Who wants to take that one? Maybe Sean. That's a
consultant's answer: it depends, right? Years ago, Christian Belady, who's now at Microsoft, authored a measurement of that very thing called PUE, power usage effectiveness. It's really a measure of how effectively, or how efficiently, you use power in a facility, where the most perfect, and unobtainable, efficiency level is 1.0: every kilowatt or megawatt coming into the facility is used for your IT critical load. It's been a bit of a battle at the hyperscale level over the years as to who can get that number the lowest; current best practice in the industry is around 1.1, which is really close to 1.0. Lots of different things go into that, including what you measure in that number and how you do your cooling: do you use a chiller-based system or free-air economization, and how do you deliver power to your IT critical load? It's a very deep question with lots of different answers, and we could spend a whole hour on it, but it is tracked, and it's a bit of a competitive thing among data centers. Enterprise data centers, because of the age of the gear and the fact that they're sometimes hosted in office buildings, have much higher numbers than the very large-scale, purpose-built data center providers, which can operate much more efficiently and drive their opex down, and that explains some of this flight to cloud and the movement to colocation providers and cloud service providers over the last few years.

I'd like to add one, too. We have a customer called Bulk in Norway. It just stopped snowing there last week, so they've got free cooling most of the year; they also have water falling constantly, so they're all on hydropower, zero emissions, all renewable energy. They've really mastered this: if you want to try to save the planet, let your compute happen in Norway. And they have a cable that goes from our building to Norway, and they
invite you to put in hyperscale facilities. There are parts of Norway where they will pay you to take the electricity off their hands. I'm not sure Bulk has one of those properties, but in northern Norway they have too much power that they want to take off the system. So I think your geography matters if you want to get a good PUE and a good renewable energy program.

Yeah, especially in the north of Norway; it's full of energy, and space too. I mean, we saw the Kolos data center, which maybe didn't go as planned, but there's no lack of energy in that country. But look, guys, I'm afraid that's all the time we have. I really appreciate you taking the time to join us today, and on behalf of the panelists I would like to thank everyone for tuning in and participating in today's roundtable. Just a quick reminder that our speakers are staying on for the remainder of the lunch hour to answer more of your questions, and there are a few more, so meet them back in the networking lounge at the table. And to our viewers as well: if you were one of the first 100 registrants, we hope you enjoyed your lunch. Make sure you visit us at jsa.net to register for more upcoming JSA virtual roundtables, including our next one, which takes place on July 15th, where leaders in our industry will talk about edge data centers: critical low-latency solutions for big cities. For me, that's a wrap. Look out for the playback of today's roundtable, coming soon to JSA TV and JSA Podcasts on YouTube, iTunes, iHeartRadio, Spotify, and more. In the meantime, see you back in the networking lounge, and happy networking.
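(Editor's addendum: the PUE metric discussed in the final Q&A is simply the ratio of total facility power to IT critical load. A minimal sketch in Python, with hypothetical meter readings, for readers who want to try the arithmetic themselves.)

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT
    critical load. 1.0 is the theoretical (unobtainable) ideal; roughly
    1.1 is best-in-class at hyperscale, per the discussion above."""
    if it_load_kw <= 0 or total_facility_kw < it_load_kw:
        raise ValueError("IT load must be positive and no greater than total power")
    return total_facility_kw / it_load_kw

# Hypothetical meter readings (kW): 1000 kW of IT load plus
# 200 kW of cooling, lighting, and distribution overhead.
print(pue(1200.0, 1000.0))  # 1.2
```

Everything above 1.0 in that ratio is overhead, mostly cooling, which is why free-cooling climates like Norway's make such a difference to the number.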