Welcome back to London, everybody. This is Dave Vellante with theCUBE, the leader in tech coverage, and we're here at AWS. We wanted to cover the public sector activity more deeply. We've been covering this segment for quite some time with the Public Sector Summit in DC, we went to Bahrain last year, and we wanted to extend that to London, so we're doing special coverage here with a number of public sector folks. Anjanesh Babu is here, he's a network manager at Oxford GLAM. Thanks very much for coming on theCUBE. It's good to see you.

Thank you, thanks for having me.

Oxford GLAM, I love it. Gardens, libraries, and museums. You even get the A in there, which everybody always leaves out. So tell us about Oxford GLAM.

We're part of the heritage and collections side of the university, and I'm here representing the gardens and museums within the division. We've got world-renowned collections, which have been held for 400 years or more, comprising four different museums plus the Oxford Botanic Garden and Harcourt Arboretum. In total we're looking at five institutions spread across probably 16 physical sites. The main focus of the division is to bring our collections out to the world through digital outreach and engagement, and to bring some fun into the whole system. Sustainability is big, because we are basically custodians of our collections, and they have to be here almost forever, in a sense. We can only display about 1% of our collections at any one point, and we've got about 8.5 million objects, so as you can imagine, the majority of that is in storage. One way to bring this out to the wider world is to digitize the objects, curate them, and present them either online or on-site. So that's what we're doing.

And your role as the network manager is to make sure everything connects and works and stays up? Maybe describe that in a little more detail.

I'm the systems architect and network manager for gardens and museums. In that role, my primary focus is to bridge the gap between the technical and non-technical functions within the department, and I also look after the network and infrastructure across our sites. So there are two parts to the role. One is a BAU, business-as-usual, function where we keep the networks going and basically keep the lights on. The second part is bringing together the designs, and that's not just solving technical problems. If I'm looking at a technical problem, I step out, almost zoom out, to see what else we're looking at that could be connected, and solve it there. For example, we could be looking at a web design solution in one part of the program, but it's not relevant just to that project; if you step out, you might say, oh, we could use this in another part of the program. We may be operating in silos, and we want to break those down. So that's part of my role as well.

Okay, so you're technical, but you also speak the language of the organization and the "business", I could say, in quotes, because you're not a business per se. So you're digitizing all these artifacts and then making them available 24/7. Is that the idea? And what are some of the challenges there?

The first challenge is that only 3% of the objects are actually digitized. We've got 1% on display, and 3% actually digitized. It's a huge effort. It's not just scanning or taking photographs; you've got cataloguing, accessions, and a whole raft of databases behind it.
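To put those percentages in context, here is the simple arithmetic behind the figures quoted above (a quick sketch, nothing more):

```python
# Figures quoted in the interview: ~8.5 million objects,
# ~1% on display, ~3% digitized.
total_objects = 8_500_000
on_display = int(total_objects * 0.01)  # roughly 85,000 objects
digitized = int(total_objects * 0.03)   # roughly 255,000 objects

print(f"On display:        ~{on_display:,}")
print(f"Digitized:         ~{digitized:,}")
print(f"Still to digitize: ~{total_objects - digitized:,}")  # ~8.2 million
```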
And the museums have historically got their own separate collections databases, individually held on different collections management systems. But as a member of the public, you don't care.

We don't care.

We just want to look at the object. You don't want to see, oh, that belongs to the Ashmolean Museum or the Pitt Rivers. You just want to see it and see what its characteristics are.

For that, we're bringing together a layer which interrogates the different museums' systems. It reflects what we're doing on the IT side. The museums are culturally diverse institutions, and we want to keep them that way, because each has its own history and a kind of personality to it. Under the hood, though, the foundational architecture and systems remain the same, so we can make it modular and expandable and address the same problems once. That's how we can support this and make it more sustainable at the same time.

So you've got huge volume; quality is an issue, because people want to see beautiful images; you've got all this metadata that you're collecting; you've got a classification challenge. How are you architecting this system, and what role does the cloud play in that?

In the first instance, we're looking at a lot of collections systems that were on-premise in the past, and we're moving to a SaaS solution as the first step. A lot of it requires cleansing of the data. We're not simply migrating; we stop, cleanse the data, create new data streams, and then bring it into the cloud. That's one option we're looking at, and it's probably the most important one. But throughout this process, over the last three years that I've been with the GLAM digital program, there's been a huge amount of change, and having a static, golden copy of the data has been really crucial. To do that by going down the on-premise road, trying to build out scale-out infrastructure, would have been a huge cost. So the first thing I looked at was exploring the cloud options and Amazon solutions like Snowball and the Storage Gateway. Straightforward: up goes the data, it's in the cloud, and then I can work on the infrastructure as much as I want, because we can all rest easy knowing that from day one the data is in the cloud and it's safe, and we can start working on the rest of it. It's almost like a transition mechanism, where we start working on the data before it goes to the cloud anyway. I'm also looking at a cloud clearing house, because there's a lot of data exchange coming in the future, from vendors to us and from us to the public. So it presents itself as a kind of junction, and who's going to sit at that junction? I think the obvious answer is here.

So basically you move the assets into the cloud with either Snowball or the gateway, and you decide which one to use based on the size and the cost associated with doing that. Is that right?

Yes, and convenience. I was saying this the other day at another presentation: it's addictive, because it's so simple and straightforward to use. You go back and say, it's taken me three days to transfer 30 terabytes into a Snowball appliance, and on the fourth day it appears in S3 buckets. What am I missing? Nothing. Let's do it again next week. So we get a Snowball appliance in, say, ten days, bring it in, transfer the data. It's much more straightforward than transferring it over the network, where you've got to keep an eye on things.
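As an illustration of that "on the fourth day it appears in S3 buckets" step, here is a minimal sketch of how you might verify that a Snowball import landed, assuming boto3 and an invented bucket name and prefix (not Oxford GLAM's actual setup):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix, for illustration only.
bucket = "glam-collections-archive"
prefix = "snowball-import/"

total_objects = 0
total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        total_objects += 1
        total_bytes += obj["Size"]

# e.g. "12,480 objects, 30.1 TB landed in S3"
print(f"{total_objects:,} objects, {total_bytes / 1e12:.1f} TB landed in S3")
```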
That said, it's not the answer for everything. For example, we transferred the first workloads over the file gateway, but there was a particular server which had problems getting things across the network because of a slightly outdated OS. So we got a Snowball in, and within a matter of three days the data was in the cloud. In effect, we can do that every couple of weeks: order a Snowball, bring it in, and in three days the data goes back up into the cloud. And it doesn't cost us much to keep it there, so deletions are no longer a worry. We just keep it in the cloud, shifting it between storage tiers using lifecycle policies. It's straightforward and simple.

Well, you understand physics, and sometimes the fastest way to get data from here to there is a truck, right?

It literally is. It's one of the most efficient ways I've seen, and it continues to be so.

Yeah, simple in concept, and it works. How much are you able to automate the end-to-end process you're describing?

At this point we've got a few proofs of concept for different things we can automate, but a lot of our data is held across bespoke systems. We've got 30 terabytes spread across 16 hard disks; that's one use case at one of our offices. We've got 23 terabytes sitting on a single server, and 20 terabytes on another Windows server. It's so disparate that it's quite difficult to find common ground to automate. But as we move forward, automation is going to come in, because we're looking at common interfaces, like API gateways, and how you define those. For that, we've been inspired a great deal by the GDS API design guidance, and we're following it, and it works. So that's the roadmap we're looking at, but at the moment we don't have much in the way of automation.

Can you talk a little bit more about sustainability? You've mentioned that a couple of times; double-click on that. What's the relevance? How are you achieving sustainability? Maybe you could give some examples.

In the past, sustainability meant that you bought a system and over-provisioned it. You're looking at 20 terabytes over three years? Let's go to 50 terabytes. And something that's supposed to be there for three years gets kept going for five, and only when it breaks does the money come in. That was the old, very crude way of sustaining things, and it clearly wasn't enough. So now we're looking at sustainability from a new angle: we don't need long-term service contracts, we need robust contracts and mechanisms in place to make sure that whatever data goes in comes out as well. That was the main driver. Plus, with the cloud we're looking at a lease model: we've got an annual expenditure set aside, and it goes into that. Sustainability is also a lot about internal financial planning and about skill sets, and with the cloud, skill sets are really straightforward to find. We've engaged with quite a few vendors who are partnering with us, and they work with us to deliver work packages. So even though we're getting there with the skills in terms of training our own team, we don't need to worry about complex deployments, because we can outsource those.
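Picking up the lifecycle-policies point from above: here is a minimal sketch of the kind of tiering rule he describes, assuming boto3 and invented bucket and prefix names (the actual policy is not described in the interview):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical rule: keep new master images in S3 Standard, move them to
# Infrequent Access after 30 days and to Glacier after 90. Names invented.
s3.put_bucket_lifecycle_configuration(
    Bucket="glam-collections-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-master-images",
                "Filter": {"Prefix": "masters/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```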
So you've shifted from a CapEx to an OpEx model, is that right? What was that like? I mean, was it life-changing? Was it exhilarating?

I think it was phenomenally life-changing, because it set a new direction within the university. We were the first division to go to the public cloud and set up a contract, thanks to the G-Cloud 9 framework and a brilliant account management team from AWS. We shifted from the CapEx model to the OpEx model, and with it came the understanding that all of this is a leased service. In the past you bought an asset and it depreciated; that's no longer the case. It's a lease model, the data belongs to us, and it's straightforward.

Yeah, and Amazon continues to innovate; you take advantage of those innovations, prices come down, and you take more. How about performance in the cloud? What are you seeing there, relative to your past experiences?

I wouldn't say it's any different, perhaps slightly better, because the university has the benefit of super-fast bandwidth to the internet. We've got 20 gigabits as a whole, and we use about two gigabits at the moment; we had 10 gigabits at one point and downgraded because we didn't use that much. So from a bandwidth perspective that was the main thing, and from a performance perspective, we frankly find workloads in the cloud no different, if anything probably better.

And talk about security for a moment. Early on in the cloud, people were all concerned about security; that seems to have attenuated. But security in the cloud is different, is it not? So talk about your security journey, and share with our audience what you've learned.

We've had similar challenges with security. I would say there are two parts to it: contractual security and technical security. On the contractual side, if we had spun up our own separate legal agreement with AWS, or any other cloud vendor, it would have taken us ages. But again, we went to the Digital Marketplace, used the G-Cloud 9 framework, and it was a no-brainer. Within a week we had things turned around, and we were actually the first division to go live with an AWS account. So that part was taken care of. On the technical side, the university has a third-party security assessment template which we require all our vendors to sign up to. As soon as we went through that, it was clear AWS far exceeds what the university requires; it was just a tick-box exercise. And things like data encryption at rest and in transit actually make it more secure than what we were running on-premise. So technically it's far more secure than anything we could ever have achieved on-premise, and it's all taken care of, straightforward.

So you have a small fraction of your artifacts digitized today. What's the vision? Where do you want to take this?

I'm speaking on behalf of gardens and museums here, and on behalf of my team rather than just myself. Basically, we're looking at a huge amount of digitization, and the collections should be democratized. That's the whole aspiration: bringing them out to the people, and perhaps making the people curators in some form. We may not be the experts on a massive collection from, say, North America or the Middle East; there are people who know it better than us. So we give them the freedom to curate it, in a secure, scalable manner, and that's where the cloud comes in. We back it with authentication that works for us, logs that work for us, and rollback mechanisms that work for us. That's where we're looking to go in the next few years.
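On the "encryption at rest and in transit" point above: a minimal sketch of how those two properties are commonly enforced on an S3 bucket, assuming boto3 and an invented bucket name (not a description of Oxford GLAM's actual configuration):

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "glam-collections-archive"  # hypothetical name

# At rest: apply default server-side encryption to every new object.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# In transit: deny any request that is not made over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```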
How would you do this without the cloud? Could you do it?

Yes, but we would be wholly and solely dependent on the university network, the university infrastructure, a single point. When you look at, say, bandwidth, it's shared by students using the network out of the university and by our collections visitors coming into the university, and the whole thing requires the DNS infrastructure and everything else inside the university. It's not bad in its present state, but we need to look at a global audience. How do you scale it up? How do you balance it? That's what we're looking at, and it would have been almost impossible to meet the goals and aspirations we have, not to mention the cost.

Okay, so you're going to be at the Summit at the ExCeL Centre tomorrow. What are you looking forward to there, from a customer standpoint?

I'm looking at service management, because we've got a fantastic service desk and a fantastic team, so a lot of what we're looking at is service management: how to deliver effectively. Mainly because, as you might say, Amazon is huge on innovation and things keep changing constantly, we need to keep track of how we deliver services, and how we make ourselves more nimble and more agile, to deliver services and add value. If you look at the OSI stack, that's my favorite example: you've got seven layers going up from the physical all the way to the application. You can almost treat an organization in a similar way. You've got a physical level, where you've got the cabling, all the way up to the people and presentation layer. Right now, what we're doing is making sure we focus on the top layers: creating strategies and delivering on them, rather than looking after things that break. The effort spent at the operational layers can perhaps add value somewhere else.

All right, Anjanesh, thanks so much for coming on theCUBE. It's a pleasure to have you.

All right, and thank you for watching. Keep right there, we'll be back with our next guest right after this short break. You're watching theCUBE from London, at Amazon HQ. I call it HQ; we're here. Right back.