Okay. Hi. I'm Robert Barron. I'm a software engineer at the Massachusetts Open Cloud. Joining me on stage are Steve Gregory from CyVerse and Hong Xu, who works with me at the MOC. Here at the MOC, we provide services both to people who require computational services and to people who provide computational resources. What we are describing today is our collaborative effort to provide a simple-to-use GUI, Gigi, that is designed to complement Horizon. Although Horizon is a very powerful tool, in being so powerful and providing so many options, many of our users have found it intimidating. When I think of our users, I think of a group of computational chemists down the hall from me. They would write their code locally, but barter for time on a much faster system. While they may be experts in quantum chemistry, they may not know how to set up a router, the first thing that you have to do in Horizon. So our goal is to provide them an interface that automates the common tasks that most of our users would need, but also allows them to use Horizon on the same project when they need the power. This goal extends to our federated cloud, where we will have different portions of our cloud administered by different groups, adding to the complexity. Although we are designing this to work with Horizon, there are more degrees of freedom. We are using Gigi to simplify the common operations that all of our users will encounter. We don't want to reinvent the wheel. Our basic strategy is to make the common use cases simple. For example, with my typical user, the computational chemist, all they require is a big, externally facing VM. To accomplish this, Gigi provides a set of default security groups and a default network, and all the user needs to decide is how large a VM they need.
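To make that concrete, here is a minimal Python sketch of the kind of defaulting just described. All of the names and values below are hypothetical illustrations, not Gigi's actual code: the user supplies only a name and a size, and the network, security group, and image come from project-wide defaults.

```python
# Hypothetical sketch of Gigi-style "sensible defaults": the user chooses
# only a VM size; network, security group, and image defaults are filled in.
# None of these names come from Gigi's actual code base.

DEFAULTS = {
    "network": "default-net",            # pre-created per-project network
    "security_groups": ["default-ssh"],  # allows inbound SSH only
    "image": "ubuntu-16.04",
}

SIZES = {"small": "m1.small", "medium": "m1.medium", "large": "m1.large"}

def build_launch_request(name, size):
    """Expand a (name, size) choice into a full launch request."""
    if size not in SIZES:
        raise ValueError("unknown size: %s" % size)
    request = dict(DEFAULTS)
    request.update(name=name, flavor=SIZES[size])
    return request

print(build_launch_request("chem-run", "large"))
```

The point of the sketch is the shape of the interaction: one decision from the user, everything else filled in by the GUI, with the full set of options still reachable later through Horizon.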
If at some point down the road they decide that they want to collaborate with more individuals, they can change the security groups to let collaborators in using Horizon; or if they want a cluster of VMs, they can set that up in Horizon as well. We are also providing services that are not found in Horizon, such as Cloud Dataverse, which we have built on top of OpenStack. So one of our basic requirements is to allow users to switch between Horizon and Gigi. As a starting point for Gigi, we chose to fork CyVerse's Atmosphere/Troposphere project for the following reasons: it provides a user experience similar to the one we wish to provide to our users, it is open source, and it has a similar user base and some maturity. To give you an idea of Gigi's basic architecture and what we are inheriting from CyVerse Atmosphere/Troposphere: we have the front end at the top, which is their Troposphere project, and we have the middle layer, which is their Atmosphere project. I will discuss this in more detail after an introduction to CyVerse Atmosphere/Troposphere by its lead developer, Steve Gregory. Thanks, Rob. Today we will be talking about Atmosphere, some of the problems that we have encountered, and how we solved those problems for our user base. First, I would like to quickly go over the history. The Atmosphere project was started in 2011 as part of CyVerse's cyberinfrastructure, an NSF-funded project at the University of Arizona. Our first supported cloud was Eucalyptus, and in 2012 we expanded that support to include OpenStack. So what is Atmosphere? Atmosphere is an open source platform for the science and research community which simplifies the cloud computing experience for users with a wide range of computational literacy. We did this by creating a straightforward user interface and a robust REST API that allow users to view their cloud resources, collaborate with others, and focus on the science.
Since releasing OpenStack integration in 2012, we have continued to see steady user growth. Over the last five years, Atmosphere's user base has increased from 600 to over 6,000 users. During those five years, Atmosphere itself has transitioned from a single-cloud to a multi-cloud interface. In 2016, we took that to the next level, allowing Atmosphere to become a platform that can be self-hosted by other sites like the Massachusetts Open Cloud. The first problem that Atmosphere set out to solve was simplifying the complexities of cloud computing. For our user base, that meant creating a simple interface, Atmosphere Airport, that let them see only the most critical cloud resources: instances, images, and volumes. The most important part of Atmosphere Airport was making it as easy as possible for scientists who had never been exposed to cloud computing to launch and then gain access to their instance. We solved this by creating sensible defaults that enabled a one-click launch, requiring users only to select the instance size to be launched. To help users access their instances, we also provided browser-based terminal and VNC functionality. This enabled our users to go from launching to viewing their instance's desktop in just two clicks. But sensible defaults alone are not enough to get users to launch and then access their instance. Behind the scenes, the Atmosphere API was responsible for ensuring that security groups and networking were initialized, and, after the instance had made it to ACTIVE, for the allocation and assignment of floating IPs and for a boot script that would secure the VM and enable that browser-based terminal and VNC access before we handed the instance over to the user. And as it turned out, there were a whole lot of scientists who were interested in free cloud computing resources. So many, in fact, that our clouds were constantly at 100% capacity.
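The behind-the-scenes sequence just described (initialize security groups and networking, launch, wait for ACTIVE, attach a floating IP, run a boot script) can be sketched as a single driver function. This is an illustration, not Atmosphere's real internals: `conn` stands in for whatever cloud client object is in use, and every method name on it is hypothetical.

```python
def provision(conn, user, size, image, boot_script):
    """Illustrative Atmosphere-style launch pipeline (not the real code).

    conn is a duck-typed cloud client; each step mirrors one of the
    responsibilities described in the talk.
    """
    conn.ensure_security_group(user)          # idempotent: create if missing
    conn.ensure_network(user)                 # per-user network initialization
    server = conn.create_server(flavor=size, image=image)
    conn.wait_for_status(server, "ACTIVE")    # block until Nova reports ACTIVE
    ip = conn.assign_floating_ip(server)      # allocate + associate a floating IP
    conn.run_boot_script(server, boot_script) # secure VM, enable web shell/VNC
    return ip                                  # hand the address to the user
```

Only after the final step does the user see the instance, which is what makes the two-click experience possible on top of a fairly involved sequence of cloud operations.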
And with no incentive to delete instances, we found that a lot of those resources were sitting idle. So we addressed this by tracking the amount of instance usage over time and assigning a monthly quota to our users. Users who went over their quota would have their instances stopped until their quota was renewed or supplemented, which allowed others access to the cloud. The next problem we encountered is that the rapid development of OpenStack meant a new release every six months, but our production systems were not quite as agile. We needed to maintain compatibility for the production systems and ensure that the latest and greatest version of OpenStack could all be supported by Atmosphere. So we created an abstraction library that relied on libcloud and the OpenStack client tools, which enabled Atmosphere to manage OpenStack versions from Havana to Newton. Although Airport was created with multiple clouds in mind, it was only capable of showing one cloud provider at a time. Since our users had two or three OpenStack providers to choose from, this meant selecting a new cloud provider and waiting for a page refresh to keep track of their resources across clouds. Additionally, we realized that our image catalog was starting to grow, but that our users didn't have enough information to figure out which image they should be launching. So we addressed this in 2014 by creating Troposphere, a single-page app built with React and Backbone.js that had a detailed image catalog with support for versioned images, so that users could see the established history of an image and how that image had changed over time. Troposphere also focused on grouping instances, volumes, and images into logical projects rather than grouping them by the cloud provider and the credentials that created them. Here's the project details view in Troposphere. You can see that the multiple OpenStack clouds project has three instances, each from a different cloud.
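Going back to the monthly usage quota described above: a sketch of that accounting, assuming a CPU-hour-style metric (the actual metric Atmosphere uses may differ, and these function names are ours, not Atmosphere's), might look like this.

```python
# Hypothetical sketch of Atmosphere-style allocation accounting: usage is
# accumulated per instance, and once the monthly quota is exhausted the
# user's instances are stopped until the quota is renewed or supplemented.

def hours_used(history):
    """history: list of (cpus, hours_running) tuples for a user's instances."""
    return sum(cpus * hours for cpus, hours in history)

def over_quota(history, monthly_quota_cpu_hours):
    """True when the user's accumulated usage meets or exceeds the quota."""
    return hours_used(history) >= monthly_quota_cpu_hours

# A user who ran a 4-CPU instance for 200 hours against a 168 CPU-hour quota
# would be over quota, so their instances would be stopped:
print(over_quota([(4, 200)], 168))
```

The design point is that stopping (rather than deleting) over-quota instances preserves the user's work while still returning capacity to the shared cloud.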
And here's an example of a versioned image as displayed by Troposphere. Each version has a description that explains what has changed, and each represents a Glance image on one or more OpenStack cloud providers. With the introduction of Troposphere, we were able to shift our focus from making cloud computing easy for end users to making cloud computing easy for OpenStack site operators. In early 2016, we became a self-hostable platform so that other organizations could host their own copy of Atmosphere and Troposphere and manage their own clouds. In February of 2016, Jetstream Cloud became the first organization to use Atmosphere as a platform, and in September of 2016, the Massachusetts Open Cloud became the second. Every organization that adopts Atmosphere installs it with a different end goal in mind, and in order for us to become a more effective platform, Atmosphere and Troposphere had to become more flexible. This meant implementing custom theming for our interface, making authentication a plugin that could be easily swapped out for other implementations, and converting our instance deployments to Ansible, enabling site operators to choose what they wanted to deploy on an instance before they handed control over to the user. So that's Atmosphere in a nutshell. If you have any questions or are interested in using Atmosphere for your site, please come see me or Andy Lenards, the lead developer of Troposphere, after the presentation. And if you're looking for a job, we're hiring, so check out the link below for open positions. Thank you. Thank you, Steve. From that description, it is obvious why we selected the CyVerse Atmosphere/Troposphere project as our starting point. We do have some significant differences. For starters, Gigi's target audience is different: we have a combination of academic users and industrial partners. Also, we only support OpenStack. This drives our requirement to work with Horizon.
Another key difference is that Atmosphere needs to work with multiple clouds. To work with Horizon, we need to make sure that our users are defined in OpenStack, whereas in Atmosphere the users are defined in their middle layer. Similarly, in Atmosphere, projects are collections of cloud resources, whereas in Gigi, projects are defined in OpenStack. Atmosphere needs its own model of authentication and authorization, as well as multiple layers of authentication and authorization for each cloud that it connects with, whereas Gigi can just use OpenStack's model for authentication and authorization, which is Keystone. Gigi does not need to act on behalf of its users. And we are exposing other OpenStack and MOC services, such as Cloud Dataverse. To summarize, the modifications that we have made in Gigi mainly involve interoperation with Horizon. To highlight the differences: on the left is what we are inheriting from CyVerse Atmosphere, and on the right is our current architecture. The major point here is that the architectural differences are minor. Right now we are a fork; however, it is feasible that this could evolve into a single code base. In order to work with Horizon, we moved the primary authentication and authorization to Keystone in OpenStack, so that pinkish-colored component moved out of the middle layer. This was done in Gigi using the mechanism described earlier by Steve, the plug-in architecture, where we just took the plug-in and passed the credentials straight through to Keystone. We are using the catalog returned with the unscoped token to get the project definitions, and the catalog returned with each scoped token to get the instance definitions. So our projects in Gigi effectively become defined by the projects in OpenStack.
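The unscoped-to-scoped token flow just described maps directly onto Keystone's v3 REST API. Below is a standard-library sketch of the two request bodies and the token call; the `/v3/auth/tokens` path and the `X-Subject-Token` header are Keystone's real interface, while the surrounding helper names are ours.

```python
import json
import urllib.request

def auth_payload(username, password, domain="Default", project_id=None):
    """Build a Keystone v3 password-auth body; unscoped when project_id is None."""
    body = {"auth": {"identity": {
        "methods": ["password"],
        "password": {"user": {"name": username,
                              "domain": {"name": domain},
                              "password": password}}}}}
    if project_id:  # scoping the token yields a project-specific catalog
        body["auth"]["scope"] = {"project": {"id": project_id}}
    return body

def get_token(auth_url, payload):
    """POST /v3/auth/tokens; the token comes back in the X-Subject-Token header."""
    req = urllib.request.Request(auth_url + "/v3/auth/tokens",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"], json.load(resp)
```

With the unscoped token in hand, a GET to `/v3/auth/projects` (sending the token in the `X-Auth-Token` header) lists the projects the user can scope to; re-authenticating with each project's id then yields the scoped tokens, and their catalogs, from which the per-project instances can be enumerated.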
This way we can maintain a consistent view of each project using OpenStack's access control mechanisms without explicitly building them into Gigi. We are also working to bring online other MOC services, OpenShift and Cloud Dataverse, and currently these are loosely integrated, as you will see in the demo given by Hong Xu. Okay, so this is the landing page of Gigi. It basically tells you what Gigi is and what services the MOC currently supports. As you can see here, we support OpenStack, Cloud Dataverse, and OpenShift. Let me just log in using my MOC OpenStack credentials. Just one second. It's the same as Horizon; we use Keystone for user authentication. It's going to take... yeah, that was fast. Okay, so at the top you can go to the different component pages by clicking here. Let's go to the marketplace. In the marketplace we have MOC services and Gigi enablements. The Gigi enablements are the tools that enable Gigi to become an MOC marketplace. Rob and I are going to work on this over the summer, so I won't show you the details here today. In the services we have a "launch a new instance" tab that I will show you in a minute. The second thing here is Cloud Dataverse. Clicking on this takes you to the Cloud Dataverse site, and from there you can select the data file that you want to compute with and then go back to Gigi to run a data processing job with OpenStack Sahara. My colleague Jeremy, who is here, is going to give a demo about this on Wednesday morning at 11am. The third thing here is Red Hat OpenShift. Currently it's just a pointer to our OpenShift dashboard; we will have single sign-on in the future, so our users won't have to type in their credentials again just to be able to see it. So let me just log in using the same credentials that I used for logging into Gigi. I won't do more here. Now let's go back and launch a new instance.
So if you want to launch a new instance, you just click on this, and you will see a list of the images that we have. Let's say we want to launch an instance with R pre-installed on Ubuntu 16, so you just click on this, and you will see a launch button right here. All you need to do is click on this launch button, and you will see a form that is pretty much the same as the one Troposphere has. That's all you need to do before you launch a new instance. If you don't care about the name you can just leave it the way it is, but I'm going to change it to Monday. And you don't need to worry about this; you just need to make sure the instance size is big enough for the image. Let's use the medium size and click. So here you will see a list of all the instances you have in that project, including the one that you just launched. Now from here, by clicking on this icon, you can go to Horizon and check. Again, in the future this will be single sign-on, so you won't have to enter your credentials again. So here you see the instance I just launched. Zero minutes? Okay. If I refresh, I will have a floating IP. Yeah, there we go. We have a floating IP assigned here, and you can just use that IP to SSH into the instance. And as you can see here, the instances have different statuses, for example shut off and suspended, and you can go here and check; it's pretty much the same mapping between each interface. At the end of the demo I would like to show you a video that is basically a comparison between Gigi and Horizon for launching a new instance for the first time, so you get an idea of how fast Gigi can be. This is not the one that I want to show. Okay. So I have this new user called OpenStackDemo. It's the first time for this user to log into both interfaces. And as we know, in Horizon you need to set up the network, router, and subnet before launching a new instance.
But in Gigi, all you need to do is select an image and upload your SSH key, and you can have that instance launched. So in Gigi, launching a new instance takes about 54 seconds. In Horizon, you still need to make sure you set up the network correctly and then upload the key that you want to use. So I think it's taking about 124 seconds. And see, here in this form you have to go through all the tabs to make sure everything is right before you launch the instance. Yeah, it's 126. Okay. So coming up next, we're going to talk about the future features that Gigi is going to have, and a conclusion. Thank you, Hong. In the future we're planning to have single sign-on, so that there is only one account someone has to use to sign on, their BU account for example, and all the other applications will use that login. We are also in the process of working on an Open Cloud Exchange and marketplace, a federated cloud. As you saw in the user interface during the demo, there's a series of enablements, things that will enable us to federate the cloud, and a number of features. We are specifically working on bringing up OpenShift; we expect that sometime at the beginning of summer. We are also working on Cloud Dataverse, which is mostly there, as will be seen in the demo on Wednesday. The take-home message is that we are working on improving and simplifying the user experience. Our GUI will be interoperable with Horizon; we are not planning to replace Horizon. And we are planning to have Gigi, our GUI, be a gateway to all of the MOC services. And here are the two reference talks. Thank you for your time. Okay. I think what you're... Let me rephrase the question. You were commenting that in the one slide we were showing instances from different OpenStack clouds, and later on I mentioned that we are using Keystone, hence we are only using...
We can only show things from one project, one set of instances from one Keystone project, right? And that is one of the key differences between Atmosphere and what we're trying to do with Gigi. Because if we did something like that in Gigi, in our GUI, when you switched over to Horizon, you wouldn't see all of your instances. Right? So that's one of our key differences, and one of the things that we've changed is making the projects come from OpenStack. No, no. We only have one Keystone. We have a federated cloud, and so we have one Keystone service, and we're intending to have a proxy service that allows the cloud to be federated. Between Keystones, the proxy service will handle the Keystone-to-Keystone authentication. So: one login, one identity provider, multiple clouds. Any other questions? So I was curious, have you gotten any interest in the CyVerse stuff from a community outside of scientists? Do you see it more broadly applicable? Yeah, definitely. Because it was an NSF-sponsored, or NSF-funded, project, our main focus and driver was the research and science community. Initially we started as the iPlant Collaborative, and that was focused only on plant scientists. Since we've become CyVerse, we have expanded that audience to be generally all data science, and we have seen great success in that area. Looks like that's it. Thanks, everybody. Thank you. Thank you.