So, hello and welcome everybody to the .edu special interest group. I'm Diane Mueller. I am the director of community development at Red Hat for OpenShift and the OpenShift ecosystem. And today we have with us our new chair for the edu special interest group, Stephen Braswell from UNC Chapel Hill, who was voluntold or volun-coerced into chairing this SIG. So we're really pleased to have him with us. We have a bunch of other folks who are on the mailing lists, on the main mailing list for OpenShift. A lot of other edus besides UNC. These are just a few of the ones who accepted today's invitation. At Red Hat we have Linux and all kinds of other products and lots of other edu engagements, but really we're trying to focus this special interest group on things like OpenShift, the container ecosystem, scaling things in the cloud, and how that all applies to .edu. And so that's kind of why I'm trying to gather all of this. I've had lots of conversations in the community with edus, and so I thought it would be timely to get you together before you all go on vacation, though I think a lot of folks have gone on vacation already. So we have set up a mailing list. One of the things I'm going to ask you to think about is whether you like to use, or are willing or able to use, Slack, or whether you're avid users of IRC, or other ways that we can communicate with each other. That's what I'm trying to figure out: what the best way to engage in these conversations will be. And so that will be helpful for me to know. But for today's agenda, the mission and the goals of this, and you can change them as you want as the edu folks, are to discuss, develop, and disseminate best practices for administering, managing, and operating OpenShift in an edu setting. I find that can be lots of things, you know, whether you're using it for hosting your university's web infrastructure or you're using it in a classroom setting. 
There's lots of different things I've heard of people doing. If you want to join, or you want your peers at your university to join, there's a web page set up here with the URL for the SIG off of the Commons site. And we, as I said, coerced Stephen Braswell at UNC to chair this session. So I'm going to stop sharing my screen and let Stephen talk a little bit about his experience, because my university days are long behind me. But I'm very grateful to have Stephen here with us. And so, Stephen, if you want to share your screen. Hi, everyone. I'm Stephen with the University of North Carolina at Chapel Hill. I want to talk about two basic things today: give an overview of what we've done with OpenShift at UNC, and then talk about some ideas that we have here for the edu SIG. And then I'll hand it back over to Diane and she'll talk about other ideas that she's come up with as well. First, a little bit about me. I've been working here for a little more than 16 years. I work in our middleware services group, which is in the central IT group. We manage basically the applications infrastructure. Historically we were system administrator type people, and we still work some with systems, but we have a separate group that manages the operating system and hardware level for us. We manage basically everything above that for the applications. That includes Sakai for learning management, OpenShift, and a big WordPress installation, or two big WordPress installations, for content management. We still have legacy Apache web servers that host various static content and programmatic content for end users at the university. We host a large number of Java applications. Some of them are custom applications for departments that are housed in Tomcat. We have JBoss for some applications that are specifically supported on JBoss. And then we still use GlassFish for some custom, basically ERP-like systems. 
These are things where, even though we implemented PeopleSoft, there's functionality that PeopleSoft doesn't do that's custom to the university. So we have Java applications that are dedicated to that. We also run our log aggregation system; we use Splunk for that. We have application performance monitoring tools. We manage collaboration tools for the university. We just recently took on helping manage reporting systems. And a lot of other basically web-based services get thrown our way. So our group does a lot for central IT, and OpenShift is only part of that. Are you still seeing the one slide? We didn't go to the next slide. Okay, let me just do it this way then. All right. So one of the things that my group really tries to do is help with the evolution of IT. Outside of universities, people are growing with the industry. Universities can be a little slow. Particularly, we're a state government entity, so we move a little slower, since our funding depends on the state funding. So we've been evaluating: okay, what do you run on bare metal? What do you run on VMs? And these days, what is more appropriate for running in containers? And we're trying to lead that here at UNC Chapel Hill. That includes things like implementing OpenShift for PaaS, working on tools like Puppet and Ansible for automation, and so on. That's one of the reasons that we're really big into pushing OpenShift: we're trying to get things moving forward. Our organization used to be pretty good at keeping up with technology, and we've slipped a little lately, and we're trying to help push that forward again. Very slow to advance the slides here, too. Yeah, yeah. We'll figure it out next time. So we branded our OpenShift implementation as Carolina Cloud Apps. We're using OpenShift Enterprise version two, which got renamed to the Container Platform. We have a self-service sign-up tool so that people go and sign up for the service. 
The primary reason for that was we wanted to lock the namespace down to their university user ID, and there wasn't an easy built-in way to do that with version two of OpenShift. So we wrote it: we locked people out of being able to create their own accounts, and they have to go through our tool to sign up. So that was just a namespace thing. We do provide it free for everyone at the university. We're not charging for it at this point, and we don't foresee charging for it anytime soon. And then we centrally manage the infrastructure and the OpenShift software for the users. Some of the benefits we saw were the basics of OS management: the operating system, the programming languages, databases, all of that. We manage that for them so they don't have to, and they can focus on developing their code, whether they're a student doing it for a class, a professional programmer doing it for the business of the university, or someone just wanting to learn a programming language. We also had instances where someone might be hosting a web server just to run some PHP code, and they bought a desktop machine and they're running it under their desk. But it wasn't getting patched, so we had vulnerability issues. We wanted to provide a mechanism so that people aren't doing that anymore. We also have some internal consolidation needs. As I mentioned before, we have a lot of Java application servers, and one of the things we saw was we could consolidate those onto the OpenShift platform. Right now, we run all those on separate VMs, and they're clusters for departments and other units. Multiple applications might live in the same Java container, but that makes tuning the JVM very difficult because all the applications have different needs. If we could put those out in containers, particularly in version three of OpenShift, we could tune those better and then provide some autonomy for the developers to manage some of the restarts and such that they might need for the applications. 
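As a rough sketch of what that per-application JVM tuning can look like in OpenShift v3 (the deployment names and values here are invented for illustration, not from UNC's actual setup), each application's deployment can carry its own JVM options and container resource limits instead of sharing one app server's JVM:

```shell
# Hypothetical app names; each deployment gets its own JVM settings
# rather than competing with other applications in a shared JVM.
oc set env dc/grad-portal JAVA_OPTS="-Xms256m -Xmx512m"
oc set env dc/hr-reports  JAVA_OPTS="-Xms1g -Xmx2g -XX:+UseG1GC"

# Container-level resource limits can also differ per application.
oc set resources dc/grad-portal --limits=memory=768Mi,cpu=500m
oc set resources dc/hr-reports  --limits=memory=3Gi,cpu=2
```

These commands require a running OpenShift 3 cluster, so they are shown here only as a configuration sketch.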
We also didn't really have an easy way to offer LAMP functionality. Basically, if you wanted that, we had a group that would provide you with a virtual machine, but then you had to manage everything yourself: the operating system, the programming languages, the patching, all of it. And that wasn't provided for students anyway, and we wanted to provide that kind of functionality but make it easier for people. We also had some departments that, even for simple services like Drupal, would work with an off-campus vendor, and that vendor would host it for them or provide some other kind of services. We wanted to keep that spending on campus. Even though we provide the service for free, that's an incentive for them not to go and spend it off campus, and then all the programming and content resources stay on our campus network. And we also have some legacy web servers. We still use AFS as a file system where people put their static HTML, programming language code, and such. We had some projects trying to move people away from that and decommission it, and this was a way to work on that. Our initial rollout was in October 2014. At first, we had set it up so that only individuals would get space in OpenShift. After we did that, we realized, oh, we're still back in the old mentality where that's tied to that particular user. If that user was a grad student or an employee that left the university, it's still tied to them, and changing ownership of that was a little hard in our existing setup. The namespace was tied to their user ID, and then that was entered and tied into DNS. So to work with the departments and other project needs, we created a separate OpenShift environment just for departments and projects. And what we did there is we created a fake user ID that wouldn't conflict with the university IDs and that basically owns the namespace, even if there's only one application in the namespace. 
We did that so that it's not tied to an individual. You can set up access controls and such, and it provides a mechanism to make the collaboration stuff easier. We fronted the hardware and software costs for three years so that we had time to work on getting it implemented and getting people used to using the system, without having to worry about, okay, it's been year one, we need to fund it again. And then one of the things that Red Hat helped us with was a training session of about 40-something people. That was an interesting learning experience, trying to get other people on the same page. We started from scratch, where basically we worked with them to install Git on their machines if it didn't already exist, and then went through setting up a sample application that we had developed, setting up a database, breaking it, and then showing them how to fix that stuff. For the most part it was a good idea. We probably should have done it with a smaller group of people. What we've done instead is go out and work with departments and groups one-on-one, and we'll re-evaluate whether a training session is a good idea in the future. One other bit that's not on this slide is we started our proof of concept roughly around December 2013, January 2014, and worked on that for a couple of months. We were ready to actually go live between August and September of that year, but we had some other major projects that were going to go live at the same time and we were asked to delay. So we were ready to start a little bit earlier but just didn't have the opportunity to. For anyone who's already deployed OpenShift on-site, and that's the other thing that I'll briefly skim on slide number four, we are running it on-premise. This is the basic architecture. Like I said before, we have the individual space and then we set up a departmental one. Under the departmental section you'll see some boxes labeled sensitive. 
One of the things that we want to do is be able to offer hosting for sensitive data, which I'll talk about a little bit later: HIPAA and FERPA data and the like. And just so that we didn't need to duplicate it, the environments share support nodes, which run our DNS server, our MongoDB, and ActiveMQ for OpenShift. Our campus DNS server doesn't have API access for us, so we're running our own and pushing the updates up to that. We had some expected use cases for the project. We expected people to do both production and non-production applications, particularly around PHP, and we were hoping Java. We've seen in some stats I'll put up later that it's primarily been PHP users on OpenShift. I mentioned the legacy AFS space and our old web servers. We have lots of programmatic content there, and we're hoping people would use the system to migrate those applications from the old system into this, where it's a little more modern and they have a little more control and information, like logs and stuff, that they don't have access to today. I mentioned that we have a big WordPress installation, where we provide a multi-site WordPress for users to set up websites with a common set of plugins and themes. They do run into some problems where a department may want a plugin or theme, specifically for a professor, that no one else wants to use, so for the shared environment the team that manages that says, well, we're not going to install that. So we expect that people would want to run their own WordPress or their own Drupal and use their own versions of plugins and such, and OpenShift would be a place for them to do that. They're still campus resources; they don't have to manage the VMs and such. 
We are a university, so we expected a lot of learning usage for the system, around learning a new language. If you're a PHP programmer and you want to learn Python or Ruby, it would give you an opportunity to do that. Or just sandbox applications for proof of concept, whether for a student or a professor, or if you're working at a department and you're designing an application to show something that they may want to use to replace an existing application, without having to again get a separate VM or do it only locally in VirtualBox or some mechanism like that. And since we do have a computer science department, we also expected classroom use in particular. There hasn't been as much of that yet, but there is interest. Once we get to version three, we're working with a computer science professor who's interested in using IPython and Jupyter to do some classroom stuff, and so we're working off and on with him on his needs there. So Stephen, we have a bit of an IPython guru in Graham Dumpleton, so hopefully he can do a session on that and bring some of his expertise in on that. He's got some good connections in that community. Yeah, I talked to Graham at Summit, and he's very, very willing to help me with the IPython and Jupyter stuff. What I need to do is have a subsequent meeting with the professor to gather some more information about how the professor wants to use it, and then Graham could better write up a use case for me on how to deploy that in v3. Some notable use cases: we had the ones we expected, and here are some ways people actually used the platform. 
One of them is we worked with a professor who had some students developing some mobile applications. I think the professor was in a biology department. Basically, the mobile application allowed high school or middle school students to go out into the field, you know, just out on a field trip, and use the mobile application to identify bugs and the plants that they're on and such. While that runs on the Android devices themselves, the API for it and the backend database were running on OpenShift. We were really happy to work with them on that, and we continue to work with them, and they seem to be very happy with the platform. We have someone that wrote some kind of psychology blind study, no sensitive data there, an application to handle that. People are using it for digital signage, so the backend that serves the content for the digital signs runs on OpenShift. Again, I mentioned the WordPress stuff: people are putting up their own department websites on various CMSs, either WordPress or Drupal. Some of them we didn't even know about until we started seeing traffic coming in, so that was kind of great. We worked with a researcher who was writing a virology web application. The application will actually be hosted on OpenShift, but then it would interact with our research computing cluster through an API to submit jobs, get information about jobs, and such. And then our help desk, which we call the service desk, is moving a lot of their internal applications, for things like looking at call volume and other help desk statistics. I don't remember all of their applications at this point; they've been moving so many. They're moving those from VMs that were hosted for them onto our platform, and they're particularly interested if we ever get into evaluating the new .NET functionality. 
They have some .NET applications that they run on Windows servers, and they're very simple, so there's no reason they couldn't run on OpenShift in the future, and we hope to work with them on that. Some very basic statistics: we're almost to 1,100 users on the system, with 468 gears. The most popular cartridges are PHP and MySQL. Oddly, one of the other top cartridges for our v2 deployment was cron. There are a lot of people doing scheduling in their applications as well, so that's kind of nice. I've hinted before about sensitive data. Currently our information security office hasn't approved our OpenShift version 2 for sensitive data, and we've been working with them for a little over a year on that. It's not technical challenges with the platform; it's more policy on our end. We were trying to play nice and ask them for permission to be able to host sensitive data. Not every group at the university talks to them first; some talk to them later. We talked to them first, and so we got bogged down in them deciding that they needed to flesh out a little more of the policy. We've been working with them on remediating any issues that they have with the platform. Again, it was never really about OpenShift. It was about policy: how are you doing backups? How are you doing log management and aggregation, and things like that? So we've been working with them to get that done. We're very close to it finally, and we have a lot of people, a lot of departments particularly, who want to host things that have HIPAA data, and we expect that once that's approved, the numbers will dramatically start increasing on our usage. We've been stalled a little bit at this point, but we expect that'll dramatically increase. Some of the challenges we had: historically, we were a behind-the-scenes group, where we managed the systems but didn't actually directly deal with campus customers. 
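For anyone unfamiliar with the v2 cron cartridge mentioned above, here is roughly how it was used; the application name and the job itself are made up for illustration:

```shell
# Add the cron cartridge to an existing v2 application (app name is invented).
rhc cartridge add cron-1.4 -a myapp

# Jobs are plain executable scripts committed into the app's Git repo under
# .openshift/cron/{minutely,hourly,daily,weekly,monthly}/ and run on that schedule.
mkdir -p .openshift/cron/daily
cat > .openshift/cron/daily/cleanup <<'EOF'
#!/bin/bash
# Example job: prune temp files older than a week from the gear's data dir.
find "$OPENSHIFT_DATA_DIR/tmp" -type f -mtime +7 -delete
EOF
chmod +x .openshift/cron/daily/cleanup
git add .openshift && git commit -m "Add daily cleanup job" && git push
```

The `rhc` commands need a live OpenShift v2 broker, so this is a sketch of the workflow rather than something runnable standalone.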
We typically dealt with other IT people, mostly within our own department and a few in other departments. This is our first service where we're directly interacting with customers, helping them out, getting their applications deployed and such. So that was a challenge for us. As many of you working at a university know, there are varying people who might be using the system. It might be an administrative person in an office just doing some things with WordPress. It might be a student. It might be a seasoned developer. So the varying range of technical skills, and helping those people, was initially a challenge for us, which kind of goes along with the next one: customer documentation. Now, Red Hat had great documentation on using the OpenShift platform, but we wanted to have some custom documentation for our implementation, written such that it would cover everyone across their varying technical skills. While we understand you need to have some technical knowledge to use the platform, we provide the commands that you need to run to do the basic operations. And then, for example, if it's WordPress, they just interact with the WordPress interface after that. A lot of our developers, particularly our professional developers, were big Subversion users. So there's a workflow change for them to use Git. They weren't necessarily big into the open source world, so they hadn't had a lot of familiarity with Git, and we still have some challenges with that today. Then the workflow of pushing to Git and it automatically getting deployed: that was just a change that we had to work through with some of our customers. And since we really wanted people to start using the platform, marketing was a big challenge for us. I'll show on the next slide that we worked with our communications office to get the word out. We had the training session. We put up posters. We had stickers made. And then we got active on Twitter for things related to our group. 
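For readers newer to Git, the push-to-deploy workflow described above can be simulated locally with a bare repository and a post-receive hook; OpenShift's version replaces the simple checkout with a build and gear restart, and every path and name below is illustrative only:

```shell
# Local simulation of push-to-deploy: pushing to the server-side repo
# triggers a hook that "deploys" the code.
set -e
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/app.git"
mkdir -p "$tmp/deploy"

# Server side: a post-receive hook that deploys by checking out the push.
# OpenShift runs a build and restarts the gear here instead.
cat > "$tmp/app.git/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree="$tmp/deploy" --git-dir="$tmp/app.git" checkout -f master
EOF
chmod +x "$tmp/app.git/hooks/post-receive"

# Developer side: clone, commit, push -- the push itself triggers the deploy.
git clone --quiet "$tmp/app.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
echo "hello from openshift" > index.php
git add index.php
git -c user.name=dev -c user.email=dev@example.edu commit -qm "first deploy"
git push --quiet origin HEAD:master 2>/dev/null
cat "$tmp/deploy/index.php"   # the hook has now placed the file in the deploy dir
```

The same mental model carries over: developers never copy files to the server by hand; the push is the deployment.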
So getting the word out there, particularly on changes to the platform, patching and such, and ways to notify customers about changes, that was a big thing for us. We created a developer liaison program, which is just people in my group who will go out and work with customers one-on-one. Usually it's been student groups that are helping a professor, or a departmental programmer. We help them get used to the way the platform works, and again, the developer workflow changes that we mentioned. That's been really successful. Another challenge is container versus VM: when to use a container versus when you might need your own full virtual machine. That might sound simple to a lot of us, but we're working with the team that runs our virtualization management platform to make sure that we're all on the same page, so that when we give our disparate presentations around campus, we can correctly direct people to whether they need a container or whether they need a VM; obviously, high-memory, high-CPU applications you'd prefer on a virtual machine. And then another challenge that continues today is application backups. While we do system-level backups for disaster recovery reasons, we documented that our customers needed to do their own application backups using the commands in the RHC tool for OpenShift. There's also the challenge that our system-wide backups are only going to capture a point-in-time database backup, which may not be in a consistent state if it needs to be restored. So that's been an ongoing challenge for us: how to do that, how we're going to handle it in the future, and working with our customers on that. As I mentioned before, we had marketing, so we had posters, stickers, some little business cards, and we went a little overboard with the pillow. We don't give the pillows away; there's just the one of them in the boss's office. 
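The self-service application backups mentioned above look roughly like this with the v2 `rhc` client tool (the app name and file names are invented for illustration):

```shell
# v2-era self-service backup: "rhc snapshot save" tars up the gear,
# including the application's data and database content.
rhc snapshot save -a myapp -f myapp-backup.tar.gz

# And the matching restore, should it ever be needed:
rhc snapshot restore -a myapp -f myapp-backup.tar.gz
```

These commands need a live v2 broker and an existing application, so they are shown as a workflow sketch only; the consistency caveat in the talk still applies, since the snapshot is a point-in-time capture.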
But again, we really wanted to get people to use the system: telling people it's there, telling people it's free, and getting the word out. So what are we looking at for the future? Again, the sensitive data thing I mentioned. We're close to getting approval for OpenShift version 2. They want to start the process over with version 3. We don't expect that process to take quite as long, but since version 3 is a rewrite, our Information Security Office will need to re-evaluate some of the things that they did for version 2. The biggest thing for the future is moving to OpenShift version 3. We have a proof of concept for that. We haven't set up our production platform yet. That's one of the biggest things around OpenShift that we're working on these days. I mentioned before the GlassFish, the Tomcats, the Java applications. We'd love to move those into OpenShift version 3, particularly so that individual applications have their own individual containers. The JVMs can be tuned for those applications. The developers can stop and restart the applications themselves without it involving us. Things like that. We've toyed around with the idea of putting Sakai into OpenShift. We haven't even talked to our learning management team about that. It's just an idea that we'd like to pursue because, again, it's a Java application, so it could fall into the same categories as our other ones. When you talk about Sakai, and I'm just noting in the chat that it's the learning management system, for folks who don't know that, and I didn't. When you talk about beginning the evaluation and putting it in OpenShift, are you talking about perhaps taking on containerizing it? Yes. It just runs in Tomcat. We actually have a project working on the new version that's going to be on Tomcat 8. It's basically just putting it into a container. We'd probably still host the database for it outside of OpenShift, where our database team would manage it. 
But yeah, just taking the Tomcat install and the application code, putting that into a container, and running that on OpenShift, and being able to easily scale that up, particularly during exam periods, without having to get a full virtual machine. That would be really nice for us to be able to do. Even for other applications where, during student registration, we're getting hit a lot: we'll temporarily scale things up to twice as many containers, and then once that's over, we'll scale them back down, and not have to have full operating systems. This would be really nice for those cases. Another thing: at Summit, we saw a presentation by someone at Duke University, who's just down the road from us, about how they took their multi-site WordPress and containerized it, where everyone, instead of being part of the same multi-site, gets their own container of WordPress, and then they can kind of do what they want. That was an interesting idea, and we'll eventually talk to them. They aren't using OpenShift; they're doing it with native Docker containers, and we would like to do that with OpenShift. Our digital services team already creates a UNC-branded theme and has their set of plugins that they prefer, so we might create an image that's UNC-ready for our customers to use if they didn't want to use the shared platform, or if we decided to split the shared platform up a lot more. I mean, these are just ideas that we've been tossing around. In version three, there's a log aggregation tool that comes in: the ELK stack, just using Fluentd instead of Logstash. We're heavily invested in Splunk, so the big thing for us is getting the logs to go to both, so that customers inside the system can use EFK to look at their logs, but we get a copy that goes to Splunk for our information security office to look at, particularly for departmental applications and sensitive applications. 
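The temporary scale-up-and-back-down pattern described above maps onto a couple of v3 CLI calls; the deployment name here is invented for illustration:

```shell
# Before the registration rush: double the replica count (name is made up).
oc scale dc/registration-portal --replicas=4

# Afterwards, scale back down.
oc scale dc/registration-portal --replicas=2

# Or, if cluster metrics are wired up, let the platform do it automatically:
oc autoscale dc/registration-portal --min=2 --max=4 --cpu-percent=80
```

Because each replica is a container rather than a full VM, the scale-up takes seconds and costs no extra operating-system overhead. The commands need a running OpenShift 3 cluster, so they are a sketch only.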
We didn't do an F5 load balancer integration in version two, and we'd like to do that in version three. There were some challenges in version two because you had to have some administrative access. We still have to have some of that administrative access in version three, but we worked with our networking team who manages the load balancer, and we have a solution for that. I actually have some ideas about how: basically, you have to have just short of root access on the load balancer to integrate it, and it's actually a limitation of the F5 that I hope to submit an RFE to have them change. You'd still need some privileged access, but you wouldn't have to have as much privilege, and I want to talk to the OpenShift team about some potential workarounds, based on some information I gathered from F5 around that. With OpenShift version three, we get a right-to-run license for CloudForms just for OpenShift, for statistics management and such. We had that installed, and we're working with Red Hat to look at CloudForms for other things. But in the new version of CloudForms, with the self-service capabilities and the chargeback and showback capabilities for containers and such, it would be interesting for us to have that data. Even though we don't charge for the system, it'd be nice to be able to have that kind of accounting. And then evaluation of mobile platforms. Again, we've worked with Red Hat on a hosted proof of concept for Red Hat Mobile, and with the new Red Hat Mobile you can host some components of it on-site. We're not a Red Hat Mobile customer, but we are evaluating the product just to see if there is interest in it for our campus. So if we did end up purchasing it, we'd probably host it on our OpenShift platform. So Stephen, the insect taxonomy mobile app that you talked about a while back, what did they develop that in? I think they just did it in Android Studio. I remember it was an Android-only app. 
I don't think that they used any kind of framework that I'm aware of. We didn't get into the specifics since they weren't hosting the application itself on OpenShift, so I don't know the details of that. So with version three come new challenges. We haven't had a lot of familiarity with Kubernetes or Docker, so we have to get a little more familiar with that. We do have some people around campus, some of the more savvy developers and professors, who have done some things with Docker, so they know a little more, and we've talked to them and such. So that will be a big challenge for us: getting used to that and passing that information along to our customers. With OpenShift version two, the hosted Git repository basically came built in. In version three we need to host one. Again, we were a Subversion shop, so having to set that up was something new that we had to do, but it did help push us to start talking to developers about maybe migrating their existing Subversion repositories into a Git repository, even for applications that won't be hosted on OpenShift. The v2-to-v3 migration is going to be the big thing. We've been working with Red Hat; again, we're not far from the Red Hat main office, so we do get the value of being able to work with them closely since they're nearby. We've gotten some initial guides on migrating applications. We're hoping to learn from what they do for OpenShift Online, as they have millions of applications where we have hundreds, and bringing that across is going to be our biggest challenge. More documentation: we wrote a lot of documentation for v2. There's even more stuff now, particularly with teaching people some basic things about Git, and the addition of components like Docker and Kubernetes means that there are just more things we need to tell people about. So we'll write more documentation about that. For version two, we created a basic Shibboleth integration for just a few languages. 
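For teams considering the Subversion-to-Git migration mentioned above, one common path uses `git svn` to replay the SVN history as Git commits; the repository URL, paths, and remote below are all made up for illustration:

```shell
# Convert an SVN repo (with the usual trunk/branches/tags layout) into Git.
# authors.txt maps SVN usernames to Git identities, one per line:
#   svnuser = Full Name <email@example.edu>
git svn clone https://svn.example.edu/repos/myapp \
    --stdlayout --authors-file=authors.txt myapp-git

cd myapp-git
# Review the converted history, then push it to the new hosted Git remote.
git log --oneline | head
git remote add origin git@git.example.edu:myapp.git
git push origin --all
```

This needs network access to a real SVN server, so it is a workflow sketch rather than a runnable example; for large histories the clone step can take hours.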
Figuring that out for version three has been a challenge, and that's one thing that we're still working on. Troubleshooting customer applications with Docker, and why they didn't deploy correctly: we already see that's going to be a challenge, even with our very simple applications. We don't have good ideas on that yet; if others do, we'd be happy to hear them. Version three's new technology brings challenges. We were told by an engineer at Red Hat that we've clicked some buttons that other people haven't, and so we find some bugs that they haven't. So getting used to the new technology in version three again goes back to the familiarity with Docker and Kubernetes that we're trying to get through. Again, application backups: there's not an automatic tool like there was in v2. Since you have persistent storage volumes, the backups for those are supposed to suffice, and we need to figure out policy, procedures, etc. around that so we can tell our customers. And finally, I just want to get through a few ideas that we have for the edu SIG, and then I'll hand it over to Diane to team up on what she has. The biggest thing for us is we're very open with sharing information. We'll tell you everything we can about our environment. We'll share the pain points that we've had, even the very bad tools we wrote, things like that. We're hoping that others will be the same way for their implementations, because for-profit companies and edus operate very differently. We're a state-government-based public university, so we have some challenges that maybe private universities don't. We need that kind of open information sharing. Documentation: again, we wrote a lot of internal documentation. We've given that to some universities, and we're happy to give it to other universities. We have ideas about maybe coming up with some format, with Markdown, and storing it in Git, and then you can substitute our university name for your university name. 
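One possible shape for those v3 application backups, absent a built-in snapshot tool like v2's, is to pull data out of the running pods with the CLI; the label, pod, and path names here are invented for illustration:

```shell
# Find the application's pod (label and names are made up for this sketch).
pod=$(oc get pods -l app=myapp -o name | head -1 | cut -d/ -f2)

# Copy the pod's persistent volume contents out to a backup location.
oc rsync "$pod:/var/lib/app-data" "./backups/myapp-$(date +%F)"

# For a database pod, a logical dump is safer than copying live data files,
# since it produces a consistent snapshot.
oc exec "$pod" -- mysqldump -u root --all-databases > "./backups/myapp-$(date +%F).sql"
```

These commands require a running OpenShift 3 cluster and appropriate credentials, so they are a sketch of the approach, not a tested procedure; the policy questions the talk raises (retention, scheduling, who runs this) still have to be decided separately.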
Those are just ideas that we have. V2-to-v3 migration strategies: again, we're still in the early stages of that, and if other people are going through the same pain, we'd love to share information or hear from them. Monitoring ideas: for version two, the Red Hat operations team put a lot of their Zabbix monitoring on GitHub and shared some other information with us, and we used a lot of that for v2, so we're looking for ideas about monitoring for v3, whether on the infrastructure side or helping customers if they want to do service-level monitoring. Logging, which I mentioned before: do other universities just use the ELK, or rather EFK, stack that comes built into v3, or do they have their own? Are you a Splunk shop? Do you send to both? Do you use something else? We're interested in hearing about that. CloudForms usage: again, we're just starting our evaluation of the right way to use CloudForms and the data we can get out of it. If other people are more actively using it, or even doing something with the ManageIQ upstream so that we could figure out similar features, that'd be great. And then cost model: again, we're offering the service for free to our campus. If you're doing it for a fee, we'd be interested in hearing about that and why. Or if you're also doing it for free, great. Or maybe you only want it free for part of your community. Again, we're open to information sharing so that we can look at our ideas and say, okay, maybe the way we did it wasn't right, and the way these other universities are doing it is a better idea than what we did. And then there's our contact information. The email address reaches everyone on our CloudApps team. Our website is just more of a marketing thing and our signup tool. And we try to be active on Twitter; we're not as active as we'd like to be, but those are ways you can contact us. And as always, you can contact us through Diane as well.
And that's all I have. I'll stop sharing so that Diane can do her slides. Well, I'm also going to ask people to unmute themselves. You'll see a microphone with a red slash through it; it's black. You should be able to talk. You might get a little bit of an echo; I think that's one of the problems with the conferencing tool sometimes. But thank you very much for sharing all that information. We'd love to see maybe a .edu SIG repo on GitHub where we could share some of the documentation; maybe that's something we can set up. I'm going to share my screen here just to drive a little more of the conversation, because I have questions for you. The first question I have is, well, I should ask: do you want to continue meeting as a SIG? And if so, how often would you like to meet? I have a time slot in mind: Wednesday mornings at this time, Pacific; I'm on the West Coast, so that's noon Eastern. Maybe it's your lunch break, is what I'm hoping. And I know most everybody has August holiday plans, especially folks in edu. So could a few of you voice your opinions here, or maybe in the chat if you don't feel like talking out loud? John Wang or Gabriel or Chris or Patrick or Steven, does this time work for you guys? This is Chris from the University of Michigan. Yeah, this time I think would generally work for us. This is noon, so it's our lunch break, and Wednesdays tend to be a meeting-light day, relatively speaking. Monthly seems like a good frequency for these meetings, at least to start with, and then if there seems to be more content or less, we can adjust from there. But monthly seems like a good initial frequency. Yeah, that's what I was thinking. And I think maybe not the first week of September but the second week, which I think would be the 16th, if I've got my calendar figured out here; it's either the 16th or the 17th. Actually, it will be the 14th.
So that's what I was going to propose, because that'll give you a week to get settled. I don't know how your schools start, but that'll get you through the first week and Labor Day and everything. That was what I was thinking. Any objections to doing the next one then? All right. And I'm just going to move to the next slide. I'd love to know who you are, or how you self-identify, and maybe, if I spelled administrators right, you'd admit you were an administrator. Yes, plus one for the 14th. So Chris, what is your role at UMich? So I'm sitting here with two other guys. I am a business analyst with our teaching and learning group, and the guys on my right and left, Dave and Mark, are both system administrators. There's a fourth person on our virtual project team who is analogous to the way that Stephen described himself: he's also a system administrator, but works more with client-focused applications as opposed to underlying systems. Okay. How about Gabriel and the other folks that are on this call? Not that I'm picking on you, Thomas. Hi, this is Thomas. Yes, I'm also from the University of Michigan; I'm actually from the College of Engineering, and I'm an application developer in our web services group. Perfect. So that's good. And I can see a couple other folks, but they've got themselves muted, so they're probably in places where they can't talk, but I will bop back here. I've also created a mailing list for this, so I'll send the same questions out to the mailing list. There were a number of topics that Stephen suggested; I was just taking some notes while we were going through them. Now I remember that Duke did that presentation on multi-site WordPress; that might be a good topic too. Things do not have to be exactly OpenShift-specific, in my humble opinion.
I don't know how you all feel about that, but it could be things like Git fundamentals or containers 101. There is another SIG for image builders, so we can pull some of the content from that and talk about it. I was hoping the folks from Boston University would come sometime soon and talk about their Mass Open Cloud initiative too, which is bigger than what you're doing with Carolina CloudApps, which is specific to your university; theirs is sharing across different Massachusetts-based educational organizations, and I'm kind of curious to hear about it. But Chris, I'm wondering if on the next one, on the 14th, you could talk a little bit about what's going on at UMich. Would that be a good thing? Or is there a specific topic that everybody's dying for, like the ELK, or it should be the EFK, stack or something like that? Is this kind of use-case sharing good, or are there things you'd like to deep-dive on? Those are the kinds of things we can talk about on the mailing list as well, but if you have opinions about that, it would be helpful. But if I can peg you, Chris, for the 14th, that would be great too. Yeah, that would be just fine. Okay, so we'll do something similar to this, and I'll keep reaching out, and bring your other colleagues too. I will add all of your names to the mailing list, and anyone who's listening to this after the fact can go to this page here that's on the screen, the interest-group section of the OpenShift Commons page, pick the edu one, and sign up for the edu mailing list, and that will get you on there. A couple of things come to mind for me in terms of things to talk about; I'd be interested to hear if the other people online are interested in these as well. One is the model that people are using to provide OpenShift to their customer base. Is it pure DevOps? Is it more standard operations? Is it something in between?
Or is it a development group? Who is it that's administering OpenShift, and how are they working with their customers? Are they using it to move to more of a DevOps model, or is it more of a traditional model? So that's one. And the second thing is: is there common interest in some of the enhancement requests? If there are common enhancement requests across multiple schools in the SIG, that might be useful information for you, Diane, and I think for all of us, to provide to Red Hat: that it's not just an individual enhancement request, but that multiple schools might be behind some of the requests we're making. Yeah, and the other thing that springs to my mind is to talk about the Trello cards that we have for driving enhancement requests, and how to do a thumbs-up or sign on to one of them. That would probably be handy for people who haven't done it yet, along with how to add a Trello card if there's a common request that rises up. That is really one of the reasons why I'm doing this edu SIG: so that you guys have a voice and can make those kinds of collaborative pushes on the actual people who are writing the code and contributing to OpenShift Origin. So that's all good. I also think we can do things like a survey on the mailing list about your question on what models people are using; that's a standing question. We can ask people to describe their model, how they're using it, and maybe its evolution, and then coalesce that information into a presentation where people talk about what's more effective for them and why. These are all great things. I think once a month will work very nicely so that we don't overdo anything. And what I'm going to encourage you all to do is ask all of those questions of each other on the mailing list; you should get invites to the mailing list by tomorrow morning, I think.
I will add everyone who was invited to this session, and if there's anyone else at your organization, say two or three other folks in the room, just send me their email addresses, or send them to Steven, and we can add them to the mailing list. So is there anything else anyone would like to add? Because we've just about used up your entire hour, and I really want to thank you for joining us today. You never know when you kick off a SIG whether tons of people are going to show up, and it's just really nice to have all of you here sharing your best practices at UNC and your experiences, and to hear from you all. So thanks, and I will talk to you all on the mailing list soon. Is there anything you would like to add, Steven? I'll stop sharing my screen, and I think you've got yourself on mute. No, thanks everyone for coming out and listening to me ramble on for almost an hour. Rambling is good. Thanks for sharing your experience, Steven. All right. Yes, very much thank you. All right, take care, guys.