Hi everyone. Welcome, and thank you for coming to our talk. My name is Sam Peasley. I'm a product manager at Harvard Medical School. Really quickly, I wanted to introduce Tony and Michael, who you'll hear from shortly.

This presentation is about multi-sites and micro-teams. We titled it Maintaining Multi-Tenant Web Properties at Harvard Medical School. As a product manager, I'm going to quickly go over the landscape at HMS, how we think about work on our team, and how we juggle a few dozen websites with a pretty small team.

So again, we're from Harvard Medical School, which I'm sure you've heard of. The way Harvard is structured is very decentralized, so all the schools operate very independently. What that translates into is that when projects like this come about, the teams tend to be pretty small. In our case, we have about three to four people who work on Drupal sites on a day-to-day basis.

As it pertains to this project and the product we support: about five or six years ago, a lot of the Drupal instances were managed independently. We had a few dozen websites that all operated and were managed individually. Long story short, we recognized the problem and wanted to create a solution that was more sustainable, so we could adhere to best practices as they pertain to accessibility, make all of our designs mobile-friendly, keep everything on-brand, and make the sites a little easier to maintain. So really the genesis of this team was to create a product that could scale across the school and operate as the primary Drupal instance that HMS supports.

So what does the website landscape at HMS look like? We tend to think about our audience in two fragments, which internally we call our internal and external audiences. Our internal audience is very large: HMS alone consists of about 30,000 to 40,000 people. These are faculty, staff, students, and alumni who use our websites on a daily basis to get news, check out their departments, and get connected with different events across the HMS ecosystem. For our external audience, on a monthly basis we get about 31 million search impressions off of Google, we have about 600,000 users, and we generate about 1.5 million page views.

Our web community, the people who work on these websites day to day, is rather large: there are about 150 site editors across the 40 websites we manage. And while I think there are hundreds of websites at Harvard Medical School, we're really talking about the web properties that have the most traffic and the biggest footprint and impact at the school.

Our team spans two departments. I work in the communications office, and my colleagues here work on the web team within IT. It really consists of three developers, one of whom is not here today, and one product manager, and we do all the work and vision for planning this out. Sorry, I'm going to take this off so it stops jingling in the microphone. All right.
So, the types of websites we work on: we have three to four products within the websites we support. The main one is our flagship website, hms.harvard.edu, which we really use as a launching pad to a lot of the different websites across the HMS ecosystem; it serves as a library or an index. Other products are editorial-based: we have a magazine website and some news content. The primary focus for this talk is our departments and offices, which share an ecosystem and a main template. And then there are some course websites for the teams that market those courses.

Our office's main responsibilities are, first, to maintain the Drupal instance, making sure we're adhering to best practices and keeping up to date with all the Drupal and security updates. Second, to enhance it: we survey the community quite a bit, daily actually, to think about how we can align and find ways to improve our installation. Third, to make it accessible: accessibility is a big priority at Harvard, and we want to make sure we're up to date on all the best practices there. It's a pretty popular topic, judging by the agenda for the conference today. Fourth, to train all of the users who maintain their websites: we have a train-the-trainer model, so we really let the individual editors manage their own websites and manage the smaller teams that support them in maintaining that content. And finally, to support our users: for anything from bugs to content fixes, we are the central resource to help folks.

So how do we address everyone's needs, given that we're quite a small team? There are a bunch of different principles we talk about on a day-to-day basis, and I've lumped them into four that I like to think about when we're taking on new work and engaging with our community. These are really for our internal audience and our editors, and they're about how we take on work and how we collaborate.

Number one is transparency. We have to deal with this quite a bit, as we all do working on the web. We want to build channels like Slack to allow people to reach out if they have issues, and to feel that they can and that they're supported when they do so. We want to create systems, and share the process and how those systems work, to set expectations for everybody. I think setting expectations is really important, so we don't overpromise and then underdeliver. We want to acknowledge that no process is perfect, and to communicate that: we all make mistakes, and things happen on the web that sometimes go beyond our control. We want to work with people to identify the core issues and find solutions that solve the problem, so we put a big emphasis on that. I also tend to be more of a big-picture thinker, so I want to make sure we're being inclusive in how we solve problems, that our needs are aligned among stakeholders, and that everybody has a chance to reach out and connect with us. And last, having a transparent and understanding culture ensures that we're building trust with people, which dovetails into my next point: collaboration and relationship building.
Again, I really dig this topic of relationship building. Without good relationships, hard projects, the ones that tend to take a long time or that people put a lot of effort into, can fail if you don't have a good relationship with the person you're working with. So I think it's really important to focus on building that trust and collaboration: work with people to understand them, make them feel heard, and build a proactive rapport. Part of relationship building, I think, is to proactively seek out pain points with people and try to solve them, as opposed to waiting for them to come up. And when we think about work and working with people, I try to focus on tasks that have high impact and relatively low effort. I'm all about stacking small wins and celebrating them. A low-effort job for us might be very impactful for somebody else, so seeking those out is really important as it relates to building relationships. Lastly, effective collaboration is inclusive and innovative; it helps us align goals and share them. By tapping into diverse perspectives, we can learn new things, identify blind spots, and discover creative solutions.

The next point: building elegant solutions through simplicity. I've talked about alignment quite a bit; cross-functional collaboration and alignment, I think, are key. To build products that scale, it's really imperative for us to obsess over the user, both our editors, the people working day to day on the websites, and the audience that visits them. It's important to communicate the iterative nature of our work: it's always changing, and constant refinements and fixes make our websites easier to use and manage. And while it sometimes takes a lot of work to get to an elegant solution, ultimately it pays off, because it reduces complexity and eliminates unnecessary features that we would otherwise need to support long term. So generally we embrace simplicity as a guiding principle to enhance user experiences and reduce the amount of time we spend on development tasks. Long term, it leads to improved scalability and ensures we can have just one product that many editors can use.

And my last point is listening. As a dad, I try to be a good listener, and I use that as a guiding principle in my work, because listening is a crucial tool for gathering insights and feedback from people. As I practice listening with my colleagues, I try not to interrupt people, to practice patience, and to make sure we're hearing all the needs and challenges we have to deal with day to day, which can sometimes be really complex and conflicting. Sometimes our editors want one thing that works for them but doesn't work for another person.
So again, listening, taking all that in, and being thoughtful about solutions is really important. Going back to the relationship point, it helps build trust and rapport. After we meet with people, I try to summarize key points and make sure we're proactively following up, so that when we hear about issues, we're taking action on them.

All right, that's my bit of the presentation. I'm going to hand it over to Michael, who's going to talk about support and how we're able to support this community of web editors at the school. Thank you.

Hi, everyone. My name is Michael Garofalo. I am a Drupal developer at Harvard Medical School. The first thing I want to talk about today is a quick overview of our overall support process. The first aspect really comes down to handling incoming requests. We have a lot of different individual support channels, such as email. Sometimes users just find it easier to message people who have been working here for 10-plus years. I've been here close to eight myself, and we have another developer on the team who's been here for, I think, 27 years. People have built relationships over time, and they just find it easier to reach out to us directly. We also have a ticketing system, ServiceNow, that allows users to submit whatever they see fit, whether it's a bug, a new request, or just a simple question. We noticed that a good amount of questions were coming in through ServiceNow, and we tried to reduce that by having a Slack channel dedicated to all of our users, including the web team as well as the communications department. There we can talk about different issues, anything from a potential bug that was just introduced by a code deployment to something as simple as a recommendation on how to structure their content. The last real channel is an internal development board for all of the work we're doing. It's in Jira, which I'm sure you've all heard of, and it's really just for us developers to keep track of our work and everything on our plates.

Now, when it comes to SLAs, officially we respond within a 24-hour period, but because we're a small, personalized team, we try to get all requests responded to within 12 hours. That doesn't mean we're going to solve your issue in 12 hours, but a quicker response goes a really long way. Even if we're just reaching out to the user and saying, hey, we got your request, we have a little more discovery work to do, and if we have any follow-up questions we'll be in touch within a day or so. That gives us time to dissect the issue, do a little more troubleshooting, and, if we do have more questions, get those ready for when it comes time to meet with the user. Requests that don't require any development work, such as creating a new user account or creating a redirect, we try to get completed within one to three hours, about half a business day.
For requests or bug fixes that require development work, meaning a development ticket that goes into one of our sprints, we let the users know, and we work with Sam and the communications department to figure out when we can get it into a sprint, as well as to determine the priority.

Evaluating a request really comes down to communicating with one another, because we're such a small team; we want to understand what's going on and ping ideas back and forth. With that said, we have three broad types of requests, though of course there are outliers. The first and most common is bug fixes. With our small team size, we don't really have a dedicated QA person or process, so bugs tend to get through, nothing major, but we work quickly and efficiently to determine the scale: is it impacting one of the sites on the stack, or is it affecting all of them? That also helps determine whether we need to do a hotfix or whether it's something that can be rolled out in our next release. The second type is website creation or migration. We get a good amount of requests, maybe about five or so a week, to help someone migrate their site onto our platform or to ask for our services. Sometimes we even get requests from affiliates, different surrounding hospitals. However, because of our small team size, our resources are limited to certain groups within Harvard. Hopefully that can change with our growing team. It's slowly growing, so hopefully in time we can take on more fun and interesting projects. Speaking of which, the third type is new features. Because we're in a microsite-type environment, we look into every one of those requests thoroughly: we're not trying to build a one-off solution that only helps one site. We're trying to find a solution that can help multiple sites, because that's what's worth our resources, something that could potentially help all the other site and content owners.

This next part is actually one of my favorite topics: communicating feedback to non-technical users. Technology in general, not just Drupal, can be incredibly overwhelming. I can't tell you how many emails I get from someone asking for help while letting me know they never had any intention of learning what Drupal is, let alone it being part of their job. So my goal is really to help users not have any of those worries or fears. I tell them I don't care how they want to reach out to me. When I was in the office, people would come by my desk; now they show up in my inboxes and ask me to jump on a Zoom call real quick just so they can explain something to me. It's really whatever helps the user communicate the issue to me. And as you can see, I have a link there: this is very helpful documentation and information that the team created, which we figured would help reduce the basic types of requests, such as "how do I do this" or "how do I add a link to a menu." We have all sorts of documentation, whether it's for Drupal or even some documentation for Trumba. For the Drupal side alone, we cover essentially how to get introduced to and familiarized with our system. This alone has cut down the training requests that Sam and I used to get all the time.
It also really helped the support ticket volume go down as well. I'm going to pass it off to Tony.

All right. Thank you both. Let's see what I can do here, if I can fit everything. So, a bit of technical background. We've been talking about our microsites; this is what we call our multi-site install, which was set up a few years ago at this point. Background: I'm Tony Savorelli. I'm the web development manager on the web team at Harvard Medical School. I joined the team about 10 months ago, so the situation I found, which had developed years before I joined, was the following.

For what we call the microsites, we had a multi-site install that was trying to unify the way all the different department and office sites worked and the way their look and feel was set up: generally, to make sure that the functionality and the visual aspects were consistent across the board, not only within the microsites themselves but also with our flagship site, which uses pretty much the same content and visual components. So we have an identical configuration, with some exceptions; some sites are in fact outliers, and that's fine, we're able to control that. We have, not had, have a homebrewed content structure and frontend components, which were developed externally and which we keep maintaining internally.

One thing that was true until a few months ago was that our multi-site ran on a fat repository, which we hosted, and still host, on Bitbucket, and which we separately also pushed to our remote hosting environment. Fat repository meaning, of course, that we would install the multi-site locally, run Composer locally, and push all the artifacts, which created a few issues that I'll go through. There was no standard configuration management, which meant that all the configuration for each of the sites was stored exclusively, or largely, let's say exclusively because I think that's the better term for what was happening, in each site's database. As we know, that's kind of scary, especially in a post-Drupal-7 world where we can use more modern configuration management; it was a potential breakage point.

The downsides of all of this: the multi-site, or rather the monolithic setup, was very rigid. It was easy to make mistakes and commit them to the repository, particularly if multiple developers were working on different sites, or on the same sites, at the same time. Our local development environment is Lando, and we had no local multi-site support on Lando, meaning that every time somebody had to work on a different site, they would have to wipe everything and download the database for that site. It was complicated and tedious. Right, Michael? All of this made both the development and the deployment process slow and error-prone. In addition, problems during development that affected a single site or a subset of sites would mean delaying availability for all the sites, sometimes by several hours. And yes, we do have a very swift support system, but the worst thing that can happen while you're trying to debug a deployment as it's happening is also having to deal with incoming support requests. Fortunately, we have a very dynamic team, and with the help of the communications office we're really able to keep that under control.
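To make the fat-repository pain concrete, here is a minimal sketch of what that kind of release workflow generally looks like. This is a generic illustration, not HMS's actual commands; the remote names and commit message are hypothetical.

```bash
# Generic sketch of a "fat repository" release (remote names hypothetical).
# Composer runs on a developer's machine, and every build artifact gets
# committed and pushed twice: once to Bitbucket, once to the host.
composer install                    # produces vendor/, core, contrib modules
git add -A                          # stage the entire artifact tree
git commit -m "Release: May updates"
git push bitbucket master           # push to the canonical repo
git push hosting master             # push the same tree to the hosting remote
```

Every release carries the whole vendor tree through version control, which is part of what makes the process slow and easy to get wrong when several developers build at once.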
Another downside: releases to production became more burdensome and required change control tickets for relatively minor updates. The IT department at Harvard Medical School is a fairly large department, and change control means having to set up meetings in advance, making sure all the stakeholders are aware of the upcoming changes and have approved them, and that a sufficient number of approvals is applied to each change. We want to keep this process restricted to the more complex changes, say a major-version Drupal upgrade; that's something we really want change control for. If we have a couple of bug fixes, like some visual styling that needs to be applied, maybe we don't need change control for that. But if the development or deployment process is inherently lengthy and can potentially create errors, that all of a sudden makes the case for the change control process a lot more obvious. So that was not ideal. On top of that, with the fat repository we were working in, we accumulated technical debt and coding inconsistencies, which is never a great thing to work with. And lastly, not having standard configuration management made reverting changes pretty much impossible, and that gets scarier the more sites you have. So when I joined, I thought we could start doing better, and progressively we have; we're still working on it.

My main goal, which is not on the slide, but I'll tell you what it is because it's generally my goal for everything: move complexity upstream. If I'm working with Michael, I would like his life to be better, so I take on some of the tasks he shouldn't need to worry about every day, and he does the same with our users. I like that model, and I want it to scale up. So one of my ideas, which is by no means a new idea but was new for us, was to automate all the uninteresting tasks. We're still in the process of doing that. First, switch to a lean repository, so we wouldn't have to push all the artifacts from our local machines to Bitbucket and then to our host. Standardize the build process: we use Bitbucket, so we started using Bitbucket Pipelines to build the sites and push them to our hosting provider. Build a solid configuration-management model; I guess the slide is hyphenated because I'm missing a word there, never mind. Especially with a multi-site that can be an issue, because each of the sites needs its own config directory, both locally and remotely; that was a bit of a head-scratcher, and in a second I'll tell you how I solved it. Facilitate local development, so we wouldn't have to switch back and forth between databases every time somebody needed to work on a different site. Ultimately: increase predictability, and reduce both our effort during development and deployment of new releases and downtime for the users. Even a small change could take up to maybe four hours to deploy. That's not acceptable, and it doesn't matter how many change control tickets we open or how much we communicate on Slack to our user base, it's still a long time. And one more goal: strengthen the cohesion among sites in terms of the features enabled, while at the same time allowing more autonomy during development.
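On the local-development goal: as Tony mentions in a moment, Lando makes it easy to run MySQL commands so that all the local databases get created up front. A minimal sketch of that idea, assuming hypothetical site names and that Lando's stock `mysql` tooling passes flags through to the client:

```bash
#!/usr/bin/env bash
# Sketch: pre-create one database per site inside Lando's database service
# so the whole multi-site stack can run locally side by side.
# The site list is illustrative, not HMS's actual sites.
SITES=(hms alumni admissions postgraduate)

for SITE in "${SITES[@]}"; do
  lando mysql -e "CREATE DATABASE IF NOT EXISTS \`${SITE}\`;"
done
```

With the databases in place, each site's local settings can point at its own database, and the whole stack runs inside one Lando app instead of being wiped and reloaded per site.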
That last goal, cohesion plus autonomy, is something we're still working on, so I don't have a secret recipe for it. Maybe next year I'll have part two of this talk ready to go.

Methods and tools. I've sort of touched on these: pipelines and scripts. First, using pipelines to build our code base and make it ready for hosting, and writing scripts that accommodate both our microsites and our standalone sites. For example, instead of always typing out all the repetitive commands we need to deploy our code, which are inevitably going to be different for standalone sites and the multi-site, I decided to go out of my way to script them and unify the way we do things in a rational way. I set up local configuration and settings to allow running the entire multi-site stack locally. Especially with Lando, it turns out to be pretty simple; I had to do some research, but in the end it's very easy to run MySQL commands on Lando so that all the databases you need are created on local setup. That was extremely helpful to our local development process. And one thing I'm consistently working on is creating scripts to improve control over remote environments. If we need to run certain commands across multiple sites or subsets of sites, the idea of running them individually is not great, so I'm working on running Drush, for example, on a set of sites all at once, or on all the sites in bulk; there's a sketch of that idea below. Those were the tools I had in mind.

Now, I've been talking about multi-sites. When I submitted this proposal, this was November, I want to say, we had a multi-site, and then just a few months later we moved hosting providers. So this is a bit of a bait and switch, I admit. I wanted to come here with secrets on how to effectively run multi-sites, but we were not able to effectively run a multi-site remotely, because it had started becoming too burdensome. All the reasons I listed so far for why our setup was difficult became even more complex and obvious shortly after we submitted this proposal. The new hosting provider doesn't allow multi-sites, so each site is in its own code base, which is actually not a bad idea. The moment we made the decision, the first thing I thought was: holy cow, how are we going to manage this? We currently have 34 — 38 — 34 sites in the old multi-site install, and they could grow in number at any time, so we needed a solid solution to make that process easy: to create new sites, to maintain existing sites, and to maintain some continuity with the way we used to do things.

Removal of a single point of failure: another advantage of moving away from the multi-site concept is lowering the potential downtime per site. When it's deployment time, instead of spending four hours deploying all the sites, we might still spend a while, depending on how large the deployment is, but each site is only potentially down, and I'm not saying necessarily down; they almost never are. There was one case last week, but it was the IT site, so nobody... I know this is being recorded, and that's not true. Everyone looks at that site. But for each site, the potential for downtime is very contained and very small. This move also lowered the requirement for change control. Like I said before, if we have very small changes to apply, either to all the sites or to just one of them, nobody needs to know.
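Here is a minimal sketch of the "Drush across a set of sites" helper mentioned above, assuming one Drush site alias per site; the site list and alias names are hypothetical, not HMS's actual aliases.

```bash
#!/usr/bin/env bash
# Run the same Drush command against a list of sites, one after another.
# Usage: ./drush-all.sh cr          (rebuild caches everywhere)
#        ./drush-all.sh updb -y     (run database updates everywhere)
set -euo pipefail

SITES=(hms alumni admissions postgraduate)   # illustrative site list

for SITE in "${SITES[@]}"; do
  echo "=== ${SITE} ==="
  drush "@${SITE}.prod" "$@"                 # hypothetical per-site alias
done
```

Run sequentially this is simple but slow, which is exactly the "concurrent bulk operations" improvement Tony lists near the end.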
Those small changes can pretty much happen transparently, within our required or communicated window. An additional advantage is that we can perform staggered deployments. For example, and we actually have a real one coming: in the next few weeks we are going to enable SSO on all of our microsites, and it's going to be a staggered deployment. Instead of deploying all the configuration changes all at once to all the sites, plus having to test SSO on live sites before users start, well, using it, which is a little complicated, we're going to deploy in batches, basically three days in a row. That makes each deployment smaller and more controllable.

However, even though we moved to a non-multi-site install, and despite all the features our new hosting provider gives us, we still had one issue: what to do locally. With a multi-site, we had one local install that controlled several sites. With individual sites, it would be unthinkable to have to install each of them every single time we need to work on any of them. We probably all work with Docker, or many of us do, and we know how slow and painful it becomes after just a few sites are active. I wanted to avoid that. So: long live the multi-site. We're still using a multi-site locally. We still have one repository on Bitbucket, our old repository with all the sites in there, with all the separate configuration directories; and not just config directories, it's set up to have separate file directories per site too. We let our pipeline decide where each of them goes. Basically, there's a subdirectory in our config directory which contains a separate config directory for each site. Once our code base reaches Bitbucket, there's a script that basically says: oh, you are our alumni site, you're going to go to this repository, and this directory is going to be renamed config/default. Presumably; I wrote it, and it's been a while. Sorry. From the point of view of our hosting environment, each of them is a single site; none of them contains configuration specific to other sites. It's still originally a multi-site, so there are going to be contrib modules common to all of them. But again, like I said before, we still want to maintain as much coherence between the sites as possible, so the code that's actually specific to just a few of them is very little, and it's not a big concern.

I built some custom Drush scripts for bulk administration, so that I can pull down all the code, sorry, all the databases and all the files, from all the sites at once if I want, if I really want some pain in my life. Which is not that painful, actually. We also set up a branching strategy and a sort of conditional-pipeline situation; I have a few diagrams that I'm going to show. How are we doing on time? We're sort of okay.

There are a few minor downsides. I had a lot more to say that I'm not going to say. Obviously, on deployment: because instead of deploying a single code base to a multi-site we're deploying 34 code bases, deployments can take longer overall, but for a single site it takes a lot less. Each site gets deployed in pretty much a couple of minutes, I want to say. Of course, multiply a couple of minutes by 34 sites and that's pretty substantial.
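As a hedged sketch of that per-site routing step described above: the directory layout, remote names, and branch here are assumptions for illustration, not the actual pipeline script.

```bash
#!/usr/bin/env bash
# Sketch: given the site the pipeline is building, promote that site's
# config subdirectory to the standard config/default location, drop every
# other site's config, and push the result to the site's own repository.
set -euo pipefail

SITE="$1"                              # e.g. "alumni", supplied by the pipeline
SOURCE_COMMIT="$(git rev-parse --short HEAD)"

rm -rf config/default                  # clear the default location
mv "config/sites/${SITE}" config/default
rm -rf config/sites                    # no other site's config ships

git add -A
git commit -m "Build ${SITE} from ${SOURCE_COMMIT}"
git push "host-${SITE}" HEAD:master    # one repository per site on the host
```

The effect is what Tony describes: from the host's point of view, each site is an ordinary single-site code base with one config/default directory.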
But at least each individual site doesn't feel the pain of a large deployment. The other main downside, a pain point really, one that I feel right now and am trying to solve (this is why I'm holding my hands here, I don't want to hit anything), is that it's harder to monitor the status of all the separate sites. It's hard to monitor the entire process without opening a new terminal window for each site. We'll get there too.

So, I have a few screenshots. Our hosting provider has a tagging feature, which we use very extensively. As you can see here, and I'm not going to go into the finer details of what we do, these six sites are tagged with pre-release, which corresponds to our pre-release branch. Every time we commit and push the pre-release branch to our Bitbucket repo, the branch gets built, and only these six sites actually get it deployed to them, which means we don't have to wait for all the sites to be built and deployed every time we just need to apply a minor change. We consider these to be the sites most representative of our stack, so that's useful to us.

I also talked about conditional pipelines; let's see what's next, I'll get there. This is what I consider a conditional pipeline too. You see the data-layer and refactor tags there. Refactor, for example: when I was rewriting a bunch of code for the sites, I didn't want that code to necessarily go into the master branch in the hosting environment for all the sites, so I wanted a separate development environment for that specific purpose. Basically, if I create a branch on my local machine called demo/refactor, in this case, and push that branch to Bitbucket, my script sees it, knows it's a special branch, and pushes it to our host as the refactor branch, where I can create a separate environment and isolate that part of the process even more. It's very much all in my head; I mean, we've been using it, so it exists and it works, so hopefully it makes sense the way I'm trying to convey it to you.

So this is a very generic idea of what we're doing. If we have a feature branch, a branch created from Jira, for example, web-123, whatever, that feature branch... actually, no, sorry: the feature branch is not going to get built. It only gets built if it reaches a demo branch, at which point it becomes a candidate for a separate environment. The pre-release branch is always built, into the master branch in our hosting repository, and the master branch also gets built into the master branch in our hosting repository, but for all the sites at once. Pre-release only gets built to the subset of sites tagged with pre-release; master is always built to all of them, so we don't push master unless and until we're really ready to deploy. Then we go into testing and live, and we tag, and all of that. Basically, when it's deployment time, I have two windows open: one is deploying to test, and a few moments later I start deploying to live. It works but, as I said before, it's still a little more convoluted than I would like it to become, and I'm working towards fixing that as well.
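A hedged sketch of that branch-routing logic: the branch names mirror the talk, while the site selection and everything else are assumptions (BITBUCKET_BRANCH is the branch name Bitbucket Pipelines exposes to a build).

```bash
#!/usr/bin/env bash
# Sketch: decide from the branch name which hosting branch a build should
# be pushed to, and for which sites. Illustrative only.
set -euo pipefail

BRANCH="${BITBUCKET_BRANCH:-$(git rev-parse --abbrev-ref HEAD)}"

case "$BRANCH" in
  demo/*)
    TARGET="${BRANCH#demo/}"    # e.g. demo/refactor -> "refactor" on the host
    SITES="demo subset"         # a separate environment gets created from it
    ;;
  pre-release)
    TARGET="master"             # built into master on the host...
    SITES="pre-release-tagged"  # ...but only for the six tagged sites
    ;;
  master)
    TARGET="master"
    SITES="all"                 # the full deployment, when we're ready
    ;;
  *)
    echo "Feature branch ${BRANCH}: not built."
    exit 0
    ;;
esac

echo "Building ${BRANCH} -> ${TARGET} (sites: ${SITES})"
```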
And finally, one of my scripts is a funky little thing I built: once we tag our code for deployment with a release tag, a little script grabs the release notes from the tag itself, deploys to test, deploys to live, and enters the release notes as a message, so we always know what's happening. It's complex, it's funky, but one of my main ideas is that our Bitbucket repo for all our sites is our main source of truth. Whatever happens on the remote host doesn't matter much, as long as the code works and the code is clean. I don't care too much about how clean the history of that repository is, because that's expendable in my view; if we had to switch hosting providers again for whatever reason, our source of truth is what matters.

So there's room for improvement. I would like to have pipeline-driven deployment, so that I, or Michael, don't have to control it from a local machine every time. I would also like to run concurrent bulk operations, so that they're not sequential and we don't have to wait for each one to finish; we could deploy all the sites, or run operations on all of them, at once. I would love some bulk monitoring; haven't gotten there yet, we'll see. And then there's something I'm not really into anymore: the idea of having Composer run on our hosting provider. That was my idea a few months ago, and now I'm changing my mind. I think our pipeline-based setup is still working pretty well, and I like the control I get from running Composer the way I want to run it, and from having the hosting environment host all our artifacts; I always know what's in there. So I think that was that. Thank you. I hope this made sense, and if you have any questions, we're here to answer.

Yeah, absolutely. Can everybody hear me okay? So the question was: how is our train-the-trainer system set up, essentially? Basically, with each site we designate a primary owner. I'll work with that primary owner quite a bit to make sure they understand all the different features of the website, what to do, what not to do, mandatory accessibility things and whatnot. It's then their responsibility, as they onboard new employees or share responsibilities with other people, to teach those people. That way each website, even though they look and feel largely the same, has its independence: they run it the way they want to run it, because they're going to have a different idea of how to lay out a page than maybe I would. So, yeah, we really give the independence to the individuals and the site owners.

Yes. Yeah, so typically those requests come in to us, and because Harvard is such a really confusing and big place, they have to go through a specific process. We have a committee we call the use-of-name committee, and it sort of has to get approved by that committee, which exists basically to make sure that the Harvard brand, the Harvard Medical School brand specifically, is being represented the way we want it to be represented. Really, the sites we're talking about today are all external-facing websites, and they all have good traffic and whatnot.
We recently went through a big process of kicking people off the website, because we have this large affiliate network you may not be familiar with: 16 or so hospitals where our MD students are essentially taught. And, sorry, the question was about website governance, how we go about it, and onboarding new projects. So that committee approval would be the primary step. Then, once a project gets approved, we work with one individual, again going back to the train-the-trainer model, to help set it up and make sure they're the primary point of contact, so there aren't too many people talking to us at once, because I think that can get complicated. Yeah, back there.

So the question is about how to maintain uniformity between sites, or among sites really, with a multi-site-slash-not-multi-site setup. We're working on that, I'll be honest. It was hard to maintain uniformity before, too. One thing that had been an established practice was to have, I'll say the word, partial config imports across the sites. For the time being, and it pains me to say this, that's still officially the way we do it. We also have Features installed, because the agency that created our current setup used it for some things; it was an agency thing, right? Sorry, again, I've been there 10 months, and I still don't have the full understanding of who created what. But yeah, we also have Features installed here and there. I saw some cringing earlier about Features. I used to use Features a lot in Drupal 7, and I'm sort of thinking it might still be viable for maintaining uniformity. I haven't made my final decision yet. It's still a question mark; sorry, I don't have an answer. We'll do one last question and then wrap it up. Thank you.

So, I'm probably the primary point of contact for the community, and I would say we have an internal priority around the super users, or the websites that get a lot of traffic. When I say daily, I really mean it: with our marketing group, for example, we'll have monthly meetings and also touch base on a daily basis if we're rolling out a big feature. A big one we're working on right now, and many of you probably are too, is the transition from Universal Analytics to GA4, and we have to meet with them several times a week at this point to get all those things buttoned up. Really, my point is about thinking ahead, looking at their scope, and making sure they have an avenue to reach out to us, so they're thinking about projects in advance and we're not just getting last-minute projects sprung on us that we're not aware of, essentially. Great. Thank you so much, everybody. Thank you.