Ok, people are slowly dropping in from lunch, so I think it's about time to get started. Welcome, everyone, to this session about how to build a scalable platform for today's publishers, presented under the site building track.

So, what are we going to do here today? First of all, I'm going to introduce myself and talk a little bit about the project that I'm currently working on. We're also going to walk through some of the common requirements that you bump into when building sites for publishers today. These are common requirements, not all tied specifically to Drupal, but common requirements on a CMS for publishers. Then we're going to touch on why Drupal is the right tool for these kinds of sites, and we're going to dive a little bit deeper into which modules we've been using to solve some very particular problems in certain areas. In the end I'm going to give a short demonstration of some of the modules that we've been using.

Ok, first me. I'm Dick Olsson, and I currently live in Doha, Qatar, right in the middle of the desert in the Middle East. I'm the lead Drupal developer at Al Jazeera, the TV channel, leading all the Drupal sites and so on. I'm currently on a leave of absence from NodeOne, a Drupal consultancy in Sweden. I'm also a very active core contributor, and I work on a lot of contributed modules, among them UUID and Deploy.

Ok, so, some time ago Al Jazeera started adopting Drupal for a lot of their sub-sites, smaller sites and smaller projects. They did so in order to move a little bit faster on the web, to deploy projects and reach the audience faster. One of the sites that was built was blogs.aljazeera.com, a site that turned out to be very important during the Arab revolutions that we've seen this past year. It allowed the editors to reach the audience faster, with live blogging and so on. Unfortunately, due to internal reorganisations, Al Jazeera is not ready to deploy Drupal for aljazeera.com yet.
But we're definitely in the process. So the platform that we've been building at Al Jazeera: we're scaling it, we're benchmarking it and we're testing it to match the requirements of aljazeera.com. We want to build a generic and good platform.

Just to give you an idea of what we're benchmarking against and what we're working on: we have about 50 web editors at Al Jazeera. Around 30 articles or pieces are published every day, containing news articles, opinion pieces, program packages, etc. All in all, you can roughly say it's around 60 to 100 nodes a day including all references and so on. So it's not a large number, but still quite substantial. We have editors working in two shifts, so they are constantly working in the CMS. And on the other side, on average we see tens of millions of requests per hour hitting aljazeera.com, and during spikes around 50 million requests per hour. That's about 14,000 requests per second, quite substantial numbers. Again, this is currently not a Drupal site, but this is what we're scaling and testing for. blogs.aljazeera.com is what is running on Drupal right now, and we're in the process of upgrading it to Drupal 7. Just to give you an idea of what we're benchmarking for.

Okay, some of the requirements, not necessarily tied to Al Jazeera, but common requirements that you often run into. The first one: the platform that you choose should really support agile development. It should really support the process that your editorial team is working under, which is a very iterative process, so that the development team can work in the same kind of cycles. And web publishing itself is constantly evolving; new technologies come out every day. You need to be very fast, and you need a platform that allows you to stay on the edge, because time to market is crucial for publishers.
You need a platform that quickly allows you to get something out there, reach the market fast and then iterate on top of that, so you can build prototypes fast and so on. That's the first requirement.

The second one is maybe a little more obvious for publishers: we really need a system that supports efficient workflows. It needs to support the editors' daily work, of course. It's quite a tricky one, because organizations work very, very differently, as do sections within organizations. Taking Al Jazeera as an example, the news desk works very differently compared to, for instance, the program department. That puts some real challenges on the platform in configuring good workflows.

The third requirement is content freshness. What do I mean by that? It's basically the time it takes from when a publisher clicks publish until we reach the first impression from our audience, from our visitors. It sounds like a simple problem to solve, but it's actually not that obvious in all systems. In many systems the CMS sits on an application server behind corporate firewalls, with workflows generating HTML pages and pushing them up to servers when a new article goes out. There are lots of different caching layers: we have content delivery networks, and in some cases reverse proxy caching as well. So it's not always just instant publishing. We need to reach the audience fast with new pieces of content, because minutes really make a difference on the social web. Breaking your story a minute or two before your competitors can really make a difference in driving traffic to your site. So we need to be able to reach the audience fast, without minutes of cache delays. Being first is really vital.

To demonstrate this: for a large-scale Drupal site it's not uncommon to have very many layers of caching. At the bottom you have your Drupal database, of course, and you have static caching inline in PHP.
You have the Drupal cache API, and on top of that you have Varnish, reverse proxy caching. In some cases you even have an additional layer with a content delivery network in front of Varnish. Getting all these layers of caching to play well together can be a real challenge. And editors don't want to sit around waiting three or four minutes before their articles reach their audience, before they can tweet a link to their article. So this is a really important requirement.

Scalability. When your story breaks, when your story gets out there, it's really important that your site stays up, obviously. Otherwise you won't have the impact on your audience that you want to have. I can again take Al Jazeera as an example, breaking stories during the Arab revolution, during the Arab Spring. If the site hadn't stayed up, I'm not sure Al Jazeera would have had the impact it did during those critical times when stories were breaking. So the system really needs to be scalable to support these traffic spikes when visitors are hitting your site. Another challenge is that when the traffic spikes are the highest, your editors are going to require the cache times to be the shortest. Again taking Al Jazeera as an example: when Gaddafi was captured during the Libyan revolution, they were updating articles once or twice every minute with corrections, more facts and so on. So they really want to tune down the cache times exactly when the spikes on your site are the highest.

And security. You can never forget about that. Online activism is constantly increasing, and trust is extremely important for publishers' brands today. Being compromised, being hacked, obviously hurts the brand very much. And looking at it from the other side, information leakage from your CMS could potentially jeopardize people's lives: source information, references to the people you get your sources from.
If that information leaks out, there are organizations and governments that could do bad things to these people. If sources leak out, source notes taken in the revision logs and so on, it could potentially be very dangerous. So the security of the system is obviously very important.

And why is Drupal the right tool, then? Reflecting back on the requirements we just went through:

Agile development. Drupal has a lot of ready-to-use modules, as I'm sure everyone here knows. We can reach the market fast, we can build prototypes very fast and then iterate on top of that.

Efficient workflows have been one of the weak points in Drupal for a long time, but this has recently picked up a lot of good momentum in the community. A lot of good modules have been released solving many of the issues, and a lot of good work is also happening in Drupal 8. We are going to look at some of the Drupal 7 modules here and how we use them to solve these issues.

Content freshness: Drupal is a bit of a one-click publishing CMS. You click save and your article goes to the front page. So by nature Drupal is good at solving the content freshness problem. But as I demonstrated, we still have many layers of caching that we need to deal with.

Scalability. Drupal 7 got a lot more scalable. A little bit slower, but a lot more scalable: the new database API, better support for master-slave replication, pluggable field storage, better cache implementations. We have the Queue API, we have EntityFieldQuery, a lot of new APIs that really help to scale your platform in new ways.

And being a very popular, very well-tested CMS, Drupal is also very secure. Not only is its code secure, but Drupal is also secure by process. Drupal has a very well-defined process for how to deal with security, security patches and so on, something that not all other open source CMSs have. So I would say the biggest strength of Drupal is that it is secure by process. All software has security holes.
All software will expose security holes at some point; unfortunately, that's the reality of programming. So it's very important that we have a secure process for how to deal with security holes, and this is really where Drupal shines, with its security advisory team.

Okay, give me the modules. How do we do this? I'm going to fall back onto these requirements as I walk through the modules: why we chose particular modules and how they correlate to the requirements that we are running into.

The first module is Workbench, or actually Workbench core and Workbench Moderation. Workbench is a suite of modules that provides easier content management: easier ways to deal with your content, to deal with revisions and so on. And with Workbench Moderation it's easier to set up flexible workflows around your content. It also provides much better coherence for your editors. Drupal out of the box is a little bit scattered for content authors in how they find content, how they filter it and so on. Workbench provides much better coherence through something called My Workbench. It's essentially a dashboard, a home for your editors, where they will find the most crucial information about the content that they, or other people on their team, are currently working on. I'm going to give you a demonstration of these modules later on.

With Workbench Moderation we can assign different workflow states to our content, and this is very flexible; you can configure the states as you go. There's also the revision management: we can have a published revision out on the site while we work on a new draft of it. The green one here is already published; the red one is something that we're currently working on. That's not something Drupal core is very good at, so Workbench and Workbench Moderation expose and provide this functionality. Really, really helpful.

So why Workbench, then? Unique workflows per content type. Unique workflows per role.
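To make the published-revision-plus-new-draft model concrete, here is a minimal Python sketch of the idea. The structure and names are hypothetical stand-ins, not Workbench Moderation's actual code: a node keeps a list of revisions, the published one stays live while the newest draft moves through moderation states.

```python
# Minimal sketch (not Workbench's code) of moderated revisions:
# the published revision stays live while a newer draft is reviewed.

STATES = ["draft", "needs_review", "published"]

class Node:
    def __init__(self, title):
        self.revisions = []          # each revision: {"title", "state"}
        self.published_index = None  # which revision is live on the site
        self.new_draft(title)

    def new_draft(self, title):
        self.revisions.append({"title": title, "state": "draft"})

    def moderate(self, state):
        assert state in STATES
        self.revisions[-1]["state"] = state
        if state == "published":
            self.published_index = len(self.revisions) - 1

    def live_title(self):
        if self.published_index is None:
            return None
        return self.revisions[self.published_index]["title"]

node = Node("Breaking story")
node.moderate("published")        # revision 1 goes live
node.new_draft("Breaking story (corrected)")
node.moderate("needs_review")     # draft in review; site still shows rev 1
print(node.live_title())          # -> Breaking story
node.moderate("published")        # a reviewer publishes the draft
print(node.live_title())          # -> Breaking story (corrected)
```

The key point the sketch shows is the separation between "latest revision" and "published revision", which plain Drupal 7 core does not track for you.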
We can be really agile, and we set up everything here. We've noticed that sections within our organization map very well to content types: the news desk mostly publishes the news content type, so we can set up workflows for that, and the program department works on their program packages, and so on. So we can really be efficient with our workflows here, and provide better coherence, as I mentioned.

Next module: Deploy, a module that I'm the maintainer of, and that I've been rewriting for Drupal 7 during my time at Al Jazeera. It's essentially a framework for pushing content from your Drupal site to another system. That can be another Drupal site, or an arbitrary system of any kind. You can set this up to be automated or manual. Deploy is often used to set up something called content staging: basically, you separate your editorial site from your public site. Editors create, edit, preview and review content on the editorial site, and it's pushed to the public site when ready.

It essentially looks something like this. You have your staging site, often on a secure network behind a corporate firewall, so we can protect our revision log and our unpublished content. And when content reaches the ready state, when it reaches the published state, a deployment is triggered out to the production site automatically.

Here's a simple screenshot of a dashboard where you can see and manage your deployment plans. Deployment plans are packages of content that are supposed to go out together at one time. We have two different plans here, for instance: instant deployments, which are queued on a very tight schedule, and weekend deployments, packages that go out in the weekend. I'm going to demonstrate this a little bit later.

So why Deploy, then? We can separate our sites, and we can actually deploy code and updates faster, falling back on the agile development requirement here. For instance, a bug might appear on our public site.
We can fix that bug without running through the tests for all the editorial features and everything else sitting on the editorial site, because on the public site we've stripped out a lot of modules we don't need, since editors are not logged in there. So we can be much more precise in fixing bugs, deploying updates and so on. We don't need to do as much quality assurance, because there's less code that we're touching when we separate our two sites. That's agile development.

Falling back to efficient workflows: we can set up transparent content staging. Editors don't really care that the site is split into two pieces, editorial and public; they just want to get their content out there. And we can set this up to be automated, so the editors don't really see that it's split across two sites.

And security. We can have our editorial site on a closed network, as shown in the diagram on the last slide. Also, by stripping out modules that we don't need on the public site, we have less code running, a smaller attack surface so to speak: less code that can contain potential security holes. So it provides much better security as well.

Next module: a new module that we've been working on at Al Jazeera called Entity List. It's basically a very lightweight wrapper for listing functionality. We have very many different ways of listing and querying content on a Drupal site: we have Views, we have Nodequeue, we have Apache Solr, in some cases Search API, EntityFieldQuery, and a bunch of other ways. They all work differently, they are all cached differently, and they are all presented and themed differently. So Entity List is essentially a very lightweight wrapper around all this. An entity list lives as a context in your panels, for those of you who know how Panels works, and we unify the output by injecting this context into our panel panes, into entity panes.
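To make the wrapper idea concrete, here is a small Python sketch. The class and backend names are hypothetical, not the Entity List module's code; the point is that the presentation layer only depends on a stable "give me a list of entity IDs" contract, so the query mechanism behind it can be swapped out.

```python
# Sketch of the Entity List separation (hypothetical names): the theme
# layer talks to EntityList; the query backend behind it is swappable.

class ViewsBackend:
    """Stand-in for a list backed by a (heavier) Views query."""
    def query(self, limit):
        return [1, 5, 12][:limit]

class EntityFieldQueryBackend:
    """Same contract, but a cheaper query mechanism."""
    def query(self, limit):
        return [1, 5, 12][:limit]

class EntityList:
    def __init__(self, backend):
        self.backend = backend
    def items(self, limit=3):
        return self.backend.query(limit)

front_page = EntityList(ViewsBackend())
print(front_page.items())          # -> [1, 5, 12]

# Performance problem? Swap the handler; presentation code is unchanged.
front_page.backend = EntityFieldQueryBackend()
print(front_page.items())          # -> [1, 5, 12]
```

The design choice here is the strategy pattern: because the output contract never changes, the panes that render the list never need to know which backend produced it.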
This gives us unified output: we can optimize the output better, it's easier to theme, and it's easier to cache, because Entity List knows more about what content is actually in a view or in your pane, so it's easier to optimize. We can also transparently switch query backends. For instance, say we have an entity list on our front page that is backed by a view, so the backend handler for this entity list is a view, and we start running into performance problems. We can transparently switch the handler for our entity list to, for instance, an EntityFieldQuery, without changing our presentation layer at all. It's completely separated and dealt with in Panels. So we can work more agile, we can work faster, and it provides better scalability by being able to switch out query backends.

It also integrates with a new module called Cache Tags. And what is Cache Tags? It's essentially backported functionality from Drupal 8. The cache tags API is not committed to Drupal 8 yet, but we are working on it, and I've backported it to Drupal 7. Before I walk into what it actually does, I'm going to describe how things work today, in many cases, without cache tags.

Let's take an example. We have node number one. We have node number one on the front page in a view, we have the node page itself, and we have another page somewhere else where node number one is presented as a related article or something like that. That gives us three different cache entries: a views cache for the first one, an entity cache, and maybe another view with the related articles. Three different cache entries to deal with. And it's very complex to set up good cache invalidation logic when an update is deployed; it's very hard to track where this node appears, which pages we should clear the cache on, which views we should clear the cache on, etc.
So what we do is say: I'm going to cache this for five minutes, because after five minutes there might be an update to this node that we need to display. We only cache for five minutes and then clear the cache. After five minutes we might have an update, or we might not. So it's not a very efficient way of dealing with it.

What Cache Tags does is let us tag every individual cache entry with more information than just its cache key. We can tag it with, for instance, node one, so all those three cache entries are tagged with node one. And when an update to this node is deployed or published, we can just say: invalidate everything that is tagged with node number one. And we can very efficiently clear the caches on all the panes, all the views, all the entity caches and so on, at once.

So you can tag cache entries, and you can also tag whole requests. The Varnish cache, for instance, the reverse proxy cache, lives outside of Drupal. But we can say: on this page, nodes number one, five, twelve and 32 live; they are presented on this page. It's very easy to do that, because we have unified the output through Panels, and the Panels caching mechanism can tell the page that these nodes exist on it. So we tag the whole page with all these nodes. And again, when one of the nodes is updated, we can easily invalidate the cache in Varnish by sending an invalidation command saying that all pages containing node number one should be invalidated.

So what does the configuration look like? Some simple screenshots; it's very easy configuration to set up your entity list. We can choose which query backend to use: EntityFieldQuery, Search API, etc. The lists live as context in your panels.
You add them as context, and in your Panels interface you inject them into either separate entity panes or one pane containing the whole list. So we can set out items number one, two and three on our front page, each with different view modes, something that is not always easy to do with Views. We can use the full node display for the first one and the teaser view for the second one, etc. This is all configurable.

The cache tags interface for Drupal 7 actually requires a patch to core, so you need to be aware of what you are doing. It basically adds another argument to cache_set() where you can pass an array of tags that you want to tag the entry with. You can tag it with the node, or with its author if you want the ability to invalidate everything that has to do with a specific author. You can also tag requests with a separate function. And then it provides a cache invalidate function where you pass in an array of tags that you want to invalidate, and they are invalidated across all the cache bins.

Okay, so why Entity List and Cache Tags? We can refactor faster with better separation using entity lists, as I described. Being more agile, we can deploy updates faster and so on. We can transparently switch query backends for better scalability; I mentioned that already. And with the integration between Entity List and Cache Tags, we have no more stale caches. When our editors deploy or publish an update to a node, we instantly invalidate that cache and it instantly reaches our audience. Editors don't need to sit around waiting three minutes for their update to reach the audience before they can tweet a link to their article, before they can reach readers faster than the competitors out there. We can invalidate instantly. And the power here is that it will be invalidated everywhere on the site: in related article lists, in panes on the front page, on topic pages and so on. So we have a very automated way of doing this.
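The mechanics described above can be sketched in a few lines of self-contained Python. This is the concept only, not Drupal's actual cache API or the Varnish module's code, and the tag names are illustrative: every cache entry carries a set of tags, invalidation clears everything sharing a tag, and cached pages in the reverse proxy can be banned by matching a regular expression against their tag header.

```python
import re

# Conceptual sketch of cache tags (not Drupal's API): entries carry
# tags, and invalidation clears every entry sharing a tag, instead of
# waiting for a TTL to expire.

class TaggedCache:
    def __init__(self):
        self.store = {}                      # cid -> (value, tags)

    def set(self, cid, value, tags=()):
        self.store[cid] = (value, set(tags))

    def get(self, cid):
        entry = self.store.get(cid)
        return entry[0] if entry else None

    def invalidate(self, tags):
        doomed = [cid for cid, (_, t) in self.store.items()
                  if t & set(tags)]
        for cid in doomed:
            del self.store[cid]

cache = TaggedCache()
# node 1 rendered in three places, each a separate cache entry:
cache.set("views:frontpage", "<ul>...</ul>", tags=["node:1", "node:5"])
cache.set("entity:node:1", "<article>...</article>", tags=["node:1"])
cache.set("views:related", "<ul>...</ul>", tags=["node:1", "node:12"])

cache.invalidate(["node:1"])              # editor publishes an update
print(cache.get("entity:node:1"))         # -> None: all three entries gone

# The same idea for whole requests: each cached page carries its tags
# (think of an X-Cache-Tags response header), and a ban is a regex that
# removes every page whose tag header matches.
pages = {"/": "list:front node:1 node:5 node:12", "/about": "node:99"}
ban = re.compile(r"(^|\s)node:1(\s|$)")
pages = {url: tags for url, tags in pages.items() if not ban.search(tags)}
print(sorted(pages))                      # -> ['/about']
```

Note the word-boundary anchors in the regex: they keep a ban on `node:1` from also sweeping away pages tagged only with `node:12` or `node:99`.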
At the same time, we also get longer cache lifetimes, because if we are not publishing an update to a node, we won't invalidate it after five minutes. So we actually have longer cache lifetimes when we can afford them. It scales better, performs better.

Ok, so now on to the demo. Before I get started here, I'm just going to check how we're doing on time. I think we're doing pretty well. So, I'm not able to show the Al Jazeera site that we're working on at the moment, so I've set up a very basic demonstration site here. We have a front page and two topic pages, with some very simple content types, everything built up with Panels and entity lists as we discussed.

The first thing I'm going to show is My Workbench, basically the dashboard I was talking about, giving editors an easier way to find and manage their content. We have a good overview here, we can expand all the lists, we can filter them down, etc. These are all powered by Views, by the way, so you can easily change, optimize and add columns to your Workbench views.

So what we're going to do now is just update the title of an article, and we're going to set it to needs review. We have options down at the bottom where we can mark this article as needing review by others. At Al Jazeera we have very well-working peer-to-peer reviewing; everyone can review, and everyone can actually publish. Going back to the Workbench, we can go to the needs review queue, where you see all the articles and pieces that need to be reviewed. Assuming I'm another editor logging in to the site, I can go in and proofread this article, read it through, and when I'm done I can go to the revision overview and instantly publish the last draft we've been working on, directly from this interface. We'll see that the latest draft turns green and is published; basically, the title is updated.

The next thing I'm going to show is the Deploy module.
So basically what we're doing here is opening up our production site. I've made the production site red, to separate the staging site and the production site and make it clearer. What we're going to look at now is the deployment overview, a sort of dashboard where you can manage all your plans manually if you want to. Here we have our two deployment plans; our instant deployment plan doesn't contain any content at this point. And we can have an arbitrary number of plans with different workflows attached to them, so it's very flexible.

Editing a plan is quite straightforward. All the different steps in a deployment are configurable. How you collect your content could be via a view, or it could be collected manually. The deployment process itself is pluggable through CTools plugins: you can queue it up with the Queue API, you can use the Batch API, or run it directly in memory. And you can also configure your endpoints at the bottom here, as you see. Endpoints are basically the systems that you're deploying to; in this case it happens to be another Drupal site, our production site. Configuring it here, we can configure the authentication method it uses. We're working on an OAuth plugin, but it's not completely done yet, so in this example we're using session authentication, which means we need to give the username and password for our production Drupal site, plus an endpoint URL. The endpoint URL is basically the URL to your services endpoint; it works with the Services module.

So here we manually add this piece of content to our instant deployment plan. Going back to the overview, we see that we now have one piece with an updated title, and we can deploy it directly from this interface, manually. We get some nice green messages here; that's a good sign. And refreshing the page, we see that on the production site our title is now updated. A very simple example.
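The manual flow just demonstrated can be sketched roughly as follows. All names here are hypothetical stand-ins, not the Deploy module's actual API: a plan collects UUID-identified content and pushes a serialized package to an endpoint, and the automatic trigger at the end mimics the kind of Rules-driven wiring where a transition to "published" deploys the plan immediately.

```python
import json

# Rough sketch (hypothetical names, not Deploy's API) of a deployment
# plan pushing UUID-identified content to a remote endpoint.

class Endpoint:
    """Stand-in for a remote site exposing a Services endpoint."""
    def __init__(self):
        self.content = {}
    def receive(self, payload):
        for entity in json.loads(payload):
            self.content[entity["uuid"]] = entity   # upsert by UUID
        return True

class DeploymentPlan:
    def __init__(self, name, endpoint):
        self.name, self.endpoint, self.items = name, endpoint, []
    def add(self, entity):
        self.items.append(entity)
    def deploy(self):
        ok = self.endpoint.receive(json.dumps(self.items))
        self.items = []                             # plan empties on push
        return ok

production = Endpoint()
plan = DeploymentPlan("instant", production)

# Manual flow, as in the demo:
plan.add({"uuid": "abc-123", "title": "Updated headline"})
plan.deploy()
print(production.content["abc-123"]["title"])       # -> Updated headline

# Automatic flow: on transition to "published", add the node to the
# plan and deploy it right away, so editors never see the split.
def on_transition(node, state, plan):
    if state == "published":
        plan.add(node)
        plan.deploy()

on_transition({"uuid": "def-456", "title": "Breaking"}, "draft", plan)
on_transition({"uuid": "def-456", "title": "Breaking"}, "published", plan)
print(sorted(production.content))       # -> ['abc-123', 'def-456']
```

UUIDs matter here because local node IDs on the staging site and the production site will not match; a universally unique identifier lets the receiving side upsert the right entity.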
Obviously this is manual, and that doesn't work for all editorial teams. Many editorial teams will want very manual control over this, but in our case we have set up a transparent system where they don't know when it's deployed. Well, they do know, obviously, when it's published. And we can set up rules here; there's full Rules integration. So what we're doing here is configuring the Rules module. Here we have a rule triggering after updating existing and new content. We can set some conditions: the transition should be to the published state, and the action is to add the piece of content to a plan. We then have a separate rule, with a higher weight, that deploys the plan itself. So when we now go back to our Workbench, update an article and set it to published, it's going to be deployed automatically to our production site. Updating the title and setting our piece to published will now trigger the deployment automatically, because of the rules we configured. We see some messages here, and walking over to the production site and refreshing, we see that the title was updated, for those of you who saw that.

The next demonstration, moving forward quickly here, is the Entity List module together with Cache Tags. Configuring an entity list is, as we saw on the screenshots before, very simple. We have a title for the entity list, and we choose which handler to use; in this case we're using a view to back this entity list. We can configure our settings here. Going over to Panels, we see that on our front page panel we have added our list as a context. It's purely the data, the query of the data, that lives as the context; the presentation is done with panes. Here we have our three panes, items number one, two and three, for instance. And we can configure each of them to say: this is the list that we're going to use, this pane should use index one and this view mode.
Again, there are panes to present the whole entity list if we want to, but this setup can give you quite good flexibility in how you configure specific items in your list. So this is now set up with Entity List but without Cache Tags, with the old way of dealing with caching. So when we go here, we have an unpublished piece of content that we want to push out to our production site. We publish the content here, and when we walk over to the production site and refresh the page, we see that we don't have the content yet. We have to wait for the cache to be cleared; no matter how many times we refresh, we need to wait one minute before the article shows up, before we can link to it. Waiting another minute here, time elapses, and our content is out. Not a very good way; we need faster publishing.

So now it's configured with Cache Tags. Looking at the headers, we see that this is a fully cached page in Varnish. We have two cache IDs in the response headers for this page, and we also have cache tags in the headers: a cache tag for the list, and tags for the nodes existing on this page. A fully cached page at this point. So, going back to the staging site now, doing an update, and deploying it over to the production site will say that we need to invalidate all the caches for this particular node. And remember that this was a fully cached page on the production site. So saving here triggers the deployment to the production site, and refreshing the page now gives us the updated title instantly. We don't need to wait for cache times to expire; we have actively invalidated the cache both in Varnish and in the entity list pane in the panel. And this is a very simple example, with only one list of content here. We can have many different panes, and the content can appear in many different places. The next request is, obviously, a fully cached page again.
And it will live on until it needs to be cleared; we have longer cache lifetimes. Again, we see the cache tags here in the HTTP headers. So, a very short and simple demo showing some of the strengths of these particular modules that we've been working with. Obviously there are a lot more modules that you use on a publisher site. We have Content Locking, a very, very useful module; we use the Media module, of course; a lot of WYSIWYG integrations. But those generally work very well out of the box. These are the modules that we've been working very hard on, trying to solve some particular problems arising from the requirements.

So, are we going to release all these modules? Yes, of course. Al Jazeera has picked up on open source very well. We are working very actively on building everything as generically as possible, releasing our solutions, getting help from the open source community, and supporting and improving these modules. We have even been building our platform as a distribution, and we are looking into releasing as much of it as possible as a distribution for the open source community. We are not done with the project yet, but we will probably release it as some sort of distribution later on, for others to take on, for others to improve and so on. We are really excited about that.

And that's about it. You can reach me on drupal.org and on Twitter, and you can read my blog. I haven't put up the SlideShare link yet, as you can see; I will do that and tweet about it later so you can download the slides.

And questions, anyone? Please feel free to step up to the mic in the middle, present yourself, and make sure to speak into the mic so it's recorded for others to hear later.

Yes. Hi, Damien McKenna. So, the Cache Tags module looks like it's going to be possibly one of the best caching engines right now in Drupal.
Are there plans on improving it to have, say, more functionality out of the box, to do more of the invalidation automatically? Because right now it seems that it's kind of an extension for Memcache, and after that you have to do the rest yourself, to do the invalidation yourself.

Yes. For those of you looking at the Cache Tags project page: it's currently maintained by Carlos, and I've become the co-maintainer. We've been working on a sandbox with a lot of contrib improvements, with out-of-the-box cache tags support for Panels, Nodequeue and a lot of other contributed modules. And since I became co-maintainer very recently, we are going to merge that into the main project. So yes, it's going to have much better out-of-the-box support. We rely heavily on this, so we are going to add functionality to the module as we go. It's very soon going to be released with much better contrib support, yes.

Also, given your push to try and make it part of Drupal 8 standard, and some of the possible loosening of standards on what gets added to stable releases, have you thought of trying to push to get your patch into Drupal 7?

It has been discussed in the core issue. I am perfectly fine with providing a backported patch; I would love it, to be honest. But it's quite a big API change, although it would be backwards compatible. But yes, that is something that we need to discuss. Thank you.

My name is Martin Rio, and my question is: most of the time when I use Varnish, there is a time-to-live on pages. So I was wondering, when you are expiring a tag, or when you are deploying, sorry, a piece of content to production, how does Drupal invalidate the Varnish entry immediately?

So, Varnish has a command line interface, accessible through telnet, and the Varnish module for Drupal allows you to send terminal commands directly to your Varnish servers, to all of them at the same time.
So basically we are sending a terminal command to our Varnish servers. The Cache Tags module takes care of that, and invalidates all the Varnish entries based on that header. We basically send in a regular expression saying invalidate everything containing node ID one, for instance. So it's done with terminal commands to the Varnish servers. Is that part of the Cache Tags module, you say? Or is it part of the rule for pushing things to production? So the terminal commands, the invalidation commands, are provided by Cache Tags. And then we have contrib hooks in Cache Tags that say: when a node is saved, invalidate, and so on. So yeah, there are two pieces to it. Hi, my name is Matthew, my voice is going out so I apologize. I work for a media company and we are currently stuck on D6. And we're having a hard time leveraging Varnish with our cache times. We're noticing things like our homepage: when there's new content, we're having a hard time refreshing the cache and stuff like that. Are there any tips or tricks that you would suggest for D6 that could help us out? Or should we just stop kicking a dead horse and move on to D7? If I recall correctly, I think Cache Tags actually has a release for Drupal 6. I think Carlos has worked on that. I haven't been involved in that at all. We've been working on an improved version of the Drupal 7 version, but it might actually have a Drupal 6 release. I'm very sorry that I can't answer that for you right here, but look into that. I think it wouldn't be too hard to actually provide that functionality for Drupal 6. There is a release, someone says here in the audience. Thank you. Hi. With security being a requirement, I have a bit of a conditional question. I remember seeing, about a month, a month and a half ago, that Al Jazeera got hacked. Of course the news didn't distinguish: was it the Drupal side or was it the non-Drupal side? And I was just wondering, if it was Drupal, are there any lessons that you learned from that?
Definitely. It was a Drupal site, unfortunately. But it was what you call social hacking: basically a password leaking out and someone just directly logging into the system. There wasn't very much we could do technically at that point. It was all resolved very quickly, though. And the lessons learned from that: on the Drupal 6 site that is released at this point, the editorial staff log on to the public site. In our Drupal 7 release we're splitting these up. We only have editorial users on the staging site, and we don't have any user accounts except user ID number one on the public site, with a very strong password, obviously, very well protected. So that is definitely the lesson learned from that one: splitting up the two sites. Definitely, yeah. Thank you. I see that Deploy still just has a dev release. Can you maybe speak about which parts of it are stable and working, and which ones aren't? So the core functionality in Deploy at this point is, I would say, quite stable. The reason why it's still in dev is that we want to make some additions to the API before a release. This is something that we're going to work on at the code sprint on Friday, and hopefully a first alpha release will come out of that code sprint, because we're very, very close to the first alpha release. But if you know what you're doing, if you have an engineering team that can support you, you can start using Deploy today. You need to be aware, of course, of the challenges that come with building on a dev release of a module. But it's very close to release. What about UUID? UUID has alpha releases. There are some bugs that we very recently solved that were blocking a beta release, and I'm probably going to roll the beta release on Friday as well. So we're very close to a more stable UUID release too. Great, thanks. Do we have any more questions from the audience? Please feel free to step up to the mic. Okay, I say thank you again. Have a nice time here at DrupalCon.
Don't forget to evaluate our sessions here. Thank you.