that the site has a top navigation, essentially. And the problem there is that since legacy has been around for 15 years, there's 15 years' worth of business logic baked into their older .NET platform that had to be exposed. Those preferences had to be properly mapped and migrated, and Hussain will talk a little bit more about that migration in a little bit. And the last component here is this concept of appropriate infrastructure: understanding what kind of trade-offs we make when we have certain caching strategies at different layers. There was Akamai caching in front of the entire legacy website. The Node React web application was being hosted on AWS, but we also had a separate data center in Chicago, which hosted the .NET platform that also needed to consume these APIs. And there was a very specific SLA which needed to be hit in order to ensure performance across the ecosystem for these APIs.

As I mentioned, we were dealing with two different teams, so we had to manage velocity so that there were no blockers for each other. We chose a common methodology, Scrum, for this project. In terms of Drupal development, we were always one sprint ahead of the front-end team, and we were very proactive in sharing the API designs up front, so we basically never blocked the front-end team. Communication was also very important. Axelerant is by nature a very distributed organization, spread out all over India and the rest of the world too. And the client was also distributed; they had three different offices that we had to work with. So it was really, really important for a project of this size and scope that we kept the communication channels open and over-communicated to ensure success. We used tools like Slack, Zoom, Skype, email, phone, even pull requests. There was a lot of commenting happening on the pull requests themselves to ensure that we were never really blocking each other; we got the information we needed and kept moving, because there was such an aggressive timeline to hit, and the client had defined a pretty hard deadline.

Then, some of the Drupal best practices that we followed during development. First, consistent environments. We knew pretty early on that the solution would be hosted on Acquia Cloud, so we asked: what's the closest environment we can get our hands on and bring into our development early, so that there are no surprises and we can test against it? We took Jeff Geerling's Drupal VM and brought it in as part of our code base, actually; we pushed the environment along with the code, so anytime someone checked it out, they would have the environment with it. It's a Vagrant image, if you don't know it. That was the consistent environment we used throughout the lifecycle of the project. We also established a Git workflow within the team: we followed release branching and feature branches, and we maintained release notes diligently. That was very important, because we were constantly delivering the developed product over to the front-end team, and we had to communicate that; release notes became the de facto document that everyone referred to. We also used a Drush build script and established that early on in the project.
So this helped us maintain updates to core and contrib, and any patches we were putting into the system; it was all automated. This, again, saved us time whenever a new Drupal core release came out, which actually happened in the last four months of our work. There were a lot of contrib modules, so we could simply run the Drush build script, get the new updates in, and we were done. Again, feature-driven development is not new to anyone, but we ensured that we followed it thoroughly: all configuration was checked into Features, and it brought a lot of consistency to the output. We also created a bunch of different checklists in the project, both automated (we used a Drupal module called Site Audit, which checks for Drupal best practices) and manual checklists of our own. Because there were APIs being developed and a bunch of different aspects of Drupal development to watch, we created a pretty holistic checklist that we would run through before every release.

Since we were deploying to multiple environments (there was a QA team on the legacy side, there was our own internal QA team, there were a lot of features being developed, and we were constantly deploying and releasing), there's a Drupal module you should know about called Environment. You can plan out all the environments you deploy to with it, and we basically scripted it all. We could simply detect that it's the QA environment and know what configuration the QA environment needs, which modules should be turned on and off, all of that. It was completely automated. I highly recommend that if you haven't used Environment, you go use it (a hedged sketch of the idea follows below).

So just to give you some stats: we used about 80-plus contrib modules and 40-plus custom modules in the project; I won't go into too much detail about what was being developed. This gives you a high-level architectural overview: there's Akamai, there's the Node.js layer, we're hosted on Acquia, and the Drupal ecosystem is there behind the REST APIs, delivering content back to the Node.js layer. With that, Lakshmi will give you a deep dive into this component of the platform. Before that, any questions about what we've covered so far?

So, no headless implementation is complete without a RESTful API, and we chose a module called RESTful for that. The Drupal ecosystem already has a couple of other popular modules for this: you might have heard of Services, and there's one more called RESTWS. When we evaluated RESTful, it was in its early stages, but the best part about RESTful was that it was very developer-friendly. The only prerequisite is that the resource be present in your Drupal database; then you can tweak it and export it as a REST resource. The other thing is that RESTful allows you to configure and fine-tune every aspect of your API (caching, headers, the way you structure your payload, the authentication mechanism) on a per-endpoint basis. You can have OAuth as the authentication scheme for one endpoint and token-based authentication for another endpoint in the same system. So it was highly configurable (a sketch of a resource plugin follows below). And no API is useful without documentation, because humans are the ones who consume the documentation; only computers can consume an API's JSON payload.
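As a hedged illustration of that environment scripting, here is a minimal sketch assuming the Environment module's `hook_environment_switch()`. The module lists and settings below are hypothetical, not legacy.com's actual configuration; check the module's own API docs for the exact hook signature in your version.

```php
<?php

/**
 * Implements hook_environment_switch().
 *
 * Hypothetical example: toggle modules and settings per environment.
 */
function mysite_environment_switch($target_env, $current_env) {
  switch ($target_env) {
    case 'qa':
      // QA needs debugging aids and verbose errors.
      module_enable(array('devel', 'views_ui'));
      variable_set('error_level', ERROR_REPORTING_DISPLAY_ALL);
      break;

    case 'production':
      // Production: no UI/debug modules, aggressive caching.
      module_disable(array('devel', 'views_ui', 'field_ui'));
      variable_set('error_level', ERROR_REPORTING_HIDE);
      variable_set('cache', 1);
      variable_set('preprocess_css', 1);
      variable_set('preprocess_js', 1);
      break;
  }
}
```

With something like this in place, switching an environment becomes a single Drush command rather than a manual checklist, which is the automation described above.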
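To make that per-endpoint configurability concrete, here is a minimal sketch of a RESTful 7.x-1.x resource plugin, following the module's documented CTools plugin pattern. The resource name, class, and fields are illustrative, not taken from the legacy.com code base; in practice the class usually lives in its own file registered with the class loader.

```php
<?php

// Hypothetical CTools plugin definition, e.g. articles__1_0.inc.
$plugin = array(
  'label' => t('Articles'),
  'resource' => 'articles',
  'name' => 'articles__1_0',
  'entity_type' => 'node',
  'bundle' => 'article',
  'description' => t('Expose the article content type as a REST resource.'),
  'class' => 'MySiteArticlesResource',
  // Per-endpoint authentication: this endpoint uses token auth, while
  // another plugin in the same system could declare OAuth instead.
  'authentication_types' => array('token'),
  'major_version' => 1,
  'minor_version' => 0,
);

class MySiteArticlesResource extends RestfulEntityBaseNode {

  /**
   * Overrides RestfulEntityBaseNode::publicFieldsInfo().
   *
   * Maps Drupal fields onto the public payload.
   */
  public function publicFieldsInfo() {
    $public_fields = parent::publicFieldsInfo();
    $public_fields['body'] = array(
      'property' => 'body',
      'sub_property' => 'value',
    );
    return $public_fields;
  }

}
```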
So there are a lot of competing standards for specifying how exactly an API spec should look. You might have heard of Swagger; there is also RAML. We chose RAML because it closely resembles YAML: it is readable both by computers and by humans, and quite a lot of parsers exist for RAML in various languages. The best part about RAML is that it allows you to auto-generate test cases from the specification. So the only thing we needed to do, even before we started coding the API itself, was to write down a RAML spec and share it with all the API consumers and stakeholders, so that they got a heads-up on exactly what they could expect to consume once we published the API. The other goodie about RAML is that there is a PHP RAML parser, which lets you parse a RAML specification and convert it into a Drupal web test case, so you need not spend your time writing manual test cases for an API endpoint; you have a lot of better things to do. If you can see this, this is how RAML looks: pretty textual. You can read it and infer what the endpoint does. This is an example of the RAML format, and it very closely resembles its cousin, the YAML format (an illustrative snippet follows below).

Like I said, the authentication is pretty configurable, and RESTful comes with out-of-the-box authentication which is pretty secure. But since this was not a public-facing API, we didn't have very stringent security requirements, so we cut some corners there and wrote our own lightweight authentication to squeeze more performance out of the whole system. Because we used RESTful, we were able to do this; I'm not sure we could have done it with any other services-based module.

There is a lot of literature on how you should version your RESTful API. Even where you place the version is a question of debate: some people say you have to place it in your URL, others say in your headers, and so on. RESTful only allows you to place it in the URL. But there is an even bigger question of how you increase the version number when you are creating a RESTful resource. We had a policy of increasing the minor version number by one every time the payload or the behavior changed. The underlying concept here is that your RESTful API version is a sort of contract between the consumer and the producer: what the payload will be, what the authentication scheme will be, everything. Any of these changes is technically a change in contract, so you bump up the version number. Versioning was useful in describing a contract the same way RAML was useful in describing a spec (a sketch of our inheritance-based approach follows below).

We did have challenges with RESTful. We had an endpoint which had to imitate a URL alias in Drupal: given a URL alias, you have to get all the metadata and fields for that entity, and it could be anything, a user, a node, or a taxonomy term. There were some tricky parts here, like what to do when the alias is malformed, or when there is a URL redirect. You have to do a lookup on the URL alias table every time, and fortunately there are modules which expose APIs for that, so we made use of those in RESTful. We also needed to expose the metadata of each entity along with that URL alias, so we had to consume the, sorry, not the endpoints, the functions exposed by the Metatag module to get the meta tags for that entity.
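Since the slide itself isn't reproduced in this transcript, here is a minimal illustrative RAML 0.8 snippet in the spirit of what is described above. The endpoint, parameters, and example payload are invented for illustration, not copied from the legacy.com spec.

```yaml
#%RAML 0.8
title: Content API
version: v1.0
baseUri: https://example.com/api
/articles:
  get:
    description: List published articles.
    queryParameters:
      page:
        type: integer
        description: Zero-based page of results.
    responses:
      200:
        body:
          application/json:
            example: |
              {"data": [{"id": 1, "title": "Sample article"}]}
```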
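A hedged sketch of the contract-bump policy described above: version 1.3 of a resource inherits from 1.2 and overrides only what changed, so consumers pinned to 1.2 keep working. Class and field names here are illustrative; in RESTful, each version would also get its own plugin definition carrying the bumped minor version.

```php
<?php

// v1.2 of the resource: the existing contract.
class MySiteArticlesResource__1_2 extends RestfulEntityBaseNode {

  public function publicFieldsInfo() {
    $public_fields = parent::publicFieldsInfo();
    $public_fields['summary'] = array(
      'property' => 'body',
      'sub_property' => 'summary',
    );
    return $public_fields;
  }

}

// v1.3 adds a field to the payload. That changes the contract, so the
// minor version is bumped; everything not overridden is inherited, and
// consumers on 1.2 keep hitting the old class untouched.
class MySiteArticlesResource__1_3 extends MySiteArticlesResource__1_2 {

  public function publicFieldsInfo() {
    $public_fields = parent::publicFieldsInfo();
    $public_fields['byline'] = array(
      'property' => 'field_byline',
    );
    return $public_fields;
  }

}
```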
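For the alias-imitating endpoint, here is a rough sketch of the lookup chain just described. `drupal_lookup_path()` and the Redirect module's `redirect_load_by_source()` are real Drupal 7 APIs; the wrapper function, the payload shape, and the exact Metatag call are assumptions for illustration.

```php
<?php

/**
 * Hypothetical helper behind a "path" resource: resolves an alias to an
 * entity, following redirects, and attaches its metadata.
 */
function mysite_api_resolve_alias($alias) {
  // Look up the internal path (e.g. node/123) for the alias.
  $path = drupal_lookup_path('source', $alias);

  // No alias? Check the Redirect module so the consumer can issue a 301.
  if (!$path && module_exists('redirect')) {
    if ($redirect = redirect_load_by_source($alias)) {
      return array('redirect' => $redirect->redirect);
    }
  }
  if (!$path) {
    // The consumer turns this into a 404.
    return NULL;
  }

  // The entity could be a node, a taxonomy term, or a user.
  $entity = NULL;
  if (preg_match('#^node/(\d+)$#', $path, $m)) {
    $entity = node_load($m[1]);
  }
  elseif (preg_match('#^taxonomy/term/(\d+)$#', $path, $m)) {
    $entity = taxonomy_term_load($m[1]);
  }
  elseif (preg_match('#^user/(\d+)$#', $path, $m)) {
    $entity = user_load($m[1]);
  }
  if (!$entity) {
    return NULL;
  }

  return array(
    'entity' => $entity,
    // The talk mentions consuming functions exposed by the Metatag
    // module; metatag_metatags_values() is one such function in the 7.x
    // branch, but its exact signature varies by version, so treat this
    // call as illustrative.
    'metatags' => isset($entity->metatags)
      ? metatag_metatags_values('node', $entity->metatags)
      : array(),
  );
}
```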
RESTful ships with batteries included: caching functions and provisions. You can integrate it readymade with either Redis or Memcache; we had to go with Memcache because Memcache was all our platform had. You need to clear your cache very diligently based on context, because you have to decide between invalidating every time a resource changes versus invalidating only those endpoints which actually change when you change a particular node or entity. So we had to build this sort of mapping (a sketch follows after the Q&A below). I've detailed this in a few blog posts which I have linked here. We thought this was a very generalized use case, so we have put it up as a contrib module on d.o. You can check it out; I mean, not yet, because we've not pushed any code. We probably will.

Any questions around RESTful? Yes? Yeah, you can do it. Whatever logging you're using, you just have to plug it inside your resource-handling class or function. That's it. You mentioned versioning in REST, where you bumped up a minor version; will the older version still keep working? Yeah, the older version will still exist. That's a good question; I was expecting somebody to ask this, actually. We closely followed an inheritance mechanism for versioning: for anything that changes in 1.3 from 1.2, we just inherit the previous class and only change the function that changed. So let's say there is a consumer working right now on 1.2: it will still continue to work, as long as you don't decommission or disable it. So you can have multiple versions working at the same time? Simultaneously, yes, yes, because there may be consumers still consuming the older API, as happens with all REST APIs. You don't want to disable that. Okay, thank you.

Hello, yeah. Okay, so I'm Hussain. I'll be talking a bit about RESTful Panels and migration. One of the requirements of this project was that even though it's a decoupled system, the editors still wanted to maintain control over the layout, over the content in the layout, and so on. Now, of course, it's a decoupled system. If it were not decoupled, the answer would be easy: you just use Panels, you've got a very great interface and a whole fleet of tools that work with Panels, and you're done. But this is decoupled, right? So you can't do that here. Actually, before starting, we also looked at Presentation Framework. How many of you here know Presentation Framework? Who is a fan? No one. So, Presentation Framework: I'm talking about a module developed by Mediacurrent (that's right, Mediacurrent, yeah) for weather.com. You know weather.com runs Drupal now, and Presentation Framework is something they use to make handling panels easier. We started down this path but quickly realized that this is not really what we were looking for; we were actually looking for the exact opposite, something in a different direction. And we built this. Yeah.

So this was our problem. We can let editors use Panels; they have access to the Drupal back end, of course, and you have your standard Panels configuration, where you create panes and all that. But we wanted to make it available as regular JSON output, you know, through a RESTful endpoint. Basically, what you see on the screen is what we wanted: your regular pane configuration, converted to JSON, something you can work with in code. And we built this; it's contributed.
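Before moving on to panels, here is a minimal sketch of the many-to-one purge mapping Lakshmi described above, assuming a hook-based custom module. The bundle-to-endpoint map and the cache bin name are hypothetical (RESTful's render-cache bin and clear mechanism vary by version), so treat this as a sketch of the idea rather than the contributed module's code.

```php
<?php

/**
 * Implements hook_node_update().
 *
 * Purges the cached payloads of every endpoint affected by this bundle.
 */
function mysite_purge_node_update($node) {
  // Many-to-one: one bundle change can invalidate several endpoints.
  $map = array(
    'article' => array('articles', 'homepage', 'path'),
    'gallery' => array('galleries', 'path'),
  );
  if (isset($map[$node->type])) {
    foreach ($map[$node->type] as $resource) {
      // Wildcard-clear all cached responses for the resource.
      cache_clear_all($resource . '::', 'cache_restful', TRUE);
    }
  }
}
```

The same mapping would typically be wired into `hook_node_insert()` and `hook_node_delete()` as well, so creates and deletes purge the same endpoints.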
So this is the overall working of RESTful Panels from a very, you know, bird's-eye view. I'll not go too deep into this, but if you're familiar with RESTful at all, you would know that data provider classes are common in RESTful; we use the same methodology. And to actually render the panel which the editor builds, we are using the standard Panels renderer. This is probably what you're using anyway on the front end. Out of the box, there are two choices, the standard renderer and the IPE; we went with the standard renderer just because it's simpler. And in RESTful Panels you have something called a structured renderer, which makes the output suitable for JSON. In our custom namespace, very specific to legacy's requirements, we built a module called LegacyPane. In legacy these were not really plain panels; whatever we wanted to return through this endpoint were panelized nodes. RESTful Panels has a class called RestfulPanelsPanelizer for that, which you can just extend in your custom module, and you'll be able to output Panelizer content as JSON.

Another thing this module provides is that it passes in a RESTful context. Do you know what I mean by context, as in Panels and CTools contexts? Okay, some do. Basically, what this helps with is what you saw in the screenshot before. Take the legacy pane you saw in the diagram: if it is being rendered normally, on a normal page, it's rendered as a bulleted list, but in JSON it comes down as a structured JSON object, your regular JavaScript object of key-value pairs. This is made possible because the RESTful context is passed in, so the content type can take a decision: should it render as an array or as a bulleted list? The flexibility is yours; you can do whatever you want with the data (a hedged sketch of such a pane follows below). This particular module is already contributed and available for use. There is a dev release; go try it out, and I'd love to hear from you if you do. It's d.o slash project slash restful underscore panels.

What this module currently lacks, basically the reason it's still a dev release, is that it's not completely tested with a variety of contexts. We never had such a use case in legacy, and that's why it's not tested as of yet; that's one thing we want to look into. Another thing is meta tags. You know the Metatag module, I'm sure. You would panelize nodes, and they would carry certain meta tag information along with them, and by default RESTful Panels does nothing with that. Right now in LegacyPane we are handling this ourselves, but I'm looking forward to adding support for that out of the box, because it's a very common scenario. And Panels variants: a node can have multiple variants, right? But they are actually really different displays. Even now you can use them, but the onus of determining the display is on you; maybe that is something the module can do as well.

I'll quickly cover migration, it's a very big slide, and then I'll be open for questions. So here we migrated from SQL Server, and this particular database was built up over 15 years, so you can imagine what kind of tables there might be. But one thing about it was that it's very different from your regular Drupal content model of nodes, fields, field collections and so on.
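Going back to the context idea for a moment, here is a hedged sketch of a CTools content-type render callback that checks for a RESTful context and returns structured data instead of themed HTML. How the context is actually flagged is defined by the RESTful Panels module; the `restful` key and the loader function below are assumptions for illustration.

```php
<?php

/**
 * CTools content type render callback for a hypothetical pane.
 */
function mysite_panes_quote_list_content_type_render($subtype, $conf, $args, $context) {
  // Hypothetical loader returning an array of quote strings.
  $items = mysite_panes_load_quotes($conf);

  $block = new stdClass();
  if (!empty($context->data['restful'])) {
    // Rendered through RESTful Panels: return the plain array so the
    // payload comes out as a structured JSON object.
    $block->content = $items;
  }
  else {
    // Normal page render: the same data as a themed bulleted list.
    $block->content = theme('item_list', array('items' => $items));
  }
  return $block;
}
```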
That difference was one of the challenges: correctly translating the source data into Drupal-specific data. It was just a different form of normalization, I would say. So what we basically did, and this is a best practice you should be following on your migration projects anyway, is map everything. If you don't know what I mean, it may not make sense right now, but I'll go with it anyway. The Migrate module, the framework, allows you to map only the required fields, but as a practice we map each and everything anyway. What happened in this case was that migration was iterative: it went hand in hand with development, and while the site build was not finalized, we were already migrating data. So what if fields change? Field names change, and fields get dropped or added all the time. The migration framework is able to tell us when there is any such change; it throws an error. If you don't map everything, you will get all the errors and just ignore them, but if you map everything, you will be on the lookout for any such errors, which is really helpful (a mapping sketch follows below). We also migrated different types of data at different times, so we split them into migration groups. Again, if you're familiar with Migrate, I'd really encourage you to look into these best practices; it's a great framework, a great module.

So using these mechanisms, we migrated around, as you can see, 2,500 articles, 5,000 media items, various galleries, and around 1,100 affiliates. They're all very, very discrete. You can probably see five bullet points on this slide, but these are actually very complex structures, and we had maybe 15 or 20 migrations; I don't remember exactly, but we had a bunch of different migrations covering each and everything. The data structures get really complex. That's about it. Any questions?

How long did the migration take? Do you mean the development or the running of the migration? The development, like I said, was spread out; it went hand in hand with the development of the site itself. Each sprint, we would identify the elements that had been built and write the migration along with that. So if I say it was spread out over weeks, that doesn't give the correct picture, because it didn't actually take weeks of solid effort; it was interspersed with other development. The time for an entire run depends on the server. Jordan, do you remember how long it takes to run a migration? Yeah, the migration happened quite a while back; I don't really remember how much time it took, but I think 15 to 30 minutes. Yeah, okay. And there are things field collections do that make it very heavy on performance to delete them. Creating them is easy; deleting takes time. Because our data structures were quite complex, we had a migration just for field collections, and if you tried to roll back that migration, each rollback of a field collection would re-save its node. We actually submitted a patch to fix this, and it has subsequently been fixed, I think. I mean, we found a lot of things; it's not just RESTful Panels and RESTful Purge that came out of this. A variety of patches went in over the course of the development. So for this use case, field collection was suitable for the job.
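Here is a sketch of that map-everything practice against the Migrate 7.x-2.x API. The source query, field names, and class are illustrative, not legacy.com's actual migrations; the point is that every source column is either mapped or explicitly declared unmigrated, so schema drift surfaces as an error instead of being silently ignored.

```php
<?php

/**
 * Hypothetical migration that accounts for every source column.
 */
class LegacyArticleMigration extends Migration {

  public function __construct($arguments) {
    parent::__construct($arguments);

    // Source: a table in the 15-year-old SQL Server database, exposed
    // here through a secondary Drupal database connection.
    $query = Database::getConnection('default', 'legacy_mssql')
      ->select('articles', 'a')
      ->fields('a', array('article_id', 'title', 'body', 'byline', 'legacy_flag'));
    $this->source = new MigrateSourceSQL($query);
    $this->destination = new MigrateDestinationNode('article');
    $this->map = new MigrateSQLMap(
      $this->machineName,
      array('article_id' => array('type' => 'int', 'not null' => TRUE)),
      MigrateDestinationNode::getKeySchema()
    );

    // Map everything: a renamed or dropped source field then shows up
    // as an error instead of being quietly skipped.
    $this->addFieldMapping('title', 'title');
    $this->addFieldMapping('body', 'body');
    $this->addFieldMapping('field_byline', 'byline');

    // Even deliberately unused columns are declared, not ignored.
    $this->addUnmigratedSources(array('legacy_flag'));
    $this->addUnmigratedDestinations(array('sticky', 'promote'));
  }

}
```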
I mean, do you have a more specific question? Because I really, I mean. Did you see field collection side issues, with revisions and all? Field collections themselves are entities, so they would have their own revisions, yeah. And I see what you mean; it just happened that it worked great for us. Do you want to add on to the answer, anything? It was more about migrating the data and the workflows; that's the only thing, and that was fixed, actually. The structure was already in place, so you decided to use it? Yeah, well, of course, the content structure is pretty complex for certain kinds of entities. Some are just simple articles, but others are, one, two, three. Yeah, pretty much, yeah. When you have quotes, videos, and images: go to legacy.com, slash news, slash galleries, and check out some of those. Because of the mix of different kinds of media, we needed to deliver all of those in the payload. Once in a while you have a video, and whatever was uploaded by the editor in that specific collection is what we wanted to deliver. So it's very use-case-specific. Okay. All right, I'll pass it on to Bassam.

Am I audible? Yeah. Okay, cool. Yeah, so as you can see, the front end at legacy was quite diverse. We are running everything on top of the Node layer. We use Express to consume the APIs that Drupal provides; those APIs are then taken by the React and Flux implementations, which render the client UI. We use other open source tools too, like Babel for transpiling from ES6 to ES5, Webpack as our build system, and Stylus for writing modular CSS.

So why did legacy choose to write a client-side application using React? There are three main reasons: performance, developer productivity, and avoiding content injection from the affiliates and third parties. Taking performance first: with client-side applications, you get a slow initial page load, but every subsequent page transition or request is extremely fast compared to what you get with conventionally developed applications. In turn, this leads to an experience similar to what you get with native applications on desktop or on iOS and Android phones. Another reason is developer productivity. With React you get easy composition: your React application is built of small components that compose into huge applications like legacy. So with React we were able to easily add new features or remove features without affecting the existing UI or breaking any tests. That made us agile; we were able to rapidly make changes. Similarly, event delegation and writing inline styles was extremely easy. We used inline styles to avoid the cascading problems that are common in large code bases; this was the pre-CSS-modules era, so we rolled our own solution for this. Coming to testing: your React components just consume a particular API and you get the component on your page. So what we did was take a component, pass it some JSON data, and render it in a headless or a real browser to test it. We used PhantomJS, and we used various versions of Chrome and Firefox for testing our React components. Similarly, for simulating events, we used a library that React provides called TestUtils. It's as easy as calling a function.
You just call a function like click or keypress, and that's it; you can replicate those interactions just by writing some code. Coming to some solutions we implemented while working on legacy, one of the most important is server-side rendering. This helped us solve the SEO problem that is common with single-page applications, where the crawlers usually get just an empty body tag and a bunch of scripts, which isn't useful; none of your page content can be indexed. So we rendered the React components on the server using Node. What happens is that the first time the browser gets the page, it gets the whole markup, and once the whole page is there, client-side rendering kicks in and React takes over. That also improves the slow initial page load that is common with single-page applications. Another good side effect is that since we were rendering the client-side application on the server, we were serving plain static HTML pages, which made caching very easy.

On the solutions specific to Drupal: as you already know, we are using Drupal as a data source, and, as Hussain mentioned, it provides the layout configuration as well. We consumed the layout configuration to build out the page structures, and those structures were later filled in with the data coming from Drupal. So you were getting a free, flexible, drag-and-drop way to create your structures, and they were built using React. One solution specific to React: React doesn't play well with raw HTML, because React's main feature, the virtual DOM, doesn't come into play if you pass it raw HTML. So to harness that functionality, we took the raw HTML on the Drupal layer, stripped it, pulled out the HTML tags and attributes, and passed it to the client side as a JSON object. On the client side we consumed this JSON object and built the React components there, so we didn't have to compromise on the virtual DOM (a sketch of this stripping follows after the Q&A below). Similarly, to comply with specifications like schema.org, we passed the metadata as JSON objects, and it was consumed on the client side by React.

Any questions specific to the front end? On infrastructure, wasn't it an additional pain to render first using Node.js and then pass on the HTML? Yes, yeah, but there are benefits to it, so you have to do it if you are implementing a client-side application. And you mentioned at the end that you were passing the metadata as JSON objects; was that for Node.js or for the client? That was for Node.js, and for subsequent requests React consumes it. Initially Node takes it, and then once the page is rendered in the browser, React takes over after that. Anyone else? Yeah. You mentioned inline styles; that means you have a style attribute that you are actually using on elements. Isn't that against client-side best practices, where the CSS would be extracted out automatically? Right, but since React came out, conventions have drastically changed; we are even writing the markup, the JavaScript, and the stylesheets in the same file. It depends on developer productivity and how well you can manage your code base. This worked out very well for us, and organizations like Netflix and Facebook have been doing it for a while now; it scales very well for large code bases. No, no, no, there is no theming done on the Drupal layer. All right. You just take this.
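Here is a hedged sketch of that Drupal-side stripping step, assuming a recursive DOMDocument walk. The output shape (tag, attributes, children) is an assumption about what a React consumer might rebuild components from; the function names are hypothetical.

```php
<?php

/**
 * Converts raw HTML into a nested structure suitable for JSON output.
 */
function mysite_api_html_to_tree($html) {
  $doc = new DOMDocument();
  // Suppress warnings from real-world, not-quite-valid markup, and
  // force UTF-8 interpretation of the fragment.
  @$doc->loadHTML('<?xml encoding="utf-8"?>' . $html);
  $body = $doc->getElementsByTagName('body')->item(0);
  return $body ? mysite_api_dom_to_array($body) : array();
}

/**
 * Recursively turns a DOM node into tag/attributes/children arrays.
 */
function mysite_api_dom_to_array(DOMNode $node) {
  // Text nodes become plain strings in the JSON payload.
  if ($node instanceof DOMText) {
    return $node->wholeText;
  }
  $attributes = array();
  if ($node->hasAttributes()) {
    foreach ($node->attributes as $attr) {
      $attributes[$attr->name] = $attr->value;
    }
  }
  $children = array();
  foreach ($node->childNodes as $child) {
    $children[] = mysite_api_dom_to_array($child);
  }
  return array(
    'tag' => $node->nodeName,
    'attributes' => $attributes,
    'children' => $children,
  );
}
```

On the client, a consumer can walk this structure and map each `tag` entry to a React component, which keeps the virtual DOM in control instead of injecting raw HTML.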
All right. So, some final thoughts as we wrap up the presentation; I kind of want to say final learnings, having just gone through this very large decoupled project. Continuous integration and continuous deployment best practices are really important to making sure a decoupled project goes well. If we had not accelerated our continuous delivery, we would have left the front-end team blocked whenever there was missing data, a new requirement, or some kind of bug they had uncovered. We couldn't release on our two-week sprint cycles; we needed to release whenever they needed the solution, in order to unblock them. So continuous integration and continuous deployment are really important to have in place if you're going to take on this kind of project.

Personalization for a decoupled Drupal architecture requires specialized infrastructure or middleware. Looking at things like the affiliate universal navigation, the menus deployed across all of those affiliates: the performance of that level of uncached requests is, I guess, indicative of the need to host that kind of solution outside of Drupal. There are a lot of requests coming in for these menus, with a lot of context and personalization for the affiliates, and there are more performant solutions in Node and other architectures that we're considering at this point, looking forward.

Another thing to keep in mind, if you're taking on a decoupled architecture and working across two different teams, is that you're planning an architecture across two separate systems, and the decisions you make to deliver some technological solution within Drupal greatly affect the kind of solution that needs to be delivered on the other side, and vice versa. If there is a gap in the front end, Drupal is required to fill it, and if there is a gap in Drupal, the front end has to fill it. An example: we provide metadata for a lot of the discrete articles and galleries as they're rendered from RESTful's URL resource. On other pages, such as some of the RESTful Panels pages, we actually serve up tokens which are contextualized for the specific use cases of other parts of the ecosystem. Because we're using RESTful Panels to create this configurable layout, we needed a solution where metadata was contextualized on the front end instead of within Drupal.

When you're planning this kind of decoupled architecture, you really need to take careful consideration of where you're introducing points of failure. One example is the RESTful path resource: all of our redirects and 404 errors come from this one resource, so it needs to perform well at scale in order to serve up all of those 404s and redirects. And, as a final thought, I think this is a great case study of a progressively decoupled system. It's something where we're continually innovating as we work on the project, delivering new open source contributions to the Drupal community in order to continue decoupling certain parts of Drupal. Questions? And not just for me; for any of the team members. How is cache invalidation handled? Lakshmi, do you want to talk about that? Most of the time there is no one-to-one mapping between your data and your resource; whenever people make a change, it's more of a many-to-one mapping.
So we have to have a prior configuration for when X or Y changes. Let's say bundle A, bundle B, or bundle C changes: then we invalidate the following endpoints. We have this set up in code; RESTful allows you to do that, or rather has a provision to do it. So every time there is a change in any of these, we have the proverbial hooks, right? They take care of purging the respective endpoints' caches, and that's how we handle it. Did I answer your question?

What about comparable products? I think one of those would be Adobe Edge as an alternative, and I think it has some RESTful resources. There's also Contentful, a hosted content service; it's really easy there to build a content architecture and expose those pieces of data to be consumed. Contentful is actually the use case I was talking about earlier, or rather the business decision that legacy made: they wanted to own the platform and own the data. When you choose a third-party hosted service like that, you don't actually own that system, and if they go out of business, you're in a tight spot. So that's one of the key decisions, I think, that we made in going with Drupal. Any other questions? All right, thanks, you guys. Thanks, everyone, for coming.

Welcome to DrupalCon Asia; it's a real pleasure to be here. We're presenting today a case study on legacy.com: migrating a top-50 most-visited website in the US onto Drupal. First I'd like to introduce the team. My name is Jordan Ryan; I'm the CTO of Fast and Interactive, and I was the solutions architect and product owner during the delivery of legacy.com's migration to Drupal. And Ankur, CTO of Axelerant, would you introduce the team? So Axelerant was brought in to do the implementation work for the project, and Jordan and I worked together, along with some of the team members who are here: Lakshmi, the API developer; Hussain, who handled not only the migration work but also an important component of the project, managing presentation, and there's a module that came out of that; and Bassam, who was almost part of the front-end team implementing the Node.js React solution.

So before we get started, I'd like to introduce legacy a little bit. Legacy is, as I mentioned before, a Quantcast top-50 website. They have roughly a few hundred million page views a month and maybe 20 to 50 million uniques in any given month, and they serve most of their content traffic through affiliate partners. They serve obituaries across roughly a few thousand newspapers in the continental US and internationally; they consume obituaries from these newspaper partners and serve them up on channel pages for those newspapers. Legacy's business, as far as the Drupal solution we're presenting today, is just the features or news editorial section of their site, which drives a lot of their consumer engagement once users come into the site from these affiliate channel partners.

So one of the questions everyone always wants answered is: why Drupal? We'll talk a little bit more about why legacy specifically wanted to, let's see here. How's that, better? Sorry. We'll talk a little bit more about why legacy wanted a decoupled architecture in just a little bit, but first we have to talk about why they wanted to use Drupal. The answer, very simply, is that they were building a team that was going to create an innovative and progressive front end in terms of design, and they wanted to be able to quickly iterate on the design and implement new features.
And so, in order to deliver that kind of solution, they wanted Drupal as a service, a services architecture, so that they could continue to deliver a high-quality consumer experience without spending a lot of time innovating within the CMS space. They looked to Drupal for that kind of expertise and delivery.

So, as I mentioned before: why decoupled? Legacy was really looking to innovate on the front end; they didn't want to innovate in the CMS. They were looking for a tried-and-true implementation, and Drupal was kind of the enterprise standard for them, based on the requirements we went over in the discovery phase. Content was a small part of a much larger ecosystem. When you look at legacy's architecture, the content system serves only a few million page views a month compared to the site's few hundred million page views a month, which means the Drupal application itself did not need to be scaled up the way their Node React application did. And using React and Node, which was legacy's decision for their front-end application, lent itself to componentized widgets that needed services to be populated with data: again, another reason we chose decoupled. Finally, legacy wanted to own the data and the platform. This goes back to why Drupal rather than an additional hosted service: there are certainly other solutions out there that can provide RESTful APIs for content, but legacy's goal of owning that content and that platform was another key reason they decided to go with a decoupled Drupal implementation.

So, talking a little about what we did: this is just a quick overview, and then we'll go through each of these pieces in detail. For context, all of this happened over a very fast-paced six-month timeline. The initial discovery engagement was about four to six weeks, and after that I worked with legacy to bring in additional partners to deliver this solution. After executing the discovery and deciding on some of the architectural components, we brought in Axelerant. Yes, so on the Axelerant side, we covered the technical architecture for the solution, we did a lot of the site building and any custom development, and then the core of it, which was the API development. We migrated data from MS SQL and continued to help their front-end team. A lot of performance optimization work was done, we continue to engage with them, and there's continuous discovery happening on the solution; we've been working together since.

So, some of the key challenges we were trying to address with the Drupal solution we proposed. There were actually two teams: legacy already had their front-end team, which was going to implement the Node React solution, and they had a different velocity than we did, so that was something we had to manage while implementing the Drupal solution. Then, managing presentation. There was a unique requirement: they wanted to give editors control over the layout of a page. As you know, if you're working with a decoupled system, how do you control layout in Drupal? There have been a couple of different approaches to that; we came up with a unique solution, which Hussain talks about during this presentation. Also, the power of Drupal is really in managing metadata.
The SEO value is key: you have content, you have metadata around it, and that's what enriches it; it's great for search engines. How do you bring that value onto a decoupled platform? That was another challenge we had to consider when building the solution. Since a lot of APIs were being developed, we had to ensure we were constantly versioning them so we weren't causing issues for the front-end team: any contracts we laid out up front were adhered to, and any new changes we made were versioned in the APIs. Also, legacy by nature as a business works with a lot of different newspapers, so they had to serve a lot of menu content out to those newspapers. It was highly customizable, but it had to be extremely cacheable so that, given the kind of traffic volume the site gets, we could continue to serve it. So there were a lot of varying page elements that needed to be addressed, and likewise caching mechanisms had to be put in. We considered all of these problems in our solutions. Also, React markup is unlike plain HTML, as most of you know, so we had to look at componentizing HTML for the various React elements. Now we'll talk about some of our methods and how we went about doing some of this. Sure.

So, going back to that initial discovery, one of the things we executed in order to deliver the most value to the client is this concept of value-driven development: the idea that we define the business metrics or key goals for each of the epics or user stories and document them in the tickets, so that when a developer implements a solution, they have context with which to deliver the solution that will have the most value for the client, as opposed to a less-defined user story which may not capture those specific business values. Another element we focused on early in the discovery process was API design first: focusing on what that contract was going to look like, based on the initial comps the client had delivered and executed. That allowed us to look at things like the content architecture and how it would be built out within Drupal in order to expose those fields. Another key item: legacy was really looking for Drupal to become a platform solution, and by that I mean the delivery needed to be such that the interface was dependable. There was also a lot of complex business logic that we had to extract over the course of the discovery. This is actually what spurred the continuous discovery that needed to take place over the course of the project, because as we worked with the client and with the front-end developers, requirements changed; I'm sure you've all been through that before. And where requirements most particularly continued to change and be revised was around the affiliate partners, or rather the solution explicitly being developed for those affiliate partners, which was the universal navigation: a menu that was being delivered across all of those affiliates.