Welcome to yet another exciting session with Gayatri and Pallavi from ThoughtWorks, and this topic seems very interesting to me, as I stated. Today's topic is how to approach continuous testing of cross-functional requirements. I won't waste much time. Over to you guys. Thank you, Vijay. I'll share my screen right now. So once again, welcome everyone. It's so joyful to see a lot of participants already in the room. So welcome once again from me and Pallavi. So as Abhijitha has already introduced the topic, we're going to talk about continuous testing of cross-functional requirements. Quick introductions. So myself, I'm Gayatri Mohan. I'm a principal quality analyst at ThoughtWorks. I also play the role of global QA SME for the ThoughtWorks capabilities, mainly crafting career pathways for the QAs in the company. I'm also a member of the Tech Advisory Board at ThoughtWorks India. I recently released a book called Full Stack Testing with O'Reilly. Pallavi is my colleague. Pallavi, would you like to give a short intro about yourself? Yes. Thanks, Gayatri. Hi, everyone. I'm Pallavi Vadlamani. I'm a lead quality analyst at ThoughtWorks. I also play the role of a delivery principal. I recently moved to Milan and I'm enjoying the European summer, so right now I'm based out of Milan. All right. I think that's all with the introductions. Let's quickly take a look at today's agenda. Today, we'll start off with an introduction to cross-functional requirements. Then we'll talk a little about continuous testing, and then we'll see how to apply continuous testing and the strategy for cross-functional requirements. Then we will be delving deeper and talking about a strategy for continuous testing for two of the cross-functional requirements, security and accessibility. After which we will take Q&A, and that's the agenda for today. All right. Let's get started. Introduction to CFRs, like the agenda says.
Before I talk about CFRs, I wanted to understand from you how many of you have worked on CFRs or have an idea or context about cross-functional requirements. All right. I think there probably are a mix of responses. Some of you are familiar with the term and some of you say that you do not have the exposure. We hope today's session introduces you to the world of CFRs and the importance of including them in your test strategy. Let us start talking about CFRs with an example of a simple three-tier e-commerce web application architecture, like Amazon's. We'll talk about CFRs in reference to this e-commerce web application. As you see, this is a very simple architecture. It has three layers: the user interface layer, the services layer, and the database. Within the services layer, there are three different types of services. One is the identity and access management service, which handles all the authentication and authorization of the requests that are coming in. The order service handles all order management, be it creating orders, modifying orders, or deleting orders. Then we finally have the payment service, which handles your payment requests, cancellations, refunds, etc. Now, let us take a look at what the functional requirements for this e-commerce application would be. Functional requirements are usually prioritized and driven by the business. Some of the functional requirements would be that the user should be able to log in and browse the product catalog. They should be able to purchase an item and get it successfully delivered. They should also be able to manage and take a look at the orders they have placed and do a lot of tracking of the orders. While these are some of the functional requirements, there are many more that are discussed during the design and ideation phases before the application is built. But are these requirements enough for us to say that our application is ready for users? Is it just enough to build these?
Now, let us say that you are trying to add a product to a cart and it is taking you more than two minutes for the product to get added to the cart. Or you make your payment, you finish the order, and all the details of your cart are available on the internet. In both of the above scenarios, the functional requirements are still valid: you are able to add a product to the cart, and you are still able to pay for it. But they add little or no value, because either your profile details are exposed or it is taking forever for you to actually add a product to the cart. Would someone continue using this? Would we all be happy using an application like this? I think most of us would not be very happy using an application if our details are exposed or sensitive data is available to the public. This is where CFRs, or cross-functional requirements, come into the picture for any application. In the scenarios we have spoken about, the CFRs would be that any action taken on the application should complete within x seconds, and in terms of sensitive data, that the application stores and transmits all of the data securely: not just the cart details, but also all the profile details, the address, the telephone number, and any sensitive data of the user. Now let's take a look at the definition of cross-functional requirements. Cross-functional requirements, as the name says, are requirements that cut across the entire application and all the functional requirements, and they need to be built into every functional feature to achieve a high-quality application. As you can see on the service architecture diagram on the right, we have CFRs cutting through each of the layers and each of the functional features.
As is evident, while the functional requirements refer to the core business services offered to the customers, cross-functional requirements refer to the execution and evolutionary qualities of an application. When I say executional qualities, I'm talking about the behavior of the application during runtime; some of the executional qualities are availability, authentication, monitoring, or security. Let's take the example of availability. Let's assume that in the case of our e-commerce application, like Amazon, on a Prime Day sale, the application goes down while you're buying that last Xbox which is available; I think all of us would be very, very disappointed. Ensuring that the application is always available for the end users is an executional quality of the application. Similarly, evolutionary qualities talk about the static qualities of the application, like maintainability, scalability, extensibility. Since the world has expanded and the internet is accessible everywhere, expanding into new geographies and spinning up a new instance on demand speaks to scalability, and it is an evolutionary quality of the application. Now that we've spoken about what cross-functional requirements are, let us take a look at some of the critical cross-functional requirements for our example, or the architecture in consideration. What we've done here is color-coded the cross-functional requirements. These are some of the critical cross-functional requirements for the example in consideration. The pink refers to the user interface layer, the purple talks about all the other layers of the application, and the last color, yellow or mustard, talks about the CFRs which cater to the entire application as a whole. Let's start with the first, the user interface layer. I would like to talk about accessibility.
In today's world where the internet is freely available and an app is used by a lot of different kinds of audiences, there are laws which mandate accessibility so that the app is always usable and can cater to the needs of differently abled people and all sets of audiences. Similarly, I think on this call there are a lot of us based in different regions. It becomes important for the app to cater to the local audience, with the language and the URL catering to the local region from where the app is being used, and that is where internationalization or localization comes into the picture. Now moving on to the second category, which is authentication and authorization. Whenever a user wants to place an order, or if I want to place an order on Amazon, I will have to log in so that I can use all of my saved details and also be able to place the order successfully. So authentication becomes very critical here for the application to identify the user that is logged in. Authentication does not end with the UI. The application also has to decide who gets access to what, and that is where authorization comes into the picture; it interacts with all of the services in the architecture and allows you to access what you're meant to access. Moving on to the third color, which talks about the CFRs for the entire application, I would like to talk about monitoring. Monitoring is the ability of the system to collect data and alert you when something is going wrong. For example, you have many services in your architecture and you want to be notified when one of the services goes down, even before the end user notices it. So alerting and monitoring help here. So these are some of the CFRs for this example of an e-commerce platform we have taken. While this is not an exhaustive list, there are many books and blogs that talk about at least 30-plus CFRs.
The need for and the kind of CFRs that the application caters to can be decided by the businesses and the teams that are building it, and accordingly a test strategy can be defined for those. All right. So some of you now might be wondering how these cross-functional requirements are different from non-functional requirements. The term cross-functional requirements was coined about 12 years ago by a ThoughtWorker named Sarah, and it's been used by a lot of folks within our organization and also in the software industry. Even Sam Newman calls them cross-functional requirements in one of his books, Building Microservices. It makes a lot more sense to call them cross-functional, according to us, because the requirements, as the name says, are spread across the entire application and they have to be built and tested as a part of every feature that is being built. Especially in ThoughtWorks, we ensure that for every story or every user feature that is being built, we do consider the CFRs that need to be accounted for as a part of that feature as well. Also, calling them non-functional may make them come across as non-essential or trivial, and they might not get the priority they should be getting, which is again totally against what we want to achieve in the end, which is a high-quality software application. Now that we've understood at a very high level what cross-functional requirements are and why it is important to test them, let's quickly understand what continuous testing is. Continuous testing is a process of validating the quality of the application that is being built, by both manual testing and automated testing, after every commit and after every change that is happening, and getting notified when there is something that is deviating from the intended quality outcome. Let us take an example.
I'm trying to access Amazon and the page doesn't load; it takes forever for my product to get added to the cart, and it is leaving me frustrated. How could this have been avoided and not be shown to the end user? If I had my performance KPIs checked as a part of my CI/CD pipelines, and had tests that fail when my performance baseline numbers are not being met, then the feedback is within the development scope and it can be fixed immediately, even before it reaches the end user. While CI/CD pipelines are one way of continuously getting feedback, the whole testing process can start before development itself, during the design and prototyping phase, where everyone is talking about the flows, reviewing if a particular flow makes sense and if it will deliver the intended outcome to the user. It continues during development in the form of unit tests and integration tests which run on your local machine and give you instant feedback, rather than waiting till the CI/CD pipeline runs. On the CI/CD pipeline we have automated tests running for both functional and cross-functional requirements. Finally, once the application is deployed to the intended environment, we can always do manual exploratory testing of various scenarios, reviewed by the PO. All of these contribute to the continuous testing process. What does continuous testing let us achieve? One of the key important things is that it shifts the entire testing left, so that we get early feedback, we are able to work on it quickly, fix the issues, and deliver a product which is very high in quality. Another major intended outcome of shift-left testing, by working on continuous testing, is that the product is always ready to be shipped, which is the key process that enables continuous delivery. Now that we've understood a little about continuous testing, let's take a look at some of the benefits of continuous testing.
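To make that performance-KPI guardrail concrete, here is a minimal sketch of a test that fails the pipeline when an action breaches its baseline. The `add_to_cart` function, the SKU, and the two-second budget are all hypothetical stand-ins; in a real pipeline this would call the deployed service.

```python
import time

# Hypothetical stand-in for the real "add to cart" operation.
def add_to_cart(product_id: str) -> bool:
    time.sleep(0.05)  # simulate service latency
    return True

# Performance KPI baseline: the action must complete within 2 seconds.
PERFORMANCE_BUDGET_SECONDS = 2.0

def test_add_to_cart_meets_performance_budget():
    start = time.perf_counter()
    assert add_to_cart("sku-123")
    elapsed = time.perf_counter() - start
    # Failing this assertion fails the build, so the feedback arrives
    # within the development scope, before the change reaches end users.
    assert elapsed < PERFORMANCE_BUDGET_SECONDS, (
        f"add_to_cart took {elapsed:.2f}s, budget is {PERFORMANCE_BUDGET_SECONDS}s"
    )
```

Running such a test on every commit is what turns a performance KPI from a document into a continuously enforced check.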
Continuous testing helps in achieving common quality goals. Like I said earlier, testing from the beginning helps us get early feedback and helps us build high-quality applications, with early detection of issues so that the end users do not suffer. It also increases the collaboration between all the roles on the team, which again increases the delivery ownership. It also enables continuous delivery, the capability to deliver on demand, and enables us to achieve the four key metrics of a high-performing team. Now that we've understood continuous testing and what cross-functional requirements are, let us delve into what a continuous testing strategy for cross-functional requirements could look like. Gayatri will take over from here. Thanks, Pallavi. So when we are talking about the continuous testing strategy for cross-functional requirements, I just wanted to give some context on why we chose to put focus on this particular topic. Mainly because whenever automation or continuous testing is spoken about, functional requirements automation takes priority, right? Like, we know that we need to write unit tests, we need to write integration tests, service and even UI end-to-end automation tests, and integrate them with CI/CD to get that continuous testing feedback for functional requirements. Very, very scarcely do we see that cross-functional requirements are included in the continuous testing process itself, and that is one of the reasons why we wanted to put focus onto this particular topic and see how we can actually approach continuous testing for CFRs.
One of the pieces of feedback that we hear when people want to try continuous testing for CFRs is that CFRs are very abstract, very vague, and each of them has its own way of being addressed, and that kind of abstractness keeps them at a distance for people who try to do continuous testing for CFRs. That is the puzzle that we wanted to break down, to see if we can provide a simpler approach to continuously testing the cross-functional requirements. So let's take a step back and look at the FURPS model, which is a model established really long ago for software requirements in general. The FURPS model is an acronym that stands for five themes: F is for functionality, U is for usability, R is for reliability, P is for performance, and S is for supportability. What the model is trying to tell us is that all of the software requirements, both functional requirements and the cross-functional or non-functional requirements, can be bucketed into these five themes. When we are bucketing them, what qualifies a requirement to get into one particular bucket is what you see on the slide there. For example, the category of requirements that can be realized as user flows falls under the functionality theme, and the category of requirements that affect the user's experience, like internationalization or cross-browser compatibility, falls under the usability bucket. Any requirement that makes the application fault tolerant falls under reliability; for example, you have to include error handling mechanisms, scalability, and all of those things fall under reliability. And performance is very obvious.
There are performance KPIs like availability, concurrency, and so on, and those requirements of an application fall under the performance bucket. The last one is supportability, where all the evolutionary code qualities like maintainability, testability, security, and secure code fall. So this way we are able to view all the software requirements through these five buckets, and there are testing techniques and testing tools available for us to test each of these themes, which is what we will be seeing next. We will then try to apply those testing techniques to arrive at a continuous testing strategy for a particular CFR in the later section of the talk. So what are the testing techniques that are available for each of these themes? Let's take functionality, for example: manual exploratory testing, functional test automation, and data testing techniques are there for testing all the functionality-related requirements. Although manual exploratory testing is not restricted just to functionality (it can be applied to any theme in this model), it contributes majorly to functionality. Some of the tools that can help us do that are Postman for APIs, browsers, Bug Magnet, Charles Proxy, and so on. For functional test automation, once again, for unit testing, integration testing, and service or UI end-to-end automation, there is a bunch of tools displayed here that can be used to cover the continuous testing of functional requirements. And data has become an entity of its own in today's world; there is a separate focus on testing for data quality and everything specific to data. So there is a bunch of tools that help us do continuous testing of data specifically, like Testcontainers, Deequ, Great Expectations, and so on.
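To give a flavor of what continuous data quality testing looks like, here is a minimal, Great Expectations-style check written in plain Python. The order records, field names, and status values are hypothetical; a real setup would run such expectations against every incoming batch and fail the pipeline on violations.

```python
# Hypothetical order records for the e-commerce example.
orders = [
    {"order_id": "A1", "amount": 49.99, "status": "DELIVERED"},
    {"order_id": "A2", "amount": 12.50, "status": "SHIPPED"},
]

VALID_STATUSES = {"CREATED", "SHIPPED", "DELIVERED", "CANCELLED"}

def check_orders(records):
    """Return a list of human-readable data quality violations."""
    violations = []
    seen_ids = set()
    for rec in records:
        if rec["order_id"] in seen_ids:
            violations.append(f"duplicate order_id {rec['order_id']}")
        seen_ids.add(rec["order_id"])
        if rec["amount"] <= 0:
            violations.append(f"non-positive amount on {rec['order_id']}")
        if rec["status"] not in VALID_STATUSES:
            violations.append(f"unknown status on {rec['order_id']}")
    return violations

# An empty result means the batch passes; in CI, a non-empty result
# would fail the build and give the team immediate feedback.
assert check_orders(orders) == []
```

Tools like Deequ and Great Expectations let you declare these rules instead of hand-coding them, but the feedback loop is the same.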
Coming to usability, the testing techniques that we can take help from are visual testing, user experience testing, and accessibility testing. These are some of the established testing mechanisms already. Visual testing is a way where we can compare screenshots against every incremental change in an automated fashion and get feedback; BackstopJS and Applitools are some of the established tools that help us do visual testing in an automated fashion. User experience testing is specifically about the design part of it. It's not just about whether an element is available in the same shape and color as in the design, but actually validating whether the design makes sense for a given user flow and a given set of targeted users. This is done using A/B testing tools and also prototype testing tools like UserZoom, Optimal Workshop, and similar kinds. And accessibility comes under usability. We need to make sure that the user flows are tested using assistive devices, and also that the web page, or any other application, is built in a way that it can be used by assistive devices. Some of the tools that could help us in the continuous testing process for accessibility are listed here. This includes both automation and also manual exploratory testing for accessibility in particular. When it comes to reliability, stress testing, chaos engineering, and infrastructure testing techniques are available to us. Stress testing is an extended performance testing approach where we are actually pushing the application beyond the expected load and checking whether the application is fault tolerant. We are trying to see how much stress it can take and how we can make it resilient to faults by stressing it.
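The core idea behind visual testing tools like BackstopJS can be sketched in a few lines: store a fingerprint of the approved baseline screenshot and flag any render that no longer matches it. This is deliberately the crudest possible version, assuming screenshots arrive as raw bytes; real tools diff pixels with a configurable tolerance instead of an exact hash comparison.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Content fingerprint of a screenshot (stand-in for a pixel diff)."""
    return hashlib.sha256(image_bytes).hexdigest()

def has_visual_regression(baseline: bytes, current: bytes) -> bool:
    # A hash mismatch flags ANY change; production tools allow a
    # tolerance so anti-aliasing noise doesn't fail the build.
    return fingerprint(baseline) != fingerprint(current)

# Hypothetical screenshot bytes standing in for real PNG captures.
assert not has_visual_regression(b"header-v1", b"header-v1")
assert has_visual_regression(b"header-v1", b"header-v2")
```

Run against every incremental change in CI, this is what gives the automated "the page no longer looks like the approved design" feedback.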
Some of these established tools are already available for us. Chaos engineering is more of an experimentation methodology where we are trying to uncover some of the behaviors of the application in a turbulent situation. It is not like an acceptance criterion or an edge case that we're trying to test out; rather, we are putting the application in a situation where it is really turbulent, and we are trying to uncover some of the existing flaws in the application and make the application resilient and fault tolerant using that testing technique. These tools could be used to do it in an automated fashion, to uncover issues and put in fixes to make the application reliable. And infrastructure is one of the backbones of any application. As Pallavi was mentioning in the beginning of the talk, any application these days is expected to go global, and that's when the businesses can gain revenue. So making infrastructure spin up within a click, with infrastructure-as-code practices, has come into the picture, and therefore ensuring that we are able to test the infrastructure requirements has become key. Some of these tools are helpful for us to automate the infrastructure as code, perform continuous testing, and make the application reliable in terms of infrastructure availability, security, and all of those things. Coming to performance, I think it's an old topic. Front-end performance testing has been there for a while; we have tools like Lighthouse, WebPageTest, and browser dev tools for front-end performance testing, and some of the well-established back-end performance testing tools to derive the performance KPIs and then structure the load accordingly to cater to performance requirements. And lastly, under the supportability category, we have a few of these testing techniques available to us today, with the tools respectively marked there.
Supportability, as we were saying, is about the evolutionary code qualities of the application. This is where I think it gets really big: maintainability, extensibility, readability. How do we actually go about testing for them? How do we actually make sure that continuous testing happens for them? That's where these testing techniques really come into the picture and help us. One of them is the architecture fitness test, to really ensure that the architecture that was originally visualized caters to some of the cross-functional requirements. For example, all the classes and packages should be independent; they should not have cyclic dependencies, so that they can be reused in another component. That is one of the architecture fitness requirements that caters to the CFRs maintainability and reusability. So how do we actually test for that? Some tools like JDepend and ArchUnit are very similar to JUnit; such checks can be written as unit tests, just to make sure that all the classes are within the same package or that they don't have cyclic dependencies. Those kinds of things can be added as automated tests and run as part of the CI pipelines, to ensure that these supportability-related, evolutionary code quality related requirements are continuously tested and the feedback is given to the team. This could look very trivial for a very small team, where all of us are seated at the same table, but when the team really grows and there are multiple streams working, often there are huge trade-offs that need to happen, like whether to cater to performance or to maintainability. For instance, skipping layers and directly calling the database could be performant, but it won't cater to maintainability.
That's where these kinds of automated tests will ensure that the layering is maintained, and they will give feedback in a holistic fashion. And we have static code analyzers that will help tell us whether readability is maintained and whether there are unused variables. That gives feedback as early as the development stage itself. So that is continuous testing starting from the development stage itself, telling us that these things could be avoided and giving feedback right then and there. Dynamic code analyzers have been in play for a long time as well, like the OWASP ZAP DAST tool, which is a security testing tool that injects attack payloads into the running application, tells us what the open vulnerabilities are, and can be automated and integrated with the CI. PIT is a mutation testing tool, once again, which tells us what are the tests or scenarios that the unit tests could have covered but left open. Those kinds of tools tell us whether the code is in a place where it can be evolved over time, catering to all the supportability-related CFRs. And linters, I think, are old-timers here, like ESLint, which gives feedback on JavaScript code for known errors. And stylelint and the ESLint compat plugin, using the caniuse browser data, are used for cross-browser compatibility; we can actually make sure that only the supported features for our supported browser list are being used, even during the development stage. So that is feedback we get as early as the development stage. Some of these aspects actually help us in catering to supportability-related CFRs. So I hope the picture is now getting clearer in all of our minds of how CFRs actually manifest themselves within these five themes. And to make it much clearer, we'll take a couple of examples and see how these testing techniques can be utilized to carve out a continuous testing strategy for a couple of CFRs, so that we put the strategy to use.
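The architecture fitness idea above can be sketched as a toy check in the spirit of ArchUnit or JDepend: model the module dependency graph and fail the build when a cycle appears. The module names and the graph are hypothetical; in a JVM codebase ArchUnit would derive this graph from the compiled classes for you.

```python
def find_cycle(dependencies):
    """DFS over a module dependency graph; True if any cycle exists."""
    visiting, visited = set(), set()

    def visit(node):
        if node in visiting:
            return True          # back-edge found, so there is a cycle
        if node in visited:
            return False
        visiting.add(node)
        for dep in dependencies.get(node, []):
            if visit(dep):
                return True
        visiting.remove(node)
        visited.add(node)
        return False

    return any(visit(n) for n in dependencies)

# Clean layering for the e-commerce example: ui -> services -> db.
assert not find_cycle({"ui": ["services"], "services": ["db"], "db": []})
# If "db" ever calls back into "ui", the layering is broken and this
# fitness test fails the CI pipeline.
assert find_cycle({"ui": ["services"], "services": ["db"], "db": ["ui"]})
```

Running this as a unit test on every commit is what keeps the trade-off honest: anyone who skips a layer for performance gets immediate, automated feedback about the maintainability cost.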
So let's take security. Security might seem like a really vast area, and definitely there is a lot of learning that needs to happen. But going with our strategy of using the FURPS model and the testing techniques that cater to the FURPS themes, let's look at how we can visualize security. How can security-related requirements actually manifest themselves? Security-related requirements can manifest themselves as functional user flows. For example, users should be able to go through multi-factor authentication: they have to enter their login credentials, they have to enter their OTP, and then go through multiple procedures during payment. All of those are functional user-flow-related requirements that cater to the security CFR. Once again, when it comes to security, one way to look at it is to build a defense mechanism as part of your functionality and core services. The other way to look at security is to react to potential security threats; how do we react to them? That is making the application fault tolerant, so those are reliability-related requirements that cater to security. And supportability, once again: the code should not have known vulnerabilities. At the code level, how can we make sure not to bring in known dependencies that have vulnerabilities, and write code that does not open up any security breaches? So we can see that security requirements fall under these themes, and if we borrow the testing techniques from each of these themes, the continuous testing strategy for security will look something like this. There are static code analyzer tools, like Snyk and OWASP dependency checkers, that will actually scan the code statically and give us feedback on known vulnerabilities. So we are utilizing the static code analyzer technique from supportability to get feedback as early as development.
We could also use pre-commit tools like Talisman, once again another static analysis tool, which will actually prevent the team from checking in any known secrets. So that's another technique that we could utilize from the supportability category of testing techniques. These are tools that fall under security-related testing in the category of static code analyzers. Once again, we could automate functional tests, like we were saying, for login and all of those flows, using the functional test automation tools themselves. For the reliability factor, we were talking about infrastructure scanning; infrastructure testing related scanning can happen in the CI to give feedback on the reliability-related requirements. Dynamic testing is another testing technique that we could adopt. And as always, manual exploratory testing; as I was saying, manual exploratory testing is one of the key things that we could do across all the themes. And since security is one of those areas where specialization is needed, at the release stage we could even include a pen testing stage. But till then, you could still get the feedback; that's the main part of continuous testing, to get the feedback immediately as you deviate, using manual and automated testing methods, and be ready for release. So that's the part up to manual exploratory testing. And then, in production, this is another place where reliability testing techniques come into the picture. There are tools that will react to security events, like runtime application self-protection (RASP) tools such as Twistlock, which can be used for monitoring and reacting.
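To show what a pre-commit secret check does under the hood, here is a stripped-down sketch in the spirit of Talisman: refuse the commit when staged content matches known secret patterns. The patterns here are illustrative and far from exhaustive; real scanners ship large curated pattern sets plus entropy checks.

```python
import re

# Illustrative secret patterns, not a production-grade list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key id shape
    re.compile(r"(?i)(password|secret)\s*=\s*\S+"),   # hard-coded credentials
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_for_secrets(text: str):
    """Return the patterns matched in the given file content."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

# A pre-commit hook would run this over every staged file and abort
# the commit when any pattern matches.
assert scan_for_secrets("timeout = 30") == []
assert scan_for_secrets('db_password = "hunter2"') != []
```

The point is the feedback timing: the secret is caught before it ever reaches the repository, which is the earliest possible moment in the continuous testing flow.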
But for the continuous testing part, as we can see, as a software development team, as we are building the software, we can get security-related feedback as early as a single commit being pushed in. So this was applying the strategy for one particular CFR, continuous testing for security. We can move on to another CFR, accessibility, and see how we can apply the same strategy. We're doing this exercise mainly to see how we can break down the CFRs. CFRs need not stay vague and abstract; just trying to break them down into smaller themes and looking at them through the FURPS model will help us build a continuous testing strategy for that particular CFR. That's the idea of doing another exercise here. So, accessibility. Once again, if we break down the requirements of accessibility, what could the requirements be? One is the functional user flows. For example, if your application has a video, then one of the functional requirements, the user flow, should be that there is a transcript right beneath it. That's an accessibility requirement. So making sure that the user flow includes a video and also a transcript beneath it is one of the functional user flows. When it comes to usability, one part is user experience testing, one of the testing techniques we spoke about, right? The design itself has to be validated to make sure that it takes the accessibility component into account. The other part is that accessibility is itself one of the usability testing techniques: being able to work with assistive devices and ensuring that the application is compatible is one of the things that falls under the usability category of requirements. And supportability, once again: the code should have the appropriate accessibility-related static requirements, like the tags should be there, the alt text should be there.
All of those things ensure the code can evolve in a way that caters to accessibility. So we can visualize accessibility requirements as manifesting across these buckets, and see how we can adopt some of the testing techniques we spoke about earlier to build a continuous testing strategy for accessibility. Like we were saying, in design and analysis, the design itself has to be validated to make sure the user experience accounts for the accessibility requirements, so continuous testing starts from the design phase. During development, we can take help from the static code analyzers that are available to ensure the code builds the right accessible DOM, the tags are present, and all of those things. There are also runtime checkers, like the tools mentioned here, which can be used for runtime evaluation and give us feedback. Once again, functional tests, be it a unit test, an integration test, or a UI end-to-end test, should automate the accessibility-related requirements so they give continuous feedback. And manual testing: as we keep emphasizing, manual exploratory testing is a technique we can apply to any of these areas, and for accessibility it could be done using some of these tools and also using assistive devices like screen readers or keyboard-only navigation. But the message is how we can decompose accessibility into multiple buckets and take the help of our testing techniques and tools to get continuous feedback right from design to the manual testing phase. And of course, during the release testing phase, we could also employ user testing with real users, and also accessibility certification.
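One way to fold an accessibility rule into the functional test suites mentioned above is to assert on the rendered markup itself. The sketch below checks a well-known structural rule (heading levels should not be skipped, e.g. an h3 directly after an h1); the helper names are ours, and a real suite would more likely delegate to a runtime checker such as the tools the speakers reference:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collects heading levels (h1..h6) in document order so a test can
    assert that no level is skipped, a common accessibility structure issue."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def skipped_heading_levels(html: str) -> list[tuple[int, int]]:
    """Return (previous, next) pairs where the heading level jumps by > 1."""
    collector = HeadingCollector()
    collector.feed(html)
    return [(a, b) for a, b in zip(collector.levels, collector.levels[1:])
            if b > a + 1]
```

A unit or end-to-end test would then simply assert `skipped_heading_levels(page_html) == []`, so the accessibility requirement fails the pipeline the moment a commit breaks it.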
So that is, on the whole, how we can achieve continuous testing for a CFR like accessibility. Here we have talked about a couple of CFRs, and I just wanted to mention that if you're looking for actual implementation details and knowledge of existing tools, the book has more of it, so if people are interested, do take a look. I'm also happy to give away three ebooks to the participants here. Please leave your email addresses in the chat and I'll send across three ebooks. Apart from that, the key takeaways we want to leave you with: CFRs define the execution and evolutionary qualities of an application. CFRs are as essential as functional requirements for the application's success; functional and cross-functional requirements together make the application a high-quality product. If, like we saw in the example at the beginning, the business tends to prioritize the functional requirements, it is we who should encourage them to look at the benefits of incorporating cross-functional requirements early, teach them about the quality aspect, and advocate for it. CFRs can be visualized along the FURPS themes: they need not stay vague; they can be broken down into simpler themes along the lines of the FURPS model. And there are testing techniques and tools that can be used to conduct continuous testing for each FURPS theme, so we are not left alone; we do have support with respect to testing techniques and tools for CFRs. Although automation and continuous testing have mostly been prevalent for functional requirements, the testing techniques and tools for cross-functional requirements have been evolving in parallel, and we should make use of them to gain the benefit of continuous delivery. And actually, for time's sake, we are at 4:45. Okay, that's about it.
I think we were just going to say: therefore, continuous testing of CFRs is very much feasible and should be included in your testing strategy. I'm not sure if we have time, but both of us will be there in the hangouts. And for folks who are generally interested in checking out the book, here is a 30-day free trial that's open for you as well, if you're interested to learn more in this area and get practical implementation knowledge. Well, we do not see any Q&A there. Okay, fine, we will then wrap the session. Thanks for your insights, Gayathri and Pallavi. Thank you, folks. Thanks for participating actively. See you.