Welcome to our session. I will just give you a brief introduction to Horacio. He has been working with us at USP since 2003, and he is the Director of the General Chapters Department, Science Division. He provides scientific leadership to a team of scientific liaisons responsible for the activities of six different expert committees that cover the majority of USP general chapters. Horacio earned his PhD in pharmaceutical chemistry from the University of Buenos Aires. He has authored many publications and peer-reviewed articles and is a frequent speaker and instructor on topics related to chromatography and validation. Prior to joining USP, he worked in the pharmaceutical industry in QA and QC. Horacio held the position of assistant professor of quality control in the Faculty of Pharmacy at the University of Buenos Aires, and was executive secretary of the Argentine Pharmacopeia from 1997 to 2001. He is a quality engineer certified by the American Society for Quality. So welcome, Horacio. To our participants: if you have questions, we do have the Q&A box available for you; it should be popping up in your lower right corner. So please type in your questions, if you have any, while Horacio is presenting. And we ask that you please type out your questions fully and legibly, so we really understand what the question is that you are asking, because we won't have an opportunity to ask you back for clarification. Our webinar will be one hour. We will have a Q&A session towards the end, around 15 minutes before we finish. There will also be an exit survey for you, so we ask you to please give us your feedback on your experience today with USP; it really helps us improve. All right, with that, thank you so much. Horacio, the floor is yours. Thank you. Good morning, good afternoon, good evening. Thank you for joining us today.
As the screen shows, my presentation today is about a new general chapter we are planning to incorporate into the USP: chapter 1220. My intention here is to explain what is inside that chapter, including the analytical procedure lifecycle approach it pursues, and also to explain what the status of this chapter will be in the context of USP, especially considering that we already have a series of chapters that talk about validation, verification, and transfer: 1224, 1225, and 1226. If you look at the definition of validation, I picked the one in USP chapter 1225, but the other definitions are very similar. It says that validation of an analytical procedure is the process by which it is established, by laboratory studies, that the performance characteristics of the procedure meet the requirements for the intended analytical application. So the concept is that, through the validation experiments, you will determine whether the method has the appropriate variability to be used for one particular purpose. And we know that that is not always the case in the pharmaceutical industry. In many instances the validation exercise is done in a checkbox fashion, and there is no clear connection between the variability of the method and the requirements of the particular analytical application. 1220 has the intention to solve that, or at least to provide a path that allows us to develop methods that will meet certain predetermined requirements through the lifecycle of the method: from the moment the method is implemented in the lab until the method is retired. The introduction to 1220 says that this chapter presents an alternative framework for analytical procedures that holistically incorporates all the events that take place over the procedure lifecycle and that are designed to demonstrate that the procedure is, and remains, fit for the intended purpose. And I want to make clear that the intention here is not to replace 1224, 1225, and 1226 with 1220.
As the introduction says, 1220 presents an alternative framework. So our expectation is that some people may use this enhanced approach for development and validation of their methods, while others may continue with the traditional one. I think there will be a transition, where people use one or the other, and the expectation is that in the future 1220 will be the norm. You are probably familiar with this table. This is the table in the chapter on validation in USP, and a very similar table is in the ICH Q2 guidance. As you can see, the table identifies the experiments you need to perform to determine the appropriate performance characteristics, depending on the intended purpose of the method: category one, the main component within the matrix; category two, a minor component in the matrix; category three, a performance test; and category four, an identification test. But, and I will say this again, with this series of experiments you cannot determine whether the variability of the method is suitable for the intended purpose. As you know, we always say that you can validate any method; a very crappy method or a very good method, all of them can be validated. So again, we are trying to close that gap: determining whether, for that particular application, the variability of the method is suitable. FDA identified this problem in a 2004 document, where FDA talks about the unknown variability of methods. The document says that variability and/or uncertainty in a measurement system can pose a significant challenge when out-of-specification results are observed. So FDA understood that when you have an out-of-specification result, it is difficult to identify whether it is because of the variability of the method or because the material we are testing is outside the specifications.
And this is probably because the total variability of the method has not been studied before; measurement-system variability can be a significant part of the total variability. They were seeing similar and repeated out-of-specification observations for different products across the industry, and a less-than-optimal understanding of variability, at least at that time. In that context it is difficult to promote continuous improvement. So again, the industry is in many cases using ICH Q2 or 1225 to validate methods, in many cases trying to fulfill a regulatory requirement as a checkbox exercise, as I explained before. But the connection between the reportable result and the predefined requirements for the method is not clearly identified; there is no such link between these two things. This is basically one of the concepts we identified when we proposed the modification of the current approach to validation of analytical methods. The work started in 2013, when USP created an expert panel with the intention of developing an approach for the validation of analytical procedures with a look into the future. Our question to the panel was: how do you think validation will be done in the industry 20 years from now? The panel very quickly identified the approach that is more or less described in ICH Q8 and now Q12 as an approach that can be applied to analytical procedures. Obviously, this was not an invention of the expert panel; it is not something they decided on their own. There were many groups at the time working on this concept, so they basically embraced it. Following that, there was a series of four papers that the expert panel published. The first one was basically a philosophical definition of the lifecycle management of analytical procedures; this is where the basis of the concept was explained and developed.
Following that, there were three papers that expand on very specific topics within the analytical procedure lifecycle, especially those that are somewhat new for the industry, in order to better understand and apply the lifecycle concept. As you can see, there was a paper on fitness-for-use decision rules and target measurement uncertainty, published in Pharmacopeial Forum 42(2); there is one on the analytical target profile, its structure and application through the lifecycle; and a paper on the analytical control strategy. All these elements are part of the lifecycle approach for the method. What you see on the other side of the slide is basically the scheme of the process. As you can see, everything starts with the quality target product profile, which is defined in ICH Q8. Knowing what we want to measure, we can evaluate the amount of error we can have in our method in order to avoid making the wrong decision when we approve a product. We can express those requirements in the form of total error or uncertainty, whichever you prefer. The analytical target profile is simply a very simple declaration of what the intended purpose of the method is, and what the maximum error is that I can accept in this measurement, according to the criticality of the quality attribute we are measuring. The process of the lifecycle is divided into three stages: procedure design, procedure performance qualification, and ongoing procedure performance verification. You can see that the word validation is not here, and this is important to mention: in this concept, validation takes a more important role, overarching the whole lifecycle of the method. So I would say: you will develop the method, you will validate the method, and you have to demonstrate that this state of validation is maintained through the lifecycle.
The actual validation of the method is stage two, procedure performance qualification, which, as we will see later, may be a little different from the classical Q2 validation. There is another important component here, which is knowledge management. There is a recommendation for a very strong process of gathering information about the method, which we will use during procedure design. Also, as the method advances through the lifecycle, we will be able to produce more data that support the validation, and that knowledge will also be incorporated into the body of knowledge we have for that particular method. So again, stage one is essentially method development: it is when we design the method and establish the necessary parameters, and it is very important here that we identify, or better understand, the effect of the procedure parameters on the performance of the method. We will do a risk assessment, and, through something like a robustness study, we will identify a space where the method can be used. In stage two we basically confirm that the reportable value generated by the procedure meets the ATP criteria. Once it is confirmed that the method meets the requirements of the target profile, the method can be implemented, and after that the purpose is to collect data regarding the performance of the method in routine use, to identify in advance any problem the method may have. And if a problem happens, you have to make the decision to go back to stage two or to stage one to fix it, depending on the criticality of the problem. So, I mentioned a couple of elements that are inherent to the lifecycle approach. One of those is the analytical target profile, or ATP.
We say that the ATP is a prospective description of the desired performance of an analytical procedure that is used to measure a quality attribute. So, technically speaking, we will have an ATP for one particular procedure, for one particular attribute we want to measure. This ATP, or this description, is established before the method is developed, because the definition of what the appropriate quality of the reportable value is has to be done before selecting the method; based on that, we can choose the proper technology for the measurement. The chapter says that the ATP is independent of the measurement technology. However, recently we hear more and more that maybe this is not the case all the time; there are cases where the ATP needs to be associated with one particular technology. So many of these concepts are evolving as we speak, and we can expect that 1220 may be revised in the future to incorporate new ideas or new concepts, or to modify some of the elements described there. What should an ATP include, as a minimum? It is important to define the analyte, so, what we are trying to measure; a brief description of the analytical matrix, so, we will measure A in the presence of B, C, and D; the range over which we will use the method, remembering that this is based on the specification of the quality attribute we want to measure; and finally, the precision and accuracy acceptable for the reportable value. This is very, very important, and as you can see through this presentation, the core of validation is now precision and accuracy. All the other performance characteristics are, I would say, secondary. The main aspect is that, if you meet the precision and the accuracy established for the method within the range,
technically speaking, all the other performance characteristics will be in the range we specify. The chapter presents a couple of examples of ATPs. Again, this is based on current knowledge; the expectation is that this will evolve. In fact, in the process of elaborating this document, we searched the literature for how different authors are using the term analytical target profile, and we found, at that time, two or three years ago, almost 30 different ways in which the ATP is presented. Anyway, the intention here is to be very flexible in terms of what you can include in the ATP. However, you have to keep in mind that the elements we mentioned before need to be there. In these two examples, the ATP is presented with different levels of complexity, if you want. In the first example, as you can see, we identify the analyte, the range, and the analytical matrix, and then we identify the maximum accuracy and the maximum precision. This is based on a classical validation model, where accuracy and precision are estimated as two separate performance characteristics. We are now advocating for estimating the total analytical error of the method as the combination of the accuracy and the precision, and this is what example two is doing when it says that the distribution of the total analytical error of the reportable value falls within the total allowable analytical error range of plus or minus some value. So again, in this case the ATP defines the maximum variability for the method, and this is what you try to achieve during method development and confirm during stage two, the qualification, and the continuous verification.
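The "example two" style of ATP criterion, combining accuracy and precision into one total analytical error, can be sketched numerically. This is a minimal Python illustration using a simple additive estimate (|bias| + k·SD); the helper names, the coverage factor, and the 3.0% limit are hypothetical, not taken from chapter 1220.

```python
# Hypothetical sketch of an "example two" style ATP check: the total
# analytical error (TAE) of the reportable value must fall within a
# +/- limit. All numbers and names are illustrative.

def total_analytical_error(bias_pct, sd_pct, k=2.0):
    """Simple additive TAE estimate: |bias| + k * SD (k ~ 2 for ~95% coverage)."""
    return abs(bias_pct) + k * sd_pct

def meets_atp(bias_pct, sd_pct, tae_limit_pct, k=2.0):
    """True if the estimated TAE stays within the ATP limit."""
    return total_analytical_error(bias_pct, sd_pct, k) <= tae_limit_pct

# A method with 0.5% bias and 0.8% SD against a hypothetical 3.0% TAE limit:
print(meets_atp(0.5, 0.8, 3.0))  # 0.5 + 2*0.8 = 2.1 <= 3.0, so True
```

More elaborate TAE estimates exist (chapter 1210 describes one); the additive form above is just the simplest way to see how bias and precision share one error budget.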
As I mentioned before, precision and accuracy take a very important role in the process of defining the total analytical error. We use the term bias in the chapter many times. Maybe this will create confusion in the future, because you know that the accuracy definition in ISO is different from the accuracy definition in ICH, and all our documents are based on ICH. But we thought that at some point it is important to try to align to a better terminology. So bias is the way ISO defines what we define as accuracy in ICH. So again, precision and bias, or accuracy, will give me the total variability of the method, and we have to identify whether that variability is appropriate for what we are trying to measure. For example, if you have a specification based on a typical 90%-110% range, you don't want a method where, when you add the bias and the precision, most of that range is covered by the distribution of the test results. As a rule of thumb, you want the total variability of the method to cover no more than one third of the total budget of the acceptance criteria. So how will we identify what the appropriate variability is? Well, we have to consider different factors. We have to take into account how critical the quality attribute we are planning to measure is, and what the risk of making the wrong decision is: approving a product that is not good, or rejecting a product that is good. The width of the specification, the acceptance range for the quality attribute measured by the procedure, is the other aspect you have to consider. Suppose we are talking about an API where the acceptance criteria are 98.0%-102.0%; the method may be different from one we want to use to release a product with a 90.0%-110.0% acceptance criterion.
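The one-third rule of thumb just mentioned is easy to turn into a quick check. Here is a sketch, with hypothetical helper names, assuming a symmetric acceptance range:

```python
# Illustrative check of the "one third" rule of thumb: the method's total
# analytical error should consume no more than one third of the
# acceptance-criteria budget. Names and values are hypothetical.

def spec_budget(lower_pct, upper_pct):
    """Half-width of a symmetric acceptance range, e.g. 90-110% -> 10."""
    return (upper_pct - lower_pct) / 2.0

def within_one_third(tae_pct, lower_pct, upper_pct):
    """True if the total analytical error fits in a third of the budget."""
    return tae_pct <= spec_budget(lower_pct, upper_pct) / 3.0

# For a 90.0-110.0% assay specification the budget is 10%, so the total
# analytical error should not exceed about 3.3%:
print(within_one_third(2.1, 90.0, 110.0))  # True
print(within_one_third(4.0, 90.0, 110.0))  # False
```

Note how the same 4.0% error that fails a 90-110% specification would fail a tighter 98-102% one even more clearly, which is the speaker's point about specification width driving the ATP.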
And finally, the clinical safety and efficacy impact that the analytical error may have, especially in those products that have a very narrow clinical window between toxicity and efficacy. This is where this approach makes much more sense, in order to better identify the accuracy and the precision as a total analytical error. This slide tries to explain a little bit the different situations you may encounter. Suppose that this is the upper limit of a specification, and you see four reportable values and the distribution of the error around each of those reportable values. As you can see, in scenarios one and four the decision is very clear: in one case you have the complete distribution of error outside the limit, and in the other case, scenario four, you have the whole distribution of the error inside the specification. The problem is when you are in scenario two or three, where a certain amount of the results will be inside but the majority outside, or the opposite: only the tail will be outside, and the majority of the distribution will be inside. So what we are trying to identify is how much of this type of error we can accept when making a decision about a result. The decision rules that we typically use for established specifications take this into account, even though it is not specifically identified as a value; we can assume that the analytical error will be at least incorporated if we use the appropriate decision rule. For example, if you look at the first one, this is the typical case where the acceptance zone is narrower than the safe and efficacious range, so you have a guard band that allows you to be safe in the decision about approving or not. If you are a little bit outside the acceptance zone at either of the limits, the guard band will give you the assurance that the result is still within the safe and efficacious range.
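The four scenarios can be illustrated by asking what fraction of a reportable value's error distribution lies above the upper specification limit. This is a sketch under a normal-distribution assumption; the function name and all numbers are illustrative, not from the chapter.

```python
# Sketch of the four scenarios around an upper specification limit (USL):
# for a reportable value with a given mean and standard deviation, estimate
# the fraction of its error distribution falling above the USL.
import math

def frac_above(mean, sd, usl):
    """P(result > USL) assuming a normally distributed reportable value."""
    z = (usl - mean) / sd
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

usl = 110.0
for label, mean in [("scenario 1", 113.0),   # whole distribution above the USL
                    ("scenario 2", 110.5),   # majority above, some inside
                    ("scenario 3", 109.5),   # only the tail above
                    ("scenario 4", 107.0)]:  # whole distribution below the USL
    print(label, round(frac_above(mean, 0.8, usl), 3))
```

Scenarios one and four come out near 1.0 and 0.0 (clear decisions), while two and three land in between, which is exactly where a decision rule has to say how much tail probability is tolerable.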
There are other cases, like the second figure, that you can use when you need to reduce the risk further. You establish a strict acceptance zone, as we said before, narrower than the safe and efficacious range, and you add on each side an indecision zone: a zone, inside or outside, that will not give you the appropriate assurance that you are within the range you need. If you get a result within the indecision zone, you will need to take another sample and do additional testing. This will be familiar to you, because it is more or less the scheme used in tests like dissolution or content uniformity. So, going back to the different stages, and trying to add a little more information on what happens in each case: stage one is procedure design. We will collect knowledge, we will do experiments, we will do risk assessments, in order to identify the appropriate method. Remember that at this point the ATP is already defined, so we will develop the method such that we are sure the reportable result will meet the criteria established in the ATP: minimize the bias and provide adequate precision. Again, this is not about minimizing the bias and maximizing the precision at any cost; the precision may have an appropriate value depending on the ATP requirements. Also, we need to understand the effect of the procedure parameters on the performance of the method. Today we use robustness studies to do this; however, this approach advocates a more in-depth investigation of the effect of the procedure parameters, using design of experiments and defining what is called the method operable design region, or MODR, which is a space within which the method was designed to be sure that we are meeting the requirements of the ATP. Based on that, we have to optimize the performance characteristics of the analytical procedure in terms of accuracy and precision, and we will have to describe the calibration model.
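The guard-band-plus-indecision-zone rule of the second figure can be sketched as a small decision function. Everything here (the function name, the 2% guard band, the sample values) is a hypothetical illustration, not text from the chapter.

```python
# Hypothetical decision rule in the spirit of the second figure: accept
# inside a strict acceptance zone, reject clearly outside the
# safe/efficacious range, and retest in the indecision zone between them.

def decide(result, lower, upper, guard):
    """Return 'accept', 'retest', or 'reject' for a reportable value."""
    if lower + guard <= result <= upper - guard:
        return "accept"   # strict acceptance zone
    if result < lower or result > upper:
        return "reject"   # outside the safe/efficacious range
    return "retest"       # indecision zone: take another sample and test again

# Safe/efficacious range 90-110 with a hypothetical 2% guard band:
print(decide(100.0, 90.0, 110.0, 2.0))  # accept
print(decide(109.0, 90.0, 110.0, 2.0))  # retest
print(decide(111.0, 90.0, 110.0, 2.0))  # reject
```

The "retest" branch mirrors the staged testing logic the speaker compares to dissolution and content uniformity, where a borderline first result triggers additional samples rather than an immediate decision.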
We have to determine the selectivity and the sensitivity of the method, et cetera. The other two components added here are the definition of a preliminary replication strategy (the understanding being that, in order to have a reportable value that can be compared with the acceptance criteria, we need a certain number of results to be sure that the precision is adequate) and the definition of an analytical control strategy. As we mentioned before, in stage three you will start monitoring the method during the rest of the lifecycle, and in monitoring the method you have to choose the appropriate parameters and activities you need to demonstrate that the method is still meeting the requirements in the ATP. System suitability is part of it, but we will see that there are other quality tools that can be used to demonstrate the continued performance of the method. To manage the risk, because risk assessment is also part of stage one, we can use many tools: the instruments that are used to support quality management, like any quality tool. In the chapter we describe the use of two, the Ishikawa diagram and the heat map, which I will show on the next slide. The idea here is that, based on the results we obtained during the robustness study or the MODR work, we can identify the critical parameters for the performance of the method, and for that identification we will use the Ishikawa diagram. Then we have to determine the criticality of each parameter we identified, and for that we can use a heat map, like in this case. So, having identified the parameters, or parts of the method, that can be critical for the result, you can classify the level of impact into strong, medium, and minor, and the impact is, in this case, determined based on accuracy and precision. During stage two, again, this is where we confirm that the method was developed to meet the ATP requirements.
And we will also confirm the initial analytical control strategy that was identified as preliminary in stage one. One important thing: again, the qualification will be based on accuracy and precision. However, if there is a regulatory requirement to complete all the other analytical performance characteristics of the method, you can use data generated in stage one if needed. As you can see, the collection of information we have from stage one, pre-stage one, and the ATP can be presented to the regulatory authorities for them to better understand the strengths and weaknesses of the method. At the end of stage two, the replication strategy is confirmed, and it is confirmed also that the performance of the procedure meets the ATP and other requirements. And finally we enter stage three. Stage three, again, is the monitoring of the method during the lifecycle. The idea here is to have an early indication of potential performance problems or adverse trends, and to identify the changes required in the analytical procedure or in the performance of the method. Confirmation of procedure performance after changes, change management, will also be supported by the data from the continuous verification. Remember that when you detect an out-of-trend result, you may need to modify the method; depending on the level of the change, you may need to go back to stage two or stage one, and this has to be determined based on risk management. One of the tools, again quality tools, that you can use here is control charts. This example, which is in the chapter, is a range control chart, and you can identify data that are, in this case, out of specification. But in other cases you can see clearly that there are six or more results in the same trend, and that may be an indication that there is a need for revision of the method.
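The stage-three monitoring idea above, flagging six or more consecutive results moving in the same direction, is one of the classic control-chart run rules and can be sketched in a few lines. The function name and data are hypothetical, not from the chapter.

```python
# Minimal sketch of an out-of-trend run rule: signal when `run_length`
# consecutive results move in the same direction. Illustrative only.

def trending(values, run_length=6):
    """True if `values` contains `run_length` consecutive points that are
    strictly increasing (or strictly decreasing) in a row."""
    up = down = 1  # length of the current monotone run, counted in points
    for prev, cur in zip(values, values[1:]):
        if cur > prev:
            up, down = up + 1, 1
        elif cur < prev:
            up, down = 1, down + 1
        else:  # a tie breaks both runs
            up = down = 1
        if up >= run_length or down >= run_length:
            return True
    return False

# Six steadily rising assay results trigger the rule; alternating ones do not:
print(trending([99.1, 99.4, 99.6, 99.9, 100.2, 100.5]))  # True
print(trending([99.1, 100.2, 99.3, 100.1, 99.5, 100.0])) # False
```

In practice this run rule would sit alongside the out-of-specification limits on the control chart, catching drift before any single result actually crosses a limit.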
Finally, I want to spend a couple of minutes on the ICH activities, because in November 2018 a plan for the revision of ICH Q2 was approved. ICH Q2 was developed almost 30 years ago, and the technology has changed and the need for better data has changed, etc., so it is clear that the revision of ICH Q2 is important. Again, in November 2018 the ICH Assembly approved the work plan for ICH Q2. In the first meeting it became clear that, instead of incorporating all these elements into one guidance, there was an opportunity to develop a new guidance. So the plan now is to revise ICH Q2 and to create ICH Q14, which is called Analytical Procedure Development. Regarding ICH Q2, the concept paper indicates that the current ICH Q2 is very chromatography-oriented, and there is a need to incorporate the use of spectroscopic or spectrometric methods, and multivariate methods, which have to be evaluated using different models and different approaches, so the typical ICH Q2 parameters may not be appropriate. It also indicates that, because there is no reference standard used during the analysis for multivariate methods, it is important that the robustness of the method be well identified and the variability of the method be properly addressed. In terms of what is in the guideline right now: the guideline is still not in the public domain. Hopefully it will be available between July and the end of this year, but the major changes are, obviously, the incorporation of multivariate calibration methods, and that accuracy and precision are included in, I would say, a higher category. It is as if there were a higher category of performance characteristics, accuracy and precision, and then all the others. There are also linear and nonlinear calibration models; we are trying to avoid the word linearity, because in fact there is no such thing.
There is also the incorporation of the term selectivity, because with the parameter we today call specificity, in chemistry, you would demonstrate that the method is specific for one thing; however, what we actually do during validation is qualify the level of selectivity the method has. You don't always need a very specific method. Q14, again, is intended to be used during method development and incorporates some of the components we described today. Importantly, it describes two possible approaches for method development: the minimal, or classical, approach, and the enhanced approach, which incorporates the elements we described in this presentation, such as the analytical target profile, knowledge management, and risk management. Obviously, the description of robustness will now be in Q14 instead of in Q2. And two important things: how to develop multivariate analytical procedures, and real-time release testing. So again, these two guidelines will probably be available before the end of the year. The group will meet again the first week of June to do the final adjustments before publication. With that, I think this is my last slide. Thank you very much for your attention, and I think we can open the floor for questions. Thank you very much, Horacio. There are indeed a few questions. First, we have the normal ones about whether we can provide the slides for this presentation and whether we can share the recording. We will share the slides; I will send an email to all participants today or tomorrow with a PDF of the slides. We will also share the recording on our YouTube EMEA webinar channel and let you know the link once it is done; that will take a little while, however, and may be done in the next week. We have, of course, also a few technical questions here.
The first question I would like to ask you, Horacio: there is some difficulty with the term multivariate analysis. Can you explain what you mean by multivariate analysis, please? So, in the typical analysis, chromatographic, you use two dimensions. When you go into the area of spectroscopic techniques especially, like near-infrared for example, you have to develop a model that takes into account more than two variables of the method, and that is the way you will identify a result. It is not the classical chromatographic situation; it is a multi-dimensional space in which the result is determined. Basically that is it; it is very specific to certain spectroscopic tests, but it is being used more and more. Thank you, Horacio. There is another question: how can we define the true value in a drug product, since there can be a variation of 1% in values? I'm not sure that is easy to understand, but that is how the question was put here. I don't completely understand it either, but I will try to respond. During the manufacturing process there is obviously variability; not all the tablets are equal. Probably during the validation of the manufacturing process you will identify that variability. Now, during the validation of the method, remember that what we are trying to do is separate out all the variability associated with the manufacturing and focus only on the variability of the method. Sometimes it can be done, sometimes it is more difficult, but that is the idea: independently of the variability during manufacturing, which is captured during process validation, you have to assess the variability of the method separately. And obviously, when you define the ATP, you have to take into account the process variability, the variability of the manufacturing process. Thank you.
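The separation of process and method variability described in this answer is often framed as variance components: independent sources of variation add in quadrature. A hypothetical sketch, assuming independent sources; the function name and numbers are illustrative.

```python
# Illustrative variance-components view: the observed variability of
# results combines manufacturing (process) variability and method
# variability, and the ATP budget must account for both.
import math

def observed_sd(process_sd, method_sd):
    """Standard deviations of independent sources add in quadrature."""
    return math.sqrt(process_sd ** 2 + method_sd ** 2)

# A 1.0% process SD with a 0.75% method SD yields a 1.25% observed SD:
print(round(observed_sd(1.0, 0.75), 2))  # 1.25
```

This is why the method's variability has to be assessed separately: only then can you tell how much of the observed spread is the product and how much is the measurement.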
We also have a few questions going in the direction of whether this chapter will also apply to cell therapy products and to other biological products; we have three questions here going in that direction. So, is it more for small molecules, or is it meant to be used overall? I think the initial concepts were developed with small molecules in mind. Now, this question is common; it has been asked all the time: what about other methods? We indicate in the introduction of the chapter that this approach can be used for any analytical procedure, but the effort you have to put into method development will depend on what type of method it is. For example, for a method that is not quantitative, an identity test for example, there is probably not too much interest in expanding on the variability of the method; it would be more appropriate to use this for a quantitative test, although some elements can be used. Biological products can also be analyzed, and the methods can be developed using this approach; however, you have to accept certain flexibility, because this is not chromatography, it is not a typical instrumental test. So the idea of 1220, rather than giving you specific directions, is to provide the concepts, and you can choose how to apply the concepts in the appropriate way depending on the method. Unfortunately, I am not familiar with cell therapy products, but I am pretty sure the concepts can be applied in some way; obviously, the variability of those methods is much higher than for a normal small-molecule method. Thank you. One more question we have here is about sample performance, and whether you can explain the same. It goes in this direction: to say a method is performing well in the lifecycle, a control sample can be tested, but how do we identify that such a control sample is consistent and can be used as such?
Do you have some advice for that? Obviously you will not keep one sample forever: you will use one control sample today and another control sample in the future. But for each control sample you will apply the replication strategy, and that gives you better assurance about the method: if everything is in order, the variability of the method will remain stable. Your replication strategy will give you a result with a plus/minus total analytical error, and that is the value you will compare with the future results. So keep in mind that the ongoing performance verification will sometimes imply more than one injection, in the case of chromatography.

Thank you, Horacio. We have two more questions; maybe we can get through them before we end the webinar. One question that just came in: how can we determine the accuracy and precision for the ATP before we develop the analytical method?

Okay, good question. As for the statistical tools you can use for that, anything that is appropriate can be used; there are no restrictions there. But again, the idea is that you have your acceptance criteria for the critical quality attributes, and you know how critical they are; for example, you cannot allow more than, say, 5% of results outside the specification when the true value is close to the specification limit. Based on that, you can identify the appropriate accuracy and precision. Again, I will mention a rule of thumb: make sure that the total analytical error consumes only one third of the specification; that will give you some assurance that the method is appropriate. But any other suitable approach is acceptable.
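The "one third of the specification" rule of thumb mentioned above can be sketched numerically. The bias, standard deviation, and specification limits below are invented, and the simple formula TAE = |bias| + 2·SD is only one common point-estimate formulation (here interpreted against the distance from target to the nearer limit); USP chapter 1210 describes more rigorous interval-based approaches.

```python
# Illustrative check of the rule of thumb: the total analytical error
# should consume no more than one third of the specification window,
# leaving the rest for process (manufacturing) variability.
bias_pct = 0.5        # accuracy estimate: mean recovery error, in %
sd_pct = 0.5          # precision estimate: intermediate precision SD, in %
spec_low, spec_high = 95.0, 105.0   # assay specification, % of label claim

# Simple total-analytical-error point estimate (|bias| + 2*SD).
total_analytical_error = abs(bias_pct) + 2 * sd_pct   # 1.5%

# Distance from the 100% target to the nearer specification limit.
spec_half_width = (spec_high - spec_low) / 2          # 5.0%

fits_purpose = total_analytical_error <= spec_half_width / 3
print(total_analytical_error, fits_purpose)
```

With these invented numbers the method consumes 1.5% of a 5% half-window, under the one-third threshold, so it would be judged fit for this hypothetical ATP.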
And the last question here in our Q&A box (there were also questions coming through the chat box, but at the moment we are looking only at the Q&A box): does the analytical lifecycle approach, and also ICH Q14, apply to techniques employed for identification, such as spectroscopic identification? Loss on drying is also mentioned here, and sulfated ash and residue on ignition, which is the same test.

Again, in order to apply this approach you need to consider how critical the test is that you are doing. For something like loss on drying, or another type of identification test that may not be that critical, there is probably not too much value in applying all of these concepts. As I explained earlier about identification: even though the chapter says that any kind of method can use the enhanced approach, in whole or in part, it is important to indicate that it really has value for quantitative tests, even if it can be used for identification.

I am looking at the chat now; there are a few more questions. For example: what can we do to reduce the analytical bias of a method? I think that is probably a little too broad to answer right now, and the same goes for the question about what we need to take into consideration to calculate the total analytical error. I am not sure we can answer questions like these today. Would you like to try, or shall we move on?

Well, the total analytical error is a unified parameter that puts together accuracy and precision. One thing to mention is USP chapter 1210, Statistical Tools for Procedure Validation, which describes an example of how to determine the total analytical error from the estimates of accuracy and precision. So you have an example in chapter 1210.
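As a rough illustration of combining the two estimates just mentioned, the sketch below derives a bias (accuracy) and a standard deviation (precision) from a set of recovery results and folds them into a single total-analytical-error figure. The recovery data are invented, and |bias| + 2·SD is only the simplest point-estimate combination, not the formal method of chapter 1210.

```python
import statistics

# Hypothetical recovery results (% of the known true value) from a
# validation study; the numbers are invented for illustration.
recoveries = [99.2, 100.4, 99.8, 100.9, 99.5,
              100.2, 99.9, 100.6, 99.4, 100.1]

bias = statistics.mean(recoveries) - 100.0   # accuracy estimate
sd = statistics.stdev(recoveries)            # precision estimate (sample SD)

# Unified parameter: |bias| + 2*SD covers roughly 95% of individual
# results if the errors are approximately normally distributed.
tae = abs(bias) + 2 * sd
print(f"bias = {bias:.2f}%, SD = {sd:.2f}%, TAE ~ {tae:.2f}%")
```

A result reported from this hypothetical method would then carry a plus/minus of about the TAE, which is the kind of value compared against the ATP requirement.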
In 1210. Thank you, that is very good advice. Okay, the last questions; well, two questions. First, the sampling procedure: should that be included in the method validation protocol?

Good question. Yes: the analytical procedure starts with the sampling. It is very common to see the more extreme case where the operator prepares a stock solution and makes several dilutions of that solution to test accuracy or precision. That is not the appropriate way to do it. The appropriate way is to start from the beginning, from the weighing or from the sampling, in order to be sure that the whole method is working. So the answer is yes.

Thanks. And the last question we have here: if there is a trend in my analytical results over the years, or in the chromatographic parameters, how can I distinguish a trend that is based on the analytical method from a trend that could be coming from the manufacturing process?

I think the control sample would be the tool for that; that is the step we talked about, using the control sample. And again, one detail: in many cases people are not familiar with control charts. Control charts have specific rules about when to consider something a trend and what to do when you have, for example, six consecutive trending results, et cetera. So I would recommend studying the use of control charts a little more, because there are many types of quality tools.

Thank you. Thank you very much, Horacio. I think we are now at the end of the question-and-answer session, and I would like to hand back to Jan for his final words.

May I make one announcement? The chapter is now being considered for ballot. The ballot will be in June of this year, and the chapter will then be available in the USP, I think in November of this year.
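Returning briefly to the control-chart run rules raised in the trending question above: one widely used rule flags a trend when six consecutive results steadily increase or decrease. The sketch below implements just that single rule on invented control-sample data; real programs apply a fuller rule set (for example the Western Electric or Nelson rules), so this is only a starting point.

```python
def has_trend(results, run_length=6):
    """Return True if `run_length` consecutive points all move in the
    same direction (strictly increasing or strictly decreasing)."""
    for start in range(len(results) - run_length + 1):
        window = results[start:start + run_length]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False

# Invented control-sample assay results (% of label claim):
stable = [99.8, 100.1, 99.9, 100.2, 99.7, 100.0, 100.1, 99.9]
drifting = [99.8, 99.9, 100.1, 100.4, 100.6, 100.9, 101.3, 101.6]
print(has_trend(stable), has_trend(drifting))
```

A method-side drift flagged this way on the control sample points to the analytical procedure, since the control sample does not see manufacturing variability.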
And the other comment is that we are planning a workshop between the British Pharmacopoeia and the USP for the end of September. There is no announcement yet, but you will see the announcement.

That is useful information, absolutely. Jan, please.

Well, thank you so much, Horacio, and thank you so much, Christian, for this very interesting webinar. It will be recorded and made available to you, the participants, so please stay tuned for that as well. We have two more announcements for our next webinars coming up. On Tuesday, 1 June, we will have USP general chapter 1469, Nitrosamine Impurities, which will be followed on Tuesday, 15 June, by The Value of Pharmacopoeial Reference Standards, presented by Christian, who is here. Christian, would you like to give a one-liner about what you are going to be talking about?

We will mainly talk about some challenges that you need to keep in mind when you are working with secondary standards and setting up your in-house reference standards, in comparison to the pharmacopoeial materials or, overall, to the primary standard, and how you can take the least risk in doing that.

Excellent. All right, participants: if you still have any questions, you can still send them to us by email at email@usp.org. If you go to our homepage at www.usp.org, we have an events and training calendar under the tab on the upper right corner. If you click on that, you will see the other events, webinars, and educational courses that USP is offering to you. Thank you so much again, Horacio, and thank you so much, Christian, for putting this session together. We hope to see you soon. Please stay safe and take care of yourselves. Thank you. Thanks for participating.