The topic today is a very common quality assurance activity: the validation of analytical procedures, together with the associated activities of verification and transfer. These are represented in the USP in chapters 1225, 1224, and 1226. We will also talk about the new ideas for validation, which are represented in a newly proposed chapter, USP chapter 1220, now open for comments until the end of November. Let me start by saying a few words about the importance of validation. When you develop a product, the methods used to demonstrate any quality attribute of that product need to be validated, from development until the product is on the market. The validation process provides evidence that systems and methods are suitable for the intended use, and that phrase, suitable for the intended use, will be present in all the activities associated with validation. The Code of Federal Regulations in the US says that the accuracy, sensitivity, specificity, and reproducibility of the methods employed by the firm shall be established and documented. That is the other important part of validation: all the experiments you perform in order to demonstrate suitability for the intended use must be documented. And finally, there should be evidence that a method validated at one point in time maintains this state of validation through its life cycle, meaning from the inception of the method through its retirement. This is something that is not well represented in the current validation scheme, but it is an important piece of the idea of method lifecycle management. The definition of validation, as I was anticipating, is the one in Chapter 1225, Validation of Compendial Procedures.
It says that the validation of an analytical procedure is the process by which it is established, by laboratory studies, that the performance characteristics of the procedure meet the requirements for the intended analytical application. There are a couple of important elements in this definition. First, the demonstration is based on laboratory studies: you have to perform a certain number of experiments to demonstrate that the method is validated. More important, and underlined here, is the idea that you have to demonstrate through the validation results that the method meets the requirements for the intended analytical application. You have to conclude that, with the level of variability you determine in the method, the method can be used for the application you need. Currently in USP we have four chapters more or less related to the validation of analytical procedures. The first is Chapter 1225, Validation of Compendial Procedures, which was introduced in USP a long time ago. Its idea is to reproduce, more or less, the text of ICH Q2, so it is more focused on chromatographic procedures. However, you can adapt the validation protocol in 1225 to different types of methods; not all, but some of them. Then we introduced Chapter 1226, Verification of Compendial Procedures. This is guidance on how to demonstrate suitability at the time of implementation in the lab, and it applies specifically, at least from our perspective, to pharmacopoeial procedures. We then introduced Chapter 1224, Transfer of Analytical Procedures, which focuses on the transfer of methods between laboratories. We usually say that if the method is in the pharmacopoeia, the appropriate activity is verification rather than transfer; however, a mix of activities associated with verification and transfer can be done in any case. The latest member of the family is Chapter 1210, Statistical Tools for Procedure Validation.
This chapter's intention is to provide statistical guidance on how to demonstrate the more important performance characteristics of a method, namely accuracy, precision, and limit of quantitation. The performance characteristics are those defined in ICH Q2 and in 1225, and we will not go into all the details, but briefly: accuracy examines whether the reportable result deviates from the true result. Precision is the dispersion of the results around the mean, and it is defined at three levels: repeatability, intermediate precision, and reproducibility. The pharmacopoeias usually focus on the first two, repeatability and intermediate precision, because reproducibility data are not always available. However, because methods are published in a pre-official stage (Pharmacopeial Forum in the US, with similar forums for the European and Japanese pharmacopoeias), stakeholders have the opportunity to reproduce the method as it is proposed and to report back to the pharmacopoeias any problem they face when trying to reproduce it. That is an indirect way to obtain something close to reproducibility data. Then we have specificity, probably the most important, which is to demonstrate that the analyte is being quantified without interference. Limit of detection and limit of quantitation apply to trace components in a matrix: you have to demonstrate that you are able to detect and quantify the analyte at the appropriate level. Linearity is sometimes a contentious definition, because in ICH it is unclear, but we will define linearity here as the response function between the analyte and the instrument response. This is what you do when you plot concentration on one axis and the response of the detector, or whatever you are using to measure, on the other. That response is not always linear.
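The two precision levels discussed above, repeatability and intermediate precision, are commonly estimated from a multi-run design using one-way ANOVA variance components. The sketch below illustrates that calculation; the day-by-replicate assay data are purely illustrative, not from any real validation study.

```python
# Sketch: estimating repeatability and intermediate precision from a
# run x replicate design via one-way ANOVA variance components.
# The data below are illustrative, not from any real validation study.
from statistics import mean

# 3 runs (e.g. days), 4 replicates per run: assay results in % label claim
runs = [
    [99.8, 100.1, 99.9, 100.2],
    [100.5, 100.7, 100.4, 100.6],
    [99.5, 99.7, 99.6, 99.4],
]
k = len(runs)              # number of runs
n = len(runs[0])           # replicates per run
grand = mean(v for run in runs for v in run)

# Mean squares within runs (repeatability) and between runs
ms_within = sum(sum((v - mean(run)) ** 2 for v in run) for run in runs) / (k * (n - 1))
ms_between = n * sum((mean(run) - grand) ** 2 for run in runs) / (k - 1)

var_repeat = ms_within                                # sigma^2, repeatability
var_between = max(0.0, (ms_between - ms_within) / n)  # sigma^2, between-run
var_intermediate = var_repeat + var_between           # intermediate precision

print(f"repeatability SD:          {var_repeat ** 0.5:.3f}")
print(f"intermediate precision SD: {var_intermediate ** 0.5:.3f}")
```

The intermediate precision variance is, by construction, at least as large as the repeatability variance, which matches the nesting of the two precision levels described in the talk.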
Range is the lower and higher concentrations between which the method will be used. And finally robustness, another sometimes contentious parameter, because we can see robustness not as part of the validation protocol but as part of method development. In robustness studies you fine-tune the method that has been developed; this is where you identify the system suitability parameters, et cetera. There is a table in ICH Q2, and a very similar one in 1225, which I think is the basis of the current concept of validation, and probably of verification and transfer too. It defines what kinds of experiments you have to do depending on the type of method you are trying to validate, and the methods are divided into four categories. Category 1 is when you are measuring the main component in a matrix. Category 2, which can be a quantitative test or a limit test, is for impurities: a small component present at low concentration in the matrix. Category 3 is for performance testing; this one is unique to USP, it is not in ICH Q2. And finally category 4, which is for identification methods. So if you follow this table, you will define a nice protocol for validating your method. However, after doing all these experiments, it is still very difficult to demonstrate that the method is really suitable for the intended use, and that is the focus of the new ideas on validation, the topic I will talk about at the end of the presentation. USP also allows the use of alternative methods to the methods included in the pharmacopoeia. The requirement for using an alternative method, according to USP General Notices section 6.30, is that the method must be fully validated and must produce comparable results to the compendial method or procedure within allowable limits established on a case-by-case basis.
So in order to demonstrate, I would say, equivalency, although we try not to use the word equivalency anymore and talk instead about comparable results, you use a statistical test such as the two one-sided tests (TOST) procedure to determine comparability. In order to do that, you have to pre-establish a range of results within which the methods will be considered comparable. For more details on this approach, I recommend you look into Chapter 1010 in USP, which provides examples of how to demonstrate equivalency, or comparability, using the two one-sided tests. So what about verification? We have identified what validation is: you do validation when the method is new and developed by you, and also when you use a pharmacopoeial procedure with a level of modification sufficient to require full validation. Now, in cases where you take the method as it is written in USP and try to implement it in your lab, the first thing you have to do is verify the method. The need for verification is not a USP requirement; it is in the Code of Federal Regulations. That text, which is also reproduced in Chapter 1225, Validation of Compendial Procedures, says that users of analytical methods described in the USP-NF are not required to validate the accuracy and reliability of these methods, but merely verify their suitability under actual conditions of use. This means that at the time you use the method in your lab, you have to do what some people call a mini-validation, and what we call verification, to demonstrate that the analytical operations, including the matrix, instrument, and analysts, are appropriate for the use of the method. Another definition is included in Chapter 1226, Verification of Compendial Procedures. It says that verification consists of assessing selected analytical performance characteristics, such as those described in Chapter 1225, to generate appropriate, relevant data, rather than repeating the validation process.
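The TOST-style comparison mentioned above can be sketched in a few lines. This is an illustration in the spirit of USP Chapter 1010, not the chapter's worked procedure: the assay results, the acceptance limit theta, and the hardcoded t critical value (tabulated for 10 degrees of freedom at the 95% level) are all assumptions made for the example. TOST at alpha = 0.05 is equivalent to the 90% confidence interval of the difference lying entirely inside (-theta, +theta).

```python
# Sketch of an equivalence (TOST-style) comparison between a compendial
# and an alternative procedure. Data and the pre-established
# comparability limit theta are illustrative only.
from statistics import mean, stdev

compendial  = [99.6, 100.2, 99.9, 100.1, 99.8, 100.0]
alternative = [100.0, 100.3, 99.9, 100.4, 100.1, 100.2]
theta = 1.0  # pre-established comparability limit, % label claim

n1, n2 = len(compendial), len(alternative)
diff = mean(alternative) - mean(compendial)

# Pooled variance and standard error of the difference of means
sp2 = ((n1 - 1) * stdev(compendial) ** 2
       + (n2 - 1) * stdev(alternative) ** 2) / (n1 + n2 - 2)
se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# 90% CI of the difference; t critical value hardcoded for df = 10
t_crit = 1.812  # tabulated t(0.95, df=10)
lo, hi = diff - t_crit * se, diff + t_crit * se
comparable = -theta < lo and hi < theta
print(f"difference = {diff:.3f}, 90% CI = ({lo:.3f}, {hi:.3f}), comparable: {comparable}")
```

Note the logic: a plain significance test asks whether the methods differ; the equivalence test asks whether any difference is small enough to ignore, which is why the acceptance range must be fixed before the experiment.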
So without going into the details, the concept is that certain items in the list of performance characteristics are critical for demonstrating that the method, as used in your lab, is suitable. Part of the difficulty of interpreting Chapter 1226 is that people sometimes don't understand that the decision about which parameters are most critical lies on the shoulders of the user. The user needs to apply scientific knowledge to determine the appropriate experiments to demonstrate that the method is verified, and we will discuss some of the approaches you can follow to identify those characteristics. If we compare the two activities, validation and verification, you will realize that when you do validation, you are trying to challenge the method, the procedure. For doing that, you use a well-defined sample: usually, when you do validation, the composition of your sample is well defined, and sometimes you use a reference material. You also assume that your analyst is well trained, because you are not trying to identify differences caused by the analyst; you are trying to identify variability caused by the method. You likewise assume that the instrument is well calibrated. So again, in validation, your focus is the analytical method. When you do verification, what you are trying to challenge is the analytical operations associated with the repetition of the method, and also the analytical matrix, and you use a well-defined method, which is the pharmacopoeial procedure. So when we do verification, our focus is no longer the method but the analytical laboratory, including the analysts, the instruments, and the sample matrix, the matrix of your product.
Putting all this together: from the analyst's perspective, if you are working in a GMP environment, you can assume that the analyst has the appropriate education, training, and experience, as required by the current GMPs. In the same way, the instrument needs to be calibrated, which is also a GMP requirement, so we can keep those assumptions. The reagents used in the method are usually well defined, with quality attributes, in the pharmacopoeias, so you know which is the appropriate reagent for that particular method. The main source of variability during verification is therefore the matrix, because the matrix of your product contains your excipients and all the components of the formulation, and those components may not be the same ones used when the pharmacopoeial monograph was developed. So it is important to have this step where you confirm that your product, your matrix, is appropriate for the method as written in the pharmacopoeia. Again, there is no generic protocol for verification; it is something that should be defined more or less case by case. But we can give some basic rules, some basic approaches, on what to do. We know that specificity, for the reasons explained before, is very important in verification: you have to demonstrate that none of the components in the matrix interfere with the quantitation of the analyte. Obviously, in some cases you know the matrix and can prepare a placebo matrix, so you have several tools to verify the specificity of the method; you can also determine peak purity using the appropriate HPLC detector. Then we have accuracy and precision. These two, together with specificity, are the main performance characteristics that need to be tested during verification. We have to demonstrate that the analyte is properly recovered within the range of the specification.
You can also confirm that repeatability is appropriate within the range of the specification. LOD applies only to limit tests, but LOQ is also important when you run a quantitative impurity test. The idea is to demonstrate, and this is open to interpretation, people use different approaches, something like: if I am able to quantify the impurity at 50% of the specification limit, then when I quantify an impurity at the specification level I am in the middle of the range, so accuracy and precision are appropriate, et cetera. And finally, unless you change the concentration of the standard solution, which is generally not the case, you don't need to test linearity. Again, if we define linearity as the response function between the analyte and the response of the instrument, this is an intrinsic property of the molecule that doesn't need to be tested again during verification. Range does not need to be tested either; the range is already defined in the pharmacopoeial specifications. And robustness is usually not part of verification; as I said before, it is more a component of method development. Very important, but it belongs in the method development part. So that is verification. Transfer is defined in Chapter 1224: the transfer of an analytical procedure is a documented process that qualifies a laboratory, the receiving unit, to use an analytical test procedure that originated in another laboratory, the transferring unit, thus ensuring that the receiving unit has the procedural knowledge and ability to perform the analytical procedure as intended. Here the focus is not the method, and it is not the sample: it is the lab. During the transfer, your intention is to demonstrate that the receiving unit has the appropriate personnel to run the test, and that the laboratory services and the instruments are also appropriate for running the test.
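One common way to estimate the detection and quantitation limits mentioned above is from the residual standard deviation of a calibration line, one of the approaches listed in ICH Q2 (LOD = 3.3 sigma/S, LOQ = 10 sigma/S, where S is the slope). The concentrations, peak areas, and 0.5% specification below are illustrative assumptions for the sketch.

```python
# Sketch: LOD/LOQ from a calibration line, one of the ICH Q2 approaches
# (LOD = 3.3*sigma/S, LOQ = 10*sigma/S). Illustrative data only.
conc = [0.05, 0.10, 0.20, 0.40, 0.80]   # impurity level, % of drug substance
resp = [510, 1020, 2010, 4050, 8020]    # peak area (arbitrary units)

n = len(conc)
mx, my = sum(conc) / n, sum(resp) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, resp))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Residual standard deviation of the regression (n - 2 degrees of freedom)
residuals = [y - (intercept + slope * x) for x, y in zip(conc, resp)]
sigma = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5

lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"slope={slope:.1f}  LOD={lod:.4f}%  LOQ={loq:.4f}%")

# Following the 50%-of-specification idea from the talk: for an
# illustrative 0.5% specification, a verification goal might be
# LOQ <= 0.25%, i.e. quantitation demonstrated at half the limit.
print("LOQ goal met" if loq <= 0.25 else "LOQ goal not met")
```

With this illustrative data the estimated LOQ sits well below half of the assumed specification, which is the kind of margin the 50% argument in the talk relies on.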
So transfer is not an activity closely linked to a pharmacopoeial procedure, because again, if the procedure is in a pharmacopoeia, verification is probably the appropriate activity. It was included in the pharmacopoeia with the purpose of closing the loop between all the related activities in the generation of analytical data. There are four approaches to transfer: comparative testing, covalidation, revalidation, and transfer waiver. Comparative testing is the most common way to do a transfer; basically, you analyze samples from the same batch in one lab and then in the other lab, and demonstrate that the results obtained are equivalent. Again, to see an exercise in demonstrating equivalency between two sets of results, go to Chapter 1010, where you have a couple of examples. Covalidation is the process by which we validate the method at the same time in both laboratories, following 1225 as the general protocol for the validation, and you try to demonstrate that the method is transferred by comparing the validation results from one lab with the other. The problem here is that you may see differences, and it is difficult to identify whether those differences are caused by the method, because the method is not fully validated yet, or by the lab. For that reason, comparative testing is always the more appropriate approach. Revalidation is similar: it means repeating the validation experiments in the receiving lab. Again, you may see differences between the results, and depending on when the method was originally validated, those differences may be caused by some kind of shift in the method's behavior or in the lab. And finally, the transfer waiver: there are situations where you can waive the need for a formal transfer, and those four options are also listed in Chapter 1224.
First, the new product's composition is comparable to that of an existing product and is analyzed by procedures with which the receiving unit already has experience. Second, the analytical procedure being transferred is described in the USP-NF and is unchanged, in which case you do verification instead; but again, nothing prevents you from using any combination of the activities associated with verification or transfer. Third, the analytical procedure being transferred is the same as, or very similar to, a procedure already in use. And fourth, the personnel in charge of the development, validation, or routine analysis are moving to the receiving unit. In those situations, the method does not need to be formally transferred. Chapter 1210 is the one I mentioned at the beginning of the presentation: the latest member of the family. It describes certain statistical approaches that can be used during validation, focused on accuracy, precision, and the detection limit, but not the rest of the performance characteristics; not all of them are described. One interesting point here is that accuracy and precision are described in 1210 as one performance characteristic. You can combine accuracy and precision into one component, which we can call total error, and 1210 explains how to do that. That is very important for analytical lifecycle management, because it is the approach we follow in that enhanced approach. Briefly again, how do we decide between validation and verification? When the method we are trying to implement is not validated, we validate using 1225. If it is validated and it is a compendial method, we use verification per 1226. If it is validated but it is not a compendial method, the appropriate activity is transfer. And this slide briefly describes the different options for transfer depending on the situation: if you are trying to transfer a method that is validated and the transfer cannot be waived, you can use comparative testing; if not, you will try revalidation.
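The idea of combining accuracy and precision into a single total-error figure can be sketched very simply. The full treatment in USP Chapter 1210 uses tolerance intervals; the version below is a deliberately simplified |bias| + 2*SD summary against an illustrative acceptance limit, so both the data and the limit are assumptions for the example.

```python
# Sketch: combining accuracy and precision into one "total error"
# summary, in the spirit of USP <1210>. A full treatment would use
# tolerance intervals; this is a simplified |bias| + 2*SD summary.
# Recovery data and the acceptance limit are illustrative only.
from statistics import mean, stdev

true_value = 100.0                   # spiked/known concentration, %
measured = [99.2, 100.4, 99.8, 100.9, 99.5, 100.1, 99.7, 100.3]

bias = mean(measured) - true_value   # accuracy component
sd = stdev(measured)                 # precision component
total_error = abs(bias) + 2 * sd     # most results expected within this band

acceptance = 3.0                     # illustrative total-error limit, %
print(f"bias={bias:+.3f}  SD={sd:.3f}  total error={total_error:.3f}")
print("meets criterion" if total_error <= acceptance else "fails criterion")
```

The appeal of the combined figure is exactly what the talk describes: a method with small bias but large scatter, or small scatter but large bias, fails the same single criterion, instead of passing two separate checks that do not speak to the quality of the reportable value together.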
On the other side, if the method is not validated, you can do a covalidation between the two laboratories. So up to here, we have discussed the traditional approach to validation, verification, and transfer. What are the new ideas regarding validation? I will start by mentioning a document that FDA published in 2004, where the weaknesses of the current validation process are indicated. The document says that variability and/or uncertainty in the measurement system can pose significant challenges when out-of-specification results are observed; that measurement system variability can be a significant part of the total variability; and that similar and repeated out-of-specification observations for different products across the industry reflect a less than optimal understanding of variability. Continuous improvement is difficult, if not impossible, in this situation. So the document acknowledges that out-of-specification results are not always caused by deviations in the product; they can also arise because the variability of the method was not properly defined or qualified during validation. In this way, method validation and transfer have become an exercise more focused on satisfying regulatory documentation requirements than on understanding and controlling sources of variability. Analytical procedures were validated at the beginning of the method lifecycle with little consideration of how well the method would perform under real operating conditions. ICH Q2 is often applied in the laboratory in a checkbox manner, without linking the validation parameters to the fitness for purpose of the procedure. These are the key topics here. First, we know that we have to validate because it is a regulatory requirement, but in simply following ICH Q2 we do not link the method variability to the fitness for purpose of the method.
The second is that we validate the method once in its lifetime, and then the method is used with little consideration of how it will shift through the lifecycle. Those are the things we are trying to address when we talk about analytical procedure lifecycle management. The first attempt in this direction at USP was the creation of the Validation and Verification Expert Panel in 2013. This group produced four documents, listed here, published in Pharmacopeial Forum, so you can access them if you want to read them, because they represent the basis of the later development of Chapter 1220. The procedure lifecycle can be represented using this scheme. We will briefly discuss all the stages, but what is important to notice here is that there is an initial activity, which is to define the analytical target profile (ATP). This is a simple statement in which we define what quality is required for the intended use, from the perspective of the performance characteristics of the method. Once that is defined, we choose the technology; we go into stage 1, which is procedure design; then stage 2, which is procedure performance qualification; and finally stage 3, continued procedure performance verification. Throughout this process we collect information, and we have to establish a knowledge management system to collect and archive all the necessary information regarding the whole history of the method. So what is the ATP, the first step in the process? The ATP is a prospective description of the desired performance of an analytical procedure that is used to measure a quality attribute: we plan the method we need for measuring a certain quality attribute of the product.
Based on the specifications, and based on the therapeutic window of the product, we will define the necessary quality of the reportable value, and based on that we will create the ATP. When we create the ATP, we define the most appropriate technology for measuring that particular attribute. The ATP usually includes the definition of the analyte, the components present in the matrix, the concentration range over which the method will be used, and the accuracy and precision that will be acceptable for the reportable value. There are two examples in the proposal for Chapter 1220. The first is more or less based on the traditional approach of presenting accuracy and precision as two separate parameters. The second is more focused on total error, and it says that the distribution of the total analytical error of the reportable value must fall within a total analytical error range of plus or minus a certain percentage. It can also be done using measurement uncertainty, et cetera; any way of stating the minimum requirement for the quality of the analytical data is appropriate. Stage 1 of the lifecycle is procedure design. Here we define the method parameters that allow compliance with the ATP criteria, in other words, that minimize bias and provide adequate precision. We come to understand the effects of the procedure parameters on the procedure's performance, which is usually known as robustness. We optimize the performance characteristics, including accuracy, precision, the calibration model, selectivity, and sensitivity. Finally, we arrive at a preliminary replication strategy for samples and standards, something that is not usually included in the traditional approach to validation or method development.
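The second, total-error style of ATP statement can be checked numerically. The sketch below assumes a normal model for the reportable value and illustrative qualification estimates of bias and SD, and asks whether the reportable value falls within an assumed plus or minus 2.0% total analytical error limit with at least an assumed 90% probability; none of these numbers come from Chapter 1220 itself.

```python
# Sketch: checking a total-analytical-error ATP statement of the form
# "the reportable value falls within +/- 2.0% of the true value with
# at least 90% probability". Bias, SD, limit, and probability are all
# illustrative assumptions; a normal model is assumed.
from statistics import NormalDist

bias = 0.3        # estimated bias, % of label claim (illustrative)
sd = 0.8          # SD of the reportable value, % (illustrative)
tae_limit = 2.0   # ATP total analytical error limit, % (illustrative)
required_prob = 0.90

dist = NormalDist(mu=bias, sigma=sd)
p_within = dist.cdf(tae_limit) - dist.cdf(-tae_limit)
print(f"P(|error| <= {tae_limit}%) = {p_within:.3f}")
print("ATP met" if p_within >= required_prob else "ATP not met")
```

This formulation shows why the ATP is prospective: bias and SD targets for development fall out directly from the probability statement, rather than being set after the method already exists.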
We also define the analytical control strategy, which comprises the system suitability parameters and other tests that will assure adequate performance. Stage 2 is the qualification of the procedure; this is where we do the actual validation. It is mostly focused on accuracy and precision. At this stage, if we are able to demonstrate that the accuracy and precision of the method are adequate within the range of the method and meet the ATP requirements, we can say that the method is qualified. We also have data generated during stage 1 covering the other performance characteristics, like linearity, limit of quantitation, et cetera. So at the end of stage 2, the replication strategy is confirmed, and it is confirmed that the performance of the procedure meets the ATP and other criteria; we are ready to implement the method. When the method is implemented, we go into stage 3, which is continued performance verification. The method may be transferred, or it may be used in the same lab, and we will use it under different situations. However, we have a set of tests, going beyond the system suitability tests, that help us identify whether the method is still performing within the ranges it was expected to perform in, so that the ATP criteria are still being met. We do that through the rest of the lifecycle, until the method is retired. So, to finish: the USP chapters on validation, verification, and transfer will continue to be the compendial guidance for these activities, so you should not expect, in the short term, any big change in the way the pharmacopoeia describes validation, verification, and transfer in those three chapters. Chapter 1220, which is the one now in PF, will provide directions for implementing an enhanced approach that encompasses analytical Quality by Design and risk-based principles.
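A minimal picture of the stage-3 monitoring described above is a control-chart check: limits derived from the qualification (stage 2) data, with routine results outside them flagged for investigation. All of the numbers in this sketch are illustrative assumptions.

```python
# Sketch: a minimal stage-3 (continued performance verification) check.
# Control limits come from qualification data; routine results outside
# mean +/- 3*SD are flagged. All numbers are illustrative.
center = 100.0   # process mean from qualification, % label claim
sd = 0.5         # SD of the reportable value from qualification, %
lcl, ucl = center - 3 * sd, center + 3 * sd

routine_results = [100.2, 99.7, 100.4, 98.2, 100.1, 99.9]
flagged = [(i, x) for i, x in enumerate(routine_results)
           if not lcl <= x <= ucl]

print(f"control limits: ({lcl:.1f}, {ucl:.1f})")
for i, x in flagged:
    print(f"result #{i} = {x} outside limits, investigate")
```

In practice stage 3 would track more than single-point excursions (trends, run rules, shifts after column or reagent changes), but the design choice is the same: the acceptance band is anchored to what qualification demonstrated, so drift away from the validated state becomes visible before specifications are breached.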
So analytical lifecycle management incorporates, among other things, the idea of using analytical QbD tools for method development, and it is also a risk-based approach, because in the ATP the requirement for the reportable value, the reportable result, is defined based on the risk we can accept in obtaining that result. How this new approach may impact existing and future monographs is being discussed, and there are several points that need to be resolved. For example, if there is a replication strategy associated with a particular ATP, that replication strategy should probably be incorporated into the method description in the monograph, which is a big departure from the current philosophy in the pharmacopoeias of describing only one standard and one sample preparation. Another aspect that has to be considered is whether, based on the knowledge collected during method development, it is possible to incorporate into the monograph not only the ATP, which defines the quality of the reportable value, but also the adjustments that can be made to the method while it still behaves as expected: something we currently do in a generic way by allowing adjustment of certain parameters in Chapter 621, or in the chromatography chapter of the European Pharmacopoeia. Those things are still being discussed. Just an announcement: on December 8th we are presenting, together with the British Pharmacopoeia, a workshop on Analytical Quality by Design and the Analytical Method Lifecycle and their compendial applications. In this workshop we will discuss the activities in both pharmacopoeias related to this topic, and we will also present examples. And I think that's all I have. This is my contact information. If you are interested in other educational programs at USP, you have the link there, and if you are interested in participating in USP activities as a volunteer, you also have the link there.
Thank you for your attention, and let me know if you have any questions. Thank you. Thanks a lot, Horacio, for this wonderful and very detailed presentation, even given the short time. We have indeed a couple of questions here. The first question was: will the presentation be available? I think we can set it up as a PDF, Horacio, and provide it to the participants, right? Yes, absolutely. Very well, thank you very much. Okay, and now the more technical questions. We will probably not be able to go through all the questions today, but I have a couple here that I think are interesting. The first question is: if a method transfer between two companies was successfully concluded, does the receiving lab still have to perform verification of the methods? No, the transfer implies that the method can be performed in the receiving lab with no further action. Excellent, thank you. Another question: even if there is no modification to the operating conditions or to the specifications, is there a recommended frequency for revision of a validation study? Well, that's a good question, and I have to say that the pharmacopoeias are usually silent in terms of how often a quality assurance activity needs to be repeated. If you follow the traditional approach to validation, the answer is yes, because methods are not static; they shift with time. You use different reagents, different columns, different analysts, and the level of training may change. So it is recommendable to exercise revalidation, or at least some kind of re-verification, of the method. But neither USP nor any other pharmacopoeia will tell you how often it needs to be done, because that depends on the use of the method and the characteristics of the lab. Thank you, Horacio. There's another question: is the robustness parameter mandatory to prove for monograph methods?
Usually, robustness is not included in a verification protocol, because robustness has the purpose of identifying the critical method parameters and how those parameters can be modified or adjusted, and usually that will not change. There may be cases where it is needed, but I cannot envision a situation where you would need to repeat the robustness study. Now, you can see how the previous question and this question will disappear when we embrace the new ideas on validation: because you will be doing continued verification of the method, you don't need to worry about revalidation; you are continuously monitoring the method. And regarding robustness, when you develop the method using analytical procedure lifecycle management, you have enough data to identify all the robustness parameters, and if you use AQbD elements you will define what is called the MODR, the method operable design region, which indicates where the method can operate. Thank you. Another question we have here, going in the same direction, I think: when the method is pharmacopoeial and has been used over years without any manipulation, is verification data still required? That's another good question, and I have to say there is probably a glitch in the USP text, because when Chapter 1226 was incorporated, we included a grandfathering clause that says that methods already implemented at the time the chapter became official don't need to be re-verified; you can use historical data to demonstrate that the methods have been verified. But after that, the expectation is that any new method implemented after 1226 became official needs to be verified. The problem is that the grandfathering clause is still in the chapter, so it is somewhat unclear; but the idea is that everything coming into the lab now needs to be verified. Thank you very much.
There's one question here going into our bioanalytical field, and the question is about the methods described in USP General Chapter 129, Analytical Procedures for Recombinant Therapeutic Monoclonal Antibodies: do these methods need to be validated or verified as well? Well, I have to say that I'm not familiar with the content of that chapter, because it's in the bio department, not in the general chapters department. But what I can say is that when a generic method is described in a chapter, and that chapter is referenced in an individual monograph, we can say that the method was fully validated for that particular purpose, and there is no need for validation; verification is the only thing you need. However, if you are using a USP method described in a chapter for a product that is not described in the USP, the idea is that you have to demonstrate that the method is suitable for your product, so maybe a more extensive validation is needed. Now, the point in this case, and again I will excuse myself because I'm not familiar with the content, is that there is a common aspect to the monoclonal antibodies that makes me believe that many of the performance characteristics of the methods described in the chapter don't need to be reproduced. For a more extensive answer on this, I recommend contacting the people in charge of the document in our bio department. Christian, I think you can direct the question to the right people. Yeah, I can. Thank you very much. And don't worry, Horatio, this is the only bio question for today; everything else goes directly to validation and verification. There's one question about alternative methods: there is no statistical data available for pharmacopoeial methods, so do I need to implement the pharmacopoeial method first to generate data for statistical comparison when I want to establish an alternative method? Yes.
The way the use of alternative procedures is described in the General Notices (and the General Notices, again, is the document at the beginning of the USP that applies to all pharmacopoeial activities) implies that you have to generate some data using the method in the pharmacopoeia in order to accept or adopt an alternative procedure, because you need a certain level of comparison between the pharmacopoeial method and the alternative procedure. Thank you. There's one question that came up several times, for example for excipients: if you have already verified a method on a certain material from one manufacturer, for example an excipient, is it necessary to verify the test again on a similar material from another manufacturer? Okay, well, very good questions, all very good questions. There is some kind of balance here, because you may say, okay, I validated this method, I was able to identify the appropriate purity of my excipient, and I can use the method. Now I change the supplier of the excipient, and the composition of the excipient may be different. At a minimum, specificity probably needs to be tested again, to be sure that no new component or impurity appears in the matrix that was not detected before. Now, having said that, if you have an appropriate process to qualify your suppliers, this step will not be needed, because you have a previous contract with your supplier on the expected quality of that particular excipient. So depending on the case, you may or may not need to do it. I would say that for a basic, pure-compound type of excipient, the answer is that you don't need it; I'm talking about cases where the composition of the excipient is more complex. Thank you.
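The comparison between a pharmacopoeial method and an alternative procedure is often framed statistically as an equivalence test. A minimal sketch, assuming a two one-sided t-tests (TOST) approach; the function name, the example data, and the equivalence margin are all illustrative assumptions, and the margin must in practice be justified by the intended use of the method:

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin, alpha=0.05):
    """Two one-sided t-tests (TOST): declare two methods equivalent if
    the difference of their means lies within +/- margin."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se**4 / ((a.var(ddof=1) / len(a)) ** 2 / (len(a) - 1)
                  + (b.var(ddof=1) / len(b)) ** 2 / (len(b) - 1))
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    p = max(p_lower, p_upper)
    return diff, p, p < alpha

# Illustrative assay results (% label claim): compendial vs. alternative
compendial = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0]
alternative = [100.0, 99.9, 100.1, 100.0, 99.8, 100.2]
diff, p, equivalent = tost_equivalence(compendial, alternative, margin=2.0)
```

With a tighter margin or noisier data the test would fail to conclude equivalence, which is the point: the burden of proof sits on the alternative procedure.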
There is one question about harmonization, or maybe harmonization at some point in time, because the questioner says there's a new chapter published in Pharmeuropa (the equivalent of the Pharmacopeial Forum for the European Pharmacopoeia) concerning a two-step process for the verification of pharmacopoeial procedures; the chapter is 5.26. In chapter 1226, our chapter, there is no clear distinction of these two steps. Is USP considering a revision of this general chapter to harmonize with what Pharmeuropa is suggesting? None of these chapters (verification, validation, and transfer) is in the PDG program, so there's no formal intention to harmonize them. Two months ago we started a new cycle with new committees and new experts, and now we are starting to review all our chapters. Validation, verification, and transfer are now under the review of a new expert committee called Measurement and Data Quality, which also owns chapter 1220. So I cannot say whether there will be modifications to 1224 in the near future, but so far there are no plans. Thanks a lot, Horatio. We still have five minutes and a few questions we can take. Another question is: why is precision only focused on repeatability and not on intermediate precision as well? I would say the contrary: precision needs to be focused on intermediate precision, because intermediate precision is a closer reproduction of how the method is used in routine. So I agree that precision needs to be focused on intermediate precision rather than on repeatability, which is based on a limited amount of data. Yes, exactly. There's one question on your slide number 10: do we always have to obtain positive limit of detection and limit of quantitation results at the level of 50% of the specification when verifying the method?
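On the repeatability versus intermediate precision point above: both can be estimated from the same data set when replicates are grouped by run (day, analyst, or instrument combination). A minimal sketch, assuming a balanced one-way random-effects layout; the function name and data are illustrative:

```python
import numpy as np

def precision_components(runs):
    """Estimate repeatability SD and intermediate precision SD from
    replicate results grouped by run, via balanced one-way ANOVA."""
    runs = [np.asarray(r, float) for r in runs]
    k, n = len(runs), len(runs[0])          # runs, replicates per run
    run_means = np.array([r.mean() for r in runs])
    grand = run_means.mean()
    ms_within = sum(((r - r.mean()) ** 2).sum() for r in runs) / (k * (n - 1))
    ms_between = n * ((run_means - grand) ** 2).sum() / (k - 1)
    var_repeat = ms_within                  # within-run variance
    var_between = max((ms_between - ms_within) / n, 0.0)  # between-run variance
    return np.sqrt(var_repeat), np.sqrt(var_repeat + var_between)

# Illustrative data: three runs (e.g. days) of three replicates each
sd_repeat, sd_intermediate = precision_components(
    [[10.0, 10.2, 9.8], [10.5, 10.7, 10.3], [9.5, 9.7, 9.3]]
)
```

Here the intermediate precision SD pools the between-run variability on top of the within-run (repeatability) variability, which is why it is the closer reflection of routine use.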
No, and thank you for the question, because I think I forgot to mention that that particular slide is by no means a list of requirements. It is a collection of things that, over time, we have heard people doing. It's not necessary to go to that 50% level. People who do that do it because, at a low concentration, you don't know the distribution of the error around the result; so, to be sure the results fall clearly within or outside the range, it's a good idea to verify the limit of quantitation at a level lower than the specification. That way, when you are at the specification, the whole Gaussian distribution of error is clearly inside or outside the limit. But by no means is that a requirement or a mandatory activity. You can choose your verification protocol and acceptance criteria based on your knowledge of the product and the method. Thank you. One more question here: if the API manufacturer has already validated and verified the analytical methods used to determine related impurities in the API, is the finished product manufacturer then also required to revalidate or re-verify the methods? Well, that is probably more a company decision. I will assume that this is a method that is not in the pharmacopoeia; it's a method that was developed and validated by the API manufacturer. Obviously, you will have collected that information, the appropriate impurity profile and so on, when you decided to buy that particular API, and you will use that method. Whether you revalidate the method, accept the supplier's validation protocol, or verify the method is more or less a company decision. At the end of the day, and I'm not sure if it's the same everywhere, the responsibility for the purity of the particular compound lies with the pharmaceutical manufacturer, so you may choose the level of risk you want to accept.
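The Gaussian-error argument above can be made concrete: if the true impurity level sits right at the acceptance limit, roughly half the replicate results fall on each side of it, whereas at 50% of the specification the whole error distribution sits comfortably below the limit. A minimal sketch, where the limit and standard deviation values are illustrative assumptions:

```python
from scipy.stats import norm

def prob_within_limit(true_value, limit, sd):
    """Probability that one measurement of an analyte with true level
    `true_value` and Gaussian analytical error `sd` is reported at or
    below the acceptance limit."""
    return norm.cdf(limit, loc=true_value, scale=sd)

limit, sd = 0.10, 0.01                        # e.g. 0.10% impurity spec, SD 0.01%
at_spec = prob_within_limit(0.10, limit, sd)  # true level right at the limit
at_half = prob_within_limit(0.05, limit, sd)  # true level at 50% of the spec
```

At the limit the pass probability is only about 50%, while at half the specification it is essentially certain, which is why some labs choose to verify the limit of quantitation well below the specification.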
Maybe two more quick questions are possible. If the alternative method that we are using is an EP or JP compendial method, do we have to perform a full validation, or is it just a verification? That's probably a question for the regulators: it is the regulator who needs to accept, or not, a method coming from another pharmacopoeia; it's not a USP rule. Thank you, Horatio. And maybe the last question: can you please repeat the difference between transfer and verification? Verification is not about the ability of the lab to perform the analysis, right? In verification, we challenge all the analytical operations in the lab with the pharmacopoeial procedure, but using our own sample: the analyst, the instrument, the laboratory services (vacuum, water, air, et cetera), and the matrix. We focus mostly on the matrix, because it's the main source of variability; we assume that the instrument is well calibrated, the people are well trained, et cetera. When you do a transfer, you cannot make all those assumptions: what you are trying to demonstrate is that the laboratory operations, the instrument, the analysts, et cetera, are appropriate for the method. So in one case we focus on the matrix, and in the other case we focus on the lab. Thank you once more. Thank you very much, Horatio, for answering all these questions. There are many, many more that we cannot answer today. We will probably receive the PDF of Horatio's presentation tomorrow; it includes his email address, so please feel free to send him any question that you think has not been answered during this webinar. So once again, thank you very much, Horatio, for your time and for sharing your knowledge and experience with us, and thank you also once more to the participants for spending this hour with us on the validation, transfer, and verification of analytical methods.
We have our next webinar in two weeks, about dissolution and performance verification testing. So thank you, Christian. Thank you very much, Horatio. And thank you all for your attention. Thank you, everybody, and have a good day. Bye-bye.