So we're opening this method that has the first 20 standards, and if I click on Method Contents, you'll see that we're set for intensities only. It has the number of conditions, the number of elements, the number of methods, the number of analytes (which can be different from the number of elements), the number of standards, and some other details on what's going on with this method. And we have element peak profiles, which I don't know where they all came from. There's, how can I say this, no great or easy way to tell whether you're going to get better data with a different peak profile. We could import the generic peak profiles, and my inclination would be to do that. The thing is, it's 5:30 now and I'm not sure how long we're going to sit here, and to go through and start comparing a fit with this peak profile versus that one, it's really hard to know where you've made an improvement. And it's unusual to find a big improvement. So I'm just mentioning that we could. My inclination, given that some of these peak profiles are from 2004 and 2008, is that they're probably due for replacement. And you see here in the count-rate column that some are from low, some from medium, some from high. On the new method that we created, it's running medium and high, but the medium is going to have a different time constant than what the old data was using. It's using a shorter time constant, which will give us higher count rates at similar resolution, so I would be inclined to use our generic peak profiles from the 8-microsecond time constant. Okay, that's the thing we changed. Right. And so to import new peak profiles, after I select Element Peak Profiles, the icon will be different on yours; I think it'll be an arrow pointing in for import and an arrow pointing out for export. This is WinTrace 8.
This is actually WinTrace 10, the one that runs under Windows 10. And if you wanted to do data analysis with this version, you could do that. I don't know if it's a good idea, because, again, it's not the one on the instrument. If you go that direction you can do whatever you want, you can run under Windows 10, but you can't put it back on the instrument. So could this drive that instrument? No. They stopped trying to support older hardware. Let's go through the process. Well, I'm going to show you how to import peak profiles and then how to use the new peak profiles within the method, because importing them doesn't get them used. Just because they're there doesn't mean the software automatically jumps over and starts using them; it's going to continue to use the ones that are currently in use until you tell it to use a different one. So again, we're in Element Peak Profiles. You click Import and you get a file-open dialog. Then what you're going to look for: you're in C:\, in the software's folder, you go to Peak Profiles, and then you go to Generic. What you would hope to see is 8 microseconds. Those are the ones you want for the medium count-rate setting. And it wouldn't hurt to also import the ones from the 3-microsecond time constant, which is for the high count-rate setting. So 8 microseconds is medium, 3 microseconds is high, and just import all of them. Your method file will get a little bit bigger, but it doesn't hurt anything. For now I'm going to import all of these, which were actually acquired on an SDD, so probably very different resolution. But we don't care, because we're not going to actually use this method in the end. So now we've got all these peak profiles here. And the user interface is a little different; they've added columns here that I don't think are in the older version.
But either way, you see the node here; the little plus sign in a box is called a node. Expand the node and you can click on any of these and see the spectrum for it. The Peak ID button is a different icon, a red-and-blue thumbprint, but if you want, you can click on there and just have it help you. The version you have, I'm pretty sure, automatically redoes the auto peak ID when you go to a new spectrum. This version doesn't redo auto peak ID until you turn it off and turn it back on. That's why, when I click on these, it's not showing me the peak labels, and then I click the button twice. Anyway, so there we have a big list full of peak profiles, which you will have after you import all the medium-count-rate peak profiles. And on mine, will they be 8 microseconds as well, or should they be 20 microseconds? They should be 8; you can go double-check. Because by version 7.1, which is the release you have, the generic peak profiles are at 8 microseconds, I'm pretty sure. We can go check that. But we're not going to try to change anything on this next view down from peak profiles, the analysis conditions. We're not going to change anything here for the moment. Nothing else to say; we're not changing that. This is the place where we would add elements, or remove elements if you wanted to. I mean, do you know of any elements you want to add or remove? Is there anything else you feel we need to do here differently than how we're doing it? Looks like you've got all the ones I want. So I think I sent this to you with the Pd-thick, the Mid-Zc condition added, and these elements appeared to me to be the ones that most benefit from that condition. I did not try to take them out of the Mid-Zb condition. Again, the question of what the benefit of a third condition is is one we could try to address, but maybe not today.
And the other thing we could do is export some spectra from the two conditions and see how they overlay. We looked at them one after the other, but we didn't try to overlay them. So, spectrum processing is what I was getting to. And this is one of the other places where the user interface is somewhat different from version 7.1, which you have. There, if we want to look at the details of the spectrum processing for titanium, we would simply double-click on titanium, and it would bring up a dialog where you see the peak-profile spectra that are available and a spectrum showing the region of interest that has been defined for titanium. And if we wanted to make any changes, we would have to go into that properties dialog. That was changed, so in this version, if I want to change from XML to net or gross or derivative, it's right here; I don't have to go into a properties dialog. But the other thing is that if I want to change the fitting region, I click on that down-arrow button, and it brings up a dialog where the only thing I can do is change the fitting region. It's more limited. The other thing that still works, and this is not obvious at all, is that there are function keys here: these two are horizontal expand and contract, and these two are vertical expand and contract, and it's not shown anywhere. It'd be nice if they had little buttons, and the L key too, because it toggles log mode on and off. And so, I hesitate to encourage you to get into this. Most of the default fitting regions are fine, and I don't think there's necessarily a good reason to change them. But if you do, you should be aware that, for the most part, we expect to include the entire line series and some flat background on either side of the peak in the fitting region.
As long as you're using the XML fitting technique, we need the whole peak profile included, and I've frequently found people who came in and set it to some little region at the top of the peak because they thought it would give them better data. If you were going to use the... I know what it is. Does anybody use them? I didn't know anybody used them anymore. I'm not much of a user of field maps. It's a dinosaur. So is that a K-alpha and K-beta? Or maybe they're L-alpha and L-beta, because they're so close together? L lines don't look like that. We're in log mode; maybe this looks more like what you're expecting. That's a K-alpha and K-beta, and that's titanium, so it kind of has to be a K-alpha. So here is ruthenium. This is the one you're using as a ratio, and it'd be nice to have a spectrum that actually had the Compton peak in it. It doesn't really matter, though, because all you're giving the software are the high and low limits of the region. It's not using that peak shape in this case, because we're using gross intensities. I think I said net intensities during the talk; you can try both ways, and whichever works is fine. I don't know if there's any theoretical reason to pick one versus the other. And in this user interface, the list of peak profiles is over here under this dropdown. But in the user interface on yours, you only see the list of spectra when you double-click on the element and the properties come up. It's in the properties dialog that you choose gross, net, or XML, and choose which peak profile you want to use. And again: if you're using XML, it matters; if you're using gross, it doesn't matter, it's just a display function. So what I would be inclined to do, and again I can't tell you it's likely to make that much difference, is go through and use the 8-microsecond profiles. I think you will have the 8-microsecond time constants. You could switch over to using the newer ones, and it would probably be easiest to find them by the date.
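The point about including the whole peak profile plus flat background, rather than a sliver at the top of the peak, comes down to how profile fitting works: the measured spectrum is modeled as a scaled copy of the stored profile sitting on a background, and the scale is solved by least squares. This is only an illustration of that idea, not WinTrace's actual algorithm; the function name and the synthetic Gaussian data are invented for the demo.

```python
import numpy as np

def fit_profile(measured, profile, lo, hi):
    """Fit measured = scale*profile + (c0 + c1*channel) over channels
    [lo, hi) by linear least squares; return the profile scale factor."""
    x = np.arange(lo, hi, dtype=float)
    # Design matrix: stored profile shape plus a linear background.
    A = np.column_stack([profile[lo:hi], np.ones_like(x), x])
    coeffs, *_ = np.linalg.lstsq(A, measured[lo:hi], rcond=None)
    return coeffs[0]

# Synthetic demo: a Gaussian "peak profile" and a measured spectrum that
# is 3x the profile sitting on a sloped background.
chan = np.arange(200, dtype=float)
profile = np.exp(-0.5 * ((chan - 100.0) / 5.0) ** 2)
measured = 3.0 * profile + 10.0 + 0.01 * chan
scale = fit_profile(measured, profile, 60, 140)
```

If the fitting region covers the whole peak and some genuinely flat background, the background terms absorb the baseline and the scale is recovered cleanly; clipping the region to the peak top makes the background terms degenerate with the peak itself, which is why narrow regions do not actually give better data.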
I think when you go to look at the peak profiles, like if we look at these, you'll notice that you've got dates here, and some of them are 2004, 2008, 2015, and so on. So you'll see a date when you look at the properties dialog for an element, and you can choose the one you want to use. So again, here is where we would add elements, and I don't see anything we need to change. It looks good; it has way more elements than I typically see. So down here under Unknown Components, you see that we've got elements that are unchecked. Obviously ruthenium and rhodium are unchecked, and likewise uranium and chromium; we're not trying to calibrate those. Tungsten is often included in peak fitting. We may have emailed back and forth about this: if you've got standards that are powders ground in a tungsten carbide grinding mill, then you'll have tungsten there that's not going to show up on the certificate, so it's not unusual to include tungsten. I don't know if it's actually detectable in obsidian; it's not often, but it happens. I was wondering about the elements Jeff listed. There's just three. But at this point, it's time to try to run the calibration calculation. So you see here, under Calibration, when we're set for intensities only, I guess it only shows the elements that are checked. But we have no data for nickel, so I don't know what's up with that. Is nickel an element you're trying to measure? No. Okay, so it sounds like that's one that should have been unchecked. So we should uncheck it, but you know what, I'm getting ahead of myself. Let's go ahead and leave it checked so you can see what the user interface does when we say we want to calibrate. Let's start with just a linear calibration.
So now you see it's come up highlighted in red, which is telling us that we don't have any standards, and the Calibrate button here (which would be a picture of a Quant'X on your software) is grayed out, telling us it's not going to let us try this because we don't have the calibration data. Oh, lead as well; it's saying there's no lead data. Is that right? There should be. No lead data. Okay. So we go back to Unknown Components, we uncheck nickel, and we check on lead. Oh, there it is. I didn't expect that. And now when I go back to Calibration, you see that this is no longer grayed out, and no longer red. So I click the button and say continue. It succeeded. That's good: a calibration. So at this point, put the cursor up there on that point, and the flyout tells you it's standard 20. Is there some reason that we'd exclude that? Or would we expect it to be on the line with the others? Is it super high in something, or low in something, that would cause us to say, you know, ditch that data point? Or do we want to try to find a matrix correction factor that fits that data? Is that just titanium? This is the titanium calibration curve. Okay. And then it's just standard 20, which is gobs and gobs higher. Okay, about eight-tenths of a percent. I think that's why Jeff excluded number 20. Well, we can go back to Calibration, go down to standard 20, and look at the other elements and say, okay, is there something? So this is 6%, and it's 0.8% iron. Here's what he said: you should also leave out number 20; it's actually a vessel that should not have been included in the beginning. Okay, well, good. So we're going to right-click on that and say Don't Use Standard. Okay. And if you were looking, there was a node here before, and when I made a change it went away; we're no longer calibrated. So now I click the button and we rerun the calibration.
It's almost like you want to delete it, because we don't get to see as much here. Right. In this version of the software there's a zoom-in function; unfortunately, the version you have doesn't have it. That's where it would be much better. The other thing I can do is switch over to intensity versus concentration, and in this mode it does not show the standards that have been excluded. It gives me an R-squared, but it doesn't give me the RMS error. The RMS error is still a little bit of a mess. It looks like a good fit, but it's not great. So you go through each of the elements? You go through each of the elements, and then come the decisions about the bad standards: are we going to keep them in, or are we going to sit them out? 17 was one of the bad ones. So we just throw that standard out altogether? Yeah. 17: he said you should also leave out 17, because it's very high. These are Kenyan, they're very high, and the iron is coming in really strong, so they stand out. I don't know; I'd have to ask him. Well, the only thing is that when you've got standards that are a little bit out of range, or at the extent of the range, then when we start looking at doing the matrix correction, you can say, well, maybe you'd actually like them in there, so you can accurately calculate a correction from them. And so, you know, this is the kind of thing... the regression error now, that's funny. This I wouldn't expect. Okay, so we just threw out one standard, and suddenly cobalt won't calibrate. Cobalt is a problematic element in a matrix like this, where the iron is relatively high, because there's a severe overlap of the iron K-beta with the cobalt K-alpha, and it's really questionable whether you should expect to analyze cobalt with EDXRF any time you've got significant iron. I don't know if you do that on your Bruker or not, but it's just going to be tough.
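The Fe K-beta on Co K-alpha overlap just mentioned is easy to quantify. The two lines sit roughly 128 eV apart (Fe K-beta1 at about 7.058 keV, Co K-alpha1 at about 6.930 keV), while an EDXRF detector's resolution at these energies is on the order of 150 eV FWHM; that resolution figure is an assumption, since the actual detector was not specified. A back-of-the-envelope estimate of how much iron signal spills into a cobalt region of interest:

```python
import math

# Published line energies in eV.
FE_KB = 7058.0   # iron K-beta1
CO_KA = 6930.0   # cobalt K-alpha1

def overlap_fraction(line_center, roi_center, roi_half_width, fwhm=150.0):
    """Fraction of a Gaussian-broadened line (assumed detector FWHM)
    that lands inside a region of interest centered on another line."""
    sigma = fwhm / 2.3548  # convert FWHM to standard deviation
    a = (roi_center - roi_half_width - line_center) / (sigma * math.sqrt(2))
    b = (roi_center + roi_half_width - line_center) / (sigma * math.sqrt(2))
    return 0.5 * (math.erf(b) - math.erf(a))

# How much of the Fe K-beta peak falls in a +/-100 eV ROI on Co K-alpha?
frac = overlap_fraction(FE_KB, CO_KA, 100.0)   # roughly a third
```

With iron at a thousand times the cobalt concentration, a third of the Fe K-beta area landing in the cobalt window swamps the cobalt signal entirely, which is why the curve shows given concentrations with no usable measured intensities.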
And I can't imagine how a 1.05 ppm standard would have helped our cobalt calibration. Some things just don't make sense, especially with 5% iron. Probably that was a calculated cobalt concentration; I don't want to guess. I'll tell you what, let's go look at them. So there's our manganese after we throw out number 17. It still looks like an outlier, but again, we're doing linear, and I'm expecting to need to do matrix correction. So it doesn't bother me that these calibration curves aren't beautiful. This is what I expect for cobalt: lots of given concentrations, no measured intensities, because the primary line is severely overlapped by iron, which is there at a thousand times higher concentration. My inclination would be to go back to Unknown Components, find cobalt in the list, uncheck it, and rerun the calibration calculation. With this version of the software, I'm going to go ahead and click File, Save As, just so I've still got your data in its original form, in case I want to put it on a different computer at home. One other thing that people don't use very much is the comment section. Let's see, it's this line here, with the last-saved date; we just saved it. Okay. So cobalt isn't going to do us any good. So here's our iron data. And copper: looks like copper's low. Two things. One is that we're not getting a good correlation. The second is that in this case we're seeing the error bars. They're based on counting uncertainty, plus or minus one standard deviation. You go to iron and you don't see error bars, because they're small relative to the peak height; those are big peaks. So here on copper, we're fighting insufficient counts. This is the Mid-Zb condition. Oh, you know what, this would be a case where we could add copper to the Mid-Zc condition and see if it works better. Okay. So to do that, I'll go to the analysis conditions, select Mid-Zc, and click on copper. And the thing is that there are peak
overlaps. I mean, there's not a lot of zinc, but there's some zinc and copper overlap, so if I want to do copper in this condition, I need to include fitting for zinc as well. If there really were nickel in your samples, I would need to do that too. I'll put that in right now; let's just add a bunch of elements. This is the zinc here. I'm not going to put in a couple of the others, because they're not going to matter. And now that I've clicked on nickel: I think we previously took it out of the method, and by clicking on it we just put it back in again, right? So I'm going to go back here and find nickel. Oh, no, it's still deactivated. It was in the condition; it just wasn't active. So this is copper in the thick-filter condition; we can run it later. We're in the thick palladium. Palladium? Mid-Zc. All the mid-Z filters are palladium; A, B, and C are just different thicknesses. So now... oh, and it changed it for me, all of these, which is surprising. So we can now go through and see what those curves look like based on the Mid-Zc condition. I expect Mid-Zb to be better for manganese and iron, and certainly for titanium, because it has a thinner filter; you get more intensity of continuum radiation down near the absorption edges for those lower-atomic-number elements. We put Mid-Zc in here really to get better performance for the heavier trace elements. If you've memorized what the RMS errors were from before, you could say whether this works better or not. It's certainly a tighter R-squared. Which one is this, manganese? Iron. I think iron looks better. Copper: so again, here we're getting these data points that are way out. Did we already improve a little bit on copper? Yeah, but these error bars are still big. And I'm not sure that we're going to accurately measure copper unless we decide to use longer counting times or something. Well, if we're doing some bronze, we should get better copper.
Bronze is going to have a higher concentration. The problem here is not copper; the problem is obsidian. Copper in obsidian. The spectrometer will measure copper all day, just not in obsidian. So, zinc: I think we're seeing something. This looks like pretty good data, RMS .07. And this data point is actually on the line, so I'm sure 20 is on the line for this, for zinc. Do you leave certain samples in for some elements? You can, yeah, if you want to. I don't think we should spend a lot of time with that, because, again, this method file is just practice. But how would I do that? So, we go back to Calibration, and we're talking about zinc in number 20. Come over here to zinc, and I say Use Standard. There you go. Now we rerun the calibration calculation. Again, we're doing linear right now, and we're throwing data points out. But especially in these cases where your colleague has said, hey, the matrix is pretty different on this one, those are the ones you'd actually like to leave in if you're going to do matrix correction. If you're going to do intensity correction and calculate correction factors, it's good to have samples that are a little bit outside of the range; that's where you get a more accurate measurement of the matrix effect for that element. On the flip side, if you're not going to be measuring that matrix and you don't need to go there, then don't. I would say if linear works, that would be great; if you have a really tight range of concentrations, maybe linear is fine. See, this looks pretty good given the concentration levels we're at. So there's your intensity, and again, maybe this one's a little bit out. One's on the line; 20 is right on the line. Yeah, 20 was just off. So, arsenic here. Okay, most of these are below 10 ppm. Look at those error bars. Let's confirm which condition that is... oh, this is the wrong condition. Mid-Zb is not the right condition for arsenic.
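The include-and-exclude loop being demonstrated (fit a line, inspect R-squared and the RMS error, toss or restore a standard, refit) is ordinary least squares on whichever standards remain. A minimal sketch of that bookkeeping; this is not the vendor's code, and the standard values below are made up, with index 4 playing the role of an off-trend standard like number 20:

```python
import numpy as np

def linear_calibration(intensity, concentration, exclude=()):
    """Fit concentration = m*intensity + b over the standards whose
    indices are not in `exclude`; return slope, intercept, R^2, RMS."""
    keep = [i for i in range(len(intensity)) if i not in exclude]
    x = np.asarray(intensity, float)[keep]
    y = np.asarray(concentration, float)[keep]
    m, b = np.polyfit(x, y, 1)
    resid = y - (m * x + b)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    rms = float(np.sqrt(np.mean(resid ** 2)))
    return float(m), float(b), float(r2), rms

# Hypothetical standards: the last one sits far off the common trend.
I = [10.0, 20.0, 30.0, 40.0, 35.0]
C = [5.0, 10.0, 15.0, 20.0, 80.0]
_, _, r2_all, _ = linear_calibration(I, C)
_, _, r2_cut, _ = linear_calibration(I, C, exclude={4})
```

Excluding the outlier makes R-squared jump, exactly as in the software; the judgment call the transcript keeps returning to is whether that point is a bad standard or a real matrix effect you would rather model than discard.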
This is one of the elements that's definitely going to be better with the Mid-Zc. And in fact, while we're at it, I would just change all of these guys, because we put the Mid-Zc condition in specifically for these elements. If you wanted, you could go through them and confirm for yourself that it really is a good idea. Barium: the fact that it's set to Mid-Zb means that under this condition we're taking barium L-line data, and the whole reason we're using the high-Z condition is to get the K-lines of barium. These, it looks like, had been put in with L-lines automatically. Oh, thorium? Yeah, thorium is one of the ones that wants to be Mid-Zc. And barium goes in High-Zb. Change it to High-Zb; you just change it, because we want the K-line. We looked at those K-line spectra before. We were looking at those when all of a sudden we got to that spectrum; that's where we suddenly realized we had a problem with the sample changer. We were going through, looking at these big, beautiful barium peaks in the high-Z condition, and then all of a sudden there was a spectrum which had the two K-lines and it was like... what is it, 35? 30? Can you read it? 32? 36? Okay. Yeah. That's why we wanted to put barium on this system: because it goes up to 50. That spectrum goes to 40; this source goes to 50. Down there it has no sensitivity, and that's the difference a 50 kV source makes. But, by the way, here's the deal. I go to the analysis conditions and I click on the Pd medium. This is the condition we use for titanium. Barium is in this condition even though we're not going to analyze barium in this condition; it's in here to correct for the overlap of barium on titanium. That's why you wouldn't take it out. Right, because it's a big enough peak that it's going to cause problems for titanium if you don't include it.
But then we also included it here in the copper-thick condition, and this is the condition I'm recommending when you analyze barium. Does that make sense? Yeah. So we've re-run the calibration calculation. We go up to arsenic, and this still looks like crap. Why? I think it's just the concentration level. Yeah. I don't know what your arsenic looks like on your Bruker, but we're below 10 ppm; it's going to be really tough to get good arsenic under these conditions. My guess is, if you look at a spectrum of one where arsenic is 10 ppm... but you have no lead interfering, is that true? I believe so. Yeah. Okay, it doesn't matter. Suffice to say, arsenic is crap. I apologize, but I don't think you're going to analyze arsenic. If you want to leave the calibration in there, then if something pops up with a big number it would be good to have a calibration, but I wouldn't trust it. Can you do arsenic? What about zinc? Zinc, yes; arsenic, no. I mean, for this system we can do zinc, we can do gallium, but not arsenic. Okay. So there's your rubidium. Strontium: that one is pretty good. Which one is it, 20? 17. They're both right on the line. Yeah, and you're throwing this one away. Not too bad, you know, given what we saw with the samples not being perfectly where we wanted them. Of course, that was the second tray we were looking at; maybe the first tray full stayed where they were supposed to be. I got a lot of tape on there. These curves look pretty good to me. Okay. So there's your barium curve, and there's lanthanum. I would definitely leave that point in if you want to have a lanthanum curve. 17, okay. There's so much scatter down here that having a point up here is, I think, probably a really good idea. And same for cerium. You see it over there; that's 17. I trust the point out here more than I trust the three points down here; the relative uncertainty in that intensity is small. Oh, we threw it out.
Alright, so you know what, I'm inclined to say let's put these back in. Okay. And then, case by case, we decide which ones we want out. I think there was one element in one of the standards at the beginning that we didn't like, and so we threw the whole standard out. I'm not sure. For the elements that we really care about for this application, rubidium, strontium, and zirconium, they might matter. And this might be a matrix effect. So we can turn it off for that element, just that element. And I don't know what the right answer is. Maybe the right answer is to throw it out; maybe the right answer is to try a matrix correction factor. But again, this is probably one of the ones we threw out before. I would say if you wanted to try to do copper, having this data point to anchor the slope of your calibration is a good idea. Is there, like, a guideline for the R-squared we're looking for? Higher is better. Well, closer to one, yeah. Yeah, higher is better. It gets back to how good your standards are, and what the sensitivity is. I mean, anytime you see error bars like that, you've got to say, okay, we don't have great sensitivity; we're near the detection limit. So we're not going to trust those numbers very much. Unless you get up around 40 ppm, and then, yeah, then we're measuring something. So is it basically ready to go? Yeah, I mean, right now we can run unknowns. Once there's a calibration in there, it's functioning. So at this point, that's certainly what Steve would do: run them and see what the numbers are. Sure. Yeah, you should definitely do that. This chart, unfortunately, you don't have. The chart we give you in the version you have is just one standard at a time, a list of the elements with the given and recalculated values, and this is the Analysis tab. What's nice about this version is that it's telling you the uncertainty.
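The error bars and uncertainties under discussion come from counting statistics: a peak area of N counts carries a one-sigma uncertainty of about the square root of N, and subtracting a measured background adds the background's variance a second time. A back-of-the-envelope sketch; the count values are invented purely for illustration:

```python
import math

def relative_uncertainty(net_counts, background_counts=0.0):
    """One-sigma relative counting uncertainty of a background-subtracted
    net peak area. Total and background are both Poisson-distributed, so
    var(net) ~ (net + B) + B = net + 2B."""
    sigma = math.sqrt(net_counts + 2.0 * background_counts)
    return sigma / net_counts

iron = relative_uncertainty(1_000_000)     # major element: ~0.1 percent
trace = relative_uncertainty(100, 400)     # near the detection limit: 30 percent
```

That is why iron shows no visible error bars while copper and arsenic, with only a handful of net counts at these concentrations, do, and why longer counting times (or a better-suited condition) are the only real cure.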
Right. When you see that the uncertainty is 20 and the difference is 120, okay, that seems like something's wrong. If the uncertainty and the difference are the same order of magnitude, it's okay; we're not going to do better than that unless we count longer. But at this point, well, the other thing we would do: we just added all those standards back, and we can go through these real quick. If you wanted to try to analyze arsenic, you would definitely want to take all these zeros out and set them to not included. But I would say it probably makes more sense to just uncheck arsenic in the unknown components and not even report it, because I just don't think you have the calibration. Maybe it'll look better when you get 40 standards; maybe one of those other standards is higher. But anyway, we've got curves here, and for the most part these curves look pretty good. I like having these points in for cerium and lanthanum. Thorium: I don't think I've seen a thorium calibration curve like this before. I mean, they've worked hard to find obsidian sources with a range of values. That report from Ferguson had hundreds of obsidian sources, with a range of values in the elements that matter. That's why we have those highs, and that's one of the reasons they don't like RGM-2: because it's low in a lot of them. Well, it would be a good one for a reproducibility study. I mean, if the unknowns you're running are in that range, then you absolutely would want to use that as your control.
Okay, I'm going to change it to intensity correction and click OK, and it's going to warn me that I'm losing my calibration curve when I do that, so the node disappears. And the other thing... oh, look at that, it saved them, so it knows about all of my matrix correction factors. You know what, this isn't too bad. This is very conservative, and I don't know if these are the same checkboxes you have in the original file, because I thought there were more than this, but maybe it was just like that. So are we looking at, like, the large peaks, the elements that interact with a lot of elements, and that's why we have so many checks? Well, okay, first of all, what does this mean? Matrix elements are in this column and analytes in this row, so when you check a box, it means we're correcting that analyte for the effect of that matrix element. And so we're correcting for iron on itself, and for iron on rubidium, strontium, zirconium, and niobium. Out of this whole array of possible correction factors, we're only correcting for the effects of iron on these elements. Whether that's enough is not for me to say; look at the calibration curve and ask, do you want a better R-squared or not? I think with 40 standards that are all obsidian, you will be sufficiently over-determined that you can certainly put in more matrix correction factors. Absolutely, no doubt. And absolutely you start with iron. It has a big peak because it's a high concentration, but it also has a broad range of concentrations, and so it will change the matrix. One of the things we can do is find a high one and a low one and look at how the Compton peak changes; you'll probably see the intensity of the Compton scatter peak inversely correlated with the size of the iron peak. The iron peak gets bigger, it's a heavier matrix, you get less Compton scatter.

And one of the things that I haven't even looked at, though we've talked about it, is whether we actually put a Compton-ratio correction into this method. I didn't even look, because we jumped right into the calibration curves. But anyway, iron is the obvious one. We tend to want to put an iron correction on elements where you have a... can I say crappy on camera? If you have a crappy calibration curve. But if you have a lot of scatter in the data points, it doesn't make a lot of sense unless you think that scatter is caused by a matrix effect from iron. If it looks just like random noise and you don't have a good slope, you don't want to go trying to do a matrix correction, because now you're applying a systematic correction to something that's just noise to begin with. So I'm going to click the button, continue, and we've got an updated calibration. And we've got to get to dinner. So we didn't put a factor on titanium; let's see if it fixes titanium. Shall we wrap it up here? It's five o'clock. We won't have a problem. Oh, very nice. Is that titanium? There. Again, we're pulling one of those outliers in; with the matrix correction it really can be. Yeah. And if we had a larger number of standards at this concentration and at the high iron concentration, I would feel better about it. The thing that you worry about is that basically we're swinging that matrix correction based on one standard, and that's something that, with empirical matrix correction, is a little bit dicey. But it also cleaned up everything down here. I mean, this is the kind of game you can play: throw this one out and see if this data gets better. Yeah. I wanted to go through the mechanics of showing you this; the process of actually deciding which terms to use and which terms not to use is something that can take a lot of time. It can kind of drive you crazy. Yeah, we'll just have to go through it next time. And I have a spreadsheet that can help, and I'll share that with you, like a workflow. So the process is that you
can go through here one element at a time, put in correction factors and take them out, and look at those RMS errors and see how they change as a function of which correction factors are in. A co-worker of mine wrote a spreadsheet that does all of that in Excel with a macro. It goes through and puts in every possible matrix correction, one correction at a time, tries them out, and then gives you a report that says: here are the best scenarios. If you're doing one correction factor, here's the element that gives you the best result; if you do two correction factors, here's the pair that gives you the best result; and so on up. Do you try to not do too many? Yes, because nobody I've ever worked with before has 40 obsidian standards. Alright, you have 40 obsidian standards, so you can use four correction factors with 40 standards. Go for it. The only thing I worry about is what I was talking about with titanium: it's perfect, but the factor that's giving us this perfect fit is based on one standard. And the fact is that the fit doesn't know the difference between a fundamental physical phenomenon and an error. If it's just a bad standard, because this guy's way out here, it's still going to draw a line through it, and it could be totally bogus: because it's out there by itself, when we put this factor in, that one gets undue influence. But if it's a good standard, maybe it's doing exactly the right thing.
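The empirical intensity correction being configured, and the spreadsheet search just described, can both be sketched with one model. Empirical intensity corrections are commonly written in a Lucas-Tooth style form, C = b + I*(m + sum_j k_j*I_j), which is linear in its coefficients and therefore solvable by ordinary least squares; trying every combination of corrector elements and ranking by RMS error is then a small combinatorial search. This is a generic sketch, not WinTrace's actual implementation or the co-worker's macro, and the iron/calcium data are fabricated for the demo.

```python
import itertools
import numpy as np

def correction_fit(I, conc, correctors):
    """Lucas-Tooth style fit C = b + I*(m + sum_j k_j*I_j).
    `correctors` maps element name -> intensity array."""
    I = np.asarray(I, float)
    conc = np.asarray(conc, float)
    cols = [np.ones_like(I), I]
    cols += [I * np.asarray(Ij, float) for Ij in correctors.values()]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
    rms = float(np.sqrt(np.mean((conc - A @ coef) ** 2)))
    return coef, rms

def search_corrections(I, conc, candidates, max_terms=2):
    """Try every combination of up to max_terms corrector elements and
    rank the fits by RMS error, best first."""
    ranked = []
    for n in range(max_terms + 1):
        for combo in itertools.combinations(sorted(candidates), n):
            _, rms = correction_fit(I, conc, {e: candidates[e] for e in combo})
            ranked.append((rms, combo))
    ranked.sort(key=lambda t: t[0])
    return ranked

# Invented data: iron is the real matrix effect, calcium is a decoy.
I = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
fe = np.array([2.0, 5.0, 1.0, 4.0, 6.0, 3.0])
ca = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
C = 1.0 + I * (3.0 + 0.2 * fe)   # concentrations obey the Fe-term model
ranked = search_corrections(I, C, {"Fe": fe, "Ca": ca})
best_rms, best_combo = ranked[0]  # the Fe correction comes out on top
```

With 40 standards and a handful of candidate elements this search is cheap, but the caution above still applies to whatever it ranks first: a term anchored by a single outlying standard will look perfect whether it is physics or a bad certificate.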