Hi everyone, thanks for coming. I'm happy to have Wayne Watson visiting us. He's a Thermo Scientific representative, and he's helping me with calibration work on the Quantex XRF here in New York, so we're going to use this presentation as a workshop, since we use the instrument a lot here. So, yeah, welcome, Wayne. I don't know if there's anything else for the introduction, but he's been working with Steve Shackley on the instruments Steve had in the past. The one we have here is the first Quantex that Steve bought when he was at Berkeley; he bought one used before that. Is that right? Yeah. He bought one from IBM San Jose, I think probably around the time they sold that business off to Read-Rite. They had an XRF they were using to measure thin-film coatings on heads for hard drives, and it had a special copper-anode X-ray tube, because that was just the right X-ray tube for that particular application. They had bought several of them and were auctioning them off, and he got one at a really good price. It wasn't really perfect for his application, because a copper anode isn't really the best, but it works, and I think it was really cheap. It was a spectrometer I calibrated when it was at IBM. So yeah, thank you. I'll introduce myself just a little bit. I'm in sales, full disclosure. I am a salesman. I'm in sales because they moved the factory where these spectrometers are made. It used to be down in the South Bay, and I worked there as a product manager; I wrote specifications for new instruments and for software. At a certain point, the cost of real estate was too high in Silicon Valley and our company moved manufacturing to Wisconsin. I didn't want to move, so I had to find a new job, and that's why I'm a salesman today. But of the products I happen to sell, this is one I know well, because I used to be the product manager for it.
My presentation is based on the slide deck we use for our XRF school, which is usually about a three-and-a-half-day school, so there's no way I can cover all of these slides. I was also hoping to go into a little detail on the software, and we just can't do all of that in a couple of hours. So I'm going to skip through some of these slides, and if you see something of interest that you want to stop and go over, just shout it out, or if you have any questions, feel free to interrupt me. But yeah, EDXRF is what we're going to talk about, and what you see here is a slide showing the two kinds of detectors that go into XRF systems these days. The one on top is the one that's in the system you have now. It uses what's called a lithium-drifted silicon detector, or Si(Li). It's an old technology; it was the state of the art from the early 70s until about 2008 or 2009, when the newer silicon drift detector, or SDD, was developed. The one on the bottom there is an SDD. We don't need to talk about it a lot, but the resolution on the unit you've got is relatively similar to the newest XRF systems, just not quite as good. The thing the newer detectors do is count X-rays faster. Your system is limited to probably 20 or 30,000 counts per second; the new ones can easily count 200,000 counts a second. With higher count rates you can achieve better detection limits in a given counting time. But like I said, for a lot of applications there's not a lot of difference in performance. So we're going to talk about XRF theory, and maybe touch on sample prep a little, although the sample prep you guys do is, number one, relatively simple, and also relatively specialized, so some of those slides may not apply to your work and we may not spend a lot of time on that. So X-ray fluorescence is the topic, and X-ray fluorescence is the process where we... pardon me, I haven't used this slide deck a lot.
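The link between count rate and detection limits mentioned above can be sketched numerically. This is a minimal illustration, not vendor software: the 3-sigma limit-of-detection formula is standard, but every count-rate number below is made up for the example, and the point is only that a detector counting ~10x faster improves the detection limit by roughly the square root of 10.

```python
import math

def detection_limit(sensitivity_cps_per_pct, background_cps, live_time_s):
    """Approximate XRF limit of detection (3-sigma criterion):
    LOD = 3 * sqrt(background counts) / (sensitivity * live time)."""
    bg_counts = background_cps * live_time_s
    return 3 * math.sqrt(bg_counts) / (sensitivity_cps_per_pct * live_time_s)

# Hypothetical numbers: an SDD counting ~10x faster than an old Si(Li)
# scales both peak sensitivity and background by ~10, so for the same
# counting time the LOD improves by ~sqrt(10).
lod_sili = detection_limit(sensitivity_cps_per_pct=1000, background_cps=50, live_time_s=100)
lod_sdd = detection_limit(sensitivity_cps_per_pct=10000, background_cps=500, live_time_s=100)
print(lod_sili / lod_sdd)  # about 3.16, i.e. sqrt(10)
```

The same relation is why doubling the counting time only improves detection limits by about 40%, not by a factor of two.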
So I'm not going to describe what an atom is; I'm going to assume you guys have got that. The nomenclature for XRF, though, was established when the Bohr model of the atom was the state of the art. So rather than referring to the principal quantum numbers one, two, three, we call the electron orbitals the K, L, M, N shells, and the X-ray emissions are named according to the shell that's involved in the fluorescence process. When an atom is ionized you have an unstable state, and an outer-orbital electron has to fill that vacancy for the atom to be stable. When that happens, the atom has to give off energy equal to the energy difference between the two orbitals. Pretty basic, simple stuff. But what makes it interesting for X-ray fluorescence is that the energy difference between those orbitals is quantized: for a given element it's always the same amount, and it's different for every element. Again, I'm going to skip some of this stuff; it's pretty basic and you guys have got it, right? I'm going to give Nico the slide deck, so don't go writing down what's on the slides if there's no particular need.
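The point that each element emits at its own characteristic energy is what Moseley's law captures. As a rough illustration only, here is a screened one-electron estimate of the K-alpha energy; these are not tabulated line energies, just a sketch that lands within a few percent for mid-Z elements.

```python
def kalpha_energy_keV(z):
    """Moseley's-law estimate of the K-alpha energy for atomic number z:
    E ~ 13.6 eV * (1 - 1/4) * (Z - 1)^2, i.e. ~10.2 eV * (Z - 1)^2.
    The screening constant of 1 is an approximation."""
    return 10.2e-3 * (z - 1) ** 2

# Estimates vs. the measured K-alpha energies (Fe ~6.40, Cu ~8.05, Sr ~14.16 keV):
for name, z in [("Fe", 26), ("Cu", 29), ("Sr", 38)]:
    print(f"{name}: ~{kalpha_energy_keV(z):.2f} keV")
```

Because the energy scales with the square of the atomic number, no two elements share a K-alpha energy, which is what makes qualitative identification from a spectrum possible.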
So being aware of the depth of penetration of the X-rays, or the escape depth of the X-rays you're measuring, is important, because people like to think of X-rays as just going right through matter. But depending on the element you're measuring, and the energy or wavelength of the emission line for that element, the escape depth can be a lot more or a lot less, and obviously the sample matrix has a huge effect on it. If you're working with heavy metals in an organic matrix, X-rays can get out from centimeters deep within a sample; but if you're trying to measure sodium, magnesium, and aluminum in a silica matrix, those X-rays are only going to get out from a very, very thin layer. Here it's saying sodium can get out from five and a half microns in a silica matrix. People that do grinding for sample prep typically don't get particles that small, so even on a well-ground sample you can't assume you've ground it fine enough to expose all the surface of the material you're analyzing. It's a useful chart just so you know how large a volume of material you're actually measuring. For the heavier elements, arsenic in a silica matrix looks like it gets out from a little over a millimeter deep. So that's the kind of volume: when you start looking at rubidium, strontium, niobium, and zirconium, which are popular ones for characterization, your X-rays are coming from probably a millimeter and a half, two millimeters deep within the sample. I'm going to skip this: X-rays were discovered, X-ray fluorescence was discovered. Yeah, so your intensity is going to be, to some extent, a function of the thickness of the sample, unless you're looking at lighter elements; if you're looking at, say, manganese in iron, it's a shallower escape depth, and a millimeter-thick sample would be what we call infinitely thick.
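The escape-depth idea above follows directly from Beer-Lambert attenuation. Here is a small sketch; the mass attenuation coefficients in the example are round illustrative numbers chosen to reproduce the micron-scale versus millimeter-scale contrast from the talk, not reference values.

```python
import math

def escape_depth_um(mu_mass_cm2_per_g, density_g_cm3, fraction=0.99):
    """Depth above which a given fraction of the fluorescent intensity
    originates, from Beer-Lambert attenuation I/I0 = exp(-mu * rho * x).
    Solves for x where attenuation reaches `fraction`."""
    x_cm = -math.log(1 - fraction) / (mu_mass_cm2_per_g * density_g_cm3)
    return x_cm * 1e4  # cm -> micrometers

# Illustrative coefficients for a silica matrix (density ~2.65 g/cm^3);
# a soft line like Na K-alpha is attenuated ~hundreds of times more
# strongly than a hard line like Sr K-alpha:
print(escape_depth_um(3000.0, 2.65))  # soft line: micron scale
print(escape_depth_um(10.0, 2.65))    # hard line: millimeter scale
```

The same function also explains the "infinitely thick" criterion that comes next: once the sample is several escape depths thick, doubling its thickness changes nothing in the spectrum.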
So the term we use for a sample that's thick enough that X-rays can't get through from the back of the sample to the detector is "infinitely thick." In other words, if it was twice as thick you'd get exactly the same spectrum. So if we're talking about a sample that's a millimeter thick and you're measuring manganese in iron, it's infinitely thick; but if you're measuring barium, it's absolutely not. So, yeah, you need to be aware that your calibration may not be accurate for samples in that thickness range. Richard Hughes and I tried to do some work on this, a thickness-correction calibration; we were trying to use a fundamental-parameters thickness correction and the results weren't great. In terms of analytical work, probably the easiest way to deal with that would be to have standards that are thin to measure unknowns that are thin. It's obviously not easy to get standards with a preset thickness, but if it was a known material and you had one standard, you could maybe ratio to it. So, getting back to what X-ray fluorescence is: we use an X-ray tube to generate ionizing radiation, we ionize the elements in the sample, and as those elements return to the ground state they emit fluorescent X-rays. It turns out that X-ray fluorescence isn't the only way an atom can return to the ground state; it can also do it by emitting an Auger electron. And it turns out that as you go down in atomic number, Auger emission is the preferred mechanism for returning to the ground state, so the sensitivity we get for X-ray fluorescence goes down as you go down in atomic number and goes up as you go up in atomic number.
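The competition between fluorescence and Auger emission is often summarized with a one-parameter fit for the K-shell fluorescent yield. A rough sketch, where the constant A is an approximate empirical fit (values near 1e6 appear in the literature), not an exact number:

```python
def k_fluorescent_yield(z, a=1.12e6):
    """Rough one-parameter fit for the K-shell fluorescent yield,
    omega_K = Z^4 / (A + Z^4); Auger emission accounts for the rest.
    A ~ 1.12e6 is an approximate empirical constant."""
    return z ** 4 / (a + z ** 4)

# Low-Z elements mostly relax by Auger emission; mid-Z elements
# like strontium fluoresce more than half the time:
for name, z in [("Na", 11), ("Fe", 26), ("Sr", 38)]:
    w = k_fluorescent_yield(z)
    print(f"{name}: fluorescence {w:.2f}, Auger {1 - w:.2f}")
```

The steep Z-to-the-fourth dependence is why the rubidium-strontium-zirconium region sits in a sensitivity sweet spot while sodium and magnesium lag so far behind.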
So again, these elements you guys look at, rubidium, strontium, niobium, and zirconium, have fluorescent yields around 50%, compared to something like sodium or magnesium where it's around 10%, and that's one of the reasons they're in a sweet spot on the periodic table for getting good sensitivity and good detection limits. But if you were doing glasses, and some archaeological materials are glasses with light elements in them, yeah, it's harder to get good detection limits. You're absolutely not going to get the detection limits for sodium and magnesium that you get for rubidium and strontium; that's not going to happen. The detection limits there are probably down around 10 ppm or less, depending on the counting time; for sodium it could be a tenth of a percent. So, a huge difference in detection limits, and it's mostly due to the fluorescent yield being lower for the low atomic number elements. So, X-ray spectrometers work using a detector that, well, if you guys were electrical engineers this might mean something to you, but basically the X-ray detector is a photodiode. That means we have a bias across it and current doesn't flow except when a photon hits the detector: the photon promotes electrons to the conduction band, they get swept to the contact, and we get an electrical pulse. The number of electrons is proportional to the energy of the X-ray, so all the electronics after the detector are designed to measure that pulse height really accurately, and to do it thousands of times per second; that's how we form the X-ray spectrum. That's all I want to say about the electronics; my background is not electrical engineering either. So anyway, we do that thousands of times a second and we get an X-ray spectrum. The spectrometer itself has an X-ray tube down below the sample chamber and the X-ray detector on the other side, both pointing up at the sample, plus some power supplies, including a high-voltage power supply
which drives the X-ray tube. It's a 50 kilovolt, 50 watt X-ray source, which means that at 50 kV it can go to 1 mA; at lower kV it can go to somewhat more than 1 mA, and by the time you get down to 25 kV it can go up to 2 mA. 2 mA is the maximum tube current. There's the side view, looking from the right side; we don't need to spend a lot of time on this. Have you ever opened it? Never had to fix anything? I had a customer at Los Alamos tell me he'd had a system for 17 years and never had a service call. Our service department doesn't like to hear that; it's hard to sell service contracts. So this is the sample chamber. You can see that in this configuration we're using a... my mouse isn't working, I guess the mouse isn't going to work for me. You guys have all seen this spectrometer with the lid open; you know where the samples go? No? Yes? In this configuration you could run it from a laptop; we had some customers that ran them with laptops. This is a close-up of the X-ray optics. The sample sits right above the X-ray tube and the detector. I don't know why the mouse is disappearing. The X-ray tube is down here; this geared ring is the filter wheel, where we use primary beam filters to tune the excitation source; and this is a collimator. I guess there are labels that pop up for all of this. The detector is on this side, and this ring represents the sample. You mentioned there was an elliptical ring; we may have a picture of that in another slide, but you can kind of get the idea that the beam is elliptical. The thing I can't emphasize enough is that the X-ray detector is over on this side. It has a beryllium window that's something like 12 microns thick, with vacuum on the other side. If you touch that beryllium window it will break and the detector will be destroyed, and we don't make that kind of detector anymore, so we can't even fix it. So don't do that. You can upgrade to the new silicon
drift detector, but it's expensive: something like $25,000 to $30,000 to buy a new replacement SDD detector for it. You don't want to break the detector. So one of us might have... yeah, yeah, don't let him in your lab. He'd probably never get to your lab if he showed up on campus anyway, so it's probably not a big concern. With the kind of samples you look at you should never have a problem. The people that have problems are people that put powder in cups: once in a while they'll have a sealed cup, they'll put it in there, they'll put it under vacuum, the cup pops, and they've got dust all over the inside of everything. I helped... yeah, did I? Okay, I'd forgotten about that. Well, so you were lucky. If it does happen, the first thing to do is: don't do anything, because probably whatever seems like a good idea isn't. Canned air, for instance, is really not a good idea; that will definitely break the window. A vacuum cleaner can be used, very gently. What you'd like to do is very gently pull the collimator off the detector and clean it, but again, probably better to at least talk to somebody from our company before you do anything, because it's an expensive repair. Again, I'm just going to leave these slides. We've already talked about this a couple of times, but one more time for anybody that might not have been following: we name the orbitals after the Bohr model. The innermost shell is called the K shell, the next one out is the L shell, then M and N after that, and the X-ray emissions are named according to the orbital that was ionized. So when we knock out a 1s electron and a 2p electron drops down to fill the vacancy, we call that a K-line emission, because it's the K shell that was ionized. When we knock an electron out of the L shell and an electron from the M or N orbital drops down to fill the vacancy, we call that an L line. So that's the nomenclature for XRF; it has to do with the fact that XRF was discovered in the early 1900s, when that was the state of the art. So I'm not
sure why they put this slide here, but just for your information: sodium is the lowest atomic number element we measure with this system. There are XRF systems that can measure down to beryllium, but not this one; since we're using a beryllium window, it stops those X-rays from getting to the detector. The ones that go to lower energies use different kinds of detectors. But frankly, even people that have XRF systems that can measure down to beryllium don't, in geological and mineral samples; it's very unusual. There are a small number of applications for XRF analysis of those low atomic number elements, and general-purpose geology and archaeology is not one of them. It turns out that the X-ray emissions for those elements come from such a thin layer on the surface of the sample that it's really not a bulk composition technique. Boron in borosilicate glass is a great application for XRF, because there you've got a flat, smooth, homogeneous surface, and so it works; but when you've got mineral samples and they're not flat, just don't think you're going to go there. Now, if you get an EDS system on an SEM you may very well measure down to carbon and get a signal, and it will be indicative of whether it's an area where there's carbon or not; or it's an indication you've had the beam there a long time and you've carbonized a bunch of oil that back-streamed from the vacuum, or came off the surface of some other object within the chamber, got hit by the beam, and deposited on the sample. But other than that it just doesn't work too well for the lower atomic number elements. Good question: sodium and magnesium you can't detect at all without a vacuum. Aluminum and silicon, on samples that have high concentrations, you'll get a tiny little peak if you have the conditions optimized for those elements, so I would say don't count on that unless you want to run in vacuum. Beyond that, for phosphorus and sulfur you get something like 30%
transmission in air, so you get a pretty good peak if your sample has those elements, and you could probably do it. For high-precision work you have to be a little careful, because the atmospheric pressure will have an impact on your sensitivity. The other thing to be aware of is that argon is present in the atmosphere at about 1%, which is well above our detection limit, so if you run a sample in air, expect to see an argon peak. If you care about argon you should run in vacuum, though needing to measure argon is a pretty rare application. I think by the time you get up to calcium there's pretty much no difference in sensitivity in vacuum versus in air. Okay, well, since we're here: we're going to talk some more in later slides about how we optimize excitation conditions. The color code on this particular periodic table is to help you decide which excitation conditions will optimize which elements. We don't have enough resolution to see it very well, but up here we have a color code which references excitation conditions called low-Z, mid-Z, and high-Z, and they refer to the 0 to 10, 10 to 20, and 20 to 40 keV ranges of the spectrum. What you'll see here, then, is that low-Z A is for the lowest atomic number elements we measure, and low-Z B uses the first filter in our set, which optimizes for a slightly higher energy range. We'll talk more about that later, but when you see this again, what we're talking about is optimization for different elements' absorption edges. Okay, I don't want to spend a lot of time on this; you don't need to understand it to know how to use this XRF system. This is the legend for the periodic table we had up there before, and it's showing you, obviously, the atomic number, and the absorption edge, which means the minimum energy it takes to ionize that atom. In other words, if you want to analyze silicon, you need X-rays coming in that
are above 1.838 keV, or they won't knock that 1s electron out of what we call the K shell. Again, if you're using our periodic table with the color code, you don't need to memorize any of this, or even look it up; you just use the right condition, and the settings are already tuned for that element. The other numbers given there are the K-alpha and K-beta emission line energies. Let's skip some of this; K-alpha versus K-beta doesn't mean a lot for silicon, because the peaks are not resolved. Yeah, so this is an X-ray spectrum, I guess the first spectrum we've looked at: a chart of counts on the Y-axis versus energy. The K-alpha emission is the larger peak at the lower energy, and the K-beta is the smaller peak at the higher energy. This happens because, after we ionize the K shell, we could have either an L electron drop down to fill that vacancy, or an M electron drop down and fill the vacancy. The L-to-K transition is more probable, so we get a bigger peak for it; the M-to-K transition is a bigger energy difference but lower probability, so it's a smaller peak at a higher energy. Simple, right?
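The K-alpha/K-beta splitting just described, and how it grows with atomic number, can be roughed out with screened-hydrogenic estimates. The screening constants below (1.0 for the 2p-to-1s line, roughly 1.9 for 3p-to-1s) are crude fits for illustration only, not spectroscopic values; the qualitative trend is the point.

```python
def kab_separation_keV(z):
    """Crude screened-hydrogenic estimate of the K-alpha / K-beta split.
    Constants: 13.6 eV * (1 - 1/4) = 10.2 eV and 13.6 eV * (1 - 1/9)
    ~= 12.09 eV; the screening constants are rough fits."""
    e_ka = 10.2e-3 * (z - 1.0) ** 2   # 2p -> 1s (K-alpha)
    e_kb = 12.09e-3 * (z - 1.9) ** 2  # 3p -> 1s (K-beta)
    return e_kb - e_ka

# The split is tiny for silicon (unresolved by an EDX detector),
# resolved by copper, and very wide by silver:
for name, z in [("Si", 14), ("Cu", 29), ("Ag", 47)]:
    print(f"{name}: ~{kab_separation_keV(z):.2f} keV apart")
```

Since an energy-dispersive detector resolves peaks on the order of 0.15 keV, the estimate shows why silicon appears as a single peak while copper and silver show two.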
Yeah, so again, it takes just a little bit more energy than the nominal value of the K-beta emission line to ionize the K shell. In the case of silver we have a choice: we can measure either the K-line emission or the L-line emission, and there are cases where you might want one or the other. It's important to realize that for some elements we have two lines to choose from. What this is showing you over here are the absorption edge and the K-alpha and K-beta for silver, and over here the absorption edge and the L-alpha and L-beta for silver; just different transitions, different energies. Again, I'm trying to go fast, but if this is confusing, stop me. So this is what a spectrum of the silver K lines looks like. If you remember, in a very qualitative way, when you were looking at the copper peaks, the peaks were closer together. It's a different scale, but as you go up in atomic number, the difference between the K-alpha and the K-beta increases. At low atomic numbers, like when we were looking at silicon, there actually is a K-alpha and a K-beta, but you just see one peak. As you go up in energy they get further and further apart: by the time you're at copper they're fully resolved, and by the time you get to silver they're quite a ways apart. So after you've looked at spectra for a while, you look at a peak and you say, oh, that must be a K line of a higher atomic number element, just by how far apart the K-alpha and the K-beta are. This is what the L lines look like for silver, and here we're really not resolving the L-alpha and L-beta; it's sort of a stair step. You can't really see it, but there's an L-gamma down in there that isn't resolved. As you go up in atomic number, the L lines form sort of a stair step, where you've got stairs on the right and sort of a Gaussian on the left with one additional little line. So this whole profile here is typical of the L lines of
elements in this range of atomic numbers. Okay. Too high? Yeah, so what does "too high" mean? It means that our X-ray source, if you remember, only goes up to 50 kV. Go ahead. So when an L electron fills the K shell, and you also have the case where an electron from the M shell fills the hole in the K shell, what about an M electron filling the L shell? That also happens; you sort of have this cascade, exactly. I didn't talk about it, but yeah, when we talk about silver and we're talking about ionizing the L orbital, it's going to be an M or an N electron that drops down to fill the vacancy in the L shell: M to L would be the L-alpha, N to L would be the L-beta. And yes, you do get vacancies further out, but if you get far enough out, those transitions are so low in energy that we don't measure them by X-ray fluorescence. Under the conditions where we're measuring the silver K lines, we just don't get a lot of sensitivity for the L X-rays; the background goes up, for reasons to do with the detector and some other things, and our sensitivity is poor, so we typically don't see a big L line under the condition where we measure the K lines. So you optimize for one or the other, and then you just don't even think about the line you're not optimized for. The thing that's a little more complicated, and that you need to be thinking about down the road, is this: let's say you've got a sample and you know there's barium in it. You've got a barium K line in a part of the spectrum where there aren't any other peaks overlapping it, so it's very obvious you've got barium. The problem is that barium also has an L line. You might not measure the L line if you want to measure barium, because it's not as sensitive as the K line, but it happens to overlap titanium. So now, if you want to get the right answer for titanium, you need to deal with the fact that there's a barium peak signature sitting
right on top of the titanium. There are a bunch of these classic overlaps. Lead and arsenic: their primary lines are right on top of each other, so if you've got one of those elements at a high concentration, it's going to be hard to see the other one at a low concentration. If they're at comparable concentrations, they have secondary lines, and you can look at the secondary lines and figure it out; it's not a problem. But if one of them is there at a really high concentration and the other one's low, then you're in trouble: if you need to measure both, take it to ICP. And like I say, there are a number of these; I think there's a slide here showing some of the classic overlaps. Molybdenum and lead both overlap sulfur, so again, if you want to measure sulfur and you don't have those elements at high concentrations you're probably all right, but if you do, then you have trouble. So would the overlap give you an exaggerated count, since the peak is going to be higher? Yeah, well, the thing is that the difference between this and a portable is that we expect people that use this to spend a certain amount of time looking at spectra and recognizing what the profiles look like. Even for sulfur, the shape of the profile for molybdenum is very different from the shape for sulfur, so even though they're right on top of each other, if you look at it you'll see there's something going on there. Of course, you've also got another line from molybdenum, so hopefully you would have noticed that you had a big molybdenum peak and said, oh yeah, I'm going to have trouble with sulfur. But the simple answer is yes: if you don't have the software set up to correct for that overlap, you would get an erroneously high result for sulfur if you didn't correct for the presence of the molybdenum. Likewise arsenic and lead: in a mineral sample that's a classic overlap, so we typically will put them both into the method, so the software is going to do the overlap correction rather than, you
know, just leave that to chance. Titanium and barium, same thing: if you're going to measure titanium, you're going to put in the barium to correct the overlap. So, in the case of lead, the deal here is that we would need X-rays of at least 88 keV hitting the sample in order to ionize the K shell of lead. The X-ray source runs at 50 kV, so the highest energy X-ray it can produce is a 50 keV X-ray, and that's not enough to ionize the K shell of lead. So we don't measure the K lines of lead, even though we put the numbers on the periodic table; what we do instead is measure the L lines of lead. So here you're looking at the lead L lines, and again, as with the K lines, as you go up in atomic number the lines move apart, so the L-alpha and the L-beta here are fully resolved. Is it the L-beta height? Yeah, thank you for noticing; I was trying to decide whether to talk about that or not. It turns out that the height of the L-beta relative to the L-alpha changes as a function of your excitation spectrum. On our Quantex, depending on which filter or which excitation condition you use, you might have the L-alpha much bigger than the L-beta, or you might have the L-beta bigger than the L-alpha. It has to do with the fact that we have separate absorption edges for those lead lines, and if we have X-rays that are better at knocking out one of those electrons, we'll get something different than with the other. In terms of setting up an analytical method, what I should say is that our software uses pure-element peak profiles. Have you noticed or recognized that in the software? There's a place where the software comes with generic pure-element peak profiles that we use for doing overlap correction and background correction in our peak-fitting algorithm, and they are acquired under specific excitation conditions. So if you've got a peak profile that was acquired with the mid-Z B condition, then it'll have the right
L-alpha to L-beta ratio, and it would be a good idea to acquire your own peak profiles. But again, this is not something that changes from sample to sample; it's something you think about when you're setting up the method. If you're going to use the same method month in and month out, it's not like you have to worry about changing it or thinking about it. We'll talk about that a little more later; I think there are some slides on it. The purpose of this slide is just to show that you have a certain profile, a certain shape, for the emission lines of different elements, and one of the things that happens as you look at EDS or EDXRF spectra is that you learn to recognize the profiles of certain elements. The software will also identify them for you. Then of course, when you get into a more complicated sample, you have to deal with the fact that you have multiple lines from different elements, and you want to identify them and make sure you've found all the trace elements in your sample. So there's a little more to it; we'll talk a bit about how to do that later. Thank you. I mentioned fluorescent yield a little out of order: this is a chart of fluorescent yield as a function of atomic number, again showing that the fluorescent yield goes up as you go up in atomic number. It's also true that the K-line fluorescent yield is higher than the L-line fluorescent yield, and the L is higher than the M. That's one of the reasons that, jumping all the way back to the overlap of molybdenum with sulfur, because the molybdenum peak there is an L line and sulfur is a K line, you'd actually have to have a really high concentration of molybdenum to cause a problem for sulfur; at equal concentrations you'd have a much bigger peak for sulfur than you would for molybdenum. So that's why it
doesn't usually come up; that particular case won't usually be a problem for you. Okay, I guess it's worth saying that if you're going to set up a method from scratch and you have a choice, you would prefer to measure K lines: you're always going to have better detection limits and better line separation with a K line than with an L line. Do any of you guys actually run SEM EDS? You have in the past? Good. One of the things in thinking about this by comparison to EDS: we have this 50 kV X-ray source and we have filters, and we look at K lines all the way up to barium, cerium, lanthanum, and it works pretty well. When you're doing EDS, for one thing, your beam typically doesn't go up that high, and one of the reasons is that you don't get as good imaging resolution at higher kV as you do at lower kV, and it's more expensive to make a microscope that runs at higher kV. So for a lot of reasons, mostly to do with imaging, they tend to run at lower kV: 15 kV is a typical high end for EDS, running at 5 or 10 kV is very common, and lower than that is not unusual. But when you decide you're going to image your sample at 10 kV, it means you're only going to be looking at emission lines that are excited by electrons of that energy. So we look at a lot of lines in XRF that you just never would look at when you're doing EDS, and in some ways that makes it simpler. If you're going to analyze lead by EDS, you typically are going to analyze the M lines of lead, whereas by XRF we measure the L lines, because we can, and because they're better resolved, with better sensitivity. So that's just what this slide is talking about: why we'd rather use K lines. All right, so one more time, the periodic table: these are the elements we don't measure, these are the elements where we would only be using K lines, and as we go up in
atomic number these are elements for which we would be only using L lines and then in between there we have some elements where we could use either k lines or the L lines depending on what kind of sample and what else you're looking at okay do we need to do this, can we skip this okay so excitation, there's our X-ray tube talked about some of this, it's air-cooled it's off when you're not analyzing samples there are X or F systems out there that leave the X-ray tube on all the time and they just put a shutter in front of it but we don't do that so that's one of the reasons that the system lasts because if you're not using it the tube's not worrying out we talked about beryllium windows on the detector there is also a beryllium window on the X-ray tube it's not quite as thin but equally expensive to replace if broken no, it's the detector you're thinking of yeah yeah, let's talk about this isn't the slide on the detector but we've already passed the slides on the detector yeah, so this class of detectors lithium drifted silicon detectors for, like I say, from the 70s until the mid-2000s most of the units in use in the world were cooled by tanks of liquid nitrogen because they needed to be really cold and it just has to do with thermal movement of the electrons in the detector crystal at room temperature they're moving around enough they're generating enough signal that you can't measure the X-rays and so by cooling the detector you slow down the movement of the electrons in the crystal and you reduce the electrical noise and then you can use it as an X-ray detector our particular company spent the money and the effort to figure out how to use a Peltier cooler to cool the detectors do you guys know about Peltier coolers? 
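To put a rough number on why the detector has to be that cold: thermally generated leakage in a silicon crystal falls off roughly as T² · exp(−Eg / 2kT), so a modest drop in temperature suppresses the noise enormously. A back-of-the-envelope sketch in Python — textbook band-gap scaling with illustrative constants, not figures from any detector datasheet:

```python
import math

# Rough textbook scaling for thermally generated carriers in silicon:
# generation rate ~ T^2 * exp(-Eg / (2 * k * T)). Illustrative only.
K_B = 8.617e-5   # Boltzmann constant, eV/K
E_GAP = 1.12     # silicon band gap, eV (room-temperature value)

def relative_leakage(temp_c, ref_c=25.0):
    """Thermal generation at temp_c relative to ref_c (both in Celsius)."""
    def gen(t_k):
        return t_k ** 2 * math.exp(-E_GAP / (2 * K_B * t_k))
    return gen(temp_c + 273.15) / gen(ref_c + 273.15)

# Cooling from +25 C to the Peltier stack's roughly -90 C suppresses
# thermal generation by around six orders of magnitude.
print(f"{relative_leakage(-90.0):.1e}")
```

That million-fold-plus reduction is why either liquid nitrogen or a multi-stage Peltier stack is enough to turn the crystal into a usable X-ray detector.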
Yes? No? You can buy ice boxes that plug into the 12-volt socket in your car: flip a switch one direction and it's cold inside, flip it the other way and it's hot inside. That's a Peltier device. A guy named Peltier found that when he put electrical current through these crystals, they got hot on one side and cold on the other. Sometime in the 1970s — I think it was NASA — somebody had a need for refrigeration, or at least to move heat, in outer space, and they didn't want to have a pump, so they spent the money to have people develop Peltier coolers. We found out about these and decided to apply them to X-ray detectors, and in the 1980s we were the only company that could make Peltier-cooled detectors — in fact, ever; nobody else figured out how to do it. We bought these devices and made a stack of four — a four-stack of Peltier coolers — to get the detector down to about minus 90 centigrade, and at that temperature it works fine as an X-ray detector. That's what you have: a Peltier-cooled lithium drifted silicon detector. The detector's not going to overheat — you're cooling it, not heating it. At room temperature the detector is less stable, so we like to keep them cool; if you left it at room temperature it would not stay in operational condition as long as when you keep it cold. It takes an hour and a half or two hours to reach operational temperature after you plug it in, and it's cold all the time if the unit's plugged in. The on-off switch on the back of the instrument, the rocker switch, turns the counting electronics and the chamber control electronics on and off, but it doesn't turn off the cooler. So when you plug it in, in the off position, it's still making noise: that's the power supply and the fans that take the heat away from the detector. And the computer attached to it — sometimes we have to restart it, and it'll turn off the X-rays. Yeah, you're running Windows XP. That software will run under Windows 7, and I've heard tell that some people are using it under Windows 10, but I haven't tried it, and our software department doesn't claim that it works under Windows 10 — though I'm not sure whether that's because something's actually wrong or just because I've already told you everything I know. If you want to try running it under Windows 10, be my guest; it might work.

So, the X-ray tube. What you have there is an end-window X-ray tube, and this is how it's configured: the cathode is off to the side, the electrons are curved by a field and smack into the anode here, and the X-rays go out through the window. It's kind of a neat design — it gets the anode closer to the sample, so we get more X-rays per unit of power; on the X-ray tube before that, the anode wasn't so close. So, for example, we want the minimum distance to our artifacts when we're placing samples in the tray, and if there's a concavity we try to keep it out of the line. You don't have to be as low if it's well seated. For the sake of reproducibility, you want the sample at the plane of that lip in the well of the sample tray; that's what we consider the analytical plane, and everything's designed to work at that height. If the sample's higher up or lower down, it's harder to get good reproducibility, and if, from one sample to the next, they're not all at the same height, it can lead to problems with accuracy. That's the reason I wanted us to redo it with the other tray, so the samples are at the same height.

So, when we're setting up excitation conditions, you have two parameters on the X-ray source: one is the voltage, one is the current. The intensity of X-rays is proportional to the kilovolt setting squared, but it's just linearly proportional to the X-ray tube current. So if you double the X-ray tube current, you expect to get twice the counts; changing the kV setting is more complicated than that, because when we use a primary beam filter we don't get kV squared — but it makes a bigger difference on the spectrum, and we'll see that in a later slide. It says here that you want to adjust for 30% dead time, but for your system we shoot for 50%. On a newer model we put in a silicon drift detector and it was too fast for the counting electronics — the detector was faster than the pulse processor — so we had to change one of the settings in the software so it would not try to go to 50% dead time, because that would overload the counting electronics. Kind of an embarrassing situation. We only made that model for a few years, and then we put in a faster pulse processor and went back to shooting for 50% dead time. But at the point in time when this slide was put together, we had changed it to 30%, because that's what we were doing on those systems.

There may be a slide here about dead time — we'll skip that, I'll just tell you. X-rays are produced essentially at random, and at higher count rates it's not unusual to have two X-rays come into the detector so close together in time that the counting electronics cannot count them as separate X-rays. It knows there were two, but it can't measure them individually because the signals overlap, so it throws out both of them, and to compensate for the lost data it extends the counting time. Dead time is the time during which the electronics are busy processing a pulse and aren't ready to take another one. The idea of shooting for 50% dead time is that half the time we're counting X-rays and half the time we're waiting for the next X-ray to come in. If you're at 80% dead time, you actually end up throwing away more X-rays than you count, and that's not efficient; if you're at a really low dead time, you're spending 95% of the time just sitting there waiting for another X-ray, and that's not efficient either. So on our system we have something called auto tube current, which means that at the start of the acquisition the electronics sample the dead time, and if it's lower than 50% it turns up the tube current, and if it's above 50% it turns down the tube current. That way it optimizes the efficiency of the data collection.

I'll show the screenshots for the acquisition — kind of the minimum people need to know — but you want to move along, so I'll skip some of these. This next set of slides: there was a question about excitation conditions, and this is really central to why we have different excitation conditions — again, very different from doing SEM EDS, where if you're running at 15 kV, then you're at 15 kV. This is a set of three spectra of the same sample run at different kilovolt settings with no primary beam filter. First of all, what you're looking at is the fluorescent spectrum of the sample superimposed on a spectrum of the source X-rays scattered off the sample into the detector. This big background hump here — in EDS we'd call it continuum or background radiation — these are source X-rays that came from the X-ray tube. We're using them to ionize the elements in our sample, but they've been scattered by the sample into the detector, so here is the spectrum of our excitation source with the fluorescent peaks added on top of it. Does that make sense? And you can see that when we run the X-ray tube at 12 kV, our excitation spectrum ends at 12 keV — there's a one-to-one relationship: 12 kV on the source, maximum energy 12 keV, right? We go up to 16 kV on the source, now our spectrum ends at 16 keV, right?
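Backing up for a second to auto tube current: the feedback described a moment ago — sample the dead time, turn the current up when it's under 50%, down when it's over — can be sketched as a simple search. This is purely an illustrative model with made-up constants and a standard non-paralyzable dead-time formula; it is not the instrument's actual algorithm:

```python
# Illustrative sketch only, not the QuantX firmware. Models auto tube
# current as a search for the current giving ~50% dead time, using a
# non-paralyzable dead-time model: measured = true / (1 + true * tau).
TAU = 10e-6            # effective pulse-processing time per event, s (made up)
TRUE_CPS_PER_MA = 5e4  # true count rate per mA of tube current (made up)

def dead_time_fraction(current_ma):
    """Fraction of real time the electronics are busy at this tube current."""
    true_rate = TRUE_CPS_PER_MA * current_ma
    return true_rate * TAU / (1.0 + true_rate * TAU)

def auto_tube_current(target=0.50, lo_ma=0.0, hi_ma=4.0, iters=50):
    """Bisect on tube current until the dead-time fraction hits the target."""
    for _ in range(iters):
        mid = 0.5 * (lo_ma + hi_ma)
        if dead_time_fraction(mid) < target:
            lo_ma = mid   # too idle: turn the tube current up
        else:
            hi_ma = mid   # too busy: turn the tube current down
    return 0.5 * (lo_ma + hi_ma)

# With these invented constants, 50% dead time lands at 2.0 mA
# (true rate 100,000 cps, half of which gets processed).
```

The 50% target is the balance point the talk describes: much above it and overlapped pulses get discarded faster than good ones accumulate; much below it and the electronics sit idle most of the time.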
But the other thing that's interesting here is that down at 12 kV, the only K-line we're seeing is chromium, and as we go to higher kVs, suddenly other peaks show up, because we've now gone past the absorption edge for those elements and they appear in our spectrum, right? And the peaks over here you can think of as the profile of the characteristic lines of your X-ray source, which has a rhodium anode. So there are rhodium X-rays from the X-ray tube hitting the sample if we're running at a high enough kV. They're not labeled, but these peaks down here are rhodium L lines from the source — rhodium L lines here, rhodium K lines here. Good? So now, the first example — I guess you can kind of see that, can't you, the yellow and the green spectra? What you see is that at the low-energy end of the spectrum, if we're running with no filter, we've got the rhodium L lines scattered off the sample into the detector; that's part of our excitation profile. Under these conditions, if we're trying to measure the elements sodium through sulfur, they're very efficiently fluoresced by the L lines of the X-ray tube, so running with no filter is a great way to measure the elements through sulfur, assuming you've got vacuum. But let's say you wanted to measure an element that has a peak right at the energy of the rhodium L line, which would be chlorine. Well, you're not going to be able to, because you've got a big peak there from rhodium scattering off your sample. In that case, the thinnest filter we have, called the cellulose filter, is designed to be just thick enough to take the rhodium L lines out of the spectrum, so that we can measure elements that have their peaks in that part of the energy range. So for sodium through sulfur we use no filter, but if we want to be optimized for chlorine through scandium we use the cellulose filter. That's the first filter in the system. Makes sense?

The idea is that the region where you have the best peak-to-background is the region just lower in energy than the excitation band. In this first condition, with no filter, my excitation band runs from here to here — all of what looks like background in the spectrum, that is my excitation band. When I put in the cellulose filter, my excitation band is right here, and just below that in energy is where I have the best sensitivity. And here — if you can see it, hope that's big enough — these are different energy scales: the first two are showing 0 to 10 keV, this one is 0 to 15, these two are 0 to 30, and this one is 0 to 40. In each case what we're showing is that you've got a hump there. Each of these spectra was acquired by scattering X-rays off of a cellulose blank, so again we're scattering the source radiation into the detector to get a measurement of our X-ray tube profile. In each case we go to a slightly thicker filter and our excitation band moves up: there's our region of best sensitivity with the first filter; we go up a little in energy; here we've gone to a slightly thicker filter and our region of optimum sensitivity has moved up the energy scale a little more. Each of these filters just moves you up the energy scale to higher atomic number elements. So when we talk about excitation conditions, we're mainly talking about the kilovolt setting in conjunction with a particular primary beam filter, to give us optimum excitation for a group of elements. I don't know — some of you may not have enough background for that to hang together — but anyway, back to this: at 12 kV, if we put in a filter, we knock down the background under that chromium peak and get a significant improvement in our peak-to-background performance, and a big improvement in our detection limits. So this is the reason we use primary beam filters. For example, if you need barium to differentiate two obsidian sources, you use the thick copper filter, and you allow time for the analysis, because you have to run that condition and the analysis gets somewhat longer — but it all happens automatically. This is a case where we're looking for higher atomic number elements — mercury, lead, bromine; I think this is a plastic sample — and when we put in the primary beam filter, it knocks down the background where the peaks are and improves our detection limits. So oftentimes we're going to be using multiple excitation conditions. The software handles this for you automatically: when you set up a method, you define the conditions you want, and it cycles through the different excitation conditions automatically and collects the data. So again, one more time, the periodic table color code — we have different conditions for different groups of elements.

Yeah, there's a lot we can skip here. This is probably a good idea, though: this is a simple formula for estimating the precision of an analysis, where N is the number of counts in the peak, and you can use it to estimate the best precision that's possible given a net peak of a certain size. So if you know what precision you're trying to achieve, look at the spectrum, look at the peak you're trying to measure, look at how many counts are in it, and you can quickly do one over the square root of the number of counts and get an idea of whether you need to count longer or can get it done faster. It's part of setting up a method — at least where you're running unknowns that have the same elements at about the same concentrations; when everything's all over the map it isn't quite so simple. But I'm going to skip some of this. Oh, there's the X-ray detector — we've already talked about this; everybody remember this from earlier in the talk? There are our little electrical pulses. So there's an illustration of what the electrical output of the detector looks like, and we're measuring the pulse height. There's just way too much information here; you don't need to know this, this is for the three-day class. The idea is that pulses are coming in thousands of times per second, forming a spectrum. I used to give this class a lot — lots of slides.

So, I mentioned before that if you've got a peak overlap, it can be difficult to get the detection limit you want in a given case. Detection limits are a function of the sample matrix, and if you have an interfering or overlapping element, it's going to have a huge impact on the detection limits you can achieve. That was a slide on dead time, which we've already talked about. Sample cups — I don't know if you're looking at powders. There's not a lot here you don't know if you've ever made a sample cup and put a window on one. Do you have sample cups here at all? We'll demonstrate that in the lab, but: you put the powder in the cup, put the window over the cup, put a snap ring on it, tap it on the table, and it's ready to go. The only thing is that cups are oftentimes sealed, so you want to make sure you don't run them in vacuum, and if you're really worried about something bad happening you could reach down and flip the on-off switch on the vacuum pump, or unplug the vacuum pump, so nobody accidentally turns on the vacuum — but of course then the person who wants vacuum needs to know to turn it back on. There are other ways to deal with it: there are double open-ended sample cups, and you can put a microporous film on top, so you've got the Mylar on the bottom and a microporous film on the top with a snap ring holding it on. When you pull vacuum, the air goes out through the microporous film, and when you vent the chamber, the air goes back in. Venting sample cups would be the problem if you didn't have the microporous film, because
what happens when the air comes back in is that it swirls up the powder, and then it flies all over the inside of the chamber; the microporous film is there to prevent the dust from getting out. I can give you more information on that if you want to run powders in vacuum; if you're running in air it's much easier. Can we always use Mylar, or are there situations where you'd use a different window? If you're trying to measure sodium and magnesium, you can get polypropylene windows that are slightly thinner and have a little better transmission. I think you get something like 20% attenuation of sodium X-rays by Mylar film, and it's a little less with the thinner window. You're not going to run liquids in there, are you? You can analyze liquids; all the usual cautions apply — you don't want to pull vacuum on a liquid.

So this starts to get into qualitative analysis, and into the software a little. This is the user interface — the acquisition manager program — and qualitative mode is the mode where we set up the software to just acquire a spectrum. You see it's called a qualitative tray list, and down below there's just a place to enter the name of the sample, the excitation condition you want to use, and where you want to put it in the sample tray. When you run it, all you get at the end is a spectrum, and you can click a button in the software to identify the peaks. I want to tell you: it's a pretty good algorithm for doing peak identification, but it is absolutely not perfect. Any time you see a peak labeled something you don't believe, you probably ought to either get some help or think about why the software made a mistake. It's not unusual — ruthenium, for instance. When we run a condition where we're using the rhodium K-lines for excitation, we get scattered rhodium K-lines — we haven't talked about Compton scatter yet; maybe there's a slide for that later — but you don't have ruthenium in your samples. Just put that in your notes: I don't have ruthenium. When the rhodium line is scattered by the sample, it loses energy in the scattering process and gives you a peak at the energy of ruthenium. It's not ruthenium, but the software will identify it. There are other cases where plutonium may get identified, or thorium, or something like that — well, thorium and uranium, it's not unheard of to see real peaks for those in mineral samples. I'm just trying to say the algorithm's not perfect. It's there to help you ID the big peaks in the spectrum and get started, but you're expected to look at those identifications and decide for yourself whether they make sense. And we do qualitative analysis partly as a first step toward setting up a quantitative method: if we've never analyzed a sample like this before and we want to know what elements are in it, it's a good first step — decide what excitation conditions we want to use, find out if there's something in there that wasn't expected. But when you're done, you just have a spectrum.

Oh my goodness, all kinds of useful new stuff in this. So, this software was developed at a point in time when people still used folders off the root of the hard drive as a good place to save their data, rather than something under My Documents, and so C:\QuantX is where all of the QuantX data files are expected to be saved. You can put them anywhere you want on the hard drive — you could put them on a network drive or a thumb drive — but we don't recommend saving directly to anything other than the local hard drive when you're running a data acquisition sequence, because if there's a delay it can throw the software out of sequence; it can lose track of where it's at and force you to restart. So generally we want to save data on the C drive, and the Spectra folder is where you usually find spectra. Then the idea is to move things with a thumb drive — they probably wouldn't, I'm sure they don't, let you put a computer like that on your network — but I think we said you'd save to the desktop, or somewhere simple, just for access.

Oh my goodness, we've got to get through this. This is still talking about qualitative — identifying peaks, running one sample with multiple conditions. There's some here on — yeah, we don't have time to talk about this or we're never going to get done. This is more advanced. Here's the whole thing on Compton scatter — anything you want me to stop and talk about? OK, that's the end; that was the first slide deck out of several.

So, quantitative is next. Well, there was something about spectrum processing here; I think we'll just skip that. Do we have to include slag to calculate oxides? Sure, not a problem. No, I'm not in display mode — this new whiz-bang version of PowerPoint shuts off the display when I stop the presentation. So, the steps to creating a quantitative method start with what we talked about under qualitative: you need to figure out what elements you're measuring and decide what excitation conditions you want to use. Peak profiles you usually don't need to do on your own, so I think we'll skip that — there's a set of generic peak profiles in the system already, acquired by us, that should work fine, and normally you'd just use those. Then you create a standards library, you import the standards into the method, and you calibrate the method, and after you've got the calibration curves the way you like them, you set up a shortcut on the desktop so you can run unknowns quickly and easily. There are a lot of things to think about which we're going to skip — finding the given values for your standards; we'll leave you the slide deck. This is all about acquiring the peak profiles; we're going
to skip it — you're going to use the generic ones. The standards library is the program where we define the given values for our calibration standards, and basically — all these little tricks — you go Standard, New, you tell it you want to create a bulk standard, and you put in a name for the standard over here on the left. I've lost my cursor again — so over here on the left you enter the name of the standard, then you can enter the element, the given concentration, and the units of concentration, and then go on to the next standard. The only other complication: you'll notice there's a column here that says FP, yes or no, and down here the concentration total — the total of the values entered — can be either green or red. We haven't talked yet about what fundamental parameters is, but it's a technique for doing automatic matrix correction, and it's really handy. There are cases where it works and cases where it doesn't. I think the obsidian analysis is going to be empirical, based on standards, but if you want to use fundamental parameters — again, it does automatic matrix correction — it allows you to use standards that are in a different matrix than your unknowns, so it has some great features. The downside of FP is that the standard's values have to sum to 100% for it to be used as a standard: you have to define 100% of the matrix of the standard, because if you don't know the matrix, you can't calculate the matrix correction factors. And when you analyze unknowns, you need to be able to account for 100% of the composition of the sample. So, for instance, if you're measuring in water, you have to go in and tell the software that the remainder, beyond the elements you've listed, is water — and when you set up the method there's a place where you can enter water as the balance. But again, to use the fundamental parameters technique, this total here has to show in green, or it won't be accepted as a standard for a fundamental parameters calibration. Empirical calibrations have no such requirement. So again, if you've got obsidian and all you want to measure is rubidium, strontium, yttrium, zirconium, you can just ignore the rest of it: put in the values for those elements, make a linear calibration curve, or maybe a matrix-corrected calibration curve, and you're good to go. You don't need to define all of the sample if you don't want to. And — which is what the next slide is showing — if you have a bunch of standards that all have the same elements and you don't want to type the element list for each standard, you can simply copy the given values from one standard and paste them into the next, and then you've got your element list. It saves a little typing.

All right, so the Method Explorer is the program used to create analytical methods, and we designed the file it generates — we call it a method file — to be a comprehensive storage format for all of the data related to a quantitative analysis method. The great thing about that is that if at any point you're working on something and have trouble with it, and you email us that method file, then myself or our applications guy has all the information we need to see what was done right, what was done wrong, what's working, what's not — and in many cases fix it and send it back to you. The method file has your excitation conditions, the list of elements you're going to measure, your calibration curves, even the spectra of the standards and the spectra of the unknowns, all in one big data set, so it's very handy for doing tech support. The only downside of that approach is that every time you run an unknown, the method file gets a little bigger, and if you run a lot of samples it can get pretty big. I've seen them at 200 megabytes, and when they get that big the program slows down; we don't recommend letting them get over about 50 or 60 megabytes. It's really easy to just make a copy of your calibrated method, rename the file you're using, start using the new file, and archive the other one. And sadly, there's nothing in the software to warn you that it's getting slow, so it's a good idea to check how big the file is — or, we'll get there in a moment, when you look at the sample list you can see how many samples have been run.

So, when you create a method file, you can link a standard library to it, so when you go to add new standards it finds them a little easier, and we can import peak profiles — again, the peak profiles are used in quantitative analysis for overlap correction and background correction. I'd like to go into more detail, but we're running out of time, so I'm skipping some of this. It's on the Analytes and Conditions screen, or tab, where you define the excitation conditions you're going to use. We talked about the color code on that other periodic table we were looking at, which shows you which excitation conditions are correct for which elements; it's on this screen that you add or remove excitation conditions to make a method optimum for the samples you're trying to measure. After you select the conditions you're going to use — oh jeez, I didn't realize we'd put this much detail in here — you select a condition and then click on the elements that are going to be in that condition, and they'll be added to your method. So if you have two or three conditions, you want to make sure you first select the condition you want to put an element into, and then click on the elements in the periodic table.

Here's an example of a warning that will come up. Again, we briefly mentioned the peak profiles that are used for doing peak intensity calculations: if you don't have a peak profile for an element you're trying to measure, you'll get this warning. It just means you don't have a peak profile, and you'll probably go find one and add it to the method. This slide is just showing an example of an error condition. When you click on elements in the Analytes and Conditions view, the software automatically adds those elements to the spectrum processing section; there's nothing you need to set up — it's done automatically for you if you have peak profiles. There are special cases where you might want to edit this — I'm not going to go into that — but this is where you would set the fitting range for a given element. Again, this is called spectrum processing. Question? Oh, if only. No, it is not — this XML goes back to about 1975, when our software group was programming for DEC minicomputers with 16 kilobytes of RAM, and they were trying to figure out how to do multiple least-squares fitting on a computer with that much memory. And they did it. It's a technique that takes the peak profiles for each of the analyte elements and tries to find a way to sum all those profiles together, each one scaled by a scaling factor called a k-ratio, such that you get the minimum RMS error when subtracting that sum from the spectrum you're analyzing. The result is a set of k-ratios for each of your analytes, which is essentially proportional to your peak intensities. XML stands for X-ray Multiple Least Squares, and it's the algorithm we've been using for ages and ages. One of the nice things about XML fitting is that it doesn't require a function to fit the background: we use a digital filter to filter out the background before we do the least-squares fit, and in my experience it's more reproducible than techniques that fit the background, because fitting the background has its own
problem but it simplifies the process of setting up an analytical method because there's no background function in here that you need to worry about or think about so we skipped the spectrum processing slide deck but the other options here we have gross net XML derivative gross is where you just set energy limits and all the counts within that region of interest are your measured intensity with no background calculated net is what you set a high energy and a low energy limit and it draws a straight line between the number of counts in those two channels anything above the line is net counts anything below it is background XML is this least squares fitting technique derivative and it is an enhancement of XML that fits not only to solve for a K ratio but it also solves for a peak shift correction factor resolution shift correction factor so it has three parameters that it's solving for when it does the least squares fit and it is helpful on spectrometers where you have some problems with peak shift or if there's big temperature swings in your lab the peaks could get broader as a function of temperature for the most part on this generation of spectrometers we don't really have problems with peak shift or resolution shift and so we don't find applications where we really get improved results with that enhancement that we call derivative on older units on the earlier generation it made a difference on this one we don't find very many cases where it helps there's more in the slide deck on on spectrum processing on that I want to look at that unknown components is the part of the software where we are you're asking about oxides this is the place where you define the compound that you want associated with an analyte element and so if you want to analyze as oxides you just come right in here and type it in the other thing that happens here is that the case I was talking about where you want to analyze a sample that's in water or in oil and you have a matrix that is most of 
So you're measuring the iron peak, but whether the iron is 2% or 5% — and whether it's in silica, or oil, or water, or lead — makes a big difference to the sensitivity for all the other elements you're trying to measure. So you need to account for 100% of the sample in order to do matrix correction using a theoretical model. Again, if you want to use fundamental parameters, it may or may not work, but if you're going to try it you have to have a way to account for the major elements in the sample, whether you care about measuring them or not. Here we're accounting for them as the remainder: you analyze all of these elements, you get estimates of the concentrations, you subtract from 100%, and that's the unmeasured compound. A linear calibration curve, to a first approximation, is just like any other conventional analytical-chemistry technique: you have a plot of intensity versus concentration, you've got standards, you draw a line, and you analyze your unknowns against that line, and you don't worry about what the rest of the sample is, because in that case you're not doing a comprehensive matrix correction. The next step up from that — and again I think we're skipping that slide deck — is that if you have enough standards, you can calculate matrix correction factors from the standards. So you have a calibration curve; you'd like it to be a perfect line, and it's not. It looks like iron is going from 0.2% to 7%, and I'm pretty sure it's going to have a matrix effect on these other analytes. So, based on the intensities you measure from your standards, you can calculate a matrix correction factor for the effect of iron on the other elements. That's empirical matrix correction, and in our software the one you would be using is called Intensity Correction; there's also Lucas-Tooth.
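The remainder bookkeeping and the first-approximation linear calibration described above might look like this in Python. All concentrations and intensities here are made up for illustration:

```python
# Sketch: accounting for 100% of the sample via an unmeasured "remainder"
# compound (e.g. SiO2), plus a plain intensity-vs-concentration line.
measured = {"Fe2O3": 4.1, "CaO": 12.3, "MgO": 2.6}   # estimated wt% (illustrative)
remainder = 100.0 - sum(measured.values())            # assigned to the unmeasured compound
print(f"remainder (assumed SiO2): {remainder:.1f} wt%")

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

intensities = [120.0, 260.0, 415.0]    # counts/s from standards (made up)
concs       = [1.0, 2.2, 3.5]          # certified wt% (made up)
slope, intercept = fit_line(intensities, concs)
unknown = slope * 300.0 + intercept    # analyze an unknown against the line
print(f"unknown: {unknown:.2f} wt%")
```

This is exactly the "draw a line, read unknowns off it" approach; no matrix correction is applied.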
Totally up to you — it's whatever you've got. The most accurate results are in cases where you have type standards that are accurately characterized; however you get the numbers, if you've got accurate numbers for a calibration standard, you can get accurate results. What I think is kind of wonderful about X-ray fluorescence — and also a little scary — is that I can add a line to analyze the concentration of iron in a rock, put in pure iron as a standard, and together with the matrix-correction software, the fundamental-parameters software, calculate the concentration of iron down to ppm levels, and it will give me a number. Is it accurate? Well, it depends on how accurately I've defined the matrix. If I tell the software the remainder is SiO2, and that's true, it'll probably be within 10%. But if that's not true — there's a bunch of whatever else could be in a rock, and if it's not really iron and silica and the matrix is something different — then it won't be as accurate. At that point the question becomes, well, how accurate is it? And the answer is: give me a standard and I'll tell you. But of course, at that point, if you have a standard you can use it to calibrate with. It's a great technique for getting ballpark numbers, but if you're going to publish you really ought to have a standard. This system also came with something called UniQuant — I'm not going to give you a sales pitch — but there are fundamental-parameters algorithms that go through and just look for everything from sodium to uranium and come back and report a number, based on just a factory calibration. So there are some cool things you can do with the ability to get a quick-and-dirty number, but if you're trying to get accurate results you really need matrix-matched standards. Like I said, when we did the soil analysis I was talking about, the calibrations were based on NIST contaminated
soil standards, and so we had good-quality standards. Again, we made assumptions about the matrix, but we had good-quality standards, and we could demonstrate that we were getting accurate results. When you go to set up the program — we've got about 10 minutes, unless you want to stop — let's go directly into the software. So, the software version I have loaded on my computer is not the same one you run; it's got slightly different icons, it looks a little bit different. Honestly, the version you guys have is actually a little more stable than this one. This is a slag method, and we're using Intensity Correction, so it's relatively applicable to what you guys do. The structure of the program is the same; just some of the icons are a little different. But when I click on elemental peak profiles, which you see here — oh, still not showing, because I still have to turn off PowerPoint. Maybe unplug it and plug it back in. Now it's on your screen but not online. What's that? Where's my cursor? My mouse is not working... thank you. I've got it now, so that mode works. This is Microsoft — thank you, Microsoft — it shuts it off when you run PowerPoint, because it's assuming that when you get out of your presentation you don't want them to see all your embarrassing emails. We could do projector-only if this doesn't work. Okay — except that I still don't have my cursor. I have another idea: wake it up again... still no cursor... I've got one now. So, these are peak profiles. The software comes with a collection of peak profiles for elements, and when you're going to set up a method you can import all of them if you want to, or just the ones you want. Again, when you go in to create a method, if you've got a peak profile and you click on an element, it'll give you that element without an error. For instance, if I click on cobalt here, it's yelling at me; it's saying you don't have a peak profile
for cobalt. So I click OK, and at this point what it's done is set up cobalt to analyze with the Net intensity calculation, which is not a good idea, so I'm going to take cobalt back out of my method by clicking on it a second time. Down here, as we discussed, you can see that when I click here I'm selecting an excitation condition. I don't know if you can see which buttons are pressed in — it's a little subtle — but you can also see that the color code changes, and the color code here is different from the one in the other periodic table we were looking at. Here the color code is telling us that with the Low Za condition we're optimized for the K lines of sodium through sulfur, and these other elements in yellow — chlorine and argon — could in theory be analyzed in that condition; they're just not optimized. This is a little bit of a misnomer, because it's showing that you can measure all these other elements if you wanted to, but you would be measuring the L lines of those elements, and that's probably not a good idea. This is one thing that, if I could change it, I would take back out and not let it show up in green. When we go up to the next condition, Low Zb with the cellulose filter running at 8 kilovolts, we're now optimized for chlorine through scandium. But again, the idea is that you don't have to be optimized for an element to analyze it. If you're looking for high concentrations of aluminum and silicon in your sample, you could use the cellulose-filter condition and measure those elements without needing to use the Low Za condition, because at high levels you don't need to be optimized in order to measure them. Then, moving up the periodic table — or the energy scale — we go to the Mid Zb condition; now we're optimized for titanium, and you can see over here — oh man, we don't have enough resolution here to do this — you can see the elements that are in that
condition. Then we go to the Mid Zc condition — again, we're just doing arsenic and lead in this method in this condition — and 50 kV with a thick copper filter is where we measure tin, antimony, barium. Does everybody understand what's going on here on the user interface? Are these pre-loaded in the software? No — this is a method that was already set up. Out of the box there are a few example methods that come with the software, for plastic and steel, but for the most part the expectation is that when we sell one of these, an applications person comes on site, helps you set up a method, or gives you a short form of the three-day class and teaches you how to do it. This is one thing that's very different between our laboratory systems and our portables. I think other companies that sell portables have similar software, with a very simple user interface: you tell it what the matrix is and everything else is pre-programmed. The difference here is that you can put in whatever elements you want, and you can calibrate it the way you want, with standards — that's the expectation; you're going to calibrate it. Spectrum processing we talked about briefly, so I'm not going to go into that detail. This seems like a rather odd slag sample in that we're analyzing all the elements as elements; here we can just click to select the oxide, although in the version you guys have that's not available — you need to type it in. And again, down here is where you would define the unmeasured compound. The calibration view looks like this: here are the names of the standards, here is the sum of the concentrations for each standard, and there are your elements, or your compounds. Anything shown in parentheses is something we decided not to include in our calibration. One thing we find is not unusual is that if you have a large number of standards, you may have elements that are undefined — they just didn't tell you what the concentration is — and so if
you don't type it in, the software is going to assume zero, and zero is a data point. If you leave it in there as zero, it can force the fit through that point, and if it's not really zero, then it's better to have it in parentheses. Any value you want to take out — like this data point here — you right-click on it and change it to "use" or "don't use." The same goes for a standard: if you decide the standard is no good, or it was put in upside down or something, you can take that standard out of the fit, out of calculating the calibration curve. Down below, what you see are links to the spectra. If we have all the spectra of the standards, this will all be filled out; if you haven't run the calibration, these would all be blank. When you run an automated calibration routine, the system will acquire the spectra for you, put them into the method, link them to your standards, and then automatically run the calibration calculation. In this case I'm just going to recalculate the calibration based on the spectra that were saved. If I click Continue, it does something like this, and it's telling me all is good, it worked. Click OK, and then it shows me my calibration curve, and I would say it looks very good, even on magnesium. We've talked a lot about the escape depth for the X-rays of these low-atomic-number elements, and slag especially is a horrible sample matrix, honestly: it's difficult to grind to a fine powder, and it tends to have an indeterminate amount of oxygen in it — there are several things that make it a difficult matrix. But as we go up in atomic number, the X-rays are more energetic, they come from deeper within the sample, and it's easier to get a good calibration. So there's an example of a data point that we threw out, and we've got some information about the fit here. In this case we are also doing matrix correction, and over here on the matrix-correction tab we have a user
interface where we can tell the software which correction factors we want to calculate. In general, the idea is that you want correction factors for elements that make a significant change to the average atomic number of the matrix over the range of your standards. So in a slag — if it's a steel-making slag — you'd expect iron to be an element that has an effect. We're not doing any correction factors for iron here, so what are we doing? These are the matrix elements; these are your analytes. For instance, what we're doing here is calculating a correction factor for the effect of calcium on sulfur by clicking that check box — that's one correction factor — and we're also correcting for the effect of copper on sulfur, so that's two correction factors. We're also calculating the slope, so we're calculating three coefficients. This check box, since it's checked, means we're placing a zero intercept on this calibration curve; if it were unchecked, we would be calculating slope, two matrix correction factors, and intercept — four parameters. We've got eight standards. It turns out you don't want to calculate very many correction factors relative to the number of standards; there's a rule of thumb for it, in terms of n, where n is the number of standards. You don't want to over-correct: if you put in too many correction terms, you'll start getting nonsensical fits with your data set. With 40 standards you're good to go — probably any matrix elements that you think you want to correct for, we can put in and calculate a correction term. But on a set like this, with eight standards, I think two correction factors is the limit, with a forced zero intercept. Any questions on calibrating, on what we're doing here with calibration? So I think this next part is obvious: you select the units you want.
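As a rough sketch of what an empirical intensity correction does — here written in the Lucas-Tooth spirit, which may differ in detail from the software's actual model — you can fit a zero-intercept slope plus two correction factors to eight synthetic standards by least squares, because the model is linear in the intensity products. The intensities and "true" coefficients below are invented for illustration:

```python
import numpy as np

# Model (zero intercept): C_S = m * I_S * (1 + kCa*I_Ca + kCu*I_Cu)
# Linear in products:     C_S = a*I_S + b*(I_S*I_Ca) + c*(I_S*I_Cu)
# with m = a, kCa = b/a, kCu = c/a, so ordinary least squares applies.
rng = np.random.default_rng(0)
n_std = 8                                # eight standards, three coefficients
I_S  = rng.uniform(50, 500, n_std)       # sulfur intensities (made up)
I_Ca = rng.uniform(0.1, 2.0, n_std)      # calcium intensities (made up)
I_Cu = rng.uniform(0.1, 2.0, n_std)      # copper intensities (made up)
true_m, true_kCa, true_kCu = 0.01, 0.15, -0.08
C_S = true_m * I_S * (1 + true_kCa * I_Ca + true_kCu * I_Cu)  # synthetic "given" values

X = np.column_stack([I_S, I_S * I_Ca, I_S * I_Cu])
a, b, c = np.linalg.lstsq(X, C_S, rcond=None)[0]
print(f"slope={a:.4f}  kCa={b/a:.3f}  kCu={c/a:.3f}")
```

The over-correction warning above is just the usual degrees-of-freedom argument: three fitted coefficients against eight standards leaves some redundancy; eight coefficients against eight standards would fit the noise perfectly and predict nonsense.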
Mostly it's either ppm or percent. Once in a while you may end up with an element in more than one condition — maybe for experimental purposes you put it in the Mid Za and also in the Mid Zc, and later you want to check which one gives you a better calibration. You can go through and change conditions here if the element is in two different conditions, calculate it one way and see how good the fit is, then calculate it the other way and make a decision. On the units: ppm is just percent multiplied by 10,000, weight fraction is percent divided by 100, and the other ones are for thickness — they're not really relevant here. Then the other thing we really didn't talk about much is ratioing. For analysis of obsidian, it turns out to be really helpful to use the Compton scatter peak as an internal standard. So here's what we're going to do — this is not obsidian, but let's pretend it is. We would take a Mid Z spectrum of a sample like this, and — this is the Compton scatter peak we mentioned briefly — we're going to set a region of interest on the Compton scatter peak and ratio the intensity of our analyte elements to the intensity of the Compton scatter peak, and use that ratio as the basis for our calibration. It's more conventional just to plot intensity versus concentration, but in some cases it works well to do a ratio to the Compton scatter. I'm not going to try to do it live, because I'll have trouble with this software version, but the point is that we're going to look qualitatively at the spectrum, pick the region-of-interest limits that we want to use to integrate that Compton scatter peak, and then come back here to analytes and conditions and put a fictitious element into the Mid Zc
condition, or the Mid Zb, whichever one we happen to be using. We would pick an element like krypton or something that you know you're not actually analyzing — it's not an analyte element. You could call it ruthenium. We're going to enter it into the software; the software is going to say we don't have a peak profile. You go into spectrum processing, you go to ruthenium, you go to the place where you set the limits, and you put your fitting limits in here. You would set this to Net — as it is already, since the software defaults to Net. Then on Unknown Components you want to come here and uncheck ruthenium, because you don't have standards for ruthenium. Then under calibration, for the elements that you're going to ratio to the Compton scatter peak, you go in here and click on ruthenium. So now you're telling the software that for lead, the calibration will be a plot of the lead-to-ruthenium intensity ratio versus concentration. So if you have variability in the intensity of your samples, or in the gap between the detector and the sample, this will help. The thing is, it was originally used for matrix correction: as the matrix gets heavier, as the matrix gets more absorbing, that Compton scatter peak gets smaller, and vice versa — in a lighter matrix the peak gets bigger. You would expect to have higher sensitivity in a light matrix and lower sensitivity in a heavy matrix, so by ratioing to the Compton scatter peak you get — it's like a poor man's fundamental parameters. It gives you matrix correction without all the number crunching. But it turns out it can also compensate for sample size: if you have a smaller sample, or the sample doesn't fill the beam, you'd expect to get proportionally less scattered intensity, because if there's less sample in the beam, there's less area to scatter X-rays. So you could also use it for that.
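The Compton-ratio idea can be sketched like this, with made-up count rates; `compton` here stands in for the ROI set on the scatter peak (the fictitious "ruthenium" element above):

```python
# Sketch of Compton-ratio normalization: calibrate on the ratio of the
# analyte intensity to a Compton-scatter ROI instead of raw intensity.
def compton_ratio(analyte_counts, compton_counts):
    return analyte_counts / compton_counts

# Same hypothetical obsidian, two runs: the second piece is smaller, so
# every ROI (peak and scatter alike) loses intensity by the same factor.
full_piece  = {"Rb": 4200.0, "compton": 98000.0}   # counts (illustrative)
small_piece = {"Rb": 2100.0, "compton": 49000.0}   # half the sample in the beam
r1 = compton_ratio(full_piece["Rb"],  full_piece["compton"])
r2 = compton_ratio(small_piece["Rb"], small_piece["compton"])
print(r1, r2)   # the ratios agree even though raw intensities differ 2x
```

This is the sample-size compensation case; the matrix-correction case works the same way, with the scatter peak shrinking in a more absorbing matrix at the same time the analyte sensitivity drops.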
The thing that gets a little bit dicey is if you're going to use it to correct for both sample size and sample matrix — you may be asking it to do too much. It's not that it's impossible, and it can do a lot, but I know people who have done the experiments where you make the calibration curve and then run unknowns of different sizes and shapes and just look: if you've got one big chunk of obsidian, you break it up different ways, you try different-size samples, and you have to decide for yourself how well it's working. But I think you know that already. At that point you click the button: calculate my matrix correction, calculate my calibration. And look at that — it still works, even with the intensity ratio, after you throw out the one bad data point. Up here it's giving you two choices of ways to plot the data. The default is calculated versus given: this axis is what the software calculated for your unknown, this is the given value, and you can use flyover — if you hold the cursor still over one of the data points, it will show you the given and calculated values for that standard. If I go to this one, this is intensity versus concentration. In this case there's a lot going on: first we're starting with a peak ratio, and then we're correcting that by the matrix correction factors that were calculated, based on the correction factors we asked the software to calculate. Here, when you're calculating a non-zero intercept, you can see what that looks like — you can see the slope of the line, things like that. It's a really good idea to check this, because every once in a while, if you put in too many matrix correction factors, the software can go nuts: it can decide that some very funny-looking curve is the best fit for your data, and when that happens — I've seen cases where all the data points are negative, the
corrected intensities are way down in the negative, the line's going the wrong way, and it's not good. So you want to check this and make sure you're getting something that looks reasonable as a calibration curve. Setting up samples is a new feature — you guys don't have it, so there's no point in talking about it. The idea — I think I told you about this when we were talking about spark optical emission spectrometers — is that if you've got a calibration based on 40 standards, and you have to give the standards back at the end of the week, and you want to be able to run unknowns against that calibration a year from now, the calibration will have drifted. If you had some samples that you knew represented the top and the bottom of your calibration range, even if they weren't certified reference materials, you could run them at the time you do the calibration, and then at some point down the road you can run them again and calculate a correction factor to drift-correct your calibration and get back to reading the right results for your unknowns. One of the things that Steve Shackley would do with obsidian analysis is run RGM-1 and RGM-2 — in spot 20, I think, on the carousel — and drift would be evident from that. I refer to that as a control sample, and it's a really good idea to have one: run it every time you run unknowns, and plot a control chart of concentration results versus time for that sample. When you see it drifting up or down, you know it's time to look for a problem. If it's a small continuous change in one direction, you say, okay, that's my X-ray tube wearing out, and maybe I just apply a manual correction factor. But if it suddenly jumps, then you know something's wrong — the sample tray wasn't working, the sample got jammed in the sample tray, or something. So yeah, it's a really good idea, but that's a control sample.
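The control-sample and drift-correction arithmetic described here is simple enough to sketch. The certified value and readings below are hypothetical:

```python
# Sketch of control-sample drift correction: if the control sample has
# been reading consistently low, scale unknowns by certified / measured.
certified = 2.40                              # known value for the control, wt% (made up)
recent_readings = [2.33, 2.32, 2.34, 2.33]    # last week's control results (made up)
measured_mean = sum(recent_readings) / len(recent_readings)
drift_factor = certified / measured_mean      # ~1.03 if reading ~3% low
corrected = [round(c * drift_factor, 3) for c in [1.10, 3.75]]  # two unknowns
print(f"drift factor: {drift_factor:.4f}  corrected: {corrected}")
```

A control chart of the raw control-sample readings over time is what tells you whether to apply a factor like this at all, and whether the change is a slow tube-aging trend or a sudden hardware fault.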
Drift correction is the factor itself: if you know that sample has been reading 3% low across all of the last week's readings, there's a place in the software where you can put in a correction factor manually to bump the results up by 3%. But you start with a control sample to decide when you need to do drift correction. When you run unknown samples, the software generates a sample list. Here are the dates and times when these samples were run, and this is called a tray list. Each of these tray lists only had one sample, but they could have up to 20, and the tray list shows you the names of all the samples that were run. When you click here, as with the calibration view, down below it's showing you the links to the spectra of that unknown. Because I was playing around with the software, I recalibrated the method, and now I've got a sample list where I've got the spectra of the unknown but no calculated results. So if I come here, click Analyze, and click Continue, it will recalculate the result for that sample. In this case I think it's also trying to — oh, no, it's not; I was afraid it was going to try to print to a printer I don't have. So this is the analysis report. This is my sample list again, and this is the analysis result in tabular form: I've got peak intensities, shown by excitation condition; down here I've got background count rates; and I've got concentration results, which is probably the thing I'd be most interested in. If you happen to run replicates, you've also got some nice information here about min, max, and standard deviation, but if you're just running one sample that's not very meaningful. Again, these are the results, and this is the report — this is how the data comes out on the printer. And there are some things you can do here to make this a little more useful.
One thing that I would typically do on a method like this, where I've got five excitation conditions and I'm using up a lot of paper if I'm actually printing: I go in here, click File Settings, and then on Reporting I turn off Conditions, and that makes my report a little simpler. You can also decide what to include or not include — I like having the uncertainty, I don't find the background very useful, so I turn that off and make the report the way I want it. Where does the uncertainty come from? It's based on counting statistics. If you remember the slide where I said here's the estimate of precision — one over the square root of the number of counts — it's a little more complicated than that, but it's something like that. It's based on the peak-to-background ratio, the number of counts in the peak and the number of counts in the background, so it's an estimate of the uncertainty. If you want to actually measure the uncertainty, the idea is that you take a sample, set the reps to 10 or 20, run 10 or 20 replicates, and look at that analysis result table — it will show you the standard deviation, and that gives you an idea of the uncertainty you should report. That's the best way to do it, and you might want to compare it to the expected uncertainty — sometimes they're similar, not always. Yeah, 317. Okay, any other questions while we're here?
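The counting-statistics estimate the speaker alludes to — Poisson counts, so σ = √N, with peak and background variances adding for a net intensity — can be sketched as follows (illustrative counts only; the software's actual estimator is, as he says, a little more complicated):

```python
import math

# Counting-statistics uncertainty for a net peak: counts are Poisson,
# so sigma = sqrt(N), and for net = gross - background the variances add.
def net_uncertainty(gross, background):
    net = gross - background
    sigma = math.sqrt(gross + background)      # sqrt(P + B)
    return net, sigma, sigma / net             # absolute and relative

net, sigma, rel = net_uncertainty(gross=12000, background=2000)
print(f"net={net}  sigma={sigma:.1f}  relative={rel:.3%}")
```

Running replicates and taking the observed standard deviation, as suggested above, captures everything (positioning, drift, sample heterogeneity), whereas this formula captures only the counting noise — which is why the two can differ.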
Good question. Yes, you can, and it might actually be easier in the version you have. You just come over here to a spectrum — preferably a pretty spectrum with peak labels — and basically just Edit, Copy, and what's supposed to happen if you go into Word... I forget what this software version does — oh yeah, there we go. There it is: there are the intensities as raw data, but you could also bring it in as a bitmap. Same thing — one column. So, for archiving, for an archival format you can still open years from now: is there an open format for methods, or is that all proprietary? It's proprietary, unfortunately. So you could archive it, but there's no promise of longevity. Well, okay, that's true, there's not. The only other thing I can offer is that you can export the spectra, and when you go to export a spectrum, one of the formats available is EMSA, and EMSA is actually a text file — it's a text format. EMSA might be easier to get to. Let me show you — if my desktop weren't so messy this would be easier — oh, there it is. So there's what the EMSA file looks like, and it has a little more information that might be useful. EMSA is a standard for microanalysis, for EDS systems, so if you want to archive the data, I think that would probably be the best option. Your version would be 7.45 — we should check, but I think it supports it. I know there was a version somewhere along the line where they goofed and didn't implement it, but I think you have it. And yeah, you want to keep it running for years and years. I think I told you I have a computer at home where I installed VMware, and I have Windows 98 — for the version of this software that we released under Windows 98 — and then Windows XP and Windows 7 with that operating system, so I can open up old data files. I was amazed when, at one point in time — you know, I'm cheap and I don't buy all the VMware updates — and
all of a sudden I got an update to the Mac OS and VMware wouldn't run anymore. But I was able to copy those instances to a different Mac mini, load the current version of VMware, and they opened right up — it was amazing. VirtualBox was harder to use; VMware, for me, was easier, and I run it on a Mac — it works fine under VMware. Obviously you have the usual problems with installing Windows, but you don't have to keep it current, because it's just a virtual machine and it's not a hardware situation. Go look at the machine? Oh, just leave it.