Welcome to the discussion of our brief history of treatment delivery techniques and quality assurance. My name is Thomas Bortfeld. I'm from the Massachusetts General Hospital and Harvard Medical School, and I'm here with Rock Mackie, who is an emeritus professor at the University of Wisconsin. Together we selected 15 medical physics papers on this topic. Our criteria were impact in terms of citations, impact in terms of clinical relevance, and innovation for the medical and research community. And so without further ado, we'll move to the first paper discussion, which is the paper by Michael Goitein. Yeah, this is a very interesting paper. In historical terms it kind of summarizes what radiotherapy was like in the mid-80s. Essentially it was coplanar treatment that used blocks, and it could use compensators. And in fact compensators are sort of the Ur-version of IMRT, but they weren't designed with optimization; they were typically used to correct for missing tissue. But this was really the first paper that made a pretty good start at defining uncertainty for radiation therapy. At that point in time, the ICRU had really only said that doses should be within five percent, but didn't really give any specifications. This and other papers like it encouraged the ICRU to better define the criteria for prescribing, recording, and reporting in radiation oncology, which led to the concepts of the gross tumor volume, the clinical target volume, and the planning target volume. And this particular paper really describes the basis of how one should view a margin with respect to treating the GTV. I should point out that this paper does have an error in Table 1. So when you read the paper, the lung densities recorded there are incorrect. They should be: nominal lung density 0.2 grams per cubic centimeter, minimum lung density 0.05 grams per cubic centimeter, and maximum lung density 0.3 grams per cubic centimeter. Let's go on to the next paper.
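As a side note, the corrected lung densities above make for a nice back-of-the-envelope exercise. This is purely my own illustration, not from the paper (the geometry is made up): it converts the nominal, minimum, and maximum lung densities into water-equivalent path lengths for a beam crossing 10 cm of lung between two 3 cm layers of unit-density tissue, which is one simple way to see how density uncertainty propagates into range and dose uncertainty.

```python
# Toy illustration (not from Goitein's paper): how the nominal/minimum/
# maximum lung densities quoted above translate into a spread of
# radiological (water-equivalent) path lengths for a beam crossing
# 10 cm of lung sandwiched between two 3 cm layers of soft tissue.

def water_equivalent_depth(segments):
    """segments: list of (physical_length_cm, density_g_per_cm3)."""
    return sum(length * density for length, density in segments)

lung_densities = {"nominal": 0.2, "minimum": 0.05, "maximum": 0.3}

for scenario, rho in lung_densities.items():
    wed = water_equivalent_depth([(3.0, 1.0), (10.0, rho), (3.0, 1.0)])
    print(f"{scenario:8s} lung density {rho:.2f} -> {wed:.1f} cm water-equivalent")
```

The spread from 6.5 to 9.0 cm of water-equivalent depth is exactly the kind of nominal/worst-case/best-case bracketing discussed here.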
I'm sure you're the first one who's noticed that mistake in the last 36 years, actually. Probably. Probably. It's not a topic of conversation generally in the field. I wanted to also highlight that the essence of that paper, to some degree, was to define the nominal scenario, the one you normally take for granted as the only one you're looking at, where all parameters are as prescribed or as reported by the planning system, but then to add the worst case and maybe the best case. So you have the dose in the nominal case, and then the dose that can be delivered in the worst case, say 5% or 10% higher dose in the normal tissues, and then on the other side 5% or 10% less dose, and to always do the calculation based on these different scenarios. That then led to the idea of robust planning many, many years later. Yeah, in fact, he was really applying project management principles to radiation therapy. All right. So the next slide is tomotherapy. Well, this is dear to my heart, obviously; I'm the first author of this paper. And this paper really described the first intensity-modulated radiotherapy and IGRT machine, basically dedicated to IMRT and IGRT. As you can see, it had a binary MLC and it had a CT x-ray source at 90 degrees to the treatment beam. So the vision here was to be able to do IGRT with a kilovoltage CT-like beam. That was never actually implemented in tomotherapy, because it was just another piece of apparatus: it would have provided excellent images, but it turned out that the megavoltage image detector was probably good enough for setup verification, while not typically being good enough for doing a planning study of a patient. It also showed an innovative ring target. In other words, if you're moving around the ring, you're not going to overheat the target, and so it was actually possible to put in a ring target.
The only problem with this is that at that time the linac vendor that was supplying this particular one had a built-in target, so it really wasn't practical as a first idea. But this machine led to what has become common in the industry: having a single-energy machine dedicated to IMRT, in fact rotational IMRT, one that has a single energy and no electrons in a coplanar machine. So pretty much all subsequent machines that evolved after that, like the ViewRay, the Unity, and the Halcyon, have followed the same general principles: if you're doing IMRT, you don't need electrons, you generally don't need non-coplanar beams, and a single energy is in fact better than having a high-energy beam for IMRT. Did you want to discuss the parallel development of the Nomos system? Oh, yes, yes. Yeah, it's a fairly interesting story. So the concept of tomotherapy was patented by the tech transfer group at the University of Wisconsin, and Mark Carol, who is a brilliant physician, really invented it almost simultaneously. But since we beat him to the patent, he licensed the IP for his Nomos Peacock machine, which really delivered the first IMRT treatment, provided by the Nomos Corporation, and they sold about 70 of those machines. And the Wisconsin Alumni Research Foundation kept the general patent for a fully integrated machine, which we ended up licensing for a dedicated, integrated tomotherapy machine. All right, we move on to the third paper, which is about intensity modulation and the generation of intensity profiles by dynamic jaws or multi-leaf collimators on general-purpose linear accelerators. So in parallel to the development of tomotherapy that Rock just described, there was a lot of effort on the side of developing IMRT capabilities for conventional linear accelerators, using MLCs that were originally designed for field shaping.
And the challenge was, of course: how do we control the leaves of the multi-leaf collimator so that they generate the desired intensity map overall? In 1992 there was a publication from a group in England, from Convery and Rosenbloom, who showed that it is actually possible to deliver arbitrary intensity profiles, such as the one shown here, by a unidirectional sweep of the multi-leaf collimator; say, both leaves starting closed on one side and then moving over to the other side and closing again on the other side. So I have to move my hands over to this side, actually. We start on one side, then we open the leaves at various times, and then we close them both on the other side. So that was demonstrated first, and then in 1994 this publication by Spirou and Chui showed how the calculation of these leaf trajectories can actually be done very easily. The principle is shown here. Essentially, the idea is that the intensity at any point in the treatment field is proportional to the difference between the time when the first leaf crosses that point and opens the irradiation of that point, and the later time when the other leaf crosses that point and stops the irradiation of that point. So it's simply this time difference of the leaves crossing that point that controls the intensity at that point. We just have to make sure that these times are proportional to the desired intensities, as shown here. And that was the insight that the group at Sloan Kettering had. But interestingly, it was an idea that was sort of ripe for its time, because in the same year there were four publications on that same idea. Three others were published: one in PMB by the group from Stockholm, one in the Green Journal, Radiotherapy and Oncology, by the group in Heidelberg, and one in the Red Journal by the group in Houston, where I also participated.
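The time-difference principle can be sketched in a few lines of code. This is a simplified illustration of the unidirectional-sweep idea under idealized assumptions (constant dose rate, no leaf-speed limit, perfect leaves), not the exact algorithm of any of the four papers: the trailing-leaf arrival time at each bin is the running sum of the positive fluence increments, and the leading-leaf time is that minus the desired fluence, which keeps both trajectories monotone.

```python
def sweep_trajectories(fluence):
    """Unidirectional-sweep leaf arrival times for a 1D fluence profile.

    Returns (t_open, t_close) per bin: the leading leaf uncovers bin i
    at t_open[i], the trailing leaf covers it again at t_close[i], so the
    delivered fluence is t_close[i] - t_open[i] (in beam-time units).
    Both sequences are monotone, i.e. each leaf only moves one way.
    """
    t_open, t_close = [], []
    prev, close_t = 0.0, 0.0
    for f in fluence:
        close_t += max(0.0, f - prev)   # trailing leaf advances on rising fluence
        t_close.append(close_t)
        t_open.append(close_t - f)      # leading-leaf time follows from the profile
        prev = f
    return t_open, t_close

# Example: an arbitrary (non-monotone) profile is reproduced exactly.
fluence = [1.0, 3.0, 2.0, 5.0, 1.0]
t_open, t_close = sweep_trajectories(fluence)
delivered = [c - o for o, c in zip(t_open, t_close)]
print(delivered)  # -> [1.0, 3.0, 2.0, 5.0, 1.0]
```

In this sketch the total beam-on time equals the sum of the positive gradients of the profile, which is one way to see why highly modulated profiles take longer to deliver.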
So again, an example of a development that was sort of ripe and then was developed and implemented more or less simultaneously in different parts of the world. Yeah, and of course, this is for a static-gantry delivery. So it's, say, for five or seven fields. If you're doing VMAT, then the leaf sequencing is far more complicated than shown here. But this was definitely the root of it. And probably for the first decade or so, it was either fixed static ports, with the leaves moving between the ports before the beam comes on, or this dynamic program of sweeping the leaves, until VMAT. So in this static field port approach, one question is: how do we choose the beam angles that give us the best conformity to the tumor? That is a difficult problem, and it is still unsolved today. But here we have one of the earlier publications, by Jörg Stein and others; I was also a co-author on that one. And we showed that you have to use optimization techniques, more stochastic methods, that avoid local minima. In this case, we used simulated annealing. And that is a very time-consuming optimization technique. But regardless, it is still, to this day, an unsolved problem. What I wanted to highlight here is one interesting aspect, namely that the beam direction choice in IMRT is not intuitive. Let's assume we want to treat this U-shaped target volume shown at the top, with the organ at risk right in the center. So what are the best beam angles to use for this geometry? Let's assume we have two options. Let's assume we have two fixed lateral beams that we want to use, and then we have to decide whether we use the anterior beam in addition to the lateral beams, or the posterior beam, as shown in the lower figure here. So what do you think? What is better? Well, I would use the anterior beam because it has a higher dose at the entrance side than the exit side. Yes.
And that's why you are Rock Mackie and you have the smart insights. Probably most people would say the opposite, namely that they would use the posterior beam, because the posterior beam spares the organ at risk. So in conventional, say, 3D conformal radiation therapy, you would probably use the posterior beam. But in IMRT, because you have the potential to reduce the intensity in the central part of the field to block the organ at risk, it's actually better to use the anterior beam. So the beam selection problems in IMRT and in 3D conformal therapy are very different. It is a challenging problem. Yes. And in fact, in the general IMRT case, you tend to have the higher-intensity beams closer to the organs at risk. And that was shown by the classic paper by Brahme that really started IMRT in 1983, where he treated a torus or donut shape and tried to spare the hole of the donut. And you see the highest-intensity beams are just skimming tangentially past the hole of the donut, the region that you want to spare, and you have less dose away from the region you want to spare. Exactly. I'm glad that you mentioned Anders Brahme, because he has influenced the field of IMRT so much and has had a lot of impact. Including designing the first MLC. So the first MLC was designed back in the 70s by Anders Brahme and his group, and the Microtron by Scanditronix was a 50 MV machine with electron energies up to 50 MeV as well, so a very advanced machine for its time. All right. Now another problem that came up in bringing IMRT from the bench to the bedside, so to speak, is how do we do the QA and the evaluation of intensity-modulated treatment fields? In conventional treatment fields, it's essentially just the shape of the field that matters, and it is enough to make sure that the shape of the field agrees with the planned shape of the field. With intensity-modulated fields it's not only the shape but also the intensity of the dose at each point in the field.
So we needed new ways to do the quality assurance. And here we have the most frequently cited publication in Medical Physics over all years. It's by Dan Low, Bill Harms, Sasa Mutic, and Jim Purdy, published in 1998. They address exactly this question: how do we quantify differences between the delivered intensity map and the planned intensity map? This was just enormously important, of course, because the counterargument against IMRT has always been that it's way too complex, it's too easy to make mistakes; how do we make sure that we actually get in the patient what we are planning, what we're seeing in the planning system? So their idea was essentially to take two metrics, one being the distance-to-agreement metric, which measures the geometry in a way, and the other the dose-difference metric on the y-axis here, and to combine them into an integrated term that they call the gamma value. And the gamma value also has an interesting and important characteristic: it's a pass-fail criterion. So if you pre-select acceptance criteria of, say, 3% dose difference and 3 millimeter spatial offset, then you feed that into this gamma value concept, and whenever you get a gamma value that's below 1, you are within tolerance, and if it's above 1, you are outside of tolerance. So that is another development that has had enormous importance in our field. And the nice thing about this metric was that in the low-gradient regions, for example inside a tumor, what's really important is the difference in dose compared to the prescription. Whereas in a high-gradient region, what you care about is the distance to that high gradient. And so really the clever part here was realizing that and having a metric where the two were kind of independent. So either you're worried about where the high gradient is, or you're worried about what the absolute dose was. And they were kind of independent, almost like two dimensions.
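To make the pass-fail idea concrete, here is a minimal one-dimensional sketch of a gamma calculation. This is my own simplified version (brute-force search over reference points, dose difference normalized to the global maximum), not the exact implementation from the paper:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dose_tol=0.03, dist_tol_mm=3.0):
    """Per-point gamma values for a 1D dose profile (brute-force search).

    dose_tol is a fraction of the reference maximum (global normalization),
    dist_tol_mm is the distance-to-agreement criterion. gamma <= 1 passes.
    """
    d_max = max(ref_dose)
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        # For each evaluated point, search all reference points for the
        # smallest quadrature combination of distance and dose difference.
        g = min(
            math.sqrt(((xr - xe) / dist_tol_mm) ** 2
                      + ((dr - de) / (dose_tol * d_max)) ** 2)
            for xr, dr in zip(ref_pos, ref_dose)
        )
        gammas.append(g)
    return gammas

# Identical profiles give gamma = 0 everywhere; a 1 mm shift of a steep
# step still passes because the distance term stays well below 1.
x = [float(i) for i in range(11)]            # positions in mm
d = [100.0 if xi < 5 else 20.0 for xi in x]  # step-like profile
print(max(gamma_1d(x, d, x, d)))  # -> 0.0
```

A point passes when some reference point lies inside the acceptance ellipse defined by the 3%/3 mm criteria, which is exactly the gamma <= 1 condition described here.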
And so you could then make a combined one by just adding them in quadrature. And that assigns each pixel or voxel a value of goodness that you can either display as a map, or you can make a graph, or you can report the statistics of how much of the volume is good and how much is bad. So an extremely clever metric. You can fool it in a few situations, and there have been papers subsequently about cases where bad plans look like they passed, but in general it's a very useful concept. The last of the series of papers related to IMRT delivery on standard linear accelerators is by Tom LoSasso and colleagues from the Sloan Kettering Cancer Center. I described before the idealized way of intensity modulation using multi-leaf collimators, with this unidirectional sweep of the leaves across the field. That assumes that the intensity in the open part of the field is 100% and in the closed part of the field is 0. But in reality, of course, the situation is different. For practical reasons, most multi-leaf collimators at the time, and even today, have the rounded leaf-end shape shown in the top part of the figure. When I was in Heidelberg we designed what we called double-focusing multi-leaf collimators that would really move the leaves on the surface of a sphere, so that at every opening angle the leaf ends are focused on the source. But that was a very challenging mechanical design, and so that's why most commercial MLCs followed the design shown here, namely the rounded leaf end that points towards the source. One issue with the design shown at the top is that the geometric penumbra, which assumes 0% transmission under the blocked part of the leaf, may not agree with the dosimetric penumbra, because there is some transmission through the thin part of the leaf on this side here that I'm showing right here. So the question was: how much does that affect the dose distribution?
And Tom LoSasso and colleagues showed that it leads to an effective 1 mm shift of the dosimetric penumbra versus the geometric penumbra. Yeah, and of course this is a pain in the treatment planning system, because you really have to account for this to get IMRT right, especially if you're using lots of closed leaf gaps. And the other potential problem is that this partial transmission also leads to more scatter than you would have with a doubly diverging MLC. As a matter of fact, the first MLC that Brahme produced was doubly diverging, and then Siemens produced a doubly diverging one, so it's certainly possible to make one, but they're more expensive to build. The second challenge with multi-leaf IMRT delivery was the so-called tongue-and-groove effect. The leaves in the lateral profile are shaped like this: there's a tongue on one side of each leaf that goes into the groove of the neighboring leaf, and the purpose of that is to reduce interleaf leakage. If the tongue and groove did not exist, then there would be higher intensity going through between two neighboring leaves, and that might be at a level that is not acceptable. But one consequence of this tongue-and-groove construction is that it leads to a dependency of the delivered dose on the sequence with which we deliver the intensity. For example, if two neighboring leaves are open in one section, then of course you have full transmission in the open part of the field. But if you first open one leaf and keep the other closed, and then do the opposite, close the first leaf and open the second leaf, then you will always have the central strip of the field blocked by the tongue of one of the leaves, as drawn here. And so that had some impact on the design of the sequencing algorithms that define the leaf trajectories for a given intensity, as I said before. But it was a solvable sort of problem; it was just one more thing to be considered.
In fact, another way to solve the problem is to stack more than one set of MLCs on top of each other, shifted by half a leaf width. That gives you effectively higher resolution as well, and eliminates the issue of interleaf leakage. Exactly, and that was also done in later designs in other systems. All right, so this is a paper by Jeff Siewerdsen and David Jaffray when they were at Beaumont Hospital, just outside Detroit. This was really one of a series of papers that described using a flat panel, and flat panels had come into existence at this point in radiology as a replacement for film. These systems, by the way, have relatively slow readouts, typically frame rates more like a camera, 30 hertz or so, whereas a conventional fan-beam CT scanner would have readout rates of thousands of samples a second instead of 30 frames a second. However, in radiation therapy, with the gantry turning slowly by convention and regulation, this is adequate sampling for CT scanning. In a conventional fan-beam CT scanner the fan angle is quite large, about 45 degrees or so, so that you can get a relatively large lateral field of view, but the cone angle is very, very small. The beam area is then a narrow rectangle. In cone-beam CT you essentially have a somewhat smaller fan angle but a much bigger cone angle, so the area of the beam is much larger. So what ends up happening is that you get a lot more scatter in cone-beam CT than you would in fan-beam CT. And the graph here shows that as you increase the thickness of the patient, and this graph was done just with fixed polystyrene sheets, you get to a point where you have 100 times more scatter signal at the detector than primary signal. So the signal is almost entirely scatter.
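A toy calculation, with made-up but representative numbers, shows why that much scatter matters: the detector reads primary plus scatter, so the attenuation inferred from the measured transmission is biased low, which is what makes regions behind thick or dense material look less attenuating than they are.

```python
import math

def apparent_mu(mu_true, thickness_cm, spr):
    """Attenuation coefficient inferred from a detector signal that
    contains primary plus scatter (SPR = scatter-to-primary ratio).
    Assumes the open-field (air) reading is scatter-free, for simplicity."""
    primary = math.exp(-mu_true * thickness_cm)  # transmitted primary fraction
    measured = primary * (1.0 + spr)             # scatter adds to the signal
    return -math.log(measured) / thickness_cm

mu_water = 0.2  # illustrative, roughly water-like value in 1/cm
for spr in (0.0, 1.0, 10.0):
    print(f"SPR {spr:5.1f} -> apparent mu {apparent_mu(mu_water, 20.0, spr):.3f} /cm")
```

At an SPR of 10, the inferred attenuation coefficient in this toy model drops to less than half its true value; that is the kind of shading and cupping that scatter estimation and correction algorithms try to undo.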
Interestingly, it is of course possible to estimate the scatter, and this paper showed the dependencies of the scatter, but it's still difficult to completely remove the scatter in modern cone-beam CT scanners, because again it dominates the signal. Interestingly, you can see in this case there are two bone objects, and you see a darkened area in between, which would make it look like a lower-density region. And this is because there is more scatter being registered there, and that looks like less attenuation. So that's fundamentally what's happening. Another interesting thing, and by the way SPR is the scatter-to-primary ratio, is that the noise is less at high scatter-to-primary ratios, and that's simply because there are more photons, and the noise essentially depends on the number of photons hitting one detector element compared to another. Now this doesn't really improve image quality, but it's just something of interest: it appears to be a smoother image. Now we're moving to intensity modulation for a different modality, namely protons. The general clinical benefit of IMRT in X-ray therapy has been to shape the dose around organs at risk for complex-shaped target volumes, as shown in this paraspinal case here. Now with proton therapy we could do that already without intensity modulation: we could use a beam from this direction and stop the beam right in front of the critical structure, for example the spinal cord. So there was no need for intensity modulation to shape fields for convex or concave target volumes. But as shown by Tony Lomax and others from the Paul Scherrer Institute in this publication from 2001, intensity modulation turned out to be very, very useful for proton therapy as well, for a number of other reasons, and one reason is shown here: namely, that IMPT, as it's called, intensity-modulated proton therapy, can be designed so that it is more robust against uncertainties such as proton range uncertainty.
Here we see the design of a three-field treatment technique for this specific problem. We have one beam coming in from this direction and covering the gray-shaded part of the field. Then we have a second beam from this direction covering this gray-shaded part, and you see that both are avoiding the critical structure. So there is no issue with range uncertainty: even if the beam overshoots, the spinal cord or other critical structures would not be affected. And the third beam is shown here, and that again has zero or very low intensity in the central part that would be at risk of overdosing the critical structure. So by combining these three fields we get uniform coverage of the target volume and sparing of the organ at risk, and not only that, we get it without the price of potentially overdosing the organ at risk in a situation where the proton beam overshoots beyond the planned range. Yeah, it's very interesting in proton radiation therapy that we brag about the Bragg peak, how sharp it is and how high-resolution. Unfortunately, like a really sharp knife, it can cause a lot of harm too if it's not wielded correctly. And what this field really needs is some way to make sure we know where the protons actually are; then we could start to use the high-gradient part of the beam right against normal tissue structures, between normal tissue structures and tumors, and really sharpen radiotherapy. So this is a paper about respiration-gated radiotherapy, and clearly it's all about making sure that the beam is going to the target volume when the target volume is in motion. In gating, what you do is hold off the beam when the beam is not well centered with respect to the target volume, or not within the margins that you've made around the target volume, and turn it on when it is in that space. So this paper is all about gathering the parameters that you're going to need for doing that planning.
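The gating efficiency question can be put in numbers with a toy breathing model. This is my own illustration, not from the paper: target motion is modeled as z(t) = A sin²(πt/T), which dwells near exhale (z = 0), and the duty cycle is the fraction of the cycle the target spends within the gating window.

```python
import math

def gating_duty_cycle(amplitude_mm, window_mm, n=100_000):
    """Fraction of the breathing cycle during which the target stays
    within `window_mm` of the exhale position, for a simple
    z(t) = A * sin^2(pi * t / T) breathing model (exhale at z = 0)."""
    on = sum(
        amplitude_mm * math.sin(math.pi * i / n) ** 2 <= window_mm
        for i in range(n)
    )
    return on / n

for window in (2.0, 5.0, 10.0):
    dc = gating_duty_cycle(amplitude_mm=10.0, window_mm=window)
    print(f"window {window:4.1f} mm -> duty cycle {dc:.2f}")
```

With 10 mm of motion, tightening the window from 5 mm to 2 mm cuts the duty cycle from about 50% to about 30% in this model, so beam-on time grows accordingly; that is the margin-versus-efficiency trade-off discussed here.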
So the planning is about the capabilities of the patient, for example for breath hold: do you want to do the breath holding on inspiration or expiration? People tend to do it on expiration, but some patients actually find it easier to gate when fully inspired. The other issue is phasing. Is this patient staying in a regular breathing pattern, or do you have to worry about the patient, for whatever reason, breathing faster or slower? The system is relatively straightforward. You have an infrared emitter and an infrared camera looking at infrared markers on the patient. This can be done in a well-lit room, because there's not much interference between infrared and normal visible lighting. And so really the main story here is how big you make the planning target volume with respect to the CTV, the clinical target volume, so that you can be as efficient as possible. A bigger margin will allow you to be more efficient and deliver the treatment with fewer beam hold-offs. And if you want to have a very small planning target volume with respect to the CTV, a small margin, then you essentially have to spend more time. So it's an optimization of clinical benefit versus practical necessity in a busy clinic. Let's move to the next one. All the papers that we discussed so far were landmark papers that had a very, very big impact in terms of clinical use, and essentially all of them are being used on a routine basis in the clinics today. We also selected a couple of papers that are not in routine use yet but that have very great potential in our opinion. And this is one of them. It's about microbeam radiation therapy. The idea here is to use synchrotron radiation, and this is from a publication by a group at the European Synchrotron Radiation Facility in Grenoble. The synchrotron radiation can be shaped at a very high level of granularity.
They use beams of about 20 micrometers thickness, so you have the peaks of the beams and then valleys that are maybe 100 micrometers wide. So it's pulsed in space, in a way; it's a delta-shaped type of dose distribution. We're going to discuss two papers that are related to what I call delta delivery. One is delta delivery in space, and that's the microbeam idea. And later we'll also discuss delta delivery in time, which is FLASH radiotherapy. They have interesting biological potential that hasn't been fully explored yet, but it's interesting to look at both the physics and the biology of this problem. So one physical problem that was discussed in this paper is how to do the dosimetry for these very, very fine beams, only 20 micrometers wide as I said. In this case they used a dosimeter on a chip using MOSFET transistors, and in the lower part of the figure you see a very, very sharp increase of the dose towards the peak. It's a little bit difficult to see, but this is on a log scale. So we see a three-order-of-magnitude increase in the dose over a fraction of a millimeter; that's a much, much higher dose gradient than we can create with any of the conventional techniques. And the biological potential is really fascinating. I've seen pictures where they show that they used the microbeam on small animals, I believe, and you can essentially cut the tissue in slices, but you get almost no effect on the normal tissue, even though the very small part within the high-dose part of the beam potentially gets destroyed. An interesting question here is how much of this microbeam effect is due to the high dose rate, because there is a high dose rate going on here as well. But the classical explanation for the microbeam effect is that the normal tissue stem cells can migrate over small distances and fill in for the normal tissue cells that were in the beam.
But if the flash effect actually works, maybe that's not so important. Yeah, or a combination of the two. Or a combination of the two, yeah. So we talked about gating, and the other way, of course, to deal with motion is to follow the motion, in other words to track it. And this was the first motion tracking system, which was developed by Accuray on their CyberKnife system. Basically they used both an IR tracking system, so that they could look at motion on the surface of the patient, and orthogonal X-ray sources at oblique angles to image the target volume. So they were using internal markers as well as external markers. And they developed a way to predict where the motion was going and to have the robot, which is able to keep up with the motion, follow it around. Interestingly, Accuray has also recently implemented real-time tracking on the tomotherapy machine using the two-X-ray-source system that I first proposed in 1993: orthogonal pairs of X-rays, one megavoltage and one kilovoltage, to do the same thing, to track tumors in real time. And the tracking there is done by moving the jaws, following with the jaws and with the MLC. This next paper is about the properties of an unflattened photon beam. As you see here, bremsstrahlung is mainly in the forward direction, so the intensity is forward peaked. As the energy increases, it becomes even more forward peaked. So at 18 MV as compared to 6 MV it's even less flat. And so what's really going on here is that yes, the intensity is changing, and with field-flattening filters we can flatten this out so that we have uniform fields. But for intensity-modulated radiation therapy there's no point in starting with a flat field. If you want a flat field, you can always intensity-modulate the field flat.
And not having a field-flattening filter makes it easier to do the treatment planning, because in model-based treatment planning what you need to do is explicitly calculate the attenuation of a lot of rays, the primary rays incident on the patient. By putting a flattening filter in, you are perturbing the energy of the beam. Whereas if you don't put a field-flattening filter in, you're essentially not perturbing the energy of the beam, because the bremsstrahlung process itself doesn't change the energy spectrum much across the field. And so, you know, dynamic jaws have essentially replaced wedges as well, for the same reason: it's easier to modulate the intensity than to deal with a changing energy spectrum in treatment planning. And it was the same idea on tomotherapy, right? Yes, oh yes. Actually, Accuray had the first flattening-filter-free design, or no-flattening-filter design, but they had a relatively small field, so tomotherapy was the first one that really used this. And by the way, the unflattened beam is a spectrometer too, because if the energy of the linac changes, the angular distribution will change, and it's actually easier to detect an energy change by the change in this angular profile, if you have a good detector, than by measuring a depth-dose curve. The next paper was also enormously impactful in our field. It's about Volumetric Modulated Arc Therapy, or VMAT, and that's the type of IMRT that's being used clinically essentially everywhere today.
The idea was proposed some years before this paper was published, but the challenge was to calculate the trajectory: the speed of the gantry as it continuously rotates around the patient, the intensity of the beam, and the shaping of the multi-leaf collimator. As Rock and I discussed before, it's a more involved problem than calculating the intensity maps for static fields. The clinical advantage is of course that the delivery is much faster: whereas conventional IMRT for difficult cases could sometimes take 20 minutes to deliver, with the arc therapy idea the treatment delivery time on the machine is reduced to a few minutes. So Karl Otto came up with a method to calculate the leaf positions as a function of the gantry angle and the speed of the gantry at every angle, and the methods were later refined, but the idea is still fundamentally the same: we can accelerate the delivery of IMRT by using dynamic delivery techniques. And some publications have shown that this not only increases the delivery speed but also the quality of the treatment plan, comparing VMAT plans with IMRT. I will say that that depends very much on the number of beams that are being used. In conventional IMRT, let's call it that, we typically use 7 or 9 IMRT fields, and that was based on studies we did way back in the 1990s that showed that you cannot get much better if you increase the number of beams much beyond that number. But in reality you can, and if you use, say, 19 instead of 9 IMRT fields, you get a dose distribution that looks very, very much like the VMAT plan, and sometimes even better. So I think with the very much faster delivery techniques that are available today on the more modern machines, and we will discuss one example in one of the next papers, the question of whether we use VMAT becomes more blurry. Is this yours or mine? Oh, OK. So this is a paper on a pre-configured linear accelerator; it happens to be the Halcyon. By
the way it looks a lot like the Tomothera machine it's a ring gantry or O-ring gantry machine the accelerator then spins around in this particular case around 360 degrees and then spins around the other way it doesn't happen to have a slipper like a Tomothera machine or the reflection machine does now as well which is another O-ring based machine in fact you can say that the unity is an O-ring shaped machine except the thickness of the board of MR is very deep and the you know the basic thing about this paper isn't the configuration though of the gantry but it's pre-configuring the planning parameters into the system and instead of having it commissioned in other words like other machines are today and Tomothera was as well that really allows then the medical physicists not to have to take commissioning measurements which they're likely not to do as well as a canned golden beam dataset that can come from the vendor and the vendor offering this of course requires then to keep the machine in that same condition of delivery and this is describing tests in fact this is a nice display of tests that can be applied to verify that in fact the golden beam data or the pre-configured data supporting the treatment planning system is in fact correct and the responsibility then goes from the medical physicists water fan to measurements to commission the beam to making simpler tests of the machine after servicing but in fact they should be doing that anyway so the bottom line is you can have a even more accuracy I think because the commissioning is right you are making measurements after every major servicing operation and of course routinely just to check if there's no servicing and this is likely a better approach to the workflow of quality assurance than to fool around with setting up your treatment planning system of course it requires the vendors make a cookie cutter machine that everyone is essentially identical to the one before it so there's still a lot of dynamics in the 
development of the treatment techniques. I'm curious, if we did this exercise again 10 years from now, what we would discuss then.

Well, I think we would be evaluating artificial intelligence, because we'd have a ton of data that would be hard for us to evaluate. We would probably be measuring the exit beams as well to infer what the dose was at each fraction, and artificial intelligence would tell us whether our machine is right or not, and maybe we would tune it.

So yeah. This next one is really clever, something you can almost do at home. It's an idea from Magdalena Bazalova-Carter in Victoria, British Columbia, who showed that you can get FLASH-like dose rates just with a conventional x-ray tube. In other words, the intensity of the beam just after the window of the x-ray tube is so high that you can get dose rates of up to hundreds of gray per second; for FLASH you have to be above about 40 gray per second. This is not going to be useful for treating patients, but it's useful for doing research, for example on the effects on cell lines, or the effects on skin and on small animals. And in fact I'm sure you could do the same thing with a linac: if you were to place the cells right up against the target, close to the electron beam, you could get very high dose rates from the linac for photons. And we know that by taking out the target, the electron beam, even a long distance away, is also at FLASH dose rates. So we talked a little bit earlier about FLASH, and I'd like to get Thomas's opinion on FLASH before I give mine.

OK. There's certainly a big FLASH hype at this time, and whether it's justified or not I'm not able to say. Biologically it's a fascinating thing to explore; physically there are big challenges; and clinically there are prices to be paid. For example, as you said, the FLASH effect typically starts at about 10 gray per fraction, so that means fractionation is maybe not possible at all, or only at a very limited level, and multi-field treatments are also questionable. I mean, one possibility is to treat all fields at the same time, but that is again technically very demanding. So first of all we have to understand the effect better from the biological side, but then we also have to understand the price to be paid, what we give up in terms of all the technology that has been developed up to this point, like what we discussed in this video.

Yeah. So, I mean, fractionation gives you about tens of percent benefit, and that's why the field has used fractionated radiation therapy since the French work of the 1920s or so. Now, if FLASH proves to be a factor of two or higher in effect, people will all switch to FLASH, but if it's a few tens of percent, it is probably giving away more than it's benefiting. But we'll see.

All right. Thank you very much for your attention, and thanks so much, Rock, for your time. Rock came to Boston to visit me here so that we could do this video together, and thank you for the opportunity.
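The "tens of percent" benefit of fractionation mentioned in the closing discussion can be illustrated with the standard linear-quadratic model, where the biologically effective dose is BED = n·d·(1 + d/(α/β)). A minimal sketch, using textbook-style α/β values chosen purely for illustration, comparing a conventional fractionated course against a hypofractionated course of 10 Gy fractions (the dose level at which the FLASH effect is said to begin):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# 60 Gy total physical dose delivered two ways;
# alpha/beta ~ 10 Gy (typical tumor), ~ 3 Gy (late-reacting normal tissue).
for n, d in [(30, 2.0), (6, 10.0)]:
    tumor = bed(n, d, 10.0)
    normal = bed(n, d, 3.0)
    print(f"{n} x {d} Gy: tumor BED = {tumor:.0f} Gy, late-tissue BED = {normal:.0f} Gy")
```

With equal physical dose, the many-fraction schedule keeps the late-tissue BED much lower relative to the tumor BED; that differential sparing is what a 10 Gy-per-fraction FLASH schedule would have to buy back through the FLASH effect itself.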