Our speaker today is Christopher Tessone, who's a staff scientist at SLAC National Accelerator Laboratory. He joined SLAC in 2011 as a postdoctoral fellow to study process-structure relationships in organic semiconductors. In 2013, he became a staff scientist, and he currently runs a research group focused on combining informatics, in-situ measurements, and chemistry to accelerate the discovery and deployment of new materials. In particular, his work focuses on applications in catalysis, photovoltaics, and additive manufacturing. And hopefully by the end, I will understand what that means. So take it away.

Yeah. So I think I'll preface this by saying, given the composition of people, we can take this conversation in whatever direction you guys want. I'm going to start by giving a little background about what I do, but if people want to pull in different directions, just unmute your mic; I'm more than happy to make this a conversation rather than a lecture. So don't feel like you've got to sit here silently. I find it very awkward to give a talk without a participatory audience; that's probably the hardest transition to Zoom. But I'm getting used to it.

So I'm going to talk a little bit today about a really recent push by the Department of Energy to look into how we can upcycle plastics, what that means, and then how we're applying some of the methods we developed in other spaces to this particular problem. But first, I want to talk about SLAC National Lab. For those of you who have been in the area: if you go up Sand Hill Road, back up the hill towards 280, you'll see a sign that says SLAC National Accelerator Lab. SLAC is one of the Department of Energy's 17 national labs, which were created to pursue national scientific objectives. Originally, they were built around developing the atomic bomb and atomic energy. 
Since then, it's been split into two halves. One half of the Department of Energy focuses on the National Nuclear Security Administration: maintaining the stockpile, cleaning up nuclear waste, all of that kind of stuff. And then the half that I'm in is the Basic Energy Sciences half. We work on foundational, close-to-academic research that has a 10-to-20-year horizon in terms of making it out into industry. The stuff you're going to hear about today is probably a little closer to making it into the commercial sector; we're looking at maybe a five-to-seven-year horizon for some of it.

So what I do at SLAC is use the X-ray facilities that are there. SLAC was originally built around a particle collider. The main purpose of the long linear accelerator highlighted in yellow here used to be accelerating particles at the two ends and then smashing them together here, and that's how we figured out the structure of quarks and other subatomic particles. But eventually CERN got built, and CERN can do this at so much higher energies than is possible at any other facility that there really isn't a reason for any other particle collider to exist. And so we repurposed both the linear accelerator and the synchrotron. The synchrotron used to be used to speed those particles up; they'd get dumped into this ring so they could get to the speed of light, or within a fraction of it, and then smash together. But now we've turned these into X-ray facilities. If you speed a particle around a circle, the change in acceleration from turning makes it emit light (it has to conserve momentum, and that's how it does it). And it emits a lot of X-ray light the way that our facilities are built. 
And so we use that to look inside the structure of things. At SSRL, which is the synchrotron source, we look at a host of different things, all the way from the structure of proteins, or the atomic structure of metals or other materials we might make (in fact, there's been a lot of work there studying COVID and potential vaccines for it), to imaging, something really similar to what you'd get with a dental X-ray, for example. The reason this facility is so special is that it's probably 10 orders of magnitude brighter than an X-ray source at the hospital, which means we can look at things that are weakly interacting. And LCLS, which is the linear source, is special because instead of essentially having a continuous emission of X-rays, it packs that same amount of X-rays into a pulse that is 10 femtoseconds wide (a femtosecond is 1 × 10⁻¹⁵ of a second). What that lets you do is look at the motion of individual atoms, which move at about the speed of sound.

So, a little bit about SSRL, because that is where I spend most of my time. We have a whole bunch of different techniques there. I talked about imaging; that's a lot of what happens on these branches of beam lines. You're going to hear me use the word "beam line" a lot. I try to avoid jargon, but this is so baked into my day-to-day that it's really hard to avoid. When I say beam line, I mean that we take a specific experiment, build it off of the ring, and then build a whole experimental chamber around it; that's called a beam line. So there's a bunch of imaging beam lines. These go from big stuff down to really, really small stuff, things on the nanoscale, about a thousand times smaller than the width of your hair. And then we have a bunch of what are called scattering beam lines, which I'm going to get into a little bit later. So that's crystallography. 
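As an aside on the LCLS numbers above, a quick back-of-the-envelope check shows why a 10-femtosecond pulse effectively freezes atomic motion. The speeds here are round, illustrative values (sound travels at roughly 5,000 m/s in a typical solid), not measured ones:

```python
# Sanity check: how far does an atom moving at roughly the speed of sound
# in a solid travel during a single 10 fs X-ray pulse?
pulse_width_s = 10e-15        # 10 femtoseconds
atom_speed_m_per_s = 5000.0   # order-of-magnitude speed of sound in a solid

distance_m = atom_speed_m_per_s * pulse_width_s
distance_angstrom = distance_m * 1e10
print(f"{distance_angstrom:.2f} angstroms")  # ~0.5 Å, a fraction of a bond length
```

Since a typical bond length is 1 to 2 angstroms, the atoms barely move during the flash, which is what makes snapshots of atomic motion possible.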
And then we have a bunch of absorption beam lines. If you ever took a chemistry or physics lab and did a UV or visible absorption experiment, this is exactly the same thing, except we're looking at absorption of X-rays, and that tells us about the chemical environment of materials and things like that.

So why do we do this to study materials and how they work? The short answer is that the canonical premise of materials science is that if you understand the structure of a material perfectly, you will know everything about its properties. And so we spend a ton of time trying to understand the structure of different materials. That might mean understanding exactly how the atoms are arranged; understanding the strain, which is how far the atoms are displaced from their perfect positions, even by a tiny fraction of an angstrom; looking at the size of nanoscale objects and what their defect structure looks like; and then, something we'll talk about a little today, phase identification and quantification. Usually a modern material isn't made of just one thing; it's a mixture of things, and the ratio of those components and precisely how they're arranged are really important to how it functions.

I'm going to talk a lot about X-ray scattering today, and this is the cartoon schematic of how that experiment works. We get X-rays off of the synchrotron ring, those are focused by a bunch of optics, and then they hit the sample. In a scattering measurement, what we're doing is looking at the amount by which the sample scattered the X-rays, and that quantity, the difference between the incident vector and the scattered vector, is called q. So we're going to see a bunch of plots of I(q). All you need to know for the purposes of this talk is that the pattern by which things scatter is directly related to the structure of the material. 
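For readers who want the quantity q in symbols, here is the standard definition used in elastic X-ray scattering; this is textbook convention rather than anything specific to the speaker's beam lines:

```latex
% Momentum transfer q in an elastic scattering measurement:
% the difference between the scattered and incident wavevectors.
\[
  \mathbf{q} = \mathbf{k}_{\text{scattered}} - \mathbf{k}_{\text{incident}},
  \qquad
  |\mathbf{q}| = \frac{4\pi \sin\theta}{\lambda},
\]
% where 2\theta is the scattering angle and \lambda is the X-ray wavelength.
% A real-space periodicity d shows up as a peak near q = 2\pi / d,
% so larger structures scatter to smaller q.
```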
And so what I mean by that is: we collect arrays of peaks or other line shapes, then we build a model of the atomic or nanoscale structure, simulate its scattering, and use a fitting procedure to minimize the difference between the measured and simulated patterns. That's how we actually measure the structure of materials.

This is a picture of the synchrotron from up the hill; if you were to turn around, you'd see the linear accelerator right behind this picture. And these are my group, who actually did a lot of the work I'm going to be talking about today. As always with science, nothing happens in a vacuum. This is not stuff I did by myself (I do sleep and eat and rest), so all of these folks were the participants actually performing this work. And I think one of the things that's really important to understand about energy science and the development of materials and applications around energy is that, more and more, this is a multidisciplinary, collaborative effort. It's not the way it once was, where a chemist sits in a lab by himself and discovers some new material. Today we're working with electrical engineers, environmental engineers, computer scientists, economists, chemists, and physicists. It takes a whole team of people with deep expertise in a lot of disparate areas working together to actually accomplish the kind of work that I'm going to talk about today. 
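That simulate-and-fit loop can be sketched concretely. The toy example below is not SLAC's actual analysis code; it fits a standard monodisperse-sphere scattering model to synthetic small-angle data and recovers the particle radius by least squares. All names and numbers are made up for the illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_form_factor(q, radius):
    # Normalized scattering amplitude of a solid sphere (standard SAXS result).
    qr = q * radius
    return 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3

def model_intensity(q, radius, scale):
    # Simulated I(q): squared amplitude times an overall scale factor.
    return scale * sphere_form_factor(q, radius) ** 2

# Synthetic "measurement": a 4 nm sphere with 1% multiplicative noise.
rng = np.random.default_rng(seed=0)
q = np.linspace(0.05, 2.5, 400)                  # scattering vector, 1/nm
observed = model_intensity(q, radius=4.0, scale=1.0)
observed *= 1.0 + 0.01 * rng.standard_normal(q.size)

# Fitting procedure: minimize the difference between the simulated
# pattern and the data over (radius, scale), the loop described above.
def residual(params):
    radius, scale = params
    return model_intensity(q, radius, scale) - observed

fit = least_squares(residual, x0=[3.5, 0.8])
fitted_radius, fitted_scale = fit.x
print(f"fitted radius: {fitted_radius:.2f} nm")  # should land close to 4.0
```

In a real beam-line analysis the model is far richer (polydispersity, backgrounds, instrument resolution), but the shape of the computation, model, simulate, minimize residual, is the same.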
Since we have so many business students on this call, I will make a plug. One thing that has traditionally been a missed opportunity in science is that academics will focus on pretty specific problems that may not actually have a commercial application. So one of the large pushes we've been making at SLAC, and to a large extent at Stanford as well, is to use techno-economic analysis to guide the research we're doing, so that we're not just performing work that maybe has no end use. Instead, we know the benchmarks: in this case, for plastics, how cheaply we need to deconstruct the plastic, so what the process cost for that is, and then the process cost for the reconstruction; together those determine what the value-add of the final product needs to be. All of the research I'm going to talk about today is guided by that techno-economic analysis, which happened before we ever started the project. This is pretty unique. To take a counter-example: I spent a decade working on thin-film flexible solar panels, and the cost point we thought we needed to hit to make those solar panels competitive with silicon kept changing year over year. The reason was that no one had done the complete supply-chain and capital-expenditure analysis of what the factory costs needed to be in order to understand where you were competitive versus silicon. It turned out, after about 15 years of thousands of researchers around the world working on this problem, that given the theoretical limit on the efficiency of those solar cells, they were unlikely to ever be competitive in the market against silicon. So that's always the give and take of science. Doing research for its own ends is absolutely necessary; that's where we get serendipitous discoveries. And taking flexible thin-film PV as an example, tons of other commercial applications came out of it. The screen in your cell phone is a great example: it almost certainly uses an OLED, and that is technology that was developed in parallel with the thin-film photovoltaic work.

OK, so let's talk about plastics for a second, and why I'm talking to you about this. Plastics have fundamentally empowered a number of industries and have been wildly successful at changing the landscape of the world: how quickly we can manufacture materials, how we package them. When we look at things like 3D printing, they have changed design cycles for products because we can prototype so much more rapidly. But the problem with plastics, and the reason they are so widely used, is that plastics really aren't built to degrade, ever. Most polymers used in plastics are made so that they will never change their structure. That is the goal: if you package something in a plastic, you want it to stay sealed and not degrade in any way. The problem that's led to is that most plastics are not recycled; only about 16 percent of plastics are currently recycled. The way those are recycled is mechanically: you chew them up and then recast them in a new form, and that degrades the quality of the polymer, so the mechanical performance goes down, for example. Which means there isn't really a good economic driver for recycling plastics, because you put in a bunch of energy and a bunch of capital and you get out a worse material. And this is a problem that is only going to grow. The consumption of plastics by humanity is increasing exponentially, and by 2050 it is going to account for about 20 percent of our petroleum consumption. So unless we can find a way to reuse, recycle, or upcycle plastics, we're going to be emitting significantly more greenhouse gases to support our plastics addiction. In addition to that, because plastics are so robust, they end up in places we don't want them, like the ocean. And so I think, as much as I would
wish otherwise, altruistic drivers for cleaning up these waste streams haven't been working as much as one would hope, so we need to provide an economic incentive to do so. And that's really the mission of this new Department of Energy consortium, BOTTLE is its acronym: to actually upcycle plastic waste. What upcycling means is this. In a recycling scheme, you break the plastic down mechanically or chemically and then you remake exactly the same plastic you had before. The issue with that is that your process cost is basically capped by the existing manufacturing cost, so it's really hard to realize a return there. In an upcycling scheme, we take those plastic waste streams and turn them into higher-value plastics; alternatively, we can make plastics that are more readily recycled. So we're trying to attack this on both the cost and the value side of the spectrum. (I apologize if I sound like an idiot to the business folks; I'm not a business person.)

As for how we're going about doing this: right now we don't have really good ways of doing it, so fundamentally this is a relatively new problem that people are looking into. There are two approaches people have looked at, in addition to mechanical recycling, to actually provide an economic driver for the reuse of plastics. One is burning them (pyrolysis is the fancy term for that) to generate power, which obviously emits further greenhouse gases, so it's not solving that half of the problem. The other is chemical recycling, which is generally quite costly. So we're starting with a known waste stream. The way we normally convert chemicals into useful things is the process of catalysis, which just means speeding up a reaction that would want to happen anyway, and generally most industrial processes use either thermal catalysis or, in the case of bio-derived materials, biocatalysis. In this case, since we don't have good ways, from a starting perspective, of how to
convert these materials, we're using a shotgun approach. Guided by those techno-economics, we're looking at every potential pathway to deconstruct and reconstruct these materials, then using the techno-economic analysis to evaluate which are the most promising on an annual cycle, and then downscoping and redistributing resources to the areas that are working.

So why is a place that uses X-rays to look at the structure of things involved in this problem at all? The answer is that because we have such a breadth of ways of understanding structure, we can actually look at the whole process. Starting from how to synthesize the catalyst more rapidly: we have tools for that, which accelerate the discovery cycle of new catalytic materials by an order of magnitude. We can characterize the structure of the catalyst, but also of the product. When we're talking about the mechanical behavior of a plastic, that behavior is fundamentally tied to the structure of the plastic, so we can do a really short, one-second experiment and understand something about what the mechanical performance will be, and thereby decrease the cost and time investment of figuring out whether a product stream will be useful or not. And then we can also look at the behavior of the catalyst itself. For the chemical engineering folks: we do time-on-stream type studies, where we set the catalyst and the plastic substrates in a reactor and just watch what comes off. But we can also follow exactly the chemical rearrangements that are occurring in real time, as well as the electronic rearrangements, so we can understand mechanistically what's happening. And we use that knowledge in a process called inverse design. Since we're trying to drive down cost targets, or in this case energy-consumption targets, we might look at, for instance, an enzymatic approach which digests plastic and
spits out a product that we care about, but might be too costly or not robust at large scales. We want to make a thermal catalytic pathway for that, which is traditionally how petrochemicals are processed. In that case we would study the enzymatic approach to understand the key steps in the process, and then we would design a heterogeneous catalyst to mimic those. So really, at SLAC we have a platform to characterize the whole processing pathway, and that's why we're a part of the BOTTLE consortium.

I'm going to back up for one second. There are probably 50 active projects occurring across the 20 institutions that are part of this consortium, plus what's occurring in industry, and in 30 minutes I'm not going to talk about all of them. So I want to talk about one specific problem which I find particularly interesting, because it's the crux of what's difficult about deconstructing and reconstructing plastics. Over here on the left-hand side you have some really common polymers used in plastics; polyethylene and polystyrene are great examples of really commonly used plastics that you might find in, for example, a Coke bottle that you would buy. For those of you who have forgotten chemistry, what this chemical formula means is: I have a carbon here and a carbon here, they're bound together, and then that repeats ad nauseam. That's what a plastic is; that's what a polymer is. This little squiggly spaghetti line is what that actually looks like in a real material, and there are millions and millions of these things packed together to make the solid material you're looking at. The tricky part is: if I want to break that down and make something new, the thing I break it down into needs to be really, really self-similar. If I'm cooking, for example, I'm not going to chop all of my vegetables different sizes
because they would all take different times to cook, right? I want everything precisely identical so that I can build a process around how I make it. In fact, this is why in restaurants the majority of the work is the prep work, right? And so the challenge here is: if all of these bonds look exactly the same, how do I make sure I'm always cutting this one? There is an industrial process in use today based on a catalyst that has manganese and cobalt in it, and the problem with that process is that it's not very precise in terms of what it spits out. My badly drawn cartoon schematic down here shows what this histogram of the product is trying to show: we don't get just one length of carbon chain at the end of it, we get a distribution of lengths, and so we can't easily take that and use it as the starting point for making a new plastic. The challenge we have, then, is how to design a catalyst that is only going to give me, for instance, 12 carbons in a row, so that instead of looking like this picture down here, every segment I cut is going to be like this one down here.

So how does my group go about doing this? For the past seven years we've been working on developing a set of techniques that combine in-situ experimentation, which is a fancy way of saying we look at an experiment using some probe while we're doing it. So instead of doing an experiment and seeing what happened at the end, we look at it throughout the whole pathway of the experiment. We then use a series of traditional statistical models as well as machine learning to automate the data analysis, so that happens for us, and that enables us to use machine learning tools to plan the experiments for us and essentially automate the whole scientific method we're trying to optimize in this case. And we apply this in a number of areas: things like additive manufacturing, which is 3D printing,
as I mentioned before, photovoltaics, and then catalysts. The reason I'm talking primarily about these examples is that these are places where the potential design space is enormous, so there aren't good ways to search it. If you were to do a traditional design of experiments, it would take you years and years to exhaustively search these spaces, in many cases hundreds of thousands of years. And so the way breakthrough discoveries have been made is really intuition or serendipity: either someone stumbles upon the right answer, or through training you have a good understanding of what's likely to work and you make good educated guesses. The way we look at it in this different discovery paradigm is to build libraries of materials which we think cover swaths of the space we're trying to search.

The catalysts I'm going to be talking about today are based on nanocrystals. Nanocrystals are really great materials for doing this kind of controlled understanding of why a product distribution emerges from a specific reaction, because we can design the structure of the nanocrystals: we can change their shape quite a bit, and we can change their composition, from things that are well mixed in the interior, to things which have a core-shell or decorated motif, even out to hollow particles. So this provides a platform where we can build libraries of these materials. Here's an example of what we do with that: we were looking at steam reforming of methane, and in this case what we wanted to do was achieve 100% selectivity. To do that, we looked at a series of nanocrystals where we changed the size from seven nanometers (that's this top picture; these are electron micrographs of the nanocrystals) down to two nanometers (this bottom picture), and then we studied the product distribution as a function of the nanocrystal size. We see that in this case the largest nanocrystals gave us a really uniform product distribution over a range of temperatures, which is what we want.

I want to pause for a second, because this is really core to everything else I'm going to talk about from here on out: the idea that when we precisely control a material's structure, we can then study its performance and understand the linkage between the two, and that lets us design better materials in the future.

OK, so we're going to break this problem down a little bit more. The slow step in that whole process is actually not studying the performance; it's making the things really precisely. The reason this is hard is the way chemists and chemical engineers design these materials (and I'm happy to hear that none of you will suffer this fate): you go through a long process of synthesizing things, you separate out the product you wanted to make, and then you usually look at it using electron microscopy to figure out, did I make the thing I wanted to make? Most of the time the answer is no, and you start back over at the beginning. If you did make the thing you wanted, you build up the actual catalyst, which is supported on some porous material, and then you characterize the reactivity and selectivity. But the slow step in this process, when we started, was figuring out how to make the materials you want, the specific catalyst. And when I say slow, I mean: this is an example of one of these catalyst libraries. It's a series of palladium materials where we varied the size in roughly one-nanometer increments, with a really narrow distribution in those sizes, so all of the catalysts in the two-and-a-half-nanometer example are the same to within a single extra layer of atoms that might differ between one and another. And this was about a year of work for a grad student, figuring out how to make all of these
materials. So when we're looking at where the opportunity is in this space, the opportunity becomes: if I can take that year and compress it down into a day, how much does that accelerate my research outcomes? Quite significantly. But how do we actually do that? It's easy to say, OK, let's just make this go faster, but these are not unintelligent people trying to figure this out. It's not just one person's effort; it's a group of graduate students at Stanford, and since you are all attending Stanford graduate school, I'd hazard a guess that you think they're probably quasi-intelligent folks. So the answer isn't just "work harder" or "get smarter" or anything like that; we need a fundamental paradigm shift in the way we're doing this. For us, that paradigm shift was really enabled by machine learning. Machine learning tools are just really robust right now; you don't have to have a background in computer science to use them. They're at the level that building web products was at maybe 10 years ago, where you no longer needed to be a CS jock to build an amazing web tool; there were tools available that let a novice build one. And as scientists, we started to get our hands into some of these tools and figure out how we could combine the thing we've been doing for a long time, this kind of in-situ monitoring, with some of these more modern data-science approaches, to speed up the way we do things.

We tend to call this a data-driven approach. It's kind of a silly moniker, because all science is technically data-driven, but what we really mean is: when you're thinking about an algorithmic approach, the first thing you need to do is say what the inputs are and what the output of the algorithm is. In this case, we were trying to figure out how to make a library of materials in which we vary the size, and so the inputs into
that are all of the chemistry that can occur: which reagents I'm going to use (what chemicals I put in my flask), how long I let it sit, and at what temperature. You end up with a space that's something like six to ten dimensions. And for the most part, all across the world, we all do the same things, because the way we learn from each other is to read each other's papers. One group publishes a paper that says "this is how I made this thing," then another group picks that up and tweaks it just a little bit, and so we end up in really narrow regions of this input space. So that's the input-space part. The desired measurable, in our case, is the size and the composition of the nanocrystals. So the key to using an algorithmic, machine-learning approach is that I've got to understand my input and output spaces. For us, the input space is all the chemicals and their concentrations and the time of the reaction, and the output is: what did I make, what was its size, what was it made of? Then I need to understand a little about how I would measure those things. So, to populate this table a little: we're trying to make the thing in the middle; this is what I can play with, what I would allow my machine to vary; and these are the things I need to understand about what I made. There's a bunch of ways I could look at that: spectroscopy, electron microscopy, some of the X-ray methods I talked about before. And I'm going to choose which of those techniques to use based on the timescales these things occur on. In our case, we're talking about timescales on the order of seconds, and that narrows the candidate techniques really quickly; it turns out the only thing that works for this particular problem is X-ray scattering. So when we're talking about a machine learning loop,
we want the machine to specify some set of inputs, then we measure what the output is, and then the machine incorporates that new information, retrains itself, and spits out a new set of inputs. So I need to be able to characterize the output in real time, and that's why we end up using X-ray scattering here. The measurement we're using is small-angle X-ray scattering, which tells me about the size of the materials. If anybody has done any chemistry to synthesize nanoparticles: this is basically a little mini round-bottom flask that we made small enough to fit in our beam line, and then we detect the magnitude by which the X-rays were scattered using a couple of detectors out here. The reason we're using scattering is that it's really sensitive to shape: this is what the scattering curve looks like for a sphere, a hollow sphere, a cylinder, or a disc. It's also sensitive to size: this is how the pattern shifts as you go from a two-nanometer particle out to a ten-nanometer particle, in blue. And lastly, it's sensitive to how homogeneous, how self-similar, our product is: this local minimum washes out as your product gets essentially worse and worse. The problem is that this data is really difficult to analyze. Historically it's taken a person like me, who spent most of a PhD trying to understand how this stuff works and then put it into practice for a few years. At this point I could look at this curve and tell you the size of the particle, probably without doing any fitting; but for a first-year graduate student, understanding how to do the fitting is a process that probably takes three or four weeks. So the approach we take is to figure out how to automate the whole process. This is actually the data analysis flow that we go through
here: there's a bunch of data cleanup, and then there's a fitting procedure to actually get you the information you care about. We went about automating this by taking all of the data I had collected over about 10 years and training a set of machine learning algorithms on it. Well, actually, first we took this curve and broke it down into a simple set of metrics which we think describe the curve, so that instead of being an array of a thousand points it becomes an array of about 20 points. That lets us speed up the training process to something reasonable. It's also really helpful because the typical problem with using machine learning in science is that we don't have big data sets. For an experimental scientist, 200 or 300 data points on different materials is a huge data set. It's not a problem like Google might have, with a hundred thousand or a million pictures of dogs and cats, where it's easy to train a machine-vision algorithm to recognize them. I need to simplify the problem, and one way to do that is to turn a big array like this into a few descriptors, and then train an algorithm to understand the correlation between those descriptors and the output I care about, which is the size. This was a project that took about two years and three materials-scientists-turned-data-scientists to code up, and it works reasonably well. The way it works is: it looks at any new data coming in, and we've broken the analysis down into a set of classifiers. First there's a classification layer: it asks, is this nanoparticle a sphere, or a cube, or a mixture of spheres and cubes? Once you know that, it sends the data down a different pipe to figure out the size of those
This is just an example of the interface. One thing I will say: scientists don't necessarily write the prettiest software you've ever seen; we're pretty utilitarian in nature. But the important point is that the analysis has to be robust. I think the trick with using these kinds of feedback loops in science is that your analysis needs to be nearly perfect: when you're trying to feed a machine that designs experiments for you, if your analysis tool feeds it the wrong answer, you're making your algorithm stupider and stupider with every experiment instead of smarter and smarter, which is what you wanted. So I want to talk a little bit about acceleration here. Just by automating the data analysis step, interpreting the data now takes seconds instead of the days it took a graduate student or a postdoc. We're able to take a recipe like this, which made a mixture of things: little nanowires and little nanospheres, and there are actually some nanocubes in there if you care to look. That mixture is useful for nothing, but if we can make just the nanowires of this iron-platinum material, it's a really useful catalyst for what goes in the catalytic converter in your car. It's useful from an economic perspective, because we're using less platinum, which is what's in your catalytic converter right now, and from an environmental perspective, because it converts the noxious chemicals you don't want to breathe at a higher selectivity. So we can take this recipe, which we had been working on for three months, and in a day of looking at different combinations break it down into the actual products we want. That was the first step at accelerating this, but we're still in the normal feedback loop, relying on the chemical intuition of the scientists. The second piece is actually training a set of sequential learning algorithms to do the design of experiments for us.
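One common way to protect a closed loop from the "wrong answer makes the algorithm stupider" failure mode is to gate each result on the analysis model's own confidence, and only feed high-confidence analyses back into training. The sketch below shows the idea with a scikit-learn classifier; the threshold value and the toy data are assumptions for illustration, not anything stated in the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CONFIDENCE_FLOOR = 0.8  # assumed cutoff; would be tuned for a real pipeline

def accept_result(clf, descriptors):
    """Only let a measurement into the training set when the shape
    classifier is confident about its own answer."""
    proba = clf.predict_proba(descriptors.reshape(1, -1))[0]
    return bool(proba.max() >= CONFIDENCE_FLOOR)

# Toy demonstration: labels depend only on the first descriptor.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))
labels = (X[:, 0] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

clear = np.array([3.0, 0.0, 0.0, 0.0])  # far from the class boundary
```

A rejected measurement can still be logged for a human to inspect; it just doesn't get folded back into the model.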
This is the basic loop in a sequential learning approach. How I differentiate sequential learning (and if anyone here cares to educate me, I'm a lifelong learner by nature of being a scientist, so please correct me if I'm wrong) is that you're using a machine learning framework in which you retrain the algorithm at each subsequent step, whenever you have new data to introduce. In our case, the approach was to build a machine-controllable set of reactors: everything can be controlled by machine, and you don't need any humans in the loop. The computer sends synthesis conditions to the reactor, we measure what we got out using the scattering methods we talked about, the automated data analysis pipeline figures out what we made, and then we package that together with the synthesis conditions and feed it to a machine learning algorithm, which retrains on all of the information it has so far about which synthesis conditions produced which products. In this case we used a random forest architecture, for those who care, though this also works with several other flavors we've looked at. You start with an algorithm that knows nothing: it has no data to correlate, and you just start running experiments. As you do, the algorithm starts to learn: when I tweak this parameter this way I get this size, and when I tweak that parameter that way I get that size. (This, by the way, is what all the chemical pumps look like.) So what the algorithm is essentially learning is the correlation between the set of inputs we talked about and the outputs on the other side, which were the size and the polydispersity. And when we do that, the amazing thing is that it doesn't require big data. It turns out that maybe chemists aren't so good at figuring these things out; we just had relatively easy problems to work on so far.
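The closed loop described here, propose conditions, run, analyze, retrain, repeat, can be sketched as below. The "reactor" is a stand-in function, and the parameter bounds, target, and candidate-scoring step are all assumptions for illustration; only the retrain-at-every-step structure and the random forest choice come from the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def run_experiment(conditions):
    """Stand-in for the automated reactor plus scattering analysis:
    maps (temperature, flow rate, concentration) to a measured size.
    The chemistry is fake; it just gives the loop something to learn."""
    temp, flow, conc = conditions
    return 10 + 0.4 * temp - 2.0 * flow + 5.0 * conc + rng.normal(0, 0.5)

TARGET_SIZE = 30.0  # angstroms
LOW, HIGH = [20, 0.1, 0.5], [80, 2.0, 3.0]  # assumed parameter bounds

# Seed the loop with a small initial training set, then iterate:
# retrain, propose a candidate, run it, fold the measurement back in.
X = rng.uniform(LOW, HIGH, size=(18, 3))
y = np.array([run_experiment(x) for x in X])

model = RandomForestRegressor(n_estimators=200, random_state=0)
for step in range(28):
    model.fit(X, y)  # retrain on everything observed so far
    candidates = rng.uniform(LOW, HIGH, size=(500, 3))
    best = candidates[np.argmin(np.abs(model.predict(candidates) - TARGET_SIZE))]
    X = np.vstack([X, best])
    y = np.append(y, run_experiment(best))

closest = y[np.argmin(np.abs(y - TARGET_SIZE))]  # best size reached so far
```

Note the search is free to change temperature, flow, and concentration all at once from step to step, rather than the one-variable-at-a-time scan a human would run.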
Our sequential learning algorithms can start with a data set of 18 experiments. So we decided on 18 different experiments, coded them up as the training set, trained the initial algorithm, and then let it run. We were trying to target a nanocrystal with a radius of 30 angstroms; the other conditions were to maximize the yield, which is this intensity here, and to minimize the polydispersity in the system. After an additional 28 experiments, we hit the target. I think this is exciting for a number of reasons. First, this is the first example of a completely closed-loop, autonomous pathway to discovering new materials. The only other place these approaches have been widely used is drug discovery, and there they got to start quite a bit earlier because they do have data sets of hundreds of thousands of different pharmaceuticals; so five or six years ago, when Google released the tools that they released, they could just start using them, whereas we had to build tools that worked for our small data sets. The other thing that's really interesting (and I apologize; we'll replace this with a better figure as soon as we can) is that the machine isn't doing searches the way a human does. In other words, it's not keeping everything else constant and varying one parameter at a time; it's going crazy, changing a whole bunch of things non-linearly from experiment to experiment. What this plot shows is the different concentrations of the different reagents, how the machine is changing the overall flow rate and the temperature, and what particles come out. Why this is so useful is that when we go back and try to understand what the quote-unquote design rules for the system are, we now have a set of data that lets a human figure out: oh, this was the chemical property we were changing that led to this different size.
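The three goals mentioned, hit a 30-angstrom radius, maximize yield, minimize polydispersity, are typically folded into one scalar score the optimizer can minimize. The weighting below is a hypothetical choice for illustration; the talk doesn't specify how the objectives were actually combined.

```python
TARGET_RADIUS = 30.0  # angstroms

def objective(radius, yield_intensity, polydispersity,
              w_size=1.0, w_yield=0.1, w_poly=5.0):
    """Lower is better: penalize distance from the target radius and
    high polydispersity, reward high yield. Weights are illustrative."""
    return (w_size * abs(radius - TARGET_RADIUS)
            - w_yield * yield_intensity
            + w_poly * polydispersity)

# A run that nails the size with high yield and a narrow distribution
# should score lower (better) than one that misses on all three.
good = objective(radius=30.2, yield_intensity=900.0, polydispersity=0.05)
bad = objective(radius=24.0, yield_intensity=300.0, polydispersity=0.30)
```

In practice the relative weights encode which trade-off matters most, here, whether a slightly off-target size is worth a large gain in yield.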
So how am I tying this back to the bottle? What we're doing now is using that automated tool; the punchline is that we took that year-long search and can now distill it down to 24 hours, which is fairly exciting. It's not a grad student's favorite thing to do in the world anyway. But what's probably more exciting is that we can take that base architecture and start to include all of the other information we collect about the catalyst, like how it performed under a given reaction condition. Now we have a pathway not only to speed up how fast we can make these materials, but to figure out what we should be making for industrial applications, and then to tie it all together into a single platform that lets us do things like figure out really quickly how to use plastics in our supply chain. Alternatively, the petrochemical feedstock landscape is changing really, really rapidly: right now it's much cheaper to synthesize things from natural gas than from crude oil, but we can't make that switch, because our normal timescale for something like that is 20 years and we need it to be a year. These are the kinds of research pathways that let us get there. So if people have questions, I'll stop there. Sorry, Sarah.

Oh, it's quite all right. I'm just under an obligation to get folks back into the main room in about four or five minutes, so we've got time for a question or two, depending on how complex they are.

Chris, thanks for the talk. I was actually working in the petrochemical industry before school, so this is all very interesting; it's obviously top of mind as companies transition to plastics and figure out how to deal with the waste issues. My mind is more geared towards the applications, so it's really interesting to see how you were optimizing the structure of your catalysts.
But I think you touched on it very briefly: in terms of the selectivity of the hydrocarbons that come out, do you have any more results on what kinds of compounds you're creating with these catalysts?

Yeah, I can answer that in short. All of the work I showed you actually came from a project looking at converting propane to propene, which is the first step in making polypropylene, the second or third most widely used polymer in the world. The plastic-deconstruction work is just kicking off this October, but we're using these methods there. In terms of how we can break down plastics to uniform distributions, that's all work that should be coming in the next year, I would say, but these are the methods we'll be using to do it. In the case of propane to propylene, we were able to achieve 100% selectivity with zero degradation in the catalyst. I think the important thing for that reaction is actually extending the lifetime of the catalyst to drive the cost down, while achieving high selectivity so that the separation costs go down as well; the selectivity-lifetime product is really the important metric there. And there we were able to find a platinum-tin catalyst that does it: for as long as we measured, which was a week, it had no degradation in activity or selectivity. What's interesting is that this is a reaction that's been looked at for 30 years by various petrochemical companies, with billions and billions of R&D dollars invested in it, and we were still able to see something relatively novel. That work was actually tech-transferred out to industrial partners, because we don't have reactors to run under real conditions.

Great. Well, I'm afraid I have to cut us off there, but Chris, thank you so much for dialing in from the wilds of Wyoming, and I appreciate you sharing your expertise with us.
I'm sure people have further questions; they're welcome to get in touch afterwards, and