It's a pleasure to be here. I'm really representing these folks, the colleagues back at Carnegie Mellon who've done all the heavy lifting on this project; the work I'll be presenting is due mostly to them. Very briefly, it looks like all of us are starting our talks with why the interest in carbon capture, and we professors can't help that, assuming that some of you have come here without too much background in this. I'll talk a little bit about the objectives and scope of our GCEP project, some progress to date, and some of the work ahead. So why the interest in carbon capture? In addition to Ed's comments and others', the fundamental motivation is climate change and the fact that we need not small reductions but large reductions in CO2 emissions in order to achieve climate goals. If we were talking about 5 or 10% reductions to solve the climate problem, we would probably not be talking about CCS; 50, 70, 80% is a different story. Fundamentally, CCS is the only technology we know of that can address carbon emissions from the existing use of fossil fuels, which are likely to be around for some time. So my view of it is as more of a bridging technology, something that will be necessary if we wanna get large carbon reductions fairly quickly while we're working on a long-term sustainable future. CCS also turns out to be a major component in all the modeling studies that are done globally when one looks at cost-effective strategies to meet climate targets. Every modeling group that's looked at this shows that without CCS on the table, the costs of achieving climate goals will be substantially higher than with it; trillions of dollars are the typical estimates that come out of that. So while the focus of this talk is on CO2 capture, we shouldn't forget that it's really part of a capture and storage, or sequestration, system that has three major components.
First, the ability to capture CO2 from power plants and other industrial sources that produce it. Again, the CO2 might arise from coal combustion, but it might also arise from natural gas combustion or the use of biomass, so-called negative emissions. In order to sequester or store it in a geologic formation, which looks like the most likely option right now, one has to compress and transport it; the compression is typically needed to turn it into a supercritical fluid, essentially a liquid that can be moved by pipeline to appropriate storage sites. We're gonna be talking about the two major approaches we can take today, which Ed set up beautifully: post-combustion and pre-combustion. Here's a slightly more detailed schematic of what a post-combustion system would look like at a coal-fired power plant today. This is what most utility plants look like without the CO2 capture piece. The yellow boxes in the middle are a variety of technologies to address so-called criteria or conventional air pollutants: SOx, NOx, particulates, mercury. If one were to capture CO2 in a post-combustion environment, one would add another piece of technology after that, upstream of the stack, and as Ed indicated, the current technology that would do that job is an amine-based system. We have CO2 at low concentration and low pressure, so this is a chemical solvent, and its energy requirements are substantial. The pre-combustion system looks a little more complex. The CO2 capture piece here actually has two components. There's a CO2 capture unit toward the back here, which today would use a physical solvent. Here we have high pressures and relatively high concentrations of CO2, so rather than a chemical solvent, one can use a physical solvent to do that job at a much lower energy cost. But in order for that to work, one first needs to add, upstream, a water-gas shift reactor, basically a chemical process that converts the CO in the gas to CO2 and H2.
So the CO2 capture unit is basically working on a CO2-hydrogen mixture, as opposed to post-combustion, which is largely a CO2-nitrogen mixture at much lower pressures. Again, the solvent that would be used today is a commercial product called Selexol. It's a glycol-like substance that has been used widely in industrial applications. Here's what some of this actually looks like in terms of hardware. These technologies for both post-combustion and pre-combustion have been used at power plants, both gas- and coal-fired plants, the two on the left, at scales roughly an order of magnitude smaller than a commercial plant today. So those units are on the order of a couple of tens of megawatts electrical equivalent. This particular hydrogen production plant uses a Selexol system to do essentially the same separation, except the hydrogen is used to make chemicals instead of electricity. Here are photos of two newer developments. The one on the top is, it would be fair to say, a big deal. This is the first large-scale demonstration of post-combustion capture at a coal-fired power plant; it's what the community has been waiting at least a decade for. This one, the SaskPower Boundary Dam facility at 110 megawatts, had its official inauguration just two weeks ago; the plant started up a month ago. The CO2 capture unit is, let's see if I can get the pointer, this unit in the foreground. It is now operating at 90% capture, and for the first month or so, so far, so good. The picture on the bottom is a unit still under construction that is now scheduled to start next year rather than this year. It's a large gasification plant that the Southern Company is building, and it will capture CO2 using Selexol as a solvent at a scale of 600 megawatts with about 65% capture. So we're starting to see, in these two examples and others that are planned in Europe, the first large-scale implementations of this.
If that's the good news, this is the bad news: these are expensive technologies. To one significant digit, at a new plant, adding a post-combustion amine system would increase the cost of generating electricity at that plant by roughly 70%. The incremental costs are lower for gasification combined cycle and natural gas plants, but still quite significant, and in terms of absolute cost of electricity we're talking about different baselines. We're talking about CO2 capture at this meeting because most of the cost of that system is associated with the capture part; transport and storage, while those costs can vary depending on site-specific factors, are roughly on the order of 20%. So if you wanna make a big dent in CCS costs, you've gotta go after the capture process. And there are lots of ideas for how to do that. This is a slide from the Department of Energy showing a variety of options being pursued at different scales, and some notion of their time frames for success. I was happy and delighted to see GCEP join that process a couple of years ago. A request for proposals sought advanced carbon capture processes, and consistent with the GCEP philosophy, it was looking for, and these were their words, step-out, game-changing improvements: big improvements that could have big impacts in the next several decades. As a result of that solicitation, three projects were selected: Ed's project at Notre Dame involving ionic liquids, Randy's project at Northwestern involving metal-organic frameworks, and Jen Wilcox's project here at Stanford involving some novel activated carbon sorbents. So what am I doing up here? A year later, there was another RFP asking for development of a systems analysis framework to be able to evaluate novel processes, these three in particular but others in general, in the context of some rather rigorous criteria that GCEP had put in the original proposal for what they'd like to see in these advanced processes.
So we were selected, along with Chris Edwards' group here at Stanford, to work on a systems analysis framework that could be used to get some quantitative metrics for just how these advanced systems would fare relative to baseline systems in the context of full power plants. The approach we proposed, and have been following, is to build on some prior work we've been doing with a lot of support from the Department of Energy. With that support, we've built a modeling framework whose acronym is IECM; if you Google it, it's the Integrated Environmental Control Model. It's essentially an easy-to-use model of a single power plant, which could be coal-fired, gas-fired, or biomass-fired. It basically includes all of the environmental control systems, not only for air but also for water, because water use is another issue, and for solid wastes. So it's a full-blown mass and energy balance with engineering-economic models and the ability to handle uncertainty. We proposed to build on this framework as a tool that we, and then others, could use to ask and quickly answer a whole variety of what-if questions: what if I could create a material that had these characteristics, and so on. So the overall approach is basically to couple engineering process performance models with engineering-economic cost models in a systems framework that has a probabilistic capability, so one can look at uncertainties in a fairly rigorous way to identify both risks and opportunities, and hopefully in a package that, again, is easy to use and portable so others can play with it. The software package, if you were to download it today, has a graphical user interface behind which there's a lot of stuff. You bring to the model information on the design of the power plant you're interested in, fuel properties, and cost factors, and the model delivers information on process performance, emissions, and costs.
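To make the what-if idea concrete, here is a minimal sketch, not IECM code, of the simplest such calculation: translating an assumed plant efficiency and capture fraction into a CO2 emission rate. The fuel properties and efficiencies are illustrative placeholders, not model outputs.

```python
def co2_emission_rate(efficiency, carbon_frac=0.65, hhv_mj_per_kg=26.0,
                      capture_frac=0.0):
    """kg CO2 emitted per net MWh of electricity.

    efficiency    -- net plant efficiency, HHV basis (e.g. 0.39)
    carbon_frac   -- carbon mass fraction of the coal (assumed value)
    hhv_mj_per_kg -- higher heating value of the coal (assumed value)
    capture_frac  -- fraction of CO2 captured (0.90 for a capture plant)
    """
    fuel_kg_per_mwh = 3600.0 / (efficiency * hhv_mj_per_kg)  # 3600 MJ per MWh
    co2_kg = fuel_kg_per_mwh * carbon_frac * (44.0 / 12.0)   # C -> CO2 mass
    return co2_kg * (1.0 - capture_frac)

# What-if: a 39%-efficient plant without capture vs. a plant that takes an
# efficiency hit down to an assumed 28% but captures 90% of its CO2.
base = co2_emission_rate(0.39)
ccs = co2_emission_rate(0.28, capture_frac=0.90)
```

The point of the real model, of course, is that the performance and cost consequences of each assumption are computed rather than asserted.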
So when this project started, we already had in it a whole suite of technologies: a number of baseline CO2 capture systems that we had worked on for some other things we're doing for DOE, and a whole suite of power plant and environmental control technologies. What we've been doing in this project is to work specifically with the three groups that GCEP has been funding to develop new process performance and cost models that could be implemented in this framework and used to assess some of the specific criteria that GCEP put in their original solicitation. And these guys have, I think, the biggest challenge ahead of them. GCEP enumerated eight criteria. I've reorganized them into four that basically deal with performance metrics and four that deal more or less with cost metrics. I've highlighted three of them: the ability to capture and separate 90% or more of the CO2, to substantially reduce the energy penalties relative to what they are now, but also to keep the cost low. So it's a perfect GCEP challenge, along with a number of other things. Our job is to try to figure out how things are going in these directions and, probably more importantly, to try to use the larger modeling framework to suggest ways that one can move more effectively toward meeting these goals. So let me tell you first about some of the work we're doing. The three projects all have about a year left, as does our project, so we're about halfway through; I intended this to be a kind of informal progress report to GCEP and others. Let me tell you first about what we've been doing in the area of post-combustion capture. I should say, I think one of my slides got lost here. I should back up and say first that all three groups I mentioned are still working actively on their materials, so they have not yet given us the formulas, the magic recipes, for the materials they think will do the best job.
Somehow a slide seems to have been dropped; maybe it'll show up later. So what we've been doing is working with what I've called surrogate materials: materials that are similar in nature to the materials the groups are developing, but not the last word. One of those, from Jen Wilcox's project here at Stanford, and we heard a little bit about this this morning as well, is some novel activated carbon sorbents. These are basically solid sorbents that might do the job. On the left, these data points are some data that Jen was kind enough to provide to us, and the solid lines are fits to that data using a conventional Langmuir equilibrium model, which does a nicer job down in the lower temperature ranges that are likely to be relevant. So what we have, basically, is a model representation of that. On the MOF side, with advice from the folks at Northwestern, we've looked at a number of metal-organic frameworks that they believe would be most useful to start playing with. We've also looked at some other solid sorbents, not of either of those types. I'll show you some preliminary results in a minute based on a zeolitic imidazolate framework, ZIF-78; the isotherms are of the sort that Ed just showed. So here is basically CO2 uptake as a function of temperature and pressure. We'll look at a case study at 50 degrees Celsius, which is a typical incoming flue gas temperature, using isotherms both for CO2 and for nitrogen, where you'll notice the scale is quite different. We wanted to start as simply as we could, so the first model we put together is a three-step process involving adsorption and regeneration. It's a pressure swing system where flue gas, in this case idealized as CO2 and nitrogen, is fed into an adsorber, and then when breakthrough occurs, the system is reversed and a vacuum pulls out the CO2 to give a CO2-rich product.
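For readers unfamiliar with it, the Langmuir model mentioned above is a simple equilibrium fit; here is a sketch of how such an isotherm is evaluated, using a van 't Hoff temperature dependence for the affinity constant. The parameter values are placeholders for illustration, not the fitted values for the Stanford sorbents.

```python
import math

def langmuir_uptake(p_bar, temp_k, q_max=5.0, b0=1.0e-4, dh=-30000.0):
    """CO2 uptake (mol/kg) from a single-site Langmuir isotherm.

    q = q_max * b(T) * P / (1 + b(T) * P), with the affinity constant
    given a van 't Hoff temperature dependence b(T) = b0 * exp(-dh/(R*T)).
    q_max, b0, and dh are illustrative placeholders, not fitted values.
    """
    gas_const = 8.314  # J/mol-K
    b = b0 * math.exp(-dh / (gas_const * temp_k))
    return q_max * b * p_bar / (1.0 + b * p_bar)

# Uptake rises with pressure and falls with temperature, which is why
# adsorption is done cool and at pressure, and regeneration hot or in vacuum.
q_low = langmuir_uptake(0.1, 323.0)   # 50 C, 0.1 bar CO2 partial pressure
q_high = langmuir_uptake(1.0, 323.0)  # 50 C, 1 bar
q_hot = langmuir_uptake(1.0, 373.0)   # 100 C, 1 bar
```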
One of the things you can see in this system, well, I think I have another slide here; I'm gonna skip over the details, they're in there. Here's a better representation. Let's look first at the one on the left. Of these two lines, the blue line is showing CO2 recovery, which is basically the CO2 capture efficiency, and the red line is showing the purity of the CO2 that's captured. The GCEP target is 90% capture, and for this sorbent at this temperature, the only way to do that is at very low pressures, probably unrealistically low, but those are the numbers that come out for a single-stage vacuum separation. So 90% recovery requires very low pressure, and the purity levels don't exceed about 70%. These two plots show the specific work, the energy per unit mass of CO2 adsorbed, and the sorbent required to do that. Again, what comes out of this information is that we need to operate at low pressures. So we've run a preliminary case study using our IECM framework where we assume 90% capture in a single-stage vacuum swing adsorption system. We pressurize the adsorber to a little above atmospheric pressure, 1.2 bar, desorb at this very low pressure, and then compress the CO2 to 135 bar, which is basically pipeline pressure. And here are some preliminary results. Let me just focus on the next-to-last line, which is the net power plant efficiency. Basically, a lot of energy is needed not only to do the capture but, most importantly, to do the compression, if one actually had to go to these very, very low vacuum pressures. So the result in this case is that a power plant which would be 39% efficient without CO2 capture takes a significant hit, comparable to what it would take with a conventional amine system, but also with lower purity, and that would probably not be pure enough to put into a pipeline.
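To see why deep vacuum operation is so costly, here is a back-of-the-envelope sketch using the ideal-gas isothermal work formula. The 0.05 bar desorption pressure is an assumed illustration, not the study's actual value, and real multistage compressors need considerably more than this lower bound.

```python
import math

GAS_CONST = 8.314   # J/mol-K
M_CO2 = 0.044       # kg/mol

def isothermal_work_kwh_per_t(p_in_bar, p_out_bar, temp_k=323.0):
    """Ideal-gas isothermal compression work in kWh per tonne of CO2.
    This is a thermodynamic lower bound; real equipment needs more."""
    w_j_per_mol = GAS_CONST * temp_k * math.log(p_out_bar / p_in_bar)
    return w_j_per_mol / M_CO2 / 3600.0  # J/kg -> kWh/t (1 kWh/t = 3600 J/kg)

# Pulling CO2 off at an assumed deep vacuum of 0.05 bar, then compressing
# the product from 1 bar up to 135 bar pipeline pressure:
vacuum_work = isothermal_work_kwh_per_t(0.05, 1.0)
pipeline_work = isothermal_work_kwh_per_t(1.0, 135.0)
```

Even in this idealized form, the vacuum step alone adds work comparable in scale to the pipeline compression, which is the efficiency hit the slide is showing.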
So the take-home message here is that we need to go back and build a more complex model, probably of a two-stage process, and play with some additional parameters to achieve higher efficiencies and higher product purity with this system. This was the best of the several sorbents we looked at, so other materials would have similar challenges; that, I think, is the preliminary finding that comes out of this. We also modeled a pre-combustion system using ionic liquids. Again, we used a particular liquid that was recommended to us by the group at Notre Dame. Here, basically, we're substituting an ionic liquid for a conventional Selexol solvent. The process for doing the capture is the same with either solvent. There's an absorber; this is syngas from the water-gas shift reactor, so it's essentially modeled as a CO2-hydrogen mixture. In both cases we've taken impurities out of the system. The absorption is followed by a series of depressurization steps in flash drums, and recompression, to basically desorb the CO2. Again, we used data for that particular ionic liquid, whose name Ed can probably pronounce and I can't, in a preliminary case study where again we're looking for 90% CO2 capture, with the entire captured stream compressed to 135 bar and an idealized gas mixture of CO2 and hydrogen. There are details of the process model that I'll leave in the presentation, but here's the bottom line in terms of the simulation of the overall power plant. This is the power just for the unit I showed you, and in this case the ionic liquid actually turns out to be about 10% better in terms of energy requirements than Selexol. Not a huge breakthrough, but a step in the right direction, and we can look forward to other properties. Haibo Zhao, who did this work, also did some sensitivity analyses; we're starting to play around with this to look at the effects of various design parameters.
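The flash-drum sequence described above can be illustrated with an idealized physical solvent obeying Henry's law, where CO2 loading is proportional to its partial pressure. The absorber and stage pressures below are assumptions for illustration, not the actual process conditions.

```python
def flash_recovery(p_absorb_bar, flash_pressures_bar):
    """Fraction of the absorbed CO2 released at each flash stage, for an
    idealized Henry's-law solvent (loading proportional to CO2 partial
    pressure). Stage pressures must be in decreasing order.
    """
    released = []
    p_prev = p_absorb_bar
    for p in flash_pressures_bar:
        # Loading drop from p_prev to p, as a fraction of the initial loading.
        released.append((p_prev - p) / p_absorb_bar)
        p_prev = p
    return released

# Assumed example: absorb at 30 bar, flash at 10, 3, and 1 bar.
stages = flash_recovery(30.0, [10.0, 3.0, 1.0])
```

The design advantage this sketch captures is that most of the CO2 comes off at the higher-pressure flashes, where far less recompression work is needed to reach pipeline pressure than for CO2 released at 1 bar.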
One of the things we really need to look at is CO2 removal efficiency: if one backed off the 90% target and moved down to, say, 85%, some things actually start looking better. So there's a lot of playing around that remains to be done. In terms of the preliminary messages that come out of this very early work, we're just underscoring what Ed and others have said: work on novel materials really has to focus on high selectivity, to ensure high capture efficiency as well as high purity. These have all been idealized, simple systems, so we haven't mucked them up by putting any water vapor into either of them. None of these materials likes water vapor, so either you have to design one that is impervious to it, or make the system more complex by adding a dehydration step, which is not really what you wanna do. So in order to improve the realism of this, we'll need additional data on sorbent behavior in the presence of water and other impurities, and isotherms not for single-component gases but for mixed gases, in order to get more realistic performance estimates. None of those imperfections were in the results I showed you earlier. Let me just say a brief word about process cost models. I'm not gonna show you any cost results today, that's still a work in progress, but let me describe our approach and some preliminary conclusions from work we finished recently on some other processes. What we try to do in our cost models is first estimate, on the capital cost side, what are often called direct equipment costs: what it would cost to buy and install the equipment one needs to do the capture. What's often forgotten, and often handled rather, I was gonna say sloppily, that's not right, with not as much care as is perhaps needed, are a lot of the indirect costs.
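As a back-of-the-envelope version of the selectivity point made earlier, one can estimate the product purity a given CO2/N2 selectivity could deliver from a binary flue gas, and invert that to find the selectivity a purity target implies. This is a crude screening formula assuming the captured-phase composition is set by a constant selectivity, not a process simulation.

```python
def product_purity(selectivity, y_co2):
    """Idealized CO2 purity of the captured product for a binary CO2/N2
    feed, assuming the adsorbed/absorbed phase composition is fixed by a
    constant CO2-over-N2 selectivity (a screening estimate only)."""
    s, y = selectivity, y_co2
    return s * y / (s * y + (1.0 - y))

def selectivity_needed(purity, y_co2):
    """Selectivity required to hit a target purity in the same model."""
    return purity * (1.0 - y_co2) / ((1.0 - purity) * y_co2)

# For an assumed 13% CO2 flue gas, a 95% purity target backs into
# roughly this CO2/N2 selectivity:
s_req = selectivity_needed(0.95, 0.13)
```

In this simple picture the required selectivity runs to the low hundreds, which gives some feel for why single-stage purities topped out around 70% for the materials discussed.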
So in a traditional cost estimate, after you do an equipment costing, there are a series of other items, particularly so-called contingency costs, that are all typically estimated as a percentage of your direct equipment costs. There are some guidelines for how that can be done. In some recent work I've stuck my neck out and pointed out that the major organizations who put out these guidelines, like DOE, EPRI, and others, in many of their own studies don't follow their own guidelines, and tend to give numbers that are probably more optimistic than they should be for this stage of development. So we wanna be careful going forward, and the reason is that there's a lot of history suggesting we tend to be optimistic at the earliest stages of technology development, and costs grow as a technology matures toward FOAK, first-of-a-kind, a real commercial reality. While ideally we all wanna get to that low-cost, nth-of-a-kind plant, you have to start somewhere else, and you can't get to the nth-of-a-kind plant without building n plants; if you never get past the first one, you'll never get to the nth-of-a-kind plant. What we have found, and I suspect we'll find in this case, is that high capital cost is another major barrier and hindrance to the entry of new technologies. Historically there's a lot of data to show that we've done a poor job of predicting commercial costs at early stages of development.
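The cost buildup described above, with indirects and contingencies layered as percentages on direct equipment cost, can be sketched as follows. The percentages here are illustrative placeholders, not EPRI or DOE guideline values; the real guidelines tie process contingency to technology maturity, with higher percentages for early-stage processes like these.

```python
def total_plant_cost(direct_equipment_cost,
                     indirect_frac=0.25,
                     process_contingency=0.30,
                     project_contingency=0.20):
    """Capital cost buildup in the style described in the talk.

    Indirect costs and process contingency are taken as fractions of the
    direct equipment cost; project contingency is then applied to the
    subtotal. All fractions are illustrative assumptions.
    """
    dec = direct_equipment_cost
    indirects = dec * indirect_frac                # engineering, fees, etc.
    proc_cont = dec * process_contingency          # technology-maturity risk
    subtotal = dec + indirects + proc_cont
    proj_cont = subtotal * project_contingency     # project-level risk
    return subtotal + proj_cont

# With these placeholder fractions, $100M of direct equipment cost
# becomes a notably larger total plant cost:
tpc = total_plant_cost(100.0)
```

The point the sketch makes is the speaker's: because everything downstream multiplies the direct cost, optimistic contingency percentages understate early-stage costs substantially.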
So we're gonna try to do a more careful and more realistic job on that, but the message from the work we've done so far is that while, as engineers, we're always after the holy grail of improved efficiency, there are trade-offs. So there are challenges not only to the technical community in finding and tailoring more appropriate materials, but also to the engineering community at large in finding ways of minimizing the capital cost of these systems: challenges in how we can make things simpler, how we can reduce the size of vessels, how we can use materials that are cheap rather than expensive. Those two sets of challenges, I think, are the ones I want to emphasize here, along with the fact that there are inevitably going to be trade-offs between cost and performance in getting to that next best system. As for the work ahead, we have a number of tasks we initially proposed that involve refining models and characterizing uncertainties. I haven't talked about lifecycle analysis; we wanna ask where a lot of these new materials come from and whether there are secondary impacts that need to be of concern. What we'd like to do is try to reverse-engineer our models and come back with advice to the process developers on what kinds of parameters they ought to be seeking, and that will be a major focus of the work ahead. So with that, I will thank you, with apologies for running over, and take a question or two if there is one. So we have a couple of minutes for questions. Sally? Oh, there we go. Thanks. You said high selectivity is important, but what's high? Do you have a sense of what's high? What would be a target that somebody should be shooting for? We didn't come with numbers. If we go back to some of the data we showed, it would depend on the particular material we're talking about, but the basic message is we need to do a better job getting higher purities on these separations.
So when we see 70% as a maximum for that particular material, those are basically substances reported in the literature; it's not what people are currently working on. What you really want is 90 and 90. What that backs into in terms of the particular parameters, we'll figure that out, but that's what we're looking for. Over to Paul. The effect of water vapor, I think, is gonna be an especially problematic one for a lot of these materials, yes. Oh, and on your task list, one thing I didn't see on there, but you did kind of mention it during the talk, is sensitivity analysis, and I encourage you to keep on with sensitivity analysis. That might help: maybe you don't build the best plant to begin with, but it helps you later on, when you get to the end, to start backing off the 90% if you don't like it. Our aspirations are to use the probabilistic capability that this model has and do a more rigorous job, which will involve some expert elicitation, so I'm expecting that we're gonna try to visit folks in the three groups to elicit their best estimates as to what kinds of properties might be achievable. We can put some of those judgments into the models and get probabilistic results: the likelihood of achieving different targets, which is a more rigorous way and would account for a lot of the interactions. Ideally we want to do that both on the performance and on the cost side, because at the end of the day, as you said, you want the best system to do the job, and we're gonna try to figure out what those parameters are. Thanks very much, yeah. Thank you.
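The probabilistic assessment described in this closing exchange might be sketched, very schematically, as a Monte Carlo over elicited parameter ranges. The distributions below are placeholders, not actual expert judgments, and the energy target is an assumed number for illustration.

```python
import random

def prob_of_meeting_target(n_trials=20000, target_kwh_per_t=250.0, seed=1):
    """Toy Monte Carlo: sample uncertain performance parameters from
    assumed elicited ranges and report the probability that the total
    regeneration-plus-compression energy beats a target. All distributions
    here are placeholders, not elicited expert judgments."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        regen = rng.triangular(100.0, 300.0, 180.0)  # kWh/t, assumed range
        compress = rng.uniform(80.0, 120.0)          # kWh/t, assumed range
        if regen + compress <= target_kwh_per_t:
            hits += 1
    return hits / n_trials

p = prob_of_meeting_target()
```

In the real framework the sampled quantities would be sorbent or solvent properties elicited from the three groups, propagated through the full IECM plant model, rather than energy terms sampled directly.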