So the discussion has already started, and this has been great. I do want to remind everyone that Goldilocks was an accused felon and squatter who was sociopath enough to eat other people's breakfast. So with her as the standard, we need to keep that in mind, looking through the bears' view, not just the Goldilocks view. One of the things that we hear a lot, and heard yesterday, not so much this morning, ironically, is that we're stuck with Epic and Cerner and a few others, and that means we're screwed, or hosed, or whatever term you like. I'm on camera, and the word screwed is okay; my mom will call me later about it. But the idea that we can get past that — it seems like an area that is ripe for disruption. What is going to happen? Is this going to be something where we just have to ignore Epic and Cerner, because they clearly don't want to play with us in real time? So what's the solution?

So I would ask you a clarifying question back on that question. Are you asking whether Epic and Cerner will adopt FHIR and some of the normative core models around patients and reports? Or are you asking whether Epic and Cerner and those crews will implement the level of genomic medicine standards that we need to put in the EHR?

The latter.

Okay, I think I can answer that one. I'm not smart enough to do the formula. They're not going to do it unless there's a lower cost to doing it that they're confident in. They're not going to take the business risk to invest big dollars in a draft representation of whatever implementation guide the HL7 Clinical Genomics work group comes up with in the next year. It'll probably be way too complicated. And it's nothing against them; there are just too few resources. If you guys invested more resources in development there, we could do it. But the way to do it is not to put all our eggs in one basket and say: okay, Clinical Genomics work group, we're going to give you some more money, we're going to get people in here to contract with you to really build out this standard. I think the way to do it is to separate and attack the problem from different areas, and to build those core, fundamental kinds of services, either as SMART apps or as models: what do you need to do to manage variation? What do you need to do to manage phenotypes and diseases? What do you need to do to manage knowledge? Have the Global Alliance or ClinGen or different research groups build those up, then hand them over to the HL7 Clinical Genomics work group and create services and tools in SMART on FHIR. Once those services and tools are available, it's going to be hard for Cerner and the lab companies to say no.

I worked for Sunquest; they bought the GeneInsight product from Partners, a commercial tool that I worked on. One of the things we put in that tool — and we spent an enormous amount of money, thanks to Sandy Aronson and Heidi Rehm, building it — was an allele registry. It was ubiquitous. As soon as the lab technicians put in an HGVS expression or whatever, it would automatically go out: do I have knowledge on this? Do I have a canonical identifier? Does it work against build 38, 37, 36? Whatever it was, it would just resolve it in real time on reports. It caught tons of errors and solved tons of problems, and it was expensive and hard to build and maintain.
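(Aside: a minimal sketch of the kind of real-time allele resolution Larry describes, assuming the ClinGen Allele Registry's public HTTP endpoint at reg.genome.network; the example HGVS expression and the response fields used here are illustrative, not a definitive contract.)

```python
import requests

def resolve_hgvs(hgvs: str) -> dict:
    """Resolve an HGVS expression to a canonical allele identifier."""
    resp = requests.get(
        "https://reg.genome.network/allele",  # assumed public registry endpoint
        params={"hgvs": hgvs},
        timeout=10,
    )
    resp.raise_for_status()
    allele = resp.json()
    return {
        # Canonical allele URI ends in an identifier like "CA123456"
        "canonical_id": allele.get("@id", "").rsplit("/", 1)[-1],
        # Mappings onto reference builds (GRCh38/37/36), when present
        "genomic_alleles": allele.get("genomicAlleles", []),
    }

# Illustrative expression; a lab system would call this as results are entered
print(resolve_hgvs("NM_000518.5:c.92G>A"))
```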
Commercial vendors look at that, and even Sandy and Eugene Clark, the developer, and I often said to ourselves: as soon as NCBI or someone builds a tool to do this, we're gutting this thing, plugging that in, and focusing our resources and dollars on the higher-level problems — the real problems around genomic medicine, the stuff you guys were talking about yesterday. That's what you want to get to. You have to provide a service and a tool that vendors can just use. It's like HGNC or GRC or all these wonderful tools that you put out there that people eventually take up and that become standard parts of your bioinformatics pipelines. You need to provide that for these tool vendors. And then Epic and Cerner are going to look at it and say: okay, this is a lot smaller problem — or at least a little smaller problem. What's the next big thing we have to overcome? That's what I think.

Can I just follow up with a question on that? How far along are Cerner and Epic in terms of adopting SMART on FHIR? We just heard three great talks which talked about how —

Four great talks.

Well, I guess Casey talked more about clinical decision support. But I think SMART on FHIR was highlighted pretty strongly in most of the talks this morning. So how far along are we?

SMART on FHIR, versus FHIR — I think it's really all about the data interoperability and consistency, so I'll choose to focus on FHIR itself. On the genomic side, I think you just gave a wonderful response, Larry, and I won't — I cannot — modify it, because it's not my space. On the clinical side, although I sound pollyannish, I think we're actually at risk of getting there. Part of that is because of the Argonaut specification that ONC has adopted; it's now become Promoting Interoperability, sort of the successor to Meaningful Use, and CMS is starting to require it. In fact, there's a shoe that will drop from ONC, probably in the next few weeks, raising the bar on interoperability and access to electronic health records, and it is invariably and uniformly based on FHIR. So vendors are basically required to implement FHIR APIs that allow access and retrieval of data — which is why I'm so rabid about suggesting that the research community give up on research data models and just embrace the clinical data models that are coming for free, in native format, out of the EHRs.

That being said, the reality in 2018 is that the vast majority of EHRs that are implemented have an unreliable FHIR API. That is the reality today; I don't deny that. The quality of a FHIR object that you get out of the average Epic installation is marginal. It's improving, but my point is that the trajectory is very clear. It has become regulation. It is becoming industry standard. It is becoming market driven. And that juggernaut is not going to stop. It's interesting — I heard some, how do we phrase this, uncharitable comments about Cerner. The reality is that Cerner has fundamentally re-architected its EHR, and it is built around FHIR. They basically have an underlying FHIR framework for what they do and how they do it. And thus, as it turns out, FHIR objects retrieved from modern, recently installed Cerner systems are vastly more reliable than they are from Epic. But that will change in the next six to twelve months; Epic will be required to catch up. I think that ship has left the station. EHRs, by design, have become irrelevant from a research perspective — because if you have a standard API, it becomes irrelevant which EHR you're ultimately working on.
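(Aside: Chris's "standard API" point in practice — the same generic calls work against any FHIR-conformant server, whichever EHR sits behind it. A minimal sketch; the base URL, patient ID, and token are hypothetical placeholders.)

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def get_lab_observations(patient_id: str, token: str) -> list:
    """Fetch laboratory Observations for one patient from any FHIR server."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR searchset Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]
```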
Gil, did you want to add something?

Yeah, I agree with that. The train has left the station — that's a really good phrase to capture it. We hope the boat lands after the train leaves the station; hopefully there's no snow that needs shoveling in front. But I think it's a great time to be involved, because each time a new release comes out from an Epic or a Cerner, you see a lot more support coming in for new resources, for new functionality. Sometimes you can see a little bit of what the trajectory is for the next year or two, and it looks very promising. The way the genomics part is being built right now, you can communicate a lot just with an Observation, by constraining an Observation. So once Observation goes normative, that power is essentially there — at least to pilot it, and organizations are doing that. Because basically what I've seen is that organizations are now building precision medicine; they are going to implement something, so they are faced with a decision: do I implement a proprietary thing, or do I implement something that is a standard, or close to a standard, now, so I can learn about the standard and be an early adopter? Then when it gets solidified, we'll be in good shape. Most people I'm seeing prefer not to go the proprietary route, but to learn about the standard, use it, and dive in. And then they sometimes end up being the power users who develop the standard. So I think it's a great circle.

Okay. I want to bring in a couple of other points here, just to keep things real again. HL7 version 2 is the way they do it now, and it's going to be around for four, five, six, seven years before all these vendors change that. There's a lot of investment in it. There will be some tools that may convert from v2 to FHIR and all that. Chris was right on when he said this is about FHIR, which is more the specification — the standard you have to implement yourself. You asked about SMART: who's doing SMART on FHIR? I don't know; that's just a technology choice, a "how do we do it." People can use SMART because it's out of the box — use it, it's great. Epic may decide to do it their own way because they don't want SMART apps connecting to their system automatically, or who knows what they want to do; it's hard to say. So that's more of a how-to implementation question. As for Epic, Cerner and all those — I know Cerner's Kevin Power, one of the co-chairs of the Clinical Genomics work group, is very invested in bringing that stuff back to Cerner. I've worked with Epic folks that are looking at FHIR — of course they are. So it's going to take a while. But to get version 2 and FHIR onto the same underlying conceptual model, you have people like Bob Freimuth working in HL7 Clinical Genomics toward the model that is used to create the FHIR resources and the version 2 resources, so that everyone can call an apple an apple and know what it is. And that's not done yet. So we're going to have this great environment; we're going to allow people to innovate and do their own things. But the standard for data sharing and data exchange isn't going to happen until we get those services and those models right.

In genomics.

In genomics, yes. That's very important.
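(Aside: a sketch of Gil's point about constraining the Observation resource for genomics. LOINC 69548-6, "Genetic variant assessment," and 48004-6, "DNA change (c.HGVS)," are real codes from the LOINC genetics set, but the overall shape is illustrative, not the final HL7 Clinical Genomics profile.)

```python
# A minimal FHIR Observation carrying one genetic finding.
variant_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "69548-6",
                         "display": "Genetic variant assessment"}]},
    "subject": {"reference": "Patient/example"},  # hypothetical patient
    "valueCodeableConcept": {"coding": [{"system": "http://loinc.org",
                                         "code": "LA9633-4",
                                         "display": "Present"}]},
    "component": [{
        # Sub-detail: the specific DNA change, expressed in HGVS
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "48004-6",
                             "display": "DNA change (c.HGVS)"}]},
        "valueCodeableConcept": {"text": "NM_000518.5:c.92G>A"},  # illustrative
    }],
}
```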
Casey, did you want to add anything to this before Mark?

Yeah, I wanted to add to the comment around overcoming vendor capabilities. It seems like vendors are opening up more in terms of their APIs, to enable third-party software to communicate with the vendors. And I also have a question for the FHIR experts here about decision support and SMART. SMART seems like more of a display-based decision support, but I'm wondering if there are other frameworks that allow more complex decision support within the EHR.

So I wouldn't classify SMART as decision support. It's more like the layers that are needed to turn something from an API — a message — into something that's a secure app with a unified interface. Now, you could use it to create CDS and things like that. And right now it's being developed to possibly integrate into the whole FHIR standard, so that you automatically have that in there. That part of the framework was developed at Boston Children's Hospital, but it's being looked at to be put into the standard.

Exactly, yeah.

So I guess my question wasn't framed the best way. You're saying that SMART can be a framework that can potentially be leveraged for decision support. So when you develop an app, for example, would it still be mainly display-based, or can you do something more complex — can you link with functionality within Epic, for example?

You can, actually. I think what you're getting at is a concept called CDS Hooks, a related project from the same group, where there are hooks on things that occur in your workflow. When they happen — say you're starting to order a prescription — as you're typing it in, a CDS "card" will pop up and tell you some information. For example: here's a generic, it's much cheaper than the one you're about to order. Or: hey, this one's going to conflict with this genetic profile that you have. That's CDS Hooks, and those hooks tie into Epic or those systems. That may be an interesting thing to try out as well.

Yeah. Bob Freimuth, did you want to add to this thread?

I just wanted to take half a step back and expand — or clarify, or confuse — what Larry and Chris were saying about the relative maturity and adoption of FHIR, and genomics in particular. FHIR as a framework, as a specification, is absolutely going to be the way of the future. And to Chris's point, I think all of the EHRs have signed on to this; whether they like it or not, they're going to be forced to adopt FHIR APIs for many of the components inside their EHRs. That includes the underlying semantic data models that FHIR embeds within those profiles. To Larry's point, though — and I cannot emphasize this enough — the work that's going on in the genomics space with respect to FHIR is still immature, and it's evolving. There are a lot of reasons why it is lagging the rest of the development in the FHIR space. Chris said very elegantly that FHIR is like Lego building blocks. With respect to most of the clinical data elements that FHIR concerns itself with, those Lego building blocks are fairly small. In the genomic space, they are absolutely enormous, they are complex, and they're all these weird shapes that don't fit nicely together.
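(Aside: a minimal sketch of the CDS Hooks card exchange described a moment ago — a service the EHR calls during ordering, returning advisory "cards." The endpoint name and the gene-drug logic are hypothetical; the card fields follow the public CDS Hooks specification.)

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/cds-services/pgx-check", methods=["POST"])
def pgx_check():
    """Return an advisory card when a prescription is being entered."""
    hook_request = request.get_json()  # carries hook name, context, prefetch data
    # A real service would inspect the draft order and the patient's
    # pharmacogenomic profile here; this sketch returns a static warning.
    return jsonify({
        "cards": [{
            "summary": "Possible gene-drug interaction",
            "indicator": "warning",  # info | warning | critical
            "detail": "Patient's CYP2D6 metabolizer status may affect dosing.",
            "source": {"label": "Example PGx CDS service (hypothetical)"},
        }]
    })
```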
And so we are struggling with the complexity of the domain, and with the degree and rapidity with which that entire domain is evolving as the science moves forward. We're coming up with new ways of thinking about these data, using these data, analyzing these data, that we want to push into clinical practice much faster than the standards can keep up with. And that's fine, by the way — that's the way it should be. The standards should lag the cutting-edge science. But what we're left with is a framework called FHIR that expects these underlying profiles to support genomic data, and we don't yet have the maturity in the FHIR genomics space to support this. Now, to Larry's point, what that means is that the EHRs aren't going to adopt support for FHIR genomics anytime soon, because it is still very much a work in progress, and they would be silly to invest the time and effort into doing that at this moment. They will get there eventually. But what is needed to help us get there is the time, the effort, the resources, the commitment by a variety of stakeholders to actively go in and work on these specifications — because, as both Larry and Gil can tell you, those HL7 calls can be fairly lightly attended at times. And the number of people who actually have time to put into figuring out the really hard questions that hold up this whole domain is shockingly small.

Thank you. Mark?

So I have one comment for this thread, and then I want to take the conversation in a different direction. I think it's all well and good to focus on the vendors and the larger ecosystem, and I think we're pretty confident that things are going to move in the direction that has been outlined. But as we go more to an app world, the issue we're going to begin to encounter is at the local system level: how willing are local systems to open up their EHR applications? I think the average information security officer is highly concerned about those as entrees for hackers and that sort of thing. Irrespective of whether or not they're right — I personally don't think they are — it is their native conservatism that I think is also going to ultimately delay implementation, once things get to the point that we could actually do it. So that's something we'll also have to be cognizant about. Chris is going to disagree with me — but that's tough, because I'm moving on to a different question.

Which is this. We've been talking about portability, and this is probably initially directed to Gil, although I'd appreciate others weighing in as well. The idea is that the use of things like SMART and SMART on FHIR allows us — and patients in particular — to access their data in a variety of different ways. The thing that we did hear, though, about the tethered patient portals — and I suspect this will to some degree also apply to apps that are linked into electronic health records — is that when you leave the system, whether a health plan or a health care system, that access is still going to go away. Those data, even though they technically belong to the patient, will still reside within that system's firewall. And for most data, that's fine. If you have an electrolyte panel, it really doesn't matter; somebody else is going to run an electrolyte panel. But for genomic data that would be a disaster, because this is persistent data that can be reused over and over and over, and we really want that information to move with the patient.
So given that we have, at least in theory, tools that can access the genomic data and return information to patients in a way that they could use — potentially in communication with physicians — what's the solution for the storage of the sequence, such that it can actually be accessible to the patient wherever they are operating within our health care system?

Yeah, thanks, Mark. I think the central answer to that question is that the data doesn't really go with the patient directly; it's the potential for access to the data that goes with the patient, and whom the patient can share it with. So it could be on a cloud in Arizona, it could be on a cloud in Russia — it could be anywhere. But what happens is that a person will decide: I will share this with this physician, or with my family member for their family history tree. And I think the power you get — which is what SMART on FHIR, and Sync for Science as a subset of that, allows you to do — is that it empowers the patient, and it takes off a lot of the issues around, for example, HIPAA: can we actually use that data? Well, if the patient says you can, then you get around that. We heard yesterday — and I've heard this before — about organizations using HIPAA as sort of a blocker, and this is one way you could potentially get around that. I think we can really look to the financial industry. There's a lot of very sensitive data there, around bank transactions, international transactions, security and all that. But at the end of the day they're implementing APIs and they are doing these transactions. It took a few early adopters, but eventually it happened.

I do want to add one last thing on that issue. Yes, it may be slower for EHRs or others to adopt genomics, but the way it's designed, with layers, means you can use just the normative Observation with existing LOINC codes. Anyone can do that; that's what we did with DIGITizE. So I think you'll see some of those crop up first, and with time we'll be able to do more complex things. If you ask, when will genomics be done? — yeah, that'll be a while. But when will we start actually using genomics in clinical care? A lot sooner than we think, I think.

Go ahead, Chris.

I don't want Mark's pessimism to linger here. You are correct that, psychologically, that might be a mind frame, but two observations. One, that horse has already left the barn, in that virtually all systems allow personal health record access — MyChart, some kind of portal — and at the end of the day that's a borehole into their system to get specific data out onto a webpage they don't control. So the philosophical objection to data leaving their borders has already passed. The secondary issue of whether they would allow API access, which they may perceive as a different thing even though it isn't, is also not in their control, because we're seeing an increasing regulatory environment, with strong speculation that the requirement to implement APIs will be part of CMS regulatory requirements. So it's not as though they have a choice to say no.
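(Aside: a sketch of the patient-mediated access Gil describes — a third-party app asks the EHR's SMART on FHIR authorization server for patient-scoped access, and the patient approves the sharing. The URLs and client ID are hypothetical; real endpoints are discovered from the server's SMART configuration metadata.)

```python
from urllib.parse import urlencode

AUTHORIZE_URL = "https://ehr.example.org/oauth2/authorize"  # hypothetical

def build_authorization_request(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the OAuth2 authorization URL for a standalone patient launch."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        # Patient-scoped scopes: the app sees only this patient's resources
        "scope": "launch/patient patient/Observation.read offline_access",
        "state": state,  # anti-CSRF value, generated per request
        "aud": "https://ehr.example.org/fhir",  # the FHIR server being authorized
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

print(build_authorization_request("my-app-id", "https://app.example.com/cb", "xyz123"))
```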
We are de facto seeing the reality of very pragmatic, robust, and hopefully ubiquitous clinical API services, built on a FHIR framework, emerge from the clinical environment. And my only plea is that we as a research community not overlook that, but rather embrace it and treat it as an opportunity.

So on that thread — Vermont Bob, as opposed to Rochester Bob, had a question. You're on that thread? All right, go ahead.

So, at the risk of mixing metaphors, I feel compelled to say that the sequence has left the bioinformatic pipeline. One of the questions I'd like to ask in response to yours, Mark, is: what do you mean by sequence? You said, I believe, how do we allow patients to retain or obtain access to their sequence. It depends on what we mean by sequence. Is it the string of As, Cs, Ts and Gs that comes off of that bioinformatic pipeline? Or is it the diplotype, the haplotype, the star allele, the "genomic indicator" in Epic-speak, the metabolizer status in the pharmacogenomic realm — whatever terminology you'd like to use as that gets rolled up — that might be stuck onto a lab report someplace? Some of those data elements are appropriate for, and easily incorporated into, the traditional lab reports we find ourselves passing around, and in most cases are relatively easy to embed within a traditional EHR environment. Other parts of those data elements are very, very difficult to fit within a traditional EHR, and some of the systems Casey and Gil talked about — ancillary genomic systems specifically designed to store lower-level genomic sequence data — might be necessary. So I'll ask in response: to what degree are we trying to access it all?

I want it all.

You want it all.

Because frankly, what we currently know about the genome in terms of its relevance for health is a fraction of what we're going to know in five years, ten years, twenty years. And even for someone of my relatively advanced age, I would like to take advantage of that knowledge going forward. That can't be restricted to what we currently know and what is represented in the more traditional data elements — that's a relatively banal and easily solvable use case. It's really the dynamic aspects of the sequence going forward. What we don't want is that every time somebody moves to a new system, they say: well, okay, great, we have these reports, but now we need this information, so we're going to have to re-sequence you, or do an additional genetic test. That is extraordinarily wasteful, and that's the use case we really need to solve.

Yeah — I know this will disappoint you, but I could not agree with you more.

All right, then. Okay, then over to Olivier, and then we'll wrap up.

Thank you. Great session. I had an experience a couple of years ago: I started to try to map some of the results we were getting in the clinic, and I used, I guess, HL7 version 2. What surprised me is that among the providers — and a hospital like UCSD deals with a lot of genetic test providers — none of them really had a universal standard; they all had custom standards.
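(Aside, looking back at Bob's "what do you mean by sequence" question: the same underlying result can be stored at several levels of roll-up, and which levels an EHR can hold is exactly the open issue. All values below are illustrative.)

```python
# One pharmacogenomic fact, represented at each level Bob lists.
pgx_result_levels = {
    "raw_sequence": "FASTQ/BAM files off the bioinformatic pipeline",
    "variants": ["rs3892097"],               # VCF-level call (illustrative *4-defining SNP)
    "star_alleles": ["CYP2D6*4"],            # named haplotype
    "diplotype": "CYP2D6 *4/*4",             # pair of haplotypes
    "phenotype": "CYP2D6 poor metabolizer",  # rolled-up metabolizer status
    "report_text": "Reduced CYP2D6 activity; consider alternative dosing.",
}
```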
So, Larry, when you said they're using HL7 v2 — I don't know how frequent that is. In fact, we have a couple of companies here that are providers, and I wanted to know: what do they need to start adopting the standards that are already there? Maybe they're imperfect, but maybe they're good enough. Do they need something else, or do they need the EHR to be ready for it? It's the chicken and the egg: do we need the providers to start this first, or do we need the EHR to be ready to welcome the formatted, standardized data first?

So, I'm going to try to answer the question. I think your point was that they didn't have HL7 v2 implemented in the systems you were looking at, is that right? The reason folks don't adopt HL7 v2 is that it's really expensive to do, and they probably didn't have a driving reason to do it, so they come up with custom solutions. I can tell you what happens at big institutions like Partners HealthCare: they use HL7 v2 with their lab systems to integrate with, originally, their proprietary EHR, and ultimately with Epic. And when they use v2 to send those lab results back and forth, they put a big hairy thing — what they call a Z-segment — in the v2 message, which is a custom thing. They essentially put text files or PDFs into the v2 message and send it over. It's just a different way to get that PDF into the longitudinal medical record. And if they want to send structured data, everyone does their own thing in that Z-segment area, and it just becomes a different kind of problem. But overall, the clinical-level workflow of being able to send results, patients, ordering-physician information — that's all nice and structured, and they can do it. Now, why don't the vendors you talked with, that you worked with, have v2? I don't know. But I know that the ones that are trying to do HL7 — that's predominantly what they use.

Bob Nussbaum, did you have a question?

I wanted to comment that, in a laboratory like Invitae, our primary objective is to deliver clinical results to patients and providers. We are desperate to be able to do this in a seamless way with EMRs, and we are essentially waiting with bated breath for there to be a national approach, so that we're not trying to create it on our own. The mantra for all companies is: you build it, you partner for it, or you buy it. And I think most laboratories don't have the capability to build it — and in fact, building it would be an error, because then you're going to end up with a whole bunch of different stuff that doesn't talk to each other. I would say there is well over a $200-million-a-year revenue stream going to software vendors that are attempting to form the glue between laboratories and EMRs — and not just Cerner and Epic. There are a lot of office-based OB-GYN practices, for example, that have installed EMRs, and they absolutely want this. We've had clients tell us: if you cannot push results into our EMR and you're just going to send us PDFs, we won't use you. So this is a crying need. And I'm really interested in and excited by the progress I've heard about this morning, but I wish I had heard about this progress in 2015 and that we were now three years further along, because it's too slow.

There's no universal agreement on that. All right, Mary, and then we'll finish up.
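(Aside: an illustrative sketch of the v2 pattern Larry describes — a standard ORU^R01 result message where the human-readable report rides in an OBX and the site-specific structured payload rides in a custom Z-segment. "ZGS" is a made-up segment name, as all Z-segments are site-defined.)

```python
# HL7 v2 segments are pipe-delimited and separated by carriage returns.
oru_message = "\r".join([
    "MSH|^~\\&|LAB|EXAMPLE|EHR|HOSPITAL|20180601120000||ORU^R01|MSG0001|P|2.5.1",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "OBR|1||GEN-2018-0042|81479^Genetic analysis^CPT",
    # Conventional OBX carrying the narrative report (often an embedded PDF)
    "OBX|1|TX|51969-4^Genetic analysis report^LN||See attached report||||||F",
    # Custom Z-segment carrying the structured variant detail
    "ZGS|1|NM_000518.5:c.92G>A|Pathogenic|GRCh38",
])
```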
I guess I just wanna maybe do a little translation. What we have to do is have the clinicians and scientists who know about genetics be really specific with people like Larry: this is what we need the test names to do, so that we avoid duplicating a test unknowingly, so that we at least know that a gene has been tested in the past and how the current results compare, and so that we can link it to CDS. We can't depend on the informatics people to come up with the structure we need to work from, at a bare minimum. And I agree with Bob: we need to do that now. This is ridiculous. We've been saying this for years in this genomic medicine working group; it's a joke to think that we're really gonna make genomic medicine work unless we solve this fundamental problem. And it seems like you guys understand that. We know what we want, in a weirdly vague way, but we're not giving the informatics people the basic information they need to make progress. So I really feel like we have to take responsibility, and we need more time devoted to this topic. Maybe we need a special meeting — maybe GM12 or GM13 can focus on this issue — and get the players in a room, and don't let them leave until they come up with a system that will work at least for bare-bones purposes. Because right now we're wasting a tremendous amount of money doing sequencing and genotyping, and nobody can find it in the medical record; it's in there as PDFs. It's our responsibility to deal with this.

I think she said it right there. And then I wanna go on to something else.

Yeah, thanks — agree a thousand percent, and thanks for throwing your hat in the ring, because you have a lot more clout than I do. Working with you on the DIGITizE Action Collaborative was an example of that. It's a software developer's dream — it's my dream and my passion — to do this, but I can't do it alone, because I don't know genetics; I'm learning an enormous amount from my experience. You can't get the kind of requirements that are needed at an Epic or a Cerner and all that; you can try to get your customers to come together and do that. ClinGen is an amazing resource, where you take people like Erin Riggs and Heidi Rehm and Christa Martin and Carlos Bustamante and Sharon Plon — the list goes on — and they sit around and tell you what the requirements are, and it's fantastic. The DIGITizE Action Collaborative thing we did was a little microcosm of that, and that was fantastic too. Those requirements give me the confidence to say: we can do this. Like you said, we need the investment to do it faster, with more people, because there are only five or six of us trying to do it part-time, and that's just not cutting the mustard.

To go back to what Bob had said, if I could respond to that — you mentioned wanting to do this and wishing it were out sooner. I'm sure you're familiar with eMERGE. Right now there's a project moving in eMERGE to take two sequencing centers that are returning results — thousands of results — and put them into a shared repository using a very proprietary model: structured data, right down to the variant level. It's early, and it's awesome, I think — because I was part of it. And now, with that model in this decentralized repository, you can have clinicians, not just bioinformaticians, access it; you can use APIs. It's great.
They are now taking on a project to turn that into a FHIR-ized HL7 model, so that we can take it, hand it back to the HL7 Clinical Genomics work group, and say: be informed by this, and make the standard a little better. So that's the kind of stuff that's going on right now. It will take time, but we need investment to make it happen quicker.

Yeah, I was gonna address a couple of things. Basically, you're going to see PDFs for genomics in the real world for the most part. We sometimes live in this ivory tower where we see HL7 version 2 — which itself isn't great, but it's a step that much of the world doesn't even see; they just see PDFs. So that's step one. That's why I think a lot of people who are looking at developing proprietary ways are saying: well, the standard being developed, even if it's not perfect, may be better suited to some of our needs than nothing, and then they'll be able to adopt something. We especially have to look at how you map from version 2 to this, and some of the same LOINC codes are used: essentially, instead of an OBX, it becomes an Observation with a constraint (a toy sketch of that mapping appears at the end of this transcript). We've done some mappings, and it seems that for some genomics things it should be possible once the normative Observation comes out. You won't have the implementation guide to guide you, but you'll have a sense of the direction — enough to implement something that I think would be reusable in the future. Now, FHIR Release 4 is slated to come out probably in a few months, so it's not that long a wait. And if people want to influence it, now would be a good time. I already got an email from someone who wanted to join, and I added them to the mailing list, so feel free to join. And I liked Mary's idea about having a small group working together — almost like a jury: you lock them in until you don't get a hung jury, because otherwise they don't eat, and we get some decisions made. Sometimes that's what our work group meetings are like — but they do give us food.

And to second that: consider the cost of bringing together folks like Gil and myself, and even some clinicians and geneticists, into in-person two-day meetings — instead of these one-hour weekly meetings where people are given assignments they never really get to do, come back the next week, and it just drags on as the calendar months flip by. You need to be able to afford those in-person meetings on a really regular basis and pay for those people's time to invest in it. That'll get the job done. It's an IT project; it just needs to be funded.

Well, thank you. I want to thank each of the discussants, and also you, for your participation. We appreciate it very much. Obviously we solved all the problems, so we can just declare victory. Try not to focus on the problems at hand, because as we're gonna hear in a second, we have a picture coming up — so we're gonna smile.

Yeah, just to add: great session. Before anybody goes away to do anything else during this break, the first thing we need to do is take a group photo. Teji, where do you want us to meet?
Yeah, let's just go right outside, out the doors to the right, in that little garden area, and we'll have everyone face the building so we have the nice backdrop of the ocean. Okay, all right. And we will reconvene — try to reconvene — at 11.
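(A closing aside: a toy sketch of the v2-to-FHIR mapping Gil described above — the same LOINC code moves from an OBX field into a FHIR coding, so the OBX "becomes an Observation with a constraint." Real OBX segments have many more fields and data types; this handles only the simplest text-valued case.)

```python
def obx_to_observation(obx: str) -> dict:
    """Convert a simple text-valued OBX segment into a minimal FHIR Observation."""
    fields = obx.split("|")
    code, _, rest = fields[3].partition("^")   # OBX-3: e.g. "48004-6^DNA change^LN"
    display = rest.split("^")[0]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": code, "display": display}]},
        "valueString": fields[5],              # OBX-5: the observation value
    }

print(obx_to_observation(
    "OBX|1|TX|48004-6^DNA change (c.HGVS)^LN||NM_000518.5:c.92G>A"
))
```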