Okay, let's go ahead and get started. Thank you, everyone, for joining us today for Getting Involved with TOP Factor. My name is David Mellor. I'm the Director of Policy Initiatives here at the Center for Open Science. The Center for Open Science is an independent nonprofit organization located in Charlottesville, Virginia. We're funded by both government and private family foundations, with a mission to increase the trust and credibility of scientific research through transparency. We work toward that mission by identifying barriers to reproducibility through large-scale replication projects. We advocate and educate for incentives, policies, and practices that address the problems that lead to low reproducibility. And we build and maintain the OSF, a platform for connecting and collaborating on research projects, with built-in data sharing, registry, and preprint services, to enable the kinds of practices we want to see happen.

The outline for this webinar: we're going to give a little background about what the Transparency and Openness Promotion (TOP) guidelines are, why they were created, and what we've been doing for the past five years or so to increase adoption and remove barriers to implementing policies and practices that more directly align with the ideals of scientific practice. We'll also cover the needs that led us to come out with TOP Factor as a way to describe, more universally and precisely, what steps are being taken to promote open science practices. The second half will be about how to get involved with TOP Factor: how to point out policies that need to be updated, how to recommend journals for us to evaluate, or how to evaluate journal policies yourselves and send them our way. Finally, we'll talk about some of the plans we'd like to see over the next couple of years and how this can evolve.

In case you haven't heard, though I bet most of the people here online have: the TOP guidelines consist of eight standards that can be applied at three levels of rigor. They are tools for journals, publishers, and funders to take more direct action toward promoting open science practices. They cover data citation, in order to incentivize the publishing and sharing of datasets in the first place; datasets should be treated as citable objects, and individuals should get credit for that work. Data, materials, and code transparency are the core set of practices we want to see underlying empirical articles. Design transparency gets at the use of reporting guidelines, making sure it's clear precisely what was conducted so that when somebody wants to build upon your work, key details aren't missing. Pre-registration of studies involves putting a study into a registry prior to it being conducted, which helps open the file drawer and helps us understand the denominator: how much research is actually conducted every year compared to how much is published. Pre-registration of analysis plans starts to address some of the misuse of statistical analyses and, most importantly, makes the distinction clearer between confirmatory, hypothesis-testing research and exploratory, hypothesis-generating work, or work involved in model development or theory development. Keeping those two modes of research distinct is important for a variety of reasons. And then the final standard encourages replication studies.
Replications are the bedrock of scientific evidence in many applications, but they can be very hard to get funded or published, so TOP takes that on. Here are a couple of examples of the three levels of rigor at which these policies can be applied. The status quo for many journal policies is to encourage, or sometimes even discourage, some of these types of practices, and there's been a lot of empirical work showing that those encouragements aren't very effective at actually producing the practices we want to see. For example, encouraging authors to share data when it's not required doesn't lead to many openly available datasets. So all the TOP guidelines start just above that status quo with a disclosure requirement: state whether or not data are available, for example. Level two is a mandate: data must be made available in a trusted repository. There are exceptions for ethical and legal concerns, but otherwise the expectation is that data be made openly available. And level three is a reach goal: data must be provided to the maximum extent ethically and legally permitted, and somebody takes the effort to computationally reproduce the reported findings using the authors' original data. That is a reach goal. It's not going to be widespread in the very near future, but it is possible, and one of the main points of TOP Factor is to show what's possible by seeing what other journals are doing in related disciplines or in your own discipline.

Pre-registration is a little bit similar, a little bit different in some ways. The level one standard is the same disclosure expectation: the article states whether or not the work was pre-registered. That's important information to have. At level two, if the work was pre-registered, the journal checks for compliance with the plan or for transparent changes from the pre-registered plan. In most pre-registered plans that we see, when they are reported there are deviations and changes from what was expected to what was actually conducted. Those are all perfectly acceptable, but it is important for others to be able to evaluate the timing and the rationale for those changes. Undisclosed deviations from pre-registered plans could be cause for concern, not always, but in order to evaluate that, the deviations have to be disclosed. And then finally, level three is a mandate: inferential or confirmatory studies must be pre-registered if they're going to be published in the journal. Again, that's a reach goal. A couple of journals do that, and they're setting clear expectations about the types of studies for which pre-registration is expected. If you're going to do a confirmatory, well-justified hypothesis test on a sample and make an inference from that to a wider population, pre-registration adds a lot of value, and some journals and some funders are requiring it.

The TOP guidelines were created in 2014 and published in Science in 2015 under the heading "Promoting an open research culture." The purpose of the TOP guidelines was to provide recommended language and a framework for implementing best practices in scientific publishing and funding. One of the barriers to changing policy is simply not knowing precisely what to include or how to include it, so the policy language and the example templates provided by the TOP guidelines are all CC0.
We've been working with editors, publishers, and funders for the past five years to help with adoption and implementation of those practices and policies in those venues. Over the past four or five years we've run a signatory campaign to demonstrate norms and buy-in to the TOP guidelines framework. There are over 5,000 signatories of the TOP guidelines, representing every major publisher and every major discipline: wide support for the philosophy of promoting these types of practices and for working toward implementing one or more of the different practices and policies. A signatory of the TOP guidelines is an entity that is saying, we support the principles, and we will review the policies over the course of a year to determine which, if any, are appropriate for implementation.

We have seen widespread implementation of one or more of these TOP policies, so we've been doing our best to track those and to point to examples of publishers, journals, or funders implementing TOP policies. We know of just over a thousand journals and funders that have TOP-compliant policies, but that's actually a pretty crude measure of implementation of these standards. What we mean by a TOP-compliant policy is that the author guidelines have at least one policy that is at least level one on one of those eight standards. So that doesn't give all that much information about who is moving and who is implementing these various practices. We've seen a wide range of societies, publishers, and journals implement these practices across the life sciences, social sciences, and so on, many of them, as you can imagine, at very different levels. There are several that are taking very direct, very high-level steps: computational reproducibility, or implementing the two-stage peer review of the registered report format. And we see widespread adoption of that level one philosophy of stating whether or not the data are made available, for example. But we also know, looking through those 1,040 implementers, that the most common implementation of a TOP-compliant policy is describing how a dataset should be cited, which is a great practice of course, but there's a lot more that can and should be done to promote these policies.

We've also seen the implementation of several similar styles of data policies. Back in 2018, we started seeing Wiley, Springer Nature, Elsevier, and Taylor & Francis come out with series of data policies that can be applied to the journals they publish in a modular way. They describe them in slightly different ways, but there are policy types or policy levels with descriptors like basic, share upon request, made publicly available, encourage, expect, mandate, verify. Those are all different types of policies that have been implemented by the major publishers and by individual societies. And so we wanted to give a little bit of clarity into which of those policies comply with the expectations of the TOP guidelines. Remember, a basic level one implementation of TOP does require something: it requires that each article have a data availability statement. All of these publisher policies refer just to data transparency. All the publishers have policies that comply with that level one disclosure requirement, and all of them have policies that comply with that level two mandate that data be made available.
And then a couple of journals and a couple of publishers have policies that comply with that higher expectation; there's actually a wide spread of policies at that verification level. But what we're really looking for in a data transparency level three policy is that extra step of computational reproducibility. There are journals that do that, and there are publishers that point to how it can be done, but that's only gone so far. So we're looking for ways to provide more specific information.

Up to this point, we did not have a comprehensive database of journal policies as they relate to everything covered in the TOP guidelines. We were very frequently asked for examples, or for information about how much any given policy is being implemented across a particular discipline or sub-discipline, and we had no public way of providing that information clearly and quickly. We also didn't have a means of providing very direct feedback on specific attempts to implement policies covered under the TOP guidelines. We're in frequent communication with a wide number of journal editors, policymakers, academic societies, and publishers, but that had been quite ad hoc, as you can imagine: a variety of different ways of interacting with them as opportunities became available. What we have now in TOP Factor is very specific and direct feedback on how a given policy relates to the framework covered by the TOP guidelines.

Up to this point, it's also been difficult to compare and learn from what others are doing in your discipline, in other disciplines, or across publishers. A good example is a lot of the work being done by the political science community or the economics research community, the American Economic Association. Both of those have several journals where computational reproducibility is being tackled head on. Lessons learned there should apply to many other communities, and comparison across communities will make that more easily accessible. An individual looking to raise the expectations for the work they publish will be able to see examples of what other disciplines are doing; within a discipline, seeing which practices are being taken up allows the same kind of comparison and the same kind of lessons to be learned.

We also want to give more consistent recognition to those implementing best practices. As examples came across our desk, we would promote them, put them on our website, and showcase what we were able to see, but that was, again, a bit of an ad hoc process. We wanted to provide more consistent recognition of journals and publishers that are taking these steps head on. Those who are implementing level three practices deserve recognition and credit for the work they're putting in, and this provides a way to do that which is unbiased by whoever I happened to have heard of most recently, for example. And then finally, awareness of those not taking the minimum steps. There's often a discussion about carrots and sticks for how to make progress in scientific reform. Up to this point, we've been promoting a lot of best practices and encouraging journals to take further steps.
But we've known that a large number of journals, and a large number of the articles published in them, aren't meeting really basic expectations, what I would say should be requirements, for the minimum level of transparency that should be expected of empirical research claims. At the same time we're providing very clear guidance, tools, and resources for taking a step up and implementing some of these better practices. But it does take awareness of how many journals and how many policies are not being implemented to the degree that they should be.

So that all comes to TOP Factor, a database where you can see and evaluate the policies and steps being taken by a large number of journals and publishers. Let me do a live demo; I've got a video backup just in case I crash something. This is available at www.topfactor.org, and it's a database of journal policies. Each of the TOP standards is listed here at the top of the page (let me try to make this bigger): data citation; data, code, and materials transparency; design and analysis guidelines, which looks for those reporting checklists; study and analysis plan registration; and replication, whether or not the journal encourages replication studies. For replication, level two would be encouraging replication studies and reviewing them with the results stripped out of the review process, and level three would be encouraging replication studies to be submitted before the study is conducted; that is of course a registered report. There's a separate standard for other types of registered reports: does the journal encourage submission of novel research studies as a registered report? That would be a level three policy. Level two would be what's known as a hybrid registered report, submitting those studies with the results removed, again. Level one is a basic policy of simply stating that the journal will publish results regardless of their novelty or significance. Then finally, open science badges, a way to indicate whether or not the data, materials, or registration underlying the reported results are available. That's a visual indicator located on the journal article to point to the fact that more transparency underlies the reported findings than might be required by the journal.

If you're particularly interested in seeing which journals are taking steps toward that data transparency standard, you can sort by data transparency. You can sort by analysis code. By sorting there, you can see all the journals taking that highest step; you can see how many journals are, for example, at level three of data transparency, and those are good examples to follow. You can see journals where registered reports are accepted for replication studies. Or maybe you're focused on areas of empirical research where registration or replication studies are of interest to you: you can filter and just focus on journals that publish empirical articles and the steps they're taking for data, materials, and code transparency, for example. The total in this column is updated as those filters are applied. The total represents the sum, as you can imagine, of all of their policies, level one, two, or three for each. The highest possible total at this moment is 29; the open science badges have two possible points, for giving one badge or multiple badges.
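As a side note for anyone who prefers working with the downloadable data rather than clicking through the website: below is a minimal sketch of the same kind of filtering and sorting in pandas. The file name and column names here are assumptions made for illustration, not the exact schema of the published export.

```python
import pandas as pd

# Hypothetical file and column names -- the real export on OSF may differ.
journals = pd.read_csv("top_factor.csv")

# Journals at level 3 on data transparency: good examples to follow.
level3_data = journals[journals["Data transparency"] == 3]
print(level3_data[["Journal", "Data transparency", "Total"]])

# Psychology journals only, sorted by total TOP Factor score (max 29).
psych = journals[journals["Discipline"] == "Psychology"]
print(psych.sort_values("Total", ascending=False)[["Journal", "Total"]].head(10))
```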
And you can sort, of course, by whatever total you're interested in looking at. You can filter to see what steps various publishers are taking, and you can filter by discipline. So you can see what the economics journals are doing and compare them to psychology, for example, or just focus on what the psychology literature is doing. Okay, that's it for the demo for right now, but feel free, I encourage you, to play around with it more.

Now, questions: one just came in about that level one and level two for badges. Again, that's whether they offer one or multiple badges. The specific rubric describing zero, one, two, or three points for each of these standards is available, and I'll show where it is in a moment. Oh, that's a really good one, thank you, Malika, something I had in my notes that I forgot to demonstrate. Let me go right back here to these little blue dots. These are the justifications or explanations of policies. There is ambiguity in lots of these author guidelines about what precisely is required, and these blue dots have hover text with a justification for why a level zero, one, two, or three was warranted in each case. I'll give a couple of examples later on of where we see a lot of ambiguity, but this is just an explanation to the author or the editor describing why this level was determined. For example, there could be a very in-depth data policy that doesn't actually require anything. In that case, if it was rated level zero despite a lot of explanation about how to share data, we would explain that no data availability statement is required and data transparency is not required, and so it does not comply with the TOP guidelines. Those blue dots give a little explanation of why that score was applied. Here's one where code availability is strongly encouraged, but again, encouragement alone doesn't satisfy the TOP policies. I'll show you where you can find this evaluation rubric, but each of the scores comes down to asking whether all the underlying data must be made available, or whether there must be a data availability statement, and so on.

A couple of summary stats for what we've seen so far. We're up to 346 journals in the TOP Factor database; I'll show you where you can download that and see precisely which are included. Scores range from zero to 27 out of a maximum of 29. The mean score is 4.9. The median is 3, meaning that half the journals in the database are at or below 3. The modal score used to be 1 because of the large number of journals that have that data citation encouragement, but we've been updating over the past couple of weeks, and the most frequent TOP score is now a zero. We started sharing this with journal publishers and editors about four weeks ago. It's a bit of a rough estimate at this point, but about 35 journal policies have changed based on discussions we've had, where expectations that editors thought were explicit turned out not to be explicit in their author guidelines once they saw the results of these TOP scores. All of this information is available on our website at cos.io/top. It has information about the rationale for the TOP guidelines and how to use them, and, importantly, what steps you can take to get involved, and that can start with suggesting journals for us to include in the database.
We have a bit of a backlog of journals that we would like to get up there. We share evaluations with the journal editors before we put them on TOP Factor publicly, and we track that internally, but please do send us your suggestions. I can't guarantee precisely when they'll get up there, but we do track those requests, and we'll add them when we're able to. And, I'll get to this in the next couple of minutes, please do submit journals that you have evaluated yourself. We have a form for that. If you have a couple of journals that you would like to see on TOP Factor, send us your evaluation; we'll check it, compare it with what we see, and then upload it to the website. And finally, we do make mistakes. If you see a policy that's not accurately represented, please let us know, either by email or through that suggestion form, which notifies us that we need to take a second look at a policy. That often starts a conversation between us and you, or us and the editor or publisher, to make sure expectations are clear.

If you click on "submit journals you have evaluated," you'll get to the submission form. You can send us evaluations based on that rubric I pointed to; the rubric is available on the website right here. It's a pretty clear set of questions. Do they have a policy on data citation? Do they require it? Do they state that they'll check it? And, getting to that blue dot that was asked about a minute ago, there's a place for the justification. If you're not sure, if you think you're being too lenient, make a note of that; if you think you're being too strict, you can note that it doesn't appear that they actually require something, but the language is ambiguous. For data transparency, again: state whether data sharing is merely encouraged or not even mentioned, whether the article must include a data availability statement, whether data must be made available, or whether results are computationally reproduced. And again, you can add a little justification if there's some ambiguity.

We see several common issues when we're evaluating these data transparency standards. The three most frequent ones get at maybe 80% of the questions we have when looking at a data policy. First, the policy must apply to all the data underlying reported results. Oftentimes, author guidelines state that only a subset of the data has to be made available, for example, in communities where there's widespread agreement that this is the repository where everybody puts their genomics data. That's good; we encourage those policies and have nothing against them, obviously. But the TOP guidelines state that all the underlying data used to generate the reported findings must be made available, especially the statistical data, those samples taken in order to make inferences to a wider population. We see a lot of benefit in transparency into that type of dataset. Second, "available upon request" is not compliant with the TOP guidelines. A policy saying that the authors must make the underlying data available to readers if there is a request is not compliant; there is a lot of empirical evidence that this doesn't actually lead to much data transparency. Third, we see a lot of policies that strongly encourage data sharing, or say that data should be deposited, for example.
And unfortunately, that's sort of an unenforceable expectation. You can't go to the journal and say, this article does not comply with your policies, can you please help me figure out what to do about this, if the policy simply says that the article should have data available. So those are the types of statements we frequently see in author guidelines that do not comply with these TOP standards.

Design and analysis, the use of reporting guidelines or reporting checklists: again, the purpose of this standard is to make clear precisely what was conducted and to report all of the important statistics and design elements necessary for understanding precisely what the methods were. Many people will be familiar with how methods sections have been shortened over the past few decades, to such a degree that it's often impossible to tell precisely what was done. That's one of the major barriers in our reproducibility projects: it's impossible to tell precisely what the design was. So a checklist can help remind the author of the important details to include in their manuscript. We often see author guidelines pointing to resources such as the EQUATOR Network; I'm not sure how many reporting guidelines the EQUATOR Network curates, but it's several dozen, probably several hundred. Some of the major ones are the CONSORT guidelines, the ARRIVE guidelines, or the PRISMA checklist. Different disciplines and different study designs require reporting different types of information, and there are communities that have taken that on with well-curated checklists. A couple of questions have come in; I'll make sure to get to those at the end. The journal-created checklists at Nature and the STAR Methods at Cell Press are good examples of an individual journal stating, this is what has to be described when reporting the results of empirical work. And there are lots of societies that have taken this on too; APA's JARS standards are a good example of items that need to be included when reporting empirical research.

And finally, a couple of other steps: note whether the journal encourages replication studies, or whether it encourages replications as part of a registered report format. Most journal policies don't mention anything about whether replications are appropriate for the journal, and there are still author guidelines out there that specifically discourage them. It's similar with registered reports: does the journal accept this format?

There are several benefits to TOP Factor. It is transparent: all the data it's based on are made available on our platform, and we point directly to the author guidelines that we're using to evaluate each journal. It's based on practices that are directly associated with core values of how science should be conducted, as opposed to significance or novelty or newsworthiness. Those are not scientific values; they give other information that's helpful to have, but they don't get at the importance of the underlying evidence. It evaluates something that the journal controls. You can't control lots of things in life, but a journal can control precisely what steps it's taking on these fronts, and so it's very easy to change a TOP score. Importantly, this diversifies away from all the other metrics out there that focus only on how much attention a journal article grabs. Again, that's fine information to have; we don't want to eliminate it.
I don't think the world would come to an end if we did eliminate it, but I think it's important to have other ways to evaluate the steps being taken by a journal.

There are limits to what TOP Factor is. It is still a journal-level metric; it does not directly speak to individual articles. A good example: a TOP Factor score of eight might come from disclosure requirements for all of these practices plus encouragement for submitting replication studies. That journal is taking measurable steps in the right direction, but if the answers in a given article are all no, no, the data are not available; no, I will not share my materials; no, I'm not going to fill out the checklist, then of course the evidence underlying that reported finding is no more transparently available than it would otherwise be. So it's still a journal-level metric that doesn't necessarily apply to the individual articles published there, and we wouldn't want to imply that it always does. Registered reports are another good example. A journal offering this as a format is taking a great step toward addressing publication bias and the incentive to present exploratory findings as if they were confirmatory. We obviously greatly encourage that, but not every article published in a journal is going to be a registered report, nor should it be. So that, again, is an article-level property that isn't reflected by this journal-level score. Finally, there's a risk of gaming through unenforced policies. It's fine for a journal to assert that it requires x, y, or z, but of course the expectation is that those policies are actually followed. If the journal states that all underlying data must be made available, or that a really good reason for why they're not must be described in the disclosure statement along with steps to take to access the data, and then doesn't follow up on those asserted policies, it is in effect getting credit for being more transparent than it deserves to be. The solution to that is an auditing process that we would like to help develop with the community.

So, to the future of TOP Factor. We obviously want to get a lot more journals covered by the TOP standards. We want to get to about a thousand by the end of the year, so we need your help. Send us recommendations for journals to evaluate or, better yet, submit evaluations of journals in your field that you have done yourselves. You can also submit them to us directly in a CSV file using the same format in which the data are available on OSF, if you don't want to go through that Google form; please do send us those if you'd like. I think we're on track to get to about a thousand by the end of the year, but I don't think we'll get there without a little bit of extra help. As I mentioned, audits are going to be necessary. We don't yet know the fairest way to do that, whether it's a sample or every article published over some timeframe, what counts as an unenforced policy, or how to display that information on TOP Factor or someplace else. But we are transparently showing what each journal is asserting, and I think it's only fair to have a way to check that those policies are being enforced the way they're expected to be. And of course, we have ten fields on TOP Factor right now: the eight TOP standards, registered reports, and badges. I think there's a really good argument to be made that transparency into peer review is one additional step that could address some bad practices in scientific publishing.
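To make the arithmetic behind that total concrete, here is a rough sketch of how those ten fields could be summed: nine fields scored 0-3 (the eight TOP standards plus registered reports) and badges scored 0-2, for a maximum of 29. The field names are my own shorthand for illustration, not the exact column names used in the OSF dataset or the submission form.

```python
# Shorthand field names for illustration only; see the posted rubric for the
# official definitions of each level.
STANDARDS = [            # each scored 0-3
    "data_citation", "data_transparency", "analysis_code_transparency",
    "materials_transparency", "design_analysis_reporting",
    "study_preregistration", "analysis_plan_preregistration",
    "replication", "registered_reports",
]
BADGES = "open_science_badges"   # scored 0-2 (one badge vs. multiple badges)

def total_score(evaluation: dict) -> int:
    """Sum one journal evaluation: nine 0-3 fields plus a 0-2 badges field (max 29)."""
    for field in STANDARDS:
        assert 0 <= evaluation.get(field, 0) <= 3, f"{field} must be 0-3"
    assert 0 <= evaluation.get(BADGES, 0) <= 2, "badges must be 0-2"
    return sum(evaluation.get(f, 0) for f in STANDARDS) + evaluation.get(BADGES, 0)

# A journal with level-1 data citation, level-2 data transparency, and badges offered.
print(total_score({"data_citation": 1, "data_transparency": 2, BADGES: 1}))  # 4
print(total_score({**{f: 3 for f in STANDARDS}, BADGES: 2}))                 # 29
```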
There are probably a lot of others, so all of those are on the table for future inclusion in TOP Factor. With that, I'd like to say thank you. There are a couple of questions that have been submitted; I'll make sure to get to those. If you have more questions, please submit them through the Q&A panel.

Does TOP Factor replace the ICMJE recommendations? The International Committee of Medical Journal Editors has specific recommendations for steps that should be taken in publications. TOP Factor does not replace what ICMJE is doing. They have a set of specific criteria, particularly in clinical medicine, for what needs to be made available. I don't remember precisely how many journals are on their committee, but there's a core set of members, and then a wider community that has asserted that they follow the recommendations of that committee. So the committee provides specific recommendations. I believe, and I might get this wrong, so please correct me, their requirement concerns disclosure of individual patient-level data for any clinical trial: there needs to be a statement describing how to get access to that data. So it's kind of a level one policy, but with an expectation that there be some means by which the data could be made available. They also obviously have strong recommendations regarding registration of clinical trials. Registration of clinical trials has been required by law for about 20 years now on clinicaltrials.gov, and ICMJE stated a few years after that that none of their journals, or journals taking their recommendations, should publish the results of a clinical trial if it was not prospectively registered, that is, registered before the first patient was enrolled in the trial. Most of our focus is outside of clinical medicine; as I just described, there are strong community norms and strong legal requirements for rigor and transparency in that field. We see this as a complementary effort. I think we have a lot to learn from each other, but there are no plans for TOP Factor to ever replace, for example, what the ICMJE is doing. That's not on the table.

Yes, for the open science badges right now, the score is zero, one, or two. There is the possibility of more badges in the future, particularly for analytic code or for other things, and so that could go up. We just wanted to give a little bit of transparency into what steps are being taken to recognize when data, materials, or registrations are available. So that one is subject to change as the badges evolve. And it doesn't have to be the Open Science Badges that we promote: there are a couple of other publishers and journals that indicate when data are available through kite marks or other visual indicators in the table of contents. That's the criterion for that badge.

Is there anything you can say about the feedback you have had since launch? The biggest feedback we have gotten has been folks reaching out to us saying, I really disagree with this score; I don't quite understand why this evaluation is being presented in this way. Those have been very direct conversations about what is required or what is encouraged in order to get published in a given journal.
They have all been extremely fruitful in the sense that they point to very specific language and very specific expectations about what is or is not being required as a condition of publishing with any given publisher or in any given journal. That has been the focus of most of our conversations with publishers and editors over the past couple of weeks about what this TOP Factor means, and it has led to several, at least 35 at last count, clarifications in author guidelines about what is expected or what is required. Some of the policies, data citation is one, are things a lot of folks haven't really considered. It's very uncontroversial, of course you should cite a dataset if you're using it, but there have been a lot of discussions along the lines of, oh, I didn't think of that as something important to require, or about stating whether the citation should be in the reference list, which is where citations are typically counted.

I'm just going to go through my backlog here. If I mark yours as done but you disagree, or I didn't answer your question, please just raise your hand again.

What if the journal suggests an external badge-awarding site but doesn't display the badge on the paper or in the table of contents? That probably wouldn't count. It's important for the journal to signal to its readers that the data, for example, are available. We would have to take a close look at it, but the underlying rationale is to give additional recognition to an empirical article that is more transparent than is standard these days, so it's hard to see how pointing to an external badge-awarding site could satisfy that underlying requirement. I think it would not comply with that policy, but we could of course take a closer look. And importantly, if we do see good implementations that aren't technically in compliance with the way TOP is framed at this moment, the feedback from those processes is being used to improve the TOP guidelines. That will be the focus of future work: clarifying the levels, adding levels if they're justified, or giving additional guidance on what counts as best practice for each of these standards.

Has it been a manual process to go through the journal instructions to do this analysis? Yes, it has. We are aware of a couple of drafts of machine learning algorithms, or more brute-force attempts, to score these types of policies, and that is probably the future. We know there are about 30,000 journals out there in the wild, and I don't think we could get to all of those through a manual process, obviously. So we're looking at more machine-readable ways to do it. It's one of those things that takes a lot of resources to start but then obviously gains a lot of efficiency. In the near and medium term, we're focused more on crowdsourcing: folks sending us their evaluations, us checking them and putting them online. That way we think we can get to a fairly decent set of journals that a decent percentage of the scientific community would look to when considering where to publish. We think we can achieve that goal through manual and crowdsourcing efforts; as time goes on, more automation is probably going to be needed. I guess that's all I can say about that; that's my only expectation right now.

I'll stay on for another few minutes just in case any other questions come up, but otherwise class will be dismissed in just a moment or two. If you'd like to get in touch, I should put my contact information on here.
Feel free to get in touch and we can talk more. All right, thanks, everyone. We'll go ahead and end, so have a good day.