How's it going? We're going to get started. I'm Ryan Lane. This is Tim Bell. We're going to give the results of the user survey and the feedback. The data was gathered and compiled by Tim Bell, Tom Fifield, myself, and JC Martin. Unfortunately, JC can't attend the summit. So basically, I'm going to go over the results of the survey itself, presenting the data with graphs, and Tim is going to go over the feedback and comments. He's also going to talk about the ongoing activities that we're working on from the user committee and how everyone else can get involved. For this specific survey, we had a few new things. The first was a new type of input for the survey itself: how do you actually consume OpenStack? That's specifically a survey for application developers. Also, in previous versions of the survey, you couldn't easily choose whether your deployment was dev/QA, production, or proof of concept. We've added that to the survey. We also have additional input for comments. For instance, in the previous surveys, we only had comments for people to say what they disliked about OpenStack, and we heard nothing about why they actually liked it. So we've added questions for that. So, getting into the actual survey results. Cumulatively, we've had 1,780 responses in total across all of the surveys. The statistics are actually quite difficult to show if they're broken down for each individual survey, so some of the data I'll be presenting is aggregated over the entirety of all the surveys, and some will be specific to the period between the Icehouse and Juno summits. In aggregate, we have about 506 deployments reported in the surveys and 512 companies. Out of all of the survey respondents, 293 are user group members. If you notice, North America, Europe, and Asia are again our largest regions.
And of course, North America is still in the lead, with Europe second and Asia third, but relatively close. The style of this presentation for the graphs is really more for reference than for me presenting. We've found in the past that people tend to reference these very often, and that for presentation purposes the room is usually not very full. So I apologize in advance if the slides are not as easily readable as they should be. I'm not going to go into full detail on everything, but later you'll be able to read the slides on SlideShare if you want to look at the results in detail. This shows the number of countries for deployments and survey results. You can also find the results of the last survey online if you would like to compare some of these numbers yourself. Some general statistics: for organizational size, it's actually quite diverse. We don't mostly have people from small organizations or from large organizations; it's very well distributed, which is very nice. For business drivers, the results fit very well with typical open source projects, in that the strongest business drivers are avoiding vendor lock-in, cost savings, operational efficiency, open technology, flexibility of technology choices, and the ability to innovate and compete. Time to market is also there, but it's actually not one of the higher results. All of these correlate very strongly with most open source projects and what people use them for. For the stage of deployment, the majority of deployments right now are in proof of concept. It would be very nice if we could try to target moving people from proof of concept into production, but it's also very healthy to see this many people in proof of concept. For the type of industry, there is a very large percentage that's IT. I think this may be a result of how we're asking the questions rather than a true picture of where our diversity is.
Personally, when I filled out the survey, I chose information technology for my deployment even though my sector is education, but it's not academic. So there are probably some tweaks we can make to this data. But the vast majority is information technology, with academic second and telecommunications third. Then there's a large "other" group which, when split out, is actually very diverse. So there's probably a decent amount of the information technology category that could be broken down as well. The information sources make me very happy: the top result, by quite a bit, is docs.openstack.org. It shows that our documentation is going very well and that people are using it as their primary resource. Blogs are second, the OpenStack mailing list is third, the operations guide is fourth, and Launchpad Answers, which is somewhat of a surprise, is fifth. IRC is after that. Looking at user types, as usual the survey results are very heavily skewed towards cloud operators. This has been a trend for all of our user surveys. It's a lot easier for us to get this survey out to cloud operators because they tend to come to us a lot, whereas some of the other groups of users don't find us as much, especially cloud consumers. For deployment types, by a vast majority, the most common deployments are on-premises private clouds. Past that, it's hosted private clouds, and then public, hybrid, and community. I'm actually very happy about us adding community to this list, since that is the category my personal cloud is in, and this is the first time I've had a proper section to put it into. Going into the specific results, we've broken them down by dev/QA, proof of concept, and production. All three have the exact same sets of data, so I'm not going to go into much detail on any of these, and I suggest that you go and look at the slides and use them for reference when you need to.
From the dev/QA point of view, everything is very heavily skewed towards smaller numbers of objects, cores, storage size, IPs, compute nodes, and instances, which tends to show that in dev/QA people use smaller clouds. Also, with releases, and this is actually very consistent between dev/QA, proof of concept, and production to a point, people tend to use a release that's at least one year old, generally more than one year old. This is most likely due to the difficulty of upgrades, but also probably because people are very busy with their current clouds and don't have the ability to upgrade as easily. For services, it's generally the core set of services that see the highest amount of use, mostly oriented towards Nova. Looking at deployment tools, hypervisors, network drivers, identity drivers, and API formats: by far, the usual large DevOps tools are the ones people are using. It's Puppet, Chef, and SaltStack for the non-OpenStack-specific tools. DevStack is very heavily used in dev/QA, and Packstack is the leader in the "other" category. For hypervisors, KVM is by far the most used hypervisor in all OpenStack deployments, with Xen second and ESX third. For network drivers, Open vSwitch and Linux Bridge are the top two, with Cisco coming in third. Past that, there's actually a very good distribution of vendor drivers being used, which shows a very healthy ecosystem. From the identity driver point of view, most people are using the SQL driver, with LDAP a strong second, and all the rest of the drivers relatively small but not unused. For API format, I was actually very surprised to see that XML is relatively heavily used, even though it's deprecated and will probably be removed in future versions of OpenStack.
From the operating system point of view, it tends very heavily towards Ubuntu, with CentOS and Red Hat a close second and third. For the number of users, we actually have a lot of people who would prefer not to say how many users they have. This is unfortunate. It would be much nicer if people would tell us more, especially since the user committee itself operates under non-disclosure: we don't share the information, we only provide aggregate statistics, and knowing how many users people have really helps us understand what's actually going on in the ecosystem. But in dev/QA, the number of users tends to be on the much smaller side. For block storage drivers, the open source products are definitely the highest contenders; the only one near the top that isn't open source is NetApp, which is the highest of those. But the "others" category shows a very high diversity in the storage drivers people are using. For proof of concept, the results are very similar to dev/QA and production, so I won't spend much time going through those or the production statistics, except to show some of the differences. Releases and services tend to be similar across all of these. For production, all of the statistics skew more towards the larger side than the smaller side. Obviously, over all deployments there are going to be a lot more small deployments, just because we have a very large number of deployments. But in comparison to dev/QA or proof of concept, there are a lot more larger clouds in production. For releases, it's actually much worse in production from the perspective of people using, or at least sticking with, much older releases. And in dev/QA, proof of concept, and production alike, we still have survey results showing people using Austin, which is obviously wrong; no one could possibly still be using Austin.
It's most likely that people are just not updating the survey with that specific result, though I do know that there are still people using Cactus and Diablo in production at this point, because they have no upgrade path from those. Otherwise, in production, services still orient very heavily towards the core services, with some of the non-core services now showing up, which is good to see. Similarly for production, the deployment tools are relatively the same. The hypervisors are also relatively the same, with ESX actually one deployment higher than Xen. From the network driver perspective, I believe production is the only category that shows "no network" as being heavily used, which I find somewhat hard to believe, but that's what the data shows. Past that, Cisco is still the largest in the "other" category, but the diversity is very good. And similarly in production, quite a few users say they're still using the XML API format. Another thing I should mention about production, dev/QA, and proof of concept alike is that now that we've added the compatibility APIs to the survey, we do see that a lot of people are using the EC2 and S3 compatibility APIs, which in the past did not show up in the data. This suggests we should probably be putting some emphasis on these compatibility APIs, where in the past we really weren't. So, some interesting conclusions from all of this data: our community very heavily favors open source software across all of the deployments, in terms of the software people actually use. So when we're focusing on providing something to end users who are testing and doing proofs of concept, it's very important that the open source portions of our software are rock solid in comparison to the vendor-specific ones. If we don't do that, we're not providing proof-of-concept users, at least, with a very strong product.
Some comparisons between Icehouse and Juno: for the most part, it's good, it's up and to the right. There are a few odd things in the data. Specifically, the number of companies has gone down, which suggests that people are editing their survey results and removing their company name. The number of responses we've gotten has also gone down, but seeing as the first time we did the survey we had a very small number and the second time we had a very large number, it shows that we're still reaching a lot of people; we just didn't get that initial giant burst of everyone putting their data in at the same time. Otherwise, for user types, the numbers in each category are continuing to go up, and the same goes for stage of deployment: the numbers are going up and they're all relatively even. This time we did an app developer survey, which gives us some really great information about how our developers are actually using OpenStack. For application libraries, the vast majority of our developers are using the OpenStack clients. Past that is jclouds. The next largest result is that they're using no library at all, which means they're probably just hitting the API endpoints directly. Past that are Fog, Deltacloud, php-opencloud, pkgcloud, and then a set of other libraries. For languages, it skews very heavily towards Python, with Java second, Ruby third, and PHP fourth. The business drivers are basically the same as the business drivers for the regular deployments. The configuration tools are relatively similar, with the exception that you start seeing things like Heat and Docker coming into the mix, skewed very heavily towards developers over operators. And here there are actually a lot fewer people using the XML API versus the JSON API.
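For the developers using no library at all and hitting the API endpoints directly, a raw session typically starts by requesting a token from Keystone. Here's a minimal sketch of building that request body for the Keystone v2.0 API; the credentials and tenant name are placeholders, and actually sending it would need an HTTP client pointed at a real endpoint.

```python
import json

# Hypothetical sketch of "no library at all" API access: constructing
# the Keystone v2.0 token request body by hand. Username, password,
# and tenant here are placeholders, not real credentials.

def token_request_body(username, password, tenant):
    return json.dumps({
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    })

body = token_request_body("demo", "secret", "demo")
# This body would be POSTed to http://<keystone-host>:5000/v2.0/tokens
# with Content-Type: application/json to receive a scoped token.
print(body)
```

The token returned in the response is then passed as the `X-Auth-Token` header on subsequent calls to the other service endpoints.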
We also have some statistics on what people are using for developer environments, in case we want to target which developer environments we should be providing extra support for. The demographics for the app survey are very similar to the deployment survey, so I'm not going to go much into those either. Now I'm going to hand it over to Tim for the comments and feedback. So, along with all the statistical data we've gathered based on multiple-choice answers, we also gave people a general free-format box in which they could put various comments. The areas we asked for feedback on were: what should the priorities of the user committee and the foundation be? What do they actually like about OpenStack? This was a new question, because we'd felt in the past that there wasn't any easy way for people to say, hey, they really appreciated some features. We then asked what should be the points of emphasis, the things we should be looking at most in 2014 across the whole community. And then a specific question at the request of the Neutron team: between Nova Network and Neutron, what were the things keeping people on Nova Network? So with these comments, what do Ryan and I do? Because all the information is under non-disclosure, we make really sure there is no way of tracing a comment back to its origin. This means we go over all the comments and remove any reference to companies, or anything we feel could be traced back to the person who made the comment. Then we summarize those comments and provide them to the PTLs. The aim is that we're then able to guide some of the discussions in the design summit on the basis of feedback from the people using OpenStack. The basic structure is that we went over the various areas. In total we've got about 1,000 comments.
So that gives you a feeling for how happy people are to voice their opinions. In particular, some of them are quite blunt and required quite a lot of editing before we forwarded them on to the PTLs. It's very nice to see people being so open. This was my favorite comment. For those of you who've sat through this discussion in the past few years, the docs team has often been on the receiving end of the negative comments, and I think it was really great to see not only this but lots of other comments along similar lines. This is backed up by the fact that docs.openstack.org is now right at the top of good sources of information, and I think Anne in particular deserves a lot of credit for that, along with the rest of the team. So what do people like about OpenStack? This was supposed to be the area where people could be positive. In general, there was lots of emphasis on community, open source, the ability to do what people wanted to do, and the fact that they could easily extend things. Quite a lot of people are looking to tweak things in different places when it's not quite what they want. People liked the fact that they could have a look at the code, work through it, and understand how it works. But one of the comments I also liked a lot: we asked someone what they liked about OpenStack, and they wrote in the box, "not a lot." So I hope that maybe next time round we'll see a bit of improvement from that person, if we get a chance to make some improvements from this survey. Scalability, easy automation, and then a very honest answer from someone: working on it is what gets their paycheck. So that's a sample. In general, what I've tried to do is highlight the larger, typical items, but also a few that particularly caught my eye. The comments around these get forwarded on to the PTLs and the other relevant bodies as they need them.
In terms of user committee and foundation priorities, this matches quite a lot of what we're hearing in the various sessions, and the kind of items that have been discussed during the keynotes. One item that came up a lot was the feeling that we need to find good mechanisms for feeding comments from operators and end users into the development community. As you will have seen, this is already taking place to a fair extent during this summit, and it's something we hope to carry on driving further forward. We need to find ways to close the gaps on enterprise IT. Some of you were in the earlier Intel session, where they explained some of what they've been doing to look at enterprise needs, and there's a further session tomorrow, a birds-of-a-feather session on Win the Enterprise, which is very much focused on trying to close the gap with enterprise users' expectations. This also probably explains why, in the proof-of-concept sections, there's a fair number of users who are basically in the proof-of-concept stage waiting for closure on some functional gaps they need before they go into production. Some people asked for industry-specific working groups. The idea there is that rather than grouping people by, for example, big clouds, small clouds, or geography, there would be ways for people in the same industry to get together. So we'll see if we can find some people to try that out as a concept. We're already seeing some of that in the media industry, where there are associations forming, but pushing that out into some of the other user areas would be good too. There was a desire to focus on the people using OpenStack.
Now that we're seeing deployments out there, we also need to reach out further to the people who are using the CLIs, the APIs, and Horizon, and work out what we can do to make those more consistent and more usable. There have been requests to expand training and certification. This, I feel, is one of those cases where we probably need to expand communication about the work that's already been done. There's already a major part of the marketplace, as the foundation was describing, with descriptions of training and certification, but it's clear that information has not got out to the people filling in the survey. So I think that's probably more of a communications exercise than one requiring major new activity. There's the request to actually define what OpenStack is. This touches somewhat on the work that's been going on around DefCore, defining some compatibility APIs, but it also touches on the fact that OpenStack keeps growing in some areas, and at what point do we say, okay, that's not OpenStack, that's ecosystem? And finally, there was a fair amount of input on Launchpad. For those of you that have used it, it has some interesting features, and this is one of the areas the developers are looking at as part of the design summit, to see if there are other tools around. I think StoryBoard has been mentioned as an alternative to make that experience a bit easier. The aim is that it shouldn't just be an experience for developers, because increasingly we're asking end users and operators to get involved in that process and contribute, and therefore this sort of tool needs to be made as simple as possible for early entry. So, focus for further enhancements: these are the things people are looking for to come out of the survey, get fed in, and then be included in the plans for future releases.
There are three items here that I've highlighted as ones where, when we went over the comments made this time round compared to last time, there were a lot fewer of them: the docs area, where there's still some room to work, but equally installation and configuration, and security. I hope this is because people have seen improvements in those areas, rather than just that they are no longer commenting on them. Already we're seeing that people aren't voicing the same level of "I can't install it"; it's been more a question of guiding them to the fact that there are lots of tools out there to help you install, and it's not just a matter of pulling down the source code and trying to find the installation instructions. In other areas, several comments came along the lines of: there should be a lot more focus on getting the core functionality stable rather than expanding the functionality further. As Troy mentioned, there is debate on this in the community. I think one of the strengths of the community is that there are different perspectives on it, and this will be one to follow over the next six months, to see how much it features again in future surveys. Many requests for zero-downtime upgrades. This is an operation which is increasingly becoming possible, but there are still periods of reduced availability, and a lot of areas where things are not just push-the-button and go. There's a lot of work going on in Nova in particular to make this easier, and that will apply to other projects too once the techniques are worked out. Again, this is one of those things that, fed through to the PTLs, they will put into the summit as one of the items they'll be discussing. Quite a few comments on consistency between the APIs, the software development kits, and the CLIs.
For those of you that have used even the command line tools, there's a huge variety of inconsistencies between the current set of CLIs. Those are gradually being picked up, but it's something which is clearly causing disturbance for the community. So again, that's something we can pass through, and things like the standard OpenStack CLI tool, and the work on the APIs and the software development kits to get those standardized across the projects rather than each project having its own flavor, will be welcome. High availability of virtual machines: this is the requirement that if a hypervisor dies, you want the virtual machines to be restarted elsewhere. It's a classic enterprise requirement, and equally one which has proved quite difficult in the past when trying to work out where it should be placed in the project. This is the kind of thing the Win the Enterprise initiative is looking to work on, gathering the input and then taking it to the development community. Neutron stability, simplification, resilience, IPv6, and making it scale: I don't think these came as any surprise to the Neutron project. If you look at the things they'll be discussing in the next few days, they're already on their list, but having the user confirmation is very useful for them, to concentrate on it further. Finally, looking at Horizon: getting the full set of functionality into Horizon, rather than having to wait two or three releases before an option comes partially into Horizon and otherwise having to drop down to the command line interface. And then a fairly regular theme around getting the Amazon compatibility closer, which, as we see from the survey, still continues to be used in significant numbers. So, on to what we've been doing for the past six months to start building up motion around the feedback loop.
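To make the CLI inconsistency concrete, here's a small illustrative sketch of the kind of consolidation the standard OpenStack CLI (python-openstackclient) aims for: one tool with a uniform "resource verb" pattern instead of a differently-styled client per project. The mapping shown is a small sample for illustration, not the full command set.

```shell
#!/bin/sh
# Illustrative sample only: a few legacy per-project commands and
# their equivalents in the unified "openstack" client.
map() { printf '%-22s -> %s\n' "$1" "$2"; }

map "nova list"          "openstack server list"
map "keystone user-list" "openstack user list"
map "glance image-list"  "openstack image list"
```

The unified client's consistent `<resource> <verb>` structure is exactly the kind of standardization across projects the comments were asking for.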
Along with the survey, we're now doing proactive mini-summits between the summits with the operations community. We had a gathering in early April, very kindly hosted by eBay in San Jose, with Tom and Sean very kindly moderating. Moderating a set of 50-plus operators was thought to be quite a challenging activity. The thing that surprised me is that when Tom and I first discussed the idea, Tom was rather worried about the amount of tranquilizers we would need to keep things quiet. As it was, it was much more a question of people asking how they could help rather than complaining, and of trying to establish the channels under which that could be done; part of that I'll cover in terms of what we've done since then with the operations streams here. Many of the operators were saying they want to find ways to get in early, and in particular one of the key ideas that came out of this was around blueprints: how do we keep an eye on the blueprints? Operators wanted to get in and say "this is a really bad idea" early on, before someone had written all the code and all the test cases and then submitted it as a patch. Instead, we wanted to find ways to comment on what people wanted to do, and to warn them: if you do that, don't forget the following things; don't just dive in and start coding. Completely coincidentally, at the same time the Nova team was looking at the same sort of thing from a development perspective: they wanted a better tool for reviewing blueprints. They came up with what is now the nova-specs project. With this, people go through a blueprint review using the standard tools before work on the code gets started. And this is something anyone can subscribe to; you get nice notifications to be told, hey, there's a new blueprint for this project.
And then you can just dive in; making a comment is simply a matter of typing your comments into a few windows, and you're in. You can optionally choose to follow a blueprint if you wish, and with that you get all of the updates, or else you can just pop back occasionally to see what's going on. With this, we've already found a number of cases where we've saved potentially large amounts of confusion from people who thought they'd covered the scope but actually failed to realize the ways some parts of Nova were being used. Now the other projects are looking to replicate this, and hopefully, as we look at some of the sessions this week, we'll be able to get that agreed across all of the projects. We've also found a number of volunteers who are more than happy to help out, many of them operators, bearing in mind the profile we're looking for: an operator who has in-depth or at least reasonable knowledge of the internals of OpenStack to contribute. Chances are they've got a pretty busy day job and night job, so we've put together a form which allows people to volunteer at different levels. Many people are happy to help out a little bit; others have a bit more time and can help out on some of the larger items. I'll put up some links towards the end where we explain how people can choose the level at which they can help. We're looking for people who have some time, even if they maybe don't have the full depth of skills to help in all areas, but who can genuinely be helping out and providing this feedback. We've got lots of ideas; the etherpad was, I think, about 5,000 characters, and it's all online. People can look through that, and equally we've got a set of etherpads from this week that will be online as well. Again, that follows the design summit structure, and the etherpads are there.
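As a rough illustration of what following those reviews programmatically involves: nova-specs reviews live in Gerrit, whose REST API prepends an anti-XSSI prefix line (`)]}'`) to every JSON response, so a script watching for new blueprints has to strip it before parsing. The sketch below uses canned sample data with a hypothetical change number rather than making a live request.

```python
import json

# Hypothetical sketch of consuming Gerrit's REST API to watch open
# nova-specs blueprint reviews. Gerrit prefixes JSON responses with
# the line ")]}'" to prevent XSSI, so strip it before parsing.
# The sample below stands in for the body of a real query like
# /changes/?q=project:openstack/nova-specs+status:open

def parse_gerrit_json(text):
    # Drop the ")]}'" prefix line, then parse the remainder as JSON.
    return json.loads(text.split("\n", 1)[1])

sample = ")]}'\n" + json.dumps([
    {"_number": 12345, "subject": "Example spec: improve scheduler"},
])
changes = parse_gerrit_json(sample)
print([c["subject"] for c in changes])
```

In a real watcher, the `sample` text would come from an HTTP GET against the review server, and the parsed list could drive the notifications described above.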
We're going to repeat this, probably towards the end of August. Rackspace have suggested San Antonio, and we'll see what's possible there. But certainly we'll be getting together for one, maybe two days in order to keep things flowing: take the feedback from the summit and refine it. In particular, we already give that feedback to the PTLs at the mini-summit stage, which means they can take it in as part of their loop. The idea is that the conversation is a continuous process: it's not that once every six months we chat, then go away and come back again in six months; it's a continuous process of communication. The brief summary is at that link. As Ryan said, the slides are up on SlideShare, so you don't need to be there with your mobiles trying to type in the URLs; you can click on those later. So what have we got coming up at the summit? For the first time, we've actually had operations streams in a design summit format. This worked really well at the mini-summit: in particular, no slides, just people talking and etherpads. We had the staggering sight, in the ops session on upgrades, of somewhere between 150 and 200 people in the room and 80 operators simultaneously editing the etherpad. That the etherpad technology copes with this is great, but equally it was great to see the rate at which people were giving input. Afterwards, we get a chance to look over all of it, and this can be fed in as part of the feedback as well. So it's great to see people taking time to contribute, and also how enthusiastic people are about finding a good way to give input to the development teams. The first session was on Monday afternoon; unfortunately, we were scheduled after it, so we couldn't point people to it, but there's a session on, I think, the whole day now on Friday. So with that, there are a lot more chances for discussion.
The schedule is all up as part of the standard program, so choose the slots you want to come along to, or just come along to the room and stay there for the whole day; you're welcome. The other thing we've been looking at doing is finding people who have good or at least reasonable knowledge of some of the projects. What we want is to allow the developers to have an operator or two in those sessions, so that when they do have questions they can bounce the ideas off someone: is this a good idea? Would this have an ops impact? And equally, for the operators, if they see problems, to speak out and say, hey, I wouldn't go that way, guys, that will cause us difficulties. So we've got a dev/ops session scheduled for each of the major projects. Operators are more than welcome to go along to those and raise the things that cause them problems, in a constructive fashion, so Tom doesn't have to use the tranquilizers, and then we can look to get that conversation going. It's not necessary for operators to be in all the design summit sessions, but that one, I think, is a particularly good one to go along to and give your specific feedback in. There is a set of forms people are more than welcome to use, so there's a list of operators who are happy to volunteer for the dev/ops sessions and for sitting in the summits too. One of the items on Friday I'd like to particularly highlight: JC, who's been working as part of the user committee for the past two years, has got a new job and has left eBay. That means that for the user committee there are three places: one nominated by the board, one nominated by the technical committee, and then a third position, which JC was filling, selected by the first two.
So from that point of view we're looking for someone to help out on that effort, within the constraints of the non-disclosure agreement. Beyond that, now that we're getting so much anonymized feedback and so much etherpad input, turning it into concrete actions that can be worked through with the development community is an area where we really want to look at how to scale out the working groups around us. So come along on Friday and give us your ideas, both on the overall governance and on how we can get more and more people helping out with this work. This link in particular is a really tough one to type, so just get it from SlideShare. There's a form there where you give a very brief profile of your skills and the amount of time you can help out, and those people will then be contacted afterwards to help out in the various working groups. So feel free to volunteer; you're welcome. At the same time there are lots of other sessions going on; I've been impressed with how interactive even some of the operations tracks have been. There's a massive amount of good information, and I hope everything comes out on YouTube afterwards, because we just can't get around to all the sessions that are going on. For those of you who want to help, there are some links there. Please volunteer on the form. There's a mailing list for the user committee itself; this is much more focused on governance, so it's where we'll be working out how the working groups will interact. It's not so much for people who just want to know what's going on; for that, it's much better to follow the openstack-operators list. The volume there is gradually increasing, but it's still really good, intense content. And there's also the generic OpenStack one. There are the sessions and there are the etherpads; feel free to read through them. And in particular, have a look at the blueprint on the blueprints process.
It's a very nice model: when you've got time, you can have a look and give your comments; if you don't have time, just skip over it and come back to it a little later. In terms of references: the user survey we intend to keep permanently open. Already there are some terminals up there, and lots of people who didn't enter their data earlier are giving it in now. Then, in future, we'll look at doing something like a sliding window, so that people who entered deployments in the past will gradually fall off the total map if they don't update their data. References for the surveys: for those of you who like to do detailed comparisons between the numbers, Ryan and I have had our hands full just putting the Excel together from all of the answers we've got and producing the current snapshot. We haven't had much chance to do historical trending, but the data is there, and those of you who are interested are welcome to produce various trends from it. Do we have the link for the slides? Oh, very nice slide, okay. So on there, there are the full details of the slide pack we've just presented, for those who'd like to go through and do some more detailed analysis; you're welcome. In general, this is the only aggregated data that we'll produce for the average operators and users. The aim behind this is that we don't want to be answering questions like, "can you please give me the hypervisors in use for users with more than 5,000 nodes", because that eventually allows people to ask cross-questions that can reveal details we consider confidential. In some cases, for the PTLs, there are more specific queries that come in; those we review to make sure they wouldn't de-anonymize the data, and then we'll run a special query on the data.
So if there's a need for something, the best thing is to go through the relevant project channels, and then the PTLs can come to us for that additional query to be done. Otherwise, you're welcome to go through the numbers; it's been a lot of fun putting them together. This was one of the great things JC was doing for us in the past, so Tom and Ryan have been trying to get JC's macros all going, and from what I can see, things have been pretty successful. So come along on Friday, help out on the governance, volunteer on how you can help, and with that it'd be great to get some more help as well in working the feedback through to the development community. Are there any questions? [Question about releasing the raw data rather than the aggregate data.] We can probably provide some of the aggregate data in raw form. Yeah, I think so. It's not there yet; this has been quite last minute as well. Okay, great. So happy reading, and thanks for filling out the survey.
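As an aside, the sliding-window idea mentioned earlier, where deployments fall off the map unless their owners refresh them, can be sketched in a few lines. This is only an illustration: the real survey schema isn't public, so the record fields and the one-year window here are assumptions.

```python
from datetime import date, timedelta

# Hypothetical deployment records; field names are illustrative only,
# not the actual user-survey schema.
deployments = [
    {"org": "A", "last_updated": date(2014, 4, 20)},
    {"org": "B", "last_updated": date(2013, 1, 15)},  # stale entry
    {"org": "C", "last_updated": date(2014, 5, 1)},
]

def sliding_window(records, today, window_days=365):
    """Keep only deployments refreshed within the window; older
    entries 'fall off the total map' until they are updated."""
    cutoff = today - timedelta(days=window_days)
    return [r for r in records if r["last_updated"] >= cutoff]

current = sliding_window(deployments, today=date(2014, 5, 12))
print([r["org"] for r in current])  # prints ['A', 'C']
```

The point of such a window is that the headline deployment count reflects installations that are still being actively reported, rather than accumulating every entry ever submitted.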