So I'm here to give the Heroku 2015 year in review. Every year, for the better part of a talk, we just go over new features and things that have happened since last year's RailsConf. I'm Terence Lee; I go by @hone02 on Twitter. I work with Richard Schneeman, who gave the talk yesterday, Speed Science or something like that. I actually got kicked out of that room because it was a fire hazard. We work on the Ruby experience at Heroku, which means every time you deploy a Ruby app, you're running code from us, and so if there's actually a problem with that, then that is our fault and you should come talk to us about it. So today I'm going to cover a few different things, along with Will here on the right and Koichi as well. First off, I'm going to cover some general Heroku features and things that have come out since last year's RailsConf. We also have an amazing Postgres product, and Will works on that team, so he's going to cover some of the new features that have happened in Postgres land. Then I'll cover some stuff specific to Ruby, things we've worked on, and some announcements you should be aware of. And finally, Koichi is going to cover some of the work that the Matz team he's on has done in the last year. So, on to Heroku. When I talk about Heroku here, I just mean the general product, like the runtime and the build service and things like that. One of the really cool things we've launched in the past year has been the Heroku Button. You may have seen these around: it's this purple deploy button, and when you click on it, you get redirected to a Heroku page to deploy your own copy of that application. In there you can type in a different name if you want; if you leave it blank, it will make up a random Heroku name for you.
Inside of this thing you can specify add-ons and other things to define what it takes to set up a template of this app. This is great for any demos that you have, like if you're preparing a presentation. I used these last year when I was doing a bunch of WebSocket stuff at a conference, and for all the demos in that presentation, people could just click the deploy button and get their own version of the typical hello-world chat thing for WebSockets. To actually set this up, you just have a README, and inside of it you add some markdown that points to the button image and links it to the Heroku deploy endpoint. When you click that button, Heroku knows what repo you're coming from and sets it all up. The actual magic behind it, with app setup, is an app.json file that you put in your repo. It takes a certain set of keys: you specify the name and description, and then some of the more interesting stuff, like the add-ons. You can pick various Heroku add-ons if you need them to get the app up and running. Maybe you're depending on Redis; in my WebSockets example I'm doing sessions and pub/sub using Redis, so I specify a Redis add-on provider, in this case just the free tier. Anyone deploying this thing will get a completely working app; they don't have to go through and create the app, git clone it, and then manually add all the add-ons and stuff. In addition, you can also set up environment variables. If you need specific config to get that app up and running, you can specify all that stuff in here, just as a hash inside the JSON, as well as any scripts you need to run, so you can do some post-deploy steps and things like that.
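As a sketch, an app.json for a Redis-backed chat demo like the one described above might look like this (the add-on plan name, env var, and postdeploy script are illustrative, not from the talk):

```json
{
  "name": "websocket-chat-demo",
  "description": "Hello-world WebSocket chat app",
  "addons": ["rediscloud:30"],
  "env": {
    "SESSION_SECRET": {
      "description": "Secret used to sign session cookies",
      "generator": "secret"
    }
  },
  "scripts": {
    "postdeploy": "bundle exec rake db:setup"
  }
}
```

With this file in the repo root, the deploy button can provision the add-on, generate the secret, and run the setup script without any manual steps.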
So that's app.json, and once you include that, plus the button in the README, your app is good to go for a Heroku Button. The next thing we've been working on has been GitHub integration. This is something a lot of people have been asking for for a while, and there are ways to achieve it using something like Codeship: if you use their CI service, you can have them deploy to Heroku, and Travis has something as well. But now, directly through Heroku, you can go through the GitHub OAuth flow, connect it with Heroku, and then through the Deploy tab connect your apps to a GitHub repo. You can have it auto-deploy from a specific branch. Say you're working on a feature for a client, and every time you commit you want to deploy to a staging application to show the client: every time you push to GitHub, a webhook callback will automatically do that deploy. You can also manually deploy specific things straight through the web interface without doing a git push. It's a great feature for staging; if you're doing things in production and you have stuff tied to CI, I'd still probably recommend not doing it this way. But I think this is great for handling various PRs and other features that you just want to easily get up and out there without having to worry about that stuff. We also launched Heroku Elements last week. A lot of you who've used Heroku are familiar with our add-ons ecosystem: the Postgres add-on for getting a Postgres database, Fastly for CDN stuff, New Relic, and various other services. In Elements we now have add-ons as well as buildpacks. When you deploy your Ruby app, you use the Heroku Ruby buildpack, but you can go fork it and make your own custom buildpack.
And as part of this you can look and see, without searching through GitHub, what buildpacks are available; you can now search through them and use them. Maybe you want to add nginx, or add something like PhantomJS as part of the application because you're driving form filling or something in a worker process. All those things are now easily searchable in there. In addition, with the Heroku CLI we have a plugin system, and you can search through and look for various plugins to extend your Heroku command-line experience. So all those things are wrapped up into one place that you can piece together to build your integrated app experience on Heroku. In addition to that, for those of you dealing with compliance in Europe, we announced PX dynos in Europe. PX dynos are the 6 gigabyte instances that run by themselves, so they're not multi-tenant: you don't have any noisy-neighbor problems and you get much more consistent performance. So this is great for anyone running things in Europe. If you're running a bunch of web servers that you want to scale out, it's great for being able to get that scale by having one massive dyno, load balanced per process, to get more out of it. We also announced the Cedar-14 stack, which is based off the latest Ubuntu LTS release, 14.04. Hopefully it doesn't bring a lot of changes that you're going to notice: mostly more up-to-date libraries on the back end, in large part for security purposes. We did a lot of hard work on smoke testing, running our own apps on it internally before launching, and we had a fairly decent beta period for testing all that stuff out. But a lot of people are probably still on the Cedar stack.
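For apps still on the old stack, the migration discussed next amounts to a couple of CLI commands; as a sketch (the app name is hypothetical):

```shell
# Point the app at the new stack; takes effect on the next push
heroku stack:set cedar-14 -a my-app

# Rebuild and deploy on Cedar-14
git push heroku master

# If something breaks, roll back to the previous release
heroku rollback -a my-app
```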
And with this announcement we are going to be sunsetting Cedar: on November 4th this year, Cedar will be retired. So you should look into migrating your Cedar apps to Cedar-14. It's pretty simple: you set the stack for your next push by specifying cedar-14 with a single command, and then when you do your push to master it will build on the new Cedar-14 stack, and you'll be running on it from that push onward. If you do run into any issues, the heroku rollback command is still available to roll back from there. So it's a pretty easy migration, but I recommend doing it on a staging version of your app before trying it in production, because there are a lot of changes under the hood. It's been about four years since the Ubuntu release we'd been using, so libc and some other things have changed a little bit. The Cedar-14 blog post and Dev Center docs have some articles on gotchas and things you might want to be aware of. The next thing, on security: we now have two-factor auth. You can set up your phone as a second factor of authentication in addition to your password. This is great for security; I know Slack had a security issue in the last few months, a bunch of people had to change their passwords, and now I have like eight Slack accounts with two-factor that I have to log in with. We at Heroku have supported this for the last year, which is great if you're deploying your business and other things that you want to actually be secure. Recently we also shipped HTTP Git and switched all of our defaults over to it, so now you don't have to deal with SSH keys.
I've sometimes seen support issues with customers who have multiple Heroku accounts and have to swap SSH keys in and out; this all goes over SSL instead. It's also great for Windows users, where SSH has been a huge pain. So, great work there. DHH talked in his keynote about Action Cable, with WebSockets powered by Redis for the pub/sub stuff. One of the great things is that we announced WebSockets went generally available recently, or I guess toward the end of last year. This had been in labs for a while, but WebSockets are now enabled by default on the router, whether you're using them or not. So when Rails 5 launches and you want to use Action Cable, it will work out of the box, at least on the WebSockets end. And if you want to play around with Faye and other things, we have a chat example in a Dev Center article that uses Faye as a WebSocket driver, so you can run that and work from it. It's a great time to check out WebSockets ahead of Rails 5. We also recently relaunched Dashboard and did a bunch of work there. One of the most interesting things for me has been all the metrics stuff: you now get a response time graph, throughput, and a memory graph. This is something where we've gotten a lot of requests for more introspection into your actual application. If you're an active user, you've probably already seen a bunch of this, but go through the Dashboard and look at all the graphs that are out there. This is something we're continually iterating on and trying to improve, so Dashboard will be landing a bunch of new features as time goes on. Now I'm going to hand this off to Will to talk about Heroku Postgres. Hi, I'm Will; I've been with the Heroku Postgres team for a long time.
We have a number of cool things that came out last year that I want to talk about, both in the Heroku Postgres product and in Postgres itself. The big thing is PostgreSQL 9.4, which the community project released earlier this year. We currently support it in a beta capacity, so you pass --version 9.4 when you provision a database, but hopefully very soon, I think this week or next week, it's going to become the default, so you'll just provision and get 9.4. And the greatest thing about Postgres 9.4 is the JSONB support. For the last two versions of Postgres, 9.2 and 9.3, you've been able to use the JSON column type, but it didn't really do much except some syntax checking to make sure it's valid JSON. That was still super useful, because you could pair it with something like PLV8, which lets you run the V8 JavaScript engine inside Postgres, and you can do some crazy cool stuff parsing out documents, putting check constraints on them so that only documents matching your custom rules get in, and so on. But the really cool thing about JSONB is that it stores the representation under the hood in a binary format, and the Postgres developers were able to get some super impressive speed improvements out of this. There are some really good benchmarks showing that in several cases the insert and update speed is faster than document databases whose whole job is documents. And one of the great things is that to use it, you can just say jsonb instead of json in your column definition. On top of that, Postgres has GIN indexes, generalized inverted indexes, which you can use with several data types; but when you use one with a JSONB column, you get an inverted index on every single document.
So instead of, like some other document databases, having to say I want an index on this key or that key, you get it on everything, and you can do really fast searches using an index on anything in your documents. What's really awesome is a pattern we use internally a lot: we'll have our regular columns for our regular models, and then also have a JSONB column in there, and that way as you work on your application you can add some extra data before maybe later promoting it to a proper column. Do I have time? Okay, I want to tell you a cool story about how JSONB happened. Like with Ruby, where we directly employ some of the Ruby core members, over the last several years we've sponsored the development of quite a number of Postgres features, and one of them was the earlier JSON work, and we wanted to help get JSONB in. There's a group of Postgres developers referred to as the Russians, because they're Russian, and if you talk to Postgres developers they'll know exactly which three people you're talking about. They made hstore several years ago. hstore, for those of you who haven't used it, is a key-value store in Postgres that is strings only, and it's flat. And the Russians were like, oh, we're going to do hstore 2, it's going to be great, it's going to have booleans, it's going to have numbers, and it's going to be nested. And we said to them, just do JSON. And they were like, no, we'll do hstore 2 first. So we sponsored the project to build JSONB on top of their infrastructure, but for a while it wasn't looking like it was going to get in, and we did a bunch of negotiations to get everyone on the same page, and it got in just under the wire before the feature freeze. And this is a really cool feature.
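A minimal sketch of the jsonb column and GIN index being described (the table and keys are made up for illustration):

```sql
CREATE TABLE events (
  id      serial PRIMARY KEY,
  payload jsonb NOT NULL DEFAULT '{}'
);

INSERT INTO events (payload)
VALUES ('{"type": "signup", "user": {"id": 42, "plan": "free"}}');

-- One GIN index covers every key in every document
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- The @> containment operator can use that index
SELECT * FROM events WHERE payload @> '{"type": "signup"}';
```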
And I encourage all of you to use Postgres 9.4 and spin something up, because it makes building applications so much nicer to have that sort of semi-structured data and your structured base tables in the same place. Another thing here is Dataclips. We've had Dataclips on Heroku Postgres for a little while, but they recently got a very nice design refresh. For those of you who haven't used this before, it's a really powerful tool. Most of the internal BI business stuff that we use is powered by Dataclips itself. What it is: you type in a query, you get to see the results, and you get a URL just for that data that you can share around with people in your company. What's really great is that it works for people who don't have access to your database. So if there are people in your company, say, that you don't really want to give full read access to the database, this is a great way to let them use some queries and data. And what's really nice is it has a CSV format, so you can take that and import it into Google Docs and build spreadsheets and dashboards and stuff like that. It's a super powerful feature that I really like a lot.
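A dataclip is just a SQL query; for example, a hypothetical signups-per-week report that a non-engineer could consume as a shared URL or CSV:

```sql
-- Weekly signups, assuming a users table with a created_at column
SELECT date_trunc('week', created_at) AS week,
       count(*)                       AS signups
FROM users
GROUP BY 1
ORDER BY 1 DESC;
```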
Another recent change is that we made a lot of improvements to our PG Backups service, and to distinguish the old system from the new, the name changed by adding a colon in there, so it's pg:backups now. Some people had issues with the old system, like a very big backup that failed while uploading; this new one has been re-architected and it's much more reliable. It rolled out pretty recently, and we're in the process of migrating people's old backups from the old system to the new one, so it should be a pretty smooth update. Another big feature of the new pg:backups is that you can schedule when your backup is taken. Before, everyone didn't get the same time, because we would queue all the jobs at once and let them drain over the course of, I don't know, about 12 hours. Now, if for example you're in a different part of the world and your off-peak hours are at a different time, you can specify that point in time, and it makes for a much nicer product. This next one I think is really awesome, because I wrote it: pg:diagnose. Seeing a lot of support tickets, we end up looking for the same sorts of things when hunting for problems, and what this does is give you a command right in the CLI that generates a report checking for a lot of common issues. The backend for this is all open source; if you want to see exactly what it's doing, you can check out GitHub and review pgdiagnose. Let's take a look at what some of these checks are. One big problem with Postgres is a query running for a long time: because of the MVCC architecture that Postgres has, a long-running query keeps a snapshot open for a long time, and that can start to cause problems; how soon depends on how fast other parts of your database are churning, but in this example here, 90 days is way too long. So what this does is check for those long-running queries and it gives you the process ID so you can go and kill it. Another one here is hit rate. This will look through all of your tables and all of your indexes and tell you what the cache hit rate is, and you really want it to be in the 99-plus percent range, because anything lower than that, like 98 percent, means two percent of reads have to go all the way to disk to get the answer back rather than hitting the Postgres cache or the operating system file cache, and that really slows things down. So if you're seeing low hit rates, that's either caused by a change in query patterns or a good sign that you need to move up to a larger plan with more RAM. The other thing that's nice in here is the index checks. It will run through a bunch of checks and tell you if you have an index that is never used, so you're just spending a lot of time and energy maintaining that index on every insert, update, and delete without ever actually reading from it; that's a good candidate to get rid of. Another thing, not in this example, is it will show you indexes that have a large volume of writes on a live table but are rarely used. Those are a little more tricky, and you need to use good judgment about whether it's okay to drop them, but they can also be great candidates for trimming the database and making it better. The next check here is bloat. I mentioned before that Postgres is MVCC, and that means any time there's an insert, update, or delete, it doesn't actually modify the old data; it just keeps track of the range of transactions each row version is visible to, from the minimum transaction to the maximum. So when you delete data, it's not actually removed from disk right away; there's another process that runs in the background, called the autovacuum, that goes in and deletes things after the fact. But what can happen in certain
pathological cases is that your table gets bloated, and your indexes become bloated, with all these dead values. It's something that's easy to check once you know to look for it, but a lot of people aren't aware of it, so having it in the tool here is super helpful. Another check is whether you're getting close to your connection count. One thing about Postgres is that every single backend takes about five to ten megabytes of RAM, so you do want to keep your connection count down. This check looks at the plan you have, has a recommendation for the connection count per plan, and will alert if you're getting high, so you can move that down. Idle in transaction is similar to the long-running queries: if you have a transaction open for a long time, it's the same as executing a query for a long time, since Postgres has to keep all that data around for it, and this will tell you about idle transactions you should care about. Blocking queries: if you are doing something that creates a lock and other queries are waiting on it, that will show up. And then just load on the system, which is pretty straightforward. I'm a little biased, but I think it's a great tool that really helps give you a quick diagnosis of what's going on in the database. One thing that's really awesome that came out last year is expensive queries. This is over at postgres.heroku.com, rather than the main Heroku dashboard, for the time being. It's a tool that uses an underlying feature called pg_stat_statements, which one of my colleagues, Peter Geoghegan, made some huge improvements to and got committed to Postgres. What it does is look at all your queries, take out the constants and put in question marks, and group them together, so you can see your average execution time and total execution time, like how much time you're spending, and it gives you a graph over time. This is really helpful to look at from time to time to see what the hotspots in the database are; it's a good way to find if you need an index or some more indexes. This last one, I think, is pretty cool, though you're probably not going to use it every day: fork --fast. For a long time Heroku Postgres has supported fork and follow. A follower is a read replica that stays up to date forever, and a fork is a second instance of the database that uses the same underlying technology but then stops progressing; forks are really helpful if you want to test out a migration. The way that the Postgres replication works is there's a base backup, and then the individual write-ahead log (WAL) segments get uploaded. When you do a normal fork, it downloads the most recent base backup and keeps replaying the WAL files until it gets to the current time. What fork --fast does is just stop right after the base backup. Why would you want this? Well, one reason is that it's typically quite a bit faster, and if all you're doing is testing a migration, or if you want to take a backup off of something that's not your primary system and the exact point in time doesn't really matter, this can save you a good amount of time in some cases. Thank you very much. So now I'm going to talk about some things we've done specifically in Ruby. My colleague sitting out here in the front helped me, but we actually maintained Ruby 1.8.7 and 1.9.2 for the whole community for a while. For various reasons a lot of people were still on these old Rubies, and we wanted to give people a nicer window to upgrade and move off of them in a more reasonable time. But in the middle of summer last year that support came to an end; we extended it for about 8 months, I believe. So if you are running on these versions of Ruby, you should move off of them. And if you did not know, earlier this year Ruby 1.9.3 support also ended, which means it's not getting any bug fix or security maintenance releases at all. So if you're on it, you're on your own for backporting security fixes. Like, I remember I was
talking to someone who had a client that was on Ruby 1.9.3, and he was asking how to backport the most recent security fix that came out. You definitely don't want to be in this mode, especially since, if you're on Heroku, we do all the work for this. As an example, on Christmas, the day that Ruby 2.2 came out, we had it up and running, I think, within a few hours of the release. This is something we take a lot of pride in; we try our best to ship same-day, and we've only missed one MRI release since we added multi-Ruby support for MRI. To give you a sense of how many we've built: since last year's RailsConf I've personally built 55 Rubies across both the Cedar and Cedar-14 stacks, including MRI and JRuby. So this is work that you don't have to deal with at all; you can just get up and running on release day, and you should take advantage of it rather than trying to maintain your own Ruby builds. Probably the biggest change that affects customers is that we now recommend Puma. This has been in the works for a while; we used to recommend Unicorn, and if you're on Unicorn it's not like it's super terrible, but one of the nice things about Puma, if you don't know it, is that it's a threaded web server. So if you have a thread-safe application, you can now use multiple threads, and it has a master/worker model as well, so if you have a non-thread-safe app you can, like with Unicorn, have single-threaded workers, with multiple worker processes forked from the master. The other really nice benefit: Unicorn is built to sit behind something like nginx or Apache generally, so it does not have any logic for mitigating slow clients, but Puma does. So we definitely recommend trying out Puma, and our docs now have documentation for that; it's something you should definitely look into. And I don't know how many of you know this, but we also have JRuby support. We actually recently hired Joe Kutner to work on Java stuff; if you don't know him, he's a pretty active contributor to JRuby itself, and he also maintains Warbler in the JRuby ecosystem. So we have great JRuby support, and he works full time on that. We have JDK 8 support now, and when JRuby 9000 came out, by the next day we had support for it on the platform, so you can play around with that. We also have really good relations with Charlie and Tom, so as part of their releases they notify us of what's coming and we test it; I've filed a handful of bug reports about the build system being broken, because I guess I'm one of the few people that actually tests it. So, on to Ruby core updates from Koichi. Hello, my name is Koichi Sasada, and I am a member of the Matz team. The Matz team is hired by Heroku to work on Ruby core, so our mission is to design the new Ruby, which means Ruby 2.3 and Ruby 3, and also to improve the quality of MRI. Quality has several meanings, but for example reducing bugs, since software with no bugs is nice software, and also high speed, high performance, and low resource consumption, like low memory usage. The Matz team has 3 members. Maybe you know Matz: he is the original author of Ruby, and he designs everything. There's also Nobu; we call him the patch monster because he fixes many bugs (and introduces some bugs, and then fixes those too). This is a plot of commits to Ruby, and you can see most of the patches are written by Nobu, so it means all your Ruby is based on Nobu. And also there is me. I am the designer of YARV, the VM introduced in Ruby 1.9, and I also implemented the generational and incremental GC. We released Ruby 2.2 last year, and it has many improvements; I want to focus on one improvement, keyword arguments. Keyword parameters were very slow as of Ruby 2.1, but Ruby 2.2 improves the performance of keyword parameters, so please enjoy using Ruby 2.2. And we are planning to release Ruby
2.3 at the end of this year, so please suggest any ideas; they are welcome. Please catch me after the talk, in the exhibition hall, to discuss. Thank you. So, after this talk there is one more talk in this room, but after that we will have a bunch of folks from the open source community at the Heroku booth, including people from the Rails team and Sam Saffron, who has done a bunch of performance work on rack-mini-profiler and other tools and works on the Discourse team. We'll basically be doing an ask-us-anything: bring us your problems, or if you just have general questions or other things, we'll be there, whether you want to talk about Ruby core or you really want to complain about concurrency or something. We'll do that during the afternoon break, leading into the last of the talks. And then tomorrow in the afternoon break, Richard Schneeman, our co-worker who wrote the Heroku: Up and Running book, will be doing a book signing. I think he only brought a limited number of books, so if you want a book, show up early, I guess. And that's about all I have. Ruby and Rails are still really important to us, if you can't tell: we're still heavily invested in Ruby with the Matz team and with our Rails friends, and then obviously all of our Postgres work as well; I think Postgres has become the de facto database for Rails apps. So thank you, come visit us at the booth, and we're looking forward to talking to all of you.