Good morning, everyone. I know mornings generally aren't very good for people of our persuasion, as I might put it. But good morning nonetheless. Let's try and make the best of it being 10.30 and us barely having woken up. The good thing is we have coffee. You'll notice that there's a URL on that. There is a URL on that coffee. Terence, can you get me a tissue, please? I got coffee on myself. This is lovely. Anyway, yes, coffee, which is now all over me, and therefore I have more caffeine than all of you, because it's on my outside as well as my inside. Anyway, right, you'll notice that there is a URL, thank you, Michael. There's a URL on this coffee cup. If you haven't had coffee, you should go take a look. If you don't drink coffee, do what I did and steal a cup. Well, I do drink coffee, but I stole the cup anyway, because I threw mine away like a responsible person. By the way, do throw your cups away when you're done with them. Just a reminder, not that I'm trying to shame anyone, but still. Yes, there's a URL on the coffee cup. If you go to it, you will see something that looks like... wrong screen, doot, doot, doot, like that. Let me make it bigger. You will see something that looks like this. It says win... thank you... headphones. See? I love my people. They give me lots of tissues for my coffee spills. You will notice that there is a URL that takes you to a place that says "Win headphones with BandLab".
If you're wondering which headphones: these. There are three of them. They are Marshall Major IIs, and you can win them by doing, well, generally what the website tells you, which is to take a picture of your coffee and post it to either Twitter or Instagram and hashtag it with BandLab. But if you don't also hashtag it with Geekcamp, I'm not going to be able to see it. So you might want to put both BandLab and Geekcamp in there. And also, because this is Geekcamp, I can't just let you post anything and get away with it. So obviously the people who win are going to be the most geeky pictures of coffee. You have lots of props. This is Microsoft Singapore. Don't vandalize anything. Don't steal anything. Don't move anything. Don't pour coffee over anything. But barring that, go ahead and take photos with anything. If you're trying to take photos of places you shouldn't be in, Terence will hunt you down. He's over there. He has a red Cortana-looking t-shirt. Cortana will chase you down along with Master Chief. Be careful. Using machine learning. Yes, using machine learning. It will know where you're going to go next. Be afraid. Be very afraid. Anyway, yep, that's it for coffee, other than whatever's built into me. So that aside, moving on to slightly less important things than caffeine for the average geek: hello there, everyone. This is version 2016 of Geekcamp. That is not version number 2016; that is simply the year 2016 version of Geekcamp. We've made a few changes from last year. We've tried to go way more casual with this, going back to the era of Geekcamp up until, say, 2012, 2013. No ministers, no curated talks, no keynotes, no stiff-upper-lip formal shirts and long sleeves and suits. Just, you know, Roland's dressed perfectly for this: just t-shirt, Bermudas and slippers. I've seen people wear suits to Geekcamp. On that note of people wearing suits to Geekcamp.
Not that I'm implying that wearing a suit is in any way related to what I'm about to say. But at Geekcamp we have a very strict policy against harassment. So if you are being harassed by anyone, please look for me and let me know. I will probably be hiding out somewhere outside or in the back of this hall. Basically, if you need to flag anything to us, just look for me. Now, sponsors. I've already talked about BandLab, except I haven't told you what they do. Well, BandLab is a collaborative music platform. If you give me a moment, I'll go to their website and let the site speak for itself. Maybe I should log out first. Home. All right. Well, the site kind of sort of speaks for itself. BandLab is, I guess, a social music creation platform. They do some pretty cool stuff. They've also recently bought Rolling Stone, if you know what that is. They did not buy the Rolling Stones, the band, because nobody buys a band, unless you're weird. But they did definitely buy a music magazine that is, some would say, a nice cornerstone of the industry. And guess who's their partner of the year? Microsoft. Our next, well, I wouldn't say sponsor, sponsor, but Microsoft is absolutely critical for the continued existence of Geekcamp, because otherwise we would all be sitting on the streets right outside here, instead of in this air-conditioned venue right now. So yes, thank you so much, Microsoft. We love you. And our third sponsor, without whom you would all be malnourished and starved little kids and adults: PayPal. Lovely sponsorship of our lunch today, which you'll be seeing in a bit. Speaking of lunch: BandLab, by the way, also sponsored all that lovely coffee, which is why their URL is on the coffee cups. Go there, win headphones, drink coffee. It's all connected. It's a giant conspiracy. Sponsorship is a conspiracy, obviously. Which is why we do it.
What you don't know is that we're all part of the Illuminati, except by saying that, now you are part of the Illuminati anyway. Moving on to less serious things: the schedule. We're going to have eight talks today. Simply put, it's three, lunch, three, break, two. Sometimes I like mirroring my screen. Sometimes I hate it. We're going to start off with Michael talking about the magic behind Engineers.SG, and then we'll have Roland talking about the wonders of a cardboard box, to paraphrase his title. And we're going to have Letitia talking about the evolution of careers, from "I'm going to sit down and do admin things" to "I'm going to sit down and do JavaScript things". It might sound similar, but it's actually kind of different. After the break, we will have three talks again. We'll have a speaker showing his 3D engine for the Pebble. We'll have Omer talking about C-plus-plus-plus, as I like to call it: Rust. And then we'll have Justin talking about the Hyperledger project. And after a little break, so we can rest our tired little brain cells, we're going to have Joss talking about GraphQL. GraphQL is not Rust, is not REST, and has nothing to do with Rust. And then we'll wrap up the day with a talk on Kubernetes, which is a lovely little way of doing infrastructure automation and management. Now that the schedule's out of the way, some very important housekeeping matters. I'm going to start with something that's immediately obvious: the front door is now closed. You'll want to move in and out of the back if you need to go to the washroom, get some coffee, get some drinks, blah, blah, blah, or run away because you think we're all scary. Use the back door. If you are a speaker, feel free to use the front door, as long as it's not in the middle of a talk, or I will hunt you down and bad things will happen. For other housekeeping matters, I'm going to have Terence come up and give you information, because he is the local here at Microsoft. All right. Thanks, Raul.
All right, guys, good morning. Good morning. Awesome. Let me just try that again. Good morning, everyone. Good morning. Awesome. Awesome. If you guys haven't had coffee yet, that's right up there. All right. So once again, welcome to Microsoft. We're really happy to see all of you here today. My name is Terence and I'm from Microsoft, of course. And of course, I'm the person Raul was mentioning earlier. I just wanted to give a few important points, because we wouldn't want to see you guys going to places that you shouldn't be. You might be locked out, and you might end up spending your holiday weekend over here, and no one's going to save you. So first of all, this floor that you're on today, this is level 21. Okay? Level 21 is the open floor. You guys can go anywhere on the level, as long as you don't go through any glass doors. Because you see those two rooms over there at the far end: you will see a Transparency Center and a Cybersecurity Center, as well as the MTC, which stands for Microsoft Technology Center, right? Those rooms, unfortunately, are all off-limits today. Even if you find a way in, you might not be able to get out. All right? So please try not to get in there. If you see the door open, don't go in there. If you'd like to take pictures from outside, it's fine, but just don't go in there. Trust me. All right. And for restrooms. Okay. So there are two sets of restrooms on this level. One set, you get there, you're not going to come back in. So it's important to take a look at this. All right. So to go to the restroom, you want to go to the one near the Transparency Center. Okay? So if you go out of this door over here, and go straight all the way past the very nice tables, toward the Transparency Center, on the right-hand side you'll see a small corridor.
That's the toilet you want to use. If you push any buttons and go outside to the dark corridors, you won't be able to come back in unless you have an access card. So please take a look. There are two sets. One of them is not available. The one you use is the one near the Transparency Center. All right? So that's for level 21. As for level 22: in case more people turn up than are able to fit in this room, we might open up the floor upstairs as well. But likewise, if you see any glass doors, or any buttons you'd push to get through them, don't do that. Because once you get through the doors with the buttons, you need an access card to come back in again. So yeah, the ground rules again: see any buttons? Don't push them. See any glass doors? Don't go through them. Sounds good? All right. Awesome. If you have any questions, you can come find me, and I'll be happy to help you out. Otherwise, we have an exciting lineup for you. I'll pass the mic back to Raul then. Now that we have the very important housekeeping things out of the way, there are other very important things to be covered. We have stickers at the registration. Three colors. Which faction do you belong to? I'm just going to leave it at that and let you figure out what it means. Anyway, most important thing today: have fun. Grab that couch over there if you were unlucky enough to get a chair without a table. Welcome to economy class. Have fun. All right, Michael. All right. Hello. Good morning, everybody. Hi. So today, I'll be talking a little bit about a little project I started a few years ago. It's called Engineers.SG. I'll be sharing a little bit about the magic that happens behind the scenes. What you see on the screen here are a bunch of volunteers that helped out at FOSSASIA earlier this year in March. I see a number of faces here who were there with us. Kathy, someone we know, over there.
Right, a bit about me. My name is Michael. Michael Cheng. That's my Twitter handle; you can follow me there. There's also a GitHub organization with that name. It's one of my open source projects. A bit about my origin story: I studied arts and social sciences in NUS. I did not major in computer science, but I picked up programming on my own, and PHP is basically my weapon of choice. I joined a little startup back in 2011 called Found. And when that kind of ran out of money, as all startups do, I joined another company called mig33. And of course, you all know that company as migme right now. After that, I joined another company called Neo Innovation, which does agile stuff. Of course, that went away when they got bought by Pivotal Labs. There seems to be a trend going on there. Anyway, that's where I am right now: I'm at Singapore Power. I'm a software engineer there, working on some new digital products for the company. We're hiring, by the way. Anyway, moving along. Engineers.SG is a small project that I started. It's a not-for-profit community initiative. We wanted to basically help document the Singapore tech scene. What basically happened was, some years ago, I attended a little meetup and somebody was complaining, "I can't find any engineers. There are no engineers in Singapore." That got me a little bit riled up. But it got me thinking that perhaps we have a marketing problem. We didn't tell people that we exist. We didn't tell more people that we have grassroots organizations, grassroots meetup groups that are happening. Because I think in the late 90s and early 2000s, IT was perceived as more enterprise IT rather than open source IT stuff. So I guess maybe that's the perception. So I felt, wait, there's got to be a way to tell more people about us. Videos are kind of a way of showing other people what we do. And of course, to also let people know that these user groups exist.
So if you go to our website and go down to the bottom, you'll find a link to the event page. If you like the meetup or you like the videos being shown there, you can click that, go to the event page, and maybe follow or join the group that's there. So that's kind of one of the motives, one of the reasons why I started this. We started back in October 2013. The first time we ever did this was at the user group that I ran, called the Singapore PHP User Group. October 30th was when we did our first recording. It was a single-camera setup. Very different from what we have right now, which is two cameras and many other things going on, which I'll go through in more detail later. Yeah, so basically, this is why I created Engineers.SG. Where are the engineers? I want to make Engineers.SG the place we can go to to find out where the engineers are, what we're talking about, and what are the cool things that we all like to geek out about. Right. And we have been recording quite a lot of meetups in the last two years plus. Coming to three years. This is just a short, small handful of the meetup groups that we record. Some stats: right now we have 1,217 videos on our website, plus the eight videos from today that will be going up, so we'll probably hit 1,225-ish. We've got 2,400 subscribers on our YouTube channel. Our monthly views are about 20K on our website, and 25,000 views on our YouTube channel. We've recorded 21 conferences so far in the last two years plus. CSSConf and JSConf are the next conferences we're recording. We have trained about 40 volunteers. So Engineers.SG is basically a volunteer-driven organization. We don't get any money from this. It's basically passionate people who are interested in contributing back to the community, and people who are interested in learning about the technology that we use. So I've been training quite a number of people.
Of course, the actual active people, you probably see them at meetups. There are probably just a handful of really active ones. But we're looking for more. We're looking for more volunteers, yes. So this is our website. It's written in Ruby on Rails, if you're really keen on that. And this is our YouTube channel. So, a bit about the magic. I just recorded the iOS conference last week. It was on Thursday and Friday. I went back home, and over the weekend I got the videos out. By Sunday night and Monday, the guys were able to watch the videos. So these are the tweets, the very kind tweets, from the presenters at the conference. Yeah, like Ben Asher, who works at Yelp. He was in Singapore for iOS Conf. And basically, yeah, he said the moment he landed in SF, the videos were ready. So it's kind of cool. So how do we do it? How do we get this done? Well, as with anything we do, we try to streamline the capture process. I spent about a year and a half just iterating and trying out different techniques of capturing. When it first started out, it was just one camera. And then we added a screen capture and a few other things. But basically, what I want to do is capture all the inputs. Streamlining the capture process is basically about capturing all the inputs together: the video, the screen capture, as well as the audio, all at the same time, and feeding all of this into a laptop via USB. So right in front here right now is the laptop that is capturing what I'm... there's a camera pointing at me; it's capturing my screen as well as my audio, my voice. It all goes through a software video mixer that records the streams and allows me to edit the video feed in real time. With this, there's basically no video post-production required. At the end of the day, the videos are ready: the MP4 files are ready and I can upload them.
Yeah. So, just a visual representation. There's the presenter's laptop. There's the video output; in my case, the HDMI out goes to a video splitter, which sends channel one to the projector, and the other output goes to a screen capture tool, which I plug into the laptop using USB. For the presenter, in most of our cases, we have a webcam, for ease of use. Basically, we have a lot of volunteers who are not exactly very technical, and we want to make it as easy to use as possible, so that anyone can try to use our system. So we have a webcam, and we also have a small microphone that picks up the audio. As for the set that we have today: currently we have about five of these sets running around somewhere. The set we're using today is set number two. I gave them names by the sequence in which I created them. Anyway, the webcam and the audio basically come in through a USB interface somewhere. It captures that, and, magically, I'll have an MP4 file that's ready to be uploaded. Right. Once the file is ready, the only post-processing I do is audio processing. I level out the audio, so that if it's too soft, I'll try to even it out so you can hear the voice. I find being able to hear the voice of the speaker very clearly is a very important thing, especially if you're watching a video, or maybe you're not even watching the video, just listening to it. So the audio quality, for me, is a very important thing to get right. Hearing the audio in a way that you don't have to tweak your volume control is very important. For me, I use Apple earphones when I'm checking the audio levels, so that at least I know that with a simple, basic earpiece, I can hear the audio properly. So I do some audio processing on this right now.
The script I use is actually a shell script that uses ffmpeg and another open source project called SoX. It basically takes the video file, splits it into two files, a video channel as well as an audio channel, processes the audio using SoX, and then recombines them into one file. This script wasn't written by me; it was written by Sayanee, who runs We Build SG. So basically I took that script and I modified it to support multiple files. I'll show you the GitHub repo a bit later, where the scripts are available. So once the audio is processed, I'll upload it to YouTube, and there's a cron job that runs every 10 minutes to pull the videos into our website. Once the video appears on our website, there is an admin interface where I log in to update the title, the description, and many other things: the link to slides, and also the link back to the events. I find it's important to link back to the event page, because I want the viewers of the website to also find out more about the meetup group that's organizing this, so that they can attend future meetups and join the groups there. Of course, that's what happens on the day itself, or rather what happens after. So, more about the magic. Well, it's not really magic, it's just process. I actually contact the meetup organizers, because for me, getting the presenters comfortable with us recording a video is important. We don't want them to feel offended, or to have anything shown that they are not comfortable with going online or being made available on the Internet. So we make sure that the presenters are comfortable with us recording them. If they are not comfortable, we just say okay, and we just won't record them. There have been other situations where we recorded first and had to go back and do some editing after the fact. So there were situations like that, but as long as they are comfortable with us recording, it's fine.
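The upload flow above mentions a cron job that polls every 10 minutes to pull freshly uploaded videos into the website. As a sketch only, a crontab entry for that kind of task might look like the following; the directory and rake task name are made up for illustration, since the talk doesn't name them:

```
# Illustrative crontab entry: poll for new YouTube uploads every 10 minutes.
# The path and task name are hypothetical, not Engineers.SG's actual setup.
*/10 * * * * cd /var/www/engineers-sg && bundle exec rake videos:pull >> log/cron.log 2>&1
```

The `*/10` field is standard cron syntax for "every ten minutes"; redirecting stdout and stderr into a log file is a common way to keep a record of each run.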
Then we get our volunteers to sign up on the meetup schedule. We use WeBuild.sg for the events calendar; we use that quite extensively to help us organize and plan our duty roster, so everyone knows what's going on. We also ask our volunteers to sign up for the meetup itself, so that the meetup organizers know they are going. Then they pick up the set the day before, and I advise them to reach the meetup venue about half an hour before. Typically our set is quite simple to set up. It takes about 15 minutes if you have enough practice. Otherwise, half an hour should be a comfortable time to get things all set up. So, as I showed you earlier, after the meetup is over, I copy out the file and do some post-processing on it. It's on the wiki page over here; you can probably find it. So there are a couple of scripts that I use. Can you see it? There's multi-norm.sh, which basically does audio normalization. It evens out the audio so it's not too loud or too soft. I have a script that helps me generate a thumbnail; I think it uses ffmpeg underneath. For videos in the past, I used to open up the files in iMovie or Final Cut Pro to cut out the parts, which is a very heavy process. So now I use ffmpeg, which I wrote a shell script for, to split the file into separate files. There's another script that I use which concatenates, so if I have separate videos, I can merge them into one file if I need to. I did not put that file here, but I probably should. Wait, it might be here. Nope, it's not here. Never mind. I'll figure that out. So this is all open source, so you can actually see what's going on. multi-norm.sh: if you dig inside there, it's nothing more than reading through some files, copying them to a folder, splitting the files, normalizing, and recombining into one file. So yeah. It's shell script. Yeah.
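The split, normalize, recombine, and concatenate steps described above can be sketched as a small shell script. This is a hedged illustration, not the actual multi-norm.sh: the file names and ffmpeg/SoX flags are assumptions, and it only prints the commands it would run so you can preview the pipeline without ffmpeg or SoX installed.

```shell
# Dry-run sketch of the post-processing flow: split A/V, normalize the
# audio with SoX, recombine, and (optionally) concatenate multi-part talks.
# File names and flags are illustrative, not the real multi-norm.sh.
in="talk.mp4"
base="${in%.*}"

split_video="ffmpeg -i $in -an -c:v copy ${base}-video.mp4"   # video stream only
split_audio="ffmpeg -i $in -vn -acodec pcm_s16le ${base}.wav" # audio as WAV for SoX
normalize="sox --norm ${base}.wav ${base}-norm.wav"           # even out the levels
recombine="ffmpeg -i ${base}-video.mp4 -i ${base}-norm.wav -c:v copy -c:a aac ${base}-final.mp4"

# ffmpeg's concat demuxer reads a list file; handy when one talk spans
# several recordings.
printf "file '%s'\n" part1.mp4 part2.mp4 > concat-list.txt
concat="ffmpeg -f concat -safe 0 -i concat-list.txt -c copy merged.mp4"

for cmd in "$split_video" "$split_audio" "$normalize" "$recombine" "$concat"; do
  echo "$cmd"    # preview only; run the commands directly to actually process
done
```

Running it prints the five commands in order; executing them for real requires ffmpeg and SoX on the PATH.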
Let's get back to this. After this, we update the admin page, and then we also post a link back on the Meetup page. We try to use our short URL. We try to link them back to our website, because there have been situations in the past where we had to take down videos. Like, there were some videos that would... and YouTube is kind of bad in the way they don't let us re-upload the same file or change the video at the same URL. They don't let us do that. So we felt that we needed a permalink, and the permalink should be our website rather than the YouTube page. The most recent situation was at iOS Conf. At iOS Conf, I had one video that I uploaded, and apparently the speaker had changed the title of her talk. So I had to re-edit and re-upload the video, but I kept the link the same on the website. That made it a lot easier to manage. Of course, there's a simpler process: just upload the file somewhere that Michael can retrieve it, and then I'll process the file. Which a lot of my volunteers do. Okay. So these are some of the open-source resources. We have a GitHub page where we maintain all this. I documented my process of setting up this project. In this wiki guide I include some prerequisites: you should try to create a YouTube account and channel, and I cover how you can add other people to your channel so they can help you manage your videos. I've also documented different ways that you could possibly try to do this yourself. My original goal was to get all the meetup groups, to tell the meetup organizers, to get their own volunteers to record these videos. But it turns out not many meetup organizers have a big pool of co-organizers or volunteers they can tap on. The other option I had was to think about getting venue owners involved. A venue owner like Microsoft would have one set here, and then we could just loan it out. But that creates a burden on the venue owners.
Basically, whoever works at Microsoft has to be around for us to borrow it and return it. So using a core team of volunteers who go around to different meetups was a better option. But in doing so, I also wanted volunteers to learn how to do this themselves, if they so choose. So here I documented different processes of doing it, such as using your own camera. If you have your phone, you can download the YouTube Capture app, which you can use to record videos and upload them straight to your YouTube channel. So you can use your phone, you could even put it on a small tripod, and record your meetup. The audio capture on an iPhone is actually quite good. The video capture is also quite good. The more recent iPhones, the Plus models, basically the bigger ones, have image stabilization, which is kind of great for this. In the past, what I would do is, when my main recording system failed on me, I'd use that as a backup. It has happened a few times. Yeah, for the last one I actually used an iPad to record it. So yeah, it was okay. Using one camera is also possible. I did some research on what the best-in-class cameras with good audio pickup are, and something that's cheap. We found this Zoom Q4. I think it's the Q4n now, the latest model, which costs about $400 plus. The audio capture on this is actually very good. Zoom makes very good microphones, and this is a microphone that just so happens to have an HD camera. Of course, much later I found out that Sony cameras are actually not bad. Like the one that you see at the back right there, that's our backup camera. The backup camera costs about $200-ish; you can get it from Lazada. The audio capture on that is quite good. In situations in the past when my primary system failed on me, I had to rely on the audio capture from that thing. So it's not too bad. How am I doing on time? Okay. So that's for video capture.
There's also the recording of the screen. So how do I record the screen? If the presenter is using a MacBook Pro or MacBook, they can actually just start the QuickTime app and start a screen recording of their own screen. They can even set it up to use the built-in microphone, so I could basically record my screen right now as I'm doing the presentation. So that's one way. Of course, you want something that's less intrusive. What I strive for in our setup is to not require our presenters to install any software. Solutions in the past would be, "oh, please install this driver here" or "please install this USB thingy" and all that stuff, which can be quite intrusive, and I think it's not good to have to push this on presenters. So what I wanted was a non-intrusive way of capturing what is happening on your screen. What I'm using is a hardware screen grabber called the AVerMedia Live Gamer Portable. These types of devices are actually created for gamers, for gamers to capture their gameplay. So, like, from a PlayStation or Xbox, they use it to capture the HDMI output. I'm using this right now, actually, in front here, for the capture of my screen. In the past I had an analog system, so I had to do some manipulation, but basically, getting the HDMI feed in, there's software that lets me record the video, and I could do post-production editing with iMovie or something like that. And of course, eventually I came to a point where, look, I don't want to have to take all these separate files and combine them in post-production. There's a couple of pieces of software out there for this. This is one that I used in the early days. It's called XSplit Broadcaster, which costs about $4.95 as a monthly subscription. So that's what we used to use. Of course, nowadays there is new software out there that I'm using. It's called OBS, the OBS Project.
Thanks to Raul, who actually recommended this software to me many years ago: look guys, if you use this, it's free. It's open source. When I first started, it was quite crappy. It wasn't as good as it is now. It only had a Windows version, and the Mac version wasn't as powerful, which is why I was using a Windows machine. I actually did try to use my Mac to do recording in the past. The results were mixed. I didn't get as good a video out of it. Especially for the device I'm using, the Mac drivers are not as compatible with OBS, which is kind of strange. It has its own proprietary drivers, which is kind of weird. Anyway, so this is free software that you could use. That's the last link here. I also put up a small video tutorial, if you are so inclined as to want to watch a video of me showing how to set this up. My website is also open source, so you can actually check out the website on GitHub. It's a Ruby on Rails app. If you scroll down, there's a whole bunch of instructions: how to get the code, how to set up Ruby, how to install Postgres, and how to get this thing started and deployed somewhere. We did take part in Tech Ladies earlier this year. Tech Ladies is a program that helps women learn about programming by working on a project for an NGO. They basically worked on the Engineers.SG website as part of their program there. It's open source, so they could actually check it out and contribute back. There have been quite a number of pull requests that I've received, so thanks to all those who have sent pull requests. So, a few photos of our team in action. When I was at mig33, that was where I first started recording the videos. We had a weekly thing where the guys come to our office and I record it. So this is him presenting on Kibana, which is a visualization tool for stats. Apparently this is our most popular video on our channel.
It's got about 170,000 views so far. Quite scary. This is the first conference that we recorded, the first conference where we recorded multiple tracks and stuff. It was quite scary, because I had to buy another set of my recording gear just for this. But it got done. So, our typical recording setup: we have, at the bottom, my laptop and audio mixer, as well as a video camera here. The video camera is always pointing at the speaker. Right now, in front of me, this is a Sony camera pointing at me, so that records me, my face. And this is the AVerMedia Live Gamer Portable, which is a screen capture tool. There are a bunch of these screen capture tools out there. Elgato has one, and Blackmagic has a couple of these screen capture tools as well. So you can actually look those up. What I like about the AVerMedia is that it has very good support for different types of screen resolutions. It supports 4:3 and 16:9, which some of the other screen capture tools don't. And it works through USB 2, so I don't need to have a very powerful machine to run this properly. So this is my setup at FOSSASIA. I have a small table where I put my laptop. Almost every single meetup I go to has a different configuration. This is kind of fun. This is the software in action. This is Open Broadcaster Software. I can, in real time, move things around, resize and adjust the different screens. I can set up different default arrangements of screens, which I can pre-configure and then switch into the correct scenes as and when required. So that's how, if you watch the screen transitions, they look so smooth: because it's all software. Software for the win. For audio recording, as I said, I put a premium on good audio. Having good microphones to pick up the sound and the voice of the speaker is important.
In my case right now I'm using a lapel mic clipped to my shirt. In situations where I don't want to intrude on the speakers, I'll just put one condenser microphone right in front of them — like the one you see in front there; that's one of the condenser microphones we use. I'm using that one for questions and answers, so your voice can also be heard by whoever watches the video later. In most situations we only have one mic, which picks up the speaker's voice. It will still pick up a little of the background sound, which we can even out with audio processing — it levels out the background so the Q&A ends up at about the same audio level as the speaker. This is an example of the wireless clip mics we use. We first used these at PyCon in 2014, I think, because one of the organizers told me the audio the previous year was quite bad and we needed better audio. Okay, fine, sure — let's get this. It's not cheap, though. And this is the sound mixer that we use. Actually, I don't use it any more, because I found a smaller and more compact device — that's the Zoom H5 over there. It's a good recorder. In most of our setups I also have a backup system: a backup microphone, a backup audio recorder, and a backup camera. Why do I have backups? Because it's running Windows. Well, thankfully it has not crashed too badly on me before, so it's okay. But in situations where a recording does go sour — we don't capture the audio properly, or it's out of sync, or whatever — I can use the backup video to reconstruct the slides, and the backup audio to reconstruct the soundtrack I need. Because for me, the audio and the slides are the most important thing. My gesticulation and all that doesn't matter. My voice and the slides — that's what's important.
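The levelling step described above — bringing quiet audience questions up toward the speaker's level — can be approximated with per-window RMS normalization. A rough sketch of that idea; the target level, window size, and gain cap are my own illustrative choices, not the talk's actual processing chain:

```python
import math

def level_audio(samples, rate, target_rms=0.1, window_s=1.0):
    """Normalize each window of audio toward a target RMS level,
    so quiet passages (e.g. audience questions) come up in volume.
    `samples` is a sequence of floats in [-1.0, 1.0]."""
    out = list(samples)
    win = max(1, int(rate * window_s))
    for start in range(0, len(out), win):
        seg = out[start:start + win]
        rms = math.sqrt(sum(x * x for x in seg) / len(seg))
        if rms > 1e-6:                          # leave near-silence alone
            gain = min(target_rms / rms, 10.0)  # cap boost to avoid blowing up noise
            for i in range(start, start + len(seg)):
                out[i] *= gain
    return out
```

Real tools use smoother gain ramps between windows to avoid audible pumping; this only shows the core idea of measuring level per window and boosting toward a target.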
So having backup systems is important to me. Right now, for instance, I have a backup camera at the back pointed at the screen, recording the screen — if anything happens to my system, I can reconstruct the transitions from that. I have another camera in front also recording video, so if the main one crashes I still have that to recover from. And the H5 itself records my audio even as it sends it to the live stream. Another benefit of using OBS is that it supports live streaming, so it can stream to Twitch — OBS is, again, what gamers use to screencast their gameplay to their Twitch channels. So OBS is pretty good for streaming as well as recording. With all this gear, we need a way to carry everything around, so I resorted to getting my mom's old suitcases and such. At first I was kind of ghetto about it: I just used a shopping cart. I mean, there's not much stuff — how bad could it be, at less than $20 for this little thing? Turns out it wasn't a good idea. Why? Because it was carrying a lot of gear and the wheels are flimsy. I bought two, and both sets of wheels broke. I don't know how the guys were handling them, but I think the gear was too heavy and they were being carried up staircases and so on. They broke, and I thought: man, I'm not going to spend more money on this. Let's get some proper suitcases. That's where Mustafa comes in — Mustafa has some really cheap, good-quality suitcases. These two over here are what we use for carrying. So here are three of the sets that we have; I have a total of five right now. Two of them are stored in a secret location, where my volunteers can pick them up for meetups and bring them back when they're done. I named the sets by the order in which I accumulated the equipment.
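When a main recording goes sour and has to be rebuilt from the backups, the backup tracks first have to be lined up against whatever survived. One common way to do that — my own sketch of the general technique, not necessarily the exact workflow described (Final Cut Pro does this internally when you sync clips by audio) — is to find the shift that maximizes the cross-correlation between the two audio tracks:

```python
def find_offset(a, b, max_shift):
    """Return the shift (in samples) of track b relative to track a
    that best aligns them, by brute-force cross-correlation.
    Positive result means b starts earlier than a."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        # Sum a[i] * b[i - shift] over the overlapping region.
        score = sum(
            a[i] * b[i - shift]
            for i in range(max(0, shift), min(len(a), len(b) + shift))
            if 0 <= i - shift < len(b)
        )
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Brute force is O(n · max_shift); real tools use FFT-based correlation for long clips, but the aligning principle is the same.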
Accumulating the equipment really started early this year. In March, when we were recording FOSSASIA, they needed five tracks recorded simultaneously, which is why I started buying additional equipment to make that happen. So right now I have five sets. If you're interested in helping out as a volunteer, there are enough sets for you to carry around. In the past, when I first started doing this, the meetups weren't as regular — maybe one or two a week — but nowadays it's four or five days a week that I, or rather my team and I, are going to meetups and recording them. Cool for the community, but stressful for us. Thankfully I have a job where I knock off at six o'clock — so thank you, Singapore Power, Pivotal, and Neo. We do have a cable management problem, because carrying this much gear around requires a lot of cables. But my rationale is that we're only there for a couple of hours, so I don't think it's that big a deal. For conferences, where I record for longer, I do try to tape things down — over there you can see a little black tape, duct tape. Simple. So what's in our bag? Here are sets number three and four, which is what our community members use, laid out on a table — what the guys carry to the different meetups. Some of the items: a Windows laptop, because OBS Studio works best on Windows, especially with the screen capture device we have, and it has proven quite effective.
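With five near-identical sets being packed and repacked by different volunteers, it's easy for a small item to go missing. One trivial safeguard is a checklist comparison before leaving the venue — the item names below are my own illustration, not an actual Engineers.SG packing list:

```python
# Hypothetical contents of one recording set.
CHECKLIST = {
    "windows laptop", "capture device", "usb condenser mic",
    "hdmi cable", "hdmi splitter", "backup camera",
    "zoom h1 recorder", "tripod", "power adapter",
}

def missing_items(packed):
    """Return checklist items not found in the packed collection,
    compared case-insensitively."""
    return sorted(CHECKLIST - {item.lower() for item in packed})

packed = ["Windows laptop", "Capture device", "HDMI cable",
          "Backup camera", "Zoom H1 recorder", "Tripod"]
print(missing_items(packed))
# ['hdmi splitter', 'power adapter', 'usb condenser mic']
```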
It's cheap — I got it off Carousell for about $300, with a dedicated graphics card. For screen capture I also have a bunch of video adapters, because I want to be prepared for every situation, every kind of output that presenters bring to us. Over here we have HDMI; there's another slide here: we have VGA to HDMI, meant for venues with only VGA projectors; a Mini DisplayPort to HDMI; an HDMI to VGA; and a few other things. So we're ready for you — except USB-C, which I'll probably have to get soon. Actually, I have a couple of those already. For recording audio, to simplify things, I got a Blue Icicle: a USB sound card with an XLR interface — a 3-pin interface — that connects directly to a condenser microphone. Condenser microphones require you to send an electrical charge to them, to charge the diaphragm, and the Icicle can provide that power. Why condenser microphones? I think they have the best audio pickup overall. For a webcam we have a Logitech one. The cable is about 2 metres long, but for situations where the presenter is quite far away we have a 10-metre USB 2 extension cable, just in case. The screen capture device uses a mini-USB cable to connect to the laptop. HDMI cables, as well as an HDMI splitter — in the past, when we first started out, it was all VGA, because that was the only setup I knew how to use; much later I figured out we could actually use HDMI for the capture. This video camera is usually the backup camera you see at the back. The backup audio recorder is a Zoom H1 — kind of cheap, about a hundred-plus dollars, and it captures really good audio. A bunch of tripods, for the backup camera as well as the main camera. And of course you need power.

So what's next — what are the next steps that we want to push? These things are at least on my wish list. I don't think I have the capacity to do them right now, but I hope I can get more volunteers to help out and spearhead some of them. One is producing transcripts for videos, so people can easily skim and scan the text of a talk and even search it by keywords. Another is translation and subtitling, to make our videos accessible to the region — I envision that in the future we could have a Bahasa version or a Vietnamese version available, so people from those countries can watch and learn. And also optimizing our workflow further, to automate the distribution process — right now it's still rather manual after the fact; I still need to close out and update the content and so on. These are the volunteers we have: on the left, the volunteers for FOSSASIA; on the right, our volunteers at PyCon 2014, I think — so we have here Valentine, Latalia, and a few others. I feel a bit bad for some of the FOSSASIA volunteers, because they were attending the conference and I was short-handed, so I kind of went "hello, can you help me?" — and they were all very kind and helped. I'm very, very thankful for all of them. So if you find what we're doing inspiring, or mildly interesting, or useful, do join us and support us. You can email me at admin@engineers.sg; you can follow us on Twitter — Engineers.SG, for the win — or Facebook; or, if you're so inclined, you can support us by clicking this little link over here, which brings you to a support page where you can help out in some small monetary way. No pressure — don't do it if you don't want to. That's all I have. Questions? Yes, Roland. You mentioned what was the most popular talk — anything
interesting about which speakers or topics or events draw views? Well — keywords. If it's a buzzword-field kind of talk, like Docker or Kibana, then search terms, keywords, and buzzwords do draw a lot of views. Big-name speakers also draw a lot of views, I think partially because they retweet or reshare the video to their own social networks, which brings in viewers. To be honest, I need to do some data science on this to find out which are the popular ones. I haven't done it yet, so it's something I should probably figure out — not that I want to use it to target which meetups I should record. I try not to curate that, because I really want to capture as much of the content as possible. But I did recently add a featured section to the website, so I can highlight noteworthy videos. Recently I highlighted the keynote videos from iOSConf, and after the agile conference I brought some of its keynote speakers up into the featured section. So there are some ways of surfacing things, and the first video on the home page usually gets a fair number of views as well, because it's the first thing you click on to get a sense of what's there. I should really data-science this; it would be an interesting project. Any other questions? Yes — have I lost things before? Maybe I personally have, usually through forgetfulness when packing stuff away, power adapters and the like. I did lose one power adapter once; I think it got stuck to one of the other items. I haven't actually solved that, but some of my volunteers suggested having a packing list, so they can look through it and double-check what's there. In the volunteer training that I conduct, I ask the volunteers to learn how to pack and repack the set, and I tell them which pockets the specific items should go in, so they know what needs to go back where. That helps them remember what they should be keeping. For most of the more expensive equipment, I put an Engineers.SG sticker on it, so people know it belongs to us. There was one situation at FOSSASIA: there were a lot of Lenovo laptops flying around, and a lot of identical laptop bags, and we weren't sure whose was whose. It just so happened that I had stuck the Engineers.SG sticker on our laptop bag, so we were able to find it. About the videos being useful for engineers — yes, that's one of the goals, for people to learn from these videos. One validation I had of this was at my workplace, Neo Innovation, last year. During lunchtime, a colleague behind me was watching a video; I turned around to see what it was, and it was a PyCon video of one of my other colleagues talking about pytest — testing in Python. We were working on a Django project and he needed to skill up on testing in Python, and he was watching a video my team recorded. For me that was a big validation that people are actually using it for the purpose it was meant for, which I think is a good thing. Yes, it's not for profit — but we do need money. For some conferences I do charge a small fee, and that money goes toward the upkeep of the equipment we have. The same company behind this project is also used to organize the PHP conference — PHP Conf Asia — which I run, and basically the money that I earned
from the conference also gets funnelled back into purchasing equipment for all our sets. What I learned from this: I think we have a great community here, where everyone is willing to share new things, and as word got out that we were doing this, more meetup groups became interested in getting their events recorded, which I find really cool. I really wish I had more volunteers so we could capture everything, but I've come to the conclusion that completeness is not a goal any more. I don't need to capture all the videos — which could be a good thing, since it means you guys have to actually start going to meetups. I don't actually benefit much from this; I have sleepless nights, if anything. It's the community that benefits. I hope you guys enjoy it. Yes, Vishnu — clock drift? You mean in OBS? Well, I'm recording 50 fps on both cameras right now, so I think that should be fairly okay, but the problem does arise when I put them together with the screen recording, because the screen recording is captured at 30 frames per second while the camera videos run at 50. So it does sometimes happen, but I'm using Final Cut Pro, which reduces that probability. The post-production workflow for conferences is: I take all the video sources into Final Cut Pro and synchronize them using the audio — I use the audio to line up the clips, so I know the video clips are in sync, and when I need to cut or edit out parts, the audio and the movement of the lips stay in sync. We did have a problem in the past, when we were using webcams, where the lips were moving but the audio was maybe 10 milliseconds off — a bit of disjoint there — but those things can be fixed in post-production. In the previous system, when I was figuring out the audio setup in OBS, I did have to give the video camera a few milliseconds of offset so it would be in sync with the audio. But once I figured out how to solve that — by adding the audio as an audio input rather than as an item in the scenes list — the problem went away. Took a while to figure out, but yes. On subtitling: to be honest, I have not had the bandwidth to explore that yet, but there are ways. There is auto-subtitling on YouTube, which is kind of crappy; I'm not actually looking into that. But if you go to any of our videos on YouTube, we're hooked into a program called the Amara Project. If you scroll down here there's an Amara link where you can contribute back — oops, I clicked the wrong one — okay, so there's an Amara Project page where you can contribute subtitles for the different videos. I've also recently turned on community-contributed subtitles, so there's an option here that allows community members to contribute subtitles in different languages. You could do that as well. I've not had the bandwidth to explore this further, so if anyone would like to help out, I'll be very happy to grant you access. Thanks. All right, thank you very much.

Thank you, Michael. Paraphrasing what you mentioned — the title is about right; if I were to talk to you, that's something I would say. My thoughts exactly. Okay, the magic has not happened — oh, there we go. Yes, let's keep this configuration; we shall come back to the scary maths later. The title, as it says — and I would point out, it is not metaphorical — this is about an actual use for a real cardboard box: to receive radio signals from the moon. This is part of a broader series of talks and demonstrations I'm doing on amateur radio. I won't repeat the entire introduction today, other than to say there's a great
Wikipedia article on it. This is the sort of conventional symbol for amateur radio. Something's gone wrong — give me a moment, please. I should be able to see... everything's fine; I just haven't got the usual presenter display. Never mind. Very briefly: amateur radio is a communication service, but specifically for personal motivations rather than anything you do for an employer. That definition is used in most English-speaking jurisdictions and has been for about half a century. I would suggest it looks an awful lot like a definition of the maker movement, except for the bit about radio: it's self-training, it's not for profit, it's sort of engineering for fun. I resumed my involvement in radio about two years ago — I first did this as a kid — and the question for me at the time was: why do it? The internet and smartphones have pretty much eliminated the traditional uses of amateur radio. What's the point any more? So I identified four areas where amateur radio was interesting. Operating where the mobile network goes away — particularly in natural-disaster scenarios: Louisiana underwater, the mobile network stops working, and the roads are flooded so engineers can't get out and fix it. Operating where the network doesn't exist — rural and desert areas are the obvious case. Space, which is my area of interest. And DIY radio electronics, which very much interests me — I've been playing with electronics since I was seven years old, and this is an application of electronics; it's fascinating. And, not so much for me but for many people, the ability to operate at increased power: amateurs can run in the high hundreds of watts or low kilowatts, which is useful for communicating very long distances along the earth's surface, using the ionosphere as both a duct and a mirror. Amateurs have been involved in space right from the beginning — and I do mean the beginning. In '57 — hello; I do apologize, my presenter mode just isn't working — in 1957 the Soviet Union launched Sputnik. The first person in the western world to detect Sputnik was not part of a university or a military or any governmental organization: it was a ham, operating out of his basement in West Germany. He later went on to establish the Bochum Observatory, and I'll come back to that later. It just happens that westerners weren't looking at the time, so it was a ham who picked it up first. Hams were involved right at the beginning of the space race: in 1961, the year the first cosmonauts and astronauts went to orbit, in December of that year OSCAR 1 — Orbiting Satellite Carrying Amateur Radio — went into orbit. It hitched a free ride on, of course, a CIA rocket. Perfectly serious. That was the first of the amateur satellites; there have since been dozens, or low hundreds, of satellites carrying amateur traffic one way or another. It's an ongoing thing — there are probably thirty-odd operating today. One of them is the ISS. It has not only an amateur station on board but also an amateur repeater, and more than half of the astronauts — and I believe also the cosmonauts — who serve on the ISS are in fact licensed amateurs, and use it to talk to friends and family from orbit. I haven't yet got round to working it; they usually do amateur radio after work, which I think is 3 or 4 in the morning our time, but I will get to it at some point. Another interesting variant is satellites built for a different purpose. These are EO-79 and EO-80: European Space Agency satellites that had a sort of paid science mission first, and in return for being allowed to use amateur spectrum, then made themselves available for amateur use for the entire rest of their lifetimes. They spent 6 to 12 months on their science mission and then 5 to 10 years providing amateur service. So there are quite a few ways amateur satellite capacity comes into being; they're not all fully amateur-funded satellites. To talk to these low-earth-orbit satellites — the ISS, the EOs —
you don't need terribly complicated equipment; in some cases you can do it with a handheld antenna. In fact, if you look at the bases of the antennas just in front of my head, they actually have nice foam handles: these are antennas designed for handheld use with a handheld radio. In this case I've got a pair of them, to solve a polarization problem, and a little metal box in the middle with motors in it that steers them to follow low-earth-orbit satellites as they cross the sky. They cross quickly — a pass is maybe 15 minutes — and it gets exhausting to keep pointing while you're talking. So — I've forgotten parts of the story — but a satellite launched in 2000 called AO-40, AMSAT-OSCAR 40, had intended to use what's called a Molniya orbit. That's something invented by the Russians, because they needed to serve the very northern latitudes: you take three satellites in a very elliptical orbit, fairly tight around the southern hemisphere and a long way out — 65–70,000 km — over the northern hemisphere, and each gives about 12 hours of exposure per orbit, so just three of them is enough to provide round-the-clock coverage for the entire earth north of about 40 degrees latitude. For the Soviet Union this was a very useful tool. So AO-40 attempted to get into a Molniya orbit; it failed, ending up in an entirely different orbit from what was planned. And again — this is not like commercial service; there's a whole lot of random stuff going on. I'll come back to AO-40 later, but I raise it now because it will be important in the story. To give you a sense of where it is: it's not in low earth orbit — it only operated for five years, between 2000 and 2004 — but it was in this very elliptical and very high orbit, geostationary distances in effect, so quite a long way out. My end objective is to bounce radio signals off the moon. This is a bit difficult: it's a three-quarters-of-a-million-kilometre round trip, and so it's not just radio — it's cryogenics, it's signal processing, and a whole bunch of other things that all have to work at the same time for it to work. This is a multi-year project, and that's okay; I will keep playing with related stuff as steps towards it. How the cardboard box came about: I was looking for an intermediate project. Before I get into what I'm doing, a quick survey of other cool things amateurs have done, or are doing, in space. During the Apollo program, when second-hand TV dishes were not available cheaply, this gentleman built himself a stressed hyperbola dish — it's not a parabola; it's straight beams that are then stressed using fishing line, but it's near enough to provide a focused beam — and was actually able to listen in on the capsule communications for the Apollo program. The deal was: yes, it's fine for amateurs to be listening; no, you can't record or reproduce it. It's an interesting contrast — in Singapore it's like, if it's not broadcast, you don't let them listen to it at all; in the US it was like, fine, you got us, but you can't reproduce it. So they got to listen, but the official recordings remain the official record. One thing that's generally fairly easy, actually, is listening to Jupiter. Jupiter is the most powerful source of HF radio emissions in the solar system, even more so than the Sun, so with a bit of wire about 8 metres long and a sensitive receiver you can in fact listen to radio noise emitted by Jupiter. At the other extreme — and I mentioned the Bochum Observatory, founded by the guy who happened to hear Sputnik — in 2006 a bunch of amateurs there were able to detect Voyager's signal. At that point it was already past Pluto, about 10 billion kilometres out. So, you know — I think a quarter of a million kilometres is hard. They then set about bouncing radio signals off Venus, tens of millions of kilometres away, in 2009. That's beyond anything I expect ever to attempt. Apparently they're a fairly ambitious group, because they've since set their sights even higher: not to bounce signals off Mars — which apparently would be too easy — but to
build this object and put it in orbit around Mars. They're not kidding. How they'll arrange a ride is not yet clear, but as I said, amateurs have put dozens of satellites into orbit, including some of the high orbits — can we get a ride to Mars? Time will tell. If they succeed, they will win this revolting object: a cup created in 1929. Hiram Percy Maxim, very much the father of the amateur radio movement in the US, was also fascinated by Mars, and so he established a prize — just the cup, there's no cash — for the first amateurs to establish two-way communication with Mars; the details are up to you. So AMSAT — it's the AMSAT Germany group doing this — actually expects to succeed, at least within our lifetimes; whether that's 5 years or 20 years away, I don't know. More on the sort of projects that have succeeded: this is ISEE-3, a satellite launched in the 70s. It was placed at the Earth–Sun L1 point to observe the Sun. It was then retasked to rendezvous with a comet, and then retasked again to rendezvous with a second comet — so it's both the first spacecraft ever to rendezvous with a comet, and the first and only one to rendezvous with two. It was part of the posse of probes that chased Halley in 1986. By about 1997 NASA decided it had nothing more useful to do with the spacecraft, so they gifted it to the Smithsonian — in orbit. It's in orbit around the Sun, and it intersects the Earth approximately every 15 to 20 years. Around 2010, more than a decade after its last command was sent by NASA, a bunch of people — who I assume were college students — thought: huh, ISEE-3 will intersect with Earth in about 2014; we should reactivate it. And so, with the help of the Smithsonian and NASA, and the Arecibo Observatory, who lent them the world's biggest dish, and some radio manufacturers who lent them software-defined radios, this small team actually put together a project to regain control of the craft, and to fire its engines once to put it into a stabilizing spin. Unfortunately, they weren't able to make the additional firings to return it to its original mission. But this of course required not only orbital mechanics and spacecraft operations, but radio: the reason for using the Arecibo dish was to have very-weak-signal communication with a spacecraft that at the time was still a long way away. This is kind of an extreme example, but it's what people who play with this stuff for fun can achieve — and have. So, getting back to what I'm doing, and my approach of choosing smaller projects as steps towards the larger one. Low earth orbit is pretty easy: it can be done with handheld gear, and can be simplified, at least operationally, by an automated tracker, which I built earlier this year. Geostationary and Molniya are up to a hundred times that distance, but with suitable dishes and some fiddling it's doable. Setting aside that there are currently no Molniya-orbit or amateur geostationary satellites — the first geostationary one is due in about 18 months — getting from 60,000 km to 750,000 km in one step is a bit, uh, horrifying. It's quite a difficult thing to do well. Wouldn't it be nice if there were at least one intermediate step? But there can't be, because there are no operating radio transmitters on the moon. Which is literally true, but misleading: there is an operating radio transmitter in orbit around the moon. This is the Lunar Reconnaissance Orbiter, which is in the process of taking metre-scale photographs of the entire surface of the moon. It has two downlinks, one of them on S-band, which is pretty close to Wi-Fi and Bluetooth, so the equipment is readily available — not necessarily cheap, but readily available. When I realized this I thought: oh hey, this is still operating; surely somebody has gone to look for it. And the answer is that two people have gone to look for it — one using a 60 cm satellite dish, one using a 90 cm Wi-Fi point-to-point dish. But
because of the frequencies involved, both setups have fairly similar stats: the dishes are comparable in size, both optimized for about the same frequency, with very similar gain — 21, 22 decibels, which is approximately 100 to 1. So instead of listening to the whole sky, they're listening to about a hundredth of it — a 15-by-15-degree cone, if you like. There were some differences in approach. The OZ9AC guy used a 1000-to-1 low-noise amplifier without any frequency conversion, which means he needed some fairly fancy coax: it turns out the standard coax we use for most radio is useless at 2.4 GHz, because the losses in it are immense. The other guy basically ordered random gear off Alibaba and messed around for a while until he got it to work. He used a low-noise block downconverter for satellite TV, then just cheap coax into an RTL-SDR, the standard dongle — because the downconversion had already occurred, the frequency was in a range that a $10 RTL-SDR dongle could handle, rather than needing a $1,500 Universal Software Radio Peripheral, the sort of Rolls-Royce of software-defined radios. So okay, I thought, that's great: when I get to the point of doing that sort of stuff, this will be a useful starting project ahead of getting a 3-metre dish and all the other gear required for Earth-Moon-Earth. But these are still dishes, and in Singapore there's a regulatory problem with dishes. It's doable, but you need about 3 or 4 government agencies to all say yes at the same time. I will get there, but it's not where I want to start — so okay, I shelved it as an interesting entry point for once I start having those discussions with the various bodies who have to say yes. Then, separately, about a week later, I was reading up on AO-40 — this thing that tried to get to a Molniya orbit and ended up in that very high orbit — and there'd been a problem getting hams in the US to try it, because even in 2000 hams were sort of unwilling to go out and buy a dish. So someone said: well, no, no, it's microwave — you don't need a dish, you can use a horn, meaning something with flat sides, a bit like a speaker box. And the important thing about flat sides is that you can literally use a cardboard box and a roll of foil. So, not a metaphor: an actual cardboard box — in fact, two. What you're seeing here, from the outside, is a cardboard box. Inside is a second box with four roughly triangular pieces that make the horn, and at the back of the horn, roughly a paper clip — a bit of wire stuffed directly into the input socket of the low-noise block. So the low-noise block downconverter sits directly on the back of the cardboard box, and the driven element is just a bit of stiff wire in the throat of the thing. I thought: well, that's really cool, I love this — a nice hack, a good thing for hams, yada yada yada. And then I happened to notice the specifications, and here we have one of the craziest coincidences ever. LRO operates at 2271 MHz; AO-40 operated at 2304 MHz. They're less than 2% apart. It means I barely have to redo the numbers — I'll just make the same box. I will redo the sums, but in principle they're so close that you can use exactly the same equipment in both cases. Importantly, the gain is about the same: 20 dBi is about 100-fold, which is near enough the same specs as what those other two guys were using to detect LRO. So I can take a cardboard box design that was used to solve a problem a decade ago for AO-40 and use it to detect transmissions from the moon. I've got my intermediate project — unless the MDA decides to start regulating cardboard boxes, in which case I have a problem; but assuming not, that's the approach that's available. I'm taking a sort of hybrid approach, because I already have a Novena, which has what's called a field-programmable RF unit — sort of the primary component of a software-defined radio — on board, operating all the way up to 3.8 GHz. So it will cover
completely the frequencies involved meaning I don't actually need a down convertor so I've taken the box, a low noise amplifier bit of rather expensive coax and the navina, that's the current design the amplifier is just a little box it's sort of that big with SMA connectors on both ends the coax is ludicrously expensive A because it's got to have low capacitance so it's not absorbing the signal and B because it's got to have good shielding so the signal isn't escaping a really fundamental problem if anyone in the room has done any communications engineering the reason that super heterodyne radios were devised a device like this which has a lot of gain and the same frequency at the input and output is a very high risk of feedback if any of the output that's a thousand times as strong as the input gets back in then you'll just get feedback and you probably won't damage the amplifier but you'll certainly lose your signal under the noise and so you need the coax not only to not have capacitive losses that absorb a high frequency signal but also not to have emission which will make its way back in so yeah that's like an $80 cable a novena unfortunately I've damaged the screen connector on mine so I may have to do something a bit clearer but the device is still working fine not apparent in this photograph but there's a sort of option socket here and the novena is shipped with about four this is Bunny Huang's open source laptop for those who are not aware of it it's shipped with about four different options it's a field programmable RF unit and it makes use of the fact that on the motherboard there's an FPGA because the rate of samples coming off a Lyme chip is so high that you can't get it into a USB3 or even a USB-C connector you'd need about four so you've got this very high bandwidth connector available so how much time when do I finish anyone 11.30 so I'm off about 20 minutes so here's the difficult math part of the talk there are four separate ideas that I want to 
present here, and this is to give a sense of why this thing is difficult to do even one way; then, in your head, you can look at what happens if you double the distance: it gets worse. This first table describes the amount of signal power available at each step between the LRO and the analog-to-digital converter inside the chip in the laptop. The transmitter is operating at about 5 watts, or, in scientific notation, 5 by 10 to the 0, for reasons that will become obvious: we're crossing 22 orders of magnitude here, so the numbers get tiny. To make their lives easier, radio engineers use decibels, and usually decibels relative to a milliwatt. The reason is that instead of doing a lot of multiplication it becomes addition and subtraction, and all the powers of 2 we're accustomed to become multiples of 3: a doubling is add 3, a halving is subtract 3; multiply by 16, add 12; divide by 16, subtract 12. So for radio engineers it's all gain in decibels, and therefore an actual power level in decibel-milliwatts. Very few people in the room are accustomed to thinking this way, so here are the same numbers in scientific notation and then metric. So: the transmitter on the satellite is operating at about 5 watts. It has a 75-centimetre dish antenna, which delivers about 22 decibels, about 200-fold, or really 160-fold, effective gain: instead of radiating uniformly, it packs the power into a 160th of the sky. The effective isotropic radiated power (isotropic means all directions) is 790 watts. This doesn't change the actual amount of energy that leaves the satellite, but because it packs it into one narrow piece, we can pretend that we've got that larger amount of energy. The big problem is the free-space path loss, where we get to lop 21 zeros off the amount of power. So what was 800 watts of effective power is now in the attowatt range; in fact not even that, it's down in the zeptowatt range. There are some even worse problems. Pointing is difficult; in fact, that's the reason this project is connected to the satellite tracker.
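As a quick sanity check on the arithmetic above, here is a small Python sketch of the decibel bookkeeping and the free-space path loss. The Earth-Moon distance, frequency, and antenna figures are round-number assumptions on my part, not the talk's exact table, so treat the output as ballpark only.

```python
import math

def w_to_dbm(p_watts):
    """Convert a power in watts to decibel-milliwatts (dBm)."""
    return 10 * math.log10(p_watts / 1e-3)

# Doubling a power adds ~3 dB; multiplying by 16 adds ~12 dB.
assert round(w_to_dbm(2.0) - w_to_dbm(1.0), 2) == 3.01
assert round(w_to_dbm(16.0) - w_to_dbm(1.0), 2) == 12.04

# Rough Earth-Moon link budget (assumed values):
f_hz = 2.271e9                      # LRO's S-band downlink
d_m = 3.844e8                       # mean Earth-Moon distance, metres
lam = 3e8 / f_hz                    # wavelength, ~13 cm
fspl_db = 20 * math.log10(4 * math.pi * d_m / lam)  # free-space path loss, ~211 dB
eirp_dbm = w_to_dbm(5.0) + 22       # 5 W transmitter plus ~22 dB of dish gain, ~59 dBm
received_dbm = eirp_dbm - fspl_db   # ~ -152 dBm: sub-attowatt territory
```

"Lopping 21 zeros off", as the talk puts it, corresponds to those roughly 211 dB of path loss.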
The plan is that at some point I will mount the cardboard box on the tracker, in order to maintain precision pointing and keep the moon right in the middle of the field, because there are losses associated with pointing. I've also made a guess here: between atmosphere and pointing, divide by about 4. There are also atmospheric losses. If you're looking at stars in the sky, they twinkle a bit; this is called scintillation, and it happens because you've got air between you, at least for the first 50 kilometres, and it's moving a little bit, which disrupts the signal a little bit. So between those two, allow 5 decibels, roughly divide by 4. OK, and finally, polarization. The tracker that I built has its antennas at right angles; this is to synthesize something called circular polarization. Most radio signals are linear: you've got a simple antenna, a straight line, electrons being pushed back and forth, left and right, so you've got an E field going this way and therefore the H, or magnetic, field going this way. That's the basis of radio, and most of it is done linear. You can also polarize a signal in a circular way, so the signal is doing a helix; it's going in circles. The simplest way to do it is to have an antenna that is itself a helix and point it in the direction you're pointing, and so as the electrons move up and down the wire antenna you're creating first an electric field that's in a helix, and then, 90 degrees behind it, a magnetic field that's in a helix. This turns out to be really useful for satellites, because it means you only have to get the pointing right; you don't have to align polarization. If you've got two linear antennas on Earth, a TV and a TV station, that's really easy. But if you've got a satellite in an unknown orientation and an antenna on the Earth, it's not enough to point them at each other; you've also got to get them into the same polarization, or you're throwing away 99.9% of your signal. So helical is frequently used
in satellite comms, because it means that once you've solved the pointing problem you don't have to solve the polarization problem. The difficulty is that because I'm using a horn here, I can only do linear. I'm getting away from the need to make a curved dish, and from both the technical problem and the regulatory problem, but what I'm giving up is that I can only operate linear, and one linear antenna can't be polarized two ways. For the satellite tracker I did for Maker Faire, it's two antennas, and there's a way to mix them; in this case I can only do it one way, which means I lose 50% of my signal. So once I've taken into account the atmosphere and pointing problems, the polarization loss halves what's left again, down to about a tenth of an attowatt. In SI that puts us in what's called the zeptowatt range, which means basically nothing to anyone. In metric you're allowed to concatenate prefixes, so to help make some sense of this, what I'm talking about is 0.1 nano-nanowatts: a tenth of a billionth of a billionth of a watt. And this is not notional; this is the actual amount of power that's available for processing. It's a ludicrously small amount of power. So, OK, how do we get from this to a usable signal? The first step is the cardboard box. The cardboard box has a forward gain of about 20 dB, or 100-fold, so that gets us from a tenth of an attowatt to ten attowatts. Bravo: we've gone from almost nothing to a little bit more than almost nothing. The next is the low-noise amplifier. It adds about a thousand-fold gain, 30 dB, which gets us up into the femtowatt range. Unfortunately we then have some problems. The coax, despite being very fancy, will still lose about 30% of the energy that gets passed to it. There's also coupling inside the Novena between the coax and the Lime Microsystems chip; I'm guessing another decibel there, it might actually be two or three. So inside the field-programmable RF chip, the Lime chip, are four amplifiers that process the signal in sequence. At the time we enter the chip we are at about 5.6 femtowatts. This is still a ludicrously tiny signal. Fortunately there's a lot of gain available inside the chip. First there's the low-noise amp stage, which introduces about 12 decibels of amplification; then a variable amplifier, up to a thousand-fold; then a low-pass filter for getting rid of a lot of noise; then another variable amp with another thousand-fold. So at this point we're steaming: we've reached 350 nanowatts, a third of a microwatt. Is this enough? Hard to say. Inside the chip, the ADC is differential and works on about a one volt peak-to-peak basis. It's 12-bit, meaning 4,096 levels, and I believe they're linear, meaning we're looking at about 240 microvolts per step at the input to the ADC. So how do we compare these things? The input to the ADC has an impedance of about 2 kilohms, so you multiply these two numbers together and take the square root, which gives you a voltage of about 27 millivolts, which is just over 100 times the step size. So the answer is yes: assuming my numbers are right, and there's an awful lot of guesswork in here, so they might not be, but assuming they're right, then all of this together is enough, and just enough, to allow the ADC inside the front-end chip to actually discern what's happening from LRO. So we can almost celebrate, except we have a problem. There's always a problem. The problem here is noise. This is a much lengthier discussion, so I'm going to gloss over it a bit. The entire sky outside of Earth, the entire universe, is emitting radiation, heat, at a very low level: about 2.7 kelvin, or around minus 270 Celsius. But it's not zero, and for numbers this tiny, this tiny amount of heat actually matters. Worse, the moon is not at that temperature: the moon varies between a couple of hundred below freezing and a couple of hundred above freezing. For the purposes of this I've assumed about minus 25 Celsius.
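Before moving on to noise, the receive-chain arithmetic and the ADC headroom check above can be sketched in a few lines of Python. The gain and loss figures are the talk's rough estimates, so this is a plausibility check, not a measurement.

```python
import math

# Signal arriving at the horn, per the talk: ~0.1 attowatt.
p = 0.1e-18
p *= 10 ** (20 / 10)              # cardboard-box horn: +20 dB -> ~10 attowatts
p *= 10 ** (30 / 10)              # external low-noise amplifier: +30 dB -> ~10 femtowatts
p *= 0.7                          # even fancy coax eats ~30% of the energy
p_chip_in = p * 10 ** (-1 / 10)   # ~1 dB guessed coupling loss -> ~5.6 fW into the chip

# After the on-chip gain stages the talk quotes ~350 nW at the ADC. Is that enough?
p_adc = 350e-9                    # watts
v_rms = math.sqrt(p_adc * 2000)   # P = V^2 / R with ~2 kilohm input impedance -> ~27 mV
lsb = 1.0 / 2 ** 12               # 12 bits across ~1 V peak-to-peak -> ~244 microvolts/step
headroom = v_rms / lsb            # just over 100 ADC steps
```

So the signal spans roughly a hundred quantization steps, which matches the talk's "just over 100 times the step size".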
That's somewhere in the middle, about 250 kelvin, occupying only part of the field that the box is looking at, so it is as though we were inside a sphere at 23 kelvin, minus 250 Celsius. We multiply this by something called Boltzmann's constant (the gas constant divided by Avogadro's number, for those who remember the chemistry) and we end up with something called the noise power spectral density, which is measured in watts per hertz, or joules, in fact. And the difficulty here is that the amount of noise power you take in is a function of how wide a bandwidth you're listening to. If you're listening to a channel that's 1 hertz wide, then you'll get, you know, 10 to the minus 22 joules of energy per second. If you're taking a channel that, in this case, is 5 megahertz wide, you'll get about 10 to the minus 15 watts of noise. That's an incredibly tiny number, except that it's about 10,000 times this number. That's a problem: to a first approximation, if you don't have the antenna, what you're receiving in noise alone is 10,000 times what's coming from the satellite, and that's within the passband that you care about. That's hopeless; there's absolutely no way to recover that signal. So you then add an antenna, and this helps us... sorry, it actually gets a bit worse because of the polarization problem, but that's fine. Add the antenna, and it counts not once but twice (I think I've got this right). Firstly, it strengthens the apparent signal coming from the satellite, because it widens the aperture: the aperture of a bit of paperclip is tiny, and the aperture of a cardboard box is about 100 times that. So the amount of signal power entering and being guided into the front end of the low-noise amplifier is increased about 100-fold by having the box there. This is the basis of a hearing horn, a sort of cone that you hear with, or whatever; it's the same idea. But additionally, it narrows the beam, so the fraction of the sky that we're receiving noise power from is dropping at the same time.
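The kTB arithmetic above can be sketched in Python. The 23 K effective sky temperature and the 100-fold aperture figure come from the talk; with this coarse rounding the result lands near, but not exactly on, the talk's final signal-to-noise figure.

```python
k = 1.380649e-23        # Boltzmann's constant, joules per kelvin
T_eff = 23              # effective temperature the horn "sees", kelvin (talk's figure)
B = 5e6                 # receive bandwidth matching LRO's ~5 MHz signal, hertz

p_noise = k * T_eff * B         # ~1.6e-15 W of thermal noise in the passband
p_signal = 1e-19                # ~0.1 attowatt of signal with no antenna

snr_bare = p_signal / p_noise   # roughly 1 in 10,000 to 1 in 16,000: hopeless
# The horn counts twice: ~100x more signal through the wider aperture,
# and ~100x less sky feeding noise into the front end.
snr_horn = snr_bare * 100 * 100
```

Even back-of-envelope, the double effect of the antenna gain is what drags the ratio from hopeless up toward unity.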
So we get to count the gain twice, though not perfectly: unfortunately, hidden in here is an assumption about the antenna gain, so I'm afraid this picture isn't quite right and the numbers are slightly exaggerated. These are all added, except here: these numbers here, we subtract. We're subtracting in this case because the box is simultaneously increasing the amount of energy received from the satellite and reducing the total amount of noise power entering. And so we step from this hopeless signal-to-noise ratio of 1 in 10,000, which we have absolutely no hope of decoding, to 1.26. So for every, say, watt of total energy coming into the box, we're getting about 600 mW of signal and about 400 mW of noise. That is just barely decodable; it's just enough, and it works because of this double effect of the antenna gain. The next few steps are fairly unexciting, except for something called noise factor. This is a problem for all amplifiers, but especially for low-noise amplifiers. Nothing an amplifier can do can reduce the amount of noise. If you've got a signal that contains 9 watts of signal and 1 watt of noise, a signal-to-noise ratio of 9 to 1, then no matter what your amplifier does, your output can't have a better signal-to-noise ratio, because any amplification it performs will amplify both the signal and the noise. But worse than that, most amplifiers will introduce noise themselves. That's called noise factor, and it's a whole lecture by itself, but it adds noise without adding gain. So I scoured the world to find someone who can do a 30 dB microwave preamp with a very small noise factor. To get a sense of how small that is, compare it with the low-noise amp on board the radio chip: that only provides 12 dB, or 16-fold, gain, and yet it quadruples the noise, and that's typical. To find an amp that can provide a thousand-fold gain and add only 10-20% to the noise is a big deal. It's also an expensive device: like a 500-600 dollar device once it's landed in Singapore. I raise this to make the point that it's not enough that we can resolve the signal, and it's not enough that we've solved the sky noise; we've also got to deal with system noise, the noise introduced by the components in the signal chain. In this case, although the very expensive preamp only slightly worsened the signal-to-noise ratio, the one built into the radio chip quadrupled the noise, and on these numbers it drops the signal-to-noise ratio from 1.12, which we probably can decode, to 0.32. That is, the noise is three times as big as the signal, which we can't decode. So at this point we're like, OK, we give up and go home. But not quite: there are at least two interesting strategies to pursue here. The first is, let's go and get a dish and increase the gain; to keep the numbers simple, let's go from 100-fold to 800-fold. Suddenly our signal-to-noise ratio is looking much healthier, even despite the amount of noise gratuitously added by the low-noise amp in the radio chip. Unfortunately, as I said, dishes are a problem in this country, and the other strategy, using what in this case would be eight boxes, is complicated for reasons we'll get to at the end; mostly it's expensive, for this other problem. So the other solution is the bandwidth. Rather than trying to land the entire 5 MHz wide signal that the satellite is sending, we lower our sights a little and only take 1 MHz. This doesn't badly affect... actually it does, but I haven't shown it... it allows us to detect the signal, but it's not wide enough to allow us to demodulate it. It's like standing outside the door of a nightclub: we can hear it, but we can't hear the lyrics and we can't hear the melody. So it's that problem: we've narrowed what we can hear, but at the same time we've narrowed how much noise we're bringing in, and that gets our signal-to-noise ratio back above one, so we have a reasonable chance of detecting the signal. And that's the point of the experiment.
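To see why the first amplifier's noise factor matters so much more than the chip's, here is the standard cascaded-noise-figure (Friis) calculation in Python, plus the bandwidth trick. The 0.7 dB preamp noise figure is my illustrative assumption, not a quoted spec.

```python
import math

def db_to_factor(db_val):
    """Convert decibels to a linear power factor."""
    return 10 ** (db_val / 10)

# Friis cascade: F_total = F1 + (F2 - 1) / G1 + ...
# With lots of gain up front, later noisy stages barely matter.
F1 = db_to_factor(0.7)    # expensive preamp: assumed ~0.7 dB noise figure
G1 = db_to_factor(30)     # with 30 dB (1000x) gain
F2 = db_to_factor(6.0)    # chip's own LNA: ~6 dB NF, i.e. it "quadruples the noise"

F_total = F1 + (F2 - 1) / G1
nf_total_db = 10 * math.log10(F_total)  # barely worse than the preamp alone

# The bandwidth trick: thermal noise scales with bandwidth, so narrowing
# from 5 MHz to 1 MHz buys a 5x improvement in signal-to-noise ratio.
snr_narrow = 0.32 * (5e6 / 1e6)         # 0.32 -> 1.6, back above one
```

With the high-gain, quiet preamp first in the chain, the chip LNA's noisiness is divided down by that 1000-fold gain, which is exactly why the expensive first stage is worth paying for.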
I'm not trying to demodulate, receive and decode NASA's data; in this case it's not the mission data anyway, it's the telemetry, tracking and control data. All I'm trying to do is detect the signal from the satellite, and also measure its Doppler. It's going around the moon: as it comes towards us, the signal is raised above the transmitted frequency; as it goes away, it's lowered. So what I'm hoping to do, if this succeeds, is detect that the signal suddenly appears when LRO appears in front of the moon (the timing is public), that its frequency slowly drops, and then that it disappears when it's supposed to go behind the moon again. That's the expected outcome of the experiment. A bit tight for time, but yeah: not trying to demodulate, let alone decode or receive. So where am I? The coax has arrived. The low-noise amp is theoretically in Singapore, in DHL's holding somewhere, so I should have it in the next few days. My Novena is working, except that I damaged the screen cable, so there's a bit of fiddling there. And I will shortly procure a cardboard box. Additional approaches: I mentioned that I'm not using a down-converter; at this point I'm starting with the purest approach, just a very low-noise amp. However, adding a down-converter adds another 1,000-fold gain and also simplifies the coax and the losses in the front end of the radio. This is the more conventional convert-from-2-GHz-to-300-MHz type approach, which means one more device in the chain but drastically simplifies the link budget. Another option is to use a much more sensitive spectrum analyzer, a lab instrument. These are expensive; they have to be paid for and rented, and they usually can't be moved, which is fine, I can just point the box out the window, but I'd have to go and rent time in a lab somewhere to do it. It's certainly an option, though. The other thing I mentioned is long-term averaging. Because I have knowledge in advance of the behavior of the satellite, I can in principle integrate the signal over a long period of time.
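Backing up to the Doppler signature mentioned a moment ago, the expected swing can be estimated in a couple of lines; the ~1.6 km/s orbital speed for a low lunar orbit is my ballpark assumption.

```python
f0 = 2271e6   # LRO S-band downlink, hertz
v = 1600.0    # rough orbital speed in low lunar orbit, metres/second (assumed)
c = 3.0e8     # speed of light, metres/second

shift_hz = f0 * v / c   # ~12 kHz: carrier appears high, slides low, then vanishes
```

So the experiment is looking for a carrier that pops up roughly 12 kHz high when LRO clears the lunar limb, slides down through the nominal frequency, and cuts off as the orbiter goes behind the moon.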
So even with, like, a 0.3 signal-to-noise ratio, I could in principle detect a bias: it's not going to be perfectly uniform noise, because I know something about the behavior of the satellite. Take several minutes of data and sum it, and I can perhaps argue, successfully, that we're seeing data consistent with transmissions from the satellite, without actually being able to detect it in the usual way. However, if both of those fail, then there are two options, really. One is to go to a 3-, 4-, 5-metre dish, which means I've got to go through all the regulatory hurdles. The other is an array of boxes, which I won't do, because of two problems. One, mixing is difficult: you have problems with the phase relationship between the boxes, and you introduce a lot of noise in the mixer. But two, you need a preamp per box, so if I switch to, say, eight cardboard boxes, I'm looking at about $5,000 worth of preamps. I didn't mind buying one, because it will have application in future projects, but I'm not going to buy eight. So yeah, hopefully it doesn't come to this. Right on target, brilliant. Questions? Rahim? Say again? How am I going to get the signal from the horn to the preamp, when there's no transmission line? The preamp's input SMA connector is actually in the throat of the horn, with the paperclip stuck in it. If you want to mix, you've got to put in bits of coax, and every one of those introduces loss, and you're dealing with zeptowatts: what you gain from the multiple boxes you'll potentially lose in the distribution network, let alone the noise that a mixer will introduce. How much did the preamp cost? Landed, I expect $566. Just the one. I've got time for about one more question. What do I expect to get from the signal? Just a carrier, because I've got to narrow the bandwidth to the point where I can no longer even demodulate the signal. It really is just: either the signal is there or it's not. I know its approximate frequency, so I'm looking for, sort of, nothing, then a signal, then nothing. There isn't enough bandwidth even to demodulate the signal.
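The long-term-averaging idea can be demonstrated with a toy simulation: a steady tone at roughly 0.3 times the noise floor is invisible in any one snapshot, but emerges once a few hundred power spectra are averaged. Everything here (bin counts, powers, the exponential noise model) is invented for illustration.

```python
import random
import statistics

random.seed(42)
M, N = 400, 64                  # 400 snapshots of a 64-bin power spectrum
tone_bin, tone_power = 20, 0.3  # weak steady carrier, well under the noise (mean 1.0)

avg = [0.0] * N
for _ in range(M):
    for i in range(N):
        noise = random.expovariate(1.0)  # exponentially distributed noise power per bin
        avg[i] += (noise + (tone_power if i == tone_bin else 0.0)) / M

# Averaging M snapshots shrinks the bin-to-bin scatter roughly sqrt(M)-fold,
# so the carrier bin now stands clear of the noise floor.
floor = statistics.mean(v for i, v in enumerate(avg) if i != tone_bin)
peak_bin = max(range(N), key=lambda i: avg[i])
```

After averaging, `peak_bin` lands on the carrier's bin even though the tone is far below the single-snapshot noise, which is the sense in which minutes of summed data could reveal a 0.3-SNR signal.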
And as far as I know, NASA is not publishing information about the content, so I would not be able to decode it anyway. There's another question here somewhere. Since I'm using a cardboard box, and trying to listen to something relatively near, do I have to be very accurate about where I'm pointing the cardboard box? Its field of view is about 50 degrees, so there's a fair bit of play, but clearly the pattern is a curve: the closer to the centre of the horn the moon is, the better. I also mentioned other projects, like the ISEE-3 satellite; those are far further away, and some of them involve transmitting a signal to the spacecraft rather than just receiving, so there's a huge amount of angular accuracy required. How accurate does it need to be? In the Arecibo case it's a 300-metre dish; I couldn't tell you off the top of my head, but I imagine it's like hundredths of a degree. And they cheat a bit, because they can't steer the dish: the dish is stuck. The Chinese one that just started operating is 500 metres; it's also stuck. They just wait for the Earth to point in the right position. But what you can do is use the bit of wiggle room at the focus, and so they've got a very high-precision way of controlling the beam between the focus and the dish. But yeah, to cover the distances they covered, it's fractions of a degree, and I'm never likely to be able to afford equipment that accurate. Next question: I talked a lot about satellites at the beginning, and a lot of them were funded by governments; do we know of any satellites that are not government funded? I haven't done an exhaustive study, but it's generally a mix. Go back to OSCAR 1: the device itself was built by amateurs in a garage and paid for by the people who built it, but they got a free ride on a CIA rocket. Seriously. It was a project called Corona, enormously successful: 130-something launches and 130 recoveries,
and this is recovery in flight: you've got a film canister on a parachute, and an aircraft that intercepts and catches it. This was a vitally important intelligence function at the height of the Cold War, so whatever they needed to spend, they spent, and that's where most of the money was coming from. But the cameras are a payload at the top of a rocket, and it must be axially symmetric, otherwise the rocket engines can't keep the rocket straight, so you add ballast. And so they were able to say, ah, well, instead of just putting in rocks, how about this satellite, sorry, this transponder. And that was the deal: because they were the right guys, doing the engineering work for NASA and NASA contractors anyway, and they were hams, they were able to make the argument that, hey, how about this as ballast, and therefore they got their ride for free. They still had to deal with pointing and power, and that's a complicated problem. There are a few projects likely to go into geostationary orbit in 2018, and both of them are riding on the back of commercial satellites, so the ride, the pointing and the power are provided by others; the amateurs are only providing the repeater. Naturally, one is in the US, no big surprise; the other one's Qatar. Somehow the amateur club of Qatar was able to persuade Qtel to carry a ham transponder into geostationary orbit, and that's due for January, February 2018. I'm afraid that's all we have time for, but if you have further questions you can always look for Roland later; he's not going to run away from you. Alright, thank you, Roland.

Hi guys, my name is Latisha and I'm a JS lead developer at Temasys Communications, and as you can hear I'm actually quite nervous, because this is my first talk. I'm going to start by talking about how I
switched my role from an admin to a JavaScript developer. I don't know who voted for me, but thank you, because this is not really much of a technical talk like Roland's very awesome talk. OK, so I'm going to start with my life story. When I was young, in secondary school, I really hated school, because I never thought the school environment was conducive; it was kind of boring for me, and because of that my grades were bad. The only thing I liked during school was the arts. But because of my bad grades I couldn't get into music or any kind of arts course that I liked; my grades were too terrible to get in. So what could I do, right? There were really no choices that I liked out there, and I was just like, whatever. It so happened that my cousin asked me, hey, why don't you try taking a diploma in information technology? And I was like, OK, because that's the only choice I have; there's really nothing else that seems good. So yeah, I got into the IT course, which is cool, but after the first week of school I kind of hated it, because I really hated all the modules and everything related to IT. So I was like, I must graduate as soon as possible, because it's such a boring course. I really dreaded going to school, and I always took excessive toilet breaks and talked to classmates who happened to be in the course because they had no choice either. Because of that, my grades were bad again, and I didn't attend the graduation; why would I attend the graduation of a course I hated? So then I had no choice again, back to the bad-grades mood, and I wanted to try applying for arts, but I realised I was too shitty to become an art student. My mum was telling me, why don't you just try applying for an IT degree? And I was like, OK, because I have no choice again. Then of course I got rejected; it's expected, because my grades were bad. So I was like, I'm just letting fate decide wherever my life
goes. So I had a resolution: I decided to get a job to pay for a private degree in business, because it's a cooler degree than IT; IT is for the nerds, I don't care. Well, I did a lot of job hunting. I sent out all my resumes, and I was like, yeah, I'm so happy. But what happened? I got rejected by several of them. There was one that was just to promote and sell Apple products; I was looking forward to it, but I got rejected. So sad. But I got a response from a startup, which I thought was a scam. I was like, oh, this startup must be a scam, because it looks so weird: there are only a few people, it doesn't seem like an established, well-known company like Microsoft or anything of that sort. I was like, oh no, I'm not going to go there; but because it was my last choice and the only one that responded, I went for it. So I got the interview, and there was this British boss telling me, la la la, the responsibilities you have as an administrator and everything, and he told a metaphorical story about a boat and a wreck and propelling forward, which I didn't remember, because all my mind was thinking was, I must get that job. And yes, I got the job. I don't know if I was really so happy about it, but I was happy because it was finally a job I could get, though I was guessing the reason I was selected was that there were no other choices. So I had to do the basic administrator tasks: phone calls, handling petty cash, organizing events, sending emails. But I think I really did a bad job, because I could see frowning faces everywhere, like everyone was so unhappy, and I'd hear people whispering, no, this is not good, not good. I was like, oh, so sad. And then one time my British boss asked me, hey, could you modify the website and manage the content of the HTML page? And I was like, OK, of course; the reason I was picked to do that was that they didn't have many people. And I managed to do it by copying from Stack Overflow,
and thanks to them, and thanks to the past, when Blogspot and Tumblr were so famous: it was the in thing to have a blog on Blogspot or Tumblr, like, hey, my blog with my diary is so cool because I made such beautiful CSS. Well, it worked, but then he started wanting more modifications, like design and layout modifications. I was like, OK, I have to modify it, and that's when I had to start learning programming for real, because I needed to modify PHP and HTML and CSS and all of that. Then afterwards the current CTO, who had obviously given me a pass before even though I was probably terrible, asked me, do you want to learn some programming again? I'm assuming he asked me because there was really nobody else to take care of the app site. So I had to maintain our app, which is a fork of AppRTC: a website running on App Engine using Python, plus a signalling server which is used to relay information between people. I was like, OK, because I have no choice again; it's my job. And then I had to do a lot of Google searching and experimenting with changes, and I always love to advocate for Stack Overflow, so it's a best friend, remember that. After a while I was like, wow, technology is so awesome. And I had to learn about WebRTC. If you don't know what WebRTC is, let me show you guys... oh no, wait... damn it... this is WebRTC. WebRTC is actually a free, open-source project that allows you to communicate with each other in the browser: video conferencing, which is cool; you can do a lot of stuff with it. And I thought web technology was awesome. My perspective towards technology started to change a little, where I think, wow, it's really so cool, I didn't know you could do so many cool things to help people. Then one time the JS developer left, and then, gosh, damn it, I was asked, oh, can you handle the tasks he left? There's no JS developer... OK, great, thanks so much. So I got it, I had to do it, and because of that I had two jobs, where
I was an administrator and I was a developer: I had to send emails and arrange meetings, and on the other side work on the product UI, making a user interface for investors and integrating PayPal into our UI. Of course, because of that, I was stressed. I really liked to take cakes and coffee to just calm myself down, trying anything to calm myself down. But thankfully they made new hires: engineers, and a new admin, I mean an experienced admin, not just one who randomly dropped in and got the job, like me. And I was like, yeah, finally. I started handing over my administration work to the experienced administrator, and finally I got this official developer title; I don't think it was really official. Then afterwards I started talking to the new hires, and they talked about scrum, agile, UI, UX, and I'm like, what the heck is this? I'm in a foreign land; what's all this? It was all new to me, and I was thrown into this unknown world. And I remember one time I asked the server guy about having to make a change, and he's like, oh, you want to change this? Just go to this file, modify it and push it to your branch. I was like, what do you mean by that? It was a Node.js server; at that time I had no idea. And I thought it was finally going to be over, until I was assigned to build and maintain the SkylinkJS SDK. If you don't know what SkylinkJS is, it's a platform that lets you use WebRTC, with a platform behind it to let you make the connections; that's what it was. I'm just talking about the product, which is really cool. But then it was JIRA: I learned about JIRA, Confluence, versioning, scrum, and I was like, oh, what's that, again? Then I had to learn about the software development process, which at that time I kind of screwed up a lot. Like:
pushing to the master branch; that's the best, correct way to do it if you need everything done quickly, right? And what is peer review? You don't need peer review, you just merge it; who cares about that. And JIRA and t-shirt sizes, what's this? I don't know. And I had no idea what versioning is: you just need an A, B, C; why do you need this 0.x something? Whatever, I didn't understand. Because of that, I made a lot of bugs. Then I had to learn what testing is: QA, validation testing, again something I didn't know until I had to learn from someone who helped me. They were like, oh, you can use Karma, or Mocha, or, you know, Selenium. Oh, OK, but how do I set them up? And one time, because I did not know how to do the proper development process, I pushed to the master branch and made changes directly on the production server, which kind of screwed up the website, and they were like, oh, it's not working, what's going on? I was like, sorry, you know. Because of that I felt really bad, because I secretly felt very lousy; I was thinking, am I really a real developer? Because I had just started. And you know, it's easy to get discouraged, because you can face harsh criticism from others that makes you feel stupid. But then I understood: don't be too hard on yourself, because we all make mistakes, and it's part of the challenge of life. From mistakes you learn what you can do and what you should not do, and then you accept them and learn to move on, and you'll be fine. Because of that, I never stop learning, and I never put limitations on myself, because the moment you limit yourself and think you cannot, you'll never achieve what you want to achieve. So I always continue with this quest of attaining more knowledge than I already know today. And because of this, in this two-year span, I actually went from just modifying a website to building a JS
SDK that powers thousands of people's devices and helps connect different devices: medical consultation, group chats, Internet of Things. All of this can be done within just two years, by never giving up and never putting limitations on yourself. And here is an example: you can join this room in a geek camp talk demo, by connecting to this. Let me try to connect, so you will be inside the conference. It's just going to take a few seconds, so if you're not connected, don't be sad, it's okay. So if you connect... see yourself in this? Someone is connecting. Yeah, you can connect to this. Hi, nice to meet you! You can actually start video conferencing right from the web, which is pretty cool. Okay, I'm just going to leave the page. And you can see here an example, thank you so much, of how you can use the Skylink SDK, which is the product I built. You can do Internet of Things, where this is actually connected to a browser, using our Android SDK to connect and move things around. See what you can build. Okay, it's ready, let's move on. Yeah, it's over. Oh no, no, no... ah, gosh. Okay. You see, there are so many use cases you can cover with all these SDKs. I was thinking, wow, I didn't know I could build such stuff, so many use cases and things you can build and integrate for people. Which brings me to the end, with a quote that's not from Albert Einstein: "Everyone is a genius, but if you judge a fish by its ability to climb a tree, it will live its whole life believing it is stupid." Obviously not Albert Einstein, because Albert Einstein never said it, but people use it every day. Life is full of possibilities; if you never try, you never know what you can achieve. Just don't limit yourself. You never know, and the thing that you think you like may not be exactly the thing you
like, or the thing that you think you don't like may actually be something you enjoy. So you can always try many other things, and you'll be like, oh yeah, actually I like that, or, actually, maybe I don't. So that comes to the end. Thank you, thank you. Questions? Not so much a question as a comment: you were copping harsh criticism when you disrupted a production website. When I was about 21 or 22, I managed to delete a production database, and we didn't have a backup, so my entire weekend was spent manually re-keying (fortunately we had an audit trail) an entire database of work in progress. Now we have automated backup services, but it's not uncommon; we've all been there, and when you are young you just make stupid mistakes. Any more questions? Also, the team should be asking how resilient the production environment is, rather than blaming the person who pushed to master and brought things down. There should be ways to check the production environment so that if something's not right, you can rebuild. They should ask that question, not just blame whoever is involved, because that's the easiest thing to do. Alright, just one question. First of all, I applaud your courage for telling us your story, especially saying certain things about arts and IT in front of this particular audience; it's very, very courageous. But I have one question: you found your love for software and problem solving, and this happened when you were working. Why do you think you didn't spot this when you were studying? Is it a problem?
Because in school we would just go to the computer, copy the code, and rely on the other teammates to achieve our goal, which was to pass the exam. Yes, yes, yes, I didn't need to memorize. I'm pretty intrigued: what are you using on your mobile? It's actually a kind of controller that allows me to navigate the slides. So you're just using that? Yes, but after a while [inaudible]. Are you the naturally talented one, or did mentors help you? I just drew them, just for fun, you know. Any more? What controls it, is it using the same Skylink SDK? [inaudible] Thank you, Laetitia, everyone. Okay, obligatory three seconds of sponsors: one, two, three. Okay. You know how Laetitia mentioned that Stack Overflow is really useful? Well, there are some very useful things about Stack Overflow. This is an xkcd comic that you might have seen, or might not have; it doesn't really matter, because the point is not the comic. If you scroll to the bottom, Randall Munroe says: "StackSort connects to Stack Overflow, searches for 'sort a list', and downloads and runs code snippets until the list is sorted." This might seem like a terrible idea, but as with all terrible ideas, like JavaScript, someone has done it. So you give it a list, you watch this amazing little output console, and it tries to sort it. There you go, your array was sorted. So, Stack Overflow: useful for more than just humans; it also works if you use the API. Alright, cool, that was StackSort. Next will be noms, or as the more cultured among us call it, lunch. A few things before you run out the door, and don't make me bar the doors or anything: there are drinks in the fridge outside, help yourself to them. There is
coffee, help yourself to the coffee. Don't steal it, because people won't be happy about that; just ask them nicely for a cup. The other thing is disposal: when you're done eating... I'll let Karen handle that one, and then I'll come back for more stuff. Alright, thanks. Alright, so guys, I hope you guys are feeling hungry, because I'm starving. So, for lunch, there will be pizzas available right outside here. If you go from this door here, you'll see pizzas, and you'll see a fridge for drinks; you can go from that door there and you'll see pizzas as well. Vegetarian friends, you want to take the door over there; once you go out the door and straight down, you'll see a nice view of the city and a label that says vegetarian pizza, and that's where you want to go. Non-vegetarian friends, you can go this way here, and right outside you'll see non-vegetarian pizzas; likewise, if you go there and take a right, you'll see non-vegetarian pizzas as well. Now, about the trash: I'd just like to ask for your kind cooperation. Please do not throw the paper towels, trash, and paper plates into those bins there. We have black trash bags placed around the area near where the pizza is, and that's where you want to throw them. Likewise for the coffee cups, and for the drink cans: put them into the trash bags, which will be cleared at the end of the day. With regards to food in the auditorium: yes, you can eat in here, and you can drink your beverages here as well, but please do not leave any trash behind. If you drop a slice, we're making you clean it up. Please don't leave any trash in this room, otherwise the food guys will have to clear it up later; please take your trash and put it in the nearest trash bin as well. Alright, with that, I think the only last thing is that, to try and avoid a massive queue for pizzas
because that's what everyone really cares about, we've tried to spread the pizzas out across multiple areas, so try to find somewhere that has compatible pizzas for you. And please be nice to the vegetarians: they can't eat your pizzas, but you can eat theirs. We don't want to have to, you know, ration pizzas or anything, so be nice, eat pizzas. There should be enough pizzas to go around; at last estimate we have one pizza per person, maybe off by a tiny bit, and they're not very large pizzas, so please don't eat two entire pizzas; someone will starve if you do that, and it will probably be me. So unless you want me starving over here on stage in public... Yep. With that: food's outside, drinks are outside that way. Once again, if you see any of the events people, go to them. Alright, thank you, have a good lunch, see you later. One quick thing: if you are wondering what time you should be back in here, that is two o'clock. Everyone is watching. They're watching you. [crosstalk] So we've got, like, an ideation license, so it's probably that license. Uh, in Spain, we're all European. So it's a reward for this, as you said, after that. And we've got a couple of weeks, and then: we're really sorry, we have to withdraw your license. And probably what? It's a domestic... the residents will go away. So there's something that's called the MDA; it's, reportedly, legal. And the MDA's rules, the thing is, they can't be seen to not have been enforced before. So it's probably the fact that it was not a TV installation; in fact, they had a license to get it out. Well, here's an ideation. In fact, off of that one, we're just secondhand legalists. And so it's like, oops. So the solution in his case, because his employer was a...
But it's that level of regulatory complexity that I don't particularly want to be able to do. There's not much risk of anybody bringing NDA into our NDA, but it's not really a couple of blocks. And any of that couple of blocks is not useful at all to some of these questions. So it's not an engineering bubble. It's a hack to work around a... Not even regulatory, but a public perception problem. It's not legal. It's a couple of blocks. It's an alpha world. Like, something you did like that. The fact that it can receive gold from the moon, you'd be able to do that. So do you need a license from any kind of satellite dish anywhere? It turns out the effective answer given what happened with this term is you'd practice every satellite dish, which you'd practice in every dish. And the next one has just started covering that thing up. That would work, right? It's not enough for justice to be done. Justice would be seen to be done. And it's the same problem. He cares about justice being done, right? Well, in some cases, it's only... Does it look like you're doing something? But the same... Not too forced. So MDA can't be seen to be done, why not? So that's the... That's the biggest problem. No, no. Like, do you really not receive television like you do? Yeah, I know. It's a cap-out. We're at MDA's most. But, yeah, it's a couple of blocks. No, it's going to take a few days. Yeah. And in any case, you can't solve that one. It's a legal problem. Yeah. It's a neat way of getting around what the MDA has of being seen. But that's not good. Fuck. Well, the bare mind of me, everything we do is subject to a problem that we can expect to get on the platform. We've got to stop that. But they are so busy, they don't... What was that like, getting permissions for a list on the spectrum or something? Well, we seem to be on that. I mean, I sort of go to IDA. I don't know. To let them know that they're behind me and I think they've been filled with this as something that they... 
I mean, we should go ahead and say what has happened to them. It's like, yeah, we know who the hands are. We know who they've gone through. Because they've been able to get their license in the first place. They've been able to get their license in the first place. They've been able to get their license in the first place. But we'd rather want ten of these than one. And it's a manifold, right? It's the same thing. People will be able to view the tokens. So the token that we should check is on the spectrum, which is a great name. And you've got to find the license to be able to get it. And so, you know, you can see why they might not be able to get their license in the first place. It's not bad that they don't do that. I know we're... I'm pretty sure it's all good. But please don't take the chance. It's a bit like the thing with the whole thing. I think it's exactly what I'm saying. And forget what it was, right? Yeah. With press corps in the room. And so there was an agreed phrase that was, in fact, a crew member. And so, how has happened about three times in one of the different ways? This is what it was. Is that actually... And yeah, it's the opening time that the press are asked to leave the room. There's nothing that was going on, but there's nothing in the rocket programs yet. And so, yeah. We gave a sort of guest. That's the sort of thing. That's the fact that's going on. But yeah, okay, that's the point. So, for that reason, a lot of the time, we don't legally... I'm not really happy about the law, but it's like... I'd rather they read the law than... Did the maths make any sense? I sort of... It's a technical crowd. I'm going to risk the middle, like, eight minutes of the talk dealing right in the guts of... There's only one person who, like, no... We're saying it. It was more of, did it make sense? Fear is always running away with... I didn't say anything, but we're all like, I said, That's not what the specs should say. But it actually works. 
So if I'm buying myself a rocket, then here it's reliable. But it wasn't working, and they're like: is there a problem with the software we put on the ground side, or is the satellite side just not sending properly? Like, six weeks before launch? We might not have the S-band down... They had a UHF downlink as well, but the S-band has much more bandwidth, to downlink images. And six weeks from the launch, they were looking at a very real possibility of not having the radio working for the S-band downlink. Yeah. It's the second satellite, and it's got this low-light... So it's not infrared, it's visible light, but night light. So, yeah. You won't talk about it with customers, but it's not hard to guess what it might be. Yeah, it's a satellite this big. And my big fear, the one I'm talking about, is the side-lobes. Like, I'm a bit concerned that my office, if there's anything I want to try out, is going to have side-lobes that are bringing in 1,000 Wi-Fi devices, which will, of course... More shielding... More like a sort of giant ball. Yeah, a giant ball, yeah. Certainly for isolation; it's not so much the focusing of the information, it's the isolation. If I could find... I don't know what I could do. All I need to do is prevent anything of that kind from being... Roland. Hello. If you're looking for pizza, you might have to wait a little bit; we have a second batch incoming, but we're out of the current set of pizzas. People just... I thought a little Christmas had arrived. Okay, in that case, I should go and get some coffee. Yeah, there are people trying to teach others... Yeah, yeah, yeah. That's certainly one of the... Yes, it does.
One day, I had all the equipment collected. In two years, I had a working prototype and made a two-tube clock for an art exhibition in Shanghai. It took me another two years to transform the Nixie prototype to low-scale production. I needed more room, so I moved from the garden shed to a local castle. That enabled me to build a clean lab and finish the production equipment: a new vacuum pumping station, induction heaters, glass-working equipment, and so on. I also hired my first employees to help me with the manufacturing. In 2016, over four years from the start, we finally have steady low-scale production, and it seems that we have succeeded and revived the Nixies. Thank you for your attention, and now, enjoy the video. Yeah, it's very simple. Second one, please. I mean, it's very complicated to talk about, so I guess we need to have a video of it. Did you ever wonder why, in North America, television runs at a frame rate of 29.97 frames per second? I mean, what a ridiculous frame rate. I came across the inconvenience of this number recently when I was making a video and trying to manually assemble the frames of some strange footage, and it got me thinking: where did this come from? I couldn't find a nice, coherent, concise explanation online, so I had a bit of a dig into the technical details, and I thought I'd make a video explaining how this came to be. It comes down to how these old CRT screens used to work. At the back is a cathode ray gun. It sends a beam of electrons forward, and wherever they hit the screen, the screen lights up. The TV then steers that dot around on the screen. To produce an image, you scan it across the screen, and if that dot is small enough, and if the dot is moving fast enough, you can vary its brightness, and because of the way the human eye works, it will perceive the brightness as an image. And so, as you can see here, a rapidly scanning dot is producing a picture of me, and then another picture of me.
In fact, there are infinitely many of me. That is pretty good value. The electron beam didn't actually do the whole image in one pass; it took two passes. On the first pass, it would put in the top row, and then every second row all the way down: the odd positions. It would then do a second pass and fill in the even positions. This is what's called interlaced video. Because of the human persistence of vision, we wouldn't see two different passes; we would just see the complete frame. And in North America, TV was broadcast with 525 horizontal rows, which, you may have noticed, is an odd number. Each pass of the beam would do 262 and a half rows. The weird half thing was because of the geometry of how the beam gets back to the top; you want each pass to take the same amount of time so everything stays in alignment. But that's the basics behind interlaced video. When TVs were first built, it would have made sense to do two of those passes 24 times a second, to match what cinema movies ran at: 24 frames a second. However, these home appliances were plugged into the household electrical supply, which in North America is alternating current running at 60 hertz. So, to make the TVs easier to build and to avoid interference, they used the electricity to time the scan. And because it took two scans of the beam for every image, it meant your TV was running at 30 frames a second. What a perfectly logical and sensible frame rate. The problem came with the introduction of colour. The 30-frames-per-second system was for black-and-white TV, and in 1953 colour TV was beamed across the airwaves. And that ruined everything. TV in the 1950s was sent as an analogue signal over radio waves. Each TV channel was given its own spot in the electromagnetic spectrum: specifically, a 6 megahertz window to send all of the data. Now, the first quarter of a megahertz it couldn't use, because that's kind of a wasteland, a buffer between channels.
It couldn't really use the next one megahertz either, because that was the build-up to the picture signal. After that you get all the interesting data about the picture, and finally, 4.5 megahertz later, you get the audio signal. Then after that there's another wasted quarter of a megahertz of wind-down, and above that you'd get another wasteland, and then the next station over. They were packed in fairly tight. So in reality, each channel didn't get 6 megahertz; it effectively got this one 4.5 megahertz gap holding all of the image and audio data. When colour TV came along in 1953, the colour data had to be put somewhere in that 4.5 megahertz window, but it needed to be positioned carefully so it didn't disrupt the pre-existing picture and sound information. It looked like this was going to be a major problem: the colour signal did interfere with the picture and sound signals in a way that produced visible artefacts. It was distorting the picture, and that was not acceptable. So the technicians had to find a way to fix that, and thankfully there's a thing called line-by-line phase reversal. Even though I don't fully understand how it works, I do know the criteria for being able to use it, and it comes down to the two gaps: the gap between the picture frequency and the colour frequency, and the difference between colour and sound. In order for line-by-line phase reversal to hide the artefacts, both of these distances had to be an odd integer multiple of half the horizontal frequency. The horizontal frequency is just the number of horizontal lines being drawn per second.
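The arithmetic this criterion forces is easy to check numerically. Here is a small sketch (not from the talk; the figures are the ones quoted in the video: 525 lines, a 4.5 MHz picture-to-sound interval):

```python
# Check the NTSC colour-subcarrier arithmetic described in the video.
# Criterion (as stated): the picture-to-colour and colour-to-sound gaps must
# each be an odd multiple of half the line frequency, so their sum -- the
# full 4.5 MHz picture-to-sound interval -- must be an integer multiple of
# the line frequency.

LINES_PER_FRAME = 525
AUDIO_OFFSET_HZ = 4_500_000  # picture-to-sound spacing inside each channel

def line_frequency(fps):
    """Horizontal lines drawn per second."""
    return LINES_PER_FRAME * fps

# Original black-and-white rate: 30 fps.
ratio_30 = AUDIO_OFFSET_HZ / line_frequency(30)
print(ratio_30)  # 285.714... -- not an integer, so visible artefacts

# Force the ratio to the nearest integer (286) and solve for fps instead.
target = round(ratio_30)                    # 286
new_line_freq = AUDIO_OFFSET_HZ / target    # ~15,734.27 Hz
new_fps = new_line_freq / LINES_PER_FRAME   # ~29.97 fps
print(new_fps)
```

The adjusted rate comes out as 30000/1001, the exact value behind the 29.97 figure.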
We know that if you add these two differences together, you get the complete 4.5 megahertz window of the entire signal. And we can now do some simplification: if you add two odd numbers together, you get an even number out the other side; we can move the half over there; and half of any even number is just some integer. So the moral of the story is that we need an integer multiple of the horizontal frequency to equal the total interval of 4.5 megahertz, which is of course just 4,500,000. Well, let's see if it works. The horizontal frequency is: every frame is 525 horizontal rows, and we're running that at 30 frames per second; multiply them together and we get 15,750 out the other side. That is our horizontal frequency. We can then try dividing both sides up here by the horizontal frequency, hoping to get an integer out the other side. Very sadly, we don't: we get 285.714 and so on. And the poor engineers must have been like, oh, that's so close! Imagine if that was 286; that would solve all of our problems. But it's not. For that to be 286, we would need a different horizontal frequency: 15,734.27 hertz. And we would have that if, instead of 30 frames per second, we had 29.97 frames per second. So that's what they did: they adjusted the frames per second to make this number an integer and remove the interference between the new colour signal and the old picture. So there you are. North American television has a frame rate of 29.97 because, if you multiply that by the number of horizontal rows in each frame, and then multiply that by an integer, which happens to be 286, you get out a whole number which matches exactly the frequency window. This system of broadcast is called NTSC, and it was put in place in 1953 by the National Television System Committee. So now you know what it stands for. But surely there must be a better option than 29.97? Well, let's have a
look at what happened in Europe. Europe has PAL television, which is based on a 50 hertz power supply, so with two scans a frame you get 25 frames per second. PAL has more horizontal lines than NTSC: it's got 625. Whenever you have someone going on and on about how PAL is better quality than NTSC, it's because it's got an extra one hundred horizontal lines; it has technically got better resolution. And in Europe there's a full 6 megahertz window just for sending the actual TV signal data. So the PAL technicians must have tried to make it an integer multiple, and it turns out it is: exactly 384, precisely. And you might think, wow, they got lucky, but in fact this was deliberate. PAL came into place because of colour television: Europe had a look at North America, went "what a mess", and decided to do a new system from the ground up and make it work. And that's why in Europe, to this day, we have a nice and tidy interlace standard, whereas in North America it's this ridiculousness. The question now is: was there a better option? Instead of changing the frame rate, what if they had changed the window in which the data is sent? What if they just moved it out slightly to make the integer multiple work? Unfortunately, that wasn't possible: the standards meant they were not allowed to go outside that 4.5 megahertz range. The only other thing they could change would be the horizontal lines, and this, in my personal opinion, is what they should have done. So let's say we want to keep the frame rate at 30 frames per second and change the number of horizontal lines. How many are we going to need? Well, assuming we only want to increase the number of lines (we don't want to decrease them and lose quality in the new standard), and assuming we still need an odd number so we get the half-line geometry for the beam's movement, the next compatible number of horizontal lines above 525 is 625, with a nice integer multiple of 240. The NTSC standard could have
been the same number of lines as PAL. We could have had two much more compatible standards if they had changed the horizontal lines instead of changing the frame rate. But they didn't; they changed the frame rate instead, and we got stuck with this ridiculous number. To their credit, their motivation at the time was to make the transition as smooth as possible, and by slightly tweaking just the frame rate, this was very backwards compatible; almost no one would notice the change. Their theory was that if they did their job correctly, no one would be sure they had done anything at all. The final moral of the story is just that conventions stick around for a very long time, because of human nature. You might have abrupt changes in technology, but people need to transition from one to the next; standards have to be continuous, for some definition of continuous, and that makes them incredibly tenacious. Now, I know a lot of people who watch my videos work in the tech sector, and they're responsible for coming up with standards and conventions. And a lot of young people who watch these videos are going to come up with the conventions and standards of the future. All of you: please, when you're coming up with new ideas, just spare a thought that your bodge may one day end up locked in as a standard. Although, that said, I still got my video made despite having to deal with 29.97 frames per second. If you're curious, it was the one I did with Henry Segerman with this very cool camera; we had to export all the frames, fiddle with them in Python, and then put them back together. Link in the description. So, people will come up with conventions, and I guess, actually, we don't care: a convention that exists is better than one that doesn't. So if you can bodge it together and it works, go for it; the people in the future will find a way to deal with it. According to my YouTube statistics, at this point in my videos, of all the people who watched, only 30% of them are still playing it; most
people watch the interesting bit and then stop paying attention while I'm just rambling on at the end. And so those of you who are still paying attention, you are my people, because I have a special announcement just for you. You're all incredibly supportive, and many of you have asked when I am going to set up a Patreon page. I've finally done it: I have set up a Patreon page. This is kind of a soft launch; I'll do a proper launch later, in a video about what I'll be doing, but for now I thought I'd just mention it at the end of this video. If you'd like to, please do click the link in the description and go check it out. If you're not familiar with Patreon: you can support me a bit like you would a Kickstarter, but it's ongoing. The idea is that people who can afford it donate money so I can spend more time making these videos; in return, I'm going to have fun making a bit of extra content for Patreon. In fact, I'm going to do a behind-the-scenes video of this video I've just made, because you wouldn't believe what bodging it took to make this all work. And because I haven't got any Patreon supporters yet, I'm just going to put it on my Patreon channel where anyone can see it; the link is at the very top. I would love to make more videos, so go check it out. I'd also love some feedback: let me know if the rewards are what you want, or what things I can add in there for you. I really appreciate you all supporting these videos. So, a quick reminder that if you would like to win one of these lovely headphones, go to the URL that is on the coffee cups. If you don't have a coffee cup, I'm sure there's one in the bin, I guess.
But no, more seriously, it is dnd.ly slash dev coffee. Seriously though, just go find someone with a coffee cup and read it; that's easier. Then either tweet or make a post on Instagram with the hashtag bandlab and the hashtag geekcamp, and we'll pick the ones that we thought were the best. And you get headphones. Before we start with our next talk by Shippen: you can eat in the venue. If you're hungry, feel free to go out, grab some pizzas and come back in, just not during the talk; if you want to go out, between talks is a good time. There are also still lots and lots of pizzas and lots and lots of drinks; help yourself. And that's about all I have to say for now, so Shippen, take it away. Hello? Can you guys hear me? Okay, let's get started. Hi everyone, I'm Shippen. I'm a software engineer at PayPal, and today I'm going to talk about creating a 3D game engine for Pebble, so that developers can create 3D games for the Pebble watch easily. Apparently this is not related to my job at PayPal; it's something that I personally find very interesting and worth exploring. So, I got my Pebble watch around one year ago, one year and one day to be exact; I saw this on my Facebook timeline yesterday. At that time, my wife bought me this Pebble watch as a gift. It was awesome: the watch has a colour display which is always on, and there's an app store for it, so you can download a lot of apps onto your watch. And the battery lasts for five days, which is actually very good; it's not like you have to charge your watch every day, which is kind of painful. Personally, I'm very into 3D computer graphics, and I have developed some games before, for both PC and mobile platforms. After I got the Pebble watch, I was thinking, hey, how about creating a 3D game engine for Pebble, so that others can create 3D games for it? That sounds interesting. Now, "3D game engine" is still a concept that's kind of blurry.
So what we really need to do here is these three things. First, 3D games are basically showing you a series of 3D images, so we need a generic way of rendering 3D images on the Pebble. Second, it needs to be very flexible, so developers can build their game logic and rendering logic on this platform easily. And last, high frame rates: for PC games it's usually 60 FPS, and that might be a bit hard for a watch, but we still want high frame rates. So these are the three things we want to do. And this is what I am trying to achieve here: basically, something like this. But obviously that's a bit too hardcore for a watch, and what I ended up creating is not this good, but something similar. I will show you a quick demo of it. Before I go to the demo, let me quickly explain what I'm going to show. This is a 3D model of a car, and the model actually looks like this. What I want to do is display this 3D model on my watch and make it rotate. Basically, that's about it. It's not really a game, but it demonstrates a concept that can be extended into a game. And this 3D model is an OBJ file. If you look at it with a text editor, you will see that the 3D model itself is actually a list of 3D coordinates: vertices, points like this. So what we need to do is render this list of vertices on our watch. Let's go back to the demo. This is the main menu of the watch; everything here is the apps I have installed, and Pebble 3D is the demo I created. Now it's loading. Yeah, and this is what I achieved. It's pretty cool, huh? The graphics are still very simple, but it is something very interesting, on a watch. This video is actually fast-forwarded a little bit; the actual speed is not so good, and I'll explain in more detail later. Okay, so basically this is the demo. We want to design a 3D game engine for Pebble.
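The OBJ file mentioned a moment ago really is just text: each geometric vertex is a line of the form `v x y z`. A minimal toy parser (my own sketch, not the speaker's code; real OBJ files also carry faces, normals, and texture coordinates, which this ignores) looks like:

```python
# Minimal sketch of reading vertex positions out of a Wavefront OBJ file.
# Only "v x y z" lines are kept -- which is all a point-cloud demo like the
# one described above needs. Faces ("f"), normals ("vn"), etc. are skipped.

def parse_obj_vertices(text):
    vertices = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":          # a geometric vertex line
            x, y, z = map(float, parts[1:4])   # ignore the optional w value
            vertices.append((x, y, z))
    return vertices

sample = """\
# toy model: three vertices and one face
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
print(parse_obj_vertices(sample))
# [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

On the watch side, the same parsing would be done in C at build time or the vertex list would be baked into the app as a static array; the format itself is this simple either way.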
So usually we need to start with hardware specs, so let's review the hardware specs for the Pebble Time first. Pebble actually has several versions, and the one I have is the Pebble Time, so I'll go with that one. First, the display. What I really like about Pebble is that it has a color display which is always on. That is very important for me. The display Pebble has actually supports 64 colors. It's an e-paper display, and the resolution is 144 by 168 pixels. Let's look at the processor it has. The CPU for Pebble is actually an STM32. That's one of my favorite processors for embedded systems as well. It can run at 180 megahertz, but Pebble limits it to 100 megahertz, I think because of some constraints on the battery. There is also a Bluetooth module. And what really surprised me is that it actually has an FPGA chip on it. Anyone know what an FPGA is? Cool. So I didn't really look at the specs for the FPGA chip, but judging from the name LP1K, I think it should have around 1,000 slices on it, which is quite cool. Pebble is not open source, so I don't really know what they are doing with the FPGA, but it really surprised me. And no surprise here that it doesn't have a GPU or a graphics card. I don't think any watch will have that, but it's not an issue for us; we can still build our 3D engine even without a graphics card. So that's a look at the hardware specs we have. Next, let's look at the software specs. Pebble actually provides us a lot of APIs that we can use, basically of two types. One is watch-side APIs: you write programs, and these watch-side programs will run on your watch. The other part is phone-side APIs: using these, you write an app that runs on your phone. And the watch-side apps and the phone-side apps can communicate with each other through Bluetooth messages.
For watch-side APIs, they have C, so you can write apps in C and they will run on your Pebble watch. They also have a JavaScript API. Well, it's not really JavaScript, it's JerryScript: basically JavaScript for IoT, designed by Samsung. And there are also phone-side APIs, so this code runs on your phone, basically. They have a JavaScript version, an iOS version, and an Android version. Pebble also has a cloud IDE, which is very simple to use. You don't need to install anything; go to this website and you can just create an app there. And it's really amazing that there's a run button. You just click the run button, it opens an emulator inside the browser, and you can see the result in real time, basically. That's very cool. But sometimes this cloud IDE is not very stable, so there's also a local version. You can install the Pebble SDK locally, and that works as well. So I'll just quickly show you how to run it locally. It creates an emulator here, installs the app, and it's running. Yeah, this is the same result as you saw just now, but running in the emulator. It's a little bit bigger than the previous one. Okay. So we already know about the hardware specs and the software specs, so let's design a 3D game engine for Pebble. Before we do that, let's review our goals again, just in case we forgot anything important. We need a generic way of rendering 3D images, it should be flexible, and we want high frame rates. To do that, we can go two ways, just like what the Pebble APIs have provided us: phone-side rendering or watch-side rendering. For phone-side rendering, what we do is create an app on your phone, and that app reads all the 3D vertices, generates a 3D image for you, and sends the image over to your watch to display. That is pretty straightforward. And there's another way you can do this.
It's by watch-side rendering, where you send the 3D vertices to the watch, and the watch processes the vertices and generates the 3D image from them. Both ways should work. Before we actually jump into the implementations, there are some things we should know. For phone-side rendering, obviously your phone has more computing power than your watch, so rendering the image on your phone should not be much of an issue. But sending the image over to the watch to display, that part might have some issues, because it's using Bluetooth, and Bluetooth is not famous for high speed. And rendering on the watch side, there might be some limitation on the vertices, because although Pebble has a very powerful CPU, each app only has 24K to program with, so that's actually a limitation there. So let's look at phone-side rendering first. What phone-side rendering does is something like this: we have the 3D model on your phone, the phone reads the vertices and renders the 3D image, and then we send the image over to your watch to display. Basically these three steps, pretty straightforward. And let's look at the Pebble display again. We already know it is 144 by 168 pixels. Each pixel is actually represented by a byte, and a byte is 8 bits, right? Of these 8 bits, there are 2 bits for the red channel, 2 bits for green, and 2 bits for blue. So 2 to the power of 6 is 64, and that's how it supports 64 colors. There are also 2 bits for alpha, but that's not actually used in Pebble. So let's do some calculation here. Each pixel takes one byte: 2 bits red, 2 bits green, 2 bits blue. The display resolution is 144 by 168, so one image frame will actually take around 24 KB, which doesn't sound like a lot of data to transmit. Just 24K, how hard is that? And let's look at the Pebble Bluetooth module again. It has a TI Bluetooth module.
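(Editor's note: the pixel-format arithmetic above can be sketched as follows. This is a Rust illustration, not the talk's code; the bit layout, alpha in the top two bits then red, green, blue, matches Pebble's documented one-byte color format, but treat it as an assumption.)

```rust
// One byte per pixel: 2 bits alpha, 2 red, 2 green, 2 blue.
fn pack_pixel(r: u8, g: u8, b: u8) -> u8 {
    // Each channel is 0..=3 (2 bits); alpha left at 0b11 (opaque).
    0b1100_0000 | (r & 0b11) << 4 | (g & 0b11) << 2 | (b & 0b11)
}

fn frame_bytes(width: u32, height: u32) -> u32 {
    width * height // one byte per pixel
}

fn main() {
    // 2 bits per channel => 2^6 = 64 colors
    println!("colors: {}", 1u32 << 6); // colors: 64
    // 144 x 168 pixels, one byte each => ~24 KB per frame
    println!("frame size: {} bytes", frame_bytes(144, 168)); // 24192
}
```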
It supports both Bluetooth 2.1 and Bluetooth 4.0, but the BLE one is just for notifications, so we cannot use it for our use case; we need to use the Bluetooth 2.1. And for that one, the max data rate is around 3 Mbps. Let's translate that to kilobytes: it's around 360 KB per second, but that's the maximum value, because in the data stream the bits are not just the information you need. There's also overhead for checksums, baseband encoding, stuff like that, so you can't use every bit, but the maximum should be around 360 KB per second. And one image frame, we already saw previously, is 24K. Hey, great, we can get 10 frames per second. That's not so okay for a PC game, but it should be okay for a watch game. So I implemented that, and when I launched the app, this is what I got. It's actually stuck here. It's not moving. There is actually a function in Pebble that tells you the maximum buffer size you can send through Bluetooth in one message. It says 8K, but I tested it, and the maximum I could actually send is just 1.8K. Might be something wrong with Pebble OS, but it's not open source, so nobody knows what's happening there unless you're from Pebble. Yeah, that's the bad part about it not being open source. So what I have to do is something like this, kind of painful: I have to send one image frame in 14 messages. You see the blocks here, the red one, the green one. Each time I can just send one block to the watch. And each time I send a block, the watch, after receiving that block of data, increments a counter and requests the next block from the phone, and the phone sends it over, and we repeat this process 14 times to get one frame of image. The frame rate is going to be very bad, but how bad, let's see. Okay, so I already talked about sending the image to your watch. The next part is the actual 3D rendering.
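(Editor's note: the chunking workaround described above can be sketched like this. A Rust illustration only; the 1800-byte chunk size is an assumption based on the ~1.8K per-message limit the speaker measured, not an exact figure from the SDK.)

```rust
// Split a ~24 KB frame into chunks that fit under the ~1.8 KB
// per-message limit observed on the real device.
const MAX_MSG_BYTES: usize = 1800; // assumed per-message payload limit

fn split_into_messages(frame: &[u8]) -> Vec<&[u8]> {
    frame.chunks(MAX_MSG_BYTES).collect()
}

fn main() {
    let frame = vec![0u8; 144 * 168]; // one full-resolution frame (24192 bytes)
    let messages = split_into_messages(&frame);
    // 24192 / 1800 rounds up to 14 messages, matching the talk
    println!("{} messages per frame", messages.len()); // 14 messages per frame
}
```

On the watch side, the speaker describes an acknowledge-and-request loop: after each chunk arrives, the watch bumps a counter and asks the phone for the next one, 14 round trips per frame.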
So somebody needs to render a 3D image for us, right? So we need to do this 3D rendering part. I'm using PebbleKit JS, so it's basically JavaScript running on your phone. Besides the pure JavaScript part, it also has some extensions: it supports things like WebSocket, HTTP requests, geolocation, and local storage. But no surprise here, it does not support WebGL. So without WebGL, how do we generate a 3D image with JavaScript? Actually, that's also fine, because we can create our own WebGL with pure JavaScript. It's not as hard as it sounds. So let's look at this dolphin. It's a 3D dolphin, a 3D model, basically. If we look at it closely, we'll see that it's not really a dolphin; it's actually a group of triangles. So let's say we have something really complicated, like this bunny here. We want to show this bunny in 3D, but we only have triangles. What we can do is just keep adding more triangles, and if we have a large enough number of triangles, it'll look like a realistic bunny. So that's the same thing we're going to do. For the 3D renderer we create, we actually just need one function to draw a triangle, and with that, we can do everything we want. And although these 3D models are in 3D, our screen, our display, is always in 2D, right? So ultimately, every 3D triangle will be mapped to a 2D triangle. What we really need is a function to draw a 2D triangle. That's pretty simple, right? And that process is called rasterization. I won't go into the details of how to do it, but basically you have a 2D triangle here and you want to draw it on the screen. Your display is made of pixels, right? So you need a function that maps the triangle to pixels on your display. That's all we need, actually. So let's say we already have this function to draw a 2D triangle on the display. The rest we need to do is very simple.
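(Editor's note: the speaker skips the rasterization details; as a rough illustration, here is one common way to fill a 2D triangle, the edge-function test. This is a Rust sketch with invented names, not the talk's JavaScript renderer.)

```rust
// Signed area test: positive if p is on one side of edge a->b,
// negative on the other, zero on the line.
fn edge(ax: f32, ay: f32, bx: f32, by: f32, px: f32, py: f32) -> f32 {
    (bx - ax) * (py - ay) - (by - ay) * (px - ax)
}

// Fill every pixel whose center lies inside triangle t into buf.
fn draw_triangle(buf: &mut [u8], w: usize, h: usize,
                 t: [(f32, f32); 3], color: u8) {
    for y in 0..h {
        for x in 0..w {
            let (px, py) = (x as f32 + 0.5, y as f32 + 0.5);
            let e0 = edge(t[0].0, t[0].1, t[1].0, t[1].1, px, py);
            let e1 = edge(t[1].0, t[1].1, t[2].0, t[2].1, px, py);
            let e2 = edge(t[2].0, t[2].1, t[0].0, t[0].1, px, py);
            // Same sign for all three edges => pixel is inside
            if (e0 >= 0.0 && e1 >= 0.0 && e2 >= 0.0)
                || (e0 <= 0.0 && e1 <= 0.0 && e2 <= 0.0) {
                buf[y * w + x] = color;
            }
        }
    }
}

fn main() {
    let (w, h) = (8, 8);
    let mut buf = vec![0u8; w * h];
    draw_triangle(&mut buf, w, h, [(0.0, 0.0), (7.0, 0.0), (0.0, 7.0)], 1);
    let filled = buf.iter().filter(|&&p| p == 1).count();
    println!("{} of {} pixels filled", filled, w * h);
}
```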
So here is the graphics pipeline that all 3D applications normally follow, and it's the same for the 3D renderer I created for Pebble. First, we have a 3D model, which is actually a list of 3D triangles. Then there's some processing on each vertex, and we map it to screen space, which is 2D. After we have that, we draw each 2D triangle to the display. Basically that's all we need to do. If you're familiar with OpenGL, the vertex processing part is also called the vertex shader, and the fragment processing part is also called the fragment shader. The texture part I didn't include in this demo, because the Pebble watch only supports 64 colors, which is not so colorful. But I have a demo with textures in JavaScript here. It's actually from a previous JavaScript talk, and thanks to engineers.sg, it was recorded, so if you're interested, you can check this link out. The canvas size there is actually the same as the Pebble watch, but that one has textures added so it looks better; ultimately it's the same thing I have on the watch. Okay, so now let's look at the performance. I've added some logs to the rendering part and the sending-data-over part. As you can see in the red box, the 3D rendering part only took around 240 milliseconds, which is still okay, but sending the buffer over took eight seconds. It means that if you're playing a game with this system, you need to wait eight seconds for each frame update, which is not really tolerable. And this is tested with a real device, an iPhone 6S and a Pebble Time. So how can we improve this? I think there should be better solutions, but the most straightforward way would be to just reduce the image resolution, right? Originally, we are sending an image with a resolution of 144 by 168, and that requires 24 KB of data. So let's reduce the size.
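(Editor's note: the vertex-processing step above, mapping a 3D point to the 144 by 168 screen, can be sketched as a simple perspective projection. A Rust illustration only; the focal length and camera distance here are invented for the example, not values from the talk.)

```rust
// Project a 3D vertex to 2D screen space with a perspective divide.
fn project(v: [f32; 3], width: f32, height: f32) -> (f32, f32) {
    let focal = 100.0;  // assumed focal length, in pixels
    let camera_z = 5.0; // camera sits 5 units back from the origin
    let z = v[2] + camera_z;
    // Divide by depth, then shift so (0,0,0) lands at screen center.
    let sx = v[0] * focal / z + width / 2.0;
    let sy = -v[1] * focal / z + height / 2.0; // screen y grows downward
    (sx, sy)
}

fn main() {
    // A point at the origin lands in the middle of the display.
    let (sx, sy) = project([0.0, 0.0, 0.0], 144.0, 168.0);
    println!("({}, {})", sx, sy); // (72, 84)
}
```

Applying this to all three vertices of each 3D triangle yields the 2D triangles that the rasterization function then draws.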
If, let's say, we reduce the size to 36 by 42 pixels, the data we really need is just about 1.5 KB, and that's less than 1.8 KB. So after we reduce the resolution, we can send the image in one message, which is great, but because the resolution is so much lower than before, the image won't look as nice. So here's a demo recorded in real time; there's no fast-forwarding for this one. It looks like this. Yeah, it looks like something from Super Mario, because the resolution is really bad, but the performance is much better. For the 3D rendering part, because the image size is reduced, it takes less time to render: previously it was 240 milliseconds, now it's just 130 milliseconds. And for sending the buffer, previously we needed 14 messages, and now we can send everything at once, so the sending part only takes around 400 milliseconds. Overall, we get something like two FPS, two frames per second, which is not that good, but still okay for a watch game, I guess. But again, the result won't be as good as the previous one. On the left side is the full resolution; on the right side is the reduced resolution. You can see the difference here as well. Okay, so next, watch-side rendering. I didn't really have enough time to finish this part. I did some experiments on it, but it's not finished. Like I mentioned before, Pebble also supports JavaScript running on the watch side with JerryScript. But JerryScript is not running very well, actually; the performance is quite bad. When I ported my phone-side code to the watch side, it compiled, but when I ran it, I kept getting memory warnings, and the app crashed, the emulator crashed. So I guess for this case, we still have to use C for these kinds of high-performance applications. There's actually one app in the Pebble app store that's doing this. This is rendered with watch-side rendering, and the frame rate is much better, I think, from the screenshot I got here.
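(Editor's note: the back-of-the-envelope numbers above check out; here they are as a small Rust sketch, using the speaker's measured timings. The function names are mine.)

```rust
// One byte per pixel, so frame size is just width * height.
fn frame_bytes(w: u32, h: u32) -> u32 {
    w * h
}

// Frames per second from per-frame render and send times.
fn fps(render_ms: f32, send_ms: f32) -> f32 {
    1000.0 / (render_ms + send_ms)
}

fn main() {
    // 36 x 42 pixels => 1512 bytes (~1.5 KB), under the 1.8 KB limit,
    // so the whole frame fits in a single Bluetooth message.
    println!("reduced frame: {} bytes", frame_bytes(36, 42)); // 1512
    // ~130 ms render + ~400 ms send per frame => roughly 2 FPS
    println!("{:.1} FPS", fps(130.0, 400.0));
}
```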
It looks like 3 to 4 FPS, which is quite okay. But the bad part is that you only have 24K to program with on your Pebble, so there's a limitation on the geometry. For this app, it says there's a limitation of 300 vertices, while the demo I created has a model with 6,000-plus vertices, so there are always trade-offs and compromises. Okay, so, summary. These are the goals we listed at the beginning of the talk. We want a generic way of rendering 3D images for Pebble: check, we did that. It's flexible: check, we did that. But high frame rates, I put a question mark there, because two frames per second is maybe not that high. But again, there are ways to improve this. Right now each pixel takes one byte; maybe you could reduce it to one bit per pixel, something like that, or do some other stuff to make it better. That's definitely possible. Yeah, that's about it. Thank you. Questions? [Audience: Send only image diffs instead of the full frame bitmap every time; each time send a diff of what changed. That might reduce your...] Yeah, that should work. But I didn't try that, because it's really a side project and I don't have that much time. But yeah, if I tried that, the performance would be much better, that's for sure. [Audience: I'm intrigued by the FPGA. I took a quick look while you were speaking. It's a line of about five different models, and the number of cells in each model is equal to the number of horizontal pixels in each HD video mode. So it looks an awful lot like it's set up to be a line-at-a-time video output device.] Oh, good. [Audience: I don't know anything about Pebble, but I wonder if they could be encouraged to be a little more open about what's happening there, because you've got a very powerful machine there.] Yeah, yeah. Thanks for the information. Actually, if they could open source the design, maybe we could do a lot of cool stuff with it. It has an FPGA chip on it, it has a very powerful CPU. Yeah, they really should open source it.
[Audience: How about making a Super Mario game with it?] Super Mario? I think there's actually already a Super Mario for Pebble. Yeah. Any other questions? [Audience question about video compression.] Yeah, but with video compression, you need to compress it, and then on your watch you need to decompress it, and that part might take a while. Sorry? Yeah. Yeah, it could work, but at the cost of quality. 24K for graphics? Yeah, you couldn't do that. So that's why, if you do the watch-side rendering, you'd definitely have a lot of limitations on the geometry. [Audience: Could you overclock it?] Oh, overclock? [Audience question about development kits.] Sorry, I cannot hear you properly. Development kits? No, there are no development kits for Pebble. And it's not open source, so you don't really know what it's doing inside. Yeah, overclocking might work, but you'd need to do some hacks for it. [Audience: The FPGA is actually driving the LCD on the front of the screen.] Yeah, that could be it. Thank you. [MC:] 2.45. 2.45. We will start at 1.45 tomorrow, of course. 1.45, thank you. Because, you know, we just want to be here all night, obviously. [Unclear crosstalk about scheduling buffers between talks.] All right, children slash not-children, time to settle down. Because up next, we have Donald Trump trying to make systems programming great again. He's going to build a wall and keep those buffer overflows out. How many of you have touched systems programming since university?
But just to clarify, before I go on, this is the only humble thing I'm going to say today: I'm not an expert on Rust, so if you see any issues, please approach Rahul, who's done far more Rust projects than I have. But with that, let's go ahead. So who am I? I'm Umar. I'm an iOS developer at Garena. I don't do Rust at work, but that's okay, I'm still going to talk about it anyway. All right. So why? Why are we going to talk about Rust at all? How many of you have seen this? All right, good, a few people have. For those of you who haven't seen it, this baby is Heartbleed. It's a very cool bug, and it has a logo. How many of your bugs have logos? None, right? Why does it have a logo? Because it's a bug in OpenSSL, which most of us use, actually. And it's a bug in OpenSSL that results from, believe it or not, a missing bounds check. You know what a missing bounds check is, right? "If length is less than some length," yeah, they missed one of those, and that caused this bug. Why are you making noise about Hillary using an insecure email server when you have bugs like this floating around? Right? And, to be honest, this is not the only bug in such a vital library that's used everywhere. No, it's not. There's this one: Shellshock. Quite big recently. Also has a logo, right? Cool, there's a thing going on here. There's goto fail. Everyone remember goto fail? That was a bug in Apple's SSL implementation; it basically had a duplicated goto statement. It doesn't have a logo, so I used the code block itself as a logo. There's this new one called Sandworm, which affects Windows systems. Yeah, I know, right? I don't know who comes up with the logos. So, anyway, you have all these big bugs in these common libraries that everyone uses. And we see a problem here. There's a big problem. What's the root of this problem? Why is it that something that's so core and so important to us is done in such a buggy fashion?
This should not be the current state of things, right? We need to change. The problem is that languages like C and C++ that are used to implement these libraries give you a lot of control. Too much freedom. Like, too much. I have two very famous quotes from two of the greatest philosophers of the 20th and 21st centuries. The first one is from Bjarne Stroustrup, the creator of C++, who says C++ makes it easy to shoot your whole goddamn leg off. Now, when the creator of C++ says that, you know you've got to take him seriously. All right? I also have another quote from the other greatest philosopher of the 20th century, Uncle Ben, who says: with great power comes great bugs. So what exactly are we talking about? What kind of bugs? We're going to focus on memory safety. What exactly about memory safety? First thing, buffer overflows. Because you have direct access to memory and you don't have a bounds check, or you're missing a bounds check here or there, you might corrupt objects that are adjacent to you in memory and change what they're actually holding. You might have buffer overreads, where you miss another bounds check and read something beyond what you're supposed to read, revealing sensitive data. That's actually what happened in Heartbleed. You have dangling pointers, one of our favorite bugs, where you deallocated an object but kept a pointer to it. Or, oops, you lost the pointer without freeing the object, and there goes a memory leak. Too bad. You have double frees, where you try freeing an object twice; this can corrupt the heap in certain implementations. And there's plenty more. A lot more. And for those of you who are wondering, huh, I haven't seen this at work: most of us don't deal with this, right? Who actually has to deal with this? Aside from Rahul. Yeah, GC, bro. Garbage collectors. We have those, right? Why can't we use those everywhere?
Screw all this manual memory management. We should use GC everywhere. So, garbage collection. Why not? Why not garbage collection, right? When I'm talking about garbage collection here, I mean generational garbage collection, mark-and-sweep, JVM-style, the stop-the-world kind. You halt the world and you do your garbage collection, like this truck passing around. Imagine this truck passing around: it stops everyone there, collects the garbage, and goes away. That's the kind of GC I'm talking about. All right. The problem with GC, though, for systems programming, is that GC has non-deterministic destruction. This is important. Say I have a resource, a video file. This can be gigabytes large, and I want to make sure it's destroyed by this particular frame so I can load and buffer something else. I can't do that with GC, because I don't know when my GC is going to happen, when stuff is going to be invoked. So if I'm constrained in resources, I have a problem. I need deterministic ways to know when my stuff is going to be released. Another issue: GC needs a lot more memory to be efficient. This is not me saying this; this is Chris Lattner, the god of compilers. To be honest, GC needs around three to four times more memory than your actual application in order to be very efficient. Otherwise, it leads to bad performance and thrashing. Meaning that on memory-constrained devices, ARM boards, the Pebble, for example, or mobile phones, you can't use GC, right? Because you will just have a few megabytes of RAM. Another issue that's not as often talked about is battery efficiency. The problem with GC is that because it often has to do a sweep across all your memory, it does a lot of RAM reads, and all these RAM reads will eventually affect your battery. And this is another reason: with that many RAM reads, your battery performance will go crazy.
And have you ever wondered why Android phones suffer in battery a lot more than iOS phones? This is one of the reasons. Not the only reason, but one of the reasons iOS phones do slightly better is that you don't have a garbage collector on iOS. Then, now imagine I'm an air traffic controller, and there's a flight about to land in the next second, and then I have a GC pause. Not very good, right? You can't have GC pauses in hard real-time systems. So, with all these factors in mind, we realize we can't actually use a garbage collector at all, which is why we're stuck with the current state of affairs, where we have to worry about all these memory bugs. And that's where Rust comes in. So, yay, Rust. Hello world. Easy, right? Rust's hello world is easier than C's hello world. So we've already won. We had a head start. You don't need the hash-include stdio, blah, blah, blah. It's easier than the C++ one as well, right? No need for using namespace std, et cetera, et cetera. All right, so since we already win, I'm gonna stop now. Okay, never mind, let's... Yeah. All right, so what is Rust exactly? Those of you who've never heard of Rust... actually, who's heard of Rust before? Is it the thing that... yeah, you got it, it's pretty close, it's like on an old car. Okay, so around half of you have heard of Rust before. For those of you who haven't: Rust is a systems programming language focused on three goals: safety, speed, and concurrency. And today, we're gonna be talking more about the safety aspect of Rust. This talk is not a bottom-up tutorial on Rust, so I'm only gonna cover specific salient features of Rust, specific to memory management, that I think are very important. The rest, I mean, you can read the Rust book.
Also, for those of you who are wondering what a systems programming language is and how it's different from an application programming language: well, a systems programming language is the thing that's used to build the libraries your applications will use. Your OS, for example, will be written in a systems programming language. Your garbage collector will be written in a systems programming language, et cetera, et cetera. It's the stuff upon which you build your stuff. Okay? That's why you probably won't see a lot of applications using Rust as much, although there are some. Naturally, because every language has a web server implementation, Rust also has a web server implementation. Ha, take that, JavaScript. All right. So, let's look at a bit of Rust code. Here's some simple Rust code. One thing I'm gonna focus on: everything in Rust is immutable by default. So when I do let x equals 10, I'm literally defining an immutable variable called x. And if I try to reassign it to something else, it will throw a compile-time error, because it will say that I cannot reassign an immutable variable, which is awesome. Immutability by default is something we should have had in C/C++ a long time ago. It's something people now say you should do in C/C++: use const everywhere by default, unless you need otherwise. It was a mistake in C/C++, and it has now been fixed by the Rust gods. So, if you want a mutable variable, you need to put the keyword mut before the variable name, which makes it a mutable variable, and then this will not fail at compile time. So far so good. By the way, if any of you have any questions during one of the code snippets, just feel free to ask. Okay, the next thing about Rust: it's strongly typed with type inference, which is very sexy. That means I don't need to annotate everything with a type; the compiler will infer the type for me.
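(Editor's note: the immutability example the speaker describes looks roughly like this. The exact compiler error wording varies by Rust version.)

```rust
// Immutability by default: uncommenting the marked line makes the
// program fail to compile instead of failing at runtime.
fn demo() -> (i32, i32) {
    let x = 10;
    // x = 5; // error[E0384]: cannot assign twice to immutable variable `x`

    let mut y = 10; // `mut` opts this binding in to mutability
    y = y - 5;      // fine: y is mutable
    (x, y)
}

fn main() {
    let (x, y) = demo();
    println!("x = {}, y = {}", x, y); // x = 10, y = 5
}
```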
So if I define a variable y, for example, with the value 42, the meaning of life, the universe and everything, it will be inferred to be i32. But I can also specify it, and naturally Rust supports all your numeric types. And if you notice, you have to tell Rust what size of integer you're going to deal with. This is important because you're dealing with a systems programming language; you don't want it to abstract that stuff away. Got type inference? Good. Looking a bit like Haskell, actually, and all those nice sexy new languages, right? All right. So, just to carry on with the example, I'm going to introduce one more construct so the memory part will be easier: vectors. A vector is a resizable array, okay? Similar to a C++ vector. You can put stuff inside it. It's got O(1) element access, O(1) push and pop, et cetera. And the way you declare a vector is with this thing, vec! with an exclamation mark, which is actually a macro. Oh yeah, Rust has macros too, like Lisp. And these are not macros like preprocessor hacks; this is a proper macro system. I'm not going to talk about macros in much more detail, but just to let you guys know, it's thought through. It's not an accidental add-on like the C preprocessor feels sometimes. So this thing is a macro, and it will basically create a vector with these elements, all right? Of type i32. The interesting thing about vectors is that their content is stored on the heap. Just keep that in mind for now; we're going to go into it in a lot more detail in a bit. Okay? For those of you who don't know what the stack and the heap are: anyone who does not know? Yeah. The difference between the stack and the heap, especially when it comes to programming languages, is about where your program's memory is held when you're running code. And I'm going to get to that later on.
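(Editor's note: putting the inference, integer sizing, and vec! points together in one small sketch; the function name is mine.)

```rust
// Type inference plus the vec! macro in one place.
fn build() -> Vec<i32> {
    let y = 42;           // inferred as i32, Rust's default integer type
    let z: i64 = 42;      // or annotate explicitly; sizes matter here
    let mut v = vec![y, 2, 3]; // vec! macro; contents live on the heap
    v.push(z as i32);     // amortized O(1) push
    v
}

fn main() {
    println!("{:?}", build()); // [42, 2, 3, 42]
}
```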
So it'll be clear later on. All right. Let's jump to the cool parts now: the memory model. Rust's philosophy in general is this: zero-cost abstractions. What does that mean? It means all the features Rust has should not have any performance cost, should not have anything that hinders you from using it as a systems programming language. That's why Rust does not have a garbage collector: because a garbage collector would not make it a good systems programming language. So with that in mind, let's jump into Rust's memory model. The first concept in Rust's memory model is the concept of ownership. What does ownership mean? Well, you own something, right? That's ownership. That's literally what it means: you own it, so you take care of it. In general, variable bindings, like v over here, have ownership by default of whatever resource they are bound to. Okay? And when the binding goes out of scope, Rust will free the bound resources as well. If you guys have done C/C++, you know that when you allocate some memory on the heap, you need to have a free statement at the end, right, to clear the heap memory. In general, in Rust you don't need to, because if you own a resource, the moment your variable binding for that resource goes out of scope, Rust will free the resources, on stack or heap, whatever, it doesn't matter. Yes? All right, I'll get to that in a bit. That's a very good question, and a very legitimate concern. That's something that will be handled very, very soon. Okay? So this is the default behavior. That means you don't need to add pesky free statements, et cetera, later on. So in this case, v on the heap will be cleared up. Just to give you guys a bit more information on what's actually going on in terms of the stack and the heap, and for those of you who don't know what the stack and heap are doing over here, I have a very pretty and sexy flowchart diagram here that I made. All right?
So over here, these are two statements I have in Rust. In the first expression I declare a vector of 1, 2, 3. The second expression, just for comparison's sake, is an integer, all right? So what's actually going on at runtime here? We have a stack and a heap: the green is the stack, the orange is the heap. On the stack, Rust will say that this label v is a vector object. This vector object contains an address to the actual data on the heap. The address is 0x100, and on the heap, the address 0x100 contains the actual data. This stack object also contains the size, and a few other properties that you might need access to, because you don't always want to go to the heap all the time; getting stuff from the stack is usually a lot faster. That's why it stores some information about this heap data structure on the stack itself. That's how Rust vectors work. Just for comparison's sake, I have the variable i over here. i is of type i32, and the value is stored directly on the stack, because it's a primitive type. All the primitive types naturally go directly on the stack. The same thing happens in C and C++. The slight difference is that in C and C++, generally you won't have a stack object here; your variable will just hold a direct pointer to the heap. In Rust's case we have some additional information on the stack that Rust can use for optimization purposes, because it's a lot faster to do things this way. So this is what's happening. So, ownership sounds good in principle, but just now Roland pointed out a very good thing: how do you prevent that from happening? Rust has an additional rule. The rule is: there is only one binding to any particular resource.
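(Editor's note: the scope-based cleanup described above, a vector on the heap and a primitive on the stack, looks like this in code; the function name is mine.)

```rust
// Ownership sketch: the vector's heap storage is freed when its
// owning binding goes out of scope. No explicit free() needed.
fn demo() -> i32 {
    let i = 5; // primitive, lives directly on the stack
    {
        let v = vec![1, 2, 3]; // v owns heap storage
        println!("inner: {:?}", v);
    } // <- v goes out of scope here; Rust frees its heap memory
    i
}

fn main() {
    println!("{}", demo()); // 5
}
```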
So you can only have one binding at a particular point in time. That means if I have code like this, where I declare my vector `v`, then have another `let v2` and assign it to `v`, which seems innocent and straightforward, and then try to print my `v`, this will actually give me a compile-time error. Yeah, I know, right? What? The compile-time error is that you are using a moved value, `v`. Another thing about Rust, I apologize: compile-time errors kind of suck. They are working on it, but right now they kind of suck. So why does this fail? It fails because I'm not satisfying my one rule for ownership. I have two bindings to the same vector, and I can't have that in Rust. If you're owning something, you can't have two bindings to it. So let's look at it again at that nice flowchart level. In this chart I have `v` and `v2` pointing at the same vector, and this white arrow means ownership. In my sexy little diagram this white arrow means ownership, and I can't have two of these at the same time. I can only have one. So yeah, only one owner. All right? So if that's the case, then how will I do stuff? Like, if I have a function here, for example, and I define this function, and this function does something with `v`, something with a vector `v`, this is technically a new binding, because passing something as an argument creates a new binding. The name can be different, but it's a new binding. So when I do this it also throws a compile error, because I have two bindings to the same resource. So how do I pass stuff around? This is more screwed up than C, C++, any language that you have possibly seen. I mean, I can't even do this? This sucks. Yeah, that's my initial reaction when I saw this as well, but there's a very good reason for this, and the reason is: let's say I'm counting the number of votes that Trump has. All right? Okay?
So I have a vector here, number of votes. Imagine this is a lot bigger than it actually is, it's not just the wife, the daughter and the son, all right, imagine there's a lot more people here, and let's say I want to fake a few, all right? That's cool. So I declare a mutable vector. Let's say I can do this, and then I push: aha, now my vector should change, right? This would actually not compile, and the reason it will not compile will become clearer over here. Say this did compile: what would happen at runtime? I have my fake-more vector here whose size is 4, right, because I added something, and I mutated my heap. If you notice, this thing on the stack is now wrong, right? This thing is wrong now, my object has been corrupted. And a lot of memory bugs happen like this, where you move stuff or you mutate stuff that may also be referenced somewhere else, and then boom, you get index out of bounds, you get EXC_BREAKPOINT on iOS, et cetera, et cetera. So that's the reason why you can only have one owner: every resource can only be owned once. Okay, so this is not really new, to be honest. This problem exists in C++, and we have a way to deal with it: it's called copying. Just have two copies of the array, and then we can do whatever mutations we want, because they're two different copies, so they won't cause data corruption, right? And if you notice, if I'm just using a primitive value, like I just have one and then I assign `v2` equal to `v`, this will actually compile, and the reason it compiles is that primitive types implement copying. You can copy them around, and actually every data type can implement the Copy trait. Copy is a trait; a trait is like a protocol, actually. So you implement the methods of this trait and you have copying, so I can copy everything, right? Which is nice, it works fine. It's inefficient as crap, but it works, and this is in actual production code. So how do I pass things around in a function?
Well, I have to do something like this. Since I can only have one binding, every time I reuse something I have to return it back so it can be reassigned and reused. So let's have a function called foo, because that's a good name for functions, and this thing takes vectors v1 and v2 and returns some result. The result it returns, once again, is the meaning of life and everything, and it returns the two vectors back. So if I use it in this way, where I define vectors v1 and v2, then pass them to this function, then I can reassign the result back. Throughout this flow I only have one binding to each vector at any one time, so this thing will compile fine. So this is fine, this works, but I mean, why? This is not right, this doesn't feel right. So to solve this problem Rust has another concept, which is called borrowing. What does borrowing do? Borrowing allows you to have a reference to an owned resource, and when the reference goes out of scope it won't actually deallocate the actual resource. So a reference will borrow ownership rather than owning the resource. So in this case, how do I know it's a reference?
I have the nice `&` here. Whenever you have a type that's prefixed with an `&`, that means it's a reference. So I pass these two by reference, and then I don't need to return them back, because I'm passing them as references, so they won't be deallocated, and this thing will work fine. And when I want to pass something as a reference, as the reference form of its own type, I just prefix it with `&`, and this thing will work fine. So hopefully that solves your question about how you deal with that kind of problem. Yeah, it's not going to be deallocated. So just to show it on my sexy chart over here: I have an owner and a reference, and these things are both pointing to the same thing, but this one is a reference, so when it goes out of scope, the resource won't be deallocated. Yes, good question, I'm going to deal with that next. Very good question. Okay, but before I deal with that, I'm going to deal with another issue first, which is: what happens when something decides to mutate? In the previous example I was trying to add some votes. Let's say I try that over here again: I pass something as a reference and then I try to push. Aha, cannot compile. Why?
Because by default references are immutable, so you can't mutate stuff. If you want to mutate stuff, Rust makes it hard. It makes it hard: you have to do something like this. So what exactly is going on over here? You need this thing called `&mut`. `&mut` means a mutable reference. Okay, so over here what I do is I define a mutable variable called x (don't worry about the braces for now, I'll explain why I have them later), and I define a mutable reference to x called y. Then I change the value inside; the star is how I dereference the reference. And then when I print x, my x will actually have changed to 6. So, like I mentioned, a mutable borrow from an immutable value will give you an error, so one thing to know is that my x has to be mutable. If my x is not mutable, it will still throw a compile-time error, which is great for stopping those pesky memory bugs at the border, right? All right. And yeah, you need the star to access the content behind the reference. For those of you who, like me, looked at the code and are wondering what the hell is with the braces: there are a few more rules. Sigh. The rules are borrowing rules. The first rule for borrowing is that any borrow you do must not last for a scope that's greater than the owner's. That answers your question from previously: what if my reference is outside of the scope of the owner, what will happen? It won't compile; it will give a compile-time error. That's the first rule. The second rule is about mutable references. You can have one or more immutable references, as many as you want, it doesn't matter, but you can only have one mutable reference at a point in time. Yeah. And this is enforced at compile time, so if you break this it will throw something at you, like, I'm not going to compile this shit. So this is the definition of a data race. A data race, in computer science parlance, occurs when you have two or more pointers accessing the same memory at the same time and one of them
is writing and one of them is reading. You don't want that, right? When something is writing, you want all the reads to stop; you don't want anyone reading at that time, right? So this is actually pretty good: this is preventing data races at compile time. I don't need all the stupid locking blocks, all those mechanisms I'd otherwise need to manage data races at runtime myself. I don't need that anymore. This is pretty awesome, actually, although it makes writing Rust code 90% of the time fighting this. So let's re-look at the example I showed you just now. Here's the example without the braces. I define my mutable variable x here, I define the variable y which is a mutable reference to x, I change the value here, and then I print x, and that's it, that's all I'm doing. Innocent-looking code, but it will not compile. This will not compile because we are breaking the rule: we have more than one reference pointing to this thing, or rather we have a mutable reference and an immutable reference. In this case y is the mutable reference, and y goes out of scope at the end of this block. The println will also create a reference, because println won't take ownership, it can't; it has to take a reference in. And because this creates an immutable reference, at this point in time, in this scope, you have two references, one mutable and one immutable, and that is a data race, and that's why this will not compile. Okay, second rule: references must not live longer than the resource that they refer to. So here's an example, and this kind of code is something that happens, this is Vishnu's case actually, whereby I define this reference called y, and then inside this scope block I assign x equal to 5, and y is equal to a reference to x. Will not compile, because x does not live long enough for y. Basically the owner needs to live longer than the borrower, all right? You can't borrow something without an owner; that doesn't work, Rust will screw you. Okay.
So that's the memory model. There are a few more things as well which I haven't talked about, things to manage lifetimes, which I won't go into right now, but that's the gist of it. So now I'm going to continue with a few more features that Rust has. Rust has structs. Structs look like structs in any other language; they are just a collection of data types, and the way you declare them is you just declare the field name and the name of the type. The interesting difference between structs in Rust and other languages is this: when I create a struct like this, this thing is immutable, because it's a `let`, not a `let mut`. If I want to create a mutable version of this, I have to use `mut`, and then I can change the x value and the y value. For the guys with keener eyes, or the guys who do a lot of OOP stuff, you'll notice there's no field-level mutability. What that means is, say I want specific properties to be mutable or immutable: if I have a user struct and I want the user ID to be immutable but the name to be mutable, how do I do that? I can't do it out of the box, actually. In order to achieve something like that I need to use a mutable pointer inside the struct, so it's a bit more complicated; by default, mutability applies to the whole binding, not to the actual fields. So that's how structs work. Because Rust is cool, it doesn't have classes, but your structs can have methods. Over here I have a simple struct called Circle, with x, y and radius, and it has a simple method called area, a nice boring example from an OOP textbook. The only interesting thing in this example is this: our good old friend `&`. `&self` means an immutable reference to self, meaning that if you change anything inside self, compile-time error. If you want to mutate self, you have to use `&mut self`. If you want to own self, then you use `self`. Yo dawg, put a self in yourself. So the way you call a method is very simple, like any other
programming language: you just dot the method name, that's it. Okay, so one more thing I want to talk about, which you guys might not have heard of that much, is static versus dynamic dispatch. How many of you know what static dispatch is? All right, not that many people, okay. Static dispatch means that a function or a method has a fixed address in memory, so when I call this function I can just go directly to that address; I don't need to dynamically calculate the address of that thing. That's what static dispatch means. Dynamic dispatch means the address of that function needs to be evaluated at run time; it's not available at compile time. Okay, so to those of you who do JavaScript and don't care about this, why do I need to worry about this at all? These things actually matter a lot in a systems programming language, because there are different levels of optimizations you can do with static dispatch, and Rust by default uses static dispatch, which allows you to do several optimizations like inlining. So what exactly do I mean by inlining? Let's look at that. Let's say I'm stupid and I define a function called add, because I don't like the plus operator, so I want to have a function for it; let's say I'm a functional programming geek, so I want everything to be functions. Actually, I am a functional programming geek anyway, so let's do that. All right, this thing just takes in two ints and returns an int; in the body, by default the last expression is always returned. Okay, that's cool. So it'll complain at compile time if the function does not return anything, because the function's return type is specified here; if you don't specify anything here, or you return void, it won't throw an error then. Okay, good question, Rust is fun that way. All right, so this is how I call this function. Let's look at what the output of this would be in assembly code if the compiler did not inline it. If it's not inlined, the compiler will generate some assembly like this. Whoa, what is this, right? Okay, let's
just go through this very easily. Basically, the gist of it is we have a lot of instructions. The first instruction sets the pointer so we know where this function is, then you need to push the arguments, then you need the call instruction, then you need to put the result in another register, then you need to pop everything back. That's generally how function calls work in assembly. To those of you who have done some low-level stuff this is probably child's play, but this is what happens every time you have a function call. So even for a simple function, this is actually pretty expensive. Function calls are expensive. Does that mean I shouldn't have function calls? What can I do here? Well, if this thing is inlined, inlining will reduce it to this: when you inline it, that piece of code will actually just compile to an add instruction. The reason for that is that what inlining does is expand the function body at the point where you're calling it, so it won't actually be a function call; it'll be kind of like a macro expansion, which allows the compiler to do several kinds of optimizations, which is pretty awesome. So generally a lot of high-performance languages will try to use static dispatch whenever possible. That's static dispatch and inlining. However, even that has problems; there are some cases where dynamic dispatch is actually more effective. So for example, and this is where I'm going to touch a bit into Rust traits, just a bit: let's say I have a method called do_something, and I want to constrain my argument type to some type that implements a trait. The way I do it is this: my x is of type T, where T is a generic type, and this generic type conforms to this trait (protocol) called Foo, and in this trait Foo I've defined this method, so that's why I can call this method directly. So that's how Rust's
protocol types work. Okay. And let's say I have this, and I define the same method for u8, for byte, and for String. The way Rust will generate code is it uses this thing called monomorphization, which sounds like a load of baloney, but basically all it means is that it will, stupidly, generate functions for all the different types. For every type that conforms to this trait it'll generate a method. This is cool because it allows static dispatch and inlining, but it's not so cool because it can bloat code size: if I have a trait and every type conforms to it, my code size will be immensely large, and this will result in more instructions. And if your code size is limited, like it is on the Pebble my friend showed you earlier, then you're doomed. So generally in cases like this, dynamic dispatch might be more effective. Rust actually allows you to specify which type of dispatch you want to use per function, which is pretty cool. So yeah, that's why it's awesome for this kind of stuff. Okay, next thing I'm just going to go through is Rust enums. Rust enums. Yeah, go ahead. Yes, by default it will obviously do some optimizations; it's not guaranteed, but you can guarantee it, because there's meta information you can add to a function, and in that meta information you can specify that you want it to be inlined or you don't want it to be inlined. Will it honor the request? Yes, correct, because it assumes that you are smart enough to know what you're doing, so it will follow that request and make it inline or not inline. So it allows you to configure it, and by default it has some smart way to do it. So, for enums: Rust enum types are very powerful. C and C++ enums only allow enums to be integers; in Rust your enums are what we call tagged unions. They can have data inside of the enum, so this is pretty powerful. If I have something called a Message, which can be of type Quit, ChangeColor, Move, I can have data inside this thing as
well, which is pretty powerful. You can imagine, if you've done programming languages like Haskell or Swift, this is the new hotness; every hot new programming language has this, so if you don't have this then you're a sucky programming language. And this allows you to do very powerful representations; it's very easy to represent an abstract syntax tree for a programming language, for example. Sort of, yes, but a union in C is, yeah, it's like a union in C, except that with a union in C, if I'm not wrong, there is some limitation on what types you can have a union of. Over here your type can be anything, and the container can be anything as well, so you can represent anything you want, for those of you who have done functional programming: ML, Haskell, Swift, et cetera. Last topic for today: strings, everyone's favorite data type. Strings are awesome, right? Everyone uses strings for everything. So, C strings: those of you who have done C know a C string is just a buffer of bytes with a null character at the end. Rust strings support UTF-8 by default, because that's the new hotness as well in programming languages, supporting UTF-8 by default. So, Rust strings. What does that mean? I'll go a bit more into UTF-8 in a bit, but let's say I just declare a string like this, like "hello there". What's the type of this string? This thing is a static string, right? A static string is actually of type `&str`. It's statically allocated, and it's available throughout the entire duration of the program; it will actually be in the data segment of your binary, so you don't need to worry about memory management for this one, it's always available. Rust also has another type of string called String. So there are two types of strings: one is the statically allocated string, which is `&str`; the other one, confusingly enough, is the heap-allocated String, and on this String I can do mutations and whatnot. So if I have this thing and I want to push hello
world, this is how I do it. So this is confusing: two string types looks like a recipe for trouble. Another issue with strings that you might not have thought about: usually when you do strings, you can just use an index and get a character out, and it works flawlessly in most programming languages. Exactly. So Rust strings do not support indexing. Why? The reason is that in UTF-8, indexing does not actually mean anything. In UTF-8, the same character, a character in terms of human representation, does not necessarily occupy a fixed number of bytes. If you have a Japanese character or a Chinese character, it can be many bytes. All the standard new emojis they come up with, have you ever wondered how they're encoded? It's because a single character can actually be multiple bytes. So if you think of strings that way, string indexing is an O(n) operation, not an O(1) operation anymore, because you have to walk through the entire string to work out the grouping, how all these bytes compose into characters, and then find the nth character of the string. That's why Rust doesn't hide this from you. Generally most languages will hide this from you, so if you're dealing with localization, like we do a lot, and you try to index something that's actually not a valid character boundary, you get a very funny result at the end of the day. If you have Chinese-language text and you access the third element, you get an invalid string, and you have to be very careful with anything that's non-ASCII. So what Rust does is it allows you to look at stuff from either the bytes point of view or the characters point of view. For the bytes point of view, in this case I have "hachiko" here, and over here I print the raw bytes inside. If you notice, there are a lot more bytes than when I print the characters out. Okay, you need to be aware of this when you're dealing with strings. And actually this is not a new thing; in Swift as well, it's also
a new sexy programming language, and it's the same thing, a similar concept, because UTF-8 is important. So with that, the next section is the foreign function interface, which is how I call Rust from other languages. Because if I'm doing a large project, like on iOS, and I want to use Rust, I can't use Rust directly, because iOS doesn't support Rust. So how do I do it? Rust allows you to communicate with C, which is awesome, and how you do it is something like this: you can define an extern function, and if you define this as extern, you can actually call this function from C as well as Rust. So a Rust function can be called from C, and you can also call C functions in Rust. In this case, sorry, I declare this function in Rust, and it's defined somewhere else in some C library, and I can call it directly, but I have to use this unsafe block. What unsafe does is it says: okay, I'll let you handle your memory, I won't touch this, you're in charge, and you can do whatever you want to do inside an unsafe block. So this will allow you to call C code and interface with C code, and you can also expose your functions to be called from C code. In this case I have a Rust hello world, and if I call this function from C, it'll actually return to me a pointer to an array of characters, because that's what C likes, and yeah, this thing will work in C. I don't want to go into detail, but the point is you can do it. So keeping that in mind, plus the fact that Rust supports a very large number of architectures, because it's using LLVM under the hood, and so any architecture that LLVM supports, Rust also supports, it means Rust can work almost anywhere. So if you think about it, if you want to write cross-platform code on Android and iOS, for example, and you want to have some common library, you can implement it in Rust, and you can have bindings to call this library from iOS via C, or on Android in Java using JNI. It's possible, you can do that. So it makes it very exciting actually
to use. You don't have to use it just as a systems programming language, although that's what it's designed for. With that in mind, that's it, that's the end of my presentation. Any questions, people? Do ask. Roland, thank you. I've not gone through Rust before, but having been through the hell of synchronized blocks in Java, I'm not yet convinced, but I get why; it's exactly that problem. The other point was garbage collection. I was going to say that you're out of date by about 15 years, but actually you're not: Android for some reason did not adopt the HotSpot VM until 2014, and so up until two years ago what you were saying was correct. Now it is the case that there is no sweeping; in general operation there's no sweeping, and also there's no sudden stop, just a small fixed amount of work at fixed intervals to keep the pauses under control, and on desktop that's been true since 2000. I hadn't realized, and there was a point to this which I forgot about. You're right about the optimizations that have been made, but having said that, it's always going to be not as efficient as you manually managing memory yourself; the point is the GC will always have some lag, depending on what you're measuring exactly. If you're in a hard real-time function and you're extremely latency sensitive, then the tracing garbage collector is actually a reasonable trade-off; you get what you pay for, and if you're doing hard real-time, why are you running something on Linux in the first place? So there's that. Questions?
So, you mentioned that concurrency is one of the big tenets, so how does this kind of linear memory management fit with it? I didn't go into a lot of detail on concurrency, because a lot of the time you use channels to do concurrency properly, but the big thing memory management helps with is that, because of the immutability and the memory model that prevents data races anyway, you don't need to worry about stuff mutating state across multiple threads. By default everything is thread-safe anyway, because you can't have a data race: multiple threads can access everything nicely, and if you have a data race, compile-time error. So one big chunk of the problems that come from concurrency is solved already. There are obviously other problems with concurrency that this doesn't solve, but those I didn't go into much detail on myself. With multi-threaded? Yes, sir. Within one thread it's less of a problem, but this is actually more for when you're multi-threaded: you can have data races quite easily then, so this is designed with that in mind. Yes? What exactly are the advantages of using Rust over what you're already getting from C++? The advantages are my first slide, which is all the potential memory bugs that you get when you use C/C++. Basically, there are a lot of memory bugs that can occur with manual memory management, where you don't have these kinds of controls, like I mentioned earlier on: dangling pointers, buffer overflows, buffer overreads. These things the programmer has to manually check, and have those bounds checks everywhere, and if you miss a bounds check, you have a vulnerability. Those problems are already solved in Java, so what exactly is this language giving that's not already there, and solved in Java? You think it's solved in Java because Java has garbage collection, but we can't use garbage collection in this case, because we're talking about systems programming languages
so we can't use garbage collectors, for the numerous reasons I mentioned earlier on. But C/C++ still have those problems. OpenSSL still has a lot of unsolved bugs, as I'm sure most of you are aware; every couple of months I see a Hacker News article on a new OpenSSL vulnerability found, yay. And it's not just OpenSSL, it's a lot of other C/C++ libraries that often have these vulnerabilities exposed. So it's an ongoing problem. It's like playing whack-a-mole: you whack one bug, then another one pops up after some time. So having this kind of programming model helps prevent that to a large extent, because your compiler is checking for these kinds of bugs, which is more effective than your programmer checking for them. So you're right that it's difficult, but it's safe by default, because what you can guarantee is that you won't have a memory bug; that's the guarantee that you have. C/C++ may not be as difficult, but there's no guarantee that you won't have a memory bug. That's the main thing it's trying to solve at compile time. Yes, what about testing?
Testing: Rust has testing frameworks, and Rust actually comes with a very nice build tool called cargo, which basically sets up your project for you. It sets up a testing directory, you can write all your tests there, so it has built-in support for testing. And the whole ecosystem is actually quite mature. The reason, which I didn't mention earlier, is that Rust is actually developed by Mozilla, and they're using Rust to implement Servo. Servo is their new rendering engine for Firefox, and they actually designed Rust to develop Servo, so it's something that has very real use cases. That's one of the reasons why the ecosystem is actually quite mature now, at least when it comes to things like testing, build tools, dependency management, et cetera. It's way more mature than C/C++'s. There are some bits of the Servo project which are currently in the nightly and developer builds of Firefox, in production today, and if you want to contribute, actually you can; if you want to contribute to a browser, this is your chance. Yeah, good question, very good question. It's very difficult; I wanted to include closures' memory management rules, and I didn't include them because I would run very, very over time, but very good question. I recommend the Rust Lang book chapter on it. Yeah, let's say you have a tree, and from your child nodes you want to point to your parent, but then you will have multiple references. Yes, it's very insightful. No, just a child pointing back to the parent, and you have two children. So if you have a cyclic kind of data structure, a cyclic object graph, Rust's borrow checker has issues with it; the borrow checker will give you problems. The way to break those cycles is this: Rust provides a reference-counting library as part of the core implementation, and if you want to do something of that sort, you can use the reference-counting library to create a weak pointer to your
parent, which doesn't increase your reference count, if you've worked with reference counting before, and that's one way for you to have those kinds of cyclic data structures: Rust objects that point back to their parent and stuff like that. But you need to do something of that sort; out of the box it will give you issues at compile time. Yeah, GC will definitely solve the problem: cyclic object references are a problem that GC solves. In non-GC environments, in Objective-C for example, there's reference counting, where you specify at the object-graph level which references are strong references and which are weak references. A strong reference will increase your reference count, a weak reference will not, so you don't have a memory leak there, because a cyclic reference would otherwise cause a memory leak. So you need to use reference counting to manage that. Any other questions? Can you give examples of maybe big programs or platforms that have been built with Rust? Okay, good question. There's a list of them in the Rust project readme, but the biggest project by far is Servo, definitely the browser engine itself. There's a server implementation in Rust, which I can't remember the name of, that's supposed to be a very high-performance server. Then another interesting project they're doing is trying to redo all the coreutils in Rust, all your core Unix utils, mv, ls, all of these; they're trying to reimplement them in Rust, which is an interesting starter project. If you want to start Rust, I recommend you take a look at that, see how they're implemented; then you get a nice example of how to do things in Rust. But as far as other big projects go, I'm not sure at the moment; I'd check the internet, but I don't have Wi-Fi right now. The biggest by far is definitely Servo. Rahul, do you know any others? None that are announced. Well, there are some people, like me, who might think about redoing things in Rust, but no one's
The reason for that is very simple: Rust has only been 1.0-stable for about a year now. If you were writing a big project in Rust, you wouldn't have announced it yet, because you would only have started a few months ago, and nobody ever uses the 1.0 of anything, except Go, which for some reason people started using at 0.7.

Rahul wrote a DNS library in Rust. I also wrote a library in Rust that talks to an API, and it turned out the XML processing libraries for Rust were all event-based; there was no way to get a tree, so I had to write that as well. Because of the cycles we just mentioned, trees are awkward, so the existing libraries all work on the basis of events: you get events like element start and element end.

[Inaudible question about calling system-level APIs from Rust.]

Yes, correct: once again, Rust allows you to call C functions, so you can make all your system calls from there. There was another project I saw where they ran Rust as an iOS app, an iOS app written in Rust, and the way they did it was to call Objective-C through the Objective-C runtime, calling the runtime methods directly. So you can do cool stuff like that. You can talk to Omar, there will be breaks of course. Thank you.

How are we doing for time? Do I still have to finish at that time? Okay, so I get to finish. Alright, let me know when.

Alright, hey everyone, my name is Justin, and today I'm going to share about Hyperledger. Who here has actually heard of Hyperledger? Okay, very few of you. How many of you have heard of, or rather done, blockchain? A bunch of you. Used Ethereum, maybe, one of the implementations? Alright. How many of you understand what blockchain is? How about that? Okay, great, more people.

So, this is actually the third presentation I've done on this. The first was a very early overview of Hyperledger, from when the project had just been announced.
There was actually no source code at that point; I first introduced Hyperledger back at FOSSASIA. The second presentation I did was in Berlin, when there was actually source code to show, an implementation of Hyperledger. So today I'm going to give an overview, hopefully get into the code as quickly as possible, and share with you what changes have been made and what's going to happen in the Hyperledger project over the next three to six months. That's what I'm going to cover today.

For those who don't know what blockchain is: in a very simple sense, a blockchain is a distributed database. As simple as that. But it is a distributed database with elements of smart contracts, and elements of authorization, authentication, and membership services, things you would not find in an ordinary distributed database. From the Hyperledger standpoint, it's basically "distributed database 3.0", if you like to call it that: a distributed database with all the fancy stuff you want out of it, across a network of peers.

Why do we want blockchain in the first place? I'm sure you've heard a lot about blockchain through bitcoin. Bitcoin is an implementation of a blockchain, a very specific use case of one, but blockchain is the general form of what the bitcoin network is, and it's meant to be applied across different business transactions and business processes, building all of that into the distributed database.

Take the banking sector, for example. There are a lot of different parties involved in clearing: clearing ledgers, clearing money, and so on. Traditionally, what do we do in the IT department? We create APIs. That's what we do. Bank one calls our API, I call their API; a customer builds their own client application, they call my API, I call their API; there are integration issues, there's this, there's that, and everything becomes a whole slew of mess. That's what happens in real life, and on top of it there are a lot of approval processes; I'm sure you know what banks are like, layers and layers of approvals.

So what Hyperledger is trying to add on top of distributed databases is a permissioned ledger that is shared and replicated, leveraging distributed database technologies to replicate it out, so that you own your own copy of the ledger and it gets updated and synchronized automatically. But not only that: you can only see the particular hashes, the particular entries in the ledger, that you have permission to see, and that's built in as part of the system. That makes it very interesting for industries with a lot of regulation: you don't want bank A looking at bank B's records; you want only the regulator (not MOM, what's the ministry... MAS) to be able to see everything, while the other organizations cannot see each other's records. You want that kind of flexibility, and you want everything tracked.

One thing a lot of people ask me about blockchains: can I actually roll back an entry in the ledger? One of the properties built into the ledger is that it is immutable. You must remember that. I get a lot of developers who ask, "Can I delete this entry? I made a mistake." If you want to delete the entry: yes, write another entry into the ledger that reverts it. Everything is tracked. Scary, isn't it? And that makes it interesting on both ends: on one hand you have accountability, and on the other hand it makes certain things difficult to work around.
And that is what's happening in the industry right now.

So, the Hyperledger project has changed a lot over the past year; about nine months, actually, since it was announced at the beginning of the year. It is meant to be a collaborative effort to advance blockchain technologies, with a cross-industry open standard to transform business transactions. The key point is business processes and transactions: it's not just about money, it's not just about bitcoins, it's about processes, and you can map blockchain technologies onto pretty much any industry out there. Even in manufacturing: one implementation tracks the materials, essentially the whole bill of materials, within the ledger itself. That's one example, and many other cases that you might not think would benefit from a blockchain are now being considered and looked at.

The other interesting thing about Hyperledger is that it's meant to become a connector between private blockchains. This is one of the things that hasn't been implemented yet, and it makes the near future very interesting: say a business network in the finance industry has its own Hyperledger distributed blockchain; you could connect it to another distributed blockchain automatically, with permissioned synchronization, which enables very interesting cross-industry use cases. Hopefully we'll see that in the next ten, maybe twenty, years.

One of the biggest problems with blockchain is that it is a business network: the entire business network, all the customers, the whole industry, needs to adopt it. That's one of the interesting problems with blockchain adoption: you can't get any benefit if I use blockchain, one client uses blockchain, and that's it. There is absolutely no benefit there. It only makes sense when the entire cross-industry network adopts it.

So what is in scope for Hyperledger? First, smart contracts. Those of you who've used Ethereum know smart contracts; Hyperledger calls them chain code. It's essentially the same thing: literally code that is run and executed, and I'll show you later what chain code looks like. Then there are the ledger data structures, which will be standardized across the different Hyperledger projects. This is the one main change since my last presentation: there are now two more implementations of the Hyperledger standard, so to speak. The most popular one is Fabric; then there's Sawtooth, from Intel, which is already approved, and Iroha, from a Japanese company, which is still in the process of being approved. Hyperledger as a project is about having multiple implementations of the ledger and the blockchain, each with its own technologies and benefits, but all following a standard set of data structures, a common contract, a common language, at the data-structure level. Membership services are in scope too, which let you plug in whatever authorization mechanism you need or want, using CA-based certificate authorization and so on; also the validation framework, identity services and so forth, and network peer services. What's not in scope is the operational side: the main way organizations intend to make money out of this is through operations, specializations, and additional modules to be created.

From an overview standpoint, a Hyperledger blockchain looks like this across implementations: you have the membership module, the blockchain module itself with the transactions, and the chain code. Those are the three main pieces, and it's very basic stuff. Membership handles essentially all your authorization, registration, identity, and so on; very standard. There is no OAuth authorization module at the moment, but it's possible to create one and plug it in, and that modularity is one of the key goals of Hyperledger. The main core focus is the blockchain itself, the protocol and actually performing transactions; that is the core of the blockchain services, which includes consensus. The consensus manager lets you plug in multiple different consensus algorithms; at this point there are a few of them, proof of elapsed time and the like, a whole bunch of advanced, crazy algorithm stuff which I don't really want to understand. If you want to know how consensus actually works, you can read a lot of papers from the universities. The main thing is the distributed ledger, and a distributed ledger is a database, full stop. At this point it's leveraging, if I'm not wrong, RocksDB and CouchDB on the back end. It's just a database; that's it, done. The P2P protocol right now leverages gRPC, over HTTP/2. gRPC versus REST: one of the updates, which I'll show you later, is that the REST APIs created previously are going to be deprecated in the next few versions in preference for gRPC on HTTP/2, per the new specifications. And the ledger storage is just blob storage: it can be a physical hard disk, it can be S3 or whatever, and that's where the actual content goes if you want to store videos, images, whatever it is.

And what is chain code? It's business logic; that's it. Business logic that runs automatically when a transaction is being made and approved. That's pretty much it. Initially, in Hyperledger Fabric, the main project implementation, chain code was written in Go, Golang, and only very recently was Java chain code approved. Yes, Java. Don't shake your head, and here's why: chain code is built in such a way that it's extensible to any language, so it's just a matter of time. The shims are available and written already; it's up to the different languages to implement those shims in order to integrate. So if I want C++ chain code, I can have that; if I want Ruby, Python, or whatever, there is work being done on Python chain code right now, and on the various other languages and implementations too. But for now, officially, it's Java and Go. If you want to contribute the other languages, that would be a great way to work on chain code. So that's chain code.

How much time do I have? Let's quickly get into the actual code. The benefits slide, reduces, saves, removes, blah blah blah, and so on. This is the core of what I want to share. As I mentioned, there are three implementations right now, all in incubator status. Incubated means it's not even beta; it's alpha, developer builds. Fabric, the main one, is at version 0.6 and still a developer preview. It's written in Go; the protocol is gRPC. It still has the REST API, but 0.6 will be the last version with the REST API implemented in it: in 0.7 or 0.8, when the first beta release comes out, it will be removed. The REST API, which runs over HTTP/1.1, will be removed, and there is a reason for that: mainly performance.

Right now, through the REST API, you can make about 15 to 20 requests per second on a standard deployment. With the move to gRPC, that goes up to about 200 to 500 requests per second. It's a huge performance jump, and that's the reason they're going with gRPC. Java chain code support was just recently implemented, with Go chain code already in there, plus Chaintool, which is in a way another language, one I don't quite understand, but it's there too. And what just came out is the Hyperledger Fabric client SDK, which is just a wrapper around the gRPC calls and the REST API, which I'll show you a little later.

Sawtooth's code base just came in; it was contributed in June or July, I think. It's currently at release 0.7, it's an implementation written in Python, and it leverages REST APIs. One of the interesting things about Sawtooth Lake (this is by Intel, by the way, a proposal by Intel) is that it has its own new consensus algorithm called proof of elapsed time, which leverages the Intel Software Guard Extensions, a hardware feature of their processors. That's one of its consensus models; I have no idea what the algorithm is, because it's quite complex, but it does the consensus approval. The other is quorum voting consensus, which is essentially: you vote yes, you vote yes, you vote yes, and once enough say yes, we approve. That's the consensus model.

Iroha is something very interesting that just came out and hasn't been approved for the incubator yet. It is written in pure C++, and it also leverages gRPC and REST APIs. It wants to be a lightweight blockchain implementation, to help it run on Android and iOS phones, and there's a lot of interesting stuff happening there. One thing I'm personally working on is seeing whether I can port one of the Hyperledger implementations to iOS, using Swift.
The entire thing, having your phone as a node: that would be very interesting, and I'm working with a few guys internally to get that implementation going, so I'm actually quite excited about it.

From an application architecture standpoint, you still follow the same three-tier architecture: you have your application, you have your API layer, and then you have your database at the back end, which is the ledger. There are a few differences now. In the ledger itself you have multiple peers, and the peers form a network. Each peer has its own chain code deployed; one peer can have the same chain code as another peer, or different chain codes, as long as the chain code is approved.

There is one major change coming in the version 1.0 preview: they are splitting up the peers. Previously, peers were both nodes and consenters; the change splits the trust assumptions, or rather the approvals, out into another node type called endorsers. An endorser is not a consensus node, okay? An endorser is someone, or an application, that basically says "approve". Think about it: it's not consensus approval. It could be, for example, some checks within the organization, a physical person saying approve, or a push notification asking someone "do you want to approve this transaction, yes or no?". That is exactly what an endorser is, and the point is to split that role out of the peer, away from the chain code. Right now, the endorsement code is actually part of the chain code, which makes it very heavy in terms of execution, so one of the big changes is to split that apart: the chain code focuses only on the business logic, and the endorser part handles the approval. However you choose to do the approval, you do it on an endorser peer. That's the main change.

What does that mean? It means they've separated the trust assumptions from the chain code and the consensus. You can scale the endorsers, because they're now lighter weight, away from chain code execution. Confidentiality is also extracted out and moved to the endorser, so your chain code doesn't have to handle any of it, which is very interesting. Remember I mentioned that each entry in the ledger is visible only to a person, a group of people, or a role: with an endorser, the endorser can grant approval for a particular role, person, or identity to read that particular line of the ledger. That's another thing an endorser can do. And then, of course, privacy.

There's also a big change at the application level. Previously everything was very separate: the application calls the REST APIs on the HTTP server, and from the HTTP process everything goes out to the peer. Going forward, everything will go through a native API, out to an endorser first, which validates the transaction, and then on to a peer, or rather a chain code peer. There's also a transaction flow diagram, which I'm not going to go through; I want to show code, and I think I have a few more minutes for it. So, all in all, those are the updates on the changes coming to the Hyperledger project, specifically Fabric itself. Whenever I say Hyperledger I usually mean Fabric, because that's the main implementation of Hyperledger.

Alright, so how do I get started? All the source code for Hyperledger is open source, because it's under the Linux Foundation; you can clone it from the GitHub account. Let's see. Oh my god, it's all small, can you see? Okay, hold on, let me make it bigger. There we go. You just clone it from the GitHub account:

git clone https://github.com/hyperledger/fabric.git

One thing to note: because Fabric is written in Go, you need your usual GOPATH and all of that set up; I'm not going to go through that, you probably know how to do it. There are two ways of getting the development environment set up. One way, if you're going to make changes to the actual core ledger itself, is to use Vagrant. Vagrant what... vagrant start? vagrant init? Vagrant... ah yes, correct: vagrant up. You just do that, and it sets up the environment and everything. You can also set up the environment directly on your Mac or Linux machine; it does not support Windows at this point, I think (if anybody's interested in getting it working on Windows, sure, why not), but on a Mac you need to install a few more packages, so the fastest way is just to use Vagrant and let it keep running.

There are two applications that get built. As you can see it's a Makefile: you just do "make peer" or "make chaintool". The peer is the main blockchain application; I won't call it the ledger, because the ledger is the database behind it. It's the main blockchain, and it creates the nodes. Give me a second, let me get into Vagrant; I don't really like doing it with Vagrant unless I'm actually changing the core source code itself. Is it big enough, or should it be bigger? Tell me when to stop. More? Really? It looks huge on my laptop. So, vagrant up. Great: all the base images and everything are already there; you can create your own base image if you want, and all the source code is within the project itself. Let it run; I've already created it, so it doesn't have to download all of that.

There are a few ports you need to remember: as you can see, there are ports 7050, 7051, 7053, 7054 and so on, some of the main ports that are required or need to be open. How do I know what the ports are? You can actually change them: if you go to peer, there is a core.yaml. Let's take a quick look. This is the main configuration for your various peers; if you're going to install and run peers on different boxes, this will be your configuration. You can also keep one configuration and apply different changes to it using environment variables: each setting maps to an environment variable, so if you want to change the address, you just change it in the environment variable. So that's core.yaml.

Let me quickly SSH into Vagrant (vagrant ssh), and everything is already copied and set up in there. It's located under, where is it, I think under hyperledger, and you have all the source code there. All you need to do is run peer, the main application: you can register a chain code, you can create a network, but mainly you just do "peer node start", and that's how you create one node. And that's it: I've just created a blockchain. Actually, I have some socket issues, as you can see, so not exactly. That was from the source code itself.

But if I actually want to run a real deployment: you guys are familiar with Docker? I love Docker. I would rather use Docker, with my peers and my membership services already built as images. So, in Docker, there is the membership service and the peer; these images are regenerated as the code changes, so this is the latest version, and that's all I need. After that, all I need to do is write a docker-compose file to set all my machines up. As you can see, if I want to override the core.yaml settings, I just set environment variables here, and then I run it and start the node.
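For reference, a compose file for this kind of setup looked roughly like the sketch below. This is a hypothetical reconstruction of a 0.6-era file: the image names, ports, and environment variable names are assumptions and may differ between Fabric releases.

```yaml
# Hypothetical docker-compose sketch: one membership service, one peer.
membersrvc:
  image: hyperledger/fabric-membersrvc
  ports:
    - "7054:7054"
  command: membersrvc

vp0:
  image: hyperledger/fabric-peer
  ports:
    - "7050:7050"   # REST API (deprecated after 0.6)
    - "7051:7051"   # gRPC
  environment:
    # Environment variables override keys in core.yaml,
    # e.g. CORE_PEER_ID overrides peer.id.
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
  links:
    - membersrvc
  command: peer node start
```

Scaling out is then a matter of adding more peer services (vp1, vp2, and so on), each with its own CORE_PEER_ID.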
To run multiple nodes, I can just create more Docker container images and spin them up. Okay, so that's the basic level. What happens when I do a docker-compose? Just like that: start... there you go. Oops. Oh yes, the reason is that I need to halt my Vagrant instance, because the port is already bound. Halt the Vagrant instance, then docker-compose start, and I've just created a membership service and a peer. I could have multiple peers if I wanted to, and that's it: I've just created a blockchain.

So what do I do now? The next step is to register a chain code, your business logic, with the peer. So how do I register business logic with the peer? That's a very good question. docker-compose up... alright, now my Docker containers are running. The next step is to register the chain code itself. You have to compile the chain code, in whichever language it's written. Let me go to chaincode_example02; this is a very simple example. I just do a "go build" to build the chain code itself, the actual execution code, and then I register the chain code with the peer I want it registered to, so that whenever something happens, a transaction happens, the chain code executes automatically. You can register multiple chain codes.

Alright, so it's registered. 0.0.0.0 points to my peer, which I already exposed on port 7051, so it's registered through the shim and everything, and I have the execution for it. This chain code is really simple: there are two entities, which I just call a and b, that's it, each with a number associated with it, and all it does is plus and minus: the invocation just moves a number from one entity to the other. Very simple. At a basic level, that's what chain code is; you can of course have much more complex chain codes.
Now remember, you can actually do everything through the REST API as well. I have Postman here, and it's really small; is there a way to increase... I'll just zoom. Okay. The very first thing: because I've set up the membership service, you need an identity; you need to register yourself with the membership service. How do I know what members I have? Let me quickly go here: under fabric, in the membership service, there's another YAML file, and this sets up the certificates, the CA, the SHA and all of that, and also your users, your affiliations, your roles, your admin, whatever it is. Right now I'm just going to use one of the default users, which is lukas. So I register myself: I say, hey, I am lukas, I'm going to register, and let's see whether it works. Send. Alright, I'm logged in.

Now that I'm logged in, I want to initialize the back-end ledger, so to speak, with some data, and my data at this point is simply a and b. There are a few methods on a chain code. The first one is deploy. Deploy initializes: the arguments in this request are passed to my chain code, which extracts them and just stores them in the database, as simple as that. I have a secure context identifying who I am, and the ID of the chain code this call is for. I make the call to the chain code on port 7050, and it returns: it's initialized.

The next step is to query: what data do I actually have in there? So I do a query. In my chain code I've written the query implementation: the query takes "b" or "a", whatever I named it, and returns just the number, an integer. Remember what I did: it's 1000 and 2000. So what's expected if I query b? 2000, right? I query, the database returns 2000, so it's correct. Really simple.
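For reference, the deploy call just shown is a JSON-RPC payload POSTed to the peer's chaincode endpoint. This is an approximate reconstruction from memory of the 0.6-era REST API: treat the field names as assumptions and check the Fabric documentation for your release.

```json
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": { "name": "mycc" },
    "ctorMsg": { "function": "init", "args": ["a", "1000", "b", "2000"] },
    "secureContext": "lukas"
  },
  "id": 1
}
```

The query call has the same shape, with "method" set to "query" and the entity name ("a" or "b") in the args; "secureContext" carries the enrollment ID from the registrar login.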
Now, the third method is invoke. Invoke can be as complex as you want; this is the main transaction, alright, and the message can be anything you want. This message is really simple: I'm going to transfer a hundred from a to b, that's it. Very simple. I execute, send, and it returns: successful transfer, everything okay. And the failure case, basically, is when a would go below zero, so you do all your checks inside the chain code to decide whether the transaction goes in and gets accepted or not.

Each transaction then forms a hash within the blockchain. I do a GET query to the chain, and I have the current block hash and the previous block hash. If I want to check the individual transactions, I can go to each block, query its transactions, and it will show me the type of transaction, the ID, the payload and everything, the certificate and what not, and also the hash data.

And do remember: the blockchain is just a chain of blocks, hence "blockchain", a chain of blocks of hashes. It does not actually contain any data. Got it? It does not contain any data. The actual data is on another database, stored as blobs, what we call blobs, as a binary stream. That's it. It's not part of the blockchain; it is referenced via the hashes. You could lose your entire data blobs if you wanted to: the benefit of blockchain is that the chain of hashed blocks is still there, and that's what you want, the records, the ledger records. The data does not matter. That's one of the things a lot of people don't get about blockchain: if you lose the data, it's okay, because you still have the transactions there, the actual transactions, and that is good enough from a regulatory standpoint.
you don't actually need the data you need just the record of transactions alright so this is a very basic overview of the blockchain itself from a fabric standpoint those who have used Ethereum it is slightly different I'm sure definitely there are a lot of concepts that are very different from the fabric and but the main one thing is that it is meant to be modular it's meant to be extensible and it's meant to have multiple peers with multiple chain codes and everything and each of this peer can be hosted in any basically distributed so all you need to do is just pipe the IP address properly and that's it alright so from a code perspective that's pretty much it so the core oh wait I want to show the chain code itself the implementation of the chain code where is the implementation of the chain code over here okay I'm just going to zoom in like that so the shims so the shims are essentially in it which invoke delete and query these are basically the shims so in it you know is the initialization invoke is the invocation of just now we saw the method invoke delete is to delete the entire state and of course the query is just to query what the thing is and all the code is here it's just to get the state and what not so that is from a chain code perspective that's the very basic stuff so what you usually do to start creating your blockchain is to work on the chain code itself the different business logic within the different peers set up the peers and then the next step is to set up your actual application and the actual application recently or rather a few months ago they released the Node.js SDK for the Hyperledger Fabric client which essentially is just a wrapper for all the GRPC calls that's it so that's all it's just GRPC calls you can just write for any language itself not just Node.js just do a GRPC call and that's it and then you can on your application side of things you can just do all the calls and entries and transactions to the blockchain alright I should 
be finishing already. Okay, so that's that — that's how you get started: work on the chain code, set up the peers, set up the membership service, and run all of that in different containers, virtual machines, whatever you want. Then attach the chain code — register the chain code with the individual peers — and call the invocation methods on the individual peers themselves. When you set up multiple peers, you have one standard main peer that you can call, or you can call another peer, and the call automatically propagates out to the other peers for consensus and runs the other chain codes. All the automation is done for you in the background — and hence the beauty of distribution.

Some use cases: shared routing codes, vehicle maintenance, and the most obvious, the financial ledger — letters of credit and so on and so forth. There's a lot you can build on this from an industry standpoint. The key thing is to think of the ledger — the hyperledger — as a record of transactions. That's it. Don't overthink what blockchain is: blockchain, hyperledger, is just a distributed record of transactions, and it can be anything that transforms the business processes of a business network. That's the basic stuff of what blockchain is. There are a lot of companies doing it, and communities out there — these are all links; later, when you get the PowerPoint slides, you'll be able to click on all of them. And that's it. Are there any questions?

Are there restrictions you must adhere to in the chain code? At this point in time there are no obvious restrictions in terms of execution time and things like that. It's based on the consensus model — the default, which is what I showed, is just NOOP, which doesn't do any consensus — so the consensus module is the one that imposes restrictions on the chain code. And if it crashes? Yeah, so that's the one thing: in terms of contracts being malicious, you need to control the registration of the smart contracts going in. They are still building out the — what do you call it — the various restrictions that you can set. I think they call it a policy, which you set in a separate module, but the policy is not ready yet at this point in time. Does it depend on cost, on execution time? Yes, yes — all of this you can set in a policy within the consensus model. That's it, I think. Any other questions? I know it's really technical; it's quite a lot of information to absorb. Oh yes, question? Can you write directly to the blockchain — to the ledger, the database? When you've got multiple peers that have consensus, the mutation happens on the actual ledger itself: the transaction gets approved and the mutation happens on the ledger — but only once the consensus protocol has run and approved it, and once the chain code itself has run with no issues. In future there will be one more layer, the endorsers — it will be endorsers, then chain code, then consensus, three layers — in order for any transaction to be approved. And one thing I didn't really cover: transactions come in two forms, a single transaction and a batch transaction. Batches make the ledger even more interesting, because you can batch transactions together under one transaction hash, so that batch of transactions becomes one immutable block in the chain — which, from a use-case standpoint, makes even more interesting use cases possible with this kind of transaction entry. If there are any other questions, I'm still around; you can come and talk to me.
And there is actually a lot more information that I did not talk about — this is just the bare level — and there's a lot of new stuff, and a lot of things that have not been implemented yet within the Hyperledger Fabric. I didn't have time to show you Sawtooth, but Sawtooth is another very interesting one that's upcoming, and hopefully in my next presentation — the fourth one, which hopefully I'm going to do — I can show you the implementation of a peer on an iOS device. Alright, thank you very much.

One more question from the floor: the chain code is the business logic, which is just execution code, and the data is the ledger itself, which is just a database. So in the chain code all you need to write is the business logic, and the data updates follow from there — whenever there's an execution, it's submitted to a peer, and the update happens through that.

Next up: I found a slot to quickly talk about DevFest.Asia for a couple of minutes. We put a label on this, specifically, to promote Singapore and what is happening here in the tech ecosystem, and to give you the opportunity to learn more about what's actually happening. The main thing I want to say, actually, is: come to this party — if you are within the first hundred people, you get free drinks too. That's on the 18th of November, so it's coming up in about 20 days; it's not too far away. And check out all the other events happening during DevFest.Asia — a lot of them free, some of them paid, but really good content. On the workshops side, we have people helping us run a mobile app
prototypes workshop for two days, where you actually end up building a real application in two days. We have a careers meetup if you're looking for a job — that's free. Rails Girls, NodeSchool. The RedMind event is already sold out. The Product Hunt Golden Kitty awards are going to be given out in that time. There are special editions of some favourite meetups here in Singapore, like the WordPress Meetup, and Melissa is doing another product thinking workshop. We have, obviously, the big conferences — the CSS conference and JSConf — with cool parties afterwards, like Code in the Dark and the DevFest.Asia Talk.js special edition. Loads of stuff: hardware workshops with Tessel, our five-year anniversary party — and this party you should attend too, actually; you can just join as well and get a drink. It's going to be at the — sorry — the Great Escape at Golden Mile Tower. And the Web Audio Hack Day — you can see BandLab as a partner here as well (yay, BandLab); that's going to be a cool one, and BandLab is organizing it.

So this is a community-organized festival, and I organize two of the events, which kind of started it — you might have seen them. This is CSSConf, and we're going to have some, I think, phenomenal people over this year — go check it out — with Rachel Andrew, Sarah Drasner, Lea Verou, and Soledad from Mozilla. It's pretty high profile this year, so you can expect a lot of good content there. And then the other one is JSConf, which you might have heard about — two days, and this is going to be mad: we're taking over the entire Capitol Piazza mall, essentially, for this whole thing, with the theatre inside. We have live performances where people explain beforehand what they're going to do before they perform something live, because it's all JavaScript-powered. A lot of machine learning; we're going to have brainwave reading with JavaScript. We're going to have people from Microsoft, actually, and PayPal as well, contributing and telling us about the developer tools that are coming into Edge. Loads and loads of good stuff. Please check out the website — tickets are still there — and if you need reasons for your company to actually pay, I'm super happy to give you group discount codes. There's also going to be a discount code, geekcamp, which I'm going to create right after this, and which you can use for 20% off the tickets — so forward that to everybody.

Additionally, I hear there's a BandLab competition going on, with pictures and hashtagging, where you can win some of these amazing headsets — and I want to join in. I'm super happy to give away a combi ticket for CSSConf and JSConf to people who tweet with the hashtag DevFestAsia: take cool pictures, geeky stuff. You can combine this — just use bandlab and devfestasia; they're contributing to DevFest.Asia too, so I don't think they'd mind. We're going to look through these over the next week, see who does the coolest tweet with the hashtag DevFestAsia, and I'll contact you on Twitter and you can get a free ticket. It's worth, by now, almost $1,100, so it's definitely worth it. That's about it — please check it all out. We have DevFest.Asia stickers here and at the counter, and I would love to see you contributing to or attending any of those events. Thanks.

Next we have Joss, talking about GraphQL. Hello — okay, hi everyone, I'm Joss. I work at PayPal, and today I'd like to share with you GraphQL, which is a new way to build and expose web APIs. So you can follow along, the slides are available at this URL — I'll just keep it there for a few seconds if you'd like. Okay, so just to get this over with: I work at PayPal, but this is from my own personal experimentation and has nothing to do with work — it's just personal stuff. I'll walk you through a bit of background about where web APIs are at the moment, followed by REST, which is how we currently build web APIs. We'll then get into GraphQL and some code from an actual GraphQL service written in Node, and that's it. So this is the key idea of my talk — if nothing else, this is it: that an API
is a user interface for developers, so we should put some effort into making it pleasant — the user experience of an API matters. So, a bit of background. Today, web APIs are, I think, everywhere: there's a service for every imaginable need — payments, video encoding, file storage, analytics, transactional email. Most likely, if you're building a web app today, you're consuming services. So APIs are pretty important. But some of you might be wondering: hey, I'm not building an API, so why should I care? Well, how many of you use Slack, at work or personally? A lot of people use Slack. I think part of the reason we use Slack is the massive selection of integrations available: we could, for example, notify a channel when a PR is opened, or when a CI job completes. And how was Slack able to do this? Each integration wasn't built by Slack's developers, but rather by third-party developers who found it very easy to integrate with Slack's API. So I think we can attribute part of Slack's success to the fact that their API has a great user experience, and that generates immense value for Slack. APIs are pretty important, and their user experience matters.

Next, REST, which is how we currently build web APIs. The key idea of REST is that we separate our API into different logical resources — for example, on a blogging platform, you'd have things like posts, users and comments — and then map CRUD actions (create, read, update, delete) onto HTTP verbs (GET, POST and so on) and URIs. For example, to retrieve a list of posts, you make an HTTP GET request to the /posts namespace, and so on for each CRUD action. For the rest of this talk, let's imagine we're building a Hacker News or Reddit clone, so we have a list of posts, which are essentially links to interesting stuff. What resources do we need? First would be posts, then users and comments. We'll revisit this example throughout the talk, and this could be a possible data model for such an app: we have three tables with some fields, and we expose some endpoints to allow our API consumers to interact with our underlying data model. So that's it — we're done with our REST API, and we can REST easy, right?

Well, let's try using our API. Again, this is what we're trying to render. First we have to make an API call to get a list of posts: we GET the /posts namespace and get back a JSON response, an array of JSON objects. But we're still missing the author's username, so we have to make a separate API call to get the user's name — and we have to do this for every single post. The problem is that we have to make multiple round trips: each API call is a separate request-response cycle, and especially on mobile, where we have variable network conditions, that's undesirable. One solution is side-loading: we allow clients to specify, hey, I also want to load this other resource alongside my posts. But this feels more like a hack, because we're polluting the /posts endpoint by specifying another resource to pull alongside the posts. Another problem with REST APIs is that we often over-fetch: a lot of APIs return a massive JSON object even though we might only be using a couple of fields. Again, one solution is to let clients specify the fields they need — but doing it in the query string is not as clean a solution, I think. And another option is to create a custom endpoint for each client and each version, which returns exactly the fields they need, so they don't have to specify fields manually in their query strings. The problem with this is that you end up with a massive number of endpoints that are very tightly coupled to the clients — also not desirable. And a final challenge with REST APIs is documentation.
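The round-trip problem can be sketched with in-memory stand-ins for the API (the data and names here are made up, and calls are counted instead of making real HTTP requests):

```javascript
// In-memory stand-ins for the REST API's data.
const db = {
  posts: [
    { id: 1, title: 'GraphQL intro', authorId: 10 },
    { id: 2, title: 'REST easy', authorId: 11 },
  ],
  users: {
    10: { id: 10, username: 'joss' },
    11: { id: 11, username: 'raven' },
  },
};

// Each call stands in for one HTTP round trip.
let roundTrips = 0;
const api = {
  getPosts: () => { roundTrips += 1; return db.posts; },
  getUser: (id) => { roundTrips += 1; return db.users[id]; },
};

// Rendering the post list costs 1 request for the posts,
// plus 1 request per post for its author: 1 + N round trips.
const rendered = api.getPosts().map((post) => ({
  title: post.title,
  author: api.getUser(post.authorId).username,
}));
```

With only two posts this is already three round trips; on a slow mobile connection the 1 + N pattern is exactly the pain the talk describes.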
Documentation is pretty important — imagine integrating with a third-party API that has no documentation; if you're lucky, maybe they have some HATEOAS links so you can discover the API, but it's a nightmare. So who needs to learn about our API? If you have an API product, it's your users, your API consumers. In a microservices setting, it's developers from other teams who need to consume your service. Or it could be new hires who need to get on board very quickly and start being productive. And the documentation must cover things like: what resources does the API manage? What can I do with the API? And for each endpoint, what parameters does it accept — is it a string, is it an integer? These things have to be documented, and currently the solution is something called an API specification language. How many of you have heard of Swagger? Swagger, and others like it, are essentially a DSL to describe your API: my API has this endpoint and it returns a string, for example, or it takes a parameter and it's an ID. The benefit of using an API spec language to describe your API is that, with the spec as the source of truth, your documentation can be sure to match the underlying implementation, and you can auto-generate a lot of things — documentation, server stubs, client code and many others — so it saves you a lot of effort.

To summarize very quickly, these are the drawbacks of REST that GraphQL tries to solve. First, multiple round trips: you need multiple REST calls to get the data you need. Then custom endpoints, because you can't change the response you get without changing the backend code. And finally, documentation is a challenge. So next we'll look at GraphQL, which is the most exciting part, I think. GraphQL is actually more than one thing. First of all, it's a query language for clients to describe the shape of the data they need to pull from the server. The second thing is a type system
for both the client and the server, so they have a shared vocabulary for the objects they're discussing. And finally, it's a runtime for the server to translate queries from clients into, for example, a JSON response. It was devised by the Facebook product team to solve some of the problems we saw in the previous section. It's data-store independent: it makes no assumptions about the database or data store you're using — you could use SQL, NoSQL, or even another REST API. It's language- and platform-independent: bindings are available in most major languages and frameworks, so you can definitely use GraphQL with your stack. The 'Graph' in GraphQL actually has nothing to do with graph databases; it comes from the fact that we can model our business domains as a graph. Take this diagram — imagine it's a library system, with books and authors. If you look at the top book node, the one with the dashed circle: a book has a title, so it has an edge, 'The Economics of Inequality', and it has authors. The edges here are the relationships between the objects in your business domain, and GraphQL lets you extract trees from this graph of relationships. For example, one possible tree: starting from the top book node, we traverse the graph, following the outgoing arrows until we can't any more. Formally, a tree is a directed acyclic graph with only one parent per node. That is the key idea of the 'Graph' in GraphQL.

So here's an actual GraphQL query: on the left is the query, and on the right is the response a GraphQL service returns. If you look at this, you can tell, just from the shape of the GraphQL query, what the response will look like — this is what they mean when they say GraphQL is declarative. The key idea is that we give power to the client instead of the server: the client can choose to specify only the data that it needs.
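The declarative idea is easiest to see side by side. This is a hedged sketch: the query and the response are both hand-written here (a real service derives the response from the query), and the type names follow the talk's blogging example.

```javascript
// A GraphQL query and the JSON a service returns share one shape.
const query = `
{
  posts {
    title
    author {
      name
    }
  }
}`;

// Hand-written response mirroring the query's shape.
const response = {
  data: {
    posts: [{ title: 'GraphQL intro', author: { name: 'joss' } }],
  },
};

// The shared vocabulary behind it: a schema written in GraphQL's
// schema language ("!" marks a non-null field).
const typeDefs = `
  type Post {
    id: Int!
    title: String!
    author: User!
    comments: [Comment]
  }

  type User {
    id: Int!
    name: String!
  }

  type Comment {
    id: Int!
    body: String!
  }

  type Query {
    posts: [Post]
  }
`;
```

Because both sides hold this schema, a client can reject an invalid field (say, `name` on `Post`) before the request is ever sent — the client-side validation shown in the GraphiQL demo.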
The server will do what's necessary — with whatever optimization makes sense on the server side — to return that data, so the client is no longer beholden to what the server returns. A GraphQL server is just like a REST server or an RPC server: it accepts GraphQL queries from clients and returns a JSON payload, and behind the scenes the GraphQL service can be talking to multiple data sources, completely abstracted away from the clients. It occupies the same space as REST and RPC: a view over your underlying business domain.

Here's a quick example — you can go to this URL if you'd like to follow along; actually, I have it open here. This is a GraphQL IDE called GraphiQL — essentially a Postman, a web IDE, for GraphQL services. On the left here we enter our GraphQL queries. Let's say we want to return posts — press this, and here we're retrieving the IDs of all our posts. Say we want to pull a different attribute: you can check the documentation and see what this root returns — posts returns an array of type Post — so let's look at what a Post can return. A Post has these fields (there's autocomplete as well), so let's try pulling the author too. The author is of type User, so let's see what a User returns and get the name. And that's it — we've extracted the complex graph of relationships of our business domain, which is pretty cool. One other nice thing about GraphQL: because the schema is held on both the server and the client, the client can perform client-side validation. For example, name is an invalid field here for the type Post, and the client can know, even before we send the request to the server, whether a query is valid or not. So that's GraphiQL. Wow — the first time I saw this, I was like, oh shit, this is the thing. And when you compare the user experience of GraphiQL to poring through pages and pages of API documentation, it's clear that
GraphQL has a better user experience. And when you think about it, having great API documentation is actually a good marketing tool — it drives adoption, especially if your product is an API. How is this possible? The type system is what makes it possible. If you've used RPC frameworks this might seem familiar, or Swagger, or protocol buffers. The schema language lets you describe the objects of your domain: for example, on a blogging platform you have a Post object with the fields title, author and comments, and each field has a type — a scalar type, or another custom type such as User or Comment. The exclamation point just means it's non-null. We also have to define the possible roots of our tree — usually called the query type or root type; I won't spend too much time here. And it's a full type system: you have scalar types and more advanced ones like enums, lists, interfaces, tagged unions, sum types and so on, so you can describe your business domain very accurately.

So how does GraphQL solve the drawbacks of REST we saw in the previous section? First, it's a single round trip: you don't have to make multiple REST calls to get the whole graph of relationships. It's client-specified: you don't need custom endpoints, because clients can tell you exactly what they need and you just give it to them. And it's introspective: by annotating your business domain, you get a lot of stuff for free, such as the GraphiQL playground, client-side validation, and documentation. Yep — so that's GraphQL.

Now, to make it more concrete, let's look at an actual GraphQL server written in Node. To create a GraphQL server you need two things: a schema, which is essentially a description of your business domain — the objects in your application — and resolver functions, which tell GraphQL where to pull the data from. And for this example we'll be using
Node and Express — but again, bindings are available in all major languages, so you're free to use whichever you like. We'll be looking at three files: package.json, server.js and schema.js. package.json is like Ruby's Gemfile, Python's requirements.txt or Java's pom.xml: it lists the dependencies of the application. These are our dependencies — the GraphQL libraries we're using for our GraphQL service. Then we have server.js: first we import the GraphQL library and point it at a schema, which we'll cover next, and then we use the GraphQL middleware so that our route, /graphql, interprets GraphQL queries from the client using the given schema. One interesting thing is that in GraphQL there's only a single endpoint — the /graphql endpoint. And then we start the server.

This is the actual schema. First we import the types we'll be using, such as List, ObjectType (which is for custom types), integers, floats and so on. We then point to our data source — this could be your ORM or your database; it's just a light wrapper over your data source. And this is the actual schema of your objects — we'll take it very slowly so we can understand it fully. First we define some metadata, for example the name of our type; an object type here just means a type with some fields, which may themselves be of other types. For each type we define the fields it has, for example id, and for each field we define its type: id is an integer, title is a string, url is a string, and so on. Then, for associations — links to other nodes; if you remember the graph diagram, outgoing edges to other non-leaf nodes — you need a resolve function, which tells GraphQL where to get the data from. For example, author here is of type User, and resolve is a function which accepts the source node — the post node, in this case — and it uses that
information to retrieve the next node, in this case the user. Any questions at this point? Next we have to define the roots — the possible roots of our tree — and this is often called the query type. The query type defines the valid starting points of our queries. posts, which we saw in the GraphiQL demo, is a possible root of our tree, and its type is a list — it returns a list of type Post. We also specify args, which declares the acceptable arguments for particular nodes: I didn't actually show you, but you can pass arguments along with your query and then make use of them in your resolve function, which again lets GraphQL know where to get your data. Finally, we return the schema, and that's it. One thing about resolve functions: I mentioned that GraphQL is data-store independent, and in a resolve function you can technically do anything, as long as it returns the data that's needed — you could call another API, run raw SQL queries, or use an ORM.

So that's pretty much it. In closing: give GraphQL a try. I think it's pretty interesting if your product is an API, or if you have many clients with flexible requirements. I deliberately didn't cover many features of GraphQL, because they're mostly syntactic sugar — the core idea is the same. And because GraphQL is still pretty new, best practices are still emerging, so we'll see how GraphQL comes along. Just a bit of a plug: if you're interested in all things APIs — API design, API development processes, and new technologies like GraphQL — do check out the meetup group API Craft Singapore; I think you'll find it interesting. Thanks.

Yes — in GraphQL that's called mutations. I didn't cover it because it's actually more of the same, but yes, it's possible. What you do is — actually, I have an example here, so I can quickly demo some writes.
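Resolve functions and mutations can both be sketched in plain JavaScript — here with in-memory objects standing in for the ORM or database, and all names made up for illustration:

```javascript
// In-memory stand-ins for the data source behind the resolvers.
const users = { 10: { id: 10, name: 'joss' } };
const posts = [{ id: 1, title: 'Hello', url: 'https://example.com', authorId: 10 }];

const resolvers = {
  // Root field: a valid starting point for queries.
  posts: () => posts,

  // Association: the outgoing edge from a Post node to its User node.
  // GraphQL calls this with the source node (the post) and uses the
  // return value as the next node in the tree.
  postAuthor: (post) => users[post.authorId],

  // A mutation is just another resolver that writes instead of reads,
  // then returns the resulting object.
  createPost: ({ title, url, authorId }) => {
    const post = { id: posts.length + 1, title, url, authorId };
    posts.push(post);
    return post;
  },
};

const firstAuthor = resolvers.postAuthor(resolvers.posts()[0]);
const newPost = resolvers.createPost({
  title: 'Hi',
  url: 'https://example.org',
  authorId: 10,
});
```

Because a resolver is just a function, swapping the in-memory objects for raw SQL, an ORM, or another API call changes nothing from the client's point of view — which is the data-store independence described above.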
In GraphQL, writes are called mutations, and a mutation is something like an RPC call: you just do this, specify all the arguments, and return the success response, or the object, or whatever — which is pretty much the same idea. Any other questions? Thank you.

Okay — I'll try not to... I'll try not to... darn it. Sorry, just a minute — lost my slides — just a minute... oh wait, okay. Okay, great. Now that we've gotten the technical issues out of the way: today I'd like to talk about Kubernetes for small organizations, and in case you're wondering, this is not a technical talk — it's more a talk about our experience with Kubernetes. Quick show of hands: do you all know what Kubernetes is? Somewhat — the majority. I'll do a very brief overview in just a minute, but first a quick introduction. My name is Raven and I work at Lomotif. You may have heard of it: Lomotif is a social media app that lets people very easily stitch videos together into music videos. So if you've got a bunch of videos — with friends, a holiday, a road trip — you can use Lomotif to stitch them into a music video, apply a soundtrack, and share it to your favourite social media. A very brief introduction to what we do.

The question we're trying to answer today is: Kubernetes — is it worth it? Briefly, Kubernetes is this thing that allows you to manage your infrastructure. In this day of microservices and distributed services, you have all sorts of servers spinning up all over the place, and how do you manage them effectively, in a way that keeps you sane and doesn't drive you mad? Kubernetes is one of the frameworks that lets you do just that. I asked Angad — a good friend of mine; some of you may know him — whether it was worth it. For those who don't know, Angad was an SRE at Twitter, he worked on DevOps at Viki locally, and now he's back at Twitter again, so he knows his stuff, I assume. And with all this experience at hand, his
advice was: no. We are a very small company — we are 7 people, 4 of whom are engineering. So given that this is Angad's professional advice... I'm just kidding, he just said no. But let's take a look at the motivations that started me down this route of investigating Kubernetes. Why did I go down this path in the first place? You know how, if you maintain servers — production server, dev server, staging server, personal dev boxes and so on — sooner or later you'll run across this message: 77 packages to be updated, restart required. My dev box is fine, my staging box is fine — but what if it's production? This gets really annoying in production: how am I going to reboot a production server if my service is running on it? I can't take my service down. Yes, you should build for redundancy, have distributed services, do staged rollouts — but the problem is, we have all these things and we are a very small company; let me remind you again, we have 4 engineers. We have one main API, three secondary services, a CI/CD setup, a chatbot, and all sorts of other little scripts from one point or another. (By the way, our chatbot is called lowbot — he says hi.) We have all these things, but there's one backend guy, and that's me. And it gets really, really tiring: if I had to go into every single server, slowly reboot it — move my service off, update the packages, reboot, move the service back — and do that for every single server we had, it gets really annoying. If you run any servers, you know what I mean. So in the end, as all engineers realise eventually, laziness wins: what's the most impact I can achieve with the least amount of work? I looked around, and Kubernetes seemed like a very promising solution, because it does a lot of stuff for you.

So let's take a very brief look at what Kubernetes does for you. The basic concepts of Kubernetes: services, deployments and pods. Each pod is a set of servers — it could be an application server, it could be a DB; basically, it's a small unit of functionality.
You take this functionality — your API, for example — and duplicate it multiple times, and that becomes a deployment. So you can say: for the deployment of my music API, I would like to scale it to two pods, or three pods, or five pods. The deployment is what manages the scaling of your pods. On top of that, services map logical functions to the actual application servers that are running. For example, my music API service could map a public IP and load-balance across the family of pods managed by my music API deployment. A very brief overview, but you get the idea — and this is great: I don't have to worry about individually deploying Docker containers to one server, then two servers when load increases and I spin up more servers — that gets old really fast. Kubernetes gives this to you out of the box, so it's a great win for me.

So we decided to experiment with Kubernetes. We've been experimenting with it since July this year, thereabouts. Some services have been migrated — a couple of the supporting services, not quite the main API just yet; hopefully we'll bring production ('production' here means the main API) across into Kubernetes at some point. We're currently deployed only on AWS, but hopefully we'll be able to bridge across to Azure as well, so that we have that kind of balance in our deployment. The whole point of this talk is to share the experience we've had getting it up and deployed, so that, if any of you are considering deploying Kubernetes yourselves, our failures and our successes can help you avoid the same things.

Let's take a look: what are the good things about Kubernetes? I'm sure you've heard a lot about it — if you read Hacker News and the tech blogs, everybody's talking about Kubernetes and other orchestration frameworks. A lot of good points. First, of course, the community is very, very active.
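The deployment-plus-service idea maps onto manifests like the following sketch. Everything here is illustrative — the names, image, ports and numbers are made up — but the shape follows the standard Kubernetes Deployment and Service manifests of the 2016 era:

```yaml
# Deployment: keep three replicas of the (hypothetical) music API
# running somewhere in the cluster's resource pool.
apiVersion: extensions/v1beta1   # Deployments were still beta in 2016-era clusters
kind: Deployment
metadata:
  name: music-api
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: music-api
    spec:
      containers:
        - name: music-api
          image: example/music-api:1.0   # hypothetical image
          resources:
            requests:
              cpu: 500m      # "half a CPU"
              memory: 2Gi    # "two gigs of RAM"
---
# Service: a stable entry point (here a public load balancer) that
# balances across whatever pods currently carry the matching label.
apiVersion: v1
kind: Service
metadata:
  name: music-api
spec:
  type: LoadBalancer
  selector:
    app: music-api
  ports:
    - port: 80
      targetPort: 8080
```

The `resources.requests` block is what the scheduler uses when it scans the resource pool for a node with room, as described below; the service finds pods by label, not by machine, which is why nodes can come and go.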
It's one of the fastest-growing projects on GitHub, I think — you can see over 36,000 commits by over a thousand contributors, with huge year-on-year growth. (These stats are all from OpenHub; you can follow the link below.) What this means is that the community is very, very active: new things are coming in all the time, there are lots of blog posts, and there are lots of people willing to help — a great thing. If you're looking for an active project, this is really one of the most active out there: a large, active community. Open process: a lot of the discussion for Kubernetes happens on the GitHub page — thousands and thousands of issues — and the thought process is all laid out, so it's very easy to understand the rationale behind certain decisions. And there's documentation: with so many volunteers you of course get a nice set of documentation — well, one that exists sometimes. It's there, but as for making use of it, we'll see how that goes.

The other good thing about Kubernetes — initially I wanted to say stability, but I realised the word I was looking for is resilience. Why resilience? As IT people, I'm sure you all face this: the quickest way to make any IT system work is to turn it off and turn it on again — and surprisingly, this is what Kubernetes gives you. Because it's a managed framework, you tell it: okay, I want these services, I want these pods to be up at any one time; and Kubernetes constantly watches your architecture to make sure it satisfies those requirements. The great thing is when something fails — say I have a node that fails. Just before I go there: Kubernetes sits on top of all your VMs. Let's say I have 10 VMs. What Kubernetes does is treat them as one huge resource pool. It takes a look at my service description — okay, this pod needs, for example, half a CPU and two gigs of RAM — and it looks across my entire resource pool and
says: oh, this node has that amount of resources — I'm going to put this container, this pod, here. If I have another service with different requirements, it looks around the resource pool: oh, I have space here, let me put it there. So it abstracts individual machines into one huge resource pool. Keep this in mind: what happens when one of the nodes in this resource pool malfunctions? Let's say the Docker daemon fails, or the network goes down — something goes wrong. You literally turn it off and let Kubernetes turn it on again. I am not kidding — I've done this. The Docker daemon on a node went down; I couldn't contact it; Kubernetes told me the node was down; there was no way in — I couldn't shell in, couldn't reboot the machine through the shell. So I went to the AWS console and terminated the node. The nice part was that Kubernetes recognised this, brought the node back into the cluster, and spun my containers up on it. Brilliant — I didn't have to do a single thing beyond clicking terminate. And this goes beyond the node level: I can do this at the pod level as well. If individual pods on a single node are malfunctioning: delete pod, and they come back up. It saves a lot of time when it comes to managing your deployment, and for a single admin like myself, I don't have to go in and debug — my first course of action in any case is to turn it off and turn it back on again. I am not kidding.

So essentially, the benefit I'm getting, out of all the previous slides, is that Kubernetes provides very well-established patterns for doing certain things. If I want to spin up a new service: there's documentation, there's a template for that. If I want to spin up a new set of pods: there's documentation for that. How do I put a new service onto my Kubernetes cluster? There's documentation for that. And as a small team, when I need to hire, my consideration is: how do I onboard someone very quickly? So the conclusion, in my case, was: why do I need to invent my own deployment process? Why do I need to go and document the
process? Any new guy who comes aboard, I can literally just say, go RTFM, and then come and deploy the stuff as we need to. So it's really about maximizing efficiency for a very small company, and in this case I've found that this does work for us.

Of course, there are bad points as well. We talked about documentation: sometimes it's there, sometimes it's not, and the worst case is when it's sort of there. So here's the scenario: I need to hook up our chatbot to the Kubernetes API. I'd like to manage the cluster through our ChatOps system; I'd like to say, Lobot, give me this, do that, and have Lobot manage my Kubernetes cluster for me. So I was reading up on the Kubernetes REST API, and I came across this field in the REST API documentation, field selector, which says you can select by field. Of course, the first thing you do when you're investigating an API is ask: what does this field do, what are its behaviors, what can I do with it? And after an afternoon of experimentation, I find this. In case you can't see it, this GitHub comment by a Kubernetes member says: what would you like to use field selectors for? The lack of documentation is intentional, pending implementation of issue 1362. Which is cool, but I've wasted an entire afternoon trying to figure out why it doesn't work, when it just doesn't work. I would like to be told that up front. And if you take a look at issue 1362 on GitHub, if you can see the timestamp up there, the date is 19th September 2014, two years back, and it is still open. Now, I'm not dissing the contributors; I'm sure they're doing very good work, the best they can. But with any open source project there are gaps in how it is delivered and executed, so if you're ever considering implementing Kubernetes on your own, these are things you probably want to be aware of. These issues have been open since 2014; you may waste an
afternoon, but that's par for the course with an open source project.

Other things, for example: we have these things called PetSets. PetSets are pods that have a more defined identity. In Kubernetes, your standard pod is treated as ephemeral; it may come and go at any point in time, so long as your desired number of pods is still available. PetSet members, though, have a fixed identity: for example, web-api-1, as part of a PetSet, should always be web-api-1, addressable by DNS and so on. In the PetSet documentation, they say you need to have this field in your service description, otherwise the pets won't spin up properly. If you want three of them, it will stop at one, waiting for you to set this flag before it continues spinning up to three. Sorry, it's a bit small, but if you can see the little red underline there, the documentation says: wait for one pod to come up, then flip the switch, edit the pod through Kubernetes and flip this flag to true, and it will continue spinning up the rest of the pods. Sounds great, right? Then, if you look lower down the page, they say that unfortunately the only field that can be updated for PetSets is not that field; it's the field determining how many replicas there are. So even when they have the documentation, and they do tell you up front about certain pitfalls or certain workarounds you have to perform, the system itself does not actually support updating the PetSet.

So these are a couple of things we've come across, and they're actually fairly minor issues in the grander scheme of things. The documentation issues, with sufficient Googling and GitHub searching, you eventually get past. But do be prepared that you may also spend an afternoon hunting down the causes of these issues. The documentation really needs to be taken with a pinch of salt: lots of Stack Overflowing, lots of GitHubbing. You want to have figured out all these cases before you feel
you are ready for production.

So another thing we found out about Kubernetes concerns persistence. Having a huge resource pool, having my servers spin up and down, scale up to five, scale down to three: that sounds really good if I don't need to maintain state. If my pods don't need to remember where they were before, who they were before, what they used to deal with, that's great: five, ten, fifteen, twenty, down to one. Brilliant. But dealing with databases is a different thing altogether; you really don't want your database to be ephemeral. This was my diagram when I first started dealing with Kubernetes, thinking of all the great things I could do with it. What I have is a Graylog cluster and a Postgres cluster, and I thought each of these would be individually scalable via Kubernetes; I'd just layer the services on top of a huge collection layer, and they would route individually to my various persistent services. Until I realized that persistence stores really like being stable. The reason I crossed out the word stability in my earlier slide and put in resilience is that it is very resilient to damage: I can take it down and know that it will spin back up. But stability implies something different. If I wanted my pods to be stable, not only would I want the DNS name to be stable, music-api-1 for example, I would also like the IP to be stable. And the problem, or at least the feature, of Kubernetes is that if a pod dies and your controller brings it back up, it can come back with a different IP. For an ephemeral service, music-api with 5, 10, 15, 20 pods, it really doesn't matter; your service will load-balance across whatever set of them exists. But if you're dealing with a persistence cluster, if you're doing a sharded Postgres, for example, or if you have a Redis cluster, the controllers really don't like a pod disappearing from the cluster and coming back up under a different IP. My experience has been with Redis Sentinels and Memcache, and they go
crazy when the pods go missing. If you've dealt with Sentinels before, they never forget a slave. So when your machines, your pods, are coming up and down, your Sentinel remembers the addresses those pods were at, and if by some chance some other pod comes up at the exact IP the Sentinel recognizes, the Sentinel will now try to co-opt that guy who just came up, even though he may belong to a completely different Redis cluster. That is a problem. The same goes for things like Memcache farms: you shard your Memcache across multiple servers, one of the pods goes down, and your sharding fails; you have a block of your results that is no longer being served from cache. This may be very specific to Redis or to Memcache, but it is a characteristic of how Kubernetes functions, so if you ever deploy persistence into Kubernetes, this is something you probably want to keep in mind as well.

You can see, just on my feed service, the Redis: I have three in a PetSet, and you recognize PetSets by the fixed numbers at the end of the names. You can see these Redis servers have restarted 7, 25 times. Sentinels really don't like this. So, once again, as I said, you do have stable DNS identities; you can always identify your pod by feed-redis-1, feed-redis-2, which the internal DNS system will resolve to the right pod for you. But there is no guarantee it will always be the same pod, and no guarantee it will always resolve to the same IP. And this gets tricky because certain Memcache implementations, for example, want you to address your nodes via IP. So let's say you set up your Memcache farm and one node dies: now your configuration is invalid, and you have to manually come back and reset it to the new set of IPs. These are a couple of things we have come across in our implementation that, again, you probably want to be aware of. At some point we could probably change this around, maybe avoid having a fixed set of IPs, maybe put it behind twemproxy or something, but again,
for a small team, you probably want the most direct, fastest route to get there.

Now, the ugly. It's not that these are really bad; rather, there are some points about Kubernetes that make you want to slap someone or go bang your head against the wall, because they're really crazy. Let's see if you guys have seen this one: I get phantom pods. Kubernetes is really great at bringing up pods and bringing down pods. Sometimes it brings down pods, leaves them behind, and doesn't tell you about it. I ran into an issue where a particular pod IP was being identified as a Redis Sentinel somewhere, and as a Sentinel it was trying to co-opt my various Redis nodes under 18.6. But when I looked through all my pods, when I grepped through all the pod IPs, I couldn't find it. So I had a phantom pod somewhere in my system, co-opting my Redis servers, and it was nowhere to be found. How could I even address it? What I had to do was shell into one of my pods, then use that IP to get a Redis shell and just tell it to die. How do these things happen? I'm not sure. It doesn't make sense that they should happen, but in my experience they do. So be careful; this is really bad, and I can't explain it either. I haven't had time to investigate, so if any of you know, please tell me, I would love to know.

Being a very fast-moving open source project, the tooling is always evolving, the features are always evolving, and the documentation is always evolving, as we found out. When I first started implementing Kubernetes back in July, there were very few official ways to spin up a Kubernetes cluster. You could do it the manual way, spinning up individual nodes and installing the Kubernetes daemons on them, and there were some scripts that people had assembled. There was also a script in the Kubernetes distribution: if you downloaded the tarball, you would get a kube-up script. Seems very convenient, it comes with the distribution, so let's run kube-up. kube-up spins up a
cluster. You set a few environment variables, say four nodes, in whichever AWS region, and it does it all for you. It was up and running, until I started trying to integrate Lobot into Kubernetes. What I needed to do here was establish TLS auth between my chatbot and my Kubernetes master API, my master node. You use TLS auth to ensure that, A, my client, which is my chatbot, can verify the authenticity of my API server; otherwise it's just an SSL connection, and I wouldn't know whether there's a man in the middle, which would be bad. So I would like the auth to first identify my master node, but I would also like my master node to identify my client, so that it knows which user the client is authenticating as. Sorry, one step back: Kubernetes uses self-signed SSL certificates to protect communication between the master server, the kubectl that lives on your personal machine, the master API, and the minion nodes. So it uses self-signed certs to protect the communications. I essentially wanted to create another client, another kubectl, and have it authenticate to the API master, so I needed to provide credentials, or at least I wanted to. In a scenario like this, with self-signed certs, I would need the original certificate authority that was used to create the individual certs for the master and the minion nodes, and I would use it to sign a cert for my client. This is how my master node would say, okay, this client is valid, we can authenticate it, and I can allow its commands into the system. The problem is, and I didn't discover this until three months after I deployed the cluster, the kube-up script does not keep your certificate authority's private key. Without the private key, I cannot sign any new client certs. I have zero ways of creating a new client that can authenticate to my Kubernetes master. It's a great way to spin up a cluster, but little things like that really throw you for a loop, because now I essentially
have to recreate my entire cluster just to get a private key with which I can sign client certs for other clients that may want to authenticate to my API master. Without this I'm stuck, and at some point I'll have to schedule a recreation of the entire cluster just for that. So yeah, very, very painful. The good news is that there are newer ways of spinning up a cluster now. There's something called kops, Kubernetes Ops; it's now out, and it's the recommended way of spinning up a cluster rather than kube-up.sh. So if you're ever spinning up a Kubernetes cluster, check out kops instead. And this is me trying to find my private key; you know, when the blank command line comes back, it's very sad.

So, was it worth it? The question we're all here for; this is what it's all about. Angad said no; I say yes. As a very, very small company, we need to move fast, we need to automate as much as possible, and we need to automate the process of information transfer as much as possible. If someone new joins, the team cannot spend two weeks onboarding him or her with all our custom deployment scripts and so on. We just need to point him at the Kubernetes documentation, and once he understands it, our system looks exactly the same: just apply it. So this really helps us be more efficient. If anything breaks, we don't have to rack our brains, oh shit, what do I do, sorry, what do I do, struggling to remember what my shell script did. I can just hop onto the Kubernetes Slack and ask, hey, how do I do this, or file a GitHub issue: your thing is broken, can you fix it for me? So this provides a level of support that you wouldn't otherwise get with a very homegrown, custom solution.

There are caveats, of course. The learning curve is fairly steep. When I started deploying, I had to read the documentation for about two weeks before I got anywhere near spinning up the cluster, just to get my head around all the new concepts being introduced: services, pods, deployments,
replication controllers, endpoints, so many new words; it's a whole new field. Make sure you dedicate time to reading up and understanding the concepts of Kubernetes first. Persistence is also a problem, and persistence is very important to a lot of startups; data is pretty important. So if you want to deploy Kubernetes and put your persistence into Kubernetes, you're going to have to experiment with it a bit. There are problems with Memcache and Redis, as I've told you, but there may be problems with other databases as well. Heads up. And of course there are rough edges with tooling and documentation and so forth.

That's it. It's a really good system. It really helps me feel that my cluster is a lot more resilient than it has ever been. I can just blow away a node and it comes straight back up; blow away a pod and it comes straight back up. I feel a lot more assured that my system will function the way it's meant to and the way I want it to. So yeah, if you're ever considering deploying your application into Kubernetes, bear in mind that these problems are around; just bear that in mind when you're deploying to production. Last but not least, if anyone wants to find out more, please come talk to me, find out more about our architecture, more about what we do. We're also looking for full-stack backend guys, so come see me if anyone is looking for a gig. That takes me to the end of the talk. Any questions?
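For readers of this transcript, the "service descriptions" the speaker keeps referring to are YAML manifests fed to Kubernetes. Here is a minimal sketch of the kind of setup described in the talk: a Deployment whose resource requests (the "half a CPU and two gigs of RAM" example) tell the scheduler where to place pods in the resource pool, plus a Service to load-balance across them. All names, images, and ports here are illustrative assumptions, not the speaker's actual configuration:

```yaml
# Hypothetical example: a Deployment requesting half a CPU and 2Gi of
# RAM per pod; the scheduler places each pod on a node with room for it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3                 # desired pod count; Kubernetes keeps this satisfied
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
      - name: web-api
        image: example/web-api:1.0   # placeholder image
        resources:
          requests:
            cpu: "500m"       # half a CPU
            memory: "2Gi"     # two gigs of RAM
---
# A Service load-balances across whichever pods currently match the
# label, regardless of which node or pod IP they come back up on.
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  selector:
    app: web-api
  ports:
  - port: 80
    targetPort: 8080
```

Deleting one of these pods (`kubectl delete pod <name>`) is the "turn it off and on again" resilience described above: the controller notices and spins up a replacement, possibly on a different node with a different pod IP, which is also exactly why the Redis Sentinel and Memcache setups discussed in the talk get confused. (In the Kubernetes 1.3/1.4 era of this talk, Deployments lived under the `extensions/v1beta1` API group; `apps/v1` is the current form.)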
So your comment is that you can get hosted Kubernetes services, and for smaller teams that might be a good way to go. The second question is, for self-hosted, how about upgrading Kubernetes itself? That is a huge problem, let me tell you. Upgrading is a huge problem for Kubernetes. I've asked on the Kubernetes Slack many, many times, and I've not gotten a decent answer. The closest answer I've come across is: you shell in to each of the nodes individually, replace a line in the YAML file, and restart the daemon. That addresses your running nodes, but it does not address the saved configuration in, for example, the auto-scaling group that AWS uses to spin up more nodes. So I can upgrade the running ones, but if I then scale my cluster, I have to perform the same steps on the new nodes that come up, because they will still be using the old configuration, unless I replace the saved configuration. And again, the documentation around that is very, very sparse. I wish I knew, sorry.

The question is, can the master node fail? The implication of that is you lose connection to your minions. Your minions continue running; the minions don't need your master to be alive to continue serving, but you do lose the ability to control your minions. And no, I don't think so, not in the community version; there's one very clear master.

Kube-dns lives in what we call the kube-system namespace. In a Kubernetes cluster you can split your resources into multiple namespaces, for production, dev and so on; these are just namespaced ways of addressing your pods, by the way, not network isolation. Your DNS lives in a pod called kube-dns, and that lives in the kube-system namespace, so it doesn't appear by default if you do get pods, but if you specify the kube-system namespace you will see the pod. One note about kube-dns, well, more notes: you will want to scale up your kube-dns replica set as well. You will probably want one DNS pod per physical node to serve DNS, otherwise your kube-dns service
will get overloaded.

The question is about storage on disk, for example for your databases. Setting aside the whole IP issue, Kubernetes provides you with these things called PVs, persistent volumes. When you write your service descriptor, the YAML file that says use this image, I want these resources and so on, you can specify: I want this PV. You can do it two ways. Taking AWS as the example, I can create my EBS volume first, register it as a PV, and mount that into my pod. Otherwise, I use PetSets, which do the auto-provisioning for me; you use PetSets and you get that out of the box. That's as it stands in 1.4, so be careful with that. Anything else? Cool.

The world is full of adapters; my life has been full of adapters for many, many days. Once again, the obligatory sponsor slide: we couldn't have done this event without these people. There is no way we could have been here without Microsoft, because we wouldn't be in Microsoft and we wouldn't have a venue. APAL for the food sponsorship: you guys would have all gone hungry if not for them. And last but not least, thank you, BandLab, for the coffee, without which I wouldn't be awake and standing here right now. Also, thank you so much for the lovely headphones, which, speaking of headphones, I hope that was enough build-up. Look at this very simply: I will drag a tweet up on screen, which I guess means, in a way, that none of the Instagram photos won. I will drag a tweet on screen, and if you are in this room, come up here with your phone and prove that that Twitter account is yours, and you get a pair of headphones, courtesy of BandLab. If you're not in this room, which I guess we wouldn't know unless you're watching the live stream right now, well, I'm sorry, the headphones are going to someone else. Yeah, that's pretty much what's going to happen. All right, first one, I'm going to make that bigger so we can see it: a little bit of CoffeeScript, because coffee. I thought that pun was pretty good. Next: why do Java programmers wear glasses? Because they don't C#. And the last person to get a pair of headphones... I'm sorry, this is just absolutely adorable. All right, that was prizes. We're all tired and sleepy: lots of new information into the brain, lots of old information recycled in the brain, lots of dusting going on. Maybe it's time to retire for food and stuff. So, bye. If you spoke at GeekCamp today, I'd like to meet you at the front. The rest of you... no, seriously, bye.