Hi, thanks for coming — we'll get started in just a second. My name is Jayjun, and I'm going to talk about Phoenix. Quick show of hands: who here has heard of Phoenix? Okay, pretty much everyone. And how many of you use Phoenix in production? Okay — out of maybe thirty people, fewer than ten hands. Not many. So I figured we really should talk about Phoenix, because a lot of people have heard of it but don't quite know what they're missing. If you've never heard of it, Phoenix is a web framework written in Elixir. But "yet another web framework" isn't very interesting on its own, because frameworks are a dime a dozen and most of them don't bring anything genuinely new. What I want to focus on is where Phoenix really shines — where I think it beats everything else: the real-time web. The modern web is increasingly real-time, and that is exactly where you should be looking at building modern web applications with Phoenix. I'm not usually one for grand claims, but some are warranted here. So let's start by introducing the stack. Like I said, it's a web framework written in Elixir, and it runs on the BEAM, the Erlang virtual machine. With other stacks we just say Rails is written in Ruby, or Django is written in Python. But with Phoenix you should also care about the fact that underneath it all is Erlang. Why? Because what's interesting is that both upper layers are comparatively new: Phoenix 1.0 came out two and a half years ago, in 2015. That's very, very recent. Elixir reached 1.0 only a year before that.
Now, to put that in consumer-technology terms: the Apple Watch came out two and a half years ago, around the time of the iPhone 6, the first iPhone to come in two sizes. Erlang, on the other hand, comes from the 80s. This is the Mobira Cityman 900 — it was actually one of Nokia's first mobile phones. To put it in perspective, Java was created in the 90s, so Erlang predates Java by about ten years. Now, it's no accident that Elixir's syntax borrows heavily from Ruby, as those of you who've seen Elixir will know. That's because José Valim, its creator, was part of the Rails core team. And Phoenix, in turn, is heavily inspired by Rails, because Chris McCord is a Rails developer. So both Elixir and Phoenix clearly carry Ruby and Rails influence. Why, then, did they choose to build something new on top of Erlang? For that decision to make sense, there had to be something very special about Erlang. Probably the most famous Erlang deployment is WhatsApp. In 2012 they got a lot of attention after publishing a benchmark: two million TCP connections on a single server — two million clients, all connected to one box at the same time. If I'm not mistaken, CPU utilization hovered around 40%. It was a big machine, something like 24 CPUs and 96 GB of RAM, but two million persistent connections on a single server is still remarkable. And there are even more impressive numbers: as if that weren't enough, by 2015 they were serving hundreds of millions of users with only around 50 engineers. After Facebook acquired them they had far more resources available, but that was all they needed.
To handle roughly 60 billion messages a day — about three times the world's daily SMS traffic — that was all the infrastructure they needed. Around this time Chris McCord, a Ruby developer, was running into problems building real-time features in Ruby, and he read about WhatsApp. Every hard problem he cared about, he kept finding, had already been solved by Erlang. WhatsApp's choice was no accident: Erlang was created by Ericsson for telecoms, specifically to build telephone switches. If you were building a telecom system or a telecom startup, you wouldn't be forced to build it in Erlang, but you'd be crazy not to look at it. So Chris dug in and started learning Erlang. But coming from Ruby, he found that the things he loved about Ruby — the great tooling, moving fast, the developer happiness — just weren't there. Honestly, he found it hard to be productive in Erlang. What he did find, though, was some of the best engineering anywhere. Erlang has been around since 1986, and it still powers a huge share of the world's telecom switches. If you make a phone call anywhere in the world, there's roughly a 50% chance it passes through an Erlang system. That's a lot of software running successfully, for decades. Some of those systems have famously reported nine nines of reliability. So Erlang had been out there quietly running the world's telecoms for thirty years — rock solid, but with almost none of the developer experience Chris wanted. Then Chris discovered Elixir, and he got in touch with José, its creator. Elixir goes back to 2011. At first it might have sounded like a crazy idea, but having seen Erlang, Chris realized Elixir might be exactly what he was looking for. Now, José was already famous in the Rails community.
He was the developer who created Devise, the well-known authentication library for Rails, and a member of the Rails core team. And this same person had now created Elixir. When Chris looked at it, he saw exactly what he wanted: the best parts of Ruby, brought to the Erlang VM. Elixir's focus is on developer productivity, while inheriting everything Erlang already has. So Chris built Phoenix on top of it to solve his own problem: the modern web. And by the modern web I mean rich HTML5 applications, JSON APIs, and real-time systems. Because Elixir is so productive, Phoenix makes building all of these very easy too. Now, the Rails community has always chased productivity, and for a while the fashion was to split applications into lots of small services — think of how many projects end up as a pile of services talking to each other over HTTP APIs. But HTTP between services is expensive: something that isn't computationally hard at all ends up being very costly just to communicate. With Elixir and Phoenix, you no longer have to make that trade-off. You can have both productivity and performance. The focus is on productivity, but you get performance and scalability almost for free. Okay. So what can you build with it? Think of it like this: everything you build today — HTML apps, JSON APIs — it's all there. But you can also build the real-time things that previously weren't practical. And as I said earlier, I think this is where Phoenix really shines. So let's focus a bit on real-time. What do I mean by real-time?
Look at a typical web request today: the client sends a request, the server renders a response, and that's it — the transaction is over. Each request is independent, and the server keeps no state between them. This stateless model is good, and it helped the web scale, because all a server ever has to do is answer the request in front of it. It doesn't need to know anything that happened before to produce the response the client needs. You can load-balance requests across machines, and any machine can answer any request. Phoenix is an MVC framework, and it handles this request/response world the way you'd expect. But real-time is different. With channels, you open a persistent connection between the client and the server, and they keep talking over that one connection. Messages can originate from the client, but also from the server, or from other clients whose messages the server relays. This communication is bidirectional and can happen at any moment. So while MVC is a great abstraction for request/response, it's a poor fit for real-time. That's why Phoenix introduces a new abstraction: channels. You can think of a channel as something like a controller in MVC, except the conversation is ongoing and bidirectional. Each client subscribes to one or more topics. You can think of topics as virtual rooms: once you enter a room, you can speak, and everyone else in the room hears you. And a topic is just a string, as you can see here. The server also keeps track of which clients are subscribed to which topics.
And when a client — or the server — sends a message to a topic, everyone subscribed to that topic receives it. So underneath it's publish/subscribe, and in a moment we'll send a message ourselves. This is how a topic maps to a channel. You'll notice it looks a little different from your controllers in MVC. This is Elixir code — the first I'm showing here. As you can tell, the topic is "room", and everything related to this topic is handled by the RoomChannel module, which is the second argument to this macro — I'll explain what a macro is later. And that's it. This is the client side, and it's JavaScript. You create a socket object, which represents a WebSocket connection in this instance, and you join the channel with the "room:lobby" topic. After joining the channel, you can push any message to the server — here it's pushing an "add_comment" message to "room:lobby". Messages can be anything; "add_comment" is just a name. So when the user enters a comment, you send it once they've finished typing. You can also subscribe to different messages on that topic. For example, you can listen for "new_comment": whenever somebody broadcasts a "new_comment" message, you'll receive it on the client. Back to the server. This is the module for the socket I mentioned earlier, the one you join. As you can see, it's listening to all rooms, as I mentioned. There's also a connect function here in case you want to validate socket connections — for example, whether you allow that client to connect at all. And here's the actual channel code. Like the socket, there's also a function called join, which lets you do more fine-grained authorization. At the socket level it's all-or-nothing — either you allow people to connect or you don't.
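Stepping back to the client for a moment, the join/push/subscribe flow described above looks roughly like this — a sketch using the "phoenix" JavaScript client that ships with the framework; the topic and event names simply mirror the talk's example:

```javascript
import {Socket} from "phoenix"

// Open the transport (a WebSocket by default)
let socket = new Socket("/socket")
socket.connect()

// Join the channel for the "room:lobby" topic
let channel = socket.channel("room:lobby")
channel.join()
  .receive("ok", () => console.log("joined room:lobby"))

// Push an outgoing message to the server once the user finishes typing
channel.push("add_comment", {body: "Hello, everyone!"})

// Subscribe to broadcasts: fires whenever anyone posts a comment
channel.on("new_comment", payload => console.log(payload.body))
```

The same `channel` object handles both directions: `push` for client-to-server messages and `on` for server broadcasts.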
But after they connect to the socket, maybe you only want to allow them into certain topics, and that's where you control it, using this join function. For example, they can join the lobby, but they cannot join a private room — "room:1" or something. handle_in holds the handlers for incoming messages, and this one handles "add_comment". As you can see, every time a client sends an "add_comment" message, this handler extracts the comment and broadcasts a "new_comment" message. Let me repeat that: you're handling an incoming message, "add_comment", and in handling it, you broadcast another message, "new_comment". Everyone subscribed to "new_comment" will receive it. And that's it — that's all you need to do to add real-time features to your application. One of the primary goals of Phoenix, Chris said, was to make real-time web programming as easy as adding a REST endpoint, and I think he succeeded. Now, channels are not just for browsers. In fact, they're client-agnostic: there's a spec, and you can write a channels client on any platform — as long as you conform to the spec, you can talk to a Phoenix server. There are clients for iOS and Android out there, used in production today. Channels are also transport-agnostic. Earlier I showed the Phoenix socket join, and that runs over WebSockets, which is the default — a sensible default for bidirectional communication. But if WebSockets aren't available — say the user is on IE9 — it can fall back to long polling, and everything just works as far as the developer is concerned. So here you have a bunch of devices connecting to your server. But what if you have more than one server? We're talking about stateful services and persistent connections, so it matters which server is talking to which client.
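Put together, the server side of the example above fits in one small module — a sketch following Phoenix 1.3 conventions, where the app and module names (`MyAppWeb`, `RoomChannel`) are placeholders:

```elixir
defmodule MyAppWeb.RoomChannel do
  use MyAppWeb, :channel

  # Per-topic authorization: anyone may join the lobby,
  # but private rooms are refused.
  def join("room:lobby", _params, socket), do: {:ok, socket}
  def join("room:" <> _private, _params, _socket),
    do: {:error, %{reason: "unauthorized"}}

  # Handle an incoming "add_comment" message and fan it out
  # to every client subscribed to this topic.
  def handle_in("add_comment", %{"body" => body}, socket) do
    broadcast!(socket, "new_comment", %{body: body})
    {:noreply, socket}
  end
end
```

Note how `join/3` pattern-matches on the topic string itself, so the lobby and private rooms get different clauses rather than an if/else.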
So, Phoenix builds upon Erlang's built-in distribution features to provide synchronization — I'll elaborate on that later — and whenever a message is sent, Phoenix knows where to route it for handling. It seems like magic, but remember that Erlang was originally created for telephones. Just by substituting some names and icons, you can see it's just a bunch of phones connected to telephone switches across the world. It's the exact same concurrency model Erlang was designed for, and it's the exact same problem faced by the modern web. So, here's a little peek under the hood. What happens is that when a client connects to Phoenix, an Erlang process is spawned — that's the green circle over there. It's an Erlang process, not an OS process. Like OS processes, Erlang processes are isolated and run concurrently; they don't share memory with other processes, and you need message passing to talk between them. But unlike OS processes, they're extremely lightweight: on my laptop here I can run millions of processes and everything stays responsive. Some of you will recognize this as the actor model of computing. So, a process is created to handle the transport for that client — in this case, the WebSocket session. And for each channel the client joins, another process is created. Those yellow circles might be "room:lobby" or "room:1" or "room:2". If a client joins two topics, it's represented by two Erlang processes on the back end. And like OS processes, if one process crashes, it doesn't affect the others. So you can imagine a channel process crashing — you did something weird in there, queried the database wrongly, failed a validation, whatever. It crashes. But it won't affect other channels or the underlying transport; a new process is simply spawned. This is the model of computing implied by the actor model, and it's incredibly fault-tolerant.
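You can see this isolation with nothing but plain Elixir, no Phoenix involved — a minimal sketch where the deliberate exit of one spawned process leaves its siblings untouched:

```elixir
# Spawn three isolated processes, each waiting for a single message.
workers =
  for i <- 1..3 do
    spawn(fn ->
      receive do
        :crash -> exit(:boom)                       # simulate a bug
        :ping  -> IO.puts("worker #{i} still alive")
      end
    end)
  end

send(Enum.at(workers, 0), :crash)  # the first process dies...
Process.sleep(50)
send(Enum.at(workers, 1), :ping)   # ...and the others keep running,
send(Enum.at(workers, 2), :ping)   # printing that they're still alive
Process.sleep(50)
```

In a real Phoenix app you would let a supervisor restart the crashed process; the point here is only that the crash never leaks into its neighbours.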
Now, if you're running one node — one server — my story would end here, but there's more exciting stuff. Say your web app is getting popular and you need to scale by adding more nodes, because there simply isn't enough CPU to handle the workload your startup is getting. Phoenix ships with a PubSub layer powered by Distributed Erlang. That's what lets nodes talk to each other and synchronize data. So if you see a traffic spike, you can spin up ten nodes, and they can all handle channels, and they all know how to talk to each other and forward messages accordingly — using built-in features only. You *can* swap Distributed Erlang out for Redis, but I don't actually know anyone who has done it; that's how good this feature is. Action Cable — which was inspired by Phoenix channels, but I'm getting ahead of myself — has to be powered by Redis, because Rails doesn't have Distributed Erlang. So far I've explained how Erlang enables massively concurrent systems, and concurrency means you can easily scale horizontally for performance. But it's also fault-tolerant, because one process crashing does not affect another, with nine-nines reliability in the field. All this performance sounds theoretical, though — has it been tested? When Phoenix 1.0 was released, someone on the Elixir mailing list suggested that they should really put their claims to the test: not just micro-benchmarks, but tests that put the framework under load and try to break it. So the Phoenix team found the biggest server they could — I think it was 40 cores — set up a Phoenix channel, and spun up a whole army of client servers subscribing to just one topic, and here's the result. They started pushing load, and clients joined at a rate of 20,000 to 25,000 per second, sustained over time, and the count steadily climbed all the way up to two-million-plus connections.
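Going back to that PubSub layer for a moment, it is exposed directly through the `Phoenix.PubSub` API, riding on Distributed Erlang underneath. A sketch — the server name `MyApp.PubSub` stands in for whatever your application configured:

```elixir
# Any process on any connected node can subscribe to a topic...
Phoenix.PubSub.subscribe(MyApp.PubSub, "room:lobby")

# ...and a broadcast reaches every subscriber, cluster-wide.
Phoenix.PubSub.broadcast(MyApp.PubSub, "room:lobby", {:new_comment, "hi"})

receive do
  {:new_comment, body} -> IO.puts("got: #{body}")
end
```

Channels use exactly this mechanism to forward messages between nodes, which is why a ten-node cluster needs no extra infrastructure like Redis.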
Now, the interesting thing is that the 2 million was actually a hard-coded cap at the OS level. They hadn't been optimistic they could even come close to 2 million when configuring the benchmark, so they set the limit there — but they actually hit it, and they think they could have kept going. If you bring up htop you can see it's a powerful machine — 40 cores, yes — and you can see the cores are mostly idle even then. When they reached 2 million connections, they downloaded a Wikipedia article and broadcast it to everyone, and every client received that article within 2 to 3 seconds. As they did that, they saw the CPU spike and the cores light up, but afterwards everything just settled back to normal. So that's channels. Maybe I talked a bit too much about channels — let's come back and talk about what else Phoenix does. Back to stateless applications: it's also really fast at normal HTTP requests. Here's another benchmark — disclaimer: always take benchmarks with a pinch of salt, as always. You can see Phoenix handling more throughput, at lower latency, than everything except Gin and Play. And it's not a fair comparison with Gin, because Gin is a micro-framework while Phoenix is a full-blown MVC framework that also handles things like sessions out of the box. The only other full-blown frameworks here are Rails and Play. As for Play, it appears to be faster, but it's also one of the most inconsistent — look at the rightmost column. Not sure why; maybe the JVM is doing funky things again. But the point is that, if you choose to believe this benchmark, Phoenix is the youngest framework here, and personally I think it's pretty fast for the youngest framework. That's just a small benchmark, though. This is the very famous — they call it the poster-boy — article: Bleacher Report, a high-traffic sports news site, reduced 150 servers to just five after migrating from Ruby on Rails to Phoenix, and they think they're still over-provisioned. You can
read the article and the news clip yourself. Okay, enough about channels and performance — let's round out the topic a bit and briefly talk about productivity. Phoenix is not just about performance. In fact, as I mentioned earlier, the focus is on productivity over performance, meaning that whenever they encountered a trade-off where they had to choose between the two, Chris would always pick productivity. And what do I mean by productivity? After you install Elixir — installation is really easy — you just `mix archive.install` the Phoenix archive and you can already create a new app. If you're coming from Rails, this will look familiar. This is the router, and it's very easy to read, because like Ruby, Elixir has optional parentheses, so you can construct very readable DSLs. You have helpers like `resources`, which you'll find in Rails too, that generate all the CRUD routes for you. There's `scope`, which is new: it lets you define a top-level path component and group related routes together — /admin, for instance. Scope also has another purpose. I want to call attention to the `pipe_through :browser` there: it means that for all routes belonging to this scope, the browser pipeline is applied. So what's a pipeline? A pipeline is just a set of middleware you wish to apply to just this set of routes. You normally want to do very different things for different routes — standard browser requests versus API requests, for example. For browser requests you typically want to fetch the session, which is very different from an API request. Pipelines let you attach different middleware to different groups of routes and treat them very differently, while staying readable and maintainable. Controllers, again: if you're a Rails user you'll be familiar with these. The only real difference is probably `show`, where pattern matching on `params` extracts the ID. There are also generators, in case you want to produce boilerplate really quickly, and in addition to generators there are tasks, like `mix ecto.migrate`, to migrate and generate
migrations. Now here, in my opinion, is one of Phoenix's killer features: all views are functions. What do I mean by that? The view here is written in EEx — Embedded Elixir, very similar to ERB in Ruby — and this template is compiled at build time into a function. And by "function" I really do mean just another function: thanks to Elixir's metaprogramming features, all your templates are compiled into functions. So when you deploy and run the app, because they are functions, they're all loaded into memory, and they're just read off memory whenever a request comes in. That's one of the reasons — and most people are shocked to find this out when they first spin up Phoenix — that responses return within microseconds, not milliseconds. You heard me right: microseconds. There's no disk I/O, no parsing; nothing needs to be read from disk, everything comes straight from memory. All that slow stuff is out of the way when views are just functions. So that's what you get out of Phoenix. If you're not familiar with Elixir, there are other productivity boosts there too. We have an amazing console, IEx, and it just gets better with every release — they added breakpoints, and there's built-in help. I almost never refer to the web documentation; I just hit tab to autocomplete in IEx and discover the docs for whatever function I need. There's also Observer, which lets you inspect at runtime what processes are running — all those are Erlang processes. You can see the Repo pool, which is your database connection pool. I was completely sold on this stack when I decided to kill one of those connections and it spun back up instantly — honestly, I thought I hadn't killed it. That's how fault-tolerant Erlang is. You can also see the PubSub I mentioned earlier, and all that. So, Eric MJ, the maintainer of Hex — Hex is Elixir's package manager, not unlike npm for Node or RubyGems for the Ruby community — kindly shared this data with me. Popularity-
wise, it's been growing exponentially since the start — it's at 3.6 million downloads now. Phoenix 1.0 launched in August 2015, so there's a bit of a head start at the front of the graph, I guess, and it's not showing any signs of slowing down; in fact, I think it's accelerating. This data goes all the way up to this past January. And a little fun fact: Phoenix was actually the second package ever published on Hex. So, I work for an e-commerce company called Bazaar, and we built our app from the ground up in Elixir and Phoenix. So yes — although the stack is quite young and you may not have heard of it, we're using it in production now, and we're shipping Elixir code daily. If any of this excites you and you want to work on bleeding-edge web technologies every day, come and talk to me. So, to that question: you asked whether we actually use distributed Erlang in our application today? Yeah, I'm using PubSub — I guess that counts. But do you use different nodes? Do I use different nodes?
I don't use different nodes. Well, Phoenix PubSub uses that underneath, so I'd say I'm using it indirectly; I haven't directly used it. Oh — actually, this is a good question: last week I did have to console into production to debug something, and I used distribution to connect my IEx console to the running node and set breakpoints on it, which is insane, actually — it was the first time we'd done that. So that counts as distribution. But was it all on one big server? No, it wasn't one machine — it was across IPs. Okay, because deployment is pretty complicated, since you need to discover all the different instances? Yep, yep. So I'm going to plug another product here: I'm using Nanobox, and all of that is taken care of — it can find the other nodes, just like that. They set up a private network, like a VPN, so you don't get outside traffic. That was the issue people had, because it's not secure if anyone can just connect to any node. So within that subnet they help you set all the Erlang cookies and all the settings. Next question: since Phoenix is kind of new — I mean, it's been a few years — you still run into situations where you might not have the set of libraries you need? Yep, it's my primary problem right now. You'd hope you could fall back on Erlang, but Erlang packages also have their own issues — so what do you do when you're actually put in that situation? Okay, so the question is: if the ecosystem is weak, what do you do? I don't have a good answer, because it is my primary problem. But what I've found is that because Elixir is so easy to code in, I can quickly spin up roughly just what I need. Moreover, it's very, very similar syntax-wise to Ruby, so there were once or twice where I actually looked up a Ruby library, found out how they did it, and just rewrote it — for example, the Slugify library, which you can find on Hex; I looked through a lot of Ruby libraries for that. Yes, the Slugify library is open source, and I'm gonna open
source another library soon, so stay tuned. Yep. Okay, one more question. Sure — you mentioned Erlang in the slides, so this is probably a question for everyone. One of the things is that whatever you do in Erlang you can do just as well in Elixir, which is basically adding syntax on top of Erlang, and Phoenix is doing great — but in terms of investment into Erlang itself, I think there hasn't been much. And what about scalability? The JVM — things like Akka — was built with actor-style programming in mind too, but in terms of CPU-bound processing, I thought the Erlang VM actually struggles. So the question is: has there been much investment in the Erlang VM these days, and what about CPU-bound processing? I've heard people say Erlang is not that great at it, so people sometimes jump to conclusions — fair enough. Okay, to answer the first question first: if you follow the Erlang repo, there's a lot of development going through it, so I'm not sure what you mean by it not having investment. I know of a lot of investment banks — I won't name them — for whom it's critical to production; they just can't abandon the platform. There's no way. Like I said, half of the world's telecom runs on Erlang; you don't deprecate a language just like that when it's so widely used. So I wouldn't be too worried about language development in the sense of it being abandoned or anything like that. Maybe you'd worry that development is a bit slow, but it has to be — it's so mature, so battle-tested, and so proven that you can't just change things too much, right? But surprisingly, thanks to Elixir coming on board — Elixir is bringing that developer mindset from Ruby; José himself and a lot of people from the Elixir community are
contributing a lot of changes back to Erlang — UTF-8 atoms, for example, and that sort of thing. So we're actually seeing the pace pick up on Erlang/OTP. To answer the second question: the Erlang VM is not exactly great for CPU-bound work. The things I've shown you, like 2 million connections — those are not CPU-bound, those are IO-bound workloads. And that's by design: the BEAM is optimized for low latency, for very quick responses. The scheduler is very pre-emptive and very aggressive. On top of that, if a process spends a bit too many CPU cycles — they count them as "reductions" — once it has spent around 2,000 reductions, the scheduler just tells it to step aside and gives those CPU cycles to another process. So: the right tool for the right job. If you're doing CPU-bound work, like a game engine, please don't write it in Erlang. Erlang is great at what I think the modern web needs now: very fast, very quick responses — low latency. Whenever you need CPU-bound work, most of the time — if not all of the time — you push it out to a background process. That background process can be managed by Erlang, but the actual computation can be done in another language; these days Rust is quite popular in the Elixir community. You might even run that background work outside the server Elixir is running on, so they don't fight for CPU cycles: you keep serving responses quickly and hand the CPU-bound work asynchronously to languages better suited for it. And the thing is, almost every language other than Ruby and Python is tuned for CPU performance — C, Rust, we have those languages doing that already. Why not a language optimized for latency? To me, one of the biggest computing problems is actually serving web pages — why were there no languages optimized for latency? Thank goodness there's Erlang and Elixir now. Just to add one point: the reason Erlang doesn't suffer latency spikes is
because GC is actually done per process, on everything that lives within that process. Once the process dies, everything it owned is collected immediately. So it's many, many small GCs, versus Java, where the JVM does one big GC that walks all the generations and collects everything. For Erlang, when a process is done, it throws away the whole chunk at once — no memory is shared, so when you know the process is done, you just free its memory. You don't need to stop the world; it's stop-the-world garbage collection that causes jank — it's why, when you scroll too fast on Android, you sometimes get jitters. Anyway, if there's a language that was designed from day one to be on the back end, it's Erlang. Java was originally designed for smart TVs, and now it's the biggest server-side language. Ruby started as a little shell-scripting tool, and now Rails is one of the most-used frameworks. But here we have a back-end language designed for the back end from day one. So how do you get started, and what resources would you recommend? Are you asking how I got started, or, if you want to get started, what you should do? Okay, so how I got started is quite interesting. I was building a project — it was an MVP — and we did it in Rails, because that was fastest. But when the MVP was successful and we wanted to take it further, someone in my co-working space suggested: have you looked into Elixir? And I was like, what's Elixir? Or, have you looked into Phoenix? Oh, I don't know — what's Phoenix? So I actually decided to watch Phoenix videos before I even knew Elixir. I didn't really care what Elixir was; I thought this channels thing was really cool, this real-time stuff was really cool — it was exactly what I needed for what I was trying to do. All the problems I'd faced building the MVP could be solved with Phoenix. And it was only after digging into
Phoenix that I discovered how awesome Elixir is, and by extension how awesome Erlang is. So that's how I got started. Alright, how do you get started? There's a very famous guy called Dave Thomas, who sort of introduced Ruby to the masses — he wrote a book called Programming Ruby. He happens to like Elixir just as much, and he wrote a book called Programming Elixir. Look that book up; I learned Elixir using it. But if you want something faster, with more of a tutorial feel, the official guides on elixir-lang.org are really, really good. To onboard new people at my company, I normally ask them to try the Elixir koans — yes, there's a set of koans for Elixir as well. So those are three very different ways: some like reading books, some like learning right in their editor — three choices right there. For learning Phoenix, though, I still think the official book, Programming Phoenix, is the best way. Unfortunately it's quite outdated now, because Chris doesn't have time, and they changed a lot of things between the current version of Phoenix and the last one — we're at 1.3 now and the book covers 1.2. The updated book is projected to be completed in the middle of this year, but no guarantees, because it was projected to be completed in the middle of last year. Since they've moved a lot of things around, the best thing you can do is not just rely on the book, but jump on the Elixir Slack, where I hang out all the time, and ask questions there. The Elixir community is one of the best I've ever met — they're all very, very helpful, which is a really good bonus. Last slide: that's the Cityman I mentioned, by Nokia Mobira. It was actually released in 1987, one year after Erlang came out — so Erlang predates even this phone. Alright, thank you.