Hi, I'm Luca, from Italy. I work at a company called Wellnet, where we do Drupal. I maintain the WebProfiler module, which is part of the Devel suite, and some other Drupal modules.

Today I want to show you a project we built in the last months to solve a problem we have, and I want to show you how we implemented this system using a microservices architecture. We all know that Drupal may not be the best tool for every problem, so sometimes we want to bring other technologies into our projects. And maybe somewhere on the internet someone has already built some piece of code that we can use in our system, so we don't have to reimplement it. In this way Drupal 8, for example, can become part of a larger, distributed system, with different components that communicate through HTTP, for example.

OK, so in this presentation we will analyze the system we built to solve a problem we have. The problem is that Composer is difficult to set up and learn for some people, because it requires some degree of knowledge, it requires a large amount of RAM to run on shared hosting, and it's difficult to use. So we thought it would be very useful to build an interface to choose modules and themes for a Drupal 8 website, have a remote system build the website, and then provide a zip file to download with everything inside: all the vendor packages, dependencies and so on.

We built such a service in a microservices way, using Drupal 8 as the front end. We have ten Docker containers, some Go programs, a couple of RabbitMQ queues, one Elasticsearch instance on Amazon Web Services, one Redis and, obviously, one Drupal 8 website. The main point of this presentation is that this isn't actually a lot of work: we built the first demo in a couple of weeks. You don't have to be Netflix to use microservices.
You can build very small systems with this methodology. For instance, this is the diagram of our application. We have some Docker containers for Apache, PHP-FPM, MySQL and Redis for Drupal 8, plus Elasticsearch. When a user requests a build of a project, we enqueue the request in RabbitMQ and then use some Go programs and Docker images to build the website. Then we have a notifier that communicates back to Drupal through a REST endpoint, and to the browser with a WebSocket.

Let me show you a very quick video of how the system works for an end user. You go to the website, you log in, then you can create a project. You can choose the Drupal standard distribution. If you choose to build a Drupal project, you have to insert the name of the project, the version of the core and a description. Then you can start adding extensions, like modules and themes for Drupal. For example, you can add Commerce, which depends on some external PHP packages, or you can add Devel, or Monolog, which is another Drupal module that requires an external PHP dependency; you cannot use Monolog unless you build your Drupal website with Composer. Then you can choose to receive automatic updates, whether to install developer tools, and so on.

When you click build, the role of Drupal ends: we enqueue a message to RabbitMQ that will be consumed by a Go program, which spins up a Docker container that builds your package. After a couple of minutes you can download it and extract it on your local machine, for example. If you look at the content of this package, you have the modules you required, every module they depend on, and also, in the vendor folder, all the dependencies of the modules you chose and of the Drupal core.

OK, let's start to analyze how we built this system. We start with RabbitMQ. RabbitMQ is an open source message broker that implements the Advanced Message Queuing Protocol.
You can download it from rabbitmq.com, and it allows two or more microservices to communicate asynchronously by sending messages in a publish/subscribe model: someone posts a message, some other component consumes messages to perform a task. In this example, Drupal can delegate to a queue the long-running tasks, or the tasks that are more easily implemented in some other technology. So we use RabbitMQ as middleware between our microservices: we let Drupal post a message to a queue, and some other process consumes those messages. In this way, for instance, the UX of the Drupal front end can be better, because we don't have to wait for Drupal to perform the task; we can come back to the website later, for example. In the next example we will define a message producer in PHP, in a Drupal custom module, and a message consumer as a Go process.

Just two words about how RabbitMQ works. In RabbitMQ we can have different virtual hosts; each of them has multiple exchanges that receive messages from channels and dispatch those messages to queues based on a routing key. But we want to start simple, so we have one virtual host and the default exchange. The default exchange dispatches every message to the queue whose name equals the routing key, so it's very simple. In the next example, the queue is called builds.

We need an external PHP library to simplify the communication with RabbitMQ, so we bring this library into our website using Composer, of course. In our custom module we create a composer.json file with some instructions: we define a name, the type (which must be drupal-module), a description, and then a list of required external dependencies. In this case it's the php-amqplib library and its version.
Then in our PHP code we define the queue name, builds in this case, and an array (we are in Drupal, so an array) of values: for example, we send the type (drupal), the project name, the core version and so on. Then we open a connection to the RabbitMQ server, which responds on a host name and a port, with a username, a password and the virtual host, which is the last parameter. We retrieve a channel from the connection and we declare the queue: we tell RabbitMQ that we will have a queue called builds. This is because we don't know whether the consumer or the producer starts first, so we need to declare the queue on both sides of the connection. Then we prepare a message and we publish it to an exchange without a name, because it's the default exchange, with a routing key equal to the queue name we defined. At the end we close both the channel and the connection, to free resources, of course.

On the other side we have a Go process that consumes these messages. Go is a free and open source programming language created by Google. It is a compiled and statically typed language, well suited for CLI applications, concurrent applications and servers. You can just download the standard toolchain from the Go website and start writing and compiling your code easily, because all the tools you need are in the toolchain. We chose Go over Node or Java or Python or something else because Go compiles to a single binary file that runs directly on the host machine: no dependencies, no virtual machines. So it's very easily deployable everywhere. Go is concurrent by design, so writing concurrent programs is easier than in other languages. It's strongly typed, which is good, but it doesn't need a rigid hierarchy of classes and interfaces like Java, for example. It's very opinionated, so you are forced to follow a shared style for how a Go program is written, and in my opinion it's not so difficult to learn. Why not?
The Go toolchain is very opinionated, in the sense that the toolchain itself includes a command to format the code. The coding standard of Go is defined by the Go creators, so you don't have any possibility to change the format, and every Go program looks similar, because the formatting style guide is baked into the toolchain. You can use go build to compile packages, and you can use go get to download and install packages and dependencies, like Composer in PHP. So we use go get to download an external Go library to communicate with RabbitMQ. This also is a standard: go get takes the HTTP endpoint of the Git repository of the package, in every case something like github.com or similar, then the vendor and the package.

On the Go side we define a struct with the same structure as the array posted by Drupal. In this way we can use Go to directly unmarshal the JSON object into the struct. For this to work, the names of the members of the struct must match the ones sent by Drupal, but if you want to change something, you can just tag the member with the name of the field in the JSON object. Then we do the same things we did in Drupal: we connect to an endpoint with a username, password, hostname and port, we retrieve a channel from this RabbitMQ connection, and we declare the queue on this side of the connection too, because maybe this side starts first. Then we start consuming the messages that arrive on the builds queue. Go is concurrent, so the messages arrive on a channel, in the Go sense, and the range keyword blocks the execution of the program until a message arrives; then we unmarshal it and use the data from the message in our Go program. In our system we use that information to spin up a Docker container that launches the composer install command that builds the website.

Another component of this architecture is Elasticsearch. We use it as a common storage for data about extensions on Drupal.org, so when you configure your project on our
website, you can just start typing the name of a module or a theme, and we retrieve the information from this Elasticsearch server. Elasticsearch is a search engine based on Lucene, like Solr, and it is useful because it provides a distributed full-text search engine with an HTTP interface, so we can insert it into our microservices architecture, and it serves as our common data storage. In our case a Go program stores data into Elasticsearch and PHP code reads the data back from it.

Let's start with Go. On the Go side we need another external library to connect to an Elasticsearch server, and then we simply use this library: the NewClient method to connect, with a URL and an index name, and then we index some documents into Elasticsearch. We need to specify the index name, the type of the document, an ID and the content, and then we can post the document to Elasticsearch.

In Go there is a standard way to manage errors, because Go functions can return multiple values, and usually the second value is the error. For instance, in this example NewClient returns the client and maybe an error; if the error is not nil, something went wrong. In this case, the only thing we can do if we cannot connect is call panic, which stops the program. When we index a document, if some error occurs we just log some information, because it is not a blocking error.

On the PHP side we also need an external library, elasticsearch-php, so we add it to our composer.json file and run composer update, for example, to add this library to our vendor folder. Then we use this library to connect to the same Elasticsearch server, so we need the host; we are on Amazon, so the host is an Amazon machine. Then we set up an array of parameters: we have the index name, the type of the document and a query for Elasticsearch. In this case we want to retrieve a document with a name equal to devel, and then we perform a search on the server and
use the results to build an array suitable for the autocomplete functionality in Drupal. So we have an array of arrays with label and value as keys, because this is how Drupal autocomplete works. In our case we see that Elasticsearch is very useful to produce data for an autocomplete field in Drupal, and this is very simple to do: we just have to define a new controller that reads the q argument from the URL, performs a query against Elasticsearch and returns a JSON response like this. Then in the routing file of our module we define a new route for this controller, and in the form text field we add the key autocomplete_route_name with this route name, and Drupal 8 does the magic to give us the autocomplete functionality.

OK, we can also let our microservices talk through REST endpoints. REST is a way of providing interoperability between computer systems on the internet, because REST-compliant web services allow requesting systems to access and manipulate textual representations of web resources using a uniform and predefined set of stateless operations. In Drupal 8, REST is baked into the core thanks to the API-first initiative, so it's very easy to expose a REST endpoint. In the next example we will define a REST endpoint using Drupal, and some Go code to post messages to that endpoint. On the Drupal.org website there is a lot of complete documentation on how to build REST endpoints. Basically, you have to create a plugin in a custom module, in a very specific namespace, and then you define a rest resource config in a YAML file in the module's install folder.

We start with the plugin. Plugins in Drupal 8 are classes with some annotation on top of them; in this case the annotation we use is RestResource, which has an ID and a set of paths. Here we only need the canonical one, so we define that projects/{id}/update_status (where id is a variable) is the endpoint to contact. Then we provide a
function, in this case called patch, that receives the ID from the URL and the payload from the body of the HTTP request. We use this ID to retrieve the project from the entity type manager storage, then we use the status value in the payload to change the status of our build, for example, and then we return a ResourceResponse to the client. In this case we use this endpoint to update the status of a build with success or error from a Go program we will see later.

Other than the plugin, we need a configuration file in the custom module. This configuration file has an ID and a plugin ID, which is the same as the plugin annotation. The granularity can be resource or method, depending on whether we want to specify information for the whole resource or for a single method; in this case we have only one method, so it's the same. We see that the method our REST endpoint accepts is PATCH, the format is JSON, and we require basic authentication before calling this endpoint.

In REST the HTTP methods have a very specific meaning, so you have to use the correct one when you expose or consume a REST endpoint: GET to retrieve a resource, POST to create a resource, PATCH to update a resource, PUT to replace a resource completely, and DELETE to remove a resource. So, for example, to extract information from a Drupal resource we can use GET with the URL of that resource; to create information we use POST, and we also use POST to ask Drupal to do something on a resource. For example, we have this update_status on the entity 42 of the type project.

On the other side we have a Go program. In Go you don't need any external library to perform HTTP connections: you build a new request object from the http package, you add a header with the content type application/json, because this is a JSON request, and then you use the default client of the http package to perform the request. If the response status code is 200 we have a valid response, and then we use
some Go code to read the response, and once we have this information from the server we can do what we need to do. In this case it is a PATCH request, so we have a payload, and we send this JSON object with only a status key; it means success in this case.

Obviously we can do the opposite: let Drupal consume REST resources exposed by another microservice. We can do this using the http_client service that is in the Drupal core, which uses the Guzzle package to perform the request, or maybe we can use the HTTP Client Manager module, which leverages the Guzzle service description feature; this is very useful to abstract how Guzzle works, so it's easier to build a REST client in Drupal 8. Of course we can also use something different from REST: maybe we can use GraphQL to share information between microservices. GraphQL is a data query language developed by Facebook. It isn't in core like REST, but there is a contrib module to expose the entities of a Drupal website through GraphQL, so it's very easy.

OK, the last way our system interacts with the outside is WebSocket notifications. WebSocket is a TCP-based protocol that uses HTTP to do the initial handshake; after that we can use the WebSocket to communicate between a server and a browser, for instance to deliver real-time notifications. In our system, when the build ends, we post a WebSocket message to the browser to notify the user that the build is ready. So in the next example we define some JavaScript code, because the WebSocket client runs in the browser, to connect to a remote WebSocket server implemented in Go. On the Go side we need an external library, because the Go standard library does not support WebSocket, and then (this is very similar to Node, if you have used it) we define that when we receive a request to a specific URL we call a specific function: in this case, when we receive a message on the ws endpoint we call the message function. Then we start a server on a port. In this case, when
we receive a request, we use the library we downloaded to upgrade the request to a WebSocket connection, and then we use this WebSocket to write messages to the client. The content of the message in this case is JSON, but it could be whatever you want. Another useful feature of Go is that you can defer the execution of a statement to just before the function ends, even if the function calls panic. You usually use this to free resources, because Go executes the deferred statement in any case.

On the client side, in a Drupal module or a Drupal theme, we can define a JavaScript file that uses Drupal.behaviors, the Drupal JavaScript facility, to attach a function and then start listening on a WebSocket at a specific endpoint. When a message arrives, the browser calls the onmessage function, which is a callback that can parse the data and do whatever you want with the message. Of course you can also do the opposite and post data to a WebSocket endpoint.

The last point is that this architecture is more complex than having a single Drupal website somewhere, because we have a lot of different services, maybe on different machines. So one important thing is that you have to describe the architecture of your system in code as well, so you can version it and use some automated tools to deploy and maintain the infrastructure for you. Manually managing a microservices architecture would result in an enormous amount of overhead, because you have multiple services deployed on multiple machines that communicate with each other. Instead of doing this manually, if you have some standard management tools you can build, test, deploy, configure, provision new hosts or relocate services automatically, and this is good because there is less work for us.

In our system we do this with Docker and Ansible. We use Docker because Docker is a tool that can package an application and all its dependencies in a virtual container, and then you can deploy this virtual container on any
Linux server. This enables portability, because you just deploy the container and all your dependencies are inside it. But Docker itself is not enough to describe an architecture; it only describes every single container. So we need an orchestrator to manage multiple containers. The simplest one is Docker Compose, which is a tool that reads a YAML file and then starts a set of containers with the information described in that file. In this example we have an Apache server, so we define the base image, which ports are exposed by the container, whether to restart the container if something goes wrong, and the volumes that map files or directories from the host machine into the container: for example the Apache configuration, the Let's Encrypt keys and certificates, and the actual Drupal document root. The same goes for a database: we can use a MariaDB image, for example, and then we define some environment variables and some ports, just as with Apache. Or we can have a container for the PHP-FPM process; in this case it uses a custom-made image, but it's the same idea.

In our system we have one container for Apache, one for PHP-FPM, one for MySQL, one for Redis for the Drupal cache, one for RabbitMQ, one for the notifier (which is the Go process that communicates through REST and WebSocket), and one that runs SSHD so we can connect remotely with Drush aliases. In the development environment we also have a MailHog container to catch the emails, Elasticsearch and Kibana (because on the dev environment we don't use the Amazon services), and a Blackfire container for performance tuning.

But at this point you have to install Docker on the server, deploy your containers, launch your containers: you have to do a lot of tasks manually. We can use Ansible to do this for us, because Ansible is an automation tool that automates software provisioning, configuration management and application deployment, and with Docker and
Docker Compose it allows you to describe your whole service architecture in code, so you can version it, for example. Ansible is very useful because it is agentless, so you don't need any daemon or tool running on the managed servers; the only requirements on the managed machines are that you have Python and that you can open an SSH connection to them. Ansible reads a list of hosts from an inventory file and performs a set of tasks defined in a playbook, and those playbooks use modules to describe the operations to be executed on every host in your inventory.

For example, this is the start of an Ansible playbook. We have a name, we say that we want this playbook to run on every host we have, and we want to become root to perform the operations. Then we have a list of tasks: for example, we want to install Python setuptools and Docker on the remote machine, then we want to install the other tools needed to retrieve and run Docker Compose, and at the end of these tasks we have Docker and Docker Compose deployed on the remote machine, ready for us. The last steps are to start the Docker daemon, to add our user (this is Amazon, so the user we have is this one) to the docker group so it can run Docker directly, and then to copy the docker-compose file to the server and run Docker Compose on it, which starts up every container we defined. So we don't have to do anything on the server: Ansible does it for us.

Obviously, at the moment every component other than Elasticsearch runs on Docker, but Amazon provides us a lot of already implemented components that we can use: for example, we can move the database to RDS, we can replace the Redis container with Amazon ElastiCache, we can replace the RabbitMQ container with the Simple Notification Service. At the moment Go doesn't run on Amazon Lambda, but maybe in the future we could use Lambda to run our code, and so on. And we can use Kubernetes, for example, for the containers
that remain. Maybe this could be the next evolution of our architecture; maybe we would need some different tools to describe this architecture for Amazon, but the concept is the same.

OK, a couple of takeaways. Don't try to do everything with Drupal, because maybe it's not the best tool for everything; you should use Drupal for its best features: CMS, user management and templating, where Drupal is very fast and good. Try Go, because it's a very interesting language. Start small: you don't have to buy a thousand servers and so on; you can just start with all the services in Docker containers on one server. And put all your infrastructure under version control, so you are sure that it works as you expect.

OK, this tool we built is available to everyone. It's called Compose; compose.io is the website. You can compose core, modules and themes and download a zip file with everything inside. You can build distributions, so if you want to try out Open Social and so on, you can build them on our system. You can choose the folder layout, whether you want the vendor directory in the document root or outside the document root, and you can subscribe for automatic updates and so on.

Just a quick slide: Wellnet is hiring, so if you are interested in working with Go, Java, JavaScript or AWS, other than Drupal 7 and Drupal 8, send us an e-mail. We are looking for people from all over Europe, so you are welcome. Then don't forget to come to the contribution sprint on Friday: Drupal needs a lot of people to move forward, not only developers but also testers, writers, designers and so on. So don't be shy, come to the contribution sprint. Don't forget to rate this session if you want, and thank you. If you have any questions, there is a mic.

[Question] Thank you for this nice tool and your presentation. The question is: will the build create a composer.json file in the root, so that I can go on from the first build in the usual Composer way, updating?

[Answer] Yes, we
don't strip out the composer.json or the composer.lock file from the package, so if you want to run Composer on your machine later, you are able to do so whenever you want. There is no lock-in to the platform. Any other questions? OK, thank you.