Hi everyone, my name is Ishaï, and I'm the CTO and co-founder of Lifecycle. Today I'm going to talk about how to create an amazing onboarding experience for developers engaging with a new, unfamiliar codebase, using self-contained development environments. I'll start with a short story. A few years ago I led the development of an internal product that was later released as an open-source solution for feature flagging and remote configuration. During that time, we wanted developers at the company to contribute code to the project — it was an open-source project, after all. To achieve that, we organized a company-wide hackathon. Because the project was called Tweak, we naturally called it a Tweakathon. We were excited: we created additional documentation for the project, we created a dedicated backlog for the event, we promoted the event, we printed t-shirts. But in the end, it didn't work that well. We got very few contributions, and the main reason was that developers were struggling just to run the application, let alone write code, test it, or develop new features. That's because Tweak was — and still is — a complex application, and it can take hours of frustration and lots of tricks to make such a complex project work on a development machine. Today I'm going to talk about how we can create a better experience. Before we dive into that, let me tell you a bit about myself. I've been a full-stack developer for at least a decade. I'm passionate about cloud development, backend architecture, user experience, developer experience, functional programming, and of course, Docker. I'm the creator and maintainer of Tweak, the open-source cloud-native feature management solution I just mentioned. And I care deeply about simplicity, consistency, and elegance in code. So that was me. Let me tell you about Lifecycle.
We are an early-stage startup building the next generation of collaboration tools for development teams. It's based on continuously building playground environments that are rich and interactive, so any stakeholder in the organization can engage with them — we're trying to bridge the gap between coders and non-coders. The product is currently in a closed stage, but it will soon be in public beta, so be sure to check it out. So let's get back to the problem: how does it feel to start working on a new, complex codebase? First of all, we try to build and run it. We might discover that the project doesn't support our operating system. Or we have missing dependencies we need to install — language SDKs, runtimes. In many cases we have a conflicting SDK, like with Node: the project needs Node 12 but we have Node 16 installed. The same goes for Ruby, Python, or .NET. Besides that, the package managers will probably throw some random errors that we'll search for on StackOverflow, or we'll ask someone who managed to get it working properly. Afterwards, we read the README and try to make it work. We run some magic scripts and watch them fail. In some cases we might need to change our hosts file, install other tools, dependencies, or databases — whatever the README requires of us. And if you're in an organization with heavy security, you might need to install a root CA or something like that. Then, once we actually try to develop, debugging doesn't necessarily work: breakpoints aren't triggered; the IDE might not have correct autocomplete or dependency resolution; the watch-and-rebuild mechanism doesn't work because we've exhausted our file-watch limit and need something like Watchman; hot-module reloading doesn't work because of some WebSocket or CORS issue. And maybe we have additional problems with external dependencies we need to connect and configure. And then we get to integration tests.
And well, that's a whole different nightmare, with Selenium and other tools that are much more complex, flaky, and unreliable. But the worst part, besides all of this, is that after a few months of working on a different project we'll probably need to do it all over again — the environment changed, our computer changed, and we'll be like, ah, not again. So why is it so difficult? All developers love to build tools — shouldn't we have solved this problem by now? Probably every developer knows this pain to some extent, depending on the complexity of the projects they work on. Part of why it's difficult is that we have so many tools: so many versions of different SDKs and runtimes, different operating systems. We have the "works on my machine" syndrome — if it works on my machine, I'm not going to touch it, and I'll only set up a new developer machine after mine burns out or something. We have a vast number of toolchains and IDEs, rapidly changing, with new versions all the time. Our development flows are much more complex and can easily break — it's not just building and running; we have debugging, watch modes, hot reloading, Docker daemons. Our developer machines are polluted and overloaded with tools. It's a waste of time and a lot of frustration. So to solve it — what's the dream development environment? The way I see it, we want development environments that are consistent: they provide the same predictable experience, no random stuff happening out of nowhere. They are reproducible: I can kill them, destroy them, and then rebuild or re-provision them — that's important. Additionally, they are isolated: not affected by other stuff running on my computer, whether that's other development environments or the machine itself.
They are self-contained, meaning that all the dependencies and tools needed for development are defined and packaged inside that environment. And they are unbreakable — we don't want to struggle for countless hours to get things working again; in the worst case, we can just rebuild. Now, some of these things have long been said about containers and Docker — that's one of the promises Docker images provide. And two years ago, VS Code, one of the most popular IDEs, added support for developing inside a container as part of its Remote extensions. The interesting part is this: in the past, some companies provisioned VMs and developers used them instead of their own machines. But there are problems with that — besides requiring lots of operations, we don't get the best experience, because the IDE itself isn't running locally. In VS Code's case, the front end of the IDE runs locally on our machine, while the back end — which holds the file system access, terminal sessions, the language servers, and the extension host — runs on a server packed inside a container. So we get a local experience, but we operate on a remote file system and a remote machine. That's pretty awesome, and I'm going to show how we can use this technology. Before we begin: I'm going to show lots of examples on different code repositories and how to run them. All the examples and slides are available on GitHub, so you can check them out afterwards. All the tools in this presentation are based on open-source and free-to-use projects, so you can use them yourself easily. And most examples here are far from bulletproof — they can be optimized, and some of them use tools that could be considered experimental. That's a bit of a disclaimer. So let's begin, and I'm going to start with a simple project: a CLI tool for creating ASCII art from images — a simple image-to-ASCII converter.
The project is written in Go, but in an older version of Go, before we had Go modules, and I'm going to run it in a development container. So let's open it. I have it here — that's the open-source repository, or actually a fork of it; I'm using forks for all the repositories in this talk. You can see that the IDE looks like regular VS Code, but it actually runs inside a development container — we can see it here in the green status indicator. What does that mean? I'll show a quick example. If I open a command prompt here — and yes, I'm running Windows, which is embarrassing — and try to run Go, I don't have it. I don't have it because this is my personal machine, and the operating system doesn't have any developer tools besides Docker and Git. So I have Docker, and I can see the development container running here; we'll get to that afterwards. What's interesting is that we don't have Go out here, but we do have it in here — because this terminal isn't running on my actual computer, it's running inside the development container. So how does this happen? We have a devcontainer.json that contains instructions for VS Code on how to open this environment. We have some settings related to Golang, and an extensions list, which means the Go extension gets installed inside the IDE — we see it here. And we have a Dockerfile with the instructions for building the image for that development container. We can see it's a base Ubuntu image; we install Go 1.16; we set Go modules off, because it's an old project that has to be built without Go modules; and we also install dep, an early Golang dependency manager that this project uses, plus some shell extensions so we have autocomplete for Go and Git.
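To make this concrete, here is a minimal sketch of what such a devcontainer.json could look like — the extension ID is the real Go extension for VS Code, but the project name and settings are illustrative assumptions, not taken from the actual repository:

```json
{
  "name": "image-to-ascii (Go 1.16, pre-modules)",
  "build": { "dockerfile": "Dockerfile" },
  "settings": {
    "go.gopath": "/go"
  },
  "extensions": [
    "golang.go"
  ]
}
```

VS Code reads this file from the `.devcontainer/` folder, builds the referenced Dockerfile, and installs the listed extensions inside the container rather than on the host.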
Okay, so that's basically it. Another thing: we can see that the working directory here is actually under the Go path — something like $GOPATH/src/github.com/… The reason is that, because it's an old pre-modules project, we have to clone it into a specific directory. To achieve that, we have a workspace definition for this folder, and in the dev container we also create a symlink for that directory. That's pretty awesome, because on my own machine it would be really unpleasant to arrange it that way — I always had a problem with Go in the past forcing me to clone projects into a specific directory; a terrible experience, just in my opinion. Okay, so we're running in the right folder with the right tools. We also have dep for checking the Go dependencies — we have the Go packages here. I'm going to try to run the project, and pass a URL that I want to turn into ASCII. It doesn't work, because this CLI only works with files, not URLs. So I want to add this feature. I'll go here to the convert code, and you can see that I have autocomplete and everything works properly. The code I want to change is the function that opens the image file. I'm not going to write it right now — believe me, it's only a few lines of code — I'll just check out a branch where I've already implemented it. Okay, so we can see that the open-image function now checks for an HTTP prefix and, if so, downloads the image — very few lines of code. Then I run it again, and we have an image. We can also run it on an image of myself. So everything works properly, we've added some code, and we have a good experience here — everything works out of the box, even though this is an older version of Go without modules that needs to be cloned into a specific directory.
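A hedged sketch of the kind of Dockerfile described here — the Go version matches the talk, but the repository path and the exact commands are my assumptions:

```dockerfile
FROM ubuntu:20.04

# basic tools plus Go 1.16, the pre-modules SDK the talk mentions
RUN apt-get update && apt-get install -y curl git ca-certificates \
 && curl -fsSL https://golang.org/dl/go1.16.15.linux-amd64.tar.gz \
    | tar -C /usr/local -xz

ENV GOPATH=/go
ENV PATH="${PATH}:/usr/local/go/bin:/go/bin"
# this is a pre-modules project, so switch modules off
ENV GO111MODULE=off

# dep, the early Go dependency manager the project uses
RUN go get -u github.com/golang/dep/cmd/dep

# without modules the code must live under GOPATH, so symlink the
# mounted workspace into the expected import path (path is illustrative)
RUN mkdir -p /go/src/github.com/example \
 && ln -s /workspaces/image-to-ascii /go/src/github.com/example/image-to-ascii
```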
Actually, in most cases, opening the repository and getting it working properly with the tools can take more time than implementing such a small fix. Okay, let's get back to the presentation. What we've seen here: a development container that's integrated with the SCM — I could check out a branch; remote code editing; a remote terminal. We configured the environment — we set up the runtime SDKs and the shell extensions we want, plus some tweaks with environment variables and paths, like disabling Go modules and the symlink. And we defined the extensions we need — in this case a single one, the Go extension. So that was simple. Let's try something a bit more complex: a web server. It's a simple Flask app that sends emails, based on a SendGrid example. We have some new challenges here, such as running and interacting with a server, managing the application's Python dependencies, and managing secrets, because we need something like a SendGrid API key. I'll also show how we can debug it. So let's go to the next example, the simple email sender. We have here a dev container with Python; we can see we have the Python extension. We have a Dockerfile, and in addition to Python — which is already there because it's the base image — we install an additional tool called SOPS. SOPS is a tool for encrypting secrets so that we can put them inside the repository. How does it work? I have here an encrypted JSON that contains the SendGrid API key and the default sender address, and you can see the data is encrypted — we can't use it as-is, but I can decrypt it using `sops -d` and write it to a .env file. I actually have that script running on init, so let's do it. What's happening here? The .sops.yaml definition specifies the PGP key we support — and it doesn't have to be PGP; it can also connect to a cloud encryption service such as Azure Key Vault or AWS KMS.
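As a sketch, the .sops.yaml might look like this — the fingerprint and paths are placeholders, and the commented lines show the cloud-key alternatives just mentioned:

```yaml
# .sops.yaml — which keys may encrypt/decrypt which files
creation_rules:
  - path_regex: secrets/.*\.json$
    # PGP fingerprint (placeholder)
    pgp: "FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4"
    # or, instead of PGP, a cloud encryption service:
    # kms: arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000
    # azure_keyvault: https://example-vault.vault.azure.net/keys/sops-key/some-version
```

Decrypting then looks like `sops -d secrets/secrets.encrypted.json`, optionally with `--output-type dotenv` to produce a `.env`-style file.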
And now that created the .env file with the right values. To show what this looks like, I'll take another example file we have here and decrypt it — it contains some secret number. Instead of the real secret, I'll decrypt the example, and we see that we get the data with the secret number from before. (It didn't work the first time because that format isn't compatible with the .env output I asked for.) Very simple — that's the value, the meaning of the universe. Okay, so let's try to run our Python app. You see that I already have a run configuration here — that's because launch.json defines how to run and how to debug the application. I'm going to run Python debug, and the application is running. Well, not quite — the application is running on a port inside the container, so I need port forwarding to be able to access it from my local machine. We can see it was already done automatically, but if we look here at the ports view, port 5000 in the container is bound to port 5000 on my local machine. So if I go here, I have the application running, and I'll send an email to a test address at Mailinator. Let's check Mailinator — great, we have the email, and it was sent. So the application is working and I'm interacting with it from my local machine. I can also set breakpoints: let's go to app.py, put a breakpoint here, and send again. It's stuck — probably because of the breakpoint — and we see it hit here. So we have a good debugging experience and it works out of the box, which is awesome. So let's kill it and talk about it a bit. I'll go back to the slides. We saw the use of secret encryption: the secrets still sit inside the repository, but encrypted.
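The launch configuration for an app like this can be sketched as follows — the module, port, and environment values are assumptions based on a typical Flask setup, not the repository's actual file:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Flask (debug)",
      "type": "python",
      "request": "launch",
      "module": "flask",
      "env": { "FLASK_APP": "app.py", "FLASK_ENV": "development" },
      "args": ["run", "--host=0.0.0.0", "--port=5000"],
      "jinja": true
    }
  ]
}
```

In devcontainer.json, a matching `"forwardPorts": [5000]` asks VS Code to bind the container's port 5000 to localhost automatically.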
We are using Mozilla SOPS for encrypting and decrypting the secrets. In this example I'm using a GPG key — if I run a GPG list-keys here, you can see that I have the GPG key installed on this machine, so I can use it to decrypt the data. GPG keys are nice for that, but there are many problems with GPG, so in most cases it's better to integrate with a cloud encryption-as-a-service solution such as KMS or Key Vault. The metadata is saved unencrypted, which makes it easy to do diffing and check history. This is a practice that's popular when using GitOps: in GitOps we want our deployment configuration stored in Git as a single source of truth, but in many cases we need secrets, so one solution is to encrypt the secrets, put them in the repository, and have them decrypted by the CD solution via push or pull. Because it's a popular practice, there are other solutions for achieving this as well. Additionally, we saw some other IDE settings, like the launch configuration we used to define how to run the Python app, which we put inside the dev container; and port forwarding — we can forward ports to localhost, and the port-forwarding definition itself is also part of the devcontainer.json file. Now, the next project is more complex — a real project, not a simple app. It's an open-source, web-based RPG for organizing your life, called Habitica. I remember playing with it a few years ago; it has gotten a lot bigger and more complex, and it brings new challenges. It's a huge project — we have a front end, a back end, and a database — and we're going to run it. The first thing: because this is a full-stack app, instead of a single Dockerfile in the dev container we're going to use a Docker Compose file, which will contain the dependencies we have.
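When the environment is Compose-based, the devcontainer.json points at the Compose file and names the service the IDE should attach to. A sketch with illustrative names (the MongoDB extension ID is real; the rest are assumptions):

```json
{
  "name": "habitica-dev",
  "dockerComposeFile": "docker-compose.dev.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "extensions": [
    "mongodb.mongodb-vscode"
  ],
  "forwardPorts": [8000]
}
```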
So we have the dev container of the app here, but additionally we have the DB, Mongo Express for easily viewing the DB documents, and Traefik, which is a reverse proxy — I'll talk later about why we need one. Those are basically our services. And in launch.json we have configurations for the client, the server, Storybook — which is for the UI components — and also for the documentation site. I'm going to run each of them. Let's run the docs and the server; I see that the client and Storybook are still running, so no need to start them. Let's check it out. The reason I wanted to use a reverse proxy is that we're running four services, and I want to serve them all from a single port — a reverse proxy lets us route to the right site based on the host. So let's see what it looks like; I'll open them in parallel. The first thing to load is Mongo Express. Here we can see the database — we have a users collection, and you can see that I already have a test user here; we'll talk about how in a minute. We have the documentation site here — that's also part of the repository — Storybook for looking at the components, and here we have the application itself. I see that I'm already signed in, so I'm going to log out and log in again. So how do I have a user in this application?
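The host-based routing can be sketched in the Compose file with Traefik labels — service names, hosts, and ports here are illustrative, not the repository's actual configuration:

```yaml
services:
  traefik:
    image: traefik:v2.5
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:8000
    ports:
      - "8000:8000"
    volumes:
      # Traefik discovers services by watching the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro

  client:
    build: ./client
    labels:
      - traefik.enable=true
      - traefik.http.routers.client.rule=Host(`app.localtest.me`)
      - traefik.http.services.client.loadbalancer.server.port=8080

  mongo-express:
    image: mongo-express
    labels:
      - traefik.enable=true
      - traefik.http.routers.db.rule=Host(`db.localtest.me`)
```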
It's pretty simple: I've created a script that runs as the dev container's post-create command and inserts data directly into Mongo. The data contains a test user, some tasks, and some group data — so every time someone creates the development container, they get initial data. It's called data seeding. So if I go here and try to log in, I have a test user, and the application is running; as you saw before, the user is already initialized with some data, so we see a list of tasks here. The page takes some time to load — I think it's several tens of megabytes — so it can take a while. Here we see the tasks, and I can change them. So that's pretty awesome. And notice that we have four different applications exposed on the same port — how does that work? Well, we use a wildcard local DNS, like localtest.me. If I open a command line here and run an nslookup, you'll see that it resolves to 127.0.0.1 — basically every subdomain of localtest.me goes to my localhost. On localhost I have a single port, but based on the subdomain we route to the right place. How does this magic happen?
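The seeding itself can be as simple as a JSON fixture plus a post-create hook — both below are illustrative sketches, not the repository's actual files:

```json
[
  {
    "_id": "test-user-1",
    "auth": { "local": { "username": "test", "email": "test@example.com" } },
    "balance": 0,
    "tasksOrder": { "todos": [], "dailys": [], "habits": [], "rewards": [] }
  }
]
```

A devcontainer `postCreateCommand` could then load it with something like `mongoimport --host db --db habitica --collection users --jsonArray --file seed/users.json`.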
We are using a tool called Traefik, a reverse proxy with a very smart service-discovery system. Based on annotations we add to the Docker Compose file, we can define which host each service should answer on and which port to expose to Traefik — that's basically what's happening here — and Traefik itself is running on port 8000. So I'll just stop the app. Okay, let's go on. That was the full-stack application with Docker Compose and a database. One more thing I haven't shown: we have a MongoDB extension here, so we can also view the documents directly from the IDE — it's built in, and the extensions we want to use are defined in the dev container as well. Okay, sorry for the flickering images. So: we installed some additional tools; we used a DB image of Mongo with Docker Compose; we did data seeding with a basic script — alternatively it can be done by replicating or cloning data from staging or production; we used a reverse proxy to route dependencies to several subdomains instead of ports; we used a wildcard localhost DNS — localtest.me, or xip.io, which is a similar one; and we used Traefik, a very simple and developer-friendly reverse proxy. Now, the next project is personal — naturally, it's Tweak, the project I talked about before, and it's a very complex project. We already use Docker Compose internally for development, because the project contains several microservices, several databases, a messaging system, and cross-service communication. It's polyglot, so we'd need to install lots of different SDKs to provide a good experience. The architecture looks something like this — yes, it's a really complex system — and yet we're going to run it, quite easily even. So let's look at Tweak's dev container. Because we already use Docker, we're going to install a Docker-in-Docker setup in our dev container. What does that mean?
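Just to sketch what that means in configuration terms: today's dev-container spec ships a ready-made docker-in-docker feature. This is my assumption of the shape, not necessarily how the Tweak repository does it (the extension IDs are the real Go, C#, Docker, and Prettier extensions):

```json
{
  "name": "tweak-dev",
  "build": { "dockerfile": "Dockerfile" },
  "runArgs": ["--privileged"],
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "extensions": [
    "golang.go",
    "ms-dotnettools.csharp",
    "ms-azuretools.vscode-docker",
    "esbenp.prettier-vscode"
  ]
}
```

The `--privileged` flag is what allows the nested Docker daemon to run inside the dev container.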
If I do `docker ps`, I have a Docker daemon that is dedicated to this dev environment — it's like running Docker inside Docker. So I'm installing Docker inside Docker, I'm installing some extensions, I'm installing .NET, Golang, Node.js, and Yarn, and I'm installing Tilt, a tool for watching, rebuilding, and pushing services to the Docker daemon. I'll show in a minute what it looks like — you can run it a bit like `docker-compose up`, but with auto-reloading. So we have a dev container with all the relevant extensions — one for Golang, one for .NET, one for Docker itself, Prettier — and we have a script that installs all the npm packages and everything we need. The idea is that Tweak already had a YAML for development that defines all the services, and some services are mocks: for example, we have a mock OpenID Connect provider, to fake an identity provider like Google. Additionally, we use Redis, and we use MinIO, an object-storage solution that is wire-compatible with S3, so we can use it in place of S3, which is one of Tweak's dependencies as well. Okay, so we have Tilt running. The Tiltfile defines how to build all the images; it loads the Docker Compose YAML, and it also supports auto-reloading, as we'll see in a second, for pieces like the UI — it copies the files from my machine into the running container. Notice that if I do `docker ps` here, I see all the Tweak services running. And if I open the Tilt UI, we can see all the resources that are running and how they were built; we can look at the log of each service, for example, and here we have the editor running. I'm going to show an example of editing code in the editor — oh, I have it open here — and I'm going to change the welcome message. Sorry for the spoiler — so that's what it looks like: welcome to Tweak. And
I'm going to change the message, change the font size to 60, change its color, and save. We see that it changes instantly, and we get a great developer experience just by opening the dev container and running `tilt up` — and that goes for all the services. It's pretty amazing — I wish I'd had this kind of solution a few years ago; we would have had a much better hackathon with it. It's really amazing that you can go to a complex project like Tweak and just start developing, and everything works — not just for the editor, but for all the components as well. So I'll go back to the slides. In this example we saw Docker-in-Docker. There are several techniques for running nested containers; in this example we use actual Docker-in-Docker — we develop in nested containers and use Tilt and Docker Compose for rebuilding, watching, and hot reloading. Naturally, because we're running inside containers, and our code runs inside development containers that require rebuilding, things get a bit slower — but it's totally worth it. Additionally, we saw an example of mocking cloud dependencies: in Tweak we use a Docker image of Redis for the database where in production we'd use an equivalent cloud solution; we use MinIO as a wire-compatible alternative to S3; and for OpenID Connect, an OIDC mock server. We can also mock cloud dependencies with full frameworks like LocalStack — or encrypt credentials and connect to dedicated tenants, or do dynamic provisioning; that's also an option if we want real cloud dependencies. And I'm going to finish with the last example, which is Kubecost, a tool for managing Kubernetes costs. The reason I'm showing this example is that it uses Kubernetes — which is pretty cool — and we need a Kubernetes cluster, a metrics server, and Prometheus. So I'm going to shut down Tweak, because it's quite heavy, and we'll move to the last example. Here I'm in the Kubecost dev container, and you'll notice that I have Kubernetes running here — I have
kubectl available, and I also have Prometheus here — in the monitoring namespace, I think. Like before, we have a Dockerfile, but instead of installing Docker-in-Docker we install kubectl, Helm, and another cool tool called k3d. k3d is a tool for creating clusters of K3s, a very lightweight Kubernetes distribution. We can run `k3d cluster list` and see the cluster — and it's actually running inside Docker, so if I do `docker ps` I'll see the K3s image running, K3s v1.21. That's pretty amazing: we have Kubernetes running inside Docker. We also have a registry for pushing images, so Tilt can integrate it into the build-and-watch loop, which is amazing. And we have a Prometheus YAML that defines how to run the Helm chart. One cool thing about K3s is that it has built-in support for Helm charts: there's a custom controller and a custom resource called HelmChart, so we can declare Helm charts declaratively, which is pretty cool. And I'm going to run it the same way, with `tilt up` — but instead of connecting to Docker Compose, Tilt interacts directly with Kubernetes. The same applies: if I change code, it will rebuild the image and serve it. Just so you notice: the contributing guide says that to build the product we need to install Kubernetes and Prometheus, build the image, create a namespace, apply it, and do port forwarding — and every time I develop I'd need to repeat these manual steps and rebuild the image. But here I'm doing something else entirely: everything happens automatically thanks to Tilt, which is pretty awesome. I need to refresh it here — so here we have Tilt running with the cost-model and the UI. We can see that the API works — it serves cost data, and we can see that we have data — and we also have the UI here, and I'm going to show that it's working and
integrated — it works against the relevant back end here. Now, the nice thing is that while the cost-model is a Kubernetes component, the UI is a local resource, so it's a bit lighter in that regard; we can see in the Tiltfile that we have a local resource and a Kubernetes resource. So that's about it — we can see that we have all the right ports here, and it's real, real data. Awesome. So that was the last example; thank you for listening so far. I'll go back. Okay, so the last example was Kubecost, and basically: Kubernetes local development is already difficult — there's lots of fragmentation; we have minikube, Docker Desktop, MicroK8s, kind — and using a single Kubernetes distro and version can make life easier. In this case we used K3s, a minimal Kubernetes distribution. You can push to the dedicated registry that's built in with the K3s installation — or in this case it's actually provided by k3d: the configuration is for K3s, but k3d runs the dedicated registry. And it's stable — the cluster can be recreated easily. Just to showcase that, I'm going to tear down the cluster, then recreate it, and you can see how fast it is. If you've worked with other Kubernetes tools before, you know that creating a Kubernetes cluster can usually take some time; this happens in less than 20 seconds, and it runs inside a container, so it's pretty amazing — I have a working Kubernetes. That's about it. It also comes with a built-in Helm controller, so we can declare Helm charts declaratively, and we still use Tilt for building, pushing, and running. To put this setup simply: we have a Docker host, inside which is a dev container, which runs the registry and a K3s node, which runs containerd, which runs our app — or, to put it visually, like this. So that's it for the demos. Okay, so we saw containers as development environments. The good things about it: the development environment configuration is also source-controlled and corresponds to the application code; the developer machine stays clean;
it can scale well to multiple environments without conflicts; and it can run locally or remotely easily, because it's based on Docker. Our own setup at Lifecycle consists of tens of microservices in Go and in TypeScript, hot-module reloading on the front end, our own Kubernetes custom resources and controllers, and external dependencies — a GraphQL engine, databases, a full-blown CI system, a container registry, and lots and lots of other stuff, CLIs and SDKs. The time it takes to rebuild our environment from scratch is about 15 minutes. The time to build, run, and test code changes is around 10 seconds. The time to onboard a new developer is less than 3 hours — and that includes remotely provisioning a dedicated host on AWS and granting all the right access and credentials. The time to introduce a new tool and update the development environment is less than 5 minutes — and we're constantly adding new CLIs and tools. There are no more "works on my machine" occurrences. There's no strain on the developer machines, because the environments run remotely. Developers need to deal less with secrets, because they're in the repository — no passing them around. And our team works on both M1 and Intel Macs, which is pretty amazing. I hope to optimize it further: build caches, so if several developers build the application, everyone gets better build performance; snapshots for reducing the initial load; and in the future, using a cloud provider optimized for dev machines — location, hibernation, CPU, RAM, and so on. There are some drawbacks. The initial setup can take some time — it took me several hours to add the dev container files to almost every one of the repositories I showed you. There are many tools here, some of them bleeding-edge, so they can break. We have additional code to manage — the dev container, the Dockerfile, the initialization and data seeding. Dev environments aren't standardized yet, so if you use a different solution than VS Code and Docker, there are different standards
for how to do it — so we're coupled in our solution to VS Code, Docker, Git, and Linux. There are also some performance issues, and some security challenges when working between development and production, although they're solvable. Now, can we do it without VS Code? It's probably possible to use terminal-based editors for this, for example, and they should work too. There's also Gitpod.io and Theia, which have similar features with a different configuration file called .gitpod.yml. JetBrains has a solution for remote development, where you run Projector inside the application, the VM, or the container, and it's supposed to stream the environment — I'm not sure how well it works because I haven't tried it, but so many developers love the JetBrains stack that it's definitely worth watching. And we can run a local IDE with Docker mounts, but to be fair, that doesn't work that well. I haven't shown an example of serverless, but it should probably work in many cases — at least the cases where we can run it locally. Basically, we should use a FaaS framework and an SDK that can run locally, and for the PaaS features we can use tools like MinIO or LocalStack or emulators, or something like that. If necessary, we can throw an infrastructure-as-code tool into the mix, like Pulumi or Terraform, and do dynamic provisioning for every development environment we create. All the examples I've shown are mono-repo; I'm not sure what's the best way to approach it in a multi-repo setup — maybe a single dev container with multiple submodules, maybe treating other repositories as dependencies. There are some interesting projects that take the approach of having a Git URL that can be translated easily into a Docker image. As for native mobile — while I'm fairly optimistic about multi-repo and other editors, native mobile is a disaster. The problem exists — I remember engaging in epic battles with mobile IDEs and with Gradle. It might be possible to stream the applications running in an emulator and then we
stream them, but the experience is not optimal. Containers are also optimized for Linux; for running mobile-app emulators we'd need nested virtualization. And the mobile IDEs themselves are tailored for mobile development, so it's very difficult to develop an Android or iOS project without their own development tools. It's easier — or at least possible — with cross-platform frameworks such as React Native or Flutter. Now, this is part of a larger trend we're seeing: we put more and more stuff in the repository — linting, style guides, documentation, OpenAPI specs, CI pipeline definitions, workflows, design systems, and so on. In the end we have self-contained repositories: all the code, tools, knowledge, definitions, and processes related to the project live inside the repository, and Git acts as a single source of truth. The code is more accessible, we're lowering the barrier to entry, the applications are portable, and as developers — as code owners — we can create a fine-grained developer experience, which is awesome. All of this is happening as part of an emerging ecosystem, alongside remote development environments, cloud IDEs, and PR environments — which is one of the things we're offering at Lifecycle, specifically PR environments. I've shown many tools here, different challenges and solutions and what to use. All the resources and examples will be uploaded to the talk page, to my talks repository, and to my Twitter account, so you can check them out. Thank you very much.