Hi, my name is David Fowler. I'm an architect on the .NET team. And hi, my name is Justin. I'm a developer on the ASP.NET Core team. Today we're going to talk about Project Tye.

All right, so let's talk about the problems that exist today when developers are trying to build microservice applications. Several questions tend to come up when you're developing these applications. How do I build and run multiple services locally? My application may consist of multiple services, like a front end and a back-end API, that I need to run on a single machine. Similarly, I may need to run dependencies like Redis or MongoDB locally as well. How do I run all of those things together, and have it all work locally, easily?

How do I debug those services? When I'm running one service, it's pretty easy to run my application and attach a debugger. But what if I'm running two or three services? How do I get that whole end-to-end working so I can debug these services while running locally?

Also, how do I do service discovery? Imagine I'm running services A, B, and C locally, or a front end, a back end, and a cache. How do I get the addresses of those services when they run on my local machine?

How do I view logs for each service? Logging is super important in microservice applications, and I want to be able to exercise my logging and see what gets logged in development, so that when there's an outage in production I can look at the logs and figure out what's going on. And these are just a few of the questions; several more tend to come up when you're developing these microservice-based applications.

If you look at today's landscape in the Kubernetes world, or the microservices world, there's a camp of people who will try to get you to use a cluster, either locally or remotely, for development, because the idea is that the cluster mirrors your production environment. That's pretty involved, because it means that to do any kind of development with my microservices, the first step is to spin up a cluster: locally, using something like MicroK8s or the Kubernetes that comes with Docker Desktop, or some other distribution of Kubernetes that's been shrunk down to run on a single machine. Or I can provision a cluster in one of the clouds, like Azure, GKE, or Amazon, and use that remote cluster as my development environment for microservices.

So once I've figured out how to get a cluster and installed it, I have to get Docker and learn how to write Dockerfiles. Remember, all I'm trying to do is run my application locally to test these various things. I need a container registry. I need to learn how to write Kubernetes manifests, because I essentially have to deploy just to run a simple application. And then I can use a tool like Skaffold, which automates building and pushing container images and applying those changes to Kubernetes. So the barrier here is really high: before I can do any kind of development with multiple services, I have to provision a cluster just to get an environment that gives me features like service discovery that I need when I'm building microservices.

Our solution is Project Tye. Tye is a toolchain that tries to give you enough of the features of Kubernetes, not all of them, but enough to do development.
It provides a local orchestrator for development time, which offers things like service discovery and a couple of other features we'll show you as we go. It also makes deploying microservices to Kubernetes pretty easy. We believe that the default experience of going from my application to Kubernetes should be easy: there's a lot of boilerplate we can infer from your project types, like the similar concepts of a service and a deployment, without you having to describe them in a manifest. We believe we can automate getting started for 90% of those cases. And it lets developers keep using their existing tools, so you don't have to learn anything new. I don't have to learn Docker to run applications locally; I can run my applications like normal, without worrying about a cluster or any new tools, and I only have to learn new tools when I opt in to more of the stack.

So let's get into a demo of Tye. The application we're using today is a simple app with a front end and a back end; it's about the simplest hello-world scenario we could come up with that has two microservices. The front end calls into the back end to get forecast information about some upcoming weather. At the command line, in the root of the directory, we can see that our app consists of these two services, a front end and a back end. These are two simple .NET apps created from basic dotnet templates; really nothing special going on here, except a few things we'll describe in a bit.

We'd like to be able to run this app in its entirety without dealing with Docker containers: just two processes running locally. Typically, people solve this problem in the .NET world either with a script that runs all the apps, or by configuring Visual Studio or VS Code with multiple startup projects. Tye solves this with a single command, tye run. This runs all of the services here and also does a few additional things.

As you can see, both the front end and the back end are running. Tye also starts up a dashboard for you to look at all of your applications. The dashboard shows the entire topology of the application: it has binding information, it has logs, it shows how many replicas there are and how many times each replica has restarted, along with some extra information. The logs here show the output of the application for both the front end and the back end. Now, if you click on one of these bindings, you go to the app itself. The app here is the front end, and it's calling into the back end to get this weather information. To prove that, we can go to the back end directly, and if we go to the right URI, /weatherforecast, you'll see the JSON response that's being sent from the back end.

Another thing that Tye solves is port conflicts. The ports at the end of these HTTP and HTTPS bindings are random, and they're guaranteed to be unused. So when you start an app with Tye, you never need to worry about a port conflict, and as you grow the number of services you run locally, this becomes a larger and larger problem.
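Before we move on, here's roughly what the front end's call to the back end looks like. This is a hedged sketch, not the exact demo code: the WeatherClient name is an assumption, though the /weatherforecast route and the forecast shape come from the standard dotnet template.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// The shape produced by the default dotnet weather template.
public record WeatherForecast(DateTime Date, int TemperatureC, string? Summary);

// A hypothetical typed client the front end might use to call the back end.
public class WeatherClient
{
    private readonly HttpClient _client;

    // The HttpClient's BaseAddress must point at the back end; which host
    // and port that is happens to be the interesting question here.
    public WeatherClient(HttpClient client) => _client = client;

    public Task<WeatherForecast[]?> GetForecastsAsync() =>
        _client.GetFromJsonAsync<WeatherForecast[]>("/weatherforecast");
}
```

Notice the one thing this sketch leaves open: where the HttpClient's base address comes from.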
With that, a few interesting questions come up. First, how did the front end actually know what the URI of the back end was? You can't just hard-code it in the front end; you need some way to make it discoverable. So how do you actually implement service discovery in Tye?

What Tye does is, before each process starts, it injects environment variables with a well-known scheme into that process, containing the binding information and/or connection string for each service. This lets each service know about all the other services. So when you execute tye run, Tye knows which host name, port, and protocol is being used for each service, and it injects those environment variables. For .NET apps, we also provide a configuration model. If I go into VS Code and take a look at the front-end application, we can see there's a method we added on top of this configuration model which gets the URI of the back end. This is effectively just parsing the environment variables we injected into the process to get the back-end URI.
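Here's a sketch of what that wiring might look like in the front end's Startup, reusing the WeatherClient sketch from above. GetServiceUri is the helper from Tye's configuration extensions package (Microsoft.Tye.Extensions.Configuration at the time of this talk); treat the exact package name and variable names as things to verify against the Tye docs for your version.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // GetServiceUri("backend") turns the environment variables Tye
        // injected for the "backend" service into a ready-to-use Uri.
        services.AddHttpClient<WeatherClient>(client =>
        {
            client.BaseAddress = Configuration.GetServiceUri("backend");
        });

        // Without the helper, you could read the raw variables yourself,
        // following the well-known scheme described above, e.g.:
        //   Environment.GetEnvironmentVariable("SERVICE__BACKEND__PROTOCOL")
        //   Environment.GetEnvironmentVariable("SERVICE__BACKEND__HOST")
        //   Environment.GetEnvironmentVariable("SERVICE__BACKEND__PORT")
    }
}
```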
We also have a VS Code extension. It has a Tye Explorer that shows some extra information and makes it easy to debug Tye applications. If I click here, it brings up the dashboard, but it also sees both the back end and the front end that are currently running. Let's go ahead and add a breakpoint inside the back end, in the controller, and attach the debugger to it. So what's going to happen is, when we send a request from the front end, it will cause the breakpoint to hit. And as you can see, it hit the breakpoint. That's just another reason Tye is really useful for debugging multi-service applications.

So in that demo, we saw a couple of things. We saw that Tye can run all of our services locally. We saw that the front end can discover the address of the back end. We saw that we don't need to worry about conflicting hosts and ports, because Tye handles all of that for you. And finally, we saw the dashboard, which gives a really nice view of everything.

Now let's say we wanted to actually deploy this application to Kubernetes. As mentioned, deploying to Kubernetes isn't that easy. We'd need to add Dockerfiles for both the front end and the back end, build each of those images and version each image every single time, push them to a container registry like Docker Hub, author Kubernetes manifests, and then, I guess, profit: your app should be working. Tools like Skaffold solve some of these problems, but not everything. With Tye, though, a single command can get this application running on Kubernetes without any of that ceremony.

To start, I'm going to use an empty Kubernetes cluster. If I do kubectl get services, you can see just the default kubernetes service is running, and kubectl get deployments finds no deployments. I'm going to run the command tye deploy and pass the interactive flag. What this does is prompt me any time Tye needs some sort of configuration from the user. So it asks me for my container registry, and I give it my Docker Hub account. At this point, Tye produces build output for each of your .NET applications, creates a temporary Dockerfile to build a Docker image from, executes docker build to create that image, and then pushes each of these images to your Docker Hub account.

And finally, after all that's done, the Kubernetes manifests are generated, with your container registry and container image names specified in them, and Tye applies those manifests to Kubernetes so your application is available there. So now if we do kubectl get deployments, we can see that both the back end and the front end are available, and if we do the same with services, we can see that both have associated services as well. If we want to take a look at what the app looks like, we can use kubectl port-forward on the front end. Now if we go into the browser on port 5000, what we see is the same front-end app that we had running locally. I'm going to close that now.

Another thing you may be wondering is how service discovery is handled on Kubernetes. Locally, we had that environment variable injection that put the correct host and port into the application. Well, we do the same thing when we deploy these apps to Kubernetes. If we do kubectl describe deployment frontend, we'll actually see the same kind of environment variables we saw locally being injected. That naming scheme, SERVICE__FRONTEND__PROTOCOL and so on, is what the .NET application uses to build the URI. In Kubernetes, the host name you need ends up just being the DNS name, frontend or backend, and the port is 80.
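In other words, the application code doesn't change between environments; only the injected values do. A small illustrative snippet (the local port is random on every run, so the value shown here is made up):

```csharp
using System;

class ServiceDiscoveryDemo
{
    static void Main()
    {
        // The same lookup works in both environments.
        var host = Environment.GetEnvironmentVariable("SERVICE__BACKEND__HOST");
        var port = Environment.GetEnvironmentVariable("SERVICE__BACKEND__PORT");

        // Under `tye run` locally:  host = "localhost", port = "52124" (random)
        // Deployed to Kubernetes:   host = "backend",   port = "80" (service DNS)
        Console.WriteLine($"http://{host}:{port}");
    }
}
```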
Now, earlier we spoke about the steps you'd have to take just to run a Kubernetes cluster as your development environment. When we think about Kubernetes, we think about the learning curve of just getting started: I want to develop a front end and a back end, and I'm supposed to configure a cluster as my bare minimum so I can mimic my production environment. The complexity is just way too high for getting started. With Tye, our goal is not to hide Kubernetes but to flatten the learning curve, so that you only need to learn the things that are relevant to your application, at the right time. When I start off, I can use the tools I'm used to for running, debugging, looking at logs, and compiling. I can very simply get service discovery and a bunch of core microservice features without having to learn the entire Kubernetes ecosystem and all of its patterns at once. Then, gradually, as you get more comfortable with those tools, you can grow into learning more of the stack and get more control. So Tye smooths that learning curve from development all the way through to deployment.

We showed you a super basic application with Tye, a front end and a back end, but we have a bunch more features in the realm of helping you develop microservices faster.

As you saw before, you can discover your peer services via configuration: we inject the right variables and you read them in your application.

We have support for developing across multiple repositories. Let's say you have services A, B, and C that you need to develop together locally, but they live in different repositories. I can run a command to clone A, B, and C, pull them together into a single Tye manifest, and run them, and I get the same service discovery and other features.

Tye has support for modeling ingress as well. The intent isn't that you'd model your entire application with Tye, but I can model a couple of core concepts like ingress. I can map specific routes to specific services, and a local proxy runs in my local environment before I deploy, so I can test how routes flow into services. And when I deploy the ingress, those rules are preserved all the way to Kubernetes.

I can use Docker for dependencies. In the demo you saw the applications as services; I can also depend on other containers, like Redis or MongoDB, as part of my application, and the same service discovery I used before for services works for container dependencies too (there's a quick sketch of this after this list).

There's hot reload, what we call watch support. I can put Tye into a watch mode where it runs an application continuously, and I can change the application without having to recompile or rebuild manually.

We support sidecars, a pattern made popular by Kubernetes, and we can integrate with things like the ELK stack. We want to build recipes for common microservice patterns. So let's say I have a bunch of infrastructure provisioned by my IT or ops people; I can model that same integration locally in my Tye manifest for development, so it matches my production experience.

And I can generate Kubernetes manifests and Dockerfiles for services. Let's say I want to use Tye, but not all the way through deployment. I can generate the manifests Tye would use for those services and check them into source control, for example if you're doing GitOps. So it's pretty powerful.
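Here's a hedged sketch of that container-dependency flow, assuming a Redis container declared as a service named "redis" in the Tye manifest. The connection-string injection is what the talk describes; verify the exact behavior against the Tye docs for your version.

```csharp
using Microsoft.Extensions.Configuration;
using StackExchange.Redis;

public static class RedisConnection
{
    public static ConnectionMultiplexer Connect(IConfiguration configuration)
    {
        // With a "redis" container dependency in the Tye manifest, Tye
        // injects a connection string that the standard configuration
        // helper reads back out. Locally this resolves to the container
        // Tye started for you; after `tye deploy`, to the in-cluster address.
        var connectionString = configuration.GetConnectionString("redis");
        return ConnectionMultiplexer.Connect(connectionString);
    }
}
```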
One thing we intentionally didn't talk about is Docker Compose. Compose does solve a bunch of these problems, but we believe the learning curve is too steep, because Docker Compose requires containers to begin with. It has really great features, and it solves a bunch of the same problems. We believe Docker is great for dependencies, but having to package your own services as containers before you can get started is too big a hurdle, too big a learning curve, for most people. We think our customers would prefer a smooth on-ramp, where you start with the tools you're already using, then learn containers, then learn Kubernetes, as you come to care about more of the stack. That's where we think Compose fits on the spectrum. It isn't that Compose is hard to learn, but if you aren't concerned with containers yet and you're just trying to run multiple services, you're met with that complexity up front. So Compose sits somewhere between Tye and Kubernetes in complexity, though it works very well for the same kinds of tasks.

Okay, so the future of Tye. We believe Tye is useful in general, and through our customer research we've seen people using JavaScript front ends with .NET back ends and Python for ML workloads in the same application. So we're seeing polyglot services appear even for .NET developers, and we think it would be interesting to support multiple languages with Tye.

We also want to do more Kubernetes integration. For example, we want a recipe for HTTPS. Imagine I could have an HTTPS-enabled environment locally, and then when I deploy, we'd configure cert-manager, Let's Encrypt, certificate rotation, those kinds of things. We want those end-to-ends working from development through deployment, so I can model my environment, test it locally, and test it in production, or at least after deployment.

We want to model more microservice patterns and primitives. What I mean by that is that Kubernetes made a bunch of patterns very popular; sidecars are one that comes to mind. The idea of a sidecar container in a pod happens to have been created by Kubernetes, but it's become a pattern the industry now uses for different things. We want to model the ones we can, so I can run and test them locally and have those models persist into production.

And we want more integrations with CNCF projects. For example, I was trying to run Envoy as a sidecar in my local environment, and that isn't something that's well described or defined; it's normally Kubernetes-only. We want to be able to model more of those things as time progresses.

We have a virtual booth at the conference, so come check it out and learn about what Microsoft is doing with Kubernetes. Thanks a lot.