Okay, let's get started. My name is Alexander, I work for Mirantis, and I've been driving the artifact repository effort in Glance for about a year already; the official announcement was made about six months ago at the Paris design summit. Here I'm going to talk about how you can use the Glance artifact repository to deliver applications to the end users of your cloud.

When we speak about applications in the cloud, we usually mean different components designed to work together. The simplest way to deliver an application is a single VM image with pre-installed, pre-configured software: you just run a virtual machine and use it. Sometimes we do have such things, but usually you don't need a cloud to run software that simple. In a real-world situation, when you have production-grade services, you have to run multiple virtual machines using different resources, configured to interact with each other, and the software deployment becomes a complicated process. So it's not just a single VM: it's usually a VM image, plus a binary software bundle needed for the deployment, plus orchestration scenarios (you may use Heat, or something else, to orchestrate all of that). Then you have to configure the software on each VM to interact with the other VMs and with the cloud environment, and finally you have to run maintenance scenarios on top of it all: backups, health checks, failover recoveries, and so on. So this is a bunch of different components which cannot natively live in one place. In OpenStack, images live in Glance, right?
Glance is a major component which has been with OpenStack for almost all of its history. The binary software may be placed in various kinds of package repositories: if you're on Python, then PyPI is probably fine, and there are also the repositories of your Linux distribution. But sometimes that's not possible: sometimes you don't have internet connectivity, or you have something very specific which cannot go into Debian or some other repo. In that case you have to put all that content into some kind of file storage, or onto volumes mapped to your block storage, and then maintain all of it together. Or, if we're speaking about public clouds with internet connectivity, you may want to place your content on a content delivery network, which has its pros and its cons.

So those are just a number of different places to put the components of your application, and they usually do not provide the tooling you actually need to control the whole distribution process. There are proprietary solutions: Artifactory, for example, is a very good one that gives you tooling to publish the artifacts built by your CI/CD system to your end users. But it is not native to OpenStack; it's a third-party solution which is not present in some clouds, and you have to deploy and maintain it yourself. There are others, with their own pros and cons, as well.

So we decided to build a native solution, natural to OpenStack, built and maintained by the OpenStack community, to deliver the assets needed by applications and by cloud use cases to the end users. The thing is that Glance already does a very good job of delivering images to end users, right? It has a catalog, it has tags, it has various kinds of policies.
It has a metadata registry and image properties. So for images these solutions already exist, and we all use them. If it exists for images, let's try and do the same for everything else, in the very same manner and in the very same project.

So we announced the initiative called the artifact repository. I presented a talk about it in Paris, and the initiative has been in active development since about July, so in the Kilo release we almost completed its development in Glance; most of the code has already landed, and I'll touch on that a bit later. The overall idea of the artifact repository is that it acts as a Glance for everything else in OpenStack: Glance stores images, which are consumed by Nova, and the artifact repository stores other kinds of objects, which may be consumed by other OpenStack projects. These may be Heat templates, Mistral workbooks, Murano packages, Magnum resources; Solum plan files also used to require some kind of catalog. So a lot of different components and sub-projects in OpenStack needed a catalog to store their entities, their assets.

This is not only about applications: a Heat template on its own is not an application, it's just an orchestration entity which may or may not be used to deploy an application, and the same goes for almost everything else. But the key point is that the artifact repository is pluggable, and the type of entity which can and should be placed in it is entirely user-definable, deployer-definable. A type is a plugin: Python code which defines the structure, the metadata schema, and the various kinds of logic for an artifact type, for an asset type.
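To make that concrete, here is a minimal, purely illustrative sketch of what an artifact type definition could look like. This is plain Python, not the actual Glance plugin API; the class and field names (ApplicationArtifact, blobs, dependencies) are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical artifact type: searchable metadata on one side,
# named binary blobs and references to other artifacts on the other.
@dataclass
class ApplicationArtifact:
    # Metadata: browsable and filterable in the catalog
    name: str
    version: str
    license: str
    description: str = ""
    maintainer: str = ""
    tags: List[str] = field(default_factory=list)
    # Binary data: named blobs (VM image, Heat template, scripts, icons...)
    blobs: Dict[str, bytes] = field(default_factory=dict)
    # Names of other artifacts this one depends on
    dependencies: List[str] = field(default_factory=list)

app = ApplicationArtifact(
    name="wordpress",
    version="4.2.0",
    license="GPL-2.0",
    description="WordPress blogging platform",
    dependencies=["mysql"],
)
app.blobs["deploy.yaml"] = b"heat_template_version: 2014-10-16\n"
```

The point is the split the talk describes: the metadata fields end up in the searchable catalog, while the blobs carry the actual payload of the application.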
So it becomes possible to define your application, to describe your application, as an artifact. But first of all, what is an artifact? That's a slide from my previous presentation, since probably not all of you were in Paris last year when I presented it. An artifact is defined as an object consumable by OpenStack services. It's not just an arbitrary chunk of data; it's not your pictures, it's not SMS messages. It is an object consumed by OpenStack services, which contains binary data and has metadata describing its various properties and capabilities. The structure of these properties and of the binary data is well defined, defined by the developers of the projects who are going to consume it.

So if we speak about your application, and you are the developer who is going to have that application consumed, deployed on the end user's cloud, and run, it's your responsibility to define the structure, the data and the metadata, of your artifact type. You publish it in the artifact repository, you make it available to your users in one way or another (I'll get to that), and then it becomes immutable. Nobody can modify it, nobody can change it in some way unexpected by you and by your users. So as soon as your users use the artifact repository to discover, find, and locate your application, they may be sure that this application is the exact thing you placed there.
It is exactly what you delivered to them. This thing also has versioning support, dependency support, and the various other things you would expect to see in an application catalog.

So when you define an artifact, as I told you, you have to describe its metadata and its data. When we speak about applications, you will probably want to describe properties which are natural to an application. You specify the name, the description, the version of your application in the catalog, the license under which you distribute the application to your users, information about you as the developer, supporter, maintainer, or whoever you are for this application; you may publish website links, and you may add any information you think is useful for your users. In the end all this data becomes available in the catalog; it is browsable and searchable, and users are able to filter the applications in the catalog. For example, if they want GPL-only applications, they may filter by license. Everything you specify as application metadata may be used to help you and your users.

And then you define the data. As I said, a number of resources compose an application. VM images are a good example: they may be a crucial part of your application, because sometimes applications require the software to be pre-installed on the virtual machine. Then, if you want to orchestrate the deployment, you'll have Heat templates, which are very specific to your environments, your application, and your deployments, so you want them bundled with the rest of your application. And there are configuration scripts of various kinds; it depends on what tooling you prefer for configuring your VMs.
These may be simple shell scripts, or you may use Puppet, or Ansible, or Salt, or whatever you like. Then, when you are done defining the application deployment scenarios, you want the application to be displayed in a catalog somehow, so you may want to bundle icons, screenshots, or whatever graphical resources you want displayed in the catalog, in the dashboard, or in whatever tool you use to show the application to users before they deploy it. That's also part of your application, and it becomes part of the binary data when you define your application as an artifact. Then there is various not-necessarily-binary, textual data: you may include user manuals, end user license agreements, help files; once again, anything you want. It's up to you how this data is used on the end user side, but the catalog provides you the tooling to describe the data structure for it.

And then there is more. It's usual to have requirements; applications rarely go alone, right? Most catalogs have the concept of requirements between prerequisite libraries and the end applications, and the artifact repository supports that as well.
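Before moving on to dependencies, the catalog-side filtering mentioned earlier (for example, "show me GPL-only applications") can be sketched in a few lines. This is plain Python over an invented in-memory list, not the real artifact query API, and the field names are assumptions.

```python
# Hypothetical in-memory catalog; the real repository stores this
# metadata server-side and exposes a different query syntax.
catalog = [
    {"name": "wordpress", "version": "4.2.0", "license": "GPL-2.0"},
    {"name": "mongodb",   "version": "3.0.3", "license": "AGPL-3.0"},
    {"name": "nginx-app", "version": "1.8.0", "license": "BSD-2-Clause"},
]

def filter_artifacts(entries, **criteria):
    """Return the entries whose metadata matches all given key/value pairs."""
    return [a for a in entries
            if all(a.get(k) == v for k, v in criteria.items())]

# A user asking for GPL-licensed applications only:
gpl_only = filter_artifacts(catalog, license="GPL-2.0")
```

Any declared metadata property works the same way, which is why the talk stresses putting licenses, maintainers, and tags into the metadata rather than into the binary payload.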
So when you design the artifact type for your application, you specify the various dependencies this application may have on other applications, and on other artifacts which are not applications on their own but may be required for your application to run. On the previous slide I had a VM image as a data object, and that's natural when the image is very specific to your application and cannot be used without it. But in some cases you are using generic images which are not part of your application, yet are still very valuable, very usable, for it. In that case you may want these two entities to be two independent artifacts, with independent life cycles and independent entries in the catalog, but you still want a dependency between them. Then, when your end user downloads your application from the catalog, the catalog will provide a hint in the API or in the UI, and will provide a way to download all the dependencies together, the same way PyPI does when you, as a Python developer, require a Python package from that repository: it downloads all the requirements and populates your virtual environment.
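That pip-style "pull everything the application needs" behavior amounts to a transitive-closure walk over the dependency graph. A minimal sketch, with an invented in-memory graph standing in for the repository:

```python
# Hypothetical dependency graph: artifact name -> names it requires.
deps = {
    "wordpress": ["mysql", "apache"],
    "mysql": [],
    "apache": ["openssl"],
    "openssl": [],
}

def closure(name, graph):
    """Return the artifact plus everything it transitively requires."""
    seen, stack = [], [name]
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.append(current)
            stack.extend(graph[current])
    return seen

# Downloading "wordpress" would fetch this whole bundle:
bundle = closure("wordpress", deps)
```

The real repository also has to handle versions and visibility on each edge, but the shape of the operation is the same.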
That's pip for you. And of course, all the components linked by these dependency relations may be versioned, so you don't have to link one particular VM image to your application. You may specify that your application requires the latest Ubuntu image, or even the latest Ubuntu long-term support release, with some sophisticated query defining the metadata which should be present on the artifact you are referencing. The reference is established, and when your end user wants to download the application, the dynamic query is executed at that moment and the appropriate requirement is downloaded; once again, very similar to pip, if you're familiar with that piece of software.

And last but not least, when the application is published in the catalog, you may want some other actions to be executed. First of all, you want to make sure that all the data which is part of the application, data or metadata, is valid. It may be validated globally, or it may be validated for the particular OpenStack cloud, the particular environment where this application is going to be used. So the particular catalog may validate the contents of artifacts, and thus of your applications, according to the current state of the cloud. For example, if your application requires an image which is not present in that particular deployment for some reason (the user didn't upload the prerequisite, or it was forbidden by policies), the validation will detect that and mark the artifact as invalid, so your users will not be able to use that application. The logic for this validation may be extended and customized by you. As I said, artifact types are defined by Python plugins, by programs, so the logic is just regular Python methods, which may be executed synchronously or asynchronously, with various kinds of checks and policy enforcements, allowing you to specify custom checks and workflows. For example, a very common case is to compute and validate a digital
signature, to validate the authorship. If you download an application from some external source (I'll touch on that a bit later), you, or your users, may want to make sure that this application was indeed published by the person who claims to be its publisher. Because, you know, I may publish something of very bad quality and state that it's written by, I don't know, Bill Gates or somebody else, and everybody will believe me. Sometimes you need to verify the authorship, and the common practice in today's world is to use public keys and digital signatures for that. You may define the signing logic, and you may define the validation logic which your end users will use to make sure that the application is indeed authentic and safe. You may also add virus checks, appropriate-content checks, anything you think is suitable for your kind of application. Also, you may integrate with various kinds of third-party services: if you want to send an email notification about the application being published, you may add that custom logic hook and the catalog will do it for you.

So, in your cloud you may organize the delivery of your applications between the tenants of your cloud. OpenStack is a multi-tenant environment, and the sharing starts once the application is published.
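The publish-time signature check described above can be illustrated with a standard-library-only sketch. A real catalog would verify an asymmetric (public-key) signature; a shared-secret HMAC stands in here so the example stays self-contained, and all the names are invented for illustration.

```python
import hashlib
import hmac

# Assumption: publisher and repository share this key out of band.
# Real deployments would use public-key signatures instead.
SHARED_SECRET = b"publisher-secret"

def sign_blob(blob: bytes) -> str:
    """What the publisher runs before uploading the artifact data."""
    return hmac.new(SHARED_SECRET, blob, hashlib.sha256).hexdigest()

def validate_on_publish(blob: bytes, signature: str) -> bool:
    """What the repository hook could run before activating the artifact."""
    return hmac.compare_digest(sign_blob(blob), signature)

payload = b"heat_template_version: 2014-10-16\n"
signature = sign_blob(payload)

ok = validate_on_publish(payload, signature)               # untouched blob
tampered = validate_on_publish(payload + b"x", signature)  # modified blob
```

A hook like this is exactly the kind of "regular Python method, run synchronously or asynchronously" that the plugin mechanism is meant to host.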
Its publisher may make it available to other users, to other tenants of OpenStack. The application may be private to that particular tenant, or global for the whole cloud, or something in between: I share the application with some other tenants, and the rest cannot see it. This is very customizable, and it comes with various kinds of policies. For example, without modifying a single line of code, just by changing the policies, you may get app-store-like behavior, where users submit applications for pre-moderation: administrators of the cloud check the applications, make sure they don't violate the rules or policies of the enterprise, and then make those applications available to the other users of that particular cloud. Or you may set up a post-moderation system, where applications become available to everybody immediately, but users may complain that an application is bad or report problems with it; administrators will then temporarily disable it, do some investigation, and either confirm that the complaint is valid and delete the application, or decide it was a false alarm and make it available again. So you have various options for distributing applications between the different tenants of your cloud.

But not only within a single cloud: cross-cloud application delivery is also possible with the artifact repository. For example, the simple case is when you import an application into your cloud from some other location, or export it as a single file and place it somewhere: put it on Dropbox, say, and deliver it to your friends
via Dropbox or whatever else. So applications are packaged into single files, which may be distributed peer-to-peer between clouds. However, the most powerful thing is the ability to build federations of repositories. Think of the DNS system: there is a centralized repo, there are downstream repos, the leaves of the tree correspond to particular clouds, at some points in the tree there are central repositories of applications for particular companies or enterprises, and at the top of the tree there may be some global repository, like a community-blessed OpenStack repository of applications.

If you saw the keynote on Tuesday, you probably saw that the OpenStack Foundation has announced apps.openstack.org, the Community App Catalog. It's an OpenStack Foundation initiative to deliver various kinds of assets, artifacts, applications to users. It doesn't use the artifact repository yet, but it may; and even if it doesn't, it may become the center which distributes applications to the various artifact repositories around the world, which in turn redistribute them to particular clouds. The actual catalog at apps.openstack.org is in beta now, and it's up to us, and up to you if you're developers, to affect the way it evolves. If you're interested in building this federation of application repositories, please take part in the design sessions, please take part in the community work on that initiative, and we'll build a better application delivery network together.

Now, the current state of the thing. That's actually a tricky question: this session was announced around February, when the talks were being submitted, and that was in the middle of development. For now, most of the code has landed in Glance; most, but not all of it. So unfortunately I cannot tell you now to just take the
latest Glance master branch and use it to deploy the artifact repository. Unfortunately, a couple of commits are still being reviewed, but we really do plan to see them landed by the liberty-1 milestone. So in about a month we'll have this thing fully functional and working in the Glance master branch.

The API status is experimental, which means we still may change it, and we still may adopt various best practices and the latest happenings in the OpenStack community. If you attended the API working group sessions, you saw that the guidelines are evolving; they all work very hard to make OpenStack APIs better, and we want to support that as well. Initially the artifact repository API tried to copy and replicate the existing Glance APIs, but the world doesn't stand still: the APIs are evolving, and we want to make sure that the artifact repository API is on the cutting edge of OpenStack API standards.

An important point is that the current Glance images should eventually become artifacts as well. They are not artifacts now, and they will not be artifacts in Liberty; however, eventually we want all the assets across all OpenStack clouds to be artifacts of a single artifact federation. That's our ultimate goal, and it will give us synergy between the different projects; in this case the artifact repository will be an integration point for all the projects across the OpenStack ecosystem.

Because of that, there is a question which doesn't have an answer yet: should the thing remain in Glance? We have incubated there.
That's true; however, Glance is mostly about VM images, and now we have an open discussion in the Glance community about whether we need to separate the artifact repository into a standalone project and, at some point in the future, deprecate the existing legacy Glance image listing and image browsing APIs. Glance would then just do what it does best (image-related manipulations, data conversions, inspections, and all that stuff), and the artifact repository would be the catalog of everything, including images.

There is a contributors meetup for Glance tomorrow. If you have ATC badges and you are willing to take part in contributing to Glance and to the artifact repository, please feel free to stop by at the contributors meetup tomorrow. Once again, you may take part in shaping the future of artifacts, and in some sense this is a very important moment in shaping the future of OpenStack, as I believe.

So I think that's all I wanted to tell you about artifacts and how you may use them for delivering your applications. There are many questions which remain unanswered yet, and I could talk much longer about some technical details, but I would prefer to hear your questions first and answer them. Any questions? Wow, in this case, thank you for your attention, and please join the Community App Catalog initiative, because this may be the future of OpenStack applications on your clouds.