So today we will talk about a topic which actually originated from within the LF Networking community. I will talk about a bit of the history of this topic, empowering multivendor telecommunications with standardization, so I will go back around six or seven years, to 2016-17. That is when we started discussing these topics, and now things are finally happening. We will get into more technical details, but before we start, maybe we can tell you who we are.

Yeah, thanks Fatih. My name is Andrea Frittoli. I work as a developer advocate and software engineer for IBM, I represent the CDF here and there, and I am the chair of the Technical Oversight Committee and a member of the Governing Board as well.

Thanks, Andrea. My name is Fatih, and I work at the Linux Foundation with the Continuous Delivery Foundation. I joined the Linux Foundation in June 2021, so it has been about one and a half years, but before joining the Linux Foundation I worked at Ericsson for 16 years, and I have been a contributor to Linux Foundation networking projects. I started with OPNFV, the Open Platform for Network Functions Virtualization, which is now called Anuket; after CNTT and all those things, it was relaunched as Anuket. As I mentioned, what we will be talking about in the coming minutes is something we discussed a lot within OPNFV and LF Networking, because we realized that the change happening in the telecommunications industry, especially from a software delivery or continuous delivery perspective, is actually happening everywhere, not just within the networking or telecommunications industry. That helped us get broader collaboration around this topic: interoperability within continuous delivery.

So, telecoms transformation and software delivery. I am sure you have all seen diagrams similar to this one; it is pretty similar to the OPNFV diagram back in the day. In OPNFV we were working on
creating an open telco stack using open source, industry-standard hardware, and industry-standard software. When we started in 2014-15 we were mainly working with OpenStack; that was the underlying platform, and on top of that we were bringing SDN controllers such as OpenDaylight and orchestration such as ONAP and others, trying to integrate those different technologies together. OPNFV was kind of a different project because it was dealing with integration: we were not actually developing those components ourselves, but consuming those different technologies from upstream communities. As we continued doing that within OPNFV, we realized that the integration aspects, or the continuous delivery aspects, of that problem are actually mind-boggling, and we could not solve those problems alone. I see Luca and Gergely joining; they were part of those conversations as well, back in the OPNFV days.

That resulted in us reaching out to all these different communities, OpenStack, OpenDaylight, ONAP, CNCF, and so on, and we said we should perhaps start working on these problems together. Because if you look at open source projects and the open networking ecosystem, there are lots of projects. Some of them are pretty well established.
Some of them are new, with Nephio and Sylva coming up to speed, and this problem will continue to exist. We started working on this topic in 2016-17, and that is why, when I talk about software delivery, I always go back to the OPNFV challenges we were facing at that time. Again, in OPNFV we were trying to bring together these industry-standard components, both hardware and software, and we were having a lot of challenges.

But if you do not think about where this could go, things can look simple. Say there is one vendor, for example, or one community: whatever is developed by that vendor is consumed by the end users, telecom operators in this case. That looks simple; how hard can it be? That was more or less our assumption in 2015-16. But over time we realized that all this disaggregation and openness would actually cause a lot of difficulties for software delivery. The idea with openness and disaggregation is to disaggregate the network functions, starting with virtual network functions and then cloud-native network functions, and that increases vendor diversity, so operators can essentially pick and choose different network functions from different vendors. If there is only one vendor, and everything is physical network functions, you can perhaps work directly with that vendor and establish these pipelines between the vendor's environment and your environment. But as you go through this transformation, embracing openness and disaggregation, you will start interfacing with multiple vendors. With two vendors, you are getting some VNFs and CNFs from one vendor and another set of VNFs and CNFs from the other; things are still not that complicated. But if you think about the coming years, and this is already happening, when the number of vendors increases it becomes very difficult to have these software delivery pipelines established from
vendors to end-user CSPs, and having feedback from end-user CSPs back to the vendors becomes a problem. Why? Because all these vendors are probably using different processes to deliver their software, and probably using different technologies to deliver it. Again, we can give some examples of open source projects: Jenkins, Spinnaker, Argo, Tekton. The CD ecosystem is flourishing as well, and that gives choice to users, but it also brings additional challenges: there is no alignment, no harmonization, no standardization within the ecosystem. This results in a lot of challenges, especially for end users. Vendors may be able to deal with this, because from a vendor's point of view there is one end user; but end users need to interface with multiple vendors, and how to put those pipelines in place is really challenging.

This has been the question we have been trying to answer for the last six or seven years, and it again started us reaching out to other projects and other communities. Release engineering gave us a head start on this under OPNFV, and then we established an initiative called Cross-Community CI within OPNFV. You can see a colleague of ours in this photo, I think from the 2017 Open Source Summit Europe in Prague, where he was talking about what we were doing in OPNFV. Then in 2018 we actually got a lot of support from other communities, and we held a continuous delivery practitioners workshop in Los Angeles, co-located with Open Networking Summit 2018. In that photo you can see there are eight communities, because everyone understood this is a problem we need to work together to solve. That actually increased the awareness within the continuous integration and continuous delivery projects as well, because for networking projects, networking is the priority, so continuous integration and
continuous delivery do not get the attention they need. This helped us get the topic in front of CI/CD communities and start working with end users from different industries together. That is why I said the CD interoperability evolution started within LF Networking; other communities were doing things similar to what we were doing, but we scaled that effort and brought more people into the conversation. And that brings us to the actual topic: interoperability in CD.

Thank you, Fatih. So, after all this history, we have the Continuous Delivery Foundation, and one of the topics that we really care about there is interoperability within the ecosystem. A group was formed within the Continuous Delivery Foundation, the Interoperability Special Interest Group, where we started discussing interoperability, and out of that we also created projects. We are going to talk about one of those projects today, which tackles this problem of interoperability.

Switching gears a bit and going deeper into the CI/CD space: this could be your typical, simple version of a CD pipeline. You start with software, which is stored in some configuration management system. Nowadays it is very common to have software in Git, but there is still software stored in SVN or other systems, and beside Git or SVN, the way developers interact and develop the software is through platforms built around those, like GitLab, GitHub, Bitbucket, Gerrit, and so on. These can already be different tools, with different software coming from different vendors. Then you have all your different types of testing, static analysis, and building; testing and building especially can be very specific to the programming language used by the community developing the software. There are test frameworks, and deployment; deployment appears twice because we deploy to staging and then to production. And even after
production, there is observability and monitoring, which is still part of the entire pipeline. All these different steps and tools produce output: from the testing parts, the test results can go into a database; you can have artifact repositories where the software packages and other kinds of artifacts are stored; log servers; and it is typically very useful to have visualization tools to visualize the entire workflow.

But this structure of pipelines brings different concerns, as I was mentioning, because you may have different tools involved, from different vendors. It becomes complex to maintain and integrate all the different tools together. Even within a single pipeline you have to integrate some of the tools together to be able to provide something like a dashboard, or a common place where all the artifacts are stored. So that is one concern.

Another concern is that the different tools involved in the pipeline internally have their own different, opinionated data models. When you go and collect data from all the different tools, you might be presented with data in different structures; some tools may expose some parts of the data and others may not. So when you bring it all together, you may actually end up losing some critical bits of information, because you have to adapt all this data to a common format.

This also affects scalability. One way you can see the scalability impact is in terms of the organization: if you have different pipelines for different software components within an organization, and they use different tool sets or different pipelines, it is harder to get a broad view at the organization level, because again you have different data formats and different interfaces. So it is harder to scale up to the entire organization. And finally, lock-in effects.
If you start working with a certain tool and want to switch to another one, but you have already implemented a number of integrations, changing that tool for another means you will have to re-implement all these different interfaces, because the interfaces are not standardized.

So I wanted to look at integration versus interoperability. What I talked about until now was all the integration effort, all the engineering effort that has to go into integrating the different tools and making them work together, adapting the data models and so forth. That means the organizations doing that have to bear the maintenance cost for all these bits of integration, and replacing one of the components can be very costly. If we switch to interoperability instead, the different tools have more consistent data models and a common API that can be used to interact with each other. That makes it much easier to replace one tool with another, or to make a number of tools work together. This is the vision that we have of interoperability.

For that we created a project called CDEvents, for continuous delivery events. CDEvents is a specification, a specification for continuous delivery events, and as I mentioned earlier, the project was born out of the work we are doing in the Continuous Delivery Foundation to bring interoperability into the CD space. Just to give you an overview of what CDEvents looks like: we have a mission that serves different areas. Interoperability is the main goal, the ability for different tools to talk through similar interfaces.
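To make the point about opinionated data models concrete, here is a small sketch. The two tool payloads and all field names are invented for illustration (they are not any real tool's API): two CI tools report the same test run with different schemas, and flattening both into one common format silently drops the tool-specific fields. This is exactly the integration cost that a shared event specification aims to avoid.

```python
# Hypothetical payloads from two different CI tools reporting the same run.
jenkins_style = {
    "job": "build-and-test",
    "result": "SUCCESS",
    "durationMillis": 93000,
    "culprits": ["alice"],          # field only this tool knows about
}

other_tool_style = {
    "pipeline": "build-and-test",
    "status": "passed",
    "elapsed_seconds": 93,
    "flaky_retries": 2,             # field only this tool knows about
}

def to_common(record: dict) -> dict:
    """Map either schema onto a minimal common model; extra fields are lost."""
    name = record.get("job") or record.get("pipeline")
    ok = record.get("result") == "SUCCESS" or record.get("status") == "passed"
    seconds = record.get("elapsed_seconds") or record.get("durationMillis", 0) / 1000
    return {"name": name, "passed": ok, "seconds": seconds}

common = [to_common(jenkins_style), to_common(other_tool_style)]
# Both normalize to the same shape, but "culprits" and "flaky_retries" are gone.
```

Every new tool added to such a setup needs another `to_common`-style adapter, which is the maintenance burden the talk describes.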
Let's see if this works. Yeah. So, starting from software configuration management, builds, and so forth: if all the tools generate events, they send signals about what they are doing. This can be used to trigger the tools and to help them talk to each other, and they can do that through a standard interface, which is what we specify in CDEvents.

The second use case is observability, metrics, and visualization. Once you have all the different tools in your toolchain generating events about what they are doing, saying "OK, I started this activity, I completed this activity, and these are the details," you can collect all of this in a single event store. This allows you to have all the data, coming from the different tools, in a consistent format in one place, and through this data you can build an end-to-end view: you can build a dashboard where you can see the entire workflow happening, you can calculate metrics.
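The event-store-and-metrics idea can be sketched roughly as follows. The in-memory store, event shape, and helper below are illustrative, not part of any CDEvents SDK; only the type-naming pattern (`dev.cdevents.<subject>.<predicate>`) follows the specification. The point is that consistent started/finished events are enough to derive end-to-end metrics without any tool-specific adapters.

```python
from datetime import datetime

store = []  # in a real setup this would be a database behind the event store

def record(event_type: str, subject_id: str, timestamp: str) -> None:
    """Append one event, as a collector receiving events from tools would."""
    store.append({"type": event_type, "subject": subject_id, "time": timestamp})

# Two events about the same pipeline run, possibly from different tools.
record("dev.cdevents.pipelinerun.started", "run-42", "2024-01-10T10:00:00Z")
record("dev.cdevents.pipelinerun.finished", "run-42", "2024-01-10T10:12:30Z")

def duration_seconds(subject_id: str) -> float:
    """Duration of one run, derived purely from the collected events."""
    times = {}
    for e in store:
        if e["subject"] == subject_id:
            ts = datetime.fromisoformat(e["time"].replace("Z", "+00:00"))
            times[e["type"].rsplit(".", 1)[-1]] = ts  # "started" / "finished"
    return (times["finished"] - times["started"]).total_seconds()
```

The same store can feed dashboards, DORA-style metrics, or notifications, because every tool reports in the same shape.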
You can generate notifications out of it, and so forth. Another important use case you can address through this event store is supply chain security, because the fact that you are collecting all this data allows you to dig into it and track where a certain artifact came from. If you have a certain deployment happening, or a certain incident happening in production, you can then trace back all the things that happened in your toolchain that led to that point.

A bit more detail about the CDEvents specification. It is organized in different groups of events that correspond to different parts of the typical CI/CD pipeline. We have events related to orchestration, for tools like Jenkins or Tekton: pipelines starting, tasks starting and ending, and so forth. We have software configuration management events; if you think about the webhooks you typically get from your GitHub or GitLab, these aim to standardize the format used for those, so that if all the different tools emit the same type of event, you only need to integrate them once. We have continuous integration events, which are about builds and artifacts. These are especially important when you want to track an artifact, or you want to apply policy, for instance about an artifact being signed, whether provenance is available for it, or what its SBOM is; we are building these kinds of features into these events. There are testing events, which can be used to look at all the tests that have been executed on the software; you can use them as well to enforce policies, for instance if you want to make sure that your software has been scanned for vulnerabilities, or that certain security tests or integration tests have been executed, before bringing it into staging or production.
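As a sketch of what a single CDEvents payload looks like: every event has a "context" block shared by all event types plus a "subject" describing the thing the event is about. The field names below follow the general structure of the CDEvents specification, but check cdevents.dev for the exact schema of your spec version; the concrete values (source, artifact id, versions) are invented for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

event = {
    "context": {
        "version": "0.3.0",                       # example spec version
        "id": str(uuid.uuid4()),                  # unique event id
        "source": "/ci/jenkins/instance-a",       # who emitted the event
        "type": "dev.cdevents.artifact.published.0.1.1",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "subject": {
        "id": "pkg:oci/myapp@sha256:abc123",      # purl-style artifact id
        "source": "/ci/jenkins/instance-a",
        "type": "artifact",
        "content": {},                            # predicate-specific details
    },
}

payload = json.dumps(event)  # ready to be handed to a transport
```

Because the `type` string encodes subject, predicate, and schema version, a consumer can route or filter events without knowing which tool produced them.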
You can do this with this data. And finally, we have continuous deployment and continuous operations events, which are more about the deployment and operations part of the pipeline.

Apart from that, in CDEvents we also define how this data is transported. We rely on a standard project that is part of the CNCF, called CloudEvents. The advantage of CloudEvents is that it provides bindings for a number of different transports underneath, so you can transport CloudEvents on top of HTTP, but also things like MQTT, Kafka, NATS, webhooks, and more.

Apart from the specification, in CDEvents we also provide SDKs; today we have Go, Java, and Python. And we have a few adopter projects: there is a plugin available for Jenkins; we are working on an implementation for Spinnaker; Tekton has an experimental implementation; Testkube, a testing framework, also adopted CDEvents; and we are working with the Harbor and Argo communities, which have RFCs open for integrating CDEvents. We have contributions and support from many companies; I won't read through the whole list there. Our community keeps growing.

In terms of the future, of course we will continue development, and we plan to develop more SDKs, as those are really important for integrating with tools written in different languages; the next ones we have planned are .NET and JavaScript. We are also starting to focus a lot on supply chain security use cases, as I mentioned earlier. We have some events related to that, but we want to enrich the data model to account for more information in that area, like SBOMs, provenance data, and so forth. Further to that, we are working from an architecture point of view within the Continuous Delivery Foundation: we have an initiative about creating a reference architecture.
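To illustrate the CloudEvents transport layer: in the CloudEvents HTTP "binary" mode, the context attributes travel as `ce-*` headers while the body carries the event data. This stdlib-only sketch builds such a request by hand; the collector endpoint URL and payload are made up, and a real setup would normally use a CloudEvents SDK rather than raw `urllib`.

```python
import json
import urllib.request

# Event data goes in the body; context attributes go in ce-* headers
# (CloudEvents HTTP binary content mode).
body = json.dumps({"subject": {"id": "run-42", "type": "pipelineRun"}}).encode()

headers = {
    "ce-specversion": "1.0",
    "ce-id": "271069a8-fc18-44f1-b38f-9d70a1695819",
    "ce-source": "/ci/jenkins/instance-a",
    "ce-type": "dev.cdevents.pipelinerun.finished.0.1.1",
    "content-type": "application/json",
}

req = urllib.request.Request(
    "https://events.example.com/ingest",  # hypothetical collector endpoint
    data=body,
    headers=headers,
    method="POST",
)
# urllib.request.urlopen(req) would deliver it; left out so the sketch
# stays runnable without a live endpoint.
```

Because the same `ce-*` attributes map onto Kafka record headers, MQTT properties, and so on, the event producer does not have to change when the transport does.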
We are collaborating between the CDEvents project and the CDF to build that architecture. The idea is that we want to make it easier for projects adopting CDEvents, to give a view of what the intention is, how these events can be used, and how they can be implemented in the different tools and then used together across them.

We are not working just with the CDF; we are collaborating with a number of different communities. We have been discussing with the CNCF TAG App Delivery; they are also very interested in standardization in this area, so we hope to collaborate with them further in the future. We have an ongoing effort collaborating with the OpenTelemetry project, because they are of course also very interested in the mission of monitoring and collecting data, and we are thinking about standardizing how you transport data for these CI/CD use cases over OpenTelemetry; we are starting a collaboration with them to see how we can achieve the same goal together.

We also have an interesting discussion going with the Open Mainframe Project, who are interested in this kind of consolidation work, and with the Value Stream Management Consortium. They work with tools that let you define the features you may want in your software, all the way up to the time those features get in front of users, so they are interested in tracking all the data: how long it takes for features to be defined and then implemented, and what value they bring to users. They have a good amount of overlap with what we do on the CDEvents side. Further to that, we will continue working with more and more communities; as I mentioned, we are working with the Argo and Flux communities, and we reached out to the Flux colleagues as well.
So we are trying to get CDEvents known and adopted by as many communities as possible, to bring more value to our users. The next step, once we reach enough adoption, and we have started these conversations already, would be to have CDEvents supported by different vendors in this area, companies selling services in the CD space. We have our first end users for CDEvents as well, and we have been collaborating with the Continuous Delivery Foundation to publish end-user stories, which are very interesting if you want to learn more about how companies are investing in and adopting CDEvents to streamline their CI/CD platforms internally. You can find these on the CD Foundation website.

And yeah, how to join: we have the cdevents.dev website, where you can find links to the specification and the community: links to our chat channel, to our mailing list, and to all the meetings and working groups that we host, if you want to connect with us. In particular, we have a Slack hosted by the Continuous Delivery Foundation where a lot of the conversation happens, so please join us on that Slack, and if you have questions or want to contribute, feel free to reach out to us there. And I think, yeah, that is all we had for today.

Yeah, I think we are a bit early, but we have some time for questions, if you have any. Thank you. Any questions here in the room? OK, I am not seeing any on the virtual platform either, so we have got a few minutes before our next presentation. Feel free to take a quick break, and we will commence back here at one... I am sorry, 2:20.

Just one last thing. As I said, this work originated from LF Networking, from telco. Unfortunately, we do not see many people from telco joining this effort on the CDF side; there are lots of other companies and industries involved.
As Andrea has shown, we have lots of contributors, and there are even more contributors we do not know about. But it is really important for the telco, or networking, industry to actually take part in these conversations. I see some CSPs especially doing really cool stuff with GitOps and so on, but those things are local fixes; they do not actually address the interoperability issue. That issue will still be there if people adopt GitOps and use Flux or Argo or whatever, because those pipelines will need to interface with other pipelines, both within CSPs and towards vendors, and that is the key thing. I think it is important that we bring in some contributors from networking communities; then we can have broader conversations, networking use cases could perhaps be addressed under this effort as well, and we can hopefully standardize this CDEvents-based data model and so on within the CD ecosystem. So please join.

Quick question: you were talking about potentially bringing in networking use cases here. Do you see a potential for working with the 5G Super Blueprint?

Exactly. I do not remember who I talked to about this, probably Heather Kirksey, because the 5G Super Blueprint has some CD elements in it as well, not very visible, but again we can experiment with these use cases directly under the 5G Super Blueprint, for example demonstrating how multiple vendors could interact with a target CSP, where the 5G Super Blueprint could be the CSP in the end.
So, again, since I worked in telco for 16 years, my worry is this: we will continue working on these topics and the community will continue moving forward, and since the project is very young it is easier to bring your use cases in now and make them part of the specification. If it takes a while for the networking industry to start engaging, things might become a bit more difficult, because the specification will be stabilized, a 1.0 release will be made, and it might take longer to get those things addressed within the CDEvents specification. The 5G Super Blueprint could be something we can talk about, and we can see how to put together some kind of demo to start with, showing how this could help the 5G Super Blueprint, for example.

We'll take a quick break. Thanks, everyone.