The room is very full. We're very glad to be here. My name is Emily, and my colleague is Fredrik. We only have one mic, so we're going to switch in the middle. But let's get started. I have an agenda here, and at the beginning I had planned to walk through the definition of a pipeline, but considering the audience I think that's unnecessary, so I'm going to skip that. I'm coming from Ericsson, which is not open source, of course, it's a company, but we do contribute to open source, and that's why we're here today. I'd like to start off by describing the challenges we've seen in the communication between different CI/CD tools, and then I'm going to present the way we solved it. Yes, I'm going to speak a little bit louder. And also how we open sourced our solution, and how Fredrik and his company Axis started using the same solution. So let's skip this part, as promised, and instead go straight to the communication challenges we've seen at Ericsson. First of all, why does Ericsson care about CI/CD? Well, any company that produces software should care about CI/CD, obviously. We have a very complex environment. We have a lot of different products going out to our customers, with different configurations applied on top of them depending on whether they're going to customer A or customer B. And we have many different pipelines and different pipeline tools used to produce our software. Very often they're integrated into each other. We like to call them connected pipelines, and I'll go through what I mean by that later. We are a very large organization. We have hundreds of teams developing our products, and each one of them uses different tooling. And the demands on traceability are very high. We need to keep track of our software.
We need to see which stages it goes through in the pipeline, no matter if it stops at delivery or if it goes all the way to deployment in the customer networks. We need to follow and see where our software goes. We also need to be able to monitor it, independent of whichever technology we use in the background. We need to be able to keep track of the important KPIs, and I guess also visualize them in a good way; one visualization tool would be nice. So, I spoke about connected pipelines. What I mean by that is that we usually integrate our products into each other, so there are hierarchies of products. You've got products A and B to the left here, and they're produced in their own separate pipelines before they're integrated together in the next connected pipeline, producing the product AB. This is a very common phenomenon at Ericsson, where we have multiple integration spots. And it's tough to keep track of all of these dependencies, to integrate upstream and see where this software comes from and how it has been tested before we integrate it. Again, traceability is important here, all the way. So I've tried to summarize into a few questions: what are the exact challenges we see? What are the problems we need to solve? One thing is, when we use different pipeline environments, how do we communicate that our product has been delivered or released? Let's say our product A is done here. How does somebody downstream become aware of this? How can we announce this to the world? And from the other perspective, how can we become aware when others release their software, automatically, independent of whichever technology stack we use? It would be nice to notice, oh hey, version X.Y.Z is now available, so we automatically integrate it into our pipeline and go for it. And how can we visualize these connections?
How can we see what kind of tests have been run upstream, but also downstream on our software later on? How can we see how many have downloaded our products? It would be nice to know. Were their tests successful in their environment or not? And why? So the solution was that we focused mostly on the communication: how can we communicate across different tools? We came up with a solution that's like a common language, if you will, across these different pipelines, across these different tools, across different visualization tools that could be based on the same information. And we called it Eiffel. Now, Eiffel was created in 2012, so we've been using it internally for a lot of years. A couple of years back we open sourced it, and it's now available on GitHub. What it is, is a message protocol. It's event-based. It defines a set of events that you can use to communicate about different concepts used in your pipeline, be it artifacts, be it tests, be it source code changes. You can communicate about these independent of whichever technology you use. It doesn't matter if it's a Java application uploaded somewhere to a Maven repository, or a Docker image you uploaded to Docker Hub; you can still communicate about it in the same way. And what I think is great about Eiffel, at least, is that you can cherry-pick whichever events suit your needs. You don't have to take all of the protocol, and it's a lot of events. You can cherry-pick the ones that you specifically need in your use case. So let's say you're only interested in the test case related events; then you only care about those. The events are also linked together, and that's very important, for us at least, to maintain the traceability between them. So from the latest event you can trace backwards in time and see the whole pipeline chain. You can represent the pipeline. And with Eiffel, you can answer some of the questions I posed previously.
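The backward tracing just mentioned is mechanical once events carry links: start from the latest event and follow its link targets. A minimal sketch, assuming a simplified event shape (the real protocol nests these fields under `meta`, `data` and `links`, and the event IDs here are placeholders):

```python
# Sketch of backward traversal over linked Eiffel-style events.
# Event and link shapes are simplified assumptions, not the exact schema.
events = {
    "e1": {"type": "EiffelArtifactCreatedEvent", "links": []},
    "e2": {"type": "EiffelTestCaseTriggeredEvent",
           "links": [{"type": "IUT", "target": "e1"}]},
    "e3": {"type": "EiffelTestCaseFinishedEvent",
           "links": [{"type": "TEST_CASE_EXECUTION", "target": "e2"}]},
}

def trace_back(event_id, events):
    """Follow links backwards from an event to reconstruct the chain."""
    chain = []
    stack = [event_id]
    seen = set()
    while stack:
        eid = stack.pop()
        if eid in seen:
            continue
        seen.add(eid)
        chain.append(events[eid]["type"])
        # Queue up everything this event links back to.
        stack.extend(link["target"] for link in events[eid]["links"])
    return chain

print(trace_back("e3", events))
# ['EiffelTestCaseFinishedEvent', 'EiffelTestCaseTriggeredEvent',
#  'EiffelArtifactCreatedEvent']
```

Because the links always point backwards in time, the graph is acyclic and this walk terminates; the `seen` set guards against events being reached through multiple link paths.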
For example, when we have a new artifact available, you can send out an event saying that I've now created this artifact. You see it to the left in the picture; it's called the Eiffel artifact created event. What comes after that could be a published event. Whenever you upload it somewhere, you can send additional events linking back to the previously created artifact. The linking here is important. The protocol itself defines how you can link your events, and whether the links are mandatory or optional. These are all in JSON format, in case you're wondering. The next question could be, how can we visualize our pipeline? A pipeline consists of many different steps, right? Independent of how it looks, you maybe want to tell the world about what has happened, so how do we visualize this? This is a very simple CI pipeline, in GitLab syntax: you build your product, then you test it, you test it some more, and then you upload your built binary somewhere. A very standard case. Some of the Eiffel events that could be used for this are, for example, these. You can have an artifact created event to signify the first build. This is where you actually build your binary, whatever it might be. Then you start some tests, and then you might want to send test case events, so you've got the triggered, started and finished events for those. The test jobs run in parallel in this pipeline, so you would send two sets of them. Then comes the artifact published event, as the last one, to signify that, hey, now my artifact is available somewhere for downloading. I'm going to show you the exact same set of events, only connected in an event graph to make it clear. So start to the left: you've got the artifact created event. This is the first one in chronological order. Then come the test case events, which link back to the artifact, because this is what we're testing. The link name in this case is called item under test. Makes perfect sense, to me at least.
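Since these events are plain JSON, a build job can emit them with very little machinery. A rough sketch of an artifact created event and a published event linking back to it; the field layout (meta/data/links) follows the protocol but is trimmed to the essentials, and the identity, URI, and schema version values are made-up examples:

```python
import json
import time
import uuid

def make_event(event_type, data, links=()):
    """Build a minimal Eiffel-style event as a JSON-serializable dict."""
    return {
        "meta": {
            "type": event_type,
            "version": "3.0.0",               # schema version, assumed
            "id": str(uuid.uuid4()),           # every event gets a unique id
            "time": int(time.time() * 1000),   # milliseconds since epoch
        },
        "data": data,
        "links": list(links),
    }

# First the build produces the artifact...
created = make_event(
    "EiffelArtifactCreatedEvent",
    {"identity": "pkg:maven/com.example/myapp@1.0.0"},  # example identity
)
# ...then the upload step announces where it landed, linking back.
published = make_event(
    "EiffelArtifactPublishedEvent",
    {"locations": [{"uri": "https://repo.example.com/myapp-1.0.0.jar"}]},
    links=[{"type": "ARTIFACT", "target": created["meta"]["id"]}],
)
print(json.dumps(published, indent=2))
```

The `target` of a link is the `meta.id` of the earlier event, which is what makes the backward tracing across tools possible regardless of which CI system sent each event.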
Then comes the test case started event, linking back to the triggered event. And then we've got the finished event, the test case finished event. This one contains the result of the test, whether it was successful or not. Now, of course, if you have several test cases, you might want to group them under some sort of label saying, yes, I've tested it to this extent, and I want to label my artifact now: this is ready for the next level of delivery somewhere. Then you can send the confidence level modified event. This is very free-form, in the sense that you can name the confidence level whatever you want to; as long as you standardize it, it shouldn't be a problem. And then you point back to the artifact that you've created. This is very useful for us as listeners or consumers of this product. We know once it has been tested enough; we know when it's reached the confidence level we want it to have before we actually start using it. And once the last event comes in, the artifact published event, we know where we can find it, where the artifact is located, where we can download it. And I have one last wrap-up here. As I said, we have open sourced this, and it's available on GitHub. And we really hope that the community keeps growing and getting more contributions, because we see that a lot of different companies and organizations face the same problems that we faced. One thing I like about Eiffel is that it provides one common language across different technology stacks, across different tools you might use. So you can base visualization tools on the data in these events, for example. And what's also important is the links between the events. They enable traceability, so you can follow a complete pipeline. You know what has happened to your software, and you know where it might go even further downstream.
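The confidence level pattern just described can be sketched very compactly. The label name and the IDs below are illustrative placeholders; the field names follow the protocol but are trimmed down:

```python
# Sketch of a confidence level event pointing back at an artifact,
# and a downstream consumer gating on it. IDs are placeholders.
artifact_id = "aaaa-1111"  # meta.id of a previously sent ArtifactCreated event

confidence = {
    "meta": {"type": "EiffelConfidenceLevelModifiedEvent", "id": "cccc-3333"},
    "data": {
        "name": "readyForDelivery",  # free-form label, standardized per org
        "value": "SUCCESS",           # outcome of reaching the level
    },
    "links": [{"type": "SUBJECT", "target": artifact_id}],
}

def is_ready(event, wanted_label):
    """A downstream consumer waits for this label before integrating."""
    return (event["meta"]["type"] == "EiffelConfidenceLevelModifiedEvent"
            and event["data"]["name"] == wanted_label
            and event["data"]["value"] == "SUCCESS")

print(is_ready(confidence, "readyForDelivery"))  # True
```

This is the mechanism that lets a consumer integrate automatically: they do not need to know which test framework ran upstream, only that the agreed-upon label was reached for a given artifact.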
And I'd like to invite Fredrik to this side of the room so we can switch mics, because he's going to tell you about how Axis started contributing to and using Eiffel. Let me set this up here. Okay, so I'm from a company called Axis Communications, and we do surveillance cameras, which is kind of a scary thing, but there's a lot of software running in these cameras; somebody said roughly around 20 million lines of code in the camera, so there's a lot of software going into this small thing. It's basically a Linux box, but still. And we have nowhere near the scale that Ericsson has. We have about 1,000 developers in total, and the teams that do the cameras are about 400 developers and about 150 testers. We like to think we do continuous integration, at least, because we commit to master a lot of times every day. And I'm going to try to tell you our Eiffel story. Emily gave you a good background on what Eiffel is, and I'll try to go through some kind of timeline: how we found out about Eiffel, how we adopted it, and how we now very much would like to drive all of our pipelines using it, at least as the ambition for the future. First, I would like to give you some context on what exactly it is that we do. Axis is quite an old company, and historically we've done network-connected stuff, which is basically just making some kind of hardware, putting it in a box and selling it. That's the standard Axis way; we've been doing it for a lot of years, and it's basically what other companies do also: they sell cardboard boxes with tech stuff in them. But more and more we've been seeing a demand from our business and from our customers for complete systems, cloud integration, a lot of stuff, which in turn means a lot more software that has to be tested and has to work together. So that's where the challenge comes of knowing which versions work together, which software stacks work best together, and so on.
So that's a challenge similar to Ericsson's. But back to our story. The first recorded presence of the Eiffel protocol at Axis: as I've heard it, there were a couple of engineers attending some kind of conference or meeting where somebody from Ericsson was presenting some kind of visualization tool, showing how you could visualize your software. I don't even think it was referred to as a pipeline, but still, some kind of software flow. And they found that underneath it was something called Eiffel, and they got intrigued. So they started doing some tinkering on their own, trying to make something with Eiffel in the Axis context, which was fine but really didn't lead anywhere. Then it resurfaced in some kind of research project between academia and industry that Axis had, where some of the researchers had also found out about Eiffel from Ericsson. It resurfaced, they did some more tinkering, and some more engineers got interested, but it was still kind of under-the-radar stuff: just engineers, technology enthusiasts, doing stuff, but nothing that really drove the business or was a thing. The next big step, in my opinion, was when Axis actually released something that had to do with Eiffel. These engineers had tinkered some more and wanted to make something of it; they wanted Eiffel to produce something that was of value. We used Jenkins, like almost everybody else, so they made a Jenkins plugin, which was very basic. It basically produced an artifact created event when we did a build. That was all it did. So it was kind of worthless on its own, but still it was a footprint: now we were actually producing Eiffel events. We had a message bus where there were events coming out. But there was really nobody listening, which is kind of a problem, because when doing this type of thing, the whole point is for somebody to be listening and acting on the events, so that a chain and a flow are created.
So our biggest problem here was that nobody was listening, and if nobody's listening in a pub-sub culture, it's kind of hard to know if you're making sense. In the beginning, at least, you have to have some kind of acknowledgement that your event was useful, that there's value to it. So we had to create some kind of value from our minimal Eiffel implementation. And this is basically where I came in on the Eiffel thing. I was part of a team that does test automation, and we were struggling with the testing part of our CI pipeline. It was kind of old, and it didn't really do what we wanted for our testing. So we started looking into rebuilding it, basically, or changing it, and so on. And I had a coffee discussion with a colleague of mine. He was one of those Eiffel-tinkering engineers. He said, you should look at Eiffel. If you're going to do things from scratch, you might as well look at Eiffel, because it's pretty cool; you can do cool stuff with it. So a colleague and I looked into Eiffel, and we really liked it. We saw that while it could probably not solve our immediate problems, there was the bigger issue of the system-of-systems story and knowing information about the software that comes in from everywhere. So we decided to make the new testing part of our CI/CD pipeline talk Eiffel and understand Eiffel. And with this, we started listening to the Jenkins events, the lonely guy over here that was playing Eiffel music with nobody listening. We actually started listening to that, and we started triggering our tests based on those events, which gave us much more understanding of what it is like to have an event-driven pipeline and what it is like to adhere to a protocol like Eiffel. And we actually wound up being a great listener for the protocol. During the development of our testing pipeline we had to learn Eiffel, basically, so we wrote a bunch of Python libraries.
And two of them, which we thought were kind of important at least, we had to contribute back. Since it was an open source community and an open source protocol, we said we can contribute here; we can teach people how to use Eiffel. These two libraries are the Eiffel Python lib, which is a kind of small library that lets you publish Eiffel messages and listen to Eiffel messages on the message bus (it's very useful if you want to get started with Eiffel), and then something called the Eiffel GraphQL API, which is basically an API on top of the event repository. The event repository is basically a storage for events, because if you want to look at events back in history, you have to store them somewhere; you can't just keep them on the bus. So that's basically our GraphQL API, which is kind of useful if you want to start tinkering with storing events and looking back in history and so on. Open sourcing these got Axis more engaged in the community. There's a conference called the Eiffel Summit, a very small conference, which Axis actually hosted this fall. And moving forward, we're going to try to keep working with an open backlog concept as we learn about Eiffel on our journey, and try to contribute directly to the community rather than just doing stuff inside and then dumping it outside, which is kind of a bad thing. And as Eiffel is now a thing at Axis, we actually have a small team that works with Eiffel, and we're really going to try to drive all of our pipelines; we see them being event-driven in some way going forward. We're also going to try to contribute more to the Eiffel community, trying to, like Emily said, spread it, make more people try it out and see if it can become a thing. So we'll try to contribute more of our experiences moving forward in 2020. And that's my Axis Eiffel journey, basically.
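The event-driven triggering described above, subscribing to artifact events and starting tests in response, can be sketched with a minimal in-process dispatcher standing in for the real message bus. All class and function names here are illustrative, not the API of any real Eiffel library:

```python
# In-process stand-in for the message bus: a dispatcher that routes
# events by type, the way a test pipeline can subscribe to the
# artifact created events a build system publishes.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        """Register a callback for one Eiffel event type."""
        self._subscribers[event_type].append(callback)

    def publish(self, event):
        """Deliver an event to every subscriber of its type."""
        for callback in self._subscribers[event["meta"]["type"]]:
            callback(event)

triggered = []

def trigger_tests(event):
    """React to a new artifact by queueing a test run against it."""
    triggered.append(event["data"]["identity"])

bus = EventBus()
bus.subscribe("EiffelArtifactCreatedEvent", trigger_tests)
bus.publish({"meta": {"type": "EiffelArtifactCreatedEvent"},
             "data": {"identity": "pkg:example/firmware@9.8.0"}})
print(triggered)  # ['pkg:example/firmware@9.8.0']
```

In a real deployment the bus would be a broker such as RabbitMQ and the subscriber a separate service, but the shape is the same: the build side knows nothing about the test side, only the event types connect them.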
And we'd like to thank you for listening to us. Here's some contact information: the Eiffel community on GitHub, and we have a Google group, a Slack channel, and a YouTube channel with some videos if you want to learn more or get a more basic tutorial of what Eiffel is. And now I guess it's questions. [Audience question:] If not, how can I do it in my jobs, or just manually send events to Eiffel? So the question was, is there some kind of GitLab integration for Eiffel? Not that I know of currently on GitHub; not yet. And I'm not quite sure how GitLab works, but it should be possible. I don't see anything big technically blocking it; it's just that somebody has to contribute it and do it. [Audience question:] Is there any kind of guide for how to do it manually, like triggering events? There is some. If you look at the Python library, there is a basic tutorial on how to set that up, how to send events and receive events. And then there's also something called, I think it's called Eiffel Easy2Use, which is basically a set of Docker containers that you can just spin up, and you get a full, like a small, pipeline which produces events. It's something that Ericsson contributed. I don't know if all of it is open sourced yet; it might be, but it's basically something you can try out to get some messages going and get a feel for how it works. Yes, absolutely. So the question was, since we've been sending events for several years now within Ericsson, we have, of course, a lot of data accumulated, and can we look at lead times and see where the bottlenecks in the pipelines are? And yes, of course, we do have that possibility, and that's one of the great benefits that we've seen internally, at least. I don't know how many millions of events we have stored, but there are a lot of them.
So it's indeed possible if you have an implementation of a storage somewhere; then you can look it up, because all the events are timestamped. So you can also see lead times between the events, between different activities in your pipeline, which is very useful. And, as Fredrik mentioned, I'd just like to add that aside from the protocol, we have some example implementations of services surrounding the protocol, helping to send the events and helping to visualize them, which I think we have also open sourced, and more are planned. It's just a matter of prioritization and so on. Any more questions? Yes, in the back. So the question was, have we tried to use Eiffel outside of development to trigger some manual activities, if I understood it correctly. I'm not aware that we listen and act upon these events and then do some sort of manual step. Of course it's entirely possible, but the listeners usually perform some sort of automatic task afterwards. And as far as I know, we mostly focus on up until continuous delivery, sort of; deployment stages don't really trigger on Eiffel events later on, yet at least. I can add something to that. There are more generic Eiffel events, like announcement events, that say that something has happened, and you could definitely use those for signaling other people in the organization. We actually use them as sort of a debugging thing, so we know that we've gone a certain path through our testing CI system. We do announcements here and there to see, okay, we've gone this far, because it's a microservice architecture and it's kind of hard to keep track of everything. So it's a handy thing, and the announcement events are very generic, so you could use them for almost anything. Any more questions? Okay, thank you.
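The lead-time analysis mentioned in the Q&A follows directly from the timestamps every event carries. A minimal sketch over fabricated stored events (the timestamps and the repository contents are made up for illustration):

```python
# Sketch: computing lead times between pipeline stages from stored
# events, using the millisecond timestamps every event carries.
stored = [
    {"meta": {"type": "EiffelArtifactCreatedEvent",   "time": 1_700_000_000_000}},
    {"meta": {"type": "EiffelTestCaseFinishedEvent",  "time": 1_700_000_480_000}},
    {"meta": {"type": "EiffelArtifactPublishedEvent", "time": 1_700_000_600_000}},
]

def lead_time_seconds(events, from_type, to_type):
    """Seconds elapsed between the first event of each given type."""
    times = {e["meta"]["type"]: e["meta"]["time"] for e in events}
    return (times[to_type] - times[from_type]) / 1000

# Build-to-publish lead time for this artifact:
print(lead_time_seconds(stored, "EiffelArtifactCreatedEvent",
                        "EiffelArtifactPublishedEvent"))  # 600.0
```

Aggregated over millions of stored events, the same arithmetic per stage pair is what surfaces where the pipeline bottlenecks are.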