Okay, so that's the schedule for this year: no talks first up, a rest, so to speak, and then we've got Douglas talking about a programming aspect of GStreamer, writing plug-ins, and that'll be at 1410. So on that note, I'll hand over to Jan. Thank you very much.

I'm Jan Schmidt. I'm a GStreamer developer and I've been contributing for a while now. For a large chunk of the last 12 years it's been my full-time job, and as of a year and a half ago, completely my full-time job: I started a company with a couple of other GStreamer core developers, and we're full-time doing open-source consulting around GStreamer and using the profits of that to feed back into the project. I live in south-east Australia, on the New South Wales-Victoria border, and I work out of a shed in the paddock, a man cave that I've constructed out among the trees: a nice place to hang out and write open-source software without too many distractions. And the piece of software that I work on is GStreamer.

So, GStreamer is a multimedia framework that uses an abstraction of pipelines: basic building blocks connected together. It roughly follows the model of electrical components that are wired together through their pads. There's a super-simple example pipeline there: you have some kind of source element, you feed it through a filter, and then it goes to a sink. We also have a pipeline description syntax that you can use for simple command-line pipeline building, and there's an example of that at the bottom. You have a file source that will read an MP3 that we give it, feed it through an element called decodebin, which inspects the contents of the file, looks at your available plugins and selects a decoder, and then feed that into pulsesink, which plays it out through PulseAudio. That's about the most trivial hello-world pipeline we can build; there's a sketch of that launch line below. From there we go up to examples of pipelines with multiple thousands of elements in them, for 10-way video conferencing with video and audio streams per participant and signalling going back and forth. So it goes from there way up.

GStreamer is a completely open-source project, LGPL-licensed, and we rely heavily on other libraries for functionality; it's very much aimed at building on top of things and tying them together into a coherent framework. It's cross-platform and runs on pretty much any platform and architecture you'd care to name, using a GObject-based C API that is easily wrappable in other languages, and hence we have many bindings.

GStreamer is not a media player. It's not a library for playing movies. It is not a codec, it is not a protocol library, it's not a tool as such, and it's not a streaming server, but it is used to build all of those things. It is a media engine on which you can build other applications. The goal is to have a really flexible design: you drop in a plugin and it can interoperate with all your other existing elements, if you design your inputs and outputs correctly, which makes it easy to use as an underlying application library, but also easy to integrate new libraries into GStreamer.
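For anyone who hasn't seen that launch syntax, here's a minimal sketch of the hello-world pipeline just described. The element names (filesrc, decodebin, pulsesink) are the standard ones; the file path is just a placeholder:

    gst-launch-1.0 filesrc location=some-song.mp3 ! decodebin ! pulsesink

The ! is the link operator, playing the same role as the arrows in the pipeline diagrams.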
So, for example, OpenCV is a well-known open-source computer vision library, and it's fairly easy to take OpenCV operations, wrap them up in a GStreamer plugin, and then instantly be able to apply them to any source of video you can generate from any of our many inputs, or to take the OpenCV output and turn it into an RTSP stream for people to watch online (there's a sketch of that just below). These days we have a nice ecosystem of consulting and support companies built around GStreamer, and a lot of applications. I imagine most of the people in this room have probably heard of GStreamer at some point, so some of this may just be useless background, but there is a huge range of applications and websites and what have you that people have built around GStreamer, and each one extends our abilities and makes for a more capable framework. After 16 years of iterating on that, I think GStreamer is quite a featureful and powerful framework these days.
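To make the OpenCV point concrete, here's a hedged sketch of the kind of thing I mean. facedetect is one of the OpenCV-wrapping elements in gst-plugins-bad, and test-launch is the little example server that ships with gst-rtsp-server; the exact conversions and encoder settings would need tuning on a real system:

    # Run an OpenCV face detector over a webcam feed and serve it as RTSP
    ./test-launch "( v4l2src ! videoconvert ! facedetect ! videoconvert ! \
        x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"
    # Clients can then watch at rtsp://<host>:8554/test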
It's 16 years since the first release in 1999, with a slow warm-up through that left-hand column of 0.1, 0.2, 0.3 experimental releases. The really interesting stuff, I think, comes around 0.6 and 0.8: 0.6 was the first version that GNOME started to use, and 0.8 was the first one where I would say we could play video reliably. 0.10 in 2005 was, I think, the big milestone release: we rolled out 0.10 and said that from now on we would provide an API and ABI stability guarantee that applications can rely on. It was tricky to grow the framework's capabilities while still maintaining that level of API guarantee, but I think we did a pretty good job of extending it, and for seven years we maintained 0.10. It wasn't until 1.0 that we finally said, right, there are too many things we got wrong in 2005, now is the time to break ABI and API. So 1.0, a couple of years ago, was the first time people really had to do any application porting for a new release, and now 1.0 has its own API and ABI stability guarantees.

We're currently at 1.5 to 2 million lines of code, depending on how you count it, just from a quick sloccount overview. That's 1.5 to 2 million lines of GStreamer itself, not attempting at all to count any of the hundreds and hundreds of libraries that it can depend on. So it's quite a big code base to keep track of and wrap your head around.

GStreamer 1.0 we released in 2012, after a long track of developing 0.10, and we changed the versioning scheme. Previously releases were 0.10, then 0.10.1, 0.10.2, up to 0.10.36, and it was getting a bit silly. With 1.0 we have a new, more commonly recognised versioning scheme, a bit more like what the kernel does: we released 1.0, a year later we released 1.2, and in the meantime there were 1.0.1, 1.0.2 and so on, which were only bug fixes. So the new way we do versioning is a major release and then a series of minor releases that don't add new features, only stability, so people standardising a commercial application can target a single major release with a good guarantee that we're not going to break things by introducing new features, only doing bug fixes from there.

In terms of application porting effort, between 0.10 and 1.0 we changed the internals hugely from what had gone before, and improved pretty much everything on our checklist of things we didn't like about 0.10, but the external API changed relatively little. So the porting effort to move from 0.10 to 1.0 is relatively small, and after two years of 1.0 releases we've seen good uptake of people moving their applications across. Certainly all the open-source ones, the GNOME apps and so on, are all using GStreamer 1.0 now; if you launch Totem in GNOME, that's using the latest GStreamer underneath. But as someone working as a consultant, it seems like most of our business still comes from companies that are using GStreamer for closed-source products, don't want to move to 1.0, and would like us to keep doing bug fixes in 0.10. We'd really like to kill that business off and persuade those customers that it really is worth moving over to what we think is quite a shiny new, less buggy framework.

I have these graphs that I generate every now and then as an internal check of how well we're doing as a project on a couple of simple metrics. This is our Git history across the dozen or so Git repositories that make up the GStreamer core, the different plugin modules and what not, and you can see a couple of important inflection points if you look at it carefully. In 2005 all ten or so of us core GStreamer developers at the time were working for Fluendo; they hired us and moved us all to Barcelona, and we were all in the same room together working on producing 0.10, and you can really see that spike of commits in the second half of 2005 just before we released 0.10. Then there's a slower climb: if you look at the average line, we jumped up and had a fairly smooth progression right up until the end of 2009, when we moved the project from CVS across to Git, after a fairly long discussion about whether or not to do that and a migration process that took a while. Then there's a nice clear bump that shows just how good Git is at helping distributed projects work faster: the commit rate immediately spiked and continued to climb right up until we released 1.0. I'm still undecided about what the downward trend from there means, but I'll talk about that a bit more after looking at a couple of the other graphs. Another metric is how many lines of code we're changing per month, which shows a couple of bumps around those same 0.10 development points, but otherwise a general upward trend until we get to 1.0, and then it slides downward again. The number of individual contributors we're seeing shows nice linear growth that doesn't really seem to be trailing off.
That's interesting, I think, as a way to measure how widely used GStreamer is: we see bug fixes and feature commits from a broader and broader range of people, with 40-plus individual contributors every month offering some kind of patch that gets accepted. Then one final graph, dividing how many lines of code we change per month by how many commits we do per month to get a monthly average, shows this interesting, completely flat line: on average we change about 100 lines of code every time we do a commit.

I'm still trying to think of other metrics we might use for tracking the health of the project. I think Bugzilla stats would be interesting, because I suspect if I go and graph those, we'll see that one of the reasons we've seen a slowdown in commit rate since 1.0 is that people are more tied up doing support work on 0.10, and that will probably be reflected in our Bugzilla count slowly increasing as people have less time to look at bugs, although I think we do make a real effort to keep looking at Bugzilla and push patches across. So I have this question about what to conclude from our graphs. Are we doing less work since 1.0 came out? Has the work gotten easier, so the amount of effort per commit is less? I don't think that's necessarily it, because our commits are still about 100 lines per change, so we seem to be doing about the same amount of work on average per change. Or is 1.0 just that much better that it needs fewer commits and less work to get to the same level? To some extent that might be true, but I haven't got a definitive answer.

So, things we've been doing recently in GStreamer. A big one that we at Centricular have personally been involved in: Ericsson's labs have done an implementation of the WebRTC signalling protocols that they call OpenWebRTC, and we've been helping them with releasing that code. In the 1.4 release of GStreamer we integrated what had been an external set of OpenGL elements, pulling those and the infrastructure they use into the core set of plugins, and that's led to some interesting developments: we can now more easily integrate with platforms that use OpenGL and GLES abstractions for passing data around. A key outcome is that we can use hardware resources more efficiently. On phones particularly, or embedded devices, we have much better support for direct memory-to-memory operations through hardware function units that hand you an OpenGL texture ID, or can map things into an OpenGL texture ID. We can now do zero-copy operations from capturing a frame on the camera to putting it on the screen, while simultaneously feeding an H.264 stream out: things that mean your CPU can stay uninvolved, while you can still use GStreamer to set up a pipeline that will transparently use software encoders and decoders when necessary and hardware where it's available.
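As a sketch of the shape of such a pipeline, here is a software-only version for a laptop; the hedge is that on a real phone or board the camera source, the GL sink and x264enc would be swapped for the platform's zero-copy capture and hardware encoder elements, which is where the CPU actually drops out of the picture:

    gst-launch-1.0 v4l2src ! tee name=t \
        t. ! queue ! glupload ! glcolorconvert ! glimagesink \
        t. ! queue ! videoconvert ! x264enc tune=zerolatency ! h264parse ! \
            matroskamux ! filesink location=capture.mkv

The tee is what gives you the "display it and encode it at the same time" behaviour described above.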
We have a quick demo. This one relies on the network, so it may or may not work, we'll have to see. For some reason I am now sideways, I don't know what that's about. Let me reload this page and try to get it to maximise. So this is a WebRTC demo: on this side of things I just have a web browser, and on the Android tablet here I'm running an APK that has a GStreamer pipeline inside. And this one won't join. That's the problem with live demos, isn't it? Give me two seconds here. Oh, hang on, I have to click this: yes, you're allowed to use the camera. It's still a little bit fraught, because this is all code we literally compiled the night before out of the Git repository. That's my video; there should be a second video feed that pops up from this guy, though, which is the bit that's not appearing. It probably helps if I put in a session ID. We'll give this one more try and then we'll see; I might just give up on this. I did kind of have this down as a "this probably won't work" demo, because it relies on the network, and network demos never, ever work. I'll leave that running and check back in; sometimes it takes a few seconds, maybe a minute, to establish the call, but I think this is just not working. It was working when I was sitting up the back earlier, before the talk. What the demo shows, when it works, is a Nexus 7 running a simple custom call example as a standalone app, with GStreamer and the OpenWebRTC internals, and we can place a call between this device and that device across the network, with this one being Google's implementation of WebRTC and this one being our open-source GStreamer version. They interoperate nicely, but this one can then use any codecs you have available in your GStreamer install, not just what Google or Firefox have chosen to include. So it gives you that GStreamer flexibility to do anything with the incoming WebRTC streams.

What else do we have? So, yeah, that was the demo that isn't going to work; I might try it again at the end. LG took over webOS from Hewlett-Packard. webOS is an open-source, Linux-based operating system with a custom JavaScript, web-based UI on top, and it uses GStreamer for all of its multimedia handling internally. HP had released it on their Palm phones and the TouchPads that went out a couple of years ago; LG bought it, and they've since been releasing webOS-based TVs. So we're starting to see some interest from LG in pushing some of their internal patches upstream; so far they've just been a silent consumer. We've also seen Samsung hire a dozen or so GStreamer engineers around the world and pull them in to build Samsung TVs that use GStreamer internally, so it will be interesting to watch what's going on there too.

We've had people working on HLS and DASH, which are HTTP live streaming protocols. People like SoundCloud and Spotify, anyone doing HTTP streaming, there's a good chance they'll be using HLS or DASH to stream their media out. It's interesting in that the stream is encoded at multiple bit rates simultaneously, and you can switch bit rates any time you reach the end of a fragment: you can decide "that one came in too slowly, I'll jump down to a lower bit rate" or "that was streaming fine, I'll jump up to a higher bit rate". So we've been working on the elements for that, on adaptive bit rate switching that dynamically measures how well you're doing with quality of presentation, as well as supporting the trick-mode operations those protocols allow, where you play at twice speed by fetching data fragment by fragment and playing each fragment at twice the speed, then skipping to another one, that kind of thing.
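On the playback side, none of that needs special application code. As a minimal sketch, pointing playbin at an HLS playlist (the URI below is just a placeholder) pulls in the HLS demuxer, and the adaptive bit rate switching happens behind the scenes:

    gst-launch-1.0 playbin uri=https://example.com/live/stream.m3u8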
We've also been working on RTP retransmission (RTX), which is a part of the RTP spec. You have UDP packets that carry your actual media content, and an RTCP channel for doing signalling back and forth, and on top of receiving all the incoming UDP you can request retransmission of packets that didn't arrive, as long as you haven't tried to put that frame on the screen yet. So you can use RTX to optimistically try to fetch things that were lost, while not bothering if the retransmission would arrive too late anyway. It's kind of like getting a channel that's reliable enough to make your video work: on a lossy network it can make a big difference to how well you manage to put the video on the screen.

We have the Pitivi project, an open-source video editor based on GStreamer, and a couple of guys have really taken Pitivi on as their personal challenge. They ran a fundraiser, which is still open although it has stalled out for several months, aiming to fund their own development efforts to bring about a high-quality open-source non-linear editor with a modern UI and access to all of GStreamer's processing and effects. The work they've been doing to push that forward has driven some good pieces upstream. They're very focused on making sure they have a reliable, reproducible app, so they've put a lot of effort into validation and QA, automated testing and so on: not even really working on the video editor itself, but on the framework that makes sure that as they develop their video editor they'll know it's rock solid underneath. That's the gst-validate module. They've worked on a component called GstAggregator, which is about improving the efficiency, reliability and flexibility of our video-mixing and audio-mixing operations, and that's gone upstream, as has work on the GStreamer Editing Services module, which is all about building a high-level API for video editing operations. Because as a project GStreamer generally focuses on very small building blocks, it's ongoing work to build larger, higher-level pieces of API that can hide some of those interactions from application developers.

Meanwhile, on the mailing list we had an interesting email out of the blue a few months ago indicating that soon our software will be in space, which is kind of cool. That's scheduled to go up in the next month or two, I think, and it'll be fun to watch.

One thing that I have pushed on a little bit, and that is still an ongoing project, is our network clock implementation, which is kind of like our own NTP: you can synchronise operations across machines, across the network. I want that because the project I showed last year was my distributed media player, which does a Sonos-style multi-room setup, and it uses the network clock to synchronise playback of audio streams across different machines acting as the speakers in each room of your house. It really relies on the network clock being stable, reliable and accurate, to get timing tight enough that when you walk through your house the stream sounds like a single stereo system, and not like a stadium with huge offsets between the audio in each room. The graph, which I think is a little hard to see, has a before on the left and an after on the right of the work I've done so far, both on the same scale. And this is an extremely bad, worst-case scenario for our network clock implementation: I was measuring network clock synchronisation across the network from here to my house, with variable delays of anywhere between 70 milliseconds and 150 milliseconds, or even worse, spikes of packets getting lost for hundreds and hundreds of milliseconds at a time.
The "before" shot shows that even in those really bad circumstances our clock tracking worked reasonably well: that's minus 15 milliseconds at the bottom up to plus 25 milliseconds at the top, so it stays synchronised to within a range of about 40 milliseconds, within one video frame, even across an extremely bad network. On a local network you can be in the microseconds range. My wifi at home is quite noisy for some reason; I occasionally see stalls of up to 700 milliseconds, and that can really throw out your measurements of what's going on on the other side. The network clock basically does a ping-pong: it sends a ping out saying "what time is it?", the answer comes back, and it tries to guess, between when it sent the ping and when it received the reply, what time the other machine reported. It does regressions over that data, and there's a bunch of filtering work that gets rid of some of the larger spikes. In general, even on this noisy network, it stays within about plus or minus 5 milliseconds in good times, and at worst maybe plus or minus 10 milliseconds. That's tight enough that you probably don't hear an offset if you're playing the stream in every room of the house, but it could be better: these swings across 30 seconds or so of plus or minus 10 milliseconds are a little bit of distortion that a keen ear might notice in the audio playback, or they manifest as a click or a pop when we have to jump in time. So there's more work going on there.
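For what it's worth, the client side of that clock sharing is a small amount of code. Here's a minimal sketch, assuming a GstNetTimeProvider is already exporting the master pipeline's clock on the (hypothetical) address and port below; a real player like my multi-room one also distributes a shared base time, which this sketch skips:

    /* Build with: gcc net-clock-client.c $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-net-1.0) */
    #include <gst/gst.h>
    #include <gst/net/gstnet.h>

    int main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstElement *pipeline = gst_parse_launch (
          "audiotestsrc is-live=true ! autoaudiosink", NULL);

      /* Create a client clock that keeps tracking the remote time provider */
      GstClock *net_clock =
          gst_net_client_clock_new ("net-clock", "192.168.1.10", 5637, 0);

      /* Slave the pipeline to it instead of its default audio clock */
      gst_pipeline_use_clock (GST_PIPELINE (pipeline), net_clock);

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }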
Another thing I've been doing lately is working on stereoscopic 3D video signalling, so that we can handle 3D movies. If people are interested in that, my whole talk at the GStreamer Conference in October was about 3D video support and what's involved in fully supporting it, so I won't delve into it too far here; I'll just run a couple of demo pipelines. Here, for example, is a 3D-encoded version of Big Buck Bunny. On the screen in front of me it's delivered as a left-eye and a right-eye view, which is not normally how you need to output it. It's just an encoding: they basically pack two frames together and then encode the video as normal, and in most cases, as far as I can tell, they don't give you any information in the file to actually signal that this is a 3D movie, even though the methods for signalling it are specified. So you get a file, it has a 3D movie in it, but nothing in there will tell you that, and there's no way in the UI for the user to say "reinterpret this". That's why your TVs, at least the older HDMI 1.3 ones, have a button that lets you cycle through the 3D interpretation modes. Here is another method of delivering content, side-by-side encoding; this is not going to be great if it takes me this long to get it up on the screen each time, so excuse me for a second while I fix my demo. Side-by-side encoding in the file, and again no external information telling you so. So I've extended our API so that we can add markings to each frame indicating that it contains the various packings of 3D content, and I've written some elements that let you translate that into different forms. Here is the same file, but now translated to a line-by-line packing, which is the kind of thing you usually need to output to a TV that uses passive 3D glasses: every alternate line on the TV is typically a left and a right view, with a polarising filter (that's the word I'm looking for) over the top for left-circular or right-circular polarisation of each eye's view. Beyond that it's not a very compelling demo. We can also take the top-bottom version, and this is the other output mode, and render it down to an anaglyph for your old-school 3D glasses, and then you can put those on and see left eye, right eye; I've got a few here if people want to grab them. So we can render Big Buck Bunny down like that, and this is another place where our new OpenGL integration is used, because this is all rendered down to anaglyph using shaders, with comparatively little CPU: it's chewing up one core on this laptop to do the full 1080p-times-two software decode and then funnel it out to OpenGL for rendering down. It works quite well, but it's not upstream just yet. I still need to polish it and add more abilities, and I still need the API that will let an application override any interpreted input mode. (What time do I need to finish? Okay, cool.)

We've had people adding a new device probing API that gives applications an easier way to explore the installed devices, and so far that's been mapped to finding audio and video devices. You can run the little gst-device-monitor tool and it will tell you that I have one camera, enumerate the video formats that camera can support, and tell you that it's capable of outputting JPEG images directly at 30 frames per second; and that I have a couple of audio devices, one for output, another for outputting 5.1 surround, and one for input, so you can capture from my microphone or capture from my monitor; that's iterating the available PulseAudio devices. This new API makes it a lot easier for applications like Cheese, the camera-booth style app, to locate devices to talk to without having to go outside GStreamer and start talking to V4L interfaces themselves.
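As a rough sketch of that API from the application side (this is approximately what the gst-device-monitor tool does; "Video/Source" is the device class used for cameras):

    /* List video capture devices via GstDeviceMonitor (GStreamer 1.4+) */
    #include <gst/gst.h>

    int main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstDeviceMonitor *monitor = gst_device_monitor_new ();
      /* Only interested in video capture devices (e.g. cameras) */
      gst_device_monitor_add_filter (monitor, "Video/Source", NULL);
      gst_device_monitor_start (monitor);

      GList *devices = gst_device_monitor_get_devices (monitor);
      for (GList *l = devices; l != NULL; l = l->next) {
        GstDevice *device = GST_DEVICE (l->data);
        gchar *name = gst_device_get_display_name (device);
        GstCaps *caps = gst_device_get_caps (device);
        gchar *caps_str = gst_caps_to_string (caps);

        g_print ("Found %s\n  formats: %s\n", name, caps_str);

        g_free (caps_str);
        gst_caps_unref (caps);
        g_free (name);
      }
      g_list_free_full (devices, gst_object_unref);

      gst_device_monitor_stop (monitor);
      gst_object_unref (monitor);
      return 0;
    }

An application like Cheese can then build its capture pipeline from the selected GstDevice rather than poking at /dev/video* directly.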
We've also been focusing a little on producing higher-level APIs, as I mentioned before: wrapping our smaller components and bundling them up into higher-level APIs that people can build applications on. We've had a big focus on improving our QA. We've really reinvigorated our continuous integration system, so every commit to GStreamer now runs through a barrage of build machines and regression tests across our architectures and platforms, which really helps us catch problems on devices that not everyone has access to. We have a new tracing subsystem that lets you add fine-grained probes throughout the system, so as you connect together these pipeline pieces you can start to collect information automatically and see that, say, your video decoder is using 37% of your CPU, but surprisingly you have an unexpected colourspace conversion going on that's wasting another 5%. You can measure those things in fine detail and optimise pipelines, or debug them, that way. That ties in with our debug viewer: we have long had a logging system, and when you turn on full debug output, all categories at all levels, you can rapidly generate gigabytes of logs, so it's hard to find an individual problem that might have occurred 500 seconds into your pipeline's execution. The gst-debug-viewer tool helps with filtering those gigabytes of logs down to a usable form and discarding data. And gst-validate, which came from the Pitivi guys, is an automated regression set of tests for individual elements: if you create a new decoder, you can run it through the standard barrage of tests that gst-validate has for decoders, making sure they can negotiate different formats successfully, and that helps make sure our elements behave uniformly. GstHarness is another test harness framework, from the Pexip guys in Norway. All of this is being pushed into a new GStreamer developer tools repository that we'd like to see grow. I'd still like to see the live debugger that everyone has wanted for quite a while, where you can take a GStreamer-based application, connect to it over a TCP socket, and introspect all of the GStreamer operations going on inside the process.

Other things we're doing: the seemingly endless task of bug fixing. Long after we've finished doing whatever else we want to be doing, there will be bugs in Bugzilla and patches from people that need to be shepherded and integrated, and we can always use more help with that. There are always new codecs and formats to be integrated; H.265 and Daala encoders and decoders are things we've seen added in the last year or so. There's KLV support, a standardised way of encoding key-value pairs into MPEG-TS streams, which should really be a subheading of DVB and MPEG-TS improvements: there have also been improvements in our MPEG-TS handling in general, around measuring durations and being able to seek better inside those kinds of files. We've had Wayland support in a nascent form for a couple of years, and that keeps improving and adapting as Wayland comes along, so we have the ability to pass encoded data through to a Wayland server if it can decode on the other side of the connection, and to do synchronisation operations with Wayland. V4L2 in the kernel has recently added API for dealing with hardware encoders and decoders, so some devices now come with V4L2-exposed hardware decoders, encoders and memory operations, and we've added support for doing memory-to-memory operations between those devices, again to pass data through a chain of hardware devices. And we've got better support for mixing live feeds as they come in, adding timeouts and handling late feeds better.

If you'd like to get involved, there are a couple of good places to come and find us: we're in the #gstreamer channel on Freenode IRC, you can join the gstreamer-devel mailing list on freedesktop.org, or you can have a look at our list of bugs in Bugzilla on freedesktop.org and see if there's something you'd like to hack on.
And that's about it. Does anyone have any questions?

Q: Good day. Just wondering what the main challenges are for 4K video and ultra-high frame rates and things like that coming through.

A: The main challenge is purely the extra work required to handle them, and making sure the hardware can do it. There's no real magic to it; it's just making sure our descriptions of codecs can scale to those levels, which in general they can. We can play 4K video, but you find interesting things: you go and look at your Xv port and find that generally they go up to 2048 by 2048 as a maximum. Well, on this machine I can go up to 16384, so this one can supposedly take 4K video, scale it and put it out, but you go to another piece of hardware and you'll find your output mechanism only wants 2K as a maximum texture size, and those kinds of limitations are hard to work around. I don't know what our OpenGL stuff would do if you passed it a 4K video; I don't know if it's smart enough to split it up into multiple textures and display them on separate quads. There are a couple of other places where 4K crops up: just in the last week or two Sebastian added support to our Blackmagic DeckLink plug-in so that it can enable the 4K modes on Blackmagic cards, the kind of thing TV studios use for passing 4K in and out on SDI ports, so we can capture and output 4K through there now. And because we've done so much work on the kinds of APIs you use for handling this stuff, passing it through OpenGL and doing memory-to-memory operations so that you avoid touching the CPU and can use hardware decode and encode, a lot of that work is already done. You can pass things through, and then it's really about whether your system is able to deal with it or not. GStreamer can do the right things to avoid copying frames and wasting CPU cycles, but that may not be enough to get you there on any given piece of hardware, in which case we just start dropping frames, really. Any other questions?

Q: Do you have any comment on interoperation with the AVB networking standards for timed video, you know, IEEE 1722, that stuff?

A: Um, no, I don't know which standard that is.

Q: Okay, maybe one for a lightning talk.

A: Yeah, it might be. Is that PTP, or something like PTP?

Q: It's for the routers and everything in between to give guaranteed bandwidth.

A: Okay. No, I don't know that anyone's done any work on that, at least in open source; people may have done it privately. Any other questions? Thank you very much for your attention.