Gary Sullivan, Microsoft, Rapporteur for visual coding in ITU-T Study Group 16. In early 2013 we developed a new standard called HEVC, High Efficiency Video Coding, and at the time that was a big advance in video compression technology. Now we're working on trying to see if we can do better than HEVC. HEVC, which in the ITU is known as Recommendation H.265, is a pretty capable standard. But we created a new group in late 2015 called JVET, the Joint Video Exploration Team. It's a partnership with MPEG, the Moving Picture Experts Group in ISO/IEC, and we're trying to assess whether we can beat the HEVC standard with something new that would be standardized around 2020. So now we're issuing a preliminary call for evidence for technology in that area. We already have some indication that maybe around 30% better compression than HEVC is possible, but the complexity of the schemes we know about is quite high, and we're hoping to solicit new proposals that go even beyond that 30% and maybe create something as much as 50% better than HEVC. What we've been working on so far is basically built on the same principles as HEVC, just going further into more complex and fancier schemes for coding technology. Some of the techniques require a lot of computing power, but to get coding efficiency we sometimes need to use more computing power. As time moves on, computing resources get greater and greater, so we're capable of doing more things if we can use those resources intelligently. HEVC itself needs a lot more computing power for encoding than the standard that has been dominant, which is known as AVC, or Recommendation H.264. HEVC is already a big step beyond H.264/AVC. 
Well, I think to some extent the requirements of more advanced compression technology help drive computing power, because in the past we've seen people making custom silicon and special parallelism designs for processors because they need to be able to use the latest video coding design. There's a synergy there between the computing resources and the advances in algorithms. We try to keep the complexity at a minimum, or at least within the bounds of what people can implement in a practical system, but we do tend to need more sophisticated methods as time moves forward, so the computing power requirements go up, especially for encoders. For decoders, hopefully not so much, but a lot of the techniques we're seeing in the JVET also require a great deal of computing power in the decoder. We hope we can reduce that once we have a real standard. We formed the JVET in late 2015 to start exploring new technology, and we've already been working on algorithms and collaboratively developing software for more advanced technology. Last week we had a meeting here with over 100 contribution documents proposing and evaluating various algorithm designs. So we've been active already, but we're hoping to take the next step toward moving this into the realm of a real standardization project and practical implementation, and to solicit as much input as we can by issuing, at this meeting, a preliminary call for evidence. At the next meeting we'll have a final call for evidence. We're just trying to show that there is a practical possibility for creating a formal project. We expect to create a formal project with MPEG sometime around the end of this year and issue a call for proposals. We would also do that in two stages, a preliminary call for proposals and then the formal call for proposals, and then we would start actually working on a draft new standard. We think we could finish the standard around late 2020, but all of that is kind of up in the air at this point. 
We have to learn a lot more over the next year or so to determine what is actually feasible.