Thank you everyone for this warm introduction, and apologies, guys, for the glitches and the few-minute delay; it's because of the protests and all the publicity. We are also live streaming this morning, I am told, right? So we just set it up, and good morning to everyone and hello to our online viewers as well.

So I will be talking about AI-driven UX: the future of OTT platforms. I used to assume that every one of us is aware of what OTT is, but I interacted with someone this morning and she wasn't aware, so let me quickly explain for the few who don't know. OTT stands for over-the-top, and basically all the content platforms that you use every day, like Netflix and Amazon Prime, are categorized as OTT platforms. So my talk will revolve around how we can leverage AI technology to elevate the user experience of OTT platforms, right?

But before that, I will quickly give you an introduction to my background. Currently, as I have been introduced, I am working as an associate director of experience design at TO THE NEW. TO THE NEW is a digital transformation services organization based out of Noida, NCR. What we do is provide digital consultancy and products, which is mobile apps and web apps, to our clients across the world. We have offices in six countries and are headquartered in Noida. I am also the founder and editor-in-chief of DesignMind magazine, which is a digital magazine for UX professionals. The idea is to bring the community together and to feature stories which are not technical in nature and are more issue-based. So, for example, we will have stories of someone who is self-taught and transitioned into UX design from a totally different background, like being a lawyer or a doctor. Those kinds of stories are featured in the magazine. And then I have a background of working with Accenture, Robologic and Nagarro. Overall I have about 13 years of experience, more than 13 years actually.
But I am still learning, I would say, because we are in an ever-evolving industry, and technologies and things change every six months. So with that, I will quickly move on to the discussion points for this talk. Initially we will talk about how OTT has risen over the years, particularly in India. We will talk about the rise of AI, especially in the last one year or so after ChatGPT. And then we will bring these two together: we will see how we at TO THE NEW have come up with a framework to provide OTT services to our clients while also leveraging AI technology. And then we will talk a bit about what's in store for the future.

Now, we have all seen that OTT basically rose to prominence, and when I am talking about OTT and AI I am particularly talking about the Indian subcontinent. Around the COVID-19 lockdown in March 2020, when everybody was sort of under house arrest, OTT platforms suddenly rose to prominence and everybody was subscribing to Netflix, Amazon Prime, Sony and whatnot. Here are some statistics which show that OTT is rising at a staggering rate, and roughly every three years the market doubles. So in 2023 we are sitting at $240 billion, but by 2026 we are expected to rise to $486 billion; that's almost double of 240. What leads to this rise in OTT? Like I said, one factor is the COVID-19 lockdown in India, but there are other factors as well. There is affordable and high-speed internet access: now we have mobile phones and 4G in every nook and corner of the country. Then smartphones and smart TVs have also become very affordable; you can get a smart TV for around 15 to 20,000 rupees now. So there is affordability of both these things across the country, which has given rise to OTT platforms. Similarly, AI has risen over the last one year or so, especially since ChatGPT launched along with API services that developers could use to extend their own frameworks and build on top of.
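As a quick sanity check on the "doubles every three years" claim, the implied annual growth rate can be computed from the figures quoted in the talk (the CAGR formula is standard; the dollar figures are the slide's, not independently verified):

```python
# Rough check of the "market doubles every ~3 years" claim using the
# figures quoted in the talk: OTT $240B (2023) -> $486B (2026),
# AI $538B (2023) -> ~$900B (2026).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction, e.g. 0.265 for 26.5%."""
    return (end / start) ** (1 / years) - 1

ott = cagr(240, 486, 3)  # doubling in 3 years works out to ~26.5% per year
ai = cagr(538, 900, 3)   # ~18.7% per year, slower but from a larger base

print(f"OTT CAGR: {ott:.1%}, AI CAGR: {ai:.1%}")
```

So "doubling every three years" corresponds to roughly 26% compound annual growth, which is the staggering rate the statistics refer to.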
Again, here the market size in billion dollars is expected to double every three years or so. In 2023 we are at $538 billion, but in 2026 we will be at almost $900 billion. And compare the two for last year: in 2023 AI is already almost double the size of OTT; it stands at $538 billion while OTT stands at $240 billion. So AI is huge, OTT is still rising, and now is the perfect time to bring them together. Now, what has led to the rise of AI? Like I said, ChatGPT, and especially their API services, which make the engine extensible so that developers can extend, modify and make use of it in any way they want. And there have been algorithmic innovations and computational infrastructure advances as well.

So now we look at VideoReady, an in-house platform created by TO THE NEW, the digital services organization, which brings these two technologies together to elevate the user experience while also serving the business requirements and giving business opportunities to the clients. So what is VideoReady? VideoReady is TO THE NEW's framework of flexible, extensible and reusable components that help build an OTT platform for our clients. What that basically means is the client just has to upload their content library and they have a ready-made system which can be white-labeled for their brand. That gives them a faster time to market; among the many features that we have, most are AI-powered; and because it's all ready-made and quick to take to market, it reduces the total cost of ownership overall.

Now, what is the ultimate objective of building an OTT platform from a UX designer's perspective? I would say it's about maintaining a balance: one factor, of course, is that we have to give the users a splendid experience, right?
That is obviously there; we have to cater to the user needs. But we also have to give business opportunities to our clients and offer them many more avenues than are normally provided, so that they can also make money on the side, right? This is what we cater to with VideoReady: we balance the business needs as well as the user needs.

This is how it looks. This is a quick snapshot of the admin interface of VideoReady. Like I said, the client can just upload their content library and they'll have a dashboard where they can see what kind of content they have across all genres, the number of users, how many are active, and all sorts of stuff, right? Among its many features, the ones powered by AI are these five, which I'll be talking about in the subsequent slides. One is emotion analysis. Then we have AI-powered product placement, a very brilliant feature that creates business opportunities for our clients in very subtle and remarkable ways. Then contextual ad breaks, and thumbnail and preview automation, which is otherwise really a pain; any content creators here would know that creating thumbnails and engaging previews takes a lot of effort, right? So we automate that with VideoReady. And then binge markers.

Now let us look at these features one by one. Emotion analysis. What we do is take any content that we have in our framework and feed it to our AI algorithm, and it reads the content by way of reading the transcript of the video, be it a web series or a movie. Then it categorizes and tags the content based on the keywords it encounters in the transcript. So if the video has a lot of references to ambulances and gunfire and so on, it will tag and characterize it as a violent or gore movie, or as the action genre, and so on, right? What this leads to is that it gives us categories across the tons of content that we have.
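The keyword-based tagging just described could be sketched roughly like this. The keyword lists, tag names and threshold here are invented for illustration; they are not VideoReady's actual taxonomy:

```python
# Illustrative sketch: scan a video transcript for mood/genre keywords
# and emit tags for any category with enough hits. All keyword lists
# and tag names below are made-up examples.

TAG_KEYWORDS = {
    "action":  {"gun", "chase", "explosion", "ambulance", "fight"},
    "romance": {"love", "wedding", "kiss"},
    "sadness": {"funeral", "tears", "goodbye"},
}

def tag_transcript(transcript: str, min_hits: int = 2) -> list[str]:
    """Return tags whose keywords appear at least `min_hits` times."""
    words = transcript.lower().split()
    tags = []
    for tag, keywords in TAG_KEYWORDS.items():
        hits = sum(1 for w in words if w.strip(".,!?") in keywords)
        if hits >= min_hits:
            tags.append(tag)
    return tags

sample = "A gun went off, then a chase began. An ambulance arrived in tears."
print(tag_transcript(sample))  # -> ['action'] (3 action hits, 1 sadness hit)
```

A production system would use embeddings or a classifier rather than literal keyword counts, but the shape of the output, content mapped to mood and genre tags, is the same.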
And then we can use those categories and tags to elevate the user experience by offering personalized content and so on. It also helps diverse content reach a larger audience. For example, regional movies, which otherwise get seen only by people of that region, will be able to reach a wider audience. Let's see how.

So how it works is, let's say we have a movie called Ex Machina and it is fed into the system. Quickly, in a matter of minutes, or sometimes seconds, it analyzes the movie's content and creates mood tags and mood categories, and also genres and some keywords, right? Based on this, across the whole library, it is able to surface similar titles. Just as in experience design we have customer journey mapping, a sort of mood mapping of the user's journey across digital products, here we get a mapping of our entire content library. What you're seeing on screen is, video by video, movie by movie, a mood mapping of all the content. So the content can be organized by mood levels: all the enjoyment movies, if you see, are at the top, and as we go down we have movies with elements of fear and indifference and sadness, right? And within each movie as well, we can see which scenes and which timeframes cater to what mood level. For the first movie, for example, you see there are more elements of enjoyment, but in the middle there is some sadness, and then it picks up again to more enjoyment. While Rain Man is kind of divided: it begins with some anticipation, there are elements of enjoyment, it goes back to anger and anticipation, there are some elements of sadness as well, and finally it ends with enjoyment. So now, this is very powerful.
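The per-movie mood mapping just described, scene-level moods collapsing into a movie's overall arc, could be sketched like this (the scene labels are invented; a real pipeline would score each scene from the transcript):

```python
# Sketch: each scene gets a dominant mood label, and the sequence of
# labels forms the movie's "mood arc". Scene labels here are illustrative.

from collections import Counter

def mood_arc(scene_moods):
    """Collapse consecutive duplicate scene moods into the movie's arc."""
    arc = []
    for mood in scene_moods:
        if not arc or arc[-1] != mood:
            arc.append(mood)
    return arc

def dominant_mood(scene_moods):
    """The mood that covers the most scenes, used to shelve the title."""
    return Counter(scene_moods).most_common(1)[0][0]

# A Rain Man-like arc: anticipation -> enjoyment -> anger -> sadness -> enjoyment
scenes = ["anticipation", "enjoyment", "enjoyment", "anger", "sadness", "enjoyment"]
print(mood_arc(scenes))       # ['anticipation', 'enjoyment', 'anger', 'sadness', 'enjoyment']
print(dominant_mood(scenes))  # 'enjoyment'
```

The dominant mood is what lets titles be ordered on the enjoyment-to-sadness axis shown on the slide, while the full arc is what powers scene-level features like trigger warnings and ad-break placement later in the talk.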
What we can do as UX designers is use this mood mapping of all our content to personalize recommendations. We can recommend content that aligns with the mood of the user. Of course, this will happen over a period of time, as we track what users are viewing and accumulate some data around that; then we can offer better recommendations. So how it works is, let's say a user has been viewing some depressing movies for a while. We can say, hey, we see that you're feeling a little low, how about we cheer you up? And then we offer content tagged with elements of enjoyment. Again, we can further empower the user by putting the control in the user's hands: we show "I'm feeling blue", set to blue by default, as a dropdown. The user has the power to get further depressed and watch more blue movies; by blue movies I mean depressing movies, by the way. Or, if they want to get into a cheerful mood, they can pick from the dropdown and click on the relevant selection, right?

Then there is who we are watching the content with; this is another indirect signal the mood mapping helps with. If we are alone, we are in one kind of mood, whereas if we are watching with family, we are in a different kind of mood. So all this mood mapping, categorization and tagging helps us tailor the content on offer and make personalized recommendations. We can also use it to give users more control through content trigger warnings. For example, if we know a movie by and large has elements of enjoyment, but there is one scene with a bit of gore and violence, maybe we can give a warning: the sequence you are about to watch is gore or violent in nature; do you wish to proceed, or skip that timeframe? Then, time-duration-based videos as well.
Suppose a user is waiting in the airport lounge and has a flight to catch in the next hour or so; they will be in a mood to watch shorter videos. So if the user selects videos under 30 minutes, maybe we can make an assumption that they are in a hurry and might be in a mood to watch some adventurous or upbeat movies or videos, and we can offer that. Similarly, an indirect way of gauging a user's mood, or making note of their viewing habits, is when we find out that a particular star is their favorite, right? We can give them more power through combinations: say I want to watch a movie and I like the chemistry of Shah Rukh Khan and Kajol as a couple, so I can be shown only videos that feature both these stars.

The next feature we have in VideoReady, and like I said earlier this is a very powerful one, is product placement. Product placement helps in generating revenue, and currently how it's done is that a movie is shot with an advertisement inserted into it, and the advertisement stays in the video throughout its lifetime, right? But with the power of AI, what we have done in VideoReady is subtle advertising with flexible campaign runs, all based on data-driven insights. I'll show you a video quickly. So this is how VideoReady works: we feed in a video, this is TVF's Cubicles web series, and the video is read by the AI engine. As the video plays, the engine automatically surfaces a slot: there is an ad opportunity between the timeframes 16:30 and 18:45, and it automatically opens up what products could be advertised in this time slot. So there's the option of an iPhone or a Coke advertisement, right? Now let's move ahead and put the Coke advertisement into the timeframe suggested by the AI. So as we move ahead here, we click on Coke and select it to apply.
Now if we go back to the same scene, you would see a Coke can subtly placed into the frame. Earlier it wasn't there, but with just one click we can... let me go back and show you that it wasn't placed earlier. Sorry about the quality of this video. So in the same scene earlier there was nothing, but with just one click now we can place a Coke can. This is subtle advertising, and it's also flexible, because the Coke can wasn't placed there at the time of shooting, right? So we can always remove the Coke can when the campaign runs out. That gives more power to create business opportunities, and it also elevates the user experience, because the user is not bogged down by mindless product placements; the placement is only there as long as the campaign is running. Furthermore, this ad campaign can be targeted to selective demographics by gender, age group or location. So this is what we have created using AI technology in VideoReady. This is pretty powerful, right? But we have more.

VideoReady can also create contextual ad breaks. What that means is that users always want to stay immersed in what they are viewing, right? And we want high engagement with our content on these OTT platforms, and we want to build a good brand perception for our products. So what our engine does is read the video content we feed into it and automatically detect the most optimal locations for placing advertisements, and it also ranks these ad breaks using scene-level mood metadata, right? It understands the video, so it knows that a grim scene is probably not the moment to insert an advertisement; the ad goes where the audience will be more receptive, so they stay engaged for the rest of the viewing.
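The ad-break ranking idea, score candidate slots by the mood of the surrounding scene and surface the best ones first, could be sketched like this. The suitability weights and slot data are assumptions of this sketch, not VideoReady's actual model:

```python
# Sketch: rank candidate ad slots by scene mood. Slots in grim scenes
# rank low; slots in upbeat or neutral scenes rank high. The weights
# below are illustrative assumptions.

MOOD_SUITABILITY = {  # higher = better moment for an ad
    "enjoyment": 1.0, "anticipation": 0.8, "neutral": 0.7,
    "sadness": 0.2, "fear": 0.1,
}

def rank_ad_slots(slots):
    """slots: list of dicts with 'start', 'end', 'scene_mood'.
    Returns the slots sorted best-first by mood suitability."""
    return sorted(slots,
                  key=lambda s: MOOD_SUITABILITY.get(s["scene_mood"], 0.5),
                  reverse=True)

slots = [
    {"start": "16:30", "end": "18:45", "scene_mood": "enjoyment"},
    {"start": "42:10", "end": "42:40", "scene_mood": "sadness"},
    {"start": "05:00", "end": "05:20", "scene_mood": "neutral"},
]
print(rank_ad_slots(slots)[0]["start"])  # the enjoyment-scene slot ranks first
```

The resulting ranking is also what lets the platform price slots differently for advertisers, as described next.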
It also does scene analysis, which basically means it reads the transcript and is able to analyze the mood, dialogue and action going on in each scene, and based on that it creates a ranking of the ad slots for you. Another aspect of this is seamless storyline integration. For example, the AI system would be able to understand that a character is maybe thirsty, like I am right now, and in a seamless way it can insert a refreshing beverage advertisement there; the user doesn't even notice, because it is seamless and not obstructive. Then, with the ranking the AI creates for all the ad slots, we can charge our advertisers according to that ranking.

Then we have the feature of thumbnail and preview automation. Again, when we feed content into the AI framework, it is able to auto-generate thumbnails and previews for our video content. This helps us save the time and money otherwise spent creating thumbnails and previews, which takes a lot of effort from many people. And it is not randomly generated: the AI system identifies the characters and scenes, and from that it understands the premise of the movie and who the important characters are, and it creates thumbnails and previews accordingly. For example, if we feed a movie into it, it understands who the main character is and will produce a snapshot featuring that character, plus a preview clip of the most important scene, so that it generates interest and the user goes on to watch the whole movie. If we feed in a movie about three best friends, it will capture a screenshot from a scene that has all three characters in it, rather than creating a clip randomly. It also analyzes the mood and appearance: for an upbeat movie it will only create thumbnails with an upbeat feel that go with the theme of the movie, even if there are some elements of tragedy in it. It evaluates the body language and expressions of the characters and selects the best frames for thumbnails and previews. And like I said, it is character-centric, and it understands scene by scene what is being talked about and creates accordingly.

Then finally we have binge markers. All of us have been binge watchers every now and then, whenever an interesting web series or movie comes up. With binge markers, when we feed our content to the engine, it automatically identifies the segments where we can put up markers prompting the user to take action. It makes the content easier to navigate and gives us the power to build more interactions for the user. For example, let's say we feed a web series into the system: it identifies the timeframes where the intro starts and stops, where the end credits start, and where there is a recap. Based on that, we can give cues to the user to skip the intro, or, if the end credits are rolling, to go to the next episode, or to skip the recap, and we can offer other engagement interactions as well. It also does segment length optimization, and using this data we can improve the experience for binge-watching viewers. Overall, making use of this reduces viewer fatigue and engagement increases by leaps and bounds.

All right, now that we have looked at the present and what we are making use of to enhance the experience of OTT platforms, let's look at what's possible in the future. This will be very quick because my time is running out, as I'm told. In the future we will have hyper-personalization. Like I said, currently OTT, and AI especially, is very young, so we don't have that kind of user data, but over the years, when we have years and years of data on our users, we will be able to offer them hyper-personalized recommendations. We will be able to understand that this viewer watches a certain kind of content towards, say, the end of the month, then goes on to watch different kinds of content, and maybe mid-month reduces his viewing overall. Based on this data analysis we can offer hyper-personalized recommendations. Also, at the moment we are using AI technology to understand the content we have and in turn using that data to improve the user experience, but in the future we might also have AI-created content: just as we have prompt-based AI-created images right now, that could be possible with videos as well. Then, immersive technology is something that is sure to be there in the future. We already have Apple releasing the Vision Pro in the coming months, and with immersive technologies we could use headsets like the Vision Pro to further enhance the experience and make it three-dimensional, or even go beyond that, for viewers. Then social integration: currently it stands at a bare minimum; there is Amazon Prime's watch party, but it's a sluggish interface and not much used. In the future we could use immersive technology to enhance it, so we could all watch remotely and feel like we are in the same room with our friends. And then there is also the possibility of making way for a greener planet, by using technology that is greener in nature and planet-friendly. So that was a very quick glimpse into the future, and now I am open to any questions or thoughts you may have.

"My name is Amit and I have a question regarding the thumbnails. You said the system auto-generates them and you get some options. Will there be an option where you can plug in different thumbnails of your own, alongside the ones it recommends?"

Hi Amit, thank you for the question. Like you said, we currently have that ability: it gives us various options when it auto-generates thumbnails, and ultimately the power lies with us to merge, mix and match those thumbnails. Sometimes the problem we run into is, let's say a star has a guest appearance in a movie. Because it is a cameo, the AI does not really understand that that star power will capture the audience's attention, so it sometimes does not generate those kinds of thumbnails, and then we have to create them manually for our audience. So yes, that is possible, and the ultimate power lies with us.

"Along the same lines, let's say there is some content which has certain parts that are not suitable, you know, my entire family can't watch it together because it can't be with kids. Could that part of the movie be skipped?"

Yes, yes. I gave the example of the content trigger warning; that was exactly that. If by and large the movie is family-oriented but there are certain scenes that are objectionable and mature, there could be a content warning, we could insert that, and then the user has the power to skip it and move on. Thank you for the question.

"I was curious about the last part, about greener technology. Could you give us an example of how that could be done?"

Hi, thank you, that's a good question. I am not entirely sure of everything that could be done, but we currently have some websites already doing this. They are able to understand the power consumption of a viewer based on their screen settings or the mode in which they are viewing content. Some platforms have a dark mode and a light mode; dark mode consumes less energy, and based on that they automatically adjust the platform's display preferences. The user obviously has the option to override those preferences, but they can be auto-set in a way that consumes less energy. There is also a website, I forget the name but I can get it for you, which understands that the grid it is being viewed on is not very planet-friendly, and it automatically reduces its image quality and renders black-and-white versions of the pictures as a way to save energy. So, like I said, there are pros and cons, and we don't yet fully understand how we can get to a cleaner planet, but these are baby steps. Thank you.

"My name is Srinagar. For the product placement, is there any consent that we need to take from the people who made the video before placing products into it?"

Thank you. So yes, that understanding is already there: we only feed in content where we have an agreement with the video maker as well as the sponsor that we can place some content here. This feature is primarily used by movie makers who are interested in getting their movie sponsored by an advertiser. They are the primary users of it, and they are the ones who suggest to the advertisers that these are the slots where we can place your product, we can do it here, we can do it there.

"So is there any restriction on the products that can be placed?"

You are asking whether the VideoReady system is able to stay within those bounds, right? Currently, no. It is up to the discretion of the video makers and the sponsors, and it has to be within the bounds of company policies, laws and social norms. The AI wouldn't really have the power to understand that currently; it is up to the discretion of the content creators and advertisers.

"You showed how a user could get results for their specific interests. Can those interests be changed? Is it possible to subtly move the user towards something that you want to achieve?"

Yes. For any user preferences and settings, there will always be methods to override what is being created and recommended by the AI or its analysis. There is always going to be an option to override what the AI does.

"So, as you said, we can feed content to the AI and it will read everything. Imagine we keep doing this and accumulate a lot of data about everything. Is it possible that in the future we could just create a movie featuring these two actors, with this tone?"

In the far future, I'm sure it would be possible; I don't know the technology well enough to say.

"Does that mean it would take over the film industry? And because I work in an advertising background, I'm curious whether, if this technology can understand content, we could use it for creating advertisements targeted at a specific audience."

At the advertisement level it is definitely possible, I'm sure, because the characters can be generic in nature and not all advertisements have stars. So I'm pretty sure that's possible in the coming future, because we already have AI-generated videos and all. About your question of there being movies featuring established stars, I really cannot comment; maybe it's possible, but there will be complications, I'm not sure about that.

"Right, right. I have a question. Emotions are pretty subjective, right? And they can change every day. How can we make an experience which is not very intrusive and doesn't ask us every time we go on OTT how we are feeling? Is there anything you have researched or thought about?"

I think I showed an example where there was a dropdown of mood and emotion, and you pick and choose from that dropdown. By default, with the data that we have, we set it to something; let's say a person only watches comedy movies, so we default it to comedy. But the user has the option to pick and select; maybe they are in a grim mood, so they can select non-comedy content, and so on.

"That was actually my question: say we selected blue. What kind of content will it show? Will it show me videos that cheer me up, or more of the same kind? How will the person then choose further?"

Yeah, so the AI could do both. If you are interested in watching further grim movies, those could be suggested as well, but it is up to us as user experience designers how we design it and how we want to give the options to the user. That's a good question, actually, genuinely, because there are two ways you can take it: one is to give them more similar content, and the other is content that is contrary in nature and uplifts, or makes a mood switch for, the user.

"Further on that: there is a lot of responsibility that comes with this. If, through the content someone is watching, you are able to recognize that they are not in a good place, then whatever we suggest next has repercussions, and there could be some very extreme situations, because some of these movies don't really have resolutions at the end, they don't have satisfying endings, and that could influence the viewer, right? So how does the platform address that responsibility, given it's not a human making the decision?"

The AI just has the data to give to us; ultimately the decision lies with us as designers and how we make use of that data. So the AI would give us data that this user has been watching a lot of grim movies over the past couple of months; now we as designers have the power to use that data to offer them something that is totally opposite to the mood of what they have been watching. The AI doesn't really create the interface. All it does in VideoReady is provide lots and lots of data that helps us analyze the patterns of our audience, and then we get to choose how we offer content based on that.
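The flow discussed in this closing answer, a mood default derived from viewing history that the user (or designer) can always override, could be sketched like this. The catalog entries, mood tags and function names are invented for illustration:

```python
# Sketch of the mood dropdown discussed in the Q&A: default the mood from
# viewing history, let the user override it, then filter the tagged library.
# Catalog entries and mood tags below are made-up examples.

from collections import Counter

CATALOG = [
    {"title": "Movie A", "mood": "enjoyment"},
    {"title": "Movie B", "mood": "sadness"},
    {"title": "Movie C", "mood": "enjoyment"},
]

def recommend(history_moods, override=None):
    """User override wins; otherwise default to the most-watched mood."""
    mood = override or Counter(history_moods).most_common(1)[0][0]
    return [m["title"] for m in CATALOG if m["mood"] == mood]

history = ["sadness", "sadness", "enjoyment"]
print(recommend(history))                        # defaults to 'sadness'
print(recommend(history, override="enjoyment"))  # the user opts to be cheered up
```

Note that the AI only supplies the tags and the history; whether the default leans into the user's current mood or deliberately switches it, the responsibility point raised in the last question, is a design decision encoded outside the model.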