for the past two and a half years as a UX engineer. At Adobe, I was working on Search and Sensei, which is their AI division, and I was building a lot of interactive prototypes and custom interfaces. One such interface was this, where a user can draw an object, and it does a reverse image search on the Adobe Stock website based on what they doodled.

There was also a dashboard which allowed for multimodal searching: you can type in a keyword and also upload an image. Here, I have the keyword "car" and have uploaded an image of a sunset. What this gives me is a lot of images that have the keyword "car" and that same gradient kind of background.

There was also an option to search for similar faces. These are all different AI models the researchers were churning out, and I was trying to build them into interfaces. Here, I had a webcam module built into the interface where I can take a picture of myself, for instance, and then search for similar faces in the repository of images we had. So again, searching for images with the keyword "car" and also selecting images of people with faces similar to mine.

Lastly, this was a feature released at last year's Adobe MAX conference, which lets users find similar images based on three attributes: content, color, or composition. What you're seeing here is a keyword search for "flower" combined with a color search. I've uploaded an image containing several color palettes, and the results, as you can see, reflect all those different colors.

What I'm going to discuss today is a condensation of everything I learned from building these different interfaces for AI-driven products. I'll do it through a narrative. So to start, an initial disclaimer: all characters and events portrayed in this story are fictitious, and any resemblance to real persons, living or dead, is purely coincidental.
And lastly, no managers were harmed in the making of this presentation.

We start off with a designer, like any of you in this room. The designer's responsibility is to inform the team about the user: conduct user research and interviews, and figure out the users' needs and expectations for the product. Then there is a software engineer, whose responsibility is to inform the team about the code: what it is capable of doing, and what they can deliver with the time and resources they have. And lastly, there's a product manager, whose responsibility is to inform the team about the business goals, making sure the product they deliver lives up to those goals in terms of the company's mission and whether it generates revenue, so that everyone gets a salary at the end of the day.

So let's imagine this is our team, and their task is to build a dashboard. Based on the research the designer conducts, they come up with this interface for the dashboard. Unfortunately, though, when the designer hands it over to the software engineer, the whole interface breaks apart. They put their minds together and try to understand why, but sadly the product manager is not happy about a whole sprint wasted on the task.

They come together again to see what they can do, and they figure out they're probably lacking expertise around the data they're presenting. So they bring a data scientist on board to understand the limits and bounds they're working with. The data scientist informs the team about everything that goes into presenting and visualizing data. Armed with this information, the designer goes back to the drawing board. They understand that the data they're trying to present only makes sense within certain limited bounds and ranges.
So what the designer does now is add a slider at the bottom of the interface and ask the software engineer to limit it to the conditions under which the data is actually needed by the user.

Now, with the product working fine, the data scientist gets a bit more ambitious. They say, OK, maybe we can even speculate about what the data will look like down the line. So the data scientist comes in and says, we can put in a regression algorithm to project what this metric will look like in the future. But again, unfortunately, the thing breaks apart, and the product manager is again not so happy with the progress.

They come together as a team, trying to figure out what went wrong, and they think that for projection they may need something smarter than a simple regression. So they bring in a machine learning engineer, whose role here is to inform the team about all the possible algorithms out there for projecting and forecasting on the data. There's one caveat, though, which the machine learning engineer points out: "I can tell you about all the possible algorithms, the parameters, and the data, but I can't tell you which one will work, because machine learning is kind of like a magical black box where you have to try it and figure it out for yourself."

Luckily, the designer was competent enough to connect the prototype they had built to the APIs the machine learning engineer was providing. What they got was this fancy prototype with options at the top right, where you can select which model you're working with, and with which parameters. And so it goes that the second option the machine learning engineer had provided worked splendidly well, and they all got a great bonus. And didn't I say it was fictitious? But yeah. Just like every other story, this one has some morals as well.
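The kind of projection the data scientist first proposed can be sketched as a simple least-squares line fit. This is just an illustrative sketch; the metric history and the three-step horizon are made-up numbers, not anything from a real dashboard:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical dashboard metric observed over six time steps.
history = [10.0, 12.0, 13.5, 15.0, 17.0, 18.5]
slope, intercept = fit_line(list(range(len(history))), history)

# Project three steps past the observed data.
forecast = [slope * t + intercept for t in range(len(history), len(history) + 3)]
print([round(v, 1) for v in forecast])  # [20.2, 21.9, 23.6]
```

This is exactly the "simple regression" the story is about: it works while the trend stays roughly linear, and breaks down when it doesn't, which is why the team ends up needing smarter forecasting.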
You might remember thinking as a designer: OK, if I know whether this can be implemented in code and whether it fits the business needs, I'm good to go. But if you move toward data science and data visualization with machine learning embedded in it, you have to widen your skills in those domains as well. You might ask, OK, how would I go about doing this? There are all these readily available tools that have helped me in the past.

To get started in data visualization, you can pick off-the-shelf tools like Microsoft Excel and Tableau to put your data in and visualize it. If you're a bit more on the code-savvy side, you can try the JavaScript libraries D3 and Vega. There are also books and people that have been very helpful: you can follow them on Twitter, look for their videos on YouTube, and read through their fantastic books.

On the AI and machine learning side of things, to get your hands dirty, you can try the various cloud-based APIs provided by Amazon and Google: put your data in and see whether those algorithms work for it. There's also a fantastic tool that recently came out called Runway ML. It's currently targeted at artists making new work powered by machine learning, but it's a good testing ground for understanding what you can actually do with machine learning. Again, if you're a bit more on the coding side, you can try ml5.js, a library from the Processing community. Processing, if you're not familiar with it, is an introductory coding toolkit for artists, and ml5.js gives you access to much of what something like TensorFlow would allow you to do.

On the side of books and people, you might have heard of Andrew Ng. He's one of the founders of Coursera, with the motive of teaching everyone about machine learning, and he has a great book coming out called Machine Learning Yearning.
The book talks about all the things that go into making AI-driven products. There's also a great YouTube channel called Two Minute Papers, which covers upcoming AI and machine learning papers in under two minutes in a very interesting, approachable manner. Lastly, for the code-inclined folks here, there is a course called fast.ai, which is all about the implementation details of AI, without going into the nitty-gritty the way an Andrew Ng Coursera course might.

But you'll be like, OK, I get all this, but how do I actually go about doing things? For me, machine learning is this black box, and we're like baffled buffoons standing around it. We don't know what goes on inside it. All we can decide is what the input is and what the output is; that's what a black box is. You decide what goes into it and analyze it based on what comes out. Unfortunately, though, if you throw garbage in, all you're going to get is garbage out.

So to start off, let's take an example. We're in Hyderabad, right? We don't care about hot dog or not hot dog; we care about biryani or not biryani. Let's imagine you're doing social media marketing for a company called Paradigm Biryani, and you want to determine whether the images your users are uploading to social media contain biryani or not.

The first thing you can do is visualize the data: scrape all those images, put them on the screen, and see what the data is telling you. What you're looking for is any biases. If you've been a food blogger, you might know that people usually take pictures of food from the top, so your data might be skewed that way, and if someone then uploads a photo taken from the front, the model might not be able to handle it. So what you'll have to do is balance your training data across the whole range of images a user might upload.
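That bias check can start as something very simple: count how often each camera angle (or any other attribute you care about) shows up in the scraped set and flag anything badly under-represented. A minimal sketch; the photo records, the "angle" attribute, and the 30% cutoff are all invented for illustration:

```python
from collections import Counter

# Hypothetical labels for scraped social-media food photos; in practice
# these might come from EXIF data or a quick manual labeling pass.
photos = [
    {"id": 1, "angle": "top-down"},
    {"id": 2, "angle": "top-down"},
    {"id": 3, "angle": "top-down"},
    {"id": 4, "angle": "front"},
]

angle_counts = Counter(p["angle"] for p in photos)
total = sum(angle_counts.values())
for angle, count in angle_counts.most_common():
    print(f"{angle}: {count / total:.0%}")

# Flag classes that are badly under-represented so the team knows to
# collect or augment more examples before training.
underrepresented = [a for a, c in angle_counts.items() if c / total < 0.3]
print("needs more data:", underrepresented)
```

Even a crude tally like this makes the "everyone shoots food from the top" skew visible before it gets baked into the model.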
Secondly, if you find there's a lot of variation in what users are uploading, you might want custom training data. What I mean by that is: take all those images you're getting and have them labeled through the crowdsourcing offerings out there. There's a good one called Figure Eight, which was CrowdFlower earlier, and you can also try something like Amazon Mechanical Turk. These let you create a task, as a questionnaire, for people to label those images. It does come with its own set of responsibilities, because you are offloading that task and outsourcing it to the wider world, so you have to build in checks and balances. There's also a slight ethical conundrum around it, because the people actually filling out those surveys are not always paid fairly, so you may have to take that into consideration as you go about it.

Next up, let's say you've figured out your dataset. Then you go to a machine learning engineer, and they're going to throw a bunch of questions at you: do I train this model from scratch, or do I take a pre-trained model? Do I take ResNet, Inception, something trained on ImageNet, whatnot? And then: what will the parameters be? What's the decay rate? What's the learning rate? They might give you a pre-canned setup where you just have to decide one thing or the other. But at the end of the day, as I said, because it is a black box, what you'll end up doing is creating an interactive prototype that visualizes all these different options in front of you, with the parameters as sliders, so you can find the slider values at which it performs best for your use case. You expose all the levers, throw it out to the team, and test all the permutations possible.

Once the parameters are finalized, you have to productize it, so you go about simplifying it.
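"Test all the permutations" can be sketched as a plain grid sweep over the exposed levers. Note the hedging here: the models, learning rates, decay values, and every accuracy number below are made up, and `validation_accuracy` is a stand-in you would replace with a real training-plus-validation run:

```python
from itertools import product

def validation_accuracy(model, learning_rate, decay):
    """Stand-in for a real training + validation run; the scores are
    fabricated purely so the sweep below has something to rank."""
    scores = {
        ("resnet", 0.01, 0.9): 0.81, ("resnet", 0.001, 0.9): 0.88,
        ("resnet", 0.01, 0.99): 0.79, ("resnet", 0.001, 0.99): 0.85,
        ("inception", 0.01, 0.9): 0.77, ("inception", 0.001, 0.9): 0.84,
        ("inception", 0.01, 0.99): 0.80, ("inception", 0.001, 0.99): 0.83,
    }
    return scores[(model, learning_rate, decay)]

# The "levers" the prototype exposes as dropdowns and sliders.
models = ["resnet", "inception"]
learning_rates = [0.01, 0.001]
decays = [0.9, 0.99]

# Try every permutation and keep the best-scoring configuration.
results = [
    ((m, lr, d), validation_accuracy(m, lr, d))
    for m, lr, d in product(models, learning_rates, decays)
]
best_config, best_score = max(results, key=lambda item: item[1])
print(best_config, best_score)
```

The interactive prototype in the story is essentially this loop with a UI on top: each slider position is one entry in the grid, and the team is collectively doing the `max` by eye.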
You decide: OK, at this threshold, the model performs best at distinguishing a biryani image from everything else.

To sum it all up, the way I see it, the role of design has always been about removing uncertainty early on. That's why people brought in designers in the first place: even for a consumer product, designers were brought in to understand users' needs and preferences before the product was developed and put into their hands. If it's a dashboard, the designer takes on the additional responsibility of understanding the incoming data. And if it becomes an AI and machine learning problem, you also have to bring into context all the possible algorithms and parameters you could use. With that, I thank you for joining me.