Hello everyone and welcome to this week's Product School webinar. Thanks for joining us today. Just in case you didn't know, Product School teaches product management, coding, data analytics, digital marketing and blockchain courses online and at our 15 campuses worldwide. On top of that, every week we offer some amazing local product management events and host online webinars, live streams and ask-me-anything sessions. Head over to productschool.com after this webinar to check them out.

Hello and welcome. My name is Matthew Jordan and I'm a Senior Product Manager at Ubisoft, and I'm here to share with you today my first product development experience: how I approached the task of building a product from scratch and some of the lessons I learned along the way. This case study webinar is for those of you starting in your first product role, or who have found yourselves tasked with building a product for the first time with no prior experience. The product that is the subject of this story is an internal software-as-a-service product that manages a variety of tasks and functions for live testing of the games created by our game studios around the world prior to their release. It is not something you can visit and try out yourselves, unfortunately, but as is common in large organizations, custom-built internal solutions are integral to internal operations, even though not much light is shed on their development, as it lacks the glamour of B2C or B2B products. I learned a great deal from this experience and it has shaped my approach to the development of other products within my portfolio today. My goal in this webinar is to give you an overview of this experience and share the insights I gained and the lessons I learned from starting a new product shortly after joining a new company. I've tried as much as possible to keep it true to memory and not to add in techniques or knowledge that I have now but that I didn't at the time. I'll start the story at the beginning.
Chapter one: context, problems, needs and feasibility. Ubisoft is a large company with production teams of different sizes and resources based all over the world, and as our games became more and more online, the need to perform live testing of them was increasing in importance. There was, however, no shared or centralized approach or process for how best to manage live tests. So my management tasked me with building something to answer these needs. It was really that straightforward and simple: no brief or guidelines, just an objective. I was, and still am today, fortunate to be within a supportive management structure where I was given the freedom and autonomy to explore the opportunities arising from the simple objective of finding a solution for live tests. My first step was to understand the needs of teams: what was problematic for them and what the ideal capabilities of a solution would be, and then to formulate a solution. I needed to build a global view of how the studio teams and all of the stakeholders work, as without knowing their environmental constraints, problems and needs, I would not be able to formulate a viable solution proposition. At this point I was working by myself, and the first step I took was a feasibility study, a key task of which was contacting many stakeholders around the world in a variety of direct and indirect positions relating to the production of live games. I had written a short survey to conduct in person, on the phone or over email to obtain information on their experience with live tests and, indirectly, to garner their support for the endeavor I was undertaking. This would pay dividends in the future, as several stakeholders I worked with would go on to be internal users of the product. The information collection phase was relatively straightforward, as there is a strong collaborative culture within Ubisoft, so I was met with a lot of enthusiasm and openness with information once I explained I was there to help.
I put all the responses together, along with my research into potential external solutions and our capabilities to develop an internal product, into a feasibility document and decided the best approach was one where we could tailor a solution from the ground up to target our needs and avoid any superfluous feature sets or constraints that could come with an external solution. This was centered around an assumption that an end-user-operated approach would best fit their needs. To ensure the product maintained focus, I devised three pillars to guide our concepts, help triage our features and structure roadmaps. The first of these was that the product needed to be scalable, meaning useful both to small studio production teams with fewer resources and to larger production teams who had higher requirements for volume, reliability and connectivity. Secondly, it needed to be flexible, as production studio teams' needs were not universal. We set out to make the product as modular as possible, with features being a la carte to enable us to provide teams with everything they needed and nothing they didn't, and with a microservices approach so that if one part were to fail, the remaining operations would stay up. Lastly, we needed to build a centralized solution. Our teams are widely distributed around the world, so it was not possible to offer local support, and we needed a variety of different systems to talk to each other to support the needs of teams. The product's functional and technical architecture would need to be adaptable and robust, as we would be a central point of API requests and calls. I found this pillars approach very useful, as it created a filter with which to review any future developments and the feature requests we were receiving. They needed to meet the pillars' objectives, otherwise they didn't fit with our core needs and wouldn't, metaphorically, hold up the product.
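To make the filtering idea concrete, the three-pillars triage could be sketched as a simple rule like the one below. This is purely a hypothetical illustration, not code from the actual product: the pillar names follow the talk, but the function, the feature examples and the data shape are all invented.

```python
# Hypothetical sketch of a "three pillars" triage filter. The pillar names
# come from the story; everything else here is invented for illustration.

PILLARS = ("scalable", "flexible", "centralized")

def passes_pillars(feature_request: dict) -> bool:
    """A feature request stays on the roadmap only if it upholds every pillar."""
    return all(feature_request.get(pillar, False) for pillar in PILLARS)

requests = [
    {"name": "modular player management",
     "scalable": True, "flexible": True, "centralized": True},
    {"name": "studio-local export tool",
     "scalable": False, "flexible": True, "centralized": False},
]

accepted = [r["name"] for r in requests if passes_pillars(r)]
print(accepted)  # ['modular player management']
```

The point of a filter this blunt is that it makes rejections impersonal: a request either upholds all three pillars or it doesn't, regardless of who asked for it.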
So at this point in the story, I knew what production teams were needing and wanting, I had identified an internally developed product as our best bet for a business solution, and I had identified three pillars to guide our planning and development. I presented my findings and recommendations to my management and was met with approval. Now I needed to get things moving. Chapter two: pilot projects and communication. Taking the vision defined at this point, myself and a technical project manager set off to solidify our plans to build the product I had in mind, but this needed to be on paper for others to follow. It was at this point that I started the internal promotion and awareness of our project, for two reasons. Firstly, to update all stakeholders that we were moving forwards, to keep them informed of progress and to remain open to any input they had; and secondly, to avoid any duplicate projects arising. In a large software company of more than 15,000 employees, internally competing projects can arise, and I wanted to solidify the first-mover advantage and consolidate efforts on our initiative. This was to ensure production studio or support teams weren't needlessly using resources to build tools or local solutions just for their own tests when they could contribute to something much larger and more comprehensive for all. I wanted collaboration over competition. To make the scope of the product visually accessible, I put together a concept map: effectively a clearly defined feature map consisting of branches that address key groupings of features such as player management and feedback management. There are many good applications out there for building maps to explore ideas; I built my first mind maps in Coggle and then consolidated the branches so that none extended more than two levels from the center, to enforce the organization and categorization of ideas.
With this we could easily explain the scope of our product at a high marketing level, and it also served as a guide for strategic direction. An important approach I wanted to take was to pilot the product directly with production teams. This involved recruiting three established large production studios to agree to work with us, with their titles to be used on the product once released, and to be a part of the design and development process. My objective here was to ensure that what we were building was as closely aligned as possible with the real needs of teams, and not to end up in a black-box scenario where we would toil away with only the initial needs in scope, only to return a year later with a ready-to-launch first version that was no longer useful or was missing too many features to serve needs that had arisen or changed in the meantime. Monthly meetings were in place to update the pilot projects, and when feasible we visited their studios to demonstrate progress and functional user flows. Now that we knew where we wanted to go and who we were going with, we needed to draw up the plans. I did this by first writing a yearly strategy document which outlined all stakeholders' visions and goals, the risks and resources needed for the first year, and the key features to be part of the MVP. From this we built our first roadmap, which was in two phases. The first phase was to build an MVP based on needs agreed with our pilot teams. The second, once we had delivered and demonstrated this, was to adapt to any additional needs and refine the MVP ahead of our first live test, which was to be one of the pilot teams' early live versions of a significant game. Our two key deliverables were being able to acquire and manage a database of players, with acquisition coming in the form of players registering their interest to participate on the game's website, and secondly managing the distribution of the test version of the game to players.
In this case, on PC, to a volume of around 100,000 accounts in a very short period of time, typically a few hours at most. Common-practice sprint organization and development was undertaken against a backlog of these features, but with a strong focus on known needs and, importantly, reliability: what we had to deliver was not a consumer product we could roll back and pull off the shelf if it wasn't up to the task at launch. Our main assumption was that with enough testing, validation and preparation, we would be fine in such a controlled environment. Chapter three: development and sticking to the plan. Now we were ready to develop our solution, with two key success criteria: being able to acquire players through website registration, and being able to distribute access to players directly to their PC gaming accounts without using an intermediary solution such as a token or a key. Two developers and a tester joined the team, and we completed sprints over the course of nine months, making alterations after pilot project consultation as required. An important decision point was reached during the development period: as we progressed and promoted what we were doing, production studios outside of our pilot teams were taking note and requesting to use our solution ahead of time. We were faced with the question of whether to give in to the temptation and excitement of having our product working earlier, showcasing and proving that we could deliver. I took the decision not to launch with this earlier team: if I was unable to be confident we'd have the reliability they would need, I wasn't prepared to risk their live test, nor the reputation and trust I'd built so far in the journey. I maintained a quality-first approach, and as I was the ambassador for the product, I had to protect it from any reputational damage lest we lose support.
I also wanted to avoid adding any functional debt so early into the product by cutting corners on the MVP and only partially implementing what we'd agreed to do. I was not in the mindset of "close enough is good enough". We may not have accrued any technical debt, as our code would have been clean and done what it was supposed to do, but our user experience would not have been good, it would have required too many operational workarounds, and we would have risked populating invalid datasets that we'd have to untangle in the future. We continued on with development and testing, and were ready for one of the agreed pilot projects' first live tests a few months later. Chapter four: always expect the unexpected. We were now ready to launch with one of our pilot projects using the distribution set of functions. We passed UAT and production environment testing, coordinated the operational tasks with the production team, and at the time and date specified we commenced the distribution process for players. We were live. All seemed well until about 30 minutes in, when we realized something wasn't right. The process was getting stuck. An important lesson was learned here: test thoroughly, and also realistically. Within the process flows we had done thorough load testing on servers and APIs, but what we hadn't done was test the functional flows at the same time and volume we were going to use for the pilot. Now, there were some constraints to doing this at the time. We couldn't test by giving 100,000 testing accounts to real players, but in hindsight I would have focused on a solution to this constraint, which we did eventually find. The root cause in the end turned out to be a misconfiguration of the database causing data corruption, something not uncovered in our testing. So everything stopped and we worked frantically to address the issue with infrastructure teams, all the while taking careful notes of what was happening for a later incident report and post-mortem.
In parallel to this, it was paramount to keep live contact with production teams to advise them of the status and to allow community management and support teams to respond to players' inquiries as to why they hadn't gotten their test game yet. All the product testing and validation reports in the world won't cover 100% of scenarios, so it's important to be ready to respond. There's no set-and-forget mentality, and as we were now the face of the product, we carried the expectations of its performance. The issue was fixed and we recommenced. All in all, only a few hours were lost, and the relationship we had built with the production studio up to that date meant we hadn't shaken their confidence too much; in fact, how we handled the issue and its resolution positively added to their perception of us. The remaining functions all worked as expected and the live test was a success in the end. We had turned 18 months of information gathering, planning and development into a real product that was operating, removed a host of manual and slow tasks from production teams, and built the first steps of a platform to provide a host of future benefits and opportunities. The epilogue: it's difficult to simplify complexity. A major assumption that was challenged again and again in the first year after launch was that live production teams would be able to easily understand the product and the many variables of the company's technical and functional ecosystems, coordinate with the other stakeholders for launch, and self-manage the platform for the test themselves. Understanding all the moving parts and requirements of the ecosystem outside of the production studio, and how these tied into our product and its setup, was too complicated to pick up and understand quickly. My 18 months of learning the ecosystem's needs, cross-functional team requirements and process definitions could not be transferred through meetings, tutorials and product support.
The production teams did not have the time or resources to dedicate to managing the requirements of the product. Their focus was first and foremost on the game they were producing. Fast forward to today, and we've since built up our team to eight, with an operational team and a series of processes and procedures for onboarding new teams, managing issues, and coordinating a diverse, international set of multidisciplinary teams who all contribute to a live launch. The product is the platform that now globally manages all the user flows, functions and requests for millions of players each year, and is a critical part of our live productions. We are still evolving, adapting and solving problems to support our global production studios, and a flexible and modular approach from the beginning has been crucial to our success. The evolution of feature requests and prioritization is largely based on my approach of asking: is it for the greater good? We have many teams, so developing one specific niche or nice-to-have just for one of a dozen production teams doesn't fit this underlying approach. Features come from internal and external drivers and must be leverageable by other studio teams, otherwise we start to get features for, and branches of, our product tree with only one leaf. So we'll often look and see how a feature can be made to have broader utility, and if it can't, then we keep it on the shelf until a more widespread need for it arises. We've also decommissioned or removed aspects or functions of the product. These could be functions we planned, marketed to teams or built MVPs for, but which never found a home, or where the underlying assumption of utility was disproved in prototyping or practice. As we've taken on the operational aspect, the balance of feature requests and priorities has shifted from the users to us internally.
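The "greater good" rule described above boils down to a threshold test: a feature gets built only when more than one team can leverage it, otherwise it stays on the shelf. The sketch below is a hypothetical illustration of that rule only; the function name, feature examples and threshold are all invented, not from the actual product.

```python
# Illustrative sketch of a "greater good" prioritization rule: build a
# feature only if multiple teams would use it; otherwise shelve it until a
# broader need arises. All names and the threshold are invented.

def triage(feature: dict, min_teams: int = 2) -> str:
    """Return 'build' when enough teams would use the feature, else 'shelf'."""
    return "build" if len(feature["interested_teams"]) >= min_teams else "shelf"

features = [
    {"name": "cross-studio feedback export", "interested_teams": ["A", "B", "C"]},
    {"name": "single-team telemetry view", "interested_teams": ["A"]},
]

decisions = {f["name"]: triage(f) for f in features}
print(decisions)
# {'cross-studio feedback export': 'build', 'single-team telemetry view': 'shelf'}
```

In practice the threshold is a judgment call rather than a fixed number, but encoding the question this way keeps single-leaf branches off the product tree.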
The product team has become the body of knowledge and experience on how to perform live tests and build frameworks, and with this we presented at several internal summits around the world to further spread our product and assure teams that with their games and our support, they can be ready for their live tests. So, with the story behind me, I'd like to highlight three main takeaways from my first product management experience. Firstly, envision, plan and build modular and flexible foundations for your product. As your assumptions are challenged and needs evolve, you don't want to be restricted in future development and feature opportunities by rigid frameworks and architecture. A microservices approach for central operations proved critical to removing as many dependencies within the system as possible. If one service were to fail, we could revert to our contingency plan for that service only, while all the others continued unaffected. A modular approach allows you to build what you think a team needs with them, but also allows you to abandon assumptions about how you think something should be done in favor of the realities and needs of your users. Secondly, be transparent and proactive with your development roadmap and be its champion. Your users may not know what they need, or their needs may evolve, so ensure that they can see what features you are tracking to build for them, and ensure regular opportunities for them to voice a change or addition, or to flag anything obsolete. Internal customers and users should be viewed as just as demanding as external ones, but with the benefit of being much more readily available. You can call your colleagues around the world, present at workshops and set up roundtables to discuss ideas and present proposals with relative ease; market the idea and solution, be the face of the product, and inspire trust and engagement. Finally, my third point would be to find stakeholders to pilot your product with.
I consider this invaluable, as it forces real-world requirements to be baked into the foundation. We did pre-feasibility, development and first live usage with our pilot teams, enabling immediate and direct usefulness. It also built greater trust with other, non-pilot production teams: as we weren't presenting a new product out of the box, and it was being built with teams, no one was going to be the test subject of an unknown solution and risk their first live operations. That finishes my story, and I hope this has been useful to you, whether it has given you some insight, or comfort that we're often in the same boat, trying to navigate uncharted waters on our own, which is especially daunting your first time. I learned a great deal from this experience and hope that sharing a part of it here in this webinar has been helpful. Thank you for listening.