All right, hello everyone. Great to be here today. So I want to open with a little story. It was almost a year ago exactly, November 30th last year, and I got a Slack at about 10 a.m. saying, hey, we're launching ChatGPT today. Don't worry, it shouldn't be a big deal. Expect this to be a low-key research preview. Shouldn't really impact the sales team. I thought, cool, all right, I don't have to worry about this. And then something started happening. We were picked up by media everywhere: The Guardian, The New York Times had us on the front page, which just blew our minds; we printed it out and framed it. The BBC. And as Trevor Noah put it, our chatbot went viral. So I had to deal with going from about 30 inbounds a week with my small team to 10,000, with three sales reps. That was a really interesting month or two or three.

So to take a step back, how did we get here? OpenAI started as a research lab in 2015 with the ultimate goal of creating safe, beneficial AGI, AGI essentially being an autonomous system that can perform work as well as humans. From there, we began releasing our research to the public. Our first state-of-the-art model was GPT-3, which we released in 2020. We had four flavors of the model, ranging from Ada, our most basic, good for categorization and classification, all the way up to Davinci, which could start to do some real generative content. And we began to see this powering real apps out in the wild, like Jasper and Copy.ai.

We released GPT-4 in March of this year. It is likely the most complex piece of software mankind has created. A really big moment for us at OpenAI was when GPT-4 finished training in August of last year. We didn't know exactly how powerful this model was going to be, and it definitely exceeded our expectations. We did a demo for Bill Gates, and he said, you know, there have been two moments in my career where I thought the world was about to go through a paradigm shift.
The first was when I saw the graphical user interface, and the second was today, when I saw GPT-4 perform on the biology olympiad.

Since then we have released a number of different API endpoints. We have DALL·E, our image model, released last summer. This was a really big deal. I know it feels old hat now, with a lot of image models on the market, but this was really exciting stuff. And frankly, it's what made me want to join OpenAI when I was interviewing about a year and a half ago. We also released Whisper, our ASR model, which is available both open source and via managed API. We've recently released voice and vision, and also introduced some orchestration capabilities with function calling and our new Assistants API.

So I want to give a little guidance, because I get this question a lot: how do I get started? How should I think about building my app on OpenAI's models? At the base layer, you have the intelligence, the foundation: our different API models, whether that's GPT-4, our new Assistants API, vision, DALL·E, Whisper, et cetera. On top of that, there are different ways to give this intelligence layer access to your data and customize the experience. The first is embeddings. This is often referred to as RAG, or retrieval-augmented generation, where you're essentially vectorizing your data and allowing GPT to search across it for relevant results. There's also fine-tuning, where you give the model samples of inputs and outputs and actually alter the underlying weights of the model to customize it for your organization. And the final one is this brand-new functionality, the ability to call APIs, whether you're outputting JSON or using our function calling tools, which lets you pull external data into your results. So after building this amazingly powerful model in GPT-4, we're focusing more and more on giving you the ability to customize it for your app and your product. And then finally, I just get this question a lot.
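To make the embeddings/RAG idea concrete, here is a minimal sketch of the retrieval step. The `embed()` function is a hypothetical stand-in; a real system would call an embeddings API (such as OpenAI's embeddings endpoint) and typically store the vectors in a vector database. The document texts are invented for illustration.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model: it just
    # derives a unit vector from the text so the pipeline runs.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# Your private documents, vectorized once and stored.
documents = [
    "All API traffic is secured with TLS 1.2 in transit.",
    "Fine-tuning alters model weights using input/output samples.",
    "Function calling lets the model pull in external data.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity (vectors are unit length).
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages are then pasted into the prompt so the
# model can answer grounded in your own data.
context = retrieve("How is API traffic encrypted?")
```

With a real embedding model, the top-scoring passage would be the semantically closest one; here the stand-in only demonstrates the mechanics of vectorize, score, and select.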
What should we do after that? Well, the advice I give is that using GPT-4, or using a model at this point, is not an advantage. It's a platform. So build a product that has real differentiation and a real niche use case, and think about the market you're going after and how you're going to succeed there, versus just leveraging AI as your advantage.

So what are we seeing out in the wild? What are companies doing with us? Salesforce has launched Einstein GPT, which lets you generate content, surface insights, and run queries conversationally. We've got Ironclad, a B2B SaaS tool used for binding contracts, and they have released AI Assist, which lets you automatically redline a document and then go through and approve or deny the suggested changes. This is really meant to streamline contract management for sales organizations. I use Ironclad heavily, and this has been a great help for me. Mixpanel, a data analytics tool, introduced a way to query data conversationally. So instead of going into Mixpanel and asking how many user signups did I get last week, and creating a funnel manually or running a cohort analysis, you can now query data conversationally and even generate visualizations of that data within their chatbot. And then finally we have Datadog, which lets you look across observability and manage incident response using the GPT function calling mechanisms. We've got hundreds of companies developing on our APIs at this point. It's been really exciting to see the proliferation of creativity across many organizations, whether that's giant companies like Morgan Stanley and Lowe's or tiny startups building AI-native apps. So it's been really fun and exciting to watch this proliferation.

So about two and a half months ago now, we released ChatGPT Enterprise. This was really exciting for us. I think this is also maybe a good moment to remind the audience that we never train on data sent to us via API. We also do not train on data sent to us in ChatGPT Enterprise.
So this is our safe, private, and fast way for organizations to leverage ChatGPT. There was a lot of fanfare when we announced this, and we saw a lot of speculation: what does this mean for the future of work, for the future of the information worker? ChatGPT Enterprise is essentially our internal productivity tool. It gives everyone in the organization a chief of staff, or an assistant, however you'd like to think of it. Whether that's people in HR, data science, sales, PMs, project managers, your lawyers, everyone can now be more efficient and productive at their job. BCG ran a really great study. They analyzed the use of AI across their consultants and found that, on average, when their consultants used AI in their day-to-day work, they completed 12% more tasks, 25% more quickly, with 40% higher quality. Pretty impressive results from using AI in your day-to-day work.

So what else is in ChatGPT Enterprise? This is my favorite tool we have probably ever released. It's called advanced data analysis. It used to be called Code Interpreter; same thing, we rebranded it, because we thought Code Interpreter was maybe a little intimidating if you're not a developer. This is essentially a tool that uses Python to run queries for you. You can ask natural language questions about data, and it will run Python, perform math on that data set, and even visualize it in different formats. As an example, in this one we're essentially saying create a scatter plot. You could create any sort of visualization, bar charts, et cetera; you could compare different data sets and it will visualize them for you. It can also perform advanced math. One interesting use case I like to tell is our finance team at OpenAI. Every month, they get a giant file with all the ChatGPT purchases, and they have their tax rates by municipality, and they have to figure out how much tax we owe.
And the file of ChatGPT purchases is so large that they can't open it in Excel. Champagne problem, I know. So they were chunking it up, opening the chunks in Excel one by one, and taking hours and hours to run the report. And they said, what if we just put this in ChatGPT? So they put this giant file in ChatGPT, added in the municipality tax rates, and within minutes it calculated how much tax we owe. It saved them hours and hours of time. And we've found that finance teams and investment teams are now relying heavily on advanced data analysis in their day-to-day jobs to perform pretty rigorous mathematical operations.

So I get asked a lot as a sales leader, how does my team use ChatGPT? How do we dogfood our own products? Some of these are pretty obvious, like generating emails and generating copy, but I think advanced data analysis is actually the tool that I use the most for sales operations. As an example, when I needed to cut territories across my team. I had no RevOps or sales ops, by the way; we hired our first RevOps person in August. Amen, hallelujah. But up until then I was doing a lot of this myself, and advanced data analysis was really helpful for that. Instead of having to manually generate this in Excel, I just uploaded files saying here are all of our customers, all our inbound leads; create territories and divide them across 10 reps or 20 reps. It could do it by geography, it could do it by alphabetical distribution, it could do it by round robin. I basically wanted to say, here's the data about my companies and my inbound, and I want you to generate territories for me, and it was something that was very easy to do.
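As a rough sketch of what advanced data analysis is doing behind the scenes for the finance example, the whole job reduces to a join and a group-by in pandas. The column names, municipalities, and rates here are invented for illustration; the real purchase file is far too large for Excel, but pandas processes it the same way.

```python
import pandas as pd

# Toy stand-ins for the real files.
purchases = pd.DataFrame({
    "municipality": ["Austin", "Austin", "Boston"],
    "amount": [20.0, 20.0, 20.0],
})
tax_rates = pd.DataFrame({
    "municipality": ["Austin", "Boston"],
    "rate": [0.0825, 0.0625],
})

# Join each purchase to its municipality's rate, then total the tax.
merged = purchases.merge(tax_rates, on="municipality")
merged["tax"] = merged["amount"] * merged["rate"]
tax_owed = merged.groupby("municipality")["tax"].sum()
```

The territory-cutting request works out similarly: once the customer file is loaded, a geographic, alphabetical, or round-robin split is only a few more lines of the same kind of code.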
Similarly with rep performance: if I'm trying to understand how my reps are doing, rather than having to run this analysis in Excel, I can just upload a file from a Salesforce export and say, hey, calculate performance, sales cycle, ASP, let me know which reps are performing well and who's not, and run all of this within ChatGPT, with visuals, so I can just put it in a deck to present to my executive team. You can also use it for coaching. I upload transcripts of one-on-ones and ask ChatGPT, what could I do better? How could I be a better mentor or coach to my reps? And it can give me advice, like maybe you should give this feedback, or maybe you should ask about this, or you brought this up three weeks ago and should probably follow up on it.

So about 10 days ago, Sam Altman stood exactly right on this stage at our developer day conference, kind of wild to be back here 10 days later, and announced GPTs, our big new functionality that essentially lets you customize your ChatGPT instance. You can customize it using instructions, essentially telling ChatGPT what you would like to see in the output; give it access to expanded knowledge; and actually take actions using our new Assistants API. So I wanted to walk you through some examples of GPTs we've created internally and how we're using them. One is a call summarizer with next steps: you can just dump in your raw notes from a conversation you had with a customer, and it will automatically summarize that call and give you clear next steps that you can send to the customer in a follow-up email. It makes it really easy to go from conversation to really great follow-up. There's also the sales-to-CSM transition GPT. When sales reps close deals, they have to hand the customer off to a customer success manager.
This GPT lets them put in the raw notes from all their conversations with the customer, and it formats them in our template, which makes it really easy for the CSM to understand the different roles and the next steps, and how to engage that customer to make them successful and onboard them onto ChatGPT. And then the final one, which is probably the most useful one we've created for the entire organization, is our customer playbook GPT. This is basically our giant FAQ that we've collected over the last couple of years. Every question a customer asks, we put into this giant document, which is pretty unwieldy to search across. That's what I had been doing: every time I got a question from a customer I didn't know the answer to, I'd go into this document and Ctrl+F and try to find the answer. Now I can just conversationally query this document. As an example, I just threw in: what encryption does our API use? And I get back: all traffic sent to our API is secured using TLS 1.2 encryption while in transit and AES-256 encryption while at rest. Great. So now I just keep this GPT up all the time, and if I'm asked things on customer calls and I don't know the answers, I can quickly ask the GPT and sound really smart on the phone.

So where do we go from here? What does the future hold? Our ultimate goal is to create AGI, or artificial general intelligence: autonomous systems that perform work as well as or better than humans. Which means we need our models to know what a human knows and to be able to interact with the world the way a human does. That means they need to be able to see, hear, speak, and have memory, and who even knows what else. We joke internally and say they need to be able to smell and taste as well. We'll figure those out. But in the meantime, we have released a lot of new functionality just over the last couple of weeks. The first is speech.
So you can now talk to ChatGPT: you can open up the app on your iPhone and have a conversation with GPT without having to type anything in. Similarly, we have our new text-to-speech model, so you can get output as voice, with six different voices to choose from, and they sound very natural. Using both of those modalities, text-to-speech and Whisper, you can now actually have a conversation with ChatGPT. This is really great if you're, say, driving and don't want to take out your phone; you can just use the voice modalities.

We also introduced vision. I think this is probably the coolest, most underestimated thing we've released in a while. We have it in ChatGPT, and it'll be coming to the API soon. I'm so excited to see the creativity of the world as people start to get access to GPT-4V. This essentially gives GPT-4 eyes. It's not just analyzing an image, it's not just OCR; it's reasoning about the image it's seeing. It's making sense of the image. In this example, we give it a baseball diamond and we say, explain to me what these different positions are, and also, how does this game work? And we say, hey, put it in a table format while you're at it. So you can see this is leveraging vision, with GPT-4 reasoning about the image in front of it, plus advanced data analysis to create a table output, while also explaining to me how the game works. You can imagine so many potential use cases with vision. Some of the interesting things I've heard recently: insurance companies want to use this so you can take a picture of a car accident and it will auto-describe the accident and draft the claim. Retailers want to use this to say, upload a dress and accessorize it for me. You can imagine any sort of marketplace where you're uploading images of houses or anything you're selling, and it can automatically describe them for you.
From a support perspective, you could just take a screenshot and share it, and then GPT-4V could analyze everything happening in the screenshot to try to debug an issue, without having to go back and forth with a user and ask them a million questions about what they're seeing on their screen. So this is a really fascinating new modality. I'm really excited about it and excited to see what people build with it.

And then finally, we've got DALL·E 3. We released our image model last summer, and this is the next version of DALL·E. You can see it's a lot better at hands and faces, which I know it was notoriously not great at. It can do words. So we're showcasing some of those features here. And what's really cool, if you haven't tried it yet in ChatGPT, is that we're leveraging GPT-4 to help with the prompt. Before, with DALL·E, you had to think, okay, I have to describe an image in a certain way; I don't really understand what I want to see; I don't know the different ways to talk about artwork and photorealism. So what we do now in ChatGPT is, if you put in something like generate an image of a woman making a heart with her hands, GPT-4 will create four different prompts, so you'll get four different images that are all slightly different. We're actually leveraging GPT-4 to improve the inputs for the DALL·E outputs.

And then finally, we've released Browse with Bing. I know it's really frustrating and annoying when you ask ChatGPT a question and it says, I'm sorry, my training cutoff is... It used to be August 2021, and it's a little more recent now, but still, you would prefer for it to have access to real-time data and give you real-time results. Now you can do that by leveraging Browse with Bing. It will essentially browse the internet for you, come back, and give you information. And this is really cool when paired with advanced data analysis, which is what we're showing in this example.
You can say, show me the top ski resorts in Vermont besides Killington, give me a quick summary, and create a funny nickname while you're at it. You can see it goes and searches the web, finds all of the resorts, creates summaries, gives each a funny nickname, and puts it all in a nice table format.

All right, so I love this quote. Sam said this exactly 10 days ago, standing up on the stage right here: this will all look quaint a year from now. We're just getting started. Thank you.