Now AI enables us to build amazing software that can improve healthcare, help people overcome physical disadvantages, power smart infrastructure, create incredible entertainment experiences, and even help save the planet. So what is AI, or Artificial Intelligence? AI is the creation of software that imitates human behaviors and capabilities. Key workloads include the following.

Machine learning. This is often the foundation for an AI system; it is the way we teach a computer model to make predictions and draw conclusions from data.

Anomaly detection. The capability to automatically detect errors or unusual activity in a system.

Computer vision. The capability of software to interpret the world visually through cameras, video and images.

Natural language processing. The capability for a computer to interpret written or spoken language and respond in kind.

Knowledge mining. The capability to extract information from large volumes of often unstructured data to create a searchable knowledge store.

Thanks for watching. I will see you in the next lecture.

Understand machine learning. Machine learning is the foundation for most AI solutions. Let's start by looking at a real-world example of how machine learning can be used to solve a difficult problem. Sustainable farming techniques are essential to maximize food production while protecting a fragile environment. The Yield, an agricultural technology company based in Australia, uses sensors, data and machine learning to help farmers make informed decisions related to weather, soil and plant conditions.

So how does machine learning work? How do machines learn? The answer is: from data. In today's world we create huge volumes of data as we go about our everyday lives. From the text messages, emails and social media posts we send to the photographs and videos we take on our phones, we generate a massive amount of information.
More data still is created by millions of sensors in our homes, cars, cities, public transport infrastructure and factories. Data scientists can use all of that data to train machine learning models that make predictions and inferences based on the relationships they find in the data.

For example, suppose an environmental conservation organization wants volunteers to identify and catalog different species of wildflower using a phone app. First, a team of botanists and scientists collects wildflower samples. Second, the team labels the samples with the correct species. The labeled data is processed using an algorithm that finds relationships between the features of the samples and the labeled species. The results of the algorithm are encapsulated in a model. When a new sample is found by volunteers, the model can identify the correct species label. Those are, in simple steps, how machine learning works with the help of different teams.

Now, machine learning in Microsoft Azure. Microsoft Azure provides the Azure Machine Learning service, a cloud-based platform for creating, managing and publishing machine learning models. Azure Machine Learning provides the following features and capabilities.

First, let's have a look at the Automated Machine Learning feature. It enables non-experts to quickly create an effective machine learning model from data; someone who is not an expert can quickly build an effective model from the data they have.

Another feature is the Azure Machine Learning Designer: a graphical interface enabling no-code development of machine learning solutions.

We have Data and Compute Management: cloud-based data storage and compute resources that professional data scientists can use to run data experiment code at scale.
We have the Pipelines feature: data scientists, software engineers and IT operations professionals can define pipelines to orchestrate model training, deployment and management tasks, which makes things faster. Alright, thanks for watching.

Understand anomaly detection. Imagine you are creating a software system to monitor credit card transactions and detect unusual usage patterns that might indicate fraud. Or an application that tracks activity in an automated production line and identifies failures. Or a racing car telemetry system that uses sensors to proactively warn engineers about potential mechanical failures before they happen. These kinds of scenarios can be addressed by using anomaly detection: a machine learning-based technique that analyzes data over time and identifies unusual changes.

So first, sensors in the car collect telemetry such as engine revolutions, brake temperatures and so on. An anomaly detection model is trained to understand expected fluctuations in the telemetry measurements over time. If a measurement occurs outside of the normal expected range, the model reports an anomaly that can be used to alert the race engineer to call the driver in for a pit stop to fix the issue before it forces retirement from the race.

Now, anomaly detection in Microsoft Azure. In Microsoft Azure, the Anomaly Detector service provides an application programming interface (API) that developers can use to create anomaly detection solutions. We will just have a quick overview of it here, no worries. Alright, I will see you in the next lecture.

Understand computer vision. Computer vision is an area of AI that deals with visual processing. Let's explore some of the possibilities that computer vision brings. The Seeing AI app is a great example of the power of computer vision, designed for the blind and low vision community.
The Seeing AI app harnesses the power of AI to open up the visual world and describe nearby people, text and objects.

Computer vision models and capabilities. Most computer vision solutions are based on machine learning models that can be applied to visual input from cameras, videos or images. The following slides describe common computer vision tasks, so let's have a look.

First, image classification. Image classification involves training a machine learning model to classify images based on their contents. For example, in a traffic monitoring solution, you might use an image classification model to classify images based on the type of vehicle they contain, such as taxis, buses, cyclists and so on. You can see from the image that this one has been classified as a taxi, because it has a yellow or orange color as well as the sign on top of it.

Object detection. Object detection machine learning models are trained to classify individual objects within an image and identify their location with a bounding box. For example, a traffic monitoring solution might use object detection to identify the location of different classes of vehicle. Here you can see what the model has detected: this is a car, this is a bus, here is another bus, and this is a cyclist.

Semantic segmentation. This is an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong. For example, a traffic monitoring solution might overlay traffic images with a mask layer to highlight different vehicles using specific colors. Here you can see the bus has been masked in red, the car in blue and the cyclist in green, and the other bus above has been masked in red as well.

Image analysis.
You can create solutions that combine a machine learning model with advanced image analysis techniques to extract information from images, including tags that could help catalog the image, or even descriptive captions that summarize the scene shown in the image. Here you can see that this image has been described as a person with a dog on a street.

Now we have face detection, analysis and recognition. Face detection is a specialized form of object detection that locates human faces in an image. This can be combined with classification and facial geometry analysis techniques to recognize individuals based on their facial features. You can see that the faces of all of these people have been detected, each marked with a yellow square.

Now we have Optical Character Recognition (OCR), which is very popular. Optical character recognition is a technique used to detect and read text in images. You can use OCR to read text in photographs, for example road signs or storefronts, or to extract information from scanned documents such as letters, invoices or forms. In this image, the text has been detected, as you can see: the Toronto-Dominion Bank.

Computer vision services in Microsoft Azure. Microsoft Azure provides the following cognitive services to help you create computer vision solutions.

The first service is Computer Vision. You can use this service to analyze images and videos, and extract descriptions, tags, objects and text.

We have another service called Custom Vision. This service is used to train custom image classification and object detection models using your own images.

We have Face. The Face service enables you to build face detection and facial recognition solutions.

We have Form Recognizer. Use this service to extract information from scanned forms and invoices.
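These services are ultimately called over REST. As a rough sketch, not a definitive client, the following shows how a request to the Computer Vision Analyze Image operation (v3.2) is shaped. The resource endpoint, key and image URL below are placeholders, and actually sending the request (with urllib or the Azure SDK) is left out so the example stays self-contained.

```python
def build_analyze_request(endpoint, key, image_url,
                          features=("Description", "Tags")):
    """Shape of a Computer Vision 'Analyze Image' (v3.2) REST call."""
    return {
        "method": "POST",
        "url": f"{endpoint}/vision/v3.2/analyze?visualFeatures={','.join(features)}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # the resource's API key
            "Content-Type": "application/json",
        },
        "body": {"url": image_url},            # image to analyze, by URL
    }

# Placeholder values for illustration only.
request = build_analyze_request(
    "https://my-vision-resource.cognitiveservices.azure.com",
    "<api-key>",
    "https://example.com/street-scene.jpg",
)
```

The response (not shown) is JSON containing the requested features, such as a caption under "description" and a list of "tags".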
Now, we will have a quick overview of all of these services, no worries. But for now, follow me. I will see you in the next lecture.

Understand Natural Language Processing (NLP). Natural language processing (NLP) is the area of AI that deals with creating software that understands written and spoken language. NLP enables you to create software that can analyze and interpret text in documents, emails, messages and other sources; interpret spoken language and synthesize speech responses; automatically translate spoken or written phrases between languages; and interpret commands and determine appropriate actions. For example, Starship Commander is a virtual reality (VR) game from Human Interact that takes place in a science fiction world. The game uses natural language processing to enable players to control the narrative and interact with in-game characters and starship systems.

Now, natural language processing (NLP) in Microsoft Azure. In Microsoft Azure you can use the following cognitive services to build NLP solutions.

The first service is Language. This service is used to access features for understanding and analyzing text, training language models that can understand spoken or text-based commands, and building intelligent applications.

We have Translator. Use this service to translate text between more than 60 languages.

We have Speech. Use this service to recognize and synthesize speech, and to translate spoken language.

We have Azure Bot. This service provides a platform for conversational AI: the capability of a software agent to participate in a conversation. Developers can use the Bot Framework to create a bot and manage it with Azure Bot Service, integrating with back-end services like Language and connecting to channels for web chat, email, Microsoft Teams and others.

So these are the services: Language, Translator, Speech and Azure Bot. Thanks for watching.

Understand Knowledge Mining.
Knowledge mining is the term used to describe solutions that involve extracting information from large volumes of often unstructured data to create a searchable knowledge store.

Now, knowledge mining in Microsoft Azure. One of these knowledge mining solutions is Azure Cognitive Search, a private enterprise search solution that has tools for building indexes. The indexes can then be used for internal use only, or to enable searchable content on public-facing internet assets. Azure Cognitive Search can utilize the built-in AI capabilities of Azure Cognitive Services, such as image processing, content extraction and natural language processing, to perform knowledge mining of documents. The product's AI capabilities make it possible to index previously unsearchable documents and to quickly surface insights from large amounts of data. Thanks for watching.

Challenges and risks with AI. Artificial Intelligence is a powerful tool that can be used to greatly benefit the world. However, like any tool, it must be used responsibly. The following table shows some of the potential challenges and risks facing an AI application developer. Let's have a look.

Here you can see that bias can affect results. For example, a loan approval model discriminates by gender due to bias in the data with which it was trained.

Another challenge: errors may cause harm. An autonomous vehicle experiences a system failure and causes a collision.

Another challenge: data could be exposed. A medical diagnostic bot is trained using sensitive patient data, which is stored insecurely.

Another challenge: solutions may not work for everyone. A home automation assistant provides no audio output for visually impaired users.

Another challenge: users must trust a complex system. An AI-based financial tool makes investment recommendations; what are they based on?

Another challenge: who is liable for AI-driven decisions?
An innocent person is convicted of a crime based on evidence from facial recognition. Who is responsible in that case? Thanks for watching.

Understand responsible AI. At Microsoft, AI software development is guided by a set of six principles designed to ensure that AI applications provide amazing solutions to difficult problems without any unintended negative consequences.

First, fairness. AI systems should treat all people fairly. For example, suppose you create a machine learning model to support a loan approval application for a bank. The model should predict whether the loan should be approved or denied without bias. This bias could be based on gender, ethnicity or other factors that result in an unfair advantage or disadvantage to specific groups of applicants. Azure Machine Learning includes the ability to interpret models and quantify the extent to which each feature of the data influences the model's predictions. This capability helps data scientists and developers identify and mitigate bias in a model. Another example is Microsoft's implementation of responsible AI with the Face service, which retires facial recognition capabilities that can be used to try to infer emotional states and identity attributes. These capabilities, if misused, can subject people to stereotyping, discrimination or unfair denial of services.

Reliability and safety. AI systems should perform reliably and safely. For example, consider an AI-based software system for an autonomous vehicle, or a machine learning model that diagnoses patient symptoms and recommends prescriptions. Unreliability in these kinds of systems can result in substantial risk to human life. AI-based software applications must be subjected to rigorous testing and deployment management processes to ensure that they work as expected before release.

We have privacy and security. AI systems should be secure and respect privacy.
The machine learning models on which AI systems are based rely on large amounts of data, which may contain personal details that must be kept private. Even after the models are trained and the system is in production, privacy and security still need to be considered. As the system uses new data to make predictions or take action, both the data and the decisions made from the data may be subject to privacy or security concerns.

Inclusiveness. AI systems should empower everyone and engage people. AI should bring benefits to all parts of society, regardless of physical ability, gender, sexual orientation, ethnicity or other factors.

Transparency. AI systems should be understandable. Users should be made fully aware of the purpose of the system, how it works and what limitations may be expected.

We have accountability. People should be accountable for AI systems. Designers and developers of AI-based solutions should work within a framework of governance and organizational principles that ensures the solution meets ethical and legal standards that are clearly defined.

These principles of responsible AI can help you understand some of the challenges facing developers as they try to create ethical AI solutions. Thanks for watching.

Suppose you want to help your team understand the latest artificial intelligence (AI) innovations in the news. Your team would like to evaluate the opportunities these innovations support and understand what is done to keep AI advancement ethical. You share with your team that today, stable AI models are regularly brought into production and used commercially around the world. For example, Microsoft's existing Azure AI services have been handling the needs of businesses for many years to date. Then, in 2022, OpenAI, an AI research company, created a chatbot known as ChatGPT and an image generation application known as DALL-E.
These technologies were built with AI models which can take natural language input from a user and return a machine-created, human-like response. You share with your team that the Azure OpenAI Service enables users to build enterprise-grade solutions with OpenAI models. With Azure OpenAI, users can summarize text, get code suggestions, generate images for a website, and much more. This module dives into these capabilities.

Now, the capabilities of OpenAI models. There are several categories of capabilities found in OpenAI models; three of these are generating natural language, generating code and generating images. Generating natural language includes, for example, summarizing complex text for different reading levels, suggesting alternative wording for sentences, and much more. Generating code includes translating code from one programming language into another, identifying and troubleshooting bugs in code, and much more. Generating images includes generating images for applications from text descriptions, and much more. Thanks for watching.

What is generative AI? OpenAI makes its AI models available for developers to build powerful software applications, such as ChatGPT. There are tons of other examples of OpenAI applications on the OpenAI site, ranging from the practical, such as generating text from code, to the purely entertaining, such as making up scary stories. Let's identify where OpenAI models fit into the AI landscape.

Artificial intelligence imitates human behavior by relying on machines to learn and execute tasks without explicit directions on what to output. Machine learning models take in data, like weather conditions, and fit the data to an algorithm to make predictions, like how much money a store might make in a given day. Deep learning models use layers of algorithms in the form of artificial neural networks to return results for more complex use cases. Many Azure AI services are built on deep learning models.
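The "fit the data to an algorithm" idea described above can be sketched in a few lines. This is a minimal illustration, not an Azure service: the temperatures and revenues are invented numbers, and the "algorithm" here is an ordinary least-squares line fit.

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

temps = [10, 15, 20, 25, 30]          # daily temperature (made-up data)
revenue = [200, 250, 300, 350, 400]   # store revenue that day (made-up data)
slope, intercept = fit_line(temps, revenue)

def predict(temp):
    """Predict revenue for a new day from its temperature."""
    return slope * temp + intercept
```

Training finds the parameters (slope and intercept) from labeled data; prediction then applies them to new inputs, which is the same train-then-predict pattern the wildflower example followed earlier.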
You can check out this article to learn more about the difference between machine learning and deep learning. Generative AI models can produce new content based on what is described in the input, and the OpenAI models are a collection of generative AI models that can produce language, code and images. Thanks for watching.

Describe Azure OpenAI. Microsoft has partnered with OpenAI to deliver on three main goals: to utilize Azure's infrastructure, including security, compliance and regional availability, to help users build enterprise-grade applications; to deploy OpenAI model capabilities across Microsoft products, including and beyond Azure AI products; and to use Azure to power all of OpenAI's workloads.

Now, let's have an introduction to the Azure OpenAI Service. Azure OpenAI Service is a result of the partnership between Microsoft and OpenAI. The service combines Azure's enterprise-grade capabilities with OpenAI's generative AI model capabilities. Azure OpenAI is available for Azure users and consists of four components: pre-trained generative AI models; customization capabilities, the ability to fine-tune AI models with your own data; built-in tools to detect and mitigate harmful use cases, so users can implement AI responsibly; and enterprise-grade security with role-based access control (RBAC) and private networks.

Using Azure OpenAI allows you to transition between your work with Azure services and OpenAI, while utilizing Azure's private networking, regional availability and responsible AI content filtering.

Now, understand Azure OpenAI workloads. Azure OpenAI supports many common AI workloads and solves for some new ones. Common AI workloads include machine learning, computer vision, natural language processing, conversational AI, anomaly detection and knowledge mining. Other AI workloads Azure OpenAI supports can be categorized by the tasks they support, like generating natural language, meaning text completion: generating and editing text; and embeddings: searching, classifying and comparing text.
And we have generating code, which means generating, editing and explaining code; and generating images, that is, generating and editing images.

Now, Azure OpenAI's relationship to Azure AI services. You can see here the two sides of the Microsoft AI portfolio: business users and citizen developers on one side, and developers and data scientists on the other. For the first part, the business users and citizen developers, we have Microsoft 365, Dynamics 365, Edge, Microsoft Bing, Windows and Xbox as the applications, and Power BI, Power Apps, Power Automate and Power Virtual Agents as the Power Platform. On the other side, for the developers and data scientists, Azure AI comprises the Applied AI Services, the Cognitive Services and the machine learning (ML) platform. For the Applied AI Services, we have Bot Service, Cognitive Search, Form Recognizer, Video Indexer, Metrics Advisor and Immersive Reader. For the Cognitive Services, we have Vision, Speech, Language, Decision and the OpenAI Service. For the ML platform, we have Azure Machine Learning.

Azure AI services are tools for solving AI workloads and can be categorized into three groups: the Azure Machine Learning platform, Cognitive Services and Applied AI Services. Azure Cognitive Services has five pillars: Vision, Speech, Language, Decision and the Azure OpenAI Service. The services you choose to use depend on what you need to accomplish. In particular, there are overlapping capabilities between the Cognitive Services Language service and the OpenAI Service, such as translation, sentiment analysis and keyword extraction. While there is no strict guidance on when to use a particular service, Azure's existing Language service can be used for widely known use cases that require minimal tuning (the process of optimizing a model's performance). Azure OpenAI Service may be more beneficial for use cases that require highly customized generative models, or for exploratory research.
Now, when making business decisions about what type of model to use, it's important to understand how time and compute needs factor into machine learning training. In order to produce an effective machine learning model, the model needs to be trained with a substantial amount of clean data. The "learning" portion of training requires a computer to identify an algorithm that best fits the data. The complexity of the task the model needs to solve for, and the desired level of model performance, all factor into the time required to run through possible solutions for a best-fit algorithm. Thanks for watching.

How to use Azure OpenAI. Currently, you need to apply for access to Azure OpenAI. Once granted access, you can use the service by creating an Azure OpenAI resource, like you would for any other Azure service. Once the resource is created, you can use the service through the REST APIs, the Python SDK, or the web-based interface in the Azure OpenAI Studio. Here you can see an image of the Azure OpenAI Studio, shown as a preview. In the Azure OpenAI Studio, you can build AI models and deploy them for public consumption in software applications.

Azure OpenAI's capabilities are made possible by specific generative AI models. Different models are optimized for different tasks: some models excel at summarization and providing general, unstructured responses, and others are built to generate code or unique images from text input. These Azure OpenAI models fall into a few main families: GPT-4, GPT-3, Codex, Embeddings and DALL-E. Azure OpenAI models can all be trained and customized with fine-tuning. We will not go into custom models here, but you can learn more in the Azure OpenAI fine-tuning documentation. No worries, we will handle that later on.

Now we have the playgrounds. In Azure OpenAI Studio, you can experiment with OpenAI models in the playgrounds.
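The REST route mentioned above can be sketched as follows. This is a hedged sketch of how an Azure OpenAI completions request is shaped, not a full client: the resource endpoint, deployment name and key below are placeholders, api-version values change over time, and the code only builds the request rather than sending it.

```python
def build_completion_request(endpoint, deployment, api_key, prompt,
                             api_version="2022-12-01", max_tokens=100):
    """Shape of an Azure OpenAI 'completions' REST call."""
    return {
        "method": "POST",
        "url": (f"{endpoint}/openai/deployments/{deployment}"
                f"/completions?api-version={api_version}"),
        "headers": {"api-key": api_key, "Content-Type": "application/json"},
        "body": {"prompt": prompt, "max_tokens": max_tokens},
    }

# Placeholder values for illustration only.
request = build_completion_request(
    "https://my-openai-resource.openai.azure.com",
    "my-gpt35-deployment",
    "<api-key>",
    "Summarize the following text: ...",
)
```

Note that, unlike the public OpenAI API, Azure OpenAI addresses a model through the deployment name you chose when deploying it in your resource.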
In the Completions playground, you can type in prompts, configure parameters and see responses without having to code. Here you can see an example of it. In the Chat playground, you can use the assistant setup to instruct the model about how it should behave. The assistant will try to mimic the responses you include, in the tone, rules and format you have defined in your system message. As you can see here, this is an example of it. Alright, thanks for watching.

Understand OpenAI's natural language capabilities. Azure OpenAI's natural language models are able to take in natural language and generate responses. Natural language models are trained on words or chunks of characters known as tokens. For example, the word "hamburger" gets broken up into the tokens "ham", "bur" and "ger", while a short and common word like "pear" is a single token. These tokens are mapped into vectors for a machine learning model to use for training. When a trained natural language model takes in a user's input, it also breaks down the input into tokens.

Now, understanding the GPT models for natural language generation. Generative pre-trained transformer (GPT) models are excellent at both understanding and creating natural language. If you have seen recent news around AI answering questions or writing a paragraph based on a prompt, it likely could have been generated by a GPT model. GPT models often have the version appended to the end, such as GPT-3 or GPT-4. Azure OpenAI offers preview access to ChatGPT, powered by the GPT-3.5 Turbo model, and to GPT-4.

Now, what does a response from a GPT model look like? A key aspect of OpenAI's generative AI is that it takes an input, or prompt, and returns a natural language, visual or code response. GPT tries to infer, or guess, the context of the user's question based on the prompt. GPT models are great at completing several natural language tasks, some of which include the following. So here it is: you can see the first task, summarizing text.
The prompt would be: Summarize this text into a short blurb. The next task is classifying text; the prompt would be: What genre of book is this? The next task is generating names or phrases; the prompt: Write a tagline for my flower company. The task of translation; the prompt: Translate "How are you?" into French. The task of answering questions; the prompt: What does Azure OpenAI do? The task of suggesting content; the prompt: Give me the five best winning songs.

So, for example, given a prompt where the user types in text asking for a recipe, it would be like this: Write a recipe based on these ingredients and instructions. Fruit pie. Ingredients: strawberries, blueberries, flour, eggs, milk, and so on. You can see that here.

Now here is an example of the response generated by the OpenAI API. The instructions would be: 1. Preheat your oven to 350 degrees F. 2. In a medium bowl, mix together one cup of flour and one egg until it forms a dough. 3. Grease a 9-inch pie dish with butter or nonstick cooking spray. 4. Roll out the dough and place it in the pie dish. 5. In a separate bowl, mix together one cup of strawberries, one cup of blueberries, and one cup of milk. 6. Pour the fruit mixture into the pie dish. And finally, bake it. Delicious!

It is important to understand that the generated responses are best guesses from a machine. In this case, the generated text may be useful for cooking something that tastes good in real life, or not.

Now, how are models applied to new use cases? You may have tried out ChatGPT's predictive capabilities in a chat portal, where you can type prompts and receive automated responses. The portal consists of the front-end user interface (UI) users see, and a back end that includes a generative AI model. The combination of the front and back end can be described as a chatbot. The model provided on the back end is also available as building blocks in both the OpenAI API and the Azure OpenAI API.
So you can utilize ChatGPT's capabilities on Azure OpenAI via the GPT-3.5 Turbo model. When you see generative AI capabilities in other applications, developers have taken the building blocks, customized them to a use case, and built them into the back end of new front-end user interfaces. Thanks for watching.

Understand OpenAI code generation capabilities. Code generation AI models are able to take natural language or code snippets and translate them into code. The OpenAI code generation model family, Codex, is proficient in over a dozen languages, such as C#, JavaScript, Perl and PHP, and is most capable in Python.

Codex. Codex models are based on GPT-3 and are optimized to understand and write code. These models have been trained on both natural language and billions of lines of code from public repositories. Codex is able to generate code from natural language instructions, such as code comments, and can suggest ways to complete code functions. For example, given the prompt "write a for loop counting from 1 to 10 in Python", it provides an answer like the following: a for loop over i in range(1, 11) that then prints i.

So code generation models can help developers code faster, understand new coding languages and focus on solving bigger problems in their applications. Developers can break down their goal into simpler tasks and use Codex to help build out those tasks using known patterns.

Examples of code generation. Part of the training data for GPT-3 included programming languages, so it is no surprise that GPT models can answer programming questions if asked. What is unique about the Codex model family is that it is more capable across more languages than GPT models. Codex goes beyond just writing code from natural language prompts: given existing code, it can generate unit tests, like this.
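The two Codex examples narrated in this lecture, the loop that counts from 1 to 10 and the multiply function with its generated unit tests, can be reconstructed as runnable Python. The specific test values below follow the narration (12, 0, 0, and the negative cases); treat them as illustrative.

```python
# Prompt: "write a for loop counting from 1 to 10 in Python".
def count_one_to_ten():
    numbers = []
    for i in range(1, 11):   # range's end bound is exclusive, hence 11
        numbers.append(i)
    return numbers

# A function multiplying two numbers, plus the kind of unit tests
# a code generation model can produce for it.
def multiply_numbers(a, b):
    return a * b

def test_multiply_numbers():
    assert multiply_numbers(3, 4) == 12
    assert multiply_numbers(0, 10) == 0
    assert multiply_numbers(10, 0) == 0

def test_multiply_numbers_negative():
    assert multiply_numbers(-3, 4) == -12
    assert multiply_numbers(-3, -4) == 12
```

If all the assertions pass, the function behaves correctly for positive, zero and negative inputs, which is exactly what the narration walks through next.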
So here you can see a Python function that takes numbers a and b and returns a multiplied by b, together with the unit tests generated for it. You can see the test for multiplying numbers: it asserts that the results equal 12, 0 and 0, which means the function is correct. And the same for multiplying negative numbers: even with negative inputs it should work correctly. So this is the code together with its unit tests.

Codex can also summarize functions that are already written, explain SQL queries or tables, and convert a function from one programming language into another. When interacting with Codex models, you can specify libraries or language-specific tags to make clear to Codex what we want. For example, we can provide a prompt formatted as an HTML comment, like "build a page titled Let's learn about AI", and get an HTML result back.

Now, GitHub Copilot. OpenAI partnered with GitHub to create GitHub Copilot, which they call an AI pair programmer. GitHub Copilot integrates the power of OpenAI Codex into a plugin for developer environments like Visual Studio Code. Once the plugin is installed and enabled, you can start writing your code, and GitHub Copilot starts automatically suggesting the remainder of the function based on code comments or the function name. For example, with only a function name in the file, the gray text to complete it is suggested automatically. Here is an example of GitHub Copilot: you can see a function with some values inside it, and the suggested completion returns a result. This is just an example.

Now, GitHub Copilot offers multiple suggestions for code completion, which you can tab through using a keyboard shortcut. When given informative code comments, it can even suggest a function name along with the completed function code. Thanks for watching.

Understand OpenAI image generation capabilities.
Image generation models can take a prompt, a base image, or both, and create something new. These generative AI models can create both realistic and artistic images, change the layout or style of an image, and create variations on a provided image.

DALL-E. In addition to natural language capabilities, generative AI models can edit and create images. The model that works with images is called DALL-E. Much like the GPT models, subsequent versions of DALL-E are appended onto the name, such as DALL-E 2. Image capabilities generally fall into the three categories of image creation, editing an image, and creating variations of an image.

Image generation. Now, original images can be generated by providing a text prompt of what you would like the image to be of; the more detailed the prompt, the more likely the model will provide the desired result. With DALL-E you can even request an image in a particular style, such as "a dog in the style of Vincent van Gogh". Styles can be used for edits and variations as well. For example, given the prompt "an elephant standing with a burger on top, style digital art", the model generates digital art images depicting exactly what was asked for. Now here you can see this is the image: here is the elephant, above it is a burger, and here is another such image as well.

Now, when asked for something more generic, like "pink fox", the images generated are more varied and simpler while still fulfilling what was asked for. This is one of the examples, as you can see: a pink fox. However, when we make the prompt more specific, such as "pink fox running through a field in the style of Monet", the model creates much more similar, detailed images. So you can see this is another one: a fox running in a field.

Editing an image. Now, when provided an image, DALL-E can edit the image as requested by changing its style, adding or removing items, or generating new content to add. Edits are made by uploading the original image and specifying a transparent mask that indicates what area of the image to edit.
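As a toy sketch (this is not the DALL-E API itself, just an illustration of the mask concept): in a transparent mask, pixels with an alpha value of 0 mark the region the model is allowed to regenerate, while opaque pixels are kept as-is.

```python
# Toy illustration of a transparent edit mask.
# Each entry is an alpha value: 255 = opaque (keep), 0 = transparent (editable).
def editable_region(mask_alpha):
    """Return (row, col) coordinates where the mask is transparent."""
    return [(r, c)
            for r, row in enumerate(mask_alpha)
            for c, alpha in enumerate(row)
            if alpha == 0]

mask = [
    [255, 255, 255],
    [255,   0,   0],  # the model may repaint these two pixels
    [255, 255, 255],
]
print(editable_region(mask))  # → [(1, 1), (1, 2)]
```

In a real edit request, the mask is a full-size transparent PNG overlaying the image, but the principle is the same: only the transparent area is filled in from the prompt.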
Along with the image and mask, a prompt indicating what is to be edited instructs the model to generate appropriate content to fill the area. When given one of the above images of a pink fox, a mask covering the fox, and the prompt "a blue gorilla reading a book in a field", the model creates edits of the image based on what was provided. So here, let's have a look: you can see this is a gorilla reading a book in the field, and here are the edited images around it.

Now, image variations. Image variations can be created by providing an image and specifying how many variations of the image you would like. The general content of the image will stay the same, but aspects will be adjusted, such as where subjects are located or looking; the background, scene and colors may change. If I upload one of the images of the elephant wearing a burger as a hat, I get variations of the same subject. So let's have a look here: you can see that this is an elephant wearing a burger, and so on. All right, thanks for watching.

It's important to consider the ethical implications of working with AI systems. Azure AI provides powerful natural language models capable of completing various tasks and operating in several different use cases, each with their own considerations for safe and fair use. Teams or individuals tasked with developing and deploying AI systems should work to identify, measure and mitigate harm. Usage of Azure OpenAI should follow the six Microsoft AI principles. Fairness: AI systems shouldn't make decisions that discriminate against or support bias of a group or individual. Reliability and safety: AI systems should respond safely to new situations and potential manipulation. Privacy and security: AI systems should be secure and respect data privacy. Inclusiveness: AI systems should empower everyone and engage people. Accountability: people must be accountable for how AI systems operate. Transparency: AI systems should have explanations so users can understand how they are built.

So responsible AI principles guide Microsoft's transparency notes on Azure OpenAI, as well as explanations of other products. Transparency notes are intended to help you understand how Microsoft AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people and the environment.

Now, limited access to Azure OpenAI. As part of Microsoft's commitment to using AI responsibly, access to Azure OpenAI is currently limited. Customers that wish to use Azure OpenAI must submit a registration form, both for initial experimentation access and again for approval for use in production. Additional registration is required for customers who want to modify content filters or modify abuse monitoring. Thanks for watching.