Today we are going to discuss the topic of machine learning and its power. After this session, students will be able to explain what machine learning is, how learning is done from data, and why machine learning is powerful. Where does machine learning come from? Machine learning is a branch of artificial intelligence which asks the machine to do things that, at present, human beings are better at doing. When we look at the definition of machine learning, there are a number of definitions, but the most appropriate ones are the following. Machine learning is programming computers to optimize a performance criterion using example data or past experience; that is, from data or from past experience, the program generates the presently required result. This was given by Ethem Alpaydin. The next definition goes as follows: the goal of machine learning is to develop methods that automatically detect patterns in data, and then to use these uncovered patterns to predict future data or other outcomes of interest. This was given by Murphy. Bishop suggests that the field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms, and with the use of these regularities to take actions. Therefore, machine learning involves undiscovered patterns, past experience and algorithms, which finally come together to generate results. Let us now think for a while and answer the question: what is desired from machine learning? Machine learning has a dramatic impact because the way software is designed is different; it has to keep pace with the changes happening in the business environment. It uses data, and this data drives the business rules and the logic used in a particular environment. It allows continuous learning to happen from time to time, and from that data we predict the future.
It involves a set of algorithms, and these algorithms are converted into models by training them with data; we then try to improve the processes used to obtain the final results and gain insights from the patterns available within the data. It is not an individual effort but a team working together, consisting of data scientists, data engineers, business analysts and business leaders who want their business to run smoothly. The main focus is on solving a business problem. Looking at the background, machine learning, artificial intelligence and cognitive computing are the buzzwords of the day; these are emerging advances in technology in which analytics provides a competitive advantage and gets you the best results. They involve new strategies, and all these strategies are supposed to follow the data. We use models which are trained with the data available, and finally we predict a particular outcome when the model is used for testing. The solutions that are developed are constantly updated, and we use the most appropriate and constantly changing data sources, so that this randomization and constant change give you better and more consistent results with good accuracy. We enable a system to learn from the data available through various sources rather than writing an explicit program to produce a particular result. The process to be followed in machine learning is this: a number of algorithms are available that can iteratively learn from data. We select some of these algorithms to form our model, and with this model, which acts as a template, we can describe the data and predict outcomes.
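The idea of training an algorithm into a model, as described above, can be sketched in a few lines. This is only an illustrative example, not part of the lecture: a straight line y = w*x + b is fitted to made-up example data by iterative gradient descent, and the resulting pair (w, b) then acts as the template for new inputs.

```python
# Illustrative sketch: "training an algorithm into a model".
# Fit a straight line y = w*x + b to example data by iterative gradient descent.

def train_linear_model(xs, ys, lr=0.05, epochs=2000):
    """Iteratively adjust w and b to reduce the mean squared error on the data."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The "trained model" (w, b) now acts as a template for new inputs.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]          # made-up data following y = 2x
w, b = train_linear_model(xs, ys)
prediction = w * 6 + b          # predict for a new, unseen input
```

The trained parameters are the model: once learning is finished, predicting for a new input is just an application of the template, which is the training/testing separation the lecture describes.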
The algorithms ingest the training data and are fitted to it to form a template, to which the test data will then be given. This model, created from one or more algorithms together with the data they were trained on, is the output generated when the algorithm is completely trained and the template is formed. We now provide this model with test input and generate a learned output for a new situation. Some examples to remember when studying machine learning: the first is prediction, where we may use one of the many algorithms available for prediction. We prepare a predictive model which is trained with the data. Out of the total data we decide upon some amount, for example 30 or 40 percent, for training, and the remaining data is used for testing. The prediction received on the test data is based on the data already trained into the model. The next example is an e-commerce site, where we use recommendation algorithms for the purchase of a particular product. The recommendation model, which we create in a second step, is based on the browsing history and the purchasing data of a particular customer. We then give recommendations on the test data from the model created in the training phase. The insights developed during machine learning are these: it follows iterative learning from the data, so by going over the data again and again, visiting it for unvisited or new patterns, we arrive at the best possible result. This enables the model to train on the data set before being deployed. The models continuously adapt and keep changing as new data is ingested into the training system. Improvements are seen in the kinds of associations found between the data elements.
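The holdout idea in the prediction example above can be sketched as follows. This is a minimal illustration with an invented toy dataset, not the lecturer's code: the labelled data is shuffled, one portion is kept for training and the remainder for testing, and a simple 1-nearest-neighbour rule stands in for the predictive model.

```python
import random

# Illustrative sketch of the holdout split described in the lecture.
# The dataset and the 1-nearest-neighbour predictor are invented for demonstration.

def train_test_split(data, train_fraction, seed=42):
    """Shuffle the data and split it into a training and a testing portion."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def predict(train, x):
    """1-nearest-neighbour: return the label of the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Toy labelled data: values below 10 are "low", the rest "high".
data = [(v, "low" if v < 10 else "high") for v in range(20)]
train, test = train_test_split(data, train_fraction=0.7)

# Accuracy of predictions on the held-out test data, as the lecture describes:
# the prediction on test data is based only on the data already trained into the model.
correct = sum(predict(train, x) == y for x, y in test)
accuracy = correct / len(test)
```

The same shape applies to the e-commerce example: the recommendation model is fitted on past browsing and purchase records, and its quality is judged on customers it has not seen during training.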
Models can be used in real time, so that with randomization and recent data you generate more accurate results and good learning happens from your data. Complex algorithms can be automated, because automation copes with the rapid change in the variables, and as we know, randomness and rapid change generate the best results. There is improvement in accuracy, because the training process is randomized and the automation is adjusted to the particular environment. The algorithms continuously try to refine these models by processing new data. We may call machine learning old wine in a new bottle, because if you refer back to artificial intelligence, you see that a self-learning program for checkers was created by Arthur Samuel back in 1959. This thrust area has once again come into importance in recent years because of the focus on distributed computing models and the cheaper computing and storage that are now available, so we have gone back to AI and ML to solve our problems. The basic requirements for a machine learning environment are that the data has to be accurate, the data has to be meaningful in context, and it has to be cleaned. For example, you may require numeric data but have text data, so the text data has to be converted to numeric data, and the data has to be refined into the form required as input to a particular environment. Sometimes we may use hybrid clouds where a large amount of data is available, and this should be handled properly, because clouds allow you to reduce cost, improve security and increase the performance of your environment. I have used the following references. Thank you.
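As an addendum to the cleaning step mentioned above: an algorithm that expects numeric input cannot consume text categories directly, so each distinct text value can be mapped to an integer code. The encoder below is a small sketch, and the "size" column is invented for illustration.

```python
# Sketch of converting a text column to numeric codes, as the lecture requires.

def encode_text_column(values):
    """Map each distinct text value to a small integer, in order of first appearance."""
    mapping = {}
    encoded = []
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping)
        encoded.append(mapping[v])
    return encoded, mapping

# Invented "size" column of text data.
sizes = ["small", "large", "medium", "small", "large"]
codes, mapping = encode_text_column(sizes)
# codes   -> [0, 1, 2, 0, 1]
# mapping -> {'small': 0, 'large': 1, 'medium': 2}
```

The mapping is kept alongside the model, so the same text values are converted to the same numbers when new data arrives at test time.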