Today we are going to discuss the machine learning cycle. At the end of the session, the student will be able to demonstrate the steps to be followed in developing a machine learning application.

We have seen that operating a machine learning algorithm is an iterative process, different from the software development life cycle. Here we concentrate on changing data: the requirements we attach to a particular problem evolve while the problem is being implemented, as the data is analyzed and as competitors emerge in the business scenario. We keep our model fresh with the latest data when it goes into production, so that the latest changes are reflected. The same level of training that was needed when we built the model may not be what is required at later instants, and hence there is no guarantee that the system is self-sufficient at all times.

Let us ask ourselves a question: what steps are to be followed in a machine learning cycle? The first step is identifying the data. The data should come from a relevant, reliable source, and it should be expandable so that we can accommodate recent changes. Second, we prepare the data: we clean it, remove unwanted and redundant records, attend to its security, and govern it so that we never rely on inaccurate data. Third, we select a machine learning algorithm based on the particular application; for crime detection we might choose a face recognition algorithm, whereas for predicting the failure of a machine we might choose a regression algorithm. Fourth, we train on 60% to 80% of the data set that was input to the environment, and the algorithm learns from this portion to create the model. Fifth, we evaluate the model to find the best-performing algorithm and to confirm that it produces the expected result, thus increasing its accuracy. Sixth, we deploy the model, often to the cloud to save cost, or on premises where the application demands it. Seventh, we predict with the model, using the test data and the new data coming into the system. Finally, we assess and validate these predictions to improve the accuracy of the system.

The first component of machine learning is the algorithm, which makes machine learning operational. It must be composed as a program that computers can understand. A variety of machine learning algorithms can be selected, and they take different forms when they are combined; for example, you may use association rule mining and then optimize the generated rules with a genetic algorithm, the combination of the two producing a model that gives an optimized result. With machine learning, the data itself creates the model, because we learn from that data. The more data that is added, the more sophisticated the algorithm becomes: more randomization, more unlabeled and unstructured records, and all of these attributes ultimately contribute to more accurate results.
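To make the cycle concrete, here is a minimal end-to-end sketch in Python. The choice of scikit-learn, logistic regression, and a synthetic data set is purely illustrative; the cycle itself does not prescribe any particular library or model, so treat this as a sketch of the workflow rather than a definitive implementation.

```python
# A minimal sketch of the machine learning cycle described above.
# scikit-learn, logistic regression, and the synthetic data set are
# illustrative assumptions, not prescribed by the cycle itself.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Steps 1-2: identify and prepare the data (a synthetic stand-in here).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Step 4: train on 80% of the data, holding out the rest for evaluation
# (the lecture mentions 60% to 80% as typical training fractions).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=42
)

# Step 3: select an algorithm suited to the application, then train it.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Step 5: evaluate the model to confirm it meets the expected accuracy.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Steps 6-8: after deployment, predict on new incoming data and assess
# those predictions so the model can be retrained with fresh data.
incoming = X_test[:5]  # stand-in for new data arriving in production
print("predictions:", model.predict(incoming))
```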
Now let us look at the types of machine learning algorithms that are available. Since the business challenges we face will differ, we may choose different algorithms for the same challenge and compare the results. Understanding the different classes of machine learning algorithms also helps us identify the best type for a given problem, whether it uses supervised, unsupervised, or reinforcement learning, neural networks, or deep learning.

The first class is Bayesian algorithms. A Bayesian algorithm allows data scientists to encode prior beliefs about what the model should look like and how much independence there is among the attributes of the data set. It is useful when you do not have a massive amount of data with which to confidently train a model. For example, to detect a criminal we may have to rely on past data, because very little data is available, and at some instants data may not be available at all. Prior knowledge of some part of the model can then be coded directly, which leads us toward our goal.

The second class is clustering algorithms, also called unsupervised learning algorithms, in which objects with similar parameters are grouped together to form a cluster. Objects in a cluster are more similar to each other than to objects in other clusters, so a notion of similarity comes into the picture. The learning is unsupervised because the data is not labeled; when the volume of data is very large, we may first cluster it and only later apply a supervised algorithm to classify it. The algorithm interprets the parameters that make up each item and groups the items accordingly, simplifying the process before we move on to classification.

The third class is the decision tree, which uses a branching structure to illustrate the results of a decision: it maps the possible outcomes of the decision to the reasons why the decision was taken. Each node of a decision tree represents a possible outcome that is justified and supported, and percentages are assigned to the nodes based on the likelihood of that outcome occurring. This may be repeated at several levels until we reach the goal.

The fourth class is dimensionality reduction. Here we look at a data source from various angles, called dimensions; the attributes of the data contribute to the dimensionality of the object. Dimensionality reduction helps the system remove data that is not useful for analysis, so that we concentrate on the data that leads to a good prediction. It is especially helpful when analyzing sensor data, which arrives in huge volumes and is unlabeled. Reducing the dimensions improves the performance of a machine learning system on such large, randomized data, and it also helps analysts visualize the data involved in the system.

The fifth class is instance-based machine learning, which is used when you want to categorize a new data point based on its similarity to the training data; in semantic analysis and sentiment analysis we prefer this kind of algorithm. There is no training phase beforehand, and hence algorithms with this orientation are called lazy learners: a simple match is sought between the new data and the training set. This new data is sometimes called the testing data, which is matched against the template of the trained model. Instance-based learning is not well suited to data sets with random variations, since these sometimes involve irrelevant data or data with missing values; the reasoning then becomes non-monotonic, and the algorithm has to handle these gaps. It is, however, very useful in pattern recognition, where we detect a pattern that is relevant to our result. Brief sketches of these algorithm families follow below.
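To make the Bayesian, clustering, decision tree, and dimensionality reduction families concrete, here is a hedged sketch; the use of scikit-learn and the classic Iris data set is an illustrative assumption, not part of the lecture.

```python
# Illustrative sketches of four of the algorithm families above, using
# scikit-learn and the Iris data (both assumptions, not from the lecture).
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Bayesian: naive Bayes encodes a simple prior/independence assumption
# and can be trained with confidence even on small data sets.
bayes = GaussianNB().fit(X, y)

# Clustering (unsupervised): group similar objects without using labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Decision tree: a branching structure; predict_proba exposes the
# likelihood percentages assigned at the nodes.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict_proba(X[:1]))

# Dimensionality reduction: keep only the two most informative
# directions, which also makes the data easy to visualize.
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape)  # (150, 2)
```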
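The instance-based (lazy learner) family can be sketched just as briefly; the specific classifier and data set below are again illustrative assumptions.

```python
# A k-nearest-neighbours classifier as a lazy learner: "training" merely
# stores the examples, and each new point is matched against them.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# The real work happens only now, at query time, when each test point is
# compared with its nearest stored neighbours.
print("test accuracy:", knn.score(X_test, y_test))
```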
The final class comprises neural networks and deep learning. Neural networks mimic the way a human brain approaches problems, and deep learning algorithms are collections of the best neural network techniques for generating results in the form of the best possible prediction. A neural network uses layers of interconnected units and learns to infer the relationships between the items of the different layers. It is able to adjust and learn as the data changes, and therefore it supports changes in a business environment. These networks, and deep learning algorithms in particular, are often used when the data is unlabeled and unstructured, very large in volume in the manner of big data, and highly variable in its veracity; they interpret this unstructured data to produce near-relevant results in real time for decision making. That brings us to the end of the session; one final illustrative sketch follows below. Thank you.
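As a closing illustration, here is a hedged sketch of a small feed-forward neural network. The multilayer perceptron, scikit-learn, and the toy digits data set are assumptions made purely for illustration, standing in for the much larger deep learning systems described above.

```python
# A small neural network with layers of interconnected units; a stand-in
# sketch, not a full deep learning system.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers; the network adjusts its weights as it sees data,
# learning relationships between the units of successive layers.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```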