The next library is TensorFlow. This is not just a library; it is an ecosystem in itself. Among Python libraries, as with pandas and the others we have discussed, some are limited in scope and some offer a great many features, and TensorFlow is one of the latter: the libraries, functions, and features within it are quite rich, and it is used extensively in machine learning environments, which makes it important for researchers. Whether we talk about health sciences, telecom, or banking, or about demographic and social studies, the biggest challenge everywhere is the handling of data: collecting it for a given problem statement or objective, then processing and analyzing it. And keep in mind that "researcher" here means more than the person sitting in a lab. Social analysts and data analysts are also doing some form of research, and all of this is considered research.

As I said, the TensorFlow ecosystem is very flexible and very rich. It has its own libraries, functions, and features, just as an SQL database, as you know, has its own functions and features. It has its own tools and libraries, and its community is very strong. We have said from the very beginning that the strength of Python is its community, and the same is true for TensorFlow: even though it sits within the Python ecosystem, the community's contribution to it is remarkable and has been very important to its success.

TensorFlow has two parts, and I will try to explain and share with you what they are and how they work. A tensor is basically the structure of the data: the way the data is laid out and read is called a tensor, and when you handle that data, that is called the flow. Its name, TensorFlow, comes from these two ideas.

Here we have a complete data setup, essentially a matrix. At this point we are not really concerned with what the data is; it could be anything. Some cells have data and the rest are blank, empty, or null. We don't even know what the purpose of these dots is; as far as I understand, they are null, meaning there is no value in them. But if we look at it from a bigger perspective, or expand it, it may be that in some other dimensions we have more information available. This is important to understand: the small piece I am showing you, or some column within the data, may not be representing the complete picture. That is why it matters where this data came from and how it got into your system; as a data scientist you should have its complete history and its complete architecture. For your education, though, this is how data will be represented, and the flow is exactly the data pipeline: you have data coming in, you apply one function to it, then another function, and then you get your result. That is basically the TensorFlow library.

You will understand these things only when you use them practically, once you start your learning and start coding in Python. TensorFlow may not be the first library you use; you will likely begin with NumPy, SciPy, and some others, or Matplotlib or Seaborn. As your understanding of TensorFlow develops, you will be able to appreciate how many useful things it offers and where you can use them. And again the same thing comes up: data modeling, how you build the statistical model and how you
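The tensor-and-flow idea above can be sketched in a few lines. This is a minimal illustration, assuming TensorFlow is installed (`pip install tensorflow`); the matrix values, including the null cell, are made up for demonstration:

```python
import numpy as np
import tensorflow as tf

# A tensor is structured data: here a 2x3 matrix with one empty/null cell,
# represented as NaN, like the dots in the matrix shown in the lecture.
data = tf.constant([[1.0, 2.0, np.nan],
                    [4.0, 5.0, 6.0]])
print(data.shape)            # (2, 3)
print(tf.math.is_nan(data))  # marks which cell holds no value

# The "flow" is a chain of operations applied to the tensor,
# exactly like a data pipeline: input -> function -> function -> result.
filled = tf.where(tf.math.is_nan(data), tf.zeros_like(data), data)  # replace nulls with 0
scaled = filled * 2.0                                               # element-wise transform
result = tf.reduce_sum(scaled)                                      # aggregate to one value
print(result.numpy())        # 36.0
```

Replacing nulls with zero here is only one possible choice; as the lecture stresses, the right treatment of missing cells depends on where the data came from and what the column actually represents.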
have to train it: training data, test data, and then the actual production data. This brings us back to the same basic question, one you will always have to ask yourself as a data scientist, and I will discuss it again and again: what is your entire data pipeline, and why are you building it? This is applicable in all your projects.
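The training/test distinction mentioned above can be sketched with a plain NumPy split (not a TensorFlow-specific API; the dataset here is synthetic and the 80/20 ratio is just a common convention):

```python
import numpy as np

# Hypothetical dataset: 100 samples, 4 features each, with binary labels.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# Shuffle once, then hold out ~20% as test data.
# Production data would arrive later and is never part of this split.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(X_train.shape, X_test.shape)  # (80, 4) (20, 4)
```

The model is fit only on the training portion and evaluated on the held-out portion; how well that evaluation predicts behavior on real production data is exactly the pipeline question the lecture keeps returning to.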