Today we are going to study multiple linear regression. What is multiple linear regression? It is the case where we have more than two independent variables; this is called multiple linear regression because we study multiple variables. First, a quick comparison with simple linear regression: in simple linear regression we have a single dependent variable and a single independent variable. In multiple linear regression we still have a single dependent variable, but two or more independent variables, and we study them simultaneously. Multiple linear regression is a statistical analysis method used to examine the relationship between a dependent variable and two or more independent variables. How many variables are there? The model is used for prediction of a single dependent variable, given the values of k-1 explanatory or independent variables. The dependent variable is also called the response variable, and the explanatory variables we also call independent variables. Now check this k-1 in the model: y_i is the dependent variable, beta 1 is a constant, then plus beta 2 x2i, plus beta 3 x3i, plus and so on up to beta k xki, plus epsilon, the error e_i. This is the error term.
Then you have an error term added, and you have a statistical model. Now, here you have k-1 explanatory variables. You have k-1 explanatory, meaning independent, variables, yet the last subscript is k. What happened? The variables run from x2 up to xk, so even though the notation goes up to xk, the total number of variables is k-1, because x1 does not appear. If we had started from x1 instead, writing beta 2 x1, beta 3 x2, and so on, then the last term would be beta k with x(k-1), and again there would be k-1 variables. So this was just a small point I wanted to introduce: don't get confused that we say k-1 variables while the index runs up to k. Now, this is the model. What are we saying? y_i is the dependent variable, the response; beta 1, beta 2, beta 3, up to beta k are the constants or regression coefficients; x2, x3, up to xk are the independent variables; and e_i is the error. Here y_i is the dependent variable, beta 1 is the intercept, and beta 2, beta 3, and so on up to beta k are the coefficients of the independent variables. So we have: the coefficients of the independent variables, the independent variables themselves x2, x3, up to xk, and e_i, the error term. With this we have a complete regression model. Each coefficient represents the change in the dependent variable for a one-unit increase in the corresponding independent variable, holding all other variables constant. This line is very important: the coefficient tells you how much change comes in the dependent variable for a one-unit increase in the corresponding independent variable. A one-unit increase means, say, the value of x2 goes up by 1; then the prediction becomes beta 1 plus beta 2 more than before, while the rest of the variables are held constant: we are not touching them, we only increased one variable by one unit.
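The one-unit interpretation above can be sketched in a few lines of code. This is not from the lecture; it is a minimal sketch with made-up coefficient values (beta 1 = 2.0, beta 2 = 0.5, beta 3 = 1.5) and two predictors, just to show that raising x2 by one unit while holding x3 fixed moves the prediction by exactly beta 2.

```python
# Sketch of the one-unit-increase interpretation in multiple linear
# regression. The coefficient values here are made up for illustration.

def predict(x2, x3, beta1=2.0, beta2=0.5, beta3=1.5):
    """y-hat = beta1 + beta2*x2 + beta3*x3 (two-predictor toy model)."""
    return beta1 + beta2 * x2 + beta3 * x3

# Holding x3 constant, a one-unit increase in x2 changes y-hat by beta2.
base = predict(x2=4.0, x3=10.0)
plus_one = predict(x2=5.0, x3=10.0)
print(plus_one - base)  # 0.5, which is exactly beta2
```

Setting both predictors to zero returns just beta 1, which matches the intercept discussion that follows.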
A one-unit increase means that even when x2 takes the value 1, the response still exists; with a one-unit increase, the predicted value of the dependent variable y_i changes accordingly. And let me tell you something more: suppose you hold x2, x3, and so on at zero. Then y_i equals beta 1. Think of it like this: an unemployed person has no income, but he still has to eat something, wear clothes, and so on. That expenditure still has some value, and that value is the beta 1, the intercept. To put it simply: even if we don't use any of the variables, expenditure still happens, even when no income is coming in. That remaining amount is your beta 1. So if we hold the rest of the variables constant, some value still exists; and if we make a one-unit change in x2, we can predict what change comes in the model. This is the general picture of how we write this model. The term linear refers to the fact that the mean is a linear function of the unknown parameters; the betas are the unknown parameters. With n independent observations on y — n is our sample size, we have taken a random sample of size n — and the associated values of x2, x3, up to xk, the complete model becomes: first, y1 = beta 1 + beta 2 x21 + beta 3 x31 + and so on. What did we have previously? Previously we wrote y_i with i varying over 1, 2, up to n; so the first equation of the model is for y1.
Similarly y2, and if you want the jth or ith value, it exists in the same way: y_j is written according to the model. And last is y_n, the model for the last observation in the sample. So the complete model is in front of you. Now where are we taking these equations? We are taking them into matrix notation: how can we write this in matrix form? Look: y1, y2, up to y_n form a whole vector. Basically, if you look, this is a vector, one complete vector y1, y2, up to y_n. What is its dimension? n into 1: n rows, 1 column. Then with beta 1 we wrote 1, 1, up to 1, the constant column of ones. Next what did we take? x21, x22, up to x2n, and similarly for each variable, so the x's form a whole matrix. Now look at the notation: the model is written in terms of matrices. y is the vector, and the sign for a vector we take in small letters; a matrix is written with a capital letter — I told you this in the previous lecture too. So since y is a vector, we take small y; and X is capital, because X is a matrix. What are its dimensions? n rows, k columns, n into k. Next, the coefficients beta 1, beta 2, up to beta k: we made them into a column. How many dimensions? k into 1. And last, what comes? The error: plus e1, e2, up to e_n, with dimension n into 1. So we have written this in matrix notation; this is the vector form. Which model will we use now? Basically this one; according to it we will work out the solution. Now, y equals capital X times beta, plus e.
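The matrix form y = X beta + e leads directly to the least-squares solution beta-hat = (X'X)^(-1) X'y, which the lecture builds toward. This is not the lecture's worked example — it is a minimal plain-Python sketch on made-up toy data, where y is generated exactly as 1 + 2*x2 + 3*x3 so the recovered coefficients are easy to check.

```python
# Sketch: solving the normal equations (X'X) beta = X'y in plain Python.
# Toy data; y is built exactly from beta = (1, 2, 3), no noise.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Design matrix X: column of 1s (intercept), then x2 and x3. Dimension n x k.
X = [[1, 1, 2], [1, 2, 1], [1, 3, 4], [1, 4, 3], [1, 5, 5]]
# y generated as 1 + 2*x2 + 3*x3, so beta-hat should recover (1, 2, 3).
y = [[1 + 2 * r[1] + 3 * r[2]] for r in X]

Xt = transpose(X)
XtX = matmul(Xt, X)                        # k x k
Xty = [row[0] for row in matmul(Xt, y)]    # k x 1
beta_hat = solve(XtX, Xty)
print([round(b, 6) for b in beta_hat])
```

In practice one would call a library routine (e.g. `numpy.linalg.lstsq`) rather than hand-rolling the elimination; the point here is only to make the matrix dimensions from the lecture concrete.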
Now, whenever you have a model, the model also has some assumptions. Here e — e is the error — satisfies: the expected value of the error equals 0, and the covariance of the error, which equals the expected value of e e', equals sigma squared times the identity. These are the assumptions, and according to these assumptions we will solve the model. Basically, you know what the variance of e is. The general definition is: variance of e equals the expected value of e squared minus the square of the expected value of e. And you know that this second factor equals 0, so the expected value of e squared equals the variance of e. Similarly, since we are working in matrix form, the square is written as e e', and its expectation equals sigma squared times I, where I is the identity matrix. The identity matrix here means no correlation exists: the off-diagonal entries are zero, so the error terms are not correlated with one another, and each has the same variance sigma squared. And sometimes the covariance equals sigma squared V. Why do we use V? Because it is written in capital form: V is an n into n positive definite matrix. If the covariance equals this, we call the regression model the generalized multiple regression model. It means the covariance matrix is no longer the identity: correlation can exist between the error terms, and that is why V appears there in place of I.
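The variance identity used above can be shown numerically. This is a small illustration of my own, not from the lecture: on a toy residual vector whose mean is zero, the full formula Var(e) = E(e^2) - (E(e))^2 collapses to just E(e^2), exactly as the zero-mean assumption implies.

```python
# Sketch: with E(e) = 0, the variance formula Var(e) = E(e^2) - (E(e))^2
# reduces to Var(e) = E(e^2). Toy residuals chosen so their sum is zero.

e = [2.0, -1.0, -3.0, 1.0, 1.0]   # sums to 0, like residuals with intercept

mean_e = sum(e) / len(e)
var_full = sum(x * x for x in e) / len(e) - mean_e ** 2   # E(e^2) - (E e)^2
var_short = sum(x * x for x in e) / len(e)                # E(e^2) alone

print(mean_e, var_full, var_short)  # 0.0 3.2 3.2
```

Both formulas give the same number precisely because the mean term vanishes.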
When does this case arise? It arises when our error terms are not independent. If the error terms are independent, our covariance is sigma squared I; if the error terms are not independent, then we use the covariance sigma squared V. So when the error terms are not independent, this is called the generalized multiple regression model — not the ordinary model, the generalized multiple regression model. Now, multiple linear regression can be used for prediction. Yes, we can predict with multiple linear regression, just as we can in simple regression, and prediction is basically within-data prediction: we do not predict outside the range of the data, we predict within it. Estimation and testing of hypotheses: these are very important things, and this is where multiple regression gets used. Multiple regression is used in a variety of fields: it is used in economics, in finance, in marketing, in social science, and it is used a great deal in health science. Now let me give you a short example of multiple linear regression: the status of a person. What goes into status? Salary is the dependent variable; from the salary we judge the status. After that, in the independent variables we took age, we took experience, and we checked the qualification. Now see what salary depends on. Salary basically depends on age: if age is more, you know experience also becomes more, and then salary also increases a little. Qualification is very important; qualification is a factor for us. We can take qualification as quantitative, and we can also check qualification as qualitative. So that is how salary behaves.
Salary is basically the dependent variable, and the status of the person depends on the different independent variables. So this is an example of multiple linear regression. Like simple regression, multiple linear regression basically has some assumptions, and if the assumptions are not checked and satisfied, we cannot use the multiple model. The first assumption of linear regression is linearity. Linearity is very necessary. Mind you, the data can sometimes also be curvilinear; we can still use it, but for that we have to apply some transformation. So the first requirement is linearity, and for linearity the rule of thumb says the sample size should be a minimum of 20. If the sample is smaller than 20, linearity will not hold well, so for multiple regression a sample of at least 20 is definitely needed. Next we have to check normality. Normality is very necessary. You can check normality with a goodness-of-fit test, for instance the Kolmogorov-Smirnov test, and we also check normality with different plots, for example by drawing a histogram and overlaying a normal curve. Then come homoscedasticity and absence of multicollinearity. Homoscedasticity means the error variances should be equal. And multicollinearity: we check multicollinearity with the VIF method, the variance inflation factor, and we can also check autocorrelation. If the VIF equals 1, it means the variables are independent, not correlated. If the VIF is 5,
then there is moderate multicollinearity: the independent variables have high correlation between them. If the VIF is higher still, it means the multicollinearity is high, and we basically have to reduce that correlation first, before fitting the model. So after checking all these assumptions, if they hold, we can use the multiple linear regression model; if not, the multiple linear regression model should not be applied. These are the basic multiple linear regression concepts.
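The VIF check described above can be sketched very simply for the two-predictor case, where the identity VIF = 1 / (1 - R^2) reduces to using the squared correlation between the two predictors. The data below are made up to echo the lecture's salary example, with age and experience rising almost in lockstep; the numbers are illustrative, not from the lecture.

```python
# Sketch: variance inflation factor via VIF = 1 / (1 - R^2).
# With only two predictors, R^2 is the squared correlation between them.

def corr(a, b):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def vif_two_predictors(x2, x3):
    r2 = corr(x2, x3) ** 2
    return 1.0 / (1.0 - r2)

# Made-up data: experience tracks age closely, as in the salary example.
age        = [25, 30, 35, 40, 45, 50]
experience = [2, 6, 11, 15, 21, 25]

print(round(vif_two_predictors(age, experience), 2))
# A value far above 5 flags strong multicollinearity between the two.
```

With more than two predictors, R^2 for each variable comes from regressing it on all the others; in practice one would use a library routine such as statsmodels' `variance_inflation_factor` rather than this hand-rolled version.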