Understanding Linear Regression

My husband and I were discussing the intuition behind OLS regression today, and I decided to share the materials I generated to break down covariance, correlation, and the linear regression equation. It may help to follow along in the Excel workbook where I did these calculations (see link).

First, let’s say that we have the following data:

Y   X1   X2
1   40   25
2   45   20
1   38   30
3   50   30
2   48   28
3   55   30
3   53   34
4   55   36
4   58   32
3   40   34
5   55   38
3   48   28
3   45   30
2   55   36
4   60   34
5   60   38
5   60   42
5   65   38
4   50   34
3   58   38

We plot the relationship between each X variable and Y to get a visual look at their relationships. I also like to look at the relationship in a 3D plot (this plot was made in RStudio with plotly; see link for tutorial):

//plot.ly/~allets/6.embed

To calculate the mean of each column, we sum the column and divide by the total number of observations (n = 20):

       Y      X1      X2
Sum    65     1038    655
N      20     20      20
Mean   3.25   51.9    32.75

Now we need the sum of squared deviations for each column (the building block of the variance and standard deviation). This is something most students do not find very intuitive unless you force them to calculate it step by step, either by hand or in Excel. Start by subtracting each column's mean from each observation in that column:

(Y - MeanY)           (X1 - MeanX1)         (X2 - MeanX2)
1 - 3.25 = -2.25      40 - 51.9 = -11.9     25 - 32.75 = -7.75
2 - 3.25 = -1.25      45 - 51.9 = -6.9      20 - 32.75 = -12.75
1 - 3.25 = -2.25      38 - 51.9 = -13.9     30 - 32.75 = -2.75
3 - 3.25 = -0.25      50 - 51.9 = -1.9      30 - 32.75 = -2.75
2 - 3.25 = -1.25      48 - 51.9 = -3.9      28 - 32.75 = -4.75
3 - 3.25 = -0.25      55 - 51.9 = 3.1       30 - 32.75 = -2.75
3 - 3.25 = -0.25      53 - 51.9 = 1.1       34 - 32.75 = 1.25
4 - 3.25 = 0.75       55 - 51.9 = 3.1       36 - 32.75 = 3.25
4 - 3.25 = 0.75       58 - 51.9 = 6.1       32 - 32.75 = -0.75
3 - 3.25 = -0.25      40 - 51.9 = -11.9     34 - 32.75 = 1.25
5 - 3.25 = 1.75       55 - 51.9 = 3.1       38 - 32.75 = 5.25
3 - 3.25 = -0.25      48 - 51.9 = -3.9      28 - 32.75 = -4.75
3 - 3.25 = -0.25      45 - 51.9 = -6.9      30 - 32.75 = -2.75
2 - 3.25 = -1.25      55 - 51.9 = 3.1       36 - 32.75 = 3.25
4 - 3.25 = 0.75       60 - 51.9 = 8.1       34 - 32.75 = 1.25
5 - 3.25 = 1.75       60 - 51.9 = 8.1       38 - 32.75 = 5.25
5 - 3.25 = 1.75       60 - 51.9 = 8.1       42 - 32.75 = 9.25
5 - 3.25 = 1.75       65 - 51.9 = 13.1      38 - 32.75 = 5.25
4 - 3.25 = 0.75       50 - 51.9 = -1.9      34 - 32.75 = 1.25
3 - 3.25 = -0.25      58 - 51.9 = 6.1       38 - 32.75 = 5.25

Then you will square each value in each column:

(Y - MeanY)^2        (X1 - MeanX1)^2      (X2 - MeanX2)^2
(-2.25)^2 = 5.0625   (-11.9)^2 = 141.61   (-7.75)^2 = 60.0625
(-1.25)^2 = 1.5625   (-6.9)^2 = 47.61     (-12.75)^2 = 162.5625
(-2.25)^2 = 5.0625   (-13.9)^2 = 193.21   (-2.75)^2 = 7.5625
(-0.25)^2 = 0.0625   (-1.9)^2 = 3.61      (-2.75)^2 = 7.5625
(-1.25)^2 = 1.5625   (-3.9)^2 = 15.21     (-4.75)^2 = 22.5625
(-0.25)^2 = 0.0625   3.1^2 = 9.61         (-2.75)^2 = 7.5625
(-0.25)^2 = 0.0625   1.1^2 = 1.21         1.25^2 = 1.5625
0.75^2 = 0.5625      3.1^2 = 9.61         3.25^2 = 10.5625
0.75^2 = 0.5625      6.1^2 = 37.21        (-0.75)^2 = 0.5625
(-0.25)^2 = 0.0625   (-11.9)^2 = 141.61   1.25^2 = 1.5625
1.75^2 = 3.0625      3.1^2 = 9.61         5.25^2 = 27.5625
(-0.25)^2 = 0.0625   (-3.9)^2 = 15.21     (-4.75)^2 = 22.5625
(-0.25)^2 = 0.0625   (-6.9)^2 = 47.61     (-2.75)^2 = 7.5625
(-1.25)^2 = 1.5625   3.1^2 = 9.61         3.25^2 = 10.5625
0.75^2 = 0.5625      8.1^2 = 65.61        1.25^2 = 1.5625
1.75^2 = 3.0625      8.1^2 = 65.61        5.25^2 = 27.5625
1.75^2 = 3.0625      8.1^2 = 65.61        9.25^2 = 85.5625
1.75^2 = 3.0625      13.1^2 = 171.61      5.25^2 = 27.5625
0.75^2 = 0.5625      (-1.9)^2 = 3.61      1.25^2 = 1.5625
(-0.25)^2 = 0.0625   6.1^2 = 37.21        5.25^2 = 27.5625

If you sum up each column of squared values, you get the sum of squared deviations (SS) for each variable. (Dividing by n - 1 would give the variance, and the square root of that is the standard deviation, but the sums themselves are all we need for the regression formulas below.)

     SS(Y)   SS(X1)   SS(X2)
Sum  29.75   1091.8   521.75
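If you'd like to check these hand calculations programmatically, here is a minimal Python sketch (not part of the original Excel workbook; the helper names `mean` and `ss` are my own) that reproduces the means and sums of squared deviations:

```python
# Data from the table above
Y  = [1, 2, 1, 3, 2, 3, 3, 4, 4, 3, 5, 3, 3, 2, 4, 5, 5, 5, 4, 3]
X1 = [40, 45, 38, 50, 48, 55, 53, 55, 58, 40, 55, 48, 45, 55,
      60, 60, 60, 65, 50, 58]
X2 = [25, 20, 30, 30, 28, 30, 34, 36, 32, 34, 38, 28, 30, 36,
      34, 38, 42, 38, 34, 38]

def mean(v):
    return sum(v) / len(v)

def ss(v):
    # Sum of squared deviations from the mean
    m = mean(v)
    return sum((x - m) ** 2 for x in v)

print(mean(Y), mean(X1), mean(X2))  # 3.25 51.9 32.75
print(round(ss(Y), 2), round(ss(X1), 2), round(ss(X2), 2))  # 29.75 1091.8 521.75
```

The rounding is only there to tidy up floating-point noise; the values match the table exactly.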

In addition to calculating the mean and sum of squares of Y, X1, and X2, you will also need to calculate the relationships between Y, X1, and X2: first multiply the variables together pairwise, and then repeat each of the steps that we did above:

X1*Y          X2*Y          X1*X2
1*40 = 40     1*25 = 25     25*40 = 1000
2*45 = 90     2*20 = 40     20*45 = 900
1*38 = 38     1*30 = 30     30*38 = 1140
3*50 = 150    3*30 = 90     30*50 = 1500
2*48 = 96     2*28 = 56     28*48 = 1344
3*55 = 165    3*30 = 90     30*55 = 1650
3*53 = 159    3*34 = 102    34*53 = 1802
4*55 = 220    4*36 = 144    36*55 = 1980
4*58 = 232    4*32 = 128    32*58 = 1856
3*40 = 120    3*34 = 102    34*40 = 1360
5*55 = 275    5*38 = 190    38*55 = 2090
3*48 = 144    3*28 = 84     28*48 = 1344
3*45 = 135    3*30 = 90     30*45 = 1350
2*55 = 110    2*36 = 72     36*55 = 1980
4*60 = 240    4*34 = 136    34*60 = 2040
5*60 = 300    5*38 = 190    38*60 = 2280
5*60 = 300    5*42 = 210    42*60 = 2520
5*65 = 325    5*38 = 190    38*65 = 2470
4*50 = 200    4*34 = 136    34*50 = 1700
3*58 = 174    3*38 = 114    38*58 = 2204

Again, (1) sum each column and (2) divide by the total number of observations (n=20) to get the mean.

       X1*Y     X2*Y     X1*X2
Sum    3513     2219     34510
N      20       20       20
Mean   175.65   110.95   1725.5

(3) In a separate table, subtract the respective mean for each column from each row value:

(X1*Y) - Mean(X1*Y)      (X2*Y) - Mean(X2*Y)      (X1*X2) - Mean(X1*X2)
40 - 175.65 = -135.65    25 - 110.95 = -85.95     1000 - 1725.5 = -725.5
90 - 175.65 = -85.65     40 - 110.95 = -70.95     900 - 1725.5 = -825.5
38 - 175.65 = -137.65    30 - 110.95 = -80.95     1140 - 1725.5 = -585.5
150 - 175.65 = -25.65    90 - 110.95 = -20.95     1500 - 1725.5 = -225.5
96 - 175.65 = -79.65     56 - 110.95 = -54.95     1344 - 1725.5 = -381.5
165 - 175.65 = -10.65    90 - 110.95 = -20.95     1650 - 1725.5 = -75.5
159 - 175.65 = -16.65    102 - 110.95 = -8.95     1802 - 1725.5 = 76.5
220 - 175.65 = 44.35     144 - 110.95 = 33.05     1980 - 1725.5 = 254.5
232 - 175.65 = 56.35     128 - 110.95 = 17.05     1856 - 1725.5 = 130.5
120 - 175.65 = -55.65    102 - 110.95 = -8.95     1360 - 1725.5 = -365.5
275 - 175.65 = 99.35     190 - 110.95 = 79.05     2090 - 1725.5 = 364.5
144 - 175.65 = -31.65    84 - 110.95 = -26.95     1344 - 1725.5 = -381.5
135 - 175.65 = -40.65    90 - 110.95 = -20.95     1350 - 1725.5 = -375.5
110 - 175.65 = -65.65    72 - 110.95 = -38.95     1980 - 1725.5 = 254.5
240 - 175.65 = 64.35     136 - 110.95 = 25.05     2040 - 1725.5 = 314.5
300 - 175.65 = 124.35    190 - 110.95 = 79.05     2280 - 1725.5 = 554.5
300 - 175.65 = 124.35    210 - 110.95 = 99.05     2520 - 1725.5 = 794.5
325 - 175.65 = 149.35    190 - 110.95 = 79.05     2470 - 1725.5 = 744.5
200 - 175.65 = 24.35     136 - 110.95 = 25.05     1700 - 1725.5 = -25.5
174 - 175.65 = -1.65     114 - 110.95 = 3.05      2204 - 1725.5 = 478.5

(4) Square each of those values:

((X1*Y) - Mean(X1*Y))^2      ((X2*Y) - Mean(X2*Y))^2      ((X1*X2) - Mean(X1*X2))^2
(-135.65)^2 = 18400.9225     (-85.95)^2 = 7387.4025       (-725.5)^2 = 526350.25
(-85.65)^2 = 7335.9225       (-70.95)^2 = 5033.9025       (-825.5)^2 = 681450.25
(-137.65)^2 = 18947.5225     (-80.95)^2 = 6552.9025       (-585.5)^2 = 342810.25
(-25.65)^2 = 657.9225        (-20.95)^2 = 438.9025        (-225.5)^2 = 50850.25
(-79.65)^2 = 6344.1225       (-54.95)^2 = 3019.5025       (-381.5)^2 = 145542.25
(-10.65)^2 = 113.4225        (-20.95)^2 = 438.9025        (-75.5)^2 = 5700.25
(-16.65)^2 = 277.2225        (-8.95)^2 = 80.1025          76.5^2 = 5852.25
44.35^2 = 1966.9225          33.05^2 = 1092.3025          254.5^2 = 64770.25
56.35^2 = 3175.3225          17.05^2 = 290.7025           130.5^2 = 17030.25
(-55.65)^2 = 3096.9225       (-8.95)^2 = 80.1025          (-365.5)^2 = 133590.25
99.35^2 = 9870.4225          79.05^2 = 6248.9025          364.5^2 = 132860.25
(-31.65)^2 = 1001.7225       (-26.95)^2 = 726.3025        (-381.5)^2 = 145542.25
(-40.65)^2 = 1652.4225       (-20.95)^2 = 438.9025        (-375.5)^2 = 141000.25
(-65.65)^2 = 4309.9225       (-38.95)^2 = 1517.1025       254.5^2 = 64770.25
64.35^2 = 4140.9225          25.05^2 = 627.5025           314.5^2 = 98910.25
124.35^2 = 15462.9225        79.05^2 = 6248.9025          554.5^2 = 307470.25
124.35^2 = 15462.9225        99.05^2 = 9810.9025          794.5^2 = 631230.25
149.35^2 = 22305.4225        79.05^2 = 6248.9025          744.5^2 = 554280.25
24.35^2 = 592.9225           25.05^2 = 627.5025           (-25.5)^2 = 650.25
(-1.65)^2 = 2.7225           3.05^2 = 9.3025              478.5^2 = 228962.25

(5) Now sum up each column to get the sum of squares for each product:

     SS(X1*Y)   SS(X2*Y)   SS(X1*X2)
Sum  135118.6   56918.95   4279623

Putting it all together, you should get a table like this:

       Y       X1       X2       X1*Y       X2*Y       X1*X2
Sum    65      1038     655      3513       2219       34510
N      20      20       20       20         20         20
Mean   3.25    51.9     32.75    175.65     110.95     1725.5
SS     29.75   1091.8   521.75   135118.6   56918.95   4279623

Now we can derive the covariance between each pair of variables, as well as the correlation. The covariance numerator (the sum of cross-products) is

SP(X,Y) = Sum(X*Y) - (Sum(X)*Sum(Y))/n

and the correlation is

r(X,Y) = SP(X,Y)/SQRT(SS(X)*SS(Y))

(Strictly speaking, dividing SP by n - 1 gives the covariance and dividing SS by n - 1 gives the variance, but those factors cancel in the correlation, so the raw sums are all we need.) Your table should look like this:

       Y       X1       X2
Y      29.75   139.5    90.25
X1     0.77    1091.8   515.5
X2     0.72    0.68     521.75

Notice that the numbers on the diagonal are the sums of squares that we calculated. The numbers in the bottom triangle are the correlations, and the numbers in the top triangle are the sums of cross-products (the covariance numerators). Below, I show how I calculated the value in each cell (in Excel):

       Y                                  X1                                    X2
Y      29.75                              3513-(1038*65)/20 = 139.5             2219-(655*65)/20 = 90.25
X1     139.5/SQRT(1091.8*29.75) = 0.774   1091.8                                34510-(1038*655)/20 = 515.5
X2     90.25/SQRT(521.75*29.75) = 0.724   515.5/SQRT(1091.8*521.75) = 0.683     521.75
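The cross-product sums and correlations in the table above can also be checked in a few lines of Python (again, a sketch I wrote for this post rather than part of the workbook; `sp` is my name for the sum of cross-products):

```python
import math

# Data from the first table
Y  = [1, 2, 1, 3, 2, 3, 3, 4, 4, 3, 5, 3, 3, 2, 4, 5, 5, 5, 4, 3]
X1 = [40, 45, 38, 50, 48, 55, 53, 55, 58, 40, 55, 48, 45, 55,
      60, 60, 60, 65, 50, 58]
X2 = [25, 20, 30, 30, 28, 30, 34, 36, 32, 34, 38, 28, 30, 36,
      34, 38, 42, 38, 34, 38]

def sp(a, b):
    # Sum of cross-products: Sum(a*b) - Sum(a)*Sum(b)/n
    return sum(x * y for x, y in zip(a, b)) - sum(a) * sum(b) / len(a)

def corr(a, b):
    # SP divided by the square root of the product of the sums of squares
    return sp(a, b) / math.sqrt(sp(a, a) * sp(b, b))

print(sp(X1, Y), sp(X2, Y), sp(X1, X2))  # 139.5 90.25 515.5
print(round(corr(X1, Y), 3))             # 0.774
print(round(corr(X2, Y), 3))             # 0.724
print(round(corr(X1, X2), 3))            # 0.683
```

Note that `sp(v, v)` is just the sum of squared deviations, which is why one helper covers both the diagonal and off-diagonal cells.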

Now you can calculate the betas for X1 and X2 using these formulas:

b1 = (SS(X2)*SP(X1,Y) - SP(X1,X2)*SP(X2,Y)) / (SS(X1)*SS(X2) - SP(X1,X2)^2)
b2 = (SS(X1)*SP(X2,Y) - SP(X1,X2)*SP(X1,Y)) / (SS(X1)*SS(X2) - SP(X1,X2)^2)

Notice that each slope is adjusted for the predictor's overlap with the other predictor through the cross-product term. Plugging in each value, you should get the following:

b1   (521.75*139.5 - 515.5*90.25)/(1091.8*521.75 - 515.5^2) = 0.086
b2   (1091.8*90.25 - 515.5*139.5)/(1091.8*521.75 - 515.5^2) = 0.088

Remember that the formula for OLS regression is simply:

Y' = a + b1*X1 + b2*X2

Because the regression line passes through the means, a = MeanY - b1*MeanX1 - b2*MeanX2. Plugging in the values:

a = 3.25 - (0.086*51.9) - (0.088*32.75) = -4.104

(The value -4.104 comes from using the unrounded betas in Excel; with the rounded 0.086 and 0.088 you will get approximately -4.095.)

Now we have our regression equation:

Y' = -4.104 + 0.086*X1 + 0.088*X2

Now we can calculate our column for y-hat by plugging each X1 and X2 value into the equation. You should get a column that looks something like this:

Y'
1.54282
1.536857
1.80801
2.844918
2.496897
3.276963
3.454552
3.802573
3.711394
2.331235
3.977777
2.496897
2.412873
3.802573
4.059415
4.409822
4.760228
4.841867
3.195325
4.237004
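To verify the whole chain of calculations (slopes, intercept, and predictions), here is a short Python sketch of the two-predictor formulas; it is my own check on the numbers above, and the variable names are mine:

```python
# Data from the first table
Y  = [1, 2, 1, 3, 2, 3, 3, 4, 4, 3, 5, 3, 3, 2, 4, 5, 5, 5, 4, 3]
X1 = [40, 45, 38, 50, 48, 55, 53, 55, 58, 40, 55, 48, 45, 55,
      60, 60, 60, 65, 50, 58]
X2 = [25, 20, 30, 30, 28, 30, 34, 36, 32, 34, 38, 28, 30, 36,
      34, 38, 42, 38, 34, 38]
n = len(Y)

def sp(a, b):
    # Sum of cross-products (equals the sum of squares when a is b)
    return sum(x * y for x, y in zip(a, b)) - sum(a) * sum(b) / len(a)

# Two-predictor OLS slope formulas
denom = sp(X1, X1) * sp(X2, X2) - sp(X1, X2) ** 2
b1 = (sp(X2, X2) * sp(X1, Y) - sp(X1, X2) * sp(X2, Y)) / denom
b2 = (sp(X1, X1) * sp(X2, Y) - sp(X1, X2) * sp(X1, Y)) / denom

# The regression line passes through the means, which pins down the constant
a = sum(Y) / n - b1 * sum(X1) / n - b2 * sum(X2) / n

print(round(b1, 3), round(b2, 3), round(a, 3))  # 0.086 0.088 -4.104

# Predicted values (Y')
yhat = [a + b1 * x1 + b2 * x2 for x1, x2 in zip(X1, X2)]
print(round(yhat[0], 5))  # 1.54282
```

This reproduces the betas, the constant, and the first entry of the Y' column, all from the raw data.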

Now you can plot the actual Y versus the predicted Y (i.e., Y'). There you have it! Hopefully this breakdown provides a better intuition for the numbers behind the OLS regression formula!

More Teaching Resources for GLM

I finally got a chance to go through all of my old Stata materials that I made for a GLM class and prepare them for sharing with a broader audience. These files include tutorials on logistic regression, event history analysis, count variables, and ordered logistic/probit regression. I was pleased to find all of the supplementary materials that I made for some of these topics, which I also included. For example, I forgot that I created a document (see Supplementary_Material) that shows the user how to find a package in Stata if the standard "ssc install" command doesn't work. This document also includes an explanation of how to copy and paste Stata results into Excel, and then separate the data into columns (see ugly gif below for a preview of what I mean): You should be able to view all of the materials that I shared by going to my OSF profile. For a detailed explanation of what I include in my tutorial, see my previous post on OLS Regression in Stata.

OLS Tutorial in Stata Cont.

This post is an addendum to the OLS tutorial I posted two weeks ago. I will walk through the explanations that I provided in the program and I will interpret the output. If you haven’t read the previous post, you can access the materials using this link.

The tutorial begins by telling the user to run the following two lines of code:

clear all

set more off, permanently

The "clear" command will clear out any data, along with any value labels, currently loaded in Stata. The addition of "all" also clears matrices, scalars, constraints, clusters, stored results, sersets, and Mata functions and objects from memory; closes all open files and postfiles; clears the class system; closes any open Graph windows and dialog boxes; drops all programs from memory; and resets all timers to zero.

The next command, "set more off," changes Stata's default setting, which is "set more on." The "set more off" command tells Stata not to pause or display the "--more--" message when showing results. Personally, I hate pressing the spacebar over and over again until all of my results are displayed. Stata will make "set more off" the new default when you add the option (noted by the comma) "permanently." Once you run this command, you do not have to run it again, since "set more off" is now the default. You can change this back by typing "set more on, permanently". You can read more about the clear command here and the set more off command here.

In the next step, the tutorial directs you to load your data:

use “[folder path data is in]\[datafile name]”

I stored my data in the folder "C:\Users\Stella\Documents\blog\ols". This should be put in place of the text "[folder path data is in]", including the brackets. The file name is "nlsy97_2015". This should be put in place of the text "[datafile name]", including the brackets, such that the final command will read:

use “C:\Users\Stella\Documents\blog\ols\nlsy97_2015”

You could add the option ", clear" to this command instead of using the "clear all" command above, and it would accomplish effectively the same thing.

Next, I walk through how to set up a log file. A log records your Stata session. You can run multiple logs at the same time if you wish. It’s nice to keep a log of your work so that you can track what syntax was used to generate your output. In the program, I store the log in the same folder as my data. I name my log SMnlsy97_2015.txt. I add the option “replace” to allow Stata to write over an existing log, if there is one. You generally want to do this because it is likely that you will make mistakes and have to re-run your program.

In the next section of the program, I walk you through commands that allow you to look at your variables. For example, I look at the variable Wage, which is the outcome variable in this tutorial:

codebook Wage

This will produce the following output: As you can see, codebook will show you information about the variable, such as the type of variable it is (numeric), the lowest value (0) and highest value (110,000), the number of unique values (400), the number of missing values (1400), the mean (40209), the standard deviation (27211.2), and the interquartile range. If you ran the command “codebook” without specifying a variable or variables, it would produce output for every variable in the dataset.

You can view a summary of the variable by typing:

summarize Wage, detail

For short, you can type "sum" in place of summarize and "d" in place of detail. This command should produce the following output: The output confirms some of the details we saw using codebook: The smallest value for Wage is \$0, while the largest is \$110,000; the mean wage is \$40,209.03, with a standard deviation of \$27,211.23. Unlike codebook, summarize shows us that there are 5,702 observations for the variable Wage; the median value, represented by the 50th percentile, is \$36,000; and we can now see that the data are skewed. The value associated with "skewness" tells us the degree and direction of the skew, which here is to the right (indicated by the positive value). Note that we can tell the variable is skewed because the mean exceeds the median. Kurtosis is a measure of how heavy the tails of the distribution are; data with heavier tails than a normal distribution will show a value greater than 3. See UCLA IDRE for a detailed explanation.

You can view a visual representation of the variable by creating a histogram:

histogram Wage, normal

The "normal" option overlays a bell curve that approximates what a normal distribution would look like. You can see that the distribution of Wage is not normally distributed: the bars spike above the curve and fall far below it throughout the range of wage values (x-axis). You can also see that there is a sudden spike again in the tail around the wage \$150,000, where I top coded the wages.

It is generally expected that you run descriptive statistics on all of your variables. More on this in a second. First, you should check your variables for missing observations. There are several ways to do this. One way is with the command "misstable summarize". Note that you can specify which variables to look at (e.g., misstable summarize variable1 variable2...); by default, the command will look at all variables in the data. In the case of these data, that is okay because I've shortened the dataset to fewer than 20 variables. The result should be this: You should see the variables that have missing observations, and the number of observations that are missing. For example, the variable mar_stat has 161 missing observations, whereas the variable biokids has 2,331. The table also reveals the number of unique values associated with each variable, along with the min and max values. The latter results can also be observed in the summary table, or if you use the command "tabulate", which will produce frequency tables.

Another way to do this is with the command "mdesc", a user-submitted command that tells Stata to generate a table that shows the missing values: The screen capture doesn't show the results for all of the variables because it is cut off on my screen, but you get the idea. Again, by default, mdesc will run through all the variables in the dataset unless you specify certain variables. Instead of showing you the number of unique values and the lowest and highest values, mdesc shows you the percent that are missing, in addition to the total number of missing observations. I generally find this more helpful than the results produced by misstable summarize.

Next, I walk through different ways to address missing values. In the class that I made this tutorial for, replacing the missing values with the mean or mode was acceptable, as long as the student explicitly said so. More advanced users will likely use methods such as multiple imputation (e.g., Stata 13 Manual). You can also listwise delete any observations that contain missing values. I provide an example of a loop that will do this; it looks like this:

foreach v of varlist * {
drop if missing(`v')
}

Most users find this confusing, so bear with me. The loop says: for each variable, temporarily nicknamed "v", in the following varlist (the asterisk, *, is shorthand for all variables in the data), drop any observation that is missing a value on that variable. For example, the first variable in the example data is id. Stata will go through id and check whether any of its values are missing. No values of id are missing, so Stata will go to the next variable (birth_month) and check for missing values, and so on. When Stata gets to mar_stat, it will see that 161 people did not report their marital status, so it will remove them from the data, which reduces our sample to 6,941 observations (7,102 - 161). Stata will then remove 2,331 observations for missing data on the number of biological children they have, and so forth. In the end, you should have 3,623 observations. You can check this by running mdesc again.

Side note: Sometime in the future, I will provide a tutorial on loops, because they are a huge time saver, and they're something that is not often taught in statistics courses (at least not in any of mine).

Continuing on with the tutorial, in the next section, I show the user how to generate formatted tables with a user-submitted command “estout”:

ssc install estout

You can add the option “replace” to update estout, if you’ve already installed the command. You can read more about the command by visiting this link. Now type the following:

estpost sum *

esttab using "C:\Users\Stella\Documents\blog\ols\OLSdescripts.rtf", replace ///
cell((mean(label(Mean/Perc.) fmt(%9.2f)) sd(par label(S.D.) fmt(%9.2f)))) label nonumber nomtitle

eststo clear

The command "estpost" will store and display the results of whatever function you tell Stata to run on the data; in this case, I requested a summary of all my variables, as indicated by the asterisk (*). It will produce the following output: The first column shows you the number of observations; the second shows the sum of weights (i.e., nonmissing observations); the third shows the mean of each variable; the fourth shows the variance; the fifth shows the standard deviation; the sixth shows the minimum value; the seventh shows the maximum value; and the last shows the sum of the variable. For descriptive statistics, the values we are interested in are the mean, the standard deviation, and the total number of observations. We use this information in the next command, which tells Stata to create a table (esttab) using this file location (C:\Users\Stella\Documents\blog\ols\) and this file name (OLSdescripts.rtf). I specify .rtf (rich text format) because I want to preserve the formatting. I use the option replace because I want to replace any existing document with this name; if you do not want to do this, simply change the name of the document (for example, OLSdescript2.rtf). Next, I tell Stata that I want the first cell (i.e., column) to display the mean, labeled "Mean/Perc."; the label will be right justified and the numbers will include two decimal places. The next cell/column will display the standard deviation, labeled "S.D.", with the same formatting constraints. The "label" option tells Stata that you want the labels associated with each variable, and the remaining options suppress the model numbers and titles that esttab adds over each column by default. You should get the following output. Note that not all of the variables are labeled; you will want to fix this before you submit an assignment or use the table in a research paper.

You can also use esttab to generate formatted regression tables: I show you how to do this toward the end of the program, after I go through quick explanations of how to check basic OLS assumptions.

A final note about the formatted results. Although esttab is a quick and easy way to create formatted tables within Stata, the user-submitted command tabout will give you even more control over how your results are displayed, especially if you know how to use LaTeX. I spent a lot of time tinkering with LaTeX and I never mastered it. Maybe a project for the future.

OLS Regression in Stata

I thought I would share some Stata programming files that I made for some students who were struggling with coding in a statistics course. The program covers OLS regression in Stata and references (mostly 2015) data from the National Longitudinal Survey of Youth 1997 (NLSY97). You can access the Stata programs and data set that I created from the Open Science Framework, using this link. I also included all of the original files, and a program that shows you how I coded the data set that is referenced in this tutorial.

The program assumes that the user has some basic Stata coding knowledge, such as coding variables, loading and saving data, and generating descriptive statistics. The program also does not cover how to interpret the results from the OLS regression. Instead, it mostly focuses on exporting descriptive statistics and OLS regression results to formatted tables within Word, and it describes, in detail, how to conduct sensitivity analyses on the results. The reason I do not spend time explaining how to interpret the results is that there are already a ton of websites that do a great job of this; for example, see this detailed explanation from IDRE at UCLA. Instead, I offer ways to speed up the user's productivity by exporting pre-formatted tables, and I provide a supplemental explanation of what the syntax is doing. This way, new users will not get hung up on learning to code in Stata and can instead focus on learning the statistical concepts they are practicing.

For example, I provide comments after almost every line to explain specifically what the syntax does. I also walk the user through how to load data and save a log, which will record the results of any model the user runs. The reason I specifically include explanations such as "[folder path data is in]\[datafile name]" is that I found some students were not familiar with what this line was doing. For users who are not familiar with folder locations on the computer, you will have to include an additional step: showing how to copy and paste the folder path into the Stata program (illustrated by the highlighted folder address in the image below). When you are very comfortable with a computer, it's easy to forget that this may be a totally foreign concept to someone who does not understand folder locations, and that gap is a serious barrier to keeping up in statistics courses.

Basically, if the course is about programming in Stata, then of course students should go through the steps of learning the syntax. If the course is about statistical concepts and does not focus on teaching the basics of programming, then I think it is important to provide more detailed explanations of what the syntax is doing.

With that said, the program that I provided is not perfect, but it certainly goes further than the typical explanation a student receives in a statistics course. I hope to improve it as I receive feedback from others. If you have any suggestions on how to do this, please let me know! Also, please feel free to share it with others or use it in your class, if you find it useful.