# Linear Regression
Determine which explanatory variables have a significant effect on the mean of the quantitative response variable.
Simple linear regression is a good analysis technique when the data consists of a single quantitative response variable $Y$ and a single quantitative explanatory variable $X$.
Mathematical Model
The true regression model assumed by a regression analysis is given by
$$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i \quad \text{where} \ \epsilon_i \sim N(0, \sigma^2)$$
The estimated regression line obtained from a regression analysis, pronounced "y-hat", is written as
$$\hat{Y}_i = b_0 + b_1 X_i$$
Hypotheses
$$
\left.\begin{array}{ll}
H_0: \beta_1 = 0 \\
H_a: \beta_1 \neq 0
\end{array}
\right\} \ \text{Slope Hypotheses}
$$
$$
\left.\begin{array}{ll}
H_0: \beta_0 = 0 \\
H_a: \beta_0 \neq 0
\end{array}
\right\} \ \text{Intercept Hypotheses}
$$
If $\beta_1 = 0$, then the true regression line is horizontal and $X$ provides no information about the mean of $Y$, so the slope hypotheses test whether there is a meaningful linear relation between $Y$ and $X$.
If $\beta_0 = 0$, then the true regression line passes through the origin, i.e., the average y-value is zero when $x = 0$.
Assumptions
This regression model is appropriate for the data when five assumptions can be made.

- Linear Relation: the true regression relation between $Y$ and $X$ is linear.
- Normal Errors: the error terms $\epsilon_i$ are normally distributed with a mean of zero.
- Constant Variance: the variance $\sigma^2$ of the error terms is constant (the same) over all $X_i$ values.
- Fixed X: the $X_i$ values can be considered fixed and measured without error.
- Independent Errors: the error terms $\epsilon_i$ are independent.
Interpretation
The slope is interpreted as, "the change in the average y-value for a one unit change in the x-value." It is not the average change in y. It is the change in the average y-value.
The y-intercept is interpreted as, "the average y-value when x is zero." It is often not meaningful, but is sometimes useful. It depends on whether x being zero is meaningful within the context of your analysis. For example, knowing the average price of a car with zero miles is useful. However, pretending to know the average height of adult males who weigh zero pounds is not useful.
Perform the Regression
`summary(mylm)`: The `summary` command allows you to print the results of your linear regression that were previously saved in the object named `mylm`.
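For instance, a minimal end-to-end sketch, assuming the built-in cars data set (the source of the example output below):

mylm <- lm(dist ~ speed, data = cars) #Fit the regression of dist on speed and save it as mylm
summary(mylm) #Print the estimates, tests, and R-squared of the saved regression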
Example output from a regression, with each piece of the output explained in turn below.
`Call: lm(formula = dist ~ speed, data = cars)`: This is simply a statement of your original `lm(...)` "call" that you made when performing your regression. It allows you to verify that you ran what you thought you ran in the `lm(...)`.
`Residuals:` Residuals are the vertical difference between each point and the line, $r_i = Y_i - \hat{Y}_i$. This section summarizes their distribution:
- `Min -29.069`: "Min" gives the value of the residual that is furthest below the regression line. Ideally, the magnitude of this value would be about equal to the magnitude of the largest positive residual (the Max) because the hope is that the residuals are normally distributed around the line.
- `1Q -9.525`: "1Q" gives the first quartile of the residuals, which will always be negative and ideally would be about equal in magnitude to the third quartile.
- `Median -2.272`: "Median" gives the median of the residuals, which ideally would be about equal to zero. Note that because the regression line is the least squares line, the mean of the residuals will ALWAYS be zero, so it is never included in the output summary (see the sketch after this list). This particular median of -2.272 is a little further below zero than we would hope for; because the mean (0) is greater than the median (-2.272), it suggests the residuals are right-skewed. This can also be seen in the Max being much larger in magnitude than the Min.
- `3Q 9.215`: "3Q" gives the third quartile of the residuals, which ideally would be about equal in magnitude to the first quartile. In this case, it is pretty close, which helps us see that the quartiles of the residuals on either side of the line are behaving fairly symmetrically.
- `Max 43.201`: "Max" gives the maximum positive residual, which ideally would be about equal in magnitude to the Min. In this case, it is much larger than the Min, which helps us see that the residuals are likely right-skewed.
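A quick sketch to verify these summaries yourself (assuming the mylm object fit above):

mean(mylm$residuals) #Always essentially zero for least squares, which is why it is omitted above
quantile(mylm$residuals) #Reproduces the Min, 1Q, Median, 3Q, and Max shown above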
`Coefficients:` Notice that in your `lm(...)` you used only one X-variable, yet the Coefficients table shows two rows: one for the `(Intercept)` and one for the slope on your X-variable, `speed`. The columns are:
- `Estimate`: To learn more about the "Estimates" of the "Coefficients" see the "Explanation" tab, "Estimating the Model Parameters" section for details.
- `Std. Error`: To learn more about the "Standard Errors" of the "Coefficients" see the "Explanation" tab, "Inference for the Model Parameters" section.
- `t value`: To learn more about the "t value" of the "Coefficients" see the "Explanation" tab, "Inference for the Model Parameters" section.
- `Pr(>|t|)`: The "Pr" stands for "Probability" and the "(> |t|)" stands for "more extreme than the observed t-value". Thus, this is the p-value for the hypothesis test of each coefficient being zero. To learn more about the "p-value" of the "Coefficients" see the "Explanation" tab, "Inference for the Model Parameters" section.
The `(Intercept)` row:

- `(Intercept)`: This always says "(Intercept)" for any `lm(...)` you run in R. That is because R always assumes there is a y-intercept for your regression function.
- `-17.5791`: This is the estimate of the y-intercept, $b_0$.
- `6.7584`: This is the standard error of $b_0$.
- `-2.601`: This is the test statistic $t$ for the test of $H_0: \beta_0 = 0$, computed as the estimate divided by its standard error: $-17.5791 / 6.7584 = -2.601$.
- `0.0123`: This is the p-value of the test of the hypothesis that $\beta_0 = 0$.
- `*`: This is called a "star". One star means significant at the 0.05 level of significance.
The `speed` row:

- `speed`: This is always the name of your X-variable in your `lm(Y ~ X, ...)`.
- `3.9324`: This is the estimate of the slope, $b_1$.
- `0.4155`: This is the standard error of $b_1$.
- `9.464`: This is the test statistic $t$ for the test of $H_0: \beta_1 = 0$, computed as $3.9324 / 0.4155 = 9.464$.
- `1.49e-12`: This is the p-value of the test of the hypothesis that $\beta_1 = 0$.
- `***`: Three stars means significant at the 0.001 level of significance.
`Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1`: These "codes" explain which significance level each p-value is smaller than, based on how many "stars" the p-value is labeled with in the Coefficients table above.
`Residual standard error: 15.38 on 48 degrees of freedom`:

- `Residual standard error`: This is the estimate of $\sigma$, the standard deviation of the error terms.
- `15.38`: For this particular regression, the estimate of $\sigma$ is 15.38. It is the square root of the MSE: $\sqrt{236.5317} \approx 15.38$.
- `on 48 degrees of freedom`: This is $n - p = 50 - 2 = 48$, the degrees of freedom of the SSE.
`Multiple R-squared: 0.6511`:

- `Multiple R-squared`: This is $R^2$, the proportion of the variation in Y that is explained by the regression.
- `0.6511`: In this particular regression, 65.11% of the variation in stopping distance `dist` is explained by the regression model using the speed of the car.
`Adjusted R-squared: 0.6438`:

- `Adjusted R-squared`: The adjusted R-squared will always be at least slightly smaller than the Multiple R-squared because it penalizes $R^2$ for the number of parameters in the model.
- `0.6438`: In this case, the value of 0.6438 is quite close to the original $R^2$ of 0.6511 because the model uses only a single explanatory variable.
`F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12`:

- `F-statistic`: The F-statistic is found as the ratio MSR/MSE, where MSR = SSR/(p-1) and MSE = SSE/(n-p), with n the sample size and p the number of parameters in the regression model.
- `89.57`: This is the value of the F-statistic for the `lm(dist ~ speed, data=cars)` regression. Note that SSE = `sum( mylm$residuals^2 )` = 11353.52 with n - p = 50 - 2 = 48 degrees of freedom for this data. Further, SSR = `sum( (mylm$fitted.values - mean(cars$dist))^2 )` = 21185.46 with p - 1 = 1 degree of freedom. So MSR = 21185.46 and MSE = 11353.52 / 48 = 236.5317, giving MSR / MSE = 21185.46 / 236.5317 = 89.56711.
- `on 1 and 48 DF,`: The 1 degree of freedom is the SSR degrees of freedom (p-1). The 48 is the SSE degrees of freedom (n-p).
- `p-value: 1.49e-12`: The p-value for an F-statistic is found by the code `pf(89.56711, 1, 48, lower.tail=FALSE)`, which gives the probability of being more extreme than the observed F-statistic in an F distribution with 1 and 48 degrees of freedom.
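All of these quantities can be reproduced directly in R. A sketch, assuming the mylm object fit from the cars data above:

SSE <- sum( mylm$residuals^2 ) #11353.52
SSR <- sum( (mylm$fitted.values - mean(cars$dist))^2 ) #21185.46
n <- nrow(cars) #50 observations
p <- 2 #two parameters: intercept and slope
MSE <- SSE / (n - p) #236.5317
MSR <- SSR / (p - 1) #21185.46
MSR / MSE #89.56711, the F-statistic
pf(MSR / MSE, p - 1, n - p, lower.tail=FALSE) #p-value: 1.49e-12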
Check Assumptions 1, 2, 3, and 5
plot(mylm, which = 1:2)

- `plot(...)`: This version of `plot(...)` will actually create several regression diagnostic plots by default.
- `mylm`: This is the name of an lm object that you created previously.
- `which = 1:2`: This allows you to select "which" regression diagnostic plots should be drawn. Selecting 1 would give the residuals vs. fitted-values plot only; the colon allows you to select more than just one plot, and 2 also gives the Q-Q Plot of the residuals. If you wanted, you could instead use `which=1` to get the residuals vs. fitted-values plot, then use `qqPlot(mylm$residuals)` to create a fancier Q-Q Plot of the residuals.
plot(mylm$residuals)

- `plot(...)`: This version of `plot(...)` will be used to create a time-ordered plot of the residuals. The order of the residuals is the original order of the x-values in the data set. If the original data set doesn't have an order, then this plot is not interesting.
- `mylm`: The lm object that you created previously.
- `$`: This allows you to access various elements from the regression that was performed.
- `residuals`: This grabs the residual of each observation in the regression.
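To view all three diagnostic checks at once, a sketch (assuming the mylm object from above):

par(mfrow=c(1,3)) #Arrange the three plots side by side
plot(mylm, which=1:2) #Residuals vs. fitted values, then the Q-Q Plot of the residuals
plot(mylm$residuals) #Residuals in observation order, to check independence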
Plotting the Regression Line
To add the regression line to a scatterplot use the abline(...)
command:
abline(mylm)

- `abline(...)`: This stands for "a" (intercept) "b" (slope) line. It is a function that allows you to add a line to a plot by specifying just the intercept and slope of the line.
- `mylm`: This is the name of an `lm(...)` object that you created previously. Since `mylm` contains the slope and intercept of the estimated line, the `abline(...)` function will locate these two values from within `mylm` and use them to add a line to your current `plot(...)`.
You can customize the look of the regression line with additional arguments to `abline(...)`, such as `col` (color), `lty` (line type), and `lwd` (line width). You can add points to the plot with `points(...)`.
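For example, a sketch of a customized line with an added point (the argument values here are illustrative choices, not from the original):

plot(dist ~ speed, data=cars) #Scatterplot of the data
abline(mylm, col="orange", lty=2, lwd=2) #Orange, dashed (lty=2), thicker (lwd=2) regression line
points(12, 30, pch=16, col="firebrick") #Add a single highlighted point at (12, 30)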
To add the regression line to a scatterplot using the ggplot2 approach, first ensure:
library(ggplot2)
or library(tidyverse)
is loaded. Then, use the geom_smooth(method = "lm")
command:
ggplot(cars, aes(x = speed, y = dist)) +
  geom_point() +
  geom_smooth(method = "lm", formula = y ~ x, se = FALSE)

- `ggplot(cars, aes(x = speed, y = dist))`: sets up the ggplot() framework by declaring the data set and the x and y variables in the aesthetic.
- `geom_point()`: allows you to add a layer of points, a scatterplot, over the ggplot() framework. The x and y coordinates are received from the previously specified x and y variables declared in the ggplot() aesthetic. The + is used to add yet another layer to ggplot().
- `geom_smooth(method = "lm", formula = y ~ x, se = FALSE)`: geom_smooth() is a smoothing function that adds different lines or curves to ggplot(); in this case, it adds the least-squares regression line to the scatterplot. `method = "lm"` declares the smoothing method: lm stands for linear model, so a least-squares regression line is fit using the variables declared in the initial ggplot() aesthetic (several other methods could be used instead). `formula = y ~ x` places a simple linear regression line on the plot; other formula statements can be used in the same way as in lm(...) to place more complicated models on the plot. `se = FALSE` turns off the "standard error" feature; when TRUE, a gray band showing the "confidence band" for the regression is drawn, and unless you know how to interpret this confidence band, leave it turned off.
There are a number of ways to customize the appearance of the regression line:

ggplot(cars, aes(x = speed, y = dist)) +
  geom_point() +
  geom_smooth(method = "lm", formula = y ~ x, se = FALSE,
              size = 2, color = "orange", linetype = "dashed")

- `size = 2`: adjusts the thickness of the line to size 2.
- `color = "orange"`: changes the color of the line to orange.
- `linetype = "dashed"`: changes the solid line to a dashed line. Some linetype options include "dashed", "dotted", "longdash", "dotdash", etc.
In addition to customizing the regression line, you can customize the points, add points, add lines, and much more.

ggplot(cars, aes(x = speed, y = dist)) +
  geom_point(size = 1.5, color = "skyblue", alpha = 0.5) +
  geom_smooth(method = "lm", formula = y ~ x, se = FALSE, color = "navy", size = 1.5) +
  geom_hline(yintercept = 75, color = "firebrick", size = 1, linetype = "longdash", alpha = 0.5) +
  geom_segment(x = 14, y = 75, xend = 14, yend = 38, size = 1, color = "lightgray", linetype = "longdash") +
  geom_point(x = 14, y = 75, size = 3, color = "firebrick") +
  geom_text(x = 14, y = 84, label = "My Point (14, 75)", color = "navy", size = 3) +
  theme_minimal()

- `geom_point(size = 1.5, color = "skyblue", alpha = 0.5)`: `size = 1.5` changes the size of the points, `color = "skyblue"` changes the color of the points to Brother Saunders' favorite color, and `alpha = 0.5` changes the transparency of the points to 0.5.
- `geom_smooth(..., color = "navy", size = 1.5)`: adds the least-squares regression line, colored navy blue with thickness 1.5.
- `geom_hline(yintercept = 75, ...)`: adds a horizontal line at the specified y-intercept of 75, colored firebrick red, with thickness 1, long dashes, and transparency 0.5. You can also use `geom_vline(xintercept = some_number)` to add a vertical line to the graph.
- `geom_segment(x = 14, y = 75, xend = 14, yend = 38, ...)`: adds a line segment from the specified start point (14, 75) to the specified end point (14, 38), with thickness 1, colored light gray, with long dashes.
- `geom_point(x = 14, y = 75, size = 3, color = "firebrick")`: geom_point() can also be used to add an individual point to the graph; simply declare the x and y coordinates of the point you want to plot. `size = 3` makes the point stand out more.
- `geom_text(x = 14, y = 84, label = "My Point (14, 75)", color = "navy", size = 3)`: geom_text() allows you to add customized text anywhere on the graph, very similar to the base R equivalent text(...). The label "My Point (14, 75)" will appear at the coordinates (14, 84), in navy, at text size 3.
- `theme_minimal()`: adds a minimalistic theme to the graph. There are many other themes that you can try out.
Accessing Parts of the Regression
Finally, note that the mylm object contains several accessible pieces of the regression; run names(mylm) to list them all. For example, mylm$coefficients contains the estimated y-intercept and slope:
## (Intercept) speed
## -17.579095 3.932409
mylm$residuals contains the residual of each observation:

## 1 2 3 4 5 6 7
## 3.849460 11.849460 -5.947766 12.052234 2.119825 -7.812584 -3.744993
## 8 9 10 11 12 13 14
## 4.255007 12.255007 -8.677401 2.322599 -15.609810 -9.609810 -5.609810
## 15 16 17 18 19 20 21
## -1.609810 -7.542219 0.457781 0.457781 12.457781 -11.474628 -1.474628
## 22 23 24 25 26 27 28
## 22.525372 42.525372 -21.407036 -15.407036 12.592964 -13.339445 -5.339445
## 29 30 31 32 33 34 35
## -17.271854 -9.271854 0.728146 -11.204263 2.795737 22.795737 30.795737
## 36 37 38 39 40 41 42
## -21.136672 -11.136672 10.863328 -29.069080 -13.069080 -9.069080 -5.069080
## 43 44 45 46 47 48 49
## 2.930920 -2.933898 -18.866307 -6.798715 15.201285 16.201285 43.201285
## 50
## 4.268876
mylm$fitted.values contains the fitted value of each observation:

## 1 2 3 4 5 6 7 8
## -1.849460 -1.849460 9.947766 9.947766 13.880175 17.812584 21.744993 21.744993
## 9 10 11 12 13 14 15 16
## 21.744993 25.677401 25.677401 29.609810 29.609810 29.609810 29.609810 33.542219
## 17 18 19 20 21 22 23 24
## 33.542219 33.542219 33.542219 37.474628 37.474628 37.474628 37.474628 41.407036
## 25 26 27 28 29 30 31 32
## 41.407036 41.407036 45.339445 45.339445 49.271854 49.271854 49.271854 53.204263
## 33 34 35 36 37 38 39 40
## 53.204263 53.204263 53.204263 57.136672 57.136672 57.136672 61.069080 61.069080
## 41 42 43 44 45 46 47 48
## 61.069080 61.069080 61.069080 68.933898 72.866307 76.798715 76.798715 76.798715
## 49 50
## 76.798715 80.731124
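A sketch of pulling these pieces out directly (assuming the mylm object from above):

names(mylm) #Lists all the components stored inside the lm object
mylm$coefficients #The estimates b_0 = -17.579 and b_1 = 3.932 shown above
head(mylm$residuals) #The first few residuals
head(mylm$fitted.values) #The first few fitted values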
Making Predictions
mylm <- lm(dist ~ speed, data = cars)
predict(mylm, data.frame(speed = 12))
`1  29.60981`: The value given is the "fitted value" or "predicted value" for the specified x-value. In this case, a car with a speed of 12 is predicted to have a stopping distance of 29.60981 feet.
mylm <- lm(dist ~ speed, data = cars)
predict(mylm, data.frame(speed = 12), interval = "prediction")
fit The "fit" is the predicted value. | lwr The "lwr" is the lower bound. | upr The "upr" is the upper bound. |
1 29.60981 In this case, a car with a speed of 12 mph is predicted to have a stopping distance of 29.60981 feet. However, we are wise enough to recognize that the stopping distance for individual cars will vary anywhere from -1.749529 (or 0 because distance can't go negative) feet to 60.96915 feet. | -1.749529 This is the lower bound of the prediction interval. While we predict a stopping distance of 29.60981 feet, this prediction interval reminds us the stopping distance could be as quick as -1.749529 feet (or 0 because distance can't go negative). | 60.96915 This is the upper bound of the prediction interval. While we predict a stopping distance of 29.60981 feet, this prediction interval reminds us that the actual stopping distance could be as high as 60.96915 feet. |
mylm <- lm(dist ~ speed, data = cars)
predict(mylm, data.frame(speed = 12), interval = "confidence")
fit The "fit" is the predicted value. | lwr The "lwr" is the lower bound. | upr The "upr" is the upper bound. |
1 29.60981 In this case, cars with a speed of 12 mph are predicted to have an average stopping distance of 29.60981 feet, where the average could be anywhere from 24.39514 feet to 34.82448 feet. | 24.39514 This is the lower bound of the confidence interval. We are 95% confident that the average stopping distance of cars going 12 mph is greater than this value. | 34.82448 This is the upper bound of the confidence interval. We are 95% confident that the average stopping distance of cars going 12 mph is less than this value. |
Finding Confidence Intervals for Model Parameters
mylm <- lm(dist ~ speed, data = cars)
confint(mylm, level = 0.90)
                   5 %      95 %
(Intercept) -28.914514 -6.243676
speed         3.235501  4.629317

- `5 %` and `95 %`: The lower bound of a 90% confidence interval occurs at the 5th percentile. This is because at 90% confidence, 10% is left in the tails, with 5% on each end. The upper bound thus ends at the 95th percentile, hence the 5 % and 95 % column names.
- `(Intercept)`: This row gives the confidence interval for the y-intercept, $\beta_0$. You can be 90% confident that $\beta_0$ lies between -28.914514 and -6.243676.
- `speed`: This row gives the confidence interval for the slope, $\beta_1$. You can be 90% confident that the slope lies between 3.235501 and 4.629317.
mylm <- lm(dist ~ speed, data = cars)
confint(mylm, level = 0.95)
                  2.5 %     97.5 %
(Intercept) -31.167850  -3.990340
speed         3.096964   4.767853

- `2.5 %` and `97.5 %`: The lower bound of a 95% confidence interval occurs at the 2.5th percentile. This is because at 95% confidence, 5% is left in the tails, with 2.5% on each end. The upper bound thus ends at the 97.5th percentile, hence the 2.5 % and 97.5 % column names for the lower and upper bounds, respectively.
- `(Intercept)`: This row gives the confidence interval for the y-intercept, $\beta_0$. You can be 95% confident that $\beta_0$ lies between -31.167850 and -3.990340.
- `speed`: This row gives the confidence interval for the slope, $\beta_1$. You can be 95% confident that the slope lies between 3.096964 and 4.767853.
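As a check on where these bounds come from, the 95% interval for the slope can be reproduced by hand from the summary output (b_1 = 3.9324 with standard error 0.4155):

b1 <- 3.9324 #Estimated slope from the summary output
se.b1 <- 0.4155 #Standard error of the slope
tstar <- qt(0.975, 48) #t critical value with n - p = 48 degrees of freedom
b1 + c(-1, 1) * tstar * se.b1 #Approximately (3.0970, 4.7678), matching confint() up to rounding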
Linear regression has a rich mathematical theory behind it. This is because it uses a mathematical function and a random error term to describe the regression relation between a response variable $Y$ and an explanatory variable $X$.
Regression Cheat Sheet

Term | Pronunciation | Meaning | Math | R Code |
---|---|---|---|---|
$Y_i$ | "why-eye" | The data | $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i \quad \text{where} \ \epsilon_i \sim N(0, \sigma^2)$ | YourDataSet$YourYvariable |
$\hat{Y}_i$ | "why-hat-eye" | The fitted line | $\hat{Y}_i = b_0 + b_1 X_i$ | lmObject$fitted.values |
$E\{Y_i\}$ | "expected value of why-eye" | True mean y-value | $E\{Y_i\} = \beta_0 + \beta_1 X_i$ | <none> |
$\beta_0$ | "beta-zero" | True y-intercept | <none> | <none> |
$\beta_1$ | "beta-one" | True slope | <none> | <none> |
$b_0$ | "b-zero" | Estimated y-intercept | $b_0 = \bar{Y} - b_1\bar{X}$ | b_0 <- mean(Y) - b_1*mean(X) |
$b_1$ | "b-one" | Estimated slope | $b_1 = \frac{\sum X_i(Y_i - \bar{Y})}{\sum(X_i - \bar{X})^2}$ | b_1 <- sum( X*(Y - mean(Y)) ) / sum( (X - mean(X))^2 ) |
$\epsilon_i$ | "epsilon-eye" | Distance of dot to true line | $\epsilon_i = Y_i - E\{Y_i\}$ | <none> |
$r_i$ | "r-eye" or "residual-eye" | Distance of dot to estimated line | $r_i = Y_i - \hat{Y}_i$ | lmObject$residuals |
$\sigma^2$ | "sigma-squared" | Variance of the error terms | $Var\{\epsilon_i\} = \sigma^2$ | <none> |
$MSE$ | "mean squared error" | Estimate of $\sigma^2$ | $MSE = \frac{SSE}{n-p}$ | sum( lmObject$res^2 ) / (n - p) |
$SSE$ | "sum of squared error" (residuals) | Measure of the dots' total deviation from the line | $SSE = \sum_{i=1}^n (Y_i - \hat{Y}_i)^2$ | sum( lmObject$res^2 ) |
$SSR$ | "sum of squared regression error" | Measure of the line's deviation from y-bar | $SSR = \sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2$ | sum( (lmObject$fit - mean(YourData$Y))^2 ) |
$SSTO$ | "total sum of squares" | Measure of total variation in Y | $SSTO = SSR + SSE = \sum_{i=1}^n (Y_i - \bar{Y})^2$ | sum( (YourData$Y - mean(YourData$Y))^2 ) |
$R^2$ | "R-squared" | Proportion of variation in Y explained by the regression | $R^2 = \frac{SSR}{SSTO} = 1 - \frac{SSE}{SSTO}$ | SSR/SSTO |
$\hat{Y}_h$ | "why-hat-aitch" | Estimated mean y-value for some x-value called $X_h$ | $\hat{Y}_h = b_0 + b_1 X_h$ | predict(lmObject, data.frame(XvarName = #)) |
$X_h$ | "ex-aitch" | Some x-value of interest, not necessarily one of the observed $X_i$ values | <none> | Xh = # |
Confidence Interval | "confidence interval" | Estimated bounds at a certain level of confidence for a parameter | <none> | confint(mylm, level = someConfidenceLevel) |
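To see the cheat sheet formulas in action, a sketch verifying b_0 and b_1 on the cars data:

X <- cars$speed #The explanatory variable
Y <- cars$dist #The response variable
b_1 <- sum( X*(Y - mean(Y)) ) / sum( (X - mean(X))^2 ) #3.932409, the estimated slope
b_0 <- mean(Y) - b_1*mean(X) #-17.579095, the estimated y-intercept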
Parameter | Estimate |
---|---|
$\beta_0$ | $b_0$ |
$\beta_1$ | $b_1$ |
$\sigma^2$ | $MSE$ |
## Brandon Notes
There are three main elements to the mathematical model of regression. Each of these three elements is pictured below in the "Regression Relation Diagram."
- The true line, i.e., the regression relation: $E\{Y_i\} = \beta_0 + \beta_1 X_i$. Note: the regression relation gives the true average y-value at each x-value.
- The dots, i.e., the regression relation plus an error term: $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$. Note: the error terms $\epsilon_i$ scatter the dots vertically around the true line.
- The estimated line, i.e., the line we get from a sample of data: $\hat{Y}_i = b_0 + b_1 X_i$. Note: the estimates $b_0$ and $b_1$ are computed from the sampled dots.
This graphic depicts the true, but typically unknown, regression relation (dotted line). It also shows how a sample of data from the true regression relation (the dots) can be used to obtain an estimated regression equation (solid line) that is fairly close to the truth (dotted line).

Something to ponder: The true line, when coupled with the error terms, "creates" the data. The estimated (or fitted) line uses the sampled data to try to "re-create" the true line.
We could loosely call this the "order of creation" as shown by the following diagram.
## Setup for the diagram below (these particular values are illustrative choices):
set.seed(101)
x <- runif(30, 15, 45) #Some x-values
beta0 <- 3 #The "true" y-intercept
beta1 <- .1 #The "true" slope
y <- beta0 + beta1*x + rnorm(30, 0, 2) #Data created from the law plus error
xylm <- lm(y ~ x) #The estimated law

par(mfrow=c(1,3), mai=c(.2,.2,.4,.1))
plot(y ~ x, col="white", main="A Law is Given", yaxt='n', xaxt='n')
curve(beta0 + beta1*x, add=TRUE, lty=2)
plot(y ~ x, pch=16, main="Data is Created", xaxt='n', yaxt='n')
curve(beta0 + beta1*x, add=TRUE, lty=2)
plot(y ~ x, pch=16, xaxt='n', yaxt='n', main="The Law is Estimated")
curve(xylm$coef[1] + xylm$coef[2]*x, add=TRUE)
curve(beta0 + beta1*x, add=TRUE, lty=2)

A Law is Given | Data is Created | The Law is Estimated |
---|---|---|
The true line is the "law". | The error terms scatter the data around the true line, creating the dots. | The law is estimated with lm(...). |
The code below runs a simulation demonstrating this "order of creation".
## Simulating Data from a Regression Model
## This R-chunk is meant to be played in your R Console.
## It allows you to explore how the various elements
## of the regression model combine together to "create"
## data and then use the data to "re-create" the line.
set.seed(101) #Allows us to always get the same "random" sample
#Change to a new number to get a new sample
n <- 3 #set the sample size
X_i <- runif(n, 15, 45) #Gives n random values from a uniform distribution between 15 to 45.
beta0 <- 3 #Our choice for the y-intercept.
beta1 <- .1 #Our choice for the slope.
sigma <- 12.5 #Our choice for the std. deviation of the error terms.
epsilon_i <- rnorm(n, 0, sigma) #Gives n random values from a normal distribution with mean = 0, st. dev. = sigma.
Y_i <- beta0 + beta1*X_i + epsilon_i #Create Y using the normal error regression model
fabData <- data.frame(y=Y_i, x=X_i) #Store the data as a data frame
View(fabData)
#In the real world, we begin with data (like fabData) and try to recover the model that (we assume) was used to create it.
fab.lm <- lm(y ~ x, data=fabData) #Fit an estimated regression model to the fabData.
summary(fab.lm) #Summarize your model.
plot(y ~ x, data=fabData) #Plot the data.
abline(fab.lm) #Add the estimated regression line to your plot.
# Now for something you can't do in real life... but since we created the data...
abline(beta0, beta1, lty=2) #Add the true regression line to your plot using a dashed line (lty=2).
legend("topleft", legend=c("True Line", "Estimated Line"), lty=c(2,1), bty="n") #Add a legend to your plot specifying which line is which.
The interpretation of the slope $b_1$ is the change in the average y-value for a one unit change in the x-value.
The interpretation of the y-intercept $b_0$ is the average y-value when $x = 0$. In both cases, it is the average y-value that the line describes, not the individual y-values.
To better see this, consider the three graphics shown below.
par(mfrow=c(1,3))
hist(mtcars$mpg, main="Gas Mileage of mtcars Vehicles", ylab="Number of Vehicles", xlab="Gas Mileage (mpg)", col="skyblue")
boxplot(mpg ~ cyl, data=mtcars, border="skyblue", boxwex=0.5, main="Gas Mileage of mtcars Vehicles", ylab="Gas Mileage (mpg)", xlab="Number of Cylinders of Engine (cyl)")
plot(mpg ~ qsec, data=subset(mtcars, am==0), pch=16, col="skyblue", main="Gas Mileage of mtcars Vehicles", ylab="Gas Mileage (mpg)", xlab="Quarter Mile Time (qsec)")
abline(lm(mpg ~ qsec, data=subset(mtcars, am==0)), col="darkgray")
mtext(side=3, text="Automatic Transmissions Only (am==0)", cex=0.5)
abline(v = seq(16,22,2), h=seq(10,30,5), lty=3, col="gray")

The Histogram | The Boxplot | The Scatterplot |
---|---|---|
The histogram on the left shows gas mileages of vehicles from the mtcars data set. The average gas mileage is 20.09. | The boxplot in the middle shows that if we look at gas mileage for 4, 6, and 8 cylinder vehicles separately, we find the means to be 26.66, 19.74, and 15.1, respectively. If we wanted to, we could talk about the change in the means across cylinders: the mean decreases first by 6.92 mpg (from 4 to 6 cylinders) and then by 4.64 mpg (from 6 to 8 cylinders). | The scatterplot on the right shows that the average gas mileage (for just automatic transmission vehicles) increases by a slope of 1.44 for each 1 second increase in quarter mile time. In other words, the line gives the average y-value for any x-value. Thus, the slope of the line is the change in the average y-value. |
Residuals are the difference between the observed value of $Y_i$ and the fitted (predicted) value $\hat{Y}_i$.
We will denote the residual for individual $i$ by $r_i = Y_i - \hat{Y}_i$.
In summary...
Residual $r_i$ | Error $\epsilon_i$ |
---|---|
Distance between the dot $Y_i$ and the estimated line $\hat{Y}_i$. | Distance between the dot $Y_i$ and the true line $E\{Y_i\}$. |
Known | Typically Unknown |
As shown in the graph below, the residuals are known values and they estimate the unknown (but true) error terms.

Keep in mind the idea that the errors $\epsilon_i$ are typically unknown (computing them would require knowing the true line), while the residuals $r_i$ can always be computed from the data and the estimated line.
Residuals have many uses in regression analysis (a brief computational sketch follows the list below). They allow us to
- diagnose the regression assumptions,
- estimate the regression relation,
- estimate the variance of the error terms,
- and assess the fit of the regression relation.
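A brief sketch of computing residuals and using them to estimate the variance (assuming mylm <- lm(dist ~ speed, data = cars)):

r <- cars$dist - mylm$fitted.values #Residuals computed by hand: Y_i minus Y-hat_i
all.equal(as.numeric(r), as.numeric(mylm$residuals)) #TRUE: identical to the stored residuals
sum(r^2) / (nrow(cars) - 2) #MSE = 236.5317, the estimate of the variance of the error terms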
Not all regressions are created equal, as the three plots below show. Sometimes the dots are clustered very tightly to the line. At other times, the dots spread out fairly dramatically from the line.
par(mfrow=c(1,3), mai=c(.1,.1,.5,.1))
set.seed(2)
x <- runif(30,0,20)
y1 <- 2 + 3.5*x + rnorm(30,0,2)
y2 <- 2 + 3.5*x + rnorm(30,0,8)
y3 <- 2 + 3.5*x + rnorm(30,0,27)
plot(y1 ~ x, pch=16, col="darkgray", xlim=c(-1,21), yaxt='n', xaxt='n', ylim=c(-10,100), main="Excellent Fit")
abline(lm(y1 ~ x), col="gray")
plot(y2 ~ x, pch=16, col="darkgray", xlim=c(-1,21), yaxt='n', xaxt='n', ylim=c(-10,100), main="Good Fit")
abline(lm(y2 ~ x), col="gray")
plot(y3 ~ x, pch=16, col="darkgray", xlim=c(-1,21), yaxt='n', xaxt='n', ylim=c(-10,100), main="Poor Fit")
abline(lm(y3 ~ x), col="gray")

A common way to measure the fit of a regression is with correlation. While this can be a useful measurement, there is greater insight in using the square of the correlation, called $R^2$.
A sum is just a fancy word for adding things together. $$ 1 + 2 + 3 + 4 + 5 + 6 = 21 $$
Long sums get tedious to write out by hand. So we use the symbol $\sum$, read "the sum of", to write them compactly: $\sum_{i=1}^6 i = 21$.
Test your knowledge, do you see why the answer is 6 to the sum below? $$ \sum_{i=1}^3 i = 6 $$
Computing sums in R is fairly easy. Type the following code in your R Console.
sum(1:6) #gives the answer of 21
sum(1:3) #gives the answer of 6
However, sums really become useful when used with a data set.
Each row of a data set represents an "individual's" data. We can reference each individual with a row number. In the cars data, individual 3, denoted $i = 3$, has a speed of 7 and a dist of 4.
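A sketch of sums applied to a data set (using the built-in cars data):

cars[3, ] #Individual i = 3: a speed of 7 and a dist of 4
sum(cars$dist) #Adds up the dist values of all the individuals
sum( (cars$dist - mean(cars$dist))^2 ) #A sum used constantly in regression: the SSTO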