Simple Linear Regression Example in R

A worked example can help reinforce the intuition behind simple linear regression models. Consider an example of salary vs. years of experience. This is a good example to start with because the results intuitively make sense.

  • The null hypothesis will be that years of experience has no impact on salary.
  • The significance level is set at 5%. This level is arbitrary, but it is the most commonly used.
  • If the p-value for the years of experience variable is less than the significance level, then the null hypothesis is rejected.

The first two steps in R for this simple linear regression example are to import the dataset and split it into a training set and a test set. The syntax for this is as follows:

dataset = read.csv('Salary_Data.csv')
install.packages('caTools')
library(caTools)
set.seed(123)
split = sample.split(dataset$Salary, SplitRatio = 2/3)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)

It is assumed that the csv data file is in the working directory; it contains thirty rows of observations. The caTools package is used to split the dataset, with Salary as the dependent variable. The split ratio is set at two-thirds, so the training set will have twenty observations and the test set will have ten.
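
As a quick sanity check, the sizes of the two subsets can be inspected:

nrow(training_set)  # 20 observations in the training set
nrow(test_set)      # 10 observations in the test set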

Once the previous code is executed, the next step is to fit the simple linear regression to the training set. In essence, this creates the line of best fit.

regressor = lm(formula = Salary ~ YearsExperience,
data = training_set)

In the above syntax, the lm function is used to build the regression model. The two essential arguments are the formula and the data.

Remember, this model was built with the twenty observations in the training set; it has never seen the observations in the test set. The test set can therefore be used to make predictions based on the model, and those predictions can then be compared with the actual values in the test set. The predict function is used to make the predictions.

y_pred = predict(regressor, newdata = test_set)

It may help to write the predictions to a new csv file:

write.csv(y_pred, file = "salary_predictions.csv")

With a bit of finesse, the ggplot2 library can be used to plot the test set data against the regression model that was built with the training set.
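
A minimal sketch of such a plot, assuming the ggplot2 package is installed, draws the test set points in red and the regression line from the training set predictions in blue:

library(ggplot2)
# Red points: actual test set data; blue line: model fitted on the training set
ggplot() +
  geom_point(aes(x = test_set$YearsExperience, y = test_set$Salary),
             colour = 'red') +
  geom_line(aes(x = training_set$YearsExperience,
                y = predict(regressor, newdata = training_set)),
            colour = 'blue') +
  ggtitle('Salary vs. Years of Experience (Test Set)') +
  xlab('Years of Experience') +
  ylab('Salary')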

[Plot: Simple Linear Regression Example. The red dots represent actual data from the test set.]

It is easy to see that the regression model does a great job of predicting results from the test set. In some cases, the red data points sit very near the line itself, so some predictions are very accurate.
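
A quick numeric check of the same idea is to line up the predicted salaries against the actual test set salaries; a minimal sketch:

# Compare actual test set salaries with the model's predictions
comparison = data.frame(Actual = test_set$Salary, Predicted = y_pred)
print(comparison)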

Using the summary(regressor) function outputs the data needed to decide definitively whether the null hypothesis should be rejected:
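
summary(regressor)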

In this simple linear regression example, the summary shows that the p-value for the years of experience variable is 1.52e-14. This is much less than even 1%, so it easily falls below the 5% significance level. Thus, the null hypothesis is rejected. In other words, there is a correlation between years of experience and salary.

Furthermore, the summary reports an R-squared value of 0.9649. Since this is close to 1, it is safe to say the correlation is very strong.
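
If the value is needed programmatically, it can also be pulled directly from the summary object:

summary(regressor)$r.squared  # 0.9649 for this model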
