Linear Regression Calculator: Understanding Results – Simply Explained

- Hamid: What's that?
- Rambo: It's blue light.
- Hamid: What does it do?
- Rambo: It turns blue.

Rambo III


The linear regression analysis has been calculated, the model is ready, and now we see something like this:

[Image: results of the linear regression calculator]
What does this result tell us? This section explains how to interpret the regression table and what conclusions we can draw from the data. Simply calculating and presenting the table is, of course, not enough; it would be too easy if the analysis ended here. This article focuses on the individual metrics that emerge from the data analysis and how they should be understood. Some statistical background knowledge is assumed, but the theoretical foundations and background can also be reviewed here.

We know that the linear regression model looks like this:

y^ = β^0 + Σ_{i=1}^{N} β^i × Xi = β^0 + β^1 × X1 + β^2 × X2 + ... + β^N × XN
  • y^ is the estimated value for our target – the variable we are interested in. In the example above, this is the price of a property.
  • β^0 is the estimated Y-intercept.
  • Xi represents the explanatory / independent variables (also called regressors), such as the area or condition of a property.
  • β^i represents the estimated changes of the regression line (called the regression coefficient or beta coefficient) with respect to Xi.
Using the linear regression equation, we examine a linear relationship within a defined research question.
The following happens when calculating this linear regression equation: the line that most closely matches the data points X and Y is sought. Visually, this corresponds to a line that best fits through a scatter plot of X and Y. More precisely, the squared deviations between y^, the linear regression line, and y, the actual values of Y, are minimized. This method is called OLS (Ordinary Least Squares) or the method of least squares. A more detailed explanation and more information can be found in the introduction to linear regression.
In our example, the linear regression equation looks like this:
Price^ = β^0 + β^1 × Area + β^2 × Condition
With this knowledge, we can now understand the results of our linear regression and interpret them in the context of the respective question.
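To make the equation concrete, it can be written as a small Python function. The Area and Condition coefficients below are the ones from the article's example table; the intercept value is purely hypothetical, chosen only so the sketch runs.

```python
# Sketch of the fitted model Price^ = beta_0 + beta_1 * Area + beta_2 * Condition.
beta_0 = 50000.0        # hypothetical intercept (for illustration only)
beta_1 = 3268.2735      # coefficient of Area, from the article's example table
beta_2 = 124200.0       # coefficient of Condition, from the article's example table

def predicted_price(area_sqft, condition):
    """Plug the regressors into the linear regression equation."""
    return beta_0 + beta_1 * area_sqft + beta_2 * condition

# Each extra square foot adds beta_1 to the predicted price,
# holding Condition fixed (up to floating-point error):
delta = predicted_price(1001, 3) - predicted_price(1000, 3)
print(round(delta, 4))
```

This also makes the meaning of each coefficient tangible: increasing one regressor by one unit while holding the others fixed changes the prediction by exactly that coefficient.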

Beta coefficients of a Regression Analysis

After using the linear regression calculator, you will find the variables of the data analysis in the first column.
[Image: variables column of the linear regression table]

  • const is the constant of the linear regression equation (the Y-intercept).
  • Area (sq ft) is the variable whose influence on the target we want to measure (regressor/predictor).
  • Condition is a control variable in this linear regression. There can be any number of these control variables.
[Image: beta coefficients in the linear regression table]
The 2nd column 'coef' contains the values of the beta coefficients β^0 , β^1 , ... , β^N that belong to the corresponding variables. The coefficients are defined as:
β^0 = E[Y] - β^1 × E[X], or with sums: β^0 = (1/n) Σ_{j=1}^{n} yj - β^1 × (1/n) Σ_{j=1}^{n} xj

and

β^i = Cov(Xi, Y) / Var(Xi), or with sums: β^i = Σ_{j=1}^{n} (xj - x̄)(yj - ȳ) / Σ_{j=1}^{n} (xj - x̄)² for all i = 1, ..., N

The latter refers to the regression coefficients of the variables included in the linear regression: β^1 is the coefficient of X1 , β^2 is the coefficient of X2 and so on. It is a convention to refer to the explanatory variable (also called the predictor or exogenous variable) as X1 , with β^1 as its corresponding coefficient. All subsequent variables X2 ,..., XN are control variables. In this example, this is also reflected in the results table of the online linear regression.
First, 'const' appears as the constant of the linear model with β^0; then 'Area' appears as the explanatory variable X1 with its coefficient β^1 = 3268.2735. Then comes the control variable 'Condition' as X2 with its coefficient β^2 = 124200.

If β^1 is positive, i.e. β^1 > 0, this indicates a positive relationship between X1 and Y: if X1 increases by one unit, Y increases by β^1, holding the other variables constant. If β^1 is negative, i.e. β^1 < 0, this indicates a negative relationship between the variable X1 and the target Y: the larger X1 becomes, the smaller Y becomes. The same applies to β^2, β^3, etc.

β^0 is the constant in the linear regression model. The target variable Y is equal to β^0 if all X1, ..., XN are 0. This can be seen directly from the regression equation. Visually, β^0 corresponds to the Y-intercept in the regression plot.
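The covariance/variance formulas above can be checked by hand on a tiny invented dataset (the numbers below are not the article's data, just an illustration of the one-predictor case):

```python
# Compute beta_1 = Cov(X, Y) / Var(X) and beta_0 = mean(Y) - beta_1 * mean(X)
# for a single predictor, on invented example data.
area = [850, 900, 1200, 1500, 1750, 2000]              # sq ft
price = [150000, 162000, 210000, 255000, 298000, 340000]

n = len(area)
x_bar = sum(area) / n
y_bar = sum(price) / n

# Numerator and denominator share the same 1/n factor, so it cancels.
cov_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(area, price))
var_x = sum((x - x_bar) ** 2 for x in area)

beta_1 = cov_xy / var_x
beta_0 = y_bar - beta_1 * x_bar
print(f"Price^ = {beta_0:.1f} + {beta_1:.2f} * Area")
```

A useful sanity check built into the formulas: the fitted line always passes through the point (x̄, ȳ).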

The Standard Error of Linear Regression

[Image: standard error column of the linear regression table]
The column "std err" stands for standard error (SE) and shows the standard errors for our regression coefficients β^0, ..., β^N. The standard error generally refers to the standard deviation, i.e. dispersion, of an estimated parameter, whereas the standard deviation itself describes the dispersion of the raw data points. In the case of linear regression, the standard error is therefore the standard deviation of the beta coefficients β^0, ..., β^N. It indicates how much each β^i in the sample differs on average from the true βi in the population. The standard error is defined by:

SE(β^i) = √( (1/n) × Var(ε) / Var(Xi) ) = σ_ε / (σ_Xi × √n)
with
  • n as sample size,
  • Var(Xi) as variance of the predictor Xi and
  • Var(ε) as the variance of the residuals ε from the regression.
  • σ is the corresponding square root of the variance – that is, the respective standard deviation.
The standard errors increase (i.e. the estimated regression coefficients are less precise) when the variance of the residuals ε is large. A high variance of the residuals ε indicates that the linear regression line does not fit the data particularly well. Unsurprisingly, the standard error decreases with increasing sample size n, as the estimate for the beta coefficients becomes closer to the population parameters. Similarly, a large spread (variance) of the regressor Xi is beneficial: As σ Xi increases, the estimation of the coefficients becomes more precise. More varied and "diverse" data points help to determine the linear regression line more accurately.
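The standard-error formula above can be evaluated directly. The sketch below fits a one-predictor line on invented data, then plugs the residual variance and regressor variance into SE = σ_ε / (σ_X × √n):

```python
import math

# Invented example data (roughly y = 2x plus noise).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
beta_1 = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
          / sum((xi - x_bar) ** 2 for xi in x))
beta_0 = y_bar - beta_1 * x_bar

# Residuals eps = y - y^, then Var(eps) and Var(X) as mean squared deviations.
residuals = [yi - (beta_0 + beta_1 * xi) for xi, yi in zip(x, y)]
var_eps = sum(e ** 2 for e in residuals) / n
var_x = sum((xi - x_bar) ** 2 for xi in x) / n

# SE(beta_1) = sqrt((1/n) * Var(eps) / Var(X))
se_beta_1 = math.sqrt(var_eps / var_x / n)
print(f"beta_1 = {beta_1:.3f}, SE(beta_1) = {se_beta_1:.4f}")
```

Scaling the noise up, or the sample size or the spread of X down, makes `se_beta_1` grow, exactly as the paragraph above describes.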

In addition to the "normal" standard error, there are also so-called "robust" standard errors. These robust standard errors have a key advantage in that they take into account heteroskedastic data. Heteroskedastic data is data that does not have a constant variance. This means that the dispersion of the data (along one or more dimensions) increases or decreases. When this happens, the standard error is biased. This leads to incorrect conclusions about beta coefficients, confidence intervals, etc.

[Image: data points with constant vs. increasing dispersion]
Left: data with constant variance; right: heteroscedastic data with variable, uneven variance. The dispersion in Y increases the larger X becomes.
However, robust standard errors are generally larger than normal standard errors. As a result, the average deviation from the true βi increases, making it more difficult to estimate the true parameter βi . Consequently, it becomes more difficult to obtain statistically significant results because the test statistic T becomes smaller (SE is in the denominator) and the confidence interval around the null hypothesis becomes wider. However, if the calculated β^i coefficients fall outside a confidence interval with robust standard errors, one can be even more confident that the observed effect of β^i on Y is real. There are several types of robust standard errors; for those interested in more detail, see HC1, HC2, HC3, HC4 and Eicker, White, Huber, MacKinnon. A possible definition of robust standard errors (RSE) is as follows:

RSE(β^i) = √( (1/n) × Var[(Xi - E(Xi)) × ε] / Var(Xi)² )

In the robust standard error equation, the unequal variance is accounted for by the term (Xi - E(Xi)). If Xi deviates significantly from the mean E(Xi), the multiplication with the residual ε "penalizes" the RSE more. A wider distribution of Y at the edges (with respect to X) of the data contributes more to the RSE. Thus, the RSE reflects the increased spread of Y at large (or small, depending on the case) X values. The normal standard error does not take this uneven variance into account. However, it can be seen that if the data does indeed have constant variance, i.e. is homoscedastic, then RSE ≈ SE.
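The RSE definition above can also be computed by hand. The sketch below uses invented data whose spread in Y grows with X; note that Σ(Xi - x̄)εi = 0 exactly for OLS residuals, so the mean of squares of that product is its variance:

```python
import math

# Invented, heteroskedastic-looking data: the spread in y grows with x.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.1, 2.3, 2.6, 4.9, 4.2, 7.5, 5.8, 9.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
beta_1 = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
          / sum((xi - x_bar) ** 2 for xi in x))
beta_0 = y_bar - beta_1 * x_bar
eps = [yi - (beta_0 + beta_1 * xi) for xi, yi in zip(x, y)]

var_x = sum((xi - x_bar) ** 2 for xi in x) / n

# Var[(X - E[X]) * eps]: the product sums to zero by the OLS normal
# equations, so its variance is just the mean of its squares.
g = [(xi - x_bar) * e for xi, e in zip(x, eps)]
var_g = sum(gi ** 2 for gi in g) / n

# RSE(beta_1) = sqrt((1/n) * Var[(X - E[X]) * eps] / Var(X)^2)
rse = math.sqrt(var_g / var_x ** 2 / n)
print(f"robust SE(beta_1) = {rse:.4f}")
```

This is a sketch of the plain (HC0-style) sandwich idea only; software packages apply additional small-sample corrections (HC1-HC4), which the article points to for further reading.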

The Test Statistic T of the Linear Regression Model

[Image: test statistic column of the linear regression table]
Under the column "t" you will find the test statistics for the respective beta coefficients β^0, ..., β^N. The test statistic T is an aggregate value calculated from the available data sample. T is used to decide whether the effect of Xi on Y is also statistically significant. Statistical significance refers to the situation where the effect of Xi actually exists in principle and is not just a result of chance (see null hypothesis). This concept is discussed in more detail in the article on hypothesis testing. The test statistic T measures the extent to which our data reflect the null hypothesis H0, and whether or not we can reject H0. T is defined differently for each statistical parameter; in the case of linear regression and beta coefficients, T is defined as follows:

T = (β^i - βi,H0) / SE(β^i)

Because T is calculated from a sample of data, it is also subject to random variation, which is represented by the t-distribution.
For example, if T is greater than +1.96 or less than -1.96, the regression coefficient is considered statistically significant with a 5% probability of error (also known as the significance level Alpha). This means that the influence of Xi on Y is real and more than just chance. "Mere" chance is represented by the null hypothesis H0 : βiH0 = 0 .
The values for the test statistic T vary depending on the significance level being considered or the desired probability of error. The values for T correspond to quantiles of the t-distribution and can be found in corresponding tables. The most relevant t-values and their associated metrics are:

Significance level Alpha    Critical value |t|    Confidence interval
5%                          1.96                  95%
1%                          2.576                 99%
0.1%                        3.291                 99.9%
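The decision rule from the table reduces to a few lines of code. The coefficient below is the one from the article's example; the standard error is a hypothetical value chosen for illustration:

```python
# Significance check via T = (beta_hat - beta_H0) / SE(beta_hat), with H0: beta = 0.
beta_hat = 3268.2735     # coefficient of Area from the article's example table
se = 403.15              # hypothetical standard error, for illustration only
beta_h0 = 0.0            # "mere chance" null hypothesis

t_stat = (beta_hat - beta_h0) / se
significant_5pct = abs(t_stat) > 1.96     # critical value at Alpha = 5%
print(f"T = {t_stat:.2f}, significant at 5%: {significant_5pct}")
```

With these numbers T is far beyond 1.96, so the null hypothesis of "no effect" would be rejected at the 5% level.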

The p-value of Regression Analysis

[Image: p-value column of the regression table]
The p-value indicates the probability of obtaining a T' that is even more extreme than the calculated T from the available data sample. Thus, p-values represent probabilities under which the null hypothesis would be (falsely) rejected.
[Image: p-value explanation]
For example, if the p-value ≤ 0.05 (or 5%), the beta coefficient is considered statistically significant and the influence of X on Y is real and not due to chance. This statement can then be made at the 5% significance level, which means there is a 5% chance of error. Additional p-values, along with their corresponding significance levels and T-values, are as follows:

Significance level Alpha / p-value    Critical value |t|    Confidence interval
5%                                    1.96                  95%
1%                                    2.576                 99%
0.1%                                  3.291                 99.9%

It is common to see decimal values in a regression table like the one shown above with an e, such as 7.945e-11. This is scientific notation and means that the number 7.945 is multiplied by 10^-11. The exponent -11 indicates that the decimal point is moved 11 places to the left. Therefore, 7.945×10^-11 = 0.00000000007945.
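Python (which most regression tools use under the hood) accepts exactly this notation, so the conversion is easy to verify:

```python
# Scientific notation as printed in regression tables: 7.945e-11 means
# 7.945 * 10^-11, i.e. the decimal point moves 11 places to the left.
p = 7.945e-11
as_decimal = f"{p:.14f}"
print(as_decimal)   # 0.00000000007945
```

A p-value this small is far below every conventional significance level, which is why such coefficients are flagged as highly significant.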

Confidence Intervals of Regression Coefficients

[Image: confidence interval columns of the linear regression table]
The last two columns show the borders of the 95% confidence intervals around H0 : βiH0 = 0 . Since the beta coefficients also vary and can take different values depending on the sample, their distribution can be described by intervals. The second to last column on the left shows the lower bound, i.e. the 2.5% quantile of this distribution. The last column, the right one, represents the 97.5% quantile. If a calculated β^i falls outside the 95% confidence interval, it can be said with a 5% probability of error that β^i is statistically significant, and the null hypothesis H0 can be rejected. In this case, it is said that the corresponding variable Xi has a real effect on the target Y.
There are several types of confidence intervals; in statistical regression analysis, confidence intervals that are closed at both the upper and lower ends are usually considered. The formula for their calculation is:

[ βi,H0 - t_{α/2} × SE(β^i) , βi,H0 + t_{α/2} × SE(β^i) ]
In general, βiH0 = 0 .
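Putting the formula to work: with βi,H0 = 0 and the 5% critical value t = 1.96, the interval and the "falls outside?" check look as follows. The standard error below is hypothetical; the coefficient is the one from the article's example:

```python
# 95% confidence interval around H0: beta_i = 0, i.e. [-t*SE, +t*SE].
t_crit = 1.96            # critical t-value for Alpha = 5%
se = 403.15              # hypothetical SE(beta_i), for illustration only
beta_h0 = 0.0

lower = beta_h0 - t_crit * se
upper = beta_h0 + t_crit * se

beta_hat = 3268.2735     # coefficient of Area from the article's example table
outside = beta_hat < lower or beta_hat > upper
print(f"95% CI: [{lower:.2f}, {upper:.2f}], beta_hat outside: {outside}")
```

Since β^i lies well outside the interval around 0, the null hypothesis is rejected and Xi is said to have a real effect on Y.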

Conclusion: Linear Regression Online

- Gollum: My Precious!

Lord of the Rings
We now understand each column of the online linear regression table and are ready to interpret the results of the data analysis.
[Image: results of the linear regression calculator]
We'll start with the second row, since it contains the results for the variable we're interested in: X1 (area). It lists all the metrics relevant to the statistical significance of the variable X1, which in this example is the area of a property. The results indicate that area does indeed have a real impact on the price of a property (who would have guessed?!) and that the observed β^1 does not just differ from 0 by chance. Of course, it is more interesting to describe the strength of this effect. In this case, the price increases by $3268.2735 per additional square foot, since β^1 = 3268.2735.

The following rows in the regression table, namely X2, ..., XN, represent the results for β^2, ..., β^N of the control variables, such as the condition of a property in this case. Recall that it is a convention to denote the explanatory variable (also called predictor or exogenous variable) as X1 with β^1 as the corresponding coefficient. All subsequent variables X2, ..., XN are control variables. Again, we can see which ones are statistically significant.
The top row contains the evaluation for β^0 , the constant in the linear regression model. The constant represents the value of Y, when all X1 , ... , XN = 0 . It is the base value from which the linear regression line starts. Visually, β^0 corresponds to the y-intercept. This initial value, the constant in the linear regression model, should always be considered in the context of the problem. Here, it makes little sense to assume a property with an area of 0 square feet.

Bonus: Linear Regression Calculator

All of the above analyses and metrics are calculated using the online linear regression calculator. Once the data analysis is complete, all of these results can be downloaded as a CSV or Excel file. This allows the results to be used for further analysis, such as inclusion in a report or scientific study. A plot of the linear regression line is also provided, which can be customized with a variety of graphical options.

The Linear Regression Online Calculator also provides additional valuable information about the data being analyzed. Further details about the data can be found below the regression results, which can provide deeper insight into the target variable under investigation. As a convenience, the interpretation of the results is provided directly. If certain analyses are no longer needed, they can be easily discarded to maintain clarity without affecting other experiments. To restart the calculator, simply reload the page.

Ready to use the linear regression calculator?

Use Regression Online and focus on what really matters: your area of expertise
Interactive
Results immediately
Plot included
Established tool