Okay, in this video I'm going to demonstrate that general linear models with one factor as the independent variable work the same way as general linear models with one covariate as the independent variable. I'm going to make this comparison by focusing on how the sums of squares and the F statistic are calculated in each case. We spent a lot of time in previous videos talking about how to calculate sums of squares and F statistics for a model with a single factor, and all of that discussion is now summarized on this single slide. So if compressing all of it onto one slide makes this feel a bit overwhelming, I encourage you to go back and watch the previous videos to revisit those explanations. This is going to be a greatly condensed presentation of things we covered before.

Here's how we calculate an F statistic when we have a one-factor general linear model. Let's imagine our factor has three levels: A, B, and C. We calculate three different types of sums of squares.

The first thing we do is calculate the total sums of squares, which we get by fitting a grand mean, that is, the overall mean of all our data points, displayed on this slide as the red line. We find the difference between each individual data point, say this data point I'm pointing at here, and the grand mean, and we square that difference. We repeat that for every single data point, and then we add them all up. That's why it's called a sum of squares: we take the differences, square them (that's where the "squares" comes in), and sum them to calculate the total sums of squares in our data. This is the variation we aim to explain with our one-factor general linear model, and it can be split into two pieces: the within-treatment (within-group) sums of squares, which is what's presented here, and the among-treatment (between-group) sums of squares.

Let's start with the within-group sums of squares, which is also called the error sums of squares or the residual sums of squares. We fit the mean value for each of our levels: a mean for level A, a mean for level B, and a mean for level C. Then, within each level, we find the difference between each data point, like this data point I'm pointing at here, and the mean value for its group, square that difference, do the same for all the data points, and add them all up. That gives us our residual sums of squares, or error sums of squares, or within-group sums of squares, whatever you want to call it; those are all appropriate terms.

Finally, we calculate our between-group sums of squares. To do that, we take the mean values for each level of our factor. The blue lines in this figure down in the bottom left are at the same positions as the blue lines in the top-right figure; so this blue line here represents the mean value for treatment A, and there's that mean there. To calculate the between-group sums of squares, we take the mean for each level, find the difference between that mean and the grand mean, and square it. You do that for each of the different treatments, counting the squared difference once for every data point in that group, and add them all up. So that's how we calculate our within-group and between-group sums of squares.
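To make this concrete, here is a minimal sketch in R using made-up data (the variable names, group means, and sample sizes are purely illustrative, not from the video) that computes these three sums of squares by hand and confirms that the within-group and between-group pieces add up to the total:

```r
# A sketch with made-up data: one factor with three levels (A, B, C).
set.seed(1)
group <- rep(c("A", "B", "C"), each = 10)
y <- rnorm(30, mean = rep(c(5, 7, 9), each = 10), sd = 1)

grand_mean  <- mean(y)
group_means <- tapply(y, group, mean)   # one mean per level

# Total SS: each point vs. the grand mean, squared, then summed.
ss_total <- sum((y - grand_mean)^2)

# Within-group (error / residual) SS: each point vs. its own group mean.
ss_within <- sum((y - group_means[group])^2)

# Between-group SS: each group mean vs. the grand mean, counted once
# per data point in that group.
ss_between <- sum((group_means[group] - grand_mean)^2)

ss_total
ss_within + ss_between   # matches ss_total: the decomposition holds
```

The same quantities appear in the `Sum Sq` column of `anova(lm(y ~ group))`, which is a quick way to check the hand calculation against what the F statistic described next is built from.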
Then, to calculate an F value, we essentially compare these sums of squares, the between-group sums of squares and the within-group sums of squares, in a way that accounts for our degrees of freedom. In short, you take each of these sums of squares and divide it by its respective degrees of freedom, which gives us a mean square for the between-group measurement and for the within-group measurement. Then you divide the between-group mean square by the within-group mean square, and that gives us F.

Okay, that was our quick summary of what we talked about in previous videos. Now, how does this compare to what happens with a model that has one covariate as opposed to one factor? Let's imagine our data look like this: we have our y variable, which is continuous, and our x variable, which is also continuous. The first step is the same: we calculate the grand mean for our y variable. And there it is, the horizontal line going through the middle of our data; that's the mean conduction velocity. Next, we find the difference between each data point and that grand mean, and square it. You do that for another data point, and another data point, so you take all of those differences, square them, do that for all of the data points, and then sum up all of those squared values. That gives us our total sums of squares, calculated in exactly the same way as in the previous example with a one-factor general linear model. So this is the variation we want to explain, exactly as in the one-factor scenario.

Our next step is to fit a line to the data. So the question really becomes: which line do we fit? Is this line the most appropriate, or that line, or that one? How do we find the line that's most appropriate to go through these data? Well, the answer is that we fit the line that minimizes our next measure of sums of squares. Specifically, we want to fit the line that minimizes the error sums of squares. The error sums of squares is calculated by taking the particular line we're interested in and finding the difference between that line and each data point, like we've done here, and there, and there. You find all those differences, for that data point, that data point, and all the other data points, square them, and add them up. That gives us our measurement of the error sums of squares, or residual sums of squares.

Now, exactly how is that line calculated? We're not going to go into that. It's not something you'd be doing yourself when you're using the lm function; the lm function will do that for you, and exactly how R determines that line is something we're not going to discuss.
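As a rough illustration of that idea, here is a sketch with made-up continuous data (again, the names and numbers are hypothetical). It computes the total sums of squares exactly as before, lets `lm()` find the line, and checks that another line we might have drawn gives a larger error sums of squares:

```r
# A sketch with made-up continuous data: y modeled on a covariate x.
set.seed(2)
x <- runif(30, min = 0, max = 10)
y <- 2 + 0.8 * x + rnorm(30)      # a linear trend plus noise

# Total SS: same recipe as in the one-factor case.
grand_mean <- mean(y)
ss_total   <- sum((y - grand_mean)^2)

# lm() chooses the intercept and slope that minimize the error SS.
fit      <- lm(y ~ x)
ss_error <- sum(residuals(fit)^2)

# Any other line does worse, e.g. one with a deliberately wrong slope:
ss_wrong <- sum((y - (2 + 0.5 * x))^2)
ss_error < ss_wrong               # TRUE: the fitted line minimizes error SS
```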
But the main point here is that the line fit through these data will be the line that minimizes the error sums of squares. Now, since the total sums of squares is equal to the sums of squares associated with the covariate plus the error sums of squares, we can calculate the covariate sums of squares like this: the covariate sums of squares equals the total sums of squares minus the error sums of squares. We can do that because SS(total) = SS(covariate) + SS(error), and you can see that's true just by taking the error sums of squares and moving it to the other side of the equation. So if we know the total sums of squares and the error sums of squares, we can calculate the covariate sums of squares.

Then what we do is compare the covariate sums of squares to the error sums of squares in a way that accounts for our degrees of freedom, exactly like we did with the one-factor general linear model, to calculate our mean squares. And then we compare those two mean square values in order to obtain an F value.

So that's really what I wanted to show you: although it looks like we're doing something very different when we fit a one-factor general linear model versus a one-covariate general linear model, the underlying mechanics of these analyses are identical. In both cases we calculate a total sums of squares, an error sums of squares, and what I'm just going to call the effect sums of squares, which is either the between-group sums of squares or the covariate sums of squares. In both cases we're calculating a sums of squares for some effect we're interested in, and then comparing that effect's sums of squares to the error sums of squares in a way that accounts for our degrees of freedom in order to calculate F. And once we know our F statistic and our degrees of freedom, we obtain a p-value.

That is what I wanted to show you, and I hope this video is helpful. Really, what I was trying to do in creating this video was to present some sense of unity: much of what we're doing over this series of videos is performing the same kind of task, just in different circumstances. That's essentially what we're doing whether we're conducting a one-factor general linear model, a multi-factor general linear model, or a model with covariates in it. These analyses are all essentially doing the same thing, just in different circumstances. I'll end the video on that note and say thank you very much.
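Continuing the covariate sketch above, here is how the subtraction, the mean squares, the F value, and the p-value fall out in R. The degrees of freedom shown assume the simple one-covariate model fitted above (one slope, and n minus two estimated parameters), and `anova()` on the fitted model reports the same numbers as the hand calculation:

```r
# Continuing the covariate sketch: recover the covariate SS by subtraction.
ss_covariate <- ss_total - ss_error

# Degrees of freedom for this simple model: one for the slope, and n minus
# the two estimated parameters (intercept and slope) for the error term.
df_covariate <- 1
df_error     <- length(y) - 2

# Mean squares: each SS divided by its degrees of freedom.
ms_covariate <- ss_covariate / df_covariate
ms_error     <- ss_error / df_error

# F is the ratio of the mean squares; the p-value comes from the
# F distribution with those degrees of freedom.
f_value <- ms_covariate / ms_error
p_value <- pf(f_value, df_covariate, df_error, lower.tail = FALSE)

anova(fit)   # reports the same Sum Sq, F value, and Pr(>F)
```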