I was hoping to make this post about the equation I drew up earlier, particularly how to think about assimilating new information into the model. Given this week’s Eagles scenario, though, I think it’ll be helpful to apply similar thinking to a real world example first, then go back to the model.
The “real world” example is, of course, Nick Foles.
His last start was terrible. He looked completely overmatched and showed none of the strengths we had previously seen him use (pocket presence and accuracy in particular). That’s not really up for debate. However, there’s a BIG difference between knowing that information and using that information. The issue is, what does this performance tell us about Nick Foles’ skill/ability in general?
To answer that with any confidence, we have to frame it correctly. That means using ALL of our information, not just the last game. For example, here’s a chart that shows Nick Foles’ passer rating by game. I’ve only included games in which he had at least 10 pass attempts.
What do we see?
– Well, most shocking to me is that the Cowboys game wasn’t actually Foles’ worst game (by passer rating).
– The sample size (look at the X-axis labels) is still very small. He’s only seen significant time in 10 games. That means, regardless of what you think his performance to date says, you shouldn’t have that much confidence in it.
– When put in context, last week looks like an extreme outlier. This is why it’s important to view everything together, rather than focus on one particular event.
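One way to put a number on that small-sample caveat is the standard error of the mean. Here’s a quick Python sketch (illustrative only, using the game-to-game standard deviation discussed below):

```python
import math

career_sd = 27.03   # Foles' game-to-game passer-rating standard deviation
n_games = 10        # starts with at least 10 pass attempts

# Standard error of the mean: how far the 10-game average
# could plausibly sit from his "true" ability
std_error = career_sd / math.sqrt(n_games)
print(round(std_error, 1))  # ≈ 8.5
```

In other words, before we even look at any single game, the career average itself carries an uncertainty of roughly 8.5 rating points in either direction.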
Insert Passer Rating disclaimer here.
So we’ve got a general idea of the larger picture. Now let’s take a sightly different approach. First, let’s use what we KNEW about Foles BEFORE the Cowboys game to see just how likely the Cowboys result was. Caution: Extremely over-simplified statistical analysis here. It’s illustrative, not definitive. There are definitely some more robust tools we can apply and some data adjustments we can make to increase confidence, but that’s a post for another day. Apologies in advance to the statisticians out there.
Prior to that game, Foles’ career Passer Rating was 87.13. Let’s assume, for a moment, that 87.13 is Foles’ “true” ability. It probably isn’t (small sample), but IF IT IS, we want to know how likely the Cowboys game result was. To do that, we’ll need not just his rating, but the standard deviation as well, which for Foles was 27.03.
Given those two pieces of information, and assuming a Normal distribution of potential outcomes (not necessarily a safe assumption), we can calculate the odds of Foles performing that badly. Now, this doesn’t account for defensive strength, but I’m trying to keep things relatively simple.
In a Normal distribution, roughly 68% of the data will fall within one standard deviation of the mean.
Here, the mean is 87.13, so we would expect, if that’s Foles’ “true” ability, that 68% of his starts would result in a Passer Rating between 60 and 114 (rounded).
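That interval is just the mean plus or minus one standard deviation; a two-line check of the numbers from the post:

```python
mean, sd = 87.13, 27.03         # career passer rating and standard deviation
low, high = mean - sd, mean + sd
print(round(low), round(high))  # 60 114
```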
Now, Foles ended the Cowboys game with a Rating of 46.2, which is roughly 1.5 standard deviations (of 27.03) below the mean (87.13). That 1.5 is the Z-Score.
Cutting to the chase, that tells us that, given our assumptions, the odds of Foles performing that poorly against the Cowboys were just 6.5%.
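For anyone who wants to verify the 6.5% figure, here’s a minimal sketch using only Python’s standard library (the error function stands in for a Normal CDF):

```python
import math

mean, sd = 87.13, 27.03   # career rating and SD before the Cowboys game
game = 46.2               # Cowboys-game passer rating

z = (game - mean) / sd    # ≈ -1.51 standard deviations below the mean

# Normal CDF via the error function: P(rating <= 46.2) under our assumptions
p = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(round(z, 2), round(p, 3))  # -1.51 0.065
```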
We can’t stop there, of course. Just getting the result isn’t enough. We then have to go back and view the observed outcome in light of its probability and our assumptions. Basically, there are two ways of viewing this:
– Foles’ performance was a result of random chance. Given what we knew, he had a 6.5% chance of playing that poorly, and it just happened to hit.
– Our initial assumption was wrong. Foles’ “true” Rating is worse than 87, which means the likelihood of him playing that poorly was actually higher (potentially much higher) than 6.5%. This one is attractive, because it lets us increase the odds of occurrence for the event we witnessed.
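Choosing between those two explanations is really a Bayesian updating problem. Here’s a purely hypothetical sketch (the 50/50 prior and the 25% “bad Foles” likelihood are invented for illustration; only the 6.5% comes from the analysis above):

```python
# All priors/likelihoods here are hypothetical, chosen only to show how
# one bad game shifts belief between the two competing explanations.
p_good = 0.5                 # prior belief his "true" rating really is ~87
p_bad_game_if_good = 0.065   # chance of a game that bad if so (from the post)
p_bad_game_if_bad = 0.25     # assumed chance if his true rating is worse

# Bayes' rule: posterior belief in "good Foles" after the Cowboys game
posterior_good = (p_good * p_bad_game_if_good) / (
    p_good * p_bad_game_if_good + (1 - p_good) * p_bad_game_if_bad
)
print(round(posterior_good, 2))  # 0.21
```

Under those made-up numbers, one bad game drops the “good Foles” belief from 50% to about 21%. That’s the kind of information assimilation the rest of this post is about, and the real numbers would obviously need much more care.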
I’m not yet ready to answer that question, but this is the real crux of the post, and the overall point I am trying to make. If you actually want to KNOW what’s going on, you have to examine all of the information, and try to reconcile it. Most fans, of course, have no interest in doing this. They’ll trust their “gut”, which usually results in them using the most recent event and discarding almost everything else. Think about the euphoria after the Bucs game. If you go back to the chart at the top of the post, it’s clear that was an outlier as well, though it was part of the overall uptrend and not as serious as the Cowboys game.
A big part of objective analysis is accepting that there’s usually a real chance that you’re wrong. In this example, we can view this from both sides. Foles supporters, while pointing to the “bad luck” explanation, HAVE to accept the fact that the second explanation, “bad Foles”, may be true. Similarly, Foles detractors can point to the “bad Foles” explanation and use the Cowboys game as proof that the Foles supporters were wrong. However, they too must recognize the potential validity of the alternative explanation. It’s possible (6.5% in our basic analysis) that it was simply a bad game.
Reconciling those two sides is mandatory for anyone trying to learn the truth. As I tried to explain in the short disclaimer above, the example I used was extremely simplistic. It doesn’t account for things like quality of competition, potential for improvement, etc. There’s also the Normal distribution assumption and the small sample issue. There are things we can do to address a lot of those problems, I just didn’t have time to do it all for today’s post.
Going back to the larger topic of Information Assimilation, hopefully you can see how this type of analysis can be applied to the R value in our equation. And for anyone still skeptical as to the applicability of that model, I give you this:
Look at the score by quarter and tell me E = R ((60 – T) / 60) + C isn’t important.