Time for another guest post from Jared. You may remember him from:
– One of the Coolest NFL Charts Ever
– The Fourth Down Decision Series and Cheat Sheet (Part 1 of 4)
– When does it make sense to return a kick?
Well today, he’s ready to unveil the first iteration of his Hot Seat Index and attempt to predict which coaches will be fired by season’s end. You can follow him @jaredscohen and at kebertxela.blogspot.com. Without further ado:
So the Eagles had another especially depressing loss to the Giants this week. It got me thinking about an even more depressing outcome: watching your team stink to the point where it fires its coach.
So, this was something I wanted to wait on until much later in the season, but with Greg Schiano working as hard as he can to get fired as soon as possible, I needed to put it out a bit early.
One of the things I hate hearing about as the NFL season progresses is all the discussion around the coaching ‘hot seat’. It’s not that I mind the speculation, but all of the discussion centers on conjecture and the occasional ‘anonymous source’ (the exception being Schiano, where the sources have noticeably opted to forgo anonymity!).
As I was thinking about it, I really wanted to see if we could make this a bit more objective and data driven. So with that as the goal, I gathered some data, did some analysis, and am now ready to introduce the NFL Coaching Hot Seat Index!
The Coaching Hot Seat Index is a model that, at any point in the season, will give you the approximate odds of an NFL coach getting fired after the season!
It’s based on a collection of data on all NFL coaching seasons since 1980. I broke the data down and for each coaching season, identified whether a coach was fired or kept their job (with some adjustments for retirements etc.)
With over 900 observations (and ~200 firings!), I started looking for factors that were significant in predicting whether coaches got fired or not. I tried lots of things (number of wins, playoff appearances, having the last name ‘Kotite’) to see what was really significant in predicting when a coach will get fired.
I ended up with two primary factors, which were by far the most significant in predicting a coach getting kicked to the curb.
– Total point differential (points scored-points given up)
– Change in team wins from prior season (current year wins-prior year wins)
Those shouldn’t come as a huge surprise. (Note: I’m disappointed because I couldn’t examine another factor I wanted to see, the difference between Expected Wins at the Season Start and Actual Wins at season’s end…I wanted to use gambling lines to get at it but didn’t have nearly enough data to look at it. I still think it would be a more compelling variable)
But anyway, with those two factors, and using (or abusing) a technique called logistic regression, I arrived at an equation to give us the odds a coach will get fired. Logistic regression is a technique you can use to predict the likelihood of a binary outcome (a coach either gets fired, or he doesn’t) based on some variables (in this case, his team’s point differential and win change from the prior season).
In the end, the result is the percentage chance (out of 100%) a coach will be fired after the season. The higher the odds, the more likely the coach is going to be filing some unemployment papers right around the Pro Bowl.
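For those who want to see the mechanics, here’s a minimal Python sketch of the model’s functional form. The coefficients `b0`, `b1`, and `b2` below are made-up placeholders (the fitted values aren’t published here); only the shape of the calculation is real:

```python
import math

def fired_odds(point_diff, win_change, b0=-1.5, b1=-0.01, b2=-0.25):
    """Logistic model: probability a coach is fired after the season.

    b0, b1, b2 are HYPOTHETICAL coefficients for illustration only.
    Both slopes are negative: outscoring opponents and improving on
    last year's win total should each lower the odds of a firing.
    """
    z = b0 + b1 * point_diff + b2 * win_change
    return 1.0 / (1.0 + math.exp(-z))

# A team outscored by 100 points that also dropped 4 wins year-over-year
bad = fired_odds(-100, -4)
# A team with a +100 differential and 4 more wins than last year
good = fired_odds(100, 4)
```

With any sensible coefficients, `bad` comes out far higher than `good`, which is all the model is really doing: mapping two performance numbers onto a 0–100% firing probability.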
But I couldn’t just create a model and throw it out there without testing it a little. So I created a first draft of it, using only data from 1980-2011, and used that model to ‘predict’ the 2012 season based on its point differential and change in team wins (which, obviously, I already knew). When that passed the sanity check, I updated the model for 2012 and used it on teams from this year.
Those results are included below, with coaches sorted by their ‘odds to be fired’.
Not a bad first result. Romeo Crennel, Mike Mularkey, and Andy Reid all got canned, and those were the coaches with the highest likelihood. The model obviously isn’t perfect, as plenty of coaches further down the list can still get fired, but at least there’s a structure and some logic here (we can also rank the coaches from safest to least safe!!!)
So this is interesting, and now we can apply it to the current season, and see what 2013 coaches are most likely to be fired!!!
Now, I had to make a couple of assumptions, because this model is based on a full season of performance, which of course we won’t have. But we do have a good sample and some reasonable projections of performance we can use as proxies.
For point differential, we can take a team’s current point differential and prorate it out to 16 games. This doesn’t account for changes in either a team’s performance or strength of schedule, but it’s not unreasonable.
For win difference from last year, we can take an estimate of the team’s full season (which I’ve borrowed from Football Outsiders’ Playoff Odds report, which runs simulations to calculate average team wins) and then just take the difference from 2012 wins.
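Put together, the two proxies are simple arithmetic. This sketch uses illustrative numbers (the function names and inputs are mine, not from the post):

```python
def prorated_point_diff(current_diff, games_played, season_games=16):
    """Scale the season-to-date point differential to a full 16-game season."""
    return current_diff * season_games / games_played

def projected_win_change(projected_wins, prior_year_wins):
    """Win-change proxy: Football Outsiders' simulated mean wins minus last year's wins."""
    return projected_wins - prior_year_wins

# Illustrative: a team outscored by 42 points through 7 games,
# projected for 4.3 wins after winning 8 in 2012
pd16 = prorated_point_diff(-42, 7)   # -96.0
dw = projected_win_change(4.3, 8)    # about -3.7
```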
So, at this point in the season (Week 7, because Week 8 is going on as I write this), which coaches have the highest odds of getting fired?
Well – it’s no surprise to see the Jaguars on the list, but we may have to make an exception for first-year coaches, as they typically get more than one season to right the ship. We’ll give Gus Bradley a pass (although no one can argue their performance isn’t historically bad…it’s no wonder the odds are higher than anything seen in 2012).
Tom Coughlin checks in at number 2, although we’ll unfortunately need to update that after the Eagles managed to lose to them this week. The Giants’ performance has been really bad as well, and any coach without Super Bowl rings would likely be on his way out. But Coughlin may overcome the odds with all the goodwill he’s built over the years (or maybe he changes his mind and “retires”).
Next comes Schiano, where I think we can all agree the odds calculation actually UNDERRATES his odds of being let go. This guy might want to think about booking some tee times in late November if he continues at his current pace.
But the next coaches on the hot seat, Gary Kubiak, Leslie Frazier, and Mike Shanahan, should also probably get their acts together if they want to stay employed.
From an Eagles perspective, Chip Kelly doesn’t seem to be in too much danger (although as I said, we probably should eliminate all first year coaches as a general rule). Of course, this assumes Kelly behaves competently, unlike the absolute sh*t show we just saw against the Giants (which my brother is probably already dissecting)
Of course, from a FORMER Eagles perspective, it looks like Andy Reid has done quite alright for himself in the move to Kansas City.
I’m just saying.
I think Schiano and Frazier are surefire firings. Rex Ryan, Kubiak, and Shanahan are tougher calls. Shanahan might get one more year, but one never knows with Snyder. I thought Kubiak should have been fired a few years ago, and Rex Ryan might have his team at 8-8; who knows if that buys him another year.
I also think Philbin might end up getting fired with this underachieving squad. Garrett has the potential to get canned if the Cowboys only go 8-8 and lose in the WC round. I doubt there is any Schottenheimer or Lovie Smith type firing this year.
I truly enjoy models like this when they work, but I think there are too many variables and intangibles at play for this to be accurate. Your equation is really just showing us teams that are performing badly, but it doesn’t account for length of coaching tenure (which you’ve already addressed), fickleness of ownership, past success, status of the relationship with the locker room, and so on. From a mathematical standpoint, I think just looking at W/L differential leaves out the aspect of a team that is just always bad. If Gus Bradley goes 3-13 this year (after 2-14 last year), then 4-12 next year, the W/L differential would show consistent improvement, but I’d hardly call that a coaching performance that keeps him employed. Circling back to the intangible aspect, as you discussed, how can Greg Schiano be rated at only a 50% chance to be fired? Or how do you seriously believe Bill Belichick has a 10% chance to be fired? He’s not getting fired this year unless he commits a crime. That sort of result leads me to believe that a traditional subjective article (as opposed to an equation) is best for this type of thing.
How is it possible the Eagles’ win change from 2012 is -1? I doubt Football Outsiders’ expected wins have us doing even worse than last year.
I certainly think you need to incorporate a coach’s tenure into your analysis, including past success, as other replies suggest. The cases of Andy Reid, Bill Belichick, Coughlin, etc. need to be evaluated on a larger scale than just year to year. I would also look into weighting the second half of the season more heavily than the first. Just some thoughts of mine if this is something you’re looking to pursue further.
I can’t speak for Jared (hopefully he’ll comment himself), but I think there’s an issue with addressing tenure beyond ignoring rookie coaches.
Does a longer tenure help or hurt? I could see it both ways.
My biggest overall comment would be that assigning percentages leads to a degree of false precision. Instead, I might have just used the results to create a discrete ranking of coaches likely to get fired or not. As you and others have said, there are a lot of qualitative variables that get thrown into it.
However, given the results of the model (not bad), it’s pretty clear that a big chunk of it can be boiled down to wins/losses against expectations, perhaps unsurprisingly. At the very least, it’s a good start, and may allow us to eventually predict with some accuracy the conditions that will get each coach fired (that’s the real “value” of the model in my opinion).
After the season ends, you don’t really need a model to tell you; it becomes pretty obvious. But to be able to say something like “If Coach X doesn’t win more than 7 games, he’s likely to be fired” is valuable.
There’s something wrong with the win change column in the second table. For instance, the Chiefs are already better than +4.
I would like to know what kind of interaction relationships could be found. For example, I would expect tenure to increase the odds of getting fired for a coach who has had little postseason success, with super bowl winning coaches being virtually unfireable regardless of tenure and coaches who haven’t appeared in the playoffs being virtually guaranteed to be fired after five years.
Just spoke with Jared about it: there was an error in the formula, though it doesn’t significantly change the results. I’ll update soon (hopefully).
There’s definitely some more to explore with the tenure aspect. For example, maybe a Super Bowl ring’s effect depreciates over time at a somewhat predictable rate. So one year after, you’re untouchable, but 10 years after, it ceases to have any real effect. More to do; once it’s updated, I’ll probably take a crack at a revision/version 2.0 so we can try to incorporate some of the other things people have pointed out.
First off – apologies for the table, I pulled in the wrong column when formatting it, which is why the data is off. There’s a corrected one, which I’ve also updated for the new week’s worth of performance.
And regarding the points made here – yes, obviously there are weaknesses to this kind of simplistic approach to estimating which coaches will get fired. If there were a way to identify variables like ‘ownership impatience’, then sure, you’d think that would be a factor (particularly if your owner is Dan Snyder). But we’re a bit limited in terms of what we have.
And of course the percentages could give the impression of precision when that’s absolutely not the case. However, while I think the absolute percentages are directional at best (Belichick and Reid won’t get fired this year, no matter what), I’d take more away from the relative positioning of coaches when compared to each other.
I’m by no means saying this has the answer or should be taken as accurate to the hundredth of a percent, but my goal was to see if you could do something based in objective facts and make an albeit limited estimate of which coaches are on the firing line.
Love this article and great work. Totally agree that there is a lot of subjectivity that goes into firing a coach and in no way can that be included in the modeling. However, I had a couple of thoughts while reading this article.
1) Weighting games toward the end of the season could improve the model (e.g., the Arizona Cardinals started 4-0, finished 5-11, and their coach was fired). Although there are counterexamples; for instance, the Detroit Lions lost their last 8 games but didn’t fire their coach.
2) Coaching Success – I feel like adding in some factors for playoff berths, playoff wins, SB wins, division champions, etc. could help as well. This would explain the long tenure of Andy Reid, Bill Belichick and perhaps the firing of Lovie Smith and Norv Turner. These factors could be weighted based on time (e.g. 1 year ago has a higher weight than 5 years ago).
Maybe you looked at these and there was no correlation but wanted to throw in my 2 cents.
While I didn’t play with weighting different parts of the season differently, it’s certainly an idea one could explore (the idea of momentum playing a role).
The coaching success angle is definitely something I looked into, but I couldn’t find any significance. I didn’t try everything, but factors like making the playoffs in the previous year and a previous Super Bowl appearance were not significant. Part of that is sample size, but part of it is likely also that it’s less important than we assume.
I do think there’s a tenure-related point that’s absent here. My ingoing hypothesis was always that absolute failure is bad, but failure relative to incoming expectations is FAR worse. If you’ve had a couple seasons and some success, but then really fail to deliver (Reid, Smith), you’ll almost certainly be fired. Just couldn’t find a metric that would work to explain that (but obviously open to ideas)
I’m sorry if it seems like I’m nitpicking, but this table has intrigued me. Does the formula output really say Shanahan is 2 percentage points more likely to be fired than Schiano when they have the same win change and are only 2 points different in the point differential? Man, that safety in the Eagles-Redskins game was way more significant than I realized!
Ah, it does appear that way, but that’s at least partially driven by rounding of the win estimates for this year. Football Outsiders simulates the season thousands of times, and each team’s mean value isn’t fixed to an integer (e.g., your average win total could be 4.3).
When you round Shanahan’s and Schiano’s projected win changes, they appear the same, but there’s actually half a win’s difference between them.
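To make the rounding point concrete (the win totals here are illustrative, not the actual projections):

```python
# Two hypothetical fractional win projections that display
# identically after rounding
shanahan_proj = 4.3
schiano_proj = 3.8

# The table shows the same rounded win change for both...
same_in_table = round(shanahan_proj) == round(schiano_proj)
# ...but the model's input differs by half a win
true_gap = shanahan_proj - schiano_proj
```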
Of course, all that comes with the caveat that these percentages can’t be taken as that precise!
That makes a lot more sense. I had guessed that it had something to do with rounding, but I didn’t know that the win change values you used as input to the formula were more precise than those in the table.