More on the Eagles “Andy Reid Peak”

UPDATE: For those wondering, next week I’ll post the same stats adjusted by annual league average.  Similar story, but it clears up some of the inflation and year-to-year comparison issues.

First, you’ll notice the site looks a bit different today.  I’ve been meaning to change it for a while; hopefully this is easier to read.  I’m not happy with the entire layout, so you may notice more changes soon.  In any case, if it’s not an improvement, please let me know.

Now then, where were we?

Right, the Andy Reid Peak, which ended after the 2004 season…

Below is a table showing the top 30 best defenses (lowest PPG allowed) in my data set.  Remember, the data set is the last 10 seasons (320 teams) PLUS the 2000-2002 Eagles teams, for a total of 323 teams.  In the table, I’ve highlighted the Eagles in green.  Also, I’ve highlighted Super Bowl Champions in yellow.

Screen Shot 2013-05-10 at 11.34.09 AM

These are, roughly, the top 10% of all defenses.  Note that I have not included the rest of the league for the 2000-2002 seasons (not as easy as you’d think, for reasons I won’t explain here).  If I had, I believe there’d be 5-6 defenses that rank above the 2004 Eagles in this chart (including the outrageous 2000 Ravens, who allowed just 10.3 PPG).

In any case, including those years would actually IMPROVE the Eagles percentile rank here, so the point remains: the biggest strength of the Andy Reid Peak teams was defense.  Jim Johnson was with the team through the 2008 season, so the drop-off after 2004 is not related to coaching.

I’ll go into more detail about the drop-off next week (once I’m finished looking for potential explanations).  Today, though, is just about illustrating how good the Andy Reid Eagles were at their best.

How about offense?

Andy Reid is known as an “offensive” coach, so how was his team’s performance on that side of the ball?

The Eagles’ best offensive performance (PPG) under Reid was actually in 2010 (the Vick outlier season).  That year, the team scored 27.4 PPG, good enough for 27th overall in my data set (92nd Percentile).  Note that this was NOT during the “Andy Reid Peak”.

In fact, the best Andy Reid teams featured relatively average offenses.  Below is a table showing each Eagles team from 2000-2012 (I’m ignoring Year 1 for obvious reasons).  The first column shows that team’s offensive percentile ranking within my data set.  I’ve bolded the “Peak Era” teams.

Screen Shot 2013-05-10 at 12.10.30 PM

Overall, Andy’s reputation appears justified.  6 of his 13 teams here finished in the top 25% offensively.  Three more finished better than average.

However, notice that the 2001 Eagles managed to win more games than the 2010 Eagles despite scoring SIX fewer points per game.  That’s a huge difference in performance.  There was some offensive inflation across the league over this time period, but nowhere near enough to account for the degree of change we see above.

I’ll leave it there for today, since I want to focus on the positive.  Next week, we’ll take a closer look at the relative trade-offs Reid appears to have made (sacrificing defense for offense) and whether we can learn anything from that about overall team construction (I touched on this in the Necessary Conditions post if you want to get a head start).

The Good Side of the Andy Reid Era

I’ve spent a lot of time talking about how bad the Eagles were last year.  I do, however, want to spend some time on the good years.  Just how good were the Andy Reid Eagles at their peak?

To figure that out, we obviously have to first decide when the “peak” was.  Which Andy Reid team was best?

The quick answer would be 2004, since that team went to the Super Bowl (and lost by a field goal to a team that may or may not have cheated).  Indeed, the 2004 Eagles were very good.  As I’ve shown, Point Differential is the best statistical indicator of team performance, and the 2004 Eagles outscored their opponents by 124 points.  Over the past 10 years, that leaves them in the 87th percentile, meaning that team was VERY good (obviously).
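For anyone who wants to reproduce the percentile math, here’s a minimal sketch of how a rank like “87th percentile” falls out of the data.  The point differentials below are invented for illustration (NOT my actual data set); only the method matches what I’m doing.

```python
def percentile_rank(value, population):
    """Percent of team-seasons with a strictly lower point differential."""
    below = sum(1 for v in population if v < value)
    return 100.0 * below / len(population)

# Hypothetical point differentials, for illustration only
diffs = [-120, -60, -10, 0, 35, 80, 124, 150, 174, 200]
print(percentile_rank(124, diffs))  # 124 beats 6 of these 10 teams -> 60.0
```

Note that there are a few conventions for handling ties; with strictly-lower counting like this, a team that merely ties the best mark still ranks below 100.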

Quick note:  I’ve added the 2000-2002 Eagles teams to my data set, so from now on the data will include the last 10 years PLUS those 3 Eagles teams, meaning there are 323 total team seasons in the set.

HOWEVER, that was not the best team performance under Andy Reid.  In fact, the Eagles surpassed that +124 mark twice during Reid’s tenure: in 2001 (+135) and 2002 (+174).

The 2002 Eagles team, with PD of +174, ranks 12th overall in the entire data set, better than 96% of all teams during that timeframe (click to enlarge).

Screen Shot 2013-05-09 at 12.33.36 PM

Additionally, the 2002 team had a TO Differential of +14, better than either the 2001 team (+9) or the 2004 Super Bowl team (+6).

The Sack Differential points to 2002 as well, with that team registering 19 more sacks than their opponents, compared to +10 for the 2004 team and +5 for 2001.

This is a long way of saying that the 2002 Eagles team was, in my opinion (though largely supported by the data), the “best” of the Andy Reid era.  Also, it’s pretty clear that the Eagles’ “peak” was from 2001-2004.  That itself will not surprise anyone.

What I find most shocking is that the last year of the Andy Reid “peak” was nearly 10 years ago…

 

It was definitely time for a change.

Lastly, for today, I’ll leave you with the Point Differential per year for the Eagles under Reid (1999 excluded).  As you can see, there’s a pretty definitive down-trend over nearly the entire timeframe.

Screen Shot 2013-05-09 at 12.40.15 PM

 

As I mentioned at the top, I’m going to spend some time illustrating how GOOD those Eagles teams were (2001-2004).  Additionally, I’ll be looking for potential causes of the long-term decline in team performance.

Fun with Charts

No great insights today, but I will give you a few charts to look at.  After yesterday’s post, I decided to go through and chart the Eagles’ performance over the last 10 years with several different statistics.  Each chart tells a piece of the “Andy Reid Era” story, some of which we’ve covered, some of which we haven’t.

First up is the defensive performance.  By that, I mean Points Against.  There are a number of variables that go into defensive performance, so summarizing it with just Points Against is incomplete, but nonetheless, it’s an interesting chart.  This chart has 2 axes (is that the plural of axis?), with Wins on the left and PA on the right.  Wins are the blue line.

Screen Shot 2013-05-08 at 11.24.53 AM

Given what we know about the correlation between Point Differential and Wins, we should expect to see something resembling an inverse relationship here (i.e. PA goes up = Wins go down).  We do see some of that, but from ’06-’09 that relationship doesn’t hold.  The biggest takeaway here is just how bad (and anomalous) last year’s performance was.  From 2003 to 2011, the average Points Against for the Eagles was just 20.11, with a high of 24.3 in 2005.  Last year the team allowed 27.8 points per game, more than a full touchdown per game over the long-term average.

UPDATE: I added the same chart below, but reversed the secondary Axis to make it a bit easier to read (thanks to a commenter).  Here, we should see a positive relationship.  Again the takeaway is that we see a bit more deviation than we’d expect, especially from ’06-’09 (and from 2010-2011).

Screen Shot 2013-05-08 at 12.41.28 PM

How about Points For (Wins are the blue line again):

Screen Shot 2013-05-08 at 11.39.34 AM

Here the relationship we expect to see (positive), is much clearer and more consistent.  While they are both strongly correlated with winning, Points For/Wins is, in fact, the stronger relationship.

This should be somewhat encouraging to Eagles fans, since Chip Kelly is an “offensive” coach, and the team used its top draft choices on that side of the ball.

Remember, based on the last 10 years, having an above average offense is a NECESSARY condition for winning the Super Bowl.  Having an above-average defense IS NOT.

Here is a slightly different chart.  This shows annual turnovers forced and surrendered.

Screen Shot 2013-05-08 at 11.49.35 AM

As I covered yesterday, the 2011 and 2012 Eagles performed far worse than the team’s long term average.  Last year, in particular, reflects significant outliers for both measures (negatively), which led to the historically bad TO Differential.

Speaking of TO Differential (I know I’ve spent a lot of time on it), here is the histogram for all teams from 2003-2012.  The red bar is where the 2012 Eagles are located.  I may have run this chart before, but it’s ridiculous, so I’m doing it again.  Remember, the whole sample includes 320 seasons.  The height of each bar (the Y-Axis) represents how many teams finished a season with that TO Differential.

Screen Shot 2013-05-08 at 12.12.19 PM

Click to enlarge.  That’s what I mean when I say historically bad.

Now that I’ve figured out how to do reasonable histograms in the new Excel (Microsoft took away the statistics tools), I hope to make a few more that will provide a good visual illustration of just how bad/good the Eagles of 2012 were.  I’ll also run them for Andy Reid’s better years, which will give us an idea of just how good the team was when it “peaked”.
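For anyone who doesn’t want to fight Excel, a histogram like the one above amounts to a few lines of counting.  The TO differentials below are invented for illustration, not the real 320-season sample.

```python
from collections import Counter

# Hypothetical TO differentials for a handful of team-seasons
to_diffs = [7, -2, 0, 4, -14, 0, 9, -24, 3, 0]

counts = Counter(to_diffs)
for value in sorted(counts):
    # bar height = number of teams finishing with that differential
    print(f"{value:+4d} | {'#' * counts[value]}")
```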

Revisiting 2011: Andy Reid’s Downfall

I’ve spent a lot of time over the past few months examining just what went wrong last year. I’ve come to the conclusion that while the team was certainly not very good, it was also the victim of bad luck.  It is unfortunate then, that Andy Reid lost his job here at least partially as the result of circumstances which, in general, are extremely unlikely to occur.

However, it would be incomplete to say that Reid was fired purely as a result of last year’s 4 win season.  The larger issue is that the 2012 performance followed a season that saw the team win just 8 games.  These consecutive sub-par performances, together with the lack of playoff wins, ultimately led Lurie to go in a different direction.

Therefore, when discussing the end of the Andy Reid Era we must also look at the 2011 season.

Why did the team win just 8 games?  Was that a result of bad luck or was it close to the “true win value” of the team?

As you can probably guess, I’ve found some interesting statistical tidbits that I hadn’t picked up on before.

Let’s start with Point Differential.  As we know, Point Differential is the best indicator of “true win value”.  For obvious reasons, Points Scored – Points Against correlates extremely well with Wins (.91 correlation).
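That .91 figure is a Pearson correlation coefficient.  For readers who want to check this sort of number themselves, here’s a from-scratch sketch; the five (point differential, wins) pairs are invented for illustration, so the output is not the real .91.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up (Point Differential, Wins) pairs, not the real 10-year sample
point_diffs = [-100, -50, 0, 50, 100]
wins = [4, 7, 8, 9, 12]
print(round(pearson_r(point_diffs, wins), 2))  # -> 0.98 on this toy sample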

In 2011, the Eagles’ Point Differential was +68.  Let’s look at that in a chart with every other team from the past 10 years.

Screen Shot 2013-05-07 at 11.46.53 AM

Above, we see Point Differential on the Y-axis and Wins on the X-axis.  The black line illustrates the expected win total for each Point Differential.  I’ve highlighted the 2011 Eagles in red.

While the 2011 Eagles are clearly not an outlier, the chart shows that typically, a +68 point team would be expected to win 10 games (which would have put the team in the playoffs). Instead, the Eagles won just 8 games, setting the team up for a true make-or-break season (in which it broke, to put it lightly).
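The black “expected win” line is just a least-squares fit of wins against point differential.  Here’s a sketch of the idea; the (point differential, wins) pairs are invented, so the coefficients are illustrative only (the real line comes from the full 10-year sample).

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Made-up (point differential, wins) pairs for illustration
data = [(-100, 4), (-50, 6), (0, 8), (50, 10), (100, 12)]
slope, intercept = fit_line([d for d, _ in data], [w for _, w in data])
print(round(slope * 68 + intercept, 1))  # expected wins for a +68 team -> 10.7
```

Even with these made-up numbers, a +68 team lands around 10-11 expected wins, which is the gap the chart shows for the 2011 Eagles.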

Now let’s look at Sack Differential

Intrigued, I looked at a few more stats from the 2011 season and found another relatively surprising measure.  The Eagles registered a Sack Differential of +18.  As I explained before, Sack Differential (Sacks – Sacks Against) is strongly correlated with winning (.61 value).

Here is the chart:

Screen Shot 2013-05-07 at 12.07.20 PM

 

As you can see, the Eagles are again featured well above the “expected win” line.  In fact, over the past 10 years, NO TEAM has registered a sack differential of greater than +18 and won FEWER than 8 games.  Just 1 team won 8 games with a greater sack differential (’06 Green Bay).

Now, of course, Sack Differential and Point Differential are connected.  These are not completely independent variables, so it is not that big of a surprise that the Eagles underperformed their “expected win value” in both charts.

Still, it’s interesting that in both charts, the Eagles are close to the edge of each distribution range.

So what went wrong?

As you can probably guess, it was the Turnovers.  I should note that I haven’t gone through each game of the 2011 season and highlighted where each loss came from.  This is purely a 10,000 foot statistical view of the whole season.  The Eagles registered a TO differential of -14.  TO Differential is also highly correlated with Wins, with a slightly higher value of .64.

Here is the chart:

Screen Shot 2013-05-07 at 12.28.53 PM

Here is where the team gave back the advantage it gained in the previously examined statistics.  Using just this statistic, we would expect a -14 TO Differential to equate to a roughly 1-win team.

Similarly to this past season, the Eagles just turned the ball over too often.

From 2003-2010, the Eagles average TO Differential was +8.  They followed up that era of strong performance with consecutive years of -14 and -24.  Whereas last year the culprit was fumbles (lost 22 of them), in 2011 it was interceptions (threw 25 of them).

Just for fun, here is a chart of the Eagles’ performance from 2003-2012, with Wins and TO Differential illustrated.  The left Y-axis is Wins (Blue), the right is TO Differential (Red).

Screen Shot 2013-05-07 at 12.41.08 PM

 

It should also be noted that 2010-2012 was the “Michael Vick Era”, though obviously Nick Foles has some responsibility for last year as well.  For all his faults, Donovan McNabb was EXCELLENT at taking care of the football.  The guy just did not throw interceptions (look at his rate, not totals).

While Vick put it all together in 2010, it’s possible that he just significantly outperformed his long-term skill level.

For all of Andy Reid’s reported “QB Expertise”, it looks as though a big part of his downfall was not finding another starter after McNabb declined.

Draft Recap part 5: Positional Breakdown

I am returning to the draft discussion today, because I believe there is still useful information to be gleaned from the event.  Today, we’re throwing positional impact away and focusing purely on the order that players were drafted and how they compared to others within their position group.

Before we get started, I want to refer us back to last year so you can get a sense of why this type of analysis is useful.  It’s too early to judge last year’s draft class, but we definitely have a sense of each player and what the draft order would be if it was re-done with current information.  Here, for example, are the top DTs from last year:

Screen Shot 2013-05-06 at 11.17.50 AM

The “reach” measure is the same thing we looked at last week; a negative number there means the player was chosen EARLIER than “expected”.  However, I want you to look at the “Pos % Score” column.  Within the TPR model, each position has a different “maximum score” depending on the impact values I derive from the salary cap data.  This “Pos % Score” column tells us what percent of that “max score” each prospect attained.  Basically, it’s an easy way to back out the positional impact adjustments and focus purely on the question “how good is this prospect?”.
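As a concrete example of the arithmetic: the per-position maximums and the raw score below are made up (the real maximums come out of the salary cap impact data, which I’m not reproducing here), but the “Pos % Score” calculation itself is just raw score over the position’s maximum.

```python
# Hypothetical per-position maximum TPR scores, for illustration only
POSITION_MAX = {"DT": 88.0, "QB": 100.0, "G": 74.0}

def pos_pct_score(raw_score, position):
    """Percent of the position's maximum TPR score a prospect attained."""
    return 100.0 * raw_score / POSITION_MAX[position]

print(round(pos_pct_score(79.2, "DT"), 1))  # -> 90.0
```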

There are a couple quick caveats.  I don’t have the NFP ratings from last year, so those don’t figure into the scores.  The TPR model does not account for “fit” or “role”, so an NT and a 4-3 DT will be compared against each other.  However, since all we are trying to do is identify the “best potential player(s)”, regardless of role, I’m not too worried about either of those.  In the future, I will try to increase the positional resolution of the model (I did so this year by splitting OLBs and MLBs) to better account for the “role/fit” issues.

Looking at the chart above, we can see that Fletcher Cox graded out as the best DT prospect, yet he was chosen AFTER Dontari Poe.  At this moment, it looks like that was a big mistake by the Chiefs.  Also note the very low rating for Derek Wolfe, who had a very weak rookie year.

Now here is a chart of the OLBs:

Screen Shot 2013-05-06 at 11.29.08 AM

Notice Lavonte David rated significantly higher than the 3 LBs chosen before him, and his rookie season bears that rating out.

Here are a couple more positions from last year, then I’ll move to this year.  Notice both the order and absolute difference between players’ Pos % scores:

Screen Shot 2013-05-06 at 11.35.00 AM

Above, we see that Brandon Weeden received an almost identical score to Brock Osweiler, but was taken 35 picks earlier.  Also, we can clearly see the “tier” separation between prospects.

Screen Shot 2013-05-06 at 11.35.52 AM

In this WR chart, compare Rueben Randle’s rating with that of either A.J. Jenkins or Brian Quick.  While 49ers fans were probably surprised Jenkins couldn’t get on the field, this chart suggests he was drafted ahead of better prospects.

Ok, you get it.  What about this year?

Let’s start with the CBs.  Please note that the only players I included are those that made the original TPR top 137:

Screen Shot 2013-05-06 at 11.45.57 AM

As you can see, there is some serious deviation from the pre-draft rankings.  Darius Slay, in particular, looks like he was chosen too high, since both Banks and Taylor carry SIGNIFICANTLY higher grades.  Obviously, these rankings are not foolproof, but as we saw above they suggest the Lions (Slay) might have made a costly mistake.

Screen Shot 2013-05-06 at 11.51.58 AM

Here we have the DEs, and the variation is not nearly as severe as in the CB group.  The only one who really jumps out is Margus Hunt, drafted above Damontre Moore.  Hunt is a very high-risk/high-reward player, but note that the TPR rankings suggest this difference was due to Moore falling rather than Hunt being “reached” for.  In fact, Hunt was drafted exactly where the TPR system rated him.  Moore, however, fell 40 spots.  William Gholston, near the bottom, looks to be a bit of a steal, especially compared to Alex Okafor, but since his grade is just 71.7%, he doesn’t project to be an impact player anyway.

Screen Shot 2013-05-06 at 11.55.26 AM

The DTs are notable for their surprising LACK of deviation.  Jesse Williams “fell” a lot, but as I said last week, that’s due to injury concerns and likely reflects a medical risk that can’t be quantified.  Bennie Logan, unfortunately, rates as the biggest “reach” of the group, but note that his positional score is fairly close to the DTs drafted after him, meaning within the position group, he wasn’t a terrible pick.  This would seem to suggest that the Eagles, despite their claim to draft pure “value”, likely made this pick based on “need”.

Screen Shot 2013-05-06 at 11.58.31 AMLooking at the Guard position, we see evidence of the Bears’ perplexing draft strategy.  While Kyle Long “projects” as a potential OT in the future, this suggests that regardless of position, Larry Warford is much more likely to be a good player.  The fact that he was also available more than 40 picks later is more damning evidence against the Bears’ perceived “value” in this draft.  Again, Kyle Long might become a great player, all I’m saying is that the odds of that happening are less than the odds of Warford becoming a big contributor.

Screen Shot 2013-05-06 at 12.01.25 PM

At the OT position, we can see why the Fisher/Joeckel/Johnson triumvirate was so sought after.  Those three players represent the clear “top tier” at the OT position this year.  Nothing to note after that, as the OTs were drafted in almost the exact order they “should” have been.  However, we do see a big value difference if we look at DJ Fluker and Menelik Watson.  While both players graded out similarly, Watson was taken 29 picks later, meaning he was a significantly better “value”.

Screen Shot 2013-05-06 at 12.11.54 PM

Not too much to say here that hasn’t already been said.  Mike Glennon is a very peculiar choice, but EJ Manuel represents the biggest risk.  Either Manuel was rated by several teams to be the only QB worth a 1st round pick, or the Buffalo Bills were bluffed into making a poor decision.  With a GM of questionable judgement and a rookie head coach, it’s likely they just screwed up.

This chart also throws the brakes on the Barkley hype.  Again, the story with Barkley is “great value”, NOT “great QB”.  His score is OK, but nothing special.  For reference, it’s roughly equal to Brandon Weeden’s and Brock Osweiler’s from last year (though notably it’s much higher than Nick Foles’).

Screen Shot 2013-05-06 at 12.17.10 PM

Here are the TEs, and they tell a similar story as the DTs (as far as Eagles fans are concerned).  I have Ertz rated as a slight reach, but within the TE class, he was picked where he should have been (though Escobar is clearly the better value).

Without comment, below are some other positions.  In general, it’s important to remember the larger point here.  When a team “reaches” for someone, they are essentially saying “My evaluation of this kid is better than (almost) everyone else’s.”  Usually, they’re wrong.

Screen Shot 2013-05-06 at 12.19.26 PM

Screen Shot 2013-05-06 at 12.22.50 PM

Screen Shot 2013-05-06 at 12.23.22 PM

Screen Shot 2013-05-06 at 12.23.59 PM

 

Eagles Elusiveness and the Chip Kelly Offense

I may or may not return to the draft next week.  Today, though, I wanted to comment on the recent Football Outsiders article on “Broken Tackles”.  For those who haven’t seen it, here is the link.  The upshot is that LeSean McCoy led the league last year with 44 broken tackles (tied with AP).  Additionally, he registered a broken tackle on 17.3% of his touches, an incredibly high amount and third behind only Isaac Redman and Jacquizz Rodgers.  Bryce Brown had the 9th highest Broken Tackle rate (13.3%). Those numbers don’t really surprise me; Shady is a fantastic RB whose biggest strength is his elusiveness.

I have to note that this is based on the Football Outsiders play-grading, which is not exact.  However, I’ve been assured by one of the volunteer play-graders (my brother), that these numbers are legit if not absolutely perfect.

Today, I want to discuss how that might play within the overall offense.

When the Eagles were hiring, I mentioned that the team was the likely front-runner for Chip Kelly (the top candidate).  While there were several reasons I believed that, the most important was the existing talent on offense.  I highlighted how Chip was likely salivating at the thought of having Shady, D-Jax, and Bryce Brown (and Vick, to a slightly lesser extent).  I, however, did not elaborate, so I will do that now.

Chip vs. Andy 

I think a lot of the recent confusion over “Chip Kelly’s Offense” is connected strongly to the Andy Reid Era.  Andy Reid was largely a “scheme” coach, relying on play-design as the foundation of his offense.  As a result, Eagles fans have been conditioned to expect a coach to have a singular “offensive philosophy” or “offensive scheme”.

I think Eagles writers and fans are now projecting (wrongfully) a similar expectation onto Chip Kelly.

Why wrongfully?

I believe that Chip Kelly is much less devoted to “scheme” than Andy was/is.  Chip Kelly’s offense has its foundation in “talent” more than “scheme”.  By that I mean Chip’s overall offensive goal is to get the ball to the player with the biggest mismatch or to get it to a speedy player “in-space”.  Andy occasionally attempted to do that (the WR screens were particularly frustrating), but the only time I remember a concentrated effort to exploit mismatches was back when Brian Westbrook was in his prime.  Chip, by comparison, has repeatedly shown the ability and willingness to dramatically change his offense based on the talent then available.

That’s where Shady, B-Brown, D-Jax, Ertz, Casey, etc… come into play.

I expect to see an offense that is far more balanced (run/pass) than Andy’s was.  After all, if you have RBs like Shady and Brown, you’d be ignoring a major strength by not giving them a big role.

I expect to see three TEs on the field together. If one gets matched up with a MLB (which will happen if a S isn’t brought in to mark the TE), that’s a huge advantage for the Eagles.

The Quarterback will likely have a lot of “at-the-line” responsibility, in that he’ll have to determine on many plays whether to run or pass.  If the defense leaves LBs in to mark Ertz and Celek/Casey, then those are potential mismatches in the passing game.  Conversely, if the defense brings in Safeties to mark the TEs, then the Eagles will either run the ball to take advantage of the TE/S size difference, or throw the ball over the top to D-Jax, who will now have single-coverage.

As you can imagine, I could go on for a pretty long time like this.  There are MANY ways to create and exploit mismatches with the offensive line-up the Eagles now have.

The Key

It sounds easy, of course, but that plan requires two things that are not easy to find.

1) A very good offensive line, one that can be equally effective in both run-blocking and pass-protection.  The Eagles have this IF healthy and IF Lane Johnson is as good as he’s expected to be.

2) A smart, accurate Quarterback.  The QB will have to make the right reads at the line, and adjust the play accordingly.  He’ll likely have to move players around the formation fairly often as well.  Additionally, he’ll have to get the ball to these guys in-stride, to allow them to fully exploit their match-ups (or put the ball up high if it’s a TE/S matchup).  You can decide for yourselves whether the Eagles have that QB or not (and if so, who it is).

Overall, though, the point I wanted to make is that it’s time to FORGET ANDY REID.  Kelly will do things MUCH differently than Reid did, and I think the most obvious example will be in the “offensive philosophy”.  Kelly’s offense is predicated on players, not scheme.  That’s a big departure from the Reid era, and one that will take some adjusting to.  However, expecting Kelly to install a singular “scheme” like Reid did is a mistake.

This is a recent quote from Chip Kelly:

“If you want to go big and put linebackers on the field, we believe we have pass mismatches for you. If you want to go small and put DB’s on the field, I think we have a mismatch in the run game….We are going to go three tight ends in a game. Now, if they go three linebackers, we spread them out and if they go DB’s, we smash you. So, pick your poison.”

It really is that simple, but only because the Eagles have the talent and personnel to pull it off.

Now if they could just do something about that defense…

P.S. On a personal note, today is a sad day for the Cohen family.  Last night we lost a close member of the family, our dog Gambit (yes, we’re so cool we named her after an X-Men character).  Anyone who has had pets before knows how much it sucks to lose one.  For those who haven’t, all I can tell you is that it’s not too different from losing a human family member, there’s just less paperwork.

Gambit, a Boston Terrier, lived a long and happy (if not terribly healthy) life, and she will be greatly missed.  My family’s home is not pet-less though, as Gambit is survived by her step-sister…Storm.

Draft Recap Part 4: Potential “Steals”

Today I’ll cover the players that “fell” in the draft and see if we can identify any potential “steals”.  However, I want to start by doing something I should have done yesterday.  I have ESPN ratings and NFL.com ratings for several past years (I do not have NFP ratings or NFL.com ratings for 2011).  Therefore, I can run the TPR system through the 2010 and 2012 drafts and see if it correctly identified potential busts and steals.  I showed this analysis once before, but have since updated the model (and it’s more interesting now anyway).

As I explained yesterday, the model should be more successful in identifying busts than it is in identifying steals.  If a player falls dramatically, there is usually a reason for it, and one that can’t be quantified and put in our model.  However, players that go well ABOVE their TPR rankings are usually just indications that teams did in fact “reach” for a player.  For 2010, this is what the biggest “reaches” list looks like:

Screen Shot 2013-05-02 at 10.42.23 AM

You’ll notice that, compared to this year, the “reaches” in 2010 were not nearly as significant.  Still, in the above chart, we’ve done a good job of identifying the players who were not “worth” their draft spot.  It’s not perfect, obviously, but the “bad” selections far outnumber the “good” ones.

Consequently, I’m comfortable saying that when a team “reaches” for a player, rarely do they “know more” than everyone else.  In fact, the few successful cases may just be the result of luck (if you reach on enough players, a couple of them are going to work out).

Now you can go back and view yesterday’s post with a bit more evidence behind it.

The “Steals”

Unfortunately, as I mentioned above, the “steals” are not nearly as easy to identify.  For example, here is the same 2010 draft class, with the “value” picks shown:

Screen Shot 2013-05-02 at 10.48.25 AM

There are definitely some players in there that qualify as “steals”.  However, they are mostly concentrated in the 8-12 region (that’s draft spots below TPR ranking).  I’m not entirely surprised.  Jimmy Clausen, for example, rates extremely high on the TPR system, due to his impact position and high consensus rating from ESPN and NFL.com.  Clearly, though, teams saw something about him that is not reflected here.  Those teams seem to have been justified (though he hasn’t been given a great chance).

So what can we learn?  I’m not sure, but perhaps we can be skeptical of anyone who fell “too far”.  That’s not as scientific as I’d like, but we have to start somewhere.  If a player fell more than a full round (30+ picks), we can assume that almost every team passed on him, meaning there’s likely something about that prospect that our model isn’t picking up.
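The “fell more than a full round” rule of thumb is easy to mechanize.  The thresholds and sample numbers here are my own illustration, not part of the TPR model itself:

```python
def classify(pick, tpr_rank, round_size=30):
    """Label a pick relative to its TPR ranking (thresholds are rules of thumb)."""
    diff = pick - tpr_rank  # negative: chosen EARLIER than "expected" (a reach)
    if diff <= -round_size:
        return "big reach"
    if diff >= round_size:
        return "fell a full round: be skeptical"
    return "roughly in line"

# Hypothetical picks, for illustration
print(classify(pick=10, tpr_rank=55))  # -> big reach
print(classify(pick=60, tpr_rank=20))  # -> fell a full round: be skeptical
```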

Here is the chart for this year.  I’ve cut the sample to players drafted in the top two rounds (for reasons I explained yesterday) plus the 10 remaining players with the highest TPR rankings.

Screen Shot 2013-05-02 at 10.58.07 AM

Several of these players have “health issues”.  Jesse Williams and Keenan Allen are both reported to have injuries that raised flags with most teams.  That’s obviously not what we’re after here, but if healthy, those two are pretty obvious candidates for “value”.  After reviewing the 2010 data, we should be skeptical of the other players that “fell” more than 30 places (roughly a full round).  As we saw with Jimmy Clausen, it’s likely teams saw something in Barkley, Nassib, and Wilson, that can’t be represented quantitatively.   I obviously hope they’re wrong about Barkley, but that’s probably not the case.

So where does that leave us?

I’d suggest that we focus on Arthur Brown and the players below him on the chart.

– Arthur Brown is, today, widely regarded as having been a great “value” pick.  I have no idea why he fell so far, but there have been no reports of injuries or off-field issues that I have seen.  However, pointing to him is cheating a bit, since every “draftnik” is already calling him a “steal”.

– Similar story with Sharrif Floyd.  While the TPR system did not rate him as highly as some “experts”, he was still graded a top 10 pick, and he fell 13 spots to #23.  Obvious candidate, and you didn’t need to come here to know that.

– The rest, however, I’ll take credit for, particularly those at the bottom.  Notice that Star Lotulelei, John Cyprien, Bjoern Werner, and Cordarrelle Patterson all carry relatively high “positional scores”.  That means that, ignoring positional impact, they are good prospects.  Star, though he only “fell” 8 spots, jumps out due to his 92%+ score, but the others are clear “value” picks as well.

I was hoping Cyprien would be the Eagles’ #35 selection, but I was obviously not the only one who liked him (he was the first pick on day 2).

– The other players I’d point to are Jamar Taylor and Menelik Watson.  Each of them fell a significant amount (more than 20 picks), so it’s possible there are some behind-the-scenes issues with both players.  However, given the significant rankings deviation we saw throughout the draft, it’s also possible these guys were just overlooked.

Neither projects to be an “impact” player, but both have solid scores.  Additionally, both players play positions with relatively high historical “hit” rates, meaning their positional risk is less than most other players.

– Larry Warford I’m not too interested in, though Guards have a very high “hit” rate.  His positional score is relatively low, and it’s near impossible for a G to become an “impact” player anyway.  He might have a good career, but it’s going to be difficult for him to become good enough to count as a big “steal” in the draft.

– Tank Carradine is another player with injury issues.  It’s interesting, though, that he did not fall nearly as far as Williams or Allen.  That tells me he received a much better report from team doctors than the other players did.  If Tank is healthy, he can absolutely be an “impact” player.  He’s not a great fit for the Eagles, so I don’t mind passing on him, but he can also be looked at as an “impact” player that was taken in the 2nd round.  However, since the medical risk is real, even if he pans out it will be unfair to say he was a “value” pick.  Just because a player avoids the risk associated with him doesn’t mean the risk wasn’t real.

I’ll be closely tracking the progress of these players throughout next season.  While one season won’t be enough to judge each pick, we’ll likely be able to knock a few off the list one way or another.  Hopefully, after watching this class play out, we can adjust the model or create a new measure that will help us identify “steals” as easily as we’ve identified “reaches”.

Draft Recap Part 3: Overdrafts and Potential Busts

Now it’s time to go back to our TPR rankings and see which players represent the biggest potential “reaches” and “values”.  Today I’ll do the “reaches”.  A couple of notes before I start:

– The TPR system is only designed to analyze the first 2 rounds of the draft.  Therefore, today I will only be looking at players drafted in those two rounds.

– In general, I think we’ll be much more successful in identifying “busts” than we will be in identifying “values”.  If a player falls dramatically, my first assumption is that the league knows something that we and the media analysts do not.  It could be due to an injury risk or a personality defect, neither of which we can measure.  The “busts”, however, represent the reverse, where a single team picks someone well above his perceived value.  In this case, it’s likely that the team is being overconfident in its own assessment.

So who were the biggest round 1 and 2 reaches?

Below is a table illustrating the players that were most heavily “overdrafted”.  The column with red text shows each prospect’s draft pick minus his TPR ranking.  So if prospect X has a score here of -30, it means he was drafted 30 spots earlier than the TPR ratings suggested he should be.

Due to team differences in scheme and the relatively close ratings of a lot of prospects, we aren’t really concerned with small differences.  Large ones, however, should be very informative.

Screen Shot 2013-05-01 at 10.30.06 AM
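For concreteness, the red-text score is just a difference, and the “only large gaps matter” rule is a threshold on it.  A minimal sketch (function names and the -30 cutoff are my own illustrative choices):

```python
# The red-text column: a prospect's draft pick minus his TPR ranking.
# Negative = drafted earlier than TPR suggested (a potential "reach");
# positive = drafted later (a potential "value").

def reach_score(draft_pick, tpr_rank):
    """Negative scores flag reaches; positive scores flag falls."""
    return draft_pick - tpr_rank

# Prospect X from the text: a score of -30 means he went 30 spots
# earlier than his TPR ranking suggested.
assert reach_score(20, 50) == -30

# Small differences are scheme/rating noise; only flag big ones.
def is_big_reach(draft_pick, tpr_rank, threshold=-30):
    return reach_score(draft_pick, tpr_rank) <= threshold
```

As a check against the chart discussion: Kyle Long at pick #20 with a TPR ranking of #94 scores -74, well past any reasonable cutoff.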

Overall, there was significantly MORE deviation than I expected to see.  Before I break down the chart above, let me advance a theory about this draft:

It might be a lot worse than we think.

In the first round, 4 of the biggest “reaches” were for interior offensive linemen.  What might that tell us?  Well, if the ENTIRE draft is weaker than we suspect (not just weak at the very top), then several teams might decide to make their picks purely based on risk.  If the risk/reward tradeoff for every player is skewed towards the risk side, then it would make perfect sense to “reach” for a relatively low-risk player like an OG or C.  By doing this, you will appear to have passed on the opportunity to select an “impact” player.  However, if there aren’t any (possible, though unlikely), coming out of the draft with a decent starter at a low-impact position isn’t a bad outcome.  I hope this is not the case (and I don’t think it is), but it would explain a lot of the perplexing decisions made on days 1 and 2 of the draft.

Now back to the chart.  Here are my takeaways:

– In my opinion, the Bears had the worst draft (see the next two bullets).

– The biggest “reach” of the entire draft, based on TPR, was Jon Bostic, an ILB from Florida.  The Bears drafted him with the #50 pick, and I did not even have him in my top 137 players.  Since he’s a LB, the Bears will get the benefit of the doubt, but if you’re looking for a potential bust, he’s a very strong candidate (though in the second round it’s not as noticeable or meaningful).  He may turn into a good player, but at the very least, it appears as though the Bears should have waited a round (or two) to take him.

– Kyle Long was one of the aforementioned 1st round interior linemen “reaches”.  He was drafted #20 overall, despite a TPR ranking of just #94.  1st round Guards have a very low miss rate (Danny Watkins was a rare exception), so Long will likely have a productive career.  However, the Bears probably passed on several better prospects at more impactful positions.  This is an under-the-radar reach, since the player will probably contribute, but it represents terrible value nonetheless.

– That’s twice in the first two rounds that the Bears “got their guy” regardless of the actual value of each player.  Either the Bears know something nobody else does (or at least very few teams do), or they just screwed up.  Time will tell, but I know which side I’d bet on.

– Two RBs are near the top of the list: Christine Michael and Le’Veon Bell.  Both were taken in the 2nd round, so the bust potential is somewhat limited.  Running Backs, though, are terrible “value” picks near the top of the draft.  They’ve been proven to be, for the most part, interchangeable.  Neither of these guys (nor Bernard, also on the list) projects to be a LeSean McCoy-type impact RB.  If that’s the case, it would have been better to draft a higher-rated prospect and search for a RB later.  In my success-odds table/database, second-round RBs became starters just 21% of the time, well below the odds of prospects at several more impactful positions (for example, 50% of 2nd round CBs became starters).

– The Bills might have screwed up big-time with the EJ Manuel pick.  Not only was he a big “reach” by our rankings, he was also not even close to being the top QB on the board. I believe two things happened here:

1) The Bills had Manuel as “their guy”, which as you know, is a dangerous place to start from.

2) The Bills were likely bluffed into taking Manuel much higher than they needed to.  I know there have been rumors that several other teams (including the Eagles) also wanted Manuel, but given the way the rest of the draft went, it’s more likely that the Bills fell for the smokescreens.

The fact that the Bills have a less-than-sterling record when it comes to QBs only increases the inherent risk of this selection.  (Bad GM Theory)

– Everyone (including me) gave the Cowboys shit for drafting Travis Frederick early, but note that he is far from the biggest reach in these rankings.  Still, it looks like he was picked about a round too early.

– Matt Elam, picked by the Ravens, will be an interesting prospect to watch.  Since it was the Ravens and Ozzie Newsome that made the pick, everyone assumes it was a good one.  However, I’ve got it as a big reach.  I should note that Elam’s TPR ranking is lowered significantly by his NFP grade.  If I had to bet, I’d certainly pick Newsome and the Ravens over the NFP.  Regardless, given that they traded up to get him, the Ravens are representing to the world that they are extremely confident in Matt Elam being significantly better than the other safeties on the board (which they could have selected had they not traded up).

– The Eagles do make an appearance on this list, at the very bottom.  Zach Ertz was selected 15 spots higher than his TPR ranking.  As I explained on Monday, I’m not overly concerned by this, since 15 spots in the second round isn’t a MAJOR deviation.  However, it is entirely possible that Chip Kelly’s desire for a TE led the Eagles to make a poor “value” selection.  I’m betting on Chip here, and think Ertz will be a significant contributor.
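To make the positional hit-rate argument from the RB bullet concrete: the 21% (second-round RB) and 50% (second-round CB) starter rates come from my success-odds table, but the ten-draft horizon below is purely illustrative.

```python
# Second-round starter rates quoted in the post.
hit_rate = {"RB": 0.21, "CB": 0.50}

# Expected starters from spending one 2nd-round pick at each
# position over an illustrative ten drafts.
drafts = 10
expected = {pos: rate * drafts for pos, rate in hit_rate.items()}

# The CB picks yield more than twice as many starters on average,
# which is the sense in which 2nd-round RBs are poor "value".
assert expected["CB"] / expected["RB"] > 2
```

That gap is why “decent starter odds at an impactful position” usually beats “decent starter odds at an interchangeable one”.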

For what it’s worth, here are the highest ranked guys (in TPR) that were NOT selected in the 1st two rounds:

Screen Shot 2013-05-01 at 11.14.06 AM

I’ll cover the “value” picks tomorrow, but it’s safe to say the guys listed here are potential “steals”.