

One of the things I wanted to do this winter was to put some actual numbers to some old prospect lists. What does it really mean to be the Xth best prospect in the game according to person Y? How many top prospects really turn into significant players?

I was going to look at BA as the obvious default – right or wrong they’re the standard and they have lists going all the way back to 1990. I’ll probably end up doing that at some point anyway, but I thought it might be more interesting to look at the upstart “performance analyst rankings at BP.” Unfortunately, their lists don’t go back quite as far as I thought. I have copies of the BP annual all the way back to 1997, but it turned out that they only started their own prospect list in 1999. That’s not enough time to get a full picture of everybody on the list, but six years will give us a pretty good idea. The average age of players on the list was a bit over 22, so on average we’ve seen their age-23 through age-28 seasons. That at least gets us into their peaks.

I used WARP3 as a measure of production despite its various flaws since it at least tries to measure everything and is ridiculously convenient. It’s certainly good enough for these purposes. We’re not trying to differentiate a group of similar good players; we’re trying to differentiate between successes and busts.

I toyed around with a couple of different ways to define successes and decided on a pretty loose and subjective definition. That way I keep the number of successes up a little to make comparisons a little more meaningful, and it allows some leeway for late bloomers. My original minimum cutoff of 10 WARP was shattered by a trio of consensus stud picks who’ve all been somewhat disappointing and so far have career values between 9 and 10 WARP. Those three players are Nick Johnson, Sean Burroughs and Corey Patterson. All three have burned through their pre-arb years without really contributing all that much for whatever reasons, but I thought it worthwhile to consider them loosely defined successes. The general criterion I used was: over 10 WARP now with a great likelihood of exceeding 20 WARP based on several years as a solid regular, decent mid-rotation starter or closer. The closers tend to be the shakiest group, as you might imagine, but that’s life.
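The loose success test above can be sketched as a simple rule. This is just an illustration of the stated criteria, not the actual method: the `likely_to_exceed_20` judgment was subjective in the original, and the soft ~9 WARP floor is my reading of how the Johnson/Burroughs/Patterson trio got waved in.

```python
def is_loose_success(career_warp: float, likely_to_exceed_20: bool) -> bool:
    """Loosely defined success: over ~10 career WARP with a great
    likelihood of exceeding 20 WARP based on several years as a solid
    regular, mid-rotation starter, or closer. The cutoff is soft (>= 9
    here) because the 9-10 WARP trio were counted as successes anyway."""
    return career_warp >= 9 and likely_to_exceed_20
```

A player at 15 WARP trending toward 20+ passes; a 5-WARP flameout fails regardless of how good he once looked.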

In addition to simply going through the BP list I thought it was worthwhile to compare their rankings to BA. I included the BA ranking next to the BP ranking. I very briefly considered schemes to “grade” the rankings, but I’m not actually all that interested in doing that. Unlike people affiliated with BA or BP I don’t have any financial interest in who “grades” better, so I don’t really care.

It’s also one of my biases that I don’t think the specific sequential rankings are all that interesting. That one group may have a stud player at #4 and the other has him at #9 is completely meaningless to me. In my opinion, any comprehensive grading scheme would have to invent some kind of meaningful difference in that case. Once you get past the very top premium prospects, I don’t think rankings that differ by 15 to 20 (or more as you get really low into the BA Top 100) are truly different.

And perhaps because of that bias, one of my conclusions from this little study is that there’s not a whole lot of difference between the two. At times there’s so much rhetoric intended to create a false conflict between two polar opposites – BA and its tools obsession against BP’s dogmatic “we’re performance analysts, damnit!” Blah, blah, blah. In reality, there’s roughly 60% overlap between the two lists, and that figure is higher in the premium top 10 or 20 spots.
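The rough 60% overlap figure is just the size of the intersection of the two top-N lists divided by N. A minimal sketch, using made-up placeholder names rather than real prospects:

```python
def overlap_fraction(list_a: list[str], list_b: list[str]) -> float:
    """Fraction of names appearing on both prospect lists,
    relative to the shorter list's length. Order is ignored,
    which matches the view that exact ranks don't matter much."""
    shared = set(list_a) & set(list_b)
    return len(shared) / min(len(list_a), len(list_b))

# Hypothetical top-5 lists for illustration:
ba_top5 = ["A", "B", "C", "D", "E"]
bp_top5 = ["A", "C", "E", "F", "G"]
print(overlap_fraction(ba_top5, bp_top5))  # → 0.6
```

Deliberately ignoring order here reflects the bias stated above: a player at #4 on one list and #9 on the other counts as agreement, not disagreement.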

And many players you might remember as typical BA tools-goof busts were actually ranked just as highly by BP. Ruben Mateo and Alex Escobar, I’m looking at you.

Conversely, there are performance-oriented positionless-slugger busts who were ranked just as highly by BA.