It's Time To Stop Using BABIP

Hi. I'm Dan. I thought about putting this on Beyond the Boxscore, but I "live" here. This post really isn't about BABIP -- it's about sabermetrics generally, and where we're going wrong. Please read the whole thing before vilifying me. If you still feel like vilifying me afterward, go watch this. It will make you feel better.

A week or so ago, the Mets' award-winning television team (well, the Gary and Ron parts) started talking sabermetrics -- specifically, BABIP. They tore it a new one, and for the most part it's because they didn't understand what BABIP meant, or did, or... whatever. It doesn't matter.

What matters is that they talked about BABIP. Which is horrible, because they're going to botch it 100% of the time. And that's our fault, not theirs. It's time to stop using it.

By itself, batting average on balls in play means nothing. It tells us how often a player gets a hit in the at bats where he doesn't homer or strike out, which in and of itself is worthless. We know better. Gary and Ron know better. BABIP doesn't differentiate between lineouts and popouts. It treats a double in the gap the same as a bloop single. Gary and Ron know it, and they laugh at our geekiness. We don't care how hard a guy hits a ball. We're nerds and the numbers don't tell us that. Literally:
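For anyone keeping score at home, here's the standard BABIP formula as it's usually computed -- hits that weren't homers, divided by balls in play. The stat line below is made up, just to show the arithmetic:

```python
def babip(hits, home_runs, at_bats, strikeouts, sac_flies=0):
    """Standard BABIP: (H - HR) / (AB - K - HR + SF).

    Strikeouts and homers are removed because the ball never lands
    in play; sac flies are added back because they do.
    """
    balls_in_play = at_bats - strikeouts - home_runs + sac_flies
    return (hits - home_runs) / balls_in_play

# Hypothetical season line: 180 H, 30 HR, 600 AB, 120 K, 5 SF
season_babip = babip(180, 30, 600, 120, 5)
```

Note what the formula doesn't contain: anything about how hard the ball was hit. That's the whole complaint.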

Gary: Conversely, if a pitcher has a particularly low batting average on balls in play, they like to tell you it’s going to rise eventually. Well, to me that doesn’t make any sense. Certain guys hit the ball harder than other guys hit it. Certain pitchers induce more groundballs or more weakly hit balls than others. That’s part of what you’re trying to do. Am I totally off base with that?

Ron: No I totally agree with you, I think that for the average hitter, to have a high average putting balls in play, it’s probably because they do have some lucky hits. But certain hitters, like [David] Wright, hit the ball hard almost all the time.

Of course, we know it too. We measure line drive rates and stuff like that. We have xBABIP! Yeah, go us! And no, we don't differentiate between the bloop single and the gap double -- well, not independent of line drive percentage etc. But that's the whole point. We're trying to measure how lucky the batter has been. We want to know what the batter's expected batting average is.

So let's just say that. Stop with the BABIP. Stop with the esoteric number which only means something in relation to another number (BA) and even then really needs to incorporate other numbers (e.g. LD%) to truly say what we want to say. Let's do this instead.

1) Call it "Expected Batting Average."

Obviously, BABIP isn't a player's expected batting average. BABIP is a tool we use to try to figure out a player's xBA (ooh! I acronymified it!), but that's OK. Let's figure out the xBA and call it xBA.

2) Explain it in words.

Start with this:

Know what the difference between hitting .250 and .300 is? It's 25 hits. 25 hits in 500 at bats is 50 points, okay? There's 6 months in a season, that's about 25 weeks. That means if you get just one extra flare a week - just one - a gorp... you get a groundball, you get a groundball with eyes... you get a dying quail, just one more dying quail a week... and you're in Yankee Stadium.

That makes a ton of sense. It has to. It's from Bull Durham.

But you know what? Dying quails are fluky. They're luck. Groundballs with eyes, same thing. Flares, gorps, whatever. Luck. That's what Crash is saying there. The difference between a .250 hitter and a .300 hitter is a little bit of luck each week.

Guys who hit the ball hard, they don't need as much luck. Turn those grounders into line drives and those dying quails into warning track doubles and they're hits -- to hell with luck. Luck is for guys like Alex Cora and Gary Matthews Jr. and that guy Rick Evans or something.

We say, screw that. Let's look at each at bat. If a guy hits a frozen rope that's caught, we know that's not his fault. Over time, that'll even out, and he'll get more hits. If a guy strikes out, that's an out every time. Same with a pop up. That won't even out. Homers? Always a hit. Grounders with eyes? Well, that's usually an out, and that'll even out over time too. We look at every single at bat and ask if the guy hit the ball hard enough to "make his own luck." That's xBA.
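One way to sketch that at-bat-by-at-bat logic: give each outcome type an assumed hit probability and average them. The probabilities below are purely illustrative -- a real xBA would use measured league rates (and batted-ball data), not these round numbers:

```python
# Assumed hit probability per at-bat outcome type.
# These numbers are illustrative placeholders, not measured rates.
HIT_PROB = {
    "strikeout": 0.0,    # an out every time
    "popup": 0.02,       # almost never falls in
    "groundball": 0.24,  # sometimes finds eyes
    "fly_ball": 0.21,    # non-HR fly balls
    "line_drive": 0.68,  # frozen ropes usually land
    "home_run": 1.0,     # always a hit
}

def expected_batting_average(at_bats):
    """Average the assumed hit probabilities over a list of at-bat types."""
    return sum(HIT_PROB[ab] for ab in at_bats) / len(at_bats)

season = ["line_drive", "groundball", "strikeout", "home_run", "popup"]
xba = expected_batting_average(season)
```

The point of the structure, not the numbers: the guy who hits line drives gets credit for them whether or not the center fielder ran one down.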

(And you know what? At the end of the day, that's what BABIP turns into, too. Except that BABIP sucks, because it doesn't actually start there, in either name or by its equation.)

3) Drop the arrogance of specificity. Use ranges when possible.

We're measuring luck. Luck isn't exact. So we'll never be right on the money. You'll never be able to find a season where a significant number of players have an xBA equal to their actual batting average. That makes us look stupid, when in fact, we're just being arrogant -- by being so exact.

We should use ranges. xBA should be the 50% confidence interval, not the midpoint thereof. More made-up numbers: if a guy's xBA is .285, it's probably better expressed by saying that it's between .279 and .291, or whatever. It makes that .290 BA not seem "lucky" (it really isn't), but it tells us that a .274 is really unlucky. In other words, it does the job -- without the excruciatingly nerdy exactitude we are (wrongly) associated with.
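Here's one simple way such a range could be built -- a 50% normal-approximation interval that treats each at bat as an independent coin flip. That's a simplification (real at bats aren't independent coin flips, and the exact endpoints in the paragraph above were made up), but it shows the shape of the idea:

```python
import math

def xba_range(xba, at_bats, z=0.674):
    """50% interval around a point xBA estimate.

    z=0.674 is the two-sided 50% quantile of the normal distribution;
    the standard error treats each AB as an independent Bernoulli trial,
    which is a deliberate simplification.
    """
    se = math.sqrt(xba * (1 - xba) / at_bats)
    return (round(xba - z * se, 3), round(xba + z * se, 3))

low, high = xba_range(0.285, 500)
```

With more at bats the interval tightens, which is exactly the behavior you want: more data, less hedging.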

It's our job to communicate this stuff. The problem isn't that they need to get smarter (they're not dumb), or that they should figure it out themselves (they're busy), or that they don't respect us (true, but fixable). The problem is semantic, not logical, and semantic problems can -- and indeed, must -- be fixed by revising our language. It's time to stop using BABIP.

This FanPost was contributed by a member of the community and was not subject to any vetting or approval process.