Wine geeks love to obsess over vintages. This is justified to a point, but I would still argue that vintage evaluations are more misleading than informative. Now, anyone who knows anything about wine knows that some years are better than others. That's not a miraculous discovery, but most consumers don't buy years. They buy wines in a year. So, once past the basics, there are lots of twists and turns, few of which are accounted for by a vintage chart.
First, almost as a parenthetical, and not even the most important point: how much weight does an evaluator give to various factors that affect vintage evaluations but not wine evaluations? Will the evaluator make the same choices that you do? If scoring wines is sometimes very difficult and includes some degree of subjectivity, you can add several exclamation points for vintage evaluations. There are a lot of factors in vintage chart generalizations that may not make any difference to you whatsoever if you choose the right bottle in a vintage.
For example, I certainly feel that I must give some credit to a vintage for consistency. After all, in evaluating a vintage, I'm trying to give some insight into the year overall. So, let's say you have a vintage that is very consistent—but fairly bland—with few great wines and a lot of average ones. How do you weigh that against an erratic vintage that sometimes shows greatness, but has just as many failures (maybe more) and no consistency at all? To put flesh on this bare-bones hypothetical, I might suggest 2012 in Douro for the first and 2013 in Douro for the second—but that, too, is a generalization. Personally, I prefer the 2013s overall. Still, isn't it fair to take into account that the consistent vintage will rarely deceive you? If you're sitting in a restaurant choosing blind, perhaps you're a lot safer choosing the more consistent year. It may not make you go "wow," but it won't make you go "there's another $50 I threw down the drain."
Now we have what might be fairly similar vintage evaluations if reduced to a grade or a score—yet, depending on the circumstances and how the evaluator came to that decision, it may not help you at all. In that restaurant, it might. If you're looking for top wines to cellar, or things in a style that you like, it won't. If you're drinking the successes, you're perfectly happy with either vintage. Why should you care about an evaluation based on another 3,000 wines that you'll never buy or drink anyway? So, which vintage do you think should get the better rating—the consistent one or the one with a lot of misses but a better high-end? To me, that is not an easy answer. Unless you're going to simply say one type of year is always better to you philosophically, there's going to be "give some credit, take some credit away" in the final analysis. Ergo, in two radically different years, the vintages may wind up with roughly similar ratings. "Different, not better." That's not all that enlightening in telling you what to buy, is it?
So, that's the first problem. The second problem is perhaps even more serious, with far more twists and turns that are harder to figure out. Even if the vintage generalization on quality is accurate by your standards, it falls apart quickly when you look at more individual circumstances. It may not matter at all, based on what happened with an individual region or wine. What was the particular problem in a vintage—rain, hail, frost, heat spikes? That's the headline question, but it only tells you that you have to ask more questions. Did the viticulturist make the right decision on when to pick at a particular estate? What is the terroir? Is the grape early ripening? Is it a white or a red? In 2013 in Portugal, for instance, all of those questions are pertinent considering the torrential rains late in the harvest. Wines that came in before the rains were quite good. Whites were generally excellent. But the vintage is a bit erratic, since many didn't harvest in time, especially in late-ripening terroirs. In other words, Douro Superior is going to be more consistent than Bairrada for reds.
Confused yet? You should be. Vintage charts rarely have enough nuance, but how could they? You'd have to rate not only every color and every sub-region, but also start distinguishing grapes (early ripening?) and particular wineries. At a certain point, you might begin to wonder why you don't just read the tasting notes. Yes, exactly the point! Vintage charts are the world's most blatant generalizations. Or, by way of analogy, I remember the title of a book I once read: How to Lie With Statistics.
Let's take the 2014 vintage in Portugal for another real-world example. In most of the country, this is a truly miserable vintage, one of the country's worst. Since I began reviewing Portugal in 2006, it is likely the worst vintage, challenged only by the diluted 2006 vintage itself. Why? That's an important question, because the reason why vintages are poor holds the clue as to whether the wine you're looking at is poor. In much of Portugal, rain began around mid-September in the middle of harvest, and continued off-and-on for a couple of weeks in many places. Winemakers kept looking for a pause, but there really wasn't one for many properties. Eventually, many had no choice but to pick. The result: they wound up with a lot of wines that are either diluted or unripe—or both—depending on what choice they made.
That's the broad, general headline. If you've been reading along, you know it now gets complicated. In central Douro, Cima Corgo, the famous heart of the region, the vintage was far below average, but there were still people who had success. While the wines often seem atypical (a little more like cooler-climate wines, not quite as rich and sweet), they aren't always bad so much as different. A good example of this is Lima & Smith. They made some pretty fine stuff—but the wines certainly lack the vivid flavor of other years, the fruit that you expect to see in Cima Corgo.
That's only the beginning of our exploration, though. In Douro Superior, harvests were largely finished before the rains started, so the 2014 brush does not obliterate Douro Superior as much. Then there are the whites, which weren't bad in Portugal as a whole. It's not a great vintage for whites—they are a bit on the lighter side, some did not wholly escape, and even without the rains 2014 did not seem destined to make many great wines—but they have freshness and they drink beautifully. I've generally enjoyed them. Among the whites, there is a lot to like and some big winners. They usually aren't equal to the 2013 or 2015 whites, but they don't show the same issues as the reds in most places.
Clear now? Well, wait. There's more. Let's leave Douro. In Dão, the 2014 vintage was probably worse than in Cima Corgo. Then, there's Bairrada. It was a disaster. I joked recently to Filipa Pato and William Wouters (Bairrada) that the best 2014 Baga was made in Setúbal by Luis Simões for Brejinho da Costa. Another producer in Bairrada (V. Puro/Outrora) told me that they looked at the poor-quality grapes and didn't even bother to pick them. Then, there's Southern Alentejo. Some, like Malhadinha Nova, told me that their early harvest was finished by the time the rains started. But Northern Alentejo is yet another, different story. Luis Louro (Monte Branco) spoke at length about how he and his father (Miguel at Quinta do Mouro) took different approaches, desperately trying to mitigate problems, guessing about how to roll the dice in terms of the picking dates. (Miguel wanted to risk waiting, while Luis favored earlier picking.)
So, even in the middle of a particularly awful vintage, we can find some sunlight and exceptions. Most of all, we can find some nuance. If you're sitting in a restaurant befuddled by a wine list, don't order a 2014 red. But if you know what the winery did, what the terroir was and how the grapes ripened, there may not be much of a problem. After all, you are drinking the wine, not the vintage.
To be sure, there are a couple of uses for vintage generalizations. They're useful if you go into that restaurant, as described above, knowing nothing about any of the wines and having to choose blind. They're also useful for those rare years when you say "Fabulous, everything works," or vice versa. (I'd still say it is rarely that clear.) Most importantly, though, they are useful in stylistic terms rather than qualitative terms—which, you'll note, is not really even addressed by vintage charts.
Regarding that key stylistic issue, here's another real-world example. The 2008 and 2009 Douro vintages both have their inconsistencies. Neither is quite a great vintage. They both made some very nice wines and some so-so wines. Style is the far bigger issue, however. In 2009, the wines tended to be dense, very big, muscular and ripe, often to a fault. Many estates had alcohol spikes; some couldn't control the alcohol, resulting in noticeable heat. There are more than a few wines in the country that have some "overripe" nuances to the fruit, too. The ones that work are beautiful, though. There were many wonderful successes, along with a fair share of failures. Then, let's add another level of nuance. While the vintage is erratic in warm areas, in cooler climates (like Bairrada) it worked out better. Those wines may not always be the most typical—they tend to be rather accessible and ripe—but they are generally in balance, without problems.
Then, let's remember to add the whites. The 2009 whites were so-so—ripe and sexy, immediately appealing, but not always adequate in structure, sometimes lacking a bit of freshness. (There are exceptions to everything, of course.) Is your head hurting yet? Sorry, we're not done. Let's talk about the lower level. One thing I think 2009 tended to do well is provide some extra oomph for the lower-level wines. (An excellent example is the Pó de Poeira.) They have more concentration than normal. As long as the producer didn't dump burnt grapes into the cheaper wines, they are pretty nice.
The 2008 vintage was the polar opposite. The wines are very elegant, crisp and fresh, lighter than usual instead of denser. Some of them do lack mid-palate concentration. There are many successes, though. Let's add another wrinkle: the whites are better in 2008. They are lighter than the 2009s, but they have fine acidity and freshness that allows them to age well. The 2009s generally won't. Want a mature white in a restaurant? Pick the 2008 every time.
So, with all of that to consider, which of those years would you say is better? That encompasses a lot of factors, as noted. I think the flaws in 2009 are more serious, even if the big successes are pretty fine. On the other hand, the lower-level wines in 2009 are often wonderful, because they have a little extra something they mostly do not get. All things considered, that's a lot of "on the one hand" and "on the other hand," isn't it? Don't blame me for going with my gut at times, rather than trying to reduce this to some scientific formula, but no matter how you do it, the final "sum of all parts" generalization that we get with vintage charts does not help you much. In evaluating the two years, I have them roughly equal, with a one-point edge to 2009. Yet, if I'm again in that now-famous restaurant choosing blind while knowing the stylistic differences as well as the vintage ratings, I'm more likely to pick a 2008: (a) because of its style—I appreciate the finesse and freshness of the vintage, that old-school elegance; (b) because it's safer and I am not likely to be appalled, even if the wine is not a great success; and (c) because if I want a white, 2008 is the obvious choice.
And that's a wrap, an end to this gentle diatribe. As you might have noticed, those three factors mostly are not revealed by the vintage chart. Nothing is ever easy, and vintage charts are the easy way out, meaning that they are as likely to mislead as to inform.