Ratings Criteria, Methodology and Overall Findings

We rank reviews for credibility in evaluating, testing and identifying the best products and services. Our highest ratings go to reviews that are the result of tests and that document their assumptions, test criteria and other methodology. Our lowest ratings go to sweeping, unsupported endorsements.

If an article is ranked low on our site, it is not necessarily "bad." The very same article can receive a high ranking in one of our reports and a low ranking in another. The ranking simply reflects how relevant the article is for a given category and how well the article meets our criteria. Article ratings also decay over time as they become dated.

Our editors evaluate articles on each of the following (assigning a 1 to 5 rating for each), and then average the result:

  • How current is the review compared to its peers?
  • How credible are its top picks against the top picks of other reviewers? That is, do the picks hold up to our scrutiny considering everything we read elsewhere?
  • How extensive and convincing is its methodological approach, testing and analysis compared to other reviews?
  • How expert is the reviewer? Is this reviewer obviously an expert and qualified to judge?
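
As a rough sketch, the averaging step above could be expressed as follows. The criterion names, function and sample scores here are illustrative assumptions, not our actual editorial tooling:

```python
def overall_rating(scores):
    """Average per-criterion 1-to-5 scores into a single overall rating."""
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("each criterion score must be between 1 and 5")
    return sum(scores.values()) / len(scores)

# Hypothetical example: one editor's scores for a single review.
review = {"currency": 4, "credibility": 5, "methodology": 3, "expertise": 4}
print(overall_rating(review))  # 4.0
```

Because the result is a simple average, a review that is weak on one criterion (say, an out-of-date report) can still score respectably if it is strong on the others.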

All else being equal, a review that compares many products will receive a much better rating than a single-product review. We like to know how clean the dishwasher gets the dishes compared to other dishwashers. How often does that coffee maker break down compared to others? How fast does that computer run applications compared to its competitors?

The current guidelines for our editors on how to apply these criteria run about 80 pages.

See also Frequently Asked Questions.

How We Handle Non-Comparative Reviews

Many publishers run collections of single-product reviews, which can sometimes rank highly in our All The Reviews Reviewed chart, especially in the absence of comparative tests by other reviewers. We allow a collection of reviews to be counted as a single review if: a) It's prepared by a single group of writers and editors; and b) We can tell from the collection which products the reviewers liked best overall. Such collections are more likely to earn higher ratings from us if the reviewers state how the products under review stack up against their peers. Reviews at CNet.com fit this description, for example.

Also, for some products and services, a few "gurus" may have extensive knowledge or provide incisive analyses. Their opinions can rank highly in our charts in the absence of comparative reviews.

Uncritical Reviews Rank Lower

Sources that do not perform product testing and analysis, and that offer uniformly enthusiastic recommendations, do not rank as high in our All The Reviews Reviewed charts as sources that perform competitive analyses.

We are especially hesitant to give high rankings to reviews that are too glib and uncritical or that sound like advertising copy, which is sometimes an indication that a publication is too close to its advertisers.

ConsumerSearch and Consensus

Although ConsumerSearch's Fast Answers are often based on a consensus of what the experts say, this is by no means a given. For example, when a new category of digital cameras hits the market, we heavily discount or ignore results from older reviews that do not include the new cameras. When a product category is undergoing rapid change, the consensus view can lag behind.

Consumer Review Postings

Professional print reviews better meet our ratings criteria than postings from individuals on various free websites. That's largely because sites that aggregate user opinions lack comparative information.

For more reading on the subject of consumer review web sites vs. professional reviews, see the article Web Wanderings, Consumers' Revenge: Online Product Reviews and Ratings by Greg Notess, The Consumer's Always Wrong by Mark Gimein at Salon.com, and Michelle Slatalla's "Rating The Raters" story in the New York Times archives (she still likes Consumer Reports).

In our work, we have noticed many postings that do not read as if they are from ordinary consumers. When we read a well-reasoned post followed by: "I suggest you stop reading and go buy this camera today!" it raises questions in our minds about the motivation of the author. We have come to suspect some of these posts are actually by companies in the business of planting commentary in discussion areas on the Internet.

The Influence of Advertising on Magazines

Consumer Reports has popularized the notion that a publisher cannot remain objective if it takes advertising. This argument influences how we assign our ratings, and many of our top-rated reviews come from newsletters and other publishers that don't take advertising.

Nevertheless, we have found that it doesn't pay to ignore excellent work in publications that take advertising. When we interview the editors and reporters who produce articles at these sources, we are struck by how sincere they are and how hard they work to remain independent of commercial considerations. Most editors and writers, particularly at larger publications, say they would quit their jobs rather than trade in their integrity.

We have heard that smaller publications sometimes issue a "publisher's request" to their editors asking for coverage of a product or service. This is issued after the publisher has sold advertising to a client and wants a nice article about that client or its product. Typically such requests result in "puff pieces" that are fairly easy to spot. Our ratings criteria weed out puff pieces.

When we read reviews in publications that take advertising, sometimes we have to read between the lines. We interpret "We think the Boston Acoustics A70 deserves consideration by serious speaker shoppers" as a tepid endorsement. On the other hand, the statement "We think the Boston Acoustics A70 is one of the best sounding speakers we have ever heard in its price class" deserves more attention.

Journalists at some magazines do downplay negative comments. Nevertheless, we have not heard of writers actually falsifying findings for the benefit of publishers. If you hear of this kind of thing, let us know, as we'd like to investigate it.

How Selected Reviewers Rank Given Our Criteria

In our reports, Consumer Reports is top ranked in our All the Reviews Reviewed charts 33% of the time. Consumer Reports meets our ratings criteria much better than its magazine competitors, Good Housekeeping or Consumer Guide. These publications don't do the advanced testing, disclose their methodology or perform the rigorous competitive product analysis that Consumer Reports does.

Consumer Reports tops our charts more often in family and baby categories, as well as small and large household appliances. In other categories, like computers, software, automobiles, and sports and recreational equipment, Consumer Reports is often beaten by more specialized review sources such as Cook's Illustrated, Backpacker Magazine, Car & Driver, CNet.com, Imaging-Resource.com or Popular Mechanics.

PC Magazine typically ranks high in our charts on rigorous competitive analysis. But in the fast-paced computer market, most printed magazines' long lead times mean that their reviews have a finite shelf life. Therefore, PC Magazine and other magazines can yo-yo up and down in our charts as their reports become out-of-date and are then updated.

PC World magazine has an interesting approach in that it maintains a ranked list of the best computer products -- making this magazine very current. But the editorial analyses in PC World's product rankings reports are often quite brief, and as a result these reports don't usually rank at the top of our charts. They can rank highly, however, when PC World's rankings are based on important new market conditions that other publishers have not caught.
