Do Reviews with Ratings Help?

  • cpp
    replied
    For me it all depends on the reviewer and whether or not you trust their opinion.

  • MylesBAstor
    replied
    Funny thing is that I've been in stores and seen people carrying Stereophile's annual equipment guide while shopping for equipment. And agree or disagree, many of us do take a peek at the listings. (Personally I find the analog/turntable and at times the speaker ratings confusing, in large part because different reviewers submit different products, and especially because of what Stereophile deems full-range versus not-full-range speakers.)

    On the other hand, there's no question that a lot of audiophiles read that issue; it's one of the most popular and it regularly sells out. The problem with the Stereophile guide is how you integrate so many different reviewers' favorite pieces of gear. Certainly some reviewers are harder graders than others, and no single reviewer has heard every piece of equipment, though obviously some have heard more than others. I think Stereophile also changed the guide slightly by adding an editor's choice section. To me that makes more sense.

    The one system that seems to get the most comments is Martin Colloms's 0-100 rating system. Of course he leaves himself some slack, but like being the first person out on the rings or the ice, do you ever get the best score? Martin's advantage over the others, though, is the amount of equipment he gets to hear in his own system, which only adds to the value of his ratings.

    Then there's ETM's system. One thing I do like about it is that they break the sound down into different categories, which to me makes sorting out the sound somewhat easier than a broad overall rating. But of course there are also components whose performance exceeds the sum of their parts because of synergies that come into effect.

  • Guest
    Guest commented on a reply
    And he has hosed over others.

  • Johnny Vinyl
    replied
    I always enjoyed reading the short summaries by 3 reviewers in UHF Magazine (out of Quebec). One reviewer would take the lead and do the proper review and then 2 others would chime in with their opinion(s). There wasn't always agreement either.

  • Guest
    Guest commented on a reply
    Cultivate your ears and familiarity with live music. :-)

  • allenh
    replied
    I think as a novice in this hobby, rating systems could help guide you in gear selection in a quick, concise form, for better or for worse. But as experience grows, I would think gear ratings become less important, if not ignored, and your own experience and opinion matter more.

  • Rob
    replied
    Originally posted by atmasphere View Post

    The problem with Jerry of course is that he is perfectly willing to change those ratings and the text of his reviews months or years after the fact. One time he reviewed one of our kit amplifiers, which he had bought used (and which was a rat's nest inside). It did not fare well; when we confronted him about the problem of reviewing a poorly built kit as a factory-built unit, he instantly changed a positive review of one of our preamps to a negative one and took away some of the 'LP' score he had given it.

    This is not saying that there is a problem with the rating idea in itself, just that there is a problem with Jerry.
    Yikes. I don't take his word as gospel and disagree with his points of view from time to time. When he reviewed ARC gear he had to buy the review unit from a dealer; the manufacturer wouldn't provide one. At least JS tells the reader up front that he's not ARC's biggest fan, nor is he a big fan of the ARC 'sound', whatever that is.

  • atmasphere
    replied
    Originally posted by Rob View Post

    Jerry Seigel ranks components under review from 1 to 10 LPs. IMO that system has some validity, because a 5 LP score doesn't necessarily mean it's bad; it reflects the reviewer's biases and system synergy, and it leaves room for the reader to decide whether or not a component merits further investigation.
    The problem with Jerry of course is that he is perfectly willing to change those ratings and the text of his reviews months or years after the fact. One time he reviewed one of our kit amplifiers, which he had bought used (and which was a rat's nest inside). It did not fare well; when we confronted him about the problem of reviewing a poorly built kit as a factory-built unit, he instantly changed a positive review of one of our preamps to a negative one and took away some of the 'LP' score he had given it.

    This is not saying that there is a problem with the rating idea in itself, just that there is a problem with Jerry.

  • Guest
    Guest replied
    Do you like it, or is it useful, when a reviewer sums up an equipment review using a rating system employing a numerical value, a certain number of stars, a percentage improvement, etc.? Or gives the DUT five stars for tonality, 10 for soundstaging, etc.?
    No. Numerical ratings are silly.

    Here's the minimum:

    - A brief description of the unit's feature set.
    - An account of the unit's design goals as stated by the designer.
    - Any particular problems or issues, caveats, requirements, and perhaps comments on apparent physical quality.
    - The review usage context - what other gear was in play during the review period.
    - A report on sonics. At least a portion of this must include specific, identified passages of referenced music that support the reviewer's claims, mostly drawn from music that is reasonably available to a broad audience. (It p***es me off that Valin invariably uses unreleased pre-orders or super obscure recordings that make it impossible to replicate the passage he describes.)
    - Brief sonic comparisons with a like-type component, preferably in the same price range; otherwise there is no context or ground (Grund) for a sonic description. This is not always feasible; for example, the reviewer of an audio rack or acoustic dressing may not have something comparable for comparison.
    - A brief conclusion wherein the reviewer can sum up and express his opinion on relative value or overall merit, etc.
    - Most of all, all of the above should be written in clear, grammatically correct sentences, with organized paragraphs and some sort of logical progression that doesn't bounce the reader all over the place.

    Of course I'm biased.

  • Rust
    replied
    A few comments from the cheap seats.

    I rather like what some of the British publications do: review five or six similar components at around the same price point, for instance integrated amplifiers. All tested with the same source, wires, and speakers, pretty much as the typical consumer would use them.

    Note that typically such reviews are of relatively affordable gear. You know, stuff I could actually buy myself.

    Such reviews wouldn't work for more expensive gear for a variety of reasons. At a certain level gear becomes so idiosyncratic that system matching is a must. At that point the review becomes less about the actual DUT and more about system tuning/matching to determine ultimate performance of the system as a whole.

    I think both styles of reviewing are relevant albeit for different audiences and price points.

  • Rust
    commented on a reply
    I had an H2 Mk IV when I was a kid and it was the E-ticket ride (for those of you who know what an E-ticket is). And it wasn't quite as bad as some pundits suggested. Well, at least until I stuck a set of expansion chambers on it and finally got the jetting right. Then it was purely evil, wicked fun.

  • MylesBAstor
    replied
    Originally posted by Rob View Post

    Breaking it down like Steve Rochlin's reviews? I'd rather see a composite score that takes all aspects into consideration, because the reader is less likely to misinterpret the reviewer's intent; that's what descriptive writing is for. Jerry Seigel ranks components under review from 1 to 10 LPs. IMO that system has some validity, because a 5 LP score doesn't necessarily mean it's bad; it reflects the reviewer's biases and system synergy, and it leaves room for the reader to decide whether or not a component merits further investigation.

    I dislike Stereophile's 'Class' system (e.g., A, B, C) the most. It's confusing for the short-timer who's building a system based on these rankings; it's pure rubbish.
    Or like Martin Colloms does.

    Does anyone remember (I know MEP does) when Dave Wilson used to draw diagrams?

  • MylesBAstor
    replied
    Originally posted by Johnny Vinyl View Post
    What is a DUT?
    Device Under Test.

  • Steve Lefkowicz
    commented on a reply
    A friend in college had the 500cc triple from this line (the H1 or H3, I forget). Fast as all get-out, but a death trap otherwise. I much preferred the Norton Combat 750 twin I had at the time.
    Neither bike was terribly reliable.

  • Steve Lefkowicz
    replied
    They are only relative to that particular reviewer's past reviews. We used to have stars (quality) and checks (value) at Listener. I remember reviewing the Rega P3 and giving it a certain number of checks and stars. It started a long internet discussion, because Herb Reichert (one of my two favorite audio writers, along with Artie) had reviewed a different, more expensive table and given it a half-star lower rating. You'd think we had just told people their kids were ugly. That star discrepancy totally invalidated all of our reviews for some people.

    Then again, I only look at reviews as a personal tale of how a product worked in my system. I don't expect anyone else to know or like the music I play (which is why I don't often mention specific tracks in my reviews), and I don't particularly care about what goes on inside the box (I spend all day at work dealing with complex technology). That, to me, is the designer's choice, and I just want to see what the results are.

    The famous naval architect Bob Perry (quite the audiophile, too) used to review sailboats for Sail magazine, but he reviewed the engineering and design on paper, never actually sailing the boats. Great articles that gave great insight into what yacht design was all about. However, they never gave an indication of how the boats sailed or what it was like to be on the water in one. I also subscribed to Practical Sailor, a great monthly newsletter that reviewed sailboats strictly on the water in terms of how they worked. Totally the opposite of Perry's articles. It was really nice getting both viewpoints.

    I consider myself more of the Practical Sailor type than the Bob Perry type, though it is nice to have both articles to read.

    Odd points: before I knew Perry was an audiophile and an occasional letter writer to Positive Feedback (in the old paper days), he was who my wife and I had decided would design our first custom sailboat if we could ever afford one. And Practical Sailor was owned by the same company that bought Listener.
