Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets
Tom <tombarfield@...>
Bruce, I found your comments on Tuesday and in this note insightful.
I also liked the insights I heard from Kent Greenes (I think).
In the next couple of months I am going to be asking my team at
Accenture to develop our strategy in this area. Here are some
off-the-cuff thoughts based on Tuesday's discussion and what Bruce
and Ravi shared in this thread.
I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback from people
who feel very strongly about the content - either good or bad. In
other words, if something warrants a 2, 3, or 4 on a 5-point scale,
then I don't really care as much about the feedback.
If I download a piece of content that turns out to be a big help to
me (score of 5 on a 5-point scale) I am probably more willing to
provide feedback saying thank you and recognizing that. It would be
like saying I only want ratings on 5-star stuff.
If I download something that I really find to be worthless (score
of 1 on a 5-point scale) I might be incented to provide feedback to
either improve it or get it out of the system so no one else has to
deal with it.
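To make the idea concrete, here is a rough sketch in Python of what
an extremes-only feedback flow might look like. Everything in it
(handle_rating, prompt_for_feedback, the KB-0042 id) is a name I
made up for illustration, not anything we have built:

    # A minimal sketch of "extremes-only" feedback collection.
    ratings_log = []   # (document_id, rating) pairs
    comments_log = []  # (document_id, kind, placeholder_text) triples

    def prompt_for_feedback(document_id, kind):
        """Stand-in for a UI prompt; a real system would collect free text."""
        comments_log.append((document_id, kind, "<free-text comment>"))

    def handle_rating(document_id, rating):
        if rating not in (1, 2, 3, 4, 5):
            raise ValueError("rating must be on the 1-5 scale")
        ratings_log.append((document_id, rating))
        if rating == 5:
            # Strong positive: invite a thank-you / endorsement note.
            prompt_for_feedback(document_id, "endorsement")
        elif rating == 1:
            # Strong negative: invite suggestions to improve the item or
            # flag it for removal so no one else has to deal with it.
            prompt_for_feedback(document_id, "improve_or_retire")
        # Scores of 2, 3, and 4 fall through: recorded, but not chased.

    handle_rating("KB-0042", 5)  # prompts for an endorsement
    handle_rating("KB-0042", 3)  # logged silently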
Tom Barfield
Accenture
--- In sikmleaders@..., "Bruce Karney" <bkarney@a...>
wrote:
think "Amazon-
Hi all,
In Tuesday's call, I made a comment about why I don't
style ratings" are an effective KM strategy. Let me explainbriefly
why I believe that, and what I think is a better approach.these
Let me contrast two kinds of reviews or rating schemes. These are
not the only two kinds, but they represent the opposite ends of a
spectrum.
1. Peer review, prior to publication: This is the standard used by
scientists and academics. In essence, drafts of articles are
circulated to "experts" who offer advice and input prior to
publication. This input is used by the author (and perhaps the
editor of the journal) to improve the work BEFORE it is exposed
(published) to a wide audience.
2. Consumer review, after publication: Amazon, ePinions, and many
similar rating and awards systems use this approach. Because
post-publication reviews cannot affect the published work, they are
not "criticism" in the literary sense. In Amazon's case, no
credentials are required to post a review, so the reviewers are
peers of the authors. Nobel prize winners and your local pizza
delivery guy have an equal voice in Amazon-land (and the pizza guy
probably has more free time).
Being able to write one's own review is a satisfying thing for the
reviewer, especially since it has only become possible to do this in
the last few years. However, the only way Amazon reviews impact the
world at large is to pull more readers toward a book or push a few
away. Isn't it better, especially in a business context, to use
techniques that IMPROVE THE QUALITY OF THE BOOK?
That's what Peer Review is designed to do. If business KM systems
can't support pre-publication Peer Review, they should at the very
least focus on post-publication Peer Review and document
improvement.
I also mentioned that at HP, where I used to work, most document
ratings were 4's or 5's on a scale of 1-5. I have located a copy of
a study I did on the topic earlier in the year, and would like to
share my findings:
For a sample of 57 "Knowledge Briefs," which are 6-12 page
technical documents designed to inform and enlighten, there were
12,295 downloads and only 53 ratings/reviews. This is a ratio of 1
review per 232 downloads, and slightly less than one review per
document.
ALL ratings were either 4 or 5. The 53 reviews were provided by 40
different individuals, so the vast majority of people who submitted
a review submitted only one, meaning (perhaps) that they lacked a
valid base for comparing the Knowledge Brief they were reviewing to
any other Brief. The most reviews submitted by a single person was
7, and the second-most was 3.
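For anyone who wants to check the arithmetic, here it is as a few
lines of Python (the figures are the ones from the sample above):

    # Ratios behind the Knowledge Brief sample.
    briefs = 57
    downloads = 12295
    reviews = 53
    reviewers = 40

    print(round(downloads / reviews))     # 232 downloads per review
    print(round(reviews / briefs, 2))     # 0.93, just under one review per Brief
    print(round(reviews / reviewers, 2))  # 1.32 reviews per reviewer on average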
I contend that if you were perusing a listing of Knowledge Briefs
on a given subject, all of which were either unrated or had ratings
between 4.0 and 5.0, you would not have information that would steer
you toward the best documents or away from the poor ones. You would
believe that any of the documents could be worthwhile, inasmuch as
none of them had low scores. Therefore, the rating scheme provides
NO value to the prospective reader. Worse yet, if there were a
document rated 1, 2 or 3, that rating would probably be a single
individual's opinion because of the infrequency with which Knowledge
Briefs are rated at all.
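A toy example of why a single low rating is so hard to interpret at
these volumes (the ratings below are invented for illustration, not
taken from the HP data):

    # One dissenting vote dominates when reviews are this scarce.
    def avg(ratings):
        return sum(ratings) / len(ratings)

    doc_a = [5]        # one enthusiastic reader
    doc_b = [5, 5, 1]  # two enthusiasts plus a single dissenter

    print(avg(doc_a))            # 5.0
    print(round(avg(doc_b), 2))  # 3.67: one opinion drags a liked doc below 4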
My conclusion: don't RATE documents, but create systems to provide
detailed written feedback from readers to authors BEFORE publication
if possible, or AFTER publication if that's the best you can do.
Encourage COLLABORATION, not CRITICISM.
Cheers,
Bruce Karney
http://km-experts.com