Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets


Tom <tombarfield@...>
 

Bruce, I found your comments on Tuesday and in this note insightful.
I also liked the insights I heard from Kent Greenes (I think).

In the next couple of months I am going to be asking my team at
Accenture to develop our strategy in this area. Here are some
off-the-cuff thoughts based on Tuesday's discussion and what Bruce
and Ravi have shared in this thread.

I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback from people
who feel very strongly about the content - either good or bad. In
other words - if something warrants a 2, 3, or 4 on a 5-point scale,
then I don't really care as much about the feedback.

If I download a piece of content that turns out to be a big help to
me (a score of 5 on a 5-point scale), I am probably more willing to
provide feedback saying thank you and recognizing that. It would be
like saying I only want ratings on the 5-star stuff.

If I download something that I really find to be worthless (a score
of 1 on a 5-point scale), I might be incented to provide feedback to
either improve it or get it out of the system so no one else has to
deal with it.
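
If we were to prototype this, the gating rule is simple enough to
sketch in a few lines of Python. The function name and the prompts
in the comments are mine, purely illustrative - not a description of
any existing system:

# Illustrative sketch of the "extremes only" rule above. The
# function name and prompt ideas are hypothetical.

def should_request_feedback(rating: int) -> bool:
    """Ask for written feedback only on strong reactions (1 or 5)."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be on the 1-5 scale")
    return rating in (1, 5)  # ignore the lukewarm 2s, 3s, and 4s

# A 5 might trigger a "what helped you?" prompt, while a 1 might
# trigger an "improve it or retire it?" prompt.
for rating in (1, 2, 3, 4, 5):
    print(rating, should_request_feedback(rating))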

Tom Barfield
Accenture


--- In sikmleaders@..., "Bruce Karney" <bkarney@a...> wrote:

Hi all,

In Tuesday's call, I made a comment about why I don't think "Amazon-
style ratings" are an effective KM strategy. Let me explain briefly
why I believe that, and what I think is a better approach.

Let me contrast two kinds of reviews or rating schemes. These are
not the only two kinds, but they represent the opposite ends of a
spectrum.

1. Peer review, prior to publication: This is the standard used by
scientists and academics. In essence, drafts of articles are
circulated to "experts" who offer advice and input prior to
publication. This input is used by the author (and perhaps the
editor of the journal) to improve the work BEFORE it is exposed
(published) to a wide audience.

2. Consumer review, after publication: Amazon, ePinions, and many
similar rating and awards systems use this approach. Because these
post-publication reviews cannot affect the published work, they
are "criticism" in the literary sense. In Amazon's case, no
credentials are required to post a review, so the reviewers are not
peers of the authors. Nobel prize winners and your local pizza
delivery guy have an equal voice in Amazon-land (and the pizza guy
probably has more free time).

Being able to write one's own review is a satisfying thing for the
reviewer, especially since it has only become possible to do this in
the last few years. However, the only way Amazon reviews impact the
world at large is to pull more readers toward a book or push a few
away. Isn't it better, especially in a business context, to use
techniques that IMPROVE THE QUALITY OF THE BOOK?

That's what Peer Review is designed to do. If business KM systems
can't support pre-publication Peer Review, they should at the very
least focus on post-publication Peer Review and document improvement.

I also mentioned that at HP, where I used to work, most document
ratings were 4's or 5's on a scale of 1-5. I have located a copy of
a study I did on the topic earlier in the year, and would like to
share my findings:

For a sample of 57 "Knowledge Briefs," which are 6-12 page technical
documents designed to inform and enlighten, there were 12,295
downloads and only 53 ratings/reviews. This is a ratio of 1 review
per 232 downloads, and slightly less than one review per document.
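
(For the arithmetic-minded, both ratios are easy to verify from the
figures above - nothing here beyond the numbers already reported:)

# Quick check of the ratios, using only the figures reported above.
briefs, downloads, reviews = 57, 12295, 53
print(round(downloads / reviews))   # 232 downloads per review
print(round(reviews / briefs, 2))   # 0.93 reviews per document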

ALL ratings were either 4 or 5. The 53 reviews were provided by 40
different individuals, so the vast majority of people who submitted
a review submitted only one, meaning (perhaps) that they lacked a
valid base for comparing the Knowledge Brief they were reviewing to
any other Brief. The most reviews submitted by a single person was
7, and the second-most was 3.

I contend that if you were perusing a listing of Knowledge Briefs on
a given subject, all of which were either unrated or had ratings
between 4.0 and 5.0, you would not have information that would steer
you toward the best documents or away from poor ones. You would
believe that any of the documents could be worthwhile, inasmuch as
none of them had low scores. Therefore, the rating scheme provides
NO value to the prospective reader. Worse yet, if a document were
rated 1, 2, or 3, that rating would probably be a single
individual's opinion, given how infrequently Knowledge Briefs are
rated at all.
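
The self-selection at work here can be shown with a toy simulation.
Every assumption in it is mine, chosen only for illustration - true
quality spread uniformly across documents, and only satisfied
readers (those who would score a 4 or 5) bothering to submit - but
it shows how a ratings list ends up all 4s and 5s regardless of how
good or bad the underlying documents are:

# Toy simulation of the selection effect described above. All
# assumptions are illustrative, not drawn from the HP data.
import random

random.seed(1)
# 57 documents with true quality spread uniformly over the 1-5 scale
documents = [random.uniform(1, 5) for _ in range(57)]

def observed_rating(quality):
    """A reader's score clusters near true quality, but only readers
    who score a document 4 or 5 submit; everyone else stays silent."""
    score = round(min(5, max(1, random.gauss(quality, 1))))
    return score if score >= 4 else None

ratings = [observed_rating(q) for q in documents]
submitted = [r for r in ratings if r is not None]
print(sorted(set(submitted)))  # every submitted rating is a 4 or a 5

Under those assumptions, every submitted score is a 4 or a 5 - the
same pattern as the Knowledge Brief data - so the visible averages
carry no steering information about underlying quality.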

My conclusion: don't RATE documents, but create systems to provide
detailed written feedback from readers to authors BEFORE publication
if possible, or AFTER publication if that's the best you can do.
Encourage COLLABORATION, not CRITICISM.

Cheers,
Bruce Karney
http://km-experts.com
