Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets
Mark May
I really enjoy this topic because it has great potential to benefit our
practitioners in the field and because it is a concept that everyone can
envision, given the ubiquity of the Amazon experience.
My thinking starts with how one would propose to use rating/feedback data.
I can see at least three possible uses - 1) Provide confidence or caution
to people who are considering using an IC artifact; 2) Help order search
results to put higher rated content above lower rated content; 3) Provide
input to content managers either to promote well rated IC or improve or
retire lower rated IC.
These are all worthwhile and valuable uses of ratings/feedback data.
However, for many of the reasons people have cited below, I don't think
a five star rating system provides data of real value for these ends. In
addition, most content is not rated by anyone at all, is rated by too few
people, or takes too long to accumulate enough ratings to represent a
consensus opinion.
Given these limitations of an Amazon-like system, the IBM team is trying
something a bit different. First of all, we have deployed the standard
five star system and allowed space for comments on all artifacts in the
major repositories. We feel that users have become conditioned through
other internet experiences to expect this kind of feedback approach.
However, we don't use that raw data by itself for KM purposes. We combine
the rating with other data to impute a VALUE for that artifact. The other
data includes number of times it was read, downloaded, forwarded to others
and printed. These factors are weighted so that a download counts for 10
times the value of a read, for example. We also give significant extra
weight to any comments since we think that the comments are much more
valuable than the ratings.
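To make that weighting concrete, here is a minimal sketch of how such an imputed value might be computed. The only weight stated above is that a download counts for 10 times a read; every other factor, weight, and name below is an illustrative assumption, not IBM's actual formula.

    # Illustrative sketch of an "imputed value" score for one artifact.
    # Only the 10x download-vs-read weighting comes from the description above;
    # all other weights and names are assumptions made up for this example.
    def imputed_value(reads, downloads, forwards, prints, ratings, comments):
        usage = (1 * reads          # a read is the baseline unit of value
                 + 10 * downloads   # a download counts for 10 times a read (stated above)
                 + 10 * forwards    # assumed: forwarding signals as much value as downloading
                 + 5 * prints)      # assumed weight
        avg_rating = sum(ratings) / len(ratings) if ratings else 0
        return (usage
                + 20 * avg_rating      # assumed weight on the five-star average
                + 50 * len(comments))  # comments weighted heavily, as described above

    # Hypothetical artifact: 120 reads, 15 downloads, 3 forwards, 6 prints,
    # three ratings and one comment.
    print(imputed_value(120, 15, 3, 6, ratings=[4, 5, 5], comments=["very useful"]))

Search results, or a content manager's inventory report, could then simply be sorted by this score, highest first.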
We have deployed this approach and are actively calculating imputed values
now. However, we have not yet begun to use the data. One of our highest
priorities is to help people find stuff faster, so we are eager to use
these imputed values to order search results. We also plan to make it
available to content managers so that they can see what is accessed and
"valued" most (and least) among their inventory of content. The jury is
still out on how much our practitioners will actually benefit from this
imputed value approach. We have some pilots planned for later this year to
see whether it works as well in practice as we think it should.
Mark May
IBM
"Tom"
<tombarfield@sbcg
lobal.net> To
Sent by: sikmleaders@...
sikmleaders@yahoo cc
groups.com
Subject
[sikmleaders] Re: Amazon Ratings -
01/20/2006 06:02 One Thumb Down
PM
Please respond to
sikmleaders@yahoo
groups.com
Bruce, I found your comments on Tuesday and in this note insightful.
I also liked the insights I heard from Kent Greenes (I think).
In the next couple of months I am going to be asking my team at
Accenture to develop our strategy in this area. Here are some
off-the-cuff thoughts based on Tuesday's discussion and what Bruce and
Ravi shared in this thread.
I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback from people
who feel very strongly about the content - either good or bad. In
other words, if something warrants a 2, 3 or 4 on a 5-point scale,
then I don't really care as much about the feedback.
If I download a piece of content that turns out to be a big help to
me (a 5 on a 5-point scale), I am probably more willing to
provide feedback saying thank you and recognizing that. It would be
like saying I only want ratings on 5-star stuff.
If I download something that I really find to be worthless (a 1 on a
5-point scale), I might be incented to provide feedback to either
improve it or get it out of the system so no one else has to deal
with it.
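A minimal sketch of that policy, using hypothetical names and the 1-and-5 thresholds from the paragraphs above, just to make the idea concrete:

    # Hypothetical sketch of the idea above: only solicit detailed written
    # feedback when a rating sits at either extreme of the 5-point scale.
    def should_request_feedback(rating):
        # 5 = "big help, thank the author"; 1 = "worthless, improve or retire it".
        # A 2, 3 or 4 is recorded, but no follow-up feedback is requested.
        return rating in (1, 5)

    for r in range(1, 6):
        print(r, should_request_feedback(r))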
Tom Barfield
Accenture
--- In sikmleaders@..., "Bruce Karney" <bkarney@a...>
wrote:
think "Amazon-
Hi all,
In Tuesday's call, I made a comment about why I don't
style ratings" are an effective KM strategy. Let me explainbriefly
why I believe that, and what I think is a better approach.these
Let me contrast two kinds of reviews or rating schemes. These are
not the only two kinds, but they represent the opposite ends of a
spectrum.
1. Peer review, prior to publication: This is the standard used by
scientists and academics. In essence, drafts of articles are
circulated to "experts" who offer advice and input prior to
publication. This input is used by the author (and perhaps the
editor of the journal) to improve the work BEFORE it is exposed
(published) to a wide audience.
2. Consumer review, after publication: Amazon, ePinions, and many
similar rating and awards systems use this approach. Because
post-publication reviews cannot affect the published work, theynot
are "criticism" in the literary sense. In Amazon's case, no
credentials are required to post a review, so the reviewers are
peers of the authors. Nobel prize winners and your local pizzain
delivery guy have an equal voice in Amazon-land (and the pizza guy
probably has more free time).
Being able to write one's own review is a satisfying thing for the
reviewer, especially since it has only become possible to do this
the last few years. However, the only way Amazon reviews impactthe
world at large is to pull more readers toward a book or push a fewimprovement.
away. Isn't it better, especially in a business context, to use
techniques that IMPROVE THE QUALITY OF THE BOOK?
That's what Peer Review is designed to do. If business KM systems
can't support pre-publication Peer Review, they should at the very
least focus on post-publication Peer Review and document
of
I also mentioned that at HP, where I used to work, most document
ratings were 4's or 5's on a scale of 1-5. I have located a copy
study I did on the topic earlier in the year, and would like totechnical
share my findings:
For a sample of 57 "Knowledge Briefs," which are 6-12 page
documents desighned to inform and enlighten, there were 12,295review
downloads and only 53 ratings/reviews. This is a ratio of 1
per 232 downloads, and slightly less than one review perdocument.
40
ALL ratings were either 4 or 5. The 53 reviews were provided by
different individuals, so the vast majority of people whosubmitted
a review submitted only one, meaning (perhaps) that they lacked ato
valid base for comparing the Knowledge Brief they were reviewing
any other Brief. The most reviews submitted by a single personwas
7, and the second-most was 3.on
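As a quick check of those ratios, using only the figures above (the rounding is mine, not part of the original study):

    # Back-of-the-envelope check of the ratios in the HP Knowledge Brief sample.
    briefs = 57        # Knowledge Briefs in the sample
    downloads = 12295  # total downloads
    reviews = 53       # ratings/reviews received
    reviewers = 40     # distinct individuals who submitted a review

    print(round(downloads / reviews))     # 232 -> about 1 review per 232 downloads
    print(round(reviews / briefs, 2))     # 0.93 -> slightly less than one review per Brief
    print(round(reviews / reviewers, 2))  # ~1.3 reviews per reviewer on average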
I contend that if you were perusing a listing of Knowledge Briefs on a
given subject, all of which were either unrated or had ratings between 4.0
and 5.0, you would not have information that would steer you toward the
best documents or away from the poor ones. You would believe that any of
the documents could be worthwhile, inasmuch as none of them had low scores.
Therefore, the rating scheme provides NO value to the prospective reader.
Worse yet, if there were a document rated 1, 2 or 3, that rating would
probably be a single individual's opinion, because of the infrequency with
which Knowledge Briefs are rated at all.

My conclusion: don't RATE documents, but create systems to provide detailed
written feedback from readers to authors BEFORE publication if possible, or
AFTER publication if that's the best you can do. Encourage COLLABORATION,
not CRITICISM.
Cheers,
Bruce Karney
http://km-experts.com