Amazon Ratings - One Thumb Down #ratings #prediction-markets


Bruce Karney <bkarney@...>
 

Hi all,

In Tuesday's call, I made a comment about why I don't think "Amazon-
style ratings" are an effective KM strategy. Let me explain briefly
why I believe that, and what I think is a better approach.

Let me contrast two kinds of reviews or rating schemes. These are
not the only two kinds, but they represent the opposite ends of a
spectrum.

1. Peer review, prior to publication: This is the standard used by
scientists and academics. In essence, drafts of articles are
circulated to "experts" who offer advice and input prior to
publication. This input is used by the author (and perhaps the
editor of the journal) to improve the work BEFORE it is exposed
(published) to a wide audience.

2. Consumer review, after publication: Amazon, ePinions, and many
similar rating and awards systems use this approach. Because these
post-publication reviews cannot affect the published work, they
are "criticism" in the literary sense. In Amazon's case, no
credentials are required to post a review, so the reviewers are not
peers of the authors. Nobel prize winners and your local pizza
delivery guy have an equal voice in Amazon-land (and the pizza guy
probably has more free time).

Being able to write one's own review is a satisfying thing for the
reviewer, especially since it has only become possible to do this in
the last few years. However, the only way Amazon reviews impact the
world at large is to pull more readers toward a book or push a few
away. Isn't it better, especially in a business context, to use
techniques that IMPROVE THE QUALITY OF THE BOOK?

That's what Peer Review is designed to do. If business KM systems
can't support pre-publication Peer Review, they should at the very
least focus on post-publication Peer Review and document improvement.

I also mentioned that at HP, where I used to work, most document
ratings were 4's or 5's on a scale of 1-5. I have located a copy of
a study I did on the topic earlier this year, and would like to
share my findings:

For a sample of 57 "Knowledge Briefs," which are 6-12 page technical
documents designed to inform and enlighten, there were 12,295
downloads and only 53 ratings/reviews. This is a ratio of 1 review
per 232 downloads, and slightly less than one review per document.

ALL ratings were either 4 or 5. The 53 reviews were provided by 40
different individuals, so the vast majority of people who submitted
a review submitted only one, meaning (perhaps) that they lacked a
valid base for comparing the Knowledge Brief they were reviewing to
any other Brief. The most reviews submitted by a single person was
7, and the second-most was 3.
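The arithmetic behind those ratios can be sanity-checked in a few
lines (figures taken from the study as quoted above; this check is
mine, not part of the original study):

```python
# Ratio check for the HP "Knowledge Briefs" sample (figures from the post).
num_briefs = 57
downloads = 12_295
reviews = 53

print(round(downloads / reviews))      # 232 -> ~1 review per 232 downloads
print(round(reviews / num_briefs, 2))  # 0.93 -> slightly under 1 review/brief
```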

I contend that if you were perusing a listing of Knowledge Briefs on
a given subject, all of which were either unrated or had ratings
between 4.0 and 5.0, you would not have information that would steer
you toward the best documents or away from the poor ones. You would
believe that any of the documents could be worthwhile, inasmuch as
none of them had low scores. Therefore, the rating scheme provides
NO value to the prospective reader. Worse yet, if there were a
document rated 1, 2, or 3, that rating would probably be a single
individual's opinion, given how infrequently Knowledge Briefs are
rated at all.

My conclusion: don't RATE documents, but create systems to provide
detailed written feedback from readers to authors BEFORE publication
if possible, or AFTER publication if that's the best you can do.
Encourage COLLABORATION, not CRITICISM.

Cheers,
Bruce Karney
http://km-experts.com


Ravi Arora
 

Hi,

I agree 100% with what Bruce has to say. An inefficient and ineffective
system is as good as having no system at all.

a) But the fact remains: how does a company ensure that people
collaborate?
b) People do not participate in activities that they feel are not
worthwhile.
c) Why review a document just to store it? Better to generate a document
JIT, when it is required.

Thank you,


Ravi

-----Original Message-----
From: sikmleaders@yahoogroups.com [mailto:sikmleaders@yahoogroups.com]
On Behalf Of Bruce Karney
Sent: Friday, January 20, 2006 7:00 AM
To: sikmleaders@yahoogroups.com
Subject: [sikmleaders] Amazon Ratings - One Thumb Down



Tom <tombarfield@...>
 

Bruce, I found your comments on Tuesday and in this note insightful.
I also liked the insights I heard from Kent Greenes (I think).

In the next couple of months I am going to be asking my team at
Accenture to develop our strategy in this area. Here are some off-
the-cuff thoughts based on Tuesday's discussion and what Bruce and
Ravi shared in this discussion.

I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback from people
who feel very strongly about the content - either good or bad. In
other words, if something warrants a 2, 3, or 4 on a 5-point scale,
then I don't really care as much about the feedback.

If I download a piece of content that turns out to be a big help to
me (a score of 5 on a 5-point scale) I am probably more willing to
provide feedback saying thank you and recognizing that. It would be
like saying I only want ratings on 5-star stuff.

If I download something that I really find to be worthless (a score
of 1 on a 5-point scale) I might be incented to provide feedback to
either improve it or get it out of the system so no one else has to
deal with it.
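Tom's "extremes only" idea could be sketched as a trivial gate
(hypothetical function name; the keep-only-1s-and-5s rule follows his
description):

```python
# Solicit written feedback only when a reader's rating sits at either
# end of the 5-point scale; ignore lukewarm middle scores.

def should_request_feedback(rating: int) -> bool:
    """Ask for detailed feedback only on 1s and 5s; skip the middle."""
    return rating in (1, 5)

print(should_request_feedback(5))  # True  - big help, say thanks
print(should_request_feedback(1))  # True  - worthless, improve or retire
print(should_request_feedback(3))  # False - lukewarm, don't bother asking
```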

Tom Barfield
Accenture




Mark May
 

I really enjoy this topic because it has great potential to benefit our
practitioners in the field and because it is a concept that everyone can
envision, given the ubiquity of the Amazon experience.

My thinking starts with how one would propose to use rating/feedback data.
I can see at least three possible uses - 1) Provide confidence or caution
to people who are considering using an IC artifact; 2) Help order search
results to put higher rated content above lower rated content; 3) Provide
input to content managers either to promote well rated IC or improve or
retire lower rated IC.

These are all worthwhile and valuable uses of ratings/feedback data.
However for many of the reasons that people have cited below, I don't think
that a five star rating system provides data that really is of value to
meet these ends. In addition, most content is not rated at all by anyone
or is not rated by enough people or it takes too long to get enough ratings
to represent a consensus opinion.

Given these limitations of an Amazon-like system, the IBM team is trying
something a bit different. First of all, we have deployed the standard
five star system and allowed space for comments on all artifacts in the
major repositories. We feel that users have become conditioned through
other internet experiences to expect this kind of a feedback approach.
However, we don't use that raw data by itself for KM purposes. We combine
the rating with other data to impute a VALUE for that artifact. The other
data includes the number of times it was read, downloaded, forwarded to
others, and printed. These factors are weighted so that a download counts
for 10 times the value of a read, for example. We also give significant
extra weight to any comments, since we think that the comments are much
more valuable than the ratings.
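Mark's imputed-value calculation might look roughly like this. The 10x
download weight comes from his description; the other weights, the
comment bonus, and all names are illustrative assumptions, not IBM's
actual formula:

```python
# Combine an artifact's star rating with weighted usage signals into a
# single imputed value. Only the 10x download weight is from the post;
# everything else here is an assumed placeholder.

WEIGHTS = {"read": 1, "download": 10, "forward": 5, "print": 3}
COMMENT_BONUS = 20  # "significant extra weight" for comments (assumed value)

def imputed_value(rating_avg: float, counts: dict, num_comments: int) -> float:
    """Fold the average star rating and usage counts into one score."""
    usage = sum(weight * counts.get(signal, 0)
                for signal, weight in WEIGHTS.items())
    return rating_avg * usage + COMMENT_BONUS * num_comments

# A 4.5-star artifact read 100 times, downloaded 20 times, 2 comments:
print(imputed_value(4.5, {"read": 100, "download": 20}, 2))  # 1390.0
```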

We have deployed this approach and are actively calculating imputed values
now. However, we have not yet begun to use the data. One of our highest
priorities is to help people find stuff faster, so we are eager to use
these imputed values to order search results. We also plan to make them
available to content managers so that they can see what is accessed and
"valued" most (and least) among their inventory of content. The jury is
still out on how much our practitioners will actually benefit from this
imputed-value approach. We have some pilots planned for later this year to
see if it works as well in practice as we think it should.

Mark May
IBM




On 01/20/2006 06:02 PM, "Tom" <tombarfield@sbcglobal.net> wrote to
sikmleaders@yahoogroups.com (subject: [sikmleaders] Re: Amazon Ratings -
One Thumb Down):
[...]


J Maloney (jheuristic) <jtmalone@...>
 

Hi --

I almost agree here. What is missing for me is the dynamic nature of knowledge and, importantly, reputation. At any given point in time, the conclusions below are fine, but it is over time that ratings and rankings are really important. It is the notion of creating a reputation market. As we hear over and over (yet seem challenged to heed), KM is about connection, not collection. Giant monoliths like Amazon and enterprise repositories of documents are information, not knowledge.

It is fine to rate cataloged, indexed, and codified information - the documents - but what is really happening is an implicit ranking of the author, the derived future of their tacit knowledge, and, very importantly, their capability and willingness to get it used, e.g., their social networks, persuasive skills, etc. It is in the dynamic of knowledge creation and use that rankings and ratings really are an advantage. Reputation and identity are social systems that need the attention of KM. Remember, for KM it is less about how knowledge is managed (library science, database management) and far more about how it is created, used, and applied.

If the author is unavailable, then it is the social network and conversation that emerges around the ranked document, such as a wiki, that carries the value forward. Information needs to be socialized to release value. Yahoo! Answers is an example of a crude reputation system/market. There are more on the horizon.
 
 
Cordially,
 
John

John Maloney

IM/Skype: jheuristic
ID: http://public.2idi.com/=john.maloney

Prediction Markets: http://www.kmcluster.com/nyc/PM/PM.htm

KM Blogs: http://kmblogs.com/

 
 
 

Points and Levels

To encourage participation and reward great answers, Yahoo! Answers has a system of points and levels. The number of points you get depends on the specific action you take. The points table below summarizes the point values for different actions. While you can't use points to buy or redeem anything, they do allow everyone to recognize how active and helpful you've been. (And they give you another excuse to brag to your friends.)
Points Table

  Action                                             Points
  Begin participating on Yahoo! Answers              100 (one time)
  Choose a best answer for your question             5
  Put the answers to your question to a vote         5
  Answer a question                                  2
  Log in to Yahoo! Answers                           1 (once daily)
  Vote for a best answer                             1
  Rate a best answer                                 1
  Have your answer selected as the best answer       10
  Receive a "thumbs-up" rating on a best answer
  that you wrote (up to 50 thumbs-up are counted)    1 per "thumbs-up"
Levels are another way to keep track of how active you (and others) have been. The more points you accumulate, the higher your level. Yahoo! Answers recognizes your level achievements with our special brand of thank you's!
There's also a color associated with the levels. These colors will have more meaning as Yahoo! Answers rolls out new features in the coming weeks, and you'll see how "sharing what you know" and "discovering something new" can be fun and rewarding.
  Level  Color   Points          Yahoo! Answers Thank You
  7      Black   25,000+         Be the first to find out!
  6      Brown   10,000-24,999   Be the first to find out!
  5      Purple  5,000-9,999     Be eligible to be a Featured User on the home page masthead.
  4      Green   2,500-4,999     Be eligible to be a Featured User on the community editorial page. (coming soon!)
  3      Blue    1,000-2,499     A super-special Yahoo! Answers thank you.
  2      Yellow  250-999         A special Yahoo! Answers thank you.
  1      White   0-249           Full access to Yahoo! Answers!
And finally, as you attain higher levels, you'll also be able to contribute more to Yahoo! Answers - you can ask, answer, vote and rate more frequently.
  Limits (per day)                                 Level
  Unlimited Questions, Answers, Votes & Ratings    5, 6, 7
  40 each: Questions, Answers, Votes & Ratings     4
  30 each: Questions, Answers, Votes & Ratings     3
  20 each: Questions, Answers, Votes & Ratings     2
  10 each: Questions, Answers, Votes & Ratings     1
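For illustration, the level thresholds quoted above can be expressed as
a small lookup (boundaries copied from the table; this sketch is mine,
not Yahoo!'s implementation):

```python
# Yahoo! Answers level thresholds from the quoted table, highest first.

LEVELS = [  # (minimum points, level)
    (25_000, 7), (10_000, 6), (5_000, 5),
    (2_500, 4), (1_000, 3), (250, 2), (0, 1),
]

def level_for(points: int) -> int:
    """Return the level for a given point total."""
    for minimum, level in LEVELS:
        if points >= minimum:
            return level
    return 1  # points below 0 shouldn't occur; default to level 1

print(level_for(0))       # 1
print(level_for(999))     # 2
print(level_for(26_000))  # 7
```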
 



-----Original Message-----
From: sikmleaders@... [mailto:sikmleaders@...]
Sent: Friday, January 20, 2006 12:51 PM
To: sikmleaders@...
Subject: [sikmleaders] Digest Number 16
[...]


David Snowden <snowded@...>
 

Sorry not to have been active, or on the calls - too much travel.
However, this is a topic of considerable interest to us, and a part of our own software development and method research.
I think there are some key points that should be made:
1 - Goodhart's Law states, "The minute a measure becomes a target it ceases to be a measure," and we can see extensive evidence of this in government and industry.
2 - Snowden's variation on that law is that "anything explicit will be gamed."
3 - It follows that it is key to any rating system that the rating does not produce financial or status-based rewards; if it does, it will be gamed (look at eBay, the ability to game Google searches, etc.).
4 - Blogs are now being manipulated, and so are some other folksonomies.
5 - Any rating system needs to allow people to choose a filter through which they see ratings based on people whose opinion they respect.
6 - Artefacts cannot have absolute value (sorry to disagree with my former employer here); they have value in context.
7 - If we look at the three generations of understanding of data, we can see (i) it's all about the data, followed by (ii) it's about the data with conversations (CoP, folksonomy, etc.), and now (iii) data with conversations in models.
8 - This third approach is at the heart of the work we are currently doing on horizon scanning, weak-signal detection, etc. in anti-terrorism, but it is also applicable (and is being applied) in more conventional KM settings. It requires a switch from taxonomic structures and search mechanisms to ones based on serendipitous encounter within the context of need.
9 - The concept of corporate memory needs to start to mimic human memory, which is pattern-based. An expert, for example, has over 40K patterns in their long-term memory, sequenced in frequency of use, which are selected on a first-fit basis (Klein and others). Each of those patterns is a complex mixture of data, experience, and perspective. By storing data-conversation-model combinations in context but without structure, we can start to allow contextual discovery.

Now a lot of that is cryptic, but the most important words are CONTEXT and SERENDIPITY. We need to move away from storing, accessing, and rating artefacts and start to look at corporate memory as a complex system in which patterns of meaning will emerge within models.
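The idea of filtering ratings through people whose opinion you respect
could be sketched like this (names and data shapes are illustrative
assumptions, not an existing system):

```python
# Compute an average rating only over raters in the reader's trust set,
# so each reader sees a score personalized to whom they respect.

def trusted_average(ratings: dict, trusted: set):
    """ratings maps rater -> score; trusted is the reader's trust set.

    Returns None when no trusted rater has scored the artifact.
    """
    scores = [score for rater, score in ratings.items() if rater in trusted]
    return sum(scores) / len(scores) if scores else None

ratings = {"alice": 5, "bob": 2, "carol": 4}
print(trusted_average(ratings, {"alice", "carol"}))  # 4.5
print(trusted_average(ratings, {"dave"}))            # None - no trusted raters
```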




Dave Snowden
Founder, The Cynefin Centre
www.cynefin.net


On 20 Jan 2006, at 23:42, Mark May wrote:

I really enjoy this topic because it has great potential to benefit our practitioners in the field and because it is a concept that everyone can envision, given the ubiquity of the Amazon experience.

My thinking starts with how one would propose to use rating/feedback data. I can see at least three possible uses - 1) Provide confidence or caution to people who are considering using an IC artifact; 2) Help order search results to put higher rated content above lower rated content; 3) Provide input to content managers either to promote well rated IC or improve or retire lower rated IC.

These are all worthwhile and valuable uses of ratings/feedback data. However for many of the reasons that people have cited below, I don't think that a five star rating system provides data that really is of value to meet these ends. In addition, most content is not rated at all by anyone or is not rated by enough people or it takes too long to get enough ratings to represent a consensus opinion.

Given these limitations of an Amazon-like system, the IBM team is trying something a bit different. First of all, we have deployed the standard five star system and allowed space for comments on all artifacts in the major repositories. We feel that users have become conditioned through other internet experiences to expect this kind of a feedback approach. However, we don't use that raw data by itself for KM purposes. We combine the rating with other data to impute a VALUE for that artifact. The other data includes number of times it was read, downloaded, forwarded to others and printed. These factors are weighted so that a download counts for 10 times the value as a read, for example. We also give significant extra weight to any comments since we think that the comments are much more valuable than the ratings.

We have deployed this approach and are actively calculating imputed values now. However, we have not yet begun to use the data. One of our highest priorities is to help people find stuff faster, so we are eager to use these imputed values to order search results. We also plan to make it available to content managers so that they can see what is accessed and "valued" most (and least) among their inventory of content. The jury is still out on how much our practitioners will actually benefit from this imputed value approach. We have some pilots planned for later this year to see how if it works as well in practice as we think it should.

Mark May
IBM

"Tom" <tombarfield@...>



To

sikmleaders@...

cc


Subject

[sikmleaders] Re: Amazon Ratings - One Thumb Down

Bruce I found your comments on Tuesday and in this note insightful.  
I also liked the insights I heard from Kent Greenes (I think).

In the next couple months I am going to be asking my team at
Accenture to develop our strategy in this area.  Here are some off
the cuff thoughts based on Tuesday's discussion and what Bruce and
Ravi shared in this discussion.

I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback form people
who feel very strongly about the content - either good or bad.  In
other words - if something warrants a 2,3 or 4 on a 5 point scale
then I don't really care as much about the feedback.

If I download a piece of content that turns out to be a big help to
me (score of 5 on a 5 point scale) I am probably more willing to
provide feedback saying thank you and recognizing that.  It would be
like saying I only want a rating on 5 star stuff.  

If I download something that I really find to be worthless (scale of
1 on a 5 point scale) I might be incented to provide feedback to
either improve it or get it out of the system so no one else has
deal with it.

Tom Barfield
Accenture


--- In sikmleaders@..., "Bruce Karney" wrote:

> <snip>







Yahoo! Groups Links

<*> To visit your group on the web, go to:
   
http://groups.yahoo.com/group/sikmleaders/

<*> To unsubscribe from this group, send an email to:
   sikmleaders-unsubscribe@...

<*> Your use of Yahoo! Groups is subject to:
   
http://docs.yahoo.com/info/terms/








John Maloney <jtmalone@...>
 

Dave --

Yep, all great points here.

One model abstraction for exploiting complexity, emergence, social
networks and self-organizing systems is the prediction or knowledge
market.

'Gaming the system' has been the traditional nemesis of enterprise
ranking/rating/reward systems since time immemorial.

Today, enterprise knowledge markets are transforming these 'natural'
behaviors into an essential method for the creation, conversion,
transfer and applied use of knowledge.

I also share grave concerns about ranking/rating of
documents/artifacts in the enterprise repository. It doesn't work
and never will for the reasons mentioned and about a half dozen
others.

Again, it is typical for enterprise managers to focus on documents,
control, management, ranking (library science) versus leading the
*practice* of how knowledge is really created, applied, used.

Artifact/document ranking & rating may give a warm 'in control'
feeling, and is good for corporate librarians, but it doesn't
create any business advantage or advance enterprise knowledge. It is
important to the corporate mission in the way bookkeeping is important.

There is too little time to go into details on knowledge markets, so
I will be lazy and just share the press release and links. These
ongoing and forthcoming market conversations are highly germane to
this important thread.

Note: this event was scheduled for next week, but
people/participants from the World Economic Forum in Davos asked to
postpone until after the WEF. They are running some knowledge
markets at WEF on world events, energy prices, climate, H5N1 Virus
and so on, and wish to conduct conversations at the Summit.


<snip>

Prediction Markets Summit - February 3, 2006 - New York City

Download this press release as an Adobe PDF document here:

http://pdfserver.prweb.com/pdfdownload/335221/pr.pdf

Note. There are scant few seats left for this summit.

Colabria® and CommerceNet announce that Google, Yahoo!, MIT Sloan
School, NewsFutures, Corning, InTrade and HedgeStreet will join the
Prediction Markets Summit February 3rd, 2006 in New York City.

San Francisco, CA (PRWEB) January 19, 2006 -- Colabria® -- the
leading worldwide action/research network of the knowledge economy --
announces that NewsFutures, InTrade and HedgeStreet will join Google,
Yahoo!, CommerceNet and others for the Prediction Markets Summit,
February 3rd, 2006 in New York, New York USA.

http://www.kmcluster.com/sfo/PM/PM.htm

"Prediction markets are brutally honest and uncannily accurate." --
Geoffrey Colvin, "Value Driven," Fortune Magazine.

Thomas W. Malone, Professor of Management at the MIT Sloan School,
founder of the MIT Center for Coordination Science, and author of
"The Future of Work," is a keynote speaker. Malone will discuss how
Intel uses prediction markets for manufacturing capacity planning.

James Surowiecki, author of "The Wisdom of Crowds: Why the Many Are
Smarter Than the Few and How Collective Wisdom Shapes Business,
Economies, Societies and Nations" is also an event keynote speaker.

Emile Servan-Schreiber, CEO of prediction market leader NewsFutures,
will describe how Corning uses enterprise prediction markets to
forecast demand for liquid crystal displays.

Charles Polk, CEO, Common Knowledge Markets, will lead a
conversation on Pandemic Flu Prediction Market (PFPF) and the H5N1
Virus Outbreak.

Knowledge markets are becoming commonplace in the smartest firms.
Top firms using prediction markets for KM are Google, Yahoo!,
Microsoft, Eli Lilly, Abbott Laboratories, HP, Intel and Siemens.

This event is sponsored by participants and CommerceNet
http://www.commerce.net/, NewsFutures http://us.newsfutures.com/,
InTrade http://www.intrade.com/ and HedgeStreet
http://www.HedgeStreet.com/.

Prediction market pioneer Yahoo! Research will sponsor a Pre-Summit
Reception, February 2nd, 2006 at their offices in Manhattan (for
registered participants only).

Summit sessions are practical and conversational. All are welcome.
Secure, pre-registration online required.

http://www.kmcluster.com/nyc/PM/PM.htm

-jtm
http://kmblogs.com/





--- In sikmleaders@yahoogroups.com, David Snowden <snowded@b...>
wrote:

Sorry not to have been active, or on the calls - too much travel.
However this is a topic of considerable interest to us, and a part of
our own software development and method research.


David Snowden <snowded@...>
 

I agree that prediction markets are useful, but (i) we do not know yet what will happen when people try to game them, and (ii) as the results are visible, they will change the predictions.

My gut feel is that we should get rid of the "prediction" word for a bit and talk instead about "prevention" and "enablement".
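Dave's point (ii) can be made concrete with a toy market maker. The sketch below uses Hanson's logarithmic market scoring rule, one standard prediction-market design that is not mentioned anywhere in this thread, so treat it purely as an illustration of how visible prices move the moment anyone trades:

```python
import math

# Toy logarithmic market scoring rule (LMSR) market maker for two outcomes.
# The liquidity parameter b controls how strongly a trade moves the price.

def lmsr_prices(shares, b=100.0):
    """Current visible prices (probabilities) implied by outstanding shares."""
    exps = [math.exp(q / b) for q in shares]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(shares, b=100.0):
    """Market maker cost function; a trade costs C(after) - C(before)."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

shares = [0.0, 0.0]                 # no trades yet
before = lmsr_prices(shares)        # both outcomes priced at 0.50
cost = lmsr_cost([50.0, 0.0]) - lmsr_cost(shares)  # cost of buying 50 shares
shares[0] += 50.0                   # the trade is now public information
after = lmsr_prices(shares)         # outcome 0 is now priced above 0.50
```

Every purchase immediately raises the posted price, so later traders see, and react to, earlier predictions: exactly the feedback loop Dave describes.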




Dave Snowden
Founder, The Cynefin Centre
www.cynefin.net


On 21 Jan 2006, at 10:18, John Maloney wrote:

<snip>







J Maloney (jheuristic) <jtmalone@...>
 

Hi Dave --

Brief response in caps.

I agree that prediction markets are useful,

GOOD.

but (i) we do not know yet what will happen when people try to game

THEY ARE INTENDED TO BE 'GAMED' AGGRESSIVELY. THAT IS THE WHOLE POINT.

them and (ii) as the results are visible then they will change the
predictions.

YES, ABSOLUTELY. MARKETS MUST BE TRANSPARENT TO BE EFFECTIVE. THE CHANGES
DETERMINE THE PRICES.

LOOK AT THESE CONTRACTS FOR AN IDEA:

http://www.intrade.com/jsp/intrade/contractSearch/

My gut feel is that we should get rid of the"prediction" word for a bit and
talk instead about "prevention" and "enablement"

YEAH, MAKES SENSE. MARKETS ARE VERY USEFUL, ESSENTIAL REALLY, TO RISK
MANAGEMENT (HEDGING).


More to the story...

"Thirty years ago, when Charles Schwab bucked Wall Street and founded the
first ever discount brokerage, people were skeptical. It couldn't be done,
they claimed. Wall Street was governed by the large financial giants which
charged enormous fees, catered to the very rich or the institutional
investors of the world, and largely paid lip service to the small investor.
Despite being blackballed by Wall Street, Schwab changed all that, bringing
the financial markets to the masses, reducing commissions to affordable
levels such that the nation's significant middle class could invest for
their own futures in the same way institutions had done for years. It was a
watershed moment." [hedgestreet]  The rest, as they say, is history.
"Chuck" now has the biggest compound in Pebble Beach!


It is roughly the same thing today. Only highly 'sophisticated' brokers and
bankers trade derivatives - about 3% of very high-net-worth investors and
giant institutions. It makes up a HUGE market, some say bigger than the stock
market. "Retail hedge markets are coming. HedgeStreet's site
www.hedgestreet.com enables members to trade small, inexpensive,
easy-to-understand 'event derivative' contracts (called Hedgelets®) in
markets never before accessible to individual traders." [hedgestreet]

Cheers,

John





John Maloney
T: 415.902.9676
IM/Skype: jheuristic
ID: http://public.2idi.com/=john.maloney

Prediction Markets: http://www.kmcluster.com/nyc/PM/PM.htm

KM Blogs: http://kmblogs.com/


Jerry Ash <jash@...>
 

Hi All.

I have to be careful in relating this story because it may involve people you know, even though they are not members of this group. But the story is significant to this discussion.

Member #1 has never written a book or earned recognition within the cadre of innovative, ground-breaking KM pioneers, but has earned a high popular profile in the field through prolific communication on the Internet. Though Member #1 has only a moderate record in KM practice, the member's understanding of the practice is excellent. Because of Member #1's credible communication abilities, and the proliferation of the member's documents on the Internet, Member #1 has a strong following.

Unfortunately, Member #1 doesn't like Member #2, whose credentials go back to the beginning of modern KM and who is prolific in producing original thoughts, tactics, publications and initiatives in KM and related fields worldwide. Member #2 also has many supporters, as well as a cluster of critics who often raise both professional and personal issues.

Professional conflict is inevitable in any field, but the problem is Member #1 uses review and rating opportunities to trash almost everything Member #2 does, including the panning of several books written by Member #2. In the latest episode, Member #1 has written a negative review of a book Member #2 has only now seen in the final printer's galley. The book is not yet published!

Now, I don't take sides in these professional soap operas, but my point is that 'peer review,' whether it takes place in the loose domains of Amazon, personal websites or blogs, must be suspect. Ratings, whether among professional groups or coworkers, are fraught with unseen agendas. And in an arena of few contributors, rating/review processes are fertile ground for abuse.

Jerry Ash
Founder, AOK; Special Correspondent, Inside Knowledge Magazine


Bettenhausen, Richard <richard.bettenhausen@...>
 

We have employed an approach similar to Mark's, although perhaps for a different reason up front. Because our IC library may still be small in comparison to most of yours, we don't have the variety of similar postings to make a rating effective. Our focus is on 'guilting' users into leveraging content because so many others have found value in doing so. By tracking read/download and other metrics, we are able to determine the 'top picks' of the organization and use that to drive future behavior.

Richard Bettenhausen
Fidelity Information Services, Inc.

-----Original Message-----
From: sikmleaders@yahoogroups.com on behalf of Mark May
Sent: Fri 1/20/2006 6:42 PM
To: sikmleaders@yahoogroups.com
Cc:
Subject: Re: [sikmleaders] Amazon Ratings - One Thumb Down



I really enjoy this topic because it has great potential to benefit our practitioners in the field and because it is a concept that everyone can envision, given the ubiquity of the Amazon experience.

My thinking starts with how one would propose to use rating/feedback data. I can see at least three possible uses - 1) Provide confidence or caution to people who are considering using an IC artifact; 2) Help order search results to put higher rated content above lower rated content; 3) Provide input to content managers either to promote well rated IC or improve or retire lower rated IC.

These are all worthwhile and valuable uses of ratings/feedback data. However, for many of the reasons that people have cited below, I don't think that a five-star rating system provides data that is really of value to meet these ends. In addition, most content is not rated at all, is not rated by enough people, or takes too long to accumulate enough ratings to represent a consensus opinion.

Given these limitations of an Amazon-like system, the IBM team is trying something a bit different. First of all, we have deployed the standard five star system and allowed space for comments on all artifacts in the major repositories. We feel that users have become conditioned through other internet experiences to expect this kind of a feedback approach. However, we don't use that raw data by itself for KM purposes. We combine the rating with other data to impute a VALUE for that artifact. The other data includes number of times it was read, downloaded, forwarded to others and printed. These factors are weighted so that a download counts for 10 times the value as a read, for example. We also give significant extra weight to any comments since we think that the comments are much more valuable than the ratings.
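As a rough sketch of the weighted combination Mark describes: the only weight actually stated is that a download counts ten times a read; every other weight, and all the field names, are hypothetical stand-ins.

```python
# Hypothetical sketch of the "imputed value" idea Mark describes. Only the
# 10x download-vs-read weighting comes from his description; the remaining
# weights and names are illustrative assumptions.

def imputed_value(reads, downloads, forwards, prints, comments, avg_rating):
    """Combine usage signals, comments, and ratings into one value score."""
    usage = (1.0 * reads          # baseline signal
             + 10.0 * downloads   # a download counts 10x a read (as stated)
             + 8.0 * forwards     # assumed weight
             + 5.0 * prints)      # assumed weight
    # Comments get significant extra weight relative to raw star ratings.
    feedback = 25.0 * comments + avg_rating   # 25x is an assumed weight
    return usage + feedback

# Example: 40 reads, 12 downloads, 2 forwards, 1 print, 3 comments, avg 4.5
score = imputed_value(40, 12, 2, 1, 3, 4.5)
```

Search results could then be ordered by this score, and content managers could rank their inventory by it, as Mark outlines below.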

We have deployed this approach and are actively calculating imputed values now. However, we have not yet begun to use the data. One of our highest priorities is to help people find stuff faster, so we are eager to use these imputed values to order search results. We also plan to make the data available to content managers so that they can see what is accessed and "valued" most (and least) among their inventory of content. The jury is still out on how much our practitioners will actually benefit from this imputed value approach. We have some pilots planned for later this year to see if it works as well in practice as we think it should.

Mark May
IBM

--- "Tom" <tombarfield@sbcglobal.net> wrote on 01/20/2006 06:02 PM, Subject: [sikmleaders] Re: Amazon Ratings - One Thumb Down:

<snip>








John Maloney <jtmalone@...>
 

Hi --

Another issue to add to the many concerning rating and ranking is
that the folks leading these initiatives often cannot articulate
what the business outcomes are. That's a problem.

Anecdotal evidence doesn't cut it anymore. Try telling executives or
shareholders, "...our goal is to create a repository of ranked
documents." You will be quickly reassigned, retired or dismissed,
like so many KM people lately.

Often, current ranking schemes are no different from the Internet-bubble
hysteria over eyeballs or 'hits' (how idiots track success).

Again, a rating system is simply making a market with numbers (1-5)
as currency. Implicitly, submitters 'trade' for higher ratings, for
example. Again, this is not an end in itself. What is the purpose?
The goal? The advantage?

Simply, consider making it a real market. Submitters receive a cash
micropayment for content that is retrieved and used. They also make a
micropayment to list and submit their offerings.
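A toy version of that market could look like the following ledger. Everything here - the fee sizes, the names, the class design - is hypothetical; the thread proposes only the bare idea of paying to list and earning per retrieval.

```python
# Toy knowledge-market ledger for the micropayment idea: authors pay a small
# fee to list content and earn a micropayment each time it is retrieved/used.
# All amounts and names are hypothetical.

LISTING_FEE = 0.50       # paid by the submitter to list an artifact
RETRIEVAL_PAYOUT = 0.05  # paid to the submitter per retrieval

class KnowledgeMarket:
    def __init__(self):
        self.balances = {}   # author -> net earnings
        self.owners = {}     # artifact -> author

    def list_artifact(self, author, artifact):
        """Listing costs the author the fee and registers ownership."""
        self.balances[author] = self.balances.get(author, 0.0) - LISTING_FEE
        self.owners[artifact] = author

    def retrieve(self, artifact):
        """Each retrieval pays the owning author a micropayment."""
        author = self.owners[artifact]
        self.balances[author] += RETRIEVAL_PAYOUT

market = KnowledgeMarket()
market.list_artifact("alice", "lcd-demand-brief")
for _ in range(20):                  # twenty retrievals
    market.retrieve("lcd-demand-brief")
# alice's net position: 20 * 0.05 - 0.50 = +0.50
```

Content nobody retrieves never recoups its listing fee, which is the market's way of retiring it without any rating form at all.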

Also, rating/ranking systems must have a critical mass to be
legitimate. (This solves Jerry's vignette too.) Transparent knowledge
markets will quickly correct and ameliorate many of the problems with
gaming and politics, for example. Elance shows some of these
characteristics. http://www.elance.com/


Here is another, more elaborate view from some friends at USC
Marshall School.

http://www.kmcluster.com/knowledge.pdf



Cordially,

-jtm
http://kmblogs.com/


Dave Snowden <snowded@...>
 

You have a lot more confidence in markets than I do

____________________
Dave Snowden
Founder, The Cynefin Centre

www.cynefin.net

+44 7795 437 293

Rowan Cottage
51 Lockeridge
Marlborough
SN8 4EL
United Kingdom




On 21 Jan 2006, at 21:36, J Maloney (jheuristic) wrote:

<snip>




J Maloney (jheuristic) <jtmalone@...>
 

Hi Dave --

It is not a matter of confidence, rather of outcomes.

We've all been trained to listen to experts, pundits and highly credentialed
people. Oh, and there are those polls and polling that are so important. Not
to mention that enterprise standby, the survey. Having confidence in these
instruments is, well, not a very good approach. Markets consistently
outperform these methods.

Here are some pop media: http://kmblogs.com/public/item/106758


Here is a great PM vortal: http://www.chrisfmasse.com/



Cheers,

John

--- Dave Snowden <snowded@mac.com> wrote on Tue, 24 Jan 2006 (in Digest Number 21):

> You have a lot more confidence in markets than I do