Gartner High Performance Workplace Blog #workplace

john_schwiller <john.schwiller@...>
 

This is a new Gartner blog on high-performance workplaces that may be of interest.

I have some traction with Tom Austin, who started it, and have posted some comments. Unfortunately, it's not possible for outsiders to start a new thread, but you can comment on their posts.

http://blog.gartner.com/blog/index.php?blogid=3

"The High Performance Workplace integrates a diversity of perspectives
and a wide swath of technologies. The HPW blog will touch on a variety
of issues that will enable you to raise employee performance and
productivity. Join us as we look at the work processes and technologies
of the high-performance workplace - the stuff that helps users get
their jobs done!"


LinkedIn Networking #LinkedIn

john_schwiller <john.schwiller@...>
 

Some of us are on LinkedIn (www.linkedin.com), e.g. Stan, Paul, and I.

If anyone else is on it, feel free to link to me using my work address
john.schwiller@logicacmg.com


Virtual Communities Conference #CoP #conferences

John Schwiller <john.schwiller@...>
 

This is a link to Infonortics, who run the annual Virtual Communities Conference, this year in London; I went to the 2003 one.

http://www.infonortics.com/vc/index.html

The 2004 archive on this site has a presentation by Wenger of CoP 'fame'.
Dave Snowden is also a regular keynote contributor.


Blog on Why KM Is So Important #value

Mark May
 
Edited

I thought that the community members might be interested in this blog
entry. The author says that he was formerly CKO of the consulting
arm of a Big 4 accounting firm.

https://howtosavetheworld.ca/2005/08/21/why-knowledge-management-is-so-important/

I found these two quotes particularly resonant given what I am
working on now:

Business leaders tend to see value in centralized repositories
of 'best practices' and SOPs, and the reuse of knowledge collateral.
KM leaders are more likely to see the value in context-rich
conversations between peers, 'pointers to people', mining the content
of front line people's desktops, and tools that enhance collaboration
and innovation.

I'm increasingly convinced that this ignorance of the aggravation of people doing their best on the front lines -- not being able to find the people, experts and knowledge they need (sometimes even when it's on their own hard drive) to do their jobs properly -- is at the heart of problems as diverse as low productivity, lack of work-life balance, high turnover of 'stars' (and the need to pay them exorbitant sums, and everyone else inordinately less, to keep those stars), dissatisfied customers, employee burnout, lousy service and high employee illness rates.


Usage agreements for client/partner collaboration extranets #collaboration

Paul Rehmet <paul.rehmet@...>
 

We will soon provide externally accessible collaboration spaces to
Unisys sales and delivery teams and their clients and business
partners. We are trying to determine the right level of legal
protection that minimizes liability risk to Unisys without
discouraging productive use of the facility. I would appreciate
hearing any experience people have in this area, specifically:

1) Do you host an extranet for two-way document sharing and
collaboration with clients or partners?

2) If so, do you require client or partner organizations to sign a
specific terms of use document?

3) Do you require each user of the environment to acknowledge standard
terms and conditions of use prior to entry?

4) Do you use some other legal strategy to minimize company liability?


Usage agreements for client/partner collaboration extranets #collaboration

Garfield, Stan <stanley.garfield@...>
 

Here is a reply from Craig Gilbert of HP's IT group to Paul's questions.

-----Original Message-----
From: Gilbert, Craig craig.gilbert@hp.com
Sent: Thursday, September 01, 2005 10:40 PM
Subject: RE: Usage agreements for client/partner collaboration extranets

Please see my replies below. There are still some questions about how we
will implement the Phase 3 (customers, partners) part of external
access. Please let me know if you have any questions about what I have
written. Hopefully this is useful.

Craig

-----Original Message-----
From: sikmleaders@yahoogroups.com On Behalf Of Paul Rehmet
Sent: Tuesday, August 30, 2005 5:33 PM
To: sikmleaders@yahoogroups.com
Subject: [sikmleaders] Usage agreements for client/partner collaboration
extranets

We will soon provide externally accessible collaboration spaces to
Unisys sales and delivery teams and their clients and business partners.
We are trying to determine the right level of legal protection that
minimizes liability risk to Unisys without discouraging productive use
of the facility. I would appreciate hearing any experience people have
in this area, specifically:

1) Do you host an extranet for two-way document sharing and
collaboration with clients or partners?

[Craig Gilbert] We are currently in the process of enabling SharePoint
to be used in this manner. At present, users requiring this capability
are asked to consider the HP eRoom solution. Our long-term plan is
to shut down eRoom and make SharePoint the collaborative solution for
inside and outside of HP.

2) If so, do you require client or partner organizations to sign a
specific terms of use document?

[Craig Gilbert] There will be a requirement for users to sign a document,
or at least acknowledge the "terms of use" for the SharePoint external-access
solution via a web form. Unless users provide a signature or agree
(via the web form) to the "terms of use," they will not be allowed
to access our servers.

3) Do you require each user of the environment to acknowledge standard
terms and conditions of use prior to entry?

[Craig Gilbert] Yes

4) Do you use some other legal strategy to minimize company liability?

[Craig Gilbert] The team has been working with HP legal to ensure that
what we are offering and how we have enabled it will minimize the
liability. Basically, this will be handled via the wording of the
agreement.
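[Editor's note] To make the gating Craig describes concrete, here is a minimal sketch of a terms-of-use gate, assuming a simple store of acceptances; it is not HP's implementation, and all names are hypothetical.

accepted_terms = set()        # (user_id, version) pairs accepted via the web form
TERMS_VERSION = "2005-09"     # hypothetical version tag for the agreement

def record_acceptance(user_id):
    """Called when the user submits the 'I agree' web form."""
    accepted_terms.add((user_id, TERMS_VERSION))

def may_access(user_id):
    """Deny access to the collaboration space until the current terms are accepted."""
    return (user_id, TERMS_VERSION) in accepted_terms

A request handler would redirect to the terms form whenever may_access(user) is False, and serve the workspace otherwise; changing TERMS_VERSION forces everyone to re-accept.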


November 2005 SIKM Call: Mark May - Knowledge Enablement in IBM Global Services #monthly-call

Garfield, Stan <stanley.garfield@...>
 
Edited

Today we held our sixth monthly call.  Here is a summary.

Attendees

1.      Jerry Ash, Association of Knowledgework
2.      Tom Barfield, Accenture
3.      Steven Berzins, Accenture
4.      Seth Earley, Earley & Associates
5.      Stan Garfield, HP
6.      Jean Graef, Montague Institute
7.      Bruce Karney, independent consultant
8.      Michael Koffman, independent consultant
9.      Doug Madgic, Cisco
10.     Mark May, IBM
11.     Sarah Oldrin, Solvay
12.     Paul Rehmet, Unisys
13.     Chris Riemer, Knowledge Street LLC
14.     Kiran Seshadri, Tata Consultancy Services
15.     Reed Stuedemann, Caterpillar
16.     Sanjay Swarup, Ford
17.     Jack Vinson, Knowledge Jolt, Inc.
18.     Rebecca Winter, Whirlpool

Mark May presented "Knowledge Enablement in IBM Global Services."  His presentation is available at  IGS KE for Consulting and SI KM leaders November 15 2005.ppt.  Mark will invite a colleague to discuss communities of practice at IBM on a future call.

Members from Caterpillar, Ford, and Solvay expressed interest in translation of content.  This could be the topic of a future call.

Thanks to those of you who were on the call.  I will send out a reminder for future calls on the business day prior to the call.

Future Calls

December 20:  Tom Davenport will discuss his latest book, "Thinking for a Living."

January 17:  Doug Madgic will present on KM at Cisco Advanced Services.

If you are willing to give a presentation on a future call, please let me know the topic and the desired month.


Multi-language support in artifact repositories #content-management

Mark May
 

Greetings, all. On the call this past Tuesday, several people asked
about the multi-language support capabilities within the IBM Global
Services artifact repositories. I posed this question to the
KnowledgeView owner and wanted to share his reply with the group.
Please let me know if you have additional questions.

Best regards,
Mark May


In KnowledgeView, specific projects have been run to provide support
for the nine non-English languages that are most important to the
business: Spanish, Portuguese, French, German, Korean, Mandarin
Chinese, Japanese, Italian, and Dutch.

The focus is on providing content-management resources in those
languages and having resources to encourage knowledge harvesting,
encourage the creation of a leadership message, and identify key links
for each language (these may be targeted at the countries that use
the language rather than at the language itself).

Certain types of entries are required to always have an English-language
abstract (because they can be a pointer to people and work
products), but the remainder of the document can be in the local
language. For other types of documents, an English-language abstract
should be provided if the document is of regional or global
relevance. If it is deemed to be relevant only to a particular
country, then the entire document can be in the local language.
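[Editor's note] As a rough illustration, that abstract rule can be expressed as a simple validation check; this is a sketch under assumed type and scope names, not the actual KnowledgeView logic.

# Sketch of the English-abstract rule described above; the type and
# scope names are hypothetical, not actual KnowledgeView fields.
ALWAYS_ENGLISH_ABSTRACT = {"pointer_to_people", "work_product"}

def needs_english_abstract(doc_type, scope):
    """scope is one of 'country', 'regional', or 'global'."""
    if doc_type in ALWAYS_ENGLISH_ABSTRACT:
        return True                          # always required for these types
    return scope in ("regional", "global")   # otherwise only for wider relevance

assert needs_english_abstract("work_product", "country")
assert not needs_english_abstract("whitepaper", "country")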

The taxonomy is global and in English; we don't provide non-English
equivalent terms.

Search is available in all the languages supported (including the
double-byte languages Japanese, Korean, Chinese).

Navigation is English language only.

We have not availed ourselves of the IBM or business-partner
products that do real-time language translation. For basic
translation, users can leverage free web sites like AltaVista,
Yahoo, etc. Machine translation still doesn't do that well on
technical terms, so for an important document human involvement may
be needed. There is an organization available within IBM to provide
this service.


December 2005 SIKM Call: Tom Davenport - Thinking for a Living: Improving the Performance of Knowledge Workers #monthly-call

Garfield, Stan <stanley.garfield@...>
 
Edited

Today we held our 7th monthly call.  Here is a summary.

Attendees

1.      Tom Barfield, Accenture
2.      Richard Bettenhausen, Fidelity Information Services
3.      Raj Datta, MindTree
4.      Stan Garfield, HP
5.      Bill Ives
6.      Michael Koffman
7.      Doug Madgic, Cisco
8.      Mark May, IBM
9.      Kevin Mayes, Aon
10.    Sarah Oldrin, Solvay
11.    Paul Rehmet, Unisys
12.    Chris Riemer, Knowledge Street LLC
13.    John Schwiller, LogicaCMG
14.    Marguerite Sclafini, Marsh
15.    Kiran Seshadri, Tata Consultancy Services
16.    Wout Steurs, KPMG
17.    Reed Stuedemann, Caterpillar
18.    Sanjay Swarup, Ford
19.    Jack Vinson, Knowledge Jolt, Inc.
20.    Dianna Wiggins, McDonald's Corporation

Tom Davenport presented "Thinking for a Living: Improving the Performance of Knowledge Workers."  His presentation is available at Tom Davenport.ppt.

Raj Datta suggested knowledge creation as the topic of a future call.

Thanks to those of you who were on the call.  I will send out a reminder for future calls on the business day prior to the call.

Future Calls

·       January 17:  Doug Madgic - "KM at Cisco Advanced Services"
·       February 21: Jack Vinson - "Knowledge communities, community building, and blogging"
·       March 21: Reed Stuedemann - "Caterpillar's Knowledge Network"
·       April 18: Bill Ives - "What blogs and wikis bring to business and knowledge management"
·       May 16: Kent Greenes - "Making learning & performing routine"
·       June 20: Raj Datta - "Building a Knowledge Culture"


Articles about our members in recent issues of Inside Knowledge #periodicals

Garfield, Stan <stanley.garfield@...>
 

TO: SI KM Leaders Community

Our members have written articles or have been featured in recent issues of Inside Knowledge Magazine.  Jerry Ash has had articles published in every issue.  And the following articles appeared in the October and December/January issues.

October http://www.ikmagazine.com/xq/asp/sid.556BD53A-2856-40D2-8BC7-2A1BF75A8920/volume.9/issue.2/qx/displayissue.htm

Masterclass: Social-network analysis

“So now what?” is the big question following a network analysis. You've done the planning, got the stakeholders engaged, decided what questions to ask, completed the survey and the analysis, and you have some interesting results to review. In and of itself, you've just completed a great deal of work, but in the overall scheme of things you've just started a process that is intended to move an organisation or set of organisations from a current state of connectedness to a state of improved connectivity and awareness of the importance of being connected.

Practical examples of how different leadership and KM practices improve overall connectivity in an organisation. By Patti Anklam

Case study: Ford

Use of 6-Sigma and other quality-improvement programmes is very prevalent. What is rare is a robust business process to replicate quality-improvement practices across all business units of an enterprise. To tackle this challenge, what is needed is a proven process to capture, share, and fully leverage any and all quality improvements that occur in remote corners of an enterprise.

Applying KM to improve quality - Best-practice replication at Ford enhances quality-improvement practices and encourages collaborative working. By Sanjay Swarup

Cover story: Knowledge in action

A major international science and technology company is currently engaged in a change-management process that will transform the company. Kent Greenes, senior vice president and chief knowledge officer (CKO) at Science Applications International Corporation (SAIC), was originally hired by SAIC as a rainmaker in the KM consulting market, but now is the lead change-management strategist under the company's first new CEO since the organisation was founded in 1969.

Using KM lessons learnt to transform business processes at SAIC. By Jerry Ash

The knowledge: Bruce Karney

When Lew Platt, former CEO at HP, said, “If only HP knew what HP knows,” he succinctly described the aspirations of many organisations taking their first steps in knowledge management. Recognising that the company's continued success relied on its employees' knowledge of its markets, products and customers, HP started to design and implement processes and tools to help its people connect, collaborate and learn.

Sandra Higgison finds out how Bruce Karney, mixing his knowledge of industrial engineering, marketing, change management and technology, has created and delivered KM tools and processes to help staff across HP perform their jobs more effectively.

December/January http://www.ikmagazine.com/xq/asp/sid.8C4A6D9C-C22E-49D1-B3A3-DC0ADD8B840A/volume.9/issue.4/qx/displayissue.htm

Cover story: Capture and re-use http://groups.yahoo.com/group/sikmleaders/files/Inside%20Knowledge%20Article%20on%20HP.doc

The subject of knowledge capture and re-use has always created tension in the knowledge-management (KM) community because it carries visions of IT-driven knowledge repositories, choking with documents that are difficult to find and lacking in relevance to the searcher. Hewlett-Packard, however, may have cracked it. Jerry Ash reports.

Hewlett-Packard's Engagement KM initiative balances people, process and technology. By Jerry Ash

Case study: Aon - Practice makes perfect

Informal communities have existed within Aon for years, but in 2000 the company decided to implement a more structured approach to community development. By Sarah Adams


January 2006 SIKM Call: Doug Madgic - KM at Cisco Advanced Services #monthly-call

Garfield, Stan <stanley.garfield@...>
 
Edited

Today we held our 8th monthly call.  Here is a summary.

Attendees

1.      Patti Anklam, Hutchinson Associates
2.      Jerry Ash, AOK
3.      Tom Barfield, Accenture
4.      Gary Borella, Cisco
5.      Seth Earley, Earley & Associates
6.      Jason Ferguson, EDS
7.      Stan Garfield, HP
8.      Kent Greenes, SAIC
9.      Bruce Karney, KM-Experts
10.     Michael Koffman
11.     Doug Madgic, Cisco
12.     Mark May, IBM
13.     Mark Neff, CSC
14.     Sarah Oldrin, Solvay
15.     Kiran Seshadri, Tata Consultancy Services
16.     Sanjay Swarup, Ford
17.     Jack Vinson, Knowledge Jolt, Inc.
18.     Rick Wallace, SAIC

Doug Madgic presented "KM at Cisco Advanced Services," supported by Gary Borella.  His presentation is available at Cisco_CKC_Overview.ppt.

There was considerable discussion on user ratings of content.  If you would like to continue the discussion, please do so by using our group's threaded discussion capability.

Thanks to those of you who were on the call.  I will send out a reminder for future calls on the business day prior to the call.

Future Calls

·       February 21: Jack Vinson - "Knowledge communities, community building, and blogging"
·       March 21: Reed Stuedemann - "Caterpillar's Knowledge Network"
·       April 18: Bill Ives - "What blogs and wikis bring to business and knowledge management"
·       May 16: Kent Greenes - "Making learning & performing routine"
·       June 20: Raj Datta - "Building a Knowledge Culture"
·       July 18: Sanjay Swarup - "Seven Strategies for Maximizing the value of Knowledge Sharing"
·       August 15: Brian Gorman - "Virtual Collaboration in the Global Enterprise at Intel"
·       September 19: Steve Denning - "Leadership, Innovation, and Business Narrative"


Amazon Ratings - One Thumb Down #ratings #prediction-markets

Bruce Karney <bkarney@...>
 

Hi all,

In Tuesday's call, I made a comment about why I don't think "Amazon-
style ratings" are an effective KM strategy. Let me explain briefly
why I believe that, and what I think is a better approach.

Let me contrast two kinds of reviews or rating schemes. These are
not the only two kinds, but they represent the opposite ends of a
spectrum.

1. Peer review, prior to publication: This is the standard used by
scientists and academics. In essence, drafts of articles are
circulated to "experts" who offer advice and input prior to
publication. This input is used by the author (and perhaps the
editor of the journal) to improve the work BEFORE it is exposed
(published) to a wide audience.

2. Consumer review, after publication: Amazon, ePinions, and many
similar rating and awards systems use this approach. Because these
post-publication reviews cannot affect the published work, they
are "criticism" in the literary sense. In Amazon's case, no
credentials are required to post a review, so the reviewers are not
peers of the authors. Nobel prize winners and your local pizza
delivery guy have an equal voice in Amazon-land (and the pizza guy
probably has more free time).

Being able to write one's own review is a satisfying thing for the
reviewer, especially since it has only become possible to do this in
the last few years. However, the only way Amazon reviews impact the
world at large is to pull more readers toward a book or push a few
away. Isn't it better, especially in a business context, to use
techniques that IMPROVE THE QUALITY OF THE BOOK?

That's what Peer Review is designed to do. If business KM systems
can't support pre-publication Peer Review, they should at the very
least focus on post-publication Peer Review and document improvement.

I also mentioned that at HP, where I used to work, most document
ratings were 4's or 5's on a scale of 1-5. I have located a copy of
study I did on the topic earlier in the year, and would like to
share my findings:

For a sample of 57 "Knowledge Briefs," which are 6-12 page technical
documents designed to inform and enlighten, there were 12,295
downloads and only 53 ratings/reviews. This is a ratio of 1 review
per 232 downloads, and slightly less than one review per document.
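[Editor's note] The arithmetic behind those figures is easy to reproduce; a quick sketch using only the numbers quoted above:

# Quick check of the Knowledge Brief figures quoted above.
downloads = 12295
reviews = 53
documents = 57

print(round(downloads / reviews))      # -> 232 downloads per review
print(round(reviews / documents, 2))   # -> 0.93, slightly under one review per document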

ALL ratings were either 4 or 5. The 53 reviews were provided by 40
different individuals, so the vast majority of people who submitted
a review submitted only one, meaning (perhaps) that they lacked a
valid base for comparing the Knowledge Brief they were reviewing to
any other Brief. The most reviews submitted by a single person was
7, and the second-most was 3.

I contend that if you were perusing a listing of Knowledge Briefs on
a given subject, all of which were either unrated or had ratings
between 4.0 and 5.0, you would not have information that would steer
you toward the best documents or away from poor ones. You would
believe that any of the documents could be worthwhile, inasmuch as
none of them had low scores. Therefore, the rating scheme provides
NO value to the prospective reader. Worse yet, if there were a
document rated 1, 2 or 3, that rating would probably be a single
individual's opinion because of the infrequency with which Knowledge
Briefs are rated at all.

My conclusion: don't RATE documents, but create systems to provide
detailed written feedback from readers to authors BEFORE publication
if possible, or AFTER publication if that's the best you can do.
Encourage COLLABORATION, not CRITICISM.

Cheers,
Bruce Karney
http://km-experts.com


Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

Ravi Arora
 

Hi,

I agree 100% with what Bruce has to say. An inefficient and ineffective
system is as good as having no system.

a) But the fact remains: how does a company ensure that people
collaborate?

b) People do not participate in activities that they feel are not
worthwhile.

c) Why review a document just to store it? Best is to generate a
document just in time (JIT), when it is required.

Thank you,


Ravi



Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

Tom <tombarfield@...>
 

Bruce, I found your comments on Tuesday and in this note insightful.
I also liked the insights I heard from Kent Greenes (I think).

In the next couple of months I am going to be asking my team at
Accenture to develop our strategy in this area. Here are some
off-the-cuff thoughts based on Tuesday's discussion and what Bruce and
Ravi shared in this thread.

I wonder if we should consider moving away from trying to collect
feedback from everyone and instead try to get feedback from people
who feel very strongly about the content - either good or bad. In
other words, if something warrants a 2, 3, or 4 on a 5-point scale,
then I don't really care as much about the feedback.

If I download a piece of content that turns out to be a big help to
me (a score of 5 on a 5-point scale), I am probably more willing to
provide feedback saying thank you and recognizing that. It would be
like saying I only want ratings on 5-star stuff.

If I download something that I really find to be worthless (a score of
1 on a 5-point scale), I might be incented to provide feedback to
either improve it or get it out of the system so no one else has to
deal with it.

Tom Barfield
Accenture
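[Editor's note] A toy sketch of Tom's idea, acting only on feedback at the extremes of the scale; the routing actions are illustrative assumptions, not Accenture's design.

# Only act on feedback when the reader feels strongly (1 or 5 on a 5-point scale).
def route_feedback(rating, doc_id):
    if rating == 5:
        return ("recognize", doc_id)   # thank the author, promote the content
    if rating == 1:
        return ("review", doc_id)      # flag for improvement or retirement
    return ("ignore", doc_id)          # middling scores carry little signal

print(route_feedback(5, "brief-101"))  # ('recognize', 'brief-101')
print(route_feedback(3, "brief-101"))  # ('ignore', 'brief-101')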




Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

Mark May
 

I really enjoy this topic because it has great potential to benefit our
practitioners in the field and because it is a concept that everyone can
envision, given the ubiquity of the Amazon experience.

My thinking starts with how one would propose to use rating/feedback data.
I can see at least three possible uses: 1) provide confidence or caution
to people who are considering using an IC artifact; 2) help order search
results to put higher-rated content above lower-rated content; 3) provide
input to content managers, either to promote well-rated IC or to improve or
retire lower-rated IC.

These are all worthwhile and valuable uses of ratings/feedback data.
However, for many of the reasons that people have cited in this thread, I
don't think that a five-star rating system provides data that is really of
value to meet these ends. In addition, most content is not rated at all by
anyone, or is not rated by enough people, or it takes too long to get enough
ratings to represent a consensus opinion.

Given these limitations of an Amazon-like system, the IBM team is trying
something a bit different. First of all, we have deployed the standard
five-star system and allowed space for comments on all artifacts in the
major repositories. We feel that users have become conditioned through
other internet experiences to expect this kind of feedback approach.
However, we don't use that raw data by itself for KM purposes. We combine
the rating with other data to impute a VALUE for that artifact. The other
data includes the number of times the artifact was read, downloaded,
forwarded to others, and printed. These factors are weighted so that, for
example, a download counts for 10 times the value of a read. We also give
significant extra weight to any comments, since we think that the comments
are much more valuable than the ratings.
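[Editor's note] Here is a minimal sketch of such an imputed-value calculation; the weights below are illustrative assumptions only, not IBM's actual coefficients.

# Illustrative imputed-value score. The only weight taken from the post is
# download = 10x read; the others are assumptions for the sketch.
WEIGHTS = {
    "reads": 1.0,
    "downloads": 10.0,   # "a download counts for 10 times the value of a read"
    "forwards": 10.0,    # assumed comparable to downloads
    "prints": 5.0,       # assumed
    "comments": 25.0,    # comments weighted heavily, per the post
}

def imputed_value(stats, avg_rating):
    """Combine usage signals with the five-star rating into one score."""
    usage = sum(WEIGHTS[k] * stats.get(k, 0) for k in WEIGHTS)
    return usage * (avg_rating / 5.0) if avg_rating else usage

print(imputed_value({"reads": 200, "downloads": 40, "comments": 3}, 4.5))  # 607.5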

We have deployed this approach and are actively calculating imputed values
now. However, we have not yet begun to use the data. One of our highest
priorities is to help people find things faster, so we are eager to use
these imputed values to order search results. We also plan to make the data
available to content managers so that they can see what is accessed and
"valued" most (and least) among their inventory of content. The jury is
still out on how much our practitioners will actually benefit from this
imputed-value approach. We have some pilots planned for later this year to
see if it works as well in practice as we think it should.

Mark May
IBM




Amazon Ratings - One Thumb Down #ratings #prediction-markets

J Maloney (jheuristic) <jtmalone@...>
 

Hi --

I almost agree here. What is missing for me is the dynamic nature of knowledge and, importantly, reputation. At any given point in time, the conclusions in this thread are fine, but it is over time that ratings and rankings are really important. It is the notion of creating a reputation market. As we hear over and over (yet seem challenged to heed), KM is about connection, not collection. Giant monoliths like Amazon and enterprise repositories of documents are information, not knowledge.

It is fine to rate cataloged, indexed and codified information -- the documents -- but what's really happening is an implicit ranking of the author, the derivative or derived future of their tacit knowledge and, very importantly, their capability and willingness to get it used, e.g., their social networks, persuasive skills, etc. It is in the dynamic of knowledge creation and use that rankings and ratings really are an advantage. Reputation and identity are social systems that need the attention of KM. Remember, for KM it is less about how it is managed (library science, database management) and far more about how it is created, used, and applied. If the author is unavailable, then it is the social network and conversation that emerges around the ranked document, such as a wiki, that carries the value forward. Information needs to be socialized to release value. Yahoo! Answers is an example of a crude reputation system/market. There are more on the horizon.
 
 
Cordially,
 
John

John Maloney

IM/Skype: jheuristic
ID: http://public.2idi.com/=john.maloney

Prediction Markets: http://www.kmcluster.com/nyc/PM/PM.htm

KM Blogs: http://kmblogs.com/

 
 
 

Points and Levels

To encourage participation and reward great answers, Yahoo! Answers has a system of points and levels. The number of points you get depends on the specific action you take. The points table below summarizes the point values for different actions. While you can't use points to buy or redeem anything, they do allow everyone to recognize how active and helpful you've been. (And they give you another excuse to brag to your friends.)
Points Table

Action                                                         Points
Begin participating on Yahoo! Answers                          100 (one time)
Choose a best answer for your question                         5
Put the answers to your question to a vote                     5
Answer a question                                              2
Log in to Yahoo! Answers                                       1 (once daily)
Vote for a best answer                                         1
Rate a best answer                                             1
Have your answer selected as the best answer                   10
Receive a "thumbs-up" rating on a best answer that you wrote   1 per "thumbs-up"
(up to 50 thumbs-up are counted)

Levels are another way to keep track of how active you (and others) have been. The more points you accumulate, the higher your level. Yahoo! Answers recognizes your level achievements with our special brand of thank you's!

There's also a color associated with the levels. These colors will have more meaning as Yahoo! Answers rolls out new features in the coming weeks, and you'll see how "sharing what you know" and "discovering something new" can be fun and rewarding.

Level   Yahoo! Answers Thank-You                                        Color    Points
7       Be the first to find out!                                       Black    25,000+
6       Be the first to find out!                                       Brown    10,000-24,999
5       Be eligible to be a Featured User on the home page masthead.    Purple   5,000-9,999
4       Be eligible to be a Featured User on the community               Green    2,500-4,999
        editorial page. (coming soon!)
3       A super-special Yahoo! Answers thank you.                       Blue     1,000-2,499
2       A special Yahoo! Answers thank you.                             Yellow   250-999
1       Full access to Yahoo! Answers!                                  White    0-249

And finally, as you attain higher levels, you'll also be able to contribute more to Yahoo! Answers - you can ask, answer, vote and rate more frequently.

Limits (per day)                                        Levels
Unlimited questions, answers, votes & ratings           5, 6, 7
40 each: questions, answers, votes & ratings            4
30 each: questions, answers, votes & ratings            3
20 each: questions, answers, votes & ratings            2
10 each: questions, answers, votes & ratings            1
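[Editor's note] As a small illustration of how such a points-to-level scheme works, here is a sketch mapping a point total to the level tiers in the table above:

# Map a Yahoo! Answers point total to its level, per the thresholds above.
LEVEL_THRESHOLDS = [   # (minimum points, level), highest first
    (25000, 7), (10000, 6), (5000, 5),
    (2500, 4), (1000, 3), (250, 2), (0, 1),
]

def level_for(points):
    for minimum, level in LEVEL_THRESHOLDS:
        if points >= minimum:
            return level

assert level_for(120) == 1
assert level_for(3000) == 4
assert level_for(25000) == 7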
 





Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

David Snowden <snowded@...>
 

Sorry not to have been active, or on the calls - too much travel. However, this is a topic of considerable interest to us, and a part of our own software development and method research.
I think there are some key points that should be made:
1 - Goodhart's Law states "The minute a measure becomes a target it ceases to be a measure," and we can see extensive evidence of this in government and industry.
2 - Snowden's variation on that law is that "anything explicit will be gamed."
3 - It follows that it is key to any rating system that the rating does not produce financial or status-based rewards; if it does, it will be gamed (look at eBay, the ability to game Google searches, etc.).
4 - Blogs are now being manipulated, as are some other folksonomies.
5 - Any rating system needs to allow people to choose a filter in which they see ratings based on people whose opinion they respect.
6 - Artefacts cannot have absolute value (sorry to disagree with my former employer here); they have value in context.
7 - If we look at the three generations of understanding of data, we can see (i) it's all about the data, followed by (ii) it's about the data with conversations (CoP, folksonomy, etc.), and now (iii) data with conversations in models.
8 - This third approach is at the heart of the work we are currently doing on horizon scanning, weak-signal detection, etc. in anti-terrorism, but it is also applicable (and is being applied) in more conventional KM settings.  It requires a switch from taxonomic structures and search mechanisms to ones based on serendipitous encounter within the context of need.
9 - The concept of corporate memory needs to start to mimic human memory, which is pattern based.  An expert, for example, has over 40K patterns in their long-term memory, sequenced by frequency of use, which are selected on a first-fit basis (Klein and others).  Each of those patterns is a complex mixture of data, experience, and perspective.  By storing data-conversation-model combinations in context but without structure, we can start to allow contextual discovery.

Now a lot of that is cryptic, but the most important words are CONTEXT and SERENDIPITY.  We need to move away from storing, accessing and rating artefacts and start to look at corporate memory as a complex system in which patterns of meaning will emerge within models.
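[Editor's note] Point 5 above is straightforward to make concrete. Here is a sketch, with hypothetical names and data, of ratings filtered through each reader's own list of respected raters:

# Sketch of point 5: show each reader only the ratings made by people
# whose opinion they respect. All names and data are hypothetical.
ratings = {                    # document -> {rater: stars}
    "doc-42": {"alice": 5, "bob": 2, "carol": 4},
}
trusted = {"dave": {"alice", "carol"}}   # reader -> raters they respect

def filtered_rating(reader, doc):
    visible = [stars for rater, stars in ratings.get(doc, {}).items()
               if rater in trusted.get(reader, set())]
    return sum(visible) / len(visible) if visible else None

print(filtered_rating("dave", "doc-42"))   # 4.5 -- bob's rating is ignored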




Dave Snowden
Founder, The Cynefin Centre
www.cynefin.net



Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

John Maloney <jtmalone@...>
 

Dave --

Yep, all great points here.

One model abstraction for exploiting complexity, emergence, social
networks and self-organizing systems is prediction and knowledge
markets.

'Gaming the system' has been the traditional nemesis of enterprise
ranking/rating/reward systems since time immemorial.

Today, enterprise knowledge markets are transforming these 'natural'
behaviors into an essential method for the creation, conversion,
transfer and applied use of knowledge.
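[Editor's note] For readers new to the mechanics, Hanson's logarithmic market scoring rule (LMSR) is one common way such markets are run; the sketch below is illustrative only, not any particular vendor's implementation.

# Bare-bones LMSR prediction market for a yes/no question; illustrative only.
import math

B = 100.0                  # liquidity parameter: higher means slower price moves
shares = {"yes": 0.0, "no": 0.0}

def cost(q):
    """LMSR cost function over outstanding shares."""
    return B * math.log(sum(math.exp(s / B) for s in q.values()))

def price(outcome):
    """Current probability the market assigns to an outcome."""
    total = sum(math.exp(s / B) for s in shares.values())
    return math.exp(shares[outcome] / B) / total

def buy(outcome, amount):
    """Buy shares; the trader pays the change in the cost function."""
    before = cost(shares)
    shares[outcome] += amount
    return cost(shares) - before

print(price("yes"))            # 0.50 before any trades
buy("yes", 50)
print(round(price("yes"), 2))  # ~0.62: buying 'yes' moves the price up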

I also share grave concerns about ranking/rating of
documents/artifacts in the enterprise repository. It doesn't work
and never will for the reasons mentioned and about a half dozen
others.

Again, it is typical for enterprise managers to focus on documents,
control, management, ranking (library science) versus leading the
*practice* of how knowledge is really created, applied, used.

Artifact/document rating & ranking may give a warm 'in control'
feeling, and it is good for corporate librarians, but it doesn't
create any business advantage or advance enterprise knowledge. It is
important to the corporate mission in the way bookkeeping is important.

There is too little time to go into details on knowledge markets, so
I will be lazy and just share the press release and links. These
ongoing and forthcoming market conversations are highly germane to
this important thread.

Note: this event was scheduled for next week, but
people/participants from the World Economic Forum in Davos asked to
postpone until after the WEF. They are running some knowledge
markets at WEF on world events, energy prices, climate, H5N1 Virus
and so on, and wish to conduct conversations at the Summit.


<snip>

Prediction Markets Summit - February 3, 2006 - New York City

Download this press release as an Adobe PDF document here:

http://pdfserver.prweb.com/pdfdownload/335221/pr.pdf

Note. There are scant few seats left for this summit.

Colabria® and CommerceNet announce that Google, Yahoo!, MIT Sloan School,
NewsFutures, Corning, InTrade and HedgeStreet will join the Prediction
Markets Summit February 3rd, 2006 in New York City.

San Francisco, CA (PRWEB) January 19, 2006 -- Colabria® -- the
leading worldwide action/research network of the knowledge economy --
announces that NewsFutures, InTrade and HedgeStreet will join Google,
Yahoo!, CommerceNet and others for the Prediction Markets Summit,
February 3rd, 2006 in New York, New York USA.

http://www.kmcluster.com/sfo/PM/PM.htm

"Prediction markets are brutally honest and uncannily accurate." —
Geoffrey Colvin - Value Driven – Fortune Magazine.

Thomas W. Malone, Professor of Management at the MIT Sloan School,
founder of the MIT Center for Coordination Science, and author of
"The Future of Work," is a keynote speaker. Malone will discuss how
Intel uses prediction markets for manufacturing capacity planning.

James Surowiecki, author of "The Wisdom of Crowds: Why the Many Are
Smarter Than the Few and How Collective Wisdom Shapes Business,
Economies, Societies and Nations" is also an event keynote speaker.

Emile Servan-Schreiber, CEO of prediction market leader NewsFutures,
will describe how Corning uses enterprise prediction markets to
forecast demand for liquid crystal displays.

Charles Polk, CEO, Common Knowledge Markets, will lead a
conversation on Pandemic Flu Prediction Market (PFPF) and the H5N1
Virus Outbreak.

Knowledge markets are becoming commonplace in the smartest firms.
Top firms using prediction markets for KM are Google, Yahoo!,
Microsoft, Eli Lilly, Abbott Laboratories, HP, Intel and Siemens.

This event is sponsored by participants and CommerceNet
http://www.commerce.net/, NewsFutures http://us.newsfutures.com/,
InTrade http://www.intrade.com/ and HedgeStreet
http://www.HedgeStreet.com/.

Prediction market pioneer Yahoo! Research will sponsor a Pre-Summit
Reception, February 2nd, 2006 at their offices in Manhattan (for
registered participants only).

Summit sessions are practical and conversational. All are welcome;
secure online pre-registration is required.

http://www.kmcluster.com/nyc/PM/PM.htm

-jtm
http://kmblogs.com/





--- In sikmleaders@yahoogroups.com, David Snowden <snowded@b...>
wrote:

Sorry not to have been active, or on the calls - too much travel.
However this is a topic of considerable interest to us, and a part of
our own software development and method research.

Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

David Snowden <snowded@...>
 

I agree that prediction markets are useful, but (i) we do not know yet what will happen when people try to game them and (ii) as the results are visible, they will change the predictions.

My gut feel is that we should get rid of the "prediction" word for a bit and talk instead about "prevention" and "enablement".




Dave Snowden
Founder, The Cynefin Centre
www.cynefin.net


On 21 Jan 2006, at 10:18, John Maloney wrote:

Dave --

Yep, all great points here.

<snip>


Re: Amazon Ratings - One Thumb Down #ratings #prediction-markets

J Maloney (jheuristic) <jtmalone@...>
 

Hi Dave --

Brief response in caps.

I agree that prediction markets are useful,

GOOD.

but (i) we do not know yet what will happen when people try to game

THEY ARE INTENDED TO BE 'GAMED' AGGRESSIVELY. THAT IS THE WHOLE POINT.

them and (ii) as the results are visible then they will change the
predictions.

YES, ABSOLUTELY. MARKETS MUST BE TRANSPARENT TO BE EFFECTIVE. THE CHANGES
DETERMINE THE PRICES.

LOOK AT THESE CONTRACTS FOR AN IDEA:

http://www.intrade.com/jsp/intrade/contractSearch/
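
On point (ii), the repricing is the mechanism rather than a flaw.
Neither message names a concrete market design, so the following is
only a sketch of one well-known possibility, Robin Hanson's
logarithmic market scoring rule (LMSR); real exchanges like InTrade
run order books instead. It shows how every trade must be paid for
and immediately moves the visible price:

import math

class LMSRMarket:
    """Two-outcome prediction market using Hanson's logarithmic
    market scoring rule (LMSR)."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity: larger b means prices move less per trade
        self.q = [0.0, 0.0]   # outstanding shares for outcomes [yes, no]

    def _cost(self, q):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        # The instantaneous price doubles as the market's probability estimate.
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function, so moving the
        # price is never free -- 'gaming' the market costs real money.
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return fee

m = LMSRMarket(b=100.0)
print(round(m.price(0), 3))  # 0.5 -- no information in the market yet
m.buy(0, 50)                 # a confident trader buys 50 "yes" shares
print(round(m.price(0), 3))  # ~0.622 -- the trade is public and the price moved

Under this design, a manipulator who pushes the price away from the
truth simply subsidizes better-informed traders, which is one reason
aggressive 'gaming' is welcome.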

My gut feel is that we should get rid of the "prediction" word for a bit and
talk instead about "prevention" and "enablement"

YEAH, MAKES SENSE. MARKETS ARE VERY USEFUL, ESSENTIAL REALLY, TO RISK
MANAGEMENT (HEDGING).


More to the story...

"Thirty years ago, when Charles Schwab bucked Wall Street and founded the
first ever discount brokerage, people were skeptical. It couldn't be done,
they claimed. Wall Street was governed by the large financial giants which
charged enormous fees, catered to the very rich or the institutional
investors of the world, and largely paid lip service to the small investor.
Despite being blackballed by Wall Street, Schwab changed all that, bringing
the financial markets to the masses, reducing commissions to affordable
levels such that the nation's significant middle class could invest for
their own futures in the same way institutions had done for years. It was a
watershed moment." {hedgestreet] The rest, as they say, is history.
"Chuck" now has the biggest compound in Pebble Beach!


It is roughly the same thing today. Only highly 'sophisticated' brokers and
bankers trade derivatives -- about 3% of very high net worth investors and
giant institutions. Yet it makes up a HUGE market, some say bigger than the
stock market. Retail hedge markets are coming. See: "HedgeStreet's site
www.hedgestreet.com enables members to trade small, inexpensive,
easy-to-understand 'event derivative' contracts (called Hedgelets®) in
markets never before accessible to individual traders." [hedgestreet]
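
To make the hedging point above concrete: a binary event contract
pays a fixed amount if a yes/no event occurs, so holding enough of
them caps a firm's downside at the premium paid. The $10 payout,
prices and exposure below are invented for illustration and are not
HedgeStreet's actual contract terms:

def hedge_event_exposure(loss_if_event, contract_price, payout=10.0):
    """Offset a fixed loss tied to a yes/no event by buying binary
    contracts that each pay `payout` dollars if the event occurs.
    contract_price / payout is also the market's probability estimate."""
    contracts = loss_if_event / payout    # contracts needed to cover the loss
    premium = contracts * contract_price  # up-front cost of the hedge
    return contracts, premium

# Hypothetical: a fuel-sensitive business loses $50,000 if heating oil
# spikes, and the matching event contract trades at $3.50 (an implied
# 35% probability).
contracts, premium = hedge_event_exposure(50000, 3.50)
print(contracts, premium)  # 5000.0 17500.0

# If the event happens, the contracts pay 5000 * $10 = $50,000 and
# cover the loss; if not, the business is out only the $17,500 premium.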

Cheers,

John





John Maloney
T: 415.902.9676
IM/Skype: jheuristic
ID: http://public.2idi.com/=john.maloney

Prediction Markets: http://www.kmcluster.com/nyc/PM/PM.htm

KM Blogs: http://kmblogs.com/
