Approach to validating expertise or skill level #expertise #expertise-location
In any business with so-called 'subject matter experts', how do we validate their expertise? Are there hidden experts within the organisation, perhaps not currently working in their field of expertise, or not of a personality to self-promote? How do we develop confidence and trust when validating expertise and knowledge?
In 2008 there was a SIKM conversation on "tools for validating expertise". What I would like input on is approaches, rather than tools, to assist in developing a program that helps the business identify and validate people's skills and knowledge across a variety of domains. This program would fit within the scope of a knowledge transfer program of works. I look forward to your thoughts, lessons learnt, words of advice, recommended reading... Nicky Hayward-Wright
TRflanagan@...
Expertise is increasingly recognized not as a solitary trait but as a consequence of a juxtaposition of important ideas. As long as we persist in seeking individual expertise, we will miss the power of defining expertise as a construct that results from a powerful convergence of distinct yet agile perspectives.
The question, then, has less to do with who is the most credentialed in a specific art than with recognizing who has the strongest perspective and the most agile mind for working across fields of art ... who can speak the languages and hear the words.
Peer nominations are powerful here. Ask "Who are the top three individuals who most effectively represent your perspective on matters of importance to our community / organization?" Attend to names that fall on many lists and to names that are unique. This is one bit of wisdom that might be most harvestable through crowdsourcing, no?
At least this is how things seem to shake out in our practice groups.
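A minimal sketch of how such peer nominations could be tallied (the names and lists below are purely illustrative, not from any real survey):

    from collections import Counter

    # Each respondent names up to three people in answer to the nomination question.
    nominations = [
        ["Alice", "Bob", "Chen"],
        ["Alice", "Dana", "Bob"],
        ["Eve", "Alice", "Frank"],
    ]

    # Count how often each name appears across all the lists.
    tally = Counter(name for answer in nominations for name in answer)

    # Names that fall on many lists suggest widely recognized expertise;
    # names mentioned only once may point to niche or hidden expertise.
    widely_nominated = [name for name, count in tally.most_common() if count > 1]
    unique_mentions = [name for name, count in tally.items() if count == 1]

    print("Widely nominated:", widely_nominated)
    print("Unique mentions:", unique_mentions)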
Cheers
t
Tom Flanagan, Board President
Connie Crosby
Nicky, Joel Alleyne is/was doing his PhD on the topic of expertise. He has compiled a reading list on Amazon that might be helpful: http://www.amazon.com/Expertise-Management/lm/R6XRQ3JTB9SX1
It was unfortunately last updated in early 2009, but there look to be some interesting titles there.
Joel also gave our inaugural Knowledge Workers Toronto talk, a wide-ranging discussion introducing the idea of expertise. His slides are posted here, in case there is something of interest you can glean: http://www.joelalleyne.net/2009/02/12/presentation-to-the-toronto-knowledge-workers-group/
It's a very interesting question. In a large network such as academia, we can look at how often someone has been cited to help determine expertise; I'm not sure whether we can uncover similar data (such as how often someone's documents have been accessed, reused or adapted) in our organizations.
Cheers, Connie Connie Crosby Crosby Group Consulting Toronto, Ontario, Canada 416-919-6719 conniecrosby@... http://www.crosbygroup.ca | http://twitter.com/conniecrosby | http://conniecrosby.blogspot.com |
Tom Short <tman9999@...>
I like tag clouds. The trick is to get access to a representative source of employee-authored content, like email. Imagine an enterprise with 100,000 employees - email inevitably plays a significant role in internal communication, and most employees use it, albeit to varying degrees. Attaching a word cloud generator to each person's sent mail produces a cloud of the terms they use most frequently. Is it a foolproof method of identifying expertise? No. Does it provide a useful proxy, in a large organization, for who is an SME in a particular area (or knows who the SMEs are)? Yes.
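A minimal sketch of the underlying term-frequency step, assuming each person's sent mail is already exported as plain-text files (the directory layout, stop-word list, and threshold are illustrative; a real deployment would need mail-system access and far better text cleaning):

    import re
    from collections import Counter
    from pathlib import Path

    STOP_WORDS = {"the", "and", "for", "with", "that", "this", "from", "have", "will", "would"}

    def term_frequencies(mail_dir, top_n=50):
        """Count the most frequent terms in one employee's sent mail.
        The resulting counts can feed any word-cloud renderer."""
        counts = Counter()
        for path in Path(mail_dir).glob("*.txt"):  # assumed layout: one plain-text file per message
            words = re.findall(r"[a-z]{4,}", path.read_text(errors="ignore").lower())
            counts.update(w for w in words if w not in STOP_WORDS)
        return counts.most_common(top_n)

    if __name__ == "__main__":
        for term, count in term_frequencies("sent_mail/"):
            print(term, count)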
For smaller organizations, or departments of technical staff, an alternative approach is to use self-declared expertise, but base it on objective, behavioral dimensions rather than subjective scales. For instance, "I supported maintenance on system xyz" indicates a different level of expertise than "I had primary responsibility for O&M on system xyz". I will be interested to hear how you decide to proceed with this and what results you get - please keep us posted. rgds, Tom Short
Matt Moore <innotecture@...>
Hi, I think that Tom F has a set of valid points that also came out of the Using Expertise research that Patrick Lambe & I carried out:
- We tend to treat "expertise" as though it is all the same thing. However, expertise varies. Some expertise is easy to credentialize, some not. Some is very common, some rare.
- Expertise is often a property of groups as much as of individuals.
- Ultimately, we are not interested in expertise in and of itself. We want to do stuff, and an "expert" is someone who can help us do that stuff.
What does all this mean in practice?
- We should not expect our identification methods to apply equally well to all types of expertise.
- We should look at group capabilities as much as individuals.
- We should orient our efforts around what we want to do (both now and in the future) rather than exhaustively inventory what we know.
Cheers, Matt
Robert L. Bogue
Well, I try not to have my first message to a group be divisive, but I felt I had to respond to the tag cloud idea.
First, the thread was about expertise validation, not identification; a tag cloud doesn't address this need. My perspective is that the only way to do expertise validation is to have users recommend others for their expertise (like LinkedIn recommendations), but I've never seen this work in practice because you don't get enough activity in the engine to be meaningful. My answer is that there's little reason to falsify expertise, so I typically don't see this as a problem.
Second, as it pertains to tag clouds, I think they're a massive waste of space. They're based on a concept of browsing, but that doesn't seem to be what knowledge seekers do. In fact, the whole concept of a tag cloud is odd to me, since it doesn't lead people to knowledge; it leads them to articles via search... which may or may not be related to something useful. From my perspective, tag clouds are "full of sound and fury, signifying nothing."
Rob
------------------- Robert L. Bogue, MS MVP: Microsoft Office SharePoint Server, MCSE, MCSA:Security, etc. Find me Phone: (317) 844-5310 Blog: http://www.thorprojects.com/blog
Murray Jennex
One suggestion is to perform social network analysis on an organization to see who people actually go to when they have a question. An expert who isn't part of a busy communication path probably is not perceived as an expert, and someone who is part of one may well be an expert. ...murray jennex
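A minimal sketch of that kind of analysis using the networkx library, assuming the "who do you go to with questions?" answers have already been collected as (asker, person asked) pairs (the names are illustrative):

    import networkx as nx

    # Each edge records one answer to "who do you go to when you have a question?"
    who_asks_whom = [
        ("Priya", "Maurie"), ("Sam", "Maurie"), ("Lee", "Maurie"),
        ("Maurie", "Ana"), ("Sam", "Ana"), ("Lee", "Priya"),
    ]
    graph = nx.DiGraph(who_asks_whom)

    # In-degree centrality: how often someone is named as a go-to person.
    # Betweenness centrality: how often someone sits on paths between others,
    # a rough indicator of a gatekeeping or brokering role.
    named_by = nx.in_degree_centrality(graph)
    broker = nx.betweenness_centrality(graph)

    for person in sorted(graph, key=named_by.get, reverse=True):
        print(person, round(named_by[person], 2), round(broker[person], 2))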
Robert L. Bogue
The problem with analyzing social networks is that you find only the expertise that is frequently used, which means it's more common, which means it's not the critically high-value stuff you may be trying to connect to in the first place. Sure, you should allow members to identify the influence of other members by passively indicating who they go to for information, but this isn't the same as validating the expertise of a member.
For instance, the Microsoft MVP community, of which I am a part, is largely influential; that's effectively the criterion on which the award is made. However, an MVP's influence and their expertise are not related. Numerous MVPs I know answer forum questions with under-informed answers that represent the filling of space more than expertise. So influence (the frequency with which someone is asked for expertise) and the expertise of the individual (what they actually know) are not the same thing, and in some cases are not even close.
Rob
------------------- Robert L. Bogue, MS MVP: Microsoft Office SharePoint Server, MCSE, MCSA:Security, etc. Find me Phone: (317) 844-5310 Blog: http://www.thorprojects.com/blog Have you heard about The SharePoint Shepherd's Guide for End Users? Learn more at http://www.sharepointshepherd.com/
Stan Garfield
This is a great thread, with thoughtful and insightful replies. Echoing the comments about tapping collective expertise, I think that if you post a query to a relevant community, expertise will emerge in the replies to the query.
By reading the full thread, you will get a sense of the different points of view, see points and counterpoints, and thus get a sense of what multiple people think. By following all threads in a community over time, you will see who posts on a variety of topics, the reactions to those posts, and thus be able to form an opinion on who provides the most useful advice. This community, and this thread, are examples of this.
Tom Reamy <tomr@...>
I agree that this has been a great thread, but it seems to me that one thing is clear: no single method works in all cases, and there are caveats with all of them. Following posting behaviors, for example, doesn't always distinguish between real experts and people who just like to hear themselves talk. With such a variety of methods, the question becomes how to combine different ways of characterizing and following the variety of types of expertise.
One additional method that I've experimented with is using text analytics software to develop categorization rules that categorize both the subject and the level of expertise expressed in documents. It turns out that it is possible to distinguish expert writing in a variety of ways. The method has the advantage that it can be applied to individual documents, to the set of documents by an author, and to whole communities of documents, posts, etc.
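A much simplified sketch of the idea (these keyword patterns are illustrative stand-ins, not the rules a real text-analytics platform would use; actual categorization rules involve phrases, proximity operators, and per-domain tuning):

    import re

    # Illustrative subject rules: a subject applies if any of its terms appear.
    SUBJECT_RULES = {
        "nuclear containment": ["containment", "reactor", "pressure vessel"],
        "blast furnace": ["blast furnace", "tuyere", "hearth"],
    }

    # Crude markers often associated with expert writing: precise quantities,
    # standards references, and careful qualification.
    EXPERT_MARKERS = [r"\b\d+(\.\d+)?\s?(mpa|kpa|mm)\b", r"\biso\s?\d{3,5}\b", r"\bprovided that\b"]

    def categorize(text):
        """Return the subjects a document touches and a rough expertise signal."""
        lowered = text.lower()
        subjects = [s for s, terms in SUBJECT_RULES.items() if any(t in lowered for t in terms)]
        expert_hits = sum(len(re.findall(p, lowered)) for p in EXPERT_MARKERS)
        return {"subjects": subjects, "expertise_signal": expert_hits}

    print(categorize("The containment is rated to 0.4 MPa provided that the liner is intact."))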
Tom Reamy Chief Knowledge Architect KAPS Group, LLC 510-530-8270 (O) 510-530-8272 (Fax) 510-333-2458 (M)
Text Analytics World Boston: October 3-4, 2012
Murray Jennex
I agree with the below to a point. Analyzing social networks also identifies who people go to when they have a question, and even when the knowledge is infrequently used, those people may still be gatekeepers who know who has it based on past experience, or it could still be them. This actually happened to me last year after Fukushima. I was initially contacted by the news media because I had been a good source of knowledge about the nuclear industry in the past. However, as the event unfolded, it turned out I had also actually tested nuclear containments, including the type used at Fukushima, and was able to provide expert insight into how likely those containments were to fail. That I ended up being right helped, and that I can explain things well has kept my social network continually busy in this area.
Playing devil's advocate: if you are looking to validate rare and infrequently used knowledge, first, how will you know what you need of it; second, why would you spend a lot of resources validating something you may never use; and third, why wouldn't you still go first to proven related expertise to see if those sources have it? If they don't, they may still know who does. ...murray jennex
Dave Cerrone
Great inputs from everyone. Here are a couple of my thoughts and observations:
** One practice to enable validation (and/or higher weighting) of expertise is based on the expertise descriptors given to the SMEs of a community by the community leaders. For example, if I am an SME for a CoP on knowledge sharing and also an SME for a CoP on digital strategies, the first group might have me "tagged" as an expert in gamification and the second group might have me "tagged" as an expert in LinkedIn. In both cases I could show up as an expert for either of these terms, but the tags were not self-selected; they were given to me and validated through the CoPs where I act as an SME.
** The terms "expertise" and "experience" can and should be viewed differently, although they are complementary. While looking to connect with recognized and validated subject matter experts, it can also be valuable to hear from those who have experience with the topic but who are not, or not yet, recognized as experts.
- Dave
Dave Cerrone GE Energy
T 1-518-385-0196 M 1-518-605-6539
1 River Road, Schenectady, NY 12345
Simard, Albert <albert.simard@...>
I’d like to comment on this excellent thread from a different perspective.
The science community has been evaluating scientists for centuries. Methods range from one science director who said “I can’t describe good science, but I know it when I see it” to quantitative approaches. My experience follows.
Quantitatively, the most elementary statistic is the number of publications ("publish or perish"). In this case, the evaluator defers to journal editors and scientific reviewers to evaluate the merits of individual papers and, collectively, of a body of work. Counting publications leads some scientists to pump out quick, short papers to get the numbers up, which, in turn, leads managers to focus on papers in quality scientific journals. Of course, publishing a book chapter or two is very good for a scientist's brand, because there are far fewer of them and it implies that a book editor knows you and thinks enough of your work to invite you to write something. Except, of course, when there is an open call for chapters and editors select from whatever is submitted (but managers don't generally have this inside tidbit of information!).
Then there is the Science Citation Index, published periodically, which lists the number of times a publication has been cited by other authors (effectively sorting the wheat from the chaff). This is similar to the network analysis methods others have mentioned in this thread. In a sense, the evaluator defers to the global science community to rank scientists. This, of course, leads scientists to try to get their names included as coauthors on as many publications as possible, hoping that some will be well cited. That, in turn, leads managers to focus on publications in which the individual is the first author, and to ask the individual to describe their role in conceiving the idea, doing the research, and writing the paper. This leads to very long CVs (mine is more than 40 pages long!).
Still on the quantitative side: in an applied research organization, technology transfer is also important, so the number of presentations also counts. Even here, they are classified into scientific symposia, professional conferences, and workshops, with decreasing subjective value. The number and quality of consultations is also an indicator of practitioners' respect for a scientist in a domain. These are categorized as international, national, regional, or local, in decreasing order of importance. Committee assignments are a similar indicator from an organizational perspective: inter-organizational, enterprise, branch, or local. Finally, there is a listing and description of instances where an author's research has been applied (by whom, for what, with what benefits, etc.).
Seemingly more subjective, but surprisingly consistent is the “Scientist Review Panel.” Every science organization has something like this. For example, five scientists from different disciplines review the CVs of five scientists at least one pay grade lower. Each panel member independently classifies each scientist according to a set of criteria, such as those listed previously, on a five-point scale. Having participated in many such panels, I am constantly amazed at the consistency of the independent ratings – rarely differing by more than one level even across disciplines. When there is a large difference, the discipline expert provides additional detail to guide the discussion and a consensus rating is reached.
Bottom line. Although the specifics differ, I see so many parallels with the way that things are (or should be) done in KM. In spite of the considerable structure imposed by the bureaucracy on the scientist evaluation process, and multiple indicators used, I have found that:
I will leave it for the group to decide whether any of this adds any value to this discussion.
Al Simard
Patrick Lambe
Robert - I just wanted to take issue with the "signifying nothing" comment.
I think the value of tag clouds depends on where the tags come from - i.e. what they signify. Folksonomy (uncontrolled) tags have very little value in a bounded system such as an enterprise because there is too much variability in tagging relative to the size of the knowledge base - that's very similar to your point about having enough activity in the engine to be meaningful.
To respond to Tom, using text analytics on email also has very variable results, especially in discerning multi-term concepts, implicit concepts (terms that are never used because they are taken for granted), and in relating micro-comments to broader expertise areas. You have to have a very sophisticated understanding of the differences between expert and non-expert language patterns across all the relevant domains, and you may actually end up having to identify your experts first so that you can model their language. Not to mention people worrying about privacy and accountability if their email correspondence is exposed to this sort of analysis.
However, the use of controlled tags, e.g. from an enterprise taxonomy or thesaurus, across documents contributed, discussions participated in, communities belonged to, and projects, reduces the variability and extends the range of material in which you can try to see patterns - assuming you can get a decent tagging discipline in place and a taxonomy that is a reasonable representation of the knowledge base. In this case, people accumulate tags based on the tags they frequently use, and the tag clouds can take you to people or documents, or both.
An aside: I have also seen very effective tag clouds based on frequent search terms - the intranet team analyses the most frequent search terms to compile "best bets" search results (pushing the best documents - or people - for that search term up to the top of the results list), so that you get qualified and validated results for the high-volume search terms.
I agree with you that these (and other) approaches only identify possible experts and that validation is an additional step. But identification is useful as a precondition for validation. The health warning with social network analysis can be summed up in a comment somebody recently made in one of our client focus groups: "You don't necessarily go to the best qualified person to ask for help. You go to the most accessible and friendly person." Any individual approach has its limitations, but you can triangulate across different approaches to get a more robust picture - very similar to Albert's multi-strand approach to evaluating scientists.
I didn't see any replies referring to the HR function of mapping out competency frameworks. In my view these are under-utilized in the KM space and should be directly relevant to expertise.
Finally, Matt Moore and I did a project on expertise a couple of years ago, one piece of which was to collect a whole bunch of anecdotes about people's experiences with accessing expertise in their organisations. One of the things that emerged was that what is viewed as expertise is highly contextual to the need - i.e. what counts as "expertise" is not consistent across different situations and organisations, and may not even be recognised until a need is articulated that cannot be readily met. The anecdotes can be viewed at http://usingexpertise.blogspot.sg/ (they come from a lot of different contributors; Matt's and my names are on them to preserve anonymity).
I reported on early results from this project at KM World in 2009 http://www.greenchameleon.com/gc/blog_detail/knowledge_continuity_at_km_world_2009/ P Patrick Lambe Partner Tel: 62210383 weblog: www.greenchameleon.com Have you seen our new KM Planning Toolkit?
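A minimal sketch of the first step in the "best bets" approach Patrick describes above (finding the high-volume search terms worth curating), assuming the search log is simply one query per line; the file name and threshold are illustrative:

    from collections import Counter
    from pathlib import Path

    def high_volume_terms(search_log, min_count=25):
        """Return the search terms frequent enough to justify a curated "best bet"."""
        queries = (line.strip().lower() for line in Path(search_log).read_text().splitlines())
        counts = Counter(q for q in queries if q)
        return [(term, n) for term, n in counts.most_common() if n >= min_count]

    # Each high-volume term would then be mapped by the intranet team to a
    # hand-picked document or named expert, pinned to the top of its results page.
    if __name__ == "__main__":
        for term, n in high_volume_terms("search_queries.log"):
            print(n, term)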
stem1949 <jstemke@...>
In some of our technical online communities we handled this by creating two groups of "experts".
There were recognized company experts who had been working in the discipline for many years. The subject area was part of their job responsibility. They were nominated as SMEs. However, with their workload, they did not always have time to respond quickly to messages from the community. But when someone needed an authoritative answer to a major issue, it was easy for them to find the relevant expert to ask for help.
The second group of "experts" were labeled as people willing to answer questions in the discipline. Anyone could nominate themselves. This option takes advantage of the basic premise of social learning: everyone knows something. These people were more likely to be watching for questions (it was a good learning experience for them) and offering advice if they had relevant information.
We used a caveat emptor philosophy. The person asking the question was responsible for selecting and applying feedback. If they weren't sure, they would check with a local resource before making changes. The recognized SMEs would also review answers and interject if any advice given was wrong.
This approach worked very well. People got answers quickly. Experts weren't overburdened. People willing to answer also got recognition for their contributions. --Jeff
Laurence Lock Lee
I learnt a long time ago, when working as a 'knowledge engineer' developing expert systems, that 'expertise' is a multi-faceted concept. I interviewed 'experts' in some depth and found that their so-called technical expertise was often not that exceptional when compared to their peers. What appeared to be exceptional was the breadth of their 'general knowledge'. In fact, I recall one anecdote from someone describing an expert I was studying: 'Maurie even knows where the brooms are kept on back shift!' This collective experience led me to SNA, something I have now practiced for well over a decade across 50+ projects. So, to comment on the posts regarding SNA:
- If you ask the question about validating expertise, you can take the academic approach and check certified credentials ... though I don't see much certification in critical expertise like nuclear reactor meltdowns (the steelmaking equivalent is blast furnace freezes, though not as damaging to the environment). I see SNA as the pragmatic alternative.
- To respond to the point about SNA only picking up 'regular use' expertise: in my experience SNA does effectively identify true 'experts'. Yes, they have to be accessible, but what use is an inaccessible expert? True experts, in my experience, handle the 'irregularly needed' expertise as well. We generally find this out when we ask people why they nominated a particular expert after the SNA is completed. In one instance I recall an interviewee saying that 'xxxx is probably one of the only people who can effectively resolve mixed steel events' (which rarely happen, but when they do they are complex to resolve). xxxx was a centrally nominated expert.
- Getting back to the main question: I'm reading that the issue here is not so much validation as wanting to know whether we have enough expertise in the organisation, i.e. some form of risk assessment. We use an SNA measure based on the density of connections within a domain of knowledge to assess this. See more here: http://www.optimice.com.au/corecompetencyassurance.php
Dr Laurence Lock Lee
llocklee@...
Ph: 0407001628
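A minimal sketch of a domain-density measure of the kind Laurence mentions, using the networkx library (the consultation edges and domain membership below are purely illustrative):

    import networkx as nx

    # Who consults whom (from an SNA survey), and who is associated with which knowledge domain.
    consults = [("Ana", "Maurie"), ("Ben", "Maurie"), ("Ana", "Ben"), ("Cara", "Dev")]
    domain_members = {
        "blast furnace operations": {"Ana", "Ben", "Maurie"},
        "mixed steel events": {"Cara", "Maurie"},
    }

    graph = nx.Graph(consults)

    # Density of each domain's subgraph: 1.0 means everyone associated with the
    # domain is connected to everyone else; values near 0 flag thinly connected
    # (and therefore higher-risk) areas of expertise.
    for domain, members in domain_members.items():
        subgraph = graph.subgraph(members & set(graph.nodes))
        print(domain, round(nx.density(subgraph), 2))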
Thank you to everyone for your responses to my question on approaches to surfacing and validating expertise and skill. My apologies for the delay in this thank-you.
I am in the early stages of resource gathering and it will be a while before I move forward; however I will keep you posted on my progress.
My humble offering is the following resources:
Environmental Protection Agency (EPA) Victoria developed an "expertise location" program. This program was shared at the March 2012 Knowledge Management Round Table Victoria. Notes are available here: http://storify.com/nickyhw/kmrtv-2012-03
The Capability Development program at VicRoads was discussed at the November 2011 KM Round Table. Notes are available here: http://storify.com/nickyhw/kmrtv-2011-11 Of interest is the Dreyfus novice-to-expert model. I have also collated your responses in the attached document.
3378_expertise_locating_validating.docx
Once again, thank you for your generosity in sharing and for the robustness of the conversation.
Regards
Nicky
Nicky Hayward-Wright
Advisor Knowledge Management
GS1 Australia