Perspectives on KM World conference - case studies #case-studies


Matt Moore <innotecture@...>
 

Tom,

"This sounds like an example of a case study to me."

Yes - and you've hit on another bugbear of mine - bad case studies! I've had 2 bad case study experiences this year:

1. A paper that contained 2 "case studies". These case studies concerned very well-known organisations, and the researcher's fieldwork seemed to have consisted of reading a couple of articles about them in the press. Ta da - hypothesis proved by reading. I am highly sceptical of case studies based solely on second-hand material, because I have experienced the massive gulf between public perception & internal reality in several organisations.

2. A case study presented at a mixed practitioner/academic conference by an academic who had completely misunderstood what she'd been told by her interviewees. She had a theory (a rather rudimentary & out-of-date one) and seemed to have used her evidence solely to fit it. The practitioners tore her to shreds (yes, I did offer a little constructive criticism). The other academics sounded a little embarrassed by her behaviour.

Both these incidents highlight a problem with case studies. It's not so much the sample size as the lack of direct researcher experience - and of critical thinking applied to that experience.

"Where things can get a bit tenuous is when you then attempt to take your conclusions from your n of 1 experiment and develop generalized prescriptive recommendations from them."

Absolutely agree - but then this is the cognitive model used by many consultants and business people (hmmm - better stop now before I hit another bugbear).

Matt


Murray Jennex
 

Matt,
 
You are right about case studies.  Many think, as you say, that if you talk to someone or read an article or two, you've done a case study.  This is what I mean by rigor of method.  Case study methodology is pretty much defined by Yin, who preaches convergence of data: you do interviews, review organizational documents, observe, perhaps survey the organization, look at performance data, etc., and where all these data streams converge is where you have confirmed something.  To publish case studies, authors need to delineate what data streams they have looked at, how those data are analyzed, and then what it all means.  Many authors neither code the interviews they do nor try to quantify their scoring or interpretation of documents and performance standards; these are poor case studies.
 
This is probably one of the major differences between case study research in the US, which pushes the Yin approach, and in Europe and Australia, which haven't done so until recently, i.e. when pushed by journal publishers.  I have reviewed and rejected many articles that simply tell a story without describing the methods used to collect and analyze the data during the case study.
 
I know this has been a long and possibly tedious thread, but I think we are now getting to the heart of what I was referring to by harping on rigorous methodology for getting reflective research published.  thanks...murray