Re: How would you describe this knowledge failure? #question #strategy #culture

Tim Powell

Great job, Stephen -- in asking a challenging and provocative question, then fielding, summarizing, and evaluating the various responses -- and doing all this rapidly, while we’re still thinking about it. Thanks, too, for all this relevant bonus material!


And thanks, all, for your interesting and divergent contributions – each one of which adds insight.  I am struck that (1) so many of us have experienced and/or observed this phenomenon, while (2) we have little common language (yet) to even describe the problem.  I’m not totally surprised, though, since I view this as a “dirty little secret” of KM – seldom discussed openly, but challenging to the credibility and practical usefulness of some of KM’s core principles and practices.


Am I the only one here who feels this “thing” – however one labels it -- is a profound, even existential, challenge to the knowledge discipline(s)?  Have any of you addressed it in a rigorous way?  How?  What resulted – and, in particular, what progress have you been able to make?


Stephen, your bringing this to light and focusing our “hive mind” represents a major leap forward, in my view.






PS:  Fair call that you assess the term I use as “technical,” which frankly I’m pleased about.  I am often uncomfortable that much of KM – and management theory in general – is anecdotal and non-falsifiable, whereas my unwavering belief is that (as Peter Drucker said decades ago) we need a science of knowledge.  That’s my own goal – though, perhaps, an impossible dream…


TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge |

New York City, USA  |  TEL + | 




From: <> on behalf of Stephen Bounds <km@...>
Reply-To: "" <>
Date: Friday, January 14, 2022 at 1:48 AM
To: "" <>, "" <>
Subject: Re: [SIKM] How would you describe this knowledge failure? #strategy #culture #question


Hi all,

Thank you so much for your suggestions! There was a lot of variety in your answers, which is perhaps to be expected. Some consolidated comments in response:

·         Rigidity trap (Andrew Farnsworth) - The adaptive cycle is an excellent concept, and I think the "peak rigidity" point captures what I'm looking for. I particularly like the phrase that "system resilience approaches a minimum as resources are committed to maintaining stability".

·         Obsolescence Aversion (Alex Zichettello) - Nice, but it has the slight drawback that it sounds like it is the organisation seeking to avoid obsolescence, whereas the intent is to describe an organisation holding on to practices that are no longer fit for purpose

·         Epistemic latency (Tim Wood) - Another good term, but quite technical for a lay person, and doesn't fully capture the sense of an evolving environment

·         Culture (Joitske Hulsebosch) - Correct in the broadest sense! But not specific enough to support diagnosis and action IMO.

·         Plow ahead (Dan Ranta) - I like that this captures the problem of unreflective practices. It needs some nuance to tie it to the idea of changed assumptions / environments

·         Core rigidity (Patrick Lambe) - Quite similar to the rigidity trap, but as Patrick rightly points out, the problem is "one of unwinding the structural elements that reinforce the previous practices ... deliberate forgetting/unlearning" (and not just psychology)

·         Conservatism bias (Dennis Pearce) - I think this touches on another important aspect of the problem, which is an unwillingness to believe evidence indicating change is required

·         Dangle berries (Matt Moore) - Aside from the origin of the name, my problem with the term is that it doesn't capture the sense of something that "used to be" useful. But thanks for putting me off my dinner!

After some more research into the adaptive cycle, I found this useful quote from Daniel Wahl:

As any system begins to mature, there is an accompanying increase in fixed and ordered patterns of interactions and resource flows. The system becomes over-connected, or better, the existing qualities and quantities of connections are such that they inhibit the formation of new pathways needed for the system’s overall adaptation to outside changes and its continued evolution. Eventually this leads to rigidity within the system, and it becomes brittle, less resilient, and more susceptible to disturbances from the outside.

At this point, the effects of detrimental run-away feedback loops inside the system can further challenge viability. The often resulting gradual or sudden breakdown of the old order and structures moves the system closer to ‘the edge of chaos’ — the edge of its current stability (dynamic equilibrium) domain. The reorganization of resource flows and changes in the quality and quantity of interconnections within the system at this point create a crisis that can be turned into an opportunity for transformation and innovation. (my emphasis)
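Wahl's description lends itself to a caricature in code. The following is a toy sketch of my own (the growth rate, disturbance size, and the assumption that resilience is simply the inverse of connectedness are all illustrative, not taken from Wahl or Holling): connectedness rises as the system matures, resilience falls in step, and a disturbance that the young system absorbed easily eventually exceeds what the mature, over-connected system can tolerate.

```python
# Toy model of the maturation dynamic Wahl describes: connectedness
# (fixed, ordered patterns) rises as the system matures; resilience
# falls as its inverse. All numbers are illustrative assumptions.

def simulate(steps, growth=0.05, disturbance=0.3):
    """Return the step at which a fixed disturbance first exceeds the
    system's remaining resilience (breakdown), or None if it never does."""
    connectedness = 0.0
    for step in range(steps):
        # Connectedness grows toward saturation as patterns lock in.
        connectedness = min(1.0, connectedness + growth * (1 - connectedness))
        resilience = 1.0 - connectedness  # rigid systems absorb less
        if disturbance > resilience:
            return step  # the system has reached 'the edge of chaos'
    return None
```

The same disturbance that is harmless early in the run becomes fatal once the system is over-connected, which is the brittleness the quote describes.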

Curious, I looked further back in Google Scholar and found what appears to be one of the foundational papers of this concept by C.S. Holling (2008), who himself credits Herbert A. Simon with the concept of dynamic hierarchies that underpins it.

It's actually a fascinating paper and worth a read in its own right, with Holling pointing out both the rigidity trap (high connectedness and potential) and the poverty trap at the opposite end of the cycle (low connectedness and potential).

(As an aside, I am sure the idea of a "poverty trap" organisation neatly describes somewhere most of us have worked in our past!)

Holling (per Simon) represents the hierarchies of a typical social system as a set of nested adaptive cycles [diagram not reproduced here].

The excerpts I particularly like are:

As long as the transfer from one level to the other [of a dynamic hierarchy] is maintained, the interactions within the levels themselves can be transformed, or the variables changed, without the whole system losing its integrity ... this structure allows wide latitude for experimentation ... each of the levels of a dynamic hierarchy serves two functions ... One is to conserve and stabilize conditions for the faster and smaller levels; the other is to generate and test innovations by experiments occurring within a level ...

It is as if two separate objectives are functioning, but in sequence. The first maximizes production and accumulation; the second maximizes invention and reassortment. The two objectives cannot be maximized simultaneously but only occur sequentially. And the success in achieving one inexorably sets the stage for its opposite. The adaptive cycle therefore embraces two opposites: growth and stability on the one hand, change and variety on the other...

[Systems] have a minimal complexity we call the "Rule of Hand" whose features make linear policies more likely to produce temporary solutions and a greater number of escalating problems. Only an actively adaptive approach can minimize the consequences [of the inevitable release].

When a level in [a dynamic hierarchy] enters its [release] phase of creative destruction, the collapse can cascade to [a higher] level by triggering a crisis ("revolt"). Such an event is most likely if the slower level is at its [conservation] phase, because at this point the ... level is particularly vulnerable ... [Similarly once] a catastrophe is triggered at one level, the opportunities for, or constraints against, the renewal of the cycle are strongly influenced by the [conservation] phase of the next slower and larger level ("remember") ...

Extremely large events can overwhelm the sustaining properties of panarchies, destroying levels, and triggering destructive cascades down the successive levels of a [dynamic hierarchy] ... Modern democracies ... diffuse large episodes of creative destruction by creating smaller cycles of renewal and change through periodic political elections. (my emphasis)
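The "revolt" and "remember" connections in these excerpts can be sketched as a tiny two-level state machine. This is my own paraphrase of Holling's mechanics, not code from the paper; the phase names follow the adaptive cycle, and the cascade rule encodes only the single condition the excerpt states (a fast-level release cascades upward when the slower level is in its vulnerable conservation phase).

```python
# Minimal sketch of 'revolt' between two levels of a dynamic hierarchy
# (panarchy). Phase names follow the adaptive cycle; the transition
# rules are a paraphrase of the excerpts above, not a full model.

PHASES = ["exploitation", "conservation", "release", "reorganization"]

def step(phase):
    """Advance one level of the hierarchy to its next adaptive-cycle phase."""
    i = PHASES.index(phase)
    return PHASES[(i + 1) % len(PHASES)]

def revolt(fast_level, slow_level):
    """A release ('creative destruction') at the fast level cascades
    upward only when the slower level is in its vulnerable
    conservation phase; otherwise the slower level is unaffected."""
    if fast_level == "release" and slow_level == "conservation":
        return "release"  # crisis triggered at the slower, larger level
    return slow_level
```

The 'remember' connection would run the other way: after a release, the slower level's accumulated capital constrains and seeds the faster level's reorganization, which is what makes the hierarchy stabilizing rather than merely fragile.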

This all helps me to describe my original question more precisely. There may in fact be several diagnoses relevant to the situation:

·         The organisation or social system may be stuck in a rigidity trap, where active attempts are made to inhibit the adaptive cycle, but these eventually and inevitably lead to catastrophic collapse

·         It may be that leaders have conservatism bias, i.e. they are not actively preventing change but are cognitively not predisposed to recognise that renewal is already occurring, or is necessary to commence. In these scenarios, it is normally a crisis that precipitates the change imperative, but it would be better to commence in a planned way

·         A system may have accumulated insufficient capital for change even though a release of potential for the next phase of adaptation is needed. When thinking about staff, this is often talked about as "change fatigue". A temporary holding pattern may be a short-term solution, if and only if external pressures for change - both above and below - are low.

·         However, if a lower level release is causing a "revolt" at a higher level, or a higher level is at risk of catastrophic collapse, systems may be forced into premature change to avert a full collapse and rely upon the higher levels of the hierarchy to re-establish norms and productivity after a chaotic period. (The alternative is a rigidity trap as described above, which is likely to lead to worse long-term outcomes.)

A very interesting exercise to work through. Many thanks to everyone for helping me out!


Stephen Bounds
Executive, Information Management
E: stephen.bounds@...
M: 0401 829 096

On 14/01/2022 4:04 am, Dennis Pearce wrote:

This seems like the organizational version of anchoring bias in cognitive psychology, or more specifically conservatism bias (see

(One of the items on my list of things to explore someday is to look across all the known cognitive biases in human individual learning and see how many might have analogs in organizational learning.)
