Lessons Learned - Metadata #lessons-learned #metadata
TJ Hsu
Hi everyone,
Looking for anyone who might be able to share examples or insights on metadata (tagging / classification) for Lessons Learned, to make it easy to find, browse and filter them. Examples specific to project delivery in pharma would be great. Thank you, TJ
|
|
Stephen Bounds
Hi TJ, You've asked a pretty generic question, so my insights are also necessarily pretty generic:
1. Keep it simple - every additional field comes with a cost (to the user, the solution, etc.).
2. Ensure there is a management process to actually look at the lessons captured.
3. Think about a common taxonomy.
Cheers, Stephen Bounds
|
|
Rachad
Hi TJ, 1 - The tags should reproduce the context in which the lesson learned happened and describe the conditions of the situation. For example: project type, customer region, risk severity… Thank you
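To make this concrete, here is a minimal sketch of what such a context-rich lesson record could look like. The field names, allowed values and the pharma-flavoured sample data are illustrative assumptions, not an agreed schema.

# Sketch only: an invented record structure capturing the kind of context
# metadata described above (project type, customer region, risk severity).
from dataclasses import dataclass, field
from enum import Enum


class RiskSeverity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class LessonRecord:
    title: str
    description: str
    project_type: str            # e.g. "product launch", "tech transfer" (hypothetical values)
    customer_region: str         # e.g. "EMEA", "APAC"
    risk_severity: RiskSeverity
    tags: list[str] = field(default_factory=list)  # a handful of free keywords


example = LessonRecord(
    title="Vendor qualification started too late",
    description="Late qualification of the packaging vendor delayed first shipment by six weeks.",
    project_type="product launch",
    customer_region="EMEA",
    risk_severity=RiskSeverity.HIGH,
    tags=["vendor management", "packaging"],
)
print(example.project_type, example.risk_severity.value)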
|
|
Nick Milton
A couple of things to consider, TJ:
Firstly, speak to your users to understand the sorts of terms they would be searching or browsing for, and make sure the metadata fits their search patterns and needs. Secondly, the ultimate destination for a lesson is to become embedded within updated processes and procedures, or product design components. Therefore the metadata applied to the lessons must match your existing process/procedure taxonomy and/or product and component taxonomy.
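As a rough illustration of the second point, here is a minimal sketch, assuming a hypothetical process taxonomy with invented codes and owners, of how a lesson tagged against that taxonomy could be routed to the team that owns the process it should update.

# Sketch only: taxonomy codes, process names and owners are invented. The
# point is that a lesson carries a node from the existing process taxonomy,
# so it can be routed to the owner of that process or procedure.
PROCESS_TAXONOMY = {
    "PD-01": {"name": "Clinical supply planning", "owner": "Supply Chain"},
    "PD-02": {"name": "Vendor qualification", "owner": "Quality"},
    "PD-03": {"name": "Regulatory submission", "owner": "Regulatory Affairs"},
}


def route_lesson(lesson: dict) -> str:
    """Return the team that should embed the lesson into its process."""
    code = lesson["process_code"]
    if code not in PROCESS_TAXONOMY:
        raise ValueError(f"Unknown process code: {code}")
    return PROCESS_TAXONOMY[code]["owner"]


lesson = {"title": "Vendor audits booked too late", "process_code": "PD-02"}
print(route_lesson(lesson))  # -> Quality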
Nick Milton
|
|
TJ Hsu
Thank you so much for your responses. Great insights!
Stephen - while generic, your points are definitely good to keep in mind. Keep it simple, knowing that every additional field comes with a cost (to the user, the solution, etc.). Ensure there is a management process to look at the lessons captured. Think about a common taxonomy.
Rachad - thanks for these examples, which make a lot of sense, and for your note on aligning language.
Nick - agree; we are planning to conduct some design thinking workshops to understand this from the end users' perspectives.
|
|
We have data from tagging exercises that reinforces the comments as well.
#1 We find 'doing the minimum' is the norm. If only one tag is required but you offer the ability to apply multiple, then one tag is what you'll get. Don't expect more than 3 fields to be completed.
#2 'Virtual Darwinism' applies to tags - you'll find a small number will be popular, and usually the ones at the start of pick lists are the ones that get used. Tags have to resonate with the actual language being used by staff. For example, we asked 60 KM Managers to define 10 tags each for their area of industry; the objective was to get no more than 600 tags that matched the key themes, terms, etc. in that industry vertical. Apart from the fact that we received over 1,000 tags in response and had to deduplicate them, we found that in practice many of the tags they thought would be common were not as common as they thought. We've been able to use Microsoft Viva Topics to prove that point.
#3 Use automation to apply organisational tags, e.g. project number, division, etc. You just want people to provide the unique human classification that rules or automation cannot provide.
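A minimal sketch of point #3 above (and the three-field limit from point #1): organisational tags are pulled automatically from existing systems, and people supply only a handful of free tags. The source fields and sample data are assumptions for illustration.

# Sketch only: the source systems and field names are invented. Organisational
# tags (project number, division, site) come from existing records automatically;
# the human adds at most a few free tags of their own.
MAX_HUMAN_TAGS = 3  # expecting more than ~3 manual entries is unrealistic


def build_tags(project_record: dict, human_tags: list[str]) -> dict:
    auto_tags = {
        "project_number": project_record["project_number"],  # from the PM system
        "division": project_record["division"],              # from the org directory
        "site": project_record.get("site", "unknown"),
    }
    return {
        "auto": auto_tags,
        "human": [t.strip().lower() for t in human_tags][:MAX_HUMAN_TAGS],
    }


print(build_tags(
    {"project_number": "PRJ-0142", "division": "Biologics", "site": "Basel"},
    ["Tech Transfer", "cold chain", "vendor onboarding", "this one gets dropped"],
))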
|
|
Love this topic and love the replies.
But my take is: don't let the lessons sit still long enough to be tagged! Embed them so they are actioned - that's the learning. I can think of no case of anyone ever thinking, "Oh, let's go search the lessons learned." No, we just expect the most current, up-to-date services, content, processes, products, quality and assistance - and we only get that by continuously embedding the lessons. That improvement is the learning. But I loved reading all of this!
|
|
Reply from Ian Fry on LinkedIn: Agree with most of the comments in the SIKM group. One important thing: always allow selecting all the facets that apply - though note the recommendation that 3 is a realistic maximum. There are facets which will apply but never be the main facet.
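One way to model that, as a sketch with invented facet names: a required main facet plus an optional 'all that apply' set, capped at the three-item maximum mentioned above.

# Sketch only: facet names are invented. One main facet is mandatory; the
# "all that apply" facets are optional and capped at three.
FACETS = {"safety", "quality", "schedule", "cost", "vendor management"}
MAX_SECONDARY = 3


def classify(main_facet: str, also_applies: list[str]) -> dict:
    if main_facet not in FACETS:
        raise ValueError(f"Unknown facet: {main_facet}")
    secondary = [f for f in also_applies if f in FACETS and f != main_facet]
    return {"main": main_facet, "also_applies": secondary[:MAX_SECONDARY]}


print(classify("quality", ["schedule", "cost", "safety", "vendor management"]))
# -> {'main': 'quality', 'also_applies': ['schedule', 'cost', 'safety']}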
|
|
Reply from Peter Reynolds on LinkedIn: Consider also ontologies and automation, particularly for provisioning metadata and forming connections, to improve search/discovery and awareness of those lessons. Also consider Lessons Identified vs. Lessons Learned - the latter is all too often assumed rather than the reality! 😝 A suggestion also to support "core" metadata (e.g. DCMI) and federate extensions to it as needed, owned by the relevant communities of practice/legal entities (e.g. confidentiality, integrity and availability labels).
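A minimal sketch of the 'core plus federated extensions' idea: the core block uses Dublin Core element names (title, creator, date, subject, description), while the extension namespaces and their fields are invented examples of blocks that different communities of practice or legal entities could own.

# Sketch only: the dc:* keys follow Dublin Core element names; the extension
# blocks are invented and would each be owned by a specific community of
# practice or legal entity.
lesson = {
    "core": {
        "dc:title": "Stability data package submitted late",
        "dc:creator": "CMC project team",
        "dc:date": "2022-04-08",
        "dc:subject": ["regulatory submission", "stability testing"],
        "dc:description": "Identified lesson: the submission slipped because stability data arrived after the internal deadline.",
    },
    "extensions": {
        "security": {   # owned by the information security / legal function
            "confidentiality": "internal",
            "integrity": "verified",
            "availability": "standard",
        },
        "lessons": {    # owned by the lessons-learned community of practice
            "status": "identified",        # identified vs learned distinction
            "embedded_in_process": False,
        },
    },
}
print(lesson["extensions"]["lessons"]["status"])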
|
|
I worked for 3 years in the British Army’s lessons team. We had multiple attributes added to our lessons from operations (Iraq, Afghanistan, Libya etc) but found the most effective to be keywords plus taxonomy codes.
Lessons were owned by the Army Capability Directorates (Training, Equipment, Personnel, etc.), and we added sub-categories to them for each of these. Then we added a code that indicated whether the issue at the heart of the lesson was:
1. New requirement
2. Quantity issue of existing capability
3. Performance issue of existing capability
Etc.
With over 2,000 lessons under management and about 200 added every 12 months, we could identify themes, trends and underlying issues of which each individual lesson was merely a symptom. Happy to discuss offline anytime.
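As a sketch of how those taxonomy codes support trend analysis (the directorates and issue-type codes follow the scheme described above; the lesson data is invented): counting lessons by directorate and code quickly surfaces the themes of which individual lessons are symptoms.

# Sketch only: invented lesson data using the owning-directorate plus
# issue-type-code scheme described above.
from collections import Counter

ISSUE_CODES = {1: "new requirement", 2: "quantity issue", 3: "performance issue"}

lessons = [
    {"directorate": "Training", "issue_code": 3},
    {"directorate": "Equipment", "issue_code": 2},
    {"directorate": "Equipment", "issue_code": 2},
    {"directorate": "Personnel", "issue_code": 1},
]

themes = Counter((l["directorate"], ISSUE_CODES[l["issue_code"]]) for l in lessons)
for (directorate, issue), count in themes.most_common():
    print(f"{directorate}: {issue} x{count}")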
|
|
Hi TJ, not sure what KM platform you are working on, but... In our system, content managers use a separate, dedicated application server where you can edit articles or create new ones from existing templates. There you can also pull up the necessary statistics on search queries, response relevance, referrals, etc. Articles can be not only modified but also optimized, for example by creating meta tags to improve search in Lessons Learned. In addition, search can be improved by forcibly attaching certain articles to certain queries. This is called the "Editor's Pick": when searching, the user sees such materials in a separate column. Cheers!
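A toy sketch of the "Editor's Pick" idea described above: certain articles are force-attached to certain query terms and returned separately from the ordinary keyword matches. The data and matching logic are invented; a real KM platform would use its own search index.

# Sketch only: invented articles and a naive keyword match, just to show
# editor-picked items being returned in their own column alongside results.
ARTICLES = {
    1: "Lessons from the 2021 tech transfer to the Basel site",
    2: "Checklist: vendor qualification before the first GMP batch",
    3: "How we shortened regulatory submission reviews",
}

EDITORS_PICKS = {"vendor": [2], "submission": [3]}  # query term -> pinned article ids


def search(query: str) -> dict:
    terms = query.lower().split()
    picks = [a for t in terms for a in EDITORS_PICKS.get(t, [])]
    matches = [i for i, title in ARTICLES.items()
               if any(t in title.lower() for t in terms) and i not in picks]
    return {"editors_picks": picks, "results": matches}


print(search("vendor qualification lessons"))
# -> {'editors_picks': [2], 'results': [1]}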