
Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Simon Denton
 

This is certainly tricky to quantify. I've just looked at our SharePoint statistics: it is easy to measure 'searches', but clicks are much harder owing to result previews. There seems to be a behavioural pattern whereby, if 'searchers' see what they think they need in the preview, they may have found everything they need and so do not click through. We also employ tactics such as boosting certain results, providing full document previews, document breadcrumb trails, etc. Hence the potential for artificially high abandonment rates and lost clicks under your suggested KPIs. I suspect this is the case based on our SharePoint statistics for abandoned and no-result queries; I believe that if our search service were that bad, we would have incurred the ire of the organisation by now.
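Simon's point about previews depressing clicks can be made concrete. Below is a minimal sketch (field names are invented for illustration - they are not actual SharePoint log fields) of how a naive abandonment KPI gets computed, and why preview-satisfied sessions inflate it:

```python
# Sketch: how preview-satisfied sessions can inflate an "abandonment" KPI.
# The session structure below is illustrative, not a real SharePoint schema.

def search_kpis(sessions):
    """Compute naive click-through and abandonment rates from query sessions.

    Each session is a dict like {"query": "payslip", "clicks": 0, "results": 12}.
    A session with results but no clicks counts as "abandoned", even though
    the searcher may have been fully satisfied by a result preview.
    """
    total = len(sessions)
    no_results = sum(1 for s in sessions if s["results"] == 0)
    abandoned = sum(1 for s in sessions if s["results"] > 0 and s["clicks"] == 0)
    clicked = total - no_results - abandoned
    return {
        "click_through_rate": clicked / total,
        "abandonment_rate": abandoned / total,  # overstated if previews satisfy users
        "no_result_rate": no_results / total,
    }

log = [
    {"query": "payslip", "clicks": 0, "results": 12},  # possibly satisfied by preview
    {"query": "book leave", "clicks": 1, "results": 8},
    {"query": "qzx report", "clicks": 0, "results": 0},
]
print(search_kpis(log))
```

Any session counted as "abandoned" here may in fact be a success satisfied by a preview - exactly the distortion described above.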

For us it is more about closing the feedback loop - answering the "did you find what you were looking for?", "how did you find it?" etc.

Over the 5 years or so that our service has been running, the majority of searches have been people looking for people. Looking at the content searches, most are for corporate items like "where can I find my payslip", "how do I book leave", etc. - perhaps more a sign that we've not got the design of our intranet 100% right. The remaining content searches are simple keywords.

Regards,
Simon




From: main@SIKM.groups.io <main@SIKM.groups.io> on behalf of Lee Romero via groups.io <pekadad@...>
Sent: 01 March 2021 17:43
To: main@sikm.groups.io <main@sikm.groups.io>
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI
 
Thanks, Nirmala!

I agree with many of your points - I mentioned in my reply just now to Murray that ultimately the user is the one who defines what is "right" - my hope with my efforts here is to move toward a standard way of capturing that.  And I would like to ask for that confirmation (in theory) every time a user accesses information, whether they got there through search or not.  However, because I can't effect that change, I am proposing a partial "solution" that works within the context of search (well, I hope it works within the context of search).

Many of the points you raise are also interesting to consider, but I would look at them as imposing more on the solution than I would expect is viable at the outset.  So they are useful if you want to do more than "just the basics" but I'd like to define what "the basics" are first.  We do have ways in the current solution I'm working with to answer most of those questions you raise, but I don't see them as basic enough to use as a common starting point.

Thanks again for your comments!

Regards
Lee


On Mon, Mar 1, 2021 at 3:56 AM Nirmala Palaniappan <Nirmala.pal@...> wrote:
Hi Lee,

Interesting blog posts! Thanks for sharing and initiating this discussion. My response may be simplistic, but I hope it helps you look at it from the users’ perspective. Ultimately, if your intention is to arrive at measures that reflect how useful and efficient the search was for users, I believe the following aspects need to be combined into, perhaps, one formula:

1. Did the user get what they were looking for? (can only be confirmed by asking the user)
2. Did the search engine behave like a friendly assistant in the process and was the experience, therefore, a pleasant one? (Auto-suggest, recommendations, prioritisation of results etc)
3. How long did it take to find what the user was looking for? (Can be automatically tracked but will still have to be double-checked with the user)
4. Did the user find something in a serendipitous way? (Unexpected but useful outcome)
5. Does the search engine up-sell and proactively provide subscriptions, what’s popular etc? (User delight)

I believe including the number of clicks and number of page scrolls will be useful, but they may not provide enough information on the effectiveness of the search, as a lot depends on the user’s choice of keywords and their purpose and frame of mind at the time of searching for information.
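One way to read the "one formula" suggestion above is as a weighted combination of the five aspects. The sketch below is purely illustrative - the weights, signal names, and 0-to-1 scaling are assumptions, not anything proposed in this thread:

```python
# Illustrative only: combining the five aspects into a single score.
# Weights and 0..1 scales are assumptions, not a standard.

WEIGHTS = {
    "found_it": 0.40,     # 1. user confirmed they found what they needed
    "experience": 0.20,   # 2. assistant-like experience (suggestions, etc.)
    "speed": 0.20,        # 3. time-to-find, normalised so faster -> closer to 1
    "serendipity": 0.10,  # 4. unexpected but useful discovery
    "delight": 0.10,      # 5. proactive subscriptions / popularity features
}

def search_score(signals):
    """Weighted sum of per-aspect signals, each already scaled to 0..1."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

# A session where the user found the result quickly but nothing serendipitous:
print(search_score({"found_it": 1, "experience": 0.8, "speed": 0.9,
                    "serendipity": 0, "delight": 0.5}))
```

The first three signals carry most of the weight here on the assumption that "did you find it, pleasantly and quickly" matters most; any real weighting would need validation with users.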

Regards
Nirmala  

On Mon, 1 Mar 2021 at 1:03 AM, Lee Romero <pekadad@...> wrote:
Hi all - I recently started blogging again (after a very sadly long time away!).

Part of what got me back to my blog was a problem I see with enterprise search solutions today - a lack of standards that would allow for consistency of measurement and comparison across solutions.  I've been mentally fermenting this topic for several months now.

I have just published the 4th article in a series about this - this one being where I propose a standard KPI.

I'd be interested in your comments and thoughts on this topic!  Feel free to share here or via comment on the blog (though I'll say it's probably easier to do so here!)

My recent posts:


Regards
Lee Romero

--
"The faithful see the invisible, believe the incredible and then receive the impossible" - Anonymous




Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Lee Romero
 

Hi Murray - Thanks for your comments.  

I will also say I don't agree with your initial characterization (agreeing with Stephen on that). 

My perspective is that no one can define what is right except the end user who needs information.  It is a hopeless task to think otherwise.

Ideally, I would like our information environment to have an omni-present "button" (at least in the metaphorical sense) that asks, "Is this the information you need to do your job?"  Then, regardless of how a user got to that information - found it on their hard drive, in their email, browsed around an intranet, found via search, we can start to determine if people are getting the information they need.  If we could do that, then we could better define the quality of search to ask - How frequently does a user click "Yes" on that button when they get to the content via search?  (You can also ask the same question if they get there through other means - suddenly opening up the possibility of answering the unanswerable question of whether navigation or searching is "better"...)

Given that we don't have that capability, I think it's reasonable to assume that there is a relatively fixed percentage of time that a user finds something in search where they would click "Yes" on that button.  I don't know what that percentage is, but let's say it's 50%.  In other words, half of the time a user finds something "of interest" via search it is actually what they need.  If we (through efforts to improve search) can increase the percentage of times a user does find something of interest, 50% of that is still higher, right?  
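The argument above is simple arithmetic: if the "Yes" rate among found-something-of-interest sessions is roughly fixed, then raising the interest rate raises true success proportionally. A toy illustration (all numbers invented):

```python
# Toy model of the argument: with a roughly fixed "Yes" rate among sessions
# where the user found something of interest, lifting the interest rate
# lifts absolute success - even without the metaphorical "Yes" button.

def true_success_rate(found_of_interest_rate, yes_rate=0.5):
    """Fraction of all searches that end with what the user actually needed."""
    return found_of_interest_rate * yes_rate

before = true_success_rate(0.60)  # 60% find something of interest -> 30% true success
after = true_success_rate(0.70)   # lift interest rate to 70% -> 35% true success
print(before, after)
```

The unknown yes_rate cancels out of any before/after comparison, which is why the improvement argument holds even though the rate itself is unmeasurable today.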

The metrics in search are just indicators - they are not a measure of "reality".  We need to keep that in mind.

Regards
Lee

On Mon, Mar 1, 2021 at 1:46 AM Murray Jennex via groups.io <murphjen=aol.com@groups.io> wrote:
Not disagreeing with Stephen - just trying to answer the question that I thought was being asked, and giving the reason why it can't be answered. On the other hand, relative return on enterprise search is nice but still not a good measure: it will tell you that you are okay in your efforts, but never whether you are actually right. This is something I struggle with: as an engineer I like ground truth in measurements and am not happy without it, but as a physicist I understand Heisenberg and how you can't actually know ground truth. All that said, I go back to my point about focusing on the process for establishing enterprise search. The better the processes, the more you can trust the results....murray


-----Original Message-----
From: Stephen Bounds <km@...>
To: main@SIKM.groups.io
Sent: Sun, Feb 28, 2021 10:08 pm
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI

Sorry Murray, I think you are offering a false premise.
By this standard, you could only work out the utility of a medical diagnosis process with perfect knowledge of all diseases and symptoms of the human body.
This is a perfect illustration of where RROI (relative return on investment) shines.
The question is not: will I get perfect search with enterprise engine X?
The question is: Am I "shifting the needle" on organisational outcomes in meaningful ways by implementing enterprise search? For example, do I get the same results 20% faster? Are my results 20% more accurate?
Fast, comprehensive full text search is generally king in most enterprises precisely because it offers a meaningful speed improvement on manual browsing with minimal ongoing cost - it's more or less a one-off capital investment to the organisation.
Taxonomy-based search significantly increases capture costs. It will only lead to a positive RROI where a core set of information needs to be repeatedly and rapidly located, or where the potential benefits from one-off identification of information are substantial - which typically only occurs in industrial-scale plant or high-value consulting work.
Cheers,
Stephen.
====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================
On 1/03/2021 1:26 pm, Murray Jennex via groups.io wrote:
Agreed, Matt. The only way to know if enterprise search is correct is to know everything that is stored and where, and then match the results of the search to the actual contents of the organization. Basically, you have to have perfect knowledge of the organization to do a thorough job of evaluating enterprise search. Of course, no one wants to pay the cost of generating perfect knowledge, so there is no method of determining the accuracy of enterprise search. Now, who would want to create standards of good-enough search when you won't really know what good enough is? This kind of defeats the purpose of doing KM. The processes needed are a taxonomy of knowledge (documents and other artifacts) plus naming and storage conventions, so that everyone knows where to put stuff and what to call it. Enterprise data dictionaries (or catalogs, as they are now called) are great, but how good are enterprises at creating and maintaining them? The point I'm making is that we all know what it takes to create good enterprise search, but getting the will and resources to do it is tough, and we don't really need standard measures, as you can't create them without perfect knowledge. For another analogy, let's consider weather forecasting: how do you measure the performance of weather forecasters? You can only do it after the fact, when the weather is known. We would like more accurate weather forecasting but aren't willing to pay for what it takes to get perfect knowledge of the weather.....murray jennex


-----Original Message-----
From: Matt Moore <matt@...>
To: main@sikm.groups.io
Sent: Sun, Feb 28, 2021 6:42 pm
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI

Lee,

As you know I respect your work (and also Martin White’s)

“I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions”

I would say that the biggest single challenge that you have is that there is no professional / buyer community to drive standards & research in this area.

Without that enterprise search community, no common standards will emerge. So I would first seek to build that community with both professionals, vendors & consultants and then you will get standards & research.

And related to this is that enterprise search is not considered an existential problem by the vast majority of organizations. Yes, it would be nice if enterprise search sucked less - but we get by at the moment.

I think the wider context will defeat your efforts but I wish you well and perhaps enterprise search can hitch itself to another innovation/fad to get some traction? E.g. machine learning, automation, bots, etc.

Matt Moore
+61 423 784 504



Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Lee Romero
 

Agreed, Matt!  This is not likely to succeed, but I figured I have to try.

I *am* engaging with the Enterprise Search Engine Professionals group on LinkedIn for exactly that reason.  So far, I have had one of the moderators of that group reach out to me to help engage others "in the industry" to have a deeper discussion on this.  Hopefully, that does help get this somewhere. 

Thanks again!

Regards
Lee
 



Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Lee Romero
 

Thank you for the feedback, Stephen!

I agree - there are a lot of complexities here.  I do not believe in the viability of the (admittedly simpler) approach of defining a standard corpus of content for this type of thing, though.  My intent here is not to assess search engines but search solutions.

The distinction being that a solution does need to consider the variations in content included and the variations in user information needs. My own perspective is that modern search engines are (at an admittedly high level) functionally very similar. Yes, the search engine vendors can get value out of testing against a standard corpus, but that does not translate into anything useful in the face of the challenges you face with an enterprise search solution - which likely encompasses content from many different sources, each structured quite differently (or not at all), with varying levels of content quality, addressing different information needs.

The manager of an enterprise search solution needs to understand these things.  And needs to address quality issues with sources (and content gap issues with those sources).  And understand the different use cases / information needs of their users.

That detail aside - yes, I am assuming the need to standardize the capturing of user behavioral data.  Which is kind of where I am hoping this line of discussion could lead.  While not claiming that my 4 basic metrics are "correct", I would like to get to a state where there are well-defined, specific standard metrics all enterprise search managers could expect to be supported by their engine - so that they know they will be able to compare between engines, for example.  Unless someone wants to put out a recommended starting point, we continue on in the mode of different terminology, different metrics and, in general, confusion in comparing anything.
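The four basic metrics referred to above are not enumerated in this thread, so the record below is a hypothetical placeholder - an illustration of what a minimal shared metrics schema across engines might look like, not the actual proposal:

```python
# Hypothetical sketch of a minimal shared search-metrics record.
# The four fields named here are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class SearchMetrics:
    searches: int       # total queries issued
    result_clicks: int  # clicks on any result
    abandoned: int      # queries with results but no click
    no_results: int     # queries returning nothing

    @property
    def click_through_rate(self):
        return self.result_clicks / self.searches if self.searches else 0.0

# If every engine exposed a record like this, two deployments could be
# compared directly on the derived rates:
m = SearchMetrics(searches=1000, result_clicks=620, abandoned=290, no_results=90)
print(m.click_through_rate)
```

The value of a shared schema is less in the particular fields than in the guarantee that the same names mean the same things on every engine.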

Thanks again! 

 

On Sun, Feb 28, 2021 at 5:54 PM Stephen Bounds <km@...> wrote:

Hi Lee,

Very interesting set of posts. My gut says that a key problem you still need to address is the relationship between the search space (ie the number of documents, quality of corpus in potential results, amount and quality of metadata available), availability of user behaviour data, and the effectiveness of a search engine in that environment. I think this is necessary for a "complete" set of measures since they won't always correlate linearly.

In other words, if you're going to develop standardised metrics for search outcomes I suggest you should also consider standardised metrics for describing a search space. (Or alternatively, is it worth looking into creating a set of standardised content sources with different characteristics that can be reused to test a wide variety of search engines?)

User behaviour data is probably the trickiest to include in standard measures, since by definition it requires use of a search engine over an extended period of time by "real" users to meaningfully tune results. It would be very interesting to gather some statistics on whether it is possible to predict the long-term usefulness of search results by carrying out specific testing on a small number of case-study searches and reviewing the effect on results re-ranking.

A related question I have is whether it is useful or possible to benchmark search results against a baseline method which is relatively unoptimised but easy to standardise. These might be the equivalent of a simple "grep" search, or locating useful documents through directory browsing. This would allow you to say something like, "the search engine allowed users to locate documents, on average, with 86% fewer clicks and 72% less time than a simple text-match keyword search".
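The percentages in that hypothetical claim are just relative reductions against the baseline method. A quick sketch (the click and time figures are invented purely to reproduce the example numbers):

```python
# Relative-improvement calculation for benchmarking a search engine against
# an unoptimised baseline (e.g. a grep-style keyword match). Numbers invented.

def relative_reduction(baseline, engine):
    """Fractional reduction vs the baseline (0.86 -> '86% fewer/less')."""
    return (baseline - engine) / baseline

clicks_saved = relative_reduction(baseline=14.0, engine=1.96)  # avg clicks to find a doc
time_saved = relative_reduction(baseline=250.0, engine=70.0)   # avg seconds to find a doc
print(f"{clicks_saved:.0%} fewer clicks, {time_saved:.0%} less time")
```

Because both figures are ratios against the same baseline, they stay comparable across organisations even when absolute click counts and timings differ widely.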

Cheers,
Stephen.

====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================


How does an Italian passionately draw an "organizational knowledge map"? - Knowledge Cafè invitation - #mapping #knowledge-cafè

Ginetta Gueli
 

Dear SIKM community,
"It is time-consuming and inefficient when a staff member does not know what has been done before s/he started to work on a new task, is not updated on the status of ongoing work, does not know who the most important contacts or partners are, and has no idea where to look for information and records."

If you have been through this situation - or you haven't, but you agree and want to know more - then on 16th March 2021 at 5pm CET (i.e. 12:00 GMT-4), together with John Hovell, PMP, CKM, we will speak about how to solve these problems in an unconventional way, because as the writer Umberto Eco used to say: “Being educated does not mean remembering all the notions, but knowing where to look for them”.

We’ll follow the Gurteen knowledge café model, which means that we will offer 15-20 minutes of thought-provoking content, then we’ll have 3 rounds of small group conversation. We’ll close the 90 minutes with a full group conversation.

If you are interested, this is the registration link: https://www.meetup.com/it-IT/Knowledge-Cafe/events/276439819/?isFirstPublish=true

Thank you for your positive consideration and looking forward to having you on board!
Warm regards,
Ginetta

--
Ginetta Gueli
Information & Knowledge Manager | Project Manager






Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Stephen Bounds
 

Sorry Murray, I think you are offering a false premise.

By this standard, you could only work out the utility of a medical diagnosis process with perfect knowledge of all diseases and symptoms of the human body.

This is a perfect illustration of where RROI (relative return on investment) shines.

The question is not: will I get perfect search with enterprise engine X?
The question is: Am I "shifting the needle" on organisational outcomes in meaningful ways by implementing enterprise search? For example, do I get the same results 20% faster? Are my results 20% more accurate?

Fast, comprehensive full-text search is king in most enterprises precisely because it offers a meaningful speed improvement over manual browsing with minimal ongoing cost - it's more or less a one-off capital investment for the organisation.

Taxonomy-based search significantly increases capture costs. It will only lead to a positive RROI where a core set of information needs to be repeatedly and rapidly located, or where the potential benefits from one-off identification of information are substantial - which typically only occurs with industrial-scale plant or high-value consulting work.
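This RROI framing lends itself to a back-of-envelope calculation. The sketch below is purely illustrative - the function name and every figure are hypothetical assumptions, not data from this thread:

```python
# Illustrative relative-return calculation for an enterprise search
# investment. All names and numbers here are hypothetical.
def search_rroi(searches_per_year: int,
                minutes_saved_per_search: float,
                hourly_rate: float,
                annual_cost: float) -> float:
    """Value of staff time saved per dollar of annual cost."""
    hours_saved = searches_per_year * minutes_saved_per_search / 60
    return (hours_saved * hourly_rate) / annual_cost

# e.g. 100k searches/yr, 2 minutes saved each vs manual browsing,
# $60/hr loaded rate, $150k/yr total cost of ownership
print(round(search_rroi(100_000, 2.0, 60.0, 150_000), 2))  # 1.33
```

Even a rough ratio like this makes the "shifting the needle" question concrete: a value above 1.0 means the time saved is worth more than the ongoing cost.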

Cheers,
Stephen.

====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================


Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Murray Jennex
 

Agreed, Matt - the only way to know if enterprise search is correct is to know everything that is stored and where, and then match the search results to the actual contents of the organization. Basically, you have to have perfect knowledge of the organization to do a thorough job of evaluating enterprise search. Of course, no one wants to pay the cost of generating perfect knowledge, so there is no method of determining the accuracy of enterprise search. Now, who would want to create standards for "good enough" search when you won't really know what good enough is? This kind of defeats the purpose of doing KM.

The processes needed are a taxonomy of knowledge (documents and other artifacts) plus naming and storage conventions, so that everyone knows where to put things and what to call them. Enterprise data dictionaries (or catalogs, as they are now called) are great, but how good are enterprises at creating and maintaining them? The point I'm making is that we all know what it takes to create good enterprise search, but getting the will and resources to do it is tough - and we don't really need standard measures, as you can't create them without perfect knowledge.

For another analogy, consider weather forecasting: how do you measure the performance of weather forecasters? You can only do it after the fact, when the weather is known. We would like more accurate weather forecasting, but we aren't willing to pay for what it takes to get perfect knowledge of the weather.

Murray Jennex




Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Matt Moore
 

Lee,

As you know, I respect your work (and also Martin White's).

“I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions”

I would say that the biggest single challenge that you have is that there is no professional / buyer community to drive standards & research in this area.

Without that enterprise search community, no common standards will emerge. So I would first seek to build that community with both professionals, vendors & consultants and then you will get standards & research.

And related to this is that enterprise search is not considered an existential problem by the vast majority of organizations. Yes, it would be nice if enterprise search sucked less - but we get by at the moment.

I think the wider context will defeat your efforts, but I wish you well - and perhaps enterprise search can hitch itself to another innovation/fad to get some traction? E.g. machine learning, automation, bots, etc.

Matt Moore
+61 423 784 504



Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Stephen Bounds
 

Hi Lee,

Very interesting set of posts. My gut says that a key problem you still need to address is the relationship between the search space (i.e. the number of documents, the quality of the corpus of potential results, and the amount and quality of metadata available), the availability of user behaviour data, and the effectiveness of a search engine in that environment. I think this is necessary for a "complete" set of measures, since they won't always correlate linearly.

In other words, if you're going to develop standardised metrics for search outcomes I suggest you should also consider standardised metrics for describing a search space. (Or alternatively, is it worth looking into creating a set of standardised content sources with different characteristics that can be reused to test a wide variety of search engines?)
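A minimal sketch of such standardised search-space descriptors might look like the following, assuming documents are represented as plain dicts (all names and the sample corpus here are hypothetical):

```python
# Hypothetical "search space" descriptors: corpus size, average document
# length, and metadata coverage, per the suggestion above.
from statistics import mean

def describe_corpus(docs):
    """docs: list of dicts, each with 'text' and an optional 'metadata' dict."""
    return {
        "doc_count": len(docs),
        "avg_length_words": mean(len(d["text"].split()) for d in docs),
        "metadata_coverage": sum(1 for d in docs if d.get("metadata")) / len(docs),
    }

docs = [
    {"text": "annual leave policy", "metadata": {"owner": "HR"}},
    {"text": "payslip portal guide"},
]
print(describe_corpus(docs))
```

Descriptors like these would let two organisations say whether their search spaces are even comparable before comparing their search KPIs.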

User behaviour data is probably the trickiest to include in standard measures, since by definition it requires use of a search engine over an extended period of time by "real" users to meaningfully tune results. It would be very interesting to gather statistics on whether the long-term usefulness of search results can be predicted by carrying out specific testing on a small number of case-study searches and reviewing the effect on result re-ranking.

A related question I have is whether it is useful or possible to benchmark search results against a baseline method that is relatively unoptimised but easy to standardise - the equivalent of a simple "grep" search, or locating useful documents through directory browsing. This would allow you to say something like: "the search engine allowed users to locate documents with, on average, 86% fewer clicks and 72% less time than a simple text-match keyword search".
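Such a baseline could be as simple as ranking documents by raw keyword-match count. The sketch below is one hypothetical way to do it (the function name and sample corpus are invented for illustration):

```python
# A minimal "grep-like" baseline: rank documents by raw keyword-match
# count, as a floor against which to benchmark a real search engine.
def baseline_search(query: str, docs: dict) -> list:
    terms = query.lower().split()
    scores = {
        doc_id: sum(text.lower().count(t) for t in terms)
        for doc_id, text in docs.items()
    }
    # Highest-scoring first; drop documents with no matches at all.
    return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

docs = {
    "leave-policy": "How to book annual leave and view leave balances",
    "payslip-faq": "Where can I find my payslip",
}
print(baseline_search("book leave", docs))  # ['leave-policy']
```

Any engine worth its licence fee should beat this on clicks and time-to-result; reporting the margin gives the relative figure suggested above.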

Cheers,
Stephen.

====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================


Enterprise search - defining standard measures and a universal KPI #metrics #search

Lee Romero
 

Hi all - I recently started blogging again (after a very sadly long time away!).

Part of what got me back to my blog was a problem I see with enterprise search solutions today - a lack of standards that would allow for consistency of measurement and comparison across solutions.  I've been mentally fermenting this topic for several months now.

I have just published the 4th article in a series on this topic - the one in which I propose a standard KPI.

I'd be interested in your comments and thoughts on this topic!  Feel free to share here or via comment on the blog (though I'll say it's probably easier to do so here!)

My recent posts:


Regards
Lee Romero


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

Matt Moore
 


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

John Muz
 

Thank you all for the input.
Indeed, that has clarified my doubts and given me a clear picture of, and insight into, the topic of the KM name.

John Muzam,

PhD Candidate

Wroclaw, Poland


Re: Knowledge Manager Position—Job Search in the time of COVID #COVID-19 #discussion-starter #jobs #remote-work

Vinod Shenoy
 

Hi Abbe,
I'm sorry to hear about your situation. I can relate, having been in your shoes before. In my experience, the KM job market is a tough nut to crack, but it's not impossible. I have recently been contacted by a few recruiters on LinkedIn and can certainly send them your way. How flexible are you with (eventual) relocation?

Stay positive, I'm sure you'll land something soon. I sent you an invite on LinkedIn to connect.

Thanks,
Vinod


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

John Muz
 

Thank you, John. That sounds interesting :)    The scientific community we belong to is Knowledge Management.  Use whatever works to describe what you do. 


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

Stan Garfield
 


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

 

1. KFM is not “totally accepted in the KM community.” As a matter of fact, this is the first I’ve heard of it (granted, I’m not familiar with Leister’s work; and have not been actively researching the field in the last couple of years). 
2. KM as I understand it is a broad discipline, comprising dozens of areas of endeavor. Narrative design, communities of practice, best practice sharing, after action learning, taxonomy, portals...the list goes on and on (there’s a nice diagram or two  stashed in the file cabinet here somewhere). 
3. Along those lines, the F in KFM feels too limiting to me. At a minimum, regardless of terminology, I think everyone here would agree that "knowledge" can exist both as a stock and as a flow. So why rename KM so that it focuses only on flow? Whither knowledge as a stock?
4. As others have already pointed out, it really doesn’t matter what you call it - the definitional battles have been fought more times than I can count, and further resolution of them is not likely forthcoming. Focus on a measurable business outcome and call it something that reflects the desired outcome and you’ll be fine. 
-- 
-Tom
-- 

Tom Short Consulting
TSC
+1 415 300 7457

All of my previous SIKM Posts

 


Re: What to call knowledge management? #knowledge-transfer #knowledge-flow #name

Douglas Weidner
 

Good advice John,

Call it whatever works for you, but as you said...make it successful.

From what I've seen, being successful is much more the issue to be resolved than what to call it. Be successful and you can call it whatever you want.

Douglas Weidner
Chief CKM Instructor
KM Institute

On Sat, Feb 27, 2021 at 10:26 AM John Antill <jantill4@...> wrote:
First, as long as you are doing it, there is no right or wrong answer to this. It is the same as Knowledge Broker. The scientific community we belong to is Knowledge Management. Locally, you can use whatever works to describe what you do. I prefer the term knowledge broker because I am selling information flow, one way or the other, to another party. Everything has a cost. The biggest problem we have right now is the ability to show management the actual cost-per-person metric on how we save money. You upgrade to faster computers/internet/communications to save money. Same thing with transportation. Call yourself a Knowledge Flow Manager if that helps.
An organization's knowledge inherently needs to show the hard and soft costs. Show that.
It is a river. There are docks, eddies, boats, and a whole plethora of ways to get goods down it. You hire a river master to get it the fastest most efficient way possible. 
All KM is a guide to what you should be doing. 
I am getting a Master of Science in Knowledge Management, hence a STEM degree. It is derived from library science, and today librarians call themselves a plethora of titles. The point is to use what you feel comfortable with. Until the intangible intellect of a company (business intelligence) is traded on Wall Street (it is loaned against but not traded), I shall use the term Knowledge Manager or Knowledge Specialist, or whatever title the US Army gives me. Some are Knowledge Consultants.
John Antill
MCKM, CKS IA KT
Kent State MS KM Advisory Board Member
MS KM Student at Kent State
256-541-1229
