Enterprise search - defining standard measures and a universal KPI #search


Lee Romero
 

Hi all - I recently started blogging again (after a sadly long time away!).

Part of what got me back to my blog was a problem I see with enterprise search solutions today - a lack of standards that would allow for consistency of measurement and comparison across solutions.  I've been mentally fermenting this topic for several months now.

I have just published the 4th article in a series about this - this one being where I propose a standard KPI.

I'd be interested in your comments and thoughts on this topic!  Feel free to share here or via comment on the blog (though I'll say it's probably easier to do so here!)

My recent posts:


Regards
Lee Romero


Stephen Bounds
 

Hi Lee,

Very interesting set of posts. My gut says that a key problem you still need to address is the relationship between the search space (i.e. the number of documents, the quality of the corpus of potential results, and the amount and quality of metadata available), the availability of user behaviour data, and the effectiveness of a search engine in that environment. I think this is necessary for a "complete" set of measures, since these factors won't always correlate linearly.

In other words, if you're going to develop standardised metrics for search outcomes, I suggest you also consider standardised metrics for describing a search space. (Or alternatively, is it worth looking into creating a set of standardised content sources with different characteristics that can be reused to test a wide variety of search engines?)

User behaviour data is probably the trickiest to include in standard measures, since by definition it requires use of a search engine by "real" users over an extended period of time to meaningfully tune results. It would be very interesting to gather some statistics on whether long-term usefulness of search results can be predicted by carrying out specific testing on a small number of case-study searches and reviewing the effect on result re-ranking.

A related question I have is whether it is useful or possible to benchmark search results against a baseline method which is relatively unoptimised but easy to standardise - the equivalent of a simple "grep" search, say, or locating useful documents through directory browsing. This would allow you to say something like: "the search engine allowed users to locate documents, on average, with 86% fewer clicks and 72% less time than a simple text-match keyword search".
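To make that concrete, here is a minimal sketch of how such a baseline comparison might be computed, assuming per-task observations of (clicks, seconds) have been collected for both the engine under test and the baseline method. All names and numbers are hypothetical illustrations, not an established standard.

from statistics import mean

def relative_improvement(engine_obs, baseline_obs):
    # Each observation is a (clicks, seconds) pair for one search task.
    engine_clicks = mean(c for c, _ in engine_obs)
    baseline_clicks = mean(c for c, _ in baseline_obs)
    engine_time = mean(t for _, t in engine_obs)
    baseline_time = mean(t for _, t in baseline_obs)
    # Fractional reduction relative to the baseline method.
    return (1 - engine_clicks / baseline_clicks,
            1 - engine_time / baseline_time)

# Hypothetical data: the engine vs a simple text-match keyword search.
clicks_saved, time_saved = relative_improvement(
    engine_obs=[(2, 30), (3, 45)],
    baseline_obs=[(14, 120), (22, 180)])
print(f"{clicks_saved:.0%} fewer clicks, {time_saved:.0%} less time")
# -> 86% fewer clicks, 75% less time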

Cheers,
Stephen.

====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================


Matt Moore
 

Lee,

As you know, I respect your work (and also Martin White's).

“I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions”

I would say that the biggest single challenge that you have is that there is no professional / buyer community to drive standards & research in this area.

Without that enterprise search community, no common standards will emerge. So I would first seek to build that community with both professionals, vendors & consultants and then you will get standards & research.

And related to this is that enterprise search is not considered an existential problem by the vast majority of organizations. Yes, it would be nice if enterprise search sucked less - but we get by at the moment.

I think the wider context will defeat your efforts, but I wish you well. Perhaps enterprise search can hitch itself to another innovation/fad to get some traction? E.g. machine learning, automation, bots, etc.

Matt Moore
+61 423 784 504



Murray Jennex
 

Agreed, Matt. The only way to know if enterprise search is correct is to know everything that is stored and where, and then match the results of the search to the actual contents of the organization. Basically, you have to have perfect knowledge of the organization to do a thorough job of evaluating enterprise search. Of course, no one wants to pay the cost of generating perfect knowledge, so there is no method of determining the accuracy of enterprise search. Now, who would want to create standards of "good enough" search when you won't really know what good enough is? This kind of defeats the purpose of doing KM.

The processes needed are a taxonomy of knowledge (documents and other artifacts) plus naming and storage conventions, so that everyone knows where to put stuff and what to call it. Enterprise data dictionaries (or catalogs, as they are now called) are great, but how good are enterprises at creating and maintaining them? The point I'm making is that we all know what it takes to create good enterprise search, but getting the will and resources to do it is tough - and we don't really need standard measures, as you can't create them without perfect knowledge.

For another analogy, consider weather forecasting: how do you measure the performance of weather forecasters? You can only do it after the fact, when the weather is known. We would like more perfect weather forecasting, but we aren't willing to pay for what it takes to get perfect knowledge of the weather.....murray jennex




Stephen Bounds
 

Sorry Murray, I think you are offering a false premise.

By this standard, you could only work out the utility of a medical diagnosis process with perfect knowledge of all diseases and symptoms of the human body.

This is a perfect illustration of where RROI (relative return on investment) shines.

The question is not: will I get perfect search with enterprise engine X?
The question is: Am I "shifting the needle" on organisational outcomes in meaningful ways by implementing enterprise search? For example, do I get the same results 20% faster? Are my results 20% more accurate?

Fast, comprehensive full-text search is generally king in most enterprises precisely because it offers a meaningful speed improvement over manual browsing with minimal ongoing cost - it's more or less a one-off capital investment for the organisation.

Taxonomy-based search significantly increases capture costs. It will only deliver a positive RROI where a core set of information needs to be repeatedly and rapidly located, or where the potential benefits from one-off identification of information are substantial - which typically only occurs in industrial-scale plant operations or high-value consulting work.
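As a rough sketch of the "needle-shifting" arithmetic above - all numbers and names assumed purely for illustration, since RROI has no single standard formula:

def needle_shift(before, after):
    # Fractional improvement when lower is better (time-to-find, failure rate).
    return (before - after) / before

# Hypothetical before/after measurements around an enterprise search rollout:
speed_gain = needle_shift(before=50.0, after=40.0)     # mean seconds to find -> 20% faster
accuracy_gain = needle_shift(before=0.25, after=0.20)  # failed-search rate -> 20% fewer failures
print(f"{speed_gain:.0%} faster, {accuracy_gain:.0%} fewer failed searches")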

Cheers,
Stephen.



Murray Jennex
 

Not disagreeing with Stephen - just trying to answer the question that I thought was being asked and giving the reason why it can't be answered. On the other hand, relative return on enterprise search is nice but still not a good measure: it will tell you that you are okay in your efforts, but never whether you are actually right. This is something I struggle with. As an engineer, I like ground truth in measurements and am not happy without it; but as a physicist, I understand Heisenberg and how you can't actually know ground truth. All that said, I go back to my point about focusing on the process for establishing enterprise search. The better the processes, the more you can trust the results....murray




Nirmala Palaniappan
 

Hi Lee,

Interesting blog posts! Thanks for sharing and initiating this discussion. My response may be overly simplistic, but I hope it helps you look at this from the users' perspective. Ultimately, if your intention is to arrive at measures that reflect how useful and efficient the search was for users, I believe the following aspects need to be combined into, perhaps, one formula:

1. Did the user get what they were looking for? (can only be confirmed by asking the user)
2. Did the search engine behave like a friendly assistant in the process and was the experience, therefore, a pleasant one? (Auto-suggest, recommendations, prioritisation of results etc)
3. How long did it take to find what the user was looking for? (Can be automatically tracked but will still have to be double-checked with the user)
4. Did the user find something in a serendipitous way? (Unexpected but useful outcome)
5. Does the search engine up-sell and proactively provide subscriptions, what’s popular etc? (User delight)

I believe including the number of clicks and the number of page scrolls will be useful, but these may not say enough about the effectiveness of the search, as a lot depends on the user's choice of keywords and on their purpose and frame of mind at the time of searching.
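As one hedged sketch of how those five aspects might fold into a single formula - the weights, the time normalisation, and all names are assumptions for illustration only, not a proposal with any standing:

def satisfaction_score(found_it, experience, seconds_to_find,
                       serendipity, delight, time_budget=60.0):
    # found_it / experience / serendipity / delight are 0.0-1.0 ratings
    # from user confirmation, surveys, or logged signals (points 1, 2, 4, 5).
    # Time (point 3) is normalised so anything within the budget scores well.
    speed = max(0.0, 1.0 - seconds_to_find / time_budget)
    return (0.40 * found_it + 0.20 * experience + 0.20 * speed
            + 0.10 * serendipity + 0.10 * delight)  # assumed weights

# Hypothetical session: confirmed success, decent experience, 24 s to find.
print(round(satisfaction_score(1.0, 0.8, 24.0, 0.0, 0.5), 2))  # -> 0.73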

Regards
Nirmala  

--
"The faithful see the invisible, believe the incredible and then receive the impossible" - Anonymous


Lee Romero
 

Thank you for the feedback, Stephen!

I agree - there are a lot of complexities here.  I do not believe in the viability of the (admittedly simpler) approach of defining a standard corpus of content for this type of thing, though.  My intent here is not to assess search engines but search solutions.

The distinction being that a solution needs to consider the variations in content included and the variations in user information needs. My own perspective is that modern search engines are (at an admittedly high level) functionally very similar. Yes, the search engine vendors can get value out of testing against a standard corpus, but that does not translate into anything useful in the face of the challenges you face with an enterprise search solution - which likely encompasses content from many different sources, each structured quite differently (or not at all), with varying levels of content quality, addressing different information needs.

The manager of an enterprise search solution needs to understand these things, address quality issues (and content gaps) in those sources, and understand the different use cases and information needs of their users.

That detail aside - yes, I am assuming the need to standardize the capture of user behavioral data, which is where I am hoping this line of discussion could lead. While not claiming that my 4 basic metrics are "correct", I would like to get to a state where there are well-defined, specific standard metrics that all enterprise search managers could expect their engine to support - so that they know they will be able to compare between engines, for example. Until someone puts out a recommended starting point, we continue on with different terminology, different metrics and, in general, confusion in comparing anything.
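As a purely illustrative sketch of what such a standard could look like - the two metrics shown are stand-ins, not necessarily the 4 from the blog series - a common per-search event record that every engine could emit would already enable like-for-like comparison:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchEvent:
    # One record per executed search - the kind of minimal, well-defined
    # unit a standard could require every engine to emit.
    session_id: str
    query: str
    results_returned: int
    clicked_rank: Optional[int]  # 1-based rank of the clicked result; None if no click

def abandonment_rate(events):
    # Share of searches with no click at all.
    return sum(e.clicked_rank is None for e in events) / len(events)

def mean_reciprocal_rank(events):
    # Average of 1/rank over the searches that did get a click.
    clicked = [e for e in events if e.clicked_rank is not None]
    return sum(1 / e.clicked_rank for e in clicked) / len(clicked)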

Thanks again! 

 



Lee Romero
 

Agreed, Matt!  This is not likely to succeed, but I figured I have to try.

I *am* engaging with the Enterprise Search Engine Professionals group on LinkedIn for exactly that reason.  So far, I have had one of the moderators of that group reach out to me to help engage others "in the industry" to have a deeper discussion on this.  Hopefully, that does help get this somewhere. 

Thanks again!

Regards
Lee
 



Lee Romero
 

Hi Murray - Thanks for your comments.  

I will also say I don't agree with your initial characterization (agreeing with Stephen on that). 

My perspective is that no one can define what is right except the end user who needs information.  It is a hopeless task to think otherwise.

Ideally, I would like our information environment to have an omnipresent "button" (at least in the metaphorical sense) that asks, "Is this the information you need to do your job?"  Then, regardless of how a user got to that information - found it on their hard drive, in their email, by browsing an intranet, or via search - we could start to determine whether people are getting the information they need. If we could do that, we could better define the quality of search by asking: how frequently does a user click "Yes" on that button when they reach content via search? (You can ask the same question when they get there through other means - suddenly opening up the possibility of answering the unanswerable question of whether navigation or search is "better"...)

Given that we don't have that capability, I think it's reasonable to assume that there is a relatively fixed percentage of the time that, when a user finds something in search, they would click "Yes" on that button. I don't know what that percentage is, but let's say it's 50% - in other words, half of the time a user finds something "of interest" via search, it is actually what they need. If we (through efforts to improve search) can increase the percentage of searches where a user finds something of interest, 50% of that larger number is still a larger number, right?
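A worked version of that arithmetic, with the 50% figure and all other numbers assumed purely for illustration:

p_useful_given_found = 0.50   # assumed fixed share of "found" items that truly meet the need
found_rate_before = 0.40      # share of searches where the user finds something of interest
found_rate_after = 0.48       # same share after search improvements

print(p_useful_given_found * found_rate_before)  # 0.20 -> 20% of searches truly succeed
print(p_useful_given_found * found_rate_after)   # 0.24 -> 24%: more interest found, more needs met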

The metrics in search are just indicators - they are not a measure of "reality".  We need to keep that in mind.

Regards
Lee



Lee Romero
 

Thanks, Nirmala!

I agree with many of your points - I mentioned in my reply just now to Murray that ultimately the user is the one who defines what is "right", and my hope with my efforts here is to move toward a standard way of capturing that. I would like to ask for that confirmation (in theory) every time a user accesses information, whether they got there through search or not. However, because I can't effect that change, I am proposing a partial "solution" that works within the context of search (or so I hope).

Many of the points you raise are also interesting to consider, but I would look at them as imposing more on the solution than I would expect is viable at the outset.  So they are useful if you want to do more than "just the basics" but I'd like to define what "the basics" are first.  We do have ways in the current solution I'm working with to answer most of those questions you raise, but I don't see them as basic enough to use as a common starting point.

Thanks again for your comments!

Regards
Lee




Simon Denton
 

This is certainly tricky to quantify. I've just looked at our SharePoint statistics, and it is easy to measure 'searches'; clicks are much harder owing to result previews. There seems to be a behavioural pattern whereby, if 'searchers' see what they think they need in the preview, they have found everything they need and so do not click through. We also employ tactics such as boosting certain results, providing full document previews, document breadcrumb trails, etc. Hence the potential for high abandonment rates and lost clicks under your suggested KPIs - I suspect this is true based on SharePoint statistics for abandoned and no-result queries. I believe that if our search service were that bad, we would have incurred the ire of the organisation by now.

For us it is more about closing the feedback loop - answering "did you find what you were looking for?", "how did you find it?", etc.

Over the 5 or so years that our service has been running, the majority of searches have been people looking for people. Among the content searches, most are for corporate items like "where can I find my payslip", "book leave", etc. - perhaps a sign that we've not got the design of our intranet 100% right. The remaining content searches are simple keywords.

Regards,
Simon






Matt Moore
 

Lee,

What are the metrics that the managers of search engine professionals care about?

Regards,

Matt Moore
+61 423 784 504



James Robertson
 



Hi Lee,

Sorry for jumping so late into the thread...

Have you come across Martin White in the UK (www.intranetfocus.com)? In addition to being my intranet counterpart in the UK, he's the author of several books on enterprise search. He's also released an enterprise search evaluation questionnaire which might be of interest.

I know there has been quite a lot of research on enterprise search, but primarily in the purely academic space. Martin is across all of this.

With your permission, I could also forward him links to the articles you've published.

PS. apologies if Martin's name has already been mentioned, I struggle to keep up with my emails on the best of days, and so could easily have missed that.

Cheers,
James


--
James Robertson
Founder and Managing Director | Step Two
Ph: +61 2 9319 7901 | M: +61 416 054 213
www.steptwo.com.au


Murray Jennex
 

Actually, Lee, I think you are totally wrong on this. The user can't decide what the right results are - that is a satisficing approach to search, using the first results that you like. You actually need to strive for optimized search results, where you are getting the best results possible. One of the main tenets of KM is to improve decision making. Letting the user decide what the right results are is like what we've argued about for the last year with misinformation, where people were using the information they liked, not necessarily the truth or the right information.

Also, if you really read my response, you would see that I said that getting perfect knowledge of results is just not going to happen. You have to strive to ensure the search processes and the knowledge sources are the best they can be, so that you can have as much trust as possible in the process......murray jennex




Murray Jennex
 

Good points, Simon! I am always leery of letting users decide that they have the right information, as it leaves us in a situation where the user is picking their own truth - and as we saw during the last year, users can oftentimes pick misinformation as their right information.

Also, your comment on links to people is spot on. The Jennex-Olfman KM success model has included links to knowledge as a key part of any KMS since it was first published back in 2002. My research found, and has continued to find, that people are many times unsure of how to search, so instead they look for the right person to talk to......murray jennex


-----Original Message-----
From: Simon Denton via groups.io <Simon.denton@...>
To: main@SIKM.groups.io <main@SIKM.groups.io>
Sent: Mon, Mar 1, 2021 10:15 am
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI

This is certainly tricky to quantify. I've just looked at our SharePoint statistics and it is easy to measure 'searches'. Clicks are much harder owing to result previews. There seems to be a behavioural pattern whereby if the 'searcher' sees what they think they need in the preview they might have found everything they need and so do not click through. We also employ tactics whereby we boost certain results, provide full document previews, document bread crumb trials etc. Hence the potential for high abandonment rates and lost clicks using your suggested kpis. I suspect this is true based on SharePoint statistics for abandoned and no result queries. I believe that if our search service was that bad we would have incurred the ire of the organisation by now.

For us it is more about closing the feedback loop, answering the "did you find what you where looking for?", "how did you?" etc.

Over the 5 years or so that our service has been running the majority of searches are people looking for people. Looking at the content searches most are for corporate items like "where can I find my payslip, book leave etc." Perhaps more a sign that we've not got the design of our intranet 100% right. The remaining content searches are simple keywords.

Regards,
Simon




From: main@SIKM.groups.io <main@SIKM.groups.io> on behalf of Lee Romero via groups.io <pekadad@...>
Sent: 01 March 2021 17:43
To: main@sikm.groups.io <main@sikm.groups.io>
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI
 
Thanks, Nirmala!

I agree with many of your points - I mentioned in my reply just now to Murray that ultimately the user is the one who defines what is "right" - my hope with my efforts here is to move toward a standard way of capturing that.  And I would like to ask for that confirmation (in theory) every time a user accesses information, whether they got there through search or not.  However, because I can't effect that change, I am proposing a partial "solution" that works within the context of search (well, I hope it works within the context of search).

Many of the points you raise are also interesting to consider, but I would look at them as imposing more on the solution than I would expect is viable at the outset.  So they are useful if you want to do more than "just the basics" but I'd like to define what "the basics" are first.  We do have ways in the current solution I'm working with to answer most of those questions you raise, but I don't see them as basic enough to use as a common starting point.

Thanks again for your comments!

Regards
Lee


On Mon, Mar 1, 2021 at 3:56 AM Nirmala Palaniappan <Nirmala.pal@...> wrote:
Hi Lee,

Interesting blog posts! Thanks for sharing and initiating this discussion. I am not sure if my response is overly simplistic, but I hope it helps you look at it from the users' perspective. Ultimately, if your intention is to arrive at measures that reflect how useful and efficient the search was for users, I believe the following aspects need to be combined into, perhaps, one formula:

1. Did the user get what they were looking for? (can only be confirmed by asking the user)
2. Did the search engine behave like a friendly assistant in the process and was the experience, therefore, a pleasant one? (Auto-suggest, recommendations, prioritisation of results etc)
3. How long did it take to find what the user was looking for? (Can be automatically tracked but will still have to be double-checked with the user)
4. Did the user find something in a serendipitous way? (Unexpected but useful outcome)
5. Does the search engine up-sell and proactively provide subscriptions, what’s popular etc? (User delight)

I believe including the number of clicks and the number of page scrolls will be useful, but it may not provide enough information on the effectiveness of the search, as a lot depends on the user's choice of keywords and their purpose and frame of mind at the time of searching for information.

Regards
Nirmala  
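
As a rough illustration of Nirmala's idea of combining these five aspects into one formula, here is a minimal sketch in Python. The weights, field names, and the choice of a simple weighted average are all assumptions for illustration, not anything specified in the thread:

# Hypothetical composite "search experience" score combining the five
# aspects listed above. Weights and field names are invented.
WEIGHTS = {
    "found": 0.40,        # 1. user confirmed they found what they needed
    "assistive": 0.15,    # 2. auto-suggest / recommendations helped
    "speed": 0.20,        # 3. 1.0 = found instantly, 0.0 = gave up
    "serendipity": 0.10,  # 4. unexpected but useful discovery
    "delight": 0.15,      # 5. proactive subscriptions, what's popular, etc.
}

def experience_score(session: dict) -> float:
    """Weighted average of the five aspects, each scored 0.0 to 1.0."""
    return sum(weight * session.get(aspect, 0.0)
               for aspect, weight in WEIGHTS.items())

# Example: a session where the user found the answer quickly
print(experience_score({"found": 1.0, "speed": 0.8, "assistive": 0.5}))  # 0.635

Note that several of these inputs (notably "found") can only be confirmed by asking the user, which is exactly the difficulty discussed elsewhere in this thread.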



John Antill
 

Now, before anyone argues that smoking is bad: I am just reporting numbers. This is similar to lung cancer death rates. If you look at smokers per capita:

US smoking rate: 17.25%
US population: 328,200,000
Estimated lung cancer deaths: 154,050
US LCDR (deaths / total population): 0.04%

You are seeing the same thing with COVID-19: they report the deaths, not the percentage of the total population.

If you only show the numbers skewed to the hypothesis, that is bad science.
I love this site for numbers: https://www.worldometers.info/
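
To make the denominator point concrete, here is a quick sketch using the figures quoted above (the variable names and the per-smoker comparison are mine, for illustration):

# Same numerator, different denominators: the point about skewed framing.
population = 328_200_000
smoking_rate = 0.1725
lung_cancer_deaths = 154_050

per_capita = lung_cancer_deaths / population
per_smoker = lung_cancer_deaths / (population * smoking_rate)

print(f"Per capita: {per_capita:.3%}")  # ~0.047% of the whole population
print(f"Per smoker: {per_smoker:.3%}")  # ~0.272% of smokers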


John Antill
MCKM, CKS IA KT
Kent State MS KM Advisory Board Member
MS KM Student at Kent State
256-541-1229


Lee Romero
 

Hi Simon - thanks for your comments!

What you describe with the use of "preview" is what I describe in my last post among these 4 as "good abandonment".  I don't know how to deal with that, but based on my experience, really being confident that users get what they need from the search results themselves is quite hard to achieve at scale, whether via preview, via direct answers to natural language questions (which I discussed previously), or via something as simple as showing the phone number for a person when that person's profile is displayed as a result.

I think it can account for a percentage of abandonment in all search experiences, but I suspect it makes up a small one.  Just to make up numbers to illustrate what I mean: if your overall abandonment rate (measured as I describe) was 30%, I would expect "good abandonment" to account for only low single digits of all search sessions.  If you suspect it is higher than that, you should work out a way to measure it (or at least approximate it).
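
A minimal sketch of what such an approximate measurement might look like, assuming a hypothetical event log per search session; the field names and the 10-second dwell threshold are invented for illustration, not Lee's method:

from collections import Counter

def classify_session(session: dict) -> str:
    """Heuristically separate 'good' abandonment (satisfied on the
    results page itself) from ordinary abandonment."""
    if session["clicks"] > 0:
        return "engaged"
    # No click: was the user plausibly satisfied on the page itself?
    satisfied_on_page = (
        session.get("answer_box_shown", False)
        or (session.get("preview_opened", False)
            and session.get("dwell_seconds", 0) >= 10)
    )
    return "good_abandonment" if satisfied_on_page else "abandonment"

sessions = [
    {"clicks": 2},
    {"clicks": 0, "answer_box_shown": True},
    {"clicks": 0, "preview_opened": True, "dwell_seconds": 25},
    {"clicks": 0},
]
print(Counter(classify_session(s) for s in sessions))
# Counter({'good_abandonment': 2, 'engaged': 1, 'abandonment': 1})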

I have had many conversations with IT colleagues who will immediately gravitate to that point as the explanation for a higher abandonment rate: "Well, maybe users are getting what they need right from the results page, so there is no problem!"  Yes, some of that is true, but part of it is (in my opinion) wishful thinking. :)

Agreed 100% on the point that what really needs to be measured is "Did you find what you were looking for?"  Absent a pervasively available way for users to say "Yes, I did!" on every single piece of content, that is not something you can achieve in search.  Sure, you can put a widget on your results page that allows a user to answer that, but very few people will actually do so; and honestly, by the time a user is able to answer (they have looked at what they accessed), they have likely lost the context of the search results and aren't inclined to go back and click that.

Thanks again!
Lee 



Lee Romero
 

Hi Matt - I can't answer for everyone but the sole question I would like to be able to answer accurately is, "Did you get to the information / tool / resource that you needed to access?"

Everything else is nice to know but if you don't answer that, you aren't doing your job.

As I mention in my post, the use of abandonment rate is not a direct measure of that, but I think it correlates highly (though inversely) with the answer to that question.  Users are not (typically) going to access results that don't look likely to be of use.  On the other hand, not clicking on anything (i.e., abandoning your search) is a pretty clear indication that they probably did not find anything useful (with the exception of "good abandonment").
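
For concreteness, an abandonment-rate measure of this kind could be computed from a session log along these lines; this is a sketch only, with an invented log format, not Lee's actual definition:

# Sketch: abandonment rate = share of search sessions with no result click.
# The (query, clicks) log format here is invented for illustration.

def abandonment_rate(sessions: list[dict]) -> float:
    if not sessions:
        return 0.0
    abandoned = sum(1 for s in sessions if s["clicks"] == 0)
    return abandoned / len(sessions)

log = [
    {"query": "payslip", "clicks": 1},
    {"query": "book leave", "clicks": 0},
    {"query": "jane smith", "clicks": 2},
]
print(f"{abandonment_rate(log):.0%}")  # 33%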

Regards
Lee

On Mon, Mar 1, 2021 at 3:53 PM Matt Moore <matt@...> wrote:
Lee,

What are the metrics that the managers of search engine professionals care about?

Regards,

Matt Moore
+61 423 784 504

On Mar 2, 2021, at 4:23 AM, Lee Romero <pekadad@...> wrote:


Agreed, Matt!  This is not likely to succeed, but I figured I have to try.

I *am* engaging with the Enterprise Search Engine Professionals group on LinkedIn for exactly that reason.  So far, I have had one of the moderators of that group reach out to me to help engage others "in the industry" to have a deeper discussion on this.  Hopefully, that does help get this somewhere. 

Thanks again!

Regards
Lee
 

On Sun, Feb 28, 2021 at 9:43 PM Matt Moore <matt@...> wrote:
Lee,

As you know I respect your work (and also Martin White’s)

“I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions”

I would say that the biggest single challenge that you have is that there is no professional / buyer community to drive standards & research in this area.

Without that enterprise search community, no common standards will emerge. So I would first seek to build that community with both professionals, vendors & consultants and then you will get standards & research.

And related to this is that enterprise search is not considered an existential problem by the vast majority of organizations. Yes, it would be nice if enterprise search sucked less - but we get by at the moment.

I think the wider context will defeat your efforts but I wish you well and perhaps enterprise search can hitch itself to another innovation/fad to get some traction? E.g. machine learning, automation, bots, etc.

Matt Moore
+61 423 784 504



Lee Romero
 

Hi James - I am quite familiar with Martin - in fact, I credit him (see my first post in this small series) with spurring me to action on this topic.  He commented on the first of my posts (both in the LinkedIn group I mentioned earlier and on my blog).

You can certainly share with whomever you'd like, though!

Thanks!
Lee 

On Mon, Mar 1, 2021 at 4:43 PM James Robertson <jamesr@...> wrote:



Hi Lee,

Sorry for jumping so late into the thread...

Have you come across Martin White in the UK (www.intranetfocus.com)? In addition to being my intranet counterpart in the UK, he's the author of several books on enterprise search. He's also released an enterprise search evaluation questionnaire which might be of interest.

I know there has been quite a lot of research on enterprise search, but primarily in the purely academic space. Martin is across all of this.

With your permission, I can also forward him links to the articles you've published?

PS: Apologies if Martin's name has already been mentioned; I struggle to keep up with my emails on the best of days and so could easily have missed it.

Cheers,
James


--
James Robertson
Founder and Managing Director | Step Two
Ph: +61 2 9319 7901 | M: +61 416 054 213
www.steptwo.com.au