Re: Enterprise search - defining standard measures and a universal KPI #metrics #search


Lee Romero

Thank you for the feedback, Stephen!

I agree - there are a lot of complexities here.  I do not believe in the viability of the (admittedly simpler) approach of defining a standard corpus of content for this type of thing, though.  My intent here is not to assess search engines but search solutions.

The distinction is that a solution needs to consider the variations in the content included and the variations in users' information needs. My own perspective is that modern search engines are (at an admittedly high level) functionally very similar. Yes, the search engine vendors can get value out of testing against a standard corpus, but that does not translate into anything useful in the face of all of the challenges you face with an enterprise search solution - which likely encompasses content from many different sources, each of which is likely structured quite differently (or not at all), has varying levels of content quality and addresses different information needs.

The manager of an enterprise search solution needs to understand these things. They also need to address quality issues with sources (and content gap issues with those sources) and understand the different use cases and information needs of their users.

That detail aside - yes, I am assuming the need to standardize the capture of user behavioral data, which is where I am hoping this line of discussion could lead. While not claiming that my 4 basic metrics are "correct", I would like to get to a state where there are well-defined, specific standard metrics that all enterprise search managers could expect their engine to support - so that they know they will be able to compare between engines, for example. Until someone puts out a recommended starting point, we will continue on with different terminology, different metrics and, in general, confusion when comparing anything.
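
To make that a bit more concrete, here is a rough sketch (in Python, with field names that are purely my own illustration, not a proposed standard) of the kind of per-search record an engine would need to expose so that the same behavioral metrics could be computed the same way against any engine:

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class SearchEvent:
    """One logged search interaction; all field names here are illustrative only."""
    session_id: str        # ties a user's searches together into a session
    query: str             # the query text as entered by the user
    timestamp: datetime    # when the search was executed
    results_returned: int  # total number of hits reported by the engine
    clicked_ranks: List[int] = field(default_factory=list)  # result positions the user clicked
    abandoned: bool = False  # True if the user left the results page without clicking anything

If every engine could emit something like this, measures such as click-through rate, abandonment rate or average click position could be calculated identically no matter which engine produced the log.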

Thanks again! 

 

On Sun, Feb 28, 2021 at 5:54 PM Stephen Bounds <km@...> wrote:

Hi Lee,

Very interesting set of posts. My gut says that a key problem you still need to address is the relationship between the search space (i.e. the number of documents, the quality of the corpus of potential results, and the amount and quality of metadata available), the availability of user behaviour data, and the effectiveness of a search engine in that environment. I think this is necessary for a "complete" set of measures since these factors won't always correlate linearly.

In other words, if you're going to develop standardised metrics for search outcomes I suggest you should also consider standardised metrics for describing a search space. (Or alternatively, is it worth looking into creating a set of standardised content sources with different characteristics that can be reused to test a wide variety of search engines?)
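
To sketch what I mean by describing a search space (in Python here, and with characteristics that are only my guesses at what would matter), something like:

from dataclasses import dataclass

@dataclass
class SearchSpaceProfile:
    """Coarse description of a search environment; fields are illustrative only."""
    document_count: int             # number of documents in the indexed corpus
    source_count: int               # number of distinct content sources included
    metadata_coverage: float        # fraction of documents with usable metadata (0.0 to 1.0)
    behaviour_data_available: bool  # whether click/usage logs exist for tuning

Two solutions reporting the same outcome metrics could then at least be compared with their search spaces alongside them.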

User behaviour data is probably the trickiest to include in standard measures, since by definition it requires use of a search engine over an extended period of time by "real" users to meaningfully tune results. It would be very interesting to gather some statistics on whether it is possible to predict the long-term usefulness of search results by carrying out specific testing on a small number of case study searches and reviewing the effect on results re-ranking.

A related question I have is whether it is useful or possible to benchmark search results against a baseline method which is relatively unoptimised but easy to standardise. These might be the equivalent of a simple "grep" search or locating useful documents through directory browsing. This would allow you to say something like, "the search engine allowed users to locate documents, on average, with 86% fewer clicks and in 72% less time than a simple text-match keyword search".
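
As a rough illustration of how those figures would be derived (the numbers and names below are invented, purely to show the arithmetic):

# Average effort to locate a document via the baseline (e.g. a simple text-match search)
baseline_clicks, baseline_seconds = 14.0, 250.0
# Average effort via the search engine being evaluated
engine_clicks, engine_seconds = 2.0, 70.0

click_reduction = 1 - engine_clicks / baseline_clicks    # ~0.86 -> "86% fewer clicks"
time_reduction = 1 - engine_seconds / baseline_seconds   # 0.72 -> "72% less time"
print(f"{click_reduction:.0%} fewer clicks, {time_reduction:.0%} less time")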

Cheers,
Stephen.

====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================
On 1/03/2021 5:33 am, Lee Romero wrote:
Hi all - I recently started blogging again (after a sadly long time away!).

Part of what got me back to my blog was a problem I see with enterprise search solutions today - a lack of standards that would allow for consistency of measurement and comparison across solutions.  I've been mentally fermenting this topic for several months now.

I have just published the fourth article in a series about this - the one in which I propose a standard KPI.

I'd be interested in your comments and thoughts on this topic! Feel free to share here or via a comment on the blog (though I'll say it's probably easier to do so here!).

My recent posts:


Regards
Lee Romero
