Re: Enterprise search - defining standard measures and a universal KPI #metrics #search

Lee Romero

Hi Murray - Thanks for your comments. I'll respectfully disagree - at least within the context of an enterprise search solution.

You are right that in the "wilds" of the internet, guarding against misinformation and disinformation can be very important.

However, within an enterprise, I would say it is not likely to be as significant an issue.

It is certainly not impossible. A concrete example: your enterprise search solution exposes content from your communities - say, discussions from forums, Yammer groups, or discussion boards - and someone posts an answer to a question that is wrong. That answer could surface in search and lead people astray. Another example (perhaps less contrived) is when content publishers don't properly maintain their content and users find, for example, policy or benefits information that is outdated.

To expose my potential bias: I work with the assumption that, within a company, no one is (or at least very few people are) actively trying to mislead others. I also assume that the search solution attempts to ensure the quality of the content it includes (whether that quality is measured by currency or some other criterion).

If those assumptions don't hold, then yes, you're right. With them, though, I don't think anyone is actively trying to create their own 'bubble'; people are just trying to get their job done efficiently and accurately.

Thanks again.


On Tue, Mar 2, 2021 at 2:12 AM Murray Jennex <murphjen@...> wrote:
Actually, Lee, I think you are totally wrong on this. The user can't decide what the right results are - that is a satisficing approach to search, where you use the first results that you like. You actually need to strive for optimized search results, where you are getting the best results possible. One of the main tenets of KM is to improve decision making. Letting the user decide what the right results are is like what we've argued about for the last year on misinformation, where people were using the information they liked, not necessarily the truth or the right information.

Also, if you really read my response, you would see that I said that getting perfect knowledge on results is just not going to happen. You have to strive to ensure the search processes and the knowledge sources are the best they can be so that you can have as much trust in the process as possible... murray jennex

-----Original Message-----
From: Lee Romero <pekadad@...>
Sent: Mon, Mar 1, 2021 9:43 am
Subject: Re: [SIKM] Enterprise search - defining standard measures and a universal KPI

Thanks, Nirmala!

I agree with many of your points - I mentioned in my reply just now to Murray that ultimately the user is the one who defines what is "right" - my hope with my efforts here is to move toward a standard way of capturing that.  And I would like to ask for that confirmation (in theory) every time a user accesses information, whether they got there through search or not.  However, because I can't effect that change, I am proposing a partial "solution" that works within the context of search (well, I hope it works within the context of search).

Many of the points you raise are also interesting to consider, but I would look at them as imposing more on the solution than I would expect is viable at the outset.  So they are useful if you want to do more than "just the basics" but I'd like to define what "the basics" are first.  We do have ways in the current solution I'm working with to answer most of those questions you raise, but I don't see them as basic enough to use as a common starting point.

Thanks again for your comments!


On Mon, Mar 1, 2021 at 3:56 AM Nirmala Palaniappan <Nirmala.pal@...> wrote:
Hi Lee,

Interesting blog posts! Thanks for sharing and initiating this discussion. I am not sure whether my response is overly simplistic, but I hope it helps you look at it from the users' perspective. Ultimately, if your intention is to arrive at measures that reflect how useful and efficient the search was for users, I believe the following aspects need to be combined into, perhaps, one formula:

1. Did the user get what they were looking for? (can only be confirmed by asking the user)
2. Did the search engine behave like a friendly assistant in the process and was the experience, therefore, a pleasant one? (Auto-suggest, recommendations, prioritisation of results etc)
3. How long did it take to find what the user was looking for? (Can be automatically tracked but will still have to be double-checked with the user)
4. Did the user find something in a serendipitous way? (Unexpected but useful outcome)
5. Does the search engine up-sell and proactively provide subscriptions, what’s popular etc? (User delight)

I believe including the number of clicks and the number of page scrolls would be useful, but they may not say enough about the effectiveness of the search, since a lot depends on the user's choice of keywords and on their purpose and frame of mind at the time of searching for information.
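To make the idea of combining these aspects into one formula concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not a standard: the function name, the weights, the 0-to-1 scaling of each aspect, and the time normalisation are all hypothetical choices, and the thread itself does not define such a formula.

```python
# Hypothetical sketch: blending the five aspects above into a single
# per-search satisfaction score (0-100). Weights and normalisation are
# illustrative assumptions only.

def search_score(found_it: bool,          # 1. confirmed by asking the user
                 experience: float,       # 2. UX rating, 0.0 to 1.0
                 seconds_to_find: float,  # 3. time taken to find the answer
                 serendipity: bool,       # 4. unexpected but useful find
                 delight: bool,           # 5. proactive suggestions helped
                 max_seconds: float = 300.0) -> float:
    """Weighted blend of the five aspects, scaled to 0-100."""
    # Faster is better; anything beyond max_seconds scores zero on speed.
    speed = max(0.0, 1.0 - seconds_to_find / max_seconds)
    score = (0.50 * (1.0 if found_it else 0.0)   # success dominates
             + 0.20 * experience
             + 0.20 * speed
             + 0.05 * (1.0 if serendipity else 0.0)
             + 0.05 * (1.0 if delight else 0.0))
    return round(100.0 * score, 1)

# Example: user found the answer in 60 seconds with a pleasant experience.
print(search_score(True, 0.8, 60.0, False, False))  # → 82.0
```

The point of the sketch is only that aspects 1 and 3 can be tracked or confirmed per search, while 2, 4, and 5 would need survey-style input, so any single formula quietly mixes automatic and self-reported signals.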


On Mon, 1 Mar 2021 at 1:03 AM, Lee Romero <pekadad@...> wrote:
Hi all - I recently started blogging again (after a very sadly long time away!).

Part of what got me back to my blog was a problem I see with enterprise search solutions today - a lack of standards that would allow for consistency of measurement and comparison across solutions.  I've been mentally fermenting this topic for several months now.

I have just published the 4th article in a series about this - this one being where I propose a standard KPI.

I'd be interested in your comments and thoughts on this topic!  Feel free to share here or via comment on the blog (though I'll say it's probably easier to do so here!)

My recent posts:

Lee Romero
"The faithful see the invisible, believe the incredible and then receive the impossible" - Anonymous
