Thank you for the feedback, Stephen!
I agree - there are a lot of complexities here. I don't believe, though, that the (admittedly simpler) approach of defining a standard corpus of content is viable for this purpose. My intent here is not to assess search engines but search solutions.
The distinction is that a solution has to account for variation in both the content included and the users' information needs. My own perspective is that modern search engines are, at an admittedly high level, functionally very similar. Yes, the search engine vendors can get value out of testing against a standard corpus, but that does not translate into anything useful in the face of the challenges of an enterprise search solution - which likely encompasses content from many different sources, each structured quite differently (or not at all), with varying levels of content quality, and each addressing different information needs.
The manager of an enterprise search solution needs to understand these things, address quality issues with sources (and content gaps in those sources), and understand the different use cases and information needs of their users.
That detail aside - yes, I am assuming the need to standardize the capture of user behavioral data, which is where I am hoping this line of discussion could lead. While not claiming that my 4 basic metrics are "correct", I would like to get to a state where there are well-defined, specific standard metrics that all enterprise search managers could expect their engine to support - so that they know they will be able to compare between engines, for example. Unless someone puts out a recommended starting point, we will continue on with different terminology, different metrics and, in general, confusion in comparing anything.
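The message doesn't enumerate the 4 metrics, so purely as an illustration of what "well-defined, specific standard metrics" from captured behavioral data might look like, here is a minimal sketch. The metric names (click-through rate, abandonment rate, mean reciprocal rank of first click, reformulation rate) and the session schema are my assumptions, not the author's proposal:

```python
from dataclasses import dataclass, field

@dataclass
class SearchSession:
    """One captured search interaction (hypothetical schema)."""
    query: str
    clicked_ranks: list = field(default_factory=list)  # 1-based ranks of clicked results
    reformulated: bool = False  # did the user immediately revise the query?

def behavioral_metrics(sessions):
    """Compute illustrative session-level metrics from captured behavior."""
    n = len(sessions)
    clicked = [s for s in sessions if s.clicked_ranks]
    return {
        # share of searches with at least one result click
        "click_through_rate": len(clicked) / n,
        # share of searches with no click at all
        "abandonment_rate": 1 - len(clicked) / n,
        # mean reciprocal rank of the first click, over sessions with clicks
        "mean_reciprocal_rank": sum(1 / min(s.clicked_ranks) for s in clicked) / len(clicked),
        # share of searches followed by an immediate reformulation
        "reformulation_rate": sum(s.reformulated for s in sessions) / n,
    }

# Example log of four sessions
sessions = [
    SearchSession("vacation policy", [1]),
    SearchSession("vacation policy 2021", [3]),
    SearchSession("pto rules", [], reformulated=True),
    SearchSession("expense report form", [2]),
]
print(behavioral_metrics(sessions))
```

The point of pinning down definitions like these (what counts as a "click", a "session", a "reformulation") is exactly what would let two engines' numbers be compared side by side.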
On Sun, Feb 28, 2021 at 5:54 PM Stephen Bounds <km@...> wrote: