Lee Romero asked about managing risk of inaccurate info provided by AI-enabled search.
Over the course of 30 years of observing the rapid evolution of business systems and the introduction of new technology to support knowledge workers in my consulting work, and of studying the evolution of technology eras across the millennia, I arrived at the following principle:
The first use of any new technology generally includes applying it to improve or automate whatever was already being done manually.
The corollary of this is: You cannot automate that which you don’t already do well manually.
Internal combustion, electric motors, spreadsheets, Google Maps (remember AAA TripTiks?), photography, ERP, Workday... pick any one of these and look at how it was applied when it first came on the scene. Then consider the myriad novel, unexpected ways in which it was later put to use, oftentimes far exceeding the expectations that led to its rise in the first place.
And so it will be with AI. If we are worried that AI-based agents used for info search might expose companies to greater risk due to inaccurate or superseded search results, the first place I'd look is how well these risks are being managed now in the existing "manual" environment, in which a knowledge worker evaluates the search results and applies experience and judgment to discern which results have merit and which do not.
To the extent that there is high variability in those results between different operators, the ability to "automate" the task via AI may be challenging at best, or not yet possible at worst. If some searchers do it better than others, is it because effective training has not been developed? Or because we don't know how to develop effective training? If the latter, then how can we automate the task with AI when we still don't know how to get uniform results from properly trained manual workers? (See Short's corollary above: you can't automate that which you don't do well manually!)
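To make that variability check concrete, here is a minimal sketch (in Python, with invented toy data; the searcher names and labels are hypothetical) of measuring how often different searchers agree on which results have merit. Low agreement would be a warning sign that the judgment isn't yet well enough understood to automate:

```python
# Illustrative sketch with hypothetical data: before automating the judgment,
# gauge how consistently different searchers judge the same results.
from itertools import combinations

# Each searcher labels the same five results as relevant (1) or not (0).
labels = {
    "searcher_a": [1, 1, 0, 1, 0],
    "searcher_b": [1, 0, 0, 1, 0],
    "searcher_c": [1, 1, 1, 1, 0],
}

def pairwise_agreement(a, b):
    """Fraction of results two searchers judged the same way."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Average agreement over every pair of searchers.
scores = [
    pairwise_agreement(labels[p], labels[q])
    for p, q in combinations(labels, 2)
]
mean_agreement = sum(scores) / len(scores)
print(f"Mean pairwise agreement: {mean_agreement:.2f}")  # 0.73 here
```

If a measure like this comes back low even after training, that's the "not yet possible at worst" case above.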
This may be a case of using AI initially to tease out and codify the algorithms and heuristics used by expert searchers in order to program the AI to do it. This recursive process could be facilitated via machine learning (ML), or manually through trial and error: experts compare their search results to the AI agents' results, divine the sources of variance, and use that to "tune" the AI's algorithms or heuristics. Rinse and repeat until the results reach a high enough level of fidelity to be considered within the limits of tolerable risk. (Remember when Wikipedia was still new? It took a while before researchers were forced to accept that Wikipedia's error rate had reached parity with the then-standard for general reference, the Encyclopaedia Britannica.)
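That rinse-and-repeat loop can be sketched in a few lines of Python. Every name, score, and threshold below is invented for illustration, and the single tuning "knob" stands in for what would really be adjustments to the AI's heuristics:

```python
# Toy sketch of the tune-and-compare loop: adjust the AI until its results
# overlap with the expert's judgments to within tolerable risk.

expert_relevant = {"doc1", "doc3", "doc7"}  # what an expert judged relevant

def ai_search(_query, threshold_pct):
    """Stand-in for an AI agent's retrieval; a fixed toy score table."""
    scores = {"doc1": 90, "doc2": 40, "doc3": 70, "doc7": 55, "doc9": 30}
    return {doc for doc, s in scores.items() if s >= threshold_pct}

def fidelity(ai_results, expert_results):
    """Jaccard overlap between the AI's and the expert's result sets."""
    if not ai_results and not expert_results:
        return 1.0
    return len(ai_results & expert_results) / len(ai_results | expert_results)

# Rinse and repeat: loosen the knob until fidelity is tolerable.
threshold = 80
while fidelity(ai_search("query", threshold), expert_relevant) < 0.95:
    threshold -= 5  # real tuning would revise algorithms or heuristics
print(f"Accepted threshold: {threshold}")  # settles at 55 in this toy run
```

The loop structure is the point, not the knob: experts supply the reference judgments, a fidelity measure quantifies the variance, and tuning continues until the gap falls within tolerable risk.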
So that's my take on this question, which is an interesting one, to be sure, but definitely not without precedents from which we can gain insight into how it might evolve.
Tom Short Consulting