June 2019 SIKM Call: Kate Pugh - Conversational AI #monthly-call #conversation #AI


Stan Garfield
 

This is a reminder of tomorrow's monthly call from 11 am to 12 noon EDT.

SIKM Leaders Community Monthly Call



Stan Garfield
 

TO: SIKM Leaders Community

Yesterday we held our 166th monthly call. Here are the details:
Thanks to Kate for presenting and to the many members who participated in the conversation. Please continue the discussion by replying to this thread.


Tweets and group chat comments

  • From Joe Raimondo: Elucidating SIKM Leaders session with Kate Pugh on Conversational AI
  • David Eddy mentioned the 90-9-1 Rule of Thumb. My article on this - Fact or Fiction?
  • Tom Barfield mentioned https://www.touchcast.com
  • From Vijayanandam V M: Kate, I have a use case in our enterprise to link to Conversational AI. "Decision making in organization moving from virtual meetings to Conversational chat apps like WhatsApp"


Katrina Pugh
 

Hello, Stan and SIKM'ers
It was great to discuss "conversational AI" and "AI for conversation." Lee and Linda raised good questions, and I would love to hear others' thoughts:

1. Lee: How do we build trust when intelligent agents (chat bots) are imperfect? (I initiated the question about how we can underscore that we're all contributors to the bot's success, and see complaints as opportunities for experimentation and engagement. We become citizens!)

2. Linda: How do we make sure that the AI-enabled conversation supports diversity, e.g., introverts? (Sierra added that there is more "space for reflection and return" in applications like Teams and Chat.)

Thanks, and I look forward to hearing from you!
Kate

Katrina Pugh
EY | Advisory Services | Digital, Data and Analytics Practice
Columbia University | Info and Knowledge Strategy Master's Program Faculty
Mobile 617-967-3910



Mark Zoeckler
 

Hi Stan - Could you possibly change my email address for this group to zoeckler.mark@... - or is that something I have to do with Yahoo Groups?

I hope you are doing well.

Mark


Stan Garfield
 

How to change your email address: Click on Edit Membership under the Membership drop-down menu in the upper right on the Yahoo group page, and then click on the pencil icon next to Identity to edit your email address.

If you have problems, I can remove you from the group and send an invitation to a new email address.


Lee Romero
 

Kate - Thanks for bringing up this question that I'd asked.

Another detail to share: like many large organizations, mine can be quite "risk averse." Beyond the "intelligent agent" context, as the business owner of our enterprise search solution, I hear about this in terms of how we ensure that our users find authoritative content when they are looking for it. That is, when a user turns to our search looking for information related to a client engagement, and they find out-of-date or wrong information and make a decision based on it, we could open ourselves up to legal repercussions.

That expectation does extend to intelligent agents.  If a user interacts with one and is given a wrong answer, that could be a significant problem.  

The "obvious" answer is ensuring your agent (or search) is only fed "correct" content, but in most practical situations that is not possible to guarantee. And if you use external information with an intelligent agent (you mention this by way of using open source content, for example), you are opening yourself up to the possibility of using unvalidated information.
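One partial mitigation is a content gate in front of the agent's index: only admit documents that have been validated and are still within their review period. The sketch below is purely illustrative — the record fields, sources, and dates are hypothetical, not anything described in this thread.

```python
from datetime import date

# Hypothetical document records; the field names are illustrative assumptions.
corpus = [
    {"id": "doc-1", "source": "internal-kb", "validated": True,  "review_by": date(2020, 1, 1)},
    {"id": "doc-2", "source": "open-source", "validated": False, "review_by": date(2020, 1, 1)},
    {"id": "doc-3", "source": "internal-kb", "validated": True,  "review_by": date(2018, 6, 1)},
]

def is_authoritative(doc, today):
    """Keep only documents that were validated and are not past their review date."""
    return doc["validated"] and doc["review_by"] > today

# Only doc-1 passes: doc-2 was never validated, doc-3's review date has lapsed.
feed = [d for d in corpus if is_authoritative(d, date(2019, 6, 19))]
print([d["id"] for d in feed])  # ['doc-1']
```

Of course, as noted above, this only narrows the problem: it assumes someone is doing the validation and review-dating reliably in the first place.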

What have others done when faced with this challenge?

Regards
Lee Romero


Ray Sims
 

I thought of Kate's SIKM presentation of yesterday when I listened to https://a16z.com/2019/06/19/history-and-future-of-machine-learning/ today.

 

Tom Mitchell (http://www.cs.cmu.edu/~tom/) outlines a compelling vision for Conversational Learning with our phone digital assistant.

 

Here is a two+ minute audio clip of this part of the conversation: https://www.airr.io/quote/5d0a83f98ef6251ceb44a2b1

 

Ray Sims

http://www.the12thchapter.com


tman9999@...
 

Lee Romero asked about managing risk of inaccurate info provided by AI-enabled search.

Over the course of 30 years of observing the rapid evolution of business systems and the introduction of new tech to support knowledge workers during my consulting work, and of studying the evolution of technology eras across the millennia, I arrived at the following theorem:

The first use of any new technology generally includes applying it to improve or automate whatever was already being done manually.

The corollary of this is: You cannot automate that which you don’t already do well manually.

Internal combustion, electric motors, spreadsheets, Google Maps (remember AAA TripTiks??), photography, ERP, Workday... pick any one of these, and take a look at how they were initially applied when they first came on the scene. Then consider the myriad novel, unexpected ways in which they were put to use, oftentimes far exceeding the expectations that led to their rise in the first place.

And so it will be with AI. If we are worried about how AI-based agents used for info search might expose companies to greater risk due to inaccurate or superseded search results, the first place I’d look is how well these risks are being managed now in the existing “manual” environment, in which a knowledge worker evaluates the search results and applies experience and judgement to discern which results have merit and which do not.

To the extent that there is high variability in those results between different operators, the ability to “automate” it via AI may be challenging at best, or not yet possible at worst. If some searchers do it better than others, is it due to a lack of training? Or an inability to develop effective training? If the latter, then how can we automate it with AI, if we still don’t know how to get uniform results from manual workers once they’ve been properly trained? (See Short’s corollary above: you can’t automate that which you don’t do well manually!)

This may be a case of using AI initially to tease out and codify the algorithms and heuristics used by expert searchers in order to program the AI to do it. This recursive process could be facilitated via Machine Learning (ML), or manually through trial and error, via experts comparing their search results to the AI agents’ results, divining the sources of variance, and using that to “tune” the AI’s algorithms or heuristics. Rinse and repeat until the results reach a high enough level of fidelity to be considered within the limits of tolerable risk. (Remember when Wikipedia was still new? It took a while before researchers were forced to accept that Wikipedia’s error rate had reached parity with the then-standard for general reference, the Encyclopaedia Britannica.)
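That compare-and-tune loop needs an agreement measure to know when the agent is "close enough" to the experts. A minimal sketch, using top-k overlap as one possible metric (the result lists and the cut-off below are hypothetical, not from the call):

```python
def overlap_at_k(expert_results, agent_results, k=5):
    """Fraction of the expert's top-k results that also appear in the agent's top k."""
    expert_top = set(expert_results[:k])
    agent_top = set(agent_results[:k])
    return len(expert_top & agent_top) / k

# Hypothetical document IDs returned for the same query.
expert = ["a", "b", "c", "d", "e"]   # what a trained searcher surfaced
agent  = ["a", "c", "x", "b", "y"]   # what the AI agent returned

score = overlap_at_k(expert, agent)
print(score)  # 0.6 — the agent recovered three of the expert's top five
```

In practice, the "rinse and repeat" step would re-tune the agent and re-measure across many queries until the score clears whatever threshold the organization treats as tolerable risk.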

So that’s my take on this question, which is an interesting one, to be sure. But one that is definitely not without precedents from which we can gain insights regarding how we might anticipate it to evolve.

Tom
TSC
Tom Short Consulting
San Francisco


Brett Patron
 

Tom Short said: "The corollary of this is: You cannot automate that which you don’t already do well manually..."

Wow, that resonated with me. I have a client who would never believe this, even though it is a spot-on observation.

