KM and AI: Use cases now and in the future? #methods #AI #tools
Great article that makes an important observation about where we are at this moment in AI's evolution. It describes "capability overhang": the human tendency to apply old mental models to new capabilities before we can exploit those capabilities more fully. And can we even begin to imagine how it might be used in the (near) future?

Tom Short Consulting
All of my previous SIKM Posts
On Fri, Dec 9, 2022 at 01:02 PM, Tom Short wrote:
Tom's query might have slipped through the cracks. What are your thoughts?
> How do we think AI could be used right now to help out with doing KM work?

I think it is excellent for spelling and grammar correction. This is important in OCR scanning of printed work.
The challenge, however, is in disposing of any notion that Large Language Models (LLMs) are in any way, shape, or form comparing their results to objective facts (e.g. even cross-referencing with Wikipedia). They are regression models: they predict the next word or phrase based on what their training text indicates is most likely. This leads to very reasonable-sounding, but made-up, work.
I would love to use them to do literature reviews, summarising a large body of digital texts into a single, cross-referenced (and referenced) report, but they’re as likely to make up things, including spurious references, as they are to include real reference text. Eventually, some future version will also validate the accuracy of the work, and then it will be helpful for these sorts of analyses but – until then – stick to using it for OCR correction.
>--------------------< Gavin Chait is a data scientist and development economist at Whythawk. uk.linkedin.com/in/gavinchait | twitter.com/GavinChait | gavinchait.com
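The "predict the next most likely word" behaviour Gavin describes can be sketched in a few lines. Everything below is invented for illustration (a toy vocabulary and hand-made probabilities, nothing like a real LLM), but it shows the key point: generation is sampling from learned word statistics, with no fact lookup anywhere in the loop.

```python
import random

# Toy next-token model: for each context word, a distribution over likely
# successors. All words and probabilities here are made up for illustration.
model = {
    "the":    {"study": 0.5, "report": 0.5},
    "study":  {"found": 0.7, "showed": 0.3},
    "found":  {"that": 1.0},
    "that":   {"coffee": 0.6, "tea": 0.4},
    "coffee": {"cures": 0.5, "causes": 0.5},
    "tea":    {"cures": 0.5, "causes": 0.5},
}

def generate(start, n_tokens, seed=0):
    """Sample a continuation one token at a time; nothing is ever fact-checked."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        dist = model.get(out[-1])
        if dist is None:          # no statistics for this word: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))  # fluent and confident, never checked against reality
```

Whatever sentence comes out sounds plausible, because plausibility is the only thing the model optimises; whether "coffee cures" or "coffee causes" is emitted is a coin flip, which is exactly the fabricated-reference problem in miniature.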
From: main@SIKM.groups.io <main@SIKM.groups.io> On Behalf Of Stan Garfield
Sent: 27 December 2022 16:44
To: main@SIKM.groups.io
Subject: Re: [SIKM] The AI view of KM #art-of-KM
Splitting this question off from the other thread, to explore how we might use AI right now in doing KM work, and also to start imagining future use cases for AI that are currently not possible, or that no one has yet considered.

In terms of right now, one of the easiest use cases I can think of is simply providing best-practices information and "how to" guidance about various KM tools and techniques. Whether it's social network analysis or CoPs or narrative, practitioners and end users alike continue to ask the same questions again and again about what these things are and how to go about doing them. ChatGPT is pretty good at collating and summarizing everything that has been written about things like this, so it provides a solid starting point for answering these simple questions. These are examples of what could be considered deductive problem solving, which has the pattern: if A, then B.

Where AI is not going to be as effective is in knowing which tool to use for a given circumstance. This requires inductive problem solving, something human brains are far better at than computers. In order to recommend a KM solution, one has to understand the current situation, identify either shortcomings or opportunities for improvement, and then identify which KM tools might be brought to bear. As far as I know, this sort of multivariate problem space is not well suited to current AIs. The big question is: will it be one day?

In the meantime, it seems to me that practitioners could embrace AI tools like ChatGPT to reduce the amount of time and energy needed to get up to speed on various KM tools and techniques. Under the guidance of a mentor, this could be a way for newer KM practitioners to shorten their learning curve and expand their knowledge of available KM methodologies.

Tom Short Consulting
All of my previous SIKM Posts
Try to make a computer behave more like a human and, surprise! Its math skills deteriorate. :)
https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP.pdf
This is just a hypothetical musing, but suppose ChatGPT (or at least the underlying code behind it) becomes widespread. And suppose that part of that widespread use is the generation of online content -- news articles, blog posts, essays, real estate listings, biographies, you name it. Then, since ChatGPT will of course be constantly ingesting and absorbing online content in order to stay up to date, will we reach a point where ChatGPT is just endlessly recycling its own content?
It seems like this could result in the amplification of what would normally be slight biases in content, pushing them to more extremes, similar to news networks that toss out an idea to get their viewers talking about it, then report that it must be news because everyone is talking about it, perpetuating the cycle.
Bart Verheijen
My thoughts from about three months ago on applying ChatGPT to content marketing: "Very easy, but rubbish content at (close to) zero marginal cost. If I combine these two effects, they can only lead to an AI inception: AI-generated blogs and posts will be read and commented on by AI (i.e. fake) profiles to boost their reach and (perceived) credibility. The marginal costs of this vicious cycle are close to zero, so it's very likely to happen." I think we will not even need the 'human' viewers; it will all be taken over by self-promoting AI bots and tools.

Bart Verheijen
+31 6 19 342 603
> Then since of course ChatGPT will be constantly ingesting and absorbing online content in order to stay up to date, will we reach a point where ChatGPT is just endlessly recycling its own content?
Sure, this is precisely the concern of a number of AI ethicists. We’re probably already past that point, and some are saying that we’re at the data equivalent of “low-background steel” (https://en.wikipedia.org/wiki/Low-background_steel), where all steel produced since WW2 is contaminated with radionuclides. In other words, we may only be able to trust that information was not produced by a computer if it can be reliably dated as having been produced prior to 2022.
So, no, not a hypothetical musing at all.
>--------------------< Gavin Chait is a data scientist and development economist at Whythawk. uk.linkedin.com/in/gavinchait | twitter.com/GavinChait | gavinchait.com
Ben Duffy
One challenge for ChatGPT relates to the knowledge segmentation that workforces often benefit from. Any answer engine must have sufficient understanding of the body of knowledge applicable to a questioner to provide relevant and accurate answers.
Ben Duffy
Tim Powell
Good points, Dennis – and not hypothetical at all. Chatbots are currently being used in large organizations to generate non-critical, boilerplate-heavy communications such as earnings reports and press releases, with humans taking the role of fact-checkers and editors. Press outlets are said to have used them on routine stories like sports scores.
ChatGPT is far from the first bot, though arguably the first to have broken the “wall of virality.” And this is only the beginning. Now that the generative gold rush is on, the tech biggies will all have market entries, the bigger-and-better GPT-4 will be here soon, and serious venture money is flowing into the space. Game on!
But there are many dangers – and some of the best cautionary papers are (ironically) on OpenAI’s site itself – for example, this one: https://openai.com/research/forecasting-misuse. Bots are industrial-strength enablers of rumors, propaganda, misinformation, disinformation, and “malinformation” (information intended to harm).
And technology is (in general) moving so fast that public policy is left in the dust. So, as it has been with cybersecurity, organizations are mostly left to fend for themselves. As I see it, at the enterprise level, knowledge leaders can (and should) play major roles in how these tools are managed – the governance, if you will.
I also see a more tactical need for high-level quality assurance for the information that an organization consumes. We (here in the US) have a Food and Drug Administration to opine on whether things we take into our bodies are safe – but, for things we take into our minds, both individual and organizational, we’re on our own. Organizations in all industries may soon need – as publishers and broadcasters have long had – sophisticated fact-checking operations. In my decades of conducting strategic intelligence studies for organizations large and small, the provenance of information was of paramount importance – what’s the truth versus recycled noise?
My own main beef with the gen-bots is that they cleverly separate information from its source – the people who work to create it in the first place. So there’s no attribution, no provenance, no accountability – and (naturally) no payments made to the creators/owners of the content – which as I understand it is scraped from the internet with neither the knowledge nor permission of its producers/owners. There are lawsuits underway about this, and I would think that in time this “oversight” will be corrected. But until that happens, the ethics of this strike me as sketchy, at best.
Sorry for the long ramble. We live in interesting times…
tp
TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge | New York City, USA | TEL +1.212.243.1200 | SITE KnowledgeAgency.com | BLOG TimWoodPowell.com
An analogy I like is something I heard recently, but I unfortunately forgot from whom, so I don't know who to credit. The analogy is to infectious disease. We've had viruses and bacteria killing us for thousands if not millions of years, but it wasn't really a problem for the species until we developed cities and travel, where close contact and intricate networks made it easy for disease to spread and do much more damage than in prehistoric times.

It took a long time, but eventually we learned how disease spreads and developed ways to combat it. It takes a combination of societal action (sanitation, vaccines, government regulation, etc.) and individual action (hand washing, mask wearing, quarantining, etc.).

Similarly, there has always been mis- and disinformation, but it wasn't really a big deal to the species at large as long as communication was rudimentary and limited. The creation of the internet was like the creation of cities, in that it suddenly allowed large numbers of people to interact with each other in ways that were previously impossible. Unfortunately, the internet is still too new for us to have figured out the best ways to prevent the spread of misinformation, but when we eventually do, it will probably also require some combination of societal and individual action.
Tim Powell
I’m sure you’re right, Dennis.
The analogy may have come from this book: https://www.amazon.com/gp/product/1541674316/, which discusses “virality” as a general process governing both microbes and information.
tp
TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge | New York City, USA | TEL +1.212.243.1200 | SITE KnowledgeAgency.com | BLOG TimWoodPowell.com
Thanks! I remember I heard it (maybe on a podcast?) but it could have originated here. This looks like a great book -- I've added it to my list to buy.
Bart Verheijen
Dennis, the quote could have come from Harold Jarche; it sure sounds a lot like his thinking. Interesting developments going on in the world, and an interesting discussion in this group.

Bart Verheijen
+31 6 19 342 603
Patrick Lambe
I’m reading a book at the moment about the potato famine in Ireland in the 1840s, and the fiasco of the famine response (I kept seeing shades of the Covid response). You can see exactly the same patterns of fake news, misinformation, and political chicanery, expressed through pamphlets and newspapers, but expressed nevertheless. It’s a chronic problem; it will use whatever media are to hand, and it thrives where there is a lot of uncertainty, competing economic stresses, and political disunity.
I’m not sure there is a cure absent addressing root causes - certainly the traditional media have not turned out to be as subject to good governance and regulation as we might sometimes like to think, as the recent revelations from Rupert Murdoch suggest. P
Patrick Lambe
Partner Straits Knowledge phone: +65 98528511 web: www.straitsknowledge.com resources: www.greenchameleon.com knowledge mapping: www.aithinsoftware.com