KM and AI: Use cases now and in the future? #methods #AI #tools
This is a great article that makes an important observation about where we are at this moment in AI's evolution. It describes 'capability overhang': the very human tendency to apply old mental models to new capabilities before we learn to exploit those capabilities more fully.
https://www.theverge.com/2022/12/8/23499728/ai-capability-accessibility-chatgpt-stable-diffusion-commercialization
So. How do we think AI could be used right now to help out with doing KM work?
AND - can we even begin to imagine how it might be used in the (near) future??
--
-Tom
--
Tom Short Consulting
TSC
+1 415 300 7457
All of my previous SIKM Posts
> How do we think AI could be used right now to help out with doing KM work?
I think it is excellent for spelling and grammar correction. This is important in OCR scanning of printed work.
The challenge, however, is in disposing of any notion that Large Language Models (LLMs) are in any way, shape, or form comparing their results to objective facts (e.g. even cross-referencing with Wikipedia). They're regression models, meaning they predict the next word or phrase based on what their training text indicates is most likely. This leads to very reasonable-sounding, but made-up, work.
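As a purely illustrative toy (invented probabilities, not how any real LLM is implemented), that next-word behaviour can be sketched in a few lines of Python: at each step the "model" just emits whatever its statistics say is most likely, with no notion of whether the result is true.

# Toy next-word prediction, for illustration only. A real LLM learns such
# probabilities over a huge vocabulary from its training text; nothing here
# checks the output against facts, which is why fluent nonsense is possible.
next_word_probs = {
    "knowledge": {"management": 0.6, "base": 0.3, "graph": 0.1},
    "management": {"is": 0.5, "tools": 0.3, "practices": 0.2},
    "is": {"important": 0.7, "hard": 0.3},
}

def generate(seed_word, max_words=4):
    """Greedily append the most probable next word at each step."""
    words = [seed_word]
    for _ in range(max_words):
        dist = next_word_probs.get(words[-1])
        if not dist:
            break
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate("knowledge"))  # -> "knowledge management is important"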
I would love to use them to do literature reviews, summarising a large body of digital texts into a single, cross-referenced (and referenced) report, but they’re as likely to make up things, including spurious references, as they are to include real reference text. Eventually, some future version will also validate the accuracy of the work, and then it will be helpful for these sorts of analyses but – until then – stick to using it for OCR correction.
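For that OCR-correction use, a rough sketch of how one might wire it up follows; the openai Python client (as it looked in early 2023), the model name, and the prompt are all assumptions rather than a recommendation of any particular product.

# Sketch of using a chat model to clean up OCR output. Assumes the openai
# Python package (circa early 2023) and an API key in OPENAI_API_KEY; the
# model name and client interface are assumptions and may differ.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

ocr_text = "Tlie principal ain1 of knowledgc rnanagement is to connect people to people."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # correction, not creativity
    messages=[
        {"role": "system", "content": "Fix spelling and OCR errors only. Do not add, remove, or reword content."},
        {"role": "user", "content": ocr_text},
    ],
)

print(response["choices"][0]["message"]["content"])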
>--------------------<
Gavin Chait is a data scientist and development economist at Whythawk.
uk.linkedin.com/in/gavinchait | twitter.com/GavinChait | gavinchait.com
Sent: 27 December 2022 16:44
To: main@SIKM.groups.io
Subject: Re: [SIKM] The AI view of KM #art-of-KM
On Fri, Dec 9, 2022 at 01:02 PM, Tom Short wrote:
How do we think AI could be used right now to help out with doing KM work?
AND - can we even begin to imagine how it might be used in the (near) future??
Tom's query might have slipped through the cracks. What are your thoughts?
Splitting this question off from the other thread, to explore how we might use AI right now in doing KM work; and also start imagining future use cases for AI that are currently not possible, or no one has yet considered.
In terms of right now, one of the easiest use cases I can think of is simply providing best-practices information and "how to" guidance about various KM tools and techniques. Whether it's social network analysis, CoPs, or narrative, practitioners and end users alike continue to ask the same questions again and again about what these things are and how to go about doing them.
ChatGPT is pretty good at collating and summarizing everything that has been written about topics like these, so it provides a solid starting point for answering these simple questions.
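A minimal sketch of that FAQ-style use, under the same assumptions as the OCR example earlier in the thread (the openai client, model name, and prompt are illustrative, not a specific recommendation):

# Minimal sketch: asking a chat model one of the recurring KM "how to" questions.
# Same assumed openai client, API key, and model name as the earlier OCR example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.2,  # keep the answer close to the consensus of its training text
    messages=[
        {"role": "system", "content": "You are a knowledge management coach. Answer practically and concisely."},
        {"role": "user", "content": "What is social network analysis, and how would I run one in a mid-sized firm?"},
    ],
)

print(response["choices"][0]["message"]["content"])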
These types of questions are examples of what could be considered deductive problem solving, which follows the pattern 'if A, then B.'
Where AI is not going to be as effective is in knowing which tool to use for a given circumstance. This requires inductive problem solving, something human brains are still far better at than computers. In order to recommend a KM solution, one has to understand the current situation and identify either shortcomings or opportunities for improvement, and then, within that, identify which KM tools might be brought to bear.
As far as I know, this sort of multivariate problem space is not well suited to current AIs. The big question is: will it be one day?
In the meantime, it seems to me that practitioners could embrace AI tools like ChatGPT to reduce the amount of time and energy needed to get up to speed on various KM tools and techniques. Under the guidance of a mentor, this could be a way for newer KM practitioners to shorten their learning curve and expand their knowledge of available KM methodologies.
--
-Tom
--
Tom Short Consulting
TSC
+1 415 300 7457
All of my previous SIKM Posts
https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP.pdf
> It seems like this could result in amplification of what would normally be slight biases in content to push them to more extremes, similar to news networks that toss out an idea to get their viewers talking about it, then report that it must be news because everyone is talking about it, perpetuating the cycle.
The marginal costs of this vicious cycle are close to zero, so it's very likely to happen.
+31 6 19 342 603
This is just a hypothetical musing, but suppose ChatGPT (or at least the underlying code behind it) becomes widespread. And suppose that part of that widespread use is the generation of online content -- news articles, blog posts, essays, real estate listings, biographies, you name it. Then since of course ChatGPT will be constantly ingesting and absorbing online content in order to stay up to date, will we reach a point where ChatGPT is just endlessly recycling its own content?
It seems like this could result in amplification of what would normally be slight biases in content to push them to more extremes, similar to news networks that toss out an idea to get their viewers talking about it, then report that it must be news because everyone is talking about it, perpetuating the cycle.
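A deliberately simplified simulation of that feedback loop (the numbers and the "model" are invented purely for illustration) shows how a small initial skew can drift toward an extreme when each generation of content is trained on the previous generation's output:

# Toy feedback-loop simulation, invented for illustration only: a "model" that
# slightly overweights whichever viewpoint is already in the majority, then
# retrains on its own output. A small initial skew drifts toward an extreme.
def next_generation(share_of_view_a, amplification=1.1):
    """Return the next generation's share of viewpoint A, capped at 1.0."""
    boosted = share_of_view_a * amplification
    return min(boosted / (boosted + (1 - share_of_view_a)), 1.0)

share = 0.55  # start with a slight 55/45 skew toward viewpoint A
for generation in range(1, 21):
    share = next_generation(share)
    print(f"generation {generation:2d}: viewpoint A = {share:.2%}")
# After ~20 cycles the slight skew has grown into a heavy majority (~89%).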
> Then since of course ChatGPT will be constantly ingesting and absorbing online content in order to stay up to date, will we reach a point where ChatGPT is just endlessly recycling its own content?
Sure, this is precisely the concern of a number of AI ethicists. We’re probably already past that point, and some say we’re at the data equivalent of “low-background steel” (https://en.wikipedia.org/wiki/Low-background_steel), where all steel produced since WW2 is contaminated with radionuclides. In other words, we may only be able to trust that information was not produced by a computer if it can be absolutely dated as having been produced prior to 2022.
So, no, not a hypothetical musing at all.
>--------------------<
Gavin Chait is a data scientist and development economist at Whythawk.
uk.linkedin.com/in/gavinchait | twitter.com/GavinChait | gavinchait.com
Sent: Tuesday, March 7, 2023 4:55 PM
To: main@SIKM.groups.io
Subject: Re: [SIKM] KM and AI: Use cases now and in the future? #methods #tools
This is just a hypothetical musing, but suppose ChatGPT (or at least the underlying code behind it) becomes widespread. And suppose that part of that widespread use is the generation of online content -- news articles, blog posts, essays, real estate listings, biographies, you name it. Then since of course ChatGPT will be constantly ingesting and absorbing online content in order to stay up to date, will we reach a point where ChatGPT is just endlessly recycling its own content?
It seems like this could result in amplification of what would normally be slight biases in content to push them to more extremes, similar to news networks that toss out an idea to get their viewers talking about it, then report that it must be news because everyone is talking about it, perpetuating the cycle.
One challenge for ChatGPT relates to the knowledge segmentation that workforces often benefit from. Any answer engine must have sufficient understanding of the body of knowledge applicable to a questioner to provide relevant and accurate answers.
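One way to picture that requirement is permission-scoped retrieval, where the answer engine only ever sees the slice of the knowledge base the questioner is entitled to; the sketch below uses hypothetical data structures and is not a reference to any particular product:

# Sketch of permission-scoped retrieval: the answer engine only sees documents
# the questioner is entitled to. Data structures and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    allowed_groups: set

KNOWLEDGE_BASE = [
    Document("Pricing playbook", "…", {"sales", "leadership"}),
    Document("Incident runbook", "…", {"engineering"}),
    Document("Onboarding guide", "…", {"sales", "engineering", "hr"}),
]

def retrieve_for(user_groups, query):
    """Return only documents the user may see and that mention the query term."""
    visible = [d for d in KNOWLEDGE_BASE if d.allowed_groups & set(user_groups)]
    return [d for d in visible if query.lower() in d.title.lower()]

# An engineer asking about onboarding never sees the sales pricing playbook.
for doc in retrieve_for(["engineering"], "onboarding"):
    print(doc.title)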
Ben Duffy
Good points, Dennis – and not hypothetical at all. Chatbots are already being used in large organizations to generate non-critical, boilerplate-heavy communications such as earnings reports and press releases, with humans taking the role of fact-checkers and editors. Press outlets are said to have used them on routine stories like sports scores.
ChatGPT is far from the first bot, though arguably the first to have broken the “wall of virality.” And this is only the beginning: now that the generative gold rush is on, the tech biggies will all have market entries, the bigger-and-better GPT-4 will be here soon – and serious venture money is flowing into the space. Game on!
But there are many dangers – and some of the best cautionary papers are (ironically) on OpenAI’s site itself, for example this one: https://openai.com/research/forecasting-misuse. Bots are industrial-strength enablers of rumors, propaganda, misinformation, disinformation, and “malinformation” (information intended to harm).
And technology is (in general) moving so fast that public policy is left in the dust. So, as it has been with cybersecurity, organizations are mostly left to fend for themselves. As I see it, at the enterprise level, knowledge leaders can (and should) play major roles in how these tools are managed – the governance, if you will.
I also see a more tactical need for high-level quality assurance for the information that an organization consumes. We (here in the US) have a Food and Drug Administration to opine on whether things we take into our bodies are safe – but, for things we take into our minds, both individual and organizational, we’re on our own. Organizations in all industries may soon need – as publishers and broadcasters have long had – sophisticated fact-checking operations. In my decades of conducting strategic intelligence studies for organizations large and small, the provenance of information was of paramount importance – what’s the truth versus recycled noise?
My own main beef with the gen-bots is that they cleverly separate information from its source – the people who work to create it in the first place. So there’s no attribution, no provenance, no accountability – and (naturally) no payments made to the creators/owners of the content – which as I understand it is scraped from the internet with neither the knowledge nor permission of its producers/owners. There are lawsuits underway about this, and I would think that in time this “oversight” will be corrected. But until that happens, the ethics of this strike me as sketchy, at best.
Sorry for the long ramble. We live in interesting times…
tp
TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge |
New York City, USA | TEL +1.212.243.1200 |
SITE KnowledgeAgency.com | BLOG TimWoodPowell.com |
An analogy I like is something I heard recently, but I unfortunately forget from whom, so I don't know whom to credit. The analogy is to infectious disease. We've had viruses and bacteria killing us for thousands if not millions of years, but it wasn't really a problem for the species until we developed cities and travel, where close contact and intricate networks made it easy for disease to spread and do much more damage than in prehistoric times.
It took a long time, but eventually we learned how disease spreads and developed ways to combat it. It takes a combination of societal action (sanitation, vaccines, government regulation, etc.) and individual action (hand washing, mask wearing, quarantining, etc.).
Similarly, there has always been mis-/disinformation, but it wasn't really a big deal to the species at large as long as communication was rudimentary and limited. The creation of the internet was like the creation of cities, in that it suddenly allowed large numbers of people to interact with each other in ways that were previously impossible. Unfortunately, the internet is still too new for us to have figured out the best ways to prevent the spread of misinformation, but when we eventually do, it will probably also require some combination of societal and individual action.
I’m sure you’re right, Dennis.
The analogy may have come from this book: https://www.amazon.com/gp/product/1541674316/ , which discusses “virality” as a general process governing both microbes and information.
tp
TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge |
New York City, USA | TEL +1.212.243.1200 |
SITE KnowledgeAgency.com | BLOG TimWoodPowell.com |
Microsoft's Copilot is basically ChatGPT in the MS suite. It's here (well, you never know with MS when things actually appear in your estate, but you get what I mean!)...
Does anyone know how it will be deployed and what sort of training there will be on it?
On Mar 21, 2023, at 10:53 AM, Stan Garfield <stangarfield@...> wrote:
We start in 8 minutes.
On Tue, Mar 21, 2023 at 7:36 AM Bart Verheijen <bart.verheijen@...> wrote:
Stan,
Just to confirm our starting time; do we start in 25 minutes or in 1 hour and 25 minutes?
Best regards,
Bart
Bart Verheijen
+31 6 19 342 603
Op ma 20 mrt 2023 om 17:56 schreef Stan Garfield <stangarfield@...>:
This is a reminder of tomorrow's monthly call from 11 am to 12 noon EST.
- March 21, 2023 SIKM Call: Bart Verheijen - International Expert Mapping Using GuruScan
- Slides
- For online chat, use the group chat in FreeConferenceCall.com.
SIKM Leaders Community Monthly Call
- Where: (607) 374-1189 (US and Canada) Passcode 178302
- International Dial-in Numbers
- You can join online using your computer’s speakers and microphone at http://join.freeconferencecall.com/stangarfield
- Online Meeting ID: stangarfield
- If you join online, be sure to click on the phone icon and then choose your audio preference.
- Please don't turn on video - this increases the size of the recording ten times.
- If you have problems connecting, call customer service at 844-844-1322.
- Occurs the third Tuesday of every month from 11:00 AM to 12:00 PM Eastern Time (USA)
- Community Site
- Slides (OneDrive, SlideShare) - There is no live screen sharing - you follow along by advancing the slides yourself.
- Previous Calls
- Future Calls
- Calendar
I’ve decided to let the three main platforms entertain each other. 😃
I’ve got the Microsoft Edge plugin, Google Bard, and ChatGPT cued up and am giving each of them something to do throughout the day. I’m finding the ability to draft and light-edit things by feeding them into one of the three to be easy and productive. So far, the AI chat tools are great collaborators for getting things done quicker. Someone has to feed them – so deciding what work they can do, feeding it in, and then taking the output and polishing it is the extent of what we have so far. It’s like having a really speedy intern (without the nose piercings).
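For anyone who wants to script that feed-and-polish loop, a rough sketch follows; it reuses the same assumed openai client and model name as the earlier examples in this thread, and a human still reviews whatever comes back:

# Rough sketch of the "feed it a draft, take the output, polish it" loop.
# Same assumed openai client and model name as the earlier examples; the
# returned text is a starting point, not a finished communication.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def light_edit(draft: str) -> str:
    """Ask the model for a light copy-edit without adding new facts."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Lightly edit the user's draft for clarity, grammar, and tone. Do not add facts."},
            {"role": "user", "content": draft},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(light_edit("Quick note too the team about the new intranet lanch next tuesday."))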
Tom Olney, PMP, CKM, CUDE
Pronouns: He/Him
VP Organizational Development
Activator | Ideation | Communication | Connectedness | Learner
PSCU | Learning & Organizational Development
tolney@... | office: 727 566-4088 | mobile: 727 742-5229
580 Carillon Parkway | St. Petersburg, FL 33716
From: main@SIKM.groups.io <main@SIKM.groups.io> on behalf of Ryan Fitzgerald via groups.io <ryafitzgerald@...>
Date: Friday, March 24, 2023 at 6:30 PM
To: main@SIKM.groups.io <main@SIKM.groups.io>
Subject: [EXTERNAL] Re: [SIKM] March 2023 SIKM Call: Bart Verheijen - International Expert Mapping Using GuruScan #expertise #monthly-call
What do you think about a conversation on ChatGPT? Is it the end of the Knowledge Management industry as we know it? It very well could be the single biggest disruptor to KM I have ever seen.
Dear Fellow Members:
In my humble opinion, AI in general (and ChatGPT in particular) is a significant disruptor, but I don’t think AI implies the end of KM. AI is not human and may never understand us as individual, complex minds forever seeking wisdom. My goal with each engagement (KM or otherwise) is to maximize participation of the people in the process and minimize resistance to change (minimize fear). Will AI be able to minimize resistance to change, or will it only heighten it? I wonder, and time will tell.
In the words of Roger von Oech, “There are two basic rules in life: change is inevitable, and everybody resists (fears) change.” AI use in KM is inevitable, and there is certainly value, especially in the Data-Information-Knowledge part of the journey, but will it deliver the Big W (wisdom)? I for one doubt it. As humans, we have a reactive brain, which is 80% of who we are, and our brain always defaults to that reactive place. We want the people who create knowledge content to be in their active brain, performing executive functions, having fabulous HUMAN thoughts, and producing great outputs that truly improve the enterprise condition, drive positive change, and deliver ultimate wisdom.
I for one would like to see AI solve our biggest industry challenge: the failure problem. As you know, it is estimated that between 50% and 75% of KM efforts fail, often because people do not accept, and frequently actively resist, change. How might AI be utilized to improve these terrible metrics?
So far from my limited perspective, it feels like AI is more a blunt tool rather than a precision instrument for good.
My 2 cents…
Marc
--
Marc Belsher - CEO
My-E-Health USA | 345 The Greens Avenue, Newberg, Oregon USA | T: +1 503 487 0036 | C: +1 503 330 8545
My-E-Health UK Limited | Lowin House, Tregolls Road, Truro, Cornwall TR1 2NA | T: +1 503 487 0036 | Company #: 14217826
My-E-Health AB | Båtsmansvägen 71, S-239 31 SKANÖR, Sweden | T: +46 7 04065696 | Organization #: 556914-1269
> What do you think about a conversation on ChatGPT? Is it the end of the Knowledge Management industry as we know it? It very well could be the single biggest disruptor to KM I have ever seen.
I participated in such a conversation. The details are in a thread about the KM view of AI. Also see the earlier posts in this thread about KM and AI.