The AI view of KM #AI #art-of-KM


Dennis Pearce
 

Interesting outcome: https://www.npr.org/2023/02/09/1155650909/google-chatbot--error-bard-shares

Just goes to show that while both bots make mistakes, open source mistakes are usually a lot cheaper to fix than proprietary ones.



Dennis Pearce
 

Here is the CEO of The Atlantic magazine providing a brief explanation of the "transformer framework" that is at the heart of both ChatGPT and Google's soon-to-be-released Bard:

https://www.linkedin.com/feed/update/urn:li:activity:7028507279729512448/


Stan Garfield
 

On Thu, Feb 2, 2023 at 10:53 AM, Stan Garfield wrote:
Dennis Pearce and I will join my friend Arnold Kling in a conversation next Monday, February 6, 2023 at 8 pm EST. It will be a discussion of ChatGPT, including its impact on the field of knowledge management.
Thanks to Arnold Kling for hosting the discussion, to Dennis Pearce for his insights, and to the SIKM Leaders Community members who joined.  It was fun.

You can subscribe to Arnold's daily Substack newsletter In My Tribe and read previous issues mentioning ChatGPT.

Here is the recording of the discussion.  Feel free to share it.

Here is a comment that I received after the call from Ben Duffy:
  • I’ve been very intrigued by the capabilities of generative chat and have played around with ChatGPT at home with my two boys.
  • At work I manage a team of writers that author business and system process documentation along with managing the digital infrastructure for it. I was approached by an innovation team that has a GPT-3 license and a two week challenge to prove the worth of generative chat.
  • I provided them about 25K documents from an old archive and we’ll see how well the chat engine can answer questions.
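As a hedged illustration of the experiment Ben describes above (the folder name, helper functions, and prompt wording here are made up, and his team's actual GPT-3 setup is not described), a minimal "retrieve, then ask" pattern in Python might look like the sketch below: rank the archived documents against the question with TF-IDF, then hand only the best excerpts plus the question to whatever completion endpoint the GPT-3 license provides.

from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ARCHIVE_DIR = Path("archive")  # hypothetical folder of exported .txt documents

def load_documents(folder: Path) -> dict[str, str]:
    # Read every text file in the archive into memory.
    return {p.name: p.read_text(errors="ignore") for p in folder.glob("*.txt")}

def top_passages(question: str, docs: dict[str, str], k: int = 3) -> list[str]:
    # Score each document against the question with TF-IDF and return the k best.
    names = list(docs)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(docs[n] for n in names)
    scores = cosine_similarity(vectorizer.transform([question]), matrix).ravel()
    best = scores.argsort()[::-1][:k]
    return [docs[names[i]] for i in best]

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever GPT-3 completion API the license provides.
    raise NotImplementedError

def answer(question: str) -> str:
    docs = load_documents(ARCHIVE_DIR)
    context = "\n\n---\n\n".join(top_passages(question, docs))
    prompt = ("Answer the question using only the excerpts below. "
              "Say 'not found' if they do not contain the answer.\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    return ask_llm(prompt)

The retrieval step matters because 25K documents will not fit in a single prompt; only the few most relevant excerpts are sent with each question.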
Following are the group chat transcript and the recording transcription made by ChatGPT.

Group Chat

20:05:06 From  Chitwood, Ami : DEC
20:05:47 From  Dennis P Waters : PDP-11!
20:05:52 From  Chitwood, Ami : 😀 Stan's forgotten more about KM than the rest of us!
20:06:06 From  Jennilyn Wiley (Auburn University) : Reacted to "😀 Stan's forgotten ..." with ❤️
20:07:20 From  Stan Garfield : My KM site
20:07:53 From  Tom Grey : Reacted to "My KM site" with 👍
20:14:30 From  Stan Garfield : I just asked: What should I get my wife for Valentine's Day?
The ideal Valentine's Day gift for your wife depends on her personal preferences, but some popular options include:
Jewelry, such as a necklace, bracelet, or earrings
A bouquet of her favorite flowers
A romantic getaway or a staycation
A spa day or massage
A cooking or baking class
A sentimental item, such as a photo album or custom-made keepsake
A book or movie she has been wanting to read/watch
A thoughtful piece of home decor
Remember, the most important part is showing your love and appreciation for her, so choose a gift that fits her unique personality and interests.
20:15:46 From  Bart Verheijen : Reacted to "I just asked: what s..." with 👍
20:19:53 From  Barbara : I have found Viva Topics (part of the Microsoft Viva suite) to have a similar approach.  The AI "suggests" a lot and users are expected to answer questions like "Is this correct?"  Then the AI learns, I guess.
20:20:40 From  Chitwood, Ami : It does not perform reference interviews.
20:25:15 From  Chitwood, Ami : Solving search...
20:27:27 From  Jennilyn Wiley (Auburn University) : it would be great if it cited sources
20:28:27 From  Bart Verheijen : You can ask it to give (links to) sources
20:28:44 From  Bart Verheijen : but they do tend to be wrong or sometimes non-existent
20:28:50 From  Tom Grey : Replying to "it would be great if..."
I think its statistical token analysis is not like usual human sources, like a wiki.
20:29:07 From  Stan Garfield : Wharton - ChatGPT Passed an MBA Exam. What’s Next?
20:29:07 From  Jennilyn Wiley (Auburn University) : Reacted to "but they do tend to ..." with 👍
20:34:07 From  Stan Garfield : My document on the 32 KM components and ChatGPT
20:34:23 From  Jennilyn Wiley (Auburn University) : just like librarians have helped people figure out search strings to be efficient and effective, that will transition to how to best write and refine prompts. A good prompt for ChatGPT makes all the difference
20:35:43 From  Chitwood, Ami : ^Jennilyn - 100% and the ability to evaluate results
20:36:00 From  Jennilyn Wiley (Auburn University) : yes, ChatGPT told me it is not able to evaluate results
20:36:55 From  Jennilyn Wiley (Auburn University) : SEO for AI
20:37:59 From  Chitwood, Ami : Just like you can manipulate social media (e.g., having an error) to increase chances for "viral," I predict that there will be ChatGPT stuffing/manipulations....
20:40:12 From  Bill Kaplan : Always trust but verify...always
20:40:28 From  Jennilyn Wiley (Auburn University) : Reacted to "Always trust but ver..." with 👍
20:40:33 From  Chitwood, Ami : Reacted to "Always trust but ver..." with 👍
20:40:35 From  Tom Grey : Reacted to "Always trust but ver..." with 👍
20:42:24 From  Bart Verheijen : Reacted to "Always trust but ver..." with 👍
20:42:32 From  Bill Kaplan : I have a 1908 Keuffel and Esser slide rule
20:45:14 From  JC Monney : Correct it does not have context
20:45:53 From  JC Monney : you need to provide GPT with your context. It is not good (yet) at asking for context
20:46:13 From  Bill Kaplan : Reacted to "you need to provide G..." with 👍🏻
20:47:08 From  Barbara : In addition to providing context, it helps to ask good questions or at least try to ask the question in different ways to get slightly different answers.
20:49:10 From  JC Monney : Viva Topics works best with 100,000 documents
20:49:19 From  Barbara : Reacted to "Viva Topics works be..." with 👍
20:49:21 From  Chitwood, Ami : Imagine...looking in all of our personal KM systems? (OneDrive, Teams, Email...)
20:49:43 From  Tom Grey : Reacted to "Viva Topics works be..." with 👍
20:49:51 From  Jennilyn Wiley (Auburn University) : I'd love to know in the black box how it determines expertise. since it's probabilistic, is it just looking at volume? (e.g., the more prolific a writer, the more weight it gives their source info)
20:49:56 From  Barbara : Viva Topics and intranets also need to be relatively "clean" for this to work.
20:50:12 From  JC Monney : they will provide an API to your enterprise content
20:50:18 From  Bill Kaplan : A key for GPT is to provide the right context in the query and as you engage with ChatGPT. I have found that the better the context for the Q and the A, the better the outcome for your query.
20:50:43 From  JC Monney : deep mind
20:51:09 From  Bart Verheijen : Replying to "Viva Topics and intr..."
clean meaning no duplicates or the content within the topics being true or verified?
20:51:43 From  Stan Garfield : Constitutional AI refers to the use of artificial intelligence (AI) systems and technologies in accordance with the principles, values, and laws that are enshrined in a nation's constitution. This approach involves the development, deployment, and use of AI in a manner that aligns with the fundamental rights and freedoms guaranteed by the constitution. It aims to ensure that AI is used ethically, responsibly, and transparently, while also protecting the privacy and human rights of citizens.
20:52:06 From  Bill Kaplan : I believe Microsoft is going to announce tomorrow or next day how ChatGPT has been integrated into Edge.
20:54:49 From  Bart Verheijen : Google news today: We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard.
20:54:50 From  Barbara : Bing already separates the external search from internal system search, so I look forward to seeing how ChatGPT will be integrated in Bing.
20:55:04 From  Bart Verheijen : Google - An important next step on our AI journey
20:55:36 From  Tom Grey : Reacted to "Google..." with 👍
20:57:24 From  Bill Kaplan : The nondisclosure issue reminds me about the original discussions and concerns with CoPs about what can be discussed online within an organization.
20:57:35 From  Barbara : Reacted to "The non disclosure ..." with 👍
20:57:44 From  Jennilyn Wiley (Auburn University) : These artists found out their work was used to train AI. Now they’re furious
20:58:18 From  Jennilyn Wiley (Auburn University) : Artists not happy about their art being used to train Open AI. I believe Getty sued.
20:58:30 From  Bart Verheijen : Reacted to "Artists not happy ab..." with 👍
20:59:27 From  Bart Verheijen : Replying to "Artists not happy ab..."
Thanks! That's basically artists requesting their (creative) work be removed or excluded from the training set; and thereby from the responses
20:59:53 From  Tim Wood Powell : All of this is a huge ripoff of the content creators!  Information is created by people who get paid to do it — this is just a way to “reuse” this on a massive scale — without attribution or payment!
20:59:54 From  Barbara : Knowledge graph+ChatGPT for internal use?
20:59:56 From  Bill Kaplan : Thank you for this time
21:00:04 From  Jennilyn Wiley (Auburn University) : Reacted to "All of this is a huge..." with 👍
21:00:30 From  Dennis P Waters : Lord help us if financial institutions decide to use this for enterprise risk management
21:00:31 From  JC Monney : thank you for the invite and great sharing
21:00:34 From  Dennis Pearce : The Mind at AI: Horseless Carriage to Clock
21:00:57 From  Chitwood, Ami : Right on - I remember when Excel came to the workplace...people are still here 🤓
21:01:06 From  Jennilyn Wiley (Auburn University) : Reacted to "Right on - I remember..." with 😂
21:01:09 From  Tom Grey : Reacted to "Knowledge graph+ChatGPT..." with 👍
21:01:53 From  Jennilyn Wiley (Auburn University) : +1 Ami - disruptive and changing but not necessarily career ending

Recording Transcription

Here is the transcript using the ChatGPT chrome extension:
 
Arnold:  I got the idea for this discussion when I came across an article in which the chief technology officer of a top consulting firm was quoted as saying that in the near future these these new technologies were going to empower him to demand more Prestige at his company a fancier office and a bigger budget um no like what he said out loud was something else what he said out loud was that this new technology was going to uh really have a big effect on this on the Knowledge Management function of this
company and uh I saw the phrase Knowledge Management and I immediately realized that I have a long time friend uh Stan Garfield who is recognized as a leader in the field of Knowledge Management so I asked him to join the discussion and then uh he said well this guy Dennis Pearce is an expert in AI and computers and so you should have him on and so that's that's what we we've got here so I'd like you guys to give us uh two minutes each of a little bit more of an introduction of yourself you know so
Stan in two minutes or less not only uh introduce yourself but maybe explain to my audience what Knowledge Management is again for less than two minutes and then what you know sort of where would they look on the internet to uh see the justification from calling you a leader in the field all right thanks Arnold and thanks for inviting Dennis and me to join you today um so Knowledge Management to start with that is essentially the ability of an organization to reuse what one part of the organization knows in another part
so it's an attempt to take full advantage of the knowledge skills and expertise of an entire organization so that you don't reinvent the wheel make mistakes over and over again and you take full advantage of everyone that's there it becomes more and more important as organizations grow large so if you have a small team it's usually easy for everyone to know what each one knows and does and does talk to each other to take advantage of their background but in larger organizations that's pretty hard so now it's
management attempts to deal with that through a variety of mechanisms so that people can get in touch with each other including people that they don't know but are able to help one another out at the time of need and involves both capturing knowledge and reusing knowledge for the benefit of the organization that's that's one way of summarizing it my own background in it is that I was doing it for many years before we had a name for it I worked in the computer industry and the Consulting business for
a long time but somewhere in the mid 90s we started to see the term Knowledge Management and in 1996 I was asked to start the first Knowledge Management program for Digital Equipment Corporation some of you may remember that company it was actually the number two computer company in the world at one time uh behind IBM before it eventually disappeared altogether but I started Knowledge Management as a full-time job in 1996 and essentially been in the field ever since and as far as where you can go to find out more about why Arnold invited me
if you Google my name Stan Garfield go to my site which is on Google sites you'll get a wealth of information about it more than you can possibly digest so you probably need to use ChatGPT to help simplify okay and so you've written more than just one or two tweets that had the word Knowledge Management that's right I've written four books now on Knowledge Management contributed chapters to other books and writing blogs for a long time since 2006 and have a regular series of blogs with a company called
Lucidea and um doing a series of webinars for them on something called the five C's of Knowledge Management so if you're interested in that information will be available on my website and you can attend those for free okay now Dennis you're I guess supposedly know something about computers did you like take a night course in COBOL at some point or What's your deal no actually it was Fortran not too well uh but uh uh yeah I'm not definitely not a an expert in computers in AI I wouldn't call it that I
um most of my career was spent at uh IBM and then Lexmark the printer company that was spun off from IBM's uh printer division in Lexington Kentucky um when I was at IBM this was in the late 80s I did some work uh creating some expert systems and that rolled over into Lexmark um also creating some case-based reasoning systems there sort of the the old school AI not the Deep learning kind of things that are going on now um and uh transition actually the way I got into Knowledge Management was actually through AI because I was doing
this AI work at Lexmark and I went to this conference in Boston in the late 90s that talked about Knowledge Management because it seemed to relate to some of the AI work I was doing and there I got a broader picture of what Knowledge Management was and got very interested in that ended up coming back at Lexmark I got a PhD at the University of Kentucky in decision sciences and information systems uh along the way and uh I have not been actively doing that I retired from Lexmark in 2017 and the last four years I've been working uh as
a collaboration strategist for a non-profit based out of Chicago called Start Early that does research and provides training to teachers for early childhood education and so I manage their collaboration tools and systems there so I've not been you know deep into developing AI for a while but I'm more of a strongly interested bystander these days great thanks okay well I'll get back to you with a question it'll probably take me about 10 minutes to get there the question that I have is just what is
this uh reinforcement learning human feedback that I hear about with chat GPT um you know it sounds to me like I mean you know some guy and greasy overalls with a tool belt crawls into the algorithm holds up a flashlight and says well we're going to have to put an if statement here and we're going to connect a subroutine there I mean anyway I have no idea what what what that is but a hold off uh just I'm gonna it'll take me a little while to get there just to let people know I wanna and by the
time we're finished here get into some really kind of big questions like a skeptic who's aware of the unreliability of chat GPT is bound to ask at some point how can something that unreliable be useful in Knowledge Management so that you know that's a question we'll get to later and then the uh the question that The Optimist uh the Evangelist for chat GPT would ask is uh is this technology and the AI technologies that are kind of following behind it right behind it uh how radically are they going to
change knowledge work itself let's say over the next seven years so that's that that's another question that we'll get to uh but first I just want to first I just want to start with a little rant uh which is that um chat the large language models and chat GPT in particular they don't know what they're reading and they don't know what they're writing I mean it really is true that the it has no idea what it's reading no idea what it's writing it's just parsing uh you know if if you're a human being
but probably the last time you did parsing was in Middle School when your teacher gave you a problem of order of operations and so you had to say all right well this is what I do with the parentheses you know this is uh by the way give me a thumbs up if you're hearing if you're not okay good all right so um so the the you know so what do you do with the parentheses what do you do with the exponents you know what do you do next what do you do next and what I the way I think of chat GPT composing answers is
playing a game of what comes next so if it started to write Little Red Riding the next thing it would write would be Hood because you know in all the billions and billions of sentences that it's been trained on every time it's seen Little Red Riding those three tokens I mean don't even call them words call them tokens Little Red Riding that what comes after it is hood and so that's what it's going to write and that's how how it proceeds to go now if it had only written little red then it
on that basis it wouldn't know whether to follow that up with writing or wagon or barn or whatever I mean it would have a very limited number of tokens that could follow little red with but it couldn't it couldn't just pick any token uh and so it would probably have to look at the context of other tokens so let's say I asked um you know but anyway so to me like parsing is like digestion it's not like thinking you know the people who are treating chat GPT as if it's thinking and they're
asking it questions as if it's thinking it's actually it's more like digestion you you put these token the sequence of tokens in there you feed it the sequence of tokens and sort of these enzymes go to work breaking them down and kind of reconstituting them and then finally it craps out an answer and that's um you know that that's kind of what what I think of it as doing is this kind of parsing so let's say I ask chat GPT what should I get my wife for Valentine's Day it's gonna give an answer of the form
for Valentine's Day you should get your wife because that's the natural sequence you know if someone asks a question of the form for blah blah blah day what should I get blah blah blah it's gonna know well to respond for that day you should get Bubba you know so it might say something like for Valentine's Day you should get your wife a box of chocolates because those are the tokens that it kind of sees and that's kind of the order that it sees them uh you know in its billions and billions of
of sentences that it that it's that it's seen and uh on the other hand I could say well I'm at the florist what should I get my wife for Valentine's Day and it'll say it'll say well it'll it'll notice the token florist and then it might come back with something for Valentine's Day you should get your wife a bouquet of red roses okay so anyway you know but it's um it anyway I don't think it's thinking it's just processing these things um and it can get very uh articulated that
I mean the sentences will be very articulate but the content is what's not reliable that you know people have who've read about it have read about the phenomenon of hallucinations and the very first time I put a question in and you can tell about my ego is I I said write an essay about Arnold Kling the economist and it starts out Arnold Kling is an economist he was he was born in New York City in 1961.
so structurally that's the right answer content wise is completely wrong I was born in St Louis in 1954 not in New York in 1961. you know it just you know the probably in its Corpus of billions and billions of sentences that it's read it included me next to another Economist who was born in New York City in 1961 and so came out with that answer um anyway so that's that's enough on that uh so excuse me so then uh there's this phenomenon of how this was corrected because obviously if this was all you had and and and this was
just the way it worked and there was nothing you could do about it it it would be a joke but there's this other element other element to creating it which is called I think reinforcement learning human feedback and now I'm going to get back to you Dennis and say can how does that actually you know what what are the steps in that what's the recipe for reinforcement learning human feedback um that's a good question and to be honest I don't know uh I don't know what's going on behind the scenes uh
with chat GPT uh I I do know that um they do have mechanisms both for um doing their own adjustments and for uh taking the responses in in the conversations that people have with it uh I I in fact I think the the people who are in the SIKM uh session might have seen some of the things I posted I can't remember I posted this but I had a lengthy argument with it um where it was convinced that birds were mammals and we seem to go round and round where part of the reason was because bats are mammals and bats fly
and birds flying and then the you know every time I tried to ask it another question it would skirt around and end up back around again but within a few days that seemed to have been corrected um and so now it's aware that birds are not mammals so uh I don't know you know with as many people as are you know testing it right now I can't imagine that there's some set of people back there taking all this in and manually you know making these changes because that would just be overwhelming um yeah but um so that's that's yeah
well that is it's interesting this mystery I mean I hear the word reinforcement and I I think of sort of you know training a dog and it does what you wanted to do and you give it a treat and it doesn't it does something wrong and you slap it on the nose and I think if you use chat GPT you can give it like a thumbs up or thumbs down am I correct on these things and so that's like giving it giving it a treat or uh slapping it on the nose but I guess what I was wondering is does it respond automatically or is it like
you say that that the humans have to kind of you know look through these you know uh these transcripts and then like you know like my guy with the overalls walk in there and you know make an if statement and a subroutine call and it just so so if anyone out there is more familiar with it and wants to uh give me give me a better sense of what that uh what that looks like that would be that would be uh you're welcome to kind of raise your hand or whatever one thing I have found interesting about it is that um
you can and maybe that's a question you ask it and see what it says I I found when I'm not sure how it works I ask it questions about itself and uh it's able to respond um in fact this morning I got curious because I've never seen it respond in in the dialogues I've had with it I've never seen it respond with a question back to me you know so I was wondering can it even do that so this morning I asked it can do you just issue statements based on you know the questions people ask you or
do you ever ask questions back and it says I ask questions back you know clarifying questions it's obviously and in fact it even said that it's not going to ask original questions um but it can ask clarifying questions and so I said well can you give me an example of one and it said sure in fact they even said sure it said uh for instance if somebody says I need help with something I'll ask what do you need help with and I thought well that's about the most vague you know yeah so I don't I don't think it does a
very good job of uh if at all of clarifying asking clarifying questions like your example of the um the Valentine's Day gift a human might say well does your wife like chocolate before they recommend chocolate you know just to make sure that they're not recommending something that could be ruled out instantly it doesn't seem to have that that capability the um you know the most interesting story I've heard and it's a story that's too good to check but the guy who who alluded to it uh I think
is reliable and the story is somebody put a prompt in and I forget what the content of the prompt was they got that answer back and it was a a kind of answer and they said well what would a really smart A.I how would a really smart AI answer this question and they got a much smarter more sophisticated answer and what that brings up is the element that I call simulation you know in some sense both of those answers were simulations the first answer was simulating like a you know a a nondescript Ai and the second answer was
it was simulating a sophisticated AI um and it just strikes me that some of the most interesting use cases and abuse cases come from this ability to simulate you know so people have talked about you know taking some modern pop song and saying write it like um you know like Shakespeare and you know that that kind of simulation and I was thinking you know about 10 years ago maybe more maybe 15 years ago now uh this guy John Papola who is a film director and Russ Roberts who's an economist collaborated on a rap video
between two early 20th century famous economists Friedrich Hayek and John Maynard Keynes and it was quite a hit so but in effect they were creating a simulation of in the you know 10 years ago of these guys who lived a hundred years ago arguing about you know the fundamentals of Economics it was very entertaining very educational but the amount of work they had to go through to create that was just you know intense it probably took more than a year of writing a script to get you know like a five to ten minute video and then
they've had to find actors and all that stuff now would be available uh through these new AIS uh so that you could you know you could maybe throw that together you could throw a mediocre version of that together you know in a few days you know if you probably to get a really good script you need humans to to think of more clever lines uh but the the simulation possible is the other simulation possibility that intrigues me is is you know his mentors you know I don't know if you've heard remember the movie Stand and Deliver I
don't know do you know what that's what this calculus teacher who was just great out in Los Angeles well you imagine you could you know put it out of you know turn every calculus teacher into a simulated version of him uh using some of this technology um so that's great but then you have the abuse cases which I think people have already had to deal with I mean somebody you know uh you know suppose somebody decide to put out a simulation of you know Stan and Barb doing something nasty or simulation of Stan being assassinated
you know those are you know those things are very real possibilities and create real challenges I don't know Stan do you have any sense of of the of the of the simulation use and abuse cases you have any thoughts on on the pitfalls or Promises of that I haven't thought about the examples you just gave and hopefully won't come back but the the opportunity part of it I think we've already all started to internalize what some of the risks and dangers are but I hadn't really given a lot of thought to all the opportunities until I
started playing with it today and putting in uh the use cases that I had for Knowledge Management and the answers that I got back to me suggests that there is a lot of potential there I've defined in my writing over the years 50 different components of Knowledge Management so I went through trying to think about which ones would this maybe apply to I thought it'd be maybe a handful of them it turned out that I came up with 32 out of the 50 that it could apply to and then I went back and I asked it for each
one of those 32 what can chat GPT do for this and it gave me back in most cases reasonable answers there was a few that just seemed like a little bit forced but otherwise that they were coherent they made sense and if they were in fact valid they would really help Knowledge Management like they could do a much better job of creating user interfaces much better job of responding to search queries and so forth and then the other tests that I put it through just asking it questions it seemed like it came back with pretty
good answers just like you were talking I know I put in the question about Valentine's Day and I've reproduced it here in the chat seemed like a pretty reasonable answer that it came back with so I think there's plenty of risks and dangers but I think on the surface the the value that it can provide that that's what I I'm more focused on I would say can you give an example of of one of those things where it came up with something that you hadn't sort of considered and seemed like a an
interesting use case well for instance I asked that uh what how can we use for training as an example and it says it can be used for Content Generations generating training materials such as lesson plans modules and presentations it can cut it can do a virtual instructor integrated into a virtual learning platform provide personalized interactive training experiences it can answer questions related to the training material it can do assessment generation to generate assessments and quizzes ensuring Learners have a solid
understanding of the material adaptive learning it can be used to tailor training experiences based on the learner's progress preferences and needs well that's all true and of course we could be skeptical about it it's pretty good that's yeah yeah yeah and that's just one thing and I've got to believe you could do that for like you know a 10 year old right you could you know you could have a very enthusiastic encouraging Mentor for a ten-year-old and just uh you know it's that that would be amazing
but yeah but what does the reliability issue bother you for Knowledge Management at all either Stan or Dennis well I would say that if you're going to use it there's going to be two distinct use variations one is focused on the general knowledge that in the World At Large which is where most of us have been experimenting with it and then there's the potential use where you focus it on the knowledge and resources within an Enterprise or within an organization if you can properly do that if you can
turn it loose on all of an organization's content and then it can do the same thing that it it's done with the more Universal content and that's a big question because as We Know Google the search engine is different than internal search engines by a big big margin because of the differences in scale but assuming that it can work similarly on internal content then yeah I'd say I wouldn't be too worried about the the negative side of that I'd I'd be excited about how it could do a lot
better job of finding content and serving it up in useful formats than what we currently do with cobbling together stuff from you know search results and so forth Dennis you have any thoughts there yeah um I um I looked at a couple of things I said you might have seen recently in the news um you know now uh universities are giving it tests uh like uh I think uh was it Wharton the MBA operations management test and gave it a B minus or something like that and then um CNET was using it to write articles for money and had a bunch of errors but
the errors in both cases seem to be math errors which I thought was sort of funny that um in order to make it more human you make it poorer at math than a computer is um and I I think maybe people are you know thinking this is like you know the Wizard of Oz or something that that knows everything but I if you think of it more as an explainer rather than a calculator I think that's where it's its Niche area is and when you were talking about simulations and and then just recently what Stan was talking about too I think the ability to
take a concept and explain it at whatever level is appropriate for the person asking the question uh could be really valuable and just this morning I I pulled up an old paper I I remembered from a long time ago an AI paper that I thought uh has some relevance here and I was reading through it and um the guy who wrote it uh was talking about Horn clauses and I remembered them vaguely uh but I didn't remember exactly what they were so I had you know chat GPT up and I said what's a Horn clause and it gave me this
explanation that seemed to come right out of a logic uh math textbook and so I just responded back and I said can you explain that at the level that a high schooler would understand and it came back with a very nice simpler explanation so to your point about a 10 year old you know I think I and maybe also the idea in the opposite direction what you said before about the AI you know sophisticated AI it might provide an explanation at a default level and then if you come back and ask for a different level of explanation it can tailor it up or down
depending on the sophistication of the of the reader so I think that kind of thing could be really powerful um you know if you want to do the math calculations to see what the interest rate is or whatever then we've got other tools for that maybe this is not the right tool for that yeah the uh reminds me that uh just today uh a blogger that I follow Tyler Cowen said that he now reads with chat GPT so if he's reading about some foreign country and and a region gets mentioned that he doesn't know anything about he'll just you know
ask chat GPT to tell him about what he needs to know about that region uh so he can kind of you know go back and forth and kind of follow the follow the book he says it's like having 10 books open at the same time um the uh I want to ask one more question and then I I hope the audience is kind of primed to to uh jump in uh and that's this big question of what will knowledge work look like let's say in 2030 seven years from now will it be about the same now or radically different and I'm leaning toward the radically different
so those you know knowledge workers they're what Robert Reich used to call symbolic analysts people who deal in words or equations or computer code um you know if you follow the software industry I mean it is being turned upside down because chat GPT is just great as a putting together what they call an IDE which I forget you know integrated development environment or whatever I mean people people can produce software just incredibly faster those skilled people and I you could sort of see the same thing happening in
all these fields again being able to put together you know a a complex video in you know in in days that would have taken you know months or years to put together um so let me ask the question this way suppose I say that um you know there's a the 75 percent chance that seven years from now the tools that we've seen and that are coming down the pike will be an absolute necessity for someone to be uh an employable knowledge worker they they'll have to be able to work with those tools or they'll you know they
just won't be uh won't be productive at all you know I'd say that there's this like it looks to me like there's at least a 75 chance that that scenario plays out Stan do you have an opinion would you go over or under I I think you're you're right Arnold I think you you didn't you write that you thought it would be a good time for people to start a business using chat yeah so that's sort of another statement of its potential impact isn't it so I think you're right this is very similar to me as an
inflection point as you pointed out on one of your blogs about the the World Wide Web right to me it has that kind of potential impact Dennis you have a yeah I think things will change in in two different directions one is things like this um especially you know as as more people are working uh remotely what you don't get when you're working remotely is the person sitting in the cubicle next to you that you can lean over and ask a question and say hey do you know how to do this um just in a casual way and I think GPT
GPT can do you know take the place of some of that um where hey do you know how to you know I've got this JavaScript I'm trying to figure out how to do this can you give me an idea how to you know that kind of thing um Stan mentioned uh finding content too um one of the challenges in organizations is always keeping content organized because people don't they always want everybody else's content organized but they don't want to organize their own for the benefit of other people and if uh and I tested this the other day my
wife was thinking about starting a business I live in South Carolina so I asked chat GPT you know where the forms to start a business in South Carolina and it came back and said they're on the Secretary of State's website um but it didn't give me a link it just said that's where they are so I I asked I followed up I said do you have a link gave me the link not just to the website but to the page on the website where the forms were um what I was thinking was you know if if this means that people can just dump
their stuff into their systems in any old way they want and not have to worry about it and and something like chat GPT can you know scan all that and figure out where everything is and just tell you where it is whenever you need to know that's pretty powerful the other the flip side I was thinking of is every technology ends up shaping the way the information the knowledge is formed because there's a there's an iterative kind of thing right where um for instance um people generate uh have button to
create a CSV file to be able to export into Excel so there's some structure to that data specifically for a tool I've read the companies are telling um you know applicants not to bother with a cover letter with their resume because the resume is being ingested into an automatic you know system so many websites are designed for SEO you know uh even more so than the actual content so I would not be surprised that if if something like GPT gets sort of downsized to where it could be deployed within an organization
um or even on the web um once people start to get the feeling for like how it's ingesting this information and they're probably going to go check their own content against it to see what it's saying about about them they'll probably start tweaking their content in a way that makes it you know as palatable as possible whenever these tools use it so there's a sort of you know iterative kind of backward uh loop that that changes the shape of our knowledge to fit the food we're using versus the other way around yeah those
are all great observations about the last one or like I remember when I had my commercial website in the in the 90s and um you know we at first we were focused a lot on the design of the home page because people were going to come to it from the home page then once the search process take took over people were just were coming into the back pages and you had to completely rethink you know how am I going to accomplish what I want from you know from the website's point of view get people where they where we
want them to be when they're coming in to to different pages they're not coming through the home page and the what your last Point reminded me of that that and um anyway so I I I didn't want to I I suspect we have a very interesting audience so if let me just get a get a gallery view up and if people want to put up their virtual hands and please don't make long comments just because I think there'll be other people who who want to talk I suspect but if you'll raise virtual hands or maybe physical hands I'll be
able to see it and some of you um and uh and then I can call on people and again try not to give long speeches and you know as they say on Jeopardy be sure it's in the form of a question by the end it doesn't absolutely have to be but I'll do that okay so JC can I ask you to unmute yeah can you hear me yeah yeah so uh you asked a lot of questions about uh chat GPT and Knowledge Management so I think one way to look at it is uh as a knowledge assistant and you ask the question will it fundamentally uh revolutionize the the
knowledge worker the answer is yes so how it will do that and I'll give you some example I've been doing with chat GPT um I needed to write some code I can write code but I ask him to write code for me and it wrote code for me okay so um I asked him to write a blog so I gave him some information and what is very important to understand with those generative models is to introduce the context of what you're doing so you can ask the assistant GPT imagine you are a lawyer and I need a contract in
who is following the rule of the State of California Etc and it will provide you with a contract following the rule of state of California so now the question here is how do you trust the answers and there is always this degree of what is an expert right so if I'm a novice and I'm looking for an answer of a domain I have no knowledge about you know I get an answer I have not really a way to appreciate can I trust this answer if I have some domain knowledge I can use some of my knowledge in that domain to appreciate the validity of
that answer but what I see happening is that those generative models will come into the Enterprise already chat GPT as an API and you could train the model with your own Enterprise data and what Stan said which is very relevant companies basically they do a terrible job of organizing the internal data and a model like a tool like chat GPT will do a much better job than that and today companies are using search engine but search engine you can I encourage anyone who has a company to go into the search engine log the number of
keyword given in a search engine is an average of two or less okay so there's a there's a limitation of the technology by the human interaction to the technology and this number has not changed for the last 15 years uh I I'm the former Chief Knowledge Officer of a piece of Microsoft I can tell you that I talked to my Bing colleagues and internally it's the same number so I think there will be a fundamental change there a major change because we will accelerate the task we will give better result to most of the basic tasks where
the system would have to improve is everything that is reasoning which we we see is not ready to do that but I think we are like touching the first Universal uh you know knowledge assistance and they would need some specialization there and I think as the time goes on we will see tremendous Improvement it's a little bit like you and I are probably the age of the slide rule okay and then we are the TI calculator and then we had we used programming in Fortran right so this this transformation of a tool going from
a slide rule to computers certainly is what's happening with AI right now and those kind of models so I think yes it will be extremely uh transformative and for the other reason that it's available everywhere and I would say okay sorry I'm gonna have to ask you to wrap it up because I'm sure other people are talking and and I want to say one thing that we've been talking about chat GPT and search and one difference is chat GPT maintains state so Dennis mentioned you're asking the question and then
reformulating it that's very natural with chat GPT if if Google needs to um if that's all Google needs to fight they they can they can do that you know next week they can maintain the state and let people reformulate their queries uh so that if that's a revolution which it may be uh that'll happen really quickly all right Robert Boyd you have a hand up can I get you to unmute here still okay go ahead and ask sure hi uh thanks well uh you know uh you talked about uh having chat GPT write code for you and
I've been using Copilot for a little while which is GitHub's um uh you know OpenAI Codex for writing code and uh I was using it to write React and Material UI which doesn't really matter what it is but it was new to me um and I learned it quite a bit faster I think because it kept me out of the uh the semantic weeds if you say you know write this kind of code using React and Material UI then it'll give you 10 suggestions uh and you can you can't exactly cut and paste it but it did make some really interesting suggestions
which were of a quality that I've never gotten in an agile meeting where everybody else is sitting around and supposed to be commenting on the way you're supposed to uh the best way to solve this problem but what they're really doing is wondering about their own problem at least chat GPT was paying attention to me um but I'll uh I'll tell you three things that I've seen that it doesn't it kind of doesn't do you know you you say write some code like this uh and and it will give you a suggestion
but what it won't suggest what it doesn't know is that hey you could have written this code better if you already had a typescript interface file or something like that so it doesn't go backwards it doesn't say you know it starts from where you are and goes forwards a little ways but it won't go backwards it's not like a real good Mentor who says well you don't really need that kind of code you need this other kind of code to solve that problem and you need these prerequisites and it doesn't go very far forward so
it's good for the next five or ten lines of code but you know if you're missing something like well you forgot to tell it to include a variable to you know a react thing called use state to maintain the state of all the information that you're using when people are typing it in and changing it well it won't tell you that unless you know that you need that so there is there's uh there's a good Advantage for some of us who were born in 1961 who kind of know all the pieces you do need but don't
really know the semantics or the specific um you know what is material UI have in to offer compared to semantic and some of the others it'll come up with those specifics for you but other than that it's uh it's it it it doesn't really do the mentoring thing quite yet um okay let's have uh Dennis can I okay sure thanks uh thank you um I I'm I'm curious getting back to the to the original question of of solar what I what you would call uh Enterprise Knowledge Management I mean a lot of the a lot of the
functionality of chat GPT seems to come from the fact that it has this this vast textual input petabytes worth I have no idea how much really um and I'm wondering sort of what is the minimum amount of text that you would have within an Enterprise that you could feed into this engine and have it be useful in other words is it it obviously operates at a at a very great scale when it's when it's looking at the whole internet but when it's looking at just what's within an uh an Enterprise what
is the how how much data does an Enterprise have to have before it could actually become useful does anybody know the answer to that point well I I don't I know nothing but I would guess you could do it with small scale as long as it as long as it it it it it knows semantics and and language structure and uh grammar from its large model well it doesn't know that does it know that in other words can it transfer that from the large model to a smaller model or is it or is it an engine that just works on what it's given
I would I I'm gonna guess it can transfer but uh oh yeah one thing I've noticed that it does is uh it takes it reads the files that I have in my project and does suggest things related to things in my project that I'm working on now even if I haven't told it you know there's some function in my file it will it will look for things that seem relevant so if your company knowledge was in a particular file and you were writing in that context it probably looks there first certainly does for co-pilot yeah Stan or
uh or Dennis you have any well I would say that as you said on you could you could do like a pilot by focusing it on a set of content but for it to work the best you'd probably turn it loose on your intranet and let it crawl and find everything that's there it's going to be a lot of stuff that wouldn't be all that useful but hopefully it can then differentiate and that's the problem with search engines it doesn't differentiate it just throws you out a bunch of stuff that you have to Wade
through and the advantage of this would be it could look through all of your Enterprise content and then figure out using its own algorithms what to do with it and I think the answer would be turn it loose on your old internet and if necessary figure out ways of giving it access to things that are that are behind passwords and security and the more content that you feed it the better Dennis any thoughts yeah I I was thinking um when you were talking about you know the sort of generalized capability versus the specific content
that Harkens back to like old days of expert systems where there was an inference engine and a knowledge base and the inference engine was the the general purpose kind of way of you know turning through things and then the knowledge base was the specific content so but again I don't know anything about what this tool was like behind the scenes but if it was ever going to be deployed within an organization if they hope to sell it that way I would think you would have to have those two components you'd have to have the basic
engine to drive you know how it looks at things and figures things out in general and then applying that to the specific content of the particular organization yeah I don't think it you know it has it if it has a digestion if you give it the digestion system to a different set of tokens uh it would kind of work that way okay Tom you've had your hand up so let me uh I I I had a quick comment question uh I understand that Google uh made a 300 million dollar investment in a competitor to chat GPT called Claude I
think from Claude Shannon and this is a constitutional AI and I'm very interested in the difference between a constitutional Ai and this chat GPT because I I have a feeling that the Constitutional AI is going to be more of the uh expert system type uh hybrid but I'm not sure about it so I was hoping someone would have an answer to that I don't turn constitutionally yeah yeah I'm not I did see that Google something came out just I think today they have a tool called Bard and the internal memo went out that they got to
deploy this internally and start testing it right away and they're they're very worried that chat GPT is going to suck up all that you know press and so they've got to get theirs out quickly so they do have one coming out to compete with it I I didn't hear anything about constitutional AI there so I'm not sure exactly what what that means just ask uh chat GPT is the answer yeah yeah okay yeah it gave the answer yeah and social values into AI systems to ensure they operate in a responsible and fair manner okay yeah well you know this
whole project started out of you know the fear of the uh you know the paper clip maximizer scenario um and it's you know most of the people who who have that fear are very angry that that it was kind of let out to the public as soon as it was Ernie you had a physical hand up so you wanna thank you I've done mute of course okay yeah yeah I did yeah um a couple of comments I'll make them real fast first uh for small smaller companies they're not going to be able to adopt uh their own engine because most of it is requires uh
pretty much massive parallel processing which requires many many computers and uh if you're a small company and you want to use something like that you're going to have to use a tool that other users are also using you could have to pay a subscription fee or something to access a general a model of some kind which brings up a question I've had now for a couple of weeks it's how do you get the chat GPT to sign a non-disclosure agreement I I don't know and but it's a it's an interesting question uh my
daughter-in-law works for IBM she's using it and her groups are using it but I asked her what was she doing about non-disclosure because they're they're asking you to to deal with some questions that really ought to be remain private inside the company so I I I'd like to hear anyone's opinions about that how that's going to work in the in the future secondly Google is having a meeting on Wednesday a public uh some kind of announcement on Wednesday and the rumors are that it's about a competitor to chat GPT I don't
know that that's true or not but that's what I've heard and lastly um going back to coding you I I've been disappointed recently in some of the things I've asked it to do mainly because it doesn't quickly interpret a string as a string you put a string in a uh it's like a date you can write a date uh out as a string and then tell it you want to use a system date too well we'll use the system date as the date but it doesn't convert the string to a date at least in oracle's database
uh unless you put quotes around the data let it know it's a string so for novices they're not going to see that they're not going to anyone that's going to ask this thing that's not really familiar with the way systems work it's not going to perform very well okay that's all I have to say I I am interested in an NDA uh question though well I you know that problem kind of precedes uh this kind of thing you know the you know I started this saying that that you talked about this Chief technology
off officer of a big four consulting firm and it wasn't McKinsey but when I think of McKinsey you know what that what they sort of specialize in doing is talking to all the big players in an industry and sort of sharing in a in a very careful way what's going on at the other three or with other five players with the sixth player uh you know with disguising kind of where you know disguising where they're getting their information from it's a it's a very dicey kind of thing um and so I think what you're raising I
you know I don't think it's it can be solved with something as simple as a non-disclosure agreement because McKinsey you know just has to enforce in a detailed way has to know what it can disclose and what it can't and and they'd probably crosses the line at least indirectly all the time um okay uh Bart can you unmute and ask you a question or give your comment yeah well I I wanted to expand on the question from Ernie for the NDA um would you be looking at an NDA for the questions you ask it or would you be
looking for an NDA for maybe previous content you have written before like hey I do not want my stuff to be indexed or used to train the algorithm to begin with where we could be looking for an NDA or maybe both of those situations or yeah I would guess that that all those possibilities are there uh Stan do you have any thoughts on you you've done some Consulting what what how do people deal with it just now without and they uh well I haven't had to deal with NDAs much typically when you're consulting or
maybe they may ask you to sign something like that or they may just say that's part of the work agreement I haven't seen it uh come up in my own experience much okay um all right anyway the queue is open it's also approaching the end of the hour so I um I'll give give people like a couple moments to come up with any last question okay and I say Tom go ahead um yeah I wanted to comment on the Knowledge Management um I remember 15 20 years ago there was this huge search at Dell for the one source of Truth which is at the
accounting level so right now almost all of these multinational organizations have multiple uh jurisdictional silos of information and they've been doing uh trying to organize them in a way so that the president can get an overview and if there's a section that wants more detail you go to that detail and uh get an overview of that section and keep going down uh and it will all be consistent and while I was there 15 years ago they failed to create such an one source of Truth uh when I was at IBM um even eight years ago they were doing
a similar thing with blue Harmony which they then gave up part and now they've gone to something called a date in sap uh and now they've gone to something called Data Lakes where they're trying to make a lake of data so that it's available to uh everybody in the organization and what I'm certain of is that this AI is of which are specialized in the organization but have the uh capability of talking like chat GPT has the capability some hybrid mixture of that is going to allow the executives to
get rid of a lot of mineral managers and thereby as Arnold mentioned early raise their personal budget but lower their head count foreign okay um I don't see any of their hands up uh and I'd like to thank everybody for their questions and their thoughts and and what the things that they added in chat and I'd especially like to thank Stan and Dennis and uh give Stan the next to last word and Dennis the last word so Stan okay so one of the things to point out is that the main use of AI in my opinion is to augment human
capability not to replace it so I think there's a lot of potential for a tool like this to give you something that you could start with and then work from as opposed to just take and always use as is so in that context can save us a lot of time and effort but we might still want to spend some time checking it verifying it and refining it okay and Dennis the last word um I'll put a link to a paper that's 35 years old but I still find interesting and it's the idea that we don't get hung up on the term artificial intelligence
uh he makes the case that every technology um has this habit every major technology has a habit of first being named in an adjective noun form that relates back to something that's already known so you had locomotives were an iron horse a car was a Horseless Carriage printing presses were actually originally called artificial writing radio was Wireless Telegraph you know so there's this pattern of new technologies being named as old things until we figure out exactly what it is and realize that it's completely different
from anything that we've had before so when I think people think of this as intelligence it's not intelligence it's it's another tool it's a tool to use for information and um knowledge uh management and and practice but it's not really intelligence okay well I'll agree with that and then we'll sign off thank you everybody thanks Arnold thank you bye
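To illustrate the Oracle date-handling issue mentioned in the transcript above, here is a minimal sketch (the table and column names are hypothetical, and the format mask is just one example) of the difference between a generated query that leans on an implicit string-to-date conversion and one that makes the conversion explicit with TO_DATE:

    from datetime import datetime

    # The date arrives as a plain string, e.g. typed into a prompt.
    date_text = "06-FEB-2023"

    # Implicit conversion: Oracle will only treat this literal as a DATE if it
    # happens to match the session's NLS_DATE_FORMAT, so generated SQL like
    # this is fragile.
    implicit_sql = "SELECT * FROM orders WHERE order_date = '06-FEB-2023'"

    # Explicit conversion: spelling out the format with TO_DATE removes the
    # ambiguity, which is exactly what a novice relying on generated SQL would miss.
    explicit_sql = (
        "SELECT * FROM orders "
        "WHERE order_date = TO_DATE('06-FEB-2023', 'DD-MON-YYYY')"
    )

    # The same idea in plain Python: parse the string into a real date object
    # rather than hoping something downstream guesses the format correctly.
    parsed = datetime.strptime(date_text, "%d-%b-%Y").date()

    print(implicit_sql)
    print(explicit_sql)
    print(parsed)  # 2023-02-06

The point is not the specific syntax but the habit: whenever a model-generated query mixes strings and dates, make the conversion explicit before trusting it.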
 


Stan Garfield
 

Dennis Pearce and I will join my friend Arnold Kling in a conversation next Monday, February 6, 2023 at 8 pm EST. It will be a discussion of ChatGPT, including its impact on the field of knowledge management.

If you would like to participate, send me a private email at stangarfield@... and I will reply with the Zoom link. The link is not to be shared widely.


Bart Verheijen
 

It's a probabilistic (language) model and has no understanding at all. 
There are many examples (some in this SIKM forum as well) which show it. 

Bart Verheijen
+31 6 19 342 603
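A toy illustration of what "probabilistic language model" means in practice: the sketch below (corpus and output are purely illustrative) only records which word tends to follow which, then samples from that record. Real models work on sub-word tokens with a neural network and vastly more data, but the generation step is the same kind of sampling, with no notion of meaning behind it.

    import random
    from collections import defaultdict

    # Tiny toy corpus standing in for the web-scale text a real model is trained on.
    corpus = ("knowledge management helps people share knowledge "
              "and reuse knowledge every day").split()

    # Record, for each word, the words observed to follow it.
    following = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current].append(nxt)

    # Generate by repeatedly sampling a plausible next word -- no understanding involved.
    random.seed(0)
    word = "knowledge"
    generated = [word]
    for _ in range(8):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)
        generated.append(word)

    print(" ".join(generated))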








Robert M. Taylor
 

I read linguistics and AI and spent a very brief time working on computer understanding of language, my God, more than 35 years ago before I realised it wasn't for me. I mean, I know it's funny sometimes, but I'm in total awe as to what has so far been achieved when you consider the difficulties. Turing Test? Well, they're at least as smart as Dan Quayle when he was VP of the USA as far as I can see.
But seriously, there are many, many real-life situations where we accept at least the same degree of lack of understanding - think of almost any customer experience you've had with a retailer.
As for Searle's Chinese Room, once they start talking to each other there will be dialogue, yes, and probably decisions and actions flowing from that, but could anyone claim there would be any understanding at all going on?


Rezwan Alam
 

Great conversations are going on about ChatGPT. As a former journalist, I ran a chat session with it two weeks ago, and here are the responses to my 15 questions. It took a minute to answer one question, which it most likely thought was "difficult."
Best,
Rezwan
Associate Professor



Dennis Jonker
 

It is a nice tool to quickly find the right direction to look for your information. I have noticed that in engineering it most of the time came with a notification to check with the expert for further information. The answers are not always correct, because in some disciplines terms are used very specifically for that area and can be used differently by others. It is certainly not the holy grail, but it is a good starting point.

That is my opinion.

But I do like the use of it.



Tim Powell
 

I’ve been vaguely uncomfortable while reading this compelling and insightful thread – and thanks Louis-Pierre for reminding me why!

 

Few would dispute that ChatGPT produces some amazing and amusing results.  The question remains, is this something each of us can use in our work — or just a clever party trick?  I.e., is it a tool or a toy?

 

Full disclosure, I have not been able to test it yet for my own purposes, as they currently are not taking new users.  But it seems, from what others have reported here and elsewhere, to produce quickie, unsourced, unvetted, insight-free rehashes of “what many people are saying.”  That’s fine for what it is — but we court trouble when we pretend it’s anything more than that.

 

The ends toward which powerful technologies can be applied are often obscured, even to the creators of those technologies.  Nuclear energy can be used to power entire cities — or to demolish them.  With all the “gee whiz” about AI, we sometimes forget that one of its widest applications today is in security systems that recognize and track identities.  Again, this can be a “good” or “evil” application, depending on a number of factors other than the efficacy of the technology itself.

 

Ironically, China has just introduced a regulation requiring AI-generated content, including “deep fakes” videos, to be labelled as such.  It will be interesting to see how, or whether, they enforce this.

 

As a knowledge practitioner, and a citizen of the world, my single biggest concern (among many) is the ever-increasing prevalence and power of disinformation.  By decoupling words from their meaning, context, and “truth value” — thus breaking the (formerly strong) bond between signifier and signified — something is lost.  To me, that something can be summarized as “accountability.” 

 

I represent myself to clients as, not just an agent, but more a steward of information.  As it becomes ever-easier to collect information, it becomes all the more important to curate it — to assess its timeliness, relevance, accuracy, and so on.  By definition, information that is “truthy” looks and feels indistinguishable from information that is actually true.

 

Alan Murray, the CEO of Fortune Media, describes ChatGPT as producing “informed bullshit.”  If that suffices for any given purpose, fine.  But as we as a society begin to embed AI in important decisions — criminal sentencing, for example — we obscure the fact that people are making important policy decisions — but in a way that is embedded deep within algorithms, ergo non-transparent, ergo non-accountable.  Putting decision-making on automatic pilot seems like an invitation to disaster.

 

To the extent we can’t determine the provenance of information — and even lose our respect for the very idea of it — we walk a slippery slope that leads us away from light and truth — and toward darkness and falsehood.  Ah, but it is oh so seductive…

 

Happy 2023 to you all,

 

Tim

 

TIM WOOD POWELL | President, The Knowledge Agency® | Author, The Value of Knowledge |

New York City, USA  |  TEL +1.212.243.1200 | 

SITE KnowledgeAgency.com | BLOG TimWoodPowell.com |

 

 



Dennis Pearce
 

If you ask it to provide references, it often will.
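For anyone who wants to try that pattern programmatically, here is a minimal sketch against the public OpenAI chat-completions HTTP endpoint (the model name, question, and temperature are placeholder choices, and the environment variable is assumed to hold a valid API key). The only change from an ordinary prompt is the explicit request for references, and whatever comes back still needs to be checked by hand, since cited sources can be wrong or invented:

    import os
    import requests

    question = ("What are the key success factors of a KM program? "
                "Please provide references.")

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # model name may differ over time
            "messages": [{"role": "user", "content": question}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]
    print(answer)  # verify any cited references before reusing them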


Louis-Pierre Guillaume
 

My two cents on ChatGPT

Besides the performance of the tool, its main drawback/flaw is its inability to tell us the sources that enabled it to write answers to our questions.
It is a black box, unfortunately.
It is not an oracle.

Maybe soon a Tarte-à-la-crème 😂

Cheers
Louis-Pierre
_________________________________________________
Louis-Pierre Guillaume         
Mobile: +33 6 10 33 63 21
louis-pierre@...     LinkedIn - Twitter
_________________________________________________


On Mon, Dec 5, 2022 at 6:07 PM Tom Short <tshortconsulting@...> wrote:

I think the most impressive output of this whole exercise is the limericks. Because of their compactness, it's hard to write a good limerick: to be any good they must not only conform to the limerick style/format/rhyming scheme, they also have to be witty. Like haiku, only with more constraints.

Great exploration, Dennis!

--
-Tom
--

Tom Short Consulting
TSC
+1 415 300 7457

All of my previous SIKM Posts


Eli Miron
 

Quality of translation from English to Hebrew: ChatGPT vs. Google Translate.

The first part of the attached file is a set of criteria for comparing the quality of translation - an excellent article composed by ChatGPT.

 

The second part is an excerpt of an article that I have published.

The next three parts are most relevant to Hebrew readers: the third and fourth parts are two translation attempts by ChatGPT (quite different), and the fifth part is the translation by Google Translate.

Final verdict: Google Translate is by far superior to ChatGPT (on all criteria) for translating from English to Hebrew. In addition, ChatGPT is probably limited in the length of text it can translate (neither attempt translated the last three lines).
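One small, purely illustrative safeguard for the truncation problem noted above: before trusting a machine translation, compare its shape to the source. The length threshold below is an arbitrary placeholder; the line-count check is the part that would have caught the missing last three lines.

    def looks_truncated(source: str, translation: str, min_ratio: float = 0.5) -> bool:
        """Flag a translation that has fewer non-empty lines than the source,
        or that is suspiciously short relative to it."""
        src_lines = [line for line in source.splitlines() if line.strip()]
        dst_lines = [line for line in translation.splitlines() if line.strip()]
        if len(dst_lines) < len(src_lines):
            return True
        return len(translation) < min_ratio * len(source)

    print(looks_truncated("line one\nline two\nline three", "translated line one"))  # True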


Guillermo A. Galdamez
 

This is a brilliant prompt, Dennis! Thanks for sharing! (I literally LOL'd.)

However, your question got me thinking about several things:

The first is Explainable AI. In my experience, organizations want increased transparency into outputs generated by AI (Like you asked yourself: Where did this come from?). In several spaces I've heard scepticism and distrust towards these types of products: "Is this information accurate?" "Can I really trust this information?".

This led me to a second thought/question: Will younger generations have a similar attitude towards these products, or will they just accept their outputs more readily, having grown up with AI from an early age? 

Best,

Guillermo

On Tue, Jan 3, 2023 at 11:30 PM Dennis Pearce <denpearce@...> wrote:
I continue to be impressed by ChatGPT's ability to handle nuance in instructions, even if the answers aren't always completely accurate.  For example, I asked it:

Tell me about enterprise social networks

Its response:

An enterprise social network is a platform that allows employees within an organization to connect, collaborate, and share information and ideas. These platforms are typically hosted internally, on a company's own servers, and are usually accessed through a web browser or a mobile app. Enterprise social networks are designed to facilitate communication and collaboration among employees, and often include features such as discussion forums, file sharing, and messaging. Some enterprise social networks also offer integrations with other business tools, such as project management software and customer relationship management systems. The goal of an enterprise social network is to improve productivity, foster a sense of community among employees, and make it easier for people to access and share information within an organization.

OK, not too bad except for the part about being typically hosted internally.  But I can see this as just being stitched together from content that's out there on the web.  So then I asked:

Tell me about enterprise social networks, but do it sarcastically.

The response had me LOL:

Oh boy, where do I begin? Enterprise social networks, the most exciting and innovative way for employees to communicate and collaborate with their coworkers! Nothing says "cutting-edge technology" quite like being forced to use yet another corporate platform to do your job.

But seriously, what could be more thrilling than spending hours scrolling through endless feeds filled with the banal musings and meetings invites of your colleagues? And let's not forget the always-enjoyable task of tracking down that one document you need for your presentation, buried somewhere in the depths of the company's file-sharing system.

But hey, at least now you can "like" and "comment" on your coworkers' updates just like on your personal social media accounts! Because nothing brings people together quite like corporate-sanctioned digital validation.

In all seriousness, enterprise social networks can be useful tools for improving communication and collaboration within an organization, but they are not without their challenges and limitations. Like any technology, they need to be implemented and used effectively in order to be truly beneficial.

Where did that come from?


Dave Snowden
 

So it homogenises to dominant patterns of platitudes?





Prof Dave Snowden

Director & Founder - The Cynefin Centre 
CSO - The Cynefin Company
Social Media: snowded
dave.snowden@...
thecynefin.co



On 5 Jan 2023, at 23:19, Jean-Claude F. Monney <Jean-Claude@...> wrote:

Been playing for a few weeks now and find it quite fascinating. Here is a dialogue example.
 
Best,
 
JC
 
JC: What are the key success factors of a KM program?
 
ChatGPT: There are several key success factors that can contribute to the effectiveness of a knowledge management (KM) program:
  1. Clear objectives: It is important to define clear and specific goals for the KM program so that it is aligned with the overall business strategy and objectives of the organization.
  2. Top management support: KM programs require the support and commitment of top management in order to be successful. This includes providing resources and ensuring that KM is integrated into business processes and decision-making.
  3. Culture of sharing: KM programs are built on the principle of sharing knowledge and information within an organization. Therefore, it is important to foster a culture of openness and collaboration in order for the KM program to be effective.
  4. Technology: The use of appropriate technology is crucial for the success of KM programs. This includes tools for storing, organizing, and sharing knowledge, as well as systems for capturing and codifying knowledge as it is generated.
  5. User adoption: KM programs can only be successful if they are widely adopted and used by employees. It is important to involve users in the design and implementation of the KM program to ensure that it meets their needs and is easy to use.
  6. Continuous improvement: KM programs should be flexible and adaptable, with a focus on continuous improvement. This includes regularly reviewing and evaluating the KM program to identify areas for improvement and making necessary changes.
JC: What about a culture of re-using, sharing without re-use is useless.
 
ChatGPT: You are correct that a culture of reusing knowledge is an important aspect of successful knowledge management (KM). Sharing knowledge within an organization is a key principle of KM, but if that knowledge is not reused or applied in some way, it is not as valuable.
To create a culture of reusing knowledge, it is important to establish practices and processes that encourage employees to seek out and utilize existing knowledge, rather than reinventing the wheel or starting from scratch every time they need to solve a problem. This can include things like:
  • Establishing a centralized repository of knowledge, such as a knowledge base or a shared drive, that is easily accessible to all employees
  • Encouraging employees to document and share their own knowledge and expertise
  • Providing training and resources to help employees learn how to effectively search for and find relevant knowledge
  • Recognizing and rewarding employees who contribute to and reuse knowledge within the organization
Creating a culture of reusing knowledge requires a commitment to continuous learning and collaboration, as well as a focus on the value of sharing and reusing knowledge as a way to drive business results.
 
 
JC: Give me some practical implementation of a culture of sharing and re-using
 
ChatGPT: There are a variety of ways that organizations can foster a culture of sharing and reusing knowledge, some examples include:
  1. Establishing a centralized repository of knowledge: This can be a physical or digital location where employees can access and share knowledge and information. This might include things like a shared drive, a knowledge base, or a document management system.
  2. Encouraging employees to share their own knowledge and expertise: This can be done through initiatives like mentorship programs, peer-to-peer learning, or by encouraging employees to present on topics they are knowledgeable about to their colleagues.
  3. Providing training and resources on how to find and reuse knowledge: This can include things like training on how to effectively search for and access knowledge within the organization, as well as tools and resources that make it easier to find and use relevant information.
  4. Recognizing and rewarding employees who contribute to and reuse knowledge: This can include things like awarding prizes or recognition for employees who share their knowledge or reuse existing knowledge in a creative or impactful way.
  5. Encouraging a culture of continuous learning and collaboration: This can be done through things like regular team meetings, cross-functional projects, and encouraging employees to ask questions and seek out new knowledge.
By implementing these types of practices, organizations can create a culture that values and encourages the sharing and reuse of knowledge, which can drive business results and improve efficiency.
 
 
 
 
 

From: main@SIKM.groups.io <main@SIKM.groups.io> on behalf of Martin Dugage via groups.io<mrdugage@...>
Date: Thursday, January 5, 2023 at 5:08 AM
To: main@SIKM.groups.io <main@SIKM.groups.io>
Subject: Re: [SIKM] The AI view of KM #art-of-KM

👏👏👏
Did you use ChatGPT 😂?



Dennis Pearce
 

I believe that if you include something like “and provide references”, in most cases it will do that.


 

did you ask it where its ideas came from?  NO

 

Best,

 

JC

 



Murray Jennex
 

Interesting results, and it shows the AI is good. I do think it needs to provide sources for its comments; did you ask it where its ideas came from? All the thoughts in the below are in the KM literature, so it hasn't done anything new, but if it can be prodded to say where the thoughts come from then we are making progress! This is what Watson was doing a few years ago in medical research: it was finding and summarizing research results for doctors. And as we are pointing out in the knowledge source discussion, key parts of that knowledge are still in the head of the knowledge holder, so you need to know who put the knowledge out so you can discourse with them to get the full value of the knowledge....murray jennex

On another note, I have told my university that we can no longer use essay questions to assess student knowledge; ChatGPT is just too good at essays for undergraduate assessment of learning. I do think it is not yet ready to replace graduate-level and expert-level discourse, though.
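A rough sketch of the pattern described above, under the assumption that the sources are handed to the model rather than expected from it: retrieve a few internal documents first (the documents, authors, and the keyword-overlap "search" below are all stand-ins, purely for illustration), then ask the question with those documents and their authors attached, so an answer can be traced back to a named knowledge holder:

    # Hypothetical in-house documents; in practice these would come from a repository search.
    documents = [
        {"id": "KM-001", "author": "A. Author",
         "text": "Top management support is critical for KM adoption."},
        {"id": "KM-002", "author": "B. Author",
         "text": "Reuse of lessons learned reduces project rework."},
        {"id": "KM-003", "author": "C. Author",
         "text": "Communities of practice sustain knowledge sharing."},
    ]

    def retrieve(question: str, docs: list, top_n: int = 2) -> list:
        """Naive keyword-overlap scoring, standing in for a real search engine."""
        terms = set(question.lower().split())
        scored = sorted(docs,
                        key=lambda d: len(terms & set(d["text"].lower().split())),
                        reverse=True)
        return scored[:top_n]

    def build_grounded_prompt(question: str, docs: list) -> str:
        """Ask the model to answer only from the supplied sources and cite them by id."""
        sources = "\n".join(f"[{d['id']}] ({d['author']}) {d['text']}" for d in docs)
        return ("Answer the question using only the sources below, cite the source ids, "
                "and say which author to contact for more detail.\n\n"
                f"Sources:\n{sources}\n\nQuestion: {question}")

    question = "What makes a KM program succeed?"
    prompt = build_grounded_prompt(question, retrieve(question, documents))
    print(prompt)  # this prompt would then be sent to whichever model you use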


-----Original Message-----
From: Jean-Claude F. Monney <Jean-Claude@...>
To: main@SIKM.groups.io <main@SIKM.groups.io>
Sent: Thu, Jan 5, 2023 3:19 pm
Subject: Re: [SIKM] The AI view of KM #art-of-KM

Been playing for a few weeks now and find it quite fascinating. Here is a dialogue example.
 
Best,
 
JC
 
JC: What are the key success factors of a KM program?
 
ChatGPT: There are several key success factors that can contribute to the effectiveness of a knowledge management (KM) program:
  1. Clear objectives: It is important to define clear and specific goals for the KM program so that it is aligned with the overall business strategy and objectives of the organization.
  2. Top management support: KM programs require the support and commitment of top management in order to be successful. This includes providing resources and ensuring that KM is integrated into business processes and decision-making.
  3. Culture of sharing: KM programs are built on the principle of sharing knowledge and information within an organization. Therefore, it is important to foster a culture of openness and collaboration in order for the KM program to be effective.
  4. Technology: The use of appropriate technology is crucial for the success of KM programs. This includes tools for storing, organizing, and sharing knowledge, as well as systems for capturing and codifying knowledge as it is generated.
  5. User adoption: KM programs can only be successful if they are widely adopted and used by employees. It is important to involve users in the design and implementation of the KM program to ensure that it meets their needs and is easy to use.
  6. Continuous improvement: KM programs should be flexible and adaptable, with a focus on continuous improvement. This includes regularly reviewing and evaluating the KM program to identify areas for improvement and making necessary changes.
JC: What about a culture of re-using, sharing without re-use is useless.
 
ChatGPT: You are correct that a culture of reusing knowledge is an important aspect of successful knowledge management (KM). Sharing knowledge within an organization is a key principle of KM, but if that knowledge is not reused or applied in some way, it is not as valuable.
To create a culture of reusing knowledge, it is important to establish practices and processes that encourage employees to seek out and utilize existing knowledge, rather than reinventing the wheel or starting from scratch every time they need to solve a problem. This can include things like:
  • Establishing a centralized repository of knowledge, such as a knowledge base or a shared drive, that is easily accessible to all employees
  • Encouraging employees to document and share their own knowledge and expertise
  • Providing training and resources to help employees learn how to effectively search for and find relevant knowledge
  • Recognizing and rewarding employees who contribute to and reuse knowledge within the organization
Creating a culture of reusing knowledge requires a commitment to continuous learning and collaboration, as well as a focus on the value of sharing and reusing knowledge as a way to drive business results.
 
 
JC: Give me some practical implementation of a culture of sharing and re-using
 
ChatGPT: There are a variety of ways that organizations can foster a culture of sharing and reusing knowledge, some examples include:
  1. Establishing a centralized repository of knowledge: This can be a physical or digital location where employees can access and share knowledge and information. This might include things like a shared drive, a knowledge base, or a document management system.
  2. Encouraging employees to share their own knowledge and expertise: This can be done through initiatives like mentorship programs, peer-to-peer learning, or by encouraging employees to present on topics they are knowledgeable about to their colleagues.
  3. Providing training and resources on how to find and reuse knowledge: This can include things like training on how to effectively search for and access knowledge within the organization, as well as tools and resources that make it easier to find and use relevant information.
  4. Recognizing and rewarding employees who contribute to and reuse knowledge: This can include things like awarding prizes or recognition for employees who share their knowledge or reuse existing knowledge in a creative or impactful way.
  5. Encouraging a culture of continuous learning and collaboration: This can be done through things like regular team meetings, cross-functional projects, and encouraging employees to ask questions and seek out new knowledge.
By implementing these types of practices, organizations can create a culture that values and encourages the sharing and reuse of knowledge, which can drive business results and improve efficiency.
 
 
 
 
 
From: main@SIKM.groups.io <main@SIKM.groups.io> on behalf of Martin Dugage via groups.io <mrdugage@...>
Date: Thursday, January 5, 2023 at 5:08 AM
To: main@SIKM.groups.io <main@SIKM.groups.io>
Subject: Re: [SIKM] The AI view of KM #art-of-KM
👏👏👏
Did you use ChatGPT 😂?


 



Martin Dugage
 

👏👏👏
Did you use ChatGPT 😂?