Back in the early days of KM, the big 4 consulting firms (I think there were six back then) saw the potential of KM and started experimenting with various tools and approaches. And it made sense for them to get on board early: their assets were purely knowledge-based and went down the elevator every night.
Back in 1989, Andersen Consulting hired AI guru Roger Schank and gave him $30 million to play with and continue his research (he was at Yale at the time), hoping for some breakthroughs they could apply to their business.
One KM-related project he worked on involved conducting and videoing knowledge elicitation interviews with experts. The thought was that if we could simply interview people and video it all, it would capture knowledge in a way that could then be inventoried, tagged, and searched for future retrieval and re-use. I don't know how much he spent on it, but word was it was in the millions.
In any case, that didn’t work. When I learned about this effort I was at IBM, and I knew it wouldn’t work. We didn’t have the tools to cope with vast amounts of unstructured data, even when it was in text format, much less video format.
But maybe now that is about to change. Some MIT alums are building a startup called Netra around an AI engine that is supposed to be able to parse video content and categorize it automagically. Might be a good one to watch. Who knows? Maybe Schank will be vindicated after all, and video knowledge elicitations will become a thing again.
Now MIT alumnus-founded Netra is using artificial intelligence to improve video analysis at scale. The company’s system can identify activities, objects, emotions, locations, and more to organize and provide context to videos in new ways.
--
-Tom
Tom Short Consulting (TSC)
+1 415 300 7457
Thanks for sharing. It seems like a natural evolution of video conferencing, given where it's at today. Many of the standard VC/streaming apps have built-in auto-captioning and transcription capabilities, and if you bundle that with all of the other asynchronous tools we're using to message and share knowledge, it could lead to a lot of interesting applications in the KM space.
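For example, most of those VC apps can already export their auto-captions as WebVTT files, which is enough to build a crude timestamped keyword search. A minimal stdlib-only sketch, assuming a hypothetical exported caption file (this is an illustration, not any vendor's API):

```python
# Sketch: index the auto-generated captions a meeting app exports as
# WebVTT, then do a keyword lookup over the timestamped cues.
# "weekly_standup.vtt" is an invented example file name.
import re

CUE_START = re.compile(r"^(\d{2}:\d{2}:\d{2})\.\d{3} -->")

def parse_vtt(path):
    """Return (start_time, text) pairs from a WebVTT caption file."""
    cues, start, buf = [], None, []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            m = CUE_START.match(line)
            if m:                     # a new cue begins
                start, buf = m.group(1), []
            elif line and start:      # caption text for the current cue
                buf.append(line)
            elif not line and start:  # blank line ends the cue
                cues.append((start, " ".join(buf)))
                start = None
    if start:                         # file ended mid-cue
        cues.append((start, " ".join(buf)))
    return cues

def search(cues, term):
    """Return cues whose caption text mentions the term."""
    return [(t, txt) for t, txt in cues if term.lower() in txt.lower()]

# Example: search(parse_vtt("weekly_standup.vtt"), "budget")
```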
Interesting, Tom.

I actually think it is still likely to be a failure from an enterprise point of view, because if you were to capture the majority of meetings you are likely to get lots of:

- irrelevant discussion
- tasking ("so we've agreed that X will do Y and come back by Z")
- acronyms and implied knowledge about past events

Additionally:

- meeting contexts will look pretty identical (either a nondescript meeting table or a Zoom grid), and
- emotions tend to be pretty guarded.

So I feel the pickings for the AI in this context will be slim. On the other hand, I'm sure the system will be a boon for the stock video and content creator market: if you're looking for a quick way to find (context-free) "cat runs into chair while chasing toy mouse" videos, it will have you covered 😁

What would be much more interesting to me is a system that fully and accurately transcribes videos, including assigning names to speakers and descriptions of the actions on screen. That's because the biggest drawback of videos is that you have to watch the damn things. If you could instead skim through text to find the bit at 42:55 where you show me exactly how to safely clear a jammed lathe, that would be a valuable knowledge tool.
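For illustration, the pieces exist today: a speech-to-text step plus speaker diarization yields a timestamped, named transcript that can then be searched as plain text. A minimal sketch with invented data and names, not any real product:

```python
# Sketch: search a diarized, timestamped transcript so a reader can
# skim text instead of watching the whole video. The transcript data
# below is invented; a real pipeline would produce something similar.
from dataclasses import dataclass

@dataclass
class Segment:
    start: int    # offset into the video, in seconds
    speaker: str  # label assigned by a diarization step
    text: str     # words assigned by a transcription step

def find_moments(transcript, query):
    """Return 'MM:SS speaker: text' lines for segments matching the query."""
    hits = []
    for seg in transcript:
        if query.lower() in seg.text.lower():
            hits.append(f"{seg.start // 60}:{seg.start % 60:02d} "
                        f"{seg.speaker}: {seg.text}")
    return hits

transcript = [
    Segment(2575, "Tom", "Here is how to safely clear a jammed lathe."),
    Segment(2650, "Stephen", "Thanks, that saves watching the rest."),
]
print(find_moments(transcript, "jammed lathe"))
# -> ['42:55 Tom: Here is how to safely clear a jammed lathe.']
```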
Cheers,
Stephen.
====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================
Jay Kreshel
@Stephen, we use Gong pretty effectively for this purpose. It does a pretty good job of transcribing and assigning names, and it brings forth the calls to action and other key AI-driven data. We also tried Avoma at one point.
Nice! I hadn't heard of either of those tools before; I'll check them out.
====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================
Interesting, Stephen. I wasn't thinking about it from the standpoint of meetings and Zoom calls - but that in itself presents another interesting use case. And there sure are a lot of recorded calls taking up space on servers that I bet are never accessed again.
What I was imagining was a library of "how to" videos, perhaps for field workers. Say you're out there repairing a piece of equipment in a chip fab, and you video the repair, which is uploaded to a searchable video repository indexed by the make and model of the equipment, the type of repair, and so on. The next time someone needs to fix that problem on the same type of equipment, they can "look over your shoulder" via the video and learn how to do it, saving time as well as the expense of screw-ups. Of course, someone would have to vet the videos and determine which ones used the approved procedure and which ones didn't.
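As a rough sketch of what such a repository could look like (every name, model, and URL below is invented for illustration), the key ingredients are structured metadata plus a vetting flag so only approved procedures surface:

```python
# Toy in-memory repository of repair videos, indexed by equipment
# make/model and repair type. A real system would use a database or
# search engine; this only illustrates the data shape.
from dataclasses import dataclass

@dataclass
class RepairVideo:
    url: str             # where the uploaded video lives (invented)
    make: str            # equipment make
    model: str           # equipment model
    repair_type: str     # e.g. "clear_jam", "replace_filter"
    vetted: bool = False # set by a reviewer who confirms the procedure

class VideoRepository:
    def __init__(self):
        self._videos = []

    def add(self, video):
        self._videos.append(video)

    def find(self, make, model, repair_type):
        """Return only vetted videos for this equipment and repair type."""
        return [v for v in self._videos
                if v.vetted
                and (v.make, v.model, v.repair_type)
                == (make, model, repair_type)]

repo = VideoRepository()
repo.add(RepairVideo("https://kb.example/videos/123", "AcmeFab", "X200",
                     "clear_jam", vetted=True))
print(repo.find("AcmeFab", "X200", "clear_jam"))
```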
Something like that, I would think, would be workable. The challenge has always been, as you say, finding the bit at 42:55 that is relevant to the specific question someone in the field has: figuring out which video it's in is hard enough, and navigating quickly to the right bit has been next to impossible thus far.

--
-Tom
Tom Short Consulting (TSC)
+1 415 300 7457
I remember that Oklahoma State developed a searchable video system for the Army on IED identification and disarming. They used the concept of a "knowledge nugget" to keep the videos short (5-15 minutes), so that you didn't have to search through a video much to find what you needed. ...Murray Jennex
Hi all,
- We transcribe videos: videos that you upload directly to the platform, or even YouTube videos.
- We "cut" all the text and index it into meaningful paragraphs.
- We automatically detect and index all the key insights (NER detection).
- We offer multilingual semantic search, so you can find the specific moments in videos with the information you need.
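As a rough approximation of those last two steps (this is only a sketch with an off-the-shelf multilingual embedding model from the sentence-transformers library, not the production system; the paragraphs and timestamps are invented):

```python
# Sketch: multilingual semantic search over transcript paragraphs using
# an off-the-shelf embedding model. Not the actual Flaps implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Paragraphs "cut" from a transcript, each keeping its video timestamp.
paragraphs = [
    ("12:04", "First, power the machine down before opening the guard."),
    ("42:55", "Now I'll show you how to safely clear a jammed lathe."),
]
index = model.encode([text for _, text in paragraphs])

def semantic_search(query, top_k=1):
    """Rank the indexed paragraphs by cosine similarity to the query."""
    q = model.encode([query])[0]
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    order = np.argsort(-scores)[:top_k]
    return [(paragraphs[i][0], paragraphs[i][1]) for i in order]

# A Spanish query can still land on the English moment at 42:55:
print(semantic_search("cómo desatascar un torno atascado"))
```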
In case you are interested in our approach and how we do it, I made this short Loom video for you:
Best!
This reminds me of what Stanley Black & Decker is doing with DeepHow.
Great find, @John_McQuary - this is brilliant! Thanks for sharing. So good to see that AI/ML-powered video is finally getting some traction. (If I were still practicing, I'd use the example you shared in client pitches and talks on KM.)

--
-Tom
Tom Short Consulting (TSC)
+1 415 300 7457
@Stan - maybe @Eudald could demo Flaps for us on an upcoming call and show us some of the customer use cases where it's working well.

--
-Tom
Tom Short Consulting (TSC)
+1 415 300 7457
Stan Garfield
The schedule of monthly calls is full through next April. However, just as special peer assist/discussion sessions were held recently for lessons learned and working out loud, it's fine with me if you want to schedule a special session for this topic and invite Eudald to join you.