Interesting Tom,
I actually think it is still likely to be a failure from an
enterprise point of view, because if you were to capture the
majority of meetings you are likely to get lots of:
- irrelevant discussion
- tasking ("so we've agreed that X will do Y and come back by
Z")
- acronyms and implied knowledge about past events
Additionally:
- meeting contexts will look nearly identical (either a
nondescript meeting table or a Zoom grid) and
- emotions tend to be pretty guarded
So I feel the pickings of the AI in this context will be slim. On
the other hand, I'm sure the system will be a boon for the stock
video and content creator market, since if you're looking for a
quick way to find (context free) "cat runs into chair while chasing
toy mouse" videos, it will have you covered 😀
What would be much more interesting to me is a system that fully
and accurately transcribes videos, including assigning names to
speakers and descriptions of video actions on screen. That's
because the biggest drawback of videos is that you have to watch
the damn things. If you could instead skim through text to find
the bit at 42:55 where you show me exactly how to safely clear a
jammed lathe, that would be a valuable knowledge
tool.
Cheers,
Stephen.
====================================
Stephen Bounds
Executive, Information Management
Cordelta
E: stephen.bounds@...
M: 0401 829 096
====================================
On 21/05/2021 2:01 am, Curtis A. Conley
wrote:
Thanks for sharing. Seems like a natural evolution
of video-conferencing, given where it's at today. Many of the
standard VC/streaming apps have built in auto-captioning and
transcription capabilities, and if you bundle that in with all
of the other asynchronous tools we're using to message and share
knowledge, that could lead to a lot of interesting applications
in the KM space.
Back in the early days of KM, big 4 consulting
firms (I think there were six back then) saw the potential
of KM and started experimenting with various tools and
approaches. And it made sense for them to get onboard
early: their assets were purely knowledge-based and went
down the elevator every night.
Back in 1989, Andersen Consulting hired AI guru
Roger Schank and gave him $30 million to play with and
continue his research (he was at Stanford) hoping for some
breakthroughs they could apply to their business.
One KM-related project he worked on involved conducting
and videoing knowledge elicitation interviews with
experts. The thought was that if we could simply interview
people and video it all, it would capture knowledge in a
way that could then be inventoried, tagged and searched
for future retrieval and re-use. I don't know how much he
spent on it, but word was it was in the millions.
In any case, that didn't work. When I learned about
this effort I was at IBM, and I knew it wouldn't work. We
didn't have the tools to cope with vast amounts of
unstructured data, even when it was in text format, much
less video format.
But maybe now that is about to change. Some MIT
alums are building a startup called Netra around an AI
engine that is supposed to be able to parse video content
and categorize it automagically. Might be a good one to
watch. Who knows? Maybe Schank will be vindicated after
all, and video knowledge elicitations will become a thing
again.
Now MIT
alumnus-founded Netra is using artificial intelligence
to improve video analysis at scale. The company's system
can identify activities, objects, emotions, locations,
and more to organize and provide context to videos in
new ways.
--
-Tom

--
Tom Short Consulting
TSC
+1 415 300 7457