This page lists examples of applications
that could be built with tridbit technology. The page is titled
“Future Uses” because there are still technical issues
to be worked out before any of these can be implemented. The white
paper discusses many of these technical issues. Nonetheless, we
believe tridbit technology will make applications like those listed
below possible in the not too distant future. Some, such as product
information and adaptive technology, we hope to see in the next
several years.
There are a few applications that we’d like to encourage,
and so would be very interested in talking to anyone doing work
in the three areas described below.
Voice Recognition
The first is voice recognition. We attempted to hook Babble up
to a commercial voice recognition package (Via Voice), without
much success. Babble will be much nicer to talk to
if and when reliable voice recognition is available. See
Revisiting Voice Recognition for
how Babble might aid in this endeavor.
Adaptive Technologies
Next is adaptive technologies. Conversational interfaces using tridbit
technology would provide accessibility to the visually impaired.
No doubt there are other ways to use this technology to help those
with disabilities. This is an area in which we are gaining experience
and would be interested in discussing ideas with anyone
who currently develops adaptive software or hardware.
Smart Operating Systems
Last and most challenging, we believe it is about time
someone develops a smart operating system. One that knows about
the programs and information it contains. One that allows the user
to issue commands like “Bring up the spreadsheet that Bill
sent me last week” or “Find the most recent paper I
wrote discussing proper feeding of goldfish and email it to Mark.”
It should also keep track of personal information, answering
questions such as “Do I have any appointments today?” or
“When was the last time Fred called?” We believe this should
be a collaborative effort
that could reside in the public domain, based on one of the various
public domain versions of UNIX. Obviously, this is a long-term project,
with legal issues to be worked out as well as technical ones, not the
least of which would be how to meld public domain software with
the tridbit technology license. We offer some thoughts on how to
do this at the end of the page.
We’ve chosen the projects listed above because we feel they
are extremely important and may not otherwise find commercial backing.
So we’re willing to give them special consideration. The rest
of the list contains other interesting possibilities that might
appeal to various commercial interests. This is by no means an exhaustive
list. If you have an application you think would be appropriate
for tridbit technology, please contact us.
Information Assistants
Applications that require information to be stored and retrieved
are obvious candidates for tridbit technology. This sounds like
typical database applications, but there are some differences, which
are discussed in the FAQ under “How
does tridbit technology compare to conventional relational databases?”
In general, Babble is best suited for storing imprecise or sparse
data where amazing transaction speeds are not critical and natural
language is a good interface.
Thus, Babble-like programs would make excellent personal assistants
where you wanted to be able to intelligently keep track of more
than names, addresses and numbers. Some examples of this type of
application are described below.
Product Information
Babble could easily be trained to become an expert on products such
as appliances, cars or electronic equipment. Much of the training
could be accomplished by simply allowing Babble to read the literature
available for the product. Once Babble has the product information
in its knowledge base, it can answer consumer questions about the
product.
Scheduling and Planning
A future Babble program could keep track of what an individual or
group is doing. By telling Babble what activities are currently
scheduled and what actually takes place, Babble keeps a record of
upcoming events as well as a history of what occurred. This is not
limited to just time slots and meeting facilities. Babble could
intelligently store minutes from the meetings, allowing one to ask
questions of it such as: When did the budget committee discuss the
cost of the Machine Intelligence Conference? How many people were
authorized to go? Who was in attendance? Print me the minutes, etc.
You could also design the program to help plan events, suggest
appropriate guests based on interests and who gets along with whom,
appropriate activities based on what your guests enjoy, and meal
choices based on your guests’ preferences and allergies.
Observations, Recommendations, Opinions, etc.
Student records, employee records, health records, etc., all include
passages of text that may contain very important information. (Although
in some cases we may prefer it not be widely accessible.) Even if
an attempt is made to include it in a conventional database, the
accessibility of such non-normalized data is very limited. There
might be a note in
someone’s medical file stating the patient had extreme withdrawal
symptoms while stopping Prozac. If this information were stored
using tridbit technology, a doctor could retrieve it simply by asking
the program whether the patient had any past negative experiences
with anti-depressants. It would not be necessary to guess
the exact words used in the note, i.e. Prozac. Nor would every entry
dealing with anti-depressants be retrieved, only those representing
a negative experience of this specific patient.
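The kind of concept-level retrieval described above can be sketched with a toy is-a taxonomy standing in for Babble's knowledge base. Everything here, the note fields, the category names, and the ISA table, is an invented illustration, not Babble's actual representation.

```python
# Toy is-a taxonomy: an invented stand-in for Babble's knowledge base.
ISA = {
    "Prozac": "anti-depressant",
    "extreme withdrawal symptoms": "negative experience",
}

# Illustrative structured notes, one per recorded observation.
notes = [
    {"patient": "A", "drug": "Prozac", "effect": "extreme withdrawal symptoms"},
    {"patient": "A", "drug": "Prozac", "effect": "improved mood"},
    {"patient": "B", "drug": "Prozac", "effect": "extreme withdrawal symptoms"},
]

def category(term):
    # Climb one level of the is-a hierarchy; unknown terms map to themselves.
    return ISA.get(term, term)

def negative_experiences(patient, drug_class):
    """Retrieve by meaning: no need to guess the exact words ('Prozac',
    'withdrawal') used in the note, and only this patient's negative
    experiences come back."""
    return [n for n in notes
            if n["patient"] == patient
            and category(n["drug"]) == drug_class
            and category(n["effect"]) == "negative experience"]

print(negative_experiences("A", "anti-depressant"))
```

Note how the query names only the categories ("anti-depressant", a negative experience), yet matches the specific Prozac note for patient A alone.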
If doctors were very consistent about recording their observations
using tridbit technology, we would amass a sizable set of data points.
It would then be possible to have some idea how widespread various
side effects and other conditions are, and even some clues as to
how they relate to other variables.
Information Keeper
The applications listed above are all examples of specialized information
gathering and searching. Babble’s ability to take in information
from natural language, reason with it and respond to questions applies
to any area of knowledge. The only advantage to specializing is
to limit the background knowledge that needs to be instilled in
Babble to operate intelligently in that arena. For example, for
Babble to be helpful in planning events that include meals, it would
be good for Babble to understand that humans eat food, how much
we tend to eat, how we like to structure our meals, just for starters.
There is just a lot of basic information we take for granted about
the world that Babble needs to learn.
In time, Babble’s basic knowledge of the world should allow
it to be a general purpose “Information Keeper”. That
is, you could give it access to any information, and it would be
able to make sense of it, reason with it and respond to any questions
relating to it. At some point in the distant future, you wouldn’t
need specific Babble programs for specific purposes. Perhaps you
might run a single program that shared a global knowledge base augmented
by your own personal information.
Babble may have other advantages in storing information over its
human counterparts, beyond the ability to remember everything it
is told. Babble stores the information it receives without altering
the meaning. This can be difficult for humans to do when they feel
strongly about the subject being discussed. An interesting potential
use for Babble technology would be to store historical documents,
where changes in the use of the language affect the meaning of the
document. Babble would be able to retain the meaning of such documents,
as intended by their authors.
Translation
Babble currently converses in English. We believe its tridbit representation
of meaning is universal to all human languages, and perhaps beyond.
Thus if Babble were trained with a French dictionary and French
syntax rules, it would converse in French. If it used English words
and rules to understand speech and French words and rules to produce
speech, it could in effect be translating from English to French.
Since Babble’s speech production is currently very limited, it
will first need to learn to be more talkative in any language.
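The pipeline sketched above, understanding with one language's words and rules and producing with another's, can be illustrated with a toy interlingua. The word-to-concept tables below are invented for illustration; real tridbits are far richer structures than these flat concept labels.

```python
# English words map to language-neutral concepts (None = function word
# absorbed by the syntax rules); concepts map back out to French.
EN_TO_MEANING = {"Nicki": "NICKI", "went": "GO-PAST", "to": None,
                 "the": None, "store": "STORE"}
MEANING_TO_FR = {"NICKI": "Nicki", "GO-PAST": "est allée",
                 "STORE": "au magasin"}

def understand_english(sentence):
    """English words and rules in: language-neutral concepts out."""
    concepts = [EN_TO_MEANING.get(w.strip("."), None) for w in sentence.split()]
    return [c for c in concepts if c]

def produce_french(concepts):
    """Language-neutral concepts in: French words and rules out."""
    return " ".join(MEANING_TO_FR[c] for c in concepts) + "."

print(produce_french(understand_english("Nicki went to the store.")))
```

The key point is that the middle representation never mentions English or French; swapping either table swaps the language on that side of the pipeline.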
It is also interesting to speculate whether we might be able
to better understand animal communication if we use tridbits as
the underlying information structure that animals seek to communicate,
whether by whale songs or bee dancing.
Robot Brains
Many aspects of tridbit technology make it seem appropriate for
robotics. Robots get feedback from their environment in order to
do their tasks. Babble has a variety of learning mechanisms that
use experience to tune its performance. Tridbits themselves have
the ability to be “reinforced”. While this is not an
area in which we have much expertise, it seems likely that a robot
requiring a lot of flexibility could use Babble’s tridbit structures
to represent and manipulate its environment. And what would be a
more convenient interface than being able to talk to your robot?
Revisiting Voice Recognition
As mentioned previously, the fullest potential for tridbit technology
is entwined in the separate but related technology of voice recognition.
Voice recognition takes the sound produced by human vocal cords
during speech and tries to recognize the words contained in the
sound data. Babble, on the other hand, inputs words as represented
by sequences of letters (not sound data) and determines the underlying
meaning or information contained in the sequence of words.
Like so much of what humans do without even thinking about it,
segmenting speech into individual words is a very difficult task.
If you have never looked at the sound data produced by human speech,
you might expect it to show individual bursts of data for each word.
But we only hear speech that way because of the amazing auditory
system contained within our bodies. There is actually very little
in the sound data to say when one word ends and another starts.
Worse, the sound data for a word never looks exactly the same; it
changes depending on the speaker, the speaker’s enunciation,
the sounds before and after the word and the background noise. So
processing the sound data produced by speech into words is actually
a very difficult task. This is the task that voice recognition programs
try to do.
Because this is such a hard task, there are no voice recognition
programs (that we’ve tried anyway) that do such a good job
that you can say “Hello” to your computer and be as
sure it will hear “Hello” as if you had typed H-E-L-L-O.
A good performance might get 90% of the words correct after substantial
training. Having to correct the input even that often tends to push
you back to the keyboard, where at least you can be certain
that what you type is what you get.
Now that computers can understand natural language, the desire
to input verbally will be greater than ever. Is there anything Babble
can do to help?
Probably. But it has nothing to do with the intense computational
analysis of sound data, which is way outside our area of expertise.
Instead it has to do with limiting and validating the word choices
the voice recognition system comes up with. Babble could do this
in several ways.
Suppose we are having a conversation with Babble that starts with:
Nicki went to the store.
Because we have established a context, certain words
are more likely to occur in the next sentence, including any
of the previously used words, pronouns representing them (e.g., she or
it), and related concepts such as shop, bought, paid, etc. So Babble
can prime the voice recognition system to anticipate these words.
In addition to the conversational context, Babble also has some
sense of what words tend to occur together. So if the next sentence
begins:
She bought …
Babble, just like a human speaker, has some notions about what's
likely to come next. Words like thought, he, around or running are
unlikely while a, some, groceries and supplies are very likely,
although obviously many other words could also go there.
Thus, Babble can provide information to the voice recognition system
to favor the detection of likely words both on a conversational
level and on a word by word level. More importantly, Babble can
reject the output of the voice recognition system if it doesn’t
make sense. For example, the voice recognition system might feed
the following to Babble:
She Boston shoes.
This would not make sense to Babble, so it would reject it and
tell the voice recognition system to try again. There would be situations
where the voice recognition system comes up with incorrect interpretations
that do make sense, such as:
She bought some shoes.
She got some shoes.
She bought Sam shoes.
All of these would make sense to Babble, even if the speaker intended
the first. A more advanced Babble might question the last sentence,
however, wondering who Sam is, if it is unaware of any Sams Nicki
might buy shoes for.
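A minimal sketch of this idea, assuming the recognizer returns several scored hypotheses: nonsensical hypotheses are rejected outright, and the rest are re-scored in favor of words primed by the conversation. The word lists, the scores, and the makes_sense() stand-in are all illustrative assumptions, not a real Babble or recognizer API.

```python
# Words primed by the conversation so far ("Nicki went to the store."):
# previously used words, pronouns for them, and related concepts.
CONTEXT_WORDS = {"nicki", "store", "she", "it", "shop", "bought", "paid"}

def makes_sense(words):
    # Stand-in for Babble's semantic check: a real system would build
    # tridbits from the words and see whether a coherent meaning results.
    # Here we simply reject the known-nonsensical example.
    return "boston" not in words

def rerank(hypotheses):
    """Drop hypotheses that make no sense, then favor those containing
    words primed by the conversation so far."""
    scored = []
    for text, acoustic_score in hypotheses:
        words = text.lower().rstrip(".").split()
        if not makes_sense(words):
            continue  # reject: tell the recognizer to try again
        overlap = sum(1 for w in words if w in CONTEXT_WORDS)
        scored.append((acoustic_score + 0.1 * overlap, text))
    return [text for _, text in sorted(scored, reverse=True)]

hypotheses = [
    ("She Boston shoes.", 0.9),
    ("She bought some shoes.", 0.8),
    ("She bought Sam shoes.", 0.7),
]
print(rerank(hypotheses))  # the nonsensical hypothesis is gone
```

Both surviving hypotheses make sense, so, as noted above, only a more advanced Babble (one that wonders who Sam is) could choose between them; this sketch simply ranks them.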
The current version of Babble would be able to provide much of
this type of information to a voice recognition program. We would
be interested in working with someone to give this a try.
Revisiting Smart Operating Systems
Here we will describe an approach we believe would work well in
developing a smart operating system. Or as we sometimes call it
“the Star Trek computer interface”. Remember, however,
that voice recognition was no problem for the Enterprise’s
computer. Heck, it could even miraculously translate alien languages
after hearing only a few sentences of the new tongue. Obviously
some abilities of the Enterprise’s computer are unrealistic,
but being able to talk to your computer to give it commands and
information, without specifying the precise files and programs to
use, is a very realistic goal. To some degree, Babble can already
do this.
But Babble needs to know more about the programs and information
on its computer host before it can serve as a smart interface to
that computer. It needs to know what actions it can perform and
how, and it needs a way to sense the results of those actions, as
well as any important changes that occur, whether the changes are
a result of a program Babble initiates or not.
Let’s consider the capabilities Babble needs to respond to a command
such as, “Find the most recent paper I wrote discussing proper
feeding of goldfish and email it to Mark.” Babble’s
representation of this command will involve a find event for which
Babble is the subject and a specific paper is the object. Babble
will need to locate the actual paper represented by the paper’s
referent tridbit. Babble can easily learn that files of certain
types contain papers, or any of the other common names we may give
them – document, report, etc. Babble can also process the
paper’s qualifications from the command it was given, i.e.,
the person speaking wrote the paper, the paper discusses the proper
feeding of goldfish, it should be the most recent version. But Babble
needs to know more information about the files on the computer to
locate the file the speaker wants.
The way to do this is not to require that the OS file system be
expanded to include author, subject, etc. Instead the OS should facilitate
a constant logging of events, sort of a stream of awareness that
Babble can constantly tap into, if the user chooses to run
an intelligent interface. Thus when the user, let’s call him
John, was writing the paper, perhaps he did it the old-fashioned way
and just started up his favorite word processor, created a new document,
typed in the desired text and saved it in a file named Goldfish.doc.
As this is taking place, the OS is creating a log of events such
as, John started word processor, John created file Goldfish.doc,
John quit word processor. This allows Babble to store in its own
knowledge base which files John has created and when. Babble will
need to read each document to figure out which is about goldfish
- a little ambitious for the current Babble but something it should
be able to do in the future.
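The event stream described above might look something like the following sketch, in which the interface answers a question from the log alone. The event names and fields are assumptions for illustration, not a proposed standard.

```python
from datetime import datetime

# Each entry is one event in the "stream of awareness"; an intelligent
# interface subscribed to the stream stores what it needs in its own
# knowledge base.
event_log = [
    {"time": datetime(2024, 3, 1, 9, 0),   "user": "John",
     "event": "started", "object": "word processor"},
    {"time": datetime(2024, 3, 1, 9, 5),   "user": "John",
     "event": "created", "object": "Goldfish.doc"},
    {"time": datetime(2024, 3, 1, 10, 30), "user": "John",
     "event": "quit",    "object": "word processor"},
]

def files_created_by(user):
    """Answerable from the log alone: which files a user created, and when."""
    return [(e["object"], e["time"]) for e in event_log
            if e["user"] == user and e["event"] == "created"]

print(files_created_by("John"))
```

Determining that Goldfish.doc is the paper about goldfish would still require reading the document, which is the ambitious part noted above; the log only narrows the candidates to files John created.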
Babble could be even more helpful if John had begun writing his
paper by giving Babble a command such as, “I would like to
write a paper discussing the proper feeding of goldfish”.
He might specify his preferred word processor, but Babble could
assume the usual program. John might tell Babble a name to give
the paper, but this would not actually be necessary. He could give
it other information, such as, “I’d like to distribute this
paper in my ‘Introduction to goldfish’ class.” Then he
could ask Babble later, “Print the paper I wrote for my class,”
and Babble would be able to use that information to print the correct
paper.
In order to be able to email or print the paper as requested, Babble
needs to have the ability to run the program that performs the requested
action in the requested manner. We believe UNIX would currently allow
Babble to start a print job and specify a filename. But what if
John had asked Babble to print the paper on the office printer using
duplex? Babble can only perform the actions using the parameters
that the application makes available. Babble could learn for each
application its peculiarities for specifying parameters, and to
some degree this may be unavoidable. However, it would be desirable
to develop a consistent protocol that would help Babble figure out
the parameters available without extensive training by the user.
“Babble friendly” applications should make any reasonable
parameters a user might want to specify available to Babble, and
publish, in some manner, the method used to invoke them.
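One possible shape for such a protocol, sketched under the assumption that each application publishes a machine-readable manifest of its actions and parameters. All of the names here are invented for illustration; no such standard exists, though the lpr flags shown happen to match common UNIX printing options.

```python
# Hypothetical manifest a "Babble friendly" print service might publish.
manifest = {
    "application": "printer-service",  # invented name
    "actions": [{
        "name": "print",
        "invocation": "lpr",
        "parameters": [
            {"name": "file",    "flag": None, "required": True},
            {"name": "printer", "flag": "-P", "required": False},
            {"name": "duplex",  "flag": "-o sides=two-sided-long-edge",
             "required": False},
        ],
    }],
}

def build_command(action_name, **args):
    """Assemble a command line from the manifest, so the interface need
    not be trained on each application's peculiarities."""
    action = next(a for a in manifest["actions"] if a["name"] == action_name)
    parts = [action["invocation"]]
    for p in action["parameters"]:
        if p["name"] in args:
            if p["flag"]:
                parts.append(p["flag"])
            if args[p["name"]] is not True:  # boolean flags carry no value
                parts.append(str(args[p["name"]]))
    return " ".join(parts)

# "Print the paper on the office printer using duplex":
print(build_command("print", printer="office", duplex=True, file="Goldfish.doc"))
```

With a manifest like this, the question "can this printer do duplex?" becomes a lookup rather than per-application training.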
Thus, what we envision is adding hooks to the OS to allow an intelligent
interface to be installed if the user chooses. The intelligent interface
would be installed as a separate program, which allows the user
to choose the type of interface to run and gets around licensing
issues. We believe the specific mechanism for these hooks should
be worked out in a public forum as an open standard. Security should
obviously be a consideration as this standard is developed. The
hope would be that with competent security in place, an intelligent
interface should add to the security of the OS, not compromise it.
We believe this could be an important project for the future of
UNIX and computing.