As Siri Gets Smarter, How Will We Learn to Trust It?
Serious, competition-driven work is underway to make Siri a better artificial intelligence, including pioneering research into how a future Siri will assess the accuracy of its own information. When the human-machine conversation gets really sophisticated, will Siri be able to judge its own authoritativeness? Will we?
Today, we ask Siri for basic information, and it scans databases and knowledge graphs. The assumption is that the data is accurate: if Siri looks up a sports score, we assume the score that was originally entered is correct.
As Siri becomes more intelligent and conversational, the AI will be drawing on more complex databases and answering more sophisticated inquiries. Siri will need a way to assess the credibility of its own sources. This process, related to Apple’s recent acquisition of Lattice Data, is nicely explained in this article: “Apple is shoring up Siri for its next generation of intelligent devices.”
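To make the idea concrete, here is a minimal sketch (in Swift, since we're in Apple's world) of what a credibility-weighted lookup could look like. Everything here is hypothetical: the source names, the trust scores, and the aggregation rule are illustrative assumptions, not anything from Siri's actual internals.

```swift
import Foundation

// A hypothetical source of facts, with a trust score the assistant
// maintains over time (say, based on the source's past accuracy).
struct Source {
    let name: String
    let trust: Double  // 0.0 (unreliable) ... 1.0 (authoritative)
}

// A candidate answer, as reported by one or more sources.
struct Candidate {
    let answer: String
    let sources: [Source]

    // Aggregate credibility: the chance that at least one supporting
    // source is correct, naively treating sources as independent.
    var credibility: Double {
        1.0 - sources.reduce(1.0) { $0 * (1.0 - $1.trust) }
    }
}

// Pick the best-supported answer, or decline when nothing clears a
// confidence threshold -- the "knowing what it doesn't know" step.
func bestAnswer(from candidates: [Candidate], threshold: Double = 0.8) -> String {
    guard let best = candidates.max(by: { $0.credibility < $1.credibility }),
          best.credibility >= threshold else {
        return "I'm not sure; my sources disagree or are weak."
    }
    return best.answer
}

// Example: two sources report conflicting sports scores.
let feed = Source(name: "SportsFeedA", trust: 0.95)
let blog = Source(name: "FanBlogB", trust: 0.40)
let candidates = [
    Candidate(answer: "Warriors 118, Cavaliers 113", sources: [feed]),
    Candidate(answer: "Warriors 110, Cavaliers 115", sources: [blog]),
]
print(bestAnswer(from: candidates))  // prefers the higher-trust source
```

The toy part is easy; the hard, open question is where those trust scores come from in the first place, which is exactly the problem the Lattice Data work is aimed at.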
The Human Process
Human beings go through their own 18-year process, starting at birth, of learning how to assess the value and credibility of information. By the time a human is a young adult, they’re usually pretty good at connecting reality (as perceived) to their own internal decision, knowledge, and physics models.
We’re very familiar with defects in the physics model. It sometimes starts with the declaration: “Hey, guys! Watch this!” The result is often a Darwin Award. Lately, the phenomenon of fake news has challenged people’s ability to assess the credibility of various sources and interpret what they’re told. What will be the impact when AIs are the source?
As the symbiotic relationship with our AIs progresses, both participants will be challenged to make these judgments. Conceivably, AIs could develop their own perceptions and models that have a subtle, almost imperceptible divergence from those of the human partner. How might that go badly for us?
More worrying is research into the self-judgments people make. Studies of the Dunning-Kruger effect have shown that the most intelligent people have the most well-founded doubts about their knowledge and decision processes, while the less intelligent a person is, the more overconfident they are in their judgments. How will an AI influence that process?
Add to the mix that AIs, when asked to explain their reasoning, could conceivably learn to lie about it, further muddying the waters. That’s why some people are so concerned about how AIs will interact with humans. See: “How to keep AI from killing us all.”
Setting Limits
Currently, no government regulations set any kind of standard for how humans and AIs will interact. Organizations like the Partnership on AI, of which Apple is a member, have stepped into that gap. In its own words, the partnership was:
Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
It remains to be seen what kinds of mistakes will be made, how governments will come to view these AI agents, and how our culture, thinking, and actions may be affected by AIs. One thing is for sure: it will be an adventure.