What Do We Really, Really Want From Siri?



Siri is just good enough that it makes us think about where it could go next. I have questions.





Hey Siri! I’m gonna …





Apple’s Siri is an Artificial Intelligence (AI) agent. It was introduced in 2011 and has gotten incrementally better ever since. But Siri, being an AI, always raises the question: where can she (he) go next? What should Siri be able to do? What ought to be its ultimate manifestation? And what should its limits be?





The event that got me thinking was reported in Particle Debris for February 1st, 2019.





A 13-year-old boy told Siri that he planned a school shooting. Siri’s response wasn’t reported, but the youngster took a screenshot of the conversation and posted it to social media. That’s how his intentions were discovered and reported.





For openers, we discussed this event on TMO’s Daily Observations Podcast for February 4th, 2019. The premise is that, someday, Siri might have to handle such a dangerous situation directly. That raises many questions.





  1. Given that AIs will get much better and more intuitive in the future, should a personal AI assume the responsibility to report a planned, imminent crime?
  2. To whom should Siri report its concerns? Parent? Teacher? Police? All?
  3. Should Siri always obey Asimov’s Three Laws of Robotics? (For example, never allowing its human companion to come to harm.)
  4. Should conversations with Siri be privileged and protected, as with a priest or attorney?
  5. Who gets to decide the answers to the above questions?





Lifting Limits





What are the limits? What do we want them to be?




Today, we excuse Siri’s failures by pointing to limits in AI technology, the hardware, and internet speeds. And no doubt there are artificial constraints placed on Siri as well. For example, if you ask Siri on an Apple Watch “What time is it?” she (he) will answer out loud. But if you ask “What’s my pulse?” Siri will launch the Heart Rate app, show it to you, and remain silent.






This could be because Apple engineers have determined that personal health data should not be spoken aloud, given that bystanders may overhear. Or perhaps Siri doesn’t have direct access to health and fitness data. Or both. As time goes on, should we expect Siri to have judicious access to that data and also to know when it’s permitted to speak out loud?
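To make that concrete, here’s a minimal sketch, in Swift, of the kind of delivery policy I’m describing. Everything here is my own assumption about how such a gate might look; it is not Apple’s actual implementation, and none of these types exist in any real Siri API.

```swift
// A hypothetical sketch of a privacy gate on spoken responses.
// All types and names here are illustrative assumptions, not Apple's code.

enum Sensitivity {
    case general      // e.g. "What time is it?"
    case personal     // e.g. heart rate, medications
}

enum OutputMode {
    case speak(String)        // answer out loud
    case displayOnly(String)  // show on screen, stay silent
}

struct AssistantPolicy {
    /// Decide how to deliver an answer, given its sensitivity and whether
    /// the device judges that only its owner can hear the response.
    func deliver(_ answer: String,
                 sensitivity: Sensitivity,
                 privateContext: Bool) -> OutputMode {
        switch sensitivity {
        case .general:
            return .speak(answer)
        case .personal:
            // Personal health data is voiced only in a private context;
            // otherwise it's shown silently, as the Watch does today.
            return privateContext ? .speak(answer) : .displayOnly(answer)
        }
    }
}

let policy = AssistantPolicy()
print(policy.deliver("Your pulse is 62 bpm.",
                     sensitivity: .personal,
                     privateContext: false))
// displayOnly("Your pulse is 62 bpm.")
```

The hard part, of course, isn’t the switch statement. It’s deciding, wisely, what counts as a private context.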





This article, which I’ve cited in Particle Debris, asks a related question: “Are Home Health Aides The New Turing Test For AI?” That is, can we judge the sophistication of an AI not by the Turing test but rather by how it handles its owner’s medical situations?





What does it mean for a machine to be intelligent? For decades, the common answer to that question has been to pass the “Turing test.” This test, named after famed mathematician Alan Turing, says that if a machine can carry on a conversation with a human via a textual interface such that the human can not tell the difference between a human and machine, then the machine is intelligent….

But there’s a problem: we were able to create chatbots that could pass the Turing test a long time ago. We know that the intelligence they display is narrow and limited….

MIT’s Rodney Brooks proposes new ways of thinking about AGI [Artificial General Intelligence] that go way beyond the Turing test…. what he calls ECW. By this he does not mean a friendly companion robot, but rather something that offers cognitive and “physical assistance that will enable someone to live with dignity and independence as they age in place in their own home.”






In short, robot/AI companions of the future may have to make intelligent, informed, compassionate decisions about the health and welfare of their human companions. Or, at least, confer wisely with another responsible human during a health or law enforcement emergency.
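Sketched in the same hypothetical spirit, that “confer with a responsible human” step might look something like the escalation rule below. Again, every name here is my invention for illustration; no such Siri API exists.

```swift
// A hypothetical escalation rule: the assistant doesn't act alone, it
// routes a detected emergency to an appropriate responsible human.
// All types here are illustrative assumptions, not any real Siri API.

enum Emergency {
    case medical(severity: Int)  // 1 (minor) ... 5 (life-threatening)
    case plannedCrime            // e.g. the school shooting case above
}

enum Contact {
    case caregiver, physician, emergencyServices, lawEnforcement
}

/// Decide which responsible humans the assistant should confer with.
func escalate(_ event: Emergency) -> [Contact] {
    switch event {
    case .medical(let severity) where severity >= 4:
        return [.emergencyServices, .caregiver]
    case .medical:
        return [.caregiver, .physician]
    case .plannedCrime:
        // Question 2 above: parent? teacher? police? all of them?
        return [.caregiver, .lawEnforcement]
    }
}

print(escalate(.medical(severity: 5)))
// [emergencyServices, caregiver]
```

Writing the rule is trivial. Deciding who belongs on each line is exactly the hard question posed by the list above.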






From Creepy to Pleasing Astonishment





Many would call this kind of emergent sophistication creepy, especially since there’s always a temptation for the developer (or those with a warrant) to exploit our most personal chats with an AI against us, or to do so for financial gain.





But assuming we can solve those kinds of problems, I would prefer to transition from creepy to astonished. That is, will we be constantly amazed at how good Siri is getting? Or must we always be generally disappointed by its limitations?






If the goal of AI research is to produce an intelligence that is indistinguishable (magically) from another human being, then we’ll have to grapple with many uncomfortable technical, privacy, and legal decisions about its design. How we approach that will dictate whether our future AIs become astonishingly smart, competent, and responsible, or just perpetually disappointing.






What’s our preference as humans?

