Apple is researching ways to make Siri better at assessing whether you really meant to call it, and at recognizing when you’re done speaking to it.
You’ve done this. You’ve had to hurriedly slap your palm over your Apple Watch because Siri has begun talking. And you’ve had that perplexing moment of trying to fathom out what you could possibly have just said that sounded anything like “Hey, Siri.” Apple wants to change that.
There is also the separate issue of when Siri doesn’t respond, or when you talk to your Watch and the iPhone in your pocket replies instead, but that’s a problem for another day. For now, Apple is focusing on how Siri can decide whether to respond based on how interested you seem.
Apple has previously applied for a patent where the iPhone’s Face ID camera interprets your emotions when you ask Siri to do something. But a new patent application starts with whether you’re even looking at your device when you speak.
The filing both lists criteria for assessing whether you want Siri, and describes what the device should do next.
“[The] process includes determining, based on data obtained using one or more sensors of the electronic device, whether one or more criteria representing expressed user disinterest are satisfied,” says the application.
There are a lot of factors involved in determining this “disinterest,” and, effectively, in deciding when Siri should take you seriously. For instance, a key way of determining whether you’re interested is to have the device’s cameras detect your gaze.
If you’re looking directly at the device when you’ve activated Siri, the iPhone can be pretty certain that you meant to call for it and that you’re going to say a command next. However, you might well say “Hey, Siri,” and then be momentarily distracted by someone or something around you.
So the patent application discusses the length of time that might or might not be reasonable in working out your attention. Apple refers to this as the “characteristic intensity” of a contact, or in other words, of your saying “Hey, Siri.”
“[This] characteristic intensity is, optionally, based on a predefined number of intensity samples,” says Apple, “or a set of intensity samples collected during a predetermined time period… relative to a predefined event (e.g., after detecting the contact).”
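The filing leaves both the sampling window and the aggregation method open, but the idea — collapsing a burst of intensity samples taken after an event into one representative value — can be sketched. Everything here is illustrative: the function name, the 0.5-second window, and the choice of a mean are assumptions, not details from the patent application.

```python
from statistics import mean

def characteristic_intensity(samples, event_time, window=0.5):
    """Collapse intensity samples near an event into one value.

    `samples` is a list of (timestamp, intensity) pairs. The 0.5 s
    window and the use of a mean are guesses -- the filing only says
    the value is based on samples "collected during a predetermined
    time period ... relative to a predefined event."
    """
    # Keep only samples that fall inside the window after the event.
    relevant = [i for (t, i) in samples if event_time <= t <= event_time + window]
    if not relevant:
        return 0.0
    return mean(relevant)
```

So a sample arriving well after the window — say, 0.9 seconds after the trigger — simply wouldn’t count toward the contact’s characteristic intensity.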
The patent application avoids specifying actual durations for this delay, preferring instead to say that it varies depending on the situation. The application spends less time on how long a device should wait, and much more on how it can assess intention or disinterest.
Beyond whether you’re looking at the device, the criteria for concluding that you’re disinterested draw on many different sensors. The device’s accelerometer can tell whether you’re lowering the device or picking it up, for instance.
The system can also determine whether you’ve placed the device face-down on a surface. Similarly, if you have the display covered by your hand, perhaps as you carry it, then the front-facing light sensor can tell that.
That same sensor can also contribute to the device recognizing whether it is in an enclosed space. If you have your iPhone in a pocket or a bag, for instance, you’re less likely to be calling for Siri.
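Pulling those signals together, the decision amounts to checking a set of per-sensor criteria and letting any one of them veto the assistant. This is a minimal sketch of that logic; the field names, the any-criterion-wins rule, and the idea that direct gaze overrides the other signals are all assumptions layered on the filing’s general description, not its actual method.

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    # All field names are illustrative; the filing only says the device
    # uses "one or more sensors" to check criteria of expressed disinterest.
    gaze_on_device: bool   # front camera: user is looking at the screen
    being_lowered: bool    # accelerometer: device moving downward
    face_down: bool        # orientation: display against a surface
    display_covered: bool  # light sensor: hand over the screen
    enclosed: bool         # light sensor: device in a pocket or bag

def seems_disinterested(s: SensorSnapshot) -> bool:
    """Return True if any disinterest criterion is satisfied."""
    if s.gaze_on_device:
        return False  # assumption: direct gaze outweighs the other signals
    return any([s.being_lowered, s.face_down, s.display_covered, s.enclosed])
```

Under this sketch, a phone lying face-down would suppress Siri, while one being looked at would respond even if it were also, say, being lowered.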
This isn’t just about cutting down on the number of times a phrase similar to “Hey, Siri,” is followed by your saying something a little more fruity. Apple is also looking at how it can maximize the performance of its devices.
“Operating a digital assistant requires electric power, which is a limited resource on handheld or portable devices that rely on batteries and on which digital assistants often run,” it says. “Accordingly, it can be desirable to operate a digital assistant in an energy efficient manner.”
So if your iPhone can tell very quickly that you weren’t calling for Siri, it doesn’t have to activate it pointlessly.
This patent application is credited to three inventors, including Rohit Dasari whose previous work includes related patents such as one about how apps can integrate with a digital assistant.