Apple just apologized for how they’ve handled Siri data in the past. Here’s how they’re fixing it.
About a month ago, Apple suspended Siri grading after public outcry over the revelation that it was using human contractors to review sometimes-accidental recordings.
Now they’ve come back with some new policies:
> As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes:
>
> First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.
So there are two kinds of grading:
- Transcription grading, to make sure the speech-to-text is accurate. This requires access to the audio.
- Interaction grading, to make sure Siri responded to the transcribed text as expected. This requires access to the transcript and Siri’s response or actions.
This is an interesting distinction to make, as it means the more-sensitive transcription grading can occur under tighter timeframes or security conditions (and of course that’s what they did; see below).
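To make the distinction concrete, here is a rough sketch of what data each grading task would need. The type and field names are my own invention for illustration, not Apple's actual schema:

```python
from dataclasses import dataclass

# Hypothetical records for the two grading tasks described above.
# All names and fields here are illustrative, not Apple's real schema.

@dataclass
class TranscriptionGradingItem:
    """Checks speech-to-text accuracy, so it needs the raw audio."""
    audio_clip: bytes        # the (sensitive) recording itself
    machine_transcript: str  # what the recognizer produced

@dataclass
class InteractionGradingItem:
    """Checks Siri's response to the transcript; no audio required."""
    machine_transcript: str  # what the user was heard to say
    siri_response: str       # what Siri said or did in reply

# Only the transcription task ever touches audio, which is why it can
# be held to tighter retention and access rules than interaction grading.
item = InteractionGradingItem("what's the weather", "Showing the forecast.")
```

The point of splitting the records this way is that interaction grading can be done at scale on transcripts alone, while the audio-bearing records stay in a smaller, more tightly controlled pipeline.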
> Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
By making transcription grading opt-in, Apple can focus on anonymizing the transcript before it gets reviewed. However, I wonder if they’ve decided where they’ll ask users to opt in for the first time. I think there are quite a few possibilities:
- During device setup, in the onboarding screens.
- After the user’s first Siri request is made.
- After the user inputs feedback on a Siri request: “Would you like to make this reporting automatic for next time?”
- In the Settings app, maybe with a neutral call-to-action in Notification Center like “Review how your Siri data is used”.
> Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.
There’s got to be some way to automatically scrap the majority of accidental invocations without human review, so I would like to think that last sentence is just a hedge.
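One plausible (and entirely speculative) approach: the on-device wake-word detector already produces a confidence score for each trigger, so recordings below some threshold could be discarded before any human ever hears them. A toy sketch, with invented field names and numbers:

```python
# Speculative sketch: drop likely-accidental invocations using a
# wake-word confidence score. The scores, threshold, and record
# shape are all made up for illustration.

def keep_for_grading(recordings, threshold=0.9):
    """Return only the recordings the wake-word detector was confident about."""
    return [r for r in recordings if r["wake_confidence"] >= threshold]

samples = [
    {"id": "a", "wake_confidence": 0.97},  # clear "Hey Siri"
    {"id": "b", "wake_confidence": 0.42},  # probably background noise
    {"id": "c", "wake_confidence": 0.91},
]

kept = keep_for_grading(samples)
# Low-confidence samples never reach a reviewer; they can be deleted
# automatically, with human review reserved for the borderline cases.
```

A filter like this would never be perfect, which may be why Apple's statement promises human deletion as a backstop rather than claiming accidental triggers are caught automatically.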
Other writers also seem satisfied with these changes, in spite of how late they’ve come in Siri’s lifetime.
> It’s clear, concise, and has the benefit of being verifiable once implemented. It’s unfortunate that Siri recordings were being handled this way in the first place, but I appreciate the plain-English response and unambiguous plan for the future.
> These are good changes, but this is how the program should have worked from the day it started. There’s no doubt that Apple failed to live up to its own standards here.
> I am convinced this was a mistake — really, a series of mistakes — on Apple’s part, not an indication that the company’s privacy stance is hypocritical or merely . . . marketing hype. It is therefore not surprising, but satisfying nonetheless, to see Apple address it head-on like this.
Siri (along with some competitors in the voice assistant space) has always lacked some level of transparency; while Google has a screen where you can manage everything they collect about you during day-to-day usage, Apple has only promised that “[when] you turn off both Siri and Dictation, Apple will delete your User Data, as well as your recent voice input data”. Fortunately, that’s changing for the better.