re @andrewparker:
My iPhone auto-corrected “Harvard” to “Garbage”. Well played Apple engineers.
I was wondering how this could happen, and then noticed that each character pair is only 0 to 2 keys apart on the QWERTY keyboard. Perhaps their model is eager to allow QWERTY-local character substitutions.
>>> zip('harvard', 'garbage')
[('h', 'g'), ('a', 'a'), ('r', 'r'), ('v', 'b'), ('a', 'a'), ('r', 'g'), ('d', 'e')]
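To make that concrete, here is a quick sketch that assigns rough, entirely made-up coordinates to the QWERTY keys and prints the distance for each character pair; every substituted pair comes out within about two keys.

# Rough QWERTY key coordinates (x, y); the row offsets are approximate.
QWERTY = {}
for row, letters in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"]):
    for col, ch in enumerate(letters):
        QWERTY[ch] = (col + 0.5 * row, row)

def key_distance(a, b):
    """Euclidean distance between two keys on the approximate layout."""
    (x1, y1), (x2, y2) = QWERTY[a], QWERTY[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

for h, g in zip("harvard", "garbage"):
    print(h, g, round(key_distance(h, g), 2))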
And then almost any language model thinks p("garbage") > p("harvard"), at the very least a unigram model trained on a broad-domain corpus. So if it's a noisy channel-style model, they're underpenalizing the edit distance relative to the LM prior. (Reference: Norvig's noisy channel spelling correction article.)
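Here is a toy noisy-channel scorer in the spirit of Norvig's article; the unigram counts and the per-substitution penalty are invented, just to show how an underweighted channel model lets the more frequent word win.

import math

# Invented unigram counts: "garbage" is far more frequent than "harvard"
# in a broad-domain corpus (these numbers are made up).
UNIGRAM_COUNTS = {"harvard": 2000, "garbage": 60000}
TOTAL = 1000000

def log_prior(word):
    """log p(word) under the toy unigram LM."""
    return math.log(UNIGRAM_COUNTS.get(word, 1) / TOTAL)

def log_channel(typed, intended, per_sub_penalty):
    """log p(typed | intended): a flat penalty per substituted character.
    A real channel model would weight substitutions by keyboard distance."""
    substitutions = sum(t != i for t, i in zip(typed, intended))
    return -per_sub_penalty * substitutions

def best_correction(typed, candidates, per_sub_penalty):
    # Noisy-channel objective: LM prior + channel likelihood.
    return max(candidates,
               key=lambda w: log_prior(w) + log_channel(typed, w, per_sub_penalty))

for penalty in (0.5, 3.0):  # underpenalized vs. reasonably penalized edits
    print(penalty, best_correction("harvard", ["harvard", "garbage"], penalty))
# 0.5 garbage   <- weak edit penalty: the LM prior wins
# 3.0 harvard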
On the other hand, given how insane iPhone autocorrections are, and given the number of times I've seen it delete a perfectly reasonable word I wrote, I'd bet "harvard" isn't even in their LM. (Where the LM is more like just a dictionary; call it quantizing the probabilities to 1 bit if you like.) I think Hal mentioned once that he would gladly give up gigabytes of storage for a better language model that makes iPhone autocorrect not suck. That sounds like the right tradeoff to me.
Language models with high coverage are important, as illustrated in, e.g., one of those Google MT papers. I wish Apple would figure this out too.
from a technical standpoint, is there any reason why iphone autocorrect shouldn’t be as good as google autocorrect?
on the most recent stack exchange podcast they talked about this: apparently a good deal of google's autocorrect was built from people typing one thing, realizing they mistyped, editing, then searching again. google then interprets that as a correction event and records the change. slick.
couldn’t apple just do this times a billion?
Yeah, I bet Apple totally could do this.
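If they did, a crude sketch of that correction-mining trick might look something like this; the log format, time gap, and similarity threshold are all invented.

import difflib

# Hypothetical query log: (user_id, timestamp_seconds, query).
LOG = [
    ("u1", 100.0, "harvrad law school"),
    ("u1", 104.5, "harvard law school"),   # quick re-query with a small edit
    ("u2", 200.0, "weather boston"),
    ("u2", 260.0, "red sox score"),        # different intent, not a correction
]

def correction_events(log, max_gap=15.0, min_similarity=0.8):
    """Pair up consecutive queries by the same user that come quickly
    and differ only slightly; treat them as (typo, correction) pairs."""
    pairs = []
    for (u1, t1, q1), (u2, t2, q2) in zip(log, log[1:]):
        if u1 != u2 or t2 - t1 > max_gap:
            continue
        similarity = difflib.SequenceMatcher(None, q1, q2).ratio()
        if q1 != q2 and similarity >= min_similarity:
            pairs.append((q1, q2))
    return pairs

print(correction_events(LOG))
# [('harvrad law school', 'harvard law school')]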
I thought the iPhone did do a character-by-character unigram LM, combined with the coordinates of the touch on the keyboard as an observation, modeled as a Gaussian centered on the letter you 'intend' to touch.
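As I understand that setup, a minimal sketch would score each candidate key by a Gaussian likelihood of the touch point times a unigram character prior; the key coordinates, noise scale, and character frequencies below are all assumptions.

import math

# Assumed key centers (x, y) in arbitrary keyboard units.
KEY_CENTERS = {}
for row, letters in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"]):
    for col, ch in enumerate(letters):
        KEY_CENTERS[ch] = (col + 0.5 * row, row)

# Invented unigram character frequencies (only a few shown; rest get a floor).
CHAR_FREQ = {"e": 0.12, "t": 0.09, "a": 0.08, "h": 0.06, "g": 0.02, "b": 0.015}

def log_gaussian(touch, center, sigma=0.6):
    """log of an isotropic Gaussian over the touch point, centered on the key."""
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return -(dx * dx + dy * dy) / (2 * sigma * sigma)

def decode_touch(touch):
    """Most likely intended key: Gaussian touch likelihood * unigram char prior."""
    def posterior(ch):
        return math.log(CHAR_FREQ.get(ch, 0.01)) + log_gaussian(touch, KEY_CENTERS[ch])
    return max(KEY_CENTERS, key=posterior)

# A touch landing halfway between 'g' and 'h' gets pulled toward the more
# frequent letter ('h' under these invented frequencies).
print(decode_touch((5.0, 1.0)))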
The real problems with the iPhone keyboard are: (1) it's not personalized at all; the unigram model is clearly some generic one. (2) As far as I can gather it's not looking at the previous word at all, so you get ungrammatical predictions a lot. I think you want to condition on the last word regardless of what else you do.
I don't think they actually need a huge LM. I bet a letter-level 4-gram LM could really go a long way.
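For example, a letter-level 4-gram LM with add-alpha smoothing is only a few lines; the training text here is just a stand-in for a real corpus.

from collections import defaultdict

def train_char_ngrams(text, n=4):
    """Count letter n-grams for p(next char | previous n-1 chars)."""
    counts = defaultdict(lambda: defaultdict(int))
    padded = " " * (n - 1) + text
    for i in range(len(text)):
        context = padded[i:i + n - 1]
        counts[context][padded[i + n - 1]] += 1
    return counts

def next_char_prob(counts, context, ch, alpha=0.1, vocab_size=27):
    """Add-alpha smoothed p(ch | last 3 chars of context), for a 4-gram model."""
    c = counts[context[-3:]]
    total = sum(c.values())
    return (c[ch] + alpha) / (total + alpha * vocab_size)

# Stand-in training text; a real model would use a big corpus.
counts = train_char_ngrams("the harvard yard is near harvard square " * 50)
print(next_char_prob(counts, "harv", "a"))   # high: 'harva...' is familiar
print(next_char_prob(counts, "harv", "x"))   # low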
gaussian model sounds totally awesome!! seriously though, i bet if you train a p(char|intent) model you’ll end up getting a transition model that reconstructs the topology of the keyboard — e.g. take the first two principal components and it might look like a qwerty keyboard. maybe.
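For what it's worth, here is a rough numpy sketch of that idea: simulate Gaussian touches around assumed key centers, estimate p(typed key | intended key), and take the first two principal components of the rows; under these assumptions the 2D projection tends to mirror the keyboard layout, though there's no guarantee.

import numpy as np

rng = np.random.default_rng(0)

# Assumed key centers (same rough layout as above).
keys, centers = [], []
for row, letters in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"]):
    for col, ch in enumerate(letters):
        keys.append(ch)
        centers.append((col + 0.5 * row, row))
centers = np.array(centers, dtype=float)

# Simulate noisy touches and count which key each touch lands nearest to,
# giving an estimate of p(typed key | intended key).
confusion = np.zeros((len(keys), len(keys)))
for i, center in enumerate(centers):
    touches = center + rng.normal(scale=0.7, size=(2000, 2))
    nearest = np.argmin(((touches[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    for j in nearest:
        confusion[i, j] += 1
confusion /= confusion.sum(axis=1, keepdims=True)

# First two principal components of the (centered) confusion rows.
X = confusion - confusion.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
coords = X @ vt[:2].T
for ch, (x, y) in zip(keys, coords):
    print(ch, round(x, 2), round(y, 2))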
ok, screw big fancy LMs, the point is just to look at a small amount of left context