fresh_42 said:
It did it here in Chrome as well, but I remember occasions where I didn't use an article with "field". There are other word pairs that are both valid with two letters switched: lots/lost, brain/Brian (and the spell checker doesn't notice that Brian needs a capital letter).
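Those pairs can be found mechanically: swap each adjacent pair of letters and see whether the result is also in the dictionary. A minimal sketch, using a tiny word set as a stand-in for a real dictionary:

```python
# Find adjacent-letter transpositions of a word that are themselves valid
# words, so a naive spell checker would not flag them.
# The tiny WORDS set here is an assumption, standing in for a full dictionary.
WORDS = {"field", "filed", "lots", "lost", "brain", "brian", "trial", "trail"}

def transposed_neighbors(word):
    """Return valid words formed by swapping one adjacent pair of letters."""
    neighbors = []
    for i in range(len(word) - 1):
        swapped = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if swapped != word and swapped in WORDS:
            neighbors.append(swapped)
    return neighbors
```

Run against a real word list, this turns up many such pairs; note that a purely lexical check accepts "brian" because case is not part of the dictionary lookup.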
The general problems of ortho-lexico-typo-graphical syntactic parsing, with some semantic processing, are in my view interesting, difficult, and not entirely intractable. In the early '80s, a programmer colleague and friend of mine, John Riedl (we were both undergrad students; he was a better programmer, I knew more about language), recruited me to help present a project proposal involving natural language processing to some professors. I asked whether Ken Sayre would be among the professors we'd be presenting to. My friend said, alas, no -- just the Math department. John knew that Prof. Sayre was a very strong AI guy, but he was in the Philosophy department. At the time I was in Prof. Sayre's Epistemology class, and I volunteered to pitch the idea to him. John said let's wait and see whether we get some support from the Math department first -- they were in charge of the computer. I acquiesced; it was his project idea, not mine.
When we got to the presentation, with three senior Math professors in attendance, John made his pitch. The professors quizzed him on some of the technicalities, and he breezed through that. Then they asked what this could be used for, and John turned to me.
I began by saying that John had already outlined how we could handle recognition of parts of speech, and that from there we could build a more general syntactic parser and apply that framework in a grammar validator or, later, an interlingual translator. 'Pie in the sky' seemed to be the immediate consensus reaction. I acknowledged that the semantic components of the more general semiotic problem would be daunting, but I disagreed with, for example, the suggestion that we might be facing the equivalent of a known NP-complete, NP-hard, or otherwise completely intractable problem.
I argued that we could isolate most of the drudge work in a rule-based syntactic-parser subset, which would consult a disambiguator component whenever it couldn't determine which meaning of a multi-meaning word was intended and needed that information to decide which rule to apply. The machine could present a list of possibilities to the human, who would select one, and the machine would carry on.
I could see the professors mentally weighing the computational resources likely to be involved. The verdict: the project was not without merit, but it would probably be too resource-hungry, so it would not be wholly supported. We could pursue it in papers for academic credit, but we would be allowed only such access to computational resources as was necessary to support the claims in our papers -- and we already had that level of access. John didn't want to go forward under those conditions, so we agreed to keep it pretty much a matter for discussion, at least until computational capacity became more abundant.
At the time, the main machine to which we had a share of access was an IBM System/370, and it was fully adequate for all of what we did and most of what we wanted to do. Today my phone has far more memory and processing capacity than that machine had. Even so, none of my personally owned machines today could reproduce its multi-user interactive (time-sharing) and concurrent batch-processing capabilities, especially in the I/O handling areas.