I'm super conflicted about this article. The portion on disabilities is great! But then, we see this:
It’s considered an ‘AI-complete’ problem, something that would require computers that are as fully complex as, and functionally equivalent to, human beings. (Which about five minutes ago was precisely what the term ‘artificial intelligence’ meant, but since tech companies managed to dumb down and rebrand ‘AI’ to mean “anything utilizing a machine-learning algorithm”, the resulting terminology vacuum necessitated a new coinage, so now we have to call machine cognition of human-level complexity ‘AGI’, for ‘artificial general intelligence’.)
This is honestly the first part that's outright, objectively wrong. A quick look at Wikipedia will tell you that the term AGI was already in use in 1997, for example; you can't claim it was made up by tech companies about five minutes ago. And the author returns to this “rebranding” later in the article, so it can't be brushed off as a misguided aside. It makes it clear that the author doesn't really know much about AI, yet was still willing to write an article about it. Mix that with the snarky tone, and it just gets very sad.
It's not that I disagree with what they say about AI, either; I definitely agree with the big conclusions. It's also not like there's nobody with a similar opinion who knows more about AI (Gary Marcus, for instance); the comparison to disabilities is the novel (to me) part. But I just couldn't share this article with anyone. As I write this, the top comment on [email protected] is criticizing the same part of the article, except in less nice words. I don't think the person who wrote that comment will learn anything helpful about disabilities from this article…