> I’ve taught public school teachers, who were incredibly bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math — but it had no effect on how they came across in friendly conversation after hours. They didn’t seem “dopey” or “slow”, they were witty and engaging and warm.
> Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.
> If “human intelligence” is about reasoning ability, the capacity to detect whether arguments make sense, then you simply do not need human intelligence to create a linguistic style or aesthetic that can fool our pattern-recognition apparatus if we don’t concentrate on parsing content.
In the context of the article, these are troubling assertions for the author to be making. They seem to be implying that people who struggle with mathematics are fundamentally less intelligent than those who don't, in a way that cannot be picked up by chatting to them.
If I understand correctly, the author furthermore seems to be saying that a GPT-2 style text generator will sooner be able to match the conversation of such a person than of someone more well-versed in formal mathematical reasoning.
This seems factually wrong to me; I think the author vastly underestimates the complexity of the subconscious processing that people do in order to arrive at the viewpoints they hold, and to transform ideas into coherent speech.
As a related point / analogy: the process by which humans do conscious mathematics (such as arithmetic) is inherently slow and inefficient, whilst at the same time the brain performs incredibly advanced "calculations" in its role as a highly advanced motion-control system.
I posit that the human process for synthesizing ideas is happening primarily in this more complex underlying format, which we are still some way off from being able to simulate (though I do believe we will be able to, eventually).
The author's conclusion seems a bit like seeing that computers are better at arithmetic than humans are and thus concluding that they will soon surpass us in intelligence.
Furthermore, the author's reasoning seems demeaning to people who struggle with mathematics and explicit logical reasoning, and is a few steps from a claim that such a person is inherently less "conscious".
To claim that a strong grasp of formal reasoning is necessary for those in positions of policy and decision making is one thing. But to assert (without substantial evidence to back it up) that someone with low mathematical-logical reasoning ability has speech which is significantly easier to emulate, because it fundamentally contains less content, seems simply a form of intellectual/academic self-aggrandizement.
Thanks for outlining what I largely objected to within this article. I read this as a reductive misapplication of the author's experience in mathematics and machine learning to broader discussions of intelligence, in a way that struck me as the author presupposing a highly culturally dependent interpretation of intelligence. I can understand the motivation for using mathematical and symbolic reasoning abilities as proxies for abstract reasoning, but you're painting with far too broad a brush if you apply that across society, given different levels of emphasis on schooling and mathematical literacy. I suppose part of my core objection is that I believe, based on this article, that the author has a self-centered view of human intelligence that focuses on their own competencies and judges as lesser those who don't have similar expertise.
I’ve got a degree in physics from a top 3 university and I have met individuals more intelligent than me who suffered through various math classes, which I believe was largely due to a lack of experience with the machinery of math or formal reasoning.
> They seem to be implying that people who struggle with mathematics are fundamentally less intelligent than those who don't, in a way that cannot be picked up by chatting to them.
They are saying exactly that.
I wonder what the author's response would be when speaking with an individual vastly more intelligent than himself, who, interrupting the author mid-sentence says, "sorry, this is such a simple concept, I don't converse with imbeciles", and walks off.
It is stronger than that: they don't limit the claim to people with dyscalculia being stupid; rather, simply being ignorant of mathematics is enough to qualify for this lesser form of intelligence.
Being good at math (and here I mean more than just arithmetic) is usually a very good proxy for being good at manipulating abstractions. And, imho, that's at least one of the cornerstones of intelligence.
The issue the GP is pointing at is that the author is implicitly stating that the ability to reason in the abstract is *the* marker of intelligence.
It may be one form of intelligence, but certainly a brilliant writer, a gifted musician, or an exceptional artist can all be considered intelligent, even if their ability to grok logical constructs is limited compared to those who spend their waking hours doing just that, and who have almost certainly been honing this skill for their entire lives.
I think the second essential part of the GP's marker for intelligence is the ability to form sentences that convey information, and to do so efficiently.
Ability at abstract reasoning is invisible to outsiders unless the bot can also transmit its information to others, as well as understand transmissions from others and react appropriately (constructively or entertainingly).
AFAIK, up to now, none of the measures of synthetic intelligence have tried to measure the flow of information into and out of a bot -- its efficiency, coherence, or relevance. I think the rise of master aper bots like GPT-2 and Q&A bots like Watson, which beautifully model syntax and rhythm but no semantics, may finally force this issue to the surface. To wit: information matters more than style.
Frankly, I welcome the arrival of bot overlords like these. Maybe they'll motivate us humans to pay more attention to the meat of what we hear, read, and say, and therein act less robotic ourselves.
Being good at math is also related to having been taught maths properly, and for most normal people, getting enough encouragement to put in the necessary work is also important in building the skill.
I know of other traditions for labelling people stupid that centre around their lacking skill at driving or carpentry, and this "maths ability" tradition seems to be largely the same thing.