Brief Neuropsychological Cognitive Examination Pdf Writer

When children are shown video clips of situations in which people accidentally suffer pain, pain-related neural circuits are activated in their brains. By the age of two, children normally begin to display the fundamental behaviors of empathy by having an emotional response that corresponds with another person's emotional state.


Even earlier, at one year of age, infants show some rudiments of empathy, in the sense that they understand that, just like their own actions, other people's actions have goals. As early as age two, children will sometimes comfort others or show concern for them. Also during the second year, toddlers will play games of falsehood or 'pretend' in an effort to fool others; this requires that the child know what others believe before he or she can manipulate those beliefs. In order to develop these traits, it is essential to expose children to face-to-face interactions and opportunities. According to researchers who used functional magnetic resonance imaging (fMRI), children between the ages of 7 and 12 appear to be naturally inclined to feel empathy for others in pain. Their findings are consistent with previous fMRI studies with adults. The research also found that additional areas of the brain were activated when youngsters saw another person intentionally hurt by another individual, including regions involved in moral reasoning.

Despite being able to show some signs of empathy from as early as 18 months to two years, including attempting to comfort a crying baby, most children do not show a fully fledged theory of mind until around the age of four. Theory of mind involves the ability to understand that other people may have beliefs that are different from one's own, and is thought to involve the cognitive component of empathy. Children usually become capable of passing 'false belief' tasks, considered to be a test for a theory of mind, around the age of four. Individuals with autism often find using a theory of mind very difficult (e.g. Baron-Cohen, Leslie & Frith, 1988). Empathetic maturity is a cognitive structural theory, developed at the Yale University School of Nursing, that addresses how adults conceive or understand the personhood of patients.


The theory, first applied to nurses and since applied to other professions, postulates three levels that have the properties of cognitive structures. The third and highest level is held to be a meta-ethical theory of the moral structure of care.

Those adults operating with level-III understanding synthesize systems of justice and care-based ethics. Individual differences [ ] Empathy in the broadest sense refers to a reaction of one individual to another's emotional state. Recent years have seen increased movement toward the idea that empathy arises from motor neuron imitation. But how do we account for individual differences in empathy? Empathy is not a single unipolar construct but rather a set of constructs.

In essence, not every individual responds uniformly to the same circumstances. The Empathic Concern scale assesses 'other-oriented' feelings of sympathy and concern, and the Personal Distress scale measures 'self-oriented' feelings of personal anxiety and unease. The combination of these scales helps reveal those who might not otherwise be classified as empathetic, and expands the narrow definition of empathy. Using this approach, we can enlarge the basis of what it means to possess empathetic qualities and create a multi-faceted definition. Behavioral and neuroimaging research show that two underlying facets of the personality dimensions Extraversion and Agreeableness (the Warmth-Altruistic personality profile) are associated with empathic accuracy and with increased activity in two brain regions important for empathic processing (the medial prefrontal cortex and the temporoparietal junction).

Genetics [ ] Research suggests that empathy is also partly genetically determined. For instance, carriers of the deletion variant of ADRA2b show more activation of the amygdala when viewing emotionally arousing images.

The gene seems to determine sensitivity to negative emotional information and is also attenuated by the deletion variant of ADRA2b. Carriers of the double G variant of the gene were found to have better social skills and higher self-esteem.

A gene located near LRRN1 on chromosome 3 likewise influences the human ability to read, understand and respond to emotions in others. Neurological basis [ ] Research in recent years has focused on possible brain processes underlying the experience of empathy. For instance, functional magnetic resonance imaging (fMRI) has been employed to investigate the functional anatomy of empathy. These studies have shown that observing another person's emotional state activates parts of the neuronal network involved in processing that same state in oneself, whether it is disgust, touch, or pain. The study of the neural underpinnings of empathy received increased interest following the target paper published by Preston and de Waal, after the discovery of mirror neurons in monkeys that fire both when the creature watches another perform an action and when it performs that action itself. In their paper, they argue that attended perception of the object's state automatically activates neural representations, and that this activation automatically primes or generates the associated autonomic and somatic responses (the idea of perception-action coupling), unless inhibited. This mechanism is similar to the common coding between perception and action.

Another recent study provides evidence of separate neural pathways activating reciprocal suppression in different regions of the brain associated with the performance of 'social' and 'mechanical' tasks. These findings suggest that the neural processes associated with reasoning about the 'state of another person's mind' and the 'causal/mechanical properties of inanimate objects' suppress one another and cannot occur at the same time.

A recent meta-analysis of 40 fMRI studies found that affective empathy is correlated with increased activity in the insula, while cognitive empathy is correlated with activity in the mid-cingulate cortex and the adjacent dorsomedial prefrontal cortex. It has been suggested that mirroring behavior in motor neurons during empathy may help duplicate feelings. Such sympathetic action may afford access to sympathetic feelings for another and, perhaps, trigger emotions of kindness and forgiveness. Empathic anger and distress [ ] Anger [ ] Empathic anger is an emotion, a form of empathic distress.

Empathic anger is felt in a situation where someone else is being hurt by another person or thing. It is possible to see this form of anger as a prosocial emotion. [ ] Empathic anger has direct effects on both helping and punishing desires. Empathic anger can be divided into two sub-categories: trait empathic anger and state empathic anger. The relationship between empathy and anger response towards another person has also been investigated, with two studies finding that the higher a person's perspective-taking ability, the less angry they were in response to a provocation.

Empathic concern did not, however, significantly predict anger response, and higher personal distress was associated with increased anger. Distress [ ] Empathic distress is feeling the perceived pain of another person. This feeling can be transformed into empathic anger or feelings of injustice.

These emotions can be perceived as pro-social, and some say they can be seen as motives for moral behavior. Atypical response [ ] Atypical empathic responses have been associated with autism and with particular personality disorders such as psychopathic, borderline, narcissistic, and schizoid personality disorders, as well as conduct disorder. Lack of empathy has also been associated with sex offenders. It was found that offenders who had been raised in an environment where they were shown a lack of empathy, and who had endured the same type of abuse, felt less empathy for their victims. Autism [ ] The interaction between empathy and autism is a complex and ongoing field of research. Several different factors are proposed to be at play. A study of adults with autistic spectrum disorders found an increased prevalence of alexithymia, a personality construct characterized by the inability to recognize and articulate emotional arousal in oneself or others.

fMRI studies suggest that alexithymia contributes to a lack of empathy. The lack of empathic attunement inherent to alexithymic states may reduce the quality of, and satisfaction with, relationships. Recently, a study has shown that high-functioning autistic adults appear to have a range of responses to music similar to that of neurotypical individuals, including the deliberate use of music for mood management. Clinical treatment of alexithymia could involve a simple associative learning process between musically induced emotions and their cognitive correlates. A study has suggested that the empathy deficits associated with the autism spectrum may be due to significant comorbidity between alexithymia and autism spectrum conditions, rather than a result of social impairment. One study found that, relative to typically developing children, high-functioning autistic children showed reduced activity in the brain's inferior frontal gyrus (pars opercularis) while imitating and observing emotional expressions.

EEG evidence revealed that there was significantly greater mu suppression in the sensorimotor cortex of autistic individuals. Activity in this area was inversely related to symptom severity in the social domain, suggesting that a dysfunctional mirror neuron system may underlie social and communication deficits observed in autism, including impaired theory of mind and empathy. The mirror neuron system is essential for emotional empathy.

Previous studies have suggested that autistic individuals have an impaired theory of mind. Theory of mind is the ability to understand the perspectives of others. The terms cognitive empathy and theory of mind are often used synonymously but, due to a lack of studies comparing theory of mind with types of empathy, it is unclear whether these are equivalent.

Theory of mind relies on structures of the temporal lobe and the prefrontal cortex, while empathy, i.e. the ability to share the feelings of others, relies on the sensorimotor cortices as well as limbic and para-limbic structures. [ ] The lack of clear distinctions between theory of mind and empathy may have resulted in an incomplete understanding of the empathic abilities of those with Asperger syndrome; many reports on the empathic deficits of individuals with Asperger syndrome are actually based on impairments in theory of mind. Studies have found that individuals on the autistic spectrum self-report lower levels of empathic concern, show less or absent comforting responses toward someone who is suffering, and report equal or higher levels of personal distress compared to controls, which may be a result of the high alexithymia found in autistic individuals. The combination in those on the autism spectrum of reduced empathic concern and increased personal distress may lead to an overall reduction of empathy. One professor suggests that those with classic autism often lack both cognitive and affective empathy. However, other research has found no evidence of impairment in autistic individuals' ability to understand other people's basic intentions or goals; instead, data suggest that impairments are found in understanding more complex social emotions and in considering others' viewpoints.

Research also suggests that people with Asperger syndrome may have problems understanding others' perspectives in terms of theory of mind, but that the average person with the condition demonstrates empathic concern equal to, and personal distress higher than, that of controls. The existence of individuals with heightened personal distress on the autism spectrum has been offered as an explanation as to why at least some people with autism would appear to have heightened emotional empathy, although increased personal distress may be an effect of heightened egocentrism; emotional empathy depends on mirror neuron activity (which, as described previously, has been found to be reduced in those with autism), and empathy in people on the autism spectrum is generally reduced.

The empathy deficits present in autism spectrum disorders may be more indicative of impairments in the ability to take the perspective of others, while the empathy deficits in psychopathy may be more indicative of impairments in responsiveness to others’ emotions. These “disorders of empathy” further highlight the importance of the ability to empathize by illustrating some of the consequences of disrupted empathy development. The empathizing-systemizing (E-S) theory suggests that people may be classified on the basis of their capabilities along two independent dimensions, empathizing (E) and systemizing (S). These capabilities may be inferred through tests that measure someone's Empathy Quotient (EQ) and Systemizing Quotient (SQ). Five different 'brain types' can be observed among the population based on the scores, which should correlate with differences at the neural level. In the E-S theory, autism and Asperger syndrome are associated with below-average empathy and average or above-average systemizing.
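As an illustration only, the E-S classification of 'brain types' from EQ and SQ scores can be sketched as a simple decision rule. The difference score and cutoff values below are hypothetical placeholders for demonstration, not the published scoring norms of the actual instruments:

```python
def classify_brain_type(eq: float, sq: float) -> str:
    """Illustrative sketch of the five E-S 'brain types'.

    The difference score and cutoffs here are invented for
    demonstration; the real questionnaires use published norms.
    """
    d = sq - eq  # positive values indicate a systemizing bias
    if d > 30:
        return "Extreme Type S"
    if d > 10:
        return "Type S"
    if d < -30:
        return "Extreme Type E"
    if d < -10:
        return "Type E"
    return "Type B"  # balanced profile

print(classify_brain_type(eq=55, sq=25))  # empathizing-biased profile -> "Type E"
```

The point of the sketch is only that the theory treats E and S as independent axes and derives the type from their relative balance, not from either score alone.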

The E-S theory has been extended into the Extreme Male Brain theory, which suggests that people with an autism spectrum condition are more likely to have an 'Extreme Type S' brain type, corresponding with above-average systemizing but impaired empathy. It has been shown that males are, on average, less empathetic than females.

The Extreme Male Brain (EMB) theory proposes that individuals on the autistic spectrum are characterized by impairments in empathy due to sex differences in the brain: specifically, people with autism spectrum conditions show an exaggerated male profile. A study showed that some aspects of autistic neuroanatomy seem to be extremes of typical male neuroanatomy, which may be influenced by elevated levels of fetal testosterone rather than gender itself. Another study involving brain scans of 120 men and women suggested that autism affects male and female brains differently; females with autism had brains that appeared to be closer to those of non-autistic males than of non-autistic females, yet the same kind of difference was not observed in males with autism. Psychopathy [ ] Psychopathy is a personality disorder partly characterized by antisocial and aggressive behaviors, as well as emotional and interpersonal deficits including shallow emotions and a lack of remorse and empathy. The Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classification of Diseases (ICD) list antisocial personality disorder (ASPD) and dissocial personality disorder, stating that these have been referred to as, or include what is referred to as, psychopathy.

A large body of research suggests that psychopathy is associated with atypical responses to distress cues (e.g. facial and vocal expressions of fear and sadness), including decreased activation of the relevant cortical regions, which may partly account for impaired recognition of, and reduced autonomic responsiveness to, expressions of fear, and for impairments of empathy.

Studies on children with psychopathic tendencies have also shown such associations. The underlying biological substrates for processing expressions of happiness are functionally intact in psychopaths, although less responsive than those of controls. The neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear. Some recent fMRI studies have reported that emotion perception deficits in psychopathy are pervasive across emotions, both positive and negative. A recent study of psychopaths found that, under certain circumstances, they could willfully empathize with others, and that their empathic reaction initiated in the same way it does for controls. Psychopathic criminals were brain-scanned while watching videos of a person harming another individual.

The psychopaths' empathic reaction initiated in the same way it did for controls when they were instructed to empathize with the harmed individual, and the area of the brain relating to pain was activated when the psychopaths were asked to imagine how the harmed individual felt. The research suggests that psychopaths can switch empathy on at will, which would enable them to be both callous and charming.

The team who conducted the study say it is still unknown how to transform this willful empathy into the spontaneous empathy most people have, though they propose it could be possible to bring psychopaths closer to rehabilitation by helping them to activate their 'empathy switch'. Others suggested that, despite the results of the study, it remained unclear whether psychopaths' experience of empathy was the same as that of controls, and also questioned the possibility of devising therapeutic interventions that would make the empathic reactions more automatic. Work conducted by Professor Jean Decety with large samples of incarcerated psychopaths offers additional insights. In one study, psychopaths were scanned while viewing video clips depicting people being intentionally hurt. They were also tested on their responses to seeing short videos of facial expressions of pain. Compared to controls, participants in the high-psychopathy group exhibited significantly less activation in brain regions typically involved in empathic processing, but more activity in other regions. In a second study, individuals with psychopathy exhibited a strong response in pain-affective brain regions when taking an imagine-self perspective, but failed to recruit the neural circuits that were activated in controls during an imagine-other perspective (in particular the ventromedial prefrontal cortex and amygdala), which may contribute to their lack of empathic concern.

It was predicted that people with high levels of psychopathy would have sufficient levels of cognitive empathy but would lack the ability to use affective empathy. People who scored highly on psychopathy measures were less likely to display affective empathy, and a strong negative correlation indicated that higher psychopathy scores corresponded with lower affective empathy.

DANVA-2 results showed that those who scored highly on the psychopathy scale do not lack the ability to recognize emotion in facial expressions. Therefore, individuals with high psychopathy scores do not lack perspective-taking ability, but do lack compassion for the negative incidents that happen to others. Despite studies suggesting deficits in emotion perception and in imagining others in pain, some researchers claim psychopathy is associated with intact cognitive empathy, which would imply an intact ability to read and respond to behaviors, social cues and what others are feeling. Psychopathy is, however, associated with impairment in the other major component of empathy, affective (emotional) empathy, which includes the ability to feel the suffering and emotions of others; those with the condition are therefore not distressed by the suffering of their victims.

Such a dissociation of affective and cognitive empathy has indeed been demonstrated for aggressive offenders. Those with autism, on the other hand, are often impaired in both affective and cognitive empathy.

Other conditions [ ] Research indicates that atypical empathic responses are also correlated with a variety of other conditions. Borderline personality disorder is characterized by extensive behavioral and interpersonal difficulties that arise from emotional and cognitive dysfunction. Dysfunctional social and interpersonal behavior has been shown to play a crucial role in the emotionally intense way people with borderline personality disorder react. While individuals with borderline personality disorder may show their emotions too much, several authors have suggested that they might have a compromised ability to reflect upon mental states (impaired mentalization), as well as an impaired theory of mind.

People with borderline personality disorder have been shown to be very good at recognizing emotions in people's faces, suggesting increased empathic capacities. It is, therefore, possible that impaired cognitive empathy (the capacity for understanding another person's experience and perspective) may account for the tendency toward interpersonal dysfunction in borderline personality disorder, while 'hyper-emotional empathy' [ ] may account for the emotional over-reactivity observed in these individuals. One primary study confirmed that patients with borderline personality disorder were significantly impaired in cognitive empathy, yet showed no sign of impairment in affective empathy. One diagnostic criterion of narcissistic personality disorder is a lack of empathy and an unwillingness or inability to recognize or identify with the feelings and needs of others.

Characteristics of schizoid personality disorder include emotional coldness, detachment, and impaired affect, corresponding with an inability to be empathetic and sensitive towards others. A study conducted by Jean Decety and colleagues at the University of Chicago demonstrated that subjects with aggressive conduct disorder elicit atypical empathic responses when viewing others in pain. Subjects with conduct disorder were at least as responsive as controls to the pain of others but, unlike controls, showed strong and specific activation of the amygdala and ventral striatum (areas that enable a generally arousing effect), yet impaired activation of the regions involved in self-regulation and metacognition (including the medial prefrontal cortex), in addition to diminished processing between the amygdala and the prefrontal cortex. Schizophrenia is characterized by impaired affective empathy, as well as severe cognitive empathy impairments as measured by the Empathy Quotient (EQ). These empathy impairments are also associated with impairments in social cognitive tasks. Individuals with bipolar disorder have been observed to have impaired cognitive empathy and theory of mind, but increased affective empathy.

Despite cognitive flexibility being impaired, planning behavior is intact. It has been suggested that dysfunctions in the prefrontal cortex could result in impaired cognitive empathy, since impaired cognitive empathy has been related to performance on neurocognitive tasks involving cognitive flexibility. Lieutenant Colonel Dave Grossman, in his book On Killing, suggests that military training artificially creates depersonalization in soldiers, suppressing empathy and making it easier for them to kill other human beings. Practical issues [ ].

The capacity to empathize is a revered trait in society. Empathy is considered a motivating factor for unselfish, prosocial behavior, whereas a lack of empathy is related to antisocial behavior. Proper empathic engagement helps an individual understand and anticipate the behavior of another. Apart from the automatic tendency to recognize the emotions of others, one may also deliberately engage in empathic reasoning.

Two general methods have been identified here. An individual may simulate fictitious versions of the beliefs, desires, character traits and context of another individual to see what emotional feelings this provokes. Alternatively, an individual may simulate an emotional feeling and then assess the environment for a suitable reason for that emotional feeling to be appropriate. Some research suggests that people are more able and willing to empathize with those most similar to themselves. In particular, empathy increases with similarities in culture and living conditions. Empathy is also more likely to occur between individuals whose interaction is more frequent (see Levenson and Ruef 1997 and Hoffman 2000: 62).

A measure of how well a person can infer the specific content of another person's thoughts and feelings has been developed by William Ickes (1997, 2003). One anti-empathy critic claims that the emotional engagement of empathy leads to racism and prejudice. [ ] In 2010, a team led by Grit Hein and Tania Singer gave two groups of men wristbands according to which football team they supported. Each participant received a mild electric shock, then watched another go through the same pain. When the wristbands matched, both brains flared: one with pain, the other with empathic pain.

If they supported opposing teams, the observer was found to have little empathy. Bloom argues that the improper use of empathy as a tool can lead to shortsighted actions and parochialism, and he dismisses conventional supportive research findings as products of biased standards. He characterizes empathy as an exhausting process that limits us in morality: if low empathy made for bad people, that unsavoury group would include many who have Asperger’s or autism. He reveals that his own brother is severely autistic.

[ ] There are concerns that the empathizer's own emotional background may affect or distort what emotions they perceive in others (e.g. Goleman 1996: p. 104).

There is evidence that societies promoting individualism have a lower capacity for empathy. Empathy is not a process that is likely to deliver certain judgments about the emotional states of others. It is a skill that is gradually developed throughout life, and which improves the more contact one has with the person with whom one empathizes. Empathizers report finding it easier to take the perspective of another person, and report greater empathic understanding, when they have experienced a similar situation.

Research regarding whether similar past experience makes the empathizer more accurate is mixed. Ethical issues [ ] The extent to which a person's emotions are publicly observable, or mutually recognized as such, has significant social consequences. Empathic recognition may or may not be welcomed or socially desirable. This is particularly the case where we recognize the emotions that someone has towards us during real-time interactions. Based on a metaphorical affinity with touch, philosopher Edith Wyschogrod claims that the proximity entailed by empathy increases the potential vulnerability of either party. The appropriate role of empathy in our dealings with others is highly dependent on the circumstances.

For instance, it has been argued that clinicians or caregivers must remain objective about the emotions of others, and not over-invest their own emotions in the other, at the risk of draining their own resourcefulness. Furthermore, an awareness of the limitations of empathic accuracy is prudent in such situations.

Disciplinary approaches [ ] Philosophy [ ] Ethics [ ] In his 2008 book, writer King presents two reasons why empathy is the 'essence' or 'DNA' of right and wrong. First, he argues that empathy uniquely has all the characteristics we can know about in an ethical viewpoint, including that it is 'partly self-standing', and so provides a source of motivation that is partly within us and partly outside, as moral motivations seem to be. This allows empathy-based judgements to have sufficient distance from a personal opinion to count as 'moral'. His second argument is more practical: he argues, 'Empathy for others really is the route to value in life', and so is the means by which a selfish attitude can become a moral one.

By using empathy as the basis for a system of ethics, King is able to reconcile empathy-based ethics with other accounts of right and wrong. His empathy-based system has been taken up by some and is used to address certain practical problems. In the 2007 book The Ethics of Care and Empathy, philosopher Michael Slote introduces a theory of care-based ethics that is grounded in empathy. His claim is that moral motivation does, and should, stem from a basis of empathic response. He claims that our natural reactions to situations of moral significance are explained by empathy. He explains that the limits and obligations of empathy, and in turn of morality, are natural.

These natural obligations include a greater empathic and moral obligation to family and friends, along with an account of temporal and physical distance. In situations of close temporal and physical distance, and with family or friends, our moral obligation naturally seems stronger to us than it does with strangers at a distance.

Slote explains that this is due to empathy and our natural empathic ties. He further adds that actions are wrong if and only if they reflect or exhibit a deficiency of fully developed empathic concern for others on the part of the agent. Phenomenology [ ] In phenomenology, empathy describes the experience of something from the other's viewpoint, without confusion between self and other. In the most basic sense, this is the experience of the other's body and, in this sense, it is an experience of 'my body over there'.

In most other respects, however, the experience is modified so that what is experienced is experienced as the other's experience; in experiencing empathy, what is experienced is not 'my' experience, even though I experience it. Empathy is also considered to be the condition of intersubjectivity and, as such, the source of the constitution of objectivity. History [ ] Some postmodern historians, such as Keith Jenkins, have in recent years debated whether or not it is possible to empathize with people from the past.

Jenkins argues that empathy enjoys such a privileged position in the present only because it corresponds harmoniously with the dominant discourse of modern society and can be connected to a concept of reciprocal freedom. Jenkins argues that the past is a foreign country, and that since we do not have access to the conditions of bygone ages we are unable to empathize. It is impossible to forecast the effect of empathy on the future. [ ] A past subject may take part in the present by means of the so-called historic present. If we watch from a fictitious past, we can tell the present with the future tense, as happens with the trick of the false prophecy. There is no way of telling the present with the means of the past.

Evolution [ ] An increasing number of studies in animal behavior and neuroscience claim that empathy is not restricted to humans, and is in fact as old as the mammals, or perhaps older. Examples include dolphins saving humans from drowning or from shark attacks.

Professor Tom White suggests that reports of dolphins having three times as many spindle cells (the nerve cells that convey empathy) in their brains as we do might mean that these highly social animals have a great awareness of one another's feelings. A multitude of empathic behaviors has been observed in primates, both in captivity and in the wild, and in particular in bonobos, which are reported as the most empathetic of all the primates. A recent study has demonstrated prosocial behavior elicited by empathy in rodents. Rodents have been shown to demonstrate empathy for cagemates (but not strangers) in pain. One of the most widely read studies on the evolution of empathy, which discusses a neural perception-action mechanism (PAM), is the one by Stephanie Preston and de Waal. This review postulates a bottom-up model of empathy that ties together all levels, from state matching to perspective-taking. For University of Chicago neurobiologist Jean Decety, empathy is not specific to humans.

He argues that there is strong evidence that empathy has deep evolutionary, biochemical, and neurological underpinnings, and that even the most advanced forms of empathy in humans are built on more basic forms and remain connected to core mechanisms associated with affective communication, social attachment, and parental care. Core neural circuits involved in empathy and caring span several cortical and subcortical regions. Context evolution problems [ ] Since all definitions of empathy involve an element of concern for others, all distinctions between egoism and empathy fail at least for beings lacking self-awareness. Since the first mammals lacked a self-aware distinction between self and other, as shown by most mammals failing at mirror tests, the first mammals, and anything more evolutionarily primitive than them, cannot have had a context of default egoism requiring an empathy mechanism to be transcended. There are, however, numerous examples in artificial intelligence research showing that simple reactions can carry out de facto functions the agents have no concept of, so this does not contradict evolutionary explanations of parental care. Such mechanisms would, however, be unadapted to self-other distinction; beings already dependent on some form of behavior benefitting each other or their offspring could never have evolved a form of self-other distinction that required specialized, non-preevolved mechanisms for retaining empathic behavior in the presence of self-other distinction. A fundamental neurological distinction between egoism and empathy therefore cannot exist in any species. Psychotherapy [ ] Heinz Kohut is the main introducer of the principle of empathy in psychoanalysis.

His principle applies to the method of gathering unconscious material. The possibility of not applying the principle is granted in the cure, for instance when you must reckon with another principle, that of reality. In evolutionary psychology, attempts at explaining pro-social behavior often mention the presence of empathy in the individual as a possible variable. While exact motives behind complex social behaviors are difficult to distinguish, the 'ability to put oneself in the shoes of another person and experience events and emotions the way that person experienced them' is the definitive factor for truly altruistic behavior according to Batson's hypothesis. If empathy is not felt, social exchange (what's in it for me?) supersedes pure altruism; but if empathy is felt, an individual will help by actions or by word, regardless of whether it is in their self-interest to do so, and even if the costs outweigh potential rewards. Education: An important target of the learning-by-teaching method (LbT) is to train empathy systematically and in each lesson. Students have to transmit new content to their classmates, so they have to reflect continuously on the mental processes of the other students in the classroom.

This way it is possible to develop step-by-step the students' feeling for group reactions and networking. Rogers pioneered research in effective psychotherapy and teaching which espoused that empathy, coupled with unconditional positive regard or caring for students, and authenticity or congruence, were the most important traits for a therapist or teacher to have. Other research and publications by Tausch, Aspy, Roebuck, Lyon, and meta-analyses by Cornelius-White corroborated the importance of these person-centered traits. Business and management: In the 2009 book Wired to Care, strategy consultant Dev Patnaik argues that a major flaw in contemporary business practice is a lack of empathy inside large corporations. He states that, lacking any sense of empathy, people inside companies struggle to make intuitive decisions and often get fooled into believing they understand their business if they have quantitative research to rely upon.

Patnaik claims that the real opportunity for companies doing business in the 21st Century is to create a widely held sense of empathy for customers, pointing to Nike, Harley-Davidson, and IBM as examples of 'Open Empathy Organizations'. Such institutions, he claims, see new opportunities more quickly than competitors, adapt to change more easily, and create workplaces that offer employees a greater sense of mission in their jobs.

In the 2011 book The Empathy Factor, organizational consultant Marie Miyashiro similarly argues the value of bringing empathy to the workplace, and offers an effective mechanism for achieving this. In studies by the Management Research Group, empathy was found to be the strongest predictor of ethical leadership behavior out of 22 competencies in its management model, and empathy was one of the three strongest predictors of senior executive effectiveness. Measurement.

Research into the measurement of empathy has sought to answer a number of questions: who should be carrying out the measurement? What should pass for empathy, and what should be discounted? What unit of measure (UOM) should be adopted, and to what degree should each occurrence precisely match that UOM? Researchers have approached the measurement of empathy from a number of perspectives.

Behavioral measures normally involve raters assessing the presence or absence of certain either predetermined or ad-hoc behaviors in the subjects they are monitoring. Both verbal and non-verbal behaviors have been captured on video by experimenters such as Truax (1967b).

Other experimenters, including Mehrabian and Epstein (1972), have required subjects to comment upon their own feelings and behaviors, or those of other people involved in the experiment, as indirect ways of signaling their level of empathic functioning to the raters. Physiological responses tend to be captured by elaborate electronic equipment that has been physically connected to the subject's body.

Researchers then draw inferences about that person's empathic reactions from the electronic readings produced (e.g. Levenson and Ruef, 1992; Leslie et al., 2004). Bodily or 'somatic' measures can be looked upon as behavioral measures at a micro level. Their focus is upon measuring empathy through facial and other non-verbally expressed reactions in the empathizer. These changes are presumably underpinned by physiological changes brought about by some form of 'emotional contagion' or mirroring (e.g. Levenson and Ruef, 1992; Leslie et al., 2004).

It should be pointed out that these reactions, whilst appearing to reflect the internal emotional state of the empathizer, could also, if the stimulus incident lasted more than the briefest period, reflect emotional reactions based on thinking the situation through (cognitions) associated with role-taking ('if I were him, I would feel…'). Paper-based indices involve one or more of a variety of methods of responding. In some experiments, subjects are required to watch video scenarios (either staged or authentic) and to make written responses which are then assessed for their levels of empathy (e.g.

Geher, Warner and Brown, 2001 ); scenarios are sometimes also depicted in printed form (e.g. Mehrabian and Epstein, 1972 ). Measures also frequently require subjects to self-report upon their own ability or capacity for empathy, using Likert-style numerical responses to a printed questionnaire that may have been designed to tap into the affective, cognitive-affective or largely cognitive substrates of empathic functioning. Some questionnaires claim to have been able to tap into both cognitive and affective substrates (e.g.

Davis, 1980). More recent paper-based tools include the Empathy Quotient (EQ) created by Baron-Cohen and Wheelwright, which comprises a self-report questionnaire consisting of 60 items.

For the very young, picture or puppet-story indices for empathy have been adopted to enable even very young, pre-school subjects to respond without needing to read questions and write answers (e.g. Denham and Couchoud, 1990). Dependent variables (variables that are monitored for any change by the experimenter) for younger subjects have included self-reporting on a 7-point smiley face scale and filmed facial reactions (Barnett, 1984). A certain amount of confusion exists about how to measure empathy. This may be rooted in another problem: deciding what empathy is and what it is not. In general, researchers have until now been keen to pin down a singular definition of empathy which would allow them to design a measure to assess its presence in an exchange, in someone's repertoire of behaviors or within them as a latent trait. As a result, they have frequently been forced to ignore the richness of the empathic process in favor of capturing surface, explicit self-report or third-party data about whether empathy between two people was present or not.

In most cases, instruments have unfortunately only yielded information on whether someone had the potential to demonstrate empathy (Geher et al., 2001). Gladstein (1987) summarizes the position, noting that empathy has been measured from the point of view of the empathizer, the recipient of empathy and the third-party observer. He suggests that since the multiple measures used have produced results that bear little relation to one another, researchers should refrain from making comparisons between scales that are in fact measuring different things. He suggests that researchers should instead stipulate what kind of empathy they are setting out to measure rather than simplistically stating that they are setting out to measure the unitary phenomenon 'empathy'; a view more recently endorsed by Duan and Hill (1996). In the field of medicine, a measurement tool for carers is the Jefferson Scale of Physician Empathy, Health Professional Version (JSPE-HP). The Interpersonal Reactivity Index (IRI) is the only published measurement tool accounting for a multi-dimensional assessment of empathy, consisting of a self-report questionnaire of 28 items, divided into four 7-item scales covering the subdivisions of affective and cognitive empathy.

Other animals: Research suggests that other species are capable of empathy. Many instances of empathy have been recorded across many species, including (but not limited to) canines, felines, dolphins, primates, rats and mice. In animals, empathy-related responding could have an ulterior motive such as survival, the sharing of food, companionship and pack-oriented mentality. It is certainly difficult to understand an animal's intention behind an empathic response. Many researchers maintain that applying the term empathy to animal behavior in general is an act of anthropomorphism.

Researchers Zanna Clay and Frans de Waal studied the socio-emotional development of the bonobo. They focused on the interplay of numerous skills such as empathy-related responding, and on how different rearing backgrounds of juvenile bonobos affected their response to stressful events related to themselves (loss of a fight) and to stressful events of others. It was found that the bonobos sought out body contact as a coping mechanism with one another. A finding of this study was that the bonobos sought out more body contact after watching a distressing event happen to other bonobos than after an individually experienced stressful event.

Mother-reared bonobos, as opposed to orphaned bonobos, sought out more physical contact after a stressful event happened to another. This finding shows the importance of mother-child attachment and bonding, and how it may be crucial to successful socio-emotional development such as empathic-like behaviors. Empathic-like responding has been observed in chimpanzees in various aspects of their natural behaviors. For example, chimpanzees are known to spontaneously contribute comforting behaviors to victims of aggressive behavior in natural and unnatural settings, a behavior recognized as consolation. Researchers Teresa Romero and co-workers observed these empathic and sympathetic-like behaviors in chimpanzees in two separate outdoor-housed groups. The act of consolation was observed in both groups of chimpanzees. This behavior is also found in humans, and particularly in human infants.

Another similarity between chimpanzees and humans is that empathic-like responding was disproportionately provided to kin. Although comforting toward non-family chimpanzees was also observed, as with humans, chimpanzees directed the majority of comfort and concern to close or loved ones. Another similarity is that females provided more comfort than males on average. The only exception was that high-ranking males showed as much empathy-like behavior as their female counterparts, which is believed to reflect policing-like behavior and the authoritative status of high-ranking male chimpanzees. It is thought that species possessing a more intricate and developed prefrontal cortex have a greater capacity for empathy. Empathic and altruistic responses have, however, also been found in sand-dwelling Mediterranean ants.

Researcher Hollis studied sand-dwelling Mediterranean ants and their rescue behaviors by ensnaring ants from a nest in nylon threads and partially burying them beneath the sand. The ants not ensnared proceeded to attempt to rescue their nest mates by digging, pulling limbs and transporting sand away from the trapped ant, and, when these efforts remained unfruitful, began to attack the nylon thread itself, biting and pulling apart the threads. Similar rescue behavior was found in other sand-dwelling Mediterranean ants, but only some species showed the same rescue behaviors of transporting sand away from the trapped victim and directing attention toward the nylon thread.

It was observed in all ant species that rescue behavior was only directed toward nest mates. Ants of the same species from different nests were treated with aggression and were continually attacked and pursued, which speaks to the depth of ants' discriminative abilities. This study raises the possibility that if ants have the capacity for empathy and/or altruism, these complex processes may be derived from primitive and simpler mechanisms.

Dogs have been hypothesized to share empathic-like responding toward humans. Researchers Custance and Mayer put individual dogs in an enclosure with their owner and a stranger. When the participants were talking or humming, the dog showed no behavioral changes; however, when the participants pretended to cry, the dogs oriented their behavior toward the person in distress, whether it was the owner or the stranger. The dogs approached the crying participants in a submissive fashion, sniffing, licking and nuzzling the distressed person. The dogs did not approach the participants in their usual form of excitement, with tail wagging or panting.

Since the dogs did not direct their empathic-like responses only towards their owner, it is hypothesized that dogs generally seek out humans showing distressing body behavior. Although this could insinuate that dogs have the cognitive capacity for empathy, this could also mean that domesticated dogs have learned to comfort distressed humans through generations of being rewarded for that specific behavior.

When witnessing chicks in distress, domesticated hens show emotional and physiological responding. Researchers Edgar, Paul and Nicol found that in conditions where the chick was susceptible to danger, the mother hen's heart rate increased, vocal alarms were sounded, personal preening decreased and body temperature increased. This responding happened whether or not the chick itself felt it was in danger. Mother hens experienced stress-induced hyperthermia only when the chick's behavior correlated with the perceived threat. Animal maternal behavior may be perceived as empathy; however, it could be guided by the evolutionary principles of survival and not emotionality.

I. I write a lot about the importance of IQ research, and I try to debunk pseudoscientific claims that IQ “isn’t real” or “doesn’t matter” or “just shows how well you do on a test”.

IQ is one of the best-studied ideas in psychology, one of our best predictors of job performance, future income, and various other forms of success, etc. But every so often, I get comments/emails saying something like “Help! I just took an IQ test and learned that my IQ is x! This is much lower than I thought, and so obviously I will be a failure in everything I do in life.

Can you direct me to the best cliff to jump off of?” So I want to clarify: IQ is very useful and powerful for research purposes. It’s not nearly as interesting for you personally. How can this be?

Consider something like income inequality: kids from rich families are at an advantage in life; kids from poor families are at a disadvantage. From a research point of view, it’s really important to understand this is true. A scientific establishment in denial that having wealthy parents gave you a leg up in life would be an intellectual disgrace. Knowing that wealth runs in families is vital for even a minimal understanding of society, and anybody forced to deny that for political reasons would end up so hopelessly confused that they might as well just give up on having a coherent world-view. From a personal point of view, coming from a poor family probably isn’t great but shouldn’t be infinitely discouraging.

It doesn’t suggest that some kid should think to herself “I come from a family that only makes $30,000 per year, guess that means I’m doomed to be a failure forever, might as well not even try”. A poor kid is certainly at a disadvantage relative to a rich kid, but probably she knew that already long before any scientist came around to tell her. If she took the scientific study of intergenerational income transmission as something more official and final than her general sense that life was hard – if she obsessively recorded every raise and bonus her parents got on the grounds that it determined her own hope for the future – she would be giving the science more weight than it deserves. So to the people who write me heartfelt letters complaining about their low IQs, I want to make two important points. First, we’re not that good at measuring individual IQs.

Second, individual IQs aren’t that good at predicting things. Start with the measurement problems. People who complain about low IQs (not to mention people who boast about high IQs) are often wildly off about the number. According to the official studies, IQ tests are rarely wrong. The standard error of measurement is somewhere between 3 and 7 points.

Call it 5, and that means your tested IQ will only be off by 5+ points 32% of the time. It’ll only be off by 10+ points 5% of the time, and really big errors should be near impossible. In reality, I constantly hear about people getting IQ scores that don’t make any sense. Here’s a pretty standard entry in the “help, my IQ is so low” genre: When I was 16, as a part of an educational assessment, I took both the WAIS-IV and Woodcock Johnson Cognitive Batteries.

My mother was curious as to why I struggled in certain subjects throughout my educational career, particularly in mathematical areas like geometry. I never got a chance to have a discussion with the psychologist about the results, so I was left to interpret them with me, myself, and the big I known as the Internet – a dangerous activity, I know. This meant two years to date of armchair research, and subsequently, an incessant fear of the implications of my below-average IQ, which stands at a pitiful 94. I still struggle in certain areas of comprehension. I received a score of 1070 on the SAT (540 Reading & 530 Math), and am barely scraping by in my college algebra class. Honestly, I would be ashamed if any of my coworkers knew I barely could do high school-level algebra. This person thinks they’re reinforcing their point by listing two different tests, but actually a 1070 on the SAT corresponds to an IQ of about 104, a full ten points higher. Based on other things in their post – their correct use of big words and complicated sentence structure, their mention that they work a successful job in cybersecurity, the fact that they read a philosophy/psychology subreddit for fun – I’m guessing the 104 is closer to the truth.
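The SAT-to-IQ conversion above can be sketched as percentile matching between two normal curves. The SAT mean and SD below (1000 and 200) are my round-number assumptions for an older composite, not figures from the post; real conversion tables vary by test year, so treat the result as a rough check only.

```python
# Percentile-matching sketch: find where a score sits on the (assumed normal)
# SAT curve, then read off the same position on the IQ curve (mean 100, SD 15).
# sat_mean and sat_sd are assumed round numbers, not official statistics.
def sat_to_iq(sat, sat_mean=1000, sat_sd=200, iq_mean=100, iq_sd=15):
    z = (sat - sat_mean) / sat_sd  # standardized position on the SAT curve
    return iq_mean + z * iq_sd     # same position on the IQ curve

print(sat_to_iq(1070))  # 105.25 -- in the neighborhood of the post's "about 104"
```

Because both scales are treated as normal, the conversion is linear; the interesting part is how sensitive the answer is to the assumed SAT mean and SD.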

From the comments on the same Reddit thread: Interesting, I hope more people who have an avg. or low IQ post. Personally I had an IQ of 90 or so, but the day of the test I stayed up almost the entire night, slept maybe two hours, and as a naive caffeine user I had around 500 mg caffeine.

Maybe low IQ people do that. I did IQTest.dk Raven’s test on impulse after seeing a video of Peterson’s regarding the importance of IQ, not in a very focused mode, almost ADHD like with rumination and I scored 108, but many claim low scores by around 0.5-1 SD, so that would put me in 115-123. I also am vegan, so creatine might increase my IQ by a few points. I think I am in the 120’s, but low IQ people tend to overestimate their IQ, but at least I am certainly 108 non-verbally, which is pretty average and low. The commenter is right that IQtest.dk usually underestimates scores compared to other tests.

But even if we take it at face value, his first score was almost twenty points off. By the official numbers, that should only happen once in every 15,000 people.

In reality, someone posts a thread about it on Reddit and another person immediately shows up to say “Yeah, that happened to me”. Nobel-winning physicist Richard Feynman famously scored “only” 124 on an IQ test in school – still bright, but nowhere near what you would expect of a Nobelist. Some people have argued that it might have been biased towards measuring verbal rather than math abilities – then again, Feynman’s autobiography (admittedly edited and stitched together by a ghostwriter) sold 500,000 copies and made the New York Times bestseller list. So either his tested IQ was off by at least 30 points (supposed chance of this happening: 1/505 million), or IQ isn’t real and all of the studies showing that it is are made up by lizardmen to confuse us. In either case, you should be less concerned if your own school IQ tests seem kind of low.
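All of the "how often should this happen" figures above fall out of the normal distribution, assuming (as the reliability literature does) normally distributed measurement error. A quick check with an SEM of 5:

```python
import math

def two_sided_tail(k):
    """P(|error| > k standard errors) for normally distributed error."""
    return math.erfc(k / math.sqrt(2))

sem = 5  # the post's "call it 5" standard error of measurement
for points in (5, 10, 20, 30):
    p = two_sided_tail(points / sem)
    print(f"off by {points}+ points: {p:.2e}, about 1 in {1 / p:,.0f}")
```

The 5-point figure is the post's 32%, the 10-point figure its 5%, the 20-point figure its "once in every 15,000", and the 30-point figure its "1/505 million" (the small differences are rounding).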

I don’t know why there’s such a discrepancy between the official reliability numbers and the ones that anecdotally make sense. My guess is that the official studies administer the tests better somehow. They use professional test administrators instead of overworked school counselors. They give them at a specific time of day instead of while the testee is half-asleep.

They don’t let people take a bunch of caffeine before the test. They actually write the result down in a spreadsheet they have right there instead of trusting the testee to remember it accurately. In my own field, official studies diagnose psychiatric diseases through beautiful Structured Clinical Interviews performed to exacting guidelines. Then real doctors diagnose them by scrawling a label in big letters on the top of the chart.

If psychometrics is at all similar, the clashing numbers aren’t much of a mystery. But two other points might also be involved. First, on a population level IQ is very stable with age. In a study of 87,498 Scottish children, age 11 IQ and adult IQ correlated at 0.66, about as strong and impressive a correlation as you’ll ever find in the social sciences. But “correlation of 0.66” is also known as “only predicts 44% of the variance”. On an individual level, it is totally possible and not even that surprising to have an IQ of 100 at age 11 but 120 at age 30, or vice versa. Any IQ score you got before high school should be considered a plausible prediction about your adult IQ and nothing more.
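What r = 0.66 buys you for one person can be made concrete with the standard bivariate-normal regression formulas. Treating both scores as being on the usual IQ scale (mean 100, SD 15) is a simplification I'm assuming, not something the Scottish study reports directly:

```python
import math

R, MEAN, SD = 0.66, 100, 15  # r from the study; IQ-scale scores assumed

def predict_adult_iq(child_iq):
    """Best linear guess of adult IQ from an age-11 score, plus leftover spread."""
    expected = MEAN + R * (child_iq - MEAN)   # regression toward the mean
    residual_sd = SD * math.sqrt(1 - R * R)   # spread left around that guess
    return expected, residual_sd

expected, spread = predict_adult_iq(120)
print(f"age-11 IQ 120 -> expected adult IQ {expected:.1f}, +/- {spread:.1f}")
```

An age-11 score of 120 predicts an adult score of about 113 with a residual SD of about 11 points, so ending up at 100 (or at 130) is only a bit more than one residual SD away from the prediction, exactly the "totally possible and not even that surprising" range.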

Second, the people who get low IQ scores, are shocked, find their whole world tumbling in on themselves, and desperately try to hold on to their dream of being an intellectual – are not a representative sample of the people who get low IQ scores. The average person who gets a low IQ score says “Yup, guess that would explain why I’m failing all my classes”, and then goes back to beating up nerds. When you see someone saying “Help, I got a low IQ score, I’ve double-checked the standard deviation of all of my subscores and found some slight discrepancy but I’m not sure if that counts as Bayesian evidence that the global value is erroneous”, then, well – look, I wouldn’t be making fun of these people if I didn’t constantly come across them. You know who you are. Just for fun, I analyzed the lowest IQ scores in my collection of SSC/LW surveys. I was only able to find three people who claimed to have an IQ ≤ 100 plus gave SAT results.

All three had SAT scores corresponding to IQs in the 120s. I conclude that at least among the kind of people I encounter and who tend to send me these emails, IQ estimates are pretty terrible. This is absolutely consistent with population averages of thousands of IQ estimates still being valuable and useful research tools. It just means you shouldn’t use it on yourself. Statistics is what tells us that almost everybody feels stimulated on amphetamines. Reality is my patient who consistently goes to sleep every time she takes Adderall. Neither the statistics nor the lived experience are wrong – but if you use one when you need the other, you’re going to have a bad time.

The second problem is that even if you avoid the problems mentioned above and measure IQ 100% correctly, it’s just not that usefully predictive. Isn’t that heresy?! Isn’t IQ the most predictive thing we have? Doesn’t it affect every life outcome as proven again and again in well-replicated experiments? I’m not denying any of that. I’m saying that things that are statistically true aren’t always true for any individual. Once again, consider the analogy to family transmission of income.

Your parents’ socioeconomic status correlates with your own at about r = 0.2 to 0.3, depending on how you define “socioeconomic status”. By coincidence, this is pretty much the same correlation found for IQ and socioeconomic status. Everyone knows that having rich parents is pretty useful if you want to succeed. But everyone also knows that rich parents aren’t the only thing that goes into success. Someone from a poor family who tries really hard and gets a lot of other advantages still has a chance to make it. A sociologist or economist should be very interested in parent-child success correlations; the average person trying to get ahead should just shrug, realize things are going to be a little easier/harder than they would have been otherwise, and get on with their life. And this isn’t just about gaining success by becoming an athlete or musician or some other less-intellectual pursuit.

Chess talent is correlated with IQ at about the same strength as income. IQ is some complicated central phenomenon that contributes a little to every cognitive skill, but it doesn’t entirely determine any cognitive skill. It’s not just that you can have an average IQ and still be a great chess player if you work hard enough – that’s true, but it’s not just that. It’s that you can have an average IQ and still have high levels of innate talent in chess. It’s not quite as likely as if you have a high IQ, but it’s very much in the range of possibility.
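A quick Monte Carlo makes "very much in the range of possibility" concrete. I'm assuming r = 0.24 (roughly the income-range correlation mentioned above, used as a stand-in) and bivariate-normal scores; neither number comes from a chess study:

```python
import math
import random

def share_above_average_talent(r, n=200_000, seed=0):
    """Among below-average-IQ people, the fraction with above-average talent,
    simulated from a bivariate normal with correlation r."""
    rng = random.Random(seed)
    hits = trials = 0
    for _ in range(n):
        iq = rng.gauss(0, 1)
        # mix in independent noise so that corr(iq, talent) = r
        talent = r * iq + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        if iq < 0:
            trials += 1
            hits += talent > 0
    return hits / trials

print(f"{share_above_average_talent(0.24):.0%}")  # roughly 42%
```

So with a correlation in this range, roughly four in ten below-average-IQ people still have above-average talent in the correlated skill, which is exactly the individual-level shakiness at issue.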

And then you add in the effects of working hard enough, and then you’re getting somewhere. Here is a table of professions by IQ, a couple of decades out of date but probably not too far off (cf. discussion). I don’t know how better to demonstrate this idea of “statistically solid, individually shaky”.

On a population level, we see that the average doctor is 30 IQ points higher than the average janitor, that college professors are overwhelmingly high-IQ, and we think: yeah, this is about what we would hope for from a statistic measuring intelligence. But on an individual level, we see that below-average IQ people sometimes become scientists, professors, engineers, and almost anything else you could hope for. I’m kind of annoyed I have to write this post. After investing so much work debunking IQ denialists, I feel like this is really – I don’t know – diluting the brand. But I actually think it’s not as contradictory as it looks, that there’s some common thread between my posts arguing that no, IQ isn’t fake, and this one. If you really understand the idea of a statistical predictor – if you get it at a fundamental level – then social science isn’t scary.

You can read about IQ, or heredity, or stereotypes, or gender differences, or whatever, and you can say – ah, there’s a slight tendency for one thing to correlate with another thing. Then you can go have dinner. If you don’t get that, then the world is terrifying.

Someone’s said that IQ “correlates with” life outcomes? What the heck is “correlate with”? Did they say that only high-IQ people can be successful? That you’re doomed if you don’t get the right score on a test? And then you can either resist that with every breath you have – deny all the data, picket the labs where it’s studied, make up silly theories about “emotional intelligence” and “grit” and what have you. Or you can surrender to the darkness, at least have the comfort of knowing that you accept the grim reality as it is. Imagine an American who somehow gets it into his head that the Communists are about to invade with overwhelming force.

He might buy a bunch of guns, turn his house into a bunker, and start agitating that Communist sympathizers be imprisoned to prevent them from betraying the country when the time came. Or he might hang a red flag from his house, wear a WELCOME COMMUNIST OVERLORDS T-shirt, and start learning Russian. These seem like opposite responses, but they both come from the same fundamental misconception. A lot of the culture war – on both sides – seems like this.

I don’t know how to solve this except to try, again and again, to install the necessary gear and convince people that correlations are neither meaningless nor always exactly 1.0. So please: study the science of IQ. Use IQ to explain and predict social phenomena.

Work on figuring out how to raise IQ. Assume that raising IQ will have far-ranging and powerful effects on a wide variety of social problems. Just don’t expect it to predict a single person’s individual achievement with any kind of reliability. Especially not yourself. Sadly, the human brain really, really wants to cast to boolean.

There’s either a 100% correlation or there’s 0% correlation. Article after article has persuaded many that it’s not 0%, so now their brain is continuously whispering to them that the correlation is 100%. Sure, they understand that IQ is not destiny on an individual basis, but the problem is that their brain *knows* it is. And facts and some logic are not going to stop their brain from performing what it was built to do. And in doing so, destroy them. Oh Scott, maybe you should consult with a psychologist from time to time. There are a thousand reasons why someone might end up with an IQ underestimate, but really no reason for an overestimate (except cheating or Clever Hans administration, I suppose).

A decent interpretation of a battery should include caveats and explanations, point out significant intra- and intersubtest scatter, note significant observations about test behavior, etc. Too often they don’t, but we’re all overworked these days. Composite variables are always better predictors than discrete variables (of course), because of the magic of canceling error terms; thus verbal subtests will be more stable predictors than nonverbal tests, especially in areas like engineering (where ceiling effects are also at play). And maybe you should reconsider emphasizing IQ research quite so much.

Those studies do play well with people’s just-world fallacy. Well, except (as you’ve noticed) when they don’t.

VV_Vv: An IQ test that produces variable results on the basis of idiosyncratic interests has a validity problem. What you’re saying about unusually good days, cognitively, suggests there’s something undermining your performance that maybe you should see your doctor about.

If random guesses pay off, your test has a validity problem. Nancy, ADHD is a poorly understood condition that is independent of IQ. If you’re an intelligent person who finds cognitive problems stimulating, you’re likely to do well in many parts of an IQ test (though intersubtest scatter is likely to be an issue, because you’re unlikely to find all associated tasks interesting). This also can apply well to academic performance, though the same caveat (is this particular task stimulating to you?) applies. People with ADHD tend to do either very well or very poorly on processing speed tasks, as a function of what you happen to find stimulating. And they either read voraciously (minority), or never. Any given individual is much more likely to score lower than their “real” IQ than to score higher.

There is almost nothing a normal person can do to *significantly* increase their IQ in the short term. Maybe one of the nootropic stacks can get you four or five points if you’re lucky, or have focus problems or whatever. But smoke a bowl before going in, or have a couple of Long Island iced teas, or catch influenza or rhinovirus, or sit in a room that is too hot or too noisy, or have someone sitting behind you who smells REALLY bad, etc., and your score can suffer a lot.

Actual measurement error is likely to be +/- about the same, but local conditions can do VERY little to help and a lot to hurt. Which I suspect is what GeneralDisarray is getting at. What you’re saying implies that the likelihood of a false positive and a false negative are equal. Your intuition about IQ testing, and testing generally, is flawed. There are more ways to get lucky than getting a false positive on one of the test questions, or ‘samples.’ Even on types of tests where it is impossible to get a question right without knowing the answer, overestimation of your abilities is not only possible but will happen about half the time if the test is of appropriate difficulty for your ability. You have some ‘representative error rate’ and you can overshoot or undershoot it by luck.

You misremember less than you normally would, you recall quicker than you normally would, or the reverse. You reason more sloppily or less sloppily. There is no reason to expect that you would systematically undershoot more than you overshoot, unless you are up against a ceiling that is too low. Certainly the idea that you would never overshoot is insane. Nothing makes the best score you could possibly get on a test your ‘true score.’ The run where you get the luckiest, where you get every single question that you could possibly scrounge the knowledge and reasoning ability to get the right answer correct, is one where you’ve gotten a profoundly unrepresentative score. Tests are tools for comparison, and you’re not comparing with other people’s ‘best possible’ scores but their actual scores.
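The dice-roll picture can be sketched with a toy test (all numbers invented for illustration): an examinee who knows 30 answers cold and guesses on 10 four-option questions overshoots their ‘representative’ score almost as often as they undershoot it.

```python
import random

random.seed(0)

KNOWN = 30       # questions answered from genuine knowledge
GUESSED = 10     # ambiguous questions, four options each
P_GUESS = 0.25   # chance of winning each dice roll

# the 'representative' score: knowledge plus average guessing luck
EXPECTED = KNOWN + GUESSED * P_GUESS  # 32.5

def one_sitting():
    # score = certain knowledge plus however many dice rolls you win
    return KNOWN + sum(random.random() < P_GUESS for _ in range(GUESSED))

scores = [one_sitting() for _ in range(100_000)]
over = sum(s > EXPECTED for s in scores) / len(scores)
under = sum(s < EXPECTED for s in scores) / len(scores)

print(f"overshot:  {over:.1%}")   # ~47%
print(f"undershot: {under:.1%}")  # ~53%
```

The split isn’t exactly 50/50 (the guessing distribution is slightly skewed), but landing above your representative score is nearly as common as landing below it.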

Any given individual is much more likely to score lower than their “real” IQ than to score higher. I don’t think you understand what ‘real’ IQ is. Your ‘real’ IQ is the result that you get. If you do multiple tries and take the best or worst one, that isn’t your ‘real’ IQ. There’s nothing you can do to make your result more ‘real’ and a hell of a lot you can do to make it less ‘real.’ The fuzziness, the noise, is part of the real test.

This is what has been given around the world and characterized. Actual measurement error is likely to be +/- about the same, but local conditions can do VERY little to help and a lot to hurt. You can have ‘local conditions’ (illness, blood sugar level, test room distractions, itchy legs, what the fuck ever) that are worse than average, better than average, or average.

This is part of noise. Everyone suffers from this.

You could get more than average or less than average. But if you try the test again because you are convinced you had more than average negative noise, you are creating a systematic bias which is far worse if you want a representative result. The average test taker was hurt some amount, by ‘local conditions’ and probably much more of other sorts of blind poor luck (not winning dice rolls on ambiguous choices). That’s your unicorn ‘representative sample,’ with no noise pushing them up or down.

There’s no reason to expect that there are way more people below this average level of bad luck than above it. There are some really unlucky people who lost way more dice rolls than chance would indicate, and some really lucky people who won way more. But if you let people go ‘wait, that was unrepresentative, I don’t like my result, I’m going to try for another’ you are replacing noise with systemic error. Which is bad. Even if the only effect is to remove the noise – which isn’t the case – this guy with his de-noised test is failing to represent IQ properly.
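The ‘replacing noise with systemic error’ point is easy to demonstrate with a toy simulation (all numbers invented): give a test-taker a fixed true score plus perfectly symmetric, zero-mean noise, and let them retake until they see a result they like.

```python
import random

random.seed(1)

TRUE_SCORE = 100
NOISE_SD = 5  # symmetric, zero-mean day-to-day noise

def take_test():
    return random.gauss(TRUE_SCORE, NOISE_SD)

def retake_until_satisfied(happy_at=105, max_tries=10):
    # 'wait, that was unrepresentative, I'm going to try for another'
    for _ in range(max_tries):
        score = take_test()
        if score >= happy_at:
            break
    return score

honest = [take_test() for _ in range(20_000)]
gamed = [retake_until_satisfied() for _ in range(20_000)]

print(sum(honest) / len(honest))  # ~100: the noise cancels
print(sum(gamed) / len(gamed))    # ~106: systematic upward bias
```

Even though every individual sitting is unbiased, the stopping rule turns symmetric noise into a consistent overestimate.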

IQ is a thing in the real world with certain error bars. He’s a dude on the low end of the error bar, of the negative misrepresentation. That’s important.

If he goes on to win a Nobel Prize, that g-loaded achievement should be stacked up next to his actual IQ result, and that revealed error is important. If you take 5 different IQ tests and average them, or whatever, the accuracy of that result is not necessarily relevant at all to the guy who took AN IQ test and wants to know how representative that is. We have data on how g-loaded AN IQ test is, we don’t have nearly as broad a dataset of some homebrew composite. We haven’t gone all around the world giving that battery.

And also, if you started and stopped this re-test thing because of dissatisfaction/satisfaction with the results, you’ve introduced systemic error. And it’s possible that your composite wouldn’t reduce error as much as you think. These aren’t really independent samplings.

There could be something biasing the results of all these tests. Like you’re just unusually bad or good at taking tests, for example. There are tests for whether you’ve been exposed to some disease and are a health risk, they detect antigens. Let’s say you have one of these tests for smallpox, and it has a 90% rate of accuracy within some population of a given age or whatever. Meaning 90% of the people who test positive have been exposed to the disease.

The 10% error could be entirely people who have been vaccinated, which is unusual for their age group at this particular time, or whatever. You won’t be able to get rid of this type of error (unlike the fumble fingered technician kind of error) by testing multiple times. It’s not the case that someone who fails twice has a 99% chance of having been exposed to the real live disease. You don’t get to just assume that you can completely ‘de-noise’ some test with multiple samplings. There is very likely some that you can reduce, but to characterize the reduction you need a significant sample OF these multiple sampling composite tests. And you then have a new result that is not ‘real’ IQ but your SuperSample MegaBlast IQ or whatever you want to call your dubious contribution to psychometrics. This is worth re-iterating in a brief post on its own.
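The smallpox example can be sketched numerically (all figures invented): systematic error, like a vaccinated group that always carries the antigen, survives any number of retests, while random ‘fumble-fingered technician’ error washes out.

```python
import random

random.seed(2)

# Hypothetical population: 9,000 exposed, 1,000 vaccinated
# (they carry the antigen too), 9,000 plain unexposed.
people = ["exposed"] * 9000 + ["vaccinated"] * 1000 + ["unexposed"] * 9000

def antigen_test(person):
    # systematic error: the vaccinated flag positive EVERY time
    return person in ("exposed", "vaccinated")

def sloppy_test(person):
    # random error: a fumbled false positive 10% of the time on non-carriers
    return person == "exposed" or random.random() < 0.10

def ppv(test, repeats):
    # share of people who test positive EVERY time who were truly exposed
    positives = [p for p in people if all(test(p) for _ in range(repeats))]
    return sum(p == "exposed" for p in positives) / len(positives)

print(ppv(antigen_test, 1), ppv(antigen_test, 2))  # 0.90 -> 0.90
print(ppv(sloppy_test, 1), ppv(sloppy_test, 2))    # ~0.90 -> ~0.99
```

Both tests start at 90% positive predictive value, but only the one whose error is genuinely random improves with repeated sampling.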

If we are talking about IQ as the useful test/set of tests that have been given round the world with proven predictivity – there is NO ‘REAL’ IQ separate from the result you get when you take an IQ test. That IQ score IS your IQ.

It measures your IQ PERFECTLY. What it measures VERY IMPERFECTLY, with errors that have been characterized, is ‘g,’ which is what gives IQ its value.

The general intelligence factor, your success at intellectually straining tasks. General intelligence is not IQ.

IQ is an attempt to measure general intelligence. A pretty damn good attempt, at that.

Still very imperfect. What can I say, we live in an imperfect world. When you say ‘omg this IQ test is wrong it gave me the wrong IQ I need a new test,’ whatever hare-brained scheme you go on you are not changing your IQ.

Your IQ is the result that you got. You could in your quixotic quest come up with new procedures and/or a new test that has a 1:1 correlation with g. You can call this test IQ and get your ‘real’ g-percentile tracking score. Guess what – your IQ didn’t change.

Your IQ is exactly what it was before. You created something else, which happened to be better (lucky you, very lucky). It’s still something. But more likely you won’t come up with something better.

Much more likely you will do some ‘Clever Hans’ nonsense where you keep trying tests and stop when you get a result that you like. We have lots of data about what it means when you take a dudester and you give them an IQ test and you look at the result and you look at the dudester. This is what we have data on. Stay in this world. This is the world of information that is of known utility.

Don’t do random other shit. You’re just exiting the world where results are known to be meaningful and entering the world where???maybe they are meaningful maybe they are moderately biased maybe they are super biased who knows????

The distinction here is between:

* Taking one full IQ test where all factors (environment, time of day, tiredness, whether you’re hung over, etc.) are nearly constant for the duration of the test, and
* Taking a number of full IQ tests at random times of day in random locations in random states of mind over a period of several months.

In the former case we would expect errors (really, “noise” in this context) to cancel each other out, and produce a measure of the individual’s IQ at that time, in whatever state of mind they’re in at that time.

In the latter case (as GeneralDisarray and Antistotle are arguing), there are lots of factors that might cause the results to skew downwards (hungover, tired, uncomfortable); but almost none that will cause a symmetrical bias in the positive direction. There are lots of factors that might cause the results to skew downwards (hungover, tired, uncomfortable); but almost none that will cause a symmetrical bias in the positive direction. Having an amount of ‘negative bias’ or bad luck below the average amount doled out to your fellow participants is a bias in the positive direction. These are two ways of saying the exact same thing. You seem to assume that ‘skew downwards’ factors are some kind of boolean that has a ten percent chance of being flagged ‘true’ or whatever.

This is nonsensical. There are gradations to everything, including how hungry or tired or itchy you are. Including how distracting or comfortable the testing environment is. Including how lucky or unlucky you get on the many dice rolls.

@ilkarnal This is worth re-iterating in a brief post on its own. If we are talking about IQ as the useful test/set of tests that have been given round the world with proven predictivity – there is NO ‘REAL’ IQ separate from the result you get when you take an IQ test. That IQ score IS your IQ. It measures your IQ PERFECTLY. Okay, obviously, yes, in some sense that’s true. But it still seems to be missing the point of contention here. An IQ test may “perfectly” measure your IQ on a given day, but there are other quantities we could be interested in.

One of them might be, I don’t know, let’s call it IQ_ave (your average IQ score after taking the test a number of different times). Obviously IQ_ave is no more a measure of “real” intelligence than any individual IQ score would be. But it’s still a meaningful number, and you could say it’s closer to what people mean when they talk about “IQ” in the first place. Anyway, the question is then: how well do we expect a given IQ score to reflect someone’s IQ_ave? And that depends very much on the distribution of IQ scores relative to IQ_ave.
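To make that concrete, here is a toy noise model (every number invented): symmetric day-to-day wobble plus a rare, large ‘hangover’ penalty, shifted so the mean is exactly zero. A single score is then an unbiased draw around IQ_ave, yet landing 25 points below it is far more likely than landing 25 above.

```python
import random

random.seed(3)

P_BAD_DAY = 0.05   # chance of a hangover/illness on test day
BAD_MEAN = 15      # average size of the hit when it happens

def daily_noise():
    n = random.gauss(0, 5)  # mildly good/bad days, symmetric
    if random.random() < P_BAD_DAY:
        n -= random.expovariate(1 / BAD_MEAN)  # the occasional big hit
    return n + P_BAD_DAY * BAD_MEAN  # shift so the mean is exactly zero

draws = [daily_noise() for _ in range(200_000)]

mean = sum(draws) / len(draws)
p_under_25 = sum(d <= -25 for d in draws) / len(draws)
p_over_25 = sum(d >= 25 for d in draws) / len(draws)

print(f"mean: {mean:.2f}")                    # ~0.00: unbiased on average
print(f"P(25+ below IQ_ave): {p_under_25:.4f}")
print(f"P(25+ above IQ_ave): {p_over_25:.4f}")
```

Zero mean and symmetry are different properties: this distribution has the first but not the second, which is exactly the shape being argued about.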

I think all GeneralDisarray was trying to say was that this distribution is likely to be skewed – it likely has a much longer tail to the left than the right. So getting an IQ score on a particular day that’s, say, 25 points below your IQ_ave, is much more likely than getting a score that’s 25 points above your IQ_ave (because there are very few things that can increase performance as much as, say, being hungover can decrease performance). This is true despite the fact that you should on average expect your IQ score to be neither above nor below IQ_ave. It just means that small overestimates would be relatively more common than small underestimates, and large overestimates would be relatively less common than large underestimates. So let’s imagine we took you, ikarnal, and you undertook a series of full IQ tests over a period of several months. The conditions, daily timing, location, etc. of each test were as identical as we were able to make them.

Similarly, you assured us that in each case you felt well-rested, healthy, weren’t drunk, and weren’t hungover etc. The total number of tests (let’s say, 12) was divided equally in two. The only difference between the two groups of tests was that in one I was standing over you with a fork, jabbing you in your abdomen every so often; and in the other group of tests, I was not doing this thing.

Now let’s clarify things. Which of the following are you, ikarnal, claiming:

1. That your results in the fork-abdomen tests would be roughly the same as the non-fork-abdomen tests, with no statistically significant difference in your test result each time.
2. That you would accept the outcome of the tests, perhaps averaged over time, as being as accurate a reflection of your true IQ as we are able to achieve.
3. That even if 1 is not correct, and there is a statistically significant difference in your measured IQ in the two groups of tests, there exists some equal-and-opposite effect that will counteract the deliberate attempt to sabotage your test taking (even though we’re at pains to avoid any such thing happening).
4. That even if 1 is not correct, and there is a statistically significant difference in your measured IQ in the two groups of tests, your measured IQ when you’re being struck in the ribs with a fork is in fact *higher* than when you’re not being struck in the ribs with a fork.

Put in statistical terms, what I’m claiming is that the following quotation is correct: “There are a thousand reasons why someone might end up with an IQ underestimate, but really no reason for an overestimate.” If this *is* correct, we would expect that if *the same individual* took the same full IQ tests over an extended period of time, we would see a skewed distribution, clustering round a particular point (the ‘true’ IQ), with some noise on either side; but with a long-ish tail on the negative side. This is for the reason stated in the quote: there’s lots of things that can make you perform *worse* than what you might be able to achieve in an obsessively well-controlled series of tests. There are rather fewer things (notwithstanding nootropics) that would have as big an impact on the positive side. This isn’t about random noise, it’s about the fact that when one person takes one IQ test, and they’re ill or drunk or whatever, the result isn’t an accurate reflection of their IQ.

(I accept that there’s an ambiguity here between the Platonic “true” IQ and the actual measured value in any particular case, but I’m sure people can follow what I mean). The fact that an individual’s results might (or might not) skew in this way over time is irrelevant to the distribution of the population as a whole, the central limit theorem being what it is. As you say, the aggregate IQ of a population will automatically correct for the fact that a few people were ill/hungover on the day. But this just means that the population IQ as measured is lower than it would be in the case that a minority of people *weren’t* hung over when they took the test, or otherwise experiencing someone sticking a fork in their ribs. So, I’ve made a specific, falsifiable prediction; and I’d be very grateful to hear from anyone who can point me to any research that disproves it. I know good, longitudinal studies of IQ are pretty rare, so I suspect my hunch won’t be proved or disproved either way. One of them might be, I don’t know, let’s call it IQ_ave The last part of this post dealt with that.

You are replacing well characterized error bars with uncharacterized error bars, you have no idea whether you’re actually making any appreciable improvement and could get a totally false sense of certainty, and you will introduce systemic bias if you do something like stop when you get a score you like. Or maybe there’s systemic bias in obsessing over IQ results and taking multiple tests, period – after all, those sampled around the world didn’t do that. They took an IQ test. They didn’t obsess over it, they didn’t view this as the key to prophesying their future.

Not seven, stopping at seven because the seventh test gave a satisfying result. You don’t know how much better IQ_ave (I preferred my term, SuperSample MegaBlast IQ) is than IQ, and it could well be worse. Move from the well-characterized to the poorly characterized as you like, but don’t delude yourself into thinking this improves your certainty. Obviously IQ_ave is no more a measure of “real” intelligence than any individual IQ score would be.

But it’s still a meaningful number No. It’s not ‘no more,’ it is less. We have characterized IQ, various IQ tests and what they mean. We have not characterized some grab-bag composite with more tests thrown in until some badly characterized condition is satisfied. Is it meaningful, when compared to no data at all?

But the most meaningful thing in this domain is the thing that has been done all around the world, for many many decades. Fiddling with the methodology could be interesting, but the idea that this is a way of being more certain about underlying g is nonsense. You would have to actually study this, it cannot be assumed. I think all GeneralDisarray was trying to say was that this distribution is likely to be skewed – it likely has a much longer tail to the left than the right.

Well, you’re wrong about what GeneralDisarray was trying to say. Maybe there will be a significantly fatter tail to the left – that certainly cannot be assumed. But contra GeneralDisarray, there will certainly be a very significant tail to the right, and there’s nothing that makes the right end of the distribution more ‘real.’ So getting an IQ score on a particular day that’s, say, 25 points below your IQ_ave, is much more likely than getting a score that’s 25 points above your IQ_ave Look, how many catastrophically ill people take IQ tests? If you’re in a coma that doesn’t mean you take an IQ test and get a zero, it means you don’t take the IQ test. You will be within some certain range of normality if you’re sitting down and taking a test. Yeah, some people will feel unusually bad. Some, one presumes, will feel unusually good.

Yeah, you can feel way way way worse than you can feel good – like, you’re not ever going to wake up feeling as good as someone spraying shit from one end and vomit from the other feels bad, unless you wake up with the aid of some interesting substances. But mr two-way hose is not going to go take a test. He’s going to sit at home, doing his two-way hose thing. Feelings probably just aren’t as important as you seem to think. More elemental chance probably overwhelms this as a source of error. This would include things like a certain category of test just systematically overestimating or underestimating your abilities, which we don’t have any reason to think is impossible.

What we know is there’s a lot of error, a lot of variance. No way of removing this error has been found, as of now.

Go try to find one if you think it is simple. We have, essentially, a moderately tall and wide shotgun scatter blast cluster that defines your ‘g.’ There’s no reason to expect you’ll find a pellet that is guaranteed to be dead center. You could say it’s closer to what people mean when they talk about “IQ” in the first place And that is very unfortunate, because this almost religious false sense of certainty and meaning is unscientific and will lead people astray. In both directions. It makes a useful tool into something very dubious. The total number of tests (let’s say, 12) was divided equally in two. The only difference between the two groups of tests was that in one I was standing over you with a fork, jabbing you in your abdomen every so often; and in the other group of tests, I was not doing this thing.

This is quite outside the range of conditions under which people have taken IQ tests, and is as a result useless. People, who are healthy enough to have gotten out of bed and walk around, go take a test. They complete the test, and hand it in.

If someone came in with an axe and chopped the head off the person next to them, this would not result in them getting a worse score. It would result in them getting no score. If I was experiencing extreme sharp pain in my abdomen, I would go to the bathroom.

I would not keep taking the test while muffling my screams. If someone was poking me even lightly, obviously I would deal with that rather than taking the test. Now let’s clarify things. Which of the following are you, ikarnal, claiming: None of those claims bear any relation to mine. This is for the reason stated in the quote: there’s lots of things that can make you perform *worse* than what you might be able to achieve in an obsessively well-controlled series of tests.

There are rather fewer things (notwithstanding nootropics) that would have as big an impact on the positive side. And this is just wrong. Tons of things can have an impact on the positive side, which you can view, if you like, as a less than average negative side. You can say winning all the dice rolls is getting lucky, or you can call it ‘never getting unlucky.’ It doesn’t matter. This isn’t about random noise, it’s about the fact that when one person takes one IQ test, and they’re ill or drunk or whatever, the result isn’t an accurate reflection of their IQ.

It is a perfectly accurate reflection of their IQ – it is their IQ. You take an IQ test, the result is your IQ. You are using some quasi-religious ideal of IQ. IQ is an attempt to measure general intelligence.

It has certain error bars. General intelligence is not IQ. IQ is not general intelligence. Your general intelligence is not your ‘real’ IQ. Your IQ is not your general intelligence. These are two different things that are statistically linked.

If this *is* correct, we would expect that if *the same individual* took the same full IQ tests over an extended period of time, we would see a skewed distribution, clustering round a particular point (the ‘true’ IQ), with some noise on either side; but with a long-ish tail on the negative side. If this was the pattern we saw, this would not prove that the cluster round a particular point is “the ‘true’ IQ.” First of all, that’s just a nonsensical way of phrasing this and you really need to stop. Let me phrase this correctly. There is no implication that this cluster is more g-loaded than the first test. One way of interpreting this result is that some guy took an IQ test, didn’t like the result, and retook it a bunch of times, getting higher and more consistent scores as they remembered more answers or increased proficiency in more categories, until they plateaued (reaching questions or categories they could not answer no matter how much practice they got). If this interpretation is correct, you would expect the high cluster to be unrepresentative and less g-loaded. Not only would you have to demonstrate that this pattern exists, you would have to then demonstrate that the high cluster is more g-loaded than the average, or than the first test. I consider it unlikely that the high cluster, should it appear, will be more g-loaded than the first result.

Following that, I consider it unlikely that it will be more g-loaded than the average result. There is no obvious reason to remove low scores.

Then, in order for this exercise to be useful, the improvement in g-loading would have to be usefully high. You are spending, at this point, a very significant amount of time. I consider each of these propositions, which all must be correct, unlikely to be true. The last part of this post dealt with that. Sorry, yes, I missed that. You are replacing well characterized error bars with uncharacterized error bars, you have no idea whether you’re actually making any appreciable improvement and could get a totally false sense of certainty, and you will introduce systemic bias if you do something like stop when you get a score you like.

Okay, but... why, though? Like, look, I have no horse in this race. I have no idea what my IQ is, and I don’t really care. I’m not trying to find a way to “game” IQ scores to get a better result, and I certainly wouldn’t do anything stupid like stopping when I got a better result.

But why do we have no idea whether we’re making an appreciable improvement when we average over multiple scores? I’m genuinely asking here, that seems like a very counterintuitive thing to say.
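For what it’s worth, the textbook picture can be sketched with a toy simulation (all numbers invented): averaging n independent sittings shrinks per-sitting noise roughly like 1/sqrt(n), while leaving any bias shared across sittings (say, being unusually good or bad at taking tests) untouched.

```python
import random
import statistics

random.seed(4)

TRUE_G = 100     # the underlying ability we'd like to know
TEST_BIAS = 4    # hypothetical bias shared across every sitting
NOISE_SD = 6     # per-sitting random noise

def one_sitting():
    # each sitting = ability + shared bias + fresh noise
    return TRUE_G + TEST_BIAS + random.gauss(0, NOISE_SD)

def average_of(n):
    return sum(one_sitting() for _ in range(n)) / n

singles = [average_of(1) for _ in range(5_000)]
nines = [average_of(9) for _ in range(5_000)]

print(statistics.stdev(singles))  # ~6: one test carries the full noise
print(statistics.stdev(nines))    # ~2: noise shrinks like 1/sqrt(n)
print(statistics.mean(nines))     # ~104: the shared bias never averages out
```

So both sides of this exchange are describing real effects: averaging genuinely reduces independent noise, and genuinely does nothing about correlated error.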

Forgetting for a second about whether the distribution is skewed or not, we know that IQ is noisy to some degree, right? Why wouldn’t averaging over a bunch of test results (assuming you don’t do something stupid like stopping when you get a good result) give you some kind of better indicator of your “true” IQ? Like, you say in your other comment: You don’t get to just assume that you can completely ‘de-noise’ some test with multiple samplings. There is very likely some that you can reduce, but to characterize the reduction you need a significant sample OF these multiple sampling composite tests.

But isn’t that exactly what multiple sampling does? Reduce noise? Like, noise is pretty much defined as the thing that goes down once you do multiple samples. We have not characterized some grab-bag composite with more tests thrown in until some badly characterized condition is satisfied. This seems a bit uncharitable.

Obviously I agree that SuperSample MegaBlast IQ has not been carefully investigated in the literature. But to call it grab-bag seems strange to me. We’re just talking about a straight-up average – it’s pretty much the most obvious thing you can think of to do with a sample. My prior would be on the average of a bunch of tests being at least as meaningful as the test itself, unless I had some specific reason to think otherwise. Fiddling with the methodology could be interesting, but the idea that this is a way of being more certain about underlying g is nonsense. You would have to actually study this, it cannot be assumed. Well sure, I wouldn’t want to assume it, exactly.

But doesn’t it seem like a pretty good guess going forward? The two most obvious things that could screw up SuperSample MegaBlast IQ would be, like you said, a selection bias on who decides to do multiple tests compared to one, and a biased stopping condition. But those aren’t particularly hard to mitigate. Yeah, some people will feel unusually bad. Some, one presumes, will feel unusually good. Yeah, you can feel way way way worse than you can feel good – like, you’re not ever going to wake up feeling as good as someone spraying shit from one end and vomit from the other feels bad, unless you wake up with the aid of some interesting substances. But mr two-way hose is not going to go take a test.

He’s going to sit at home, doing his two-way hose thing. Sure, but this just seems like quibbling over degrees. I completely agree that if someone feels bad enough they probably won’t take the test. But that still leaves plenty of room for people who feel semi-crappy, and I don’t think they’d have a corresponding population of people who feel semi-awesome.

It seems likely to me (as in I would be willing to bet on it) that in a proper study the distribution would turn out to be left-skewed. I agree that we shouldn’t assume it, certainly, but I’d be surprised if it wasn’t the case. And that is very unfortunate, because this almost religious false sense of certainty and meaning is unscientific and will lead people astray. In both directions.

It makes a useful tool into something very dubious. Well look, if you’re just trying to push back against people using IQ in an overly certain/deterministic way, then I’m 100% with you. I don’t have any kind of belief that IQ captures everything there is to know about a person’s intelligence.

But given that IQ seems to be predictive and seems to capture something real, I honestly don’t see why a straight-up average of multiple tests (assuming you don’t do something stupid like not determine a stopping condition in advance) wouldn’t be better than a single test. It is a perfectly accurate reflection of their IQ – it is their IQ. You take an IQ test, the result is your IQ. You can think of it like that, or you can think of it as a random sample from a probability distribution. As I understand it, you’re claiming that the same individual taking a test multiple times (and assuming that the test is a ‘good’ IQ test and doesn’t benefit from practice or remembering past questions) will get a series of results that will cluster around some value, and the distribution of that clustering will be symmetrical. (I mean, the way you’ve phrased it kind of makes it sound like you’re saying that IQ tests don’t experience noise and the measured IQ result would be perfectly replicated again and again, but that would be absurd.) The point of my example was to highlight what I took GeneralDisarray’s point to be; that is, that the distribution from which an individual IQ test result is drawn is asymmetrical. There’s a ‘hard limit’ (plus some amount of noise) above which you simply can’t go, then there is a long tail to the left, caused by things ranging from mild illness, distraction, feeling hungover, etc.

The result of this would be an asymmetrical distribution. *Or maybe not*. This is a testable claim, but I don’t think anyone’s done a study where they apply the same (type of) IQ test to someone over regular intervals for an extended period of time. For most people, an IQ test is a one-off event, and probably one of only a handful of times in their lives – at most – when they get it measured. If they were ‘slightly off’ on that day for whatever reason it seems strange to claim that this particular measurement should hold so much significance, whatever the shape of the individual distribution. After reading all this back and forth, I think you’re both right. My take on what GeneralDisarray is saying is that someone whose ‘true’ IQ (however we measure that) is around, let’s say, 140 can do badly on a test for whatever reason (all the external influences ilkarnal mentions) and so only get a score equivalent to IQ 120.

But for someone whose ‘true’ IQ is 120, doing well enough to score at IQ 140 level would be so much harder even if they felt really good that day and had practiced hard and all the rest of it, that it’s much more likely that most IQ scores would be under-estimates rather than over-estimates. A sprinter can run slowly and below their best, but an 800 metres runner, no matter how good they are, is highly unlikely to be able to run a sub-10 second 100 metres dash. If for no other reason, then when a test is calibrated by normalizing scores so the average maps to a fixed value (100, for IQ), the underestimates in the calibration data will skew this process, meaning all the non-underestimated scores become slight overestimates. But it is truly hard to imagine a test with an element of random error that only errs in one direction.
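That calibration point can be checked with a quick sketch (all numbers invented): seed the norming sample with some hungover underestimates, standardize, and someone of exactly average true ability comes out a bit above 100.

```python
import random
import statistics

random.seed(5)

# Norming sample: true raw ability ~ N(50, 10), but 10% of the
# sample was hungover on test day and lost 8 raw points.
true_raw = [random.gauss(50, 10) for _ in range(50_000)]
observed = [r - 8 if random.random() < 0.10 else r for r in true_raw]

mu = statistics.mean(observed)
sd = statistics.stdev(observed)

def to_iq(raw):
    # standard norming: sample mean -> 100, sample sd -> 15
    return 100 + 15 * (raw - mu) / sd

# A test-taker of exactly average true ability (raw 50) lands
# above 100: the underestimates dragged the norm down.
print(to_iq(50))  # ~101.2
```

The effect is small in this toy setup, but it runs in the direction the comment describes: underestimates in the calibration data make everyone else’s normed score a slight overestimate.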

Below you say What you’re saying implies that the likelihood of a false positive and a false negative are equal. But I don’t think anybody said the likelihood had to be equal, or of equal magnitude. Just that it would be non-zero. Composite variables are always better predictors than discrete variables (of course), because of the magic of canceling error terms, thus verbal subtests will be more stable predictors than nonverbal tests, especially in areas like engineering (where ceiling effects are also at play). I always wonder if people saying this are just trying to comfort me. See, I was tested in 8th grade or so, and got told I was kinda gifted but not really genius-level.

Just around the cutoff for being seriously bright. My verbal score was, admittedly, way high, but it was offset by a performance/math score that was barely even above average. It was immensely disappointing at the time, because I was a massive nerd. It was basically telling me that I’d never have much hope of being all that good at STEM stuff, no matter how well I could seemingly speak and write.

It’s even been depressing later in life, since I really want to be a proper scientist rather than some guy who can talk real good. All this despite my getting pretty good marks in STEM classes in college. Then it turns out that people go around saying the verbal component is more stable, and that’s the one that’s higher for me. Seems like a really self-serving thing to say, since – me imagining everyone else as copies of myself – I tend to assume everyone will have verbal scores higher than their performance scores. So, big questions: 1) Do most people actually have higher verbal than performance IQ scores? Is that a trend, or am I projecting myself onto everyone else? Are there people who default to effortless greatness on quantitative problems but can only just about communicate with expert fluency?

2) Conditional on the above questions being answered, “no, yes, and yes”, does the pattern of verbal scores being more stable, and performance scores more variable, still actually hold? Why should any such asymmetry actually occur? (On the upside for me, the verbal score really was quite good, and does show up in my real performance on verbal/speech tasks. People always compliment my abilities with second languages, even if I’ve never reached literary-level bilingual fluency for lack of practice and opportunity.

I can speed-run the verbal sections of standardized tests. It makes the flagrant difficulty with mental computation and pattern-recognition all the more painfully apparent.)

IQ scores are normalized, so no, an equal number of people have imbalances between verbal and performance IQ in each direction. Verbal scores, particularly vocabulary, are what we call “hold” scores. They’re more robust over time, and if I want to estimate someone’s pre-TBI functioning, I want to look first at what’s most stable. Verbal scores are much better predictors of academic performance in large part because much of academic instruction, even on nonverbal topics, is verbal. Verbal scores are based less on discrete skills. There are many books on this subject, and despite what some on this thread are espousing, the debate on the nature of the “g” factor has not been fully resolved (it may be a statistical artifact resulting from the narrow environmental scope in which it is measured, and evidence of its impact is assessed, and there are almost certainly epigenetic factors that impact its expression).
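The norming mentioned at the top of this comment (raw scores rescaled so the population lands on mean 100, SD 15) can be sketched like this; the raw scores are made up, and real norming uses large standardization samples rather than one small list:

```python
import statistics
from typing import List

def norm_scores(raw: List[float], mean_iq: float = 100.0, sd_iq: float = 15.0) -> List[float]:
    """Rescale raw test scores so the sample has mean 100 and SD 15 (illustrative norming)."""
    mu = statistics.fmean(raw)
    sigma = statistics.pstdev(raw)
    return [mean_iq + sd_iq * (x - mu) / sigma for x in raw]

raw = [35, 40, 45, 50, 55, 60, 65]  # hypothetical raw subtest totals
iqs = norm_scores(raw)
print(round(statistics.fmean(iqs)))   # 100
print(round(statistics.pstdev(iqs)))  # 15
```

Because the rescaling is symmetric, it can’t by itself create more verbal-over-performance imbalances than the reverse; any asymmetry has to come from the underlying abilities or the tests themselves.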

Many experts in this area have very little training in IQ testing itself or the statistics involved in test construction, norming and interpretation, and little to no experience in actually administering IQ tests. That lack of context shows in their thinking; IQ can do wonders to bolster one’s overconfidence. (I wish they better appreciated the relationship between heuristic biases, false positives and type-1 errors). If the errors only point one way, then you’re defining someone’s IQ as their peak possible performance. But that isn’t very useful or realistic. For example, we know effort matters. We usually talk about this as if it underestimates people who don’t try that hard.

But it’s equally true that if someone is unusually motivated to take an IQ test, their score is likely an overestimate relative to their normal intellectual performance, since they’re typically not that motivated. Or if someone is unusually well-rested and relaxed, they might turn in an unusually good performance. But that’s actually an overestimate relative to their “typical” performance, which will be a better predictor of how they perform on everyday g-loaded tasks. Presumably someone’s cognitive ability fluctuates with mood, rest, diet, circadian rhythm, etc. We typically define IQ as the average of this fluctuating ability, not the peak.
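That “average of a fluctuating ability, not the peak” idea is easy to simulate; every number below (the typical level, the day-to-day spread) is an assumption for illustration:

```python
import random
import statistics

random.seed(0)

TYPICAL = 100.0  # a hypothetical person's average ability (assumed)
DAY_SD = 5.0     # day-to-day fluctuation from mood, rest, diet (assumed)

# One simulated "ability level" per day for a year.
days = [random.gauss(TYPICAL, DAY_SD) for _ in range(365)]

typical_estimate = statistics.fmean(days)  # what IQ is supposed to capture
best_day = max(days)                       # an unusually rested, motivated day

print(round(typical_estimate, 1))
print(round(best_day - typical_estimate, 1))  # the best day overshoots by several points
```

Testing someone on their best day samples the tail of their own distribution, which is exactly the overestimate-relative-to-typical effect described above.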

Errors only going one way: You get to the end of the information subtest on the WAIS. I ask you, “Who wrote Hypnerotomachia Poliphili?” [Not an actual item, but relevant to my point.] If you know the answer, by damn, you know the answer; even if you know this for idiosyncratic reasons (say, your mother is writing a dissertation on 15th century Italian literature), you remembered the title and the associated author, which is quite a feat. For the error term to vary equally in both directions, you’d have to require an equal likelihood of producing the correct answer by guessing on that question.

That’s not the way it works. If you do happen to know the answer, but you’re tired, anxious, low potassium, dehydrated, hypoglycemic, hungover etc, you might not recall (false negative). There are thousands of explanations for an underestimate. But there’s really only one explanation for a correct response.
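A toy model of that one-way error (the item count and probabilities below are assumptions, not real test parameters):

```python
import random

random.seed(1)

N_ITEMS = 50
P_KNOW = 0.6    # chance the examinee truly knows an item (assumed)
P_LAPSE = 0.1   # chance fatigue/anxiety/etc. blocks recall of a known answer (assumed)

knows = [random.random() < P_KNOW for _ in range(N_ITEMS)]
true_score = sum(knows)  # the "peak" score: every known item answered

def administer():
    # Known items can be missed (false negatives); unknown items are never
    # answered correctly, so the error only points one way.
    return sum(1 for k in knows if k and random.random() >= P_LAPSE)

observed = [administer() for _ in range(1000)]

print(max(observed) <= true_score)                 # True on every run
print(sum(observed) / len(observed) < true_score)  # True: the average underestimates
```

Under this model the observed score never exceeds what the person actually knows, and the long-run average sits strictly below it: the commenter’s asymmetry in miniature.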

Applies across the test. (And no, nootropics don’t really help all that much. Mostly they help modulate arousal for people who are a bit suboptimally low in the testing context, but there are other folks who will then skew suboptimally high.)

>Based on that post, that person is in their first year of college, which means that they would have taken it on the new scale. On the new scale, that score corresponds to roughly the 50th percentile, which is much closer to his IQ. A 1070 apparently corresponds with an IQ a hair below ~108, which is pretty much bang-on what Scott wrote in the post (and 14 IQ pts higher than the guy’s original estimate). EDIT: I see from a comment further down that the original text said 114, which indeed is a lot further away from 94 than 104 is.

“Your parents’ socioeconomic status correlates with your own at about r = 0.2 to 0.3.” Am I the only one to whom this sounds shockingly, even implausibly, low? I tend to roll my eyes at the perennial articles in SWPL outlets claiming that the US has a rigid class society where people from poor families never get a chance, but even I expect parental SES to predict more than 4-9% of their children’s SES, especially since this is everything parents contribute (genes, culture, and money, not just the latter). I suppose this can be partly explained by being (I assume) based on single-year snapshots rather than long-run averages, but that’s still pretty low.

“Your parents’ socioeconomic status correlates with your own at about r = 0.2 to 0.3.” Is that Pearson’s correlation or Spearman’s rank correlation? If it’s Pearson then I can easily see why it is low, given that wealth and income are distributed according to Pareto distributions. For instance, you have people like Bill Gates and Mark Zuckerberg who are far wealthier than their parents, so they make the linear correlation statistic go down, but their parents were still upper class, hence they make a rank correlation statistic go up.

Your link to Tao’s blog actually goes to a comment about Jewish STEM performance. By the way, one of the reasons I’m always skeptical about this so-called Jewish STEM advantage is that (as noted about myself) as far as I can tell, Jewish brains have a primarily verbal advantage, which is actually what we’d expect if the arranged-marriage selection pressure was for good religious scholars, rather than good scientists hundreds of years ahead of any concept of science. Given the real history, I expect to see fewer Jews in STEM or FIRE jobs than I actually see, and many more in literature, politics, history, and philosophy than I actually see. Law seems to be the exception, with about the number of Jews I expect.
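The Pearson-vs-Spearman point about heavy-tailed wealth is easy to demonstrate with simulated data; the inheritance model and every parameter below are assumptions chosen only to produce a heavy tail:

```python
import random
import statistics

random.seed(2)

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    # Rank positions of each value (no ties expected with continuous draws).
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    # Rank correlation = Pearson correlation applied to the ranks.
    return pearson(ranks(x), ranks(y))

# Assumed toy model: child log-wealth = 0.5 * parent log-wealth + noise,
# then exponentiate to get heavy-tailed dollar amounts.
parent_log = [random.gauss(0, 1) for _ in range(5000)]
child_log = [0.5 * p + random.gauss(0, 1) for p in parent_log]
parent = [10 ** v for v in parent_log]
child = [10 ** v for v in child_log]

print(round(spearman(parent, child), 2))  # rank correlation is unaffected by the tails
print(round(pearson(parent, child), 2))   # linear correlation is dragged down by outliers
```

Because ranks are preserved under any monotone transform, Spearman on dollars equals Spearman on log-dollars, while Pearson on dollars is dominated by a handful of extreme fortunes, which is the Gates/Zuckerberg effect described above.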

Maybe the difference comes from an economic pressure towards money-making job fields? But that wouldn’t really explain the number of Jews (especially Israelis, nowadays) in the academic sciences, where making a secure living is actually quite difficult.

“By the way, one of the reasons I’m always skeptical about this so-called Jewish STEM advantage is that (as noted about myself) as far as I can tell, Jewish brains have a primarily verbal advantage, which is actually what we’d expect if the arranged-marriage selection pressure was for good religious scholars, rather than good scientists hundreds of years ahead of any concept of science.” But 1) verbal and mathematical ability are positively correlated, and 2) European Jews have been craftsmen, traders and moneylenders for hundreds of years, which could have caused sexual selection pressure towards mechanical/mathematical ability. After all, being good at winning Talmudic disputes doesn’t look like something particularly useful for feeding your children per se, while being good at making good deals is.

Gentlemen, you are forgetting the ladies, and in breeding the mother is very important. From the vague impressions I get, the ideal of the Talmudic scholar means that the wife is the one who deals with the practical side of life – which includes “earning a living to support the husband studying and discussing Torah and the Talmud twelve hours a day”: Wives of scholars had to assume the responsibilities of daily life, for their husbands had little experience with paying taxes, tending the vegetables or shoveling snow. This reality prompted many women to say knowingly, “As for Olam Haba, let the men say we can’t get there without them. They couldn’t manage this life alone, no doubt we will have to do the job for them there too.” Female productivity in the workforce was so vital, in fact, that when researching a shidduch, many parents would look for a girl who spoke Polish, so she could conduct business with the locals.

In some shidduch letters between Rabbi Shmuel of Kelm and his nephew, a prospective girl is described as “educated in reading Hebrew, Polish, German... And also the Russian alphabet is not unfamiliar to her.” Look at the traits of the Ideal Wife in the Book of Proverbs, which include: She considereth a field, and buyeth it: with the fruit of her hands she planteth a vineyard. She maketh fine linen, and selleth it; and delivereth girdles unto the merchant. So while selecting for verbal intelligence in the male line, might there not also be selection for mercantile ability (= mathematical talent) in the female line?

“But 1) verbal and mathematical ability are positively correlated, and 2) European Jews have been craftsmen, traders and moneylenders for hundreds of years, which could have caused sexual selection pressure towards mechanical/mathematical ability. After all, being good at winning Talmudic disputes doesn’t look like something particularly useful for feeding your children per se, while being good at making good deals is.” As I understand it, Jews show up well on verbal and mathematical ability, but not visualization. It’s probably more accurate to frame it as good at taking part in Talmudic disputes rather than good at winning Talmudic disputes.

Again, as I understand it, the interesting thing was that there were two paths to success for Jewish men. One was making money.

The other was being good at Talmud – poor boys who were good at Talmud had a chance of marrying a rich man’s daughter. Wikipedia described this as an unproven theory about Jewish intelligence.

“By the way, one of the reasons I’m always skeptical about this so-called Jewish STEM advantage is that (as noted about myself) as far as I can tell, Jewish brains have a primarily verbal advantage.” This would not make me skeptical. A verbal advantage may be very useful for learning everything, including mathematics. And for lecturing, and writing papers, and convincing people to hire you or give you grant money. Take two people with similarly high mathematical ability, and I would not be shocked if the one with a higher verbal ability were both a faster learner and more likely to get ahead in whatever technical profession for non-technical reasons.

It’s mentioned on Saletan’s Wikipedia page, along with links to rebuttals in other publications.

What is no longer accessible is the rather lengthy, boisterous discussion on what was the Slate’s discussion board, the Fray (the presence of which was almost certainly instrumental in Slate’s decision to shutter the Fray). It was a big deal at the time, and many people became acquainted with a new breed of genteel, academic racists, like Mr Sailer here whom we’d never realized were out there. Learning that a psychologist who’d published papers on altruism that I quite liked, had also attempted to verify an inverse correlation between penis size and IQ was rather disheartening. Or learning about the origins and continuing activity of The Pioneer Fund. He can tell you all about it.

Thudbit, Mr Sailer and his associates are a bright and sophisticated bunch of ideologues who are very familiar with the psychology of movements and of persuasion. In fact, if one were inclined to make an argument about the dangers of intellectualism, they’d be a sterling example of the dangers of confirmation bias and premature cognitive closure. Comparisons with the intellectual wings of vilified historical nationalist political movements are apt. Melvin Konner provides an eloquent warning in his preface (titled Caveat) to the notes section of The Tangled Wing. Those notes are available for free download directly from the following link (on his website): I’d be happy to discuss Konner’s essay with you. Mr Sailer is a banner carrier for the worst of intellectual society.

It bothers me to see him here because he’s a Pied Piper, of sorts. We should all be suspicious of those more committed to proselytizing than to the state and fate of actual people. 1: if only it were so easy. If you think you always know when you’re being sold to, you either haven’t encountered any sophisticated salesmen, or you didn’t realize it. 2: that is the crux of the problem. I wish people could better remember that by affiliating with exclusionists they and everyone else will always be vulnerable to future exclusion, and will invariably encounter some strange social triangulation occurring as folks attempt to preempt their exclusion. I’d argue there’s something like this going on with ex-military families and the NFL right now.

Steve, you might need to put a trigger warning on some of your links. Though GeneralDisarray appeared to be more triggered by your name and the offhand reference to Saletan than anything else. In fairness, what seems to be happening is that race/IQ connections are known to be somewhat taboo to discuss here but no one knows exactly how taboo, partly because Scott seems somewhat conflicted on the issue, so Steve posts a link that comments on the issue and General responds with ad hominems + a link + more ad hominems. But no one wants to enter into an actual argument.

And such an argument would probably just be mind-death. So instead you just get some shots across the bow. Though General, if you want to argue for excluding Steve, you should probably first argue that Scott cease linking West Hunter.

I mean, that could be construed as an implicit endorsement! His official stance towards Steve is mere tolerance. Sorry, I was pointing out that 10 years ago I had made a point similar to the point of Scott’s post: “Q. So, do IQ tests predict an individual’s fate? In an absolute sense, not very accurately at all.

Indeed, any single person’s destiny is beyond the capability of all the tests ever invented to predict with much accuracy. So, if IQ isn’t all that accurate for making predictions about an individual, why even think of using it to compare groups, which are much more complicated? That sounds sensible, but it’s exactly backwards. The larger the sample size, the more the statistical noise washes out. How can that be? If Adam and Zach take an IQ test and Adam outscores Zach by 15 points, it’s far from impossible that Zach actually has the higher “true” IQ. A hundred random perturbations could have thrown the results off.

Maybe if they took the test a dozen times, Zach just might average higher than Adam. “But for comparing the averages of large groups of people, the chance of error becomes vanishingly small.

For example, the largest meta-analysis of American ethnic differences in IQ, Philip L. Roth’s 2001 survey, [Ethnic group differences in cognitive ability in employment and educational settings: a meta-analysis, Personnel Psychology 54, 297–330] aggregated 105 studies of 6,246,729 individuals. That’s what you call a decent sample size. So, you’re saying that IQ testing can tell us more about group differences than about individual differences?

If the sample sizes are big enough and all else is equal, a higher IQ group will virtually always outperform a lower IQ group on any behavioral metric. “One of the very few positive traits not correlated with IQ is musical rhythm—which is a reason high IQ rock stars like Mick Jagger, Pete Townshend, and David Bowie tell Drummer Jokes.”
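The quoted point about noise washing out in large samples can be checked with a quick simulation; the 5-point true gap and the 15-point per-person noise are assumptions chosen to mirror the Adam/Zach example:

```python
import random
import statistics

random.seed(3)

TRUE_GAP = 5.0    # assumed true difference in group means
NOISE_SD = 15.0   # per-person noise, comparable to the 15-point anecdote

# Individual comparisons: noise reverses a 5-point true gap quite often.
reversals = sum(
    random.gauss(100, NOISE_SD) > random.gauss(100 + TRUE_GAP, NOISE_SD)
    for _ in range(10_000)
) / 10_000

# Group comparison: averaging 100,000 people shrinks the standard error
# of each mean to NOISE_SD / sqrt(n), so the true gap shows through.
def group_mean(true_mean, n):
    return statistics.fmean(random.gauss(true_mean, NOISE_SD) for _ in range(n))

diff = group_mean(100 + TRUE_GAP, 100_000) - group_mean(100, 100_000)

print(round(reversals, 2))  # a large minority of individual comparisons flip
print(round(diff, 1))       # close to the true 5-point gap
```

The same noise that makes a single Adam-vs-Zach comparison unreliable contributes almost nothing once it is averaged over a hundred thousand people.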

Oh, in academic circles the prevailing opinion about that series and the players involved is definitely “toxic.” The problem is, it’s not that parties have been ignorant about the social implications of the debate; there is a political agenda to foster and magnify social disparities, almost exclusively on the basis of an ethnic covariate. This is less surprising now than when Saletan’s series appeared, more’s the pity. It’s difficult to be sympathetic to eugenicists when they are without exception inveterate racists.

That’s a great story. I do have one question about it: Up until this time, although I had been unfriendly to the psychiatrist, I had nevertheless been honest in everything I said.

But when he asked me to put out my hands, I couldn’t resist pulling a trick a guy in the “bloodsucking line” had told me about. I figured nobody was ever going to get a chance to do this, and as long as I was halfway under water, I would do it.

So I put out my hands with one palm up and the other one down. What is he referring to here? A quick google search turns up nothing. I get that the one hand up one hand down thing is not how most people would move when asked to put out their hands, but the ‘bloodsucker’ thing seems to imply there’s a second part to the ‘trick’ that he never got to pull off, and I’m curious what it is. This is something that bothers me to no end. I’ve seen IQ test items in the psychological literature that for all I can see have two logically possible solutions, one of which is probably the intended one.

I’m pretty confident in that diagnosis because example items in scientific papers are unlikely to be so hard that I couldn’t solve them, and because one of the logically possible solutions is usually more subtle. But still, I found this shocking. Are IQ tests meant to, among other things, test how good you are at predicting what the test writer intends? I really do wonder how prevalent this sort of issue is. I’ve seen IQ test items in the psychological literature that for all I can see have two logically possible solutions, one of which is probably the intended one. The 5lb Book of GRE Practice Problems has an entire section with hundreds of these, entitled (ironically) “Reading Comprehension”.

The irony is that despite getting a nearly maxed-out score on my Verbal GRE, I can never seem to answer more than 2/3 of those practice problems correctly. The ones I fail at basically always have two answers out of the four that seem very plausibly correct, and try as I might, I cannot figure out how to guess which one. Other people have looked at it over my shoulder, and can’t seem to guess them either.

Not all sources of “practice problems” are reliable. The GRE used to have an “analytical ability” section, which seemed to consist of logic problems of the form “which conclusions follow from this given information.” On the practice exam I took before my actual GRE, it was always possible to use the information given (about who was older than whom, or which house was next to which other house, or whatever) to construct a complete description of the state of affairs, and use that complete description to derive the correct answers to all the questions. On the actual GRE I took, that was never possible (maybe it was in earlier years? Or maybe the practice questions were all rejects). There was enough information to answer all the specific questions asked, of course (and I was able to; that part of the test seemed to be almost designed to inflate philosophers’ scores, so I don’t know that I approve of its removal in 2002) but always left some of the details that weren’t asked about ambiguous. That definitely happened with me. I remember the school gave me some kind of IQ test in kindergarten and I have very distinct memories of it.

I don’t remember all the questions, but I do remember one part of it extremely clearly, at the end: the tester held up these cards that had a cartoon drawing on them. It would be something like a teddy bear with no head. And then they’d ask me what was missing. I remember it being really easy.

Then they got to the last question: I think everything before was teddy bears or some animals or something. This one was a pen line drawing of a classical ‘house’ (slanted roof, square windows divided into quadrants, some bushes and trees up front, a path leading to the front door, etc.). When asked what was missing, I offered tons of options (e.g. the sun, a garage, things that weren’t depicted); eventually the examiner decided I’d tried enough times and wasn’t going to get it. Curious, I asked what the answer was, but she refused to tell me, which annoyed the hell out of me (which is probably what caused me to remember the whole thing). I’d offered tons of plausible ideas, and even some that were a stretch. I’d remembered it and told the story on and off for the rest of my life, still always a little annoyed that I wasn’t told the answer: I was sure that I’d gotten it ‘wrong’ for a dumb reason like that. 20 years later, I was telling the story to my best friend while eating in a cafe.

I got to the point where I was describing the drawing, and suddenly I stopped and my jaw hit the floor. I’D FIGURED IT OUT! 20 YEARS LATER! (At least I think so.) All the other drawings had the lines filled in with colors, like animation cels, while the house one was just the pen lines. So I’m pretty sure the answer was supposed to be “color”. Which I didn’t say at the time (even though I noticed it wasn’t there) because you wouldn’t say a drawing isn’t complete because the artist chose not to use color.

There are plenty of examples of drawings that have no color that are considered complete works. It’s kind of funny too because later in life I became really into drawing and animation, and I don’t use any color in any of my works, because I don’t think it’s that important to what I’m trying to convey. To me it’s all about structure. It’s funny that I had those same proclivities at age 5.

“Scores have risen precipitously without a corresponding increase of g.” This is wrong. G is just the first factor component in the correlation matrix of the results for a variety of cognitive tests. If the population’s IQ has changed over time, g will change also.
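To make that definition of g concrete: with an all-positive correlation matrix (the subtest names and numbers below are toy values, not real data), the first principal component has all-positive loadings, and that dominant factor is what gets labeled g:

```python
# Toy correlation matrix for four cognitive subtests (assumed values):
# vocabulary, similarities, matrices, digit span — all positively correlated.
R = [
    [1.00, 0.70, 0.45, 0.40],
    [0.70, 1.00, 0.50, 0.42],
    [0.45, 0.50, 1.00, 0.55],
    [0.40, 0.42, 0.55, 1.00],
]

def first_principal_component(m, iters=200):
    """Power iteration: converges to the dominant eigenvector of the matrix.
    With all-positive correlations, its loadings are all positive."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

g_loadings = first_principal_component(R)
print([round(x, 2) for x in g_loadings])
print(all(x > 0 for x in g_loadings))  # True: every subtest loads positively on g
```

Since g is extracted from whatever battery and population you feed in, it is a property of the data, which is why it can track changes in the population’s scores rather than standing apart from them.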

You seem to be referring to the idea that some proportion of the variance in IQ is attributable to genetic, rather than environmental factors; and this (we assume) has not changed significantly in the last 100 years, whereas IQ famously has. But this is not the same thing as g. Sorry ilkarnal. What are you actually trying to claim here?

“IQ isn’t intertemporally valid. Scores have risen precipitously without a corresponding increase of g.” G is the first principal component of a factor analysis of the correlation matrix of a bunch of different tests of cognitive ability. Are you trying to claim that the various types of IQ test are inferior to other tests of cognitive ability, because IQ tests exhibit the Flynn effect? This seems decidedly odd, if true; because it raises the question of why psychologists wouldn’t update their IQ tests to include these other, more intertemporally valid cognitive tests.

What is more indicative of cognitive abnormality – liking bongos more than violins, or playing around with a barely sub-critical hunk of plutonium one screwdriver slip away from death? How about handing secrets over to the Soviets?

I wouldn’t describe Feynman’s tastes – his art, his music – as ‘populist’ exactly. Idiosyncratic, sure. It wasn’t that he was turned off by the idea of philosophy so much as he thought the philosophers he ran into were snake-oil salesmen. Here’s Gell-Mann talking about Feynman: As idiosyncratic as Feynman’s behavior was, he didn’t really go far beyond the pale.

Yes, he was a bit of a play-actor. But he wasn’t exactly play-acting a ‘populist’ or someone who was just ‘normal.’

: Feynman was universally regarded as one of the fastest-thinking and most creative theorists in his generation. Yet it has been reported, including by Feynman himself, that he only obtained a score of 125 on a school IQ test. I suspect that this test emphasized verbal, as opposed to mathematical, ability.

Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. He also reportedly had the highest scores on record on the math/physics graduate admission exams at Princeton. It seems quite possible to me that Feynman’s cognitive abilities might have been a bit lopsided: his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept while an undergraduate. While the notes covered very advanced topics for an undergraduate, including general relativity and the Dirac equation, the notebook also contained a number of misspellings and grammatical errors. I doubt Feynman cared very much about such things.

IQ measures general intelligence. This is because every intellectual activity that can be scored measures general intelligence. That’s kind of inherent in the concept of general intelligence. Per Jensen, g-loading of IQ is similar to g-loading of vocabulary tests. What is special about IQ is that it is acultural, you can’t give the same vocabulary test to a Chinese person and an African and a Scotsman, but you can give the same Raven’s Progressive Matrices problems to all three and have direct comparison.

Also, the acultural nature of IQ gives it some protection from accusations of ‘cultural bias’ that inevitably issue forth from the left. There are reasons to be skeptical of IQ, relative to other highly g-loaded tests. It isn’t intertemporally valid, for some reason. That’s enough to make you worry. It is absolutely no surprise that we’re seeing lots of people whose IQ underestimates their g factor; considering the g-loading of IQ, you’re going to have a lot of people in, say, the top or bottom 5% whose scores are really not very representative.

This doesn’t mean that IQ tests don’t matter for individuals. The further you are from the average of your chosen arena of competition, the less likely you are to be able to hack it.

But on an individual level, we see that below-average IQ people sometimes become scientists, professors, engineers, and almost anything else you could hope for. Not much below average, according to that table of yours.

Lotta those occupations have no grey below 90 IQ. Also, becoming “scientists, professors, engineers, and almost anything else you could hope for” is very very very different than being SUCCESSFUL scientists, professors, engineers &c. “Desperately try to hold on to their dream of being an intellectual” being “an intellectual,” I would suppose, involves more than being able to plausibly and unproductively blend in.

What’s the IQ average of those who actually move things forward? Probably that would be a grey bar substantially further to the right than any on that chart.

“What’s the IQ average of those who actually move things forward? Probably that would be a grey bar substantially further to the right than any on that chart.”

However, I think a good part of moving things forward is simply having sufficient nonconformism to come up with your own way of looking at things. Folks have noted that North European cultures punch above their weight in terms of important discoveries, and if so I suspect this is a result of North European cultures being highly individualistic. ( has some great anecdotes that provide perspective on European individualism.)

“Per Jensen, g-loading of IQ is similar to g-loading of vocabulary tests. What is special about IQ is that it is acultural, you can’t give the same vocabulary test to a Chinese person and an African and a Scotsman, but you can give the same Raven’s Progressive Matrices problems to all three and have direct comparison.” You are mistaken.

A typical IQ test is not just Ravens. A full-spectrum IQ test like WAIS includes extensive verbal testing.

Therefore you can’t give the same test to people from different countries/cultures. You have to translate them.

And international IQ comparisons are consequently difficult. IQ tests are more g-loaded than a vocabulary test, because otherwise psychometricians would just use a vocab test. Vocab tests are part of the WAIS package.

By all indications and almost all the tests I ever took, my IQ is between 4 and 5 SDs above the mean, except for one test I took in 2nd grade where I scored 133, and that’s the score my school district gave to my teachers at the beginning of each new school year. So measurement error can be quite large, especially among children. I knew Feynman. He was obviously smarter than me; in his case the IQ measurement was off by at least 3 SDs.

(John Conway and a couple of other people I know personally are also smarter than me, but I won’t estimate this for anyone I don’t know personally. And when I say “smarter than me”, I mean “at least as smart as me in every way, and smarter than me in some ways”. Feynman excelled at every kind of thinking, there was nothing narrow about him.)

“In his case the IQ measurement was off by at least 3 SDs.” This strikes me as an obvious fallacy.

Imagine you knew a very smart person who had a merely average SAT score; would you conclude that the SAT was wrong? That kind of thinking seems to me to be wrongheaded. After all, what is your SAT score, other than whatever score you got on the SAT? Since the only reasonable definition of IQ is “the score one receives on an IQ test”, I don’t see how, if it was consistently tested at a given level (and in this case that is a big if), it could be wrong, even in principle. You do mention childhood tests as being unreliable, but I was under the impression that the score came from a test he took as an adult. What you should instead say is that in Feynman’s case, and perhaps in others, IQ was unrepresentative of his overall intellectual abilities; which I think is the point that people telling the 124 anecdote are trying to get across. I suspect that the truth is that Feynman’s intellectual talents were not evenly distributed.

He may have had an incredible mathematical ability, and merely above average verbal ability. It’s possible.

It’s kind of hard to track down an original source. The closest I’ve got is a piece on “Surely You’re Joking, Mr. Feynman!” in People magazine from 1985. On the trip home from the Nobel ceremonies in Stockholm, Feynman stopped at his high school in Far Rockaway, where he looked up his grades and IQ score.

“My grades were not as good as I remembered, and my IQ was 124 or 126, considered just above average,” he says. Reports Gweneth: “He was delighted. He said to win a Nobel prize was no big deal, but to win it with an IQ of 124, now that was something.” So I revise my impression from adult to high school student. If somebody here has a copy of the book, can you look through it and find the relevant portion? I’ve also heard that William Shockley, James Watson, and Luis Alvarez had similarly gifted, but not extraordinary, sub 130 scores.

At least in Shockley’s and Watson’s cases this would be a lot less surprising than Feynman’s, since their work was basically experimental, and demanded a lot less in terms of abstract mathematical reasoning.

No, I got you on both points. Though I may have been a little unclear in how I phrased my post, I did qualify my statement about the innate validity of IQ tests: “I don’t see how, if it was consistently tested at a given level (and in this case that is a big if).” After a little more digging it seems that the anecdote came from testing that was done during high school. It’s not clear, but I’m guessing that means it was only one test, so make of it what you will. I really have no idea what the confidence interval on a 1930s-era IQ test looked like; if you do, please contribute what you know to the conversation.
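For what it’s worth, the standard way to put a confidence interval on a single IQ score is via the standard error of measurement, SEM = SD * sqrt(1 - reliability). The reliability figures below are assumptions for illustration, not data about any 1930s-era test:

```python
def iq_confidence_interval(score: float, reliability: float,
                           sd: float = 15.0, z: float = 1.96):
    """95% confidence interval from the standard error of measurement.
    SEM = SD * sqrt(1 - reliability)."""
    sem = sd * (1.0 - reliability) ** 0.5
    return score - z * sem, score + z * sem

# A highly reliable modern test vs. a hypothetical noisy older test.
print(iq_confidence_interval(125, reliability=0.95))  # roughly 125 +/- 6.6 points
print(iq_confidence_interval(125, reliability=0.70))  # roughly 125 +/- 16 points
```

Even a quite unreliable test leaves a 95% interval far narrower than three standard deviations (45 points), which is why a 3-SD miss would point to the wrong construct being measured rather than ordinary measurement noise.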

On the Internet, nobody knows you’re a dog, so I have absolutely no way of verifying that you knew Feynman; and for that matter no way of estimating the accuracy of your evaluation of his intelligence. People are subject to cognitive biases, and often attribute positive traits to people whom they hold in high esteem for unrelated reasons (e.g. believing that physically attractive people are more intelligent). It’s possible that you were so awed by Feynman’s mathematical abilities that you overestimated his verbal intelligence. If we don’t think that Feynman was lying about his score, and he may well have been, then we are left with two possibilities.

Either someone with a “merely” gifted IQ did Nobel-prize-worthy work in theoretical physics, or there was a massive measurement error. On the one hand Feynman was obviously a genius; his work on quantum electrodynamics proves that better than any test ever could. On the other hand, we have at least one test that seems not to have captured the magnitude of that genius. This was either due to systemic limitations of the test, for instance it might have been very heavily weighted for a type of intelligence in which Feynman was merely above average, or it may have been due to some kind of random error.

If three standard deviations’ worth of error was common on whatever test he took, then that test wasn’t worth very much, and it seems like somebody would have noticed that. Of course, some combination of both factors could have been at work.
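For a sense of scale: in classical test theory, the standard error of measurement is SEM = SD·sqrt(1 − reliability), which bounds how much random error a test of a given quality plausibly produces. A minimal sketch, assuming the conventional SD of 15 and an illustrative reliability of 0.9 (a typical figure for well-constructed tests, not a claim about the test Feynman took):

```python
from math import sqrt

def sem(sd=15.0, reliability=0.9):
    # Classical test theory: standard error of measurement.
    return sd * sqrt(1 - reliability)

s = sem()
print(f"SEM: {s:.1f} IQ points")

# A 45-point (3 SD) error on such a test would be this many SEMs out:
z = 45 / s
print(f"45-point error = {z:.1f} SEMs")
```

At roughly 9.5 SEMs out, an error of that size would be astronomically unlikely on any test with decent reliability, which is exactly why somebody would have noticed.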

Perhaps there was only one standard deviation’s worth of measurement error, and the test was heavily weighted toward verbal IQ. In that case, if he took the same test again we might have seen that his IQ was around 140, and still unrepresentative of his intellectual abilities. Above you say that your IQ is between four and five standard deviations above the mean. If that’s true, and the test is generally a fair representation of intellectual ability, then you are somewhere between being one of the top few thousand and one of the top hundred minds in the country. If that is the case, then it’s likely we could be talking about far more interesting things than psychometrics.

What do you do for a living? If you knew Feynman, and Conway, then I take it you are a mathematician, or physicist, or perhaps some other sort of natural scientist.

What is your primary area of research? @Steve Sailer You are of course right.

There are between a hundred and several thousand people in this country alone who have the abilities that Polymath is claiming for himself. They must do something with their spare time, and the SSC comment section seems to be as likely a place as any to find them.

I would hope that nothing I’ve said would be taken to imply that I disbelieve him, per se. It’s just that if what he says is true then there are probably things he could contribute that would greatly enrich the SSC discourse. I personally don’t get a lot of opportunity to talk to world-class minds who knew Richard Feynman, and I bet it would be a lot more interesting than the average conversation around here. @bbartlog Related to my post below, I don’t see how this is possible. It was my understanding that the normal curve, along with the standard deviation of 15, was built into the definition of IQ under the newer formulation (i.e. after it stopped purportedly being about mental age).

In other words, when the raw-score-to-IQ calculations are being made, they are supposed to map the raw scores onto a normal curve with an SD of 15. So a deviation from that would mean the test designers screwed up rather than something about the population. “By all indications and almost all the tests I ever took my IQ is between 4 and 5 SDs above the mean”: I don’t understand how this is possible. 4 SD above the mean is about 1 in 31,000 and 5 SD is about 1 in 3.5 million. I can’t imagine any exam norming study has included 31,000 adults, much less 3.5 million 12-year-olds. And even those numbers wouldn’t be high enough to have any confidence in the results that far out.
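Those rarity figures follow directly from the normal tail; a quick standard-library check:

```python
from math import erfc, sqrt

def one_in_n_above(z):
    # One-tailed upper-tail probability of a standard normal,
    # expressed as "1 in N".
    p = 0.5 * erfc(z / sqrt(2))
    return 1 / p

for z in (4, 5):
    print(f"{z} SD above the mean: about 1 in {one_in_n_above(z):,.0f}")
```

This comes out to roughly 1 in 31,600 at 4 SD and 1 in 3.5 million at 5 SD, which is why norming samples of the required size are implausible.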

Okay, so you extend the norms by over-weighting the number of extremely bright people in the pool. What does that do for you, really? It isn’t as if you can use some other test to determine the true IQs of your extended norm sample and use that to calibrate your new test, unless the other test itself had been normed with some fantastically large pool of people.

Seeing something like 5 SD immediately sets off alarm bells in my head, and I would think it would do likewise for other people familiar with statistics. But maybe I’m missing something. All the simpler tests that maxed out at 150 or so, I maxed out on, except that one 2nd-grade test. Later I took adult tests and scored in the high 160s. I also got 2400 on the GRE back in the 80s when it was really hard.

I knew Feynman because his best friend, Ed Fredkin, was one of my professors, and because I worked for a time as a researcher at the MIT Laboratory for Computer Science in Fredkin’s group, which studied the same kinds of theory that Feynman was working on over at Danny Hillis’s company down the block. I have been a mathematical consultant or software developer for thirty years or so, have a math Ph.D. and an Erdős number of 2, have won several awards for research and writing, taught on the side a lot, and know a few people who are smarter than I am. My dad’s IQ was measured at 171 back when he was at Bronx Science, which he graduated from 2 years early, and I’m way better at math than he was (he was an attorney; my mother is a physician). I think the New York City school system had data on millions of children over the years, including hundreds of thousands from the high-IQ subpopulation of Ashkenazi Jews (which my dad was); they could certainly try to give a test that went up to 170 or so. I agree that the numbers probably aren’t very precise or accurate, and also that there are fat tails due to combining populations with different means. But the GRE is taken by a large enough group of smart people to give pretty good calibration, and even the new GRE was considered by high-IQ societies to have a top of 4 sigma, while the old one that I took had a lot fewer perfect scores and therefore a higher cutoff at the top end. I think, if you have never done a test where the difficulty of the tasks extended into a range that was clearly too difficult for you, you only know a rough estimate of your IQ, and not necessarily an underestimate.

In my experience the difficulty of IQ test tasks usually maxes out somewhere around 130, and where you land in the extrapolated range beyond that depends more on your motivation and your ability to concentrate than on your fundamental limit of understanding. I think my IQ is realistically around 135 but I have scored above 170 in some tests. I can always tell when I will overscore, because the tasks are kinda hard but still doable.

Then, when I concentrate hard enough on doing these doable tasks, the extrapolation takes me to Neverland. My guess is that fat tails usually come from extrapolation. Norming a test means squashing the actual distribution into a Gaussian shape. If you do it correctly, you will by definition not have a fat tail, even for a multimodal actual distribution. I don’t see what NYC “having data” on millions of children over the years does for them. Unless the claim is that 1) the same test form was in use over those millions, and 2) they were continually and carefully collecting raw-score information beyond the test’s then-current calibration, and 2b) the questions scaled well enough to have meaningful raw scores into the one-in-millions range, and 3) some statistical expert went back over the carefully collected data to extend the calibration of the test.
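The squashing can be made concrete: deviation-IQ norming is essentially a rank-to-Gaussian map over the norming sample. A sketch with a hypothetical helper (not any real test’s procedure), which also shows why the size of the norming sample caps the highest score the test can legitimately assign:

```python
from statistics import NormalDist

def deviation_iq(raw, norming_sample):
    # Rank-based norming: find the raw score's percentile within the
    # norming sample, then map it through the inverse normal CDF onto
    # the conventional IQ scale (mean 100, SD 15).
    n = len(norming_sample)
    below = sum(1 for s in norming_sample if s < raw)
    ties = sum(1 for s in norming_sample if s == raw)
    percentile = (below + 0.5 * ties + 0.5) / (n + 1)  # stays inside (0, 1)
    return NormalDist(mu=100, sigma=15).inv_cdf(percentile)

# Hypothetical norming sample of 1,000 people with distinct raw scores:
sample = list(range(1000))
print(f"median raw score -> IQ {deviation_iq(500, sample):.1f}")
print(f"top raw score    -> IQ {deviation_iq(999, sample):.1f}")
```

With only 1,000 norming subjects, even the best raw score in the sample maps to an IQ around 146; anything reported above that has to come from extrapolation.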

And all this assumes they were going for a deviation IQ in the first place, which, given how long ago this was, may well not be true. Exactly; here is the problem with trying to have discussions about IQ. People’s egos are so tightly tied to these numbers that they are willing to swallow any old snake oil that tells them what they want to hear, even if they are smart and statistically knowledgeable enough to know better. There are clearly people on this planet who are 1 in a million with respect to intelligence.

And 1 in a billion. And there’s a smartest person. There just aren’t any standardized tests capable of identifying these folks. Absent truly enormous expense, there can’t be. Which is perfectly okay! This little fact of the matter need not be taken as an insult or an attack on anyone’s intelligence. But for some reason it too often is.

(I suspect a lot of SSC readers might fit into the “high-IQ underachievers” category; maybe that should be a topic unto itself.) From my perspective, if you’re in a school system that doesn’t track well, you wind up with a massively underdeveloped work ethic and social skills, since your raw intelligence is adequate for most pre-puberty challenges where normal kids have more incentive to work hard and make friends. Usually you realize how fucked you are socially when puberty hits, and it’s a crapshoot how much you can repair it in high school and college (college is easier since it’s a fresh start, so you see a lot of catching up there). Work ethic can take much longer to catch up with you, depending on what path you take academically and variable teacher/professor emphasis on exams vs. busy work, but since most jobs, even high-g ones, are mostly busy work, it’ll be obvious by the time you have one full-time. Yeah, sounds like me.

Now, what I said above was no exaggeration – I literally was doing calculus at age 8, and I haven’t the slightest idea how an exceptional case like me could be “tracked well.” In retrospect, I think I’d have been better off holding myself back, staying with my grade level, and developing socially like a normal human, instead of jumping around among schools and tutors to nurture my “genius” while leaving me emotionally stunted. Long story short, my social skills are still very weak, and my work ethic is decent if I can stay focused, but I’m easily distracted if the work isn’t engaging enough, which it often isn’t. My main issue, I think, is that I lack creativity. Set me on a task with a clear answer and I’ll find it; ask me to come up with something myself and I’m stumped. @BBA: I hope you don’t mind if I ask a few further questions: Are you on the autism/Asperger’s spectrum? Do you have particular unusual mental abilities (e.g. memory, calculation)?

Are your abilities focused in a particular area (e.g. math) at the expense of others? Do you have difficulty handling verbal ambiguity?

Or, what is difficult about interacting with people? What are your parents like? Did you participate in math competitions or similar activities? What do you do now? Do you work in math or programming? It’s late and I don’t want to delve too much into my personal life, so I’m not going to answer all of those questions. I’ve never been diagnosed with an autism spectrum disorder.

My younger brother has, and he shares a lot of my personality quirks, only his are much more severe and interfere with his daily functioning, while mine just make me obsessive and anti-social. So I consider myself a low-functioning neurotypical. I got a 1570 on my SATs. I’m better at math, but my language skills are perfectly up to snuff.

I’ve always hated writing but I’ve also always been told I’m a good writer. Must be doing something right. I just find social situations exhausting, and hate the feeling of being “trapped” at a noisy, crowded gathering of some sort when I’d much rather be in my room, alone, reading. I’m okay at one-on-one conversations but with more than a few people I find myself listening more than talking and rarely able to get a word in edgewise, which usually suits me. @Nybbler: yeah, I probably would have been miserable no matter what, but never having the chance to make friends as a youngster has made it so much harder now that I don’t even know where to begin. I think the primary reason I haven’t lived up to all that “99th percentile” grade school hype is that I was really bad at going forth and doing tasks that were not laid out for me by authority figures.

Getting a good job in particular, which is to say the combination of choosing a field, seeking out appropriate jobs in that field, and applying for those jobs in such a way that I actually got called for interviews, was existentially terrifying to me as a new college graduate, in much the same way that being told to plan and then embark upon a solo trip backpacking across Europe might be existentially terrifying to someone whose prior travels in life always involved being shepherded around by others. I think there’s a certain brand of high-IQ nerd that wasn’t properly socialized by our society, since standard socialization methods were such a mismatch for them, and thus missed out on essential character development. Fortunately, I think it’s possible to bootstrap a lot of that yourself, though it’s a bit painful. I’m actually starting to think about this from the perspective of a father, rather than a young adult reflecting on my own childhood. If my son turns out to be as much of a nerd as early indications (and his lineage) suggest, I’ll need to find a good way to avoid these pitfalls. I’m thinking boy scouts and martial arts.

Anyone have other ideas? Science-focused summer camps? I only acquired the ability to actually pursue a goal like this at age 39, and only by reading countless guides that broke down how to do every single step from finding leads to preparing questions for the interviewer.

I’m not sure what’s different about 21-year-olds who just go out and do the interviews. Maybe they’re used to life being difficult while I was lazy, maybe they’re more used to playing out social scripts while I expected to be me without changing, maybe they don’t react to uncertain situations with executive dysfunction. Maybe they have friends and family that encourage them and normalize the process. Maybe they have less complacency that things will work out for them if they don’t make them happen (I never found a mate, either). At any rate, I spent about ten years driving a forklift with my BS in Math and 1480 SAT, then got forced to take a much tougher job in the same factory where I had to adapt to survive. After several years of being forced to develop maturity in this way I started playing D&D, and applied the strategies from work to prepping at home. After that I was able to get into the field of computer programming via self-study and showing up for interviews.

I don’t know if I could ever manage to teach my college self how to be that responsible, or what makes other people able to do it. I had to learn so much about business people’s perspectives by reading their writing on blogs and Reddit, I had to develop so many time management skills. To me it seems like there should be a lot of people like me who are easily able to perform intellectual work but just don’t get invited to by the system. My brother was completely incapable of getting a job after high school. My mom would pick up applications for him, hold his hand through filling them out, but somehow he never returned them. He had the same problem in college — he had the brains, but he didn’t have the executive function, and without our parents to help, he flunked out.

My parents finally put their feet down, afraid he’d never leave home, and said if he didn’t get a job, he had to join the Navy. So that’s what he did. In the military, *they* try to win *you* over, at least if your ASVAB scores are good enough. (His were good enough to get his choice of jobs.) Every time his term in the military runs out, he talks ahead of time about leaving the Navy, maybe studying computer science. And then the clock runs out and he just reenlists because he hasn’t made alternative plans. He’ll probably make a career of it, although it isn’t his ideal job. Autism runs in our family and I’m fairly certain he has it.

Maybe it’s something to look into. Executive dysfunction is a really big sign. I have it to a lesser degree; it takes me a week to make a doctor appointment. Luckily I’m married to an organized person so actually paying the bills isn’t my job. Estes decided to attempt a direct confidence boost. He told some members of the group, completely at random, that they had done very well on the previous test.

On the next test they took, those men and women improved their scores dramatically. It was a clear measure of how confidence can be self-perpetuating. This article is about gender differences, and it also has an interesting video midway through about how men learn to let things roll off of them through playing sports and getting casually insulted when they are kids. This fits with an analysis I saw indicating that people who play sports while they are students make more money (sorry, I can’t find the link; I swear it was on halfsigma.typepad.com or somewhere like that). And it is giving me a great justification for all of the times I teased my little sister when she was growing up 😛. I like this post; I think it’s a good reminder of an important principle, although for those without any grounding in statistics I think it may not make things much clearer; but there are surely plenty of places that offer good introductions to statistics to which you could point such people. So none of that is why I’m commenting.

I’m commenting because I was triggered by something said offhand, and I’m just going to try to keep this brief. I don’t think that anyone who is fairly familiar with music would consider it a subject that occupies the intellect at a capacity below that of the average scientific pursuit. I think that there is certainly an anti-intellectual mythology around music; and even more than those who create it, many who critique it believe that a frankly offensive ignorance of its workings is not only acceptable but may be considered an advantage.

It is troubling not only that the view is held by some in “authoritative” positions, but also that this represents a sort of baseline view in our culture. Again, the remark was offhand and simply served a shallow illustrative example in the piece, the purpose of which example was clear. But I will push at any turn against the notion that music is a field in which the intellect is best left out. FWIW, I have both a high IQ (well above the nominal genius cutoff) and a strong musical talent, including for rhythm. That makes the fact that I agree with Jensen probably of some interest. These are different things. However, I also notice that other aspects of musical talent seem to recruit the same pattern-matching ability I use for demanding cognitive tasks like writing software, and thus are probably heavily g-loaded.

The “rhythm” part of “musical rhythm” is an important qualifier in Jensen’s negative statement. It is common folklore among both programmers and mathematicians that music talent is strongly correlated with talent in their fields.

This matches my experience, though I have never had any reason to test it. One of my favourite musician putdowns is that they are so primitive they only know the alphabet up to G (although they can say it forwards and backwards), and can only count to four; and drummers don’t even have the alphabet down. I can actually speak to the drummer jokes as a professional, though; obviously there are a lot of aspects of music that a drummer gets to just ignore, or it’s perhaps better said that while others are doing quick calculations to get the particulars of things like a chord progression, drummers think at a level of abstraction higher, just understanding the overall trajectory. They essentially read and interpret an abstract but intrinsic property of the piece called harmonic rhythm: the way that the flow of music sets up certain expectations, and how those expectations are resolved and in what time frame, which creates the division of phrases and the arc which makes those phrases, separately and as a whole, intuitively make sense to a listener. So a drummer reinforces and modulates the music at the level where a listener projects their understanding of how the music “works”, and possibly even what it means. This is not to say that other instrumentalists do not work or think at this level, but again they simply have concerns to address in playing that a drummer does not. In this way, I might think of drumming as displaying a more emotional kind of intelligence.

It is generally true in music that while there may be wrong answers, there is no single right answer, and this is even more the case in drumming; a drummer hears what everyone else is playing, extrapolates to how they are each thinking, combines these impressions to interpret the flow and meaning of the music as a whole, and then makes specific interventions to impose and reinforce that interpretation. But the whole process is generally an unconscious one that you “feel” intuitively. The average person who gets a low IQ score says “Yup, guess that would explain why I’m failing all my classes”, and then goes back to beating up nerds.

I know this is meant in jest, but it seems really uncharitable to the “low IQ” crowd, which for the anxious portions of the SSC audience appears to start around the mid-90s. Most of these people are the perfectly productive, reasonably articulate and curious types to whom Reader’s Digest Classic Editions were targeted in decades past. Statements like these suggest that a bit of ingroup/outgroup thinking has crept into our conception of the meaningfulness of the measure. De-individualization and “the soft bigotry of low expectations” (sorry) are among the concerns that I’ve seen the online IQ-fretters express. I don’t think the IQ-anxious are limited to SSC or the rationalist community, or even the broader nerdy proto-intellectual community. They turn up in fora (both intellectual-leaning and basely popular), clever-sounding and funny and concerned that they or their kids may be at a disadvantage. They may not worry about their ability to contribute an oriel window to the edifice of human knowledge, but they are concerned about their economic and social security, or their ability to be taken seriously as a leader in a hobby community.

Additionally, since this note of reassurance was meant to resonate on the individual level, I’m not sure I would call “emotional intelligence” silly, at least in its popular understanding. Being able to fluidly navigate primate social hierarchies and intuit the desires of others is a useful skill in many contexts. If Dale Carnegie and “game” practitioners are correct, performance on these measures can be improved with practice, unlocking a great deal of value for the individual. “I’m not sure I would call ‘emotional intelligence’ silly.” Yeah, I don’t get why Scott is sh*tting on emotional intelligence. It may be kind of hard to measure quantitatively, but a test for the ability to read people, understand their emotional states, and model their mental processes seems to me like it would be very predictive of future income, at least for people with a certain minimum IQ. If you look at the leaders of Fortune 500 companies, I don’t think what sets them apart is their spatial reasoning skills.

“De-individualization and ‘the soft bigotry of low expectations’ (sorry) are among the concerns that I’ve seen the online IQ-fretters express.” And they’re right to express those concerns! Almost nobody in “the system” actually pushes you as hard as your IQ allows you to be pushed. They instead take the IQ number as a signal of “this is how well you can do without any particular effort on our part or your part”, and let you slouch along. You can almost always go further and do more in intellectual subjects than the people giving you an IQ test say you can. It’s important to put in the effort, because the people getting extraordinary results aren’t coasting on their IQs.

Coasting on your high IQ leads to Brilliant But Lazy/Gifted Underachiever Syndrome. Great results come only from having a decent IQ and then working hard as hell. Excuse you, I are very dumb! 🙂 Mathematically, that is.

Pattern matching, spatial awareness/manipulation, the rest of it – at times I still go “It’s over here on the right” *points to the left* “Ah ha ha, other right, I meant!” So any test that relies on “what pattern comes after this jumble of lines and dots?” is going to have me taking off my shoes to count on my toes ‘cos I don’t have enough fingers to count that high. Words, though, I’ve never had any trouble with. Which is really fun all through primary school when you’re bad at maths but reasonable at other subjects, so the teachers go “Well, the reason you’re doing poorly in maths can’t be because you’re stupid, else you’d be doing poorly in everything, so you must be lazy and just not working hard or trying!” Mark, are you referring to Professor Richard “Ulster Says No!” Lynn’s figures? Because I think he’s as trustworthy on this topic as a fox in the henhouse; I do believe he has a lot of political inclinations which lead him to interpret results through a particular lens, and if the Wikipedia article on the criticisms of his work is correct, then he used a lot of fudging, guesstimating and “If I take this result from 1920 and extrapolate a bit” for his list of global IQs. I really find it very hard to believe that people five miles on one side of an arbitrary line will have a much better average IQ score than people five miles on the other side of it (as you may see from the graphic, Norn Iron is lumped in with the UK’s average of 100 as against the Republic of Ireland’s average of 92 – or 96, depending on which story you read).

But faix and begorrah, Muster Mark, shure isn’t yourself right about meself being Irish, like! Now excuse me, yer honour, I must go and give the pig in the parlour a feed of stirabout, wirra wirra! You don’t write like most people I’ve met with 99 IQ, I can tell you that much. But the whole point of OP is that you shouldn’t put too much weight on a single number for individual purposes, because all sorts of shit can tweak that number up or down, or make it nonindicative for the stuff you actually do, that’ll average out on the population level. This forum just has trouble accepting that because we’re mostly nerds who compare IQs like a drunken football team compares dick measurements.

I might also say that any online IQ test is only slightly more reliable than tea leaves. Intelligence doesn’t have the practical value you’d think it would. The same with wealth, really. People assume they would be happy if they were super smart and super rich, and that they would have no problems – but generally speaking, the complexity of the problems assigned to super smart and/or rich people scales up. Studies show the most competent people in a workplace are assigned the toughest, most important jobs. Which is actually smart when you think about it. And rich people have rich-people problems, like trying to line up a cleaning service for their ski chalet in Alsace (better learn French!).

People assume that having seven cars is more fun than having none, or that being a veterinarian is better than being a dog groomer. But when we study happiness, material possessions such as lots of cars don’t really correlate with it. And while a dog groomer and a vet both smell like dogs, only one of them has to euthanize beloved family pets and coach parents through how to grieve with their children. Especially at the community level, wealth and IQ can solve lots of problems.

But there are generous poor people with low IQs who make important contributions to the world too. Intelligence doesn’t have the practical value you’d think it would? Depends on what you’re doing. I’m a systems programmer and for that it matters a hell of a lot. I planned to be a theoretical mathematician; it would have mattered a lot there, too. I recognize that not all fields are so IQ-loaded, of course.

I don’t know whether this counts as “practical”, but I enjoy being really bright. My universe is a tremendously more interesting place because of all the things I know and my ability to reason about them. Grokking the physics of rainbows gives them a depth of beauty and meaning they wouldn’t have as mere colors in the sky. Wealth might not be a guarantee of happiness, but poverty sure comes with a lot of suffering. And as more and more jobs that don’t require much intelligence get automated, people who have a low IQ have a really good reason to be worried. What if they can’t get any job that pays a living wage? What is that going to do to their chances to live somewhere that’s not a slum, have kids, pay the doctor when they get sick?

That kind of problem probably doesn’t kick in at the 95 IQ level, but ten points lower and there’s likely good reason to worry. I think that the jobs that are likely to be automated first are those that involve manipulation of information and language. Translators are going to be automated before hairdressers. Clerks take money (information) and talk to people (language).

I don’t think it’s freedom from automation that will protect brainy professionals and knowledge workers – it’s social inertia. Look at universities – for undergraduates, there isn’t really any reason for them to exist, and yet they do. Fund managers? The older I get the more I realize that pretty much the only thing you get for being smart* is a richer and deeper appreciation and understanding of the truth and beauty of the universe. And from that: 1: that’s actually a big deal.

The joys of being able to catch some glimpses into the rhythm of the world are immense, and I think those who don’t have that are really missing out. 2: but it doesn’t make this ‘life’ business any easier in terms of getting resources for yourself and dealing with dumb problems. * to be specific, I think there are ‘practical’ values to more intelligence, but they kind of stop past +1 sd, because that’s all it takes to deal with a lot of the junk in the social/cultural/machine of society landscape. And more than that doesn’t help on that axis.

These are good points; I thought it might be worth thinking about the selection of the anecdotal evidence too. Consider the selection process for the other anecdotes: 1. They contacted Scott about their IQ test. These are people who have got a low IQ score. On average they feel that their IQ score is lower than they would have expected.

They also have goals and ambitions which are not easy to achieve for people who are not smart. They have enough intellectual discipline not to just decide IQ tests are bullshit in light of this discomforting news. 2. We are drawing from a different distribution than the general population. SSC readers have higher IQ and higher intellectual motivation/interest than the average (also different variance, skew, etc.). The implications of point (1) are as follows. A) We are selecting people from the bottom of the distribution. There are two reasons to be at the bottom of the distribution: low IQ and a negative error in the test.

Consequently the anecdotes are selecting on people with higher expected downward error in the IQ test. B) We are selecting people who feel like their IQ score is too low, or not compatible with the environment they are in, or is insufficient for achieving their goals. This is almost like selecting on a second test. The people who are reporting low IQs have got this other signal (their expectations, environment or goals) about their IQ and are reporting the mismatch. These two issues alone are going to lead us to picking out anomalously high errors. The Feynman story is a little like this too. Why did we hear about Feynman’s low IQ score?

Because he’s a Nobel-prize-winning physicist who got a low IQ score. If he had scored 40 points too high, we wouldn’t know. The implications of point (2) are less clear.

We are picking from a different distribution, and so that means we cannot interpret the anomalies in the same way as if we were drawing from the population. How this impacts things depends on the true distribution of IQ for this group. But suppose, for the sake of argument, that the true selection process was “read SSC only if true IQ > 120”. In this case all scores below 120 would be due to measurement error. Draw a readership of x thousand from above 120 and you will get some really anomalously low scores; moreover, when they see these scores they will be really miffed and quite probably want to ask someone who seems to know about it. I don’t really believe that this is the selection process, but it illustrates the kind of selection worries that we should have.
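That toy selection process is easy to simulate. A sketch under the stated (and admittedly unrealistic) assumption that people read the blog only if their true IQ exceeds 120, with an illustrative measurement-error SD of 5 points:

```python
import random

random.seed(0)

# Toy model: true IQ ~ N(100, 15); observed score = true IQ + N(0, 5) error.
# Selection rule (for illustration only): read the blog iff true IQ > 120.
readers = []
for _ in range(200_000):
    true_iq = random.gauss(100, 15)
    if true_iq > 120:
        observed = true_iq + random.gauss(0, 5)
        readers.append((true_iq, observed))

# Every reader whose observed score fell below 120 has a true IQ above 120
# by construction: their low scores are pure measurement error.
low_true = [t for t, o in readers if o < 120]
print(f"readers: {len(readers)}, scoring below 120: {len(low_true)}")
print(f"average true IQ of the low scorers: {sum(low_true) / len(low_true):.1f}")
```

In this model a noticeable fraction of readers reports sub-120 scores, every one of them false low: exactly the anomalously large errors the selection argument predicts.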

Finally, we are also selecting on people’s interest in intellectual matters and willingness to read long blog posts that contain statistical analysis and complex arguments. This means that there is going to be another bias to worry about other than test error: the relationship between IQ and positive outcomes. Someone who has a low IQ and is reading SSC must have much higher interest/motivation regarding a set of skills and subjects that typically lead to success.

Consequently, even when we are getting the IQ test right we happen to be picking a person with low IQ who is going to outperform people with similar IQs. How important could this factor be? I’m not sure but it is important to note that while the relationship between IQ and outcomes is significant (in both senses) it only explains a small fraction of the variance in people’s success.

In any rigorous analysis of success we can explain around 50% of variance in terms of observables such as IQ, personality, education choices, etc. This leaves a hell of a lot unexplained, i.e. there is a lot of room for people’s motivations, interests and choices to determine their outcomes. Moreover, many of the observables (e.g. education, fitness) are partially choice variables themselves – that is, driven by preferences (I think philosophy is interesting) as well as capacities (I find philosophy easy). So people who are reporting low IQ scores and reading SLC should have anomalously high intelligence and success. Except, it’s only really anomalous if we don’t condition on the selection process.

The proportions of people in various roles increase and decrease over time, and roles even appear and disappear. I’m not saying it doesn’t create problems (it obviously does), but unless you want to put a stop to all that (difficult, and only appealing if you have a higher opinion of current society than I do) they are problems you need to deal with regardless. Even if raising average IQ increases the amount of such problems (which is not guaranteed; if we don’t devote effort to raising average IQ, we’re likely to devote effort to something else which will also produce change, or just be changed by becoming bigger slackers if we don’t devote effort to anything), it seems not implausible that a higher average IQ might increase our ability to cope with such problems. IQ is associated with so many good things that it’s hard to see why we wouldn’t try and improve society’s IQ if we can. As for the idea of ‘specialisation’ being a problem? You just use your higher average IQ to automate more and more unskilled and low-skilled jobs.

Any remaining really unpleasant jobs could be done by people who would be sufficiently articulate and aware to ensure they were adequately compensated (as distinct from the status quo, where people in poorly paid, unpleasant jobs are often unable to negotiate themselves into a better position). I agree that an average person with an IQ of 110-130 bagging groceries would probably be an improvement on an average person with an IQ of 70-90. But non-linear things happen above IQ 150, where people can become quite dysfunctional.

If the average increased to 110 and the standard deviation stayed the same, the percentage of people over 150 would increase significantly. It is also worth remembering that at least some of the genes that increase intelligence are beneficial when heterozygous but cause diseases like Tay-Sachs disease when homozygous. Increasing the frequency of these genes in the population could have all sorts of unexpected effects.
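The size of that tail shift is easy to check under a normal model (assuming, as the comment does, that the standard deviation stays at 15):

```python
from statistics import NormalDist

def share_above(threshold, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above threshold."""
    return 1 - NormalDist(mean, sd).cdf(threshold)

before = share_above(150, mean=100)   # ~0.04% of the population
after = share_above(150, mean=110)    # ~0.4%
ratio = after / before                # roughly a ninefold increase
```

Shifting the mean by two-thirds of a standard deviation multiplies the over-150 tail by nearly an order of magnitude, which is why the commenter's worry is about the tail rather than the middle.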

The pleiotropy of genes associated with high intelligence is also a big unknown. For example, myopia seems to correlate with high IQ. If there are mechanisms that select for the current range and average of IQ in society then it might be worth determining why they are the way they are before we go shifting population averages in the hope of some “obvious” global benefit.

Hasn’t Scott already written a post about this? Briefly, there isn’t any real reason to think that genius is inseparable from dysfunction. Yes, there are genes related to intelligence which can also result in dysfunction, but there are plenty of genes related to intelligence which don’t. And the main way we could improve intelligence is simply by correcting harmful mutations and genetic load, which would pull the 80s and 90s up to what we now consider average without pushing the upper bound of human intelligence any higher.

Well, after we first deal with the low-hanging fruit which causes low intelligence — lead poisoning, fetal alcohol syndrome, and so on. Trying to create super-geniuses is another question, and I certainly agree we shouldn’t do that until we know all of what a gene we wish to insert/correct would actually do.

In short, if we can cure mental retardation, we should do so, because while it’s great that a person of 70 IQ can get a job putting grocery carts away, that is no reason to preserve 70 IQ for the purposes of having someone put carts away. Brave New World was wrong in the assumption that you would have to create dumber people to operate the elevators. The same person could do the job with the defect corrected, and also have a happier life in many other areas.

Well, more intelligent people get bored more easily. They are less likely to just keep putting away carts day after day, and more likely to scheme to take your place as the grocery store manager.

Being “overqualified” for jobs is a thing. There are many employers who prefer not to hire PhDs for menial jobs. The employer quite reasonably fears that they will get bored and quit after a while, and the employer will be stuck retraining a replacement. They also may be worried that the person would be less likely to follow orders.

There are 7 foot tall humans, and even 8 foot tall humans. But not many of them. There is probably a reason for that. There are Einsteins and Fermis, but not many of them either. There’s probably a reason for that too! It’s fun to speculate about why human intelligence is not higher than it is. Does the mind get too unstable at high IQ?

Or do people get so bored with the daily grind that they don’t pass on their genes as well? Was there just not enough selection pressure? Or is human intelligence just difficult to push much past where it is (diminishing returns)?

This is great! Although what this article says should have been obvious, I never really thought about it this way until now. When I was younger, I took multiple IQ tests. Apparently I got a high IQ.

I didn’t care at all. I don’t even know the result of the latest test I took. I only took the tests because I was young and some respectable adults asked me nicely and I believed them. And this article explains that I was right, and at the same time, the people who gave the test and cared about whatever research they did were also right. I was right because my individual IQ score doesn’t matter too much, and because I have more reliable information about what intellectual tasks I am successful at than just the IQ. But at the same time, my school and whatever researchers cared about those tests were also right, because they got some objective measurements out of those tests and they could compare them to the same kinds of measurement done on thousands of other young people.

It’s a win-win situation.

I would argue that a very similar article could have been written about discrimination, where you see the same issue: people with certain traits often regard discrimination that clearly happens at the group level as a hard limit on their personal ability to achieve things, rather than ‘merely’ a hindrance where they can still achieve great things. Especially if they are smart and avoid failure modes that have been shown to cause much of the group-level differences, I feel that much of the hindrance can often be avoided, especially as a non-trivial part of the failures are caused by people in the group not being taught to make choices that lead to success.

Discrimination certainly doesn’t appear to affect all people equally. Obviously people vary in their natural aptitude for social skills, and those with unusually high aptitude are often able to find ways to “fit in” in groups other than those in which they were raised.

As a result, I feel like there is a pattern of at least some of the most successful people in disadvantaged groups (who are most likely to be among those with this aptitude) sometimes being dismissive of the struggles of those who are not so successful; the hindrances didn’t give them (the successful ones) much trouble, so they couldn’t be that big of a deal. Reality is more complicated, because people are also affected by being told that there are people who discriminate against them, and they may then interpret various experiences with ambiguous causes as discrimination. For instance, I read a comment by a white person the other day who had been stopped by the police 5 times last year for no clear reason. Because that person is white and there is no narrative of racial profiling against white people, that person didn’t conclude that he was racially profiled. Instead he was unable to point to a reason. Many black people who had the same experience would logically have adopted the explanation that the media tells us is common: racial profiling.

Because we know that cases like the white person above exist, a non-zero percentage of black people will falsely attribute experiences to discrimination. We can expect errors the other way too, where black people will falsely attribute discriminatory experiences to other causes. We can expect that different black people have different levels of eagerness to classify ambiguous experiences as discrimination. Unlike what Protagoras seems to believe, this eagerness doesn’t necessarily match how much this person is disadvantaged by discrimination. It seems plausible to me that black people in liberal environments will on average be very eager to interpret bad experiences as discrimination, because they live in an environment where it is commonly believed that discrimination is really bad; while that environment will actually have relatively low levels of discrimination, because most of the people oppose discrimination.

Another issue is that in itself it doesn’t necessarily matter whether people correctly judge whether they are discriminated against, but it’s really important whether people respond in ways that make the situation worse for them or better. Nancy correctly argues that different people respond in different ways, although I would argue that the range of options is broader than just working harder or giving up. For example, some of the police-on-black videos seem to feature black people who angrily confront the police officer(s) about perceived discrimination in ways that don’t make them safer or get them treated better. The conservative argument seems to generally be that it’s better to respond by working as hard as you can, while the progressive argument seems to generally be that the perceived discrimination should be punished, the black person given help, etc. My position is somewhere in the middle, with the caveat that I think that progressives often falsely claim a level of group-level discrimination where the scientific evidence doesn’t support their claims.

I think it might be useful to talk about some of the ways that people (of all sorts of intelligence) use to up-level their mental abilities.

I’ll just observe that most of what you’re talking about here are strategies to be more effective at life, not how to actually increase cognitive function. For the latter, my best guess is there’s no silver bullet, outside of the general observation that we get better at things we do, which transfers to similar tasks. Like, if you do a lot of math and programming, you’ll probably increase your general analytical ability; and if you read a lot of philosophy, you’ll probably increase your reading and logical ability.

Oh, and getting enough sleep.

My dad — a naturally smart guy trying to raise kids who had inherited his intelligence — used to say to us, “You can be dumb or lazy, but you can’t be both.” He also pointed out that we’d never reach our full potential if we counted on being smart and lazy. But it really is obvious that studying does actually make a huge difference.

I’ve always felt the main thing intelligence gives you is the ability to learn stuff faster. If you’re not smart but put in five hours of homework a night, you can pull down the same grades as a genius. Most people don’t feel like putting in that kind of effort, but some care enough to do it, and they succeed.

A framing that I find useful for interpreting social science results is: assuming that the study was perfect, had a large sample size, and had no particular sampling bias, even then its conclusions about the impact of measurable factor X on measurable outcome Y are: “In the absence of any other specific knowledge about the person, we predict that X leads to Y with some frequency”. It’s important to remember that where one has additional specific data on a person, these are also relevant, perhaps tremendously so.

Make up silly theories about “emotional intelligence” and “grit” and what have you. I guess on this site there is a lot of prior literature about these silly theories. For me I can only take the terms “emotional intelligence” and “grit” as standard English expressions describing personal qualities. To me it seems that both of those personal qualities are virtues that I would expect to be both measurable and correlated with life outcomes in just the same kind of statistical, precise-for-populations-but-not-for-individuals way that I expect for the virtue of (ordinary) intelligence. But when people use those terms, they aren’t referring to them generically, but to actual psychometric measures people have tried to construct, which have generally failed.

My understanding is that, besides IQ, the only psychometric measure that reliably correlates with success is the personality trait “conscientiousness”. Of course, that we haven’t so far been able to construct a reliable psychometric measure of something doesn’t mean that it doesn’t exist! There’s a lot of variance in personal outcomes left to be explained.

Ever since I was a kid (I’m 35 now) I was regarded as intelligent, but I never took an IQ test, probably because I was always worried I would turn out to be “ordinary” and lose the only thing that made me special somehow. After reading this article I took a deep breath and did the IQTest.dk Raven’s test right here in the workplace. I opened it in an incognito window, somehow thinking that if it turned out to be 95 or something I would just turn everything off and erase/block the memory from my brain. I scored a 138, and I couldn’t answer the last question because I was out of time; if my boss hadn’t asked me something in the middle of the test maybe I’d even have had time for that.

Then I activated my wp account to write this comment here (long time reader, first time commenter) to thank you for inadvertently encouraging me to take the test and feel genuinely special and not worthless at all. PS: Sorry for my English if there were any errors, it is not my primary language.

I would just add two comments: First, I think that people have a very inaccurate understanding of the actual IQ distribution, because what you read on the internet is so misleading.

People always read inflated IQ claims, and so they take a score of 105 to be absolutely crushing to their ego. But realistically an IQ of 105 just means you’re around average, like most people. Second, generally individuals know more about their IQ than what an IQ test can tell them. An IQ test gives you a noisy estimate of g, i.e. general mental ability; but a whole lifetime of living in your own brain should give you a much better understanding of your mental abilities. I mean, we spend most of our formative years in school, which is pretty much the most g-loaded thing people do regularly!

So how about this: if you had an easy time learning a wide variety of material in school, can easily read and write complex prose, and are pretty good at math, you have high g. And if an IQ test tells you differently, the test is probably wrong. Now if you always struggled in school, but persisted in believing that you possessed a True Spark of Genius within, and then you get a low IQ score — well, probably you were just in denial. Or maybe you have a learning disability.

I think the online SAT/IQ conversion charts people generally cite are wrong; they seem to assume a correlation of 1 between IQ and SAT, while it’s more like 0.7.

There are two issues: SAT takers are not a random sample of the population, and there is a less than perfect correlation. I think you deal with them in that order. So here is what I would say: (1) Take your SAT score and look up the corresponding percentile for that year.

In 2016, a 1450 is the 98th percentile, i.e. the top 2% of test takers. (2) Adjust for the fact that not everyone takes the SAT (no perfect way to do this; err on the side of caution). I couldn’t find the info online, but let’s be conservative and say that top 2% of SAT takers puts you in the top 1.5% of the population.

(3) Convert this percentile to standard deviations. Top 1.5% is 2.17 standard deviations. (4) Multiply by 0.7 (or whatever the true correlation is — I don’t remember off the top of my head). In our example that yields 1.52. (5) Convert to IQ. Here that’s ~123.
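The five steps collapse into a couple of lines. A sketch using the numbers above (the 0.7 correlation and the top-1.5% guess are the commenter's assumptions, not established figures):

```python
from statistics import NormalDist

def estimated_iq(pop_percentile, correlation=0.7):
    """Best-guess IQ from a population percentile on a correlated test."""
    z = NormalDist().inv_cdf(pop_percentile)   # step 3: percentile -> SDs
    return 100 + 15 * correlation * z          # steps 4-5: shrink, convert to IQ

iq = estimated_iq(0.985)   # top 1.5% of the population -> ~123
```

Step 2 is still guesswork: change `0.985` to whatever population percentile you believe is right and the rest follows mechanically.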

Admittedly step 2 requires guesswork, and step 4 relies on the actual correlation, which I’m not sure of (my brain tells me it’s around 0.7, but I wasn’t able to find a source for this after a quick search. It seems a plausible number to me). Also, this result should not be interpreted as “Scoring 1450 on the SAT provides as much information as scoring 123 on an IQ test”. Just that 123 would be our best bet of what you would score if given an IQ test, but you might score higher or lower than that. (Also, SAT has a ceiling effect which starts being important in the mid-high 1500s; but then, so do IQ tests.)

I’ve never heard someone claim his IQ was an exaggeration – only discuss whether it was too low due to a test or normalization error. Theoretical physicists run the gamut, and plenty are in the 140-160 area, so it’s certainly not a laughably high number.

Unless you’re proposing he fudged downwards to confound expectations? Anyway, his sister attested it to a biographer (Gleick) and said she used to tease him because she scored higher on the school test they both took. So there’s a source if you trust Joan Feynman (herself an astrophysicist).

In my opinion, self-reporting is unreliable when it comes to just about anything which might affect the speaker’s self-image or social standing. When it comes to IQ, an accomplished person would be tempted to understate his IQ in order to make his accomplishments seem more impressive.

If you read “Surely You’re Joking Mr. Feynman” with a critical eye, you definitely get the impression that Feynman is willing to exaggerate or lie in order to make the story better. So before I accept it, I would like to see a source for Feynman’s IQ which is not ultimately from Feynman himself. To put it another way, the assertion that Feynman’s IQ was only 124 is a somewhat extraordinary claim and it requires stronger evidence than the say-so of an interested witness with a propensity for telling tall tales.

It is not only about earnings that people worry. It is also about social respect. I will give an example close to me. My wife does not have a high IQ. I love her, and she has many other traits I would never in a thousand years trade for a twenty-point IQ bump. She is sort of a Hufflepuff.

I am a professor and so we move in intellectual circles, go to parties, etc., where most people are pretty high IQ, or at least put a high value on it.

Now, my wife really loves talking to all these people (she finds intelligence glamorous to no end), and mostly it goes great. But now and then, sadly, the women are just horrible to her, trying to draw her out into saying something pretentious or asking her some question they know damn well she can’t understand. Recently a wife of a colleague literally rolled her eyes and grinned at me like “how do you put up with this?” after my wife asked a sort of obvious question to her husband. Never in a million years would they do that if my wife had a cleft palate or something.

Being mean to the less-than-gifted seems like another “last respectable prejudice.” To make matters worse, my wife is very, very beautiful—to the point that someone (Uber drivers, servers, family) tells me so, unprompted, about once every few weeks. So, at these get-togethers, the men (these are engineering profs, not the most self-aware) will sometimes crowd around her and kind-heartedly explain some arcane interest of theirs at length, ignoring the women. I understand that, for the wives and girlfriends and female colleagues, it isn’t very fun when you go to the party and the guys want to talk to this physical beauty who can’t compete with you intellectually. But I’m still just astounded by the level of subtle meanness coming from people I otherwise know and like. This also happens at her work. There are a lot of high-performing people at her office, and she gets along with most of them.

But sometimes the women sort of gang up on her and put her down in ways too subtle to be, like, outright rude but which are obviously designed to hurt her feelings, put her down, etc. Then she comes home feeling bad about herself. It makes me furious. Fortunately, in spite of this, and in line with what you’ve written, she does well at work and has a good-paying job. But I do, having seen what life is sometimes like for her, worry about the possibility of our children coming out with “my looks and her brain” (at least the IQ element of her brain, not the wonderful personality and grit).

My seven-year-old son had an IQ test recently. He scored 104. I shouldn’t have been upset — I mean, that’s not dumb or anything! — but I was, because he sure seemed brilliant to me. And I am pretty sure my whole family would have scored above that. I never took an IQ test but my SAT scores were 670 math, 800 verbal. But the school psychologist said I shouldn’t put too much stock in it, because the reason we were seeing her is because my kid is autistic, and autistic kids don’t score properly on IQ tests.

Their talents are so unbalanced that they might score in the 99th percentile on one thing (as my kid did on vocabulary) and a zero score on another (as my kid did on a shape-rotating test). This is made worse by the fact that communicating is hard for him, so on every timed test, he did terribly because he took so long to get the answer out. If people who are scoring low might be on the spectrum, even mildly, I would be unsurprised about negative results.

If they could get a score breakdown, they might find that what comes out as “average” means “wildly brilliant in one area, completely unable to do other tasks.” And while that might hold you back in some ways, it won’t keep you from being a success at the stuff you *know* you’re good at.

Let people study a bit of physics. Physicists, when teaching their dogma to young minds, talk about accuracy and precision. Accuracy is how close you are to the true (that is, sought-after) result on average, and precision is how well your measurements agree with each other.

Usually, there is a picture with an archer and a target, with arrows first hitting all around the bull’s eye (high accuracy, low precision) and then on the other picture the arrows are hitting almost the same spot, but away from the bull’s eye (high precision, low accuracy). Basically, what OP is saying is that all this IQ stuff is high accuracy/low precision science. Theoretically, it should be possible to devise some measures that would be highly precise, but measure not exactly “general intelligence”, whatever that is. Something like ELO rating for chess. That would probably require (like in chess) people given enough time to learn a specialized skill, which relies on some, but not other, cognitive abilities, and then measuring them on exactly this task.

A bit like what we do with athletes. Then we can talk about the precision/accuracy trade-off. Addendum: Statisticians call the same thing the bias/variance trade-off. That is, OP claims that IQ is an unbiased measure with high variance. Just don’t expect it to predict a single person’s individual achievement with any kind of reliability.
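The archer picture maps directly onto a quick numerical check. A sketch with made-up noise parameters (the offset and spreads below are mine, chosen only to make the contrast visible):

```python
import random

random.seed(1)
TRUE_VALUE = 100.0
N = 100_000

# Archer A: unbiased but scattered (high accuracy, low precision)
a = [TRUE_VALUE + random.gauss(0, 10) for _ in range(N)]
# Archer B: tightly grouped but off-target (high precision, low accuracy)
b = [TRUE_VALUE + 8 + random.gauss(0, 1) for _ in range(N)]

def bias(samples):
    """Average distance of the sample mean from the true value."""
    return sum(samples) / len(samples) - TRUE_VALUE

def variance(samples):
    """Spread of the samples around their own mean."""
    m = sum(samples) / len(samples)
    return sum((x - m) ** 2 for x in samples) / len(samples)
```

Archer A has near-zero bias but large variance; Archer B is the reverse. On OP’s reading, IQ testing is closer to A: unbiased over a population, noisy for any single person.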

Especially not yourself.

One of my favorite views on this I’ve seen is “you are never a random sample”. IQ tells us a lot about, e.g., a random person’s probable career, but that’s largely because we don’t know anything else about them. A high IQ correlates with classroom success, high interest in reading, and so on.

It’s not just that IQ is a direct input, it’s that without other knowledge it provides a best-guess for all the other success factors. If you know that you have an IQ of 95, you probably also know all those other things. Do you read science-y books for fun? Did you find high school physics easy?

Do you have a strong impulse to work hard at physics? That’s all relevant data! Congratulations, you might be a great physics professor regardless of IQ! And if none of those things are true?

Well, someone confused by physics, who hates reading nonfiction, and doesn’t want to do math for a living, is probably a bad candidate for physics professor quite independent of their IQ. We are not random samples. IQ is correlated with many things. It is the main correlation for almost nothing, and with the benefit of in-depth knowledge, it’s far more useful to focus on all of the domain-specific things that will actually predict outcomes.

I would note, in addition to all the above points, that IQ at the low end predicts success well, BUT has a real problem with confounder effects from correlated issues. Almost any form of brain damage or mal-development will lower IQ, and will harm most measures of success in life–but it isn’t the low IQ necessarily.

The best-known and most-discussed issue of this sort is low-level lead poisoning. See this Kevin Drum piece. It lowers IQ, but it also seems to directly damage impulse control. So, measures of lots of impulsive behaviour–crime, unplanned pregnancy, just not showing up for work one day–will be correlated with IQ–but it’s the damage to impulse control, not to IQ, that drives them.

You’re talking about IQ as though it were a causal factor for why certain life outcomes occur, which is precisely one of the errors people worry about when they claim that the g-factor isn’t real (I’m thinking about nostalgebraist here). IQ consists exactly of what people score on IQ tests and the traits people with different scores have. In an environment where many people have lead poisoning (and there are no other confounding factors) people with lower IQ have poorer impulse control, and that’s just as much a real fact about IQ as any other trait correlated with IQ. If you try to separate some sort of real effect of IQ from “confounder effects” then you are no longer actually talking about IQ, you’re talking about a new invented concept which may or may not exist and may or may not have the properties you’re familiar with from IQ studies.

A couple of things. First, some terms: fluid intelligence is, roughly, how fast you pick new things up or how many unusual connections you can make. Crystallized intelligence is the things that you know, or know how to do. Your boss probably cares about your crystallized intelligence– can you effectively (diagnose a sleep disorder) (weld aluminum aircraft parts) (translate the manual into French) (install a water heater) (whatever)? But you get crystallized intelligence by combining a certain level of fluid intelligence with experience/practice/work over time, supplemented perhaps with certain “tricks” (checklists, reference materials, work habits, etc.)– it’s not a one-for-one comparison. Second, for every job/task/hobby, there is an effective “floor” for fluid intelligence.

Picture a graph of effort/practice (x-axis) and fluid intelligence/IQ (y-axis). As IQ declines, there is a steady increase in the amount of work needed to achieve the task, but the slope is relatively gentle. But there comes a point where the curve has a “knee”, and below that point the effort required rises dramatically and rapidly becomes, in effect, impossible regardless of how much work a person is willing to put into it. If you’re to the right of the knee, your IQ value won’t have nearly as much to do with your success as things like work ethic, agreeableness, “luck”, connections, etc.

Below that knee, though, no amount of “grit”, coaching, hard work, etc. will do much good. It’s cruel to people below that knee to try to force them to do the task, or to look down on them if they don’t achieve it, or assume if they worked harder they’d achieve it, or whatever.

Fluid intelligence, or IQ, is a tool that lets you do certain jobs. Nothing more, nothing less. If you have enough for the tasks/jobs you want to do, then it’s otherwise irrelevant. Don’t stress out about it. It’s not some sort of numerical representation of your value as a human being.

Topic #2: Related to the previous, there are a critical number of jobs in the modern economy which require a minimum IQ in the 110-ish range– accountants, medical techs, skilled mechanics, programmers, small business owners, etc.

There’s another, smaller tranche in the above-120-ish range– physicians, lawyers, engineers, etc. A society which doesn’t have enough of these people to actually cover all of the needed tasks pretty much can’t have a modern economy. (An IQ=80 person can learn to use a cell phone and benefit from it; but it takes a group of IQ=110 or higher people to install and troubleshoot the system, run the billing, write the apps, etc., plus a few IQ=120s to engineer it, design the handsets, etc. The IQ=80 people aren’t going to be able to do it, ever, regardless of how hard they work at it.)

Topic #3: Speaking from my own case: IQ, i.e. fluid intelligence, is not simply a static, inborn trait. Like physical capability, it must be exercised and challenged in order to develop to its full extent, at least until the brain reaches full development at age 24 or so. In my case, I was not challenged fully during high school or even undergraduate college. But after getting a BA and going into the work world, I ended up choosing to re-invent myself as an electrical engineer. I took the college STEM (and particularly, math) classes after work, first at undergraduate level, then graduate level, between ages 20-30. I could feel my brain changing as I did so. I could see things better, handle details better, manipulate objects in my mind with ease, work logic problems better, pick up languages easier, even do crosswords easier.

I could feel my brain changing in response to the math and engineering work. My SATs at age 16 had an implied IQ almost two standard deviations below my GREs (taken at 32, before starting my PhD). It was a little scary, even– like riding in a car that can smoothly accelerate twice as fast as when you started the trip. But a thrill too– I can do stuff now. Most people don’t put in those last 5-6 years of intellectual work, and so don’t ever “put the cherry on top” of their potential.

I’ve never had a problem with IQ being an arbitrary test that attempts to measure G and does reasonably well in aggregate. The problem is when people are eager to discriminate on these sorts of things.

And they will. Your great-grandparents had cancer? Now your health insurance is more expensive. You’re Black? Maybe you’ll get turned down on a rental application.

There are already plenty of suggestions to increase IQ through genetic engineering. Which is y’know eugenics. And generally bad for the gene pool, in the same way that the suggestions of removing all carnivorous animals from the gene pool (that EAs were floating) would be. Not to mention, politicians will somehow manage to get their hands on it, use IQ tests as a pre-requisite for voting, then game the test to favor establishment views.

Sometimes you need the low-IQ proletariat to say “y’know what, that’s a fucking retarded idea,” because somehow they often get it right better than us high-IQ anomalies.

It just means you shouldn’t use it on yourself. Statistics is what tells us that almost everybody feels stimulated on amphetamines. Reality is my patient who consistently goes to sleep every time she takes Adderall.

Experiment: What happens if you have her take melatonin some night to reinforce her body-clock setting, then take Adderall as soon as she gets up in the morning after a good night’s sleep, and use sunlight or screens or bright lights or cold or whatever to keep herself awake for at least a couple of hours? If she does this once (or for a week), will Adderall have a stimulant effect if she takes it at some other time of day?

Where the experiment comes from: These both actually align with my personal experience of taking concerta. Screens are the biggest factor in how consistent my sleep schedule is (using screens past midnight / until I go to bed will frequently shift my sleep schedule forward by about 4-6 hours in a single night). The second biggest factor is whether or not I take concerta at the same clock time that I did the past few days. The third biggest factor is whether or not I took significantly more melatonin than the previous few days before going to sleep (going from no melatonin to 15 mg can shift my sleep schedule backwards by 4-7 hours in one night). (Sunlight is also a pretty big factor, but these days I always sleep with an eyemask, so I don’t know how to describe its effect.) By combining these effects, I can prevent jetlag (take my concerta at 8am in the destination time zone the morning of my flight, use melatonin or screens if I need to sleep significantly earlier or later than normal on the plane, get 5-12 hours of sleep in transit).

Of particular note is that, if I shift my sleep schedule with screens, taking concerta at 8am every morning does not act as a stimulant, but instead pegs my body clock / sleep schedule to whatever I set it to with screens. I don’t know how this effect works (and I would like to), but my guess is that the default effect of amphetamines is to reset your body clock to “waking up”, but if you take them (enough? a couple of times?) when you’re sleepy enough that your body clock refuses to reset, your brain will somehow learn that body-clock setting as what it should reset to when you take Adderall. Is this a plausible hypothesis?

I have consistently scored around the 90th percentile in any brain test I’ve done. That’s the brain equivalent of being 6 foot 2. Normally tall.

So, it probably wouldn’t be wise to count on a career in the NBA. But beyond a few specific niche areas, like getting objects off high shelves without getting a ladder, I probably shouldn’t worry too much. I think that’s probably the biggest practical advantage of being 6 foot 2.

You don’t have to worry. Less nightmare fuel. Obviously being the tallest must bring its own social advantages. Must be inconvenient too. [Short people can still play amateur basketball if they enjoy it!]

One thing I’d like to suggest for people starting out is to look at testing by the. Broadly, they test your aptitudes for many different things.

For example, hand-eye coordination and memory for numbers. When done, they provide a list of occupations which use the aptitudes you score highly in, under the theory that people who use the aptitudes they are best at are happier in life. I personally had some interesting revelations as a result as well. I score very low (10th and 30th percentile) for memory for pitch and memory for rhythm.

But very high (90th percentile) for pitch discrimination. So despite enjoying singing, I realize that without a lot of work at it I’m not going to be very good, and I’ll be thoroughly disappointed during the process. For people facing that weird conundrum of stumbling over things that would normally be expected to be easy for them, all things considered, this kind of assessment might shed light on why, where to expect to excel, and where extra effort will be required.

Disclaimer: I am very stupid.

My IQ is declining because of aging, an unhealthy lifestyle, and probably some early-onset dementia. I make many typing errors and I ask you to forgive me for them. One thing I am interested in is whether it is possible to increase one’s IQ using some form of life-hack. I once hacked dual-n-back by finding a way to remember a long string of the presented numbers.

(I write them down on a mental screen, thus using visual processing for remembering. It immediately doubled my results.) There are other tricks which could increase one’s performance, like a technique called “innate literacy”, which is claimed to help with one’s grammar skills by using the visual memory of the correct word instead of the audio memory. I hope that fluid intelligence also turns out to be a learnable skill, but we just don’t know what it is like and how to teach it.

As far as we know, it’s not possible to increase someone’s fluid intelligence (at least in adults), despite the marketing claims. I doubt that you actually are stupid, though. As near as I can tell, your concern is not fluid intelligence per se, but rather whether there are techniques to help compensate for specific lost skills/abilities in the event of some sort of brain injury or aging. The news here is somewhat brighter: there are many techniques that can help mitigate the problems. For example, if I understand you correctly, you are saying that your short-term memory is not what it was– but that your visual memory seems to be either unaffected, or at least less affected.

As you’ve found out, a long-established mnemonic trick is to convert what you want to remember into a sequence of very vivid/startling visual images, then mentally “place” these in order in a location you’re extremely familiar with (the ancient Greeks imagined walking through their house, coming across all of these weird/shocking things in sequence as they moved through the building). For other people, auditory memory is better, and so they use rhymes, little songs/jingles, mnemonic phrases, etc. Another set of approaches revolves around systematically using props or aids: carrying a notebook around always, and getting in the habit of writing things down every time, for example. Calendars, clocks, and so on, to prompt you when it’s time to do something.

Having reference books easily available. Tech solutions (phone, Apple watch, etc.) might or might not help, depending on you. A third type of coping skill is to simplify your environment, so that there’s less remembering to do. That could be getting simpler stuff, or putting “cheat sheets” or sticky notes with things to cue you when you’re doing something, or doing things only one way (or in only one place, at only one time). The experts in this field are occupational therapists, and they have lots of experience helping people like you. I would suggest that you talk to one, as they’re most effective for you if they catch problems early.

The entire premise confuses me.

Having a high IQ is not the same as being An Intellectual (whatever that’s supposed to mean.) Nobody’s kicking you out of your academic or scientific field for failing to meet a number meant to measure overall cognitive potential as applied to generalized populations–if you understand the concepts, have genuine enthusiasm and put in the work necessary to get good at your area of expertise, you’re in. Academia isn’t the G&T program in a public K-12. If you were too stupid to pursue whatever it is you’re pursuing, you wouldn’t be interested in the first place.

My IQ is higher than my mother’s, for example. She’s a college professor. I live in a van writing abstruse rants on the internet.

The university has yet to oust her and hire me over a statistical measurement within which I’m closer to the far end than she is. I’m of the opinion that the Wechsler is crap. Scoring is too dependent on test-giver bias, and the comprehension subtest will artificially skew your score lower if you have a nonstandard moral code and a test-giver who is either simple-minded, operating under the assumption that you are a fruitcake, or both.

I actually think there’s a neat cultural angle here (just not in the usual way). As I was reading this post, my primary response was “duh, of course”, except for the correlation coefficients, which seemed crazy high to me. Then, when Scott pointed out that r=0.66 explains only 44% of the variance, I realized he was talking about r the whole time, while I had assumed he was talking about r^2.

I thought this because my primary use of and exposure to these correlations is in organismal biology, evolution, ecology, and other “whole animal” fields, where everyone always reports r^2, never just r. So maybe my “duh” reaction isn’t reflective of any intelligence, but rather of my immersion in a field which takes for granted that the HUGE levels of variability in things like an entire ecosystem make simple “X determines Y” statements laughable, and that the best we can ever do is say that X explains N% of Y. After all, it’s tough to reconcile “this trait, seen in a quarter of individuals, provides a survival advantage” with “this species has 99% mortality in the first year of life” without realizing the former statement is only a description of a statistical trend, rather than an iron-clad rule. Are there any other folks from similarly “hyper-variable” fields (for lack of a better term) for whom the idea of statistical correlation not defining individual destiny is just “obvious”? I get the impression from the comments that, while I’m not the only biologist here, there aren’t many others who work on non-humans at whole-organism scales.

I have an IQ of around 140 so SHUUUUT UUUP!
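The r versus r^2 distinction in the biology comment above can be checked numerically. This is just an illustrative sketch (the sample size and variable names are mine, not from any comment): for standard-normal x and noise, setting y = r·x + sqrt(1 − r^2)·noise produces a correlation of about r, and squaring the sample correlation recovers the “variance explained” figure, roughly 0.66^2 ≈ 44%.

```python
import random

random.seed(0)

# y = r*x + sqrt(1 - r^2)*noise gives corr(x, y) ~ r for standard normals,
# while r^2 is the share of y's variance "explained" by x.
r = 0.66
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [r * x + (1 - r**2) ** 0.5 * random.gauss(0, 1) for x in xs]

# Sample (Pearson) correlation, computed by hand
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
var_y = sum((y - my) ** 2 for y in ys) / n
r_hat = cov / (var_x * var_y) ** 0.5

print(f"sample r   ~ {r_hat:.3f}")      # close to 0.66
print(f"sample r^2 ~ {r_hat**2:.1%}")   # close to 44%, not 66%
```

The point the commenter makes falls out directly: a correlation that sounds strong (0.66) still leaves more than half the variance unexplained.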

OK, more seriously: I have two children. One has an IQ of 135.

What would you expect of such a child? He’s drumming his fingers through math class because he already knows it (and math’s not his thing).

He knows who Hobbes and Kierkegaard are. He can threaten to eat me up in two languages. The other, they estimate, has an IQ of 40. What would you expect of him?

He’s in 4th grade. We’re still working on toilet training. His longest sentences tend to be 2 words. He can dress himself with a lot of help. Sure, if they’re about 100 points apart, we’d expect such a gap.

And if they were 5 points apart, we wouldn’t be able to tell. So the question is not whether individual IQ scores mean something. The question is how big do the differences need to be before they mean something. Looks like it’s somewhere between 5 and 100 points. I think this changes the nature of the discussion.

It’s harder to go to extremes (IQ is worthless or supremely important) after looking at extreme scores.

An important thing that I haven’t seen mentioned here: many people take online “IQ tests”. If you happen to be one of them, let me tell you this: there is no such thing as a serious online IQ test.

Those are all fake. Yes, all of them. Yes, even those that say that they are serious. Yes, even those that pretend to be designed or approved by Mensa or whoever.

Yes, even IQTest.dk or whatever happens to be your favorite one; even if it is “inspired by Raven’s matrices”. If you took an online “IQ test” and you are worried about the result, stop right there. You didn’t take a real IQ test. You have no idea what your IQ is. What you are doing is like freaking out after reading a horoscope.

(I could start explaining why, but my previous experience suggests that this would be futile, because many people simply refuse to believe that someone would actually go so far as to create a fake online “IQ test” for fun and profit. The short version is that IQ measurement is inherently statistical, which means a test has to be calibrated on tons of people, who have to be randomly selected from the population. Doing that in a methodologically correct way is quite difficult and expensive. On the other hand, just making stuff up is very easy and cheap.)

Similarly: a news article today reports that the effectiveness of a flu shot in preventing the flu is no more than 3%, and often no greater than a placebo in some years.

This, too, shows how statistics that apply to populations say nothing useful about individuals. If flu mortality is 0.2% (1 in 500) and shot effectiveness is 3% and half the population gets shots, then shots will save nearly 50,000 US lives each year. The odds that it will save your life are less than 0.005% (1 in 20,000).

You are far better off just washing your hands more often. If false confidence in flu shots reduces hand washing by 20% and half of all flu viruses are transmitted by touch, the flu shots then are counter-effective and actually cause up to 10,000 deaths each year.
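The back-of-envelope figures above are easy to check mechanically. A minimal sketch, assuming a US population of about 330 million (a figure the comment doesn’t state): taking the stated rates at face value, the product comes out nearer 10,000 lives than 50,000, so the round numbers in the comment are best read as rough orders of magnitude that depend heavily on the assumed base rates.

```python
# Back-of-envelope check of the flu-shot arithmetic above.
# Assumption not stated in the comment: US population of ~330 million.
population = 330_000_000
flu_mortality = 0.002     # 0.2%, i.e. 1 in 500 (as stated above)
effectiveness = 0.03      # 3% shot effectiveness (as stated above)
vaccinated_share = 0.5    # half the population gets shots

# Expected lives saved per year = people vaccinated * P(death) * P(shot helps)
lives_saved = population * vaccinated_share * flu_mortality * effectiveness

# Per-person odds that the shot saves your life
odds = flu_mortality * effectiveness

print(f"expected lives saved: {lives_saved:,.0f}")     # ~9,900
print(f"odds a shot saves you: 1 in {1 / odds:,.0f}")  # ~1 in 17,000
```

Note the per-person figure does roughly match the comment’s “about 1 in 20,000”, which is the commenter’s point: a measure that matters at population scale can be nearly invisible at the individual level.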

Odds of a shot being effective improve for the elderly, and it would be nice to do a randomized, controlled, double blind study of that. Alas, under NIH rules, it would be unethical to do so. Meanwhile, flu vaccine manufacturing is a huge (although concentrated) industry and US health care providers can bill $4 billion each year just to give the shots. Argggh, matey. See what reading SlateStarCodex has done to me?