
Paul E. Meehl

From Wikipedia, the free encyclopedia


Paul E. Meehl
Born: Paul Everett Swedal, 3 January 1920
Died: 14 February 2003 (aged 83), Minneapolis, Minnesota, US
Alma mater: University of Minnesota
Known for: Minnesota Multiphasic Personality Inventory, genetics of schizophrenia, construct validity, clinical vs. statistical prediction, philosophy of science, taxometrics
Awards: National Academy of Sciences (1987), APA Award for Lifetime Contributions to Psychology (1996), James McKeen Cattell Fellow Award (1998), Bruno Klopfer Award (1979)
Scientific career
Fields: Psychology, philosophy of science
Institutions: University of Minnesota
Doctoral advisor: Starke R. Hathaway
Doctoral students: Harrison G. Gough, Dante Cicchetti, Donald R. Peterson, George Schlager Welsh
Website: meehl.umn.edu

Paul Everett Meehl (3 January 1920 – 14 February 2003) was an American clinical psychologist, Hathaway and Regents' Professor of Psychology at the University of Minnesota, and past president of the American Psychological Association.[1] A Review of General Psychology survey, published in 2002, ranked Meehl as the 74th most cited psychologist of the 20th century, in a tie with Eleanor J. Gibson.[2] Throughout his nearly 60-year career, Meehl made seminal contributions to psychology, including empirical studies and theoretical accounts of construct validity, schizophrenia etiology, psychological assessment, behavioral prediction, and philosophy of science.

Biography

Childhood

Paul Meehl was born January 3, 1920, in Minneapolis, Minnesota, to Otto and Blanche Swedal. His family name "Meehl" was his stepfather's.[3] When he was 16, his mother died as a result of poor medical care, which, according to Meehl, greatly undermined his faith in the expertise of medical practitioners and the diagnostic accuracy of clinicians.[3] After his mother's death, Meehl lived briefly with his stepfather, then with a neighborhood family for one year so he could finish high school. He then lived with his maternal grandparents, who lived near the University of Minnesota.

Education and academic career

Meehl started as an undergraduate at the University of Minnesota in March 1938.[3] He earned his bachelor's degree in 1941[4] with Donald G. Paterson as his advisor, and took his PhD in psychology at Minnesota under Starke R. Hathaway in 1945. Meehl's graduate student cohort at the time included Marian Breland Bailey, William K. Estes, Norman Guttman, William Schofield, and Kenneth MacCorquodale.[3] Upon taking his doctorate, Meehl immediately accepted a faculty position at the university, which he held throughout his career. He held appointments in psychology, law, psychiatry, neurology, and philosophy, and served as a fellow of the Minnesota Center for Philosophy of Science, which was founded by Herbert Feigl, Meehl, and Wilfrid Sellars.[3]

Meehl rose quickly to academic positions of prominence. He was chairman of the University of Minnesota Psychology Department at age 31, president of the Midwestern Psychological Association at age 34, recipient of the American Psychological Association's Award for Distinguished Scientific Contributions to Psychology at age 38, and president of that association at age 42. He was promoted to Regents' professor, the highest academic position at the University of Minnesota, in 1968. He received the Bruno Klopfer Distinguished Contributor Award in personality assessment in 1979, and was elected to the National Academy of Sciences in 1987.[3]

Meehl was not particularly religious during his upbringing,[3] but in adulthood, during the 1950s, he collaborated with a group of Lutheran theologians and psychologists to write What, Then, Is Man?.[5] The project was commissioned by the Lutheran Church–Missouri Synod through Concordia Seminary. It explored orthodox theology, psychological science, and how Christians (Lutherans in particular) could responsibly function as both Christians and psychologists without betraying orthodoxy or sound science and practice.

Later life and death

In 1995, Meehl was a signatory of a collective statement titled "Mainstream Science on Intelligence", written by Linda Gottfredson and published in the Wall Street Journal.[6] He died on February 14, 2003, at his home in Minneapolis of chronic myelomonocytic leukemia.[4] In 2005, Donald R. Peterson, a student of Meehl's, published a volume of their correspondence.[7]

Philosophy of science

Meehl founded, along with Herbert Feigl and Wilfrid Sellars, the Minnesota Center for the Philosophy of Science, and was a leading figure in philosophy of science as applied to psychology.[3]

Arguably Meehl's most important contributions to psychological research methodology were in legitimizing scientific claims about unobservable psychological processes. In the first half of the 20th century, psychology was dominated by operationism and behaviorism. As outlined in Bridgman's The Logic of Modern Physics, if two researchers had different operational definitions, they had different concepts; there was no "surplus meaning". If, for example, two researchers used different measures of "anomia" or "intelligence", they were, on this view, studying different concepts. Behaviorists focused on stimulus–response theories and were deeply skeptical of "unscientific" explanations in terms of unobservable psychological processes. Behaviorists and operationists would have rejected as unscientific any notion that there was some general thing called "intelligence" that existed inside a person's head and that might be reflected almost equivalently in Stanford-Binet IQ tests or Wechsler scales. Meehl changed that through two landmark papers.

In 1948, Kenneth MacCorquodale and Meehl introduced the distinction between "hypothetical construct" and "intervening variable".[8] "Naively, it would seem that there is a difference in logical status between constructs which involve the hypothesization of an entity, process, or event which is not itself observed, and constructs which do not involve such hypothesization."[9] An intervening variable is simply a mathematical combination of operations. If one speaks of the "expected value" of a gamble (probability of winning × payoff for winning), this is not hypothesizing any unobservable psychological process; expected value is simply a mathematical combination of observables. On the other hand, if one makes statements about the "attractiveness" of a gamble, and this attractiveness is neither observable nor perfectly captured by any single operational measure, one is invoking a "hypothetical construct": a theoretical term that is not itself observable or a direct function of observables. They used as examples Hull's rg (anticipatory goal response[10]), Allport's "biophysical traits", and Murray's "needs". "These constructs involve terms which are not wholly reducible to empirical terms; they refer to processes or entities that are not directly observed (although they need not be in principle unobservable)." Such constructs had "surplus meaning". Thus, good behaviorists and operationists should be comfortable with statements about intervening variables, but should have greater wariness of hypothetical constructs.

In 1955, Lee J. Cronbach and Meehl legitimized theory tests about unobservable, hypothetical constructs.[11] Constructs are unobservables, and they can be stable traits of individuals (e.g., "Need for Cognition") or temporary states (e.g., nonconscious goal activation). Previously, good behaviorists had deep skepticism about the legitimacy of psychological research on unobservable processes. Cronbach and Meehl introduced the concept of "construct" validity for cases in which there is no "gold standard" criterion for validating a test of a hypothetical construct; hence, any construct has "surplus meaning". Construct validity was distinguished from predictive validity, concurrent validity, and content validity. They also introduced the concept of the "nomological net": the network of associations among constructs and measures. Cronbach and Meehl argued that the meaning of a hypothetical construct is given by its relations to other variables in a nomological network. One tests a theory of relations among hypothetical constructs by showing that putative measures of these constructs relate to each other as implied by one's theory, as captured in the nomological network. This laid the groundwork for modern psychological testing and set the stage for the cognitive revolution in psychology, which focuses on the study of mental processes that are not directly observable.
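
The logic of such a test can be illustrated with a brief sketch (not drawn from Cronbach and Meehl; the construct, measure names, simulated data, and predicted sign pattern below are hypothetical): scores on putative measures are checked against the correlational pattern that a theory's nomological network would imply.

```python
# Minimal sketch of a nomological-network check on simulated data.
# All variable names and predicted signs are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated scores: one putative construct measure and two related measures.
need_for_cognition = rng.normal(size=n)
reading_time = 0.5 * need_for_cognition + rng.normal(size=n)   # theory predicts a positive relation
tv_hours = -0.3 * need_for_cognition + rng.normal(size=n)      # theory predicts a negative relation

predicted_signs = {"reading_time": +1, "tv_hours": -1}
observed = {"reading_time": reading_time, "tv_hours": tv_hours}

# The theory gains support to the extent that observed correlations match
# the sign pattern implied by the nomological network.
for name, sign in predicted_signs.items():
    r = np.corrcoef(need_for_cognition, observed[name])[0, 1]
    print(f"{name}: r = {r:+.2f}, matches theory: {np.sign(r) == sign}")
```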

After Karl Popper's The Logic of Scientific Discovery was published in English in 1959, Meehl counted himself a "Popperian" for a short time and later a "neo-Popperian philosophical eclectic",[3] still using the Popperian approach of conjectures and refutations[12][13] but without endorsing all of Popper's philosophy.[14] Meehl was a strident critic of the use of statistical null hypothesis testing to evaluate scientific theories. He believed that null hypothesis testing was partly responsible for the lack of progress in many of the "scientifically soft" areas of psychology (e.g., clinical, counseling, social, personality, and community psychology).[15][16]

Meehl's paradox

Meehl's paradox is that in the hard sciences more sophisticated and precise methods make it harder to claim support for one's theory. The opposite is true in soft sciences like the social sciences. Hard sciences like physics make exact point predictions and work by testing whether observed data falsify those predictions. With increased precision, one is better able to detect small deviations from the model's predictions and harder to claim support for the model. In contrast, softer social sciences make only directional predictions, not point predictions. Softer social sciences claim support when the direction of the observed effect matches predictions, rejecting only the null hypothesis of zero effect. Meehl argued that no treatment in the real world has zero effect. With sufficient sample size, therefore, one should almost always be able to reject the null hypothesis of zero effect. Researchers who guessed randomly at the sign of any small effect would have a 50–50 chance of finding confirmation with sufficiently large sample size.[17]
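
The arithmetic behind this argument can be shown with a small simulation (a sketch only; the effect size, sample size, and number of trials are arbitrary illustration values, not figures from Meehl): a tiny but nonzero true effect is almost always detected at conventional significance levels when samples are large, so a randomly guessed direction is "confirmed" about half the time.

```python
# Sketch of Meehl's point about directional predictions and the nil hypothesis.
# The tiny true effect, sample size, and trial count are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, trials = 0.05, 50_000, 200   # tiny but nonzero true effect, very large samples

confirmations = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)        # "no treatment in the real world has zero effect"
    t, p = stats.ttest_ind(treatment, control)
    guessed_sign = rng.choice([-1, 1])       # researcher guesses the direction at random
    if p < 0.05 and np.sign(t) == guessed_sign:
        confirmations += 1

# With samples this large the null of exactly zero effect is nearly always
# rejected, so a random directional guess is "confirmed" roughly half the time.
print(f"Proportion of directional 'confirmations': {confirmations / trials:.2f}")
```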

Minnesota Multiphasic Personality Inventory

Meehl was considered an authority on the development of psychological assessments using the Minnesota Multiphasic Personality Inventory (MMPI).[4][18] While Meehl did not directly develop the original MMPI items (he was a high school junior when Hathaway and McKinley created the item pool), he contributed widely to the literature on interpreting patterns of responses to MMPI questions.[3][19] In particular, Meehl argued that the MMPI could be used to understand personality profiles systematically associated with clinical outcomes, something he termed a statistical (versus a "clinical") approach to predicting behavior.[20][21]

Interactions and suppressors: the K scale

As part of his doctoral dissertation, Meehl worked with Hathaway to develop the K scale indicator of valid responding for the MMPI.[22] During initial clinical testing of the MMPI, a subset of individuals exhibiting clear signs of mental illness still produced normal personality profiles on the various clinical scales.[23][page needed] It was suspected that these individuals were demonstrating clinical defensiveness and presenting as asymptomatic and well-adjusted. Meehl and Hathaway employed a technique called "empirical criterion keying" to compare the responses of these defensive individuals with those of other individuals who were not suspected of experiencing mental illness and who also produced normal MMPI profiles. The empirical criterion keying approach selected items based on their ability to maximally discriminate between these groups; items were not selected based on theory or the face validity of their content. As a result, items on the resulting scale, termed the K (for "correction") scale, would be difficult to avoid for individuals attempting to present as well-adjusted when taking the MMPI. Individuals who endorsed the K scale items were thought to be demonstrating a sophisticated attempt to conceal information about their mental health history from test administrators. The K scale is an early example of a putative suppressor variable.
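
The selection logic can be sketched as follows (a hypothetical illustration with simulated data, not the MMPI item pool or Meehl and Hathaway's actual analysis): items are retained solely because their endorsement rates separate the criterion group from the comparison group, regardless of item content.

```python
# Minimal sketch of empirical criterion keying on simulated item responses.
# The response data and the selection threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_items, n_per_group = 30, 200

# Simulated true/false responses (1 = endorsed) with per-item endorsement
# probabilities that differ between the two groups.
p_defensive = rng.uniform(0.2, 0.8, n_items)
p_comparison = rng.uniform(0.2, 0.8, n_items)
defensive_group = rng.binomial(1, p_defensive, size=(n_per_group, n_items))
comparison_group = rng.binomial(1, p_comparison, size=(n_per_group, n_items))

# Key an item if its endorsement rate differs enough between the groups,
# regardless of what the item appears to be "about".
endorsement_diff = defensive_group.mean(axis=0) - comparison_group.mean(axis=0)
keyed_items = np.where(np.abs(endorsement_diff) > 0.15)[0]

print("Items retained for the scale:", keyed_items)
# A respondent's raw scale score would then simply be the number of keyed
# items answered in the keyed direction.
```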

The K scale is used as a complementary validity indicator to the L (for "lie") scale, whose items were selected based on the face validity of their content and are more obviously focused on impression management. The K scale has been popular among clinical psychologists and has been a useful tool for MMPI and MMPI-2 profile interpretation.[23][page needed] Meehl and Hathaway continued to conduct research using MMPI validity indicators and noticed that K scale elevations were associated with greater denial of symptoms on some clinical scales more than others.[22] To compensate for this, they developed a K scale correction factor aimed at offsetting the effects of defensive responding on other scales measuring psychopathology. Substantial subsequent research on the original MMPI clinical scales used these "K-corrected" scores, although research on the usefulness of the corrections has produced mixed results.[23][24][25] The most recent iteration of the K scale, developed for the MMPI-2-RF, is still used for psychological assessments in clinical, neuropsychological, and forensic contexts.[26]

Clinical versus statistical prediction

Meehl's proposal

Meehl's 1954 book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence analyzed the claim that mechanical (i.e., formal, algorithmic, actuarial) methods of data combination would outperform clinical (i.e., subjective, informal) methods to predict behavior.[27] Meehl argued that mechanical methods of prediction, when used correctly, make more efficient and reliable decisions about patient prognosis and treatment. His conclusions were controversial and have long conflicted with the prevailing consensus about psychiatric decision-making.[28]

Historically, mental health professionals have commonly made decisions based on their professional clinical judgment (i.e., combining clinical information "in their head" and arriving at a prediction about a patient).[29] Meehl theorized that clinicians would make more mistakes than a mechanical prediction tool created to combine clinical data and arrive at predictions.[27] In his view, mechanical prediction approaches need not exclude any type of data from being combined and could incorporate coded clinical impressions. Once the clinical information is quantified, Meehl argued, a mechanical approach yields exactly the same prediction from exactly the same data every time; clinical prediction offers no such guarantee.[27]
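
The sense in which mechanical combination is perfectly reliable can be illustrated with a minimal sketch (the predictors and weights below are invented for illustration and are not an actuarial formula Meehl proposed): once clinical information is coded numerically, a fixed formula returns the same prediction for the same input every time.

```python
# Sketch of a mechanical (actuarial) prediction rule with hypothetical
# predictors and weights; purely illustrative.
def actuarial_risk_score(prior_episodes: int, symptom_rating: float, employed: bool) -> float:
    """Fixed weighted sum of numerically coded clinical information."""
    return (0.40 * prior_episodes
            + 0.35 * symptom_rating
            - 0.25 * (1.0 if employed else 0.0))

patient = dict(prior_episodes=2, symptom_rating=1.5, employed=False)

# Identical input always yields an identical prediction (perfect reliability),
# a guarantee that informal clinical combination of the same facts cannot make.
assert actuarial_risk_score(**patient) == actuarial_risk_score(**patient)
print(actuarial_risk_score(**patient))
```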

Later research comparing clinical versus mechanical prediction

Meta-analyses comparing the accuracy of clinical and mechanical prediction have supported Meehl's (1954) conclusion that mechanical methods outperform clinical methods.[30][31] In response to objections, Meehl continued to defend algorithmic prediction throughout his career and proposed that clinicians should rarely deviate from mechanically derived conclusions.[32] To illustrate this, Meehl described a "broken leg" scenario in which a mechanical prediction indicates that an individual has a 90% chance of attending the movies. The "clinician", however, knows that the individual recently broke his leg, something not factored into the mechanical prediction, and can therefore confidently conclude that the mechanical prediction will be incorrect: the broken leg is objective evidence, ascertainable with high accuracy and highly correlated with staying home from the movies. Meehl argued, however, that mental health professionals rarely have access to countervailing information as clear as a broken leg, and therefore can rarely, if ever, appropriately disregard valid mechanical predictions.

Meehl argued that humans introduce biases when making decisions during clinical practice.[28][33] For example, clinicians may seek out information that supports their presuppositions, or miss or ignore information that challenges their views. Meehl also described how clinical judgment can be influenced by overconfidence or by anecdotal observations unsupported by empirical research. In contrast, mechanical prediction tools can be configured to use important clinical information and are not influenced by psychological biases. In support of this conclusion, Meehl and his colleagues found that clinicians still made less accurate decisions than mechanical formulas even when given access to those formulas to aid their decision-making.[33] Human biases have since become central to research in diverse fields, including behavioral economics and decision-making.

Schizophrenia

Paul Meehl's dominant schizogene theory of schizophrenia: proposed effects across the human organism and the environment. CNS = central nervous system. (Adapted from Meehl 1962, Meehl 1989b, Meehl 1990b.)

Meehl was elected president of the American Psychological Association in 1962. In his address to the annual convention, he presented his comprehensive theory about the genetic causes of schizophrenia.[34] This conflicted with the prevailing notion that schizophrenia was primarily the result of a person's childhood rearing environment.[4] Meehl argued schizophrenia should be considered a genetically based neurological disorder manifesting via complex interactions with personal and environmental factors. His reasoning was shaped by the writings of psychoanalyst Sandor Rado as well as the behavioral genetics findings at the time. He proposed that existing psychodynamic theory about schizophrenia could be meaningfully integrated into his neurobiological framework for the disorder.[35]

Dominant schizogene theory

Meehl hypothesized the existence of an autosomal dominant schizogene widespread throughout the population, which would function as a necessary, but not sufficient, condition for schizophrenia.[34][36][37] The schizogene would manifest on the cellular level throughout the central nervous system and should be observed as a functional control aberration called hypokrisia. Cells exhibiting hypokrisia should contribute to a characteristic pattern of impaired integrative signal processing across multiple neural circuits in the brain, which Meehl termed "schizotaxia". In response to typical rearing environments and social reinforcement schedules, this neural aberration should invariably lead to a collection of observable behavioral tendencies called "schizotypy". Schizotypy indicators would include neurological soft signs, subtle differences in language usage ("cognitive slippage"), and effects on personality and emotion. Meehl believed many people in society exhibit signs of schizotypy as a result of the schizogene without showing signs of schizophrenia. Schizophrenia would occur only when individuals also carry other non-specific genetic risk factors ("polygenic potentiators") relevant to traits such as anhedonia, ambivalence, and social fear. These additional traits would be more likely to be expressed under stress (e.g., trauma) and inconsistent social reinforcement schedules from parents. Given this combination of conditions, decompensation from schizotypy to schizophrenia would result.[citation needed]

Meehl's dominant schizogene theory had a substantial influence on subsequent research efforts.[38] His theorizing increased interest in longitudinal study of individuals at risk for psychosis and family members of people with schizophrenia who may be carrying the schizogene.[39] Meehl's descriptions of schizophrenia as largely a neurological phenomenon and schizotypy as a genetically based risk factor for schizophrenia have been supported.[40] However, researchers have not uncovered strong evidence for a single schizogene, and instead believe the genetic risk for schizophrenia is better explained by polygenic combinations of common variants and rare genetic mutations.[41][42]

Taxometrics

With the help of several colleagues, Meehl developed multiple statistical methods for identifying the presence of categorical groupings within biological or psychological variables.[4][43] Meehl was a critic of the checklist ("polythetic") structure used to categorize mental illnesses in diagnostic manuals such as the DSM-III.[44] Although many DSM-defined psychiatric syndromes can be reliably identified in clinical settings, Meehl argued that the categorical nature of mental illnesses assumed by these diagnoses (i.e., that a person is either sick or well) should be tested empirically rather than accepted at face value. Meehl advocated for a data-driven approach that could, in the words of Plato, "carve nature at its joints", and determine when it is most appropriate to conceptualize something as categorical or continuous/dimensional.[citation needed]

In his writings, Meehl advocated for the creation of a field called "taxometrics" to test for categorical groupings across diverse scientific disciplines.[44][45] Based on this approach, latent "taxons" would be conceptualized as causal factors leading to true differences in kind within a population. Taxons could include many types of biological and psychosocial phenomena, such as expression of an autosomal dominant gene (e.g., Huntington's disease), biological sex, or indoctrination into a highly homogeneous religious sect. Meehl envisioned applying taxometric approaches when the precise underlying latent causes are unknown and only observable "indicators" are available (e.g., psychiatric conditions). By mathematically examining patterns across these manifested indicators, Meehl proposed that converging evidence could be used to assess the plausibility of a true latent taxon while also estimating the base rate of that taxon.[citation needed]

Depiction of Coherent Cut Kinetics procedures for identifying a latent "taxon" with a 30% base rate.[44] The "hitmax" interval distinguishing between the two categorical groups is shown with vertical dotted lines.

Coherent Cut Kinetics and L-Mode

Coherent Cut Kinetics is the suite of statistical tools developed by Meehl and his colleagues to perform taxometric analysis.[46] "Cut Kinetics" refers to the mathematical operation of moving potential cut points across distributions of indicator variables to create subsamples using dichotomous splits; several metrics are then applied to assess whether the resulting patterns are best explained by a latent taxon. "Coherent" refers to the process of using multiple indicators and metrics together to make a converging case about the categorical or dimensional nature of the phenomenon being studied. Meehl played a role in developing the following taxometric procedures: MAMBAC,[47] MAXCOV,[48] MAXSLOPE,[49] MAXEIG,[46] and L-Mode.[46]
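
A simplified sketch in the spirit of MAMBAC ("mean above minus below a cut") illustrates the cut-moving idea on simulated data; it is not Meehl and Yonce's exact procedure, and the indicators, effect sizes, and base rate below are invented for illustration.

```python
# Simplified MAMBAC-style sketch on simulated taxonic data (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n, base_rate = 2000, 0.30

# Two indicators drawn from a taxonic mixture: taxon members score higher on both.
taxon_member = rng.random(n) < base_rate
x = rng.normal(0.0, 1.0, n) + 2.0 * taxon_member   # "input" indicator used for the cuts
y = rng.normal(0.0, 1.0, n) + 2.0 * taxon_member   # "output" indicator

# Sort cases on x, then slide a cut along the sorted sample and compute the
# mean of y above the cut minus the mean of y below it.
y_sorted = y[np.argsort(x)]
cuts = np.arange(50, n - 50, 25)
curve = np.array([y_sorted[c:].mean() - y_sorted[:c].mean() for c in cuts])

# Taxonic data tend to yield a peaked curve, whereas dimensional data tend to
# yield a flatter, dish-shaped one; the peak's position roughly reflects the
# taxon base rate.
peak = cuts[int(np.argmax(curve))]
print(f"Peak at sorted case {peak} of {n}; proportion above cut ≈ {1 - peak / n:.2f}")
```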

Application, influence, and criticism of taxometric methods

Taxometric analyses have contributed to a shift away from the use of diagnostic categories among mental health researchers.[50] In line with Meehl's theorizing, studies using taxometric methods have demonstrated how most psychiatric conditions are better conceptualized as being dimensional rather than categorical[51] (e.g., psychopathy,[52][53] posttraumatic stress disorder,[54] and clinical depression[55]). However, some possible exceptions have been identified such as a latent taxon representing the tendency to experience maladaptive dissociative states.[56] Since Meehl's death, factor mixture modeling has been proposed as an alternative to address the statistical weaknesses of his taxometric methods.[57]

Applied clinical views and work

Meehl practiced as a licensed and board-certified clinical psychologist throughout his career.[1] In 1958, Meehl performed psychoanalysis on Saul Bellow while Bellow was an instructor at the University of Minnesota.[58] He identified as "strongly psychodynamic in theoretical orientation", and used a combination of psychoanalysis and rational emotive therapy.[37]

"Why I Do Not Attend Case Conferences"

In 1973, Paul Meehl published the polemic "Why I Do Not Attend Case Conferences".[59] He discussed his avoidance of case conferences in mental health clinics, where individual patients, or "cases", are discussed at length by a team, often as a training exercise. Meehl found such case conferences boring and lacking in intellectual rigor. In contrast, he recalled numerous interesting and illuminating case conferences in internal medicine or neurology departments, which often centered on pathologists' reports and objective data about patients' pathophysiology. In other words, case conferences outside the mental health disciplines benefited from including objective evidence against which clinical expertise could be compared and contrasted. Meehl argued for creating a psychiatric analogue to the pathologist's report. He also outlined a proposed format for case conferences, beginning with an initial discussion of clinical observations and ending with the disclosure of a subset of patient data (e.g., psychological testing results) to compare with attendees' clinical inferences and proposed diagnoses.

Meehl also elaborated upon the issue of clinical versus statistical prediction and the known weakness of unstructured clinical decision-making during typical case conferences. He encouraged clinicians to be humble when collaborating about patient care and pushed for a higher scientific standard for clinical reasoning in mental health treatment settings.[59] Meehl directly identified several common deficiencies in reasoning that he had observed among his clinical colleagues, and to which he applied memorable names:

  • Barnum effect: Making a statement that is trivial and true of nearly all patients, but which is made as though it is important for the current patient.[21]
  • Sick-sick ("pathological set"): The tendency to generalize from one's own experience of health and ways of being, and to label others who differ from oneself as "sick".[citation needed]
  • Me too: The opposite of sick-sick. Imagining that "everyone does this" and thereby minimizing a symptom without assessing the probability of whether a mentally healthy person would actually do it. A variation of this is Uncle George's pancake fallacy. This minimizes a symptom through reference to a friend/relative who exhibited a similar symptom, thereby implying that it is normal.[4]
  • Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him". "So what if he thinks he's Napoleon?" There is a distinction between reality and delusion that is important to make when assessing a patient, and the consideration of comparative realities can mislead and distract from the importance of a patient's delusion to a diagnostic decision.[18] "If I think the moon is made of green cheese and you think it's a piece of rock, one of us must be wrong". For this reason, pointing out that the deviant cognitions of a delusional patient "seem real to him" is a waste of time. The statement "It is reality to him", which is philosophically either trivial or false, is also clinically misleading.[59]
  • Hidden decisions: Decisions based on factors that we do not own up to or challenge. An example is the placement of middle- and upper-class patients in therapy while lower-class patients are given medication. Meehl identified these decisions as related to an implicit ideal patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He argued that YAVIS patients are preferred by psychotherapists because they can pay for long-term treatment and are more enjoyable to interact with.[59]
  • The spun-glass theory of the mind: The belief that the human organism is so fragile that minor negative events, such as criticism, rejection, or failure, are bound to cause major trauma—essentially not giving humans, and sometimes patients, enough credit for their resilience and ability to recover.
  • Crummy criterion fallacy: Explaining away the technical findings of tests by appealing to inappropriate, "crummy" criteria that are observational rather than scientific, instead of incorporating the psychometric findings into the interview, history, and other material presented at case conferences.
  • Understanding it makes it normal: The act of normalizing or excusing a behavior just because one understands the cause or function of it, regardless of its normalcy or appropriateness.
  • Assumptions that content and dynamics explain why this person is abnormal: Those who seek psychological services have characteristics associated with being a patient/care-seeker, but also characteristics of being human. Meehl argued that it is problematic to attribute a patient's normative life dysfunction to their psychopathology. For example, no individual is maximally effective in all aspects of their life; this is true of non-patients and patients alike, and must be distinguished by the clinician from those aspects of the patient's life that are pathological and dysfunctional.
  • Identifying the softhearted with the softheaded: The belief that those who have sincere concern for the suffering (the softhearted) are the same as those who tend to be wrong in logical and empirical decisions (softheaded).
  • Ad hoc fallacy: Inventing after-the-fact explanations for evidence once the outcome is already known.
  • Doing it the hard way: Going about a task in a more difficult manner when an equivalent easier option exists; for example, in clinical psychology, using an unnecessary instrument or procedure that can be difficult and time-consuming while the same information can be ascertained through interviewing or interacting with the client.
  • Social scientists' anti-biology bias: Meehl argued that social scientists such as psychologists, sociologists, and psychiatrists tend to react negatively to biological contributors to abnormal behavior, and therefore tend to be anti-drug, anti-genetic, and anti-ECT.
  • Double standard of evidential morals: Requiring less evidence for one's own claims than one demands of others.

Selected works

References

  1. ^ a b "Curriculum Vitae | Paul E. Meehl". meehl.umn.edu. Retrieved 2019-01-02.
  2. ^ Haggbloom, Steven J.; Warnick, Renee; Warnick, Jason E.; Jones, Vinessa K.; Yarbrough, Gary L.; Russell, Tenea M.; Borecky, Chris M.; McGahhey, Reagan; Powell III, John L.; Beavers, Jamie; Monte, Emmanuelle (2002). "The 100 most eminent psychologists of the 20th century". Review of General Psychology. 6 (2): 139–152. CiteSeerX 10.1.1.586.1913. doi:10.1037/1089-2680.6.2.139. S2CID 145668721.
  3. ^ a b c d e f g h i j Meehl 1989a.
  4. ^ a b c d e f Goode, Erica (19 February 2003). "Paul Meehl, 83, an Example For Leaders of Psychotherapy". The New York Times. Retrieved 4 January 2017.
  5. ^ Meehl et al. 1958.
  6. ^ Gottfredson, Linda S. (1997). "Mainstream science on intelligence: an editorial with 52 signatories, history and bibliography" (PDF). Intelligence. 24 (1): 13–23. doi:10.1016/S0160-2896(97)90011-8.
  7. ^ Peterson 2005.
  8. ^ MacCorquodale & Meehl 1948.
  9. ^ MacCorquodale & Meehl 1948, pp. 95–96.
  10. ^ MacCorquodale & Meehl 1948, p. 100.
  11. ^ Cronbach & Meehl 1955.
  12. ^ Meehl 1983, p. 422.
  13. ^ Meehl 2016, pp. 357–361.
  14. ^ Waller et al. 2006, pp. 119–120, 155, 159, 419, 431, 439.
  15. ^ Meehl 1978.
  16. ^ Waller et al. 2006, pp. 5–7.
  17. ^ Meehl 1967.
  18. ^ a b Konnikova, Maria (May 1, 2013). "The perils of hindsight judgment". Scientific American Blog Network. Retrieved 2018-02-15.
  19. ^ Johnson, John A. (February 8, 2014). "Paul E. Meehl: smartest psychologist of the 20th century?". Psychology Today blogs. Retrieved 2018-02-14.
  20. ^ Hathaway & Meehl 1951.
  21. ^ a b Meehl 1956a.
  22. ^ a b Meehl & Hathaway 1946.
  23. ^ a b c Graham, John R. (2012). MMPI-2: assessing personality and psychopathology (5th ed.). Oxford; New York: Oxford University Press. ISBN 9780195378924. OCLC 683593538.
  24. ^ Hsu, Louis M. (1986). "Implications of differences in elevations of K-corrected and non-K-corrected MMPI T scores". Journal of Consulting and Clinical Psychology. 54 (4): 552–557. doi:10.1037/0022-006x.54.4.552. ISSN 1939-2117. PMID 3745611.
  25. ^ McCrae, Robert R.; Costa, Paul T.; Dahlstrom, W. Grant; Barefoot, John C.; Siegler, Ilene C.; Williams, Redford B. (1989). "A caution on the use of the MMPI K-correction in research on psychosomatic medicine". Psychosomatic Medicine. 51 (1): 58–65. CiteSeerX 10.1.1.551.6918. doi:10.1097/00006842-198901000-00006. ISSN 0033-3174. PMID 2928461. S2CID 985409.
  26. ^ Ben-Porath, Yossef S. (2012). Interpreting the MMPI-2-RF. Minnesota: University of Minnesota Press. ISBN 9780816669660. OCLC 745304242.
  27. ^ a b c Meehl 1954.
  28. ^ a b Meehl 1986.
  29. ^ Vrieze, Scott I.; Grove, William M. (2009). "Survey on the use of clinical and mechanical prediction methods in clinical psychology". Professional Psychology: Research and Practice. 40 (5): 525–531. doi:10.1037/a0014693. ISSN 1939-1323.
  30. ^ Grove, William M.; Zald, David H.; Lebow, Boyd S.; Snitz, Beth E.; Nelson, Chad (2000). "Clinical versus mechanical prediction: a meta-analysis". Psychological Assessment. 12 (1): 19–30. doi:10.1037/1040-3590.12.1.19. PMID 10752360.
  31. ^ Ægisdóttir, Stefanía; White, Michael J.; Spengler, Paul M.; Maugherman, Alan S.; Anderson, Linda A.; Cook, Robert S.; Nichols, Cassandra N.; Lampropoulos, Georgios K.; Walker, Blain S.; Cohen, Genna (May 2006). "The meta-analysis of clinical judgment project: fifty-six years of accumulated research on clinical versus statistical prediction". The Counseling Psychologist. 34 (3): 341–382. doi:10.1177/0011000005285875. ISSN 0011-0000. S2CID 145150890.
  32. ^ Meehl 1957.
  33. ^ a b Dawes, Faust & Meehl 1989.
  34. ^ a b Meehl 1962.
  35. ^ Meehl 1972.
  36. ^ Meehl 1989b.
  37. ^ a b Meehl 1990b.
  38. ^ Lilienfeld, Scott O.; Waller, Niels G. (2006). "A great pioneer of clinical science remembered: Introduction to the special issue in honor of Paul E. Meehl". Journal of Clinical Psychology. 62 (6): 1201–7. doi:10.1002/jclp.20253. ISSN 0021-9762. PMID 16041777.
  39. ^ Lenzenweger, Mark F. (1993). "Explorations in schizotypy and the psychometric high-risk paradigm". Progress in Experimental Personality & Psychopathology Research. 16: 66–116. ISSN 1056-7151. PMID 8293084.
  40. ^ Barrantes-Vidal, Neus; Grant, Phillip; Kwapil, Thomas R. (2015). "The role of schizotypy in the study of the etiology of schizophrenia spectrum disorders". Schizophrenia Bulletin. 41 Suppl 2 (Suppl 2): S408–416. doi:10.1093/schbul/sbu191. ISSN 1745-1701. PMC 4373635. PMID 25810055.
  41. ^ The International Schizophrenia Consortium (2009). "Common polygenic variation contributes to risk of schizophrenia and bipolar disorder". Nature. 460 (7256): 748–752. Bibcode:2009Natur.460..748P. doi:10.1038/nature08185. ISSN 1476-4687. PMC 3912837. PMID 19571811.
  42. ^ Sebat, Jonathan; Levy, Deborah L.; McCarthy, Shane E. (2009). "Rare structural variants in schizophrenia: one disorder, multiple mutations; one mutation, multiple disorders". Trends in Genetics. 25 (12): 528–535. doi:10.1016/j.tig.2009.10.004. ISSN 0168-9525. PMC 3351381. PMID 19883952.
  43. ^ "Taxometrics using Coherent Cut Kinetics | Paul E. Meehl". meehl.umn.edu. Retrieved 2018-02-15.
  44. ^ a b c Meehl 1995.
  45. ^ Meehl 2004.
  46. ^ a b c Waller & Meehl 1998.
  47. ^ Meehl & Yonce 1994.
  48. ^ Meehl & Yonce 1996.
  49. ^ Grove, William M. (2004). "The MAXSLOPE taxometric procedure: mathematical derivation, parameter estimation, consistency tests". Psychological Reports. 95 (6): 517–550. doi:10.2466/pr0.95.6.517-550. ISSN 0033-2941. PMID 15587219.
  50. ^ Schmidt, Norman B.; Kotov, Roman; Joiner, Thomas E. (2004). Taxometrics: toward a new diagnostic scheme for psychopathology. Washington, DC: American Psychological Association. doi:10.1037/10810-000. ISBN 9781591471424. OCLC 54029315.
  51. ^ Haslam, Nick; McGrath, Melanie J.; Viechtbauer, Wolfgang; Kuppens, Peter (2020-06-04). "Dimensions over categories: a meta-analysis of taxometric research". Psychological Medicine. 50 (9): 1418–1432. doi:10.1017/S003329172000183X. ISSN 1469-8978. PMID 32493520. S2CID 219316193.
  52. ^ Edens, John F.; Marcus, David K.; Lilienfeld, Scott O.; Poythress, Norman G. (February 2006). "Psychopathic, not psychopath: Taxometric evidence for the dimensional structure of psychopathy". Journal of Abnormal Psychology. 115 (1): 131–144. doi:10.1037/0021-843x.115.1.131. ISSN 1939-1846. PMID 16492104. S2CID 19223010.
  53. ^ Marcus, David K.; John, Siji L.; Edens, John F. (2004). "A taxometric analysis of psychopathic personality". Journal of Abnormal Psychology. 113 (4): 626–635. doi:10.1037/0021-843x.113.4.626. ISSN 1939-1846. PMID 15535794.
  54. ^ Ruscio, Ayelet Meron; Ruscio, John; Keane, Terence M. (2002). "The latent structure of posttraumatic stress disorder: A taxometric investigation of reactions to extreme stress". Journal of Abnormal Psychology. 111 (2): 290–301. CiteSeerX 10.1.1.462.153. doi:10.1037/0021-843X.111.2.290. ISSN 1939-1846. PMID 12003450.
  55. ^ Ruscio, John; Ruscio, Ayelet Meron (2000). "Informing the continuity controversy: A taxometric analysis of depression". Journal of Abnormal Psychology. 109 (3): 473–487. CiteSeerX 10.1.1.718.9936. doi:10.1037/0021-843X.109.3.473. ISSN 1939-1846. PMID 11016117.
  56. ^ Waller, Niels G.; Ross, Colin A. (November 1997). "The prevalence and biometric structure of pathological dissociation in the general population: Taxometric and behavior genetic findings". Journal of Abnormal Psychology. 106 (4): 499–510. doi:10.1037/0021-843x.106.4.499. ISSN 1939-1846. PMID 9358680.
  57. ^ Lubke, Gitta; Tueller, Stephen (2010-10-06). "Latent class detection and class assignment: a comparison of the MAXEIG taxometric procedure and factor mixture modeling approaches". Structural Equation Modeling. 17 (4): 605–628. doi:10.1080/10705511.2010.510050. ISSN 1070-5511. PMC 3955757. PMID 24648712.
  58. ^ Menand, Louis (May 11, 2015). "Young Saul". The New Yorker. Retrieved October 18, 2016.
  59. ^ a b c d Meehl 1973a, pp. 225–302.