To take a peek at the talk page and edit history to see if the issue I'm about to revise has already been explored, especially if it's in a subject I don't usually edit.
A few metrics to estimate how much weight to give a reference. Especially helpful when used in context (i.e. when comparing journals in the same field of study); a toy calculation in R follows this list:
H-index: an author-level metric that measures both the productivity and the citation impact of an author's publications. [2]
Impact factor: In general, I try to stick to an IF > 2 though this is not canon. These can usually be found with a simple Google search. I try to match the year of the IF to the year of the article I'm referencing.
CiteScore: a metric based on Scopus data for all indexed documents, including articles, letters, editorials and reviews. It's calculated by dividing the number of citations to all indexed documents within the journal by the number of those documents.
SCImago Journal Rank: a weighted metric that takes into account both the number of citations a journal receives and the prestige of the journals from which those citations came. An SJR > 1.0 is above average.
Source Normalized Impact per Paper: weights citations based on the total number of citations in a subject field to provide a contextual, subject-specific metric. A SNIP over 1.0 is good.
The article should be indexed on PubMed, Google Scholar, Scopus, Web of Science, Embase (institutional access only) or another reputable journal index, with a Digital Object Identifier (DOI).
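To make the arithmetic behind a couple of these metrics concrete, here is a minimal base-R sketch using entirely made-up numbers (the citation counts and journal totals below are hypothetical, not taken from any real author or journal):

```r
# Hypothetical citation counts for one author's papers
citations <- c(45, 30, 22, 15, 9, 7, 4, 2, 1, 0)

# h-index: the largest h such that h papers each have at least h citations
h_index <- sum(sort(citations, decreasing = TRUE) >= seq_along(citations))
h_index   # 6 for these made-up numbers

# CiteScore-style ratio: citations in a window divided by documents in that window
journal_citations <- 1200   # hypothetical citations received over the window
journal_documents <- 400    # hypothetical citable documents published in the window
journal_citations / journal_documents   # 3.0
```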
The evidence hierarchy — to help me see the forest for the trees. If I drop too far below the apex, I can make 1=2.
How to perform a meta-analysis: [3] - not because we do meta-analyses here, but because understanding how something is done allows one to recognize when something is well done (or done poorly).
The plural of anecdote is not evidence and correlation does not equal causation.
Statistical methods to detect publication bias:
The following might sound like esoteric egghead nonsense, but it's really not. After working through the math a few times, it becomes much more intuitive.
Just knowing a few basic statistical tests and being familiar with a program like R (which is at the level of TurboTax as far as the complexity you need from it) is incredibly helpful. Learning it was akin to using a review checker on an online store — I suddenly realized how many "fake reviews" were out there.
Tilman Davies' The Book of R - a quick read that helped develop my data science skills further and got me to about the level of a high school statistics class. This is well past the majority of professional medical researchers.
Funnel-plot-based methods include visual examination, regression and rank tests, and the nonparametric trim-and-fill.
A small fail-safe N or an asymmetric funnel plot suggests bias due to suppressed research.
Begg's rank test and Egger's regression can be used alongside the funnel plot. Begg's examines the correlation between effect sizes and their corresponding sampling variances; a strong correlation implies publication bias. Egger's regresses standardized effect sizes on their precisions; in the absence of publication bias, the regression intercept is expected to be zero. The weighted regression is popular in meta-analyses because it directly links effect size to its standard error without requiring the standardization step. Both tests are implemented in R's metafor package; see the sketch after the heterogeneity items below.
Selection models: use weight functions to adjust the overall effect size estimate and are usually employed as sensitivity analyses to assess the potential impact of publication bias.
Cochran's Q test is the traditional test for heterogeneity in meta-analyses. Based on a chi-square distribution, it generates a test statistic that, when large relative to its degrees of freedom (i.e. when its p-value is small), indicates variation across studies beyond what is expected within a single study.
Higgins & Thompson's I2 index is a more recent approach to quantify heterogeneity in meta-analyses. I2 provides an estimate of the percentage of variability in results across studies that is due to real differences and not due to chance. It is computed from Cochran's Q by dividing the difference between Q and its degrees of freedom by Q itself: I2 = 100% × (Q − df)/Q. When I2 is 0%, variability can be explained by chance alone. If I2 is 20%, this would mean that 20% of the observed variation in treatment effects cannot be attributed to chance alone.
τ2 is the between-study variance in our meta-analysis: an estimate of the variance of the underlying distribution of true effect sizes. The closer to zero, the less variability between studies.
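A rough sketch of how these checks can be run with the metafor package in R. It uses the dat.bcg example data bundled with the package rather than anything from a real review, and the calls below cover the funnel plot, Egger's and Begg's tests, trim-and-fill, fail-safe N, and the Q/I2/τ2 output; treat it as a starting point, not a recipe.

```r
library(metafor)

# Compute log relative risks from the bundled BCG trial data (illustrative only)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

res <- rma(yi, vi, data = dat)   # random-effects model
summary(res)                     # reports Cochran's Q, I^2 and tau^2
confint(res)                     # confidence intervals for tau^2 and I^2

funnel(res)                      # visual check for asymmetry
regtest(res)                     # Egger's regression test
ranktest(res)                    # Begg's rank correlation test
trimfill(res)                    # nonparametric trim-and-fill
fsn(yi, vi, data = dat)          # Rosenthal fail-safe N
```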
When single-arm studies constitute the majority of the evidence, traditional network meta-analysis is not as helpful because you don't have any common comparators.
Grey literature - unpublished or non-indexed trials from specific authors. If you have access to a large institutional library, many have local archives that are not indexed online. The university librarian should be able to help.
Look at edit patterns over time from naked IP addresses or hyper-niche editors/researchers.
Keep an ear out for marketing campaigns, public events, brigading from competitors and web traffic patterns. Much web data is public knowledge, though some is more difficult to access or restricted to paid services. This is a very complicated but important topic.
Review not just declarations at the end of the article but the authors' online resumés, research histories, grants and paid lectures.
NIH-funded studies are preferred but can still have serious issues. Money, ego and prestige are insidious.
Retraction Watch - maintains a list of the scientists with the most retracted papers, whether due to p-hacking, poor statistical methods or outright fabrication of data. The list can be accessed here: [5]
I usually start out by looking at figures, diagrams and tables and carefully reading the captions, because pictures are easier for my reptile brain to digest. If a table or figure is displayed sideways in the PDF, don't scroll past; click the rotate button three times and read it. If the authors thought it was important enough to disrupt the flow of their paper, it's important enough to look at.
I then read the first and last sentence of the introduction and the conclusion, and try to guess what the methods and results will look like. If the middle doesn't match what I was anticipating based on the outside, either I didn't understand something or the paper drew an erroneous conclusion. I focus on the parts that don't match my expectations.
These two steps by themselves land me light years ahead of where I would have been just reading the abstract. It can be overwhelming at first, but gets easier and can be done relatively quickly with practice.
Bayesian analyses > frequentist inferences. The former is a deductive probability, the latter inductive and binary. Frequentist statistics may contradict Bayesian analyses because, with Bayesian methods, parameters are random variables and the researcher can subjectively set whatever priors and parameters they like (i.e. quality of life and clinical improvement can have very nuanced meanings; this fine-print subjectivity can be a loophole or blind spot, depending on a researcher's intent). It's very difficult to understand healthcare research in the modern era without at least a rough understanding of these two statistical philosophies.
However, Bayesian analyses typically rely on Markov chain Monte Carlo modeling (which frequentist inference does not), and when paired with Gibbs sampling they are less likely to be thrown off by small sample sizes.
The reality is that it's not an either/or: combined Bayesian + frequentist analyses are better than either alone, with the truth often living where they meet. A toy comparison is sketched below.
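A minimal base-R comparison of the two philosophies on made-up data (18 responders out of 30 hypothetical patients): an exact binomial test on one side, and a flat-prior Beta posterior on the other. The numbers and the 50% null are assumptions for illustration only.

```r
x <- 18; n <- 30   # hypothetical: 18 responders out of 30 patients

# Frequentist: exact binomial test against a null response rate of 50%
binom.test(x, n, p = 0.5)

# Bayesian: with a flat Beta(1, 1) prior the posterior is Beta(x + 1, n - x + 1)
post_mean <- (x + 1) / (n + 2)
cred_int  <- qbeta(c(0.025, 0.975), x + 1, n - x + 1)    # 95% credible interval
post_mean; cred_int
pbeta(0.5, x + 1, n - x + 1, lower.tail = FALSE)         # Pr(response rate > 50% | data)
```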
Overadjustment bias: be wary of conclusions that emerge or disappear only after correction for confounding variables, because the "confounder" may actually sit on the causal path. Cox proportional hazards models, in particular, are susceptible.
As an example: incorrectly adjusting for blood pressure while studying the relationship between obesity and kidney failure. Obesity causes high blood pressure, which is its mechanism for destroying your kidneys; correcting for hypertension obscures the mechanism and causes a Type II error. The same move can be inverted to cause Type I errors. Such mistakes induce bias instead of preventing it (see the simulation sketch after the next item).
Cox models also try to force data into linearity and falter with J- or U-shaped relationships.
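A quick simulation sketch of the obesity example above, with entirely made-up coefficients, showing how adjusting for the mediator (blood pressure) makes a real effect vanish:

```r
set.seed(42)
n       <- 5000
obesity <- rbinom(n, 1, 0.3)
sbp     <- 120 + 15 * obesity + rnorm(n, 0, 10)   # obesity raises blood pressure (hypothetical)
p_fail  <- plogis(-8 + 0.05 * sbp)                # kidney failure driven by blood pressure
failure <- rbinom(n, 1, p_fail)

coef(glm(failure ~ obesity,       family = binomial))  # total effect: clearly positive
coef(glm(failure ~ obesity + sbp, family = binomial))  # "adjusted": obesity effect shrinks to ~0
```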
Distribution of p-values in meta-analyses to distinguish Monte Carlo type approaches from p-hacking. The Monte Carlo method is like trying to describe the shape of a sculpture while blindfolded; p-hacking is throwing darts at a wall and drawing bullseyes around wherever they land. The former is the scientific method, the latter a breach of ethics.
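A small simulated sketch of what that distribution looks like: among significant results, p-values from a genuine effect pile up near zero, while a flat distribution (or one bunched just under 0.05) across a literature is a warning sign. Sample sizes and the effect size of 0.5 are arbitrary choices for illustration.

```r
set.seed(7)
p_null <- replicate(5000, t.test(rnorm(30), rnorm(30))$p.value)              # no true effect
p_real <- replicate(5000, t.test(rnorm(30, mean = 0.5), rnorm(30))$p.value)  # true effect

hist(p_null[p_null < 0.05], breaks = 10, main = "No true effect")  # roughly flat
hist(p_real[p_real < 0.05], breaks = 10, main = "True effect")     # mass piled up near zero
```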
Hedges' g and Cohen's d are methods to calculate effect size, a measure of how much one group differs from another. Hedges' g is more appropriate than Cohen's d for meta-analyses with small sample sizes (<20). Both can be calculated with Comprehensive Meta-Analysis software.
Rule of thumb for effect sizes: Small effect (cannot be discerned by the naked eye) = 0.2, Medium Effect = 0.5, Large Effect (seen with the naked eye) = 0.8
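For the arithmetic, here is a hand calculation in base R on two hypothetical groups, applying Hedges' small-sample correction factor to Cohen's d; the data are invented, and this only mirrors what dedicated software would report.

```r
g1 <- c(5.1, 6.3, 4.8, 5.9, 6.0, 5.5, 4.9, 6.2)   # hypothetical group 1
g2 <- c(4.2, 5.0, 4.6, 4.1, 5.3, 4.4, 4.8, 4.0)   # hypothetical group 2
n1 <- length(g1); n2 <- length(g2)

sp <- sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))  # pooled SD
d  <- (mean(g1) - mean(g2)) / sp                                       # Cohen's d
J  <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)                                  # small-sample correction
g  <- J * d                                                            # Hedges' g
c(d = d, g = g)
```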
Trial sequential analysis: a recent cumulative meta-analysis method used to control Type I and Type II errors and to estimate when the accumulated effect is large enough that further studies are unlikely to change it.
Summary analyses, likelihood of publication bias and heterogeneity tests can be computed using the metafor package for R. It's a simple program with an awkward name – more useful than heat vision in a dark jungle.
If an article I want to read is behind a paywall, I e-mail the author a kind note to ask for a copy. This can be done automatically through ResearchHub if one has access, or through the contact listed in the article. It usually works, especially if I pack in a compliment or two. Researchers are like plants: they flourish with attention (at least to their work).
Images need to be CC BY or CC BY-SA. NC- and ND-licensed images can be uploaded to NC Commons.
Journal lists:
Abridged Index Medicus — a list of 114 journals that are generally gold standard. Another is the 2003 Brandon/Hill list, which includes 141 journals, though it is no longer maintained.
Beall's list — a compilation of problematic journals, discussed comprehensively here: [6] It has not been updated in some time and has its limitations, but it is still a phenomenal open-source candle in the dark. Be cautious of hijacked and vanity "journals". MDPI, Frontiers and Hindawi are some of the more frequent offenders.
CiteWatch — Wikipedia's homage to Beall; an excellent resource that is updated twice monthly.
Cabells' Predatory Reports — the successor to Beall's; a comprehensive multidisciplinary update. Unfortunately it is a paid subscription service available only to institutions, not individual researchers - [7]
All heuristics are equal, but availability is more equal than others.
The One begets the Two. The Two begets the Three, and the Three begets the 10,000 things.
In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.
Arguing with an idiot is like playing chess with a pigeon. It's just going to knock the pieces over, shit on the board, and then strut around like it won.
It is difficult to get a man to understand something when his salary depends on his not understanding it.
People would rather believe a simple lie than the complex truth.
The popularity of a scale rarely equates to its validity.
True humility is not thinking less of yourself. It is thinking of yourself less.
I never gave away anything without wishing I had kept it; nor kept it without wishing I had given it away.
When once a man is launched on such an adventure as this, he must bid farewell to hopes and fears, otherwise death or deliverance will both come too late to save his honour and his reason!
In this world, Elwood, you must be oh so smart or oh so pleasant. Well, for years I was smart; I recommend pleasant. And you may quote me.
Frank Sinatra saved my life once. He said, "Okay, boys. That's enough."
If you want to go fast, go alone. If you want to go far, go together.