Everything Is Correlated

(gwern.net)

115 points | by gmays 8 hours ago

17 comments

  • simsla
    1 hour ago
    This relates to one of my biggest pet peeves.

    People interpret "statistically significant" to mean "notable"/"meaningful". I detected a difference, and statistics say that it matters. That's the wrong way to think about things.

    Significance testing only tells you how unlikely the measured difference would be if there were no real difference at all. In other words, with a certain degree of confidence, you can say "the difference exists as measured".

    Whether the measured difference is significant in the sense of "meaningful" is a value judgement that we / stakeholders should impose on top of that, usually based on the magnitude of the measured difference, not the statistical significance.

    It sounds obvious, but this is one of the most common fallacies I observe in industry and a lot of science.

    For example: "This intervention causes an uplift in [metric] with p<0.001. High statistical significance! The uplift: 0.000001%." Meaningful? Probably not.

    • V__
      3 minutes ago
      I really like this video [1] from 3blue1brown, where he proposes thinking of a significant result as something that merely updates a probability. One positive test (or, by analogy, one study) shifts the probability by some amount, so you nearly always need more tests (or studies) before a 'meaningful' judgment is possible.

      [1] https://www.youtube.com/watch?v=lG4VkPoG3ko
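
      As a rough sketch of that updating view (the prior, sensitivity, and false-positive rate are made-up illustrative numbers, not taken from the video):

        def update(prior, sensitivity, false_positive_rate):
            """Posterior P(effect is real | one positive study), via Bayes' rule."""
            p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
            return sensitivity * prior / p_positive

        belief = 0.10                          # hypothetical prior that the effect is real
        for i in range(1, 4):                  # three independent positive studies
            belief = update(belief, sensitivity=0.80, false_positive_rate=0.05)
            print(f"after study {i}: P(effect is real) = {belief:.2f}")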

  • senko
    4 hours ago
    The article missed the chance to include the quote from that standard compendium of information and wisdom, The Hitchhiker's Guide to the Galaxy:

    > Since every piece of matter in the Universe is in some way affected by every other piece of matter in the Universe, it is in theory possible to extrapolate the whole of creation — every sun, every planet, their orbits, their composition and their economic and social history from, say, one small piece of fairy cake.

    • sayamqazi
      3 hours ago
      Wouldn't you need the T_zero configuration of the universe for this to work?

      Given different T_zero configurations of matter and energy, T_current would be different, and there are many pathways that could lead to the same physical configuration (positions + energies etc.) with different (universe minus cake) configurations.

      Also, we are assuming there are no non-deterministic processes happening at all.

      • senko
        3 hours ago
        I am assuming integrating over all possible configurations would be a component of The Total Perspective Vortex.

        After all, Feynman showed this is in principle possible, even with local nondeterminism.

        (this being a text medium with a high probability of another commenter misunderstanding my intent, I must end this with a note that I am, of course, BSing :)

    • prox
      1 hour ago
      In Buddhism we have dependent origination: https://en.wikipedia.org/wiki/Prat%C4%ABtyasamutp%C4%81da
      • lioeters
        46 minutes ago
        Also the concept of implicate order, proposed by the theoretical physicist David Bohm.

        > Bohm employed the hologram as a means of characterising implicate order, noting that each region of a photographic plate in which a hologram is observable contains within it the whole three-dimensional image, which can be viewed from a range of perspectives.

        > That is, each region contains a whole and undivided image.

        > "There is the germ of a new notion of order here. This order is not to be understood solely in terms of a regular arrangement of objects (e.g., in rows) or as a regular arrangement of events (e.g., in a series). Rather, a total order is contained, in some implicit sense, in each region of space and time."

        > "Now, the word 'implicit' is based on the verb 'to implicate'. This means 'to fold inward' ... so we may be led to explore the notion that in some sense each region contains a total structure 'enfolded' within it."

  • apples_oranges
    3 hours ago
    People didn't always use statistics to discover truths about the world.

    Statistics, once developed, just happened to be a useful method. But given how often those methods are abused, and the proliferation of stupidity disguised as intelligence, it's always fitting to question them, this time with this observation about correlation noise.

    Logic and fundamental knowledge of the domain come first. Just counting things, without understanding them in at least one or two other ways, is a tempting invitation to misleading conclusions.

  • nathan_compton
    14 minutes ago
    Really classic "rationalist" style writing: a soup of correct observations about statistical phenomena with chunks of weird political bullshit thrown in here and there. For example: "On a more contemporary note, these theoretical & empirical considerations also throw doubt on concerns about ‘algorithmic bias’ or inferences drawing on ‘protected classes’: not drawing on them may not be desirable, possible, or even meaningful."

    This is such a bizarre sentence. The way it's tossed in, not explained in any way, not supported by references, etc. I guess the implication being made is something like "because there is a hidden latent variable that determines criminality and we can never escape from correlations with it, it's OK to use 'is_black' in our black-box model which decides whether someone gets parole"? Ridiculous. Does this really "throw doubt" on whether we should care about this?

    The concerns about how models work are deeper than the statistical challenges of creating or interpreting them. For one thing, all the degrees of freedom we include in our model selection process allow us to construct models which do anything that we want. If we see a parole model which includes "likes_hiphop" as an explanatory variable we ought to ask ourselves who decided that should be there and whether there was an agenda at play beyond "producing the best model possible."

    These concerns about everything being correlated actually warrant a much more careful understanding of the political ramifications of how and what we choose to model, and based on which variables, because they tell us that in almost any non-trivial case a model is necessarily, at least in part, a political object, almost certainly decorated, consciously or subconsciously, with some conception of how the world is or ought to be explained.

  • Evidlo
    3 hours ago
    This is such a massive article. I wish I had the ability to grind out treatises like that. Looking at other content on the guy's website, he must be like a machine.
    • kqr
      2 hours ago
      IIRC Gwern lives extremely frugally somewhere remote and is thus able to spend a lot of time on private research.
      • tux3
        2 hours ago
        IIRC people funded moving gwern to the bay not too long ago.
    • aswegs8
      42 minutes ago
      I wish I were even able to read things like that.
    • pas
      2 hours ago
      lots of time, many iterations, affinity for the hard questions, some expertise in research (and Haskell). oh, and also it helps if someone is funding your little endeavor :)
    • tmulc18
      3 hours ago
      gwern is goated
  • dang
    4 hours ago
    Related. Others?

    Everything Is Correlated - https://news.ycombinator.com/item?id=19797844 - May 2019 (53 comments)

    • stouset
      4 hours ago
      Correlated, you mean?
      • pnt12
        2 hours ago
        Those would be all articles posted on HN :)
  • cluckindan
    1 hour ago
    I wonder if this tendency to correlate truly holds for everything? Intuitively, it more or less suggests that nature tends to favor zero-sum games. Maybe analyzing correlations within the domain of theoretical physics would highlight true non-correlations in some particular approaches? (pun only slightly intended)
  • st-keller
    3 hours ago
    „This renders the meaning of significance-testing unclear; it is calculating precisely the odds of the data under scenarios known a priori to be false.“

    I cannot see the problem in that. To get to meaningful results we often calculate with simplified models, which are known to be false in a strict sense. We use Newton's laws, we analyze electrical networks based on simplifications, a bank year used to be 360 days! Works well.

    What did I miss?

    • thyristan
      2 hours ago
      There is a known maximum error introduced by those simplifications. Put the other way around, Einstein is a refinement of Newton. Special relativity converges towards Newtonian motion for low speeds.

      You didn't really miss anything. The article is incomplete, and wrongly suggests that something like "false" even exists in statistics. In reality, something is only false "with an x% probability of it actually being true nonetheless", meaning that you have to "statistic harder" if you want to get x down. Usually the best way to do that is to increase the number of tries/samples N. What the article gets completely wrong is that for sufficiently large N, you don't have to care anymore and might as well use false/true as absolutes, because you pass the threshold of "will happen once within the lifetime of a bazillion universes" or something.

      The problem is, of course, that lots and lots of statistics are done with a low N. The social sciences, medicine, and economics are necessarily always in the very-low-N range, and therefore always have problematic statistics. They try to "statistic harder" without being able to increase N, thereby just massaging their numbers enough to prove a desired conclusion, or they increase N a little and claim to have escaped the low-N problem.
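
      A quick Monte Carlo sketch of the low-N problem (the effect size of 0.3 and the simulation counts are made-up illustrative numbers): the chance that a standard two-sample t-test detects a real but modest effect collapses as N per group shrinks.

        import numpy as np
        from scipy import stats

        def power(n, effect=0.3, alpha=0.05, sims=2000, seed=0):
            """Fraction of simulated experiments (n per group) reaching p < alpha."""
            rng = np.random.default_rng(seed)
            hits = sum(
                stats.ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue < alpha
                for _ in range(sims)
            )
            return hits / sims

        for n in (10, 30, 100, 500):
            print(f"n = {n:4d} per group -> power ~ {power(n):.2f}")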

      • syntacticsalt
        1 hour ago
        A frequentist interpretation of inference assumes parameters have fixed but unknown values. In this paradigm, it is sensible to speak of the statement "this parameter's value is zero" as either true or false.

        I do not think it is accurate to portray the author as someone who does not understand asymptotic statistics.

    • PeterStuer
      1 hour ago
      Back when I wrote a loan repayment calculator, there were 47 different 'day count' conventions in common use (used in calculating payments for incomplete repayment periods; e.g. with monthly payments, what is the 1st-13th of Aug 2025 as a fraction of Aug 2025?).
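
      For instance, a minimal sketch of just two such conventions (the function names and the simplified 30/360 rule are my own illustration, not any particular standard's exact definition):

        from datetime import date

        def actual_fraction(start: date, end: date, days_in_month: int) -> float:
            """Fraction of the month elapsed, counted in actual calendar days."""
            return (end - start).days / days_in_month

        def thirty_360_fraction(start: date, end: date) -> float:
            """Fraction of a month under a simplified 30/360-style convention."""
            d1, d2 = min(start.day, 30), min(end.day, 30)
            return (d2 - d1) / 30

        start, end = date(2025, 8, 1), date(2025, 8, 13)
        print(actual_fraction(start, end, days_in_month=31))   # 12/31 ~ 0.387
        print(thirty_360_fraction(start, end))                 # 12/30 = 0.400
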
    • bjornsing
      2 hours ago
      The problem is basically that you can always buy a significant result with money (a large enough N always leads to a "significant" result). That's a serious issue if you see research as a pursuit of truth.
      • syntacticsalt
        1 hour ago
        Reporting effect size mitigates this problem. If the observed effect size is too small, its statistical significance isn't viewed as meaningful.
    • whyever
      2 hours ago
      It's a quantitative problem. How big is the error introduced by the simplification?
  • hshshshshsh
    53 minutes ago
    Doesn't "everything" mean all things that exist in the universe? And since they exist in the same universe, aren't they correlated?
  • 2rsf
    3 hours ago
    • ezomode
      1 hour ago
      Who should quote whom? The article is from 2014.
  • syntacticsalt
    2 hours ago
    I don't disagree with the title, but I'm left wondering what they want us to do about it beyond hinting at causal inference. I'd also be curious what the author thinks of minimum effect sizes (re: Implication 1) and noninferiority testing (re: Implication 2).
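
    For readers unfamiliar with the idea, a rough sketch of a minimum-effect-size test: instead of the exact-zero null, test whether the difference credibly exceeds a pre-registered threshold delta (the helper name and the conservative degrees of freedom are my own illustrative choices).

      import numpy as np
      from scipy import stats

      def exceeds_min_effect(x, y, delta):
          """One-sided Welch-style t-test of H0: mean(x) - mean(y) <= delta."""
          diff = np.mean(x) - np.mean(y)
          se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
          df = min(len(x), len(y)) - 1            # deliberately conservative df
          return diff, stats.t.sf((diff - delta) / se, df)

      rng = np.random.default_rng(0)
      x, y = rng.normal(10.2, 2, 5000), rng.normal(10.0, 2, 5000)
      # The ~0.2 difference is wildly "significant" against a zero null,
      # but it does not clear a minimum effect of 0.5, so the p-value stays large here.
      print(exceeds_min_effect(x, y, delta=0.5))
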
  • eisvogel
    4 hours ago
    It's just as I suspected - there are NO coincidences.
  • petters
    4 hours ago
    If two things both change over time, for example, they will be correlated. I think it's good to keep this article in mind.
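
    A tiny sketch of that (the slopes and noise levels are arbitrary made-up numbers): two unrelated series that each merely trend over time come out strongly correlated.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(200)
      a = 0.5 * t + rng.normal(scale=5, size=t.size)    # hypothetical metric A, trending up
      b = 2.0 * t + rng.normal(scale=20, size=t.size)   # unrelated metric B, also trending up

      print(np.corrcoef(a, b)[0, 1])   # close to 1, purely because of the shared trend
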
  • andsoitis
    4 hours ago
    there is but a single unfolding, and everything is part of it
  • 01HNNWZ0MV43FF
    2 hours ago
    > For example, while looking at biometric samples with up to thousands of observations, Karl Pearson declared that a result departing by more than 3 standard deviations is “definitely significant.”

    Wait. Sir Arthur Conan Doyle lived at basically the exact same time as this Karl Pearson.

    Is that why the Sherlock Holmes stories featured handwriting analysis so frequently? Was there just pop science going around at the time along the lines of "let's find correlations between anything and anything, and we can see that a criminal mastermind like Moriarty would certainly cross their T's this way and not that way"?

  • jongjong
    2 hours ago
    Also, I'm convinced that the reason humans intuitively struggle to figure out causality is that the vast majority of causes and effects are self-reinforcing cycles and go both ways. There was little evolutionary pressure for us to understand the concept of causality because it doesn't play a strong role in natural selection.

    For example, eat a lot and you will gain weight, gain weight and you will feel more hungry and will likely eat more.

    Or exercise more and it becomes easier to exercise.

    Earning money becomes easier as you have more money.

    Public speaking becomes easier as you do it more and the more you do it, the easier it becomes.

    Etc...

    • renox
      29 minutes ago
      > Or exercise more and it becomes easier to exercise.

      Only if you don't injure yourself while exercising.

    • ctenb
      2 hours ago
      > Public speaking becomes easier as you do it more and the more you do it, the easier it becomes.

      That's saying the same thing twice :)
