Thursday, May 31, 2018

Interesting quote from "King City"

Hi - I'm reading "King City" by Lee Goldberg and wanted to share this quote with you.

""Want some apple pie? It's my momma's recipe." "I'm told it's better than sex." "I think we ought to do a comparison," she said. "While the taste is still fresh in our mouths." She stuck a fork into the pie, carved out a bite, and ate it."

Start reading it for free: http://a.co/2hHbWEO
--------
Download Kindle for Android, iOS, PC, Mac and more
http://amzn.to/1r0LubW


_- Steve

Wednesday, May 30, 2018

Can a Genetic Test Find Your Intelligence in Your DNA? - The Atlantic


Genetic Intelligence Tests Are Next to Worthless

And not just because one said I was below average.

The Atlantic

Students in uniform take an exam in a classroom using pencil and paper. Photograph: Karim Kadim / AP

In 2016, I got my genome sequenced while I was working on a book about heredity. Some scientists kindly pointed out some of the interesting features of my genetic landscape. And then they showed me how to navigate the data on my own. Ever since, I've been a genomic wayfarer. Whenever I come across some new insight into the links between our genes and our lives, I check my own DNA. One day I'm inspecting a mutation that raises my risk of skin cancer. The next I'm discovering I have a variant for smooth teeth.

I often consult a website called DNA.Land, run by a team of scientists affiliated with the New York Genome Center who use it to collect genetic data from volunteers for scientific research. Over 100,000 people have signed up so far (the service is free, and the researchers don't sell the information to third parties). As a token of appreciation, the researchers write programs to analyze their volunteers' DNA, generating new reports based on the latest studies.

On a recent visit to DNA.Land, I scanned down the list of traits they offered to tell me about. I stopped at intelligence.

I took a breath before I clicked.

Intelligence, after all, is different from the smoothness of your teeth or your risk of skin cancer. People have fought over the very meaning of the word for over a century. In the early 1900s, some psychologists claimed that intelligence was the mental power underlying many different tasks we carry out, from solving problems to remembering facts. And they developed ways to measure it with a number, just as a doctor might give a number for your blood pressure or body temperature.

But critics argued that the things we associate with intelligence are too complex and ambiguous to pin down in such a simplistic way. Meanwhile, eugenicists used the emerging concept of intelligence in their campaign to recast society. They argued that people's different intelligence test scores were largely due to differences in their genes, and so for the good of society, more intelligent people should have more children and less intelligent ones should not.

In the United States, eugenicists successfully campaigned for the sterilization of women deemed unfit. They lent their support to laws banning marriages between blacks and whites. They helped block immigration from Italy, Russia, and other countries where people were judged to have inferior intelligence. Nazi scientists imitated the American eugenicists and went further, using intelligence tests in hereditary health courts to decide who lived and died.

Yet intelligence, as a scientific concept, endured. The scores that people get on intelligence tests tend to stay similar from childhood to old age. Intelligence test scores are correlated with many other things, from people's reaction times answering simple questions to their odds of surviving into their 70s. What precisely it is that intelligence tests measure—the efficiency of the brain's wiring, perhaps—is not yet clear. But they measure something significant. And in just the past couple years, scientists have started to find some of the genes that play a role in how we do on these tests.

I took a breath and clicked on the link. I was swiftly sent to a new page, entitled "General Intelligence Trait Prediction Report for Carl Zimmer." And on that page, here's what I saw:

A bell curve of predicted scores, with a marker placing me on the low side. Chart: DNA.Land

The bell curve was surrounded by notes, disclaimers, and symbols. But my eye stayed locked on that lavender hill—especially on that personalized marker that located me on the low side of the scale.

What was I to make of this? For some reason, DNA.Land's prediction felt more profound than if I had taken an actual IQ test. It didn't depend on whether I could recall some random string of words at a particular moment. Instead, I was looking inside myself, at the immutable genes that shaped me from before birth.

When I sent the curve to my mother, she emailed back, "Those results can't be right! Simply cannot!"

I called up Yaniv Erlich, the scientist who wrote the intelligence program, to ask him about his prediction. Erlich, I should point out, majored in computational neuroscience, got a Ph.D. in genetics, became an associate professor at Columbia, and is on leave from teaching to serve as the chief science officer at the DNA-testing company MyHeritage. I imagine Erlich's mother is very proud of her boy.

I bring all this up because Erlich burst out laughing when I told him about my report and told me about his own.

"I also get that on the left side," he said. "Everything is cool. Many smart people end up there."

Erlich explained that he designed the program to make people cautious about the connection between genes and intelligence. All those disclaimers and notes that surrounded the bell curve were intended to show that these predictions are, in a sense, worse than just wrong. They're practically meaningless.

The inspiration for the program was a 2017 study pinpointing certain genes with some sort of connection to intelligence. For decades, scientists have known that the genes we inherit play a role in the variation in scores on intelligence tests. Studies on twins and families show that people who share more genes in common tend to get closer scores. The 2017 study, carried out by a team of researchers based at Vrije Universiteit Amsterdam, was one of the first to find a statistically strong connection between that variation in scores and specific genes.

DNA is composed of four units, known as bases, that are a bit like the alphabetic letters that spell out a recipe. For the most part, the DNA of any two people is identical. But here and there, the letters differ. In a study of over 78,000 people, the Amsterdam team found variants in 52 genes that are unusually common in people who score higher (or lower) on intelligence tests.

This kind of study is very different from the genetic tests that doctors order for patients. If a woman has a family history of aggressive breast cancer, for example, a doctor may order a test for mutations on the BRCA1 gene. A single mutation there can raise the lifetime risk of breast cancer to between 50 and 85 percent.

If you discovered that you had one of the variants identified by the Amsterdam team, that would not jack up your IQ by 50 points. Each one is only associated, on average, with a shift of a fraction of one point.

Erlich's program checks those 52 genes in the DNA of his volunteers. It determines the effect that each variant has on each person, adding up all the slightly positive and negative effects to determine their total impact.

In most cases, they all pretty much cancel each other out. That's why Erlich ended up with a bell curve, with its peak around a net effect of zero. In my case, the score-lowering variants slightly outweighed the score-raising ones, leaving me—like Erlich—on the left side of the curve. And I do mean slightly. Each of those ticks on the horizontal axis of the bell curve represents five IQ points. Erlich predicted that the effect of my 52 genes added up to less than a point.
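
To make the arithmetic concrete, here is a minimal sketch of the kind of tally such a program might perform. It is only a guess at the mechanics, not DNA.Land's actual code, and the variant IDs, genotypes, and effect sizes are invented for illustration.

    # Illustrative polygenic tally: sum tiny per-variant effects on the IQ scale.
    # The variant IDs, genotypes, and effect sizes below are made up.
    variant_effects = {            # effect (in IQ points) per copy of the scored allele
        "rs0000001": +0.08,
        "rs0000002": -0.05,
        "rs0000003": +0.02,
        "rs0000004": -0.11,
    }
    genotype = {                   # number of copies (0, 1, or 2) this person carries
        "rs0000001": 1,
        "rs0000002": 2,
        "rs0000003": 0,
        "rs0000004": 1,
    }

    net_effect = sum(variant_effects[v] * genotype[v] for v in variant_effects)
    print(f"Net predicted shift: {net_effect:+.2f} IQ points")   # typically well under a point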

But there's an even deeper illusion to my bell curve: The seeming precision is almost certainly wrong.

When geneticists use the word prediction, they give it a different meaning than the rest of us do. We usually think of predictions as accurate forecasts for particular situations. At a carnival, you might encounter a man who promises to predict your weight simply by looking you over. If you weighed, say, 130 pounds, and he guessed 132, you might be impressed. If he guessed 232, you'd expect to walk away with a giant teddy bear.

Geneticists are a lot more forgiving about predictions. When they try to predict a trait from a set of genes, their prediction may be dead on, or it may be no better than random. Or, as is almost always the case, it is somewhere in between.

Genes that predict the variation in a trait perfectly in a group of people have a predictive power of 100 percent. If they're no better than what you'd get from blind guesses, their power is zero. The Amsterdam team tested their 52 genes on thousands of people and concluded that the genes have a predictive power of nearly 5 percent. Their predictions are better than random guessing, but if you used them to predict individual people's intelligence, you'd give away a whole lot of teddy bears.

This weak power is no surprise to scientists who study the heredity of height, blood pressure, and other complex traits. Their variations arise from many genes—sometimes thousands of them—as well as variations in the environment.

That doesn't mean that scientists won't get better at predicting intelligence. Earlier this year, a team of scientists based at the University of Edinburgh published an even bigger study on intelligence, examining nearly a quarter of a million people. They identified 538 genes with clear-cut influence on intelligence test scores. Those 538 genes have a predictive power of about 7 percent. That still may not be good for carnival work, but it's approximately a 50 percent improvement on the power that existed less than a year before.

Scientists are now studying even bigger groups of over a million people, and it's likely their powers of prediction will jump yet again. But in years to come, they won't keep leaping to 100 percent. When researchers study identical twins, they find that their intelligence test scores tend to be closer than fraternal twins. But they're not identical. That's a sign that genes are not the only force that shapes our intelligence. In fact, only roughly half of the variation in intelligence test scores arises from variations in genes.

And many experts doubt that they'll get close to that upper limit. Studying intelligence is just hard. You can't measure it with a quick blood test. Instead, you have to get volunteers to take a lengthy exam.

Even if scientists only manage to reach a predictive power of 25 percent, we could learn a lot from their work. Researchers are already finding that genes linked to intelligence tend to do certain things and not others. A lot of them switch on when cells in the developing brain divide to produce new neurons, for example. Social scientists could take DNA into account in experiments to determine the best ways to help children learn and stay in school.

But with millions of people flocking to sites like 23andMe and Ancestry.com to get reports on their genes, we have to wonder what will happen if they start handing out intelligence predictions.

After talking with Erlich, I fear it will turn out badly.

Direct-to-consumer testing companies are marketing themselves as our new oracles of the self. "This year, know you better," an ad for 23andMe exhorts. If people expect to get this knowledge from their genes, I doubt many will look past stark curves and simple scores and plow through the fine print about the complex nature of intelligence, what it means to make a genetic prediction, and how untrustworthy a score for one person can be.

If people get accustomed to getting these intelligence predictions for themselves, it's possible that they may want them for their children too. Some researchers have even called for using genetic data in schools to create "precision education" tailored to each child's DNA. Scientists I've spoken to have raised the possibility that parents using in vitro fertilization might pick out embryos according to their scores on those tests.

For his own part, Erlich hopes none of these things come to pass. "I'm afraid that this will be a distraction for society," he said. An obsession with slippery genetic predictions could turn people's attention away from other things that influence how children do in school and beyond—things like their family's wealth, the stress in their neighborhoods, the quality of the schools themselves.

"I'm afraid that policy makers won't focus on the real things that bother me about inequality and education," Erlich said.

Our inner landscape of DNA is an endlessly fascinating place. But we also have to look up and survey the cliffs and chasms of our social landscape.



_- Steve

Tuesday, May 29, 2018

Antony Beevor: the greatest war movie ever – and the ones I can't bear | Film | The Guardian


Antony Beevor: the greatest war movie ever – and the ones I can't bear

He groaned at Valkyrie and despaired at Saving Private Ryan. The award-winning historian takes aim at the war films that make him furious – and reveals his own favourite

For a long time now, my wife has refused to watch a war movie with me. This is because I cannot stop grinding my teeth with annoyance at major historical mistakes, or harrumphing over errors of period detail. She only made an exception when Valkyrie came out, with Tom Cruise playing Count Claus Schenk von Stauffenberg. Such a folly of miscasting was bound to be a hoot, and we were not disappointed, especially when Cruise saluted in that downward cutaway style as if he were still in Top Gun. But I was soon grinding away again when the director and screenwriter felt compelled to improve on history, by making it look as if the 20 July plot to blow up Hitler had still very nearly succeeded.

I despair at the way American and British movie-makers feel they have every right to play fast and loose with the facts, yet have the arrogance to imply that their version is as good as the truth. Continental film-makers are on the whole far more scrupulous. The German film Downfall, about Hitler's last days in the bunker, respected historical events and recreated them accurately.

The corruption of combat … The 317th Platoon, regarded as 'the greatest war movie ever' by Beevor. Photograph: Allstar/RANK

In my view, the greatest war movie ever made is The 317th Platoon, a French film from 1965 set during the country's first Indochina war. This was the original "platoon movie", whose format later directors followed but failed to match in its portrayal of characters and their interaction, to say nothing of the moral choices and the corruption of combat. It is followed closely by 1966's The Battle of Algiers, set during the Algerian war of independence. This was one of the first war films to adopt a quasi-documentary approach, and tackle the moral quagmire of torture justified by the need to save lives.

More recent imitators lack all intellectual honesty. They throw dates and place names on to the screen as if what you are about to see is a faithful reproduction of events, when they are simply trying to pass off their fiction as authentic. This is basically a marketing ploy that has developed over the last 20 years or so. Unfortunately, fake authenticity sells. People are more likely to want to see something they think is very close to the truth, so they can feel they are learning as well as being entertained. In a post-literate society, the moving image is king, and most people's knowledge of history is regrettably based more on cinematic fiction than archival fact.

There are many examples of shameless deception, such as the notorious U-571, in which a US warship is shown to capture a German submarine and seize its Enigma decoding machine, thus enabling the Allies to win the battle of the Atlantic. Right at the end, in the credits, a brief text admitted that in fact it had been the crew of a Royal Navy destroyer, HMS Bulldog, that performed the feat – seven months before the US entered the war.

Shameless deception … U-571 sees the US triumph in a war it had yet to enter. Photograph: PA

When promoting Enemy at the Gates, a fictitious sniper duel set in Stalingrad, Paramount Pictures even had the gall to claim: "One bullet can change the course of history." I hasten to add that, even though Jean-Jacques Annaud invited me to come out to Germany to watch the filming, the movie had nothing to do with my book Stalingrad and I was not an adviser in any form.

The director was trying to woo me and persuade me not to be too severe on the question of truth, because we had found in the Russian ministry of defence archives that the whole story of the sniper duel – portrayed by Jude Law and Ed Harris – had been a clever figment of Soviet propaganda. I liked Annaud, but in the end I was not popular, of course, because Paramount had bought the movie as "a true story". His great line was: "But Antony, who can tell where myth begins and truth ends?"

The real problem is that the needs of history and the needs of the movie industry are fundamentally incompatible. Hollywood has to simplify everything according to set formulae. Its films have to have heroes and, of course, baddies – moral equivocation is too complex. Feature films also have to have a whole range of staple ingredients if they are to make it through the financing, production and studio system to the box office. One element is the "arc of character", in which the leading actors have to go through a form of moral metamorphosis as a result of the experiences they undergo. Endings have to be upbeat, even for the Holocaust. Look at Schindler's List and the sentimentality of its finale, revealing that in movies only the survivors count.

The true story that wasn't … Jude Law as a sniper in Enemy at the Gates. Photograph: Allstar/Paramount

I was asked by a large-circulation American weekly magazine to review Saving Private Ryan. My piece was spiked since it did not share the widespread adulation, and I still shake my head in disbelief when it is regularly voted the best war movie ever. It is nevertheless a work of intriguing paradoxes – some intended, others not. Steven Spielberg's storyline rightly dramatises the clash between patriotic and therefore collective loyalty, and the struggle of the individual for survival. Those mutually contradictory values are, in many ways, the essence of war.

Spielberg said at the time that he sees the second world war as the "defining moment" in history. One also suspects that he wanted this film to be seen as the defining movie of the war. If so, it is a uniquely American definition of history, with no reference to the British let alone the Soviet role.

Eight US rangers under the command of a captain, having survived the initial D-day bloodbath, are detailed to seek out and save a single man, Private Ryan. The Hollywood notion of creativity often takes the form of cinematic ancestor worship – but in this case, it is images and effects that are recycled. Spielberg may not even have included them consciously but, during the landing, the blood in the water in the first machine-gunning prompts memories of Jaws, another Spielberg film. And German Tiger tanks can indeed appear like prehistoric monsters, but when the sound effects of their approach later in the film resemble that of the Tyrannosaurus rex in Jurassic Park, it all seems too much.

After a truly extraordinary opening – probably the most realistic battle sequence ever filmed – everything changes and becomes formulaic. The climax combines just about every cliche in the book, with a very mixed handful of men (almost a la Dirty Dozen) improvising weapons to defend a vital bridge against an SS Panzer counterattack. The redeemed coward and the cynic reduced to tears – both ticking the "arc of character" box – are straight out of central screenwriting. The US air force arrives in the nick of time, just like the cavalry in 1950s cowboy films. And to cap it all, the final frames are of Private Ryan, standing in old age amid the rows of white crosses in a military cemetery, saluting his fallen comrades as tears run down his cheeks.

So what, apart from milking our tear ducts with both hands, was Spielberg really trying to do? Was his revolutionary approach to realism – the special effects and stunt teams make up the largest blocks in the credits – simply an attempt to conceal a deeply conservative message, as some commentators claimed?

It was not quite as simple as that. Amid the horror of war, Spielberg seems to be trying to rediscover American innocence, that Holy Grail that existed only in the Rousseau-esque imagination yet was virtually incorporated into the constitution. Spielberg, like other Hollywood directors of the time, came from a generation scarred by the moral quagmire of Vietnam. He understood the national need, in the post-cold war chaos, to reach back to more certain times, seeking reassurance from that moment in history – the second world war – when the fight seemed unequivocally right. "Tell me I've led a good life," says the weeping veteran in the cemetery to his wife. "Tell me I'm a good man."

'A stinker' … Mel Gibson in The Patriot. Photograph: Allstar/Columbia Pictures

"You are," she replies, and the music begins to swell, with drum beats and trumpets. This representative of American motherhood appears to be reassuring the US as a whole. She seems to be speaking to a nation unable at that time to come to terms with its role in a disordered world, to a nation that, for all its power, can be bewilderingly naive abroad because it so badly needs to feel good about itself at home.

Even movies ostensibly showing corruption and criminality in the heart of the CIA and the Pentagon have to end on a nationalistic note, with a tiny group of clean, upstanding American liberals saving democracy. And it is, of course, hard to forget The Patriot, starring Mel Gibson, that fearless symbol of Brit-bashing films, whether at Gallipoli or all woaded up in the Scottish Highlands as Braveheart.

Andrew Marr rightly called The Patriot, set in the American war of independence, "a stinker". As he pointed out: "Black Americans, in fact destined to stay slaves thanks to the war, very many of whom enlisted with the British, are shown fighting shoulder to shoulder with their white rebel 'brothers'. The British are portrayed as effete sadists and serial war criminals, just as in other American films. The huge support of the Bourbon French, who helped win the war, is airbrushed out. And the fact that most colonists actually sided with King George is airily forgotten."

We will fight them on the pristine beaches … Kenneth Branagh in Dunkirk. Photograph: Allstar/Warner Bros.

Patriotism also permeated those British war movies of the 1950s and 60s – The Dam Busters, Reach for the Sky, The Cruel Sea, The Heroes of Telemark, The Battle of the River Plate, Cockleshell Heroes. It camouflaged itself in self-deprecation, but the rousing march music in the finale always braced our belief in the rightness of our cause. We have long made fun of all the period cliches, unable to believe that anyone talked like that. But when researching my new book Arnhem: The Battle for the Bridges, I found that German officers really did say to the British paratroopers taken prisoner: "For you the war is over."

One of my favourite remarks, recorded at the time by a junior doctor, is the reaction of Colonel Marrable, the head of an improvised hospital in the Netherlands, when Waffen-SS panzergrenadiers seized the building. Still puffing gently on his pipe, he says to his medical staff: "Good show, chaps. Don't take any notice of the Jerries. Carry on as if nothing has happened." I have always been doubtful about the notion of "a national character", but a national self-image certainly existed during the war and for some time afterwards. Perhaps that is partly why I do not react so angrily when watching films of that era. Also, they never used that weasel claim "based on a true story".

Recent productions are a very different matter. Last year's Dunkirk and Darkest Hour were strong Oscar contenders. Yet watching Dunkirk, you would have thought that CGI had not been invented. Where were all those 400,000 men and their discarded equipment on all those miles of empty, pristine beaches? The film also gave the impression that the air battles took place at low level over the sea when, in fact, Fighter Command was counterattacking at altitude and well inland. It also implied that "the little ships", as Churchill called them, rescued more soldiers than the Royal Navy warships. Wrong again.

'He never set foot on the Tube in his life' … Gary Oldman takes the underground as Churchill in Darkest Hour. Photograph: Alamy

Darkest Hour had even more historical inaccuracies. Gary Oldman fully deserved the best actor Oscar for his brilliant performance as Churchill, but those responsible for the script get "nul points". I fear that anyone who agrees to be a historical adviser for a movie is putting their reputation on the line. The ludicrous scene of Churchill in the underground (where he had never set foot in his life) was not the only howler.

On becoming prime minister in 1940, Churchill remained in the Admiralty, but he generously allowed Chamberlain to carry on in Downing Street. His respectful treatment of his former leader is important because – when it came to the crunch with Lord Halifax, over the question of asking the Italians to discover Hitler's peace terms – Chamberlain supported Churchill and did not plot against him as the film suggests.

Also, why were so many scenes shot in the bunker war rooms when the Luftwaffe had not yet bombed London? I was so irritated, it was a good thing I saw it on my own. Another visit to the dentist, I fear.



_- Steve

If correlation doesn’t imply causation, then what does? | DDI


If correlation doesn't imply causation, then what does?

It is a commonplace of scientific discussion that correlation does not imply causation. Business Week recently ran a spoof article pointing out some amusing examples of the dangers of inferring causation from correlation. For example, the article points out that Facebook's growth has been strongly correlated with the yield on Greek government bonds.

Despite this strong correlation, it would not be wise to conclude that the success of Facebook has somehow caused the current (2009-2012) Greek debt crisis, nor that the Greek debt crisis has caused the adoption of Facebook!

Of course, while it's all very well to piously state that correlation doesn't imply causation, it does leave us with a conundrum: under what conditions, exactly, can we use experimental data to deduce a causal relationship between two or more variables?

The standard scientific answer to this question is that (with some caveats) we can infer causality from a well designed randomized controlled experiment. Unfortunately, while this answer is satisfying in principle and sometimes useful in practice, it's often impractical or impossible to do a randomized controlled experiment. And so we're left with the question of whether there are other procedures we can use to infer causality from experimental data. And, given that we can find more general procedures for inferring causal relationships, what does causality mean, anyway, for how we reason about a system?

It might seem that the answers to such fundamental questions would have been settled long ago. In fact, they turn out to be surprisingly subtle questions. Over the past few decades, a group of scientists have developed a theory of causal inference intended to address these and other related questions. This theory can be thought of as an algebra or language for reasoning about cause and effect. Many elements of the theory have been laid out in a famous book by one of the main contributors to the theory, Judea Pearl. Although the theory of causal inference is not yet fully formed, and is still undergoing development, what has already been accomplished is interesting and worth understanding.

In this post I will describe one small but important part of the theory of causal inference, a causal calculus developed by Pearl. This causal calculus is a set of three simple but powerful algebraic rules which can be used to make inferences about causal relationships. In particular, I'll explain how the causal calculus can sometimes (but not always!) be used to infer causation from a set of data, even when a randomized controlled experiment is not possible. Also in the post, I'll describe some of the limits of the causal calculus, and some of my own speculations and questions.

The post is a little technically detailed at points. However, the first three sections of the post are non-technical, and I hope will be of broad interest. Throughout the post I've included occasional "Problems for the author", where I describe problems I'd like to solve, or things I'd like to understand better. Feel free to ignore these if you find them distracting, but I hope they'll give you some sense of what I find interesting about the subject. Incidentally, I'm sure many of these problems have already been solved by others; I'm not claiming that these are all open research problems, although perhaps some are. They're simply things I'd like to understand better. Also in the post I've included some exercises for the reader, and some slightly harder problems for the reader. You may find it informative to work through these exercises and problems.

Before diving in, one final caveat: I am not an expert on causal inference, nor on statistics. The reason I wrote this post was to help me internalize the ideas of the causal calculus. Occasionally, one finds a presentation of a technical subject which is beautifully clear and illuminating, a presentation where the author has seen right through the subject, and is able to convey that crystalized understanding to others. That's a great aspirational goal, but I don't yet have that understanding of causal inference, and these notes don't meet that standard. Nonetheless, I hope others will find my notes useful, and that experts will speak up to correct any errors or misapprehensions on my part.

Simpson's paradox

Let me start by explaining two example problems to illustrate some of the difficulties we run into when making inferences about causality. The first is known as Simpson's paradox. To explain Simpson's paradox I'll use a concrete example based on the passage of the Civil Rights Act in the United States in 1964.

In the US House of Representatives, 61 percent of Democrats voted for the Civil Rights Act, while a much higher percentage, 80 percent, of Republicans voted for the Act. You might think that we could conclude from this that being Republican, rather than Democrat, was an important factor in causing someone to vote for the Civil Rights Act. However, the picture changes if we include an additional factor in the analysis, namely, whether a legislator came from a Northern or Southern state. If we include that extra factor, the situation completely reverses, in both the North and the South. Here's how it breaks down:

North: Democrat (94 percent), Republican (85 percent)

South: Democrat (7 percent), Republican (0 percent)

Yes, you read that right: in both the North and the South, a larger fraction of Democrats than Republicans voted for the Act, despite the fact that overall a larger fraction of Republicans than Democrats voted for the Act.

You might wonder how this can possibly be true. I'll quickly state the raw voting numbers, so you can check that the arithmetic works out, and then I'll explain why it's true. You can skip the numbers if you trust my arithmetic.

North: Democrat (145/154, 94 percent), Republican (138/162, 85 percent)

South: Democrat (7/94, 7 percent), Republican (0/10, 0 percent)

Overall: Democrat (152/248, 61 percent), Republican (138/172, 80 percent)

One way of understanding what's going on is to note that a far greater proportion of Democrat (as opposed to Republican) legislators were from the South. In fact, at the time the House had 94 southern Democrats, and only 10 southern Republicans. Because of this enormous difference, the very low fraction (7 percent) of southern Democrats voting for the Act dragged down the Democrats' overall percentage much more than did the even lower fraction (0 percent) of southern Republicans who voted for the Act.

(The numbers above are for the House of Representatives. The numbers were different in the Senate, but the same overall phenomenon occurred. I've taken the numbers from Wikipedia's article about Simpson's paradox, and there are more details there.)
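
The arithmetic is easy to verify with a few lines of Python, using the vote counts quoted above:

    # Check the Civil Rights Act vote counts quoted above (House of Representatives).
    votes = {
        ("North", "Democrat"):   (145, 154),
        ("North", "Republican"): (138, 162),
        ("South", "Democrat"):   (7, 94),
        ("South", "Republican"): (0, 10),
    }

    for party in ("Democrat", "Republican"):
        yes = sum(votes[(region, party)][0] for region in ("North", "South"))
        total = sum(votes[(region, party)][1] for region in ("North", "South"))
        print(f"{party}: overall {yes}/{total} = {yes/total:.0%}")
        for region in ("North", "South"):
            y, t = votes[(region, party)]
            print(f"  {region}: {y}/{t} = {y/t:.0%}")

    # Democrats lead within each region (94% vs 85% in the North, 7% vs 0% in the South),
    # yet Republicans lead overall (80% vs 61%) -- Simpson's paradox.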

If we take a naive causal point of view, this result looks like a paradox. As I said above, the overall voting pattern seems to suggest that being Republican, rather than Democrat, was an important causal factor in voting for the Civil Rights Act. Yet if we look at the individual statistics in both the North and the South, then we'd come to the exact opposite conclusion. To state the same result more abstractly, Simpson's paradox is the fact that the correlation between two variables can actually be reversed when additional factors are considered. So two variables which appear correlated can become anticorrelated when another factor is taken into account.

You might wonder if results like those we saw in voting on the Civil Rights Act are simply an unusual fluke. But, in fact, this is not that uncommon. Wikipedia's page on Simpson's paradox lists many important and similar real-world examples ranging from understanding whether there is gender-bias in university admissions to which treatment works best for kidney stones. In each case, understanding the causal relationships turns out to be much more complex than one might at first think.

I'll now go through a second example of Simpson's paradox, the kidney stone treatment example just mentioned, because it helps drive home just how bad our intuitions about statistics and causality are.

Imagine you suffer from kidney stones, and your Doctor offers you two choices: treatment A or treatment B. Your Doctor tells you that the two treatments have been tested in a trial, and treatment A was effective for a higher percentage of patients than treatment B. If you're like most people, at this point you'd say "Well, okay, I'll go with treatment A".

Here's the gotcha. Keep in mind that this really happened. Suppose you divide patients in the trial up into those with large kidney stones, and those with small kidney stones. Then even though treatment A was effective for a higher overall percentage of patients than treatment B, treatment B was effective for a higher percentage of patients in both groups, i.e., for both large and small kidney stones. So your Doctor could just as honestly have said "Well, you have large [or small] kidney stones, and treatment B worked for a higher percentage of patients with large [or small] kidney stones than treatment A". If your Doctor had made either one of these statements, then if you're like most people you'd have decided to go with treatment B, i.e., the exact opposite treatment.

The kidney stone example relies, of course, on the same kind of arithmetic as in the Civil Rights Act voting, and it's worth stopping to figure out for yourself how the claims I made above could possibly be true. If you're having trouble, you can click through to the Wikipedia page, which has all the details of the numbers.

Now, I'll confess that before learning about Simpson's paradox, I would have unhesitatingly done just as I suggested a naive person would. Indeed, even though I've now spent quite a bit of time pondering Simpson's paradox, I'm not entirely sure I wouldn't still sometimes make the same kind of mistake. I find it more than a little mind-bending that my heuristics about how to behave on the basis of statistical evidence are obviously not just a little wrong, but utterly, horribly wrong.

Perhaps I'm alone in having terrible intuition about how to interpret statistics. But frankly I wouldn't be surprised if most people share my confusion. I often wonder how many people with real decision-making power – politicians, judges, and so on – are making decisions based on statistical studies, and yet they don't understand even basic things like Simpson's paradox. Or, to put it another way, they have not the first clue about statistics. Partial evidence may be worse than no evidence if it leads to an illusion of knowledge, and so to overconfidence and certainty where none is justified. It's better to know that you don't know.

Correlation, causation, smoking, and lung cancer

As a second example of the difficulties in establishing causality, consider the relationship between cigarette smoking and lung cancer. In 1964 the United States' Surgeon General issued a report claiming that cigarette smoking causes lung cancer. Unfortunately, according to Pearl the evidence in the report was based primarily on correlations between cigarette smoking and lung cancer. As a result the report came under attack not just by tobacco companies, but also by some of the world's most prominent statisticians, including the great Ronald Fisher. They claimed that there could be a hidden factor – maybe some kind of genetic factor – which caused both lung cancer and people to want to smoke (i.e., nicotine craving). If that was true, then while smoking and lung cancer would be correlated, the decision to smoke or not smoke would have no impact on whether you got lung cancer.

Now, you might scoff at this notion. But derision isn't a principled argument. And, as the example of Simpson's paradox showed, determining causality on the basis of correlations is tricky, at best, and can potentially lead to contradictory conclusions. It'd be much better to have a principled way of using data to conclude that the relationship between smoking and lung cancer is not just a correlation, but rather that there truly is a causal relationship.

One way of demonstrating this kind of causal connection is to do a randomized, controlled experiment. We suppose there is some experimenter who has the power to intervene with a person, literally forcing them to either smoke (or not) according to the whim of the experimenter. The experimenter takes a large group of people, and randomly divides them into two halves. One half are forced to smoke, while the other half are forced not to smoke. By doing this the experimenter can break the relationship between smoking and any hidden factor causing both smoking and lung cancer. By comparing the cancer rates in the group who were forced to smoke to those who were forced not to smoke, it would then be possible to determine whether or not there is truly a causal connection between smoking and lung cancer.

This kind of randomized, controlled experiment is highly desirable when it can be done, but experimenters often don't have this power. In the case of smoking, this kind of experiment would probably be illegal today, and, I suspect, even decades into the past. And even when it's legal, in many cases it would be impractical, as in the case of the Civil Rights Act, and for many other important political, legal, medical, and economic questions.
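
To make the contrast between observing and intervening concrete, here is a small simulation with entirely made-up probabilities, in which a hidden factor drives both smoking and cancer while smoking itself has no effect at all. Ordinary observation suggests a strong link; simulated randomization makes the link vanish.

    import random

    def draw(p, rng):
        return rng.random() < p

    def cancer_rate_among_smokers(n, rng, randomize=False):
        """Estimate P(cancer | smokes) in a made-up model where only the hidden
        factor causes cancer; 'randomize' means an experimenter assigns smoking."""
        smokers = smokers_with_cancer = 0
        for _ in range(n):
            hidden = draw(0.3, rng)                          # hidden factor
            if randomize:
                smokes = draw(0.5, rng)                      # assigned by coin flip
            else:
                smokes = draw(0.8 if hidden else 0.1, rng)   # hidden factor drives smoking
            cancer = draw(0.4 if hidden else 0.05, rng)      # smoking plays no role here
            if smokes:
                smokers += 1
                smokers_with_cancer += cancer
        return smokers_with_cancer / smokers

    rng = random.Random(0)
    print("observational:", round(cancer_rate_among_smokers(200_000, rng), 3))          # ~0.32
    print("randomized:   ", round(cancer_rate_among_smokers(200_000, rng, True), 3))    # ~0.16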

Causal models

To help address problems like the two example problems just discussed, Pearl introduced a causal calculus. In the remainder of this post, I will explain the rules of the causal calculus, and use them to analyse the smoking-cancer connection. We'll see that even without doing a randomized controlled experiment it's possible (with the aid of some reasonable assumptions) to infer what the outcome of a randomized controlled experiment would have been, using only relatively easily accessible experimental data, data that doesn't require experimental intervention to force people to smoke or not, but which can be obtained from purely observational studies.

To state the rules of the causal calculus, we'll need several background ideas. I'll explain those ideas over the next three sections of this post. The ideas are causal models (covered in this section), causal conditional probabilities, and d-separation, respectively. It's a lot to swallow, but the ideas are powerful, and worth taking the time to understand. With these notions under our belts, we'll be able to understand the rules of the causal calculus.

To understand causal models, consider the following graph of possible causal relationships between smoking, lung cancer, and some unknown hidden factor (say, a hidden genetic factor):

This is a quite general model of causal relationships, in the sense that it includes both the suggestion of the US Surgeon General (smoking causes cancer) and also the suggestion of the tobacco companies (a hidden factor causes both smoking and cancer). Indeed, it also allows a third possibility: that perhaps both smoking and some hidden factor contribute to lung cancer. This combined relationship could potentially be quite complex: it could be, for example, that smoking alone actually reduces the chance of lung cancer, but the hidden factor increases the chance of lung cancer so much that someone who smokes would, on average, see an increased probability of lung cancer. This sounds unlikely, but later we'll see some toy model data which has exactly this property.

Of course, the model depicted in the graph above is not the most general possible model of causal relationships in this system; it's easy to imagine much more complex causal models. But at the very least this is an interesting causal model, since it encompasses both the US Surgeon General and the tobacco company suggestions. I'll return later to the possibility of more general causal models, but for now we'll simply keep this model in mind as a concrete example of a causal model.

Mathematically speaking, what do the arrows of causality in the diagram above mean? We'll develop an answer to that question over the next few paragraphs. It helps to start by moving away from the specific smoking-cancer model to allow a causal model to be based on a more general graph indicating possible causal relationships between a number of variables:

Each vertex in this causal model has an associated random variable, X_1,X_2,\ldots. For example, in the causal model above X_2 could be a two-outcome random variable indicating the presence or absence of some gene that exerts an influence on whether someone smokes or gets lung cancer, X_3 indicates "smokes" or "does not smoke", and X_4 indicates "gets lung cancer" or "doesn't get lung cancer". The other variables X_1 and X_5 would refer to other potential dependencies in this (somewhat more complex) model of the smoking-cancer connection.

A notational convention that we'll use often is to interchangeably use X_j to refer to a random variable in the causal model, and also as a way of labelling the corresponding vertex in the graph for the causal model. It should be clear from context which is meant. We'll also sometimes refer interchangeably to the causal model or to the associated graph.

For the notion of causality to make sense we need to constrain the class of graphs that can be used in a causal model. Obviously, it'd make no sense to have loops in the graph:

We can't have X causing Y causing Z causing X! At least, not without a time machine. Because of this we constrain the graph to be a directed acyclic graph, meaning a (directed) graph which has no loops in it.

By the way, I must admit that I'm not a fan of the term directed acyclic graph. It sounds like a very complicated notion, at least to my ear, when what it means is very simple: a graph with no loops. I'd really prefer to call it a "loop-free graph", or something like that. Unfortunately, the "directed acyclic graph" nomenclature is pretty standard, so we'll go with it.
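
Whatever we call it, checking that a proposed causal graph really is loop-free is easy; here is a minimal sketch using a topological sort (Kahn's algorithm), with the graph given as a list of (parent, child) edges:

    from collections import defaultdict, deque

    def is_dag(edges):
        """Return True if the directed graph, given as (parent, child) pairs, has no loops."""
        children = defaultdict(list)
        indegree = defaultdict(int)
        nodes = set()
        for u, v in edges:
            children[u].append(v)
            indegree[v] += 1
            nodes.update((u, v))
        queue = deque(n for n in nodes if indegree[n] == 0)
        removed = 0
        while queue:
            u = queue.popleft()
            removed += 1
            for v in children[u]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
        return removed == len(nodes)   # every vertex peeled off => no cycle

    print(is_dag([("hidden", "smokes"), ("hidden", "cancer"), ("smokes", "cancer")]))   # True
    print(is_dag([("X", "Y"), ("Y", "Z"), ("Z", "X")]))                                 # False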

Our picture so far is that a causal model consists of a directed acyclic graph, whose vertices are labelled by random variables X_1,X_2,\ldots. To complete our definition of causal models we need to capture the allowed relationships between those random variables.

Intuitively, what causality means is that for any particular X_j the only random variables which directly influence the value of X_j are the parents of X_j, i.e., the collection X_{\mbox{pa}(j)} of random variables which are connected directly to X_j. For instance, in the graph shown below (which is the same as the complex graph we saw a little earlier), we have X_{\mbox{pa}(4)} = (X_2,X_3):

Now, vertices further back in the graph – say, the parents of the parents – could, of course, influence the value of X_4. But it would be indirect, an influence mediated through the parent vertices.

Note, by the way, that I've overloaded the X notation, using X_{\mbox{pa}(4)} to denote a collection of random variables. I'll use this kind of overloading quite a bit in the rest of this post. In particular, I'll often use the notation X (or W, Y or Z) to denote a subset of random variables from the graph.

Motivated by the above discussion, one way we could define causal influence would be to require that X_j be a function of its parents:

 X_j = f_j(X_{\mbox{pa}(j)}),

where f_j(\cdot) is some function. In fact, we'll allow a slightly more general notion of causal influence, allowing X_j to not just be a deterministic function of the parents, but a random function. We do this by requiring that X_j be expressible in the form:

 X_j = f_j(X_{\mbox{pa}(j)},Y_{j,1},Y_{j,2},\ldots),

where f_j is a function, and Y_{j,\cdot} is a collection of random variables such that: (a) the Y_{j,\cdot} are independent of one another for different values of j; and (b) for each j, Y_{j,\cdot} is independent of all variables X_k, except when X_k is X_j itself, or a descendant of X_j. The intuition is that the Y_{j,\cdot} are a collection of auxiliary random variables which inject some extra randomness into X_j (and, through X_j, its descendants), but which are otherwise independent of the variables in the causal model.

Summing up, a causal model consists of a directed acyclic graph, G, whose vertices are labelled by random variables, X_j, and each X_j is expressible in the form X_j = f_j(X_{\mbox{pa}(j)},Y_{j,\cdot}) for some function f_j. The Y_{j,\cdot} are independent of one another, and each Y_{j,\cdot} is independent of all variables X_k, except when X_k is X_j or a descendant of X_j.
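
Here is a minimal sketch of this definition in code, for a three-variable model (a hidden factor influencing both smoking and cancer, and smoking influencing cancer). Each variable is computed by a function f_j of its parents together with its own independent noise Y_j; the particular functions and probabilities are invented for illustration.

    import random

    def sample_once(rng):
        """Draw one joint sample (hidden, smokes, cancer) from an illustrative
        structural causal model: each X_j = f_j(parents of X_j, Y_j), where Y_j
        is that variable's own independent uniform noise."""
        y_hidden, y_smokes, y_cancer = rng.random(), rng.random(), rng.random()

        hidden = y_hidden < 0.3                                    # no parents
        smokes = y_smokes < (0.8 if hidden else 0.2)               # parent: hidden
        p_cancer = 0.05 + (0.25 if hidden else 0.0) + (0.10 if smokes else 0.0)
        cancer = y_cancer < p_cancer                               # parents: hidden, smokes
        return hidden, smokes, cancer

    rng = random.Random(1)
    print([sample_once(rng) for _ in range(5)])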

In practice, we will not work directly with the functions f_j or the auxiliary random variables Y_{j,\cdot}. Instead, we'll work with the following equation, which specifies the causal model's joint probability distribution as a product of conditional probabilities:

   p(x_1,x_2,\ldots) = \prod_j p(x_j | \mbox{pa}(x_j)).

I won't prove this equation, but the expression should be plausible, and is pretty easy to prove; I've asked you to prove it as an optional exercise below.

Exercises

  • Prove the above equation for the joint probability distribution.

Problems

  • (Simpson's paradox in causal models) Consider the causal model of smoking introduced above. Suppose that the hidden factor is a gene which is either switched on or off. If on, it tends to make people both smoke and get lung cancer. Find explicit values for conditional probabilities in the causal model such that p(\mbox{cancer} | \mbox{smokes}) > p(\mbox{cancer} | \mbox{doesn't smoke}), and yet if the additional genetic factor is taken into account this relationship is reversed. That is, we have both p(\mbox{cancer} | \mbox{smokes, gene on}) < p(\mbox{cancer} | \mbox{doesn't smoke, gene on}) and p(\mbox{cancer} | \mbox{smokes, gene off}) < p(\mbox{cancer} | \mbox{doesn't smoke, gene off}).

Problems for the author

  • An alternate, equivalent approach to defining causal models is as follows: (1) all root vertices (i.e., vertices with no parents) in the graph are labelled by independent random variables. (2) augment the graph by introducing new vertices corresponding to the Y_{j,k}. These new vertices have single outgoing edges, pointing to X_j. (3) Require that non-root vertices in the augmented graph be deterministic functions of their parents. The disadvantage of this definition is that it introduces the overhead of dealing with the augmented graph. But the definition also has the advantage of cleanly separating the stochastic and deterministic components, and I wouldn't be surprised if developing the theory of causal inference from this point of view was stimulating, at the very least, and may possibly have some advantages compared to the standard approach. So the problem I set myself (and anyone else who is interested!) is to carry the consequences of this change through the rest of the theory of causal inference, looking for advantages and disadvantages.

I've been using terms like "causal influence" somewhat indiscriminately in the discussion above, and so I'd like to pause to discuss a bit more carefully about what is meant here, and what nomenclature we should use going forward. All the arrows in a causal model indicate are the possibility of a direct causal influence. This results in two caveats on how we think about causality in these models. First, it may be that a child random variable is actually completely independent of the value of one (or more) of its parent random variables. This is, admittedly, a rather special case, but is perfectly consistent with the definition. For example, in a causal model like

it is possible that the outcome of cancer might be independent of the hidden causal factor or, for that matter, that it might be independent of whether someone smokes or not. (Indeed, logically, at least, it may be independent of both, although of course that's not what we'll find in the real world.) The second caveat in how we think about the arrows and causality is that the arrows only capture the direct causal influences in the model. It is possible that in a causal model like

X_1 will have a causal influence on X_5 through its influence on X_2 and X_3. This would be an indirect causal influence, mediated by other random variables, but it would still be a causal influence. In the next section I'll give a more formal definition of causal influence that can be used to make these ideas precise.

Causal conditional probabilities

In this section I'll explain what I think is the most imaginative leap underlying the causal calculus. It's the introduction of the concept of causal conditional probabilities.

The notion of ordinary conditional probabilities is no doubt familiar to you. It's pretty straightforward to do experiments to estimate conditional probabilities such as p(\mbox{cancer}| \mbox{smoking}), simply by looking at the population of people who smoke, and figuring out what fraction of those people develop cancer. Unfortunately, for the purpose of understanding the causal relationship between smoking and cancer, p(\mbox{cancer}| \mbox{smoking}) isn't the quantity we want. As the tobacco companies pointed out, there might well be a hidden genetic factor that makes it very likely that you'll see cancer in anyone who smokes, but that wouldn't therefore mean that smoking causes cancer.

As we discussed earlier, what you'd really like to do in this circumstance is a randomized controlled experiment in which it's possible for the experimenter to force someone to smoke (or not smoke), breaking the causal connection between the hidden factor and smoking. In such an experiment you really could see if there was a causal influence by looking at what fraction of people who smoked got cancer. In particular, if that fraction was higher than in the overall population then you'd be justified in concluding that smoking helped cause cancer. In practice, it's probably not practical to do this kind of randomized controlled experiment. But Pearl had what turns out to be a very clever idea: to imagine a hypothetical world in which it really is possible to force someone to (for example) smoke, or not smoke. In particular, he introduced a conditional causal probability p(\mbox{cancer}| \mbox{do(smoking)}), which is the conditional probability of cancer in this hypothetical world. This should be read as the (causal conditional) probability of cancer given that we "do" smoking, i.e., someone has been forced to smoke in a (hypothetical) randomized experiment.

Now, at first sight this appears a rather useless thing to do. But what makes it a clever imaginative leap is that although it may be impossible or impractical to do a controlled experiment to determine p(\mbox{cancer}|\mbox{do}(\mbox{smoking})), Pearl was able to establish a set of rules – a causal calculus – that such causal conditional probabilities should obey. And, by making use of this causal calculus, it turns out to sometimes be possible to infer the value of probabilities such as p(\mbox{cancer}|\mbox{do}(\mbox{smoking})), even when a controlled, randomized experiment is impossible. And that's a very remarkable thing to be able to do, and why I say it was so clever to have introduced the notion of causal conditional probabilities.

We'll discuss the rules of the causal calculus later in this post. For now, though, let's develop the notion of causal conditional probabilities. Suppose we have a causal model of some phenomenon:

Now suppose we introduce an external experimenter who is able to intervene to deliberately set the value of a particular variable X_j to x_j. In other words, the experimenter can override the other causal influences on that variable. This is equivalent to having a new causal model:

In this new causal model, we've represented the experimenter by a new vertex, which has as a child the vertex X_j. All other parents of X_j are cut off, i.e., the edges from the parents to X_j are deleted from the graph. In this case that means the edge from X_2 to X_3 has been deleted. This represents the fact that the experimenter's intervention overrides the other causal influences. (Note that the edges to the children of X_j are left undisturbed.) In fact, it's even simpler (and equivalent) to consider a causal model where the parents have been cut off from X_j, and no extra vertex added:

This model has no vertex explicitly representing the experimenter, but rather the relation X_j = f_j(X_{{\rm pa}(j)},Y_{j,\cdot}) is replaced by the relation X_j = x_j. We will denote this graph by G_{\overline X_j}, indicating the graph in which all edges pointing to X_j have been deleted. We will call this a perturbed graph, and the corresponding causal model a perturbed causal model. In the perturbed causal model the only change is to delete the edges to X_j, and to replace the relation X_j = f_j(X_{{\rm pa}(j)},Y_{j,\cdot}) by the relation X_j = x_j.

Our aim is to use this perturbed causal model to compute the conditional causal probability p(x_1,\ldots,\hat x_j, \ldots, x_n | \mbox{do}(x_j)). In this expression, \hat x_j indicates that the x_j term is omitted before the |, since the value of x_j is set on the right. By definition, the causal conditional probability p(x_1,\ldots,\hat x_j, \ldots, x_n | \mbox{do}(x_j)) is just the value of the probability distribution in the perturbed causal model, p'(x_1,\ldots,x_n). To compute the value of the probability in the perturbed causal model, note that the probability distribution in the original causal model was given by

   p(x_1,\ldots,x_n) = \prod_k p(x_k| \mbox{pa}(x_k)),

where the product on the right is over all vertices in the causal model. This expression remains true for the perturbed causal model, but a single term on the right-hand side changes: the conditional probability for the x_j term. In particular, this term gets changed from p(x_j| \mbox{pa}(x_j)) to 1, since we have fixed the value of X_j to be x_j. As a result we have:

 p(x_1,\ldots,\hat x_j,\ldots,x_n | \mbox{do}(x_j))  = \frac{p(x_1,\ldots,x_n)}{p(x_j|\mbox{pa}(x_j))}.

This equation is a fundamental expression, capturing what it means for an experimenter to intervene to set the value of some particular variable in a causal model. It can easily be generalized to a situation where we partition the variables into two sets, X and Y, where X are the variables we suppose have been set by intervention in a (possibly hypothetical) randomized controlled experiment, and Y are the remaining variables:

 [1] \,\,\,\, p(Y=y| \mbox{do}(X=x)) = \frac{p(X=x,Y=y)}{\prod_j p(X_j = x_j|\mbox{pa}(X_j))}.

Note that on the right-hand side the values for \mbox{pa}(X_j) are assumed to be given by the appropriate values from x and y. The expression [1] can be viewed as a definition of causal conditional probabilities. But although this expression is fundamental to understanding the causal calculus, it is not always useful in practice. The problem is that the values of some of the variables on the right-hand side may not be known, and cannot be determined by experiment. Consider, for example, the case of smoking and cancer. Recall our causal model:

What we'd like is to compute p(\mbox{cancer}| \mbox{do(smoking)}). Unfortunately, we immediately run into a problem if we try to use the expression on the right of equation [1]: we've got no way of estimating the conditional probabilities for smoking given the hidden common factor. So we can't obviously compute p(\mbox{cancer}| \mbox{do(smoking)}). And, as you can perhaps imagine, this is the kind of problem that will come up a lot whenever we're worried about the possible influence of some hidden factor.

All is not lost, however. Just because we can't compute the expression on the right of [1] directly doesn't mean we can't compute causal conditional probabilities in other ways, and we'll see below how the causal calculus can help solve this kind of problem. It's not a complete solution – we shall see that it doesn't always make it possible to compute causal conditional probabilities. But it does help. In particular, we'll see that although it's not possible to compute p(\mbox{cancer}| \mbox{do(smoking)}) for this causal model, it is possible to compute p(\mbox{cancer}| \mbox{do(smoking)}) in a very similar causal model, one that still has a hidden factor.
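When every variable in the causal model is observed, though, equation [1] can be evaluated directly. In the same spirit as the simulation sketch earlier, but now computing exactly from conditional probability tables, here's a minimal Python sketch on an invented binary model (the numbers are made up purely for illustration) with an observed common cause Z of X and Y and an edge from X to Y. Applying [1] with X set by intervention and then summing out Z gives p(y|\mbox{do}(x)) = \sum_z p(z) p(y|x,z), which in general differs from the ordinary conditional p(y|x):

p_z = {0: 0.7, 1: 0.3}                      # p(z)
p_x1_given_z = {0: 0.2, 1: 0.8}             # p(X=1 | z)
p_y1_given_xz = {(0, 0): 0.1, (1, 0): 0.3,  # p(Y=1 | x, z)
                 (0, 1): 0.5, (1, 1): 0.9}

def p_x_given_z(x, z):
    return p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]

def p_y_given_xz(y, x, z):
    return p_y1_given_xz[(x, z)] if y == 1 else 1 - p_y1_given_xz[(x, z)]

def p_joint(x, y, z):
    # the causal model's joint distribution: p(z) p(x|z) p(y|x,z)
    return p_z[z] * p_x_given_z(x, z) * p_y_given_xz(y, x, z)

def p_y_do_x(y, x):
    # equation [1]: p(y, z | do(x)) = p(x, y, z) / p(x | z); then sum out z
    return sum(p_joint(x, y, z) / p_x_given_z(x, z) for z in (0, 1))

def p_y_given_x(y, x):
    # the ordinary observational conditional, for comparison
    num = sum(p_joint(x, y, z) for z in (0, 1))
    den = sum(p_joint(x, yy, z) for yy in (0, 1) for z in (0, 1))
    return num / den

print(p_y_do_x(1, 1))      # 0.48:  setting X = 1
print(p_y_given_x(1, 1))   # ~0.68: merely observing X = 1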

With causal conditional probabilities defined, we're now in a position to define more precisely what we mean by causal influence. Suppose we have a causal model, and X and Y are distinct random variables (or disjoint subsets of random variables). Then we say X has a causal influence over Y if there are values x_1 and x_2 of X and a value y of Y such that p(y|\mbox{do}(x_1)) \neq p(y|\mbox{do}(x_2)). In other words, an external experimenter who can intervene to change the value of X can cause a corresponding change in the distribution of values at Y. The following exercise gives an information-theoretic justification for this definition of causal influence: it shows that an experimenter who can intervene to set X can transmit information to Y if and only if the above condition for causal influence is met.

Exercises

  • (The causal capacity) This exercise is for people with some background in information theory. Suppose we define the causal capacity between X and Y to be \max_{p(\hat x)} H(\hat X: \hat Y), where H(\cdot:\cdot) is the mutual information, the maximization is over possible distributions p(\hat x) for \hat X (we use the hat to indicate that the value of X is being set by intervention), and \hat Y is the corresponding random variable at Y, with distribution p(\hat y) = \sum_{\hat x} p(\hat y|\mbox{do}(\hat x)) p(\hat x). Shannon's noisy channel coding theorem tells us that an external experimenter who can intervene to set the value of X can transmit information to an observer at Y at a maximal rate set by the causal capacity. Show that the causal capacity is greater than zero if and only if X has a causal influence over Y.

We've just defined a notion of causal influence between two random variables in a causal model. What about when we say something like "Event A causes Event B"? What does this mean? Returning to the smoking-cancer example, it seems that we would say that smoking causes cancer provided p(\mbox{cancer} | \mbox{do}(\mbox{smoking})) > p(\mbox{cancer}), so that if someone makes the choice to smoke, uninfluenced by other causal factors, then they would increase their chance of cancer. Intuitively, it seems to me that this notion of events causing one another should be related to the notion of causal influence just defined above. But I don't yet see quite how to do that. The first problem below suggests a conjecture in this direction:

Problems for the author

  • Suppose X and Y are random variables in a causal model such that p(Y=y | \mbox{do}(X=x)) > p(Y=y) for some pair of values x and y. Does this imply that X exerts a causal influence on Y?
  • (Sum-over-paths for causal conditional probabilities?) I believe a kind of sum-over-paths formulation of causal conditional probabilities is possible, but haven't worked out the details. The idea is as follows (the details may be quite wrong, but I believe something along these lines should work). Suppose X and Y are single vertices (with corresponding random variables) in a causal model. Then I would like to show first that if X is not an ancestor of Y then p(y|\mbox{do}(x)) = p(y), i.e., intervention does nothing. Second, if X is an ancestor of Y then p(y|\mbox{do}(x)) may be obtained by summing over all directed paths from X to Y in G_{\overline X}, and computing for each path a contribution to the sum which is a product of conditional probabilities along the path. (Note that we may need to consider the same path multiple times in the sum, since the random variables along the path may take different values.)
  • We used causal models in our definition of causal conditional probabilities. But our informal definition – imagine a hypothetical world in which it's possible to force a variable to take a particular value – didn't obviously require the use of a causal model. Indeed, in a real-world randomized controlled experiment it may be that there is no underlying causal model. This leads me to wonder if there is some other way of formalizing the informal definition we've given.
  • Another way of framing the last problem is that I'm concerned about the empirical basis for causal models. How should we go about constructing such models? Are they fundamental, representing true facts about the world, or are they modelling conveniences? (This is by no means a dichotomy.) It would be useful to work through many more examples, considering carefully the origin of the functions f_j(\cdot) and of the auxiliary random variables Y_{j,\cdot}.

d-separation

In this section we'll develop a criterion that Pearl calls directional separation (d-separation, for short). What d-separation does is let us inspect the graph of a causal model and conclude that a random variable X in the model can't tell us anything about the value of another random variable Y in the model, or vice versa.

To understand d-separation we'll start with a simple case, and then work through increasingly complex cases, building up our intuition. I'll conclude by giving a precise definition of d-separation, and by explaining how d-separation relates to the concept of conditional independence of random variables.

Here's the first simple causal model:

Clearly, knowing X can in general tell us something about Y in this kind of causal model, and so in this case X and Y are not d-separated. We'll use the term d-connected as a synonym for "not d-separated", and so in this causal model X and Y are d-connected.

By contrast, in the following causal model X and Y don't give us any information about each other, and so they are d-separated:

A useful piece of terminology is to say that a vertex like the middle vertex in this model is a collider for the path from X to Y, meaning a vertex at which both edges along the path are incoming.

What about the causal model:

In this case, it is possible that knowing X will tell us something about Y, because of their common ancestry. It's like the way knowing the genome for one sibling can give us information about the genome of another sibling, since similarities between the genomes can be inferred from the common ancestry. We'll call a vertex like the middle vertex in this model a fork for the path from X to Y, meaning a vertex at which both edges are outgoing.

Exercises

  • Construct an explicit causal model demonstrating the assertion of the last paragraph. For example, you may construct a causal model in which X and Y are joined by a fork, and where Y is actually a function of X.
  • Suppose we have a path from X to Y in a causal model. Let c be the number of colliders along the path, and let f be the number of forks along the path. Show that f-c can only take the values -1, 0 or 1, i.e., the number of forks and the number of colliders along the path differ by at most one.

We'll say that a path (of any length) from X to Y that contains a collider is a blocked path. By contrast, a path that contains no colliders is called an unblocked path. (Note that by the above exercise, an unblocked path must contain either one or no forks.) In general, we define X and Y to be d-connected if there is an unblocked path between them. We define them to be d-separated if there is no such unblocked path.

It's worth noting that the concepts of d-separation and d-connectedness depend only on the graph topology and on which vertices X and Y have been chosen. In particular, they don't depend on the nature of the random variables X and Y, merely on the identity of the corresponding vertices. As a result, you can determine d-separation or d-connectedness simply by inspecting the graph. This fact – that d-separation and d-connectedness are determined by the graph – also holds for the more sophisticated notions of d-separation and d-connectedness we develop below.

With that said, it probably won't surprise you to learn that the concept of d-separation is closely related to whether or not the random variables X and Y are independent of one another. This is a connection you can (optionally) develop through the following exercises. I'll state a much more general connection below.

Exercises

  • Suppose that X and Y are d-separated. Show that X and Y are independent random variables, i.e., that p(x,y) = p(x)p(y).
  • Suppose we have two vertices which are d-connected in a graph G. Explain how to construct a causal model on that graph such that the random variables X and Y corresponding to those two vertices are not independent.
  • The last two exercises almost but don't quite claim that random variables X and Y in a causal model are independent if and only if they are d-separated. Why does this statement fail to be true? How can you modify the statement to make it true?

So far, this is pretty simple stuff. It gets more complicated, however, when we extend the notion of d-separation to cases where we are conditioning on already knowing the value of one or more random variables in the causal model. Consider, for example, the graph:

(Figure A.)

Now, if we know Z, then knowing X doesn't give us any additional information about Y, since by our original definition of a causal model Y is already a function of Z and some auxiliary random variables which are independent of X. So it makes sense to say that Z blocks this path from X to Y, even though in the unconditioned case this path would not have been considered blocked. We'll also say that X and Y are d-separated, given Z.

It is helpful to give a name to vertices like the middle vertex in Figure A, i.e., to vertices with one incoming and one outgoing edge along the path. We'll call such a vertex a traverse along the path from X to Y. Using this language, the lesson of the above discussion is that if Z is a traverse along a path from X to Y, then the path is blocked.

By contrast, consider this model:

In this case, knowing X will in general give us additional information about Y, even if we know Z. This is because while Z blocks one path from X to Y there is another unblocked path from X to Y. And so we say that X and Y are d-connected, given Z.

Another case similar to Figure A is the model with a fork:

Again, if we know Z, then knowing X as well doesn't give us any extra information about Y (or vice versa). So we'll say that in this case Z is blocking the path from X to Y, even though in the unconditioned case this path would not have been considered blocked. Again, in this example X and Y are d-separated, given Z.

The lesson of this model is that if Z is located at a fork along a path from X to Y, then the path is blocked.

A subtlety arises when we consider a collider:

(Figure B.)

In the unconditioned case this would have been considered a blocked path. And, naively, it seems as though this should still be the case: at first sight (at least according to my intuition) it doesn't seem very likely that X can give us any additional information about Y (or vice versa), even given that Z is known. Yet we should be cautious, because the argument we made for the graph in Figure A breaks down: we can't say, as we did for Figure A, that Y is a function of Z and some auxiliary independent random variables.

In fact, we're wise to be cautious because X and Y really can tell us something extra about one another, given a knowledge of Z. This is a phenomenon which Pearl calls Berkson's paradox. He gives the example of a graduate school in music which will admit a student (a possibility encoded in the value of Z) if either they have high undergraduate grades (encoded in X) or some other evidence that they are exceptionally gifted at music (encoded in Y). It would not be surprising if these two attributes were anticorrelated amongst students in the program, e.g., students who were admitted on the basis of exceptional gifts would be more likely than otherwise to have low grades. And so in this case knowledge of Y (exceptional gifts) would give us knowledge of X (likely to have low grades), conditioned on knowledge of Z (they were accepted into the program).

Another way of seeing Berkson's paradox is to construct an explicit causal model for the graph in Figure B. Consider, for example, a causal model in which X and Y are independent random bits, 0 or 1, chosen with equal probabilities 1/2. We suppose that Z = X \oplus Y, where \oplus is addition modulo 2. This causal model does, indeed, have the structure of Figure B. But given that we know the value Z, knowing the value of X tells us everything about Y, since Y = Z \oplus X.
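If you'd like to see this numerically, here's a minimal Python simulation of exactly the model just described: independent random bits X and Y, with Z = X \oplus Y.

import random

random.seed(0)
samples = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(100_000)]

# unconditionally, X carries no information about Y
p_y1 = sum(y for _, y in samples) / len(samples)
p_y1_given_x1 = sum(y for x, y in samples if x == 1) / sum(1 for x, _ in samples if x == 1)

# condition on Z = X xor Y = 0: now knowing X pins down Y exactly
cond = [(x, y) for x, y in samples if x ^ y == 0]
p_y1_given_x1_z0 = sum(y for x, y in cond if x == 1) / sum(1 for x, _ in cond if x == 1)

print(p_y1, p_y1_given_x1)   # both close to 0.5: X and Y are independent
print(p_y1_given_x1_z0)      # exactly 1.0: given Z, X determines Y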

As a result of this discussion, in the causal graph of Figure B we'll say that Z unblocks the path from X to Y, even though in the unconditioned case the path would have been considered blocked. And we'll also say that in this causal graph X and Y are d-connected, conditional on Z.

The immediate lesson from the graph of Figure B is that X and Y can tell us something about one another, given Z, if there is a path between X and Y where the only collider is at Z. In fact, the same phenomenon can occur even in this graph:

(Figure C.)

To see this, suppose we choose X and Y as in the example just described above, i.e., independent random bits, 0 or 1, chosen with equal probabilities 1/2. We will let the unlabelled vertex be W = X \oplus Y. And, finally, we choose Z = W. Then we see as before that X can tell us something about Y, given that we know Z, because X = Y \oplus Z.

The general intuition about graphs like that in Figure C is that knowing Z allows us to infer something about the ancestors of Z, and so we must act as though those ancestors are known, too. As a result, in this case we say that Z unblocks the path from X to Y, since Z has an ancestor which is a collider on the path from X to Y. And so in this case X is d-connected to Y, given Z.

Given the discussion of Figure C that we've just had, you might wonder why forks or traverses which are ancestors of Z can't block a path for similar reasons. For instance, why don't we consider X and Y to be d-separated, given Z, in the following graph:

The reason, of course, is that it's easy to construct examples where X tells us something about Y in addition to what we already know from Z. And so we can't consider X and Y to be d-separated, given Z, in this example.

These examples motivate the following definition:

Definition: Let X, Y and Z be disjoint subsets of vertices in a causal model. Consider a path from a vertex in X to a vertex in Y. We say the path is blocked by Z if the path contains either: (a) a collider which is not an ancestor of Z, or (b) a fork which is in Z, or (c) a traverse which is in Z. We say the path is unblocked if it is not blocked. We say that X and Y are d-connected, given Z, if there is an unblocked path between some vertex in X and some vertex in Y. X and Y are d-separated, given Z, if they are not d-connected.
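For readers who like to check definitions like this mechanically, here's a short Python sketch of a d-separation test. Rather than enumerating paths, it uses the standard ancestral-moral-graph criterion, which is equivalent to the path-based definition above: restrict to X \cup Y \cup Z and their ancestors, "marry" the parents of each vertex and drop edge directions, delete Z, and ask whether X and Y are still connected. The representation of the graph as a dictionary of parent sets is just a convenience for this sketch.

from collections import deque

def ancestors(parents, nodes):
    # all ancestors of `nodes`, including the nodes themselves
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        v = stack.pop()
        for p in parents.get(v, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, X, Y, Z):
    # True if X and Y are d-separated given Z in the DAG described by
    # `parents`, a dict mapping each vertex to the set of its parents.
    X, Y, Z = set(X), set(Y), set(Z)
    # 1. restrict to X, Y, Z and all of their ancestors
    keep = ancestors(parents, X | Y | Z)
    # 2. moralize: connect parents of a common child, then drop directions
    neighbours = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            neighbours[v].add(p)
            neighbours[p].add(v)
        for i, p in enumerate(ps):          # marry the parents
            for q in ps[i + 1:]:
                neighbours[p].add(q)
                neighbours[q].add(p)
    # 3. delete the conditioning set Z
    for z in Z:
        neighbours.pop(z, None)
    for v in neighbours:
        neighbours[v] -= Z
    # 4. d-separated iff no path from X to Y remains
    frontier = deque(X)
    reached = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in Y:
            return False
        for w in neighbours.get(v, ()):
            if w not in reached:
                reached.add(w)
                frontier.append(w)
    return True

# the collider of Figure B: X -> Z <- Y
parents = {"X": set(), "Y": set(), "Z": {"X", "Y"}}
print(d_separated(parents, {"X"}, {"Y"}, set()))   # True: the path is blocked
print(d_separated(parents, {"X"}, {"Y"}, {"Z"}))   # False: conditioning on Z unblocks it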

Saying "X and Y are d-separated given Z" is a bit of a mouthful, and so it's helpful to have an abbreviated notation. We'll use the abbreviation (X \perp Y|Z)_G. Note that this notation includes the graph G; we'll sometimes omit the graph when the context is clear. We'll write (X \perp Y)_G to denote unconditional d-separation.

As an aside, Pearl uses a similar but slightly different notation for d-separation, namely (X \perp \! \! \perp Y|Z)_G. Unfortunately, while the symbol \perp \! \! \perp looks like a LaTeX symbol, it's not, but is most easily produced using a rather dodgy LaTeX hack. Instead of using that hack over and over again, I've adopted a more standard LaTeX notation.

While I'm making asides, let me make a second: when I was first learning this material, I found the "d" for "directional" in d-separation and d-connected rather confusing. It suggested to me that the key thing was having a directed path from one vertex to the other, and that the complexities of colliders, forks, and so on were a sideshow. Of course, they're not, they're central to the whole discussion. For this reason, when I was writing these notes I considered changing the terminology to i-separated and i-connected, for informationally-separated and informationally-connected. Ultimately I decided not to do this, but I thought mentioning the issue might be helpful, in part to reassure readers (like me) who thought the "d" seemed a little mysterious.

Okay, that's enough asides, let's get back to the main track of discussion.

We saw earlier that (unconditional) d-separation is closely connected to the independence of random variables. It probably won't surprise you to learn that conditional d-separation is closely connected to conditional independence of random variables. Recall that two sets of random variables X and Y are conditionally independent, given a third set of random variables Z, if p(x,y|z) = p(x|z)p(y|z). The following theorem shows that d-separation gives a criterion for when conditional independence occurs in a causal model:

Theorem (graphical criterion for conditional independence): Let G be a graph, and let X, Y and Z be disjoint subsets of vertices in that graph. Then X and Y are d-separated, given Z, if and only if for all causal models on G the random variables corresponding to X and Y are conditionally independent, given Z.

(Update: Thanks to Rob Spekkens for pointing out an error in my original statement of this theorem.)

I won't prove the theorem here. However, it's not especially difficult if you've followed the discussion above, and is a good problem to work through:

Problems

  • Prove the above theorem.

Problems for the author

  • The concept of d-separation plays a central role in the causal calculus. My sense is that it should be possible to find a cleaner and more intuitive definition that substantially simplifies many proofs. It'd be good to spend some time trying to find such a definition.

The causal calculus

We've now got all the concepts we need to state the rules of the causal calculus. There are three rules. The rules look complicated at first, although they're easy to use once you get familiar with them. For this reason I'll start by explaining the intuition behind the first rule, and how you should think about that rule. Having understood how to think about the first rule it's easy to get the hang of all three rules, and so after that I'll just outright state all three rules.

In what follows, we have a causal model on a graph G, and W, X, Y, Z are disjoint subsets of the variables in the causal model. Recall also that G_{\overline X} denotes the perturbed graph in which all edges pointing to X from the parents of X have been deleted. This is the graph which results when an experimenter intervenes to set the value of X, overriding other causal influences on X.

Rule 1: When can we ignore observations: I'll begin by stating the first rule in all its glory, but don't worry if you don't immediately grok the whole rule. Instead, just take a look, and try to start getting your head around it. What we'll do then is look at some simple special cases, which are easily understood, and gradually build up to an understanding of what the full rule is saying.

Okay, so here's the first rule of the causal calculus. What it tells us is that when (Y \perp Z|W,X)_{G_{\overline X}}, then we can ignore the observation of Z in computing the probability of Y, conditional on both W and an intervention to set X:

 p(y|w,\mbox{do}(x),z) = p(y|w,\mbox{do}(x))

To understand why this rule is true, and what it means, let's start with a much simpler case. Let's look at what happens to the rule when there are no X or W variables in the mix. In this case, our starting assumption simply becomes that Y is d-separated from Z in the original (unperturbed) graph G. There's no need to worry about G_{\overline X} because there's no X variable whose value is being set by intervention. In this circumstance we have (Y \perp Z)_G, so Y is independent of Z. But the statement of the rule in this case is merely that p(y|z) = p(y), which is, indeed, equivalent to the standard definition of Y and Z being independent.

In other words, the first rule is simply a generalization of what it means for Y and Z to be independent. The full rule generalizes the notion of independence in two ways: (1) by adding in an extra variable W whose value has been determined by passive observation; and (2) by adding in an extra variable X whose value has been set by intervention. We'll consider these two ways of generalizing separately in the next two paragraphs.

We begin with generalization (1), i.e., there is no X variable in the mix. In this case, our starting assumption becomes that Y is d-separated from Z, given W, in the graph G. By the graphical criterion for conditional independence discussed in the last section this means that Y is conditionally independent of Z, given W, and so p(y|z,w) = p(y|w), which is exactly the statement of the rule. And so the first rule can be viewed as a generalization of what it means for Y and Z to be independent, conditional on W.

Now let's look at the other generalization, (2), in which we've added an extra variable X whose value has been set by intervention, and where there is no W variable in the mix. In this case, our starting assumption becomes that Y is d-separated from Z, given X, in the perturbed graph G_{\overline X}. Here, the graphical criterion for conditional independence tells us that Y is independent from Z, conditional on the value of X being set by experimental intervention, and so p(y|\mbox{do}(x),z) = p(y|\mbox{do}(x)). Again, this is exactly the statement of the rule.

The full rule, of course, merely combines both these generalizations in the obvious way. It is really just an explicit statement of the content of the graphical criterion for conditional independence, in a context where W has been observed, and the value of X set by experimental intervention.

The rules of the causal calculus: All three rules of the causal calculus follow a similar template to the first rule: they provide ways of using facts about the causal structure (notably, d-separation) to make inferences about conditional causal probabilities. I'll now state all three rules. The intuition behind rules 2 and 3 won't necessarily be entirely obvious, but after our discussion of rule 1 the remaining rules should at least appear plausible and comprehensible. I'll have a bit more to say about intuition below.

As above, we have a causal model on a graph G, and W, X, Y, Z are disjoint subsets of the variables in the causal model. G_{\overline X} denotes the perturbed graph in which all edges pointing to X from the parents of X have been deleted. G_{\underline X} denotes the graph in which all edges pointing out from X to the children of X have been deleted. We will also freely use notations like G_{\overline W, \overline X, \underline Z} to denote combinations of these operations.

Rule 1: When can we ignore observations: Suppose (Y \perp Z|W,X)_{G_{\overline X}}. Then:

   p(y|w,\mbox{do}(x),z) = p(y|w,\mbox{do}(x)).

Rule 2: When can we ignore the act of intervention: Suppose (Y \perp Z|W,X)_{G_{\overline X,\underline Z}}. Then:

 p(y|w,\mbox{do}(x),\mbox{do}(z)) = p(y|w,\mbox{do}(x),z).

Rule 3: When can we ignore an intervention variable entirely: Let Z(W) denote the set of nodes in Z which are not ancestors of W. Suppose (Y \perp Z|W,X)_{G_{\overline X, \overline{Z(W)}}}. Then:

 p(y|w,\mbox{do}(x),\mbox{do}(z)) = p(y|w,\mbox{do}(x)).

In a sense, all three rules are statements of conditional independence. The first rule tells us when we can ignore an observation. The second rule tells us when we can ignore the act of intervention (although that doesn't necessarily mean we can ignore the value of the variable being intervened with). And the third rule tells us when we can ignore an intervention entirely, both the act of intervention, and the value of the variable being intervened with.

I won't prove rule 2 or rule 3 – this post is already quite long enough. (If I ever significantly revise the post I may include the proofs). The important thing to take away from these rules is that they give us conditions on the structure of causal models so that we know when we can ignore observations, acts of intervention, or even entire variables that have been intervened with. This is obviously a powerful set of tools to be working with in manipulating conditional causal probabilities!

Indeed, according to Pearl there's even a sense in which this set of rules is complete, meaning that using these rules you can identify all causal effects in a causal model. I haven't yet understood the proof of this result, or even exactly what it means, but thought I'd mention it. The proof is in papers by Shpitser and Pearl and Huang and Valtorta. If you'd like to see the proofs of the rules of the calculus, you can either have a go at proving them yourself, or you can read the proof.

Problems for the author

  • Suppose the conditions of rules 1 and 2 hold. Can we deduce that the conditions of rule 3 also hold?

Using the causal calculus to analyse the smoking-lung cancer connection

We'll now use the causal calculus to analyse the connection between smoking and lung cancer. Earlier, I introduced a simple causal model of this connection:

The great benefit of this model was that it included as special cases both the hypothesis that smoking causes cancer and the hypothesis that some hidden causal factor was responsible for both smoking and cancer.

It turns out, unfortunately, that the causal calculus doesn't help us analyse this model. I'll explain why that's the case below. However, rather than worrying about this, at this stage it's more instructive to work through an example showing how the causal calculus can be helpful in analysing a similar but slightly modified causal model. So although this modification looks a little mysterious at first, for now I hope you'll be willing to accept it as given.

The way I'm going to modify the causal model is by introducing an extra variable, namely, whether someone has appreciable amounts of tar in their lungs or not:

(By tar, I don't mean "tar" literally, but rather all the material deposits found as a result of smoking.)

This causal model is a plausible modification of the original causal model. It is at least plausible to suppose that smoking causes tar in the lungs and that those deposits in turn cause cancer. But if the hidden causal factor is genetic, as the tobacco companies argued was the case, then it seems highly unlikely that the genetic factor caused tar in the lungs, except by the indirect route of causing those people to smoke. (I'll come back to what happens if you refuse to accept this line of reasoning. For now, just go with it.)

Our goal in this modified causal model is to compute probabilities like p(\mbox{cancer}|\mbox{do}(smoking)) = p(y| \mbox{do}(x)). What we'll show is that the causal calculus lets us compute this probability entirely in terms of probabilities like p(y|z), p(z|y) and other probabilities that don't involve an intervention, i.e., that don't involve \mbox{do}.

This means that we can determine p(\mbox{cancer}|\mbox{do}(smoking)) without needing to know anything about the hidden factor. We won't even need to know the nature of the hidden factor. It also means that we can determine p(\mbox{cancer}|\mbox{do}(smoking)) without needing to intervene to force someone to smoke or not smoke, i.e., to set the value for X.

In other words, the causal calculus lets us do something that seems almost miraculous: we can figure out the probability that someone would get cancer given that they are in the smoking group in a randomized controlled experiment, without needing to do the randomized controlled experiment. And this is true even though there may be a hidden causal factor underlying both smoking and cancer.

Okay, so how do we compute p(\mbox{cancer}|\mbox{do}(smoking)) = p(y| \mbox{do}(x))?

The obvious first question to ask is whether we can apply rule 2 or rule 3 directly to the conditional causal probability p(y|\mbox{do}(x)).

If rule 2 applies, for example, it would say that intervention doesn't matter, and so p(y|\mbox{do}(x)) = p(y|x). Intuitively, this seems unlikely. We'd expect that intervention really can change the probability of cancer given smoking, because intervention would override the hidden causal factor.

If rule 3 applies, it would say that p(y|\mbox{do}(x)) = p(y), i.e., that an intervention to force someone to smoke has no impact on whether they get cancer. This seems even more unlikely than rule 2 applying.

However, as practice and a warm up, let's work through the details of seeing whether rule 2 or rule 3 can be applied directly to p(y|\mbox{do}(x)).

For rule 2 to apply we need (Y\perp X)_{G_{\underline X}}. To check whether this is true, recall that G_{\underline X} is the graph with the edges pointing out from X deleted:

Obviously, Y is not d-separated from X in this graph, since X and Y have a common ancestor. This reflects the fact that the hidden causal factor indeed does influence both X and Y. So we can't apply rule 2.

What about rule 3? For this to apply we'd need (Y \perp X)_{G_{\overline X}}. Recall that G_{\overline X} is the graph with the edges pointing toward X deleted:

Again, Y is not d-separated from X, in this case because we have an unblocked path directly from X to Y. This reflects our intuition that the value of X can influence Y, even when the value of X has been set by intervention. So we can't apply rule 3.

Okay, so we can't apply the rules of the causal calculus directly to determine p(y|\mbox{do}(x)). Is there some indirect way we can determine this probability? An experienced probabilist would at this point instinctively wonder whether it would help to condition on the value of z, writing:

 [2] \,\,\,\, p(y| \mbox{do}(x)) = \sum_z p(y|z,\mbox{do}(x)) p(z|\mbox{do}(x)).

Of course, saying an experienced probabilist would instinctively do this isn't quite the same as explaining why one should do this! However, it is at least a moderately obvious thing to do: the only extra information we potentially have in the problem is z, and so it's certainly somewhat natural to try to introduce that variable into the problem. As we shall see, this turns out to be a wise thing to do.

Exercises

  • I used without proof the equation p(y| \mbox{do}(x)) = \sum_z p(y|z,\mbox{do}(x)) p(z|\mbox{do}(x)). This should be intuitively plausible, but really requires proof. Prove that the equation is correct.

To simplify the right-hand side of equation [2], we first note that we can apply rule 2 to the second term on the right-hand side, obtaining p(z|\mbox{do}(x)) = p(z|x). To check this explicitly, note that the condition for rule 2 to apply is that (Z \perp X)_{G_{\underline X}}. We already saw the graph G_{\underline X} above, and, indeed, Z is d-separated from X in that graph, since the only path from Z to X is blocked by the collider at Y. As a result, we have:

 [3] \,\,\,\, p(y| \mbox{do}(x)) = \sum_z p(y|z,\mbox{do}(x)) p(z|x).

At this point in the presentation, I'm going to speed the discussion up, telling you what rule of the calculus to apply at each step, but not going through the process of explicitly checking that the conditions of the rule hold. (If you're doing a close read, you may wish to check the conditions, however.)
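If you do want to check the conditions mechanically rather than by hand, here's a sketch using the networkx library. An assumption on my part: recent networkx releases ship a d-separation test for directed acyclic graphs, named nx.d_separated in the versions I've used (newer releases may expose it as nx.is_d_separator instead). The graph is the smoking (x) / tar (z) / cancer (y) model with its hidden factor h, and each check below corresponds to one application of rule 2 or rule 3 in the derivation that follows:

import networkx as nx   # assumes a networkx version providing nx.d_separated

# the smoking (x) -> tar (z) -> cancer (y) model, with hidden factor h
G = nx.DiGraph([("h", "x"), ("h", "y"), ("x", "z"), ("z", "y")])

def cut(G, into=(), out_of=()):
    # copy of G with all edges into `into` and all edges out of `out_of` deleted
    H = G.copy()
    for v in into:
        H.remove_edges_from(list(H.in_edges(v)))
    for v in out_of:
        H.remove_edges_from(list(H.out_edges(v)))
    return H

# rule 2 on p(z|do(x)), already checked in the text: (Z \perp X) in G_{\underline X}
print(nx.d_separated(cut(G, out_of=["x"]), {"z"}, {"x"}, set()))              # True
# rule 2 on p(y|z,do(x)): (Y \perp Z | X) in G_{\overline X, \underline Z}
print(nx.d_separated(cut(G, into=["x"], out_of=["z"]), {"y"}, {"z"}, {"x"}))  # True
# rule 3 removing do(x): (Y \perp X | Z) in G_{\overline X, \overline Z}
print(nx.d_separated(cut(G, into=["x", "z"]), {"y"}, {"x"}, {"z"}))           # True
# rule 2 on p(y|x,do(z)): (Y \perp Z | X) in G_{\underline Z}
print(nx.d_separated(cut(G, out_of=["z"]), {"y"}, {"z"}, {"x"}))              # True
# rule 3 on p(x|do(z)): (X \perp Z) in G_{\overline Z}
print(nx.d_separated(cut(G, into=["z"]), {"x"}, {"z"}, set()))                # True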

The next thing we do is to apply rule 2 to the first term on the right-hand side of equation [3], obtaining p(y|z,\mbox{do}(x)) = p(y|\mbox{do}(z),\mbox{do}(x)). We then apply rule 3 to remove the \mbox{do}(x), obtaining p(y|z,\mbox{do}(x)) = p(y|\mbox{do}(z)). Substituting back in gives us:

 [4] \,\,\,\, p(y| \mbox{do}(x)) = \sum_z p(y|\mbox{do}(z)) p(z|x).

So this means that we've reduced the computation of p(y|\mbox{do}(x)) to the computation of p(y|\mbox{do}(z)). This doesn't seem terribly encouraging: we've merely substituted the computation of one causal conditional probability for another. Still, let us continue plugging away, and see if we can make progress. The obvious first thing to try is to apply rule 2 or rule 3 to simplify p(y|\mbox{do}(z)). Unfortunately, though not terribly surprisingly, neither rule applies. So what do we do? Well, in a repeat of our strategy above, we again condition on the other variable we have available to us, in this case x:

 p(y|\mbox{do}(z)) = \sum_x p(y|x,\mbox{do}(z)) p(x|\mbox{do}(z)).

Now we're cooking! Rule 2 lets us simplify the first term to p(y|x,z), while rule 3 lets us simplify the second term to p(x), and so we have p(y|\mbox{do}(z)) = \sum_x p(y|x,z) p(x). To substitute this expression back into equation [4] it helps to change the summation index from x to x', since otherwise we would have a duplicate summation index. This gives us:

  [5] \,\,\,\, p(y| \mbox{do}(x)) = \sum_{x',z} p(y|x',z) p(z|x) p(x').

This is the promised expression for p(y|\mbox{do}(x)) (i.e., for probabilities like p(\mbox{cancer}| \mbox{do(smoking)}), assuming the causal model above) in terms of quantities which may be observed directly from experimental data, and which don't require intervention to do a randomized, controlled experiment. Once p(\mbox{cancer}| \mbox{do(smoking)}) is determined, we can compare it against p(\mbox{cancer}). If p(\mbox{cancer}| \mbox{do(smoking)}) is larger than p(\mbox{cancer}) then we can conclude that smoking does, indeed, play a causal role in cancer.

Something that bugs me about the derivation of equation [5] is that I don't really know how to "see through" the calculations. Yes, it all works out in the end, and it's easy enough to follow along. Yet that's not the same as having a deep understanding. Too many basic questions remain unanswered: Why did we have to condition as we did in the calculation? Was there some other way we could have proceeded? What would have happened if we'd conditioned on the value of the hidden variable? (This is not obviously the wrong thing to do: maybe the hidden variable would ultimately drop out of the calculation.) Why is it possible to compute causal probabilities in this model, but not (as we shall see) in the model without tar? Ideally, a deeper understanding would make the answers to some or all of these questions much more obvious.

Problems for the author

  • Why is it so much easier to compute p(y|\mbox{do}(z)) than p(y|\mbox{do}(x)) in the model above? Is there some way we could have seen that this would be the case, without needing to go through a detailed computation?
  • Suppose we have a causal model G, with S a subset of vertices for which all conditional probabilities are known. Is it possible to give a simple characterization of the subsets X and Y of vertices for which it is possible to compute p(y|\mbox{do}(x)) using just the conditional probabilities from S?

Unfortunately, I don't know what the experimentally observed probabilities are in the smoking-tar-cancer case. If anyone does, I'd be interested to know. In lieu of actual data, I'll use some toy model data suggested by Pearl; the data is quite unrealistic, but nonetheless interesting as an illustration of the use of equation [5]. The toy model data is as follows:

(1) 47.5 percent of the population are nonsmokers with no tar in their lungs, and 10 percent of these get cancer.

(2) 2.5 percent are smokers with no tar, and 90 percent get cancer.

(3) 2.5 percent are nonsmokers with tar, and 5 percent get cancer.

(4) 47.5 percent are smokers with tar, and 85 percent get cancer.

In this case, we get:

   p(\mbox{cancer} | \mbox{do}(\mbox{smoking})) = 45.25 percent.

By contrast, p(\mbox{cancer}) = 47.5 percent, and so if this data were correct (obviously it's not even close) it would show that smoking actually somewhat reduces a person's chance of getting lung cancer. This is despite the fact that p(\mbox{cancer} | \mbox{smoking}) = 85.25 percent, and so a naive approach to causality based on correlations alone would suggest that smoking causes cancer. In fact, in this imagined world smoking might actually be usable as a preventative treatment for cancer! Obviously this isn't truly the case, but it does illustrate the power of this method of analysis.
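If you'd like to check the arithmetic, here's a short Python sketch that plugs the toy data into equation [5] and also computes the observational quantities quoted above:

# toy data from the text: joint distribution over (smoking x, tar z), plus
# the conditional probability of cancer given (x, z)
p_xz = {                      # p(x, z)
    ("nonsmoker", "no tar"): 0.475,
    ("smoker",    "no tar"): 0.025,
    ("nonsmoker", "tar"):    0.025,
    ("smoker",    "tar"):    0.475,
}
p_cancer_given_xz = {         # p(cancer | x, z)
    ("nonsmoker", "no tar"): 0.10,
    ("smoker",    "no tar"): 0.90,
    ("nonsmoker", "tar"):    0.05,
    ("smoker",    "tar"):    0.85,
}

p_x = {x: sum(v for (x2, z), v in p_xz.items() if x2 == x) for x in ("smoker", "nonsmoker")}
p_z_given_x = {(z, x): p_xz[(x, z)] / p_x[x] for (x, z) in p_xz}

# equation [5]: p(cancer | do(smoking)) = sum_{x', z} p(cancer | x', z) p(z | smoking) p(x')
p_do_smoking = sum(
    p_cancer_given_xz[(x2, z)] * p_z_given_x[(z, "smoker")] * p_x[x2]
    for x2 in ("smoker", "nonsmoker")
    for z in ("tar", "no tar")
)

# observational quantities, for comparison
p_cancer = sum(p_xz[k] * p_cancer_given_xz[k] for k in p_xz)
p_cancer_given_smoking = sum(
    p_xz[("smoker", z)] * p_cancer_given_xz[("smoker", z)] for z in ("tar", "no tar")
) / p_x["smoker"]

print(p_do_smoking)            # 0.4525
print(p_cancer)                # 0.475
print(p_cancer_given_smoking)  # 0.8525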

Summing up the general lesson of the smoking-cancer example, suppose we have two competing hypotheses for the causal origin of some effect in a system, A causes C or B causes C, say. Then we should try to construct a realistic causal model which includes both hypotheses, and then use the causal calculus to attempt to distinguish the relative influence of the two causal factors, on the basis of experimentally accessible data.

Incidentally, the kind of analysis of smoking we did above obviously wasn't done back in the 1960s. I don't actually know how causality was established over the protestations that correlation doesn't imply causation. But it's not difficult to think of ways you might have come up with truly convincing evidence that smoking was a causal factor. One way would have been to look at the incidence of lung cancer in populations where smoking had only recently been introduced. Suppose, for example, that cigarettes had just been introduced into the (fictional) country of Nicotinia, and that this had been quickly followed by a rapid increase in rates of lung cancer. If this pattern was seen across many new markets then it would be very difficult to argue that lung cancer was being caused solely by some pre-existing factor in the population.

Exercises

  • Construct toy model data where smoking increases a person's chance of getting lung cancer.

Let's leave this model of smoking and lung cancer, and come back to our original model of smoking and lung cancer:

What would have happened if we'd tried to use the causal calculus to analyse this model? I won't go through all the details, but you can easily check that whatever rule you try to apply you quickly run into a dead end. And so the causal calculus doesn't seem to be any help in analysing this problem.

This example illustrates some of the limitations of the causal calculus. In order to compute p(\mbox{cancer}| \mbox{do}(smoking)) we needed to assume a causal model with a particular structure:

While this model is plausible, it is not beyond reproach. You could, for example, criticise it by saying that it is not the presence of tar deposits in the lungs that causes cancer, but maybe some other factor, perhaps something that is currently unknown. This might lead us to consider a causal model with a revised structure:

So we could try instead to use the causal calculus to analyse this new model. I haven't gone through this exercise, but I strongly suspect that doing so we wouldn't be able to use the rules of the causal calculus to compute the relevant probabilities. The intuition behind this suspicion is that we can imagine a world in which the tar may be a spurious side-effect of smoking that is in fact entirely unrelated to lung cancer. What causes lung cancer is really an entirely different mechanism, but we couldn't distinguish the two from the statistics alone.

The point of this isn't to say that the causal calculus is useless. It's remarkable that we can plausibly get information about the outcome of a randomized controlled experiment without actually doing anything like that experiment. But there are limitations. To get that information we needed to make some presumptions about the causal structure in the system. Those presumptions are plausible, but not logically inevitable. If someone questions the presumptions then it may be necessary to revise the model, perhaps adopting a more sophisticated causal model. One can then use the causal calculus to attempt to analyse that more sophisticated model, but we are not guaranteed success. It would be interesting to understand systematically when this will be possible and when it will not be. The following problems start to get at some of the issues involved.

Problems for the author

  • Is it possible to make a more precise statement than "the causal calculus doesn't seem to be any help" for the original smoking-cancer model?
  • Given a probability distribution over some random variables, it would be useful to have a classification theorem describing all the causal models in which those random variables could appear.
  • Extending the last problem, it'd be good to have an algorithm to answer questions like: in the space of all possible causal models consistent with a given set of observed probabilities, what can we say about the possible causal probabilities? It would also be useful to be able to input to the algorithm some constraints on the causal models, representing knowledge we're already sure of.
  • In real-world experiments there are many practical issues that must be addressed to design a reliable randomized, controlled experiment. These issues include selection bias, blinding, and many others. There is an entire field of experimental design devoted to addressing such issues. By comparison, my description of causal inference ignores many of these practical issues. Can we integrate the best thinking on experimental design with ideas such as causal conditional probabilities and the causal calculus?
  • From a pedagogical point of view, I wonder if it might have been better to work fully through the smoking-cancer example before getting to the abstract statement of the rules of the causal calculus. Those rules can all be explained and motivated quite nicely in the context of the smoking-cancer example, and that may help in understanding.

Conclusion

I've described just a tiny fraction of the work on causality that is now going on. My impression as an admittedly non-expert outsider to the field is that this is an exceptionally fertile field which is developing rapidly and giving rise to many fascinating applications. Over the next few decades I expect the theory of causality will mature, and be integrated into the foundations of disciplines ranging from economics to medicine to social policy.

Causal discovery: One question I'd like to understand better is how to discover causal structures inside existing data sets. After all, human beings do a pretty good (though far from perfect) job at figuring out causal models from their observation of the world. I'd like to better understand how to use computers to automatically discover such causal models. I understand that there is already quite a literature on the automated discovery of causal models, but I haven't yet looked in much depth at that literature. I may come back to it in a future post.

I'm particularly fascinated by the idea of extracting causal models from very large unstructured data sets. The KnowItAll group at the University of Washington (see Oren Etzioni on Google Plus) have done fascinating work on a related but (probably) easier problem, the problem of open information extraction. This means taking an unstructured information source (like the web), and using it to extract facts about the real world. For instance, using the web one would like computers to be able to learn facts like "Barack Obama is President of the United States", without needing a human to feed it that information. One of the things that makes this task challenging is all the misleading and difficult-to-understand information out on the web. For instance, there are also webpages saying "George Bush is President of the United States", which was probably true at the time the pages were written, but which is now misleading. We can find webpages which state things like "[Let's imagine] Steve Jobs is President of the United States"; it's a difficult task for an unsupervised algorithm to figure out how to interpret that "Let's imagine". What the KnowItAll team have done is made progress on figuring out how to learn facts in such a rich but uncontrolled environment.

What I'm wondering is whether such techniques can be adapted to extract causal models from data? It'd be fascinating if so, because of course humans don't just reason with facts, they also reason with (informal) causal models that relate those facts. Perhaps causal models or a similar concept may be a good way of representing some crucial part of our knowledge of the world.

Problems for the author

  • What systematic causal fallacies do human beings suffer from? We certainly often make mistakes in the causal models we extract from our observations of the world – one example is that we often do assume that correlation implies causation, even when that's not true – and it'd be nice to understand what systematic biases we have.
  • Humans aren't just good with facts and causal models. We're also really good at juggling multiple causal models, testing them against one another, finding problems and inconsistencies, and making adjustments and integrating the results of those models, even when the results conflict. In essence, we have a (working, imperfect) theory of how to deal with causal models. Can we teach machines to do this kind of integration of causal models?
  • We know that in our world the sun rising causes the rooster to crow, but it's possible to imagine a world in which it is the rooster crowing that causes the sun to rise. This could be achieved in a suitably designed virtual world, for example. The reason we believe the first model is correct in our world is not intrinsic to the data we have on roosters and sunrise, but rather depends on a much more complex network of background knowledge. For instance, given what we know about roosters and the sun we can easily come up with plausible causal mechanisms (solar photons impinging on the rooster's eye, say) by which the sun could cause the rooster to crow. There do not seem to be any similarly plausible causal models in the other direction. How do we determine what makes a particular causal model plausible or not? How do we determine the class of plausible causal models for a given phenomenon? Can we make this kind of judgement automatically? (This is all closely related to the last problem).

Continuous-time causality: A peculiarity in my post is that even though we're talking about causality, and time is presumably important, I've avoided any explicit mention of time. Of course, it's implicitly there: if I'd been a little more precise in specifying my models they'd no doubt be conditioned on events like "smoked at least a pack a day for 10 or more years". Of course, this way of putting time into the picture is rather coarse-grained. In a lot of practical situations we're interested in understanding causality in a much more temporally fine-grained way. To explain what I mean, consider a simple model of the relationship between what we eat and our insulin levels:

This model represents the fact that what we eat determines our insulin levels, and our insulin levels in turn play a part in determining how hungry we feel, and thus what we eat. But as a model, it's quite inadequate. In fact, there's a much more complex feedback relationship going on, a constant back-and-forth between what we eat at any given time, and our insulin levels. Ideally, this wouldn't be represented by a few discrete events, but rather by a causal model that reflects the continual feedback between these possibilities. What I'd like to see developed is a theory of continuous-time causal models, which can address this sort of issue. It would also be useful to extend the calculus to continuous spaces of events. So far as I know, at present the causal calculus doesn't work with these kinds of ideas.

Problems for the author

  • Can we formulate theories like electromagnetism, general relativity and quantum mechanics within the framework of the causal calculus (or some generalization)? Do we learn anything by doing so?

Other notions of causality: A point I've glossed over in the post is how the notion of causal influence we've been studying relates to other notions of causality.

The notion we've been exploring is based on the notion of causality that is established by a (hopefully well-designed!) randomized controlled experiment. To understand what that means, think of what it would mean if we used such an experiment to establish that smoking does, indeed, cause cancer. All this means is that in the population being studied, forcing someone to smoke will increase their chance of getting cancer.

Now, for the practical matter of setting public health policy, that's obviously a pretty important notion of causality. But nothing says that we won't tomorrow discover some population of people where no such causal influence is found. Or perhaps we'll find a population where smoking actively helps prevent cancer. Both these are entirely possible.

What's going on is that while our notion of causality is useful for some purposes, it doesn't necessarily say anything about the details of an underlying causal mechanism, and it doesn't tell us how the results will apply to other populations. In other words, while it's a useful and important notion of causality, it's not the only way of thinking about causality. Something I'd like to do is to understand better what other notions of causality are useful, and how the intervention-based approach we've been exploring relates to those other approaches.

Acknowledgments

Thanks to Jen Dodd, Rob Dodd, and Rob Spekkens for many discussions about causality. Especial thanks to Rob Spekkens for pointing me toward the epilogue of Pearl's book, which is what got me hooked on causality!

Principal sources and further reading

A readable and stimulating overview of causal inference is the epilogue to Judea Pearl's book. The epilogue, in turn, is based on a survey lecture by Pearl on causal inference. I highly recommend getting a hold of the book and reading the epilogue; if you cannot do that, I suggest looking over the survey lecture. A draft copy of the first edition of the entire book is available on Pearl's website. Unfortunately, the draft does not include the full text of the epilogue, only the survey lecture. The lecture is still good, though, so you should look at it if you don't have access to the full text of the epilogue. I've also been told good things about the book on causality by Spirtes, Glymour and Scheines, but haven't yet had a chance to have a close look at it. An unfortunate aspect of the current post is that it gives the impression that the theory of causal inference is entirely Judea Pearl's creation. Of course that's far from the case, a fact which is quite evident from both Pearl's book, and the Spirtes-Glymour-Scheines book. However, the particular facets I've chosen to focus on are due principally to Pearl and his collaborators: most of the current post is based on chapter 3 and chapter 1 of Pearl's book, as well as a 1994 paper by Pearl, which established many of the key ideas of the causal calculus. Finally, for an enjoyable and informative discussion of some of the challenges involved in understanding causal inference I recommend Jonah Lehrer's recent article in Wired.

Interested in more? Please follow me on Twitter. You may also enjoy reading my new book about open science, Reinventing Discovery.



_- Steve