Friday, January 29, 2010

Expert advice

Jerome Groopman, Harvard Medical School physician and writer, has an interesting piece, called "Health Care: Who Knows Best?", in the February New York Review of Books, in which he discusses some of the recommendations built into the health care legislation now stalled (dead?) in the US Congress.
One of the principal aims of the current health care legislation is to improve the quality of care. According to the President and his advisers, this should be done through science. The administration's stimulus package already devoted more than a billion dollars to "comparative effectiveness research," meaning, in the President's words, evaluating "what works and what doesn't" in the diagnosis and treatment of patients.
The idea is that science can determine what works and what doesn't, and the government can then mandate that doctors and hospitals that use these 'best practices' will get more money than those that don't. It's in part paternalistic, as Groopman points out, in (large) part driven by insurance industry interests, and completely bad science.

Over the past decade, federal "choice architects"—i.e., doctors and other experts acting for the government and making use of research on comparative effectiveness—have repeatedly identified "best practices," only to have them shown to be ineffective or even deleterious.
For example, Medicare specified that it was a "best practice" to tightly control blood sugar levels in critically ill patients in intensive care. That measure of quality was not only shown to be wrong but resulted in a higher likelihood of death when compared to measures allowing a more flexible treatment and higher blood sugar. Similarly, government officials directed that normal blood sugar levels should be maintained in ambulatory diabetics with cardiovascular disease. Studies in Canada and the United States showed that this "best practice" was misconceived. There were more deaths when doctors obeyed this rule than when patients received what the government had designated as subpar treatment (in which sugar levels were allowed to vary).

Why is it bad science? There are several answers to this. First, everyone is different -- in the same way that single genes don't explain a disease in everyone who has it, single practices don't work equally in everyone with the same condition.  Second, there are many epistemological issues having to do with how 'successful outcomes' are measured.

In addition, and perhaps over-riding all other issues, is the notion of 'experts' and how we dub them and use them. In a field like, say, calculus or Plato, it is rather easy to envision what an expert is. The body of knowledge, though perhaps extensive or technical, is relatively circumscribed (yes, we know calculus is a big field not all fully known, but one can conceive of an expert mathematician).

In other fields knowledge is so vast, or changeable, or epistemically uncertain, that experts tend to be socially prominent practitioners of a field--medicine, genetics, evolutionary biology, public health. They may know tons about their field, tons more than the people they're asked to advise. But there is too much for anyone to know in detail, and there is too much room for disagreement at levels too close to the policy issues on which the experts are consulted.

Like 'complexity', expert status is not clear, and yet the need for experts is often great. One British investigator evaluates potential experts based on their past track records, weighting their current advice by those records. This may narrow the range of estimates of critical policy parameters by discounting wild estimates from those with less past success. But even that assumes that past success reflects knowledge and skill rather than luck, and that it implies future success -- a big (and untestable) assumption.
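To make the track-record idea concrete, here is a minimal sketch in Python of one way such weighting could work. The inverse-error weighting scheme and all the numbers are our illustrative assumptions, not the investigator's actual method.

```python
# A toy version of track-record weighting: experts whose past estimates
# (on questions with known answers) had smaller errors get more say now.
def weighted_estimate(past_mse, current_estimates):
    """past_mse: each expert's mean squared error on past questions;
    current_estimates: each expert's estimate of the new parameter."""
    raw = [1.0 / (mse + 1e-9) for mse in past_mse]   # inverse-error weights
    total = sum(raw)
    return sum((w / total) * x for w, x in zip(raw, current_estimates))

# Hypothetical: three experts with past MSEs of 0.5, 2.0, and 10.0
# estimate a policy parameter at 12, 15, and 40.
print(weighted_estimate([0.5, 2.0, 10.0], [12.0, 15.0, 40.0]))  # ~13.7
```

The wild estimate from the expert with the poor record barely moves the combined figure, which is the point of the scheme -- though, as said above, it still assumes past accuracy predicts future accuracy.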

So we face very challenging problems in deciding, as in this case, what the 'best' medical practices should be. We have to constrain usages of the system in some ways, but how to do it is as much a sociopolitical as scientific decision.

Thursday, January 28, 2010

The end of a goat's tit

I spent Wednesday at Polymeadow's dairy goat farm, which we've written about before.  It wasn't nearly enough time, or as much as I usually spend there, but it was still a good reminder that academia is but a small slice of the world.

My sister, Jennifer, with one of her two-day-old twins.

Her handsome Angora, Markie.  He's getting a little long in the tooth.  Or horn.  But he's still a sweetie.

Melvin, my brother-in-law, just in from milking. This is such a great picture.   

As Melvin said today, "Our whole living comes from the end of a goat's tit." He could well have added, "And all the work that comes with it." But he's a Vermonter.  


Wednesday, January 27, 2010

Not just for the stars and idle rich....

Well, who would have guessed it? While we hold dear the idea of mating for life (that is, marriage), we know that those with real means often don't really mean it. The high and mighty often seem to fly the cooped-up feeling of the one-and-only. But that seemed so, well, a product of being spoiled in our doting culture, a violation of the more fundamental urge to stay that the rest of us have to (try to) live by. And lifelong fidelity is often the rule in Nature, one that our innocent lesser species tend to keep.

Not anymore! The BBC reports the shocking discovery of a pair of swans who did not honor their vows, and not because death did them part. No, they both flew the coop, left town, and returned (apparently shamelessly) with other partners to the waterfowl sanctuary where they've been monitored. This shocking 'divorce' stirred the interest of the staff, who dutifully reported it.

What these miscreants show is that fair and fowl, in man and beast, is not necessarily hard-wired. Irreconcilable differences can arise, even among the bird-brained. Either that or, having watched us decadent humans gawking at them all the time (often with different partners), they figured they might as well join the crowd.

The two newlyweds seemed quite happy, according to the story, to ignore their exes, swimming along and giving them the cold wing, as if they didn't exist. It gives new meaning, of beginning rather than ending, to the phrase 'swan song'.

Notch one more for hard-wiring in Nature (not!)

Tuesday, January 26, 2010

The Yin and Yang of GWAS

It's fair enough to say that if it looks like a duck, and quacks like a duck, it must be alive and hence in principle you should be able to GWAS it (do a genome-wide association study) to discover its genetic basis. It would seem that, according to some scientist, somewhere, almost any trait you can name will be put under the GWASlight.

Today's Bulletin of the Shocking is the identification of a number of Y-linked genes. No, we're not talking about the male equivalent of girls' G-spots, the mythical Y-point. No, in this case the subject is non-western, specifically Asian, medicine: the Yin and Yang of life. How long is your Yin and how deep your Yang? It must be genetic, and they've now gone looking for it!

In what seems similar in spirit to Galenic medicine, of Four Humours fame, JM Lee, a Korean physician, invented (discovered?) Four-constitution medicine (FCM) in 1894. Yin and Yang are, as we understand them, complementary opposites that contribute to individuals as wholes. These concepts are important in Asian philosophy, and have some connection to acupuncture theory, too. But we are far from knowledgeable enough to describe, much less to comment on, this background.

In this quaternary view of nature, each individual's physical and mental traits lead to categorization into a Four-constitution type (FCT). You are either GYN (greater Yin), LYN (lesser Yin), GYA (greater Yang), or LYA (lesser Yang). Given your score, clinical applications to everything from mental attitudes to food recommendations can follow.

Well, since it appears that your FCT is associated with personality, and you're alive, one can do GWAS from questionnaire-based phenotypes -- and this has now been done. A total of 353,202 SNP markers were typed on healthy individuals from a 'genetically homogeneous population' in Korea. Looking for genetic variation in a homogeneous population seems strange, for starters, but let's let that slide.

A sample of 20 individuals each from the GYN, LYN, and LYA classes was GWASed (they gave permission!). They couldn't find enough GYAs to include in this study. They then paired up individuals from two different groups, such as LYA-LYN, and in each of several such paired tests found GWAS SNPs that Yinned the subjects' Yangs, or Yanged their Yins, with statistical significance. They used all of the ready-made, off-the-shelf software that the profession routinely uses to check for accuracy and aspects of statistical testing.

The result was that out of the 353,202 tested SNPs, tens of thousands of tickler SNPs (no prurient innuendos need be inferred here!) proved to have statistically significant effects in individual pairings, and hundreds accounted for the Yins of the subjects' Yangs in more than one of the pairings.
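For the record, the arithmetic alone explains the bounty. A back-of-the-envelope sketch in Python (our illustration, not the authors' analysis):

```python
# With 353,202 tests and no true effects whatsoever, how many
# 'significant' SNPs does chance alone deliver at p < 0.05?
n_snps, alpha = 353_202, 0.05

print(f"expected chance hits: {n_snps * alpha:,.0f}")   # ~17,660
print(f"Bonferroni threshold: {alpha / n_snps:.1e}")    # ~1.4e-07
# 'Tens of thousands' of hits is roughly what a pure null result
# looks like without correction for multiple testing.
```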

The authors provide a table of the genes nearest to each of the 20 SNPs that gave hits in 3 different FCM pairings. For example, who could doubt FLJ46156 Protein -- we knew someone would discover it sooner or later! Likewise Drosophila (fly) sal-like 1, variant 1. It's indeed disappointing that these authors found this first, before we could. But that's science. Following up these leads will of course be the next stage, and probably a costly one at that since, sadly, it probably has to be done in a lab, and not by questionnaire; though if it requires mouse experiments, it is curious how the mice's Y's are to be evaluated.

These things are important because your clinical treatment may vary depending on your FCT -- which, even tho' there's a questionnaire, we can do a $10,000 gene chip to find out. That's because, as with other kinds of Asian medicine (and indeed, some nonstandard medicines in the West, too) the same treatment may help, or harm, depending on the swing of your Yin. What's good for your goose may not be good for your gander, so to speak.

Of course, we have to note that the samples were rather small for genomewide studies compared to the best standard in, say, Europe: 20 vs 500,000. But let's not get our Yangs out of joint over that minor part of the problem.

Disappointingly, the authors fail to put this in evolutionary context, not looking to see whether measurement error (of the size of their subjects' Yins or Yangs -- again, no scurrilous giggling out there, please!) might have affected fitness over the past millennia, that is, Nature screening GYAs, GYNs, and all, until Dr Lee came on the scene to formulate the FCM. But if having a mis-sized Yin (or distorted Yang) affected fitness, we can easily see that the result today is a Korea that's far from genetically homogeneous, despite the authors' claim to that effect.

We're not making this up! This is a paper you may not be able to get from your library, but it is Yin et al., Journal of Alternative and Complementary Medicine, 15(12), 2009, pp 1327-1333. We must assume, unfortunately, that despite the journal's name, even in Korea medicine is not complimentary and you actually have to pay for it. Free care would certainly never do here, where our eyes are always on the Yen, so to speak, rather than the Yin or Yang. So this paper may be of no direct relevance to our health-care dilemma. In the US, complimentary medicine may be free ("You look fine! You look good!"), but no free treatment, no matter your GYA-score.

In the end, GWAS of GYA and GYN vs LYN and LYA, to detect the FCT of your FCM, will get your mind all twisted round. It will make you simply want....a GIN!


(We thank Francesc Calafell for bringing this most important paper to our attention, so we could share it with the world.)

Monday, January 25, 2010

The moveable feast of madness

There was an interesting story in the New York Times on Jan 8 called "The Americanization of Mental Illness" that we are just now getting around to blogging about.  It turns out that mental illness varies by time and place. One culture's madness is completely unknown in another.
In some Southeast Asian cultures, men have been known to experience what is called amok, an episode of murderous rage followed by amnesia; men in the region also suffer from koro, which is characterized by the debilitating certainty that their genitals are retracting into their bodies.
Schizophrenia takes a different course in different cultures, being more severe in the US than in other cultures.  The symptoms of what we in the US know as anorexia nervosa -- an obsession with weight and dieting and a view of one's body as fatter than it actually is -- were, until recently, very different in Hong Kong, where anorexic patients primarily complained of bloated stomachs. In the 1990s, however, the symptoms began to mimic the more profound American disease.  And so on.

Thus,
[f]or more than a generation now, we in the West have aggressively spread our modern knowledge of mental illness around the world. We have done this in the name of science, believing that our approaches reveal the biological basis of psychic suffering and dispel prescientific myths and harmful stigma. There is now good evidence to suggest that in the process of teaching the rest of the world to think like us, we’ve been exporting our Western “symptom repertoire” as well. That is, we’ve been changing not only the treatments but also the expression of mental illness in other cultures. Indeed, a handful of mental-health disorders — depression, post-traumatic stress disorder and anorexia among them — now appear to be spreading across cultures with the speed of contagious diseases. These symptom clusters are becoming the lingua franca of human suffering, replacing indigenous forms of mental illness.
And yet, we in the West have been looking for decades for the genes 'for' mental illnesses such as schizophrenia and depression. The searches, as we've written here numerous times, have been essentially fruitless. In part this has been because it has been so difficult to know how to define the traits so that a study population convincingly has the same disease. But now it seems that much of the difficulty may be that these diseases are products of a particular time in a particular culture, not 'simple' genetic traits like cystic fibrosis or Tay-Sachs disease, in which the biological consequences of the causative mutation are clear.

Unless someone wants to argue the unrelenting geneticistic view that Asians have different mental illnesses -- of the extent and type described above -- because they are genetically different. That's possible in principle, though almost impossible to disentangle from culture, geography, and history. However, there are three strong counter-arguments, some of them supported by actual data. First, there are secular (time) trends in these kinds of traits, which become more common or more rare within lifetimes within a culture.

Second, many classical migrant studies have been done that show that, to a great extent, migrants adopt the characteristics of their new home population. Clearly-genetic traits like facial appearance may not change, and unless largely environmental, cannot change so fast. But culturally-based traits can, and they do. The formal studies largely have to do with disease, but anyone with eyes can see the extent to which second and third generation immigrants adopt all aspects of their local culture that their conservative parents can't prevent.

The third argument is the manifest way in which culture, including in this case ideas of behavioral traits including disease, spreads around the world these days, and so quickly that we know they're not due to genetic change.

There are many profound, if uncomfortable, implications. We know what the problem is, but in the absence of a Lone Ranger with a magic silver bullet of a better idea, we cling to old beliefs or wishful thinking.

For example, does its moveable feast-ness imply that mental illness isn't a biological trait? Not at all. It 'simply' means that there is a much more complex interaction between culture and biology than geneticists are allowing for. And that genetic characteristics are not pre-wiring for specific behaviors, but are more or less likely, probabilistically, to be manifest in various types of environment. This is why most deterministic stories about the genetics of behavior are basically Just-So stories. Once you throw in human culture, all bets about the rules of the game are off. Overall, humans are a constant, relative to variation in culture. But we vary a lot biologically, too.

Biological variation is not a good predictor, in most cases, of specific traits. This is just what the most successful, and thoughtful new GWAS kinds of analysis are also finding about disease -- even without considering the globally varying and changing cultural landscape.

Thursday, January 21, 2010

The complexities of complexity

I've just returned from co-teaching a course on logical reasoning in genetics in Helsinki (whether or not everyone agrees that my reasoning is logical is not for me to say). I worked with Joe Terwilliger from New York, Markus Perola from Finland, and Patrik Magnusson from Sweden. Students were mainly Finns, but others came from the US, Peru, Sweden, and perhaps elsewhere that I've forgotten. Hopefully they got something from the course (along with some fun times in snowy, dark, but very socially hospitable Finland, a very nice place, with very good food).

Naturally a lot of attention was paid to ideas, activities, results, and interpretation of GWAS and other whole-genome studies, and we discussed various study designs for inferring the genetic contributions to complex traits.

A picture is emerging that is wholly consistent with theoretical expectations based on basic genetic and evolutionary conceptions that have been around for a long time, and that we present in detail (though in a different, nonbiomedical context) in Mermaid. It's that for many traits, perhaps even most, a large number of genes contribute, along with 'environments' (still an elusive term in many biomedical contexts). Sometimes, one or a few genes are far more important in the biological process generating the normal trait, or in the mis-firing that can lead to disease. Or many genes may be important but, for various reasons, in any given population only one or a few may contain variants that have strong effects on their own.

In these cases, unless lifestyle factors are exceedingly important, the genetic variants can be inferred from family or case-control studies, of which GWAS studies are one scaled-up instance. The strong effects are typically repeatable, and focus attention on one or a few genes. Cystic fibrosis is an example of a usually single-gene trait, and breast cancer is a trait in which variants in a few genes have differentially important effect. In the latter, however, only a small fraction of all cases are accounted for.

Most of the time, genetic variation is clearly contributing to variation in the trait, be it normal variation in stature or pathological levels of, say, blood pressure: there is clear family risk, in that if a close family member is affected, your risk is substantially increased. This implies genes, yet... mapping can't find most of them. This is called polygenic variation.

As far as predicting your value for polygenic traits goes, the original method of relating your trait to its value in your relatives, due to Francis Galton in the late 1800s, still works best. That's because, when genes contribute substantially to the trait, similarities among relatives follow known relationships that reflect the action of all the genetic variability in the individuals, and you don't gain much by trying to identify or use all the specific genes. But GWAS efforts are attempting to go further, and at least to identify collections or combinations of known variants that may give you a risk 'score' with at least some predictive power. That means identifying many of the polygenes (genes with individually small effects).
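For readers unfamiliar with such scores, here's a minimal sketch of the basic arithmetic, under the simplest additive assumptions; the SNPs, weights, and genotypes below are hypothetical, and real scores involve far noisier effect-size estimation.

```python
# A bare-bones polygenic risk score: a weighted sum of risk-allele
# counts across many SNPs, each with an individually small effect.
def polygenic_score(genotypes, effect_sizes):
    """genotypes: risk-allele counts (0, 1, or 2) at each SNP;
    effect_sizes: estimated per-allele effect at each SNP (from GWAS)."""
    return sum(g * b for g, b in zip(genotypes, effect_sizes))

genotypes = [0, 1, 2, 1, 0]                 # one individual, five SNPs
effects   = [0.02, 0.15, 0.01, 0.40, 0.08]  # hypothetical weights
print(f"risk score: {polygenic_score(genotypes, effects):.2f}")  # 0.57
```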

Extremely large sets of data, such as whole-population biobanks with full genome sequence, will become available in the foreseeable future. Such risk scores seem therefore in the offing. But even those who are writing papers proposing such personalized genetic risk scores recognize that the predictive power for disease may remain low in most instances, and it may be a long time before it shows clinical value.

One of the complicating ironies is that, in these conditions, the vast majority of cases of a disease of this sort will be the only cases in their families! That is, the trait is substantially genetic, but with so many possible contributing genotypes that rarely will close relatives inherit enough 'risk variants' also to be affected. That happens only in the subset in which one or a few strong-effect variants are being transmitted. This is similar to the statement above that Galton's classical prediction of trait values among relatives is better by far than trying to enumerate the contributing variants.
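A quick simulation makes the point vivid. Under an illustrative liability-threshold model (all parameters ours, chosen only for the sketch: 50% heritability, 1% prevalence), the trait is strongly familial by any classical measure, yet most cases are the only ones in their sibship:

```python
# Toy polygenic liability model: many small genetic effects (summarized
# as a normal genetic value) plus independent environment.
import numpy as np

rng = np.random.default_rng(1)
n_pairs, h2, prevalence = 500_000, 0.5, 0.01

# Sibs share half their alleles, so additive genetic values correlate at 0.5.
cov = [[h2, 0.5 * h2], [0.5 * h2, h2]]
genetic = rng.multivariate_normal([0.0, 0.0], cov, n_pairs)
environment = rng.standard_normal((n_pairs, 2)) * np.sqrt(1 - h2)
liability = genetic + environment

threshold = np.quantile(liability, 1 - prevalence)  # top 1% are 'affected'
affected = liability > threshold

cases = affected[:, 0].sum()
with_affected_sib = (affected[:, 0] & affected[:, 1]).sum()
print(f"sib recurrence among cases: {with_affected_sib / cases:.1%}")
# Recurrence comes out several-fold above the 1% population risk --
# clearly 'genetic' -- yet roughly 95% of cases have an unaffected sibling.
```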

There is much food for thought here, with serious implications for what it means to say a case of a disease is 'genetic', or how and when using genetic information will be particularly useful. There are many other issues that are worth discussing in the future, too, but this at least summarizes the essence of what the pro-GWAS advocates are discussing these days, even while recognizing that the kind of promise offered for this approach is not going to be realized.

*References to some of these points are technical so I didn't include them here, but they could be sent on request.*

Wednesday, January 20, 2010

Lessons from Nature

Pharmaceutical altruism?
There are a couple of interesting stories up on the Nature website today, one rather surprising, and the other not at all a surprise.  The first, "GlaxoSmithKline goes public with malaria data", (described here, if you can't link to the Nature story) tells of the pharmaceutical company's decision to release much of its data related to anti-malaria drug development into the public domain.
GlaxoSmithKline is to deposit more than 13,500 structures of possible drugs against malaria into the public domain, along with associated pharmacological data. The move marks the first large-scale public release of such structures by a pharmaceutical company, and it could lead to others following suit.
What's also important, says Bernard Pécoul, head of the Geneva-based Drugs For Neglected Diseases initiative, is that GSK intend not only to release the structures but also relevant data they hold on the 'druggable properties' of the compounds such as its solubility, absorption, metabolism or toxicological profile, which will help with weeding out compounds which would be dead-ends in terms of drug development. "This is extremely precious information," says Pécoul.
Two things strike us here -- one, GSK is a profit-making company, so why is it releasing this heretofore proprietary information?  Because it won't cost them much, and they are hoping it's good for their public image, as The Guardian points out here. Malaria infects hundreds of millions of people every year, a million of whom die, children in particular, but most of these people are poor, living in countries that can't afford to invest in health, which means that malaria isn't a potential profit-making disease.  Hence, in spite of its endemicity in many parts of the world, its designation as a 'neglected tropical disease.'  A vaccine, elusive as it's proven to be against this disease, might be profitable, but anti-malarial drugs, no.

That said, data sharing and the beginning of the coordination of research into malaria control is potentially a big step.  In the long run, money spent on controlling infectious diseases has much more potential to prevent debilitation and death for more people than the vast amounts of money being spent on genetic research.

Reductionism and complexity -- again
The second story of interest at Nature today, "Health benefits of red-wine chemical unclear", is a cautionary tale about reductionist science.  For years, contrary to the usual puritan approach to enjoyable substances, we've been hearing that drinking red wine can extend our lives.  That's red wine, not white, and it has to be wine, not grape juice.  Some years ago, researchers believed they had isolated the substance that has these beneficial effects. It's a compound, found in grape skins, called resveratrol, which is produced by some plants when they are under attack (so why unfermented grape juice is not equivalent is confusing).
Resveratrol's health benefits are thought to result from its activation of enzymes called sirtuins, which were linked to longevity 10 years ago when Leonard Guarente from the Massachusetts Institute of Technology in Cambridge found that yeast with additional copies of the gene that encodes sirtuin, called sir2, lived significantly longer than did those that had the usual two copies. Four years later, Guarente's former post-doc David Sinclair published work showing that resveratrol activated sirtuins in yeast and extended the organism's lifespan. Sinclair later went on to show that resveratrol fed to worms and flies lengthened lifespan by acting through the sirtuins.
Naturally enough, companies were quickly established to manufacture and sell this stuff, but recent findings suggest that other components are much more effective than resveratrol at activating the longevity molecules it has been found to activate in yeast, and that perhaps resveratrol doesn't activate these molecules in vivo at all, and anyway, it only works in association with another compound.  Maybe.

The lesson here, other than the possible dangers of premature commitment to an idea, is that what happens at the cellular level isn't necessarily replicable at the organismal level.  The fact that a compound isolated from red wine (where it's found at rather low levels, in fact) might extend the life of a yeast cell doesn't necessarily mean that it will have anything like that effect on a whole animal. This is a well-known problem, as are the problems of complexity and reductionism, which rear their ugly heads here once again. None of this is a surprise.  Unless you've already built a company to make and sell resveratrol.




Tuesday, January 19, 2010

"My hero!"

Ken came home ill from Finland so the musings on the state of human genetics that he promised us will have to wait. Meanwhile, I've been catching up with publications in various piles around the house, and have only just read Steven Shapin's "The Darwin Show" in the Jan 7 London Review of Books. Shapin is a historian of science at Harvard, and the piece is about "history's biggest birthday party," the yearlong celebration of Darwin that has just concluded. He makes many points but, in a welcome antidote to the year that just was, generally reminds us that Darwin was but a man -- a man who wrote enough generations and enough scientific discoveries ago that it's damn peculiar he's still beatified to this day, as if he had no faults or errors in his ideas, which is manifestly wrong.
Even conceding the more expansive claims for Darwin’s genius and influence, we’re still some way from understanding what the festivities have been about.
Paradoxically, this year’s events have been a celebration of a historical figure and his historical work in which specifically historical interests have been notably marginal. The party is one in which the present, with its pressing present concerns, processes fragments of the past in roughly the same way that assorted blocks of white fish, bulked out with filler, are processed into fish fingers. Myths have a market; myth-busting has a small one; setting the historical Darwin in his Victorian intellectual and social context has practically none at all.
Darwin and his work are still too often treated as infallible (Shapin writes that Richard Dawkins "concedes that Darwin 'made some mistakes'" -- concedes? -- but quotes EO Wilson as saying that "The man was always right" -- could he really have said this?). Shapin easily demythologizes Dobzhansky's so often-quoted line, that "nothing in biology makes sense except in the light of evolution", a point we make in Mermaid's Tale, so I'm happy to see it here.
To say that nothing in biology makes sense except in light of Darwinism cannot be the same thing as saying that to be a competent biologist is to have command of, or to agree with, any specific version of evolutionary theory, such as those favoured by Dawkins and Dennett. I have taught many talented biology students, both in the US and the UK, who could not give a coherent account of evolution by natural selection – teleology remains strikingly popular – and while it may or may not be the case that evolution provides the conceptual ‘foundation’ of life science, it is certainly not the case that biologists need to have command of any such theory to do competent work, for example, on the sex life of marine worms, on algal photosynthesis, or on the nucleotide sequence of breast cancer genes. Lots of practitioners of lots of modern expert practices turn out not to be very good at articulating their practices’ supposed foundations.
Shapin suggests that the hoopla over Darwin had less to do with him than with what our age wants him to have been. This decade's current crop of atheists uses him to prove they are right, but he hasn't always been read that way. At his funeral in Westminster Abbey, the archdeacon praised him for, according to Shapin, having read 'many hitherto undeciphered lines in God's great epic of the universe'.
The historical Darwin is only a spectral presence at his own commemoration. The Origin as a complex literary and scientific performance was not a focus of the global festivities, nor was Darwin’s own understanding of what he had and had not done, still less the full range of his scientific concerns. What has just been celebrated is not the historical specificity of a mid-19th-century text, or the Victorian author of works on earthworms, orchids and insectivorous plants, but the founding of a particular intellectual lineage, a lineage that led from 1859 to some version of the gene-theory-augmented ‘modern evolutionary synthesis’ that is valued today. Darwin did not discover or invent modern evolutionary biology and its intellectual fellow travellers; at most, he was at one end of a genealogy whose latest members he would scarcely have recognised.
There’s no need to be pedantic about this. If what has happened has only something to do with the historical Darwin, it has a lot to do with us, and what some of us choose to construe and to celebrate as present-day ‘Darwinism’. Those are considerable facts in their own right. A phenomenon as widely dispersed as the Darwin commemorations is bound to have had many causes, serving many purposes. ‘Every age moulds Charles Darwin to its own preoccupations, but the temptation is hard to resist,’ Philip Ball noted in the Observer. ‘In the early 20th century, he became a prophet of social engineering and the free market. With sociobiology in the 1970s, Darwinism became a behavioural theory, while neo-Darwinist genetics prompted a bleak view of humanity as gene machines driven by the selfish imperatives of our DNA.’
Shapin writes of the struggle for Darwin's soul between adaptationists and their critics -- determinists and non-determinists, neutralists and selectionists -- pointing out that Darwin himself didn't believe that natural selection explained everything. He writes of the adoption of evolutionary theory by literary theorists: in what is called 'literary Darwinism', every piece of fiction can be interpreted through Darwinian eyes. Shapin doesn't point out, but might have, that Darwinism has been applied to every aspect of modern life: business, politics, families, and so on. Darwin wouldn't recognize himself.

Darwin had a brilliant insight, for which he's rightly celebrated, but his insight was shared at the time by at least Alfred Russel Wallace, and in various forms by many others even decades before the Origin, and it has been expanded upon, first by the Modern Synthesis, which married natural selection and Mendelian genetics, and later by modern molecular evidence that has allowed biology to understand how traits are made, not just how they evolve. The field has hardly stood still. Darwin was a seminal figure in the intellectual lineage of evolutionary theory, and it's enough to celebrate him for that. But that lineage is still evolving, and Darwin's influence should diminish in significance the more we know.

But, like damsels in distress, we need perfect heroes who will rescue us, in this case from benighted religion, the Evil of Evils to the strident Darwinians of our time. They are as polarized and as ideological as, say, Sarah Palin or Pat Robertson, imputing evil to heretics, saying, and what is worse probably actually believing, that everything can be accounted for by Darwinism (their Bible, in fact, since they consult it often -- a great no-no in science, which should be about the best ideas and evidence available today, not in yesteryear). Somehow they must feel that errors mean a tainted vision, and Heroes can't be that way. In no way do we wish to diminish Darwin's astounding insights and achievements, in his own time and, as logical reasoning, even in our own. Darwin's books, especially the Origin, are masterpieces of thought and organized argument. But inspiration in science is not the same as inspiration in religion. We regularly teach Darwin's works, and of course we use his concepts of the common origin of life and of natural selection, among others, as extremely useful. But there's no need for hagiography.

If you are as far behind in your reading as I am, and haven't gotten to this essay yet, it's well worth your time. It tries to put the Darwin Party of last year in its current cultural context.

Friday, January 15, 2010

Calling from the North

I've been in Helsinki, Finland, this week to help teach a week-long minicourse in human genetics to a variety of graduate students and researchers in human genetics, from Finland and several other countries. Several sponsors have supported this, which is the fifth time the course has been offered (also in Madrid and Maracaibo, as well as twice before here in Helsinki).

I co-teach this with Joe Terwilliger, a long-term collaborator, Markus Perola, a Finnish clinician and biomedical geneticist, and Patrik Magnusson, a Swedish genetic epidemiologist. Our purpose is to introduce students to the issues, and ways to think about what we know and how we know it, in human genetics today. We cover subjects like evolutionary genetics, the meaning, use, and value of GWAS approaches, and things of that sort. This is not a methods how-to course, but an attempt to stimulate the students to think as much out of the box, or at least be aware of the epistemological issues as they apply to this challenging field, as we can.

It's a very advanced place, Finland, and quite beautiful and pleasant even in the dead of winter. I feel good about the level of ability and curiosity of the students.  They are alert, thoughtful, curious, and attentive, and it has been a pleasure to help teach this course. As almost always happens in such trips, my own thinking has been affected, and I've learned of some hot new references to look at when I get back next week. I've also heard about unpublished mapping results that help identify some new risk factors but mainly show more about the nature of complex causation.  Basically, it seems we are understanding the nature of life and complex causation, and even though few miracle genes have been found, there are some ideas simmering about how to look at the  aggregate of small-effect genes. I'll have more to say about this next week.

Human genetics is as lively and vibrant as it has ever been. I have my disagreements about the stress and cost of some of this research, as Anne and I have blogged about in the past. But there is clearly a feeling of excitement, that much is being learned--even if magical cures aren't as near as is often promised.


Thursday, January 14, 2010

Accidents do happen, but....

Touching on what seems to have turned into our theme of the week, John Hawks links to a story in the Telegraph yesterday reporting that a third of academics would leave Britain if threatened cuts to 'curiosity-driven' grants go through. This comes on top of deep cuts in funding for higher education in Britain across the board. According to the story, future research will be funded based on its perceived social and economic benefits; close to 20,000 people have signed a petition protesting this change.
...critics claim the move risks wiping out accidental discoveries as university departments struggle to support professors working on the kind of ground-breaking experimentation that led to the discovery of DNA, X-rays and penicillin.
But hold on.  'Curiosity-driven' research is different from accidental discoveries.

Ken, Malia Fullerton and I wrote a paper not long ago saying that epidemiology isn't working, and, basically, suggesting that people recognize this and come up with some better ideas. We had in mind specifically epidemiology's turn to genetics to explain chronic diseases, including diseases like type II diabetes and asthma, for which, even if people do carry some genetic susceptibility, the more important risk factors are clearly environmental, as shown by the fact that incidence of these diseases has risen sharply in recent decades.

We called the paper "Dissecting complex disease: the quest for the Philosopher's Stone?" (Not the Philosopher's Stone of Harry Potter fame, our reference was to the alchemist's dream of a substance that could turn base metals into gold.) The paper was published as one of the point/counterpoint papers in the International Journal of Epidemiology.

This was an interesting exercise. The paper wasn't reviewed in the usual sense, with us able to correct and revise before publication. The paper was published just as we submitted it, followed by commentaries by prominent epidemiologists. We knew people could find holes in our argument, and we waited for months for the comments, imagining how devastating they were going to be, and how we'd respond. But, when we finally got the commentaries, we were amazed. We could have done a much better job of blasting our paper than any of the comments we got. This was somewhat reassuring in that no one said we were wrong, but disappointing because we had very much wanted to start a dialog on the issues.

How is this relevant to the 'curiosity-driven research' story? Well, one of the major defenses of the status quo in the commentaries about our paper, of spending hundreds of millions of taxpayer dollars on research that everyone knows isn't working, was that we can't cut the funding to epidemiology because everyone knows that good stuff is often found by accident. This strikes us as a very strange justification for maintaining a hugely expensive system in which researchers spend inordinate amounts of time and energy writing grants proposing research everyone recognizes isn't going to lead to much, never mind improve public health, and which ties up equally inordinate amounts of time, energy, and money on the part of reviewers who are also expected not to say that the emperor has no clothes (or the Philosopher has no Stone) -- all in the hope that somebody will stumble across something unexpected one day that really will be progress.

This is not the same as 'curiosity-driven research'. 'Why is the sky blue?' is an honest question, and whether or not taxpayers should fund the research needed to answer it can be debated on its merits. If the UK has decided to no longer fund basic science, but only research that will lead to patents, or whatever 'social merits' are, that's very different from the idea that we should maintain a system that isn't working on the off chance that something good will come of it.  That decision can be debated, but at least it's an honest debate.

Wednesday, January 13, 2010

Is there still a place in science for unpredicted breakthroughs?

The four-part In Our Time series with Melvyn Bragg on the history of Britain's Royal Society, aired last week on BBC Radio 4, was a fascinating reprise of the history of science as a reflection of society. The Royal Society was founded under King Charles II in 1660, right after the Restoration, in part, as Bragg said, "to interrogate Nature herself, for the improvement of man's lot". The King also explicitly hoped that the new organization of learned men would help enrich the nation.

Science has always needed patrons, and in the past, especially in the US, this has included philanthropists. But the twentieth century saw a "great shift [in science] away from the academy towards industry and war," such that science has become so expensive that only the richest of patrons can fund it now. Generally, that would be the government--the military or agencies, with explicit health or science-related mandates--but private companies are also in on the game. Pharmaceuticals are big, but chemical or food manufacturers, or computer engineering firms are up there, too.

One of the speakers on the In Our Time episode aired on January 7 described the typical scientist these days as not a professor or a grad student or a post-doc, but someone managing a medical trial run by a private contract research organization, probably a multinational, with the trial probably happening somewhere in the Third World. Science now follows the money. And it's planned, directed and applied.

But, as pointed out on the program, many of the best scientists would say that their own great work was unpredictable, and couldn't have been preconceived and proposed in advance to a funding agency. If science is only responsive to immediate needs, then the kind of high-risk, high-payoff work that has been so successful in the past isn't going to get done.
Many scientists working away in the incremental business-as-usual-with-a-minor-tweak world, and that means most of us, often argue that continued funding will lead to serendipitous discoveries, as though this justified the current system. But those mainly come from the quiet brilliance of (generally young) investigators concentrating hard on focused questions, not from bureaucratic cogs. Our system doesn't particularly favor the former, but it does favor the latter who learn how to 'game' it for resources.

But does this matter after all? Driven in part by military research, technology has been advancing as never before in history. And public health and pharmaceuticals have made major contributions to improving quality and length of life, at least in populations that can afford them. Perhaps the idea of the eccentric genius, toiling away alone in the lab, thinking big thoughts and making accidental discoveries, is now an anachronism.  Maybe the Large Hadron Collider, if the forces-that-be finally allow it to crank up to speed, as hugely costly and well-planned as it has been (except for things like bits of baguette dropping in to gum up the works), will actually make unexpected discoveries, putting the lie to the idea that the best discoveries can't be predicted. Perhaps major advances in quality of life now require political action, in terms of resource distribution and so on, rather than scientific.

But we don't think so.  The view from within makes it hard to see alternative landscapes -- in the same way that a short-sighted view can seduce us into thinking we're at The End of History.  There are enough rewards in the grant system for enough people that it's hard to rock the boat.  But, as we've written before (e.g., here), it wouldn't take huge changes to improve things greatly.  Basic changes to the grant system and the system of rewards in academia, could go a long way toward increasing innovative thinking. 

Tuesday, January 12, 2010

Knowledge is Power

At the dawn of the modern science era, in 1597, Francis Bacon, a founding empiricist, used the phrase 'knowledge is power'. To Bacon, "knowledge itself is power", that is, knowledge of how the world works would lead whoever had it to extract resources and wield power over the world--science would enable Empire.

This view of science has persisted. It was important in the early founding of the Royal Society and other prominent British scientific societies in the 17th and 18th centuries and beyond. The technology and even basic knowledge that was fostered did, indeed, help Britannia to rule the waves.

Basic science was the playground of the classic idle wealthy of the 1700s and surrounding years, and applied technology was developed by people who were not formally beneficiaries of 'education' as it was done in those times. In the US, major scientific investment, such as in large telescopes, was funded by private philanthropy--by wealthy industrialists who could see the value of applied science.

We tend perhaps to romanticize the 18th and 19th centuries, the era of Newton, Darwin, and many others, who advanced science in all areas--geological, physical, chemical, and biological, without doing so for personal or financial gain. But at the same time, there was much activity in applied science and technology and even in 1660 when the Royal Society was founded with government support, gain was one of the objectives.

An informative series about the history of the Royal Society and of other scientific activities in Britain was aired the week of Jan 4 on BBC Radio 4, on the program called In Our Time--the four parts are now available online. Much of the discussion shows that the interleaving of government funding, geopolitics, and avarice was as important in driving science when the Royal Society was founded as it is now.

There can be no doubt about the importance of systematic investigation of the ways of Nature in transforming society during the industrial revolution. The result was due to a mix of basic and applied science. The good was accompanied by the bad: daily life was made easier and healthier, but episodes of industrialized warfare made it more horrible. On the whole, it has allowed vastly more people to live, and live longer, than ever before. But it's also allowed vastly more people to struggle in poverty. (The discovery of novocaine for use by dentists may alone justify the whole enterprise!)

The post-WWII era seemed to foster lots of basic science. But in the US the National Science Foundation and other institutions poured money into science largely, at least, in response to fears that the Soviet Union, whose space program was far ahead of ours, might gain on us in world prominence. So there was a recurring pragmatic drive for supporting science.

The university dependence on research grants was one outgrowth of this drive. We think this has been awful for science, since careers depend on money-generating by faculty, and that leads to safe, short-term thinking, even if more funds mean more opportunity. The intellectually thin Reagan administration's demand that research should translate into corporate opportunity was just a continuation of the materialistic element of support for science.

In a way, we're lucky that basic, disinterested science actually got done, and lots of it at that! Human society probably can't be expected to put resources into things as abstract as basic science, with no promise or obvious way to lead to better pencils, medicine, or bombs. So it's no wonder that universities, bureaucracies, and scientists alike hype their personal interests in terms of the marvels to be returned to the funders.

Such a system understandably leads to entrenched vested interests who ensure their own cut of the pie. We routinely write about these vested interests and the effect we believe they have on the progress of knowledge. But, as anthropologists, we have to acknowledge that the self-interest that is part of the package is not a surprise. After all, why should we be able to feed off the taxpaying public without at least promising Nirvana in exchange? Human culture is largely about systematized resource access and distribution, and this is how we happen to do that these days.

Under these conditions science may not be as efficient or effective as it might otherwise be. A few MegaResearchers will, naturally, acquire an inordinately large share of the pie. Much waste and trivia will result. The best possible science may not be done.

Nonetheless, it's clear that knowledge does progress. A century hence, it will be our descendants who judge what resulted from our system that was of real value. The chaff in science, as in the arts, sports, or any other area of life, will be identifiable, and will be the majority. But the core of grain will be recognized for its lasting value and impact.

BUT that doesn't mean we should resign ourselves to the way the system works, to its greed, waste, hierarchies, and its numerous drones who use up resources generating incremental advance (at best). That is part of life, but only by the pressure of criticism of its venality and foibles can the System be nudged towards higher likelihoods of real innovation and creativity in knowledge.

It's remarkable that blobs of protoplasm, evolved through molecules of DNA and the like from some primordial molecular soup, understand the universe that produced them as well as we actually do. And we will continue to build on what we know; empirical, method-driven activity is a powerful approach to material gain. Embedded in inequity, vanity, venality, and other human foibles, we nonetheless manage to manipulate our world in remarkable ways.

The history of the Royal Society and other science societies that reflect the growth of society generally, as reflected in these BBC programs, is a fascinating one. But that doesn't change our belief that, in principle at least, we could make better use of our knowledge and abilities to manipulate our world toward less inequity, vanity, venality and so on.

Monday, January 11, 2010

Impressionism in a cause-and-effect world

Scientists are often materialistic totalitarians, arguing (sometimes rather presumptuously) that this is strictly a stuff-and-energy universe. Mysticism that involves the supernatural--that is, things not just made of stuff and energy--is sneered at. Everything is causal: if you see an outcome, there must be a cause, and it is often argued, at least implicitly, that any cause is essentially inferable. Nothing is left to chance, even if it may superficially appear so.

When it comes to life, material fundamentalism argues that all that we are, including all we think, is stuff and energy, cause and effect. If so, then molecular genetics will eventually be able to account for everything (and, it's often argued, in terms of Darwinian natural selection).

Science regularly discovers new kinds of natural factors, and they then join the panoply of materialist causes. We must be open to the possibility that our understanding of what is 'material' may change if we discover some truly other phenomena which we can call 'dark matter' as a general kind of term: factors not previously known or suspected. What we think of as 'material' may have to include new things.

On the other hand, scientists don't accept explanations that argue on the basis of such unknowns. That's often characterized as 'mysticism' or 'superstition'. The absence of evidence acceptable from a materialistic point of view is one major reason scientists oppose or don't accept religious explanations of the natural world.

If every instance of a phenomenon, say every case of a certain type of cancer, has a different material explanation, then the usual kinds of cause and effect approaches don't work very well as explanations. Science is currently based largely on repeatable observations. Individualism can therefore be a problem.

These thoughts were triggered in part by watching a video of a BBC television series called The Impressionists. It is a very fine dramatization of the 19th century French impressionism movement in art: Degas, Manet, Monet, Cezanne and others. At the same time these painters were working, realist painters were working as well (and there was conflict between the two groups, of course). So why did impressionism arise? Why is impressionist art so impressive (to some, at least)? Can anyone seriously argue that groups looking at the same object saw it in such different ways as a result of having different genotypes?

In what meaningful sense is impressionism to be explained in material terms? Certainly Monet saw through a retina made of cells and photoreceptors, and his brain was made of neurons that connected via neurotransmitters and receptors. At least, we need not argue literally that Monet got his impression directly from God (or, as realists might have argued, from Satan!). And why are eyes that evolved to perceive the real world also vulnerable to liking rather than fearing blurred and hence untrustworthy interpretations of Nature?

Explaining things that are strictly material may not always be possible in meaningfully material terms. This message uses 'ink' and images of 'letters', but cannot be understood in terms of ink and letters. It is understandable, even from a cause-and-effect viewpoint, only at the level of its organization and its higher-level interactions (which may include true probabilism), and these are indeed also 'purposive' in some senses--Degas painted for a reason, and saw things as he did for a reason.

Accounting for complexity, for the way that countless interactions yield, or cause, coherent and unitary outcomes, is a major challenge for science, and perhaps especially for biology, because although we're made of atoms that act as automatons--each hydrogen is the same, so far as we know, at least for practical purposes--our components change and react to circumstances.

So genetic variation, if properly understood, may account for very general traits like 'fidgetiness' or 'anger', but not for the higher-level or more specific aspects of culture. Culture evolves in its own way (a subject for future posts), and constitutes environments in which traits like impressionism arise.

But esthetics are a good example of things we should not expect to be explicable in molecular terms, even if Manet and Monet were made of nothing but atoms.

Friday, January 8, 2010

2020 visions, or 20-20 hindsight?

For an article in Nature this week ("2020 Visions"), a number of "leading researchers and policy-makers" were asked to comment on what their field is going to look like in ten years. "We invited them to identify the key questions their disciplines face, the major roadblocks and the pressing next steps."

Well, Nature is a commercial operation, not as unlike, say, People Magazine as it may wish to be viewed, and it often looks like it, too. As it does here, since only incremental science can even generally be predicted (for example, that the price of whole-genome DNA sequencing will drop dramatically, and that as a consequence we'll all be hungering to do it in almost any kind of study, whether justified or not). So, this article is essentially free advertising for the respondents. The futuristic bravado is limited, as in a way it must be. But let's suspend our disbelief and see if we can go with their premise for a minute.

The respondents include a university president, an astronomer, a chemist, a paleontologist, a computer scientist, someone from the NIH, a geneticist, and so on. We aren't qualified to comment on the specifics of Google's director of research's vision of the future, but we can say, in general, that this is an odd exercise, as prognostication generally turns out to be a wish-list in disguise (right, we couldn't even go with the premise for a whole minute!). So prognosticators on the future of personalized medicine, say, are advocating for their own view of the future. Or rather, of the present.

One interviewee comments that rare genetic variation will be found to have much more predictive power for disease than common variants--exactly what one might expect given the failure of 'common' variants to solve all the world's problems, and the next level of ramped-up DNA-variation-based promises that have been growing in recent years, not coincidentally nor disinterestedly, along with the technologies being sold for ever-cheaper whole-genome sequences. This is essentially a rationalization for more of the same: although there may be new findings for disorders with clearly known causal genes, rare variants will generally be very difficult to assign causal effects to (for example, suppose a variant is seen in only one patient?). So rare variants will not be very useful in public health terms, yet public health funds are going to be demanded for this work. We have to assume that the leading experts in the areas we know a lot less about are doing the same kind of nest-feathering.
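On the 'seen in only one patient' problem, the arithmetic is sobering enough to sketch (the variant frequency is hypothetical, purely for illustration):

```python
# For a variant carried by 1 in 10,000 people, how often will a case
# sample even contain two carriers -- the bare minimum for any
# case-based statistical claim about its effect?
carrier_freq = 1e-4
for n_cases in (1_000, 10_000, 100_000):
    p0 = (1 - carrier_freq) ** n_cases                        # no carriers
    p1 = n_cases * carrier_freq * (1 - carrier_freq) ** (n_cases - 1)
    print(f"{n_cases:>7} cases: expected carriers {n_cases * carrier_freq:5.1f}, "
          f"P(2 or more) = {1 - p0 - p1:.2f}")
# 1,000 cases: expected 0.1 carriers, P(2+) ~ 0.00 -- usually a singleton
# at best, leaving nothing to assign a causal effect to.
```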

Of course, any scientist can be expected to be excited about, and to want to promote, his or her own field of interest. If we didn't think our work important, it would be depressing to go to work every day. And these days, as things are structured, science is expensive and has become a kind of competitive commercial Get-Grants enterprise. But journalism, even science journalism, should bear the responsibility of calling things by their true names, and of asking seriously about vested interests and so on.

But, the Nature piece is provocative in the following sense. A deeply embedded belief (truth?) one hears over and over again about science is that major discoveries over the centuries have been accidental. They can't be planned or predicted. Geniuses must be given free rein to think, tinker, experiment, and their eureka moments will follow.

If this is really true, how likely is it still to happen in today's vested-interest, continuity-driven, funding-based arena? Big-money science today is goal-oriented--with the goals often dictated by the patron (NIH, the military, etc.)--and those goals are generally very specific and incremental, with every step carefully planned, even years ahead of time. Knowledge is gained, for sure, but it was gained by Victorian beetle collectors, too, and that didn't go very far. Along the way, unexpected things are certainly found, but even they usually remain incremental rather than conceptually door-opening.

Given the way science is funded these days, that's the way it has to be. So, there's less and less room for real luck and serendipity, as the 'visions' of the 2020 visionaries show in a round-about sort of way. Does this mean no progress will be made? Of course not, but it is a different kind of science.

Personally, we think that centralizing high-cost technology (like high-throughput DNA sequencing), and distributing more, but smaller and longer-term, grants to more different investigators, especially junior investigators, with less detail required in grant proposals, and with promotion, tenure, and overhead disconnected from individual grantees' funding success, would increase the 'ecological' diversity of science and raise the probability of major new discoveries. Less intense pressure to hustle, more time to think--but there should be eventual accountability and project-termination criteria, too, unlike much of the Big Science that is being constructed with guarantees of continuity in mind.

Nothing ensures any particular level of dramatic discovery, but as scientists we should want the odds to be as high as possible. Institutionalized enterprise may not be the best way to make that happen.

Thursday, January 7, 2010

Christmas season ends, silly season resumes....

It is easy to criticize science and much harder to do it, especially if one expects more than incremental discovery. It is similarly easy to point out the human failings in any endeavor, including science, especially when it evolves to become a major part of society on which many of us must earn our daily bread.

At the same time, criticism is important as a corrective, to try to keep the ship on an honest course, at least to the best of our abilities. 'Established' entities, government, business, church, or otherwise, grow in ways that become entrenched and self-serving and need such correctives. We think that science certainly does.

A number of years ago Wisconsin's Senator William Proxmire used to present annual Golden Fleece awards to the stupidest science projects that had been funded at government expense. They were, Proxmire alleged, fleecing the public. Some of this was political theater, if not demagoguery, because a study with a laughable-sounding title could actually be great science, and some of it was.

But a lot of the Golden Fleece awardees richly deserved their ridicule. They were just the kinds of expense that the right-wing 'tea parties' in our current benighted land target as examples of the lack of connection between government and those who pay for it.

Some of the dumbest studies one can imagine are indeed being paid for. Our story the other day on the G-spot is an example. Self-reported surveys of twins as to whether or not they experienced the Joy of G hardly count as reliable science by almost any standard one can name. So whether or not there's a G-whiz!! experience to be had (that way), funding such work is no laughing matter.

If a trait is not empirically defined in an objective way, or its occurrence is only subjectively identified, and so on, the conclusions hold about as much water as, well, water. Statistics seem objective and reductionistic, but they are just empty: the p-value has little if any more reality than the G-value!

Social science research is the easiest to poke fun at because it has become a branch of statistical reductionism, much as biomedical science is molecular reductionism. But social entities -- us, people -- are not identically reactive automatons the way molecules are, and social phenomena are so complex that the approach usually doesn't work. Similar conditions might elicit different responses at different times from the same people, for example, as we chase self-help and other kinds of fashions.

Some areas of social science, such as demography, where the object is to count people by age, sex, address, country of origin, marital status, or even income level, can work as real science by any standard. But the behavioral, political, economic, and other social aspects of this 'science' are not so clearly science of the same sort.

As far as fleecing the public is concerned, one can argue that society is manifestly not better off socially or mentally as a result of decades of munificent social science research. People are not happier. We all seem to need our permanent personal therapist much as athletes need trainers. We're on drugs (legal or otherwise), and so on.

Most social progress, and there has been a lot of it, has been due to causes other than research: civil rights legislation is perhaps the best example. And whether those therapists who actually fix their clients' problems do so because of their scientific knowledge, or because they happen to have the right intuitive stuff for helping people, is highly debatable at the very least; the latter seems at least as likely.

A huge fraction of social science research, including a lot of what is paid for by NIH, doesn't have much to do with the 'H' (health), except the health of the NIH bureaucracy and of university welfare programs, which encourage universities to have their faculties hustling grants rather than teaching, so that everyone tailors their supposed research objectives to relate to 'health'.

But it's like picking on the helpless to go at social science in this way. That's because a healthy fraction of genetics and other areas of research, by NIH, NSF, and other agencies, too, is just as pointless and wasteful. The mother of all fleece may belong to NASA's hyping of 'life' on Mars and our desperate 'need' to send emigrants there! But the fleece barn is deep and rich, largely we argue because science has become part of the entrenched establishment, a topic we voice our views on regularly when it comes to genetics, where there's a lot of fleecing going on.

To us, the issue is not that there aren't problems and questions about nature and human life that deserve funding; certainly there are, in genetics and molecular biology among other fields. This is not an opposition to basic research. Indeed, it's an argument for basic research, rather than for research predicated on forced relevance, at high funding cost, by bigger groups, that ties up resources for years if not decades in ways that have little if any accountability, for the good reason that such research doesn't deliver the goods it promises (like, for example, the 'Healthy People' programs we opined about recently). The issue is a lack of terminating, scaling back, or changing direction when we know very well things aren't working as was thought. It's moving the goal posts so victory can be declared and grants can be renewed, in the same hands, for another 5 or more years.

Good science, especially good basic science, is unpredictable, mostly doesn't make major findings, and the most important findings cannot be ordered up like a ham sandwich at a deli. Whether public anger at this kind of waste will overtake the ability of vested science interests to persist, and purge their funding, is debatable: power usually manages to hold on. But if the anger does overtake, science will be the loser, because the good science--the real, basic science, the science by independent-thinking investigators not bound by the momentum of mega-groups or short-term careerism--will also suffer.

And if that happens then, sadly, we may never know if there's a G spot or not!

Wednesday, January 6, 2010

Penetrating the fog of 'penetrance'

In genetics there is a hoary old concept called 'penetrance'. It's not a definition of sexual success, so those of you who come to this post (no pun intended) with prurient interest should seek satisfaction elsewhere.

Penetrance is the probability that an individual has a specified trait given that s/he has a specified genotype. Usually, we think of the latter in terms of an allele, like the proverbial dominant A or recessive a in classical Mendelian terms in which genetics is taught.

Penetrance can range from zero -- the trait is never found in a person with genotype G -- to 1.0 (100% of the time the trait is present in persons with genotype G). If 'G' refers to an allele, that allele is called dominant if its presence is always associated with the trait, or recessive if the trait is present only in the absence of the other allele (in gg genotypes).
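
In symbols -- a compact restatement of the definition just given, in our own shorthand rather than anything standard from a textbook -- penetrance is a conditional probability, with dominance and recessiveness as its extreme cases:

```latex
% Penetrance of genotype G for a trait T, as a conditional probability
% (our shorthand notation):
f_G = P(T \mid G), \qquad 0 \le f_G \le 1
% Complete dominance of allele A: carriers always show the trait
f_{AA} = f_{Aa} = 1, \quad f_{aa} = 0
% Complete recessiveness of allele a: the trait appears only in aa genotypes
f_{aa} = 1, \quad f_{AA} = f_{Aa} = 0
```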

The key concept that links simple Mendelian inheritance with general aspects of penetrance is that penetrance is almost always a relative term. The effect of an allele is always dependent on the other alleles in the individual's genome, as well as on aspects of the environment, and also on chance.

Sometimes things seem simple enough that we need not worry too much about these details. Very strong effects that are (almost) always manifest are examples. But when things are relativistic in this way, genetic inherency usually must take a back seat to a more comprehensive understanding.

The first step, and usually a difficult one, is to specify just what genotype you are referring to and, often even more challenging, just what phenotype (trait or aspects of a trait) you are referring to. To use the Einstein phrase that applies to relativity in physics, you have to be clear about your frame of reference.

This is much, much more easily said than done. Is 'heart disease' a useful frame of reference relative to alleles at some gene like, say, ApoE (associated with lipid transport in the blood)? We work with colleagues, including Joan Richtsmeier here in our own department, who are concerned with craniofacial malformations. There are many such traits, including abnormal closure of cranial sutures (where bones meet in the skull). And cancer is a single word that covers a multitude of syns (syndromes).

These are examples in which no two cases are identical. When that's so, how can we tell what the penetrance is of a mutation in a particular gene? Probabilistic statements require multiple observations, but also that each observation be properly classified (since probabilities refer to distinct classes of outcomes).
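
To make the counting problem concrete, here is a minimal sketch, with made-up numbers of our own (not data from any real study), of what a penetrance estimate amounts to once every observation has been classified, and of how much it depends on the number of observations:

```python
# Minimal sketch of a penetrance estimate from classified observations.
# All counts are hypothetical; the point is that the estimate is only
# as good as the classification of each person's genotype and phenotype.

def penetrance(n_with_trait, n_with_genotype):
    """Estimated penetrance: the fraction of genotype carriers showing the trait."""
    if n_with_genotype == 0:
        raise ValueError("need at least one carrier of the genotype")
    return n_with_trait / n_with_genotype

# Suppose 120 people carry genotype G and 42 of them show the trait:
f_hat = penetrance(42, 120)
print(f"estimated penetrance: {f_hat:.2f}")  # 0.35

# A rough normal-approximation 95% confidence interval shows how much
# the estimate depends on the number of classified observations:
se = (f_hat * (1 - f_hat) / 120) ** 0.5
print(f"approximate 95% CI: ({f_hat - 1.96*se:.2f}, {f_hat + 1.96*se:.2f})")
```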

Since a given mutation affects only a single part of a single gene, it can be identified specifically (if, for the moment, we discount the mutations that take place within the person's body each time any of his/her cells divide). But traits can be variable and hard to define precisely, and the rest of the genome will vary in each person, even in inbred mice (because they undergo mutations). The amount of variation depends on the situation, but is very difficult to quantify precisely (recent work on genetic mutations in cancer begins to show this in detail, though we've known it in principle for a long time).

Most of the additional variation, not to mention purely chance aspects of development, homeostasis, and every cell's behavior, is unknown and much of it perhaps not even documentable in principle. Thus, in trying to characterize complex traits, we face real challenges just of definition, and much more so of understanding.

The same is true of evolution. An allele that has zero penetrance cannot be seen by natural selection. An allele with 100% penetrance is always 'visible' to selection in principle. But even there it has no necessary evolutionary implications unless it also affects fitness, that is, reproductive success. And that is another layer of causation with complex definitional issues, that we have written much about.

One bottom line is that just knowing a complete DNA sequence from some representative cell of an individual does not explain phenotypes, phenotypic effects, or evolution.

Tuesday, January 5, 2010

Aw, G! G-whiz! No G?

Well, according to the world authority on such matters, Salon.com, the crushing news has just been announced: there is no G-spot!

Given her history of dealing with the more sensitive subjects of sex, diarrhea, and halitosis (among others), I thought that our own special collaborator Holly would be the one to comment on this bit of hot research. But Anne thought it might be presumptuous of me to ask her such a thing. And Anne herself demurred, perhaps thinking that such a subject touched, so to speak, too close to home. So the task fell to me.

The G-spot, for those who are uninitiated in the arts of female pleasuring, is a point in the vaginal wall that, when proberly (no misspelling here!) stimulated, can lead to exquisite orgasms (for her, too!). But for some of our unfortunate better halves, this pleasure oasis doesn't seem to exist.

Cold fish? Just not interested in their partners? Can't really get into it?

Not so, say the experts! Yet another thing that turns out not to be her fault, despite our sexist accusatory society! What was thought perhaps to be a revelation for the new G-eneration of women turns out not to exist at all! It was a sex-toy vendor's scam. All those weirdly shaped, twisting, vibrating, variously sized dildos: they're bunk (from this point of view, at least)!

It turns out that our more socially responsible citizens (university professors), who have to think of something important to research so they can get grants and promotions, did a twin study of the G-spot. Like searchers for the Loch Ness monster, they delved deeply but came up empty-handed.

More precisely, identical twins, who are genetically the same, were no more concordant (didn't agree more) on whether they had the G-experience than fraternal (well, sororal) twins, who share only half their genes. Or, at least, on whether they reported such G-ratification.

Assuming no confounding issues such as monozygous twins picking less knowledgeable partners than sororal twins, nor a strange kind of sibling ribaldry, there simply is no evidence -- at least no genetic evidence -- for the existence of Playboy's favorite playground. Conclusion: it's a myth.
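
For the curious, here is a minimal sketch of the kind of comparison involved; the twin pairs are entirely hypothetical, not the study's data, and the names are ours:

```python
# Sketch of a twin-concordance comparison (hypothetical pairs, not the
# study's actual data). Each pair records whether each twin reports the trait.

def concordance(pairs):
    """Pairwise concordance: among pairs where at least one twin reports
    the trait, the fraction in which both twins report it."""
    affected = [pair for pair in pairs if any(pair)]
    if not affected:
        return 0.0
    return sum(1 for pair in affected if all(pair)) / len(affected)

mz_pairs = [(True, True), (True, False), (False, False), (True, True), (False, True)]
dz_pairs = [(True, False), (True, True), (False, False), (False, True), (True, True)]

c_mz, c_dz = concordance(mz_pairs), concordance(dz_pairs)
print(f"MZ concordance: {c_mz:.2f}, DZ concordance: {c_dz:.2f}")  # 0.50 vs 0.50

# If the trait (or its reporting) had a strong genetic component, MZ
# concordance should clearly exceed DZ concordance; similar values, as
# the study reported, argue against a simple genetic basis.
```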

G, that is sooo too bad!

Hey, wait a minute! What kind of conclusion is that? After all, a substantial fraction of women in the study did say they had one (a G-spot). And so said both kinds of twins! What the heck more do you want for evidence? So maybe this is consistent with a G-enetic reality, and has to do with the well-known variable expressivity of the G-ene, as with any other gene. Maybe the spot's bigger in some than in others, or more trigger-happy. Maybe other women, wishing to uphold a demure image, deny what they experience to be true. Maybe they want their husbands (or other mates) to feel put down as performance failures.

This relates to the genetic concept of 'penetrance' (no pun intended) that we will discuss in tomorrow's post. Having the G-ene doesn’t imply having the same amount of fun, except probabilistically (i.e., what's the probability that he'll get to the bottom of this phenomenon and figure IT out?).

What I think is the obvious answer to this question is: we need more research! Lots more research! Maybe, like SETI (where everyone is asked to volunteer their computer to search for ETs in outer space), we can engage the whole population to search for ITs in inner space.

I think I've said enough.... Again, I think this is a ball for Holly or Anne to pick up.

Ken

Monday, January 4, 2010

Healthy People or unhealthy promotions?

Leading Health Indicators
Healthy People 2010? A story is making the rounds (e.g., here) about how well or how poorly the US has met goals set in 2000 for improving the health of its population by 2010. Not so well, by most measures, though there have been some improvements (some, in spite of ourselves -- heart disease mortality tends to go up or down without our understanding why).
There are more obese Americans than a decade ago, not fewer. They eat more salt and fat, not less. More of them have high blood pressure. More U.S. children have untreated tooth decay.
But the country has made at least some progress on many other goals. Vaccination rates improved. Most workplace injuries are down. And deaths rates from stroke, cancer and heart disease are all dropping.

As a new decade approaches, the government is analyzing how well the U.S. met its 2010 goals and drawing up a new set of goals for 2020 expected to be more numerous and - perhaps - less ambitious.
Why is meeting these goals so difficult?

The government uses 'leading health indicators' to measure the health of the population. These indicators are:
  • Physical Activity
  • Overweight and Obesity
  • Tobacco Use
  • Substance Abuse
  • Responsible Sexual Behavior
  • Mental Health
  • Injury and Violence
  • Environmental Quality
  • Immunization
  • Access to Health Care

Healthy people and the NIH
Note that, except for the goals that require political action, this is primarily a list that can only be accomplished by getting people to change their behavior. Which is interesting, since the five goals Francis Collins set for the NIH when he took over the directorship last year, and which he reiterates in this week's Science, have nothing to do with changing behavior and everything to do with high-tech research (he does include the goal of increasing access to health care, but it's clear from the mixed, at best, results of Healthy People 2010 that there isn't a straight line between health care and health). He begins his Science essay with this:
The mission of the National Institutes of Health (NIH) is science in pursuit of fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to extend healthy life and to reduce the burdens of illness and disability.
Collins sounds as though the mission of the NIH is in accord with the Healthy People 2010 (and now 2020) goals set by the Office of Disease Prevention and Health Promotion and the US Dept of Health and Human Services, but if they were really on the same page, he'd have put health education first and foremost in his list of priorities for the NIH. Or Healthy People 2020 would include goals like "Sequence everyone's individual genome, or biome, or nutriome," which big science, with Collins at the political forefront, has been promising for years will work miracles for our health.

To put it politely, Collins' goals remain wishful thinking with respect to health benefits, but a big boon for big science. We could generate more healthy years for more people, by far, by changing lifestyle exposures than we ever could by all the 'basic research' and genetics one can dream of. And this is not just keeping an old car running rather than inventing better cars: healthful lifestyles will work every generation.

Technocracy welfare
Dr Collins is instead a sales agent for our technocracy welfare system (of which we personally, it must be acknowledged, are beneficiaries), in the form of largesse from Congress. In fairness, one of his priorities is a greater focus on global health, which is laudable, especially for someone who has been a medical missionary in Africa, and it's very easy to argue that improving the health of people living in poor and underserved countries is good for the US as well, because extremely drug-resistant TB doesn't stop at borders. But most global health problems are already understood, and we could go a long way toward solving them with money, infrastructure development, and public health measures, which would help a lot more people a lot faster than high-tech research.

But, issues with the NIH aside, if the "Healthy People 20whenever" goals are so clear, and could be met if only people would change their behavior, why isn't that happening? Because figuring out how to make health education actually work is a long-standing challenge for public health. It is not rocket science, and health policy should not be turned into rocket science. But it is difficult, which might seem a little odd, since advertisers seem to have no problem getting people to buy sexy cars, or to supersize their fries. Why can't advertising be just as effective when it comes to health-related behaviors? Well, health-related behaviors that improve health, that is, since supersizing fries is health-related too.

The facile answer is that advertisers sell fun stuff, while public health tries to sell restraint, moderation, eliminating the fun stuff. It's hard to do.

Reality checks
It's good to have goals. An organization needs goals. But, when the goals are so often much loftier than what can be accomplished, the organization needs to stop and think about why they've overshot, and what can be done about it.

There are three reality checks here. First, the taxpayers footing today's technocracy bill are not, by and large, going to benefit from the crunching of no matter how many DNA sequencers. Those who do have serious genetic problems that can be understood in other ways will, one hopes, benefit, even if that's years of hard work away. But cheaper ways actually to increase health--the behaviors mentioned above--exist, and the same funds (and policy changes) could be put to those ends. They will, if adopted, pay off for as many generations as current technology promoters promise.

Second, these NIH programs are largely costly self-promotion. Without doubt there is a lot of good research done within, and paid for by, the NIH. And a lot of it is technological in ways that are fully justifiable. But it's embedded in huge amounts of self-protecting bureaucracy and baloney.

Nobody could seriously think that we will have 'Healthy People 2010' (unless we really hurry up!). This is advertising, but not of the health-improvement kind. It's spending money where it shouldn't be spent, primarily fostering scientists rather than taxpayers. It is cynically cruel to make grandiose promises that cannot be kept. It's reminiscent of the old communist countries' Five Year Plans that we so routinely ridiculed. We can do better. We can try more sincerely. And we can be accountable for what we boast about. Note that the goals for Healthy People 2020 are going to be 'less ambitious.'

Third, a minor point....or is it? Francis Collins says he believes in a personal God, roughly the fundamentalist Christian God. If that's the case, and if those who live virtuous temporary lives here on Earth will have eternal rewards in the Hereafter, then why on earth is it important to pour money into future technological solutions rather than to save the quality of life of those who are here right now, so they can live in faith and do good, rather than suffer in privation?

Well, leaving the last bit aside, the point here is to take with a grain of salt the advertising and hyperbole. If pure science and the fun of its practice are the purpose of research money, then let's just say so--this is about us and our careers--and stop pretending this is all about health per se. There have been major improvements in some areas of health, and technology (including genetics) has clearly played a role. But overall it's hard to argue that we're happier, or healthier, than we were before pouring billions into a lot of things we've been pouring billions into, things we know very well (as we've opined in numerous posts) are not bearing much fruit from the Tree of Life.

Friday, January 1, 2010

Foul Mouthed Sweet Tooth


Happy 2010 from guest blogger Holly... Here's my New Year wish for everyone.

If you’re like me then you frequently find yourself reading articles about things that you know nothing about (which, unfortunately, describes the content of probably 99.99% of the things that I read). There are many explanations for such behavior. But why I clicked on “The Bifidobacterium dentium Bd1 Genome Sequence Reflects Its Genetic Adaptation to the Human Oral Cavity” needs a little bit of back story.

The last week of November I was hit with hemorrhagic E. coli.* I think my body pretty well expelled the bug on its own, but a strong dose of two different antibiotics made sure it was real dead.

Many people who’ve taken Cipro have experienced that notoriously unpleasant taste in their mouths. The flavor is difficult to describe, but ever since I finished taking the medicine, I’ve continued to have a foul tasting mouth and a foul mouth (ba dum bum bum). My dogs love the makeover (which was amusing for about five seconds), but I can’t bear to ask my husband if he’s noticed a difference. (Ego, 1: Scientific rigor, 0.)

Is this new taste a signal that my mouth’s ecosystem is different? My mind (I take no credit for this) imagines that the antibiotics killed off more micro-critters than just the E. coli and those that better survived are now more prevalent compared to their ancestors. Maybe the species that currently dominates my mouth, or its products, tastes different from the bacterial composition that I had before.

Since this shift in balance would have happened abruptly during the Cipro killing spree, it makes sense that my taste receptors are detecting it now – my brain would have ignorantly ignored a slow bacterial change, just as it couldn’t detect my transformation from 2 cells to a 32-year-old.

I’ve been popping mints, gargling, and flossing like a lumberjack for a month now, and this whole oral fiasco is why I was naturally drawn to this recent article, “The Bifidobacterium dentium Bd1 Genome Sequence Reflects Its Genetic Adaptation to the Human Oral Cavity.”

Bacteria. Adaptation. Oral Cavity. This is totally going to be about me! See, first I happened to watch Food Inc., in which they discuss hemorrhagic E. coli while I was recovering from hemorrhagic E. coli, and now just as there’s a bacterial coup in my oral cavity, I stumble upon new research on oral bacteria!** Is it going to identify the species that differentially survived antibiotics, ran rampant in my mouth, and ruined my breath?***

Well, it turns out that this article is not at all about me or my dilemma - despite the common themes which you'll read about below, I can't solve my bad breath mystery by simply reading about oral bacteria. But it’s still a good read for people with oral cavities, people with cavities in their oral cavities, people who are afraid of getting cavities, and people who are fascinated with evolution. It’s especially poignant if you’ve got a sweet tooth that has taken command of your life for the past few weeks like mine has.****

We’ve all got like 900 species of microbiota in our “oral biofilm.” Some of those species are more similar to what’s down the hatch than others. The genus Bifidobacterium is one group with species living from the lips all the way down to the colon, and also in the vagina (the article doesn't mention the vaginal species; only my brief internet search said so). You may recognize the genus Bifidobacterium because many of its species are considered “probiotics” and they are included in foods and food supplements to help with digestion, sometimes under the term “Bifidus.”

Most species in this genus are your friends. They’ve teamed up with your body so that they get what they want out of the food that you eat while at the same time they're helping you get what you need out of the food that you eat. It’s a win-win situation and all these microbiotic critters running around inside you (well, more like clinging desperately to your epithelial linings) are why you are more cells of them than you are cells of you.

Like many microbiota in your mouth, B. dentium (the focal species of the paper we're talking about here) is great at metabolizing carbohydrates and approximately 14% of the genes in its entire genome code for proteins that are involved in this process. How does this compare to human genes for metabolizing carbohydrates? I don’t know, but I’m guessing we have relatively fewer genes that are involved in carbohydrate metabolism and that by teaming up with other species which are essentially born to do this, our own system can afford to slack, functionally speaking. It’s a beautiful relationship. But it comes with some costs.

For example, although B. dentium helps with our digestive functions, it has evolved in such a way that it’s no longer just a friend. It’s also a pathogen!
B. dentium is by far the most popular Bifidobacterium to be associated with cavities on tooth crowns, in both children and adults, and with those on the roots of adult teeth. By producing acid, it lowers the pH enough to cause teeth to demineralize. And it’s so good at surviving in this highly acidic environment that it can make a living like this, on our rotting, holiday-cookie-smothered teeth.

The whole genome sequence revealed that the intergenic regions of the B. dentium genome have more nucleotide differences than the protein-coding regions. Sound familiar? (Everything boils down to human vs. chimp, doesn’t it?)

The consequences of these differences in the genome of B. dentium are that (1) it can withstand low pH conditions (as mentioned above), (2) it can metabolize a wider range of the stuff that we eat than its cousins in the colon can (which makes sense considering the smorgasbord presented to B. dentium vs. the metaphorical crumbs that make it down to the colon), and it can even live off our own saliva, and (3) it seems to resist biocide better than its relatives do (which was tested by growing B. dentium in mouthwash, which sounds completely unethical).
Regarding that third point, B. dentium has a relative abundance of what are called “two-component systems (2CSs)”, which are instances where protein-coding genes are essentially flanked by regulator genes, and this seems to have surprised the authors, based on what is known about other bifidobacterial genomes. The implication of these numerous 2CSs is that they may be indicative of B. dentium’s “ability to sense dynamic environmental cues and to modulate appropriate physiological responses.” Perhaps this is what enables them to differentially survive despite our attempts to murder them with mouthwash (or, heh heh, during E. coli-cide? Hmmm?). Or at least, perhaps this is what enables them not just to tolerate but to thrive in fluctuating habitat acidity. And what's even more exciting is that these adaptations may be linked.

Naturally we’re left to wonder: How long ago or recently did B. dentium originate? Did our behavior induce B. dentium to adapt this way through our diet and/or through our dental hygiene? Is it smart to ingest commercially sold probiotics if they contain species that can evolve to be opportunistic pathogens like B. dentium did or, worse, if they actually contain the pathogenic B. dentium and we just don’t know it?

Because it's the holidays, should we just forget about it and eat all the cookies and sweets that we want because we can't stop the evolution of our oral microbiota?

I like that idea. More cookies, please, and pass the biocide and the floss, thanks.

Starred Footnotes:
*No, hemorrhagic E. coli is not why I’ve been gone since mid November (apart from dropping comments on Anne and Ken’s posts). University life has kept me sufficiently busy and generally abloggish. But getting hemorrhagic E. coli certainly didn’t help. No, I don’t know where it came from. And no, I didn’t go to this doctor for treatment (amicably sarcastic emoticon).
** I believe this is what Oprah calls “The Secret.”
***Please do not take this opportunity to tell me that my breath was already ruined.
**** I used to think that a gold tooth was a sweet tooth and that it could actually taste sweets better than enamel teeth can. This is because my uncle has a gold tooth and a sweet tooth, so naturally a 5-year-old me, who had never seen or heard of either of those things, thought that they were one and the same, and thought that a sweet tooth was a pretty neat trait to have. For much of my childhood I was incredibly jealous of every person I saw sporting a gold tooth.