Michael J. Behe's Blog

October 6, 2011

New Work by Thornton's Group Supports Time-Symmetric Dollo's Law

In the June 2011 issue of PLoS Genetics the laboratory of University of Oregon evolutionary biologist Joseph Thornton published ( http://tinyurl.com/3dsorzm ) "Mechanisms for the Evolution of a Derived Function in the Ancestral Glucocorticoid Receptor" (Carroll et al., 2011), the latest in their series of papers concerning the evolution of proteins that bind steroid hormones. In earlier laboratory work ( http://tinyurl.com/3hevjzy ) they had concluded that a particular protein, which they argued had descended from an ancestral, duplicated gene, would very likely be unable to evolve back to the original ancestral protein, even if selection favored it (Bridgham et al., 2009). The reason is that the descendant protein had acquired a number of mutations that would have to be reversed, mutations which, the authors deduced, would confer no benefit on the intermediate protein. They used these results to argue for a molecular version of "Dollo's Law", which says roughly that a given forward evolutionary pathway is very unlikely to be exactly reversed.



In my comments on this interesting work ( http://tinyurl.com/3cjm4gr ), I noted that there is nothing time-asymmetric about random mutation/natural selection, so that the problem they saw in reversing the steroid hormone receptor evolution did not have to be in the past — it could just as easily have been in the future. The reason is that natural selection hones a protein to its present job, with regard to neither future use nor past function. Thus, based on Thornton's work, one would not in general expect a protein that had been selected for one function to be easily modified by RM/NS to another function. I have decided to call this the Time-Symmetric Dollo's Law, or "TSDL".


But if there is such a thing as a TSDL, did the forward evolution of the steroid-hormone receptor protein manage to avoid it? That question had not yet been addressed. Was the protein lucky this time, encountering no obstacles to its evolution from the ancestral state to the modern state? If so, then maybe TSDL is occasionally an obstacle, but not so often as to rule out modest Darwinian evolution of proteins (as I had thought before reading Thornton's earlier work).


Well, thanks to the Thornton group's new work ( http://tinyurl.com/3dsorzm ), we can now see that there are indeed obstacles to the forward evolution of the ancestral protein. The group was interested in which of the many sequence changes between the ancestral and derived-modern protein were important to its change in activity, which consisted mostly of a considerable weakening of the protein's ability to bind its steroid ligands. They narrowed the candidates down to two amino acid positions, residues 43 and 116. Each of the changes at those sites decreased binding by over a hundred-fold. However, when the researchers combined both mutations into a single protein, as occurs in the modern protein, binding was not only decreased — it was for all intents and purposes abolished. Further work showed that a third mutation, at position 71, was necessary to ameliorate the combined effects of the other two, bringing the double mutant back to a hundreds-fold loss of function rather than essentially complete loss of function.
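The pattern described above is a case of strong epistasis: the double mutant is far worse than simply multiplying the two single-mutant effects would predict, and a third substitution partially rescues it. A minimal sketch of how one would flag such non-additivity (the fold-change numbers below are illustrative placeholders chosen to mirror the qualitative pattern, not the paper's measured values):

```python
import math

# Illustrative relative activities (ancestral protein = 1.0).
# Placeholder numbers mimicking the qualitative pattern reported for
# residues 43, 116, and 71 -- NOT the paper's measurements.
activity = {
    ():                     1.0,    # ancestral protein
    ("m43",):               0.008,  # >100-fold loss
    ("m116",):              0.006,  # >100-fold loss
    ("m71",):               1.0,    # conjectured neutral on its own
    ("m43", "m116"):        1e-6,   # essentially abolished
    ("m43", "m116", "m71"): 0.004,  # third mutation partially rescues
}

def expected_additive(combo):
    """Activity predicted if single-mutant fold-effects simply multiplied."""
    result = 1.0
    for s in combo:
        result *= activity[(s,)]
    return result

def epistasis(combo):
    """Log10 deviation of the observed combination from log-additivity.
    Negative means the combination is worse than the single effects predict."""
    return math.log10(activity[combo] / expected_additive(combo))

# The double mutant is ~50-fold worse than multiplying the single effects:
print(round(epistasis(("m43", "m116")), 1))  # -1.7
```

Any such calculation only restates the authors' measurements in a different currency, but it makes the key point visible: the path through the double mutant is qualitatively worse than either single step suggests.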


Carroll et al. (2011) conjecture that the mutation at position 71 occurred before the other two mutations, but that it had no effect on the activity of the ancestral protein. So let us count the ways, then, in which "fortune" favored the evolution of the modern protein. First, an ancestral gene duplicated, which would usually be considered a neutral event; thus it would not have the assistance of natural selection to help it spread in the population. Second, the duplicate avoided the hundreds of possible mutations that would have rendered it inactive. Third, it acquired a neutral mutation at position 71, which again would have to spread by drift, without the aid of natural selection. Fourth, the still-neutral gene once more avoided all of the possible mutations that would have inactivated it. Fifth, it acquired the correct mutation (at either position 43 or 116) which finally differentiated it from its parent gene — by reducing its activity a hundred-fold! Finally, somehow the wimpy, mutated gene (putatively) conferred upon the lucky organism some likely-quite-weak selective advantage.
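Each neutral step in that list is governed by standard neutral-theory arithmetic: a new neutral allele in a diploid population of effective size N fixes with probability 1/(2N), and when it does fix, the drift to fixation takes on the order of 4N generations (Kimura, 1983). A back-of-envelope sketch of the timescale for just one such step, with placeholder parameter values (not estimates for any real lineage):

```python
# Back-of-envelope for one neutral step, using standard neutral-theory
# results (Kimura 1983). Parameter values are illustrative placeholders.

MU = 1e-8   # point-mutation rate per site per generation (typical vertebrate scale)
N = 100_000 # diploid effective population size (placeholder)

# Copies of one *specific* mutation (e.g. at position 71) appearing per
# generation across the whole population (2N gene copies):
new_mutants_per_gen = 2 * N * MU

# Each new neutral mutant fixes with probability 1/(2N), so eventually-
# fixing mutants arise at rate 2N*MU * 1/(2N) = MU per generation -- the
# classic result that the neutral substitution rate equals the mutation
# rate, independent of population size.
fixing_lineages_per_gen = new_mutants_per_gen * (1 / (2 * N))

expected_wait_generations = 1 / fixing_lineages_per_gen  # ~1/MU
time_to_fix = 4 * N  # mean generations from first appearance to fixation

print(f"{expected_wait_generations:.0e}")  # ~1e+08 generations per neutral step
print(time_to_fix)                         # 400000 generations of drift on top
```

The point of the sketch is not the particular numbers but that each unselected step carries its own long waiting time, and the scenario above strings several of them together.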


The need for passage through multiple neutral steps plus the avoidance of multiple likely-deleterious steps to produce a protein that has lost 99% of its activity is not a ringing example of the power of Darwinian processes. Rather, as mentioned above, it shows the strength of TSDL. Darwinian selection will fit a protein to its current task as tightly as it can. In the process, it makes it extremely difficult for the protein to be adapted to a new task, or reverted to an old one, by random mutation plus selection.


Dollo's law holds going forward as well as backward. We can state the experimentally based law simply: "Any evolutionary pathway from one functional state to another is unlikely to be traversed by random mutation and natural selection. The more the functional states differ, the less likely it is that a traversable pathway exists."


1. Carroll, S. M., E. A. Ortlund, and J. W. Thornton, 2011 Mechanisms for the evolution of a derived function in the ancestral glucocorticoid receptor. PLoS Genet. 7: e1002117.


2. Bridgham, J. T., E. A. Ortlund, and J. W. Thornton, 2009 An epistatic ratchet constrains the direction of glucocorticoid receptor evolution. Nature 461: 515-519.

Published on October 06, 2011 01:52


August 19, 2011

"Irremediable Complexity"

An intriguing 'hypothesis' paper entitled "How a neutral evolutionary ratchet can build cellular complexity" (1), in which the authors speculate about a possible solution to a possible problem, recently appeared in the journal IUBMB Life. It is an expanded version of a short essay called "Irremediable Complexity?" (2) published last year in Science. The authors of the manuscripts include the prominent evolutionary biologist W. Ford Doolittle.


The gist of the paper is this. The authors think that over evolutionary time, neutral processes would tend to "complexify" the cell. They call that theoretical process "constructive neutral evolution" (CNE). In an amusing analogy they liken cells in this respect to human institutions:



Organisms, like human institutions, will become ever more "bureaucratic," in the sense of needlessly onerous and complex, if we see complexity as related to the number of necessarily interacting parts required to perform a function, as did Darwin. Once established, such complexity can be maintained by negative selection: the point of CNE is that complexity was not created by positive selection. (1)



In brief, the idea is that neutral interactions evolve serendipitously in the cell, spread in a population by drift, get folded into a system, and then can't be removed because their tentacles are too interconnected. It would be kind of like trying to circumvent the associate director of licensing delays in the Department of Motor Vehicles — can't be done.


The possible problem the authors are trying to address is that they think many systems in the cell are needlessly complex. For example, the spliceosome, which "splices" some RNAs (cuts a piece out of the middle of a longer RNA and stitches the remaining pieces together), is a huge conglomerate containing "five small RNAs (snRNAs) and >300 proteins, which must be assembled de novo and then disassembled at each of the many introns interrupting the typical nascent mRNA." (1) What's more, some RNAs don't need the spliceosome — they can splice themselves, without any assistance from proteins. So why use such an ungainly assemblage if a simpler system would do?


The authors think the evolution of such a complex is well beyond the powers of positive natural selection: "Even Darwin might be reluctant to advance a claim that eukaryotic spliceosomal introns remove themselves more efficiently or accurately from mRNAs than did their self-splicing group II antecedents, or that they achieved this by 'numerous, successive, slight modifications' each driven by selection to this end." (1)


Well, I can certainly agree with them about the unlikelihood of Darwinian processes putting together something as complex as the spliceosome. However, leaving aside the few RNAs involved in the spliceosome, I think their hypothesis of CNE as the cause for the interaction of hundreds of proteins — or even a handful — is quite implausible. (An essay skeptical of large claims for CNE, written from a Darwinian-selectionist viewpoint, has appeared recently (3), along with a response from the authors (4).)


The authors' rationale for how a protein drifts into becoming part of a larger complex is illustrated by Figure 1 of their recent paper (similar to the single figure in their Science essay). A hypothetical "Protein A" is imagined to be working just fine on its own, when hypothetical "Protein B" serendipitously mutates to bind to it. This interaction, the authors postulate, is neutral, neither helping nor harming the ability of Protein A to do its job. Over the generations Protein A eventually suffers a mutation which would have decreased or eliminated its activity. However, because Protein B is bound to it, the mutation does not harm the activity of Protein A. The authors still envision this as a neutral state of affairs, and organisms containing the Protein A-Protein B complex drift to fixation in the population. Then other mutations come along, co-adapting the structures of Protein A and Protein B to each other. At this point the AB complex is necessary for the activity of Protein A. Repeat this process several hundred more times with other proteins, and you've built up a protein aggregate with complexity on the order of the spliceosome.
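The scheme just described can be caricatured as a little state machine. The sketch below is entirely hypothetical: it encodes the paper's verbal three-step ratchet (neutral binding, a masked "presuppressed" mutation, then locked-in dependency) and nothing more, with no biochemistry behind it:

```python
# Cartoon of the constructive-neutral-evolution ratchet, tracking only
# which proteins are bound and which dependencies have become mandatory.
# A hypothetical illustration of the paper's verbal scheme, not a model
# of any measured system.

class Protein:
    def __init__(self, name):
        self.name = name
        self.partners = set()  # proteins this one is bound to
        self.requires = set()  # partners now needed for its activity

def cne_step(a, b):
    """One turn of the ratchet, as the authors describe it."""
    # Step 1: B serendipitously binds A (assumed selectively neutral).
    a.partners.add(b.name)
    b.partners.add(a.name)
    # Step 2: A suffers a mutation that would inactivate it alone, but
    # bound B masks the defect -- still assumed neutral, so it can drift
    # to fixation.
    # Step 3: the arrangement is now irreversible: A requires B.
    a.requires.add(b.name)

a = Protein("A")
complex_members = [a]
for i in range(300):  # repeat ~300 times to reach spliceosome-scale aggregation
    newcomer = Protein(f"P{i}")
    cne_step(a, newcomer)
    complex_members.append(newcomer)

print(len(complex_members))  # 301 interlocked components
print(len(a.requires))       # A now depends on 300 once-dispensable partners
```

Writing the scheme out this way makes plain what the prose obscures: every arrow in the diagram is stipulated to be neutral, and the ratchet only turns if each stipulation happens to hold.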


Is this a reasonable hypothesis? I don't mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. The mutation rates of various organisms — viral, prokaryotic, eukaryotic — are known to sufficient accuracy (5) that estimates of how frequently the envisioned mutations arrive could have been provided. The neutral theory of evolution is also well-developed (6), which would allow the authors to calculate how long it would take for the postulated neutral mutations to spread in a population. Yet no numbers — not even back-of-the-envelope calculations — are provided. Previous results by other workers (7-9) have shown that the development of serendipitous specific binding sites between proteins would be expected to be quite rare, and to involve multiple mutations. Kimura (6) showed that fixation of a mutation by neutral drift would be expected to take a looong time. Neither of these previous results bodes well for the authors' hypothesis.
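The kind of back-of-envelope calculation the paper omits is not hard to supply. If a serendipitous, specific binding site requires k particular residue changes to be present together, and the intermediates are at best neutral, the expected waiting time scales roughly as the mutation rate raised to the -k power. A hedged sketch, with placeholder parameter values (the simultaneous-changes assumption is mine, for illustration, not the paper's):

```python
# Sketch of the feasibility arithmetic: expected waiting time for a
# *specific* binding site requiring k particular substitutions present
# together, followed by neutral fixation (probability 1/(2N), Kimura 1983).
# All parameter values are illustrative placeholders.

MU = 1e-8      # per-site point-mutation rate per generation
N = 1_000_000  # diploid effective population size (placeholder)

def waiting_generations(k):
    """Expected generations until a variant carrying k specific
    substitutions arises somewhere in the population and then fixes
    by neutral drift."""
    per_copy_rate = MU ** k                    # all k changes in one gene copy
    arising_per_gen = 2 * N * per_copy_rate    # across 2N gene copies
    fixing_per_gen = arising_per_gen / (2 * N) # the 2N cancels: = MU**k
    return 1 / fixing_per_gen

for k in (1, 2, 3):
    print(f"k={k}: {waiting_generations(k):.0e} generations")
```

Even granting generous parameters, the numbers climb brutally with k, which is why the phage-display results (7-9), showing that specific binding typically needs multiple matched residues, matter for the plausibility of the first CNE step.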


The second thing to notice about the paper is that there is no experimental support for its hypothesis. As the authors point out:



Development of in vitro experimental systems with which to test CNE will be an important step forward in distinguishing complex biology that arose due to adaptation versus nonadaptive complexity, as part of a larger view to understand the interplay between neutral and adaptive evolution, such as the intriguing long-term evolution experiments of Lenski and coworkers. (1)



Yet no such experimental evolutionary results have been reported to my knowledge, either by Lenski or by other workers (10).


Besides the lack of support from calculations or experiments, the authors discuss no possible obstacles to the scheme. I certainly understand that workers want to accentuate the positive when putting a new model forward, but potential pitfalls should be pointed out, so that other researchers have a clearer idea of the promise of the model before they invest time in researching it.


The first possible pitfall comes at the first step of the model, where a second protein is postulated to bind in a neutral fashion to a working protein. How likely is that step to be neutral? At the very least, we now have two proteins, A and B, each with a large part of its surface obstructed that was not obstructed before. Will this interfere with their activities? It seems there is a good chance. Second, simply by Le Chatelier's principle, the binding of the two proteins must affect the free energies of their folded states. What's more, the flexibility of both proteins must be affected. Will these individual effects serendipitously cancel out so that the overall effect will be neutral? It seems like an awful lot to ask for without evidence.
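The free-energy worry can be put in numbers, because equilibria depend exponentially on free energy: the shift in an equilibrium constant from a perturbation ddG is exp(ddG/RT). A short sketch of that standard arithmetic:

```python
import math

# How big a thermodynamic perturbation counts as "neutral"?  Folding and
# binding equilibria depend exponentially on free energy, so modest
# surface obstruction is not automatically negligible.
# Standard relation: fold-change in K = exp(ddG / RT).

RT = 0.593  # kcal/mol at 298 K

def equilibrium_shift(ddG_kcal):
    """Fold-change in an equilibrium constant from a ddG perturbation."""
    return math.exp(ddG_kcal / RT)

# A ~1.4 kcal/mol nudge -- roughly the strength of one weak hydrogen
# bond -- already shifts a folding or binding equilibrium tenfold:
print(round(equilibrium_shift(1.36)))  # 10
```

So for the first CNE step to be genuinely neutral, the net perturbation to Protein A's folding and activity would have to land within a fraction of a kcal/mol of zero, which is a narrow target.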


In the next step of the model Protein A is supposed to suffer a mutation that would have caused it to lose activity, but, luckily, when it is bound to Protein B it is stabilized enough so that activity is retained. What fraction of possible mutations to Protein A would fall in that range? It seems like a very specialized subfraction. Looking at the flip side, what fraction of mutations to Protein A and/or Protein B which otherwise would not have caused A to lose activity will now do so because of its binding to Protein B?


The last step of the model is the "co-adaptation" of the two proteins, where other, complementary mutations occur in both proteins. Yet this implies that the protein complex must suffer a deleterious mutation at least every other step, with the compensating "co-adaptive" mutation then fixing in the population. Wouldn't these deleterious mutations be very unlikely to spread in the population?


Finally, multiply these problems all by a hundred to get a spliceosome. Or, rather, raise these problems to the hundredth power. But, then, why stop at a hundred? As the authors note approvingly:



Indeed, because CNE is a ratchet-like process that does not require positive selection, it will inevitably occur in self-replicating, error prone systems exhibiting sufficient diversity, unless some factor prevents it. (1)



Why shouldn't the process continue, folding in more and more proteins, until the cell congeals? I suppose the authors would reply, "some factor prevents it". But might not that factor kick in at the first or second step? The authors give us no reason to think it wouldn't.


The CNE model (at least on the scale envisioned by the authors) faces other problems as well (for example, it would be a whole lot easier to develop binding sites for metal ions or metabolites that are present in the cell at much higher concentrations than most proteins), but I think this is enough to show it may not be as promising as the article would have one believe.


Besides the model itself, it is interesting to look at a professed aspect of the motivation of the authors in proposing it. It may not have escaped your notice, dear reader, that "irremediable complexity" sort of sounds like "irreducible complexity". In fact, the authors put the model forward as their contribution to the good fight against "antievolutionists":



… continued failure to consider CNE alternatives impoverishes evolutionary discourse and, by oversimplification, actually makes us more vulnerable to critiques by antievolutionists, who like to see such complexity as "irreducible." (1)



So there you have it. The authors don't think Darwin can explain such complexity as is found in the spliceosome, and they apparently rule out intelligent design. (By the way, when will these folks ever grasp the fact that intelligent design is not "antievolution"?) "Irremediable complexity" seems to be all that's left, no matter how unsupported and problematic it may be.


Although the authors seem not to notice, their entire model is built on a classic argument from ignorance, beginning with the definition of irremediable complexity:



"irremediable complexity": the seemingly gratuitous, indeed bewildering, complexity that typifies many cellular subsystems and molecular machines, particularly in eukaryotes. (1)



"Seemingly gratuitous". In other words, the authors don't know of a function for the complexity of some eukaryotic subsystems; therefore, they don't have functions. Well the history of arguments asserting that something or other in biology is functionless is pretty grim. More, the history of assertions that even "simple" things (like, say, DNA, pre-1930) in the cell either don't have a function or are just supporting structures is abysmal. Overwhelmingly, progress in biology has consisted of finding new and ever-more-sophisticated properties of systems that had been thought simple. If apparently simple systems are much more complex than they initially seemed, I would bet heavily against the hypothesis that apparently complex systems are much simpler than they appear.


References


1.  Lukes, J., J. M. Archibald, P. J. Keeling, W. F. Doolittle, and M. W. Gray, 2011 How a neutral evolutionary ratchet can build cellular complexity. IUBMB Life 63: 528-537.
2.  Gray, M. W., J. Lukes, J. M. Archibald, P. J. Keeling, and W. F. Doolittle, 2010 Cell biology. Irremediable complexity? Science 330: 920-921.
3.  Speijer, D., 2011 Does constructive neutral evolution play an important role in the origin of cellular complexity? Making sense of the origins and uses of biological complexity. Bioessays 33: 344-349.
4.  Doolittle, W. F., J. Lukes, J. M. Archibald, P. J. Keeling, and M. W. Gray, 2011 Comment on "Does constructive neutral evolution play an important role in the origin of cellular complexity?" Bioessays 33: 427-429.
5.  Drake, J. W., B. Charlesworth, D. Charlesworth, and J. F. Crow, 1998 Rates of spontaneous mutation. Genetics 148: 1667-1686.
6.  Kimura M., 1983 The neutral theory of molecular evolution. Cambridge University Press, Cambridge.
7.  Nissim, A., H. R. Hoogenboom, I. M. Tomlinson, G. Flynn, C. Midgley, D. Lane, and G. Winter, 1994 Antibody fragments from a 'single pot' phage display library as immunochemical reagents. EMBO Journal 13: 692-698.
8.  Griffiths, A. D., S. C. Williams, O. Hartley, I. M. Tomlinson, P. Waterhouse, W. L. Crosby, R. E. Kontermann, P. T. Jones, N. M. Low, T. J. Allison, and G. Winter, 1994 Isolation of high affinity human antibodies directly from large synthetic repertoires. EMBO Journal 13: 3245-3260.
9.  Smith, G. P., S. U. Patel, J. D. Windass, J. M. Thornton, G. Winter, and A. D. Griffiths, 1998 Small binding proteins selected from a combinatorial repertoire of knottins displayed on phage. Journal of Molecular Biology 277: 317-332.
10.  Behe, M. J., 2010 Experimental Evolution, Loss-of-function Mutations, and "The First Rule of Adaptive Evolution". Quarterly Review of Biology 85: 1-27.
Published on August 19, 2011 21:56

“Irremediable Complexity”

An intriguing ‘hypothesis’ paper entitled “How a neutral evolutionary ratchet can build cellular complexity” (1), where the authors speculate about a possible solution to a possible problem, recently appeared in the journal IUBMB Life. It is an expanded version of a short essay called “Irremediable Complexity?” (2) published last year in Science. The authors of the manuscripts include the prominent evolutionary biologist W. Ford Doolittle.


The gist of the paper is this. The authors think that over evolutionary time, neutral processes would tend to “complexify” the cell. They call that theoretical process “constructive neutral evolution” (CNE). In an amusing analogy they liken cells in this respect to human institutions:



Organisms, like human institutions, will become ever more ”bureaucratic,” in the sense of needlessly onerous and complex, if we see complexity as related to the number of necessarily interacting parts required to perform a function, as did Darwin. Once established, such complexity can be maintained by negative selection: the point of CNE is that complexity was not created by positive selection. (1)



In brief, the idea is that neutral interactions evolve serendipitously in the cell, spread in a population by drift, get folded into a system, and then can’t be removed because their tentacles are too interconnected. It would be kind of like trying to circumvent the associate director of licensing delays in the Department of Motor Vehicles — can’t be done.


The possible problem the authors are trying to address is that they think many systems in the cell are needlessly complex. For example, the spliceosome, which “splices” some RNAs (cuts a piece out of the middle of a longer RNA and stitches the remaining pieces together), is a huge conglomerate containing “five small RNAs (snRNAs) and >300 proteins, which must be assembled de novo and then disassembled at each of the many introns interrupting the typical nascent mRNA.” (1) What’s more, some RNAs don’t need the spliceosome — they can splice themselves, without any assistance from proteins. So why use such an ungainly assemblage if a simpler system would do?


The authors think the evolution of such a complex is well beyond the powers of positive natural selection: “Even Darwin might be reluctant to advance a claim that eukaryotic spliceosomal introns remove themselves more efficiently or accurately from mRNAs than did their self-splicing group II antecedents, or that they achieved this by ‘numerous, successive, slight modifications’ each driven by selection to this end.” (1)


Well, I can certainly agree with them about the unlikelihood of Darwinian processes putting together something as complex as the spliceosome. However, leaving aside the few RNAs involved in the spliceosome, I think their hypothesis of CNE as the cause for the interaction of hundreds of proteins — or even a handful — is quite implausible. (An essay skeptical of large claims for CNE, written from a Darwinian-selectionist viewpoint, has appeared recently (3), along with a response from the authors (4).)


The authors’ rationale for how a protein drifts into becoming part of a larger complex is illustrated by Figure 1 of their recent paper (similar to the single figure in their Science essay). A hypothetical “Protein A” is imagined to be working just fine on its own, when hypothetical “Protein B” serendipitously mutates to bind to it. This interaction, postulate the authors, is neutral, neither helping nor harming the ability of Protein A to do its job. Over the generations Protein A eventually suffers a mutation that would have decreased or eliminated its activity. However, because Protein B is bound to it, the mutation does not harm the activity of Protein A. The authors still envision this as a neutral interaction, and organisms containing the Protein A-Protein B complex drift to fixation in the population. Then other mutations come along, co-adapting the structures of Protein A and Protein B to each other. At this point the AB complex is necessary for the activity of Protein A. Repeat this process several hundred more times with other proteins, and you’ve built up a protein aggregate with complexity of the order of the spliceosome.
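The fixation-by-drift step in this scheme can be made concrete with a toy Wright-Fisher simulation (a minimal sketch; the population size, trial count, and random seed below are arbitrary illustrative choices, not values from the paper). It recovers the textbook result that a single new neutral variant — such as the postulated A-B binding mutation — fixes with probability only about 1/N:

```python
import random

def neutral_fixation_trial(n_pop, rng):
    """Follow one new neutral mutant (initial count = 1) in a Wright-Fisher
    population of n_pop haploid individuals until it is fixed or lost.
    Returns True if the mutant fixes."""
    count = 1
    while 0 < count < n_pop:
        freq = count / n_pop
        # Each of the n_pop offspring independently inherits the mutant
        # allele with probability equal to its current frequency.
        count = sum(1 for _ in range(n_pop) if rng.random() < freq)
    return count == n_pop

def estimate_fixation_prob(n_pop=100, trials=2000, seed=0):
    rng = random.Random(seed)
    fixed = sum(neutral_fixation_trial(n_pop, rng) for _ in range(trials))
    return fixed / trials

# Kimura's result: a single new neutral mutant fixes with probability 1/N,
# so the estimate should land near 0.01 for N = 100.
print(estimate_fixation_prob())
```

Most trials end in loss after a handful of generations; the rare winners take on the order of N generations to fix, which is the "looong time" at issue.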


Is this a reasonable hypothesis? I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. The mutation rates of various organisms — viral, prokaryotic, eukaryotic — are known to sufficient accuracy (5) that estimates of how frequently the envisioned mutations arrive could have been provided. The neutral theory of evolution is also well-developed (6), which would allow the authors to calculate how long it would take for the postulated neutral mutations to spread in a population. Yet no numbers — not even back-of-the-envelope calculations — are provided. Previous results by other workers (7-9) have shown that the development of serendipitous specific binding sites between proteins would be expected to be quite rare, and to involve multiple mutations. Kimura (6) showed that fixation of a mutation by neutral drift would be expected to take a looong time. Neither of these previous results bodes well for the authors’ hypothesis.
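The kind of back-of-the-envelope numbers the paper omits are easy to produce from Kimura's standard haploid approximations. Here is an illustrative sketch (the mutation rate and effective population size are assumed round figures for a eukaryote, not data from any cited study):

```python
def neutral_substitution_times(mu_per_site, n_pop):
    """Back-of-the-envelope neutral-theory numbers (haploid approximations):
    new copies of a given point mutation arise at rate N*mu per generation,
    each fixes with probability 1/N, so the expected wait for a substitution
    at that site is ~1/mu generations; a copy destined to fix then takes on
    the order of 2*N generations to drift all the way there."""
    expected_wait = 1.0 / mu_per_site   # generations until the lucky copy arises
    drift_time = 2.0 * n_pop            # generations for it to drift to fixation
    return expected_wait, drift_time

# Assumed round numbers: mu ~ 1e-9 per site per generation,
# effective population size ~ 1e6.
wait, drift = neutral_substitution_times(1e-9, 1e6)
print(f"wait for substitution ~ {wait:.0e} gens; drift to fixation ~ {drift:.0e} gens")
```

Even for a single specific neutral change the expected wait is on the order of a billion generations under these assumptions; a serendipitous binding interface requiring several specific changes compounds that wait accordingly.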


The second thing to notice about the paper is that there is no experimental support for its hypothesis. As the authors point out:



Development of in vitro experimental systems with which to test CNE will be an important step forward in distinguishing complex biology that arose due to adaptation versus nonadaptive complexity, as part of a larger view to understand the interplay between neutral and adaptive evolution, such as the intriguing long-term evolution experiments of Lenski and coworkers. (1)



Yet no such experimental evolutionary results have been reported to my knowledge, either by Lenski or by other workers (10).


Besides the lack of support from calculations or experiments, the authors discuss no possible obstacles to the scheme. I certainly understand that workers want to accentuate the positive when putting a new model forward, but potential pitfalls should be pointed out, so that other researchers have a clearer idea of the promise of the model before they invest time in researching it.


The first possible pitfall comes at the first step of the model, where a second protein is postulated to bind in a neutral fashion to a working protein. How likely is that step to be neutral? At the very least, we now have two proteins, A and B, each with a large part of its surface obstructed that wasn’t before. Will this interfere with their activities? It seems there is a good chance. Second, simply by Le Chatelier’s principle the binding of the two proteins must affect the free energies of their folded states. What’s more, the flexibility of both proteins must be affected. Will these individual effects serendipitously cancel out so that the overall effect will be neutral? It seems like an awful lot to ask for without evidence.


In the next step of the model Protein A is supposed to suffer a mutation that would have caused it to lose activity, but, luckily, when it is bound to Protein B it is stabilized enough so that activity is retained. What fraction of possible mutations to Protein A would fall in that range? It seems like a very specialized subfraction. Looking at the flip side, what fraction of mutations to Protein A and/or Protein B which otherwise would not have caused A to lose activity will now do so because of its binding to Protein B?


The last step of the model is the “co-adaptation” of the two proteins, where other, complementary mutations occur in both proteins. Yet this implies that the protein complex must suffer deleterious mutations at least every other step, provoking the “co-adaptive” mutation to fix in the population. Wouldn’t these deleterious mutations be very unlikely to spread in the population?


Finally, multiply these problems all by a hundred to get a spliceosome. Or, rather, raise these problems to the hundredth power. But, then, why stop at a hundred? As the authors note approvingly:



Indeed, because CNE is a ratchet-like process that does not require positive selection, it will inevitably occur in self-replicating, error prone systems exhibiting sufficient diversity, unless some factor prevents it. (1)



Why shouldn’t the process continue, folding in more and more proteins, until the cell congeals? I suppose the authors would reply, “some factor prevents it”. But might not that factor kick in at the first or second step? The authors give us no reason to think it wouldn’t.


The CNE model (at least on the scale envisioned by the authors) faces other problems as well (for example, it would be a whole lot easier to develop binding sites for metal ions or metabolites that are present in the cell at much higher concentrations than most proteins), but I think this is enough to show it may not be as promising as the article would have one believe.


Besides the model itself, it is interesting to look at a professed aspect of the motivation of the authors in proposing it. It may not have escaped your notice, dear reader, that “irremediable complexity” sort of sounds like “irreducible complexity”. In fact, the authors put the model forward as their contribution to the good fight against “antievolutionists”:



… continued failure to consider CNE alternatives impoverishes evolutionary discourse and, by oversimplification, actually makes us more vulnerable to critiques by antievolutionists, who like to see such complexity as “irreducible.” (1)



So there you have it. The authors don’t think Darwin can explain such complexity as is found in the spliceosome, and they apparently rule out intelligent design. (By the way, when will these folks ever grasp the fact that intelligent design is not “antievolution”?) “Irremediable complexity” seems to be all that’s left, no matter how unsupported and problematic it may be.


Although the authors seem not to notice, their entire model is built on a classic argument from ignorance, beginning with the definition of irremediable complexity:



“irremediable complexity”: the seemingly gratuitous, indeed bewildering, complexity that typifies many cellular subsystems and molecular machines, particularly in eukaryotes. (1)



“Seemingly gratuitous”. In other words, the authors don’t know of a function for the complexity of some eukaryotic subsystems; therefore, they don’t have functions. Well, the history of arguments asserting that something or other in biology is functionless is pretty grim. Moreover, the history of assertions that even “simple” things in the cell (like, say, DNA, pre-1930) either don’t have a function or are just supporting structures is abysmal. Overwhelmingly, progress in biology has consisted of finding new and ever-more-sophisticated properties of systems that had been thought simple. If apparently simple systems are much more complex than they initially seemed, I would bet heavily against the hypothesis that apparently complex systems are much simpler than they appear.


References


1.  Lukes, J., J. M. Archibald, P. J. Keeling, W. F. Doolittle, and M. W. Gray, 2011 How a neutral evolutionary ratchet can build cellular complexity. IUBMB Life 63: 528-537.
2.  Gray, M. W., J. Lukes, J. M. Archibald, P. J. Keeling, and W. F. Doolittle, 2010 Cell biology. Irremediable complexity? Science 330: 920-921.
3.  Speijer, D., 2011 Does constructive neutral evolution play an important role in the origin of cellular complexity? Making sense of the origins and uses of biological complexity. Bioessays 33: 344-349.
4.  Doolittle, W. F., J. Lukes, J. M. Archibald, P. J. Keeling, and M. W. Gray, 2011 Comment on “Does constructive neutral evolution play an important role in the origin of cellular complexity?” Bioessays 33: 427-429.
5.  Drake, J. W., B. Charlesworth, D. Charlesworth, and J. F. Crow, 1998 Rates of spontaneous mutation. Genetics 148: 1667-1686.
6.  Kimura M., 1983 The neutral theory of molecular evolution. Cambridge University Press, Cambridge.
7.  Nissim, A., H. R. Hoogenboom, I. M. Tomlinson, G. Flynn, C. Midgley, D. Lane, and G. Winter, 1994 Antibody fragments from a ‘single pot’ phage display library as immunochemical reagents. EMBO Journal 13: 692-698.
8.  Griffiths, A. D., S. C. Williams, O. Hartley, I. M. Tomlinson, P. Waterhouse, W. L. Crosby, R. E. Kontermann, P. T. Jones, N. M. Low, T. J. Allison, and G. Winter, 1994 Isolation of high affinity human antibodies directly from large synthetic repertoires. EMBO Journal 13: 3245-3260.
9.  Smith, G. P., S. U. Patel, J. D. Windass, J. M. Thornton, G. Winter, and A. D. Griffiths, 1998 Small binding proteins selected from a combinatorial repertoire of knottins displayed on phage. Journal of Molecular Biology 277: 317-332.
10.  Behe, M. J., 2010 Experimental Evolution, Loss-of-function Mutations, and “The First Rule of Adaptive Evolution”. Quarterly Review of Biology 85: 1-27.
Published on August 19, 2011 14:56

April 15, 2011

Richard Lenski, "evolvability", and tortuous Darwinian pathways

Several papers on the topic of "evolvability" have been published relatively recently by the laboratory of Richard Lenski. (1, 2) Most readers of this site will quickly recognize Lenski as the Michigan State microbiologist who has been growing cultures of E. coli for over twenty years in order to see how they would evolve, patiently transferring a portion of each culture to new media every day, until the aggregate experiment has now passed 50,000 generations. I'm a huge fan of Lenski et al's work because, rather than telling Just-So stories, they have been doing the hard laboratory work that shows us what Darwinian evolution can and likely cannot do.


The term "evolvability" has been used widely and rather loosely in the literature for the past few decades. It usually means something like the following: a species possesses some biological feature which lends itself to evolving more easily than other species that don't possess the feature, so that the lucky species will tend to adapt and survive better than its rivals over time. The kind of feature that is most often invoked in this context is "modularity." That word itself is often used in a vague manner. As I wrote in The Edge of Evolution, "Roughly, a module is a more-or-less self-contained biological feature that can be plugged into a variety of contexts without losing its distinctive properties. A biological module can range from something very small (such as a fragment of a protein), to an entire protein chain (such as one of the subunits of hemoglobin), to a set of genes (such as Hox genes), to a cell, to an organ (such as the eyes or limbs of Drosophila)." (3)


Well, Lenski and co-workers don't use "evolvability" in that sense. They use the term in a much broader sense: "Evolutionary potential, or evolvability, can be operationally defined as the expected degree to which a lineage beginning from a particular genotype will increase in fitness after evolving for a certain time in a particular environment." (1) To put it another way, in their usage "evolvability" means how much an organism will increase in fitness over a defined time starting from genotype A versus starting from genotype B, no matter whether genotypes A and B have any particular identifiable feature such as modularity or not.


Lenski's group published a very interesting paper last year showing that the more defective a starting mutant was in a particular gene (rpoB, which encodes a subunit of RNA polymerase), the more "evolvable" it was. (2) That is, more-crippled cells could gain more in fitness than less-crippled cells. But none of the evolved crippled cells gained enough fitness to match the uncrippled parent strain. Thus it seemed that more-crippled cells could gain more fitness simply because they started from further back than less-crippled ones. Compensatory mutations would pop up somewhere in the genome until the evolving cell was near to its progenitor's starting point. This matches the results of some viral evolution studies where some defective viruses could accumulate compensatory mutations until they were similar in fitness to the starting strain, whether they began with one-tenth or one-ten-billionth of the original fitness. (4)


In a paper published a few weeks ago the Michigan State group took a somewhat different experimental tack. (1) They isolated a number of cells from relatively early in their long-term evolution experiment. (Every 500th generation during the 50,000-generation experiment Lenski's group would freeze away the portion of the culture which was left over after they used a part of it to seed a flask to continue the growth. Thus they have a very complete evolutionary record of the whole lineage, and can go back and conduct experiments on any part of it whenever they wish. Neat!) They saw that different mutations had cropped up in different early cells. Interestingly, the mutations which gave the greatest advantage early on had become extinct after another 1,000 generations. So Lenski's group decided to investigate why the early very-beneficial mutations were nonetheless not as "evolvable" (because they were eventually outcompeted by other lineages) as cells with early less-beneficial mutations.


The workers examined the system thoroughly, performing many careful experiments and controls. (I encourage everyone to read the whole paper.) The bottom line, however, is this: a mutation at one particular amino acid residue of a protein called a "topoisomerase" (which helps control the "twistiness" of DNA in the cell), rather than at a different residue of the same protein, interfered with the ability of a subsequent mutation in a gene for a second protein (called spoT) to help the bacterium increase in fitness. In other words, getting the "wrong" mutation in topoisomerase — even though that mutation by itself did help the bacterium — prevented a mutation in spoT from helping. Getting the "right" mutation in topoisomerase allowed a mutation in spoT to substantially increase the fitness of the bacterium.


The authors briefly discuss the results (the paper was published in Science, which doesn't allow much room for discussion) in terms of "evolvability", understood in their own sense. (1) They point out that the strain with the right topoisomerase mutation was more "evolvable" than the one with the wrong topoisomerase mutation, because it outcompeted the other strain. That is plainly correct, but does not say anything about "evolvability" in the more common and potentially-much-more-important sense of an organism possessing modular features that help it evolve new systems. "Evolvability" in the more common sense has not been tested experimentally in a Lenski-like fashion.


In my own view, the most interesting aspect of the recent Lenski paper is its highlighting of the pitfalls that Darwinian evolution must dance around, even as it is making an organism somewhat more fit. (1) If the "wrong" advantageous mutation in topoisomerase had become fixed in the population (by perhaps being slightly more advantageous or more common), then the "better" selective pathway would have been shut off completely. And since this phenomenon occurred in the first instance where anyone had looked for it, it is likely to be commonplace. That should not be surprising to anyone who thinks about the topic dispassionately. As the authors note, "Similar cases are expected in any population of asexual organisms that evolve on a rugged fitness landscape with substantial epistasis, as long as the population is large enough that multiple beneficial mutations accumulate in contending lineages before any one mutation can sweep to fixation." If the population is not large enough, or other factors interfere, then the population will be stuck on a small peak of the rugged landscape.
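This pitfall can be sketched as a two-locus fitness landscape with sign epistasis (all fitness values below are invented for illustration; only the gene names come from the Science paper). A greedy walk that always fixes the single best available beneficial mutation takes the "wrong" topoisomerase mutation and gets trapped on a local peak:

```python
# Two-locus haploid fitness landscape with sign epistasis. Values are
# invented for illustration; only the gene names come from the paper.
FITNESS = {
    ("wt",    "wt"):   1.00,
    ("wrong", "wt"):   1.10,  # "wrong" topoisomerase mutation: bigger immediate gain
    ("right", "wt"):   1.05,  # "right" topoisomerase mutation: smaller immediate gain
    ("wt",    "spoT"): 0.95,  # spoT mutation alone is deleterious
    ("wrong", "spoT"): 1.08,  # spoT hurts on the "wrong" background
    ("right", "spoT"): 1.25,  # spoT helps a lot on the "right" background
}

def greedy_walk(start):
    """Repeatedly fix the available single mutation with the highest fitness
    (a caricature of strong selection) until no mutation is beneficial."""
    topo, spot = start
    while True:
        neighbors = []
        if topo == "wt":
            neighbors += [("wrong", spot), ("right", spot)]
        if spot == "wt":
            neighbors.append((topo, "spoT"))
        better = [g for g in neighbors if FITNESS[g] > FITNESS[(topo, spot)]]
        if not better:
            return (topo, spot)
        topo, spot = max(better, key=FITNESS.get)

print(greedy_walk(("wt", "wt")))  # → ('wrong', 'wt'): stuck below the 1.25 peak
```

Starting instead from the genotype that already carries the "right" mutation, the same walk climbs to the global peak, which is the sense in which one lineage was more "evolvable" than the other.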


This fits well with recent work by Lenski's and others' laboratories, showing that most beneficial mutations actually break or degrade genes (4), and also with work by Thornton's group showing that random mutation and natural selection likely could not transform a steroid hormone receptor back into its homologous ancestor, even though both have very similar structures and functions, because the tortuous evolutionary pathway would be nearly impossible to traverse. (5, 6) The more that is learned about Darwin's mechanism at the molecular level, the more ineffectual it is seen to be.


1. Woods, R. J., J. E. Barrick, T. F. Cooper, U. Shrestha, M. R. Kauth, and R. E. Lenski.  2011 Second-order selection for evolvability in a large Escherichia coli population. Science 331: 1433-1436.


2. Barrick, J. E., M. R. Kauth, C. C. Strelioff, and R. E. Lenski, 2010 Escherichia coli rpoB mutants have increased evolvability in proportion to their fitness defects. Molecular Biology and Evolution 27: 1338-1347.


3. Behe M. J., 2007 The Edge of Evolution: the search for the limits of Darwinism. Free Press, New York.


4. Behe, M. J., 2010 Experimental Evolution, Loss-of-function Mutations, and "The First Rule of Adaptive Evolution". Quarterly Review of Biology 85: 1-27.


5. Bridgham, J. T., E. A. Ortlund, and J. W. Thornton, 2009 An epistatic ratchet constrains the direction of glucocorticoid receptor evolution. Nature 461: 515-519.


6. See my comments on Thornton's work at the middle of http://behe.uncommondescent.com/page/2/ and the bottom of http://behe.uncommondescent.com/.

Published on April 15, 2011 17:46



In my own view, the most interesting aspect of the recent Lenski paper is its highlighting of the pitfalls that Darwinian evolution must dance around, even as it is making an organism somewhat more fit. (1) If the “wrong” advantageous mutation in topoisomerase had become fixed in the population (by perhaps being slightly more advantageous or more common), then the “better” selective pathway would have been shut off completely. And since this phenomenon occurred in the first instance where anyone had looked for it, it is likely to be commonplace. That should not be surprising to anyone who thinks about the topic dispassionately. As the authors note, “Similar cases are expected in any population of asexual organisms that evolve on a rugged fitness landscape with substantial epistasis, as long as the population is large enough that multiple beneficial mutations accumulate in contending lineages before any one mutation can sweep to fixation.” If the population is not large enough, or other factors interfere, then the population will be stuck on a small peak of the rugged landscape.


This fits well with recent work by Lenski’s and others’ laboratories, showing that most beneficial mutations actually break or degrade genes (4), and also with work by Thornton’s group showing that random mutation and natural selection likely could not transform a steroid hormone receptor back into its homologous ancestor, even though both have very similar structures and functions, because the tortuous evolutionary pathway would be nearly impossible to traverse. (5, 6) The more that is learned about Darwin’s mechanism at the molecular level, the more ineffectual it is seen to be.


1. Woods, R. J., J. E. Barrick, T. F. Cooper, U. Shrestha, M. R. Kauth, and R. E. Lenski, 2011 Second-order selection for evolvability in a large Escherichia coli population. Science 331: 1433-1436.


2. Barrick, J. E., M. R. Kauth, C. C. Strelioff, and R. E. Lenski, 2010 Escherichia coli rpoB mutants have increased evolvability in proportion to their fitness defects. Molecular Biology and Evolution 27: 1338-1347.


3. Behe, M. J., 2007 The Edge of Evolution: The Search for the Limits of Darwinism. Free Press, New York.


4. Behe, M. J., 2010 Experimental Evolution, Loss-of-function Mutations, and “The First Rule of Adaptive Evolution”. Quarterly Review of Biology 85: 1-27.


5. Bridgham, J. T., E. A. Ortlund, and J. W. Thornton, 2009 An epistatic ratchet constrains the direction of glucocorticoid receptor evolution. Nature 461: 515-519.


6. See my comments on Thornton’s work at the middle of http://behe.uncommondescent.com/page/2/ and the bottom of http://behe.uncommondescent.com/.

Published on April 14, 2011 18:18

January 12, 2011

Even more from Jerry Coyne

In my last post I reported that University of Chicago evolutionary biologist Jerry Coyne, who had critiqued (http://tinyurl.com/2fjenlt) my recent Quarterly Review of Biology article (http://tinyurl.com/25c422s) concerning laboratory evolution studies of the last four decades and what they show us about evolution, had asked several other prominent scientists for comments (http://tinyurl.com/2cyetm7). I replied (http://tinyurl.com/4lq8sre) to those of experimental evolutionary biologist John Bull. In a subsequent post Coyne discussed (http://tinyurl.com/4tqoq7c) a recent paper (http://tinyurl.com/4shw456) by the group of fellow University of Chicago biologist Manyuan Long on gene duplication in fruitflies. After a bit of delay due to the holidays, I will comment on that here.


Try as one might to keep Darwinists focused on the data, some can't help reverting to their favorite trope: questioning Darwinism simply must be based on religion. Unfortunately Professor Coyne succumbs to this. Introducing his blog post he writes:


What role does the appearance of new genes, versus simple changes in old ones, play in evolution? There are two reasons why this question has recently become important…. The first involves a scientific controversy…. The second controversy is religious. Some advocates of intelligent design (ID)—most notably Michael Behe in a recent paper—have implied not only that evolved new genes or new genetic "elements" (e.g., regulatory sequences) aren't important in evolution, but that they play almost no role at all, especially compared to mutations that simply inactivate genes or make small changes, like single nucleotide substitutions, in existing genes. This is based on the religiously-motivated "theory" of ID, which maintains that new genetic information cannot arise by natural selection, but must installed [sic] in our genome by a magic poof from Jebus. [sic]



Anyone who reads the paper, however, knows my conclusions were based on the reviewed experiments of many labs over decades. Even Coyne knows this. In the very next sentence he writes, schizophrenically, "I've criticized Behe's conclusions, which are based on laboratory studies of bacteria and viruses that virtually eliminated the possibility of seeing new genes arise, but I don't want to reiterate my arguments here." Yet if my conclusions are based on "laboratory studies", then they ain't "religious", even if Coyne disagrees with them.


Professor Coyne is so upset, he imagines things that aren't in the paper. (They are "implied", you see.) So although I haven't actually written it, supposedly I have "implied not only that evolved new genes or new genetic 'elements' … aren't important in evolution, but that they play almost no role at all…." [Coyne's emphasis]


"Play almost no role at all"? When I first read these "implied" words that Coyne wants to put in my mouth, I thought the argumentative move rang a bell. Sure enough, check out the Dilbert comic strip from November 1, 2001 (http://tinyurl.com/6y6upgc), where Dilbert complains that a co-worker "changed what I said into a bizarre absolute." If one person says that an event is "very unlikely", and an interlocutor rephrases that into "so, you say it's logically impossible and would never happen even in an infinite multiverse", well then, the second fellow is setting up a straw man.


Contrast Coyne's imagined "implications" with what I actually wrote in the review. Considering possible objections to my conclusions I noted that:


A third objection could be that the time and population scales of even the most ambitious laboratory evolution experiments are dwarfed when compared to those of nature. It is certainly true that, over the long course of history, many critical gain-of-FCT events occurred. However, that does not lessen our understanding, based upon work by many laboratories over the course of decades, of how evolution works in the short term, or of how the incessant background of loss-of-FCT mutations may influence adaptation.



Although I think that statement is clear enough in the context of the paper, let me say it differently in case some folks are confused. Loss-of-function (LOF) mutations occur relatively rapidly, and LOF mutations can be adaptive. Gain-of-function (GOF) mutations can be adaptive, too, but their rate of occurrence (including the rate of gene duplication-plus-divergence that Coyne is discussing) is much lower. Thus whenever a new selective pressure pops up, LOF adaptive mutations (if such there be in the particular circumstance) can appear most swiftly, and will likely dominate short-term adaptation. So when a GOF mutation eventually appears, it will likely be against the altered genetic background of the selective pressure ameliorated by the adaptive LOF mutation(s). To understand how evolution works in the long term, we must take that into consideration.


Professor Coyne notes that the new genes studied by Professor Long "arise quickly, at least on an evolutionary timescale". [my emphasis] But adaptive LOF mutations arise quickly even on a laboratory timescale. For example, as I note in my QRB review, in one experiment (http://tinyurl.com/4zyxt66) adaptive mutations in E. coli cultures due to loss of function mutations in the rpoS gene "occurred, and indeed spread at rapid rates within a few generations of establishing glucose-limited chemostats". A few generations for E. coli can be on the order of hours. The gene duplications studied by Professor Long occur on the order of millions of years. Admittedly the situation in nature is more complex than in the laboratory. Nonetheless, whatever selective pressures the gene duplications encounter when they eventually show up will already have been substantially altered by adaptive LOF mutations. That's a very important point for evolutionary biologists to keep in mind.


I have never stated, nor do I think, that gene duplication and diversification cannot happen by Darwinian mechanisms, or that "they play almost no role at all" in the unfolding of life. (As a matter of fact, I discussed several examples of that in my 2007 book The Edge of Evolution. (http://tinyurl.com/4nqxhvr)) That would be silly — why would anyone with knowledge of basic biochemical mechanisms deny that, say, the two gamma-globin coding regions on human chromosome 11 resulted from the duplication of a single gamma-globin gene and then the alteration of a single codon? What I don't think can happen is that duplication/divergence by Darwinian mechanisms can build new, complex interactive molecular machines or pathways. Assuming (since he is in fact critiquing them) that Professor Coyne has been attentive to my arguments, one background assumption that he may have left unexpressed is that he thinks the newer duplicated genes discovered by Professor Long's excellent work represent such complex entities, or parts of them.


There is no reason to think so. A gene can duplicate and diversify without building a new machine or network, or even changing function much. The above example of the two gamma-globin genes shows that duplication does not necessarily result in change in function. The examples of delta- and epsilon-globin, which, like gamma-globin, presumably also resulted from the duplication of an ancestral beta-like globin gene, show that sequence can diversify further while function remains very similar. Even myoglobin, which shares rather little sequence homology with the other globins, has not diverged much in biochemical function.


In his recent work Professor Long discovered that many of the new genes were essential for the viability of the organism — without the gene product, the fruitflies would die before maturity. Perhaps Professor Coyne thinks that that means the genes necessarily are parts of complex systems, or at least do something fundamentally new. Again, however, there is no reason to think so. The notion of "essential" genes is at best ambiguous. We know of examples of proteins that surely appear necessary, but whose genes are dispensable. The classic example is myoglobin (http://tinyurl.com/4ogmd98). It is also easy to conceive of a simple route to an "essential" duplicate gene that does little new. Suppose, for example, that some gene was duplicated. Although the duplication caused the organism to express more of the protein than was optimum, subsequent mutations in the promoter or protein sequence of one or both of the copies decreased the total activity of the protein to pre-duplication levels. Now, however, if one of the copies is deleted, there is not enough residual protein activity for the organism to survive. The new copy is now "essential", although it does nothing that the original did not do.
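The hypothetical route just described can be put in numbers. These figures are invented purely for illustration:

```python
# Numeric sketch of the hypothetical route to an "essential" duplicate gene
# described above; every value is invented for illustration.

optimum = 100      # total protein activity the cell does best with
viable_min = 60    # below this total activity the organism dies

ancestral = 100               # the single original gene is tuned to the optimum
after_duplication = 2 * 100   # duplication overshoots: 200 units, above optimum

# subsequent promoter or coding-sequence mutations knock each copy down
# until the combined activity is back near the pre-duplication level
copy_a, copy_b = 55, 50
total = copy_a + copy_b       # 105 units: roughly back to the optimum

# but now delete either copy: the survivor alone falls below viability
assert copy_a < viable_min and copy_b < viable_min
print("each copy is now 'essential', though neither does anything new")
```

Knocking out either copy kills the organism, so both score as "essential" genes, yet no new biochemical activity has appeared.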


To sum up, the important point of "Experimental Evolution, Loss-of-Function Mutations, and 'The First Rule of Adaptive Evolution'" (http://tinyurl.com/25c422s) is not that anything in particular in evolution is absolutely ruled out. Rather, the point is that short term adaptation tends to be dominated by LOF mutations. And, tinkerer that it is, Darwinian evolution always works in the short term.


Here's an analogy that some people might find amusing and helpful. Think of GOF mutations (such as the gene duplication/divergence that Professor Coyne discusses) as the "snail mail" of evolution. And think of LOF mutations as the email, texting, and phone calls of evolution. In a busy world, by the time a real letter shows up at someone's or some business's door, a lot of communication concerning the subject of the slow letter would already have happened by faster means, and the more important the topic, the more fast-communication there likely would have been. That speedy communication can quite easily change the context of the letter, and either render it moot or at least less important. It is certainly possible that on occasion the slow letter will arrive with its impact unaffected by other messages, but it would be foolish to ignore the effect of the fast channels of communication.

Published on January 12, 2011 20:14

December 24, 2010

More from Jerry Coyne

At his blog (http://tinyurl.com/2cyetm7) University of Chicago professor of evolutionary biology Jerry Coyne has commented on my reply (http://tinyurl.com/383zqm7) to his analysis (http://tinyurl.com/2fjenlt) of my new review (http://tinyurl.com/25c422s) in the Quarterly Review of Biology. This time he has involved two other prominent scientists in the conversation. I'll discuss the comments of one of them in this post and the other in a second post. The first one is University of Texas professor of molecular biology James J. Bull, who works on the laboratory evolution of bacterial viruses (phages). I reviewed a number of Bull's fascinating papers in the recent QRB publication. Coyne solicited Prof. Bull's comments and put them up on his blog (http://tinyurl.com/2cyetm7). Bull says several nice things about my review, but agrees with Prof. Coyne that he wouldn't expect "novelty" in the lab evolutionary experiments he and others conducted, and he thinks they are not a good model of how evolution occurs in nature. (I wonder if he mentions this in his grant proposals….)


Prof. Bull states that bacteriophage T7 (which he used in his studies) avoids taking up DNA from its host, E. coli, so it really isn't an example of a system where novel DNA was available to the phage, despite his initial hopes that it would be. (In the paper describing the work he and his co-authors wrote, "At the outset, our expectation from work in other viral systems was that the loss of ligase activity would … require the [T7] genome to acquire new sequences through recombination or gene duplication.") But, he writes in his new post, "what we failed to point out in our paper, and is fatal to MB's criticism, is the fact that T7 degrades E. coli DNA, so even if the phage did incorporate an E. coli gene, it might well destroy itself in the next infection." This reasoning strikes me as overlooking an obvious problem, and overlooking an obvious solution to the problem.


First, the problem. If T7, and presumably other bacteriophages, find it advantageous to have a mechanism that excludes host DNA from being incorporated into the phage genome, doesn't this drastically cut down the opportunity for the very mixing of cross-species DNA that Coyne and Bull tout as the Darwinian solution to the problem of developing complex new functions? I suppose they could respond that, well, maybe the phages can't exclude other, non-host DNA, so that's where novel DNA would come from. But it seems host DNA would be by far the DNA the phages contacted the most. But if that is essentially excluded as a source, then the sorts of compensatory mutations that Professor Bull observed in his experiments are still by far the most likely ones to occur in nature. (And the grant application is saved!) It's a matter of rate. The adaptive mutations that come along first will be selected first, and clearly point mutations and deletions come along very rapidly in phage populations.


Next, the solution. If a phage has a mechanism that is preventing it from taking up DNA that could be advantageous to it (such as the gene for a DNA ligase in the case of the experiment of Rokyta et al 2002 (http://tinyurl.com/3adlq6c)) then all it has to do is break that mechanism and the opportunity for acquiring DNA is now opened to it. After all, breaking things is what random mutation does best, and, as I reviewed, many of the reported adaptive mutations in lab evolution experiments resulted from broken genes. Broken genes can also be neutral mutations. In the majority of the cultures of E. coli that Richard Lenski has grown for 50,000 generations, "mutator" strains took over. A mutator strain is one which has lost at least part of its ability to repair its DNA. If E. coli can toss out part of its repair ability with impunity, why couldn't T7 lose its inability to take up some helpful host DNA?


Professor Bull suggests (http://tinyurl.com/2cyetm7) that lab evolution experiments which use whole cells and viruses aren't needed to show the power of Darwinian processes because that is apparent in experiments using "directed" evolution. I strongly disagree with his assessment. In directed evolution workers use an experimental set-up so that a single, particular gene or protein must mutate to be adaptive. "Directed" evolution is a much, much more artificial system than ones that use whole cells and/or viruses, as he did. In response to some selective pressure, a cell has potentially very many more ways to adapt to deal with it than does a single protein — a cell has thousands of genes and thousands of regulatory elements that can potentially help the cell adapt by gain- or loss-of-function, or tweaking of pre-existing function. On the other hand, directed evolution artificially constrains the system to mutate the component that the experimenter chooses. It seems a bit inconsistent to me for someone to claim that single species of cells (and/or viruses) are insufficiently complex to produce gain-of-function mutations by Darwinian processes, but that artificially constraining mutation/selection to single genes or proteins shows it clearly. Seems to me this is exactly backwards.


In his post (http://tinyurl.com/2cyetm7) Professor Bull describes an experiment (http://tinyurl.com/38qhow9) he did with coworkers which, they hoped, would mimic the process of gene duplication and divergence. They placed two copies of the same gene, each on its own kind of plasmid, into the same cell. The gene produced a protein that could disable one kind of antibiotic very well, and disable a second kind of antibiotic rather poorly. In the presence of both antibiotics, they expected one of the copies of the gene to stay about the same, degrading the first antibiotic. They expected the second copy of the gene to accumulate point mutations which would help it become more efficient at degrading the second kind of antibiotic (such mutants were already known to exist from other publications). The system, however, had its own ideas. Bull says that contrary to expectations, one of the genes was deleted and the other gene accumulated point mutations so that it did a decent job degrading both antibiotics.


Professor Bull writes (http://tinyurl.com/2cyetm7) that, "This study merely illustrates that the conditions favoring the maintenance of two copies undergoing evolutionary divergence are delicate." Skeptic that I am, instead of "delicate", I would say it illustrates that the conditions are "rare". That is, it demonstrates very nicely that having two copies of a gene under what seem to be ideal conditions for adaptive divergence is not enough. (A similar result using a different system was recently obtained by Gauger et al 2010 (http://tinyurl.com/2uc6d8g).) Other factors enter into the result as well. Since we don't know exactly what those other factors are, or how rare they make successful duplication/divergence events, we should not automatically assume that the occurrence of duplicated and diverged genes in nature happened by unguided, Darwinian processes.

Published on December 24, 2010 17:50
