Symposium on the theory of intelligent design [TDI] at Universidade Presbiteriana Mackenzie in São Paulo, SP: December 4 and 5, 2015

Monday, November 30, 2015



Another phylogenomic study rejects the paradigm on the origin of orb-weaving spiders

Phylogenomics Resolves a Spider Backbone Phylogeny and Rejects a Prevailing Paradigm for Orb Web Evolution

Jason E. Bond4 (corresponding author), Nicole L. Garrison4, Chris A. Hamilton, Rebecca L. Godwin, Marshal Hedin, Ingi Agnarsson

4Co-first author





Highlights

•Phylogenomic data are presented for taxa representing all major spider lineages

•Concatenation and species tree analyses resolve a consistent backbone phylogeny

•The orb web is reconstructed as ancestral for a clade including most spider diversity

•Divergence time estimates place the origin of orb webs in the Lower Jurassic

Summary

Spiders represent an ancient predatory lineage known for their extraordinary biomaterials, including venoms and silks. These adaptations make spiders key arthropod predators in most terrestrial ecosystems. Despite ecological, biomedical, and biomaterial importance, relationships among major spider lineages remain unresolved or poorly supported [1]. Current working hypotheses for a spider “backbone” phylogeny are largely based on morphological evidence, as most molecular markers currently employed are generally inadequate for resolving deeper-level relationships. We present here a phylogenomic analysis of spiders including taxa representing all major spider lineages. Our robust phylogenetic hypothesis recovers some fundamental and uncontroversial spider clades, but rejects the prevailing paradigm of a monophyletic Orbiculariae, the most diverse lineage, containing orb-weaving spiders. Based on our results, the orb web either evolved much earlier than previously hypothesized and is ancestral for a majority of spiders or else it has multiple independent origins, as hypothesized by precladistic authors. Cribellate deinopoid orb weavers that use mechanically adhesive silk are more closely related to a diverse clade of mostly webless spiders than to the araneoid orb-weaving spiders that use adhesive droplet silks. The fundamental shift in our understanding of spider phylogeny proposed here has broad implications for interpreting the evolution of spiders, their remarkable biomaterials, and a key extended phenotype—the spider web.

Received: April 23, 2014; Received in revised form: June 12, 2014; Accepted: June 12, 2014; Published Online: July 17, 2014

© 2014 Elsevier Ltd. Published by Elsevier Inc.

FREE PDF GRATIS: Current Biology

Phylogenomic analysis reveals that orb-weaving spiders do not share the same origin

Phylogenomic Analysis of Spiders Reveals Nonmonophyly of Orb Weavers

Rosa Fernández (corresponding author), Gustavo Hormiga, Gonzalo Giribet




  
Highlights

•We present a phylogenomic study of spider relationships using multiple approaches

•Our results reveal that orb weavers are not monophyletic

•Either the orbicular web evolved twice or its origin is ancestral

•Potentially confounding factors in phylogenetics had no effect on our results

Summary

Spiders constitute one of the most successful clades of terrestrial predators [1]. Their extraordinary diversity, paralleled only by some insects and mites [2], is often attributed to the use of silk, and, in one of the largest lineages, to stereotyped behaviors for building foraging webs of remarkable biomechanical properties [1]. However, our understanding of higher-level spider relationships is poor and is largely based on morphology [2–4]. Prior molecular efforts have focused on a handful of genes [5, 6] but have provided little resolution to key questions such as the origin of the orb weavers [1]. We apply a next-generation sequencing approach to resolve spider phylogeny, examining the relationships among its major lineages. We further explore possible pitfalls in phylogenomic reconstruction, including missing data, unequal rates of evolution, and others. Analyses of multiple data sets all agree on the basic structure of the spider tree and all reject the long-accepted monophyly of Orbiculariae, by placing the cribellate orb weavers (Deinopoidea) with other groups and not with the ecribellate orb weavers (Araneoidea). These results imply independent origins for the two types of orb webs (cribellate and ecribellate) or a much more ancestral origin of the orb web with subsequent loss in the so-called RTA clade. Either alternative demands a major reevaluation of our current understanding of the spider evolutionary chronicle.

Received: May 1, 2014; Received in revised form: June 12, 2014; Accepted: June 12, 2014; Published Online: July 17, 2014

© 2014 Elsevier Ltd. Published by Elsevier Inc.

FREE PDF GRATIS: Current Biology

The evolution of Darwin's theory of evolution: Cambridge releases 12,000 documents online

The evolution of Darwin’s Origin: Cambridge releases 12,000 papers online

Portrait of Charles Darwin - Credit: Cambridge University Library

The origins of Darwin’s theory of evolution – including the pages where he first coins and commits to paper the term ‘natural selection’ – are being made freely available online today in one of the most significant releases of Darwin material in history.

"The information Darwin received, and the discussions he conducted in these letters played a crucial role in the development of his thinking." Alison Pearn

The Cambridge Digital Library (http://bit.ly/1y7q4e1) is releasing more than 12,000 hi-res images, alongside transcriptions and detailed notes, as a result of an international collaboration with the Darwin Manuscripts Project, based at the American Museum of Natural History. These papers chart the evolution of Darwin’s journey, from early theoretical reflections while on board HMS Beagle, to the publication of On the Origin of Species – 155 years ago today.

The launch of Darwin’s papers also marks the end of the first phase of funding for Cambridge’s Digital Library, launched to worldwide acclaim in 2011 with the publication of Isaac Newton’s scientific archive. Initial £1.5m funding for the Digital Library was provided by the Polonsky Foundation. Funding for the digitisation and transcription of the Origin papers was provided by the US National Endowment for the Humanities and National Science Foundation.

Cambridge University Library holds almost the entire collection of Darwin’s working scientific papers and the ones being released today are the most important for understanding the development of his evolutionary theory. They are being published simultaneously on the Cambridge Digital Library and Darwin Manuscripts Project websites, with a further release planned for June 2015, covering the notes and drafts of his eight post-Origin books.

None of the Darwin documents available from today have hitherto been digitised to the present high standard of full colour and high resolution, and many have never been transcribed or edited before now.

Professor David Kohn, Director of the Darwin Manuscripts Project, said: “These documents truly constitute the surviving seedbed of the Origin. In them, Darwin hammered out natural selection and the structure of concepts he used to support natural selection. It was here also that he developed his evolutionary narrative and where he experimented privately with arguments and strategies of presentation that he either rejected or that eventually saw the light of day with the Origin’s publication on November 24, 1859.”

The current release includes important documents such as the “Transmutation” and “Metaphysical” notebooks of the 1830s and the 1842 “Pencil Sketch” which sees Darwin’s first use of the term “natural selection”.

It was in Transmutation Notebook B, that Darwin first attempted to formulate a full theory of evolution and it was in Notebooks D and E that natural selection began to take form in late 1838 and early 1839. The further maturation of Darwin’s theory is found in the three experiment notebooks he began in the late 1830s and mid 1850s, and above all in a large mass of previously unpublished loose notes, primarily from the 1830s-1850s, which Darwin organised into portfolios that generally parallel the chapters of the Origin.

Also included will be images of nearly 300 of Darwin's letters with transcriptions and notes provided by the Darwin Correspondence Project, an Anglo-American research group also based at Cambridge University.
...

Privileged Species: how the Cosmos is intentionally designed for human life

Sunday, November 29, 2015



Are humans the accidental products of a blind and uncaring universe? Or are they the beneficiaries of a cosmic order that was planned beforehand to help them flourish? Privileged Species is a 33-minute documentary by Discovery Institute that explores growing evidence from physics, chemistry, biology, and related fields that our universe was designed for large multi-cellular beings like ourselves. Featuring geneticist and author Michael Denton, the documentary investigates the special properties of carbon, water, and oxygen that make human life and the life of other organisms possible, and it explores some of the unique features of humans that make us a truly privileged species.

Earth's earliest ecosystems were more complex than previously thought

Saturday, November 28, 2015

Suspension feeding in the enigmatic Ediacaran organism Tribrachidium demonstrates complexity of Neoproterozoic ecosystems

Imran A. Rahman 1, Simon A. F. Darroch 2, 3,*, Rachel A. Racicot 2, 4 and Marc Laflamme 5

- Author Affiliations

1School of Earth Sciences, University of Bristol, Life Sciences Building, 24 Tyndall Avenue, Bristol BS8 1TQ, UK.

2Smithsonian Institution, P. O. Box 37012, MRC 121, Washington, DC 20013–7012, USA.

3Department of Earth and Environmental Sciences, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235–1805, USA.

4The Dinosaur Institute, Natural History Museum of Los Angeles County, Los Angeles, CA 90007, USA.

5Department of Chemical and Physical Sciences, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario L5L 1C6, Canada.

↵*Corresponding author.
E-mail: simon.a.darroch@vanderbilt.edu

Science Advances 27 Nov 2015:

Vol. 1, no. 10, e1500800

DOI: 10.1126/sciadv.1500800

Source/Fonte: Science Advances

Abstract

The first diverse and morphologically complex macroscopic communities appear in the late Ediacaran period, 575 to 541 million years ago (Ma). The enigmatic organisms that make up these communities are thought to have formed simple ecosystems characterized by a narrow range of feeding modes, with most restricted to the passive absorption of organic particles (osmotrophy). We test between competing feeding models for the iconic Ediacaran organism Tribrachidium heraldicum using computational fluid dynamics. We show that the external morphology of Tribrachidium passively directs water flow toward the apex of the organism and generates low-velocity eddies above apical “pits.” These patterns of fluid flow are inconsistent with osmotrophy and instead support the interpretation of Tribrachidium as a passive suspension feeder. This finding provides the oldest empirical evidence for suspension feeding at 555 to 550 Ma, ~10 million years before the Cambrian explosion, and demonstrates that Ediacaran organisms formed more complex ecosystems in the latest Precambrian, involving a larger number of ecological guilds, than currently appreciated.

Keywords: Ediacara, ecology, suspension feeding, ecosystem engineers, computational fluid dynamics

Copyright © 2015, The Authors

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

FREE PDF GRATIS: Science Advances

How snakes lost their legs

The burrowing origin of modern snakes

Hongyu Yi1,2,* and Mark A. Norell2

+ Author Affiliations

↵*Corresponding author. E-mail: v1hyi@staffmail.ed.ac.uk

Science Advances 27 Nov 2015:

Vol. 1, no. 10, e1500743

DOI: 10.1126/sciadv.1500743

Abstract

Modern snakes probably originated as habitat specialists, but it remains controversial whether they were ancestrally terrestrial burrowers or marine swimmers. We used x-ray virtual models of the inner ear to predict the habit of Dinilysia patagonica, a stem snake closely related to the origin of modern snakes. Previous work has shown that modern snakes perceive substrate vibrations via their inner ear. Our data show that D. patagonica and modern burrowing squamates share a unique spherical vestibule in the inner ear, as compared with swimmers and habitat generalists. We built predictive models for snake habit based on their vestibular shape, which estimated D. patagonica and the hypothetical ancestor of crown snakes as burrowers with high probabilities. This study provides an extensive comparative data set to test fossoriality quantitatively in stem snakes, and it shows that burrowing was predominant in the lineages leading to modern crown snakes.

Keywords: crown snake origin, HRCT, inner ear, bony labyrinth, virtual model building, geometric morphometrics, functional morphology, ancestral state reconstructions

Copyright © 2015, The Authors

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

FREE PDF GRATIS: Science Advances

The origin of the first species: the beginning of Darwinian evolution

Thursday, November 26, 2015

Toward the Darwinian transition: Switching between distributed and speciated states in a simple model of early life

Hinrich Arnoldt, Steven H. Strogatz, and Marc Timme

Phys. Rev. E 92, 052909 – Published 13 November 2015


Abstract

It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for stochastic processes potentially contributing to this hypothesized “Darwinian transition.” The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and essentially contributed to the emergence of vertical descent and the first well-defined species in early evolution. A systematic nonlinear analysis of the stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable direction out of early collective evolution, potentially enabling the start of individuality and vertical Darwinian evolution.
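The switch the abstract describes can be caricatured in a few lines of code. The sketch below is a toy model of our own devising, not the authors' model: each update is either a horizontal-transfer event (an individual copies another's genotype) or a selection event (the fitter of two genotypes wins). Lowering the HGT rate lets fit genotypes sweep the population, loosely mimicking the transition from a mixing-dominated collective state toward vertical descent.

```python
import random

# Toy caricature (NOT the model in the paper): a population of n genotypes,
# each summarized by a single "fitness" number. Every update is either
# horizontal transfer (copy a random individual's genotype) or selection
# (the fitter of two randomly chosen genotypes overwrites the other).
random.seed(1)

def simulate(hgt_rate, steps=20000, n=100):
    pop = [random.random() for _ in range(n)]     # initial genotype diversity
    for _ in range(steps):
        i = random.randrange(n)
        if random.random() < hgt_rate:
            pop[i] = random.choice(pop)           # HGT: genotypes mix freely
        else:
            j = random.randrange(n)
            pop[i] = max(pop[i], pop[j])          # selection: fitter genotype wins
    return len(set(pop))                          # distinct genotypes remaining

# Fewer distinct genotypes tend to survive when selection dominates (low HGT).
print(simulate(hgt_rate=0.95), simulate(hgt_rate=0.05))
```

This is only a two-regime illustration; the paper's actual model involves three-point (hypernetwork) interactions and a systematic stochastic analysis.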

FREE PDF GRATIS: Physical Review E

Eggshell porosity provides insight into dinosaur nesting


Eggshell Porosity Provides Insight on Evolution of Nesting in Dinosaurs

Kohei Tanaka, Darla K. Zelenitsky, François Therrien

Published: November 25, 2015
DOI: 10.1371/journal.pone.0142829


Abstract

Knowledge about the types of nests built by dinosaurs can provide insight into the evolution of nesting and reproductive behaviors among archosaurs. However, the low preservation potential of their nesting materials and nesting structures means that most information can only be gleaned indirectly through comparison with extant archosaurs. Two general nest types are recognized among living archosaurs: 1) covered nests, in which eggs are incubated while fully covered by nesting material (as in crocodylians and megapodes), and 2) open nests, in which eggs are exposed in the nest and brooded (as in most birds). Previously, dinosaur nest types had been inferred by estimating the water vapor conductance (i.e., diffusive capacity) of their eggs, based on the premise that high conductance corresponds to covered nests and low conductance to open nests. However, a lack of statistical rigor and inconsistencies in this method render its application problematic and its validity questionable. As an alternative we propose a statistically rigorous approach to infer nest type based on large datasets of eggshell porosity and egg mass compiled for over 120 extant archosaur species and 29 extinct archosaur taxa/ootaxa. The presence of a strong correlation between eggshell porosity and nest type among extant archosaurs indicates that eggshell porosity can be used as a proxy for nest type, and thus discriminant analyses can help predict nest type in extinct taxa. Our results suggest that: 1) covered nests are likely the primitive condition for dinosaurs (and probably archosaurs), and 2) open nests first evolved among non-avian theropods more derived than Lourinhanosaurus and were likely widespread in non-avian maniraptorans, well before the appearance of birds. Although taphonomic evidence suggests that basal open nesters (i.e., oviraptorosaurs and troodontids) were potentially the first dinosaurs to brood their clutches, they still partially buried their eggs in sediment.
Open nests with fully exposed eggs only became widespread among Euornithes. A potential co-evolution of open nests and brooding behavior among maniraptorans may have freed theropods from the ground-based restrictions inherent to covered nests and allowed the exploitation of alternate nesting locations. These changes in nesting styles and behaviors thus may have played a role in the evolutionary success of maniraptorans (including birds).
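The discriminant analysis the abstract relies on can be illustrated with a minimal two-class Fisher linear discriminant. The numbers below are invented for illustration and are not the study's data; only the idea (separating "covered" from "open" nests using two predictors such as eggshell porosity and egg mass) follows the abstract.

```python
import numpy as np

# Hypothetical toy data, two predictors per egg: [porosity, mass] (arbitrary units).
# Per the premise above, covered nests go with high porosity.
covered = np.array([[2.1, 1.9], [2.3, 2.2], [2.0, 1.7], [2.4, 2.0]])
open_   = np.array([[1.1, 2.1], [0.9, 1.8], [1.2, 2.3], [1.0, 2.0]])

def fisher_lda(a, b):
    """Fisher's linear discriminant: weights w and midpoint threshold c."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    # Pooled within-class scatter matrix (covariance times degrees of freedom)
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
    w = np.linalg.solve(sw, mu_a - mu_b)
    c = w @ (mu_a + mu_b) / 2.0
    return w, c

w, c = fisher_lda(covered, open_)

def classify(egg):
    """Project an egg onto the discriminant axis and threshold at the midpoint."""
    return "covered" if w @ egg > c else "open"

print(classify(np.array([2.2, 2.0])))  # high porosity -> covered
print(classify(np.array([1.0, 2.0])))  # low porosity  -> open
```

The real analysis would be fit on the extant-species dataset and then applied to the extinct taxa/ootaxa, but the mechanics are the same: project onto the axis that best separates the classes, then threshold.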


Citation: Tanaka K, Zelenitsky DK, Therrien F (2015) Eggshell Porosity Provides Insight on Evolution of Nesting in Dinosaurs. PLoS ONE 10(11): e0142829. doi:10.1371/journal.pone.0142829

Editor: Matthew Shawkey, University of Akron, UNITED STATES

Received: August 4, 2015; Accepted: October 27, 2015; Published: November 25, 2015

Copyright: © 2015 Tanaka et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: The Yoshida Scholarship Foundation (http://www.ysf.or.jp/englishpage/index.html) and the Japan Student Services Organization (JASSO) (http://www.jasso.go.jp/index_e.html) provided funding to KT.

The Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (http://www.nserc-crsng.gc.ca/index_eng.asp) provided funding to DKZ.

Competing interests: The authors have declared that no competing interests exist.

FREE PDF GRATIS: PLoS One

Asking doesn't offend: what exactly is the mechanism [fill in the blanks]?

Wednesday, November 25, 2015


Reading the other day about the objection that always comes up when the theory of intelligent design [TDI] is discussed, the question invariably asked is: What is TDI's mechanism? TDI theorists and proponents generally respond:

"We do not propose a mechanism (a strictly or necessarily materialistic cause) for the origin of biological information. What we do propose is an intelligent or mental cause."

Dr. Ann Gauger has an interesting answer to the demand for a mechanism made by TDI's critics and opponents:

"The demand for a material cause, for a mechanism, can lead to the absurd conclusion that Isaac Newton's law of gravity is not scientific, because he famously refused to provide a mechanistic explanation for the action [of gravity] at a distance. Likewise, Einstein's equation E = mc2 has no mechanism. But these laws are, certainly, scientific."

And off I went, thinking to myself: she deflated the mechanistic argument with such elegance!

Microgravity reduces the differentiation and regenerative potential of embryonic stem cells

Tuesday, November 24, 2015

Microgravity Reduces the Differentiation and Regenerative Potential of Embryonic Stem Cells

To cite this article:

Blaber Elizabeth A., Finkelstein Hayley, Dvorochkin Natalya, Sato Kevin Y., Yousuf Rukhsana, Burns Brendan P., Globus Ruth K., and Almeida Eduardo A.C. Stem Cells and Development. November 15, 2015, 24(22): 2605-2621. doi:10.1089/scd.2015.0218.

Published in Volume: 24 Issue 22: November 10, 2015
Online Ahead of Print: October 22, 2015
Online Ahead of Editing: September 28, 2015

Elizabeth A. Blaber,1,2 Hayley Finkelstein,1 Natalya Dvorochkin,1 Kevin Y. Sato,3 Rukhsana Yousuf,1 Brendan P. Burns,2,4 Ruth K. Globus,1 and Eduardo A.C. Almeida1

1Space Biosciences Division, NASA Ames Research Center, Moffett Field, California.

2School of Biotechnology and Biomolecular Sciences, University of New South Wales, Sydney, Australia.

3FILMSS Wyle, Space Biology, NASA Ames Research Center, Moffett Field, California.

4Australian Centre for Astrobiology, University of New South Wales, Sydney, Australia.

© Elizabeth A. Blaber et al., 2015; Published by Mary Ann Liebert, Inc. This Open Access article is distributed under the terms of the Creative Commons Attribution Noncommercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Address correspondence to:

Dr. Eduardo A.C. Almeida
Space Biosciences Division
NASA Ames Research Center
Mail Stop 236-7
Moffett Field, CA 94035

E-mail: e.almeida@nasa.gov

Received for publication June 25, 2015

Accepted after revision August 28, 2015


Source/Fonte: NASA

Abstract

Mechanical unloading in microgravity is thought to induce tissue degeneration by various mechanisms, including inhibition of regenerative stem cell differentiation. To address this hypothesis, we investigated the effects of microgravity on early lineage commitment of mouse embryonic stem cells (mESCs) using the embryoid body (EB) model of tissue differentiation. We found that exposure to microgravity for 15 days inhibits mESC differentiation and expression of terminal germ layer lineage markers in EBs. Additionally, microgravity-unloaded EBs retained stem cell self-renewal markers, suggesting that mechanical loading at Earth's gravity is required for normal differentiation of mESCs. Finally, cells recovered from microgravity-unloaded EBs and then cultured at Earth's gravity showed greater stemness, differentiating more readily into contractile cardiomyocyte colonies. These results indicate that mechanical unloading of stem cells in microgravity inhibits their differentiation and preserves stemness, possibly providing a cellular mechanistic basis for the inhibition of tissue regeneration in space and in disuse conditions on earth.

FREE PDF GRATIS: Stem Cells and Development

Best practices for scientific computing

Best Practices for Scientific Computing

Greg Wilson, D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy, Steven H. D. Haddock, Kathryn D. Huff, Ian M. Mitchell, Mark D. Plumbley, Ben Waugh, Ethan P. White, Paul Wilson

Published: January 7, 2014
DOI: 10.1371/journal.pbio.1001745

Citation: Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, et al. (2014) Best Practices for Scientific Computing. PLoS Biol 12(1): e1001745. doi:10.1371/journal.pbio.1001745

Academic Editor: Jonathan A. Eisen, University of California Davis, United States of America


Copyright: © 2014 Wilson et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Neil Chue Hong was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Grant EP/H043160/1 for the UK Software Sustainability Institute. Ian M. Mitchell was supported by NSERC Discovery Grant #298211. Mark Plumbley was supported by EPSRC through a Leadership Fellowship (EP/G007144/1) and a grant (EP/H043101/1) for SoundSoftware.ac.uk. Ethan White was supported by a CAREER grant from the US National Science Foundation (DEB 0953694). Greg Wilson was supported by a grant from the Sloan Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The lead author (GVW) is involved in a pilot study of code review in scientific computing with PLOS Computational Biology.


Introduction

Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.

Software is as important to modern scientific research as telescopes and test tubes. From groups that work exclusively on computational problems, to traditional laboratory and field scientists, more and more of the daily operation of science revolves around developing new algorithms, managing and analyzing the large amounts of data that are generated in single research projects, combining disparate datasets to assess synthetic problems, and other computational tasks.

Scientists typically develop their own software for these purposes because doing so requires substantial domain-specific knowledge. As a result, recent studies have found that scientists typically spend 30% or more of their time developing software [1],[2]. However, 90% or more of them are primarily self-taught [1],[2], and therefore lack exposure to basic software development practices such as writing maintainable code, using version control and issue trackers, code reviews, unit testing, and task automation.
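One of the practices named above, unit testing, takes only a few lines to adopt. The sketch below uses a hypothetical function of our own; in practice the tests would live in their own file and run automatically (e.g., with pytest) on every change.

```python
def centroid(points):
    """Mean position of a list of (x, y) points."""
    n = len(points)
    if n == 0:
        raise ValueError("centroid of empty point set is undefined")
    xs = sum(p[0] for p in points)
    ys = sum(p[1] for p in points)
    return (xs / n, ys / n)

# Unit tests: small, named checks of expected behavior, including edge cases.
def test_centroid_unit_square():
    assert centroid([(0, 0), (2, 0), (2, 2), (0, 2)]) == (1.0, 1.0)

def test_centroid_empty_raises():
    try:
        centroid([])
    except ValueError:
        pass  # expected: the error case is documented behavior
    else:
        raise AssertionError("expected ValueError for empty input")

test_centroid_unit_square()
test_centroid_empty_raises()
```

Checks like these are what turn "I think my code is right" into something a reviewer, or your future self, can verify mechanically.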

We believe that software is just another kind of experimental apparatus [3] and should be built, checked, and used as carefully as any physical apparatus. However, while most scientists are careful to validate their laboratory and field equipment, most do not know how reliable their software is [4],[5]. This can lead to serious errors impacting the central conclusions of published research [6]: recent high-profile retractions, technical comments, and corrections because of errors in computational methods include papers in Science [7],[8], PNAS [9], the Journal of Molecular Biology [10], Ecology Letters [11],[12], the Journal of Mammalogy [13], Journal of the American College of Cardiology [14], Hypertension [15], and The American Economic Review [16].

In addition, because software is often used for more than a single project, and is often reused by other scientists, computing errors can have disproportionate impacts on the scientific process. This type of cascading impact caused several prominent retractions when an error from another group's code was not discovered until after publication [6]. As with bench experiments, not everything must be done to the most exacting standards; however, scientists need to be aware of best practices both to improve their own approaches and for reviewing computational work by others.

This paper describes a set of practices that are easy to adopt and have proven effective in many research settings. Our recommendations are based on several decades of collective experience both building scientific software and teaching computing to scientists [17],[18], reports from many other groups [19]–[25], guidelines for commercial and open source software development [26],[27], and on empirical studies of scientific computing [28]–[31] and software development in general (summarized in [32]). None of these practices will guarantee efficient, error-free software development, but used in concert they will reduce the number of errors in scientific software, make it easier to reuse, and save the authors of the software time and effort that can be used for focusing on the underlying scientific questions.

Our practices are summarized in Box 1; labels in the main text such as “(1a)” refer to items in that summary. For reasons of space, we do not discuss the equally important (but independent) issues of reproducible research, publication and citation of code and data, and open science. We do believe, however, that all of these will be much easier to implement if scientists have the skills we describe.

FREE PDF GRATIS: PLoS Biology

Ten simple rules for reproducible computational research

Ten Simple Rules for Reproducible Computational Research

Geir Kjetil Sandve, Anton Nekrutenko, James Taylor, Eivind Hovig

Published: October 24, 2013 
DOI: 10.1371/journal.pcbi.1003285

Citation: Sandve GK, Nekrutenko A, Taylor J, Hovig E (2013) Ten Simple Rules for Reproducible Computational Research. PLoS Comput Biol 9(10): e1003285. doi:10.1371/journal.pcbi.1003285

Editor: Philip E. Bourne, University of California San Diego, United States of America


Copyright: © 2013 Sandve et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors' laboratories are supported by US National Institutes of Health grants HG005133, HG004909, and HG006620 and US National Science Foundation grant DBI 0850103. Additional funding is provided, in part, by the Huck Institutes for the Life Sciences at Penn State, the Institute for Cyberscience at Penn State, and a grant with the Pennsylvania Department of Health using Tobacco Settlement Funds. The funders had no role in the preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.


Replication is the cornerstone of a cumulative science [1]. However, new tools and technologies, massive amounts of data, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research [2]. As full replication of studies on independently collected data is often not feasible, there has recently been a call for reproducible research as an attainable minimum standard for assessing the value of scientific claims [3]. This requires that papers in experimental science describe the results and provide a sufficiently clear protocol to allow successful repetition and extension of analyses based on original data [4].

The importance of replication and reproducibility has recently been exemplified through studies showing that scientific papers commonly leave out experimental details essential for reproduction [5], studies showing difficulties with replicating published experimental results [6], an increase in retracted papers [7], and through a high number of failing clinical trials [8], [9]. This has led to discussions on how individual researchers, institutions, funding bodies, and journals can establish routines that increase transparency and reproducibility. In order to foster such aspects, it has been suggested that the scientific community needs to develop a “culture of reproducibility” for computational science, and to require it for published claims [3].

We want to emphasize that reproducibility is not only a moral responsibility with respect to the scientific field; a lack of reproducibility can also be a burden for you as an individual researcher. For example, good reproducibility practices are necessary to allow previously developed methodology to be applied effectively to new data, or to allow reuse of code and results in new projects. In other words, good habits of reproducibility may actually turn out to be a time-saver in the long run.

We further note that reproducibility is just as much about the habits that ensure reproducible research as the technologies that can make these processes efficient and realistic. Each of the following ten rules captures a specific aspect of reproducibility, and discusses what is needed in terms of information handling and tracking of procedures. If you are taking a bare-bones approach to bioinformatics analysis, i.e., running various custom scripts from the command line, you will probably need to handle each rule explicitly. If you are instead performing your analyses through an integrated framework (such as GenePattern [10], Galaxy [11], LONI pipeline [12], or Taverna [13]), the system may already provide full or partial support for most of the rules. What is needed on your part is then merely the knowledge of how to exploit these existing possibilities.

In a pragmatic setting, with publication pressure and deadlines, one may need to trade off the ideals of reproducibility against the need to get the research out while it is still relevant. This trade-off becomes more acute when one considers that a large share of attempted analyses never end up yielding any results. In hindsight, however, one will frequently regret the missed opportunity to ensure reproducibility, as it may already be too late to reconstruct the necessary notes from memory (or at least much harder than taking them along the way). We believe that the rewards of reproducibility will compensate for the risk of having spent valuable time developing an annotated catalog of analyses that turned out to be blind alleys.

As a minimal requirement, you should at least be able to reproduce the results yourself. This would satisfy the most basic requirements of sound research, allowing any substantial future questioning of the research to be met with a precise explanation. Although it may sound like a very weak requirement, even this level of reproducibility will often require a certain level of care. For a given analysis, there is an exponential number of possible combinations of software versions, parameter values, pre-processing steps, and so on, meaning that a failure to take notes may make exact reproduction essentially impossible.
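One low-effort habit that makes exact reproduction possible is to snapshot the software environment and parameter values alongside every run. A minimal sketch in Python (the `record_run` helper and its example parameter names are hypothetical, not from the article):

```python
import json
import platform
import sys
from datetime import datetime, timezone

def record_run(params, outfile="run_metadata.json"):
    """Snapshot everything needed to repeat this analysis exactly."""
    metadata = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python_version": platform.python_version(),
        "command_line": sys.argv,
        "parameters": params,
    }
    with open(outfile, "w") as fh:
        json.dump(metadata, fh, indent=2)
    return metadata

# Hypothetical analysis parameters, recorded at the moment of the run.
meta = record_run({"k": 31, "min_coverage": 5, "trim_adapter": True})
```

Committing the resulting JSON next to the outputs it describes means the notes are taken automatically, while the analysis is underway rather than from memory.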

With this basic level of reproducibility in place, there is much more that can be wished for. An obvious extension is to go from a level where you can reproduce results in case of a critical situation to a level where you can practically and routinely reuse your previous work and increase your productivity. A second extension is to ensure that peers have a practical possibility of reproducing your results, which can lead to increased trust in, interest in, and citations of your work [6], [14].

We here present ten simple rules for reproducibility of computational research. These rules are at your disposal whenever you want to make your research more accessible, be it for peers or for your future self.

FREE PDF GRATIS: PLoS Computational Biology 

A quick guide to organizing computational projects in biology

A Quick Guide to Organizing Computational Biology Projects

William Stafford Noble 

Published: July 31, 2009 DOI: 10.1371/journal.pcbi.1000424

Citation: Noble WS (2009) A Quick Guide to Organizing Computational Biology Projects. PLoS Comput Biol 5(7): e1000424. doi:10.1371/journal.pcbi.1000424

Editor: Fran Lewitter, Whitehead Institute, United States of America

Published: July 31, 2009

Copyright: © 2009 William Stafford Noble. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The author received no specific funding for writing this article.

Competing interests: The author has declared that no competing interests exist.



Introduction

Most bioinformatics coursework focuses on algorithms, with perhaps some components devoted to learning programming skills and learning how to use existing bioinformatics software. Unfortunately, for students who are preparing for a research career, this type of curriculum fails to address many of the day-to-day organizational challenges associated with performing computational experiments. In practice, the principles behind organizing and documenting computational experiments are often learned on the fly, and this learning is strongly influenced by personal predilections as well as by chance interactions with collaborators or colleagues.

The purpose of this article is to describe one good strategy for carrying out computational experiments. I will not describe profound issues such as how to formulate hypotheses, design experiments, or draw conclusions. Rather, I will focus on relatively mundane issues such as organizing files and directories and documenting progress. These issues are important because poor organizational choices can lead to significantly slower research progress. I do not claim that the strategies I outline here are optimal. These are simply the principles and practices that I have developed over 12 years of bioinformatics research, augmented with various suggestions from other researchers with whom I have discussed these issues.

Principles

The core guiding principle is simple: Someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why. This “someone” could be any of a variety of people: someone who read your published article and wants to try to reproduce your work; a collaborator who wants to understand the details of your experiments; a future student working in your lab who wants to extend your work after you have moved on to a new job; or your research advisor, who may be interested in understanding your work or evaluating your research skills. Most commonly, however, that “someone” is you. A few months from now, you may not remember what you were up to when you created a particular set of files, or you may not remember what conclusions you drew. You will then either have to spend time reconstructing your previous experiments or lose whatever insights you gained from those experiments.

This leads to the second principle, which is actually more like a version of Murphy's Law: Everything you do, you will probably have to do over again. Inevitably, you will discover some flaw in your initial preparation of the data being analyzed, or you will get access to new data, or you will decide that your parameterization of a particular model was not broad enough. This means that the experiment you did last week, or even the set of experiments you've been working on over the past month, will probably need to be redone. If you have organized and documented your work clearly, then repeating the experiment with the new data or the new parameterization will be much, much easier.

To see how these two principles are applied in practice, let's begin by considering the organization of directories and files with respect to a particular project.

File and Directory Organization

When you begin a new project, you will need to decide upon some organizational structure for the relevant directories. It is generally a good idea to store all of the files relevant to one project under a common root directory. The exception to this rule is source code or scripts that are used in multiple projects. Each such program might have a project directory of its own.

Within a given project, I use a top-level organization that is logical, with chronological organization at the next level, and logical organization below that. A sample project, called msms, is shown in Figure 1. At the root of most of my projects, I have a data directory for storing fixed data sets, a results directory for tracking computational experiments performed on that data, a doc directory with one subdirectory per manuscript, and directories such as src for source code and bin for compiled binaries or scripts.
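This layout can be bootstrapped in a few lines. A minimal sketch in Python (the `create_project` helper is hypothetical; the directory names follow the description above):

```python
from pathlib import Path

def create_project(root):
    """Create the skeleton described above: fixed data sets, results of
    computational experiments, per-manuscript docs, source, and binaries."""
    root = Path(root)
    for sub in ("data", "results", "doc", "src", "bin"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir())

layout = create_project("msms")  # the sample project name used above
```

Keeping every project-specific file under one root like this also makes the stated exception explicit: scripts shared across projects live in a project directory of their own, outside any single analysis tree.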

FREE PDF GRATIS: PLoS Computational Biology

Elevated extinction rates are fundamental to the diversification of terrestrial vertebrates

Elevated Extinction Rates as a Trigger for Diversification Rate Shifts: Early Amniotes as a Case Study

Neil Brocklehurst, Marcello Ruta, Johannes Müller & Jörg Fröbisch

Scientific Reports 5, Article number: 17104 (2015)

doi:10.1038/srep17104

Received: 11 August 2015
Accepted: 26 October 2015
Published online: 23 November 2015

Palaeontology | Phylogenetics

Abstract

Tree shape analyses are frequently used to infer the location of shifts in diversification rate within the Tree of Life. Many studies have supported a causal relationship between shifts and temporally coincident events such as the evolution of “key innovations”. However, the evidence for such relationships is circumstantial. We investigated patterns of diversification during the early evolution of Amniota from the Carboniferous to the Triassic, subjecting a new supertree to analyses of tree balance in order to infer the timing and location of diversification shifts. We investigated how uneven origination and extinction rates drive diversification shifts, and used two case studies (herbivory and an aquatic lifestyle) to examine whether shifts tend to be contemporaneous with evolutionary novelties. Shifts within amniotes tend to occur during periods of elevated extinction, with mass extinctions coinciding with numerous and larger shifts. Diversification shifts occurring in clades that possess evolutionary innovations do not coincide temporally with the appearance of those innovations, but are instead deferred to periods of high extinction rate. We suggest such innovations did not cause increases in the rate of cladogenesis, but allowed clades to survive extinction events. We highlight the importance of examining general patterns of diversification before interpreting specific shifts.


Bursts of cyanobacteria likely responsible for Earth's oxygen

segunda-feira, novembro 23, 2015

Transient episodes of mild environmental oxygenation and oxidative continental weathering during the late Archean

Brian Kendall 1,*, Robert A. Creaser 2, Christopher T. Reinhard 3, Timothy W. Lyons 4 and Ariel D. Anbar 5,6


*Corresponding author. E-mail: bkendall@uwaterloo.ca

Science Advances 20 Nov 2015:

Vol. 1, no. 10, e1500777

DOI: 10.1126/sciadv.1500777


Grand Prismatic Spring at Yellowstone National Park

samspicerphoto / Fotolia

Abstract

It is not known whether environmental O2 levels increased in a linear fashion or fluctuated dynamically between the evolution of oxygenic photosynthesis and the later Great Oxidation Event. New rhenium-osmium isotope data from the late Archean Mount McRae Shale, Western Australia, reveal a transient episode of oxidative continental weathering more than 50 million years before the onset of the Great Oxidation Event. A depositional age of 2495 ± 14 million years and an initial 187Os/188Os of 0.34 ± 0.19 were obtained for rhenium- and molybdenum-rich black shales. The initial 187Os/188Os is higher than the mantle/extraterrestrial value of 0.11, pointing to mild environmental oxygenation and oxidative mobilization of rhenium, molybdenum, and radiogenic osmium from the upper continental crust and to contemporaneous transport of these metals to seawater. By contrast, stratigraphically overlying black shales are rhenium- and molybdenum-poor and have a mantle-like initial 187Os/188Os of 0.06 ± 0.09, indicating a reduced continental flux of rhenium, molybdenum, and osmium to seawater because of a drop in environmental O2 levels. Transient oxygenation events, like the one captured by the Mount McRae Shale, probably separated intervals of less oxygenated conditions during the late Archean.

Keywords: Earth sciences, Archean, oxidative continental weathering, atmospheric oxygen, geochronology, rhenium, osmium, molybdenum, Mount McRae Shale, Hamersley Basin

FREE PDF GRATIS: Science Advances

Hydra can modify its genetic system

Loss of neurogenesis in Hydra leads to compensatory regulation of neurogenic and neurotransmission genes in epithelial cells

Y. Wenger, W. Buzgariu, B. Galliot

Published 23 November 2015. DOI: 10.1098/rstb.2015.0040

 
Abstract

Hydra continuously differentiates a sophisticated nervous system made of mechanosensory cells (nematocytes) and sensory–motor and ganglionic neurons from interstitial stem cells. However, this dynamic adult neurogenesis is dispensable for morphogenesis. Indeed animals depleted of their interstitial stem cells and interstitial progenitors lose their active behaviours but maintain their developmental fitness, and regenerate and bud when force-fed. To characterize the impact of the loss of neurogenesis in Hydra, we first performed transcriptomic profiling at five positions along the body axis. We found neurogenic genes predominantly expressed along the central body column, which contains stem cells and progenitors, and neurotransmission genes predominantly expressed at the extremities, where the nervous system is dense. Next, we performed transcriptomics on animals depleted of their interstitial cells by hydroxyurea, colchicine or heat-shock treatment. By crossing these results with cell-type-specific transcriptomics, we identified epithelial genes up-regulated upon loss of neurogenesis: transcription factors (Dlx, Dlx1, DMBX1/Manacle, Ets1, Gli3, KLF11, LMX1A, ZNF436, Shox1), epitheliopeptides (Arminins, PW peptide), neurosignalling components (CAMK1D, DDCl2, Inx1), ligand-ion channel receptors (CHRNA1, NaC7), G-Protein Coupled Receptors and FMRFRL. Hence epitheliomuscular cells seemingly enhance their sensing ability when neurogenesis is compromised. This unsuspected plasticity might reflect the extended multifunctionality of epithelial-like cells in early eumetazoan evolution.

FREE PDF GRATIS: Phil Trans R Soc B

Christoph Adami and his information theory of life

The Information Theory of Life

The polymath Christoph Adami is investigating life’s origins by reimagining living things as self-perpetuating information strings.

Chris Adami on the Michigan State University campus.
Kristen Norman for Quanta Magazine
By: Kevin Hartnett
November 19, 2015

There are few bigger — or harder — questions to tackle in science than the question of how life arose. We weren’t around when it happened, of course, and apart from the fact that life exists, there’s no evidence to suggest that life can come from anything besides prior life. Which presents a quandary.
Christoph Adami does not know how life got started, but he knows a lot of other things. His main expertise is in information theory, a branch of applied mathematics developed in the 1940s for understanding information transmissions over a wire. Since then, the field has found wide application, and few researchers have done more in that regard than Adami, who is a professor of physics and astronomy and also microbiology and molecular genetics at Michigan State University. He takes the analytical perspective provided by information theory and transplants it into a great range of disciplines, including microbiology, genetics, physics, astronomy and neuroscience. Lately, he’s been using it to pry open a statistical window onto the circumstances that might have existed at the moment life first clicked into place.
To do this, he begins with a mental leap: Life, he argues, should not be thought of as a chemical event. Instead, it should be thought of as information. The shift in perspective provides a tidy way in which to begin tackling a messy question. In the following interview, Adami defines information as “the ability to make predictions with a likelihood better than chance,” and he says we should think of the human genome — or the genome of any organism — as a repository of information about the world gathered in small bits over time through the process of evolution. The repository includes information on everything we could possibly need to know, such as how to convert sugar into energy, how to evade a predator on the savannah, and, most critically for evolution, how to reproduce or self-replicate.
This reconceptualization doesn’t by itself resolve the issue of how life got started, but it does provide a framework in which we can start to calculate the odds of life developing in the first place. Adami explains that a precondition for information is the existence of an alphabet, a set of pieces that, when assembled in the right order, expresses something meaningful. No one knows what that alphabet was at the time that inanimate molecules coupled up to produce the first bits of information. Using information theory, though, Adami tries to help chemists think about the distribution of molecules that would have had to be present at the beginning in order to make it even statistically plausible for life to arise by chance.
Quanta Magazine spoke with Adami about what information theory has to say about the origins of life. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: How does the concept of information help us understand how life works?
CHRISTOPH ADAMI: Information is the currency of life. One definition of information is the ability to make predictions with a likelihood better than chance. That’s what any living organism needs to be able to do, because if you can do that, you’re surviving at a higher rate. [Lower organisms] make predictions that there’s carbon, water and sugar. Higher organisms make predictions about, for example, whether an organism is after you and you want to escape. Our DNA is an encyclopedia about the world we live in and how to survive in it. ...
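Adami's definition of information as prediction better than chance maps directly onto Shannon entropy: a sequence whose symbols are uniformly random carries maximal entropy and permits no prediction, while a biased sequence is compressible and predictable. A minimal illustrative sketch (not from the interview):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Entropy in bits per symbol: maximal when every symbol is equally
    likely (pure chance), lower whenever prediction can beat chance."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A uniform 4-letter alphabet carries the maximum 2 bits per symbol;
# a 90/10 biased sequence carries well under 1 bit, so it is predictable.
uniform = shannon_entropy("ACGT" * 100)
biased = shannon_entropy("A" * 90 + "C" * 10)
```

The gap between the maximal entropy and the observed entropy is one way to quantify how much an observer can predict, which is the sense in which a genome stores information about its environment.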
FREE PDF GRATIS: Quanta Magazine
+++++
THIS BLOGGER'S NOTE:

The biology of the 20th and 21st centuries is a science of information. The theory of Intelligent Design is a theory of complex specified information...
I left thinking, I do not even know why, that the new general theory of evolution, the Extended Evolutionary Synthesis, did not properly address the question of information. A stillborn scientific theory!