[link]
Stuart and Ding's paper is a sociological look at the predictors of entrepreneurship among academic scientists. The paper is framed using the basic tension between academic science and commerce (discussed around patents in particular by Fiona Murray in The Oncomouse that roared: Resistance and accommodation to patenting in academic science). The authors essentially present an event history that models the conditions prompting university-employed scientists to become entrepreneurs, which they define as either founding a biotechnology company or joining the scientific advisory board of a new biotechnology firm. The authors find that the most prominent scientists made the transition first, with the very most prestigious among the earliest, but that, over time, the "bar" has been lowered and a larger number and wider variety of scientists now engage in entrepreneurship. The paper's basic framing is the idea that entrepreneurship and science used to be seen as incompatible or even counter-productive. In the biosciences, this no longer seems to be the case. The paper looks at the changes over time and aims to explain how this transition happened. It examines four determinants: (1) socialization in graduate school, (2) peer influence exerted across a faculty member's social network, (3) the presence of pro-entrepreneurship colleagues in a scientist's workplace, and (4) differential access to the social resources that facilitate entrepreneurship. Using an impressive dataset built from a variety of sources, the findings show that commercial science began among, and first diffused across, the most elite scientists (i.e., those at the most prestigious institutions, with the most citations, the most co-authors, etc.) and then, over time, diffused to others. The authors use an event-history analysis (a sketch of this style of model appears after this summary) to test eight formal hypotheses (each included verbatim below):

1. Scientists are more likely to transition to the entrepreneurial role when they are affiliated with institutions that employ other scientists who have participated in commercial science. (Supported)
2. The effect of prior local adopters on scientists’ rate of transition to entrepreneurship will have been weaker in medical schools than it was in university science departments. (Supported)
3. The effect of prior local adopters on scientists’ rate of transition to commercial science will decline as academic entrepreneurship gains acceptance in the scientific community. (Supported)
4. As faculty members in arts and sciences departments come to accept entrepreneurship as a legitimate professional activity, the difference in the rates of transition to academic entrepreneurship between scientists in medical schools and those in departments of arts and sciences will decline. (Supported)
5. Scientists are more likely to transition to the entrepreneurial role when they are affiliated with universities that employ high-status scientists who have previously made the transition. (Supported)
6. Life scientists who were trained in universities with pro-entrepreneurship faculty members are more likely to transition to commercial science later in their careers. (Not supported)
7. Scientists who have previously coauthored research with academic entrepreneurs are more likely to transition to commercial science. (Supported)
8. Co-authorship ties with scientists who have high centrality in the commercial sector will have a particularly large effect on the transition rate. (Supported)

There are important remaining questions about why the high-status individuals are the first to make the move to entrepreneurship.
The authors suggest that this is probably a matter of opportunity. Alternatively, there may be a status story, in that the highest-prestige scientists felt that their positions were less threatened. #### Theoretical and practical relevance: The paper has been cited more than 100 times in the six years since its publication, primarily in the entrepreneurship literature.
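The summary above describes an event-history (hazard-rate) analysis of scientists' first transition to commercial science. As a rough, hypothetical illustration only, the sketch below fits a proportional-hazards model in Python; the covariate names and data are invented stand-ins for the paper's measures, not the authors' actual specification or dataset.

```python
# A minimal sketch, on fabricated data, of an event-history (hazard-rate) model of
# the transition to academic entrepreneurship; not the authors' models or data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # standard survival-analysis library

rng = np.random.default_rng(0)
n = 500

# One row per scientist: years observed until transition to entrepreneurship
# (founding a firm or joining an SAB) or censoring, plus illustrative covariates.
scientists = pd.DataFrame({
    "years_at_risk": rng.exponential(10, n) + 1,
    "transitioned": rng.integers(0, 2, n),                  # 1 = became an academic entrepreneur
    "prior_local_adopters": rng.poisson(2, n),              # colleagues who already transitioned (H1)
    "coauthored_with_entrepreneur": rng.integers(0, 2, n),  # co-authorship tie (H7)
    "citations": rng.lognormal(3, 1, n),                    # rough proxy for scientific prominence
    "medical_school": rng.integers(0, 2, n),                # medical school vs. arts & sciences (H2)
})

# Proportional-hazards fit: a positive coefficient means a higher transition rate.
cph = CoxPHFitter()
cph.fit(scientists, duration_col="years_at_risk", event_col="transitioned")
cph.print_summary()
```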
[link]
Published in Science, Technology, and Human Values, Langdon Winner's article is adapted from a presidential address given at the conference of the Society for Philosophy and Technology. The speech is an extended critique of social constructivism in technology from the perspective of philosophy in general and moral philosophy in particular. The article is prompted by the work of sociologists studying science and technology and by the growth of STS more generally (he cites Bruno Latour and Steven Woolgar in particular as authors he is responding to). Winner praises the sociological study of science for bringing empirical rigor to the study of science and of the means through which it is created. Although he argues that these social constructivists have paid less attention to technology, he credits them with providing a useful service in calling into question the highly arbitrary divisions between the social sphere and the technical sphere. But Winner also argues that social constructivists are essentially fighting for primacy against more traditional treatments of technology, like those of Marx, Lewis Mumford, and Heidegger, that are more closely aligned with the philosophy of technology. Winner cautions that we should, "notice what one gives up as well as what one gains in choosing this intellectual path to the study of technology and human affairs."

* Winner argues that the social consequences of technical choice are almost completely left out of view by the more empirically minded approaches, which call things constructed, provide the evidence, and go home.
* He argues that with its focus on "relevant social actors," social constructivism ends up discounting the experience and values of "irrelevant" groups who are nonetheless affected by technology, noting that unpacking black boxes can end up concealing as much as it reveals.
* Its focus on social structure ignores other important factors that attention to the technology itself, or to other considerations, might capture.
* It leaves out the moral questions, making it impossible to evaluate technological choices. In Winner's terms, "the methodological bracketing of questions about interests and interpretations amounts to a political stance that regards the status quo and its ills and injustices with precision equanimity."

The paper ends with a response to Steven Woolgar, who made an exemplary argument against Winner's account of the primacy of the political interpretation of Robert Moses's bridges in Do artifacts have politics?, and a memorable quote from Winner: *Although the social constructivists have opened the black box and shown a colorful array of social actors, processes, and images therein, the box they reveal is still a remarkably hollow one.* #### Theoretical and practical relevance: Winner's paper has been cited over 300 times since its publication nearly 20 years ago.
[link]
Partha Dasgupta and Paul David open their article by suggesting that the economics literature has "lacked an overarching conceptual framework to guide empirical studies and public policy discussions" about science. They attempt to unpack the reputation-based reward systems in science to help understand the economic drivers of scientific work and how public policy changes might influence those incentives and change science. The basic argument is framed by three features of science:

* Borrowing from agency theory, scientific production and progress is very costly for outsiders to monitor.
* There are significant aspects of indivisibility, attendant fixed costs, and economies of scale inherent in the underlying processes of knowledge production.
* The knowledge created can be kept from the public if researchers choose.

The authors' goal is to introduce an "economics of science" which:

* Exposes the logic of scientific institutions.
* Examines the implications of different types of institutions for the efficiency of resource allocation within science.

The authors argue that the difference between science and technology basically comes down to a different set of socio-political reward systems affecting the allocation of resources. In technology, work is kept secret and owned; in science, it is put into the public domain. They argue that because markets are reasonably bad at the second type of production (i.e., the production of a public good), either (1) governments can engage in science directly, (2) society can grant monopoly rights to scientific production, or (3) scientific production can be done through public subsidies but without exclusive rights being granted to the creators. The authors collapse (1) and (3) together and argue that there are really two core economic means of encouraging scientific production. Much of the core of the article then goes into depth on the priority system, the system whereby the first person to publish something gets all the fruits (usually the credit) of a particular discovery. It discusses the combined arrangement under which most science works, where scientists receive both priority-based rewards and a set salary, as a way of balancing the agency concerns. It uses some high-level game theory to discuss the possible inefficiencies that stem from a priority-based system (see the toy illustration after this summary). The authors argue that because science involves repeated games and a strong norm-based system, many of the potential inefficiencies are addressed. That said, they point out problems with science's emphasis on who makes discoveries (something society does not care about) and with timing and coordination effects that the scientific incentive system handles poorly and that central funding organizations may have trouble manipulating. The paper also discusses the role that science plays in training individuals for the workforce and in technology and industry, along with a series of other issues related to science and the economy, before its closing section that discusses implications for policy. The general policy implication is one of skepticism. The authors warn that the incentive system in science works well but that it is a delicate balance, and that there is some evidence that even minor changes (e.g., reducing the number of PhD students being produced or promoting transferability from universities to industry) may have unintended negative effects.
#### Theoretical and practical relevance: The article was initially published in Research Policy and was subsequently republished in the book Science bought and sold. The paper has been cited more than 1,400 times in the literature on science and innovation, and on innovation more broadly.
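As a toy illustration of one inefficiency discussed in the article, and not the authors' formal model, consider a winner-take-all priority race in which several labs independently pursue the same discovery and only learn they were scooped when the winner publishes. Racing shortens the time to the first discovery but multiplies the total research effort society expends on it. The parameters below are arbitrary assumptions made for the example.

```python
# Toy winner-take-all priority race (illustrative assumptions, not the paper's model):
# five labs independently pursue the same result; each lab's time-to-discovery is
# exponential with a mean of 4 years, and labs cannot observe rivals' progress.
import numpy as np

rng = np.random.default_rng(1)
n_labs, n_races = 5, 100_000

times = rng.exponential(scale=4.0, size=(n_races, n_labs))  # years each lab would need

winner_time = times.min(axis=1)    # priority (and the credit) goes to the fastest lab
total_effort = times.sum(axis=1)   # every lab runs its project to completion

print(f"mean time to first discovery: {winner_time.mean():.2f} years (vs. 4.00 for a single lab)")
print(f"mean total effort expended:   {total_effort.mean():.2f} lab-years per discovery")
print(f"mean duplicated effort:       {(total_effort - winner_time).mean():.2f} lab-years")
```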
[link]
Brooks' article is a high-level review that attempts to lay out the complicated relationship between science and technology. Although almost impossibly broad in scope, the article does a surprisingly good job of conveying both the depth necessary to treat the subject well and enough specific examples to make its points. He argues that science contributes to technology in six ways:

1. A direct source of new technological ideas, where archetypal examples might be the atomic bomb or X-rays.
2. A source of engineering design tools and techniques, in ways that might be more common in more engineering-focused scientific investigations.
3. Instrumentation, laboratory techniques, and analytical methods, which include techniques and other innovations created in the process of doing science, where scientists act as a sort of lead user, creating new technologies in order to investigate questions that could not otherwise be answered.
4. Development of human skills, through training students in technologies and in scientific techniques and methods.
5. Technology assessment, which might examine and measure the side effects of technologies, such as chemical waste.
6. A source of development strategy that might help scientists avoid blind alleys.

Additionally, he argues that technology contributes to science in two ways:

1. A source of new challenges, as has been the case in materials science, which is driven by technological research.
2. Instrumentation and measurement techniques, where technologists create tools that end up being useful to science more generally, so that scientists do not have to create all their own tools or focus on the parts of tool creation that they are less good at.

Harvey Brooks was the dean of the Harvard Division of Engineering and Applied Sciences for nearly 20 years (1957-1976) before founding the center for Science, Technology and Public Policy at the Kennedy School in 1976. This paper was published more than 10 years after his retirement. #### Theoretical and practical relevance: The paper is "semi-famous" and is more of a review article than an empirical piece, but it plays an important role in framing questions around science policy and has been cited by others exploring the science-technology relationship or making policy claims about the promotion of science for public policy reasons.
[link]
A biofilm is a surface-associated population of microorganisms embedded in a matrix of extracellular polymeric substances. Biofilms are a major natural growth form of microorganisms and the cause of pervasive device-associated infection. This report focuses on the biofilm matrix of Candida albicans, the major fungal pathogen of humans. We report here that the C. albicans zinc-response transcription factor Zap1 is a negative regulator of a major matrix component, soluble β-1,3 glucan, in both in vitro and in vivo biofilm models. To understand the mechanistic relationship between Zap1 and matrix, we identified Zap1 target genes through expression profiling and full genome chromatin immunoprecipitation. On the basis of these results, we designed additional experiments showing that two glucoamylases, Gca1 and Gca2, have positive roles in matrix production and may function through hydrolysis of insoluble β-1,3 glucan chains. We also show that a group of alcohol dehydrogenases Adh5, Csh1, and Ifd6 have roles in matrix production: Adh5 acts positively, and Csh1 and Ifd6, negatively. We propose that these alcohol dehydrogenases generate quorum-sensing aryl and acyl alcohols that in turn govern multiple events in biofilm maturation. Our findings define a novel regulatory circuit and its mechanism of control of a process central to infection. A biofilm is a surface-associated population of microbes that is embedded in a cement of extracellular compounds. This cement is known as matrix. The two main functions of matrix are to protect cells from their surrounding environment, preventing drugs and other stresses from penetrating the biofilm, and to maintain the architectural stability of the biofilm, acting as a glue to hold the cells together. The presence of matrix is a contributing factor to the high degree of resistance to antimicrobial drugs observed in biofilms. Because biofilms have a major impact on human health, and because matrix is such a pivotal component of biofilms, it is important to understand how the production of matrix is regulated. We have begun to address this question in the major human fungal pathogen Candida albicans. We found that the zinc-responsive regulatory protein Zap1 controls the expression of several genes important for matrix formation in C. albicans. These target genes encode glucoamylases and alcohol dehydrogenases, enzymes that probably govern the synthesis of distinct matrix constituents. The findings here offer insight into the metabolic processes that contribute to biofilm formation and indicate that Zap1 functions broadly as a negative regulator of biofilm maturation. |
[link]
A key question in the analysis of hippocampal memory relates to how attention modulates the encoding and long-term retrieval of spatial and nonspatial representations in this region. To address this question, we recorded from single cells over a period of 5 days in the CA1 region of the dorsal hippocampus while mice acquired one of two goal-oriented tasks. These tasks required the animals to find a hidden food reward by attending to either the visuospatial environment or a particular odor presented in shifting spatial locations. Attention to the visuospatial environment increased the stability of visuospatial representations and phase locking to gamma oscillations—a form of neuronal synchronization thought to underlie the attentional mechanism necessary for processing task-relevant information. Attention to a spatially shifting olfactory cue compromised the stability of place fields and increased the stability of reward-associated odor representations, which were most consistently retrieved during periods of sniffing and digging when animals were restricted to the cup locations. Together, these results suggest that attention selectively modulates the encoding and retrieval of hippocampal representations by enhancing physiological responses to task-relevant information. Attention modulates the encoding and retrieval of memories, but the physiological basis of this interaction has largely been unexplored. The formation of memories which depend on the hippocampus involves the conscious recall of events that occur in specific spatial contexts, a form of memory known as episodic. To investigate the physiological consequences of the interaction between attention and memory in the hippocampus, we recorded single-cell activity and local field potentials — the local rhythmic oscillatory activity of neurons — from the same cells over several days while animals learned one of two goal-oriented tasks. In the visuospatial version of the task, mice had to associate a specific spatial location with a reward, independent of an odor cue. In the nonspatial, olfactory version, mice had to associate a specific odor with the food reward, independent of spatial location. We found that, during periods of navigation, only neurons in the visuospatially trained animals displayed long-term stable representations of space, and neuronal synchronization to so-called gamma oscillations, a mechanism of signal amplification that has been proposed to underlie attentional processes. Conversely, when animals were sniffing the odors in fixed spatial locations, only neurons in the olfactory-trained group displayed a stable increase in firing rate in response to the reward-associated odor. Our data suggest that attention modulates what is encoded and retrieved by hippocampal cells and that neuronal synchronization to gamma oscillations may underlie the mechanism whereby attention leads to stable spatial memory retrieval during navigation. |
[link]
Pathogen perception by the plant innate immune system is of central importance to plant survival and productivity. The Arabidopsis protein RIN4 is a negative regulator of plant immunity. In order to identify additional proteins involved in RIN4-mediated immune signal transduction, we purified components of the RIN4 protein complex. We identified six novel proteins that had not previously been implicated in RIN4 signaling, including the plasma membrane (PM) H+-ATPases AHA1 and/or AHA2. RIN4 interacts with AHA1 and AHA2 both in vitro and in vivo. RIN4 overexpression and knockout lines exhibit differential PM H+-ATPase activity. PM H+-ATPase activation induces stomatal opening, enabling bacteria to gain entry into the plant leaf; inactivation induces stomatal closure thus restricting bacterial invasion. The rin4 knockout line exhibited reduced PM H+-ATPase activity and, importantly, its stomata could not be re-opened by virulent Pseudomonas syringae. We also demonstrate that RIN4 is expressed in guard cells, highlighting the importance of this cell type in innate immunity. These results indicate that the Arabidopsis protein RIN4 functions with the PM H+-ATPase to regulate stomatal apertures, inhibiting the entry of bacterial pathogens into the plant leaf during infection. Author Summary: Plants are continuously exposed to microorganisms. In order to resist infection, plants rely on their innate immune system to inhibit both pathogen entry and multiplication. We investigated the function of the Arabidopsis protein RIN4, which acts as a negative regulator of plant innate immunity. We biochemically identified six novel RIN4-associated proteins and characterized the association between RIN4 and the plasma membrane H+-ATPase pump. Our results indicate that RIN4 functions in concert with this pump to regulate leaf stomata during the innate immune response, when stomata close to block the entry of bacterial pathogens into the leaf interior.
[link]
Apomixis, or asexual clonal reproduction through seeds, is of immense interest due to its potential application in agriculture. One key element of apomixis is apomeiosis, a deregulation of meiosis that results in a mitotic-like division. We isolated and characterised a novel gene that is directly involved in controlling entry into the second meiotic division. By combining a mutation in this gene with two others that affect key meiotic processes, we created a genotype called MiMe in which meiosis is totally replaced by mitosis. The obtained plants produce functional diploid gametes that are genetically identical to their mother. The creation of the MiMe genotype and apomeiosis phenotype is an important step towards understanding and engineering apomixis. |
[link]
The mechanism by which a complex auditory scene is parsed into coherent objects depends on poorly understood interactions between task-driven and stimulus-driven attentional processes. We illuminate these interactions in a simultaneous behavioral-neurophysiological study in which we manipulate participants' attention to different features of an auditory scene (with a regular target embedded in an irregular background). Our experimental results reveal that attention to the target, rather than to the background, correlates with a sustained (steady-state) increase in the measured neural target representation over the entire stimulus sequence, beyond auditory attention's well-known transient effects on onset responses. This enhancement, in both power and phase coherence, occurs exclusively at the frequency of the target rhythm, and is only revealed when contrasting two attentional states that direct participants' focus to different features of the acoustic stimulus. The enhancement originates in auditory cortex and covaries with both behavioral task and the bottom-up saliency of the target. Furthermore, the target's perceptual detectability improves over time, correlating strongly, within participants, with the target representation's neural buildup. These results have substantial implications for models of foreground/background organization, supporting a role of neuronal temporal synchrony in mediating auditory object formation. |
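The key neural measures contrasted here are spectral power and inter-trial phase coherence at the frequency of the target rhythm. As a rough sketch only, the snippet below computes both quantities from simulated trials; the sampling rate, target frequency, and noise level are invented for the example and do not reflect the study's recordings or analysis pipeline.

```python
# Sketch of computing power and inter-trial phase coherence at a target rhythm's
# frequency from a set of trials; all numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
fs, dur, f_target, n_trials = 500.0, 4.0, 4.0, 60   # sample rate (Hz), seconds, target (Hz), trials
t = np.arange(int(fs * dur)) / fs

# Simulated data: a weak response phase-locked to the 4 Hz target, buried in noise.
trials = 0.2 * np.sin(2 * np.pi * f_target * t) + rng.normal(0.0, 1.0, (n_trials, t.size))

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
k = np.argmin(np.abs(freqs - f_target))              # FFT bin nearest the target frequency

power = np.mean(np.abs(spectra[:, k]) ** 2)                      # mean power at the target frequency
itpc = np.abs(np.mean(spectra[:, k] / np.abs(spectra[:, k])))    # phase coherence across trials (0..1)

print(f"power at {f_target:g} Hz: {power:.1f}")
print(f"inter-trial phase coherence at {f_target:g} Hz: {itpc:.2f}")
```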
[link]
Reports of rapid growth in nature-based tourism and recreation add significant weight to the economic case for biodiversity conservation but seem to contradict widely voiced concerns that people are becoming increasingly isolated from nature. This apparent paradox has been highlighted by a recent study showing that on a per capita basis, visits to natural areas in the United States and Japan have declined over the last two decades. These results have been cited as evidence of "a fundamental and pervasive shift away from nature-based recreation" - but how widespread is this phenomenon? We address this question by looking at temporal trends in visitor numbers at 280 protected areas (PAs) from 20 countries. This more geographically representative dataset shows that while PA visitation (whether measured as total or per capita visit numbers) is indeed declining in the United States and Japan, it is generally increasing elsewhere. Total visit numbers are growing in 15 of the 20 countries for which we could get data, with the median national rate of change unrelated to the national rate of population growth but negatively associated with wealth. Reasons for this reversal of growth in the richest countries are difficult to pin down with existing data, but the pattern is mirrored by trends in international tourist arrivals as a whole and so may not necessarily be caused by disaffection with nature. Irrespective of the explanation, it is clear that despite important downturns in some countries, nature-related tourism is far from declining everywhere, and may still have considerable potential both to generate funds for conservation and to shape people's attitudes to the environment. Nature-based tourism is frequently described as one of the fastest growing sectors of the world's largest industry, and a very important justification for conservation. However, a recent, high profile report has interpreted declining visit rates to US and Japanese national parks as evidence of a pervasive shift away from nature tourism. Here we use the largest database so far compiled on trends in visits to Protected Areas around the world to resolve this apparent paradox. We find that, while visit rates—measured in two different ways—are indeed declining in some wealthy countries, in roughly three-quarters of the nations where data are available, visits to Protected Areas are increasing. Internationally, rates of growth in the number of visits to such areas show a clear negative association with per capita income, which interestingly is matched by trends in foreign arrivals as a whole. Our results therefore suggest that, despite worrying local downturns, nature-related tourism is far from declining everywhere, and may still have considerable potential to generate funds for conservation and engage people with the environment. |
[link]
#### Background Pain, although unpleasant, is essential for survival. Whenever the body is damaged, nerve cells detecting the injury send an electrical message via the spinal cord to the brain and, as a result, action is taken to prevent further damage. Usually pain is short-lived, but sometimes it continues for weeks, months, or years. Long-lasting (chronic) pain can be caused by an ongoing, often inflammatory condition (for example, arthritis) or by damage to the nervous system itself—experts call this “neuropathic” pain. Damage to the brain or spinal cord causes central neuropathic pain; damage to the nerves that convey information from distant parts of the body to the spinal cord causes peripheral neuropathic pain. One example of peripheral neuropathic pain is “radicular” low back pain (also called sciatica). This is pain that radiates from the back into the legs. By contrast, axial back pain (the most common type of low back pain) is confined to the lower back and is non-neuropathic. #### Why Was This Study Done? Chronic pain is very common—nearly 10% of American adults have frequent back pain, for example—and there are many treatments for it, including rest, regulated exercise (physical therapy), pain-killing drugs (analgesics), and surgery. However, the best treatment for any individual depends on the exact nature of their pain, so it is important to assess their pain carefully before starting treatment. This is usually done by scoring overall pain intensity, but this assessment does not reflect the characteristics of the pain (for example, whether it occurs spontaneously or in response to external stimuli) or the complex biological processes involved in pain generation. An assessment designed to take such factors into account might improve treatment outcomes and could be useful in the development of new therapies. In this study, the researchers develop and test a new, standardized tool for the assessment of chronic pain that, by examining many symptoms and signs, aims to distinguish between pain subtypes. #### What Did the Researchers Do and Find? One hundred thirty patients with several types of peripheral neuropathic pain and 57 patients with non-neuropathic (axial) low back pain completed a structured interview of 16 questions and a standardized bedside examination of 23 tests. Patients were asked, for example, to choose words that described their pain from a list provided by the researchers and to grade the intensity of particular aspects of their pain from zero (no pain) to ten (the maximum imaginable pain). Bedside tests included measurements of responses to light touch, pinprick, and vibration—chronic pain often alters responses to harmless stimuli. Using “hierarchical cluster analysis,” the researchers identified six subgroups of patients with neuropathic pain and two subgroups of patients with non-neuropathic pain based on the patterns of symptoms and signs revealed by the interviews and physical tests. They then used “classification tree analysis” to identify the six questions and ten physical tests that discriminated best between pain subtypes and combined these items into a tool for a Standardized Evaluation of Pain (StEP). Finally, the researchers asked whether StEP, which took 10–15 minutes, could identify patients with radicular back pain and discriminate them from those with axial back pain in an independent group of 137 patients with chronic low back pain. StEP, they report, accurately diagnosed these two conditions and was well accepted by the patients. 
#### What Do These Findings Mean? These findings indicate that a standardized assessment of pain-related signs and symptoms can provide a simple, quick diagnostic procedure that distinguishes between radicular (neuropathic) and axial (non-neuropathic) low back pain. This distinction is crucial because these types of back pain are best treated in different ways. In addition, the findings suggest that it might be possible to identify additional pain subtypes using StEP. Because these subtypes may represent conditions in which different pain mechanisms are acting, classifying patients in this way might eventually enable physicians to tailor treatments for chronic pain to the specific needs of individual patients rather than, as at present, largely guessing which of the available treatments is likely to work best. |
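A minimal sketch of the two analysis steps described above, using made-up symptom and sign scores: hierarchical clustering to find putative pain subgroups, followed by a classification tree that selects the most discriminating items. The item counts and names are placeholders; this is not the StEP instrument or the study's data.

```python
# Step 1: hierarchical cluster analysis of patients by symptom/sign profile.
# Step 2: classification tree identifying the items that best separate subgroups.
# All data here are random placeholders; this is not the StEP instrument.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n_patients, n_items = 187, 39                                      # e.g. 16 interview items + 23 bedside tests
scores = rng.integers(0, 11, (n_patients, n_items)).astype(float)  # 0-10 ratings per item

# Agglomerative (Ward) clustering, cut into six putative subgroups.
tree_linkage = linkage(scores, method="ward")
subgroup = fcluster(tree_linkage, t=6, criterion="maxclust")

# A shallow decision tree reveals which items discriminate the subgroups best.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(scores, subgroup)
print(export_text(clf, feature_names=[f"item_{i}" for i in range(n_items)]))
```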
[link]
Modern societies are characterized by a large degree of pluralism in social, political, and cultural opinions. In addition, there is evidence that humans tend to form distinct subgroups (clusters), characterized by opinion consensus within the clusters and differences between them. So far, however, formal theories of social influence have difficulty explaining this coexistence of global diversity and opinion clustering. This paper identifies a missing ingredient that helps to fill this gap: the striving for uniqueness. Besides being influenced by their social environment, individuals also show a desire to hold a unique opinion. Thus, when too many other members of the population hold a similar opinion, individuals tend to adopt an opinion that distinguishes them from others. This notion is rooted in classical sociological theory and is supported by recent empirical research. The authors develop a computational model of opinion dynamics in human populations and demonstrate that the new model can explain opinion clustering. They conduct simulation experiments to study the conditions of clustering. Based on the results, the authors discuss preconditions for the persistence of pluralistic societies in a globalizing world.
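A sketch of an agent-based model in the spirit described above, combining assimilative social influence with a striving for uniqueness; the update rules, thresholds, and parameters are illustrative assumptions rather than the authors' exact specification.

```python
# Illustrative opinion-dynamics model: agents assimilate toward similar others,
# but move away from the local mean when too many others hold nearly the same
# opinion (striving for uniqueness). Parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(4)
n_agents, n_updates = 200, 20_000
influence_range = 0.2    # only opinions this close exert assimilative influence
crowding_range = 0.05    # neighbours this close count as holding "the same" opinion
crowding_limit = 10      # more neighbours than this triggers differentiation
step = 0.05

opinions = rng.uniform(-1.0, 1.0, n_agents)

for _ in range(n_updates):
    i = rng.integers(n_agents)
    dist = np.abs(opinions - opinions[i])

    if np.sum(dist < crowding_range) - 1 > crowding_limit:
        # Too crowded: push away from the local mean opinion.
        local_mean = opinions[dist < crowding_range].mean()
        direction = np.sign(opinions[i] - local_mean)
        opinions[i] += step * (direction if direction != 0 else 1.0)
    else:
        # Social influence: move toward a randomly chosen, sufficiently similar other.
        j = rng.integers(n_agents)
        if dist[j] < influence_range:
            opinions[i] += step * (opinions[j] - opinions[i])

    opinions[i] = np.clip(opinions[i], -1.0, 1.0)

# Count clusters as runs of agents separated by gaps larger than 0.05.
gaps = np.diff(np.sort(opinions))
print("opinion clusters:", 1 + int(np.sum(gaps > 0.05)))
```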
[link]
Genomic tools such as the availability of the Drosophila genome sequence, the relative ease of stable transformation, and DNA microarrays have made the fruit fly a powerful model in insecticide toxicology research. We have used transgenic promoter-GFP constructs to document the detailed pattern of induced Cyp6a2 gene expression in larval and adult Drosophila tissues. We also compared various insecticides and xenobiotics for their ability to induce this cytochrome P450 gene, and show that the pattern of Cyp6a2 inducibility is comparable to that of vertebrate CYP2B genes, and different from that of vertebrate CYP1A genes, suggesting a degree of evolutionary conservation for the “phenobarbital-type” induction mechanism. Our results are compared to the increasingly diverse reports on P450 induction that can be gleaned from whole genome or from “detox” microarray experiments in Drosophila. These suggest that only a third of the genomic repertoire of CYP genes is inducible by xenobiotics, and that there are distinct subsets of inducers/induced genes, suggesting multiple xenobiotic transduction mechanisms. A relationship between induction and resistance is not supported by expression data from the literature. The relative abundance of expression data now available is in contrast to the paucity of studies on functional expression of P450 enzymes, and this remains a challenge for our understanding of the toxicokinetic aspects of insecticide action. |
[link]
The Escherichia coli chemotaxis network is a model system for biological signal processing. In E. coli, transmembrane receptors responsible for signal transduction assemble into large clusters containing several thousand proteins. These sensory clusters have been observed at cell poles and future division sites. Despite extensive study, it remains unclear how chemotaxis clusters form, what controls cluster size and density, and how the cellular location of clusters is robustly maintained in growing and dividing cells. Here, we use photoactivated localization microscopy (PALM) to map the cellular locations of three proteins central to bacterial chemotaxis (the Tar receptor, CheY, and CheW) with a precision of 15 nm. We find that cluster sizes are approximately exponentially distributed, with no characteristic cluster size. One-third of Tar receptors are part of smaller lateral clusters and not of the large polar clusters. Analysis of the relative cellular locations of 1.1 million individual proteins (from 326 cells) suggests that clusters form via stochastic self-assembly. The super-resolution PALM maps of E. coli receptors support the notion that stochastic self-assembly can create and maintain approximately periodic structures in biological membranes, without direct cytoskeletal involvement or active transport. |
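As a small sketch of what "approximately exponentially distributed, with no characteristic cluster size" means in practice, the snippet below fits an exponential to cluster sizes by maximum likelihood (the fitted scale is simply the sample mean) and compares observed and predicted tail frequencies. The cluster sizes here are simulated, not the paper's PALM measurements.

```python
# Check whether cluster sizes look exponential: the maximum-likelihood exponential
# fit is just the sample mean, and the tail should fall off as exp(-s/mean).
# Cluster sizes below are simulated stand-ins, not the paper's PALM data.
import numpy as np

rng = np.random.default_rng(5)
cluster_sizes = rng.geometric(p=1 / 40, size=2000)   # stand-in: proteins per cluster, mean ~40

mean_size = cluster_sizes.mean()
for s in (50, 100, 200):
    observed = np.mean(cluster_sizes > s)
    predicted = np.exp(-s / mean_size)               # exponential tail: P(size > s)
    print(f"P(size > {s:3d})  observed {observed:.3f}   exponential fit {predicted:.3f}")
```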
[link]
Published in the New England Journal of Medicine, Murray's article can be seen as a case study and a high-level overview of her longer-form and much more detailed work in The Oncomouse that roared: Resistance and accommodation to patenting in academic science. The article discusses the issuing of a patent covering most lines of embryonic stem cells to James Thomson at the University of Wisconsin, the problems around licensing of stem cells that followed, and the ultimately successful challenge of the patent by a consumer watchdog organization. As in her paper on the oncomouse, Murray argues that there are two ideologies, or major institutional models, in conflict: open science and the mode of commercialization. In this article, however, Murray takes a much more prescriptive stance and argues that, "it ought to be possible to create a stem-cell market that provides both rapid, unconditional access to the academic researchers and more circumscribed access to commercial scientists, along with higher prices and profit sharing." #### Theoretical and practical relevance: Murray's prescription seems to parallel the "two economies" model argued for by Lessig in a blog post and in his book Remix: Making art and commerce thrive in the hybrid economy. Unlike Lessig, who is geared more toward issues of culture and who is pursuing Creative Commons as the means toward this form of production, Murray is less clear about what a final arrangement might look like and is speaking to a more scientific community. Interestingly, Science Commons seems to have done little to pursue the strategy that Murray suggests, focusing much more strongly on a firm position of completely open science open to commercialization. This latter option seems more likely to gain the benefits to commerce and the economy of open science detailed by Rosenberg and Brooks (for example).
[link]
Walsh, Cho, and Cohen offer a very short, two-page report in Science on a survey of scientists that aimed to measure or detect an anticommons effect, as a way of providing an empirical test of the theory suggested by Heller and Eisenberg in Can patents deter innovation? The anticommons in biomedical research. In their survey, they ask about material transfers, whether they are refused, and why. The authors surveyed 414 biomedical researchers in universities, government, and nonprofits, with a 40% response rate. The survey shows that researchers have been instructed by their institutions to pay more attention to patents but that very few do. The authors conclude that "patents on knowledge inputs rarely impose a significant burden on biomedical research." That said, they see reasonably frequent non-compliance with requests for shared material or knowledge. They probe this with two logistic regressions (a sketch appears below). Although they find that drug-related requests and competitiveness are associated with a reduced likelihood of sharing, they find no effect for patents. People who refuse requests most often tend to have a more commercial orientation, be more competitive, have a higher burden in terms of the number of requests they receive, and have published more. The authors also discuss a case study of 93 academics working in a very patent-intensive sub-area and, again, find very little evidence of a negative effect of patents on research. #### Theoretical and practical relevance: The paper has been cited more than 80 times in the last five years. It provides an important citation in research on the effects of patents on scientific innovation.
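A hedged sketch of the kind of logistic regression the summary describes: modelling whether a request for materials is refused as a function of request and requester characteristics. The variable names, sample, and data below are fabricated for illustration and are not the authors' survey data or specification.

```python
# Illustrative logistic regression of request refusal on request/requester traits.
# Data and variable names are fabricated; this is not the authors' survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 414
requests = pd.DataFrame({
    "refused": rng.integers(0, 2, n),           # 1 = material-transfer request denied
    "drug_related": rng.integers(0, 2, n),      # request concerns a potential drug
    "competitor": rng.integers(0, 2, n),        # requester is a scientific competitor
    "patented_input": rng.integers(0, 2, n),    # requested input is covered by a patent
    "request_burden": rng.poisson(12, n),       # number of requests the supplier receives
    "publications": rng.poisson(30, n),         # supplier's publication count
})

model = smf.logit(
    "refused ~ drug_related + competitor + patented_input + request_burden + publications",
    data=requests,
).fit(disp=False)
print(model.summary())
```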
[link]
Recent evidence suggests that many malignancies, including breast cancer, are driven by a cellular subcomponent that displays stem cell-like properties. The protein phosphatase and tensin homolog (PTEN) is inactivated in a wide range of human cancers, an alteration that is associated with a poor prognosis. Because PTEN has been reported to play a role in the maintenance of embryonic and tissue-specific stem cells, we investigated the role of the PTEN/Akt pathway in the regulation of normal and malignant mammary stem/progenitor cell populations. We demonstrate that activation of this pathway, via PTEN knockdown, enriches for normal and malignant human mammary stem/progenitor cells in vitro and in vivo. Knockdown of PTEN in normal human mammary epithelial cells enriches for the stem/progenitor cell compartment, generating atypical hyperplastic lesions in humanized NOD/SCID mice. Akt-driven stem/progenitor cell enrichment is mediated by activation of the Wnt/β-catenin pathway through the phosphorylation of GSK3-β. In contrast to chemotherapy, the Akt inhibitor perifosine is able to target the tumorigenic cell population in breast tumor xenografts. These studies demonstrate an important role for the PTEN/PI3-K/Akt/β-catenin pathway in the regulation of normal and malignant stem/progenitor cell populations and suggest that agents that inhibit this pathway are able to effectively target tumorigenic breast cancer cells. |
[link]
This paper reviews the known physical origins of hearing and equilibrium in vertebrates, focusing on the results of studies from the 1970s and 80s, particularly on the role of hair bundles in converting sound into electrical potentials in the nervous system. The contemporary understanding of the structural details of the ear is summarized, including the structure of hair cells and mechanoreceptive hair bundles, transduction channels, adaptation across a range of frequencies, and the possibility of direct mechanoelectrical transduction, driven directly by hair motion without second messengers. Particular attention is paid to mechanisms for transduction and frequency tuning, areas of active research and study at the time. Both positive and negative findings are covered, noting areas where further research is needed. Some new micrographs and figures from the author's work are included to tie the review together. Over 100 related papers are cited and synthesized into the review, most by other authors.
[link]
During the development of neural circuitry, neurons of different kinds establish specific synaptic connections by selecting appropriate targets from large numbers of alternatives. The range of alternative targets is reduced by well organised patterns of growth, termination, and branching that deliver the terminals of appropriate pre- and postsynaptic partners to restricted volumes of the developing nervous system. We use the axons of embryonic Drosophila sensory neurons as a model system in which to study the way in which growing neurons are guided to terminate in specific volumes of the developing nervous system. The mediolateral positions of sensory arbors are controlled by the response of Robo receptors to a Slit gradient. Here we make a genetic analysis of factors regulating position in the dorso-ventral axis. We find that dorso-ventral layers of neuropile contain different levels and combinations of Semaphorins. We demonstrate the existence of a central to dorsal and central to ventral gradient of Sema 2a, perpendicular to the Slit gradient. We show that a combination of Plexin A (Plex A) and Plexin B (Plex B) receptors specifies the ventral projection of sensory neurons by responding to high concentrations of Semaphorin 1a (Sema 1a) and Semaphorin 2a (Sema 2a). Together our findings support the idea that axons are delivered to particular regions of the neuropile by their responses to systems of positional cues in each dimension. |
[link]
For all animals, the taste sense is crucial to detect and avoid ingesting toxic molecules. Many toxins are synthesized by plants as a defense mechanism against insect predation. One example of such a natural toxic molecule is L-canavanine, a nonprotein amino acid found in the seeds of many legumes. Whether and how insects are informed that some plants contain L-canavanine remains to be elucidated. In insects, the taste sense relies on gustatory receptors forming the gustatory receptor (Gr) family. Gr proteins display highly divergent sequences, suggesting that they could cover the entire range of tastants. However, one cannot exclude the possibility of evolutionarily independent taste receptors. Here, we show that L-canavanine is not only toxic, but is also a repellent for Drosophila. Using a pharmacogenetic approach, we find that flies sense food containing this poison by the DmX receptor. DmXR is an insect orphan G-protein-coupled receptor that has partially diverged in its ligand binding pocket from the metabotropic glutamate receptor family. Blockade of DmXR function with an antagonist lowers the repulsive effect of L-canavanine. In addition, disruption of the DmXR encoding gene, called mangetout (mtt), suppresses the L-canavanine repellent effect. To avoid the ingestion of L-canavanine, DmXR expression is required in bitter-sensitive gustatory receptor neurons, where it triggers the premature retraction of the proboscis, thus leading to the end of food searching. These findings show that the DmX receptor, which does not belong to the Gr family, fulfills a gustatory function necessary to avoid eating a natural toxin. |
[link]
In the Neotropics, most plants depend on animals for pollination. Solitary bees are the most important vectors, and among them members of the tribe Centridini depend on oil from flowers (mainly Malpighiaceae) to feed their larvae. This specialized relationship within 'the smallest of all worlds' (a whole pollination network) could result in a 'tiny world' different from the whole system. This 'tiny world' would have higher nestedness, shorter path lengths, lower modularity and higher resilience if compared with the whole pollination network. In the present study, we contrasted a network of oil-flowers and their visitors from a Brazilian steppe ('caatinga') to whole pollination networks from all over the world. A network approach was used to measure network structure and, finally, to test fragility. The oil-flower network studied was more nested (NODF = 0·84, N = 0·96) than all of the whole pollination networks studied. Average path lengths in the two-mode network were shorter (one node, both for bee and plant one-mode network projections) and modularity was lower (M = 0·22 and four modules) than in all of the whole pollination networks. Extinctions had no or small effects on the network structure, with an average change in nestedness smaller than 2% in most of the cases studied; and only two species caused coextinctions. The higher the degree of the removed species, the stronger the effect and the higher the probability of a decrease in nestedness. We conclude that the oil-flower subweb is more cohesive and resilient than whole pollination networks. Therefore, the Malpighiaceae have a robust pollination service in the Neotropics. Our findings reinforce the hypothesis that each ecological service is in fact a mosaic of different subservices with a hierarchical structure ('webs within webs'). Theoretical and practical relevance: This paper goes one step further in the hypothesis of mutualistic modules, evidencing that ecosystem services may be a mosaic of subservices with different properties. Furthermore, this finding has important implications for service-oriented conservation programs, as their planning should take into account this hierarchical structure. |
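A small sketch of two of the network measures mentioned above (average path length and modularity) on a toy plant-bee web; nestedness (NODF) requires a specialised routine and is omitted here. The species and interactions are invented for the example.

```python
# Toy bipartite pollination web: path length, modularity, and a one-mode projection.
# Species and links are invented; NODF nestedness would need a dedicated routine.
import networkx as nx
from networkx.algorithms import bipartite, community

edges = [("plant1", "beeA"), ("plant1", "beeB"), ("plant2", "beeA"), ("plant2", "beeC"),
         ("plant3", "beeA"), ("plant3", "beeB"), ("plant4", "beeA"), ("plant4", "beeD")]
web = nx.Graph(edges)
plants = {plant for plant, bee in edges}

# Average path length of the two-mode (plant-bee) network.
print("average path length:", round(nx.average_shortest_path_length(web), 2))

# Modularity of a partition found by a standard community-detection heuristic.
modules = community.greedy_modularity_communities(web)
print("modules:", len(modules), "modularity:", round(community.modularity(web, modules), 2))

# One-mode projection: plants are linked if they share at least one bee visitor.
plant_net = bipartite.projected_graph(web, plants)
print("plant-projection path length:", round(nx.average_shortest_path_length(plant_net), 2))
```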
[link]
Ongoing declines in production of the world's fisheries may have serious ecological and socioeconomic consequences. As a result, a number of international efforts have sought to improve management and prevent overexploitation, while helping to maintain biodiversity and a sustainable food supply. Although these initiatives have received broad acceptance, the extent to which corrective measures have been implemented and are effective remains largely unknown. We used a survey approach, validated with empirical data, and enquiries to over 13,000 fisheries experts (of which 1,188 responded) to assess the current effectiveness of fisheries management regimes worldwide; for each of those regimes, we also calculated the probable sustainability of reported catches to determine how management affects fisheries sustainability. Our survey shows that 7% of all coastal states undergo rigorous scientific assessment for the generation of management policies, 1.4% also have participatory and transparent processes to convert scientific recommendations into policy, and 0.95% also provide for robust mechanisms to ensure the compliance with regulations; none is also free of the effects of excess fishing capacity, subsidies, or access to foreign fishing. A comparison of fisheries management attributes with the sustainability of reported fisheries catches indicated that the conversion of scientific advice into policy, through a participatory and transparent process, is at the core of achieving fisheries sustainability, regardless of other attributes of the fisheries. Our results illustrate the great vulnerability of the world's fisheries and the urgent need to meet well-identified guidelines for sustainable management; they also provide a baseline against which future changes can be quantified. Author Summary: Global fisheries are in crisis: marine fisheries provide 15% of the animal protein consumed by humans, yet 80% of the world's fish stocks are either fully exploited, overexploited or have collapsed. Several international initiatives have sought to improve the management of marine fisheries, hoping to reduce the deleterious ecological and socioeconomic consequence of the crisis. Unfortunately, the extent to which countries are improving their management and whether such intervention ensures the sustainability of the fisheries remain unknown. Here, we surveyed 1,188 fisheries experts from every coastal country in the world for information about the effectiveness with which fisheries are being managed, and related those results to an index of the probable sustainability of reported catches. We show that the management of fisheries worldwide is lagging far behind international guidelines recommended to minimize the effects of overexploitation. Only a handful of countries have a robust scientific basis for management recommendations, and transparent and participatory processes to convert those recommendations into policy while also ensuring compliance with regulations. Our study also shows that the conversion of scientific advice into policy, through a participatory and transparent process, is at the core of achieving fisheries sustainability, regardless of other attributes of the fisheries. These results illustrate the benefits of participatory, transparent, and science-based management while highlighting the great vulnerability of the world's fisheries services. The data for each country can be viewed at http://as01.ucis.dal.ca/ramweb/surveys/fishery_assessment .
[link]
Synaptic plasticity is widely believed to constitute a key mechanism for modifying functional properties of neuronal networks. This belief implicitly implies, however, that synapses, when not driven to change their characteristics by physiologically relevant stimuli, will maintain these characteristics over time. How tenacious are synapses over behaviorally relevant time scales? To begin to address this question, we developed a system for continuously imaging the structural dynamics of individual synapses over many days, while recording network activity in the same preparations. We found that in spontaneously active networks, distributions of synaptic sizes were generally stable over days. Following individual synapses revealed, however, that the apparently static distributions were actually steady states of synapses exhibiting continual and extensive remodeling. In active networks, large synapses tended to grow smaller, whereas small synapses tended to grow larger, mainly during periods of particularly synchronous activity. Suppression of network activity only mildly affected the magnitude of synaptic remodeling, but dependence on synaptic size was lost, leading to the broadening of synaptic size distributions and increases in mean synaptic size. From the perspective of individual neurons, activity drove changes in the relative sizes of their excitatory inputs, but such changes continued, albeit at lower rates, even when network activity was blocked. Our findings show that activity strongly drives synaptic remodeling, but they also show that significant remodeling occurs spontaneously. Whereas such spontaneous remodeling provides an explanation for "synaptic homeostasis"-like processes, it also raises significant questions concerning the reliability of individual synapses as sites for persistently modifying network function. Author Summary: Neurons communicate via synapses, and it is believed that activity-dependent modifications to synaptic connections—synaptic plasticity—is a fundamental mechanism for stably altering the function of neuronal networks. This belief implies that synapses, when not driven to change their properties by physiologically relevant stimuli, should preserve their individual properties over time. Otherwise, physiologically relevant modifications to network function would be gradually lost or become inseparable from stochastically occurring changes in the network. So do synapses actually preserve their properties over behaviorally relevant time scales? To begin to address this question, we examined the structural dynamics of individual postsynaptic densities for several days, while recording and manipulating network activity levels in the same networks. We found that, as expected in highly active networks, individual synapses undergo continual and extensive remodeling over time scales of many hours to days. However, we also observed that synaptic remodeling continues at very significant rates even when network activity is completely blocked. Our findings thus indicate that the capacity of synapses to preserve their specific properties might be more limited than previously thought, raising intriguing questions about the long-term reliability of individual synapses.
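As an illustrative simulation of the population-level picture described above, under my own simplified assumptions rather than the authors' analysis: size-dependent remodeling (large synapses tending to shrink, small ones to grow) keeps the distribution of sizes roughly stationary even though individual synapses drift, whereas the same amount of size-independent remodeling lets the distribution broaden.

```python
# Simplified simulation: each "synapse" gets daily multiplicative remodeling noise;
# in the size-dependent condition an extra term pulls large synapses down and small
# synapses up. The parameters are arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_synapses, n_days = 5000, 50

def simulate(size_dependent: bool) -> np.ndarray:
    sizes = rng.lognormal(mean=0.0, sigma=0.5, size=n_synapses)
    set_point = sizes.mean()
    for _ in range(n_days):
        noise = rng.normal(0.0, 0.15, n_synapses) * sizes          # spontaneous remodeling
        drift = -0.1 * (sizes - set_point) if size_dependent else 0.0
        sizes = np.clip(sizes + drift + noise, 0.05, None)
    return sizes

for label, flag in [("size-dependent remodeling  ", True), ("size-independent remodeling", False)]:
    final = simulate(flag)
    print(f"{label}: mean size {final.mean():.2f}, spread (sd) {final.std():.2f}")
```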
[link]
From the late 1980s onward, the term "bioinformatics" has mostly been used to refer to computational methods for the comparative analysis of genome data. However, the term was originally more widely defined as the study of informatic processes in biotic systems. In this essay, the author traces this early history (from a personal point of view) and argues that the original meaning of the term is re-emerging.
[link]
As hair bundles move, viscous friction between stereocilia and the surrounding liquid poses a physical challenge to the ear’s high sensitivity and sharp frequency selectivity. This letter proposes that some of that energy is used for frequency-selective sound amplification, through fluid–structure interaction between the liquid within the hair bundle and the stereocilia. A dynamic model is proposed to simulate hair bundles in a viscous environment and to see what large- and small-scale insights can be gained. Finite-element analysis, a submodel of hydrodynamic forces, stochastic simulation, and models of interferometric measurement are all used to simulate both a hair bundle in natural conditions and what might be observed in an experiment involving it. Forces between stereocilia are estimated, and the results suggest that the closeness of the stereocilia reduces the drag between them, supporting a sliding but not a squeezing mode of motion. Tip links may couple mechanotransduction to this low-friction sliding mode, with motion between neighboring stereocilia of less than 1 nm when the hair bundle moves the larger distance (on the order of 10 nm) needed to stimulate its channels.
[link]
The extent by which different cellular components generate phenotypic diversity is an ongoing debate in evolutionary biology that is yet to be addressed by quantitative comparative studies. We conducted an in vivo mass-spectrometry study of the phosphoproteomes of three yeast species (Saccharomyces cerevisiae, Candida albicans, and Schizosaccharomyces pombe) in order to quantify the evolutionary rate of change of phosphorylation. We estimate that kinase-substrate interactions change, at most, two orders of magnitude more slowly than transcription factor (TF)-promoter interactions. Our computational analysis linking kinases to putative substrates recapitulates known phosphoregulation events and provides putative evolutionary histories for the kinase regulation of protein complexes across 11 yeast species. To validate these trends, we used the E-MAP approach to analyze over 2,000 quantitative genetic interactions in S. cerevisiae and Sc. pombe, which demonstrated that protein kinases, and to a greater extent TFs, show lower than average conservation of genetic interactions. We propose therefore that protein kinases are an important source of phenotypic diversity. Natural selection at a population level requires phenotypic diversity, which at the molecular level arises by mutation of the genome of each individual. What kinds of changes at the level of the DNA are most important for the generation of phenotypic differences remains a fundamental question in evolutionary biology. One well-studied source of phenotypic diversity is mutation in gene regulatory regions that results in changes in gene expression, but what proportion of phenotypic diversity is due to such mutations is not entirely clear. We investigated the relative contribution to phenotypic diversity of mutations in protein-coding regions compared to mutations in gene regulatory sequences. Given the important regulatory role played by phosphorylation across biological systems, we focused on mutations in protein-coding regions that alter protein-protein interactions involved in the binding of kinases to their substrate proteins. We studied the evolution of this "phosphoregulation" by analyzing the in vivo complement of phosphorylated proteins (the "phosphoproteome") in three highly diverged yeast species—the budding yeast Saccharomyces cerevisiae, the pathogenic yeast Candida albicans, and the fission yeast Schizosaccharomyces pombe—and integrating those data with existing data on thousands of known genetic interactions from S. cerevisiae and Sc. pombe. We show that kinase-substrate interactions are altered at a rate that is at most two orders of magnitude slower than the alteration of transcription factor (TF)-promoter interactions, whereas TFs and kinases both show a faster than average rate of functional divergence estimated by the cross-species analysis of genetic interactions. Our data provide a quantitative estimate of the relative frequencies of different kinds of functionally relevant mutations and demonstrate that, like mutations in gene regulatory regions, mutations that result in changes in kinase-substrate interactions are an important source of phenotypic diversity. |
[link]
The regulation of filopodia plays a crucial role during neuronal development and synaptogenesis. Axonal filopodia, which are known to originate presynaptic specializations, are regulated in response to neurotrophic factors. The structural components of filopodia are actin filaments, whose dynamics and organization are controlled by ensembles of actin-binding proteins. How neurotrophic factors regulate these latter proteins remains, however, poorly defined. Here, using a combination of mouse genetic, biochemical, and cell biological assays, we show that genetic removal of Eps8, an actin-binding and regulatory protein enriched in the growth cones and developing processes of neurons, significantly augments the number and density of vasodilator-stimulated phosphoprotein (VASP)-dependent axonal filopodia. The reintroduction of Eps8 wild type (WT), but not an Eps8 capping-defective mutant, into primary hippocampal neurons restored axonal filopodia to WT levels. We further show that the actin barbed-end capping activity of Eps8 is inhibited by brain-derived neurotrophic factor (BDNF) treatment through MAPK-dependent phosphorylation of Eps8 residues S624 and T628. Additionally, an Eps8 mutant, impaired in the MAPK target sites (S624A/T628A), displays increased association to actin-rich structures, is resistant to BDNF-mediated release from microfilaments, and inhibits BDNF-induced filopodia. The opposite is observed for a phosphomimetic Eps8 (S624E/T628E) mutant. Thus, collectively, our data identify Eps8 as a critical capping protein in the regulation of axonal filopodia and delineate a molecular pathway by which BDNF, through MAPK-dependent phosphorylation of Eps8, stimulates axonal filopodia formation, a process with crucial impacts on neuronal development and synapse formation. Neurons communicate with each other via specialized cell-cell junctions called synapses. The proper formation of synapses ("synaptogenesis") is crucial to the development of the nervous system, but the molecular pathways that regulate this process are not fully understood. External cues, such as brain-derived neurotrophic factor (BDNF), trigger synaptogenesis by promoting the formation of axonal filopodia, thin extensions projecting outward from a growing axon. Filopodia are formed by elongation of actin filaments, a process that is regulated by a complex set of actin-binding proteins. Here, we reveal a novel molecular circuit underlying BDNF-stimulated filopodia formation through the regulated inhibition of actin-capping factor activity. We show that the actin-capping protein Eps8 down-regulates axonal filopodia formation in neurons in the absence of neurotrophic factors. In contrast, in the presence of BDNF, the kinase MAPK becomes activated and phosphorylates Eps8, leading to inhibition of its actin-capping function and stimulation of filopodia formation. Our study, therefore, identifies actin-capping factor inhibition as a critical step in axonal filopodia formation and likely in new synapse formation. |
[link]
Following one of the basic principles in evolutionary biology that complex life forms derive from more primitive ancestors, it has long been believed that the higher animals, the Bilateria, arose from simpler (diploblastic) organisms such as the cnidarians (corals, polyps, and jellyfishes). A large number of studies, using different datasets and different methods, have tried to determine the most ancestral animal group as well as the ancestor of the higher animals. Here, we use “total evidence” analysis, which incorporates all available data (including morphology, genome, and gene expression data), and come to a surprising conclusion. The Bilateria and Cnidaria (together with the other diploblastic animals) are in fact sister groups: that is, they evolved in parallel from a very simple common ancestor. We conclude that the higher animals (Bilateria) and lower animals (diploblasts) probably separated very early, at the very beginning of metazoan animal evolution, and independently evolved their complex body plans, including body axes, nervous system, sensory organs, and other characteristics. The striking similarities in several complex characters (such as the eyes) resulted from both lineages using the same basic genetic tool kit, which was already present in the common ancestor. The study identifies Placozoa as the most basal diploblast group and thus a living fossil genome that nicely demonstrates, not only that complex genetic tool kits arise before morphological complexity, but also that these kits may form similar morphological structures in parallel. |
[link]
Aquaporins are transmembrane proteins that facilitate the flow of water through cellular membranes. An unusual characteristic of yeast aquaporins is that they frequently contain an extended N terminus of unknown function. Here we present the X-ray structure of the yeast aquaporin Aqy1 from Pichia pastoris at 1.15 Å resolution. Our crystal structure reveals that the water channel is closed by the N terminus, which arranges as a tightly wound helical bundle, with Tyr31 forming H-bond interactions to a water molecule within the pore and thereby occluding the channel entrance. Nevertheless, functional assays show that Aqy1 has appreciable water transport activity that aids survival during rapid freezing of P. pastoris. These findings establish that Aqy1 is a gated water channel. Mutational studies in combination with molecular dynamics simulations imply that gating may be regulated by a combination of phosphorylation and mechanosensitivity. All living organisms must regulate precisely the flow of water into and out of cells in order to maintain cell shape and integrity. Proteins of one family, the aquaporins, are found in virtually every living organism and play a major role in maintaining water homeostasis by acting as regulated water channels. Here we describe the first crystal structure of a yeast aquaporin, Aqy1, at 1.15 Å resolution, which represents the highest resolution structural data obtained to date for a membrane protein. Using this structural information, we address an outstanding biological question surrounding yeast aquaporins: what is the functional role of the amino-terminal extension that is characteristic of yeast aquaporins? Our structural data show that the amino terminus of Aqy1 fulfills a novel gate-like function by folding to form a cytoplasmic helical bundle with a tyrosine residue entering the water channel and occluding the cytoplasmic entrance. Molecular dynamics simulations and functional studies in combination with site-directed mutagenesis suggest that water flow is regulated through a combination of mechanosensitive gating and post-translational modifications such as phosphorylation. Our study therefore provides insight into a unique mechanism for the regulation of water flux in yeast. |
[link]
Published in Science in 1998, Heller and Eisenberg frame their argument explicitly in terms of Hardin's classic piece, The tragedy of the commons, and apply it to biomedical research, although it has been used and cited as relevant more broadly. They argue that just as too much open access to an expendable public resource can create a tragedy of the commons, too much ownership -- especially in an intellectual domain -- can create thickets that limit the progress of science more broadly. They argue that, "privatization can solve one tragedy but cause another." Heller and Eisenberg are reacting, in large part, to the growth of patenting within biomedical science (see Murray (2006) for a more detailed case study of this in the area of mouse research). Their core argument is that the anticommons emerges when the rights necessary to practice research are split up among a large number, and a large variety, of different researchers. This essentially introduces a set of complex collective action problems, beyond those introduced by patent licensing, which they suggest may create an important barrier to scientific progress. They explain quite clearly that, "the tragedy of the anticommons refers to the more complex obstacles that arise when a user needs access to multiple patented inputs to create a single useful output." They use the examples of patents on concurrent fragments, which they suggest may be creating thickets, and of reach-through licensing agreements to make this point. They end by describing why heterogeneous interests among rights holders (created by different types of organizations, i.e., non-profits and for-profits), transaction costs around bundling, and cognitive biases whereby scientists think too highly of their own work might prevent institutional solutions to the anticommons that might reduce costs (e.g., ASCAP in the area of copyright). Theoretical and practical relevance: Heller and Eisenberg's article has been cited more than 1,300 times in the last 12 years and has become a major article in the literature critical of patents in science. The metaphor of the anticommons has become frequently cited in the areas of open innovation, arguments in favor of open science, and critiques of the patent system more generally. That said, the article seems to be somewhat misused by a number of "downstream" academics citing it. The article is often treated as an argument against particular patents. In fact, its argument is carefully couched in terms of the problems of patents in aggregate. In that sense, Murray and Stern's econometric article testing the hypothesis is a somewhat rough match for the theory offered. The article was also tested by Walsh et al. (2005), who found no evidence of an anticommons effect. |
[link]
We used allometric scaling to explain why the regular replacement of the primary flight feathers requires disproportionately more time for large birds. Primary growth rate scales to mass (M) as $M^{0.171}$, whereas the summed length of the primaries scales almost twice as fast ($M^{0.316}$). The ratio of length (mm) to rate (mm/day), which would be the time needed to replace all the primaries one by one, increases as the 0.14 power of mass ($M^{0.316}/M^{0.171} = M^{0.145}$), illustrating why the time required to replace the primaries is so important to life history evolution in large birds. Smaller birds generally replace all their flight feathers annually, but larger birds that fly while renewing their primaries often extend the primary molt over two or more years. Most flying birds exhibit one of three fundamentally different modes of primary replacement, and the size distributions of birds associated with these replacement modes suggest that birds that replace their primaries in a single wave of molt cannot approach the size of the largest flying birds without first transitioning to a more complex mode of primary replacement. Finally, we propose two models that could account for the 1/6 power allometry between feather growth rate and body mass, both based on a length-to-surface relationship that transforms the linear, cylindrical growing region responsible for producing feather tissue into an essentially two-dimensional structure. These allometric relationships offer a general explanation for flight feather replacement requiring disproportionately more time for large birds. The pace of life varies with body size and is generally slower among larger organisms. Larger size creates opportunities but also establishes constraints on time-dependent processes. Flying birds depend on large wing feathers that deteriorate over time and must be replaced through molting. The lengths of flight feathers increase as the 1/3 power of body mass, as one expects for a length-to-volume ratio. However, feather growth rate increases as only the 1/6 power of body mass, possibly because a two-dimensional feather is produced by a one-dimensional growing region. The longer time required to grow a longer feather constrains the way in which birds molt, because partially grown feathers reduce flight efficiency. Small birds quickly replace their flight feathers, often growing several feathers at a time in each wing. Larger species either prolong molt over two or more years, adopt complex patterns of multiple feather replacement to minimize gaps in the flight surface, or, among species that do not rely on flight for feeding, simultaneously molt all their flight feathers. We speculate that the extinct 70-kg raptor, Argentavis magnificens, must have undergone such a simultaneous molt, living off fat reserves for the duration. |
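To make the arithmetic concrete, here is a minimal sketch (not from the paper) that applies the exponents quoted above to compare sequential primary-replacement times across body sizes.

```python
# Illustrative only: apply the scaling exponents quoted above to compare how long
# sequential primary replacement would take for birds of different masses.

LENGTH_EXP = 0.316   # summed primary length ~ M^0.316 (from the summary above)
GROWTH_EXP = 0.171   # primary growth rate   ~ M^0.171

def relative_molt_time(mass_ratio):
    """Molt time scales as length/rate, i.e. M^(0.316 - 0.171) = M^0.145."""
    return mass_ratio ** (LENGTH_EXP - GROWTH_EXP)

for ratio in (10, 100, 1000):
    print(f"{ratio:>5}x heavier -> ~{relative_molt_time(ratio):.2f}x longer sequential molt")
```

Under these exponents, a bird 100 times heavier needs roughly twice as long to replace its primaries one by one.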
[link]
Proves the NP-completeness of the total ordering problem: given finite sets $S$ and $R$, where $R$ is a subset of $S \times S \times S$, does there exist a total ordering of the elements of $S$ such that for all $(x, y, z) \in R$, either $x < y < z$ or $z < y < x$? The reduction is from the hypergraph 2-colorability problem with edges of size at most 3. This problem appears in "Computers and Intractability" by Garey and Johnson as problem MS1, the betweenness problem \cite{garey1979computers}. |
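For concreteness, a brute-force sketch of the decision problem as stated (exponential time, of course; the paper's contribution is the hardness reduction, which is not reproduced here):

```python
# Brute-force sketch of the betweenness decision problem stated above (illustrative only;
# the problem is NP-complete, so this only works for tiny instances).
from itertools import permutations

def betweenness_ordering(S, R):
    """Return a total ordering of S satisfying every triple in R, or None if none exists."""
    for order in permutations(S):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[x] < pos[y] < pos[z] or pos[z] < pos[y] < pos[x] for (x, y, z) in R):
            return order
    return None

# Example: b must lie between a and c, and c must lie between b and d.
print(betweenness_ordering(['a', 'b', 'c', 'd'], [('a', 'b', 'c'), ('b', 'c', 'd')]))
```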
[link]
This paper takes up the question of whether rhetorical relations can be automatically derived and classified. It focuses, in particular, on discourse markers. These may be ambiguous (e.g., 'since' and 'yet' have multiple uses and are sometimes, but not always, discourse markers); and these discourse markers may also be missing altogether. The authors comment that: "what is needed is a model which can classify rhetorical relations in the absence of an explicit discourse marker." (p4). Previous work (e.g. Marcu & Echihabi 2002) has suggested creating training data for a classifier by labelling examples which contain an unambiguous lexically marked rhetorical relation, then removing the markers. The main purpose of this paper is to empirically test this. It also provides an interesting theoretical observation: two conditions are needed for training on marked examples to work well. "First, there has to be a certain amount of redundancy between the discourse marker and the general linguistic context, i.e. removing the discourse marker should still leave enough residual information for the classifier to learn how to distinguish different relations." Second, similarity between marked and unmarked examples is needed so that a classifier can make generalizations. The paper suggests that texts with lexically marked and lexically unmarked rhetorical relations may be inherently different, insofar as removing discourse markers may change the meaning of a sentence, and that classifiers built by removing markers from marker-classified sentences work little better than chance. |
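A minimal sketch of the data-construction step under test, assuming a toy marker-to-relation lexicon of my own (the paper's marker inventory and relation labels are not reproduced here):

```python
# Sketch of the training-data construction being tested: label an example by its unambiguous
# discourse marker, then delete the marker so the classifier must rely on residual context.
# The marker-to-relation lexicon here is a toy stand-in, not the paper's inventory.
MARKER_TO_RELATION = {
    "because": "CAUSE",
    "although": "CONTRAST",
}

def make_training_example(sentence):
    """Return (relation_label, marker-stripped sentence), or None if no known marker occurs."""
    words = sentence.split()
    for marker, relation in MARKER_TO_RELATION.items():
        if marker in (w.lower().strip(",.") for w in words):
            stripped = " ".join(w for w in words if w.lower().strip(",.") != marker)
            return relation, stripped
    return None

print(make_training_example("The picnic was cancelled because it rained."))
# -> ('CAUSE', 'The picnic was cancelled it rained.')
```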
[link]
The simplicity of the Web has drawbacks, including the difficulty of integrating information from multiple sources. Mashups may combine information from multiple sources in a custom integration, and user communities may collaborate to annotate images and videos. However, it is desirable to integrate information by machine: this is the goal of the semantic web. "A major difficulty in realizing this goal is that most Web content is primarily intended for presentation to and consumption by human users; HTML markup is primarily concerned with layout, size, color, and other presentation issues" (59). Furthermore, "This vision of a semantic Web is extremely ambitious and would require solving many long-standing research problems in knowledge representation and reasoning, databases, computational linguistics, computer vision, and agent systems" (59). The paper discusses the use of RDF, RDFS, and OWL (using some examples from the world of the Harry Potter stories). A brief description of ontologies and of the context of Description Logics is also given. It contains a discussion of reasoning systems as well. Theoretical and practical relevance: Compares databases and OWL ontologies and gives a current computer science perspective on the semantic web. Lists some ontology applications as well as specific reasoning systems and ontologies. |
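As a hedged illustration of the kind of machine-readable markup the paper discusses, here are a few RDF triples in the spirit of its Harry Potter examples, built with rdflib; the namespace, classes, and properties below are invented for illustration, not taken from the paper.

```python
# Illustrative only: a handful of RDF triples in the spirit of the paper's Harry Potter
# examples, serialized with rdflib. The namespace, classes, and properties are invented here.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/potter#")
g = Graph()

g.add((EX.Wizard, RDF.type, RDFS.Class))                 # a class of things
g.add((EX.HarryPotter, RDF.type, EX.Wizard))             # an individual belonging to it
g.add((EX.HarryPotter, EX.attendsSchool, EX.Hogwarts))   # a relationship between individuals
g.add((EX.HarryPotter, RDFS.label, Literal("Harry Potter")))

print(g.serialize(format="turtle"))
```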
[link]
An opinion piece in Communications of the ACM by two CS professors describing the current situation in CS publishing: unlike other academic disciplines that emphasize publishing in peer-reviewed journals, CS as a discipline emphasizes publication at conferences. They theorize this is due to a number of factors:

* Conferences give faster review and publication turnarounds than journals (implied but not stated: CS is a rapidly moving field where this is particularly vital).
* Publicity. "The best way to get your sub-discipline to know about your results is to publish them in the leading conference for that subdiscipline."

And it is problematic for a few reasons:

* Conference papers are usually required to be shorter than journal ones, meaning that it's harder to explain results in reproducible detail.
* Conference papers are often not reviewed as thoroughly as journal ones.
* CS as a discipline has splintered into so many subfields and their corresponding conferences that presenting at a conference doesn't actually disseminate work to everyone who should see it.

The authors go on to suggest that the CS community shift their focus to journal publication for more thoughtful certification of quality work, and give a number of things that could support such a shift:

* Use centralized web archives to store papers publicly online.
* Speed up journal review cycles.
* Make everyone who submits a paper "pay" to have that paper reviewed by reviewing papers themselves.
* Allow multiple certifications per paper -- that is, make it OK for a paper to get reviewed and approved by two or more publications. |
[link]
Authors provide a privacy-preserving targeted ad system (PPOAd) via a User Ad Proxy which facilitates the anonymous expression of ad preferences and uses a blacklistable unlinkable credential system for registration credentials and an accountable ecash scheme for ad clicks. User information is only revealed if a user clicks on an ad too many times or attempts to double-spend an ad click allotment. |
[link]
This paper proposes a taxonomy of argumentation models, distinguishing three main types of models, and comparing models in each of these categories:

1. monological models - micro structure
2. dialogical models - macro structure
3. rhetorical models - audience's perception

### Monological models

Monological models view arguments as a tentative proof, and focus on the internal structure of the chain of inference rules relating premises to conclusions.

### Dialogical models

Dialogical models emphasize the relationship between arguments. An argument can be seen as a dialogue game, where parties defend their viewpoint. In this view, argumentation is 'defeasible' reasoning.

### Rhetorical models

Rhetorical models study how arguments are used as a means of persuasion; they consider the audience's perception, and may relate to evaluative judgements (rather than truth).

### Distinguishing between the models

Monological models are generally about the internal structure; dialogical models are generally about the external structure. Rhetorical models are external to the argument, considering the communication aspects.

### Joint models

Both rhetorical and dialogical (Bench-Capon 2003; Bentahar et al. 2007b). Both monological and dialogical (Bench-Capon 1989; Farley and Freeman 1995; the Atkinson et al. 2006 model). Figure 1 summarizes the taxonomy, indicating the structure, foundation, and linkage of each type of model. The paper also presents an extensive description of various models, explaining the advantages and limits of each argumentation scheme considered.

#### Theoretical and practical relevance:

Argumentation is an everyday human activity and computational argumentation is also widespread; this paper works towards developing a "global view of existing argumentation models and methods". This is a seminal paper in argumentation which references and describes a large body of work, making sense of it with the taxonomy described. The three types of models complement each other and should be combined. |
[link]
This builds on the work of Automatic detection of arguments in legal texts; whereas that paper used argumentative texts from multiple domains (including newspapers and social media, despite the title), this work is restricted to the legal domain. Besides detecting argumentative and non-argumentative sentences, premises and conclusions are also detected. Additional features are added to analyze the importance of relations between sentences.

#### Procedure

29 admissibility reports and 25 legal cases were randomly selected from the European Court of Human Rights (August 2006 and December 2006). These contain facts, complaints, the law, and final conclusions from judges, expressed in long and complex sentences. These were manually analyzed by two lawyers to indicate whether they contained arguments. There were 12,904 sentences (10,133 non-argumentative and 2,771 argumentative), which included 2,355 premises and 416 conclusions. Average accuracy of the maximum entropy model is 82%, using only the information from the currently analyzed sentence. (Previous experiments used a naive Bayes model; the increased amount of information in this case meant they could not satisfy the independence assumptions of the naive Bayes classifier.) They also experimented with using information in adjacent sentences. In future work they plan to look at the clause level, instead of the sentence level. |
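As a rough sketch of the kind of maximum entropy sentence classifier evaluated here (logistic regression over n-gram counts), with toy sentences, labels, and features of my own rather than the authors' feature set:

```python
# Rough sketch of a maximum entropy sentence classifier of the kind evaluated here,
# implemented as logistic regression over n-gram counts; the toy sentences, labels, and
# features are illustrative and not the authors' feature set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The applicant lodged the complaint on 3 May 2006.",           # non-argumentative (fact)
    "Therefore, the Court considers the complaint inadmissible.",  # argumentative (conclusion)
    "The applicant submits that the search violated Article 8.",   # argumentative (premise)
    "The case file was transmitted to the Chamber.",               # non-argumentative (fact)
]
labels = ["non-arg", "arg", "arg", "non-arg"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(sentences, labels)
print(model.predict(["The Court therefore rejects this part of the application."]))
```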
[link]
This paper points out that product reviews contain domain-specific knowledge. To capture the hierarchical relationships between product attributes, they introduce a new approach, "hierarchical learning with sentiment ontology tree" (HL-SOT), in order to:

1. identify attributes
2. identify which attributes have sentiment attached to them

This would enable searching for particular attributes in reviews. Their algorithm is based on H-RLS from Incremental algorithms for hierarchical classification. Evaluations are conducted against a human-labeled data set. |
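A toy sketch of what a sentiment ontology tree might look like as a data structure (node names invented for illustration); the paper's H-RLS-based hierarchical learning is not reproduced here:

```python
# Data-structure sketch only: a toy sentiment ontology tree for a camera domain (node names
# invented for illustration). The paper's H-RLS-based hierarchical classifiers are not shown.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AttributeNode:
    name: str                                         # product attribute, e.g. "battery"
    children: List["AttributeNode"] = field(default_factory=list)
    sentiment: Optional[str] = None                   # per-review label: "pos", "neg", or None

camera = AttributeNode("camera", [
    AttributeNode("image quality", [AttributeNode("noise"), AttributeNode("color")]),
    AttributeNode("battery"),
])

def label(node, review_text):
    """Toy pass: mark a node if its attribute is mentioned (the real system learns this)."""
    if node.name in review_text:
        node.sentiment = "pos" if "good" in review_text else "neg"
    for child in node.children:
        label(child, review_text)

label(camera, "battery life is good")
print(camera.children[1])   # AttributeNode(name='battery', children=[], sentiment='pos')
```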
[link]
This paper shows the derivation of an algorithm that enables the positive rationals to be enumerated in two different ways. One way is known, and is called Calkin-Wilf-Newman enumeration; the second is new and corresponds to a flattening of the Stern-Brocot tree of rationals. The authors show that both enumerations stem from the same simple algorithm, and in this way construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Calkin-Wilf-Newman enumeration. |
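For reference, a minimal sketch of Calkin-Wilf-Newman enumeration using Newman's constant-state successor formula; the paper's unified derivation and its Stern-Brocot variant are not reproduced here.

```python
# Minimal sketch of Calkin-Wilf-Newman enumeration via Newman's successor formula; the
# paper's unified derivation (and its Stern-Brocot counterpart) is not reproduced here.
from fractions import Fraction
from math import floor

def calkin_wilf():
    """Yield each positive rational exactly once: 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ..."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)   # constant-state successor step

gen = calkin_wilf()
print([str(next(gen)) for _ in range(7)])   # ['1', '1/2', '2', '1/3', '3/2', '2/3', '3']
```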
[link]
The first in a three-part series in IEEE Annals, this article gives a historical explanation of the endemic confusion surrounding the stored-program concept. The authors suggest the adoption of more precisely defined alternatives to capture specific aspects of the new approach to computing associated with the 1945 work of von Neumann and his collaborators. The second article, "Engineering--The Miracle of the ENIAC: Implementing the Modern Code Paradigm," examines the conversion of ENIAC to use the modern code paradigm identified in this article. The third, "Los Alamos Bets on ENIAC: Nuclear Monte Carlo Simulations, 1947-1948," examines in detail the first program written in the new paradigm to be executed. |
[link]
This paper presents a microtext corpus derived from hostage negotiation transcripts. This source was chosen for its availability and its density of persuasion: traditional microtext sources (Twitter, SMS, chat rooms) showed "limited occurrences of directly persuasive attempts". Even the negotiation transcripts showed fewer than 12% persuasive utterances. They define persuasion as "the ability of one party to convince another party to act or believe in some desired way". Cialdini's persuasion model was used, focusing on:

1. Reciprocity
2. Commitment and Consistency
3. Scarcity
4. Liking
5. Authority
6. Social Proof |
[link]
Fernanda Viegas, Martin Wattenberg, and Kushal Dave describe a visualization system they have built called history flow that they use to visualize changes made to Wikipedia articles. The authors suggest that their paper makes three distinct contributions:

* History flow itself, which is able to reveal editing patterns in Wikipedia and provide context for editors.
* Several examples of collaboration patterns that become visible using the visualization tool and contribute to the literature on Wikipedia.
* Implications of these patterns for design and governance of online social spaces.

The paper is largely an examination of Wikipedia and the early parts of the paper give background on the site. It uses shortcomings in the design of Wikipedia to motivate the history flow visualization, which essentially depicts articles, over time, with colors representing the authors who contributed the text in question. Examples can be seen online at the IBM History Flow website. The interface is particularly good at representing major deletions and insertions. The authors use a lightweight statistical analysis to reveal patterns of editing on Wikipedia (which, at the time, were not widely studied). In particular, they show vandalism, including mass-deletion, the creation of phony redirects, and the addition of idiosyncratic copy, and show that it rarely stays on the site for more than a few minutes before being removed. They also show a zig-zag pattern that represents negotiation of content, often in the form of edit wars. They also attempt to provide some basic data on the stability of Wikipedia and the growth of articles on average. They suggest something that is now taken for granted by researchers of wikis: that studying Wikipedia may have important implications for other types of work.

#### Theoretical and practical relevance:

The paper is important more for its path-breaking work on Wikipedia -- now with its own track at CHI -- than for the history flow visualization, which has not, for the most part, been widely deployed outside Wikipedia but which seems to hold promise in a variety of other contexts. The paper has been cited more than 400 times, mostly in the academic literature on Wikipedia. This paper is a finalist for the Wikimedia France Research Award. |
[link]
Cost and effort overruns tend to be about 30% and haven't changed much from the 1980s to now. Estimation methods haven't changed either; expert estimation still dominates. But we know more; the author notes 7 lessons supported by research:

1. There Is No “Best” Effort Estimation Model or Method (the important variables depend on context, which also explains overfitting of advanced statistical estimation methods)
2. Clients’ Focus on Low Price Is a Major Reason for Effort Overruns
3. Minimum and Maximum Effort Intervals Are Too Narrow (estimates do not adequately reflect uncertainty)
4. It’s Easy to Mislead Estimation Work and Hard to Recover from Being Misled (strongest when estimators are aware of constraints such as budget, resulting in "estimation anchor" bias even if unintentional)
5. Relevant Historical Data and Checklists Improve Estimation Accuracy
6. Combining Independent Estimates Improves Estimation Accuracy (groupthink leading to more risk has not been found in software estimation research)
7. Estimates Can Be Harmful (too low: low quality; too high: work expands; consider whether an estimate is really needed)

3 estimation challenges research has no solution for:

* How to Accurately Estimate the Effort of Mega-large, Complicated Software Projects (less relevant experience and data available, cf. #5 above; large projects also involve complex interactions with stakeholders and organizational changes)
* How to Measure Software Size and Complexity for Accurate Estimation
* How to Measure and Predict Productivity (large differences among developers and teams are only discernible through trial; we don't even know if there are economies or diseconomies of scale for software production!)

Practices likely to improve estimation, quote:

* Develop and use simple estimation models tailored to local contexts in combination with expert estimation.
* Use historical estimation error to set minimum-maximum effort intervals.
* Avoid exposure to misleading and irrelevant estimation information.
* Use checklists tailored to own organization.
* Use structured, group-based estimation processes where independence of estimates are assured.
* Avoid early estimates based on highly incomplete information. |
[link]
Thom-Santelli et al. present a qualitative study of 15 authors' use of the {{maintained}} template. The authors searched through a full Wikipedia dump to find the approximately 1,100 pages that used the template on article talk pages to explain to other users that the article is maintained. They then contacted a subsample of 15 editors (5 women and 10 men) and engaged them in approximately 1-hour unstructured interviews to help understand their use of the template and the degree of "territoriality" (if any) that the editors felt over the articles in question. Their basic finding is that there is indeed territoriality on Wikipedia, which the authors attempt to connect to a sense of ownership. They argue that this territoriality can be valuable (e.g., in retaining expertise), but suggest that it might also have the negative effect of deterring new member participation. |
[link]
This paper gives advice for using micro-task markets for user studies, to get quick (and yet reliable) feedback from users. The way a task is defined makes a significant difference in the results, and good design can reduce the number of users "gaming the system". They conclude that micro-task markets may be useful for user studies that combine objective and subjective information gathering, and provide specific advice (below). This paper defines a "micro-task market" as one where short tasks (which take minutes or seconds) are entered into a shared system, where users select and complete them for some reward (generally money or reputation). The advantages of micro-task markets for user studies are that they are global and diverse, with very quick turnaround times (responses within 24-48 hours) at inexpensive rates (e.g. 5 cents per rating). The disadvantages are the lack of demographic information, lack of verifiable credentials, and limited experimenter contact. |
[link]
The goal of this paper is to apply background from argumentation theory, particularly to "identify fallacies made by dialogue participants" on the WWW. The Semantic Web is seen as a way to do this. The paper begins by presenting a definition of argumentation, reviewing Walton's and Toulmin's analyses of arguments, and showing basic argument diagrams. It also discusses the importance, for computer modelling, of classifications of arguments, citing Walton's 1996 book, Argumentation Schemes for Presumptive Reasoning, as the most influential of these. Then, it presents the notion of critical questions and criteria for argument acceptability, which form part of the core basis for this work. Finally, dialogue games are mentioned in passing. |
[link]
The core question of this Master's thesis, as the author puts it, is: “Can we learn to identify persuasion as characterized by Cialdini’s model using traditional machine learning techniques?” The author gives a qualified "yes"; improvement is needed for real-world results, but the methods function. The corpus used was developed in his colleague's Master's thesis, Persuasion detection in conversation. |
[link]
This paper models dialogue acts in Twitter conversations and presents a corpus of 1.3 million conversations. They provide a state diagram showing the likelihood of transitions between dialogue acts. ![](http://i.imgur.com/eTTVcXO.png)

### Methodology

Unsupervised LDA modelling of Twitter conversations, evaluated by held-out test conversations. Uses a conversation+topic model (segmenting post words into those that involve the topic of conversation, the dialogue act, or something else). Trained on 10,000 randomly sampled conversations (conversation length 3-6) from the corpus.

### Corpus

1.3 million conversations, with each conversation containing between 2 and 243 posts. In summer 2009, they selected a random sample of Twitter users by gathering 20 randomly selected posts per minute, then queried to get all their posts. Followed any replies to collect conversations. Removed non-English conversations and non-reply posts. |
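The authors' conversation+topic model is custom, so the following is only a generic unsupervised LDA sketch (gensim) over toy posts to illustrate the general setup; the data and parameters are placeholders.

```python
# Not the authors' model: their conversation+topic model separates dialogue-act words from
# topic words. This is only a generic LDA sketch (gensim) over toy posts to show the
# unsupervised setup; the data and num_topics are placeholders.
from gensim import corpora, models

posts = [
    "anyone going to the show tonight",
    "yes i will be there see you soon",
    "thanks for the link that article was great",
    "no problem glad you liked it",
]
texts = [p.split() for p in posts]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```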
[link]
This study finds that poll data on consumer confidence and presidential job approval can be approximated with straightforward sentiment analysis of Twitter data. The idea of aggregate sentiment is particularly interesting -- the errors are treated as noise which is expected to cancel itself out in aggregate. They point to Hopkins and King (2010) to show that standard text analysis techniques are inappropriate for assessing aggregate populations. Further, they provide some evidence from their own experiment: they mention filtering out "will", which is treated as positive sentiment despite usually occurring as a verb, since they do not do POS tagging. However, they mention one caution: errors could potentially correlate with information of interest, such as if certain demographic groups tweet in ways that are harder to analyze. |
[link]
Hill et al. introduce the notion of computational wear in this paper from the awareness literature in computer-supported cooperative work. The authors present an example project which shows wear in terms of both editing and reading in a modified version of an Emacs-like text editor, suggest that the concept may be broadly relevant in a variety of other contexts, and show designs for menus and other systems that display wear. The authors map wear of documents onto scrollbars of the Zmacs text editor (essentially an Emacs clone for the Symbolics Lisp machine) with what they call attribute-mapped scrollbars. These scrollbars essentially emphasize different parts of the document with a sort of histogram of edits based on how often that particular portion of the document has been edited (in the edit wear example) or read (in the read wear example). This provides an easy way of showing "hot spots" in ways that parallel how one can easily find the dog-eared or yellowed pages in a book or the stained recipe card, which correspond to physical, rather than computational, wear. Theoretically, the paper frames the design in terms of Schön's concept of reflective work and argues that wear provides a means of assisting in problem setting by drawing attention to core areas that have received previous attention. Before concluding, the authors generalize their suggestion from the specific example of edit and read wear to menus and spreadsheets and suggest that wear is a generally useful concept in a variety of cooperative environments. |
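A toy illustration of the edit wear idea (not Hill et al.'s Zmacs implementation): accumulate per-line edit counts and render them as a text "scrollbar" histogram that makes hot spots visible.

```python
# Toy illustration of edit wear (not Hill et al.'s Zmacs code): count how often each document
# line has been edited and render the counts as a text "scrollbar" histogram of hot spots.
from collections import Counter

edit_log = [3, 3, 3, 7, 7, 12, 3, 7]        # hypothetical line numbers touched by edits
wear = Counter(edit_log)

def wear_scrollbar(wear_counts, n_lines=15, width=10):
    """One row per document line; bar length is proportional to how often it was edited."""
    peak = max(wear_counts.values(), default=1)
    rows = []
    for line in range(1, n_lines + 1):
        bar = "#" * round(width * wear_counts.get(line, 0) / peak)
        rows.append(f"{line:3d} |{bar}")
    return "\n".join(rows)

print(wear_scrollbar(wear))
```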
[link]
Summary: Authors describe the context of Haskell's creation (many lazy purely functional research languages, desire for a common language in the genre), key branching factors (e.g., the decision of the Miranda developers not to allow their language to be the base of the common language; the adoption of the then-new features of type classes and monads), and a number of the design decisions made, and the tools, implementations, and applications now available. Theoretical and practical relevance: Haskell seems to have had inauspicious beginnings for a widely used general-purpose programming language -- design by a committee of academics -- but through some combination of purity (the authors argue the decision to be lazy made it easier to stay purely functional), openness (of the design process, specification, libraries, and language implementations), and luck, the language seems to have remained interesting for researchers and become practical for industry, and has also influenced feature development in many other languages.