Boris Borcic
in the beginning, animals voted with their genitals, and plants delegated voting to animals

Boris's posts

Post has shared content

Code and data for paper "Deep Photo Style Transfer"


Post has shared content
This article was written by Matthew Rubashkin. With a background in optical physics and biomedical research, Matthew has a broad range of experiences in software development, database engineering, and data analytics.

Post has shared content
"Scientists have identified the genes of a deadly fungus that is decimating salamander and newt populations in Northern Europe.
Batrachochytrium salamandrivorans (Bsal), dubbed the 'amphibian plague', is a highly infectious chytrid fungus that affects many species of salamanders and newts, literally digesting their skin, which quickly leads to death. Since its discovery in 2013, very little has been found about how the fungus causes disease.
Now, researchers from Imperial College London, Ghent University, and the Broad Institute, have sequenced and identified the genes responsible for Bsal from an infected salamander. The authors say the findings, published last week in the journal Nature Communications, could ultimately help conservation efforts and provide drug targets in the future to help curb the disease.
Dr Rhys Farrer, co-author from Imperial's School of Public Health, said: "Until now, no one knew the exact mechanisms Bsal uses to cause disease. Our findings mean that policy makers and conservationists are now equipped with more knowledge on how best to curb this amphibian plague."
Dr Rhys Farrer and co-author Professor An Martel from Ghent University sequenced the genes from a salamander that had died from Bsal, and compared the genes with those of Batrachochytrium dendrobatidis (Bd), a closely related deadly fungus that affects not just salamanders and newts, but all amphibians. Bd has caused more extinction events than any other infectious disease known to science".

Post has shared content
The dynamics of disinformation, propaganda, "fake news," and conspiracy theories can be studied by watching how they spread. This is a summary of a scientific study of exactly that, written by one of its authors (who links the full paper), and it's chock-full of fascinating results. They focused on responses to mass shootings in particular, as these are a favorite target of conspiracy theories. Conspiracy stories, it turns out, spread with a very different pattern than other types of story: botnets, quasi-replication of stories between sites, and similar patterns of signal manipulation are key to them. This (as well as other interesting commonalities between the sites that propagate these stories) suggests that there is something systematic and intentional behind these theories: they aren't emerging organically, they're being curated.

Post has shared content
"Meta-assessment of bias in science", Fanelli et al 2017:

"Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors, in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides one being routinely cautious that published small, highly-cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.

The bias patterns most commonly discussed in the literature,
which are the focus of our study, include the following:
1. Small-study effects: Studies that are smaller (of lower precision) might report effect sizes of larger magnitude. This phenomenon could be due to selective reporting of results or to genuine heterogeneity in study design that results in larger effects being detected by smaller studies (17).
2. Gray literature bias: Studies might be less likely to be published if they yielded smaller and/or statistically nonsignificant effects and might be therefore only available in PhD theses, conference proceedings, books, personal communications, and other forms of "gray" literature (1).
3. Decline effect: The earliest studies to report an effect might overestimate its magnitude relative to later studies, due to a decreasing field-specific publication bias over time or to differences in study design between earlier and later studies (1, 18). Early-extreme: An alternative scenario to the decline effect might see earlier studies reporting extreme effects in any direction, because extreme and controversial findings have an early window of opportunity for publication (19).
4. Citation bias: The number of citations received by a study might be correlated to the magnitude of effects reported (20).
5. US effect: Publications from authors working in the United States might overestimate effect sizes, a difference that could be due to multiple sociological factors (14).
6. Industry bias: Industry sponsorship may affect the direction and magnitude of effects reported by biomedical studies (21). We generalized this hypothesis to nonbiomedical fields by predicting that studies with coauthors affiliated to private companies might be at greater risk of bias.

The prevalence of these phenomena across multiple meta-analyses can be analyzed with multilevel weighted regression analysis (14) or, more straightforwardly, by conducting a second-order meta-analysis on regression estimates obtained within each meta-analysis (32). Bias patterns and risk factors can thus be assessed across multiple topics within a discipline, across disciplines or larger scientific domains (social, biological, and physical sciences), and across all of science.
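The second, "more straightforward" approach can be sketched in code. The following is a minimal illustration with simulated data, not the paper's actual code or dataset: within each meta-analysis, effect sizes are regressed on their standard errors (an Egger-style test, where a positive slope indicates small-study effects), and the resulting slopes are then pooled across meta-analyses with an inverse-variance weighted average, i.e. a second-order meta-analysis of the regression estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def small_study_slope(effects, ses):
    """Weighted meta-regression of effect size on standard error within
    one meta-analysis. Returns the slope and its standard error; a
    positive slope means low-precision studies report larger effects."""
    X = np.column_stack([np.ones_like(ses), ses])
    w = 1.0 / ses**2                      # inverse-variance weights
    W = np.diag(w)
    cov = np.linalg.inv(X.T @ W @ X)      # covariance of the estimates
    beta = cov @ X.T @ W @ effects
    return beta[1], np.sqrt(cov[1, 1])

# Simulate 50 meta-analyses of 20 studies each, with a built-in
# small-study effect (true slope of effect on standard error = 1.0).
slopes, slope_ses = [], []
for _ in range(50):
    ses = rng.uniform(0.1, 0.5, size=20)
    effects = 0.2 + 1.0 * ses + rng.normal(0, ses)
    b, se = small_study_slope(effects, ses)
    slopes.append(b)
    slope_ses.append(se)

# Second-order meta-analysis: fixed-effect pooling of the slopes.
w = 1.0 / np.array(slope_ses) ** 2
pooled = np.sum(w * np.array(slopes)) / np.sum(w)
print(f"pooled small-study slope: {pooled:.2f}")  # close to the simulated 1.0
```

The paper's actual analysis is richer (random effects, adjustment covariates, multilevel models), but this is the core shape of a second-order meta-analysis.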
To test these hypotheses, we searched for meta-analyses in each of the 22 mutually exclusive disciplinary categories used by the Essential Science Indicators database, a bibliometric tool that covers all areas of science and was used in previous large-scale studies of bias (5, 11, 33). These searches yielded an initial list of over 116,000 potentially relevant titles, which through successive phases of screening and exclusion yielded a final sample of 3,042 usable meta-analyses (Fig. S1). Of these, 1,910 meta-analyses used effect-size metrics that could all be converted to log-odds ratio (n = 33,355 nonduplicated primary data points).

Bias Patterns. Bias patterns varied substantially in magnitude as well as direction across meta-analyses, and their distribution usually included several extreme values (Fig. S2; full numerical results in Dataset S1). Second-order meta-analysis of these regression estimates yielded highly statistically significant support for the presence of small-study effects, gray literature bias, and citation bias (Fig. 1 A and B). These patterns were consistently observed in all secondary and robustness tests, which repeated all analyses not adjusting for study precision, standardizing metaregression estimates and not coining the meta-analyses or coining them with different thresholds (see Methods for details and all numerical results in Dataset S2).
The decline effect, measured as a linear association between year of study publication and reported effect size, was not statistically significant in our main analysis (Fig. 1B), but was highly significant in all robustness tests. Moreover, secondary analyses conducted with the multilevel regression approach suggest that most or all of this effect might actually consist of a "first-year" effect, in which the decline is not linear and just the very earliest studies are likely to overestimate findings (SI Multilevel Meta-Regression Analysis, Multilevel Analyses, Secondary Tests of Early Extremes, Proteus Phenomenon and Decline Effect).
The early-extreme effect was, in most robustness tests, marginally significant in the opposite direction to what was predicted, but was measured to high statistical significance in the predicted (i.e., negative) direction when not adjusted for small-study effects (Dataset S2). In other words, it appears to be true that earlier studies may report extreme effects in either direction, but this effect is mainly or solely due to the lower precision of earlier studies.
The US effect exhibited associations in the predicted direction and was marginally significant in our main analyses (Fig. 1B) and was significant in some of the robustness tests, particularly when meta-analysis coining was done more conservatively (Dataset S2; see Methods for further details).
Industry bias was absent in our main analyses (Fig. 1B) but was statistically significant when meta-analyses were coined more conservatively (Dataset S2).
Standardizing these various biases to estimate their relative importance is not straightforward, but results using different methods suggested that small-study effects are by far the most important source of potential bias in the literature. Second-order meta-analyses of standardized meta-regression estimates, for example, yield similar results to those in Fig. 1 (Dataset S2). Calculation of pseudo-R² in multilevel regression suggests that small-study effects account for around 27% of the variance of primary outcomes, whereas gray literature bias, citation bias, decline effect, industry sponsorship, and US effect, each tested as an individual predictor and not adjusted for study precision, account for only 1.2%, 0.5%, 0.4%, 0.2%, and 0.04% of the variance, respectively (see SI Multilevel Meta-Regression Analysis, Multilevel Analyses, Relative Strength of Biases for further details).

The career level of authors, measured as the number of years in activity since the first publication in the Web of Science, was overall negatively associated with reported effect size, although the association was statistically significant and robust only for last authors (Fig. 1F). This finding is consistent with the hypothesis that early-career researchers would be at greater risk of reporting overestimated effects (Table 1).
Gender was inconsistently associated with reported effect size: In most robustness tests, female authors exhibited a tendency to report smaller (i.e., more conservative) effect sizes (e.g., Fig. 1F), but the only statistically significant effect detected among all robustness tests suggested the opposite, i.e., that female first authors are more likely to overestimate effects (Dataset S2).
Scientists who had one or more papers retracted were significantly more likely to report overestimated effect sizes, albeit solely in the case of first authors (Fig. 1F). This result, consistently observed across most robustness tests (Dataset S2), offers partial support to the individual integrity hypothesis (Table 1). The between-meta-analysis heterogeneity measured for all bias patterns and risk factors was high (Fig. 1, Fig. S2, and Dataset S2), suggesting that biases are strongly dependent on contingent characteristics of each meta-analysis. The associations most consistently observed, estimated as the value of between-meta-analysis variance divided by summary effect observed, were, in decreasing order of consistency, citation bias, small-study effects, gray literature bias, and the effect of a retracted first author (Fig. 1, bottom numbers).
Differences Between Disciplines and Domains. Part of the heterogeneity observed across meta-analyses may be accounted for at the level of discipline (Fig. S3) or domain (Fig. 2 and Fig. S4), as evidenced by the lower levels of heterogeneity and higher levels of consistency observed within some disciplines and domains. The social sciences, in particular, exhibited effects of equal or larger magnitude than the biological and the physical sciences for most of the biases (Fig. 2) and some of the risk factors (Fig. S4)."

Post has shared content
Desargues graph in 5 dimensions

This image by +Greg Egan shows various views of a 5-dimensional cube. Some vertices and edges are drawn in gray, while others are emphasized, showing the Desargues graph.

The vertices of a 5d cube can be seen as 5-bit strings, like this:

01101

There are 32 of them. The blue dots in Egan's image are strings with two 1's in them. The red dots are strings with three 1's. An edge lies in the Desargues graph - so Egan draws it as a dark line - if it goes from a bit string with two 1's to a bit string with all those 1's and one more.

The Desargues graph is beautifully symmetrical on its own, but it seems even more beautiful to me when it's sitting inside the 5d cube in this way.
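The construction is easy to check in code. Here is a minimal sketch (plain Python, not from Egan's post) that builds the graph from the 5-bit strings, treating them as the integers 0 to 31:

```python
# Vertices of the 5-cube: the 32 five-bit strings, i.e. the integers 0..31.
# Blue dots: strings with exactly two 1's; red dots: exactly three 1's.
blue = [v for v in range(32) if bin(v).count("1") == 2]
red = [v for v in range(32) if bin(v).count("1") == 3]

# A dark (Desargues) edge joins a two-1's string to a three-1's string
# obtained by turning on one extra bit - that is, the blue string's 1's
# are a subset of the red string's 1's.
edges = [(b, r) for b in blue for r in red if b & r == b]

print(len(blue), len(red), len(edges))  # 10 10 30
```

The counts come out right for the Desargues graph: 20 vertices and 30 edges, with every vertex on exactly three dark lines.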

Egan does an even fancier trick with the Desargues graph in another post. Instead of 5-bit strings, he imagines pizzas with 5 possible toppings:

Desargues delivers drones

A fleet of twenty drones is sent out to deliver pizzas, with every possible choice of either two or three toppings from a menu of five.

Whenever two drones are carrying pizzas that differ by the addition of one extra topping, they must fly at a fixed distance from each other, and the precise distance depends on the particular topping that you would need to add to the two-topping pizza to make its cargo identical to its three-topping neighbour.

Can we have these drones flying loops around each other, without any of them colliding, even if they all fly at the same height?


To see an animated gif of how it works, go here:

