Alibaba’s AI system will help farmers keep track of their pigs

China accounts for roughly half of the world’s pig population. The country’s farms currently hold about 700 million of these animals, which makes monitoring and caring for them, and spotting sick or weak individuals in time, increasingly difficult. Pig farmers are getting help from the technology company Alibaba, which has developed an AI algorithm capable of monitoring the animals’ health and mood on its own, The Verge reports.
Machine vision will automate the head count using ID tags on the animals’ ears, while a dedicated sound-recognition system will identify sick animals by their characteristic sounds and alert veterinary staff in time to prevent an epidemic.
The IT giant Alibaba is currently finalizing agreements with Dekon Group, the country’s largest pork supplier, and Tequ Group, a company that specializes in animal feed. Today farmers count their livestock using RFID tags and radio-frequency identification systems, which is no longer effective for herds of ten million head: at that scale it is hard even to estimate how many piglets have been born, let alone tag each one. Alibaba’s technology instead counts pigs by IDs marked on their backs. For now the system will count them only this way, but the developers later plan to offer a more comprehensive solution, for example one that uses infrared sensors to assess the animals’ health and well-being.
The initial deal is worth around five million dollars, and mass deployment is planned for the end of 2019.

The Future of Blockchain Needs Economics
Crypto-economics, Vitalik Buterin once proposed at a talk, is the use of cryptography to design systems with desirable properties; to align cryptographic guarantees with agent incentives.
While Buterin was talking about block reward systems in which consensus protocols are robust to attack, we can generalize this thinking to a mapping of on-chain incentives with desirable properties in the off-chain “real world.”
Buterin’s vision for blockchain technology, from a quote pulled in 2015, was “a ‘Lego Mindstorms’ for building economic and social institutions.” In this vein, we can improve the efficiency of processes by building new systems to incentivize “good” behavior.
The ultimate economic benefits of crypto may come not from the marginal advantages that cryptographic guarantees provide, but from how they transform the way we think about project funding:

With smart contracts, you could make sure people do things without always paying for lawyers.
With protocols, startups don’t need to charge service fees to make money.
With tokens, you can convert more forms of capital into cash.

Here are some mechanism design concepts that could become reality, aided by the use of blockchain-enabled smart contracts.
Deferred and decentralized tokenization schemes
Tokenization schemes align incentives over time periods, for example:

Selling equity stakes in art ensures artists get rewarded for future appreciation
Marketplaces for depreciating licenses could save the sharing economy

Hito Steyerl, Liquidity Inc.
Cryptographic platforms have made minting public tokens much easier. It now costs essentially less than $5 to mint a token that can be publicly audited and traded on secondary markets. This points toward an economy in which even abstract goods can be re-allocated as efficiently as commodities.
The most straightforward example of this is the art market. Because artists profit only from the first sale of a work, they are incentivized to spend a distracting amount of time negotiating relationships with dealers and gallerists to establish a favorable valuation before the sale. Artists who become renowned later in life, or after death, receive nothing from the future appreciation of their artwork. While it would be infeasible to convince budding artists to hire lawyers and legally track every point of sale, a smart contract would be lightweight to both implement and enforce, given that existing blockchain platforms already integrate with secondary markets. Moreover, the metadata in a smart contract could be used to later verify the authenticity of the work in question.
This idea has already gained some traction in the art world. Artist-critic Hito Steyerl has analogized the creation of value in cryptocurrency through speculative adoption and networks with that among artists: “Art is a networked, decentralized, widespread system of value. It gains stability because it calibrates credit or disgrace across competing institutions or cliques.”
Another study found that if New York School artists Jasper Johns and Robert Rauschenberg had securitized their artwork and retained a 10% stake, they would have beaten the S&P 500 by 3 to 900 times (Whitaker and Kraussl, 2018). Most hedge funds can’t even beat the S&P 500 after fees, period.
While the tokenization of art may run afoul of being classified as a security under the Howey test in the US, the smart contract could be defined to reflect specific rights, such as the right to sell prints, hold exhibitions, and so on. The smart contract could also require a service fee proportional to the new valuation at each resale.
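To make the mechanics concrete, here is a minimal sketch in plain Python rather than an actual on-chain contract; the 10% royalty rate, the dictionary layout, and the names are assumptions for illustration. Each resale routes a fee proportional to the new valuation to the artist and appends to the work’s provenance record.

```python
# Illustrative only: a plain-Python stand-in for the resale logic a smart contract
# could enforce. The 10% royalty rate and the data layout are assumptions.

def resell(artwork, new_owner, new_valuation, royalty_rate=0.10):
    """Transfer the work, credit the artist a royalty on the new valuation, log provenance."""
    royalty = royalty_rate * new_valuation
    artwork["artist_balance"] += royalty                       # fee owed to the original artist
    artwork["provenance"].append((new_owner, new_valuation))   # doubles as authenticity metadata
    artwork["owner"] = new_owner
    return royalty

piece = {"owner": "gallery", "artist_balance": 0.0, "provenance": [("gallery", 10_000)]}
print(resell(piece, "collector", 50_000))  # the artist is credited 5,000.0 on this resale
```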
There are also other tokenization schemes to consider. For instance, works could be governed by a “depreciating license” (Weyl and Zhang, 2017), where the owner of the work would periodically announce a valuation at which they commit to sell their license, and pay a percentage of this valuation to the artist as a license fee to continue owning the good.
This could easily extend to any industry where current ownership and licensing schemes impede efficient reallocation of resources, including public resources which require consistent investment.
This could also have implications for the sharing economy, where companies could allocate assets by establishing marketplaces for depreciating licenses over the assets they own. For instance, Uber could use depreciating licenses with tokens to implement a long-term decentralized carsharing program among partnered drivers.
In the current model of short-term licensing, users have no incentive to maintain the common value of the asset. When the company ran a subprime car-leasing program called Xchange to expand its service fleet, it overshot its inventory loss projections by a factor of 18. Part of this was due to an unusually generous “unlimited miles with routine maintenance” clause, which incentivized drivers to “work long days and return vehicles with way too many miles, killing the resale value.”
In a decentralized token system, drivers can buy a depreciating license for the car (analogous to a “deposit/down payment”), pay a fee proportional to their self-assessed valuation (analogous to a “rental fee”), then resell the license when they no longer use it enough to justify that fee. In contrast to subprime loans, drivers would not only have an incentive to keep the car in good condition for a higher resale value, but also gain a sense of communal ownership.
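As a toy model of that mechanism (made-up numbers, and not a description of any actual Uber program): the license holder announces a self-assessed valuation, pays a periodic fee proportional to it, and anyone may take the license over at that announced price, which keeps the self-assessments honest.

```python
# Sketch of a depreciating (Harberger-style) license; all parameters are illustrative.

class DepreciatingLicense:
    def __init__(self, holder, valuation, fee_rate=0.05):
        self.holder = holder        # current driver
        self.valuation = valuation  # self-assessed price at which they commit to sell
        self.fee_rate = fee_rate    # fraction of the valuation paid each period

    def periodic_fee(self):
        # the "rental fee" paid to the asset owner while the license is held
        return self.fee_rate * self.valuation

    def transfer(self, new_holder, new_valuation):
        # anyone may buy the license at the announced valuation and re-price it
        price_paid = self.valuation
        self.holder, self.valuation = new_holder, new_valuation
        return price_paid

lic = DepreciatingLicense("driver_a", valuation=8_000)
print(lic.periodic_fee())               # 400.0 per period while driver_a keeps the car
print(lic.transfer("driver_b", 6_500))  # driver_a receives 8,000; driver_b re-prices the license
```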
If necessary, Uber could still moderate bad actors by voiding tokens claimed by fraud, and also incorporate other incentives on top of this program, including gamification, such as token rewards for drivers who complete some milestone number of rides.

How to Overcome Challenges at Work
I recently had a difficult conversation with a friend who was going through a shitty situation at work. We talked through a few options and action steps to take, and I left her with the message that “this too shall pass.” Sure enough, I got a text from her yesterday saying she had made it through the situation and things were back to normal.
While I was relieved to hear the situation was resolved, I couldn’t help but think about the conundrum my friend faced: she was just as stressed about asking for help as she was about the situation itself. It was frustrating to see her so afraid and concerned about asking for help with a stressful or difficult situation she was encountering in the workplace. While many companies are beginning to champion work-life balance initiatives and strive for more inclusive work environments, it still seems taboo to admit that you’re stressed, struggling, or challenged and need help. The default is to pretend that everything is fine even when it’s not.

One of the realities of life and work is that there are going to be tough and shitty times; not everything is always going to be rainbows and butterflies. Instead of hiding from these moments or treating them as taboo topics, I think it’s important to acknowledge and discuss them.
Reflecting further on my friend’s experience, I started thinking about challenging situations I’ve had at work and how I’ve managed to work through them. I’ve certainly had my fair share of struggles, challenges, and stumbles, but through those stumbles and successes I’ve learned some tactics for handling difficult and stressful situations.
Go into the eye of the storm
When you start to experience a difficulty or challenge that makes you feel uncomfortable, it can be scary and cause anxiety or concern. This is a classic case of “fight or flight.” While it might seem comforting to shy away from the task at hand, if you want to continue to learn and grow, it is essential to go head-first into the eye of the storm despite how scary or challenging it may seem. So why should you dive in head first?
First, it’s hard to fully run away from problems at work, especially if they pertain to your job responsibilities. While you can procrastinate or perhaps push them off to the side or onto someone else, at some point they will catch up to you and impact you in a negative way.
Second, seeking comfort is great and feels good, but it doesn’t necessarily help you continue to learn and grow your skill set. The best way to see what you are made of is to put yourself in uncomfortable positions that put your skills to the test. It may mean short-term pain (and hopefully it’s not too severe), but over the longer term it will help you gain additional skills and experiences.
Focus on progress each day
Think back to an area in which you excel. Did you become an expert overnight? It most likely took a lot of time, trial and error, a lot of effort, and plenty of discipline and persistence. The same will be true of whatever challenge or situation you are dealing with at work. You won’t get through it overnight and you won’t master it in a day, so once you’ve accepted that you are going to go into the eye of the storm and that things won’t change overnight, you’ve also accepted that you’re going to need to work at this for a while.
This may seem challenging and daunting. Instead of thinking about the challenge as a whole, focus not on solving it all at once, but on making progress toward your goal each and every day. If you accept that things won’t get better overnight, but that there are things you can do to improve each day, the goal becomes much smaller and more realistic, one that seems attainable.
Research tells us that one of the criteria for successfully tackling projects and goals is the belief that we actually have what it takes to achieve them. So make the goal seem achievable instead of insurmountable by breaking it into smaller chunks.
Identify your stress relievers
When you’re going through a challenging experience, it can be all-consuming. Even when you’re not working, it can easily dominate your thoughts and feelings, which can hinder your overall outlook and add stress and worry to your life. One thing you can do right away is identify the stress relievers in your life and find ways to incorporate them as much as possible while you’re going through the experience.
For some, it means going for a run, practicing yoga, or listening to music. For others, it means journaling or spending time with friends. Identifying these things and incorporating them into our week can help us manage the stress and make sure that whatever we are going through doesn’t consume our entire life.
Phone a friend
One of the best ways we can help ourselves is by relying on our support network. For example, during a recent challenge at work, the first thing I did was call and email all the friends I thought had experienced something similar and ask them for their advice, tips, and tricks. It’s important to remember that while you may feel alone, the odds are that someone has been through what you’re going through and has felt the way you feel. If they were able to get through it, so can you. Sure enough, after emailing about 5–6 people, I got responses ranging from “I am going through that right now!” to “I know exactly how you feel.”
In addition to sharing their tips with me, a few of them began checking in with me regularly as I went through the difficult stretch, just to make sure I was doing okay and moving forward. You may not have all the answers, and that’s okay. But if you dig deep into your resources, you can probably find some answers that will help you out.
Focus on what’s within your control
Some things will be within your control, and in most cases you can take action to address them. Other things are outside of your control and will happen regardless of what you do or don’t do. Focusing only on what you can control sounds great in theory, but it is harder to do in practice.
Spending time on things you cannot control drains energy and resources you should be using on things you can control; it’s exhausting and stressful as well. Identifying and then focusing on what you can control also helps you prioritize where to spend your time and channels your energy toward the most important things.
Practice gratitude
Research shows that people who practice gratitude tend to experience more positive and happy emotions. These emotions can be especially helpful during stressful or difficult times.
I kept a gratitude journal that I wrote in just to remind myself of the things I was thankful for. If you can master your own mind, learning how to make the best of any situation, how to be yourself when you don’t feel like yourself, and how to find yourself when you don’t feel like you’re on the right path, you will come back to this skill again and again throughout your life, in every bad or hard time, and it will make you not just a successful person but an extremely happy one.
Help someone
One way to break away from your stress or concerns is to channel that energy into helping other people. Find a way to help someone at some point during the day. It can be something simple: call a friend and give them advice on something that’s on their mind, forward a relevant idea or website to a colleague, or hold the door open for everyone you see. I find that when you focus your energy on helping someone else, it A) makes you feel like you’re contributing to someone else, which makes you feel positive, and B) takes your mind off whatever stress or concerns are on your mind.
Debrief when it’s done
One thing you can do when it’s over is to ask yourself, “What just happened?” Before you move on from this experience, take the time to ask yourself what you did, what you learned, and what you can take from this moving forward.
Doing a mini debrief or reflection will help you understand what you just went through. It will also shed light on the skills you gained and the lessons you learned. Yes, sometimes the experience can be so bad that all you want to do is forget about it, but I’ve found that a simple reflection or debrief exercise can be helpful and cathartic, and in some cases it can help you find the positives in a shitty situation.
Stressful work situations will never go away, so if you plan on working for many more years, it’s worthwhile to spend time thinking about how to best navigate them when they arise. The good news is that there are things you can do to navigate these tough situations and thrive in them while maintaining some semblance of balance in your life.
We can’t always control what happens to us, but we do have control over how we respond, and I’m confident that if you use some of these tips the next time you’re in a stressful situation at work, you’ll be on your way to overcoming whatever comes your way.
This post originally appeared on my blog

The first aviation accident involving a multicopter
The US Federal Aviation Administration has opened an investigation into an incident that may be the first aviation accident involving a multicopter. On February 14, a Robinson R22 helicopter went down on the southern tip of Daniel Island in Charleston, South Carolina, USA. According to the pilot, the crash was caused by a white drone, presumably a DJI Phantom, that appeared in the helicopter’s path.

Some time ago it became apparent that the skies are getting crowded for aircraft and drones alike. Reports of dangerous incidents involving drones have been appearing more and more often, as described in the 2015 GT article «Дрон в законе. Летай, но помни».
While the FAA prohibits using drones for commercial purposes and tells hobbyists not to fly drones above 400 feet (roughly 120 meters), the wording in Russian legislation is much vaguer.

Thus, despite the threat of a fine, the situation clearly exposed a hole in the legislation.
It took Russian lawmakers two years to settle the question: in October and December 2017, two owners of DJI Phantom 4 quadcopters were fined 3,000 rubles each («Запустил коптер — получил штраф») for the “unauthorized launch and piloting of an unmanned aerial vehicle.”
By 2017 the scale of the problem in the US had reached the point where the US Air Force sought the right to shoot down civilian drones in order to protect expensive military equipment from damage or destruction.
The FAA reports a growing number of dangerous incidents and flight-rule violations by “drone pilots.” At the end of 2017 the agency summed up the year: about 250 incidents occur per month on average, 50% more than in 2016.

In most cases the owner of the multicopter cannot be identified. One exception was an incident in New York on September 21, 2017, when a drone damaged a military helicopter; the owner was traced through the serial numbers of the multicopter’s parts.
In February 2018, video from a drone that flew within a few meters of an airliner near Las Vegas sparked outrage online and prompted three influential American aviation lobbying groups to demand tougher drone rules from Congress.

Until recently, such incidents had been resolved without serious consequences, but this time an aircraft actually crashed. The incident has been covered by American media, including Bloomberg and the Post and Courier.
According to a flight instructor with Holy City Helicopters, on February 14 he was practicing maneuvers with a student in a Robinson R22 over undeveloped land when, at around 2 p.m., a white DJI Phantom suddenly appeared in front of them. Trying to avoid a collision with the drone, the instructor took over the controls, but the helicopter’s tail clipped a small tree or branch. The helicopter lost control, fell, and tipped over onto its side. The crew was unhurt; the helicopter was damaged.

The Circuitry in Our Cells

If you’re reading this, you’re probably biological.
As you sit quietly, trillions of cells in your body are performing a frenetic dance of biochemical computation that makes your existence possible.
Consider this: You were once a single cell – a fertilized egg. This single cell was equipped with a genetic program capable of assembling atomically-precise molecular machines, replicating and distributing copies of its genetic program through cell division, and self-organizing multicellular structures into a human shape with specialized cell types, tissues, and organs.
And now here you are, reading this: Your eyes scanning these words while your brain interprets them. You built yourself from scratch.

Illustration of the molecular milieu inside a white blood cell. Cells are complex biochemical entities capable of sophisticated computation. (David Goodsell)

Biology computes with genetic circuits
The remarkable ability of biology to create patterns, perform specialized tasks, and adapt to changing environments is made possible with genetic circuits – networks of interacting genes that perform computation.
Genetic circuits appear literally everywhere in nature. In a lone bacterium as it “tumbles and runs” toward food. In a California redwood as it constructs itself into the sky. And in your immune system as it wards off cancer and infection. In fact, every single thing that civilization sources from biology – food, materials, drugs – was built by nature using genetic circuits to exert fine spatiotemporal control over biochemistry.
Yet despite their ubiquity in nature, genetic circuits are not harnessed in most biotechnology today. Instead, the state-of-the-art is constant overproduction of a few genes, whether they be enzymes, pesticides, or peptides.
Future biotechnologies will seem like science fiction: Intelligent therapeutics programmed to sense disease in the human body and trigger a therapeutic response. Living materials that can heal and react to their surroundings. Smart plants that can modify their physiology to withstand extreme cold or drought. To make these biotechnologies a reality, we need to be able to engineer genetic circuits.
From discovery to design
Natural genetic circuits have been studied for more than half a century. In 1961, the French scientists François Jacob and Jacques Monod published a landmark paper describing the genetic circuit in E. coli that senses and eats lactose [1]. Their description of how the appropriate metabolic genes are regulated (known as the lac operon model) was the first of its kind.

The lac operon genetic circuit. In response to glucose and lactose availability, E. coli regulates the expression of genes involved in lactose metabolism. (Wikimedia Commons)

A few months later, they predicted that similar regulatory processes could explain cell differentiation in multicellular organisms, like humans. Without mincing words, they wrote, “Moreover, it is obvious from the analysis of these mechanisms that their known elements could be connected into a wide variety of ‘circuits’, endowed with any desired degree of stability” [2]. For their work, they were awarded the Nobel Prize for Physiology or Medicine in 1965 along with André Lwoff.

François Jacob (front) and Jacques Monod (back) in their lab at the Institut Pasteur in 1971. (HO/Agence France-Presse)

In the years since that seminal discovery, scientists have further illuminated the myriad ways biological systems achieve behavior – from everyday tasks to exceptional feats. Indeed, entire books have been written on natural genetic circuits. (Check out the classic, “A Genetic Switch” by Mark Ptashne [3], which describes how the bacterial virus lambda phage regulates its life cycle.) The panoply of molecular mechanisms that power biological computation is vast and diverse, and reverse engineering natural genetic circuits is a field of intense ongoing research.
Armed with insights from nature, biological engineers began to design synthetic genetic circuits from the ground up. Back-to-back publications in Nature in 2000 are considered by many to be the first examples in the field (a genetic oscillator [4] and toggle switch [5]).
Over the past two decades, the ability to engineer increasingly complex and precise genetic circuits has advanced rapidly. Progress has resulted from several factors: thousands of sequenced genomes (and metagenomes) from which to “mine” useful genes, faster and cheaper DNA synthesis and sequencing, an improved understanding of cell biophysics to enable simulation, the ability to make targeted genomic modifications using CRISPR, and last but not least, years of compounded genetic engineering experience distilled into guiding design principles.
We are truly in the early days of a golden era for engineering biology.
Yet despite our progress so far, genetic circuit design has often been characterized by a manual and failure-prone process. Engineers often spend years creating a functional design through trial-and-error.
Automating genetic circuit design
How might this process of genetic circuit design be systematized and made more reliable? The semiconductor industry has completely transformed society, and its evolution offers a case study in transitioning from artisanal to automated.

Electronic circuits being manually laid out on Rubylith masking film, circa 1970. (Intel Corporation)

Early on, electronics engineers would painstakingly design and lay out circuit diagrams by hand. Then, the 1970s brought with it the first taste of automation: “place and route” techniques developed to position all of the electronic components and wires.
In the 1980s, the advent of electronic design automation (EDA) enabled programming languages that could be compiled down to patterns in silicon. One of the early publications describing this capability, “Introduction to VLSI Systems” by Carver Mead and Lynn Conway, is a holy text of EDA [6]. This breakthrough drove rapid increases in electronic chip complexity, and EDA became an entire industry in itself.
Today, chip designers use sophisticated EDA software that automates the entire workflow (design, simulation, and manufacturing). Software was truly brought to bear on electronic circuit design, and was one of the key enablers of Moore’s law.

Modern electronic design automation software, Virtuoso Layout Suite XL. (Cadence Design Systems)

Drawing inspiration from this evolution, we built a genetic circuit design automation platform, Cello (short for “Cell Logic”) [7]. We even used a common electronic hardware description language (Verilog) from electronics design to write our circuit specifications.
By combining concepts from digital logic synthesis, cell biophysics, and synthetic biology, we were able to build genetic circuits with up to 10 interacting genes. That’s state of the art for an engineered cell behavior in 2017, but it still pales in comparison to nature. For reference, the E. coli genome employs roughly 300 genes called transcription factors to control metabolism, survival, and replication. Human cells have about an order of magnitude more. (While this may seem paltry compared to the billions of transistors in a modern CPU, it’s an apples-and-oranges comparison. The point isn’t to compete with silicon – the point is to program biology with new functions.)
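For a flavor of the abstraction such design tools work with, here is a minimal Python sketch with illustrative parameters (not values from any published gate library): each gate is a repressor whose output promoter activity is a declining Hill function of its input, and gates compose by feeding one output into the next gate’s input.

```python
# Toy model of repressor-based genetic logic gates; parameters are made up for illustration.

def not_gate(x, ymin=0.05, ymax=4.0, K=0.3, n=2.5):
    """Repressor NOT gate: high input promoter activity -> low output promoter activity."""
    return ymin + (ymax - ymin) / (1.0 + (x / K) ** n)

def nor_gate(x1, x2, **params):
    """A NOR gate is the same repressor driven by the sum of two input promoters."""
    return not_gate(x1 + x2, **params)

# Compose gates by feeding one output promoter activity into the next gate's input:
a, b = 0.02, 3.0                        # "low" and "high" input activities
print(nor_gate(a, a), nor_gate(a, b))   # high when both inputs are low, low otherwise
```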
A tremendous amount of engineering lies ahead before we achieve genome-scale design with comparable complexity, elegance, and subtlety to what nature has evolved. We’re working on that. On the other hand, we have reached a point where genetic circuit engineering is reliable enough that we can program cell functions for previously impossible biotechnologies.

Overview of the original Cello platform [7]. A Verilog specification is automatically compiled to a DNA sequence that encodes a genetic circuit.

Introducing Asimov
In the same way that electronic circuits have become ubiquitous in the world – from cars to mobile phones to smart refrigerators – the same will become true for engineered genetic circuits. They will begin to appear in many aspects of daily life, from therapeutics to agriculture to consumer goods.
To help lay the foundation, I’m proud to announce the launch of Asimov with my co-founders Chris Voigt, Doug Densmore, and Raja Srinivas. Building upon our initial work on Cello, we’re developing a platform for professional genetic circuit design. We strive for Asimov to be the go-to resource for designing biological computation as biotechnology steadily becomes a fully-fledged engineering discipline.
I personally hope that this technology one day improves our ability to cure disease, empowers clean and sustainable manufacturing, and helps nourish a growing global population.
I look forward to keeping you updated on our progress.
Alec A.K. Nielsen
Founder & CEO, Asimov
[1] Jacob, F. & Monod, J. Genetic regulatory mechanisms in the synthesis of proteins. J. Mol. Biol. 3, 318–356 (1961).
[2] Monod, J. & Jacob, F. General Conclusions: Teleonomic Mechanisms in Cellular Metabolism, Growth, and Differentiation. Cold Spring Harb. Symp. Quant. Biol. 26, 389–401 (1961).
[3] Ptashne, M. A genetic switch: Gene control and phage lambda. (1986).
[4] Elowitz, M. B. & Leibler, S. A synthetic oscillatory network of transcriptional regulators. Nature 403, 335–338 (2000).
[5] Gardner, T. S., Cantor, C. R. & Collins, J. J. Construction of a genetic toggle switch in Escherichia coli. Nature 403, 339–342 (2000).
[6] Mead, C. & Conway, L. Introduction to VLSI systems. (1980).
[7] Nielsen, A. A. K. et al. Genetic circuit design automation. Science 352, aac7341 (2016).

So You Think You Have a Power Law – Well Isn't That Special?
Regular readers who care about such things — I think there are about three of you — will recall that I have long had a thing about just how unsound many of the claims for the presence of power law distributions in real data are, especially those made by theoretical physicists, who, with some honorable exceptions, learn nothing about data analysis. (I certainly didn’t.) I have even whined about how I should really be working on a paper about how to do all this right, rather than merely snarking in a weblog. As evidence that the age of wonders is not passed — and, more relevantly, that I have productive collaborators — this paper is now loosed upon the world:

Aaron Clauset, CRS, and M. E. J. Newman, “Power-law distributions in empirical data”, arxiv:0706.1062, with code available in Matlab and R; forthcoming (2009) in SIAM Review.
Abstract: Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the tail of the distribution. In particular, standard methods such as least-squares fitting are known to produce systematically biased estimates of parameters for power-law distributions and should not be used in most circumstances. Here we describe statistical techniques for making accurate parameter estimates for power-law data, based on maximum likelihood methods and the Kolmogorov-Smirnov statistic. We also show how to tell whether the data follow a power-law distribution at all, defining quantitative measures that indicate when the power law is a reasonable fit to the data and when it is not. We demonstrate these methods by applying them to twenty-four real-world data sets from a range of different disciplines. Each of the data sets has been conjectured previously to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.

The paper is deliberately aimed at physicists, so we assume some things that they know (like some of the mechanisms, e.g. critical fluctuations, which can lead to power laws), and devote extra detail to things they don’t but which e.g. statisticians do know (such as how to find the cumulative distribution function of a standard Gaussian). In particular, we refrained from making a big deal about the need for an error-statistical approach to problems like this, but it definitely shaped the way we went about it.

Aaron has already posted about the paper, but I’ll do so myself anyway. Partly this is to help hammer the message home, and partly this is because I am a basically negative and critical man, and this sort of work gives me an excuse to vent my feelings of spite under the pretense of advancing truth (unlike Aaron and Mark, who are basically nice guys and constructive scholars).
Here are the take-home points, none of which ought to be news, but which, taken together, would lead to a real change in the literature. (For example, half or more of each issue of Physica A would disappear.)

Lots of distributions give you straight-ish lines on a log-log plot. True, a Gaussian or a Poisson won’t, but lots of other things will. Don’t even begin to talk to me about log-log plots which you claim are “piecewise linear”.

Abusing linear regression makes the baby Gauss cry. Fitting a line to your log-log plot by least squares is a bad idea. It generally doesn’t even give you a probability distribution, and even if your data do follow a power-law distribution, it gives you a bad estimate of the parameters. You cannot use the error estimates your regression software gives you, because those formulas incorporate assumptions which directly contradict the idea that you are seeing samples from a power law. And no, you cannot claim that because the line “explains” (really, describes) a lot of the variance that you must have a power law, because you can get a very high R^2 from other distributions (that test has no “power”). And this is without getting into the additional errors caused by trying to fit a line to binned data.

It’s true that fitting lines on log-log graphs is what Pareto did back in the day when he started this whole power-law business, but “the day” was the 1890s. There’s a time and a place for being old school; this isn’t it.

Use maximum likelihood to estimate the scaling exponent. It’s fast! The formula is easy! Best of all, it works! The method of maximum likelihood was invented in 1922 [parts 1 and 2], by someone who studied statistical mechanics, no less. The maximum likelihood estimators for the discrete (Zipf/zeta) and continuous (Pareto) power laws were worked out in 1952 and 1957 (respectively). They converge on the correct value of the scaling exponent with probability 1, and they do so efficiently. You can even work out their sampling distribution (it’s an inverse gamma) and so get exact confidence intervals. Use the MLEs!
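For the continuous (Pareto) case the estimator really is a one-liner. A minimal NumPy sketch of the formula, with its asymptotic standard error:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """MLE for the continuous power law p(x) ~ x^(-alpha) on x >= xmin:
    alpha_hat = 1 + n / sum(ln(x_i / xmin)), with standard error (alpha_hat - 1) / sqrt(n)."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= xmin]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    return alpha, (alpha - 1.0) / np.sqrt(n)
```

The discrete (Zipf/zeta) case has no closed-form solution, but it reduces to a one-dimensional numerical maximization; the paper gives the details.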

Use goodness of fit to estimate where the scaling region begins. Few people pretend that the whole of their data set follows a power law distribution; usually the claim is about the right or upper tail, the large values over some given threshold. This ought to raise the question of where the tail begins. Usually people find it by squinting at their log-log plot. Mark Handcock and James Jones, in one of the few worthwhile efforts here, suggested using Schwarz’s information criterion. This isn’t bad, but has trouble with continuous data. Aaron devised an ingenious method which finds the empirically-best scaling region, by optimizing the Kolmogorov-Smirnov goodness-of-fit statistic; it performs slightly better than the information criterion.

(Yes, one could imagine more elaborate semi-parametric approaches to this problem. Feel free to go ahead and implement them.)
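A bare-bones sketch of the KS-minimization idea (not the authors’ reference implementation): for every candidate xmin, fit the exponent to the tail by maximum likelihood and keep the xmin whose fitted power law is closest, in KS distance, to the empirical tail.

```python
import numpy as np

def best_xmin(x):
    """Scan candidate xmin values; return (KS distance, xmin, alpha) minimizing the KS statistic.
    Written for clarity, not speed."""
    x = np.sort(np.asarray(x, dtype=float))
    best = (np.inf, None, None)
    for xmin in np.unique(x)[:-1]:          # keep at least one point above xmin in the tail
        tail = x[x >= xmin]
        logs = np.sum(np.log(tail / xmin))
        if logs == 0:                        # degenerate tail, skip
            continue
        alpha = 1.0 + tail.size / logs       # MLE from the previous sketch
        cdf_fit = 1.0 - (tail / xmin) ** (1.0 - alpha)       # fitted Pareto CDF at the tail points
        cdf_emp = np.arange(1, tail.size + 1) / tail.size    # empirical CDF of the tail
        ks = np.max(np.abs(cdf_emp - cdf_fit))
        if ks < best[0]:
            best = (ks, xmin, alpha)
    return best
```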

Use a goodness-of-fit test to check goodness of fit. In particular, if you’re looking at the goodness of fit of a distribution, use a statistic meant for distributions, not one for regression curves. This means forgetting about R^2, the fraction of variance accounted for by the curve, and using the Kolmogorov-Smirnov statistic, the maximum discrepancy between the empirical distribution and the theoretical one. If you’ve got the right theoretical distribution, the KS statistic will converge to zero as you get more data (that’s the Glivenko-Cantelli theorem). The one hitch in this case is that you can’t use the usual tables/formulas for significance levels, because you’re estimating the parameters of the power law from the data. (If you really want to see where the problem comes from, see Pollard, starting on p. 99.) This is why God, in Her wisdom and mercy, gave us the bootstrap.

If the chance of getting data which fits the estimated distribution as badly as your data fits your power law is, oh, one in a thousand or less, you had better have some other, very compelling reason to think that you’re looking at a power law.
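Concretely, the p-value comes from a semi-parametric bootstrap: generate synthetic data sets from the fitted model (resampled from the empirical data below xmin, drawn from the fitted power law above it), refit each synthetic set the same way, and count how often the synthetic KS distance is at least as large as the observed one. A simplified sketch, building on best_xmin above:

```python
import numpy as np

def ks_pvalue(x, n_boot=1000, seed=0):
    """Fraction of synthetic data sets whose best power-law fit is worse (larger KS) than the data's."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    ks_obs, xmin, alpha = best_xmin(x)          # from the earlier sketch
    body, tail = x[x < xmin], x[x >= xmin]
    worse = 0
    for _ in range(n_boot):
        n_tail = rng.binomial(x.size, tail.size / x.size)
        # fitted Pareto tail by inverse-CDF sampling; empirical bootstrap below xmin
        synth_tail = xmin * rng.random(n_tail) ** (-1.0 / (alpha - 1.0))
        synth_body = rng.choice(body, x.size - n_tail) if body.size else np.empty(0)
        ks_synth, _, _ = best_xmin(np.concatenate([synth_body, synth_tail]))
        worse += ks_synth >= ks_obs
    return worse / n_boot
```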

Use Vuong’s test to check alternatives, and be prepared for disappointment. Even if you’ve estimated the parameters of your power law properly, and the fit is decent, you’re not done yet. You also need to see whether other, non-power-law distributions could have produced the data. This is a model selection problem, with the complication that possibly neither the power law nor the alternative you’re looking at is exactly right; in that case you’d at least like to know which one is closer to the truth. There is a brilliantly simple solution to this problem (at least for cases like this) which was first devised by Quang Vuong in a 1989 Econometrica paper: use the log-likelihood ratio, normalized by an estimate of the magnitude of the fluctuations in that ratio. Vuong showed that this test statistic asymptotically has a standard Gaussian distribution when the competing models are equally good; otherwise it will almost surely converge on picking out the better model. This is extremely clever and deserves to be much better known. And, unlike things like the fit to a log-log regression line, it actually has the power to discriminate among the alternatives.
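A sketch of that test against a log-normal alternative; this is simplified relative to the paper, which compares distributions fitted to the same tail and truncates the alternative at xmin, but it shows the shape of the calculation (SciPy assumed):

```python
import numpy as np
from scipy import stats

def vuong_powerlaw_vs_lognormal(tail, xmin, alpha):
    """Normalized log-likelihood ratio (Vuong 1989); |z| small => the data can't tell the models apart."""
    tail = np.asarray(tail, dtype=float)
    n = tail.size
    # pointwise log-likelihood under the fitted continuous power law
    ll_pl = np.log((alpha - 1.0) / xmin) - alpha * np.log(tail / xmin)
    # log-normal fitted to the same points by maximum likelihood (untruncated, for simplicity)
    shape, loc, scale = stats.lognorm.fit(tail, floc=0)
    ll_ln = stats.lognorm.logpdf(tail, shape, loc, scale)
    d = ll_pl - ll_ln
    z = d.sum() / (np.sqrt(n) * d.std(ddof=1))   # asymptotically N(0,1) if the fits are equally good
    return z, 2.0 * stats.norm.sf(abs(z))        # two-sided p-value
```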

If you use sensible, heavy-tailed alternative distributions, like the log-normal or the Weibull (stretched exponential), you will find that it is often very, very hard to rule them out. In the two dozen data sets we looked at, all chosen because people had claimed they followed power laws, the log-normal’s fit was almost always competitive with the power law, usually insignificantly better and sometimes substantially better. (To repeat a joke: Gauss is not mocked.)

For about half the data sets, the fit is substantially improved by adding an exponential cut-off to the power law. (I’m too lazy to produce the necessary equations; read the paper.) This means that there is a characteristic scale after all, and that super-mega-enormous, as opposed to merely enormous, events are, indeed, exponentially rare. Strictly speaking, a cut-off power law should always fit the data better than a pure one (just let the cut-off scale go to infinity, if need be), so you need to be a little careful in seeing whether the improvement is real or just noise; but often it’s real.

Ask yourself whether you really care. Maybe you don’t. A lot of the time, we think, all that’s genuinely important is that the tail is heavy, and it doesn’t really matter whether it decays linearly in the log of the variable (power law) or quadratically (log-normal) or something else. If that’s all that matters, then you should really consider doing some kind of non-parametric density estimation (e.g., Markovitch and Krieger’s methods).

Sometimes, though, you do care. Maybe you want to make a claim which depends heavily on just how common hugely large observations are. Or maybe you have a particular model in mind for the data-generating process, and that model predicts some particular distribution for the tail. Then knowing whether it really is a power law, or closer to a power law than (say) a stretched exponential, actually matters to you. In that case, you owe it to yourself to do the data analysis right. You also owe it to yourself to think carefully about whether there are other ways of checking your model. If the only testable prediction it makes is about the shape of the tail, it doesn’t sound like a very good model, and it will be intrinsically hard to check it.

Because this is, of course, what everyone ought to do with a computational paper, we’ve put our code online, so you can check our calculations, or use these methods on your own data, without having to implement them from scratch. I trust that I will no longer have to referee papers where people use GnuPlot to draw lines on log-log graphs, as though that meant something, and that in five to ten years even science journalists and editors of Wired will begin to get the message.

Nike teamed up with Snap and Darkstore to pre-release Air Jordan III ‘Tinker’ shoes on Snapchat

Snap, Nike, Darkstore and Shopify teamed up in a collaboration of epic proportions to pre-release the Air Jordan III “Tinker” on Snapchat with same-day delivery last night after the NBA All-Star game. This is the first time a brand other than Snap has sold a product via Snapchat.
The thousands who attended the Jumpman All-Star after-party in Los Angeles last night were able to scan exclusive Snap codes to receive the shoes by 10:30pm that same night. Once they scanned the Snap code, they were brought into the Snapchat app, where they could then purchase the sneakers.
Within 23 minutes, all the shoes sold out, Darkstore CEO Lee Hnetinka told me. Darkstore, a startup that aims to become an “invisible retailer,” facilitated the deliveries.
“This is the holy grail of the experience [Nike is] trying to intend, which is direct to consumer — to the actual consumer, versus a bot — and same-day delivery,” Hnetinka said. “The Snap code introduces a new paradigm for commerce.”
Darkstore works by exploiting excess capacity in storage facilities, malls and bodegas, and enables them to be fulfillment centers with just a smartphone. The idea is that brands without local inventory can store products in a Darkstore and then ship them out the same day.
In addition to the exclusive Snap codes, Snapchat geofenced the area over the Staples Center in downtown Los Angeles during the All-Star game. Within that geofence, fans had access to a special 3D augmented reality Michael Jordan lens.
The official release of the shoe isn’t until March 24, but Nike wanted to do something extra special in celebration of the 30th anniversary of Michael Jordan’s slam dunk in 1988. That dunk is often referred to as the moment when Jordan “took flight.”
This isn’t Nike’s first time selling shoes via app-based experiences. Last June, Nike’s release of the SB Dunk High “Momofuku” required people to go to a Momofuku restaurant, or to the Momofuku website, and then point their camera at the menu in order to see a sneaker pop up in augmented reality. From there, sneakerheads could purchase the shoes. Similar to what Nike is doing with Snapchat, you have to physically, or virtually, be somewhere in order to buy a pair.





This collaboration also marks Snap’s most aggressive move into the in-app e-commerce game. Snap launched the Snap Store within the Snapchat app’s Discover section earlier this month to sell the Dancing Hot Dog Plushie, a Snapchat winkface sweatshirt and other Snap-related products. At the time, TechCrunch’s Josh Constine noted Snapchat could position itself as a way for top brands to reach their audiences in a medium that bridges both shopping and social experiences.

Only Numpy: Recommending Optimal Treatment for Depression using Dilated Update Gate RNN (Google…

Training and Results (All Cases)

Left Image → Case 1: Cost Over Time
Right Image → Case 1: Performance on Test Set
The learning rate seems a bit high, since the cost over time wobbles a lot. However, the network seems to generalize well, since it did well on the test set.

Left Image → Case 2: Cost Over Time
Right Image → Case 2: Performance on Test Set
Case 2’s cost-over-time graph doesn’t look much different from case 1’s, and neither does its performance on the test set.

Left Image → Case 3: Cost Over Time
Right Image → Case 3: Performance on Test Set
Case 3 was a bit interesting: as seen in the cost-over-time graph, the cost value is a bit higher than in the other cases, and I think that is why it didn’t do as well on the test set.

Left Image → Case 4: Cost Over Time
Right Image → Case 4: Performance on Test Set
The final case was the best of all: not only did the model have the smallest cost value at the end of training, it also did well on the test set.

Top 10 Mobile App Marketing Companies, Agencies | List
Here is a fresh list of the top 10 mobile app marketing companies that understand the importance of mobile app marketing. Each of the companies on this list takes a different approach to mobile app marketing.

Top 10 Mobile App Marketing Companies List
A detailed list of Top Mobile App Marketing Companies:
#1: ComboApp

ComboApp Group is an app marketing company that provides app marketing services for the global mobile app marketplace. The company is made up of three units: ComboApp Consulting, ComboStore and GoWide. The vital part of the group is ComboApp’s Innovation Hub — the team of programmers and analysts that supports all projects.
ComboApp Consulting is a global mobile app marketing company aimed at supplying guidance and complex assistance to companies who develop and promote their mobile apps. We provide clients with everything from creative development to product/campaign strategy and analytics to media placement and traditional public relations campaigns. This includes identifying target markets, validating concepts, shaping business models, qualitative research, data analysis, preparing the app launch and more.
ComboStore is a “one stop checkout”, mobile app marketing super-store designed to help mobile app owners and mobile developers leverage every dollar of their marketing budget to provide the most targeted, effective results possible to amplify the promotion of their app in the mobile ecosystem. The goal of the ComboStore is to level the marketing playing field, providing even those with a tight budget with the promotion tools required for success.
GoWide is a global mobile ad platform enabling businesses to run effective cost-per-mobile app downloads and mobile CPA campaigns based on its own proprietary technology of programmatic media buying. Its unique algorithm — consisting of aggregation and the inventory of mobile web and in-app publishers — allows dramatic optimization of ad operations and lowers media buying costs for advertisers.
Mobile apps Marketing & PR services
#2: App Promo

App Promo offers Strategy, Marketing and Monetization services across iOS, Android and Windows.
Our Services include:
-App Strategy: We are experts at rooting ideas in business strategy necessary to mitigate the risk of launching an app and properly set it up for success.
-App Marketing: From PR and Social Media to Paid Media and Promotions, our app marketing experts will work with you to get your app downloaded.
-App Monetization: Whether you need help identifying a revenue model or have one that isn’t working, our team has the skills and experience to make you money.
-App Store Optimization: Immediately improve app discovery in the app store with our ASO (App Store Optimization) packages for new & existing apps across platforms.
App Strategy, App Marketing, App Monetization, App Store Optimization
#3: InMobi

InMobi enables consumers to discover amazing products through mobile advertising. Through Miip, a revolutionary discovery platform, developers, merchants & brands can engage mobile consumers globally. Recognized by MIT Technology Review as one of the 50 Most Disruptive Companies in the world, InMobi enables over 100 billion discovery sessions on mobile across a billion users every month, becoming the largest discovery platform in the world.
Global Mobile Ad Platform, Mobile Advertising, Discovery Platform for Consumers, App Developers, Mobile App Marketing, Mobile App Monetization
#4: Appency

Appency — Professional Mobile Application Marketing 
Make a BIG Noise
Appency is the global leader in mobile application marketing. We provide a full range of marketing services from branding and consultation services to managing your launch and long term success with PR, social marketing, paid media management and more.
Mobile Apps, iOS, Product Consulting, Branding, Public Relations, Paid Media, App Marketing, Android, Windows Mobile
#5: Somo

Somo creates solutions for the connected world, with a mission to help businesses increase sales, customer engagement and productivity. Recently named Best Management Team and ranked #19 in The Sunday Times Tech track, Somo’s full service offering has three focused divisions:
● Custom Product Development 
● Marketing for the Connected World 
● Products and Platforms
These divisions are tied together by our strategy and insights team.
Somo has built some of the most innovative and creative mobile products in the market and run some of the most successful ad campaigns for brands all over the world such as Audi, De Beers, BP, and The New York Times.
Founded in 2009, the privately-held company of more than 180 employees is based in London and has offices in San Francisco, New York, and Bristol.
Please contact us through our website and we’d be happy to discuss how we can help you win in the connected world!
Mobile, Marketing, Advertising, Creative, Applications, Strategy
#6: Mozoo

Mozoo is a leading independent advertising group, providing tailored end to end mobile solutions. Mozoo Group has offices in Paris and London, and contains two principle entities: Surikate — a performance division, and Numbate — a creative and innovative division.
At Mozoo Group, mobile users’ experience is placed at the forefront of the action. From the creation of highly engaging formats to intelligent optimisation, a team of mobile experts ensures the delivery of non-intrusive mobile advertising experiences.
Through its unique portfolio of expertise and services, Mozoo Group delivers both cost-effective performances and winning innovative campaigns.
Mobile Advertising, Mobile Marketing, Mobile application promotion, Innovation, creativity and performance, Mobile traffic generation
#7: Yodel Mobile
Yodel Mobile is an award-winning specialist in mobile marketing — delivering marketing success for clients through expert mobile strategy, mobile advertising know how and engaging apps and campaigns.
Providing mobile marketing solutions for clients including Kobo, Hastings Direct, Dennis Publishing, IPC, BBC Magazines, Coral, Right Move, Daily Mail, and The Economist, Yodel Mobile combines specialist mobile expertise with a passion for the industry.
Yodel Mobile works to build marketing strategies that achieve outstanding results for brands across a variety of sectors. Defining the role of mobile within your business is one of the most important processes your company will go through — Yodel Mobile provides the tools, advice and insight that make sure you get it right.
Established in 2007, Yodel Mobile was there at the start of the mobile ad revolution — and continues to operate at the cutting edge of the industry. The opportunity to reach your audience through mobile advertising is immense — our experience in managing, measuring and optimising mobile ad campaigns consistently delivers on our clients’ marketing objectives.
Creativity is at the heart of every successful campaign — and as well as providing the strategic planning, Yodel Mobile provides full creative services. Whether it’s developing applications, mobile websites, messaging campaigns, or rich media, we know how to create work that engages customers.
Mobile Advertising, Mobile Strategy, Mobile Creative, Mobile Planning, Mobile Research, Mobile Development, Mobile App Store Optimisation, Mobile Search, Mobile CRM, Mobile Tracking & Analytics, Data Insight, Mobile Research, Mobile Marketing, Mobile Buying
#8: Phonevalley
Phonevalley is the world’s leading mobile marketing agency and Publicis Groupe’s mobile marketing agency.
Recognized as an industry pioneer, Phonevalley provides a full service offer in mobile marketing which spans from mobile media planning and buying, to mobile interactive services (mobile Internet sites, mobile applications, branded content & promotions) and strategic consultancy. 
Phonevalley’s unique value proposition includes its global approach, as well as a deep attention to local specificities.
Phonevalley is a part of Publicis Groupe (Euronext Paris: FR0000130577), the world’s fourth largest communications group. In addition, it is ranked as the world’s second largest media counsel and buying group, and is a global leader in digital and healthcare communications.
Mobile marketing strategy, mobile and tablet applications, responsive mobile sites, SMS messaging and push notifications
#9: Kumuva
Kumuva is a mobile app consultancy that offers marketing, design and product management services to organisations and entrepreneurs around the world. We’ve helped developers get to the top of the app store charts and have achieved millions of downloads for our clients.
Digital Strategy, Mobile Marketing, Product Management, SMM
Would you like to include your app marketing company in this article? Please comment down below.
Thank you for visiting my article 🙂

A Hacker Has Wiped a Spyware Company’s Servers
Last year, a vigilante hacker broke into the servers of a company that sells spyware to everyday consumers and wiped their servers, deleting photos captured from monitored devices. A year later, the hacker has done it again.
Thursday, the hacker said he started wiping some cloud servers that belong to Retina-X Studios, a Florida-based company that sells spyware products targeted at parents and employers, but that are also used by people to spy on their partners without their consent.
Retina-X was one of two companies that were breached last year in a series of hacks that exposed the fact that many otherwise ordinary people surreptitiously install spyware on their partners’ and children’s phones in order to spy on them. This software has been called “stalkerware” by some. The spyware gives whoever controls it practically full access to the target’s smartphone or computer: they can see the photos the target snaps with their phone, read their text messages, see what websites they visit, and track their location.
A Retina-X spokesperson said in an email Thursday that the company hasn’t detected a new data breach since last year. Friday morning, after the hacker told us he had deleted much of Retina-X’s data, the company again said it had not been hacked. But Motherboard confirmed that the hacker does indeed have access to its servers.
Friday, Motherboard created a test account using Retina-X’s PhoneSheriff spyware in order to verify the hacker’s claims. We downloaded and installed PhoneSheriff onto an Android phone and used the phone’s camera to take a photo of our shoes.
“I have 2 photos of shoes,” the hacker told us moments later.
The hacker also described other photos we had on the device, told us the email account we used to register the account, and then deleted the data from our PhoneSheriff account.
“None of this should be online at all,” the hacker told Motherboard, claiming that he had deleted a total of 1 terabyte of data.
“Aside from the technical flaws, I really find this category of software disturbing. In the US, it’s mainly targeted to parents,” the hacker said, explaining his motivations for going after Retina-X. “Edward Snowden has said that privacy is what gives you the ability to share with the world who you are on your own terms, and to protect for yourself the parts of you that you’re still experimenting with. I don’t want to live in a world where younger generations grow up without that right.”
In the first Retina-X data breach last year, the hacker was able to access private photos, messages, and other sensitive data from people who were monitored using one of Retina-X’s products. The private data was stored in containers provided by cloud provider Rackspace. The hacker found the key and credentials to those containers inside the Android app of PhoneSheriff, one of Retina-X’s spyware products. The API key and the credentials were stored in plaintext, meaning the hacker could take them and gain access to the server.
This time, the hacker said the API key was obfuscated, but it was still relatively easy for him to obtain it and break in again. Because he feared another hacker getting in and then posting the private photos online, the hacker decided to wipe the containers again.
Shortly after Motherboard first reported the Retina-X breach in February of last year, a second hacker independently approached us, and said they already had been inside the company’s systems for some time. The hacker provided other internal files from Retina-X, some of which Motherboard verified at the time.
Answering a series of questions about what Retina-X changed after last year’s hack, a spokesperson wrote in an email that “we have been taking steps to enhance our data security measures. Sharing details of security measures could only serve to potentially compromise those efforts.”
“Retina-X Studios is committed to protecting the privacy of its users and we have cooperated with investigating authorities,” the spokesperson wrote. “Unfortunately, as we are well aware, the perpetrators of these egregious actions against consumers and private companies are often never identified and brought to justice.”

At the end of 2016, the hacker gained access to the servers of Retina-X, which makes several spyware products, and started collecting data and moving inside the company’s networks. Weeks later, the hacker shared samples of some of the data he accessed and stole with Motherboard. But he didn’t post any of it online. Instead, he wiped some of the servers he got into, as the company later admitted in February of 2017.
The new alleged hack comes just a few days after the hacker resurfaced online. At the beginning of February, the hacker started to dump online some of the old data he stole from Retina-X in late 2016. The hacker is now using a Mastodon account called “Precise Buffalo” to share screenshots recounting how he broke in, as well as raw data from the breach, though no private data from victims and targets.
In February of 2017, a Motherboard investigation based on data provided by hackers showed that tens of thousands of people—teachers, construction workers, lawyers, parents, jealous lovers—use stalkerware apps. Some of those people use the stalkerware apps to spy on their own partners without their consent, something that is illegal in the United States and is often associated with domestic abuse and violence.
Retina-X was not the only spyware company hacked last year. Other hackers also breached FlexiSpy, an infamous provider of spyware that has actively marketed its apps to jealous lovers. At the time, the hackers promised that their two victims—FlexiSpy and Retina-X—were only the first in line, and that they would target more companies that sell similar products.
Additional reporting by Joseph Cox and Jason Koebler.
Read more: When Technology Takes Hostages: The Rise of ‘Stalkerware’
Got a tip? You can contact this reporter securely on Signal at +1 917 257 1382, OTR chat at, or email