Daniel Estrada
Lives in Internet
29,943 followers|6,485,862 views


Daniel Estrada

Shared publicly  - 
John Michael Greer:

You can have everything you need to build a bicycle and still be unable to make a telescope or a radio receiver, and vice versa.

Strictly speaking, therefore, nothing requires the project of deliberate technological regress to move in lockstep to the technologies of a specific past date and stay there. It would be wholly possible to dump certain items of modern technology while keeping others. It would be just as possible to replace one modern technological suite with an older equivalent from one decade, another with an equivalent from a different decade and so on. Imagine, for example, a future America in which solar water heaters (worked out by 1920) and passive solar architecture (mostly developed in the 1960s and 1970s) were standard household features, canal boats (dating from before 1800) and tall ships (ditto) were the primary means of bulk transport, shortwave radio (developed in the early 20th century) was the standard long-range communications medium, ultralight aircraft (largely developed in the 1980s) were still in use, and engineers crunched numbers using slide rules (perfected around 1880).

There’s no reason why such a pastiche of technologies from different eras couldn’t work. We know this because what passes for modern technology is a pastiche of the same kind, in which (for example) cars whose basic design dates from the 1890s are gussied up with onboard computers invented a century later. Much of modern technology, in fact, is old technology with a new coat of paint and a few electronic gimmicks tacked on, and it’s old technology that originated in many different eras, too. Part of what differentiates modern technology from older equivalents, in other words, is mere fashion. Another part, though, moves into more explosive territory.


These days, when you see the words “new and improved” on a product, rather more often than not, the only thing that’s been improved is the bottom line of the company that’s trying to sell it to you. When you hear equivalent claims about some technology that’s being marketed to society as a whole, rather than sold to you personally, the same rule applies at least as often.


Technological progress is a function of collective choices—do we fund Sealab or the Apollo program? Supersonic transports or urban light rail? Energy conservation and appropriate tech or an endless series of wars in the Middle East? No impersonal force makes those decisions; individuals and institutions make them, and then use the rhetoric of impersonal progress to cloak the political and financial agendas that guide the decision-making process.

What’s more, even if the industrial world chooses to invest its resources in a project, the laws of physics and economics determine whether the project is going to work. The Concorde is the poster child here, a technological success but an economic flop that never even managed to cover its operating costs. Like nuclear power, it was only viable given huge and continuing government subsidies, and since the strategic benefits Britain and France got from having Concordes in the air were nothing like so great as those they got from having an independent source of raw material for nuclear weapons, it’s not hard to see why the subsidies went where they did.

That is to say, when something is being lauded as the next great step forward in the glorious march of progress leading humanity to a better world, those who haven’t drunk themselves tipsy on folk mythology need to keep four things in mind. The first is that the next great step forward  in the glorious march of progress (etc.) might not actually work when it’s brought down out of the billowing clouds of overheated rhetoric into the cold hard world of everyday life. The second is that even if it works, the next great step forward (etc.) may be a white elephant in economic terms, and survive only so long as it gets propped up by subsidies. The third is that even if it does make economic sense, the next great step (etc.) may be an inferior product, and do a less effective job of meeting human needs than whatever it’s supposed to replace. The fourth is that when it comes right down to it, to label something as the next great (etc.) is just a sales pitch, an overblown and increasingly trite way of saying “Buy this product!”

Those necessary critiques, in turn, are all implicit in the project of deliberate technological regress. Get past the thoughtstopping rhetoric that insists “you can’t turn back the clock”—to rephrase a comment of G.K. Chesterton’s, most people turn back the clock every fall, so that’s hardly a valid objection—and it becomes hard not to notice that “progress” is just a label for whatever choices happen to have been made by governments and corporations, with or without input from the rest of us. If we don’t like the choices that have been made for us in the name of progress, in turn, we can choose something else.


Once you buy into the notion that the specific choices made by industrial societies over the last three centuries or so are something more than the projects that happened to win out in the struggle for wealth and power, once you let yourself believe that there’s a teleology to it all—that there’s some objectively definable goal called “progress” that all these choices did a better or worse job of furthering—you’ve just made it much harder to ask where this thing called “progress” is going. The word “progress,” remember, means going further in the same direction, and it’s precisely questions about the direction that industrial society is going that most need to be asked.

I’d like to suggest, in fact, that going further in the direction we’ve been going isn’t a particularly bright idea just now.  It isn’t even necessary to point to the more obviously self-destructive dimensions of business as usual. Look at any trend that affects your life right now, however global or local that trend may be, and extrapolate it out in a straight line indefinitely; that’s what going further in the same direction means. If that appeals to you, dear reader, then you’re certainly welcome to it.  I have to say it doesn’t do much for me.
Last week’s post here on The Archdruid Report appears to have hit a nerve. That didn’t come as any sort of a surprise, admittedly.  It’s one thing to point out that going back to the simpler and less energy-intensive technolo...

Daniel Estrada

Shared publicly  - 
// I love seeing the internet come together in moments like these. We can't anticipate such events, we can only ride the wave wherever it takes us. The joy is that it takes us all along with it. 

A few days back I shared a study arguing that social media correlates with a rise in violence compared to mass media. The authors argued that mass media unified communities where social media fractured them, leading to violence.

This whole dress color thing is a clear reminder that the internet is one network. We spend our time on different subnets talking to niche communities about the things that only we care about, so it might look like we're fragmented and balkanized because we're not all dealing with the same input. But this is utterly wrongheaded; that's not how socialization works at all.

Social media isn't just mass media, it is organized media. That means it's operating at a level of complexity and scale beyond anything traditional media can comprehend. It's the difference between throwing a bucket of water on your garden and installing a sprinkler system. 

We've crafted a global network that can amplify the mumblings on one network to attract the attention of our best minds across all networks. Within hours, the same debate (over the color of that dress) was happening a million times in parallel, with experts on call to weigh in.

We didn't need a mass message or single voice to rally that collective attention. We just needed something compelling enough to share. Our networks form a seamless fabric that we've stitched together ourselves. We can bridge these networks and assemble at a massive scale when needed, or we can decompose into our cliques and save energy. Controlling our assembled attention is hard, and we have a lot to learn about cooperating at these scales. I'm sure we'll encounter even greater challenges in the future, so this is excellent practice. 

Good job, internet. Keep doing this kind of thing. It's good for you. 

// Framing this event for Skylar hth
G+ was way late catching onto the dress thing. It was pretty quick about Nimoy. I don't think it's ever produced something that's had a life off this network.

'cept maybe us.  
Memory lapse draws bumblebees to untried flowers

In their everyday lives, bumblebees have a lot to remember: the colours, patterns, scents and symmetries of flowers, the best ways to get food from the best ones, as well as their locations and how to get to them. Relative to their lifespan, bumblebees have a good long-term memory for these details. They learn very early how to manipulate flowers to get nectar or pollen out of them, and still remember this three weeks later, towards the end of their short lives.

But they do make mistakes. "Bees can memorise more than one flower type, though there are costs," says Lars Chittka of Queen Mary University of London. "Bees make more mistakes when they juggle multiple memories than if they just focus on one flower type."

Daniel Estrada

Shared publicly  - 
// A fifth of articles in top-tier journals are never cited. Good thing we're keeping them behind paywalls!

Daniel Estrada

Shared publicly  - 
The app also tracks positive emotions, and automatically invites the friends you like to hang out more often. It pairs with a smart watch that monitors heart-rate variability—a known measure of stress—and uses a custom algorithm to crunch that data and determine your mood. If you're noticeably emotional, the app pings you afterward to ask what was going on and who you were with. Over time, it charts which friends are making you happiest or most crazy.

In a week-long user test, college freshmen who tried the app seemed to find it useful. ("Maybe I shouldn't hang out with Mark," one student says in this video. "Maybe he's kind of a dick").
I also noticed right away and thought it was on purpose, but now I'm not so sure how the two in the screenshot seem to exemplify "not getting along".  He's got a smile, she sort of has a grimace. Though it could be cultural differences or just the syllable she's about to use, then when you watch it, indeed she interrupts him in a way that seems like annoyance.  Would have been hilarious if their watches both lit up at that moment, "dick alert, dick alert!".

Daniel Estrada

Shared publicly  - 
> As predicted, the results indicate that locations characterized by higher levels of radio penetration tend to generate lower rates of collective violence, while locations characterized by higher levels of cellular penetration tend to generate higher rates of collective violence. They also point to the intriguing possibility of an interactive effect between these media forms, in which the prior presence of radio infrastructure dampens the violence-promoting effects of cellular infrastructure, demonstrating the strong effects that can arise through historical path-dependence.
via Martin Gurri:

Full Study:

// In this study, "social media" refers to cell phones, where communication tends to be "homophilic", meaning it happens within local networks of more homogeneous communities.

The claim is that broader networks (=radio) speak to more diverse populations, and hence tend to unify those populations in ways that result in less sectarian violence. Social media (=cell phones) tends to reinforce community boundaries, and hence results in more violence.
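The study's headline claim, including the dampening interaction between radio and cellular coverage, has the shape of a simple regression with an interaction term. Here's a purely illustrative sketch on synthetic data — every coefficient is made up for the example, and nothing below comes from the actual study:

```python
# Illustrative only: synthetic data with the sign pattern the study
# reports -- cell coverage raises violence, radio lowers it, and a
# negative interaction term means prior radio infrastructure dampens
# the violence-promoting effect of cell coverage.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
cell = rng.random(n)   # hypothetical cellular penetration, 0..1
radio = rng.random(n)  # hypothetical radio penetration, 0..1

# Made-up "true" model of a violence rate, plus noise.
violence = (1.0 + 0.5 * cell - 0.4 * radio
            - 0.3 * cell * radio
            + rng.normal(0, 0.1, n))

# Recover the coefficients by ordinary least squares.
X = np.column_stack([np.ones(n), cell, radio, cell * radio])
coef, *_ = np.linalg.lstsq(X, violence, rcond=None)
b0, b_cell, b_radio, b_interact = coef
print(f"cell: {b_cell:+.2f}  radio: {b_radio:+.2f}  "
      f"interaction: {b_interact:+.2f}")
```

The sign of the interaction coefficient is the whole "historical path-dependence" point: where radio is high, the marginal effect of cell coverage (0.5 - 0.3*radio in this toy) shrinks.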

I'm skeptical that the distinction between mass communication and social media can be so cleanly drawn, especially in places with better internet penetration. Sure, my FB feed is mostly friends and acquaintances (my G+ is almost entirely strangers <3<3), but they are all sharing links from common sources; most of my access to NPR or NYT comes through social media. Does this have an overall unifying or fragmenting effect? It's hard to say, so condemning "social media" on this basis seems pretty quick.

I'm infinitely more skeptical that the "unifying" nature of mass communication is for our own good. 
Not all information technologies affect violence equally: cellphones, but not radios, are associated with increased violence in 24 African states.
There's a balance. I think there's a role for solidifying new and emerging beliefs +Richard Healy, a role for solidifying pervading norms that necessarily require a certain amount of 'selection pressure' to overcome, and a role for exploring new beliefs.

Daniel Estrada

Shared publicly  - 
Here's a report on a brand-spanking-new model intercomparison project looking at the impacts of stratospheric aerosol release on the global climate. It's using a completely new data set, too. Here are some thoughts.

They ran the simulation from 2020 to 2099, with a short and steep ramp-up and ramp-down of aerosol release near the beginning and end of the range (respectively) to simulate a relatively rapid start and stop (I guess they're optimistically supposing that we'll have this all wrapped up by 2100). They assumed a maximal 4x CO2 concentration (!!!) over pre-industrial levels.

The interesting novel consequences, as I see them:

* This tactic reduces the "extreme temperature and precipitation changes" compared to control model runs with the same CO2 increase and no SRM. That seems important, since the rate at which some of this stuff changes is at least as important as the absolute change in some cases, at least with regard to adaptation and other social planning. Rapid changes are hard to deal with, and a really quick change might be more damaging overall than a slower change with a higher absolute magnitude.

* There's a significantly larger change in overall radiative response (and an associated slowdown in the hydrological cycle) compared to other kinds of SRM. I guess they mean something like the "space mirror" approach here. We already knew that reducing incoming solar radiation wholesale would yield different results than reducing surface radiation via aerosols (because of uneven atmospheric heating and evaporation), but this looks like the most specific and significant quantification of that prediction that I've found.

* The efficiency of the cooling effect resulting from sulfuric aerosols drops rather alarmingly as the amount of those aerosols in the atmosphere increases. The more of these compounds there are up there, apparently, the larger the average particle size becomes. This suggests a natural limit for the effectiveness of this technique, which is really important--we can't rely on this forever without the side-effects ramping up to unacceptable levels. Even if everything works out ideally, this is not a permanent fix: the more GHG we put out, the less effective this will be.

* Even this simulation didn't include a lot of small-scale stuff, like inter-layer transport of aerosols within the atmosphere. Since those are important for things like cloud formation, it seems plausible that the estimates of precipitation impact here aren't entirely correct.

This all seems relevant for the project that I'm pursuing, especially considering the fact that there's no mention of the ways in which this might constrain other viable policy options. Given the explicitly short/medium-term effectiveness of this strategy, we'd need to have something else cooking while we're implementing it--preferably a strong mitigation strategy. However, implementing this plan will engender all kinds of complications for popular mitigation strategies, it seems to me. More grist for the mill.
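The saturation effect described above (cooling efficiency falling off as growing particle size reduces scattering per unit mass) amounts to diminishing marginal returns on injection. Here's a toy caricature of that shape; the sublinear exponent is invented for illustration and does not come from the models:

```python
# Toy sketch of saturating aerosol cooling: if the cooling offset grows
# sublinearly with the stratospheric sulfate burden (bigger particles
# scatter less efficiently per unit mass), then each extra teragram of
# injection buys less cooling than the one before it.

def cooling_offset(burden_tg, exponent=0.7):
    """Hypothetical cooling offset (arbitrary units) for a given burden.

    The 0.7 exponent is made up for illustration; real curves come from
    the aerosol microphysics inside the climate models.
    """
    return burden_tg ** exponent

# Marginal cooling per additional teragram, at increasing burdens:
marginals = []
for b in (1, 5, 10, 20):
    m = cooling_offset(b + 1) - cooling_offset(b)
    marginals.append(m)
    print(f"burden {b:2d} Tg -> marginal cooling {m:.3f}")

# Diminishing returns: the marginal benefit strictly decreases.
assert all(x > y for x, y in zip(marginals, marginals[1:]))
```

Any concave curve tells the same story: past some burden, the side-effects keep scaling while the cooling payoff does not.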

#geoengineering   #climateengineering   #climatechange   #climatescience  
A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models. S. Tilmes1, M. J. Mills1, U. Niemeier2, H. Schmidt2, A. Robock3, B. Kravitz4, J.-F. Lamarque1, G. Pitari5, and J. M. English6 1National Center for Atmospheric Research, Boulder, ...
Facebook is rolling out a suicide prevention tool. "Unlike Twitter's tool, Facebook is not automatically monitoring content that is posted on the social network. Instead, users are invited to get in touch if they notice troubling content from any of their contacts, and Facebook will then reach out with the offer of help, support and tips."
A few months ago Twitter was criticized for teaming up with suicide prevention charity Samaritans to automatically monitor for key words and phrases that could indicate that someone was struggling to cope with life. Despite the privacy concerns that surrounded Samaritans Radar, Facebook has decided that it is going to launch a similar venture for Compassion Research Day in a bid to prevent suicides.

Daniel Estrada

Shared publicly  - 
Peter Railton's Dewey Lecture:

Oscar Wilde was right, the problem with socialism is no free evenings. [...] The cost of building a society where the people have more say in how their lives are run is still many, many meetings. What is a meeting, after all, but people deliberating together with a capacity to act as a group that is more than just a sum of individual actions, and this sort of informed joint action is a precondition for significant social change. Come together, decide together, act together, and bear the consequences together. We must own our institutions or they will surely own us. As Aristotle told us, one becomes a citizen not by belonging to a polity or having a vote, but by shouldering the tasks of joint deliberation and civic governance. And there is no civic or faculty governance, no oversight of discrimination in hiring and promotion, no regulation of pollutants, no organization of faculty or students to initiate curricular reform, no mobilization by professional associations to protect their most vulnerable members or to promote greater diversity, no increased humaneness in the treatment of animals and human subjects, no chance to offset arbitrariness and bullying within offices and departments, no oversight of progress and revision of plans in response to changing circumstances, without actual people who care spending long hours in the work of planning, meeting, and making things happen. The alternative is for all these decisions to be made at the discretion of those on high—or not at all.


As I look around me from the vantage point of Philosophy, I see colleagues and students investing countless hours trying to enhance the inclusion of women and other under-represented groups, or to build collective bargaining for graduate student instructors and term lecturers, or to reach out beyond the university to promote equitable trade, or to support humane and ecological practices in agriculture, or to bring new resources to under-served communities. These efforts involve personal sacrifice, and are often made by those within the academy whose positions are the least secure. Moreover, they are making these sacrifices without a movement at their backs, or a Zeitgeist to buoy them from below. So it behooves those of us who are more secure to revive our spirit of activism. To lend a hand, and to use whatever leverage we might have to provide badly-needed support.


I have come to believe that the world of cognition is no less steeped in affect. That’s as it should be, I now realize, since affect is the mind’s way of registering appreciation—of evidence, of the importance of a fact, of the value of an action, of the goodness of a life. Most of the things I blame myself for in life I did from fear of social embarrassment and humiliation—this has been a much more effective deterrent than clubs or threats in keeping me from doing the right thing. Reasoning has certainly helped me try to address these fears, but reasoning has done an almost equally good job of rationalizing my failures to overcome them. When I have managed to overcome this fear, it is because an appreciation of the values and ideals and lives at stake got the better of my over-socialized self—I felt, and not merely thought, I had to act.


In truth, we are still to a considerable degree in a world of “Don’t ask, don’t tell” with regard to depression and associated mental disorders, such as anxiety, even though these will severely affect one in ten of us over the course of a lifetime, and often at more than one point in a lifetime.


Why should I contribute to making it harder for others to acknowledge their depression and seek help? I know what has held me back all these years. Would people think less of me? Would I seem to be tainted, reduced in their eyes, someone with an inner failing whom no one would want to hire, marry, or have children with? Would even friends start tip-toeing around my psyche? Would colleagues trust me with responsibility? I'm now established in my career, so some of these questions have lost some of their bite for me. But not all of them. And think of those who are not as well-placed as I have come to be. Think how these questions can resonate in the mind of a depressed undergraduate or graduate student, trying and failing to do his work, trying to earn the confidence and esteem of his teachers, worried what his friends and parents will think, afraid to show his face in the Department, struggling to find his first job. Will he feel free to come forward and ask for help? Or think of a young faculty member, trying to earn the confidence and esteem of her colleagues, perhaps one of the 12-13% of women who will experience a depressive episode in association with pregnancy? Will she feel free to come forward? We're beginning to accept parental or care-giver leave as a normal part of a career—will faculty feel equally able to request medical leave for depression?


Suicide is the second leading cause of death for post-secondary students, and rising. And its chief cause is untreated depression. Twenty percent of college students say their depression level is higher than it should be, but only 6% say that they would seek help, and still fewer actually do.

What does it say to our students or colleagues, how does it contribute to their ability to seek care, or to escape a sense of utter loneliness and inability to make it out the other side, if even grey grown-ups like me with established careers and loving families can't be open about the depression that has so deeply shaped our lives, and can't make it clear by our very selves: there's real help, you can make it, it's worth it, you're worth it.

Daniel Estrada

Shared publicly  - 
The AI from Google's DeepMind acquisition has learnt to play 49 different retro computer games completely without human input.
‘Agent’ hailed as first step towards true AI as it gets adept at playing 49 retro computer games and comes up with its own winning strategies

Daniel Estrada

Shared publicly  - 
Charlie Stross, presented in full:

I should jot down some axioms about politics ...

1) We're living in an era of increasing automation. And it's trivially clear that the adoption of automation privileges capital over labour (because capital can be substituted for labour, and the profit from its deployment thereby accrues to capital rather than being shared evenly across society).

2) A side-effect of the rise of capital is the financialization of everything—capital flows towards profit centres and if there aren't enough of them profits accrue to whoever can invent some more (even if the products or the items they're guaranteed against are essentially imaginary: futures, derivatives, CDOs, student loans).

3) Since the collapse of the USSR and the rise of post-Tiananmen China it has become glaringly obvious that capitalism does not require democracy. Or even benefit from it. Capitalism as a system may well work best in the absence of democracy.

4) The iron law of bureaucracy states that for all organizations, most of their activity will be devoted to the perpetuation of the organization, not to the pursuit of its ostensible objective. (This emerges organically from the needs of the organization's employees.)

5) Governments are organizations.

6) We observe the increasing militarization of police forces and the privileging of intelligence agencies all around the world. And in the media, a permanent drumbeat of fear, doubt and paranoia directed at "terrorists" (a paper tiger threat that kills fewer than 0.1% of the number who die in road traffic accidents).

7) Money can buy you cooperation from people in government, even when it's not supposed to.

8) The internet disintermediates supply chains.

9) Political legitimacy in a democracy is a finite resource, so supplies are constrained.

10) The purpose of democracy is to provide a formal mechanism for transfer of power without violence, when the faction in power has lost legitimacy.

11) Our mechanisms for democratic power transfer date to the 18th century. They are inherently slower to respond to change than the internet and our contemporary news media.

12) A side-effect of (7) is the financialization of government services (2).

13) Security services are obeying the iron law of bureaucracy (4) when they metastasize, citing terrorism (6) as a justification for their expansion.

14) The expansion of the security state is seen as desirable by the government not because of the terrorist threat (which is largely manufactured) but because of (11): the legitimacy of government (9) is becoming increasingly hard to assert in the context of (2), (12) is broadly unpopular with the electorate, but (3) means that the interests of the public (labour) are ignored by states increasingly dominated by capital (because of (1)) unless there's a threat of civil disorder. So states are tooling up for large-scale civil unrest.

15) The term "failed state" carries a freight of implicit baggage: failed at what, exactly? The unspoken implication is, "failed to conform to the requirements of global capital" (not democracy—see (3)) by failing to adequately facilitate (2).

16) I submit that a real failed state is one that does not serve the best interests of its citizens (insofar as those best interests do not lead to direct conflict with other states).

17) In future, inter-state pressure may be brought to bear on states that fail to meet the criteria in (15) even when they are not failed states by the standard of point (16). See also: Greece.

18) As human beings, our role in this picture is as units of Labour (unless we're eye-wateringly rich, and thereby rare).

19) So, going by (17) and (18), we're on the receiving end of a war fought for control of our societies by opposing forces that are increasingly more powerful than we are.

Have a nice century!


a) Student loans are loans against an imaginary product—something that may or may not exist inside someone's head and which may or may not enable them to accumulate more capital if they are able to use it in the expected manner and it remains useful for a 20-30 year period. I have a CS degree from 1990. It's about as much use as an aerospace engineering degree from 1927 ...

b) Some folks (especially Americans) seem to think that their AR-15s are a guarantor that they can resist tyranny. But guns are an 18th century response to 18th century threats to democracy. Capital doesn't need to point a gun at you to remove your democratic rights: it just needs more cameras, more cops, and a legal system that is fair and just and bankrupts you if you are ever charged with public disorder and don't plead guilty.

c) (sethg reminded me of this): A very important piece of the puzzle is that while capital can move freely between the developed and underdeveloped world, labour cannot. So capital migrates to seek the cheapest labour, thereby reaping greater profits. Remember this next time you hear someone complaining about "immigrants coming here and taking our jobs". Or go google for "investors visa" if you can cope with a sudden attack of rage.
Right now, I'm chewing over the final edits on a rather political book. And I think, as it's a near future setting, I should jot down some axioms about politics ... We're living in an era of increasing automation. And it's trivially clear that the adoption of automation privileges capital over ...

Daniel Estrada

Shared publicly  - 
You can never step in the same river twice

People say this remark is from Heraclitus.  The main idea is that the river keeps changing as the water flows.  The other idea is that you keep changing, too! 

Jorge Luis Borges wrote:

… each time I recall fragment 91 of Heraclitus, "You cannot step into the same river twice," I admire his dialectical skill, for the facility with which we accept the first meaning (“The river is another”) covertly imposes upon us the second meaning (“I am another”) and gives us the illusion of having invented it…

But actually it seems Heraclitus didn't exactly say "you cannot step into the same river twice".

He lived roughly from 535 to 475 BC. Only fragments of his writings remain. Most of what we know about him comes from Diogenes Laertius, a notoriously unreliable biographer who lived 600 years later. 

For example: Diogenes said that Heraclitus became sick, tried to cure himself by smearing himself with cow manure and lying in the sun... and died, covered with poop. 

But Diogenes also said that Pythagoras died while running away from an angry mob when he refused to cross a field of beans, because beans were sacred to the Pythagoreans.  And Diogenes also said Pythagoras had a golden thigh - and was once seen in two places at the same time.

So we don't really know much about Heraclitus.  And among later Greeks he was famous for his obscurity, nicknamed “the riddler” and “the dark one”.

Nonetheless a certain remark of his has always excited people interested in the concepts of sameness and change.

In one of Plato's dialogs the Socrates character says:

Heraclitus is supposed to say that all things are in motion and nothing at rest; he compares them to the stream of a river, and says that you cannot go into the same water twice.

This is often read as saying that all is in flux; nothing stays the same. But a more reliable quote passed down through Cleanthes says:

On those stepping into rivers staying the same other and other waters flow.

That's harder to understand - read it twice!  It seems that while the river stays the same, the water does not.

No matter what the details are, to me Heraclitus was trying to pose the great mystery of time: we can only say an entity changes if it is also the same in some way — because if it were completely different, we could not speak of "an entity" that was changing.

Of course we can mentally separate the aspect that stays the same and the aspect that changes.  But we must also bind these aspects together, if we are to say that "the same thing is changing".

In category theory, we try to swim these deep waters using the concept of isomorphism.  Very roughly, two things are isomorphic if they are "the same in a way".  This lets us have our cake and eat it too: two things can be unequal yet isomorphic.
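The "unequal yet isomorphic" idea can be made concrete in a few lines of code.  This is a sketch of my own, not from the post, and the rivers and drop names are invented: for finite sets, an isomorphism is just a bijection, i.e. a map f that has an inverse g, so that g∘f and f∘g are both identity maps.

```python
# A toy model of "unequal yet isomorphic": two finite sets
# and a bijection between them.  (Illustrative names only.)

river_monday = {"drop1", "drop2", "drop3"}
river_tuesday = {"dropA", "dropB", "dropC"}

# An isomorphism of sets is a map f with an inverse g.
f = {"drop1": "dropA", "drop2": "dropB", "drop3": "dropC"}
g = {b: a for a, b in f.items()}  # the inverse map

assert river_monday != river_tuesday            # the sets are not equal...
assert all(g[f[x]] == x for x in river_monday)  # ...but g∘f is the identity
assert all(f[g[y]] == y for y in river_tuesday) # ...and so is f∘g
```

The two sets fail the test of equality but pass the test of isomorphism - which is exactly the distinction being drawn here.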

So when you step in the river the second time, it's a different but isomorphic river, and a different but isomorphic you. 

And the isomorphism itself?  That's the passage of time.

So, isomorphisms exhibit a subtle interplay between sameness and difference that may begin to do justice to Heraclitus.

None of these thoughts are new.  I'm thinking them again because I'm writing a chapter on "concepts of sameness" for Elaine Landry's book Category Theory for the Working Philosopher.  You can see a list of chapters and their authors here:

Here and in future articles you can watch me write my paper, and help me out.  It'll be more technical - and I hope more precise! - than my remarks here.  But it's supposed to be sort of fun, too.

In Part 2, I talk about the Chinese paradox "when is a white horse not a horse?":

In Part 3, I ask if you've ever used the equation x = x for anything.  And I pose a precise conjecture which claims that this equation is useless.  I would like someone to settle this conjecture!

But if x = x is a useless equation, why do mathematicians think it's fundamental to our concept of equality?
+Daniel Estrada depends on what the referent of 'river' is, no? :)
Map of the places this user has lived
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA
Robot. Made of smaller robots.
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Digital Interface and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.


So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Heidegger, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google.


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.
