Long answer: it's complicated...
I fully support the 'internet of things', but at this stage it's too early to tell what the major risks will be, or what types of attacks to expect. To be honest, I see little purpose in having home appliances track data themselves and have direct access to the internet. It's just asking for abuse.
What I would much rather see are appliances that communicate only with my PC at home, with my PC letting me access my devices however I see fit. One could then set everything up from a single location and have one device connecting to an internet server rather than dozens. I would love a system where my smartphone would 'notice' that I'm heading home, tell my PC, and have my PC turn the thermostat up to a comfortable temperature for me. I don't want my thermostat directly accessing the internet to decide what to do, because I have little control over the security of such a device.
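To make the hub arrangement concrete, here is a toy Python sketch of that model: the appliance has no network access at all, and only the hub reacts to outside events. Every class, method, and event name here is hypothetical, just an illustration of the single-point-of-contact design.

```python
class Thermostat:
    """A 'dumb' appliance: no internet access, just a local setter."""

    def __init__(self):
        self.target_temp_f = 60  # energy-saving default while nobody is home

    def set_target(self, temp_f):
        self.target_temp_f = temp_f


class HomeHub:
    """The single point of contact. Phones and internet servers talk to
    the hub; the hub talks to appliances over the local network only."""

    def __init__(self, thermostat, comfort_temp_f=72):
        self.thermostat = thermostat
        self.comfort_temp_f = comfort_temp_f

    def on_phone_event(self, event):
        # The phone notices the owner is heading home and tells the hub.
        # The hub, not the thermostat, decides what to do about it.
        if event == "heading_home":
            self.thermostat.set_target(self.comfort_temp_f)


hub = HomeHub(Thermostat())
hub.on_phone_event("heading_home")
print(hub.thermostat.target_temp_f)  # 72
```

The security argument is in the shape of the code: the `Thermostat` object exposes nothing to the network, so hardening the system means hardening exactly one component, the hub.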
So, as I suspected, there was a great deal of variability in the description of my politics. I got both "liberal" and "conservative," both "libertarian" and "not libertarian," some comparisons to anarchists, and some things that were mostly just complimentary but not really descriptive. So, that being said, here is where my biases are.
Empirically, I am an extremely partisan Democrat.
In my adult life, I have not voted for a Republican in any statewide or national election. I liked my state representative when I lived in Moscow, and voted for him a couple times. I like Mike Simpson (my present congressman) but have never voted for him, because I disagree with him.
I voted for Obama twice.
My main priority is avoidance of misery.
I take "very bad things" much more seriously than "very good things," and when thinking about government policy, my focus is largely on avoidance of extreme poverty, starvation, wars, major recessions, imprisonment, sexual assault, and other things with a high propensity to cause large numbers of people to be miserable.
I think that there is very little to be learned from misery, and very little to be gained from it; there are some things that are just bad, and are to be avoided at all costs. I think the government ought to do what it can to keep misery from happening, because I evaluate inactions as being essentially morally equivalent with actions.
(Inaction is, however, easier.)
"Everything it can" should start with not doing the things known to cause misery. This comes prior to any concern for punishment, symbolism, or just deserts.
In particular, I am very strongly biased against wars, imprisonment, and punishment generally. I think that -- especially in the United States -- there is a tendency to think about these things in terms of whether people deserve them, or whether we are required to do these things to demonstrate our commitment to a particular position.
I have an almost-deontological bar against considering the sort of thing which causes misery. People are very good at setting priors for success against a single monolithic failure mode, but these are all things with a large number of low-probability failure modes measured against a moderate probability of success. We should not take our capacity to visualize the success of punishment or war as a proxy for the actual probability of success, which is low.
I need to know whether things are working.
It is almost as important to me to know whether things are working as for them to work in the first place. Where good metrics can be put into place to measure the success of particular programs or policies, those metrics should be used. Where we can peg things to a particular qualitative measure of success, we should do that.
"Whether things are working" is very difficult to get a grasp on in a tightly-coupled system with low feedback. We should avoid building those where possible, and build the smallest number of them possible in the event that we're forced to.
Accountability is important. Doing away with the need for accountability is better.
If we are doing something fail-dangerous or fraud-dangerous, like building a standing army or surveilling large numbers of people or building a welfare system, strong accountability measures should be in place to prevent things from going bad. Especially where 'bad' could consist of things like 'undetectable authoritarianism' or 'military coup.'
It is better to build systems which are not dangerous. If we give away money unconditionally, we do not need to determine whether it is gotten fraudulently; the category 'fraud' is a null set. If we do not surveil the population, we do not need to determine whether we have accidentally built an undetectable authoritarian system. If we do not have a standing army, we do not need to worry about a military coup.
It is inevitable that we will have to build things that are fail-dangerous. But we should be watching the total burden of fail-dangerousness across the system, because weak correlations between failures are still correlations, and catastrophic failures tend to propagate even through weak links.
This is the long way around to saying, "I don't trust the government and think that, between simple and complex solutions, or automatic and planned solutions, we should favor the former rather than the latter."
There is no power without complicity.
If you involve yourself in a large organization at any level, you will make, or be complicit in, unjustifiable decisions. Some of those decisions will harm innocents. This is not an argument against participation.
However, the moral response to complicity in unjustifiable decisions is to stand defenseless before the accusation. It is important to be judged by those you have harmed.
Symbolic actions are moral nullities.
If you are considering a policy or program which symbolizes commitment to a particular position, but does not actually advance it, discard the program.
Curtailing waste, fraud, and abuse at a cost greater than the value of the corruption does not demonstrate a commitment to reform. It demonstrates a commitment to appearing like a reformer. Declaring total war against an enemy who cannot harm you does not demonstrate toughness. It demonstrates a commitment to appearing tough.
States and other large organizations are not vehicles for making statements. Statements are vehicles for making statements.
What appears to be arbitrariness is often just determinism.
The first prerequisite to the rule of law is a law capable of ruling. This means, in practice, that legal principles need to cut one way and not the other. The common law system, for instance, is an enormous historical kludge built on a pile of other historical kludges. It encodes a large number of formalist assumptions which are, to a first approximation, wrong. It may be frustrating, then, to hear that I think that, despite all this, the legal system is basically working as intended.
You will often find me defending legal decisions which are wrong from a policy perspective as being right from a legal perspective. This is because I highly value the capacity of the law to rule. Instructions on what the government should do should produce intended results, and if not intended results, at least predictable results.
Otherwise nothing ever changes.
At scale, procedure is substance.
"But," legal readers will object, "procedure is the opposite of substance!"
And of course this is the case.
But as anyone familiar with the law can tell you, bad procedures can change or even reverse the meaning of substantive law. For instance, the NSA surveillance program is presently protected by the inability of a plaintiff to get into court to complain about it. Environmental plaintiffs are barred from pursuing polluters by the bizarre procedural rules in Lujan v. Defenders of Wildlife. Civil rights plaintiffs in employment law are prevented from enforcing their legal rights by a tortured reading of the Federal Arbitration Act.
All of these problems would be solved if we simply resolved to read things one way and not the other. We should do that.
Insofar as we can accommodate the ways in which people would like to live, we ought to accommodate them.
I have preferences. I do not want my preferences to be laws. I do want my preferences to be policies, somewhere. And this applies to everyone. In the event that we can encode people's preferences in compatible institutions, we should encode as many as would be possible. It does not bother me that people would be happy under different conditions than those which would make me happy.
I suspect that my utopia would be annoying or even dangerous for others. My utopia does not demand a lot of room. It does not demand that others live in it. And any utopia which does not demand a lot of room, or the consent of others, should be available if coherent and workable.
It really does not bother me that people disagree with me.
Really, I don't care.
I have a strong bias in favor of my own beliefs, but not a strong meta-bias in favor of my own beliefs. I believe that my beliefs are correct, but not that any particular one of my beliefs is so much more likely to be correct than any other that I am not willing to listen.
Similarly, I find some principles repugnant, particularly those that require large amounts of misery to be imposed in order to arrive at a particular result. This makes me, in general, anti-revolutionary, but also anti-fascist and anti-oligarchic.
But even people who hold most beliefs I find repugnant don't bother me.
I have a suspicion that most people are as likely to betray their worst principles as their best. I have known people who hold beliefs I find repugnant who are, at the drop of a hat, willing to renounce those principles and act like a decent human being. However important it is not to let those people be in charge or make their policies reputable, there is no barrier to being nice to them on every other count.
I believe in Chesterton's Fence. I believe Chesterton's Fence is testable.
If something is one way, and not another, and I cannot determine why, I will assume that it is for a good reason. I will not assume, however, that I cannot determine whether it is or isn't there for a good reason. This is because I think, in general, that society is designed to be legible.
I believe that social phenomena may have functions which are not simply the composite of individual intents. I do not believe in false consciousness.
So when someone tells me, "But I don't believe that anti-contraception policies are for the reason of controlling women's bodies, but for these reasons instead!", I can accept that this is true, but also accept that low-intensity agriculture makes 'control of women's bodies' a state priority, and exclusion of women from the machinery of the state also excludes almost anyone who would object to the motive 'control women's bodies.'
And so even in a world where no individual person believes the particular thing they are alleged to, it is possible that we might have a Chesterton's Fence constructed for reasons different than its function. I do not get around to saying this much, of course, because I do not think that people should be forced to defend a function while disclaiming its intent.
Fundamentally, I believe that things are getting better. I do not believe that our problems will be fixed by the march of technology without an active attempt to fix them.
In theory, technology iteratively improves our lives without bound, and even if everything stopped dead right here, we would be better off than we were, say, in the 1000s. In practice, technology tends to replace all of our old, boring problems with exciting new ones. We are forced to solve those problems before we all die from the next round of dysfunction revealed by solving the last round of problems.
We are running out of oil and even if we're not, we're ruining the environment by burning the oil we have, and we can't stop because we cured most of the diseases (certainly, plagues) and there are too many of us to survive without oil-intensive agriculture. Also the plagues are getting ahead of our cures again, which we should be worried about. Human attention isn't a bottleneck on a bunch of boring administrative tasks, except surveillance is also a boring administrative task and shouldn't we be worried about that? We might be able to fix that by encrypting everything except oh hell we invented quantum computing (maybe?); fortunately, there's a fix but we don't have the mathematics to know whether it really works ...
... and on and on and on. And as things accelerate, we really need to be racing on ahead to fix the problems we caused by solving the last ones. With our noses to the ground, it's hard to even predict anything six months from now, much less five years from now, and because of secrecy and bad data, sometimes we're forced to predict the past.
But -- and this is a but with a lot of caveats -- we have solved most of the problems which came before. This predicts, maybe, that we'll be able to solve most of the problems which come in the future? Or maybe we'll get hit with a meteor and we'll all be dead, or we'll be forced to regress to hunter-gatherers.
That fact does not reduce the urgency with which we need to solve problems.
I will be writing my Congress members over this, and donating to the EFF. This is clearly an enormous step in the wrong direction; it will severely harm the openness of the internet as we know it. It cannot be allowed to stand.
Today the U.S. Court of Appeals sided with carrier Verizon, effectively shutting down net neutrality and opening the door for "preferred access" of the web.
Yeah... this sucks.
Oddly enough, there wasn't a lot of commotion on the topic in the weeks and months leading to this decision. Nothing like the SOPA outcry. Maybe we're just numb.
Here's to hoping the FCC can quickly adapt, or innovation may suffer.
Yeah... with no thought toward how those fictional other companies will compare service-wise... or what will happen when every cable/internet company decides to be a restrictive, greedy asshat because there are no longer any restrictions.
Terminator genes should just be outright illegal, in my opinion, so there's that. As far as allergies go, it isn't difficult to test to be certain that only the genes you want are being expressed, so you're not going to get anything new that you don't know about. After that it's about responsible use of the technology: don't add something that some people might be allergic to. It's kind of a dick move, and highly unlikely to be your only option.
My stance really comes down to this: while there are some issues, they can be dealt with appropriately through regulation. Calling for an outright ban on GMOs would remove any possibility of using the technology in a beneficial and productive manner, and reduces the question to a black-and-white choice between 'ban all' and 'unregulated'. At the very least, we'll need ways to keep the global food supply going while our farmland is being wrecked by the changing climate.