1. Set a breakpoint, attach the debugger, wait for the breakpoint to be hit
2. Select a variable, right click -> set value (or hit F2)
3. Resume Program
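As a minimal, hypothetical illustration (the class and values below are made up for this example, not taken from the original post), here is the kind of spot where that workflow pays off: set a breakpoint on the if line, use Set Value (F2) to change retries, and resume to exercise the failure branch without editing or restarting the program.

public class RetryDemo {
    public static void main(String[] args) {
        int retries = 0;
        int maxRetries = 3;

        // Set a breakpoint on the next line, attach the debugger, and use
        // Set Value (F2) to change retries to 3 before resuming, so the
        // "giving up" branch runs without touching the source code.
        if (retries >= maxRetries) {
            System.out.println("Giving up after " + retries + " retries");
        } else {
            System.out.println("Retrying (attempt " + (retries + 1) + ")");
        }
    }
}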
"The only reason coders' computers work better than non-coders' computers is coders know computers are schizophrenic little children with auto-immune diseases and we don't beat them when they're bad"
I still don't understand how people break computers so badly or so quickly!
Every programmer starts out writing some perfect little snowflake like this. Then they're told on Friday they need to have six hundred snowflakes written by Tuesday, so they cheat a bit here and there and maybe copy a few snowflakes and try to stick them together or they have to ask a coworker to work on one who melts it and then all the programmers' snowflakes get dumped together in some inscrutable shape and somebody leans a Picasso on it because nobody wants to see the cat urine soaking into all your broken snowflakes melting in the light of day. Next week, everybody shovels more snow on it to keep the Picasso from falling over.
So today my wife received the following message from a close family friend. I have never really been a believer in GoFundMe, because I could always see how it could be used to hustle people out of their money with a scam. In this case, however, things seem to be different: our friend reached out to the person running the campaign and verified the situation before sending this text. I have a social platform with folks who find me interesting, so why not use that reach to get the word out and help someone else.
In short: A beautiful dog is sick and the owner needs help for the surgery. You can read it all here and donate if you have the means: http://www.gofundme.com/surgeryfortrigger
What bugs me about this is that we make these kinds of decisions all the time. There are plenty of concrete, real-world cases that actually happen: do you swerve into a tree rather than hit a pedestrian? (That's greatly increasing the risk to your life -- and your passengers' -- to save another person)
I think that part of the reason we're so nervous about computerizing these ethical decisions is not so much that they're hard, as that doing so would require us to be very explicit about how we want these decisions made -- and people tend to talk around making that decision explicit, because being explicit tends to reveal that their actual preferences aren't the same as the ones they want their neighbors to think they have.
For example: I suspect that most people, if driving alone in a vehicle, will go to fairly significant lengths to avoid hitting a pedestrian, including putting themselves at risk by hitting a tree or running into a ditch. I suspect that if the pedestrian is pushing a stroller with a baby, they'll feel even more strongly this way. But as soon as you have passengers in the car, things change: what if it's your spouse? Your children? What if you don't particularly like your spouse?
Or we can phrase it the way the headline below does: "Will your self-driving car be programmed to kill you if it means saving more strangers?" This phrasing is deliberately chosen to trigger revulsion, and if I instead phrase it the way I did above -- in terms of running into a tree to avoid a pedestrian -- your answer might be different. The phrasing in the headline, on the other hand, seems to tap into a fear of loss of autonomy, which I often hear around other parts of discussions of the future of cars. Here's a place where a decision which you normally make -- based on secret factors which only you, in your heart, know, and which nobody else will ever know for sure -- is instead going to be made by someone else, and not necessarily to your advantage. We all suspect that it would sometimes make that decision quite differently than we would have made it ourselves, in secret, with the plausible deniability that comes from how hard it is to control a car during an emergency.
Oddly, people's reactions seem different if you think about such decisions being made by a human taxi driver, even though there's the same loss of autonomy -- and now, instead of a rule you can understand, you're subject to the driver's secret decisions.
I suspect that the truth is this:
Most people would go to more lengths than they expect to save a life that they in some way cared about.
Most people would go to more lengths than they are willing to admit to save their own life: their actual balance, in the clinch, between protecting themselves and protecting others isn't the one they say it is. And most people secretly suspect that this is true, which is why the notion of the car "being programmed to kill you" in order to save other people's lives -- taking away that last chance to change your mind -- is frightening.
Most people's calculus about the lives in question is actually fairly complex, and may vary from day to day. But people's immediate conscious thoughts -- who they're happy with, who they're mad at -- may not accurately reflect what they would end up doing.
And so what's frightening about this isn't that the decision would be made by a third party, but that even if we ourselves individually made the decision, setting the knobs and dials of our car's Ethics-O-Meter every morning, we would be forcing ourselves to explicitly state what we really wanted to happen, and commit ourselves, staking our own lives and those of others on it. The opportunity to have a private calculus of life and death would go away.
As a side note, for cars this is actually less relevant, because there are very few cases in which you would have to choose between hitting a pedestrian and crashing into a tree that didn't come from driver inattention or other unsafe driving behaviors leading to loss of vehicle control -- precisely the sorts of things which self-driving cars don't have. So these mortal cases would be vanishingly rare compared to how often they arise in our daily lives, which is precisely where the advantage of self-driving cars comes from.
For robotic weapons such as armed drones, of course, these questions happen all the time. But in that case, we have a simple ethical answer as well: if you program a drone to kill everyone matching a certain pattern in a certain area, and it does so, then the moral fault lies with the person who launched it; the device may be more complex (and trigger our subconscious identification of it as being a "sort-of animate entity," as our minds tend to do), but ultimately it's no more a moral or ethical decision agent than a spear that we've thrown at someone, once it's left our hand and is on its mortal flight.
With the cars, the choice of the programming of ethics is the point at which these decisions are made. This programming may be erroneous, or it may fail in circumstances beyond those which were originally foreseen (and what planning for life and death doesn't?), but ultimately, ethical programming is just like any other kind of programming: you tell it you want X, and it will deliver X for you. If X was not what you really wanted, that's because you were dishonest with the computer.
The real challenge is this: if we agree on a standard ethical programming for cars, we have to agree on it and deal with the fact that we don't all want the same thing. If we each program our own car's ethical bounds, then we each bear that individual responsibility. In either case, these cars confront us with the practical requirement to be completely explicit and precise about what we do, and don't, want to happen when faced with a real-life trolley problem.
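To make that concrete, here is a purely hypothetical sketch of what "programming your own car's ethical bounds" might look like if it were ever exposed as plain configuration. None of these names or numbers come from any real vehicle or API; the point is only that every fuzzy preference in the discussion above has to become an explicit value that someone commits to in advance.

public class EthicsConfig {
    // Hypothetical "Ethics-O-Meter" settings; no real car exposes anything like this.

    // Relative weight given to the car's occupants versus people outside it.
    // 1.0 means everyone counts equally; values above 1.0 favor the occupants.
    double occupantWeight = 1.0;

    // The maximum extra risk to the occupants the car may accept in order to
    // avoid striking a pedestrian (e.g. by swerving into a tree or a ditch).
    double maxSelfRiskToAvoidPedestrian = 0.8;

    // Whether detecting a child in the car changes the weighting above.
    boolean prioritizeChildOccupants = true;
}

Writing down even a toy file like this is exactly the uncomfortable step described above: the private calculus becomes a number that can be read back to you.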
I'll be getting this to go with my thermostat
- King of Thieves
- Fallout Shelter
- franco.Kernel updater
- Easter Egg Hunt
- Angry Birds Under Pigstruction
Review: 'House of Wolves' Turned Me From A Lapsed Player Into A 'Destiny...
I wanted to like Destiny, back in September of 2014. I like a space opera no matter how you slice it, and Bungie has a way with them. ...
Android System WebView - Android Apps on Google Play
Android WebView is a system component powered by Chrome that allows Android apps to display web content. This component is pre-installed on your device ...
Google Connectivity Services - Android Apps on Google Play
Google Connectivity Services helps Android handle network connections. Keep it updated to ensure your device has the latest networking capabilities.
SuperBeam | WiFi Direct Share – Android Apps on Google Play
Powered by LiveQoS. SuperBeam is the easiest and fastest way to share large files between Android devices using WiFi direct. Devices can be paired ...
What If?: Serious Scientific Answers to Absurd Hypothetical Questions
From the creator of the wildly popular webcomic xkcd, hilarious and informative answers to important questions you probably never thought to ask.