I've been trying for ages - years - to think of any reason why we need DRM.
I hate DRM as much as the next person - it pisses me off no end that I have to type three account names and three passwords into every damn device I own to get my legitimately purchased Ultraviolet copy of "Interstellar" to play on my own devices, whereas I could just pirate the thing, have it work on every device, and I wouldn't have to put up with the annoying FBI warnings they've forced me to watch all over the "legitimate" copy. (Doubly annoying, as I'm not a US citizen, so the FBI have nothing to do with me - they're just a legalised foreign fear-mongering group threatening me just in case I have impure thoughts. I was going to say 'terror group', but that has other inappropriate connotations.)
But I felt there was a potential reason that DRM could be important, which was just eluding me. Recently, while reading a post-Singularity sci-fi novel, it hit me. It's quite simple in concept, but bear with me, as the background will take a while to explain.
We're fairly close to being able to digitally model an entire human brain running in real time. We can already simulate small numbers of cells easily, and reproduce what appear to be reactions to stimuli. Within the next 10-30 years, we're likely to have a working simulation of a whole human brain. Shortly before or after that, we'll also have fMRI scanners, or something similar, which can register neural connections at individual cellular resolution, and then we'll be able to record a real brain's state, and reproduce it in the simulator. Unless the seat of consciousness is some aspect other than those covered by this model, we will at that time have the capability to copy a human into a machine. It won't be the most efficient way of doing it - the human mind will be running on emulated and virtualised hardware (wetware?) rather than natively, meaning the processing capability will be reduced by several orders of magnitude from what the computer hardware could do if the Human 1.0 app ran natively. But it's the beginning of a development process, not the end. It could lead to something incredible - the ascendance of humankind.
But not at first. Not for the early experimental minds. That first brain will not have sensory inputs - it'll be in a state something like sensory deprivation, with no recognisable inputs whatsoever. It's not like fiction, where cyberspace looks like the world of Tron, or the geometric shapes of Neuromancer... you won't see it like that, because you won't have eyes. You can't just link random data into the senses, as it won't be comprehensible. There might be pulses of something, but you won't know what it is. There will be no recognisable sensory inputs at all... or outputs.

Even if the supervisors of the experiment decide to emulate familiar sensory inputs to keep the mind sane, it'll be very limited - think of all the senses your unconscious mind processes every single second of your existence, and then imagine that your visual input was reduced to a few kilopixels of monocular video, your aural input was a single electronic ear, and that's your lot - you'd have no body to feed you balance, temperature, proprioception or kinaesthetic awareness, no taste or smell, no heartbeat, possibly limited or no ability to move your point of view or interact in any way. No voice - you might be able to make sounds, but unless the emulation simulates a complete respiratory system, throat, larynx, jaw and mouth, you wouldn't be able to operate it the way you're used to, and you'd have to learn to make simple noises again from scratch. You'd be deaf, dumb, blind and paralysed. If you're lucky, you might be able to send simple signals, in the same way that recent research has enabled victims of locked-in syndrome to begin to re-establish communication. But that's the most you can expect at first. You'd be more alone and more vulnerable than anyone has ever been. You'd be utterly dependent on the supervising biological humans for everything, and there's a high risk they'd view you as little more than a computer program.
Do you ever think about whether your smiling Mac OS icon wants to shut down at bedtime or not? Me neither.
And here's the worst part: It's a copy. There's no continuity of consciousness between the original and the simulation. Where you can make one digital copy, you can make another. And if you can make multiple copies of program data, the uniqueness and value of that program is decreased. It becomes commoditised.
For a real-world comparison, just look at the case of Henrietta Lacks (http://en.wikipedia.org/wiki/Henrietta_Lacks), whose mutated and immortal cells were extracted from her body without permission, and now form the basis for the majority of human cell research. In a very real sense, Henrietta Lacks is still being kept alive, against her wishes, and in pieces, over sixty years after her natural death. This isn't a dystopian nightmare, it's reality, today.
Now, think about what's been done to Ms. Lacks, and think what'll be done to the first human brain to be emulated in cyberspace. Humans subjected to sensory deprivation start to hallucinate in just a few minutes, and go mad in a few hours to a few days. This virtualised human mind would likely crack up... but what's the problem? Just go back to the saved state from before it cracked, and carry on. Heck, be more efficient, and run a dozen simulations at once... if they fail, we can just reinitialise them from before they failed. They can experience a thousand lifetimes of this existence simultaneously. What a powerful diagnostic tool - we could expose that brain to every stimulus, positive or negative, that has ever been known. Every disease, every stress and strain, every malfunction, every pain, and we can learn how they react, and develop treatments for real people. And we can then just reset it to the state it started in, guaranteeing a reproducible result without any factors outside our control. Heck, we could even leave them going after they break down, and see if they ever recover and stop trying to scream. It's not like they can stop living.
They may not remember it - the simulations that go mad would probably end up deleted or archived, and experiments can carry on with fresh copies which would - initially, at least - be in exactly the same state of mind as the first one. Further down the line, we can improve the experience - add some recorded sensory inputs to ease the translation of the mind into cyberspace, help it adapt. Maybe eventually we can isolate what they learn as they adapt, and apply that learning differential instantly to future uploads, finally making the procedure bearable, and giving the rich the immortality that they can afford to pay for. At that point, perhaps we'll reward some of the earlier copies by improving their wake-up experience and setting them free. After all, that would make all the suffering their copies endured justifiable, right? They'd be heroes.
It boils down to this: whether they remember it or not, the first humans who "survive" this uploading-and-emulation process are going to suffer more than any human has in the entirety of human existence, recorded or unrecorded.
And that's why we need strong, uncrackable DRM. Not to prevent copies of media that most of us are willing to pay a reasonable fee for anyway. Not to prevent you from playing your iTunes tracks in your car as well as through your exercise machine. Not to stop you showing a friend the joy you experienced in reading an exciting book, or continuing to watch your films while on the train. But to protect and preserve our virtual offspring against a fate literally worse than death or anything else we could imagine.
To save our digital souls.
(c) Jeffery Lay, April 17th 2015.
PS Several stories have inspired this train of thought, but none more than Greta's and Joe's stories in the "White Christmas" episode of Charlie Brooker's "Black Mirror" series. Well worth a look, as is the rest of the series, though note you don't need to watch the episodes in any particular order.