Post is pinned. Post has attachment
Would you like to learn more about mind uploading?

Not surprisingly, Wikipedia has an excellent English introduction.

I am willing to learn.  I am willing to change.

Post has shared content
 
a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
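
To get a rough feel for what those numbers mean per core and per video frame, here is a small back-of-the-envelope Python sketch using only the figures quoted above (the totals are rounded in the abstract, so the per-core values come out approximate):

# Rough per-core and per-frame figures derived from the quoted TrueNorth specs.
# The "1 million" / "256 million" totals are rounded, so results are approximate
# (the real chip uses powers of two: 256 neurons and 65,536 synapses per core).
cores = 4096
neurons_total = 1_000_000          # programmable spiking neurons (rounded)
synapses_total = 256_000_000       # configurable synapses (rounded)
power_w = 0.063                    # 63 mW while processing 400x240 video at 30 fps
frames_per_second = 30

neurons_per_core = neurons_total / cores
synapses_per_core = synapses_total / cores
energy_per_frame_mj = power_w / frames_per_second * 1000

print(f"~{neurons_per_core:.0f} neurons/core, ~{synapses_per_core:.0f} synapses/core")
print(f"~{energy_per_frame_mj:.1f} mJ consumed per video frame")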

Kudos: +Titan ThinkTank 

Post has shared content
Excellent start on this serious ethics problem.
How do we prevent an inadvertent animal Hell?

#minduploading  +Anders Sandberg 
My paper about brain emulation ethics is now officially out, and it is open access. It appears in a special issue of JETAI on risks from artificial general intelligence:
http://www.tandfonline.com/toc/teta20/26/3#.U6vmmPldWa8
http://www.aleph.se/andart/archives/2014/06/ethics_of_brain_emulations.html

Post has attachment
The third step in the Pipeline process of uploading an animal is to teach computers to recognize cellular structures. From that recognition a 3D physiological model can be constructed, and to each such model we can attach trial simulation algorithms. 

Teaching visual recognition to computers is doable but difficult, especially when we ourselves have difficulty recognizing these structures.
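
As a purely illustrative sketch of what "teaching cellular structure recognition to computers" might look like, here is a minimal patch classifier in Python/PyTorch. The class names, patch size, and network layout are hypothetical placeholders chosen for the example, not part of any actual pipeline described here:

# Hypothetical sketch: classify small grayscale tissue-image patches into
# coarse structure classes. All class names, sizes, and layers are assumptions.
import torch
import torch.nn as nn

CLASSES = ["membrane", "nucleus", "mitochondrion", "background"]

class PatchClassifier(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

# Run on a dummy batch of eight 64x64 patches (untrained, so labels are random).
model = PatchClassifier()
patches = torch.randn(8, 1, 64, 64)
labels = model(patches).argmax(dim=1).tolist()
print([CLASSES[i] for i in labels])

A real pipeline would train such a classifier on labeled micrographs and feed the per-patch labels into the 3D physiological model-building step mentioned above.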

To get a feel for the scale of the tissues, cells, cellular organelles, and molecules that need to be detected and interpreted, I recommend the following exploration tool.

http://learn.genetics.utah.edu/content/cells/scale/

From +Kozmik Moore 's stash

Well-written article on "impending immortality",
which somehow sounds more appealing than "impending death",
but that's just me.

Kudos: +Joe Arrigo 

Post has shared content
The Special Issue of the Journal of Artificial General Intelligence on Brain Emulation and Connectomics, a convergence of Neuroscience and Artificial General Intelligence, is now available (and open access) at http://www.degruyter.com/view/j/jagi.2013.4.issue-3/issue-files/jagi.2013.4.issue-3.xml

Papers included are:

Randal Koene & Diana Deca, Editorial: Whole Brain Emulation seeks to Implement a Mind and its General Intelligence through System Identification

Sergio Pissanetzky & Felix Lanzalaco, Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Artificial Intelligence

Felix Lanzalaco & Sergio Pissanetzky, Causal Mathematical Logic as a guiding framework for the prediction of “Intelligence Signals” in brain simulations

Leslie Seymour, Declarative Consciousness for Reconstruction

Daniel Eth, Juan-Carlos Foust & Brandon Whale, The Prospects of Whole Brain Emulation within the next Half-Century

Jeff Alstott, Will We Hit a Wall? Forecasting Bottlenecks to Whole Brain Emulation Development

Kamil Muzyka, The Outline of Personhood Law Regarding Artificial Intelligences and Emulated Human Entities

Peter Eckersley & Anders Sandberg, Is Brain Emulation Dangerous?
The official publication date is December 2013 (papers accepted for publication), and there is a page on carboncopies.org dedicated to the special issue at http://www.carboncopies.org/call-for-papers-jagi-special-issue-on-brain-emulation-and-connectomics-a-convergence-of-neuroscience-and-artificial-general-intelligence

Post has shared content
Shared from LE Community
More good news in the all-important space solar panel industry:
two 32-foot-diameter arrays with a combined output of 40 kW.

Synterra, a contraction of Synthetic-Earth, is a simple term for a simulation environment that must satisfy some very un-simple requirements. 

Briefly, when requested, it must do the following (a rough interface sketch follows the list):
1) Create a relatively realistic 3D simulation of a physical locality, including accurate real-time visual input for simulated cameras, and stimulation of all 5 senses of any human avatars or uploaded occupants.

2) Create a real-time 3D surface-textured solid model of the occupants, both for diagnosing correct physical behaviours and for providing a mirror image that allows the occupants to observe themselves.

3) Create simulated functional MRI and multichannel electroencephalographic output matching any such data taken from the occupants prior to death and uploading. This diagnostic data is absolutely necessary, especially for the first few uploaded animals and humans, both to prove that these test subjects are not uncomfortable and to adjust their personal neurological models enough for them to think correctly, perceive sensory input, and coordinate neuromuscular output well enough to communicate with outside technicians.

4) Create a detailed, color-coded, real-time 3D display of all neurons as they respond to changing blood flow and the buildup of neurotransmitter densities, showing when they fire, where each synaptic signal travels, and the level of parallel synaptic redundancy. This diagnostic imagery should be viewable by both human technicians and the occupants via a simulated display.

5) Be able to physically model and visually display any number of interacting uploaded occupants and/or human avatars in the same locale. It must do this without allowing any exchange of permanent code artifacts between these simulations.
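
To make the scope of those five requirements a bit more concrete, here is a purely hypothetical Python interface sketch; every name and signature below is an assumption invented for illustration, not an existing Synterra API:

# Hypothetical interface sketch for a Synterra-like simulation environment.
# All names and signatures are invented for illustration only.
from abc import ABC, abstractmethod
from typing import Any, Sequence

class SynterraEnvironment(ABC):
    @abstractmethod
    def render_locality(self, locality_id: str, occupants: Sequence[str]) -> Any:
        """Req. 1: realistic real-time 3D scene plus five-sense stimulation."""

    @abstractmethod
    def body_model(self, occupant_id: str) -> Any:
        """Req. 2: surface-textured solid model, also usable as a mirror."""

    @abstractmethod
    def simulated_neuroimaging(self, occupant_id: str, modality: str) -> Any:
        """Req. 3: synthetic fMRI/EEG output comparable to pre-upload recordings."""

    @abstractmethod
    def neural_activity_view(self, occupant_id: str) -> Any:
        """Req. 4: color-coded real-time display of firing, signal paths, redundancy."""

    @abstractmethod
    def shared_session(self, occupant_ids: Sequence[str]) -> Any:
        """Req. 5: multi-occupant interaction with no exchange of code artifacts."""

Each method corresponds to one numbered requirement; a concrete implementation would decide the actual data formats and rendering back ends.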

Post has attachment
Dr. Ken Hayworth's earlier work (2009),
"A Connectome Observatory for nanoscale brain imaging",
gives good coverage of a wide range of topics:
* Sectioning
* Imaging
* 3D Modeling