The invention of the digital computer owed a great deal to attempts to scientifically reconstruct a working human brain. This article is about two brilliant minds, Warren McCulloch and Walter Pitts, who later worked at MIT and who were irritated by Sigmund Freud's profoundly unscientific models of the mind. They set out to describe all of thought using a simple, logical calculus that modeled neurons and the way neurons connect to their neighbors via synapses in vast networks in the brain.
On the assumption that a neuron could either fire a signal or not, they modeled their networks on the old, well-established logical calculus of George Boole, which dealt only in absolute, yes-or-no answers and only three functions: conjunction (AND), disjunction (OR), and inversion (NOT). From these fundamental building blocks, they hoped the beauty and complexity of a thinking mind would emerge. With their new model, they discovered that a neuron connecting to itself was in fact not a paradox, but a signal that was invariant over time, or a "memory."
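To make the idea concrete, here is a minimal sketch of such a threshold neuron in Python. The weights and thresholds are illustrative choices of mine, not values from McCulloch and Pitts's paper; any assignment that reproduces the truth tables would do.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron.
# The weights and thresholds are illustrative choices, not values
# from the 1943 paper; anything that reproduces the truth tables works.

def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):   # conjunction: fires only when both inputs fire
    return neuron([a, b], [1, 1], threshold=2)

def OR(a, b):    # disjunction: fires when at least one input fires
    return neuron([a, b], [1, 1], threshold=1)

def NOT(a):      # inversion: a single inhibitory input suppresses firing
    return neuron([a], [-1], threshold=0)

# The "memory" observation: give a neuron an excitatory connection to
# itself and, once triggered, it keeps firing on every following tick.
def self_loop(trigger, prev_state):
    return neuron([trigger, prev_state], [1, 1], threshold=1)

state = 0
for t, trigger in enumerate([0, 1, 0, 0, 0]):
    state = self_loop(trigger, state)
    print(t, state)  # prints 0, then 1 forever after the pulse at t=1
```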
When work began on the EDVAC, the successor to the famous ENIAC, von Neumann drew on the work of McCulloch and Pitts to describe the circuitry of his new machine, with vacuum tubes playing the part of neurons and their self-connected loops serving as memory. Search Wikipedia for "flip-flop": we still use this electronic circuit in modern computers today. Vacuum tubes would later be replaced with transistors, and out of these innovations came the first modern computers.
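The flip-flop is this same feedback trick in electronic form. Below is a sketch, again in Python, of the textbook SR latch built from two cross-coupled NOR gates; this is the classic illustration of the principle, not von Neumann's actual vacuum-tube circuit.

```python
# A sketch of the feedback principle behind the flip-flop: two
# cross-coupled NOR gates form an SR latch, a one-bit memory whose
# output depends on its own previous state. This is the standard
# textbook latch, not the EDVAC's actual circuitry.

def NOR(a, b):
    return 1 if (a == 0 and b == 0) else 0

def latch(set_, reset, q=0, q_bar=1):
    """Iterate the cross-coupled pair until the feedback loop settles."""
    for _ in range(4):  # a few passes are enough to reach a stable state
        q, q_bar = NOR(reset, q_bar), NOR(set_, q)
    return q

q = latch(set_=1, reset=0)                     # set the bit -> 1
q = latch(set_=0, reset=0, q=q, q_bar=1 - q)   # inputs idle: the bit holds -> 1
q = latch(set_=0, reset=1, q=q, q_bar=1 - q)   # reset the bit -> 0
print(q)
```

The interesting case is the hold: with both inputs at zero, the output is determined entirely by the circuit's own prior output, which is exactly McCulloch and Pitts's signal that is invariant over time.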
What strikes me is the revelation that the modern digital computer was deliberately designed to imitate the human brain. I had always assumed computers were so seemingly brain-like for a different reason: the several thousand people involved in inventing the world's first digital computers could only think and reason about the complex systems they were building through analogies to things they already understood, like the human brain; art imitating reality, as it were. I had also assumed that, to some degree, the evolution of ideas about computers would naturally select designs that were more brain-like, because the human brain is simply the most well-adapted design we know of. But as it turns out, the brain-like features of a computer were far more deliberately designed than I had thought.
It is also a frustrating example of how difficult it is to convince people that artificial intelligence is possible. You can develop the mathematical foundations for building a thinking machine (a computer), and then actually build the computer, but once it becomes well-established technology, nobody congratulates you for building the first-ever artificial intelligence. Instead they say, "Well, it still isn't anything like a human, therefore it isn't intelligent." Computers and software are rapidly becoming more human-like in their abilities, and still no one is willing to call them "intelligent." I have even seen people go so far as to define "intelligence" in a way that precludes anything not strictly similar to the human brain. There is just no pleasing some people.