Brien Malone
Application Developer

Brien's posts

Post has attachment
ES6 Arrow Notation 'this' Scope Bug with JSFiddle
I just ran into an interesting bug. I've been ramping up on ReactJS and as part of that, one of our senior devs recommended that I get a handle on ECMAScript 6. He pointed me to a nice ECMAScript 6 course on Pluralsight that I really enjoyed. One of the exe...
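The excerpt cuts off, but the classic pitfall behind arrow notation and 'this' can be sketched like this (my own minimal example, not the code from the post): a regular `function` callback gets its own `this`, while an ES6 arrow function captures `this` lexically from the enclosing method.

```javascript
// With `function`, the callback's `this` is NOT the counter object
// (it is `undefined` in strict mode, or the global object otherwise),
// so the increment silently goes to the wrong place.
function makeCounterWithFunction() {
  return {
    count: 0,
    addAll(values) {
      values.forEach(function (v) {
        this.count += v; // bug: wrong `this`
      });
    },
  };
}

// An ES6 arrow function has no `this` of its own; it captures the
// method's `this`, so the increment lands on the counter object.
function makeCounterWithArrow() {
  return {
    count: 0,
    addAll(values) {
      values.forEach((v) => {
        this.count += v; // works: `this` is the counter object
      });
    },
  };
}
```

Calling `makeCounterWithArrow().addAll([1, 2, 3])` leaves `count` at 6; the `function` version either throws (strict mode) or quietly writes to the global object.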

My Facebook news feed is cyclical. The same stories recirculate over and over... today is the first day that I am actually grateful for this.

A frequent rhetorical question on science blogs is: what is consciousness? This is usually framed within the context of an article about self-consciousness as some insurmountable wall for artificial intelligence research. The problem, of course, is not only that consciousness is hard to achieve, it is that we lack a common definition.

Today a reasonably concise definition of consciousness popped into my head thanks to an article from Curiosity. This was my comment:

It is easier to define consciousness if you take a step back from the individual. We are a communal species. Communication allowed us to work together. Speech and speech processing are reciprocal systems, allowing us to communicate internally - essentially making the self a community of one that acts as two (speaker/listener). Speaking to yourself and knowing that you are the source is the simplest form of self-awareness. Self-awareness + comparison + prediction + action = consciousness.
I really like the solid, simple feeling around that definition of self-awareness. (Self-awareness is another nebulous term that gets batted around by philosophers and scientists alike.)

The most exciting part about that definition was the idea that our multifunctional brain processes and packages thoughts for consumption by others, and that perhaps that system operates independently from the system that listens to and unpacks inbound communication. Perhaps we learn things from talking to ourselves because it involves different parts of the brain. I tried to get a sense of what the outbound communication thought might be and how it might differ from inbound thought.

Inbound communication is immediately sent into the associative engines. This step doesn't seem necessary for outbound communication... unless you include the reciprocal activity used to compose thought that is packaged for communication.

There is so much potential here! I really need to think this through more thoroughly. One question I desperately want to answer is: why doesn't the reciprocal self-talk result in an implicit loop? What is the beginning and end of this activity?

Post has attachment
The Neurologist Who Hacked His Brain—And Almost Lost His Mind - WIRED

Post has attachment
(A tl;dr post for philosophizers)
The ambiguity of the question "Do we have free will?" makes the linked survey pointless, but it is a great talking point.

I think looking for consensus among the interpretations of free will would be far more interesting.

On the surface, free will is the ability to make decisions free of coercion and compulsion, but coercion and compulsion by what? Other people? Biology? Prior experience? Environment?

Some think they possess free will because when faced with a situation similar to one they experienced in the past, they can make different choices. Some think they lack free will because when faced with the decision to drink or not drink alcohol, they are compelled to drink even though they know the outcome is self-destructive.

Some think the free will discussion has to start beyond repeated experiences.

For a given decision, if you could rewind time, effectively resetting the entire universe, would it even be possible to make a different decision? (I don't think it would.) If not, does that imply a lack of free will?

A popular hypothesis says that individual universes exist in which every decision has been made. If that were true, it might mean that a different outcome is possible in a time-rewind. We would just follow a different path like a ball on a pachinko board. (I'm familiar with the observed quantum behavior behind this idea, but I'm not sold on this interpretation of a multiverse.)

You want to kill someone who infuriates you. Do you have free will because you choose not to act on that desire? Do you lack free will because you choose not to act on that desire because of legal consequences or a moral imperative? Do you have free will because you could choose to ignore the consequences and kill anyway?

If you walk outside without a jacket and it is -30°, your imminent death will compel you back inside. Is that the environment robbing you of free will under penalty of death? Do you have free will if you choose to die?

What if the agent were a despotic regime? A parent? Are these things compelling you to act, or are they simply factors in making the decision? Is there a difference?

A more important question is: why care? What is the point of having the answer to the free-will question?

Those who define free will by repeat-experience decision-making could use the no-free-will viewpoint as a reason to absolve criminal behavior or justify floating through life without making willful decisions. This is the domain of predestination (i.e., "My decisions don't matter.") and is counterproductive, in my opinion. On the flip side, belief in free will would be empowering for this group.

Those who define free will by the time-rewind scenario are in a completely different arena. The no-free-will camp simply sees the biological inevitability of the decision-making process. There is no absolution of decisions, just the recognition that we are the sum of our experiences and the product of what, how much, and how often we eat and sleep, our individual biology, and the messages we feed our brains. The free-will camp is similar, but takes a leap into more hypothetical territory.

These two definitions of free will are not mutually exclusive. Both should be discussed. Finding a way to separate the two would be tremendously helpful. (PhD thesis, anyone?)

Post has attachment
Neil deGrasse Tyson narrates: A Brief History of Everything in 8 minutes:
https://youtu.be/7KYTJ8tBoZ8

Suzanne Vega's song "Tom's Diner" is a '90s staple, notable as the song the creator of the ubiquitous MP3 format used to fine-tune his compression algorithm.
http://youtu.be/kXg5pOF2PvY

Sliding down the rabbit hole, I found this haunting echo of the song, made up of the sounds lost by the compression algorithm. The video is, fittingly, made from the artifacts of the video compression algorithm. The result is more artistic than musical.

https://vimeo.com/120153502
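The "sounds lost by the compression algorithm" idea has a simple shape: a lossy codec keeps an approximation of the signal, and the residual (original minus reconstruction) is everything it threw away. Here is a toy sketch of my own, with a coarse quantizer standing in for the codec; real MP3 encoding (psychoacoustic masking, MDCT) is far more sophisticated.

```javascript
// Toy lossy "codec": snap each sample to the nearest multiple of `step`.
function quantize(samples, step) {
  return samples.map((s) => Math.round(s / step) * step);
}

// The residual is the part of the signal the codec discarded --
// the raw material of the "ghost" version of the song.
function residual(samples, step) {
  const q = quantize(samples, step);
  return samples.map((s, i) => s - q[i]);
}
```

By construction, the quantized signal plus the residual reconstructs the original exactly; the artistry is in listening to the residual alone.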


Post has shared content
What I find so interesting about this is that the learning mechanism is simple. It isn't making intuitive leaps or educated guesses; it is trying random things and adopting actions that improve fitness. I'm not belittling the accomplishment; I'm marveling that such a simple mechanism can 'learn'.
Self-learning #ArtificialIntelligence completes one level of Super Mario in 34 attempts, as various news outlets reported in June 2015. Here is the video that inspired the reports: https://www.youtube.com/watch?v=qv6UVOQ0F44

The Next Web wrote (14 June 2015): “MarI/O is a neural network that appears to be learning how to play Super Mario World by trial and error — just like you or I would. After playing the game for a bit, MarI/O learns which enemies do what (and when), then seems to decide on the best method for bypassing that enemy — just like you or I would.” http://thenextweb.com/insider/2015/06/14/watch-this-learning-neural-network-annihilate-super-mario-world-with-ease/

Engadget wrote (17 June 2015): “Unlike other AI programs, MarI/O wasn't taught anything before jumping into the game -- it didn't even know that the end of the level was to its right -- instead, some simple parameters were set. The AI has a "fitness" level, which increases the further right the character reaches, and decreases when moving left. The AI knows that fitness is good, and so, once it figures out that moving right increases that stat, it's incentivized to continue doing so.” http://www.engadget.com/2015/06/17/super-mario-world-self-learning-ai/

Mic.com wrote (15 June 2015): “On Saturday, programmer SethBling introduced the world to MarI/O, a Machine Learning program he created to play video games. There was one important tweak: Instead of being programmed to run the course perfectly, MarI/O had to learn how to play from scratch.” http://mic.com/articles/120657/this-computer-learned-super-mario-from-scratch-and-now-it-can-kick-your-ass

Vice Motherboard wrote (15 June 2015): “The program also recognizes when fitness tapers off (when Mario dies), and adds a mutation by making Mario jump or do something different on the next level generation. Essentially, it’s a machine version of evolution, making micro- or macro-adjustments as it needs to get through.” http://motherboard.vice.com/read/this-ai-used-neuroevolution-to-teach-itself-how-to-play-super-mario-world

See also: http://www.washingtonpost.com/blogs/innovations/wp/2015/06/15/thankfully-mario-can-demystify-this-incredibly-important-fieldmachine-learning/

#MachineLearning
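The "try random things, keep what improves fitness" loop can be sketched in a few lines. This is a minimal (1+1)-style evolutionary algorithm on a toy problem of my own choosing, not MarI/O's actual NEAT implementation (which evolves whole network topologies):

```javascript
// Evolve a bit string toward higher fitness by blind mutation + selection.
// No planning, no intuition: mutate at random, keep the mutant only if it
// scores at least as well as the current best.
function evolve(fitness, length, generations, rng = Math.random) {
  let best = Array.from({ length }, () => (rng() < 0.5 ? 1 : 0));
  for (let g = 0; g < generations; g++) {
    // Flip each bit independently with probability 1/length.
    const mutant = best.map((bit) => (rng() < 1 / length ? 1 - bit : bit));
    if (fitness(mutant) >= fitness(best)) best = mutant; // adopt improvements
  }
  return best;
}

// Toy stand-in for "distance travelled to the right": count the 1-bits.
const onesCount = (genome) => genome.reduce((sum, bit) => sum + bit, 0);
```

With a fitness like `onesCount`, a few thousand generations are typically enough to reach the all-ones optimum — the same flavor of result as Mario inching further right each generation, with nothing but random variation and a scorekeeper.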

Post has attachment
+Larry Blumen and I had a discussion about how the brain represents numbers. Michael from the YouTube channel Vsauce takes an interesting wander down that road... He mentions something intuitive that I never considered: logarithms.
(The whole talk is interesting, but jump to the 5:00 mark to cut to the chase.)
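One way to make the logarithm idea concrete (my own illustration, not from the video): on a logarithmic mental number line, the "midpoint" between two numbers is their geometric mean rather than their arithmetic mean, which matches the classic finding that young children place 3, not 5, halfway between 1 and 9.

```javascript
// Midpoint on a linear number line: the arithmetic mean.
const linearMidpoint = (a, b) => (a + b) / 2;

// Midpoint on a logarithmic number line: average the logarithms,
// then exponentiate back -- which is exactly the geometric mean.
const logMidpoint = (a, b) => Math.exp((Math.log(a) + Math.log(b)) / 2);
```

`linearMidpoint(1, 9)` is 5, while `logMidpoint(1, 9)` is 3.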