Profile

Piotr Kalinowski
104 followers | 30,843 views

Stream

Piotr Kalinowski

Shared publicly
“The first step in the acquisition of wisdom is silence, the second listening, the third memory, the fourth practice, the fifth teaching others.” — Solomon ibn Gabirol

Piotr Kalinowski

Shared publicly
So true.

Piotr Kalinowski

Shared publicly
The Wrong Kind of Paranoia. Have you ever considered how many programming language features exist only to prevent developers from doing something? And it's not just about keeping other people out of your code. Often the person you're guarding against is yourself.
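To make the point concrete, here is a small sketch (my own illustration, not from the post) using Python's frozen dataclasses, one feature whose whole job is to stop a developer, often your future self, from doing something:

```python
from dataclasses import dataclass, FrozenInstanceError

# A frozen dataclass exists purely to block mutation of state that was
# meant to stay fixed -- even when the person attempting the mutation
# is the same person who wrote the class.
@dataclass(frozen=True)
class Config:
    retries: int
    timeout_s: float

cfg = Config(retries=3, timeout_s=1.5)

try:
    cfg.retries = 99  # the language protecting you from yourself
except FrozenInstanceError:
    print("mutation blocked")
```

The assignment fails even though you wrote both the class and the assignment: the restriction guards against nobody but yourself.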

Piotr Kalinowski

Shared publicly
Sad day.

Piotr Kalinowski

Shared publicly
Oh, the drama, the controversy…

On a more serious note, I like the idea of a philosophical zombie: acts like you, looks like you, but is not self-conscious. But you cannot tell, because consciousness is about inner experience.

So if smartphones were really conscious, how would we know? How would we know they aren't?

Now please excuse me, I need to connect my phone to the charger. Poor thing is hungry.

Piotr Kalinowski

Shared publicly
XKCD never disappoints ;-)

Piotr Kalinowski

Shared publicly
True story.

Piotr Kalinowski

Shared publicly
I greatly enjoyed the witty style of this article. Well worth a read for that reason alone.
Four Days of Go. By Evan Miller. April 21, 2015. Part of my work involves the mild reverse-engineering of binary file formats. I say “mild” because usually other people do all of the actual work; I just have to figure out what an extra flag field or two means, and I then take as much credit as ...

Piotr Kalinowski

Shared publicly
So, apparently I wear glasses because I stayed indoors too much, reading. Oh well. I have no regrets; otherwise I might never have realised how much more distinguished I look in glasses ;-)
Short-sightedness is reaching epidemic proportions. Some scientists think they have found a reason why.
2 comments
If you read the article, you'll notice that the ideal solution is to read outside ;-)

Piotr Kalinowski

Shared publicly
Some food for thought. I now have 3 more books on my reading list, sigh…
Biology and culture meet morality and wistful inference as we strive to define and refine our better angels.

Piotr Kalinowski

Shared publicly
That could work. Except for the volcano. That would make me feel uneasy.

Piotr Kalinowski

Shared publicly
Here we go again. There were a few headlines about how Hawking expressed the view that artificial intelligence could spell the end of the human race, we are starting to read a book entitled “Superintelligence” in the office book club, and people just love to speculate, don't they? And so the lunch discussion was all over the place.

Why are people so afraid of artificial intelligence? Is it just because they have no idea what they are talking about beyond some science-fiction books and movies, or is it a subconscious fear that cold logic must lead to the extermination of humanity? After all, time and time again we hear how our intuitive grasp of mathematics and related subjects is flawed, if not outright wrong.

I guess they just don't realise what logic can and cannot do. Extermination of humanity may be the most efficient route to certain goals of a hypothetical artificial intelligence, but those “goals” would correspond to the axioms of a formal system: you cannot prove them with logic, and they are not derived using logic. They are just there, like our self-preservation instinct. If we created an artificial intelligence (or it just emerged?), what goals would it have?

I think the problem is the term “artificial intelligence” itself. I bet the average Joe thinks of it as a fully sentient, self-aware artificial being, only way more efficient at everything than we are, which does not seem particularly likely to happen any time soon. And if it did happen, probably in the US, it would be perfectly consistent with the rhetoric of their society for it to get all the best things at the expense of all the puny humans who surely just did not work hard enough. It's a competition. You lost. Deal with it. Of course you are not getting medical care any more, and your kids will not go to college; what were you thinking?

I do hear people talking about how the progress of technology is exponential, and drawing all sorts of conclusions from that, but I do not understand why they insist it will stay that way. Did we, or did we not, have the so-called Dark Ages? More importantly, why do we insist that the problem space is unbounded? Surely there are limits to what we can understand and do, not just because of the biological limitations of the human brain, but simply because the domain of our exploration is not unbounded. Should progress not slow down as we approach those limits, just as computers no longer get so much faster every year as they used to?

Self-driving cars, expert medical systems, automated chess players: these are just the usual tools we create. They become more and more powerful, and they may very well backfire one day, to the point of accelerating the collapse of our civilisation. If the danger is not immediately obvious in these examples, just think of genetic engineering: a useful tool that may hold answers to a variety of problems, and I'm not just referring to any food crisis. But with all the advancements of this technology, one day it may become just easy enough to manufacture and release a virus that will kill us all.

This does not mean that progress has been, or should be, stopped. It's just that the real danger lies not in some abstract self-aware artificial intelligence that decides to exterminate humanity, with our involvement limited to having created it. Nor does it lie in ever-advancing technology. The human factor is the real threat, because we are the greatest enemy of our own survival. We have all the capacity required to exterminate ourselves. We do not need Skynet to do it for us.

And if (or when?) we do, humanity will just be yet another failed experiment of evolution. Nope, that didn't work out. Let's try something else. Maybe with the dolphins?
People
Have him in circles
104 people
affan khan
Cindy Gace
Jasper Webster
amanda coniel
Daniel Kwok
Elżbieta Bednarek
indah putri nurfaini
Pero Radoš
Julia Gryszczuk-Wicijowska
Work
Occupation
Software Developer
Links
Contributor to
Story
Tagline
developing software one breath at a time
Basic Information
Gender
Male