A few rules for this community:
- Let's address each other by our first names, please call me Olivier.
- The primary language is English, but feel free to have discussions in your native language; we know that debate flows more easily among people using their native language. Just be aware that the organizing team only understands English and French, but we can use the Translate button.
- In French, let's use the informal "tu" (tutoiement).
- Of course, politeness is always a requirement.

When I run the Existence50 agent, it is always PAINED. Is that correct? Did you get the same issue? Can someone help?

Owen Holland's talk at #BICA2015 on "Why and how we should build a zombie". I believe it addresses a few points raised by +Zarathustra Goertzel, +Helio Perroni Filho, +Chuck Esterbrook, +Borislav Iordanov, +torea foissotte, +Nick Carter, +Benjamin Goertzel in a great previous discussion:

I'm attempting to rewrite Lesson 50's Java code to make sure I understand it, but I don't get how the decision mechanism works or when composite interactions are "learned". Can anyone help?

Here's what I've grasped of the decision mechanism, though I'm not sure I understand it fully:
1 - It proposes all primitive interactions, always with a proclivity of 0 (since their weight is always 0).
2 - It proposes every interaction B that is the postInteraction of some composite interaction A whose preInteraction matches one of the previously saved values (previouslyEnactedInteraction, previouslyEnactedInteraction's postInteraction, or previousSuperInteraction).
3 - Then it updates the proclivity in a way I don't follow: it loops through all proposed interactions (A), finds what actually ran the last time each was tried (B), then takes the interaction from the last time B was enacted (C), and sets a.proclivity = b.valence * c.weight.
4 - Then it picks the proposed interaction with the highest proclivity from that list.
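To check my own reading, here is how I'd sketch steps 1-4 in plain Java. To be clear, the class and field names (Interaction, Decider, decide) are my own simplified stand-ins, not the actual Lesson 50 classes, and I've reduced step 3's bookkeeping to the simpler "proclivity = weight × proposed interaction's valence", summed over activated composites:

```java
import java.util.*;

// Simplified stand-ins for the Lesson 50 classes, not the actual API.
class Interaction {
    final String label;
    final int valence;          // value gained when this interaction is enacted
    int weight = 0;             // reinforcement count (stays 0 for primitives)
    Interaction pre, post;      // null for primitive interactions
    Interaction(String label, int valence) { this.label = label; this.valence = valence; }
}

class Decider {
    /** Steps 1-4 above: propose, score by proclivity, pick the best. */
    static Interaction decide(List<Interaction> primitives,
                              List<Interaction> composites,
                              Set<Interaction> context) {
        Map<Interaction, Double> proclivity = new LinkedHashMap<>();
        // Step 1: every primitive is proposed with proclivity 0.
        for (Interaction p : primitives) proclivity.put(p, 0.0);
        // Steps 2-3: composites whose preInteraction is in the current context
        // propose their postInteraction, scored by weight * valence.
        for (Interaction c : composites) {
            if (context.contains(c.pre)) {
                proclivity.merge(c.post, (double) c.weight * c.post.valence, Double::sum);
            }
        }
        // Step 4: pick the proposed interaction with the highest proclivity.
        return Collections.max(proclivity.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }
}
```

For example, with primitives h (valence 0) and i (valence 2) and one composite <h,i> of weight 2, enacting h puts it in the context and the agent then proposes i with proclivity 4. Does that match how you read the code?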

And for learning composite interactions: it appears to create a composite interaction from every enactedInteraction result paired with each of the previously saved interactions, even when the enacted interaction doesn't equal the intended one. Is this right?
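In other words, I read the learning step roughly like the sketch below (again, Learner, learn, and the label format are my own hypothetical names, not the actual Lesson 50 code): a new composite is created on first occurrence and its weight is reinforced on every re-occurrence, regardless of whether enacted matched intended.

```java
import java.util.*;

// Hypothetical sketch of the learning step; simplified stand-ins, not the real classes.
class Learner {
    static class Interaction {
        final String label;
        final int valence;
        int weight = 0;
        Interaction pre, post;
        Interaction(String label, int valence) { this.label = label; this.valence = valence; }
    }

    /**
     * Pair the preceding context interaction with the enacted interaction,
     * creating the composite if it is new and reinforcing its weight either
     * way, even when enacted != intended.
     */
    static Interaction learn(Map<String, Interaction> memory,
                             Interaction preceding, Interaction enacted) {
        String label = "<" + preceding.label + "," + enacted.label + ">";
        Interaction composite = memory.computeIfAbsent(label, l -> {
            Interaction c = new Interaction(l, preceding.valence + enacted.valence);
            c.pre = preceding;
            c.post = enacted;
            return c;
        });
        composite.weight++;   // reinforced on every re-occurrence
        return composite;
    }
}
```

So enacting h then i twice would leave one composite <h,i> with weight 2, if I've understood correctly.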

I'm trying a simple "hello world" test where my agent can type "h" or "i", and I expect it to learn to type "hi" into the chat box. But it never does; it just keeps learning higher abstractions like "hhh" and never the sequence "hi", post, check feedback.

Here are my interactions and their valences:
1 - type h -> always succeeds (0)
2 - type i -> always succeeds (0)
3/4 - post chat -> succeeds if something was typed first (1), otherwise fails (-1)
5/6 - listen for feedback -> succeeds if they typed "hi" (10), otherwise fails (-10)
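Concretely, my environment behaves like this minimal sketch (HelloWorldEnv and the experiment labels are just my own names for it; the returned valences match the list above):

```java
// Minimal sketch of the "hello world" environment described above;
// the class and experiment names are hypothetical, not from the MOOC code.
class HelloWorldEnv {
    private final StringBuilder chatBox = new StringBuilder();
    private String lastPost = "";

    /** Enacts one experiment and returns the valence of the result. */
    int enact(String experiment) {
        switch (experiment) {
            case "typeH": chatBox.append('h'); return 0;          // always succeeds
            case "typeI": chatBox.append('i'); return 0;          // always succeeds
            case "post":
                if (chatBox.length() == 0) return -1;             // nothing typed: fail
                lastPost = chatBox.toString();                    // post and clear the box
                chatBox.setLength(0);
                return 1;
            case "listen":
                return lastPost.equals("hi") ? 10 : -10;          // feedback on the last post
            default: throw new IllegalArgumentException(experiment);
        }
    }
}
```

So the only way to reach the +10 is the exact sequence typeH, typeI, post, listen. My worry is that with the typing interactions at valence 0, nothing pushes the agent toward "i" after "h" until it has stumbled on the full sequence at least once. Does that match anyone else's experience?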

As a scoop for this community, I present #AImergence, a little game that I am developing to illustrate DevAI :-). I am very curious to know what you will think of it. It's still in the testing phase, so please send feedback in this discussion thread!

Hi everyone!
I'm looking for Prof. Olivier Georgeon's course materials in PDF format. Can someone help me?

Hello everyone, I just finished a course in mobile robotics, and now I am starting the robotics course in IDEAL. I hope to get a lot of support and guidance from everyone. Thanks in advance, Herman

Hello Olivier (hard for me to use the first name).
First, can you please tell me when you plan to open this interesting MOOC again?
Second, I will soon follow Andrew Ng's Machine Learning course on Coursera; can you tell me whether developmental AI is covered by classical machine learning, or whether it is a completely new discipline?
Excuse my newbie question; in fact, I have never studied AI or ML.
Thanks a lot in advance for your response.

I just sent the certificates of participation by email. Sorry for being so late; we had to generate them manually. If you think you are eligible for one but did not receive it, or if it needs a correction, please contact me. Don't hesitate to do so; it will help us refine the participation statistics.

We delivered 63 certificates, 41 of which were advanced certificates. Now we are writing a short assessment paper for the European MOOCs Stakeholders Summit (EMOOCS2015). I will share it here when it is ready.

The Semantic Pointer Architecture (SPA) proposed by Chris Eliasmith is based on many of the principles we encountered in the MOOC, such as open-ended learning, a focus on the sensory-motor loop, and the use of hierarchical representations. It also strives to be biologically plausible and to produce results that quantitatively agree with cognitive experiments performed on humans. The model is implemented by the open-source Nengo toolkit:

Some papers on the SPA and its foundation, the Neural Engineering Framework (NEF), can be found on Nengo's site (look at the "Publications" page), but a more complete alternative is to buy Eliasmith's book "How to Build a Brain", available on Amazon:

There is also this presentation by Travis Dewolf:

Though to be honest, having bought the book and being partway through it, I find the presentation too shallow to even begin conveying the full reach of Eliasmith's ideas.