Post has shared content

How *should* government be run in the atomic age of global existential risk? Cummings offers his thoughts on how high-performance human teams operate.

Post has attachment

This is my synopsis of the linked paper, "A History of the Frame Problem", with a question at the end.

In 1969, McCarthy and Hayes tackled the problem of making agents that can formulate strategies to complete goals. The problem has two parts: representing the state of the world at various moments in time, and searching for a sequence of actions whose final world state satisfies the goal. Like good software engineers, they aspired to decouple the parts, and had a clever idea. They formalized in first-order logic 1) the initial state of the world, 2) the preconditions under which an action can be taken, and 3) the state-to-next-state transformation an action produces on the world. This solved the first part of the problem, and the second part could now be handled by a generic theorem prover. Unfortunately, in practice, formalization #3 ended up being really large, because it also has to spell out everything an action does not change.

"We were obliged to add the hypothesis that if a person has a telephone, he still has it after looking up a number in the telephone book. If we had a number of actions to be performed in sequence, we would have quite a number of conditions to write down that certain actions do not change the values of certain fluents [fluent = a proposition about the world which changes over time]. In fact, with n actions and m fluents, we might have to write down n*m such conditions."

They called this problem of n*m blow-up the "frame problem", but made the mistake of including the word "philosophical" in the title of their paper, provoking AI doomsayers to cite it as yet another example of why computers could never think like humans. The discussion became more interesting when Daniel Dennett redirected the attack away from the AI researchers and toward the philosophers. He caricatured epistemology as a comically profound but very incomplete theory, because in thousands of years, no one had ever noticed the frame problem.

"... it is turning out that most of the truly difficult and deep puzzles of learning and intelligence get kicked downstairs by this move [of leaving the mechanical question to some dimly imagined future research]. It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half trick, they explain that it is really quite obvious: the magician doesn't really saw her in half; he simply makes it appear that he does. 'But how does he do that?' we ask. 'Not our department', say the philosophers - and some of them add, sonorously: 'Explanation has to stop somewhere.'"

Some philosophers and AI researchers argued that the original mistake leading to the frame problem was McCarthy and Hayes choosing first-order logic for world representation. Their case is easily made with the Tweety Bird problem: the premises 1) all birds fly, 2) Tweety is a bird, 3) all broken-winged creatures cannot fly, and 4) Tweety has a broken wing, can prove both 5) Tweety can fly and 6) Tweety cannot fly. Clearly premise 1 is too strong, but attempting to modify first-order logic to support "most" statements instead of "all" statements breaks monotonicity: under a "most"-enabled logic, premises 1, 2, and 3 would prove 5, but premises 1, 2, 3, and 4 would prove 6. An agent learning premise 4 would change its mind from conclusion 5 to conclusion 6. This is, of course, the desired behavior, but dropping the stability of truth means the agent can no longer use a generic theorem prover. The agent is using a modified logic system, and so it must use a specialized theorem prover. The question becomes: which logic system to use?
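Spelled out in first-order notation (a sketch, with predicate names of my own choosing), the four premises jointly prove a contradiction:

```latex
\forall x.\; Bird(x) \rightarrow Flies(x)              % 1. all birds fly
Bird(Tweety)                                           % 2. Tweety is a bird
\forall x.\; BrokenWing(x) \rightarrow \neg Flies(x)   % 3. broken-winged creatures cannot fly
BrokenWing(Tweety)                                     % 4. Tweety has a broken wing

% From 1 and 2: Flies(Tweety). From 3 and 4: \neg Flies(Tweety). Contradiction.
```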

In standard first-order logic, every proposition is either true, false, or unknown. Learning new information can only ever change the status of unknown statements. To solve the Tweety Bird problem, a logic must allow assuming unknowns to be false until proven otherwise (the closed-world assumption). The symbolic AI community eventually converged on circumscription, a formalism that assumes designated predicates are false until proven otherwise.

McCarthy updated his situation calculus by circumscribing the predicate Abnormal, allowing him to formalize "Most birds fly" as "All birds fly unless they are abnormal" and adding the premise "Broken-winged creatures are abnormal." Since the Abnormal predicate is assumed false until proven otherwise, Tweety is assumed to be a normal, flying bird until the agent learns that Tweety has a broken wing.
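Roughly (again my own rendering, not McCarthy's exact axioms), the circumscribed version looks like this; minimizing the extension of Ab is what licenses "assume normal until told otherwise":

```latex
\forall x.\; Bird(x) \wedge \neg Ab(x) \rightarrow Flies(x)   % all birds fly unless abnormal
\forall x.\; BrokenWing(x) \rightarrow Ab(x)                  % broken-winged creatures are abnormal
Bird(Tweety)

% Circumscribing Ab keeps Ab(Tweety) false unless something forces it true,
% so Flies(Tweety) is concluded until BrokenWing(Tweety) is learned.
```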

Shanahan took a time-oriented approach instead. In his circumscriptive event calculus, he circumscribed Initiates and Terminates, so he could formalize "Most birds fly" as "All birds can fly at birth" and replace "All broken-winged creatures cannot fly" with "Breaking a wing Terminates the flying property." Since the Terminates predicate is assumed false until proven otherwise, Tweety's birth state (capable of flight) is assumed to persist until the agent learns that Tweety's wing was broken.
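In event-calculus terms the same story looks roughly like this (a sketch of the idea, not Shanahan's exact axioms):

```latex
Initiates(Birth(x), CanFly(x), t)        % every bird can fly from birth
Terminates(BreakWing(x), CanFly(x), t)   % breaking a wing ends the CanFly fluent

% Combined with the generic persistence axiom -- a fluent initiated at time t
% still holds at any later time unless some event in between Terminates it --
% circumscribing Initiates and Terminates means no unstated terminating event
% is assumed, so CanFly(Tweety) persists until a wing-breaking event is learned.
```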

Personally I find circumscription unsatisfying. To me, the most obvious answer for "How do you turn 'all' into 'most'?" is probability theory. As E. T. Jaynes showed, logic is merely a special case of probability theory (in which all of the probabilities are 0 or 1), so the jump from logic to probability theory seems more natural to me than circumscription. I am not alone in thinking this, of course. Many people attempted to solve the frame problem using probability theory, but as Pearl showed in 1988 regarding the Yale Shooting Problem, probability theory can never be enough, because it cannot describe counterfactuals, and thus cannot describe causality.

But that limitation disappeared in 1995, when Pearl figured out how to generalize probability theory. He discovered a complete set of axioms for his "calculus of causality", which distinguishes conditioning on a variable you have merely observed from conditioning on a variable you have intervened on (seeing versus doing).
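For intuition about what that distinction buys you, here is a minimal sketch (not from the paper; the model and probabilities are invented for illustration) of seeing versus doing in a toy structural causal model, estimated by Monte Carlo:

```python
import random

def sample(do_wet=None):
    # Toy structural causal model: Rain -> Wet <- Sprinkler.
    # do_wet=None means we merely observe the system; passing True/False
    # forces Wet, severing its dependence on its parents (Pearl's do-operator).
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.5
    wet = (rain or sprinkler) if do_wet is None else do_wet
    return rain, wet

def p_rain(given_wet=None, do_wet=None, n=200_000):
    # Monte Carlo estimate of P(Rain | Wet = given_wet), optionally under do(Wet = do_wet).
    hits = total = 0
    for _ in range(n):
        rain, wet = sample(do_wet)
        if given_wet is None or wet == given_wet:
            total += 1
            hits += rain
    return hits / total

print(p_rain())                             # P(Rain)             ~= 0.30
print(p_rain(given_wet=True))               # P(Rain | Wet=1)     ~= 0.46  (seeing wet ground raises belief in rain)
print(p_rain(given_wet=True, do_wet=True))  # P(Rain | do(Wet=1)) ~= 0.30  (making the ground wet does not)
```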

Logic -> Probability Theory -> Calculus of Causality (wow!)

According to the linked paper, the circumscriptive event calculus and Thielscher's fluent calculus have adequately solved the frame problem. But I still wonder: has anyone re-attempted a solution using the calculus of causality?

Post has shared content

Just a friendly reminder, the two best science fiction books of all time are both available for free on the homepages of their authors:

(1) http://www.rifters.com/real/Blindsight.htm
(2) http://crystal.raelifin.com/

More free major SF books:

(3) http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html
(4) http://www.kschroeder.com/my-books/ventus/free-ebook-version

Post has attachment

Public

There are lots of reasons why I'm in favor of a basic income guarantee, but I'm still missing the most important reason. I still don't know if it will actually WORK.

There have been a handful of "experiments" over the last few decades, but few of them have been rigorous (e.g. using randomized controlled trials), none have been complete (universal, basic, and long-term), and none have been large scale (largest was 15,000 recipients, less than a quarter of the size of the podunk town I grew up in).

GiveDirectly is raising funds to fill in the evidence gap, and will purchase statistical power by experimenting in the poorest areas of the world (most likely East Africa). I've committed to giving $300/month (10 people) if they raise sufficient funds for getting statistically significant results.

https://www.givedirectly.org/basic-income
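To get a rough sense of what "purchasing statistical power" means, here is a back-of-the-envelope sketch (my own illustrative numbers, not GiveDirectly's actual study design) of how the required sample size in a two-arm randomized trial scales with the effect you hope to detect:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    # Classic two-sample approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    # where d is the standardized difference between treatment and control means.
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_arm(0.2))   # a "small" effect: ~393 recipients per arm
print(n_per_arm(0.1))   # half the effect: roughly 4x the sample, ~1570 per arm
```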


Public

Sometimes I say things that other people don't know.

Where I grew up, the response was always "How do you know so much?" (asked rhetorically)

Here, the response is usually, "Where did you learn that from?" (implying, "I'd like to learn more about that.")

I love California.


Post has attachment

Public

Congrats to +Sam Bosley, Jonathan Koren, and +Tim Su on open sourcing SquiDB! Although I haven't been closely involved with the project, and I haven't used an ORM in ages, my understanding is that the benefits of SquiDB over existing ORMs are:

1) It is more strongly typed (no casting required).

2) It supports close to 100% of the SQL grammar.

3) It supports code generation, making it suitable for Android devices where every cycle counts.


Public

I usually have trouble enjoying science fiction movies that show no respect for reality. Fortunately, Interstellar is not one of those movies, and thanks to its realistic portrayal of black holes, it shows that reality can be far more interesting than fantasy. So it makes me terribly sad to admit that I couldn't really enjoy the movie, despite its mostly accurate science, because, in my opinion, it failed to depict realistic people and organizations. Just a few of the things that bugged me (spoilers):

---

Textbooks are being rewritten to say that the moon landing was a hoax, apparently to discourage students from becoming scientists. Because, um, the best way to fight the blight is to have less science.

A cultural norm spreads that more citizens should become farmers. High demand and government subsidies are apparently insufficient motivation.

Twelve astronauts are sent to twelve planets, and nobody ever thinks about what happens to prisoners in solitary confinement.

Professor Brand spends years doing fake research, so that he can launch the Lazarus project and Plan B without the hysteria of people feeling left behind. For decades, it never occurs to him that his theory might be incomplete. Yet Murph figures this out only days after the professor dies. Either all of the scientists at NASA are incredibly stupid, or the Manhattan Project had more scientists than the project of saving humanity from extinction.

Everyone understands gravitational time dilation, but nobody understands that the low orbiting planet must therefore be too young to be habitable.

Nobody understands that black holes cause enormous tidal waves.

Dr. Mann enacts a complicated plan involving killing people and commandeering the Endurance, instead of simply saying, when the crew arrives, "I lied; please take me with you."

---

I think my enjoyment of fiction has been forever ruined now that I have read stories like HPMOR, Worm, The Metropolitan Man, and Luminosity.


Post has attachment

Public

I was like, "Why buy a decongestant, when I could buy a jalapeño for 7 cents?"

(Worth it.)

