Wanted: experts in Twisted, Tornado, asyncore or other Python async APIs (greenlets/gevent, Stackless, libevent all welcome!). On python-ideas we're trying to hash out the async API for the future (for the Python stdlib), and we need input from expert users of the current generation of async APIs.
This is, indeed, a platform committed to the future. I hope the community gets to a nice consensus, and we have a neat async API in the core. :-)
I think Python should be better than NodeJS at async execution, because I like Python more than JS :-)
I hope it will be pythonic. The stdlib doesn't need more Java-style code. 
Phil: what style do you consider Twisted, btw? Do you think it's pythonic?
I don't really understand why we would define a new async API instead of adopting Twisted. I guess the result will either be a subset of Twisted, have inconsequential differences from Twisted, or be wrong.
Yes, I meant the last two paragraphs, where Glyph and Guido state that pulling in things of this size is not what Guido is looking for. ciao - Chris
Christian: not at all. You have to learn large parts of the API to get started, and you have to know which parts of "interfaces" to override.

That makes sense in Java, because in Java, interfaces exist. Python has mixins instead, and no compile-time checking that an interface is implemented properly (because there are none).

Python has properties. If a method name contains "get", "set", "del", "add", "remove", …, it needs a damn good reason:
1. Every getter and setter should be a property instead.
2. We should use a set-like object like reactor.readers instead of reactor.addReader and reactor.removeReader.
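To make points 1 and 2 concrete, a reactor in the style Phil describes might look something like this sketch (`Reactor`, `running`, and `poll` are illustrative names, not any real library's API):

```python
import select


class Reactor:
    """Sketch of a set-based reactor API: instead of
    reactor.addReader(fd) / reactor.removeReader(fd),
    the readers are exposed as a plain mutable set."""

    def __init__(self):
        self.readers = set()   # replaces addReader()/removeReader()
        self.writers = set()
        self._running = False

    @property
    def running(self):         # a property instead of isRunning()/getRunning()
        return self._running

    def poll(self, timeout=0):
        """One ready-check over the registered file descriptors."""
        if not (self.readers or self.writers):
            return [], []
        r, w, _ = select.select(self.readers, self.writers, [], timeout)
        return r, w


reactor = Reactor()
reactor.readers.add(0)        # instead of reactor.addReader(0)
reactor.readers.discard(0)    # instead of reactor.removeReader(0)
```

The set-based style gets membership tests, iteration, and bulk operations for free, which is the point of reusing Python's built-in container protocols.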

Python has a style guide, PEP 8: names_with_underscores should be used instead of camelCase, and so on.

So: Twisted's API is an example of an API that would be nice for Java but is awful for Python, like xml.dom.
Also, we have the with statement and decorators. An API as big as Twisted should statistically have several use cases for both of them. If it doesn't expose at least a few of each, that is a strong indication that it's not a Python API.
Thanks for the detail +Phil Schaf. Had been impressed with the Twisted episode on +FLOSS Weekly, so was initially sympathetic with the comment here by +Zooko Wilcox-O'Hearn. Now, though, I totally understand the need for a new API which can be better integrated.
For what it is worth, all of the non-Pythonic things that Phil pointed out about the Twisted API are the way they are because they were created before the Python standard was created. :-) (Capitalization of method names, use of the "with" statement, interfaces, and decorators.)

But of course, it would still be nice to change those things to match the modern Pythonic idioms. On the other hand, doing so would break compatibility for code that was written to the Twisted API.

I wonder if a compatibility layer could make it so that Twisted-using code could use this new stdlib async library, or vice versa, so that stdlib-async-using code could run on top of Twisted.

I think one potentially productive exercise for someone who is motivated by this stuff would be to actually cut and paste a subset of the Twisted implementation and edit it to be idiomatic for modern Python. I imagine one thing this person would learn is that there are coherent subsets of Twisted; for example, you could start by extracting the reactor pattern and the specific reactors such as selectreactor, without having to bring along any protocols or the concept of Deferreds.

Another thing that I imagine would probably turn up from this is that there isn't a lot of room for alternatives in this design space. The Twisted reactor is within a stone's throw of the stdlib asyncore. The differences are mostly just bugs or limitations in asyncore.
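For a sense of how small that coherent subset is, here is a toy select()-based reactor with read callbacks but no protocols and no Deferreds (a sketch, not Twisted's actual code; `add_reader`, `run`, etc. are made-up names):

```python
import select
import socket


class SelectReactor:
    """Minimal reactor pattern over select(): dispatch a callback
    whenever a registered file descriptor becomes readable."""

    def __init__(self):
        self._readers = {}   # fd -> callback invoked when fd is readable
        self._stopped = False

    def add_reader(self, fd, callback):
        self._readers[fd] = callback

    def remove_reader(self, fd):
        self._readers.pop(fd, None)

    def stop(self):
        self._stopped = True

    def run(self, timeout=0.1):
        while not self._stopped and self._readers:
            ready, _, _ = select.select(list(self._readers), [], [], timeout)
            for fd in ready:
                self._readers[fd](fd)


# Demo: one connected socket pair, one read callback.
a, b = socket.socketpair()
reactor = SelectReactor()
received = []

def on_readable(fd):
    received.append(b.recv(1024))
    reactor.stop()

reactor.add_reader(b.fileno(), on_readable)
a.sendall(b"ping")
reactor.run()
```

The loop, the fd registry, and the dispatch are the whole pattern; everything else in a real framework is layered on top of this core.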

Anyway, I hate to be a wet blanket, and I'm glad that the general notion of event-based programming is gaining mindshare in the broader Python community, but practically speaking, Twisted itself already does more or less everything I could want. It is of excellent quality, well-maintained, and I'm pretty much completely satisfied with it. It will probably be a long time before any new alternative that is getting started now can offer me something that Twisted doesn't already provide.
Phil: Thanks for your explanations. Actually I knew what you said,
but a) I wanted to know what/how you think, and b) I wanted to avoid writing a long reply that might look like advocacy for my own stuff, which is not intended at all.
Sorry for abusing you ;-)
Callback-oriented code is very easy to get wrong, and calling it the best is subjective. While Twisted is very good, things like gevent are much simpler to understand.
Oh, yeah, if we're talking about alternative concurrency paradigms than the event-based paradigm that Twisted implements, then yes there is a lot of room to explore in the design space.
Please make it nicer/less of a bolt-on than Twisted. I think Python is renowned for its straightforwardness, which I haven't seen from Python async so far.
Zooko: from my inexperienced point of view a compat layer would be the way to go, because
1. it works for projects like pyramid (pylons compat)
2. we don’t want to embrace non-idiomatic APIs and put more of them in the stdlib (at least i don’t), as we like the pythonic way and the stdlib should serve as example for newcomers. (that’s why performance improvements that make the code worse aren’t accepted)
3. a new API can not only introduce new idioms, but also learn from errors in the old API.

Christian: no problem, I like writing such stuff up; it makes me learn more than reading it ;) But what additional thoughts do you have? What did I forget or get wrong?
Phil: Nothing wrong, and nothing really to add but a personal opinion:
I just hate callbacks, complicated stuff, and writing in an asynchronous style.
I agree that what was done with Twisted is pretty good; actually,
everything doable without changing the stack or interpreter was done.

I just don't like that, and that was the reason to invent Stackless.
Not going to discuss this here, because this is off-topic.
For an async API, what are the practical alternatives (if any) to callbacks?
I would strongly recommend using a message-oriented model of some kind as an alternative to both callbacks and threads with locks.

A single task receiving and processing messages is, in my (20+ years of) experience, the least complicated way to do concurrency in a predictable manner. Languages like Erlang, ABCL, and Concurrent Object-Oriented C (a concurrent version of Objective-C) take this approach.

Have a look at for an example in Python.
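A minimal sketch of that message-oriented shape, assuming a single task that owns its state and an inbox queue (illustrative only, not the library Rene links to):

```python
import queue
import threading


def counter_task(inbox, results):
    """A single task that owns its state and mutates it only in
    response to messages -- no locks needed, since nothing else
    ever touches `count`."""
    count = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            results.put(count)
            return
        if msg == "increment":
            count += 1


inbox, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=counter_task, args=(inbox, results))
worker.start()
for _ in range(3):
    inbox.put("increment")
inbox.put("stop")
worker.join()
```

Because all mutation happens inside one task in message order, the result is deterministic regardless of how the senders interleave.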
Rene: I'm not entirely sure what "message oriented" means to you, but my understanding of it (from Actors, Smalltalk, and E) is almost exactly what Twisted currently offers.
+Zooko Wilcox-O'Hearn Please join us on python-ideas to attest to Twisted's virtues. I don't doubt it has many, but it was designed before we could reasonably use generator-based coroutines (either PEP 342 or PEP 380 style), and even though it has added inlineCallbacks which use PEP 342, I believe that Deferreds never would have gained the popularity they have (in the Twisted world) if one of those PEPs had been implemented at the time. Twisted has many facets, and we want to be sure to learn from it, and we also want to make sure that it can continue to work in the brave new world of Python 3.4 (e.g. using some form of adaptations), but I believe we can do significantly better now. (Where by "better" in a large part I mean a more readable coding style; complex Deferred-based code is much harder to read than the same code written using coroutines.) But I start to repeat what I said on the list -- please follow us there.
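To make the readability point concrete, here is the same "A then B then C" logic in callback style and in generator-coroutine style (a toy illustration; nothing here is Twisted's or the proposed stdlib's actual API):

```python
# Callback style: each step's continuation is a separate function,
# and intermediate state must be threaded through explicitly.
def fetch_and_parse_cb(fetch, on_done):
    def on_fetched(raw):
        parsed = raw.upper()      # the "B then C" steps, for illustration
        on_done(parsed + "!")
    fetch(on_fetched)


# Coroutine style (PEP 342/380): the same logic reads top to bottom,
# and locals survive across the suspension point.
def fetch_and_parse_coro():
    raw = yield "fetch"           # suspend until the result arrives
    parsed = raw.upper()
    return parsed + "!"           # return from a generator: Python 3.3+


# Driving the coroutine by hand, the way a scheduler would:
coro = fetch_and_parse_coro()
request = next(coro)              # the coroutine asks for a fetch
try:
    coro.send("hello")            # the scheduler delivers the result
except StopIteration as stop:
    result = stop.value

collected = []
fetch_and_parse_cb(lambda cb: cb("hello"), collected.append)
```

Both versions compute the same thing; the difference is purely where the control flow lives, which is exactly the readability argument above.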
Oh, I see. Well, I'm not sure it would be helpful for me to join python-ideas, because now that I understand what you want, it isn't something that I'm interested in. If I understand correctly, what you want is a way to express "A then B then C", where B may involve waiting for input from the network (or other blocking behavior), but you can still in the C part use the local variables and the rest of the stack that was set up in the A part. However you accomplish that, it will violate what I want, which is that the behavior of "A then B then C" is deterministic -- nothing that could happen from the network or from any other external source in the B part could change the result of the C part.
+Zooko Wilcox-O'Hearn I apologize -- I do not understand what you are saying at all. You are using a shorthand and/or abstraction that makes it hard to read between the lines what your real objection is. I tried to come up with objections but had to erase every sentence I tried to write, because we don't seem to have a common vocabulary. Can you give a more concrete example?
+Shane Green Just Google for "python-ideas". The link you give is similar (in some abstract way) to the Futures defined by PEP 3148 (Google that, too :-); promise and future are interchangeable terms in the wikipedia page on futures.
+Guido van Rossum you are very active in communicating with people around the world. As an engineer at a company, I would like to be like you: I want to communicate with people around the world about what I'm interested in and what they want to talk about with me. Are you still active on weekdays?
+Phil Schaf As I'm sure you're aware, PEP8 tells you to do what the code around you does. Turns out that the code around it lookedLikeThis, because it's older than the PEP8 names_with_underscores recommendation ;)

Wrt interfaces: Zope's interfaces are quite different from Java's, so I don't think the comparison is sound. In Java, they are quite often used to work around a lack of MI, but that's not what they're used for in Twisted: they define an API, more specifically one that can have many implementations. How could you replace IProtocol with a mixin? Furthermore, they allow adaptation.

Also, wrt decorators, the popular inlineCallbacks function is used exclusively as a decorator. Also, @d.addCallback is a reasonably common pattern for defining "inline" callback functions explicitly.

Wrt the with statement: I don't think there are any, no. I can't think of any good places to use it either. My code uses it a lot for the usual suspects (file I/O, mocks). Many of the use cases aren't quite as useful when you already have individual callbacks that are pretty much atomic. I suppose maybe we could use the with statement to run a bit of code under a particular reactor?

As for addReader/removeReader vs readers.remove/readers.add: again, the API is older than sets :) Even then, given the behavior in those methods, I'm not quite sure delegating that to a composed object makes a lot of sense; of course, that's an implementation argument, not an API one. Either way, that's a pretty deep piece of API: most people never need to touch it.
Laurens: I know, but I only said how Twisted is not idiomatic Python, not that it could have done better back then.

PEP 8 (and common sense) also says that when starting a new project, one should write code in the current state-of-the-art style.
+Phil Schaf Okay, great. Fortunately, nobody's arguing for anything other than lowercase_with_underscores for new projects. All the new code being theorized about is lowercase_with_underscores.
Async I/O done well depends on coroutines. It does not depend on any complicated API like Twisted or the like. Make coroutines native to Python; that will help a lot already.
+Florian Bösch That's a popular, but not uncontested opinion. Even then, that's only part of the story, since you still need something to feed those coroutines data. I'd like to join Guido in inviting you to join the technical discussion on python-ideas :)
+Laurens Van Houtven Implementing a coroutine I/O scheduler/trampoline isn't that hard. In effect you'll be implementing I/O-bound micro-threads yourself, replacing the OS thread scheduler's "dumb" behavior (there are other benefits as well). There can certainly be synchronization issues with micro-threading via coroutines as well (CPU-bound tasks). However, you're free to set up your synchronization primitives the way you need them, instead of trying to convince the OS scheduler to follow them well and praying not to deadlock. I've personally implemented variants of this, and have systems like this in production. For a large portion of use cases, very simple approaches work extremely well (see node.js). And unlike node.js, support for coroutines doesn't lead to the async pyramid-of-doom pattern. You can make your micro-thread-local behavior indistinguishable from sequential/blocking code, which is a big boon in reducing complexity.
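A toy version of such a trampoline, round-robin over generator micro-threads (real schedulers park tasks on select()/epoll() until their I/O is ready instead of requeueing unconditionally; all names here are made up):

```python
from collections import deque


class Trampoline:
    """Cooperative round-robin scheduler for generator micro-threads."""

    def __init__(self):
        self.ready = deque()

    def spawn(self, gen):
        self.ready.append(gen)

    def run(self):
        while self.ready:
            task = self.ready.popleft()
            try:
                task.send(None)          # run the task until its next yield
            except StopIteration:
                continue                 # task finished; drop it
            self.ready.append(task)      # otherwise requeue it


trace = []

def worker(name, steps):
    for i in range(steps):
        trace.append((name, i))
        yield                            # yield control back to the scheduler

sched = Trampoline()
sched.spawn(worker("a", 2))
sched.spawn(worker("b", 2))
sched.run()
```

Each `yield` is a scheduling point, so the two workers interleave deterministically; the body of each worker still reads like sequential code.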
+Florian Bösch I'm aware of how most coroutine-based libraries work -- I'm just saying that for this to work in python, the answer's more elaborate than a single word, hence the invitation to join in on the discussion. Furthermore, nobody's suggesting an async pyramid of doom. The two suggestions that are actively being discussed are futures-based and deferred-based generators, neither of which suffer from the problem you mentioned. In fact, Deferreds sans the generator part don't even suffer from this issue. Everything I have just said has already been explicitly said in the threads Guido and I are inviting you to join, so I can only repeat that invitation :)
(That, and forward compatibility with existing libraries is considered important in the ongoing debate: we're trying to produce a compatible baseline API here. Even if writing your own coroutine scheduler is easy, that doesn't sound like a good idea from an interop POV.)
Florian, Laurens:
I am really convinced that coroutines are in fact the best approach to implement async stuff, with real symmetric coroutines.

Python does not have these, but needs to build something similar using generators, which is not that bad, but always a bit of bending over backwards.

What I'm trying to say is: I would try to design the model for async stuff under the assumption that we have real coroutines, to get the cleanest possible model.

Then this would be implemented as close as possible, using generators. This would fit into existing Python variants like PyPy, which can use native coroutines easily.
+Laurens Van Houtven People will always write their own stuff. You're not arguing against standardized coroutines in favor of whatever "server framework", are you? You have two choices for writing async code: callbacks or coroutines. If you go for callbacks, you're screwed, because 1) the async pyramid-of-doom pattern, and 2) Python doesn't have anonymous closures/blocks (like JS), which makes callback-based solutions extremely awkward and hard to maintain (Twisted). Coroutine-based schedulers cannot satisfy all use cases, but they handle sequential protocol behavior fairly well. Don't get me wrong, I think anonymous closures/blocks are crucial as well. But they're much more useful in situations where you're not dealing with sequential behavior (such as, say, game logic, state machines, etc.).
+Florian Bösch I don't quite understand what you're saying. The goal is to create cross-library, reusable components.

I have already explained why the callbacks-versus-coroutines dichotomy is false; it's quite easy to get the same synchronous-looking code when the underlying thing is callbacks. The two things currently being actively discussed both use a generator. Not quite coroutines (all yields are explicit), but still synchronous-looking code. If you want true coroutines, they're quite easy to have on top of the currently suggested transport/protocol abstractions too. I have a proof by construction: corotwine :) Right now there is nobody truly advocating getting coroutines in, though.

I have also already explained why the pyramid of doom argument is invalid, since none of the suggestions on the table have this issue.
+Domen Kožar It's more likely that the stdlib will get a simplistic implementation that you can then more or less replace with something nicer, such as something libuv based. libuv itself is unlikely because it's yet another C dependency. However, I certainly feel libuv is a great place to start looking for API ideas to steal.
+Laurens Van Houtven Sure, that totally makes sense. Anything that can be plugged into libuv and standardized is a wild dream for many of us.
Hopefully being confused by Google+ doesn't mean I should bow out of a discussion on async APIs, but, thanks for the quick reply! 

You're right about PEP 3148. In fact I based the Asynchronous Result class directly on PEP 3148's definition of Future. I didn't call it a Future because I had removed "add_done_callback", and the PEP definition didn't go into detail about the API for supporting callback-chaining (i.e., pipelining). Design-wise I prefer separating the result from the callback-chaining, so Async Result is basically the PEP's Future, less that method.

I agree that Promise and Future are pretty interchangeable in this discussion.  I think of them as being another (arguably preferable) solution for the same problem Deferreds solve.  

Going to join the discussion now.  Thanks!
Hopefully being confused by Google+ doesn't mean I should bow out of a discussion about async APIs!

Thanks for the quick reply. I've joined the discussion now.

You're right about PEP 3148.  The 'promised' project is just a quick prototype project, but PEP 3148 was one of the many things I researched on related topics.  The AsynchronousResult class in 'promised' is based directly on the PEP's Future API.  It's renamed because I removed the add_done_callback() method, and because the PEP didn't elaborate on a mechanism for callback-chaining (i.e., pipelining).  

I agree that Future and Promise are interchangeable; I think of Deferreds as another option for solving the same problem. The key difference is that a Deferred's value mutates and callback-chaining happens implicitly, whereas Promises do not mutate and support explicit callback-chaining.

Not everyone thinks of them as different options; some see them as components of a single solution. The Dojo Toolkit uses both Deferreds and Promises in its solution, for example.
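The mutation distinction can be sketched in a few lines (toy classes, not any real library's Deferred or Promise):

```python
class Promise:
    """Immutable flavor: the promise's own value never changes;
    then() returns a *new* promise holding the derived value
    (explicit chaining)."""

    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        return self._value

    def then(self, fn):
        return Promise(fn(self._value))   # new object; self is untouched


class Deferred:
    """Mutable flavor: add_callback rewrites the stored result in
    place as each callback runs (the implicit-chaining style)."""

    def __init__(self, value):
        self.result = value

    def add_callback(self, fn):
        self.result = fn(self.result)     # same object, mutated
        return self


p = Promise(2)
q = p.then(lambda v: v * 10).then(lambda v: v + 1)   # p keeps its value

d = Deferred(2)
d.add_callback(lambda v: v * 10).add_callback(lambda v: v + 1)
```

With the immutable flavor, earlier values in the chain remain inspectable; with the mutable flavor, the original value is gone once the callbacks have run.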
If it's something that goes into the stdlib, I would personally like something as generic as possible so that it can easily cover many different scenarios, including existing ones like Twisted, custom message loops, etc. This isn't to say that Twisted is a perfect example of an async API, but if theirs could be reasonably adapted to interoperate with the stdlib (by wrapping futures etc.), it would be awesome.

Ideally, I'd want something that would let me write framework-agnostic async code for libraries/components, which other people could then run in different contexts - Twisted, Tk, wxWidgets, whatever - without the need to modify the code (which of course means that the frameworks would have to pick up the new API, but they undoubtedly will if it's in the standard library).

Oh, and do I read this right that generator-based coroutines are being considered as a higher-level interface for futures? That would be absolutely awesome (and the ability to return values from generator functions that was added in 3.3 looks like the final piece necessary to make the syntax completely natural).
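That future-yielding pattern might look roughly like this (a sketch using concurrent.futures.Future and a hand-rolled driver; `make_future` and `run` are illustrative names, not a proposed API):

```python
from concurrent.futures import Future


def make_future(value):
    """A pre-resolved future, standing in for a real async operation."""
    f = Future()
    f.set_result(value)
    return f


def task():
    # Synchronous-looking async code: yield a future, receive its
    # result back, and (Python 3.3+) return a value from the generator.
    result = yield make_future(21)
    return result * 2


def run(gen):
    """Minimal driver: resolve each yielded future and feed its
    result back into the generator; the generator's return value
    arrives via StopIteration.value (PEP 380)."""
    try:
        future = next(gen)
        while True:
            future = gen.send(future.result())
    except StopIteration as stop:
        return stop.value
```

A real scheduler would wait for unresolved futures instead of calling `result()` immediately, but the control flow inside `task` would look exactly the same.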
I'd like to suggest that we work out known, anticipated, and wishful use cases, and try to code what makes them most easily solvable. This produces a very different result than coding to a specification/protocol. You can see the difference by comparing with any of the other HTTP libraries for Python.
I really like pyev. With some boilerplate for common tasks (sockets, simple disconnect/read callbacks and write functions) it's pretty powerful.
I've made big servers in Twisted and Tornado; I wrote the inner loop in Java's defunct Deft. I'm in the middle of a project using Java's Netty. I wrote hellepoll, a very fast C++ async socket server thing. And I've done loads of JavaScript callbacks. Hmm, this is an incomplete list; I've forgotten how many different platforms I've done async and sync I/O and task systems on.

My key worry: it's very easy to overlook that async file I/O is usually completion-based rather than ready-based. It's important to have an abstraction that can do both. I worry that Python might pick up an old ready-based API and find it ill-suited to, say, Windows sockets and Linux disk I/O and so on. Libevent 2 (after that recent rewrite) shows a unified API.

I'm very comfortable with callback-style, reactor-style code. But I am becoming more and more allergic to complex APIs. I look with envy at the goroutine approach, where these things are hidden. I look forward to the Go crowd improving their throughput, their scheduling, their scalability across cores, and so on, all without the programmers using the language having to adapt, migrate, or rewrite.

My sincere hope is that Python integrates the gevent approach so that monkey-patching becomes unnecessary, and the yielding in (seemingly) blocking I/O calls and locks works properly both in the new 'task' way and in the classic thread way, with generic code not having to understand the distinction.

Please consider making it so that the innermost I/O and locking primitives are gevent-aware and schedule or block depending on the context they are in.

PS: found in production last week that there's a select() in the multiprocessing Queue. How has no one found this before? I happened to have 1024 FDs open when creating a queue and ... bang. Nasty.
+William Edwards Thanks for your feedback. I agree that goroutines are nice, but the entire Go language was designed around them. gevent at this time is not sufficiently cross-platform and cross-Python-implementation to make it a valid approach. But hopefully we can make it so that the new approach doesn't preclude it from being used on platforms that have it. For that select() bug, please file a bug! Finally, can you clarify what you mean by the terms ready-based and completion-based? I have not come across these before.
Yeah, it's a shame about gevent's portability. It's a shame that Python 3 can't go the route of making it a valid approach by brute force.

My fear, of course, is that Twisted, which is a twisted, unpythonesque API, becomes blessed. And if you round the edges of the API and rename stuff, then you lose source compatibility, and suddenly you might as well have blessed something cleaner and smaller instead. Tornado might have been it; it started small and beautiful, but now it's no faster than Twisted and it's feeling decidedly rough again.

Regarding completion-based:

If you want to read a file from disk, you typically can't say "tell me when the file is ready for reading". You have to give the system some buffer(s) and say "read the file into this buffer and tell me when it's done"; that's completion-based.

When Twisted and so on were written, non-blocking I/O was for sockets and was ready-based.

Basically, on Windows there just isn't ready-based non-blocking I/O that scales well.

Windows has solved the async I/O problem in a different way: it got I/O completion ports, which are completion-based.

And on Linux, non-blocking file I/O (a new and rare beast, giving a 50% improvement in my own hellepoll signalfd-based prototypes, but still an API I dislike, to be honest) is completion-based.

The signalfd trick is Linux-only, of course. On Windows, you have to expose a completion-based API.

Twisted does have an IOCP reactor, but...

The first thing people do with a ready-based API is actually read the bytes and present reads to the user as a completion-based API built on some protocol-specific envelopes; and for writes, they present a completion-based API to the user, then buffer those bytes user-side and feed them to the ready-based API when it's writable.

It's far cleaner and simpler to present a completion-based API to the user and, under the hood on some platforms, map that to ready-based (and ideally edge-triggered) I/O.

I think completion-based is the natural API for programmers, and exposing the ready-based API that some platforms have is too low-level and subject to platform-specific intricacies that we don't want to push onto the user.
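The two shapes can be contrasted in a few lines of Python (the "completion" side is faked synchronously here; real completion-based I/O, as with IOCP, notifies you asynchronously once the kernel has already filled your buffer):

```python
import select
import socket


# Ready-based: the OS tells you a fd is readable; *you* then do the read.
def ready_read(sock, nbytes, timeout=1.0):
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return None
    return sock.recv(nbytes)            # the actual read is our job


# Completion-based shape: you hand over a buffer up front and are
# notified when the data is already *in* it.  The "kernel" side is
# simulated synchronously here for illustration.
def completion_read(sock, buf, on_done):
    n = sock.recv_into(buf)             # system fills the caller's buffer
    on_done(n)                          # completion notification


a, b = socket.socketpair()

a.sendall(b"data")
assert ready_read(b, 4) == b"data"      # ready-based: we pulled the bytes

a.sendall(b"more")
buf = bytearray(4)
done = []
completion_read(b, buf, done.append)    # completion-based: bytes pushed to us
```

The caller-owned buffer is the tell: ready-based hands you a moment, completion-based hands you the data.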

Libevent 2 explains this really well in this blog post:

See also
+William Edwards Don't worry, I won't rubberstamp Twisted or Tornado, with or without API renamings. (If you read the python-ideas threads that will become abundantly clear.) Thanks for the explanation of ready-based vs. completion-based. People have already suggested to look at IOCP (and I did, a little bit) so I think we've got that covered too -- thanks for the reminder. And thanks for the links!
Yeah, I read about gevent going libev as an interested bystander. Short term, I wish they'd do something for poor Windows users to get IOCP. But strategically, I think a lot of the issues were low-level issues, and that with the passing of time and the evolution of libraries, the same conclusions might not be drawn in a shoot-out a few months from now.

I looked at the early libev and libevent when I was building the server that became hellepoll, and at the time neither of them was much use to me. Presumably, with the passing of time, a decision to do my own epoll handling would be crazy now.

So I guess it's really about APIs. If Python had batteries-included async, I'd want it to internalise that logic rather than relying on some third-party library to map between some intermediate abstraction and the OS. The point the gevent team made about old buggy libevent packages on various platforms is a good one.

At its core, async I/O code of any description is not too much code. Having intermediate 'general-purpose' libraries between Python and the OS APIs could well be more problem than solution.

I've tried to refine my thoughts here:
Doubtless overambitious ;)
+Guido van Rossum The biggest problem with libevent is that it often fails on highly loaded machines (I should still have, somewhere, failing scenarios like "unable to resolve 'localhost'" or broken send/receive data, just because...). It usually starts to return weird errors. Mostly the DNS resolver is broken; that's why gevent uses c-ares for DNS. Also, in former (maybe even current) releases, libevent's HTTP support lacked keep-alive. I had really terrible situations with gevent 0.x (which uses libevent) and almost no problems with the 1.x branch (libev).
Great direction! I see some key questions which may affect the requirements.

1. What are the expected use cases? (project type, platform, developer qualification, load)
2. How robustly should it work? Or what kinds of failure are admissible in each use case?
3. What about supported platforms; should Windows be among them? IOCP support in the Python ecosystem is relatively poor. (This has already been mentioned many times above.)
4. What level of incompatibility with the rest of the stdlib could we afford? What volume of changes in the rest of the stdlib is acceptable?
I wonder what the best way is to mix the need for an event loop with a simple task/thread scheduler.

Go services syscalls on their own threads (the poll fd server, for example). And it seems that Rust does the same when it uses libuv: the libuv loop runs in another task's thread.
Many event loops, especially GUI ones, contain complex objects that can only be called from that loop's thread, and which the programmer handling events needs to call, e.g. to get or set a button's label.

I/O, on the other hand, doesn't really fit this model. If you want to perform I/O asynchronously, the only meaningful thing to do while it's outstanding is to cancel it. (I would love to be corrected if I'm wrong.)

But we'd also want someone using a GUI event-loop (or "game loop") to be able to perform asynchronous IO.

We don't want every game platform to have to implement its own version of async IO just because the stdlib async IO assumes its own looper, and we don't want the user to have to juggle threads.

It makes complete sense what people say about wanting to run IOCP and even epoll/kqueue etc. on their own native threads (although we haven't seen profiling? I'd be surprised if it weren't true; it'd bother me to write buffered bytes to sockets only as fast as the GIL lets me).

So suddenly why would we imagine that the (Python) looper is the IO reactor?

We have two concepts here: on the Python side, we need an event loop to consume asynchronous events, and that event loop is often prescribed by a GUI or game library; and on the async I/O side, these will typically have their own non-Python code doing its thing, using the Python side only to make decisions and run user code.

While a rich standard library is good, I don't think a standard framework is. You guys were sane enough not to impose a blessed web framework on us, instead providing bits and pieces to make it easier to build one. I would rather expect a couple of idioms and patterns for asynchronous programming and some reference implementations, so that, for example, one could build a module that works asynchronously with a service X, and someone else could just plug it into his Twisted application, still someone else into his Qt application, still another into a PyGObject one, all without needing to write ugly fat adapters that are bigger than the module in question. But, for heavens' sake, not "you MUST use our own event loop to take advantage of that feature". That would be just silly.

Also consider the plethora of event loops: I don't think you can make the One Loop that would bring them all, find them, and in the darkness bind them, and yet remain clean and lean and not be a dirty, ugly, warty hack unworthy of the stdlib. Prove me wrong and I still wouldn't believe it. Or you would end up as in the xkcd about standards.
+William Edwards Is there any way I can convince you to post to python-ideas? Subscribing is simple, and that way the record is in one place. FWIW, I think it's too early to settle on API details, but the observations about GUI loops are good. Most event loops that I've seen actually have a way to say "go do your thing for a short while" (maybe with variants like "do it until you are out of immediate work", "make the smallest amount of progress without stalling", or "do it for N msecs", and maybe "and return an estimate for how soon you'd like to be called again"). A good design of that general shape should make it easy to multiplex several event loops that know nothing about each other. Interfacing to things that need to have their own thread is another specialty.
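That "run a short while" shape might be sketched like this (a hypothetical `run_once` API, purely illustrative of the multiplexing idea):

```python
class TickingLoop:
    """A loop exposing run_once(): do one bounded slice of work and
    report whether anything is still pending.  Here the 'work' is a
    queue of callables standing in for GUI events or I/O readiness."""

    def __init__(self, jobs):
        self.jobs = list(jobs)
        self.done = []

    def run_once(self):
        if self.jobs:
            self.done.append(self.jobs.pop(0)())   # one unit of work
        return bool(self.jobs)                     # more work pending?


def multiplex(loops):
    """Round-robin over loops that know nothing about each other,
    dropping each loop once it reports no pending work."""
    pending = list(loops)
    while pending:
        pending = [loop for loop in pending if loop.run_once()]


gui = TickingLoop([lambda: "redraw", lambda: "layout"])
io = TickingLoop([lambda: "read"])
multiplex([gui, io])
```

A production version would add timeouts and the "how soon to call me again" estimate Guido mentions, but the composability comes entirely from the bounded-slice contract.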
+Yaroslav Fedevych Mostly agreed. I reference that XKCD frequently (in fact my CS professor in the '70s taught me the same thing and I've been repeating it ever since :-). What I mostly want to achieve is a serious upgrade of what the stdlib offers -- the async stuff is too low-level (wrappers for select(), poll(), kqueue(), epoll(), whichever ones the platform supports, and socket.setblocking(False)), wildly outdated (asyncore/asynchat), and most of the stdlib doesn't work with these (e.g. where's the async support in httplib or urllib? Can I combine gzip and TextIOWrapper without ever blocking for I/O?). But please come help out on python-ideas instead of posting heavy discussion on this thread -- I love G+, but not for extended technical discussions.
+Guido van Rossum: I'm sorry I haven't replied to you yet. You were right to be confused -- what I wrote was telegraphic and confusing. I apologize. I have added "write something sane about this in reply to GvR" on my todo list. ☺
I think it is really needed. While I am not an expert, I will just list a use case and some requirements I think would be good.

My ideal use case is implementing actors/agents as objects, in a way that lets me simulate a population, in a process, game, or scenario behavior.

It's different from genetic algorithms, because (ideally) we can have programmable attributes/behaviors over time/iterations.

The behaviors/attributes of agents depend on the info they receive from the environment, so we need to attach sensors to them.

All interaction with the environment needs to go through a 'sensor', whatever that may be.

An agent/actor needs to have its own schedule, to act out its own routine, whatever happens in the environment.

"Time" itself can be implemented using a sensor (maybe a default one).

Other basic sensors could cover computer resources (processors, memory, disk space, used process/memory) and other devices, such as network configs.

This way an agent can reschedule itself for a less busy environment, or maybe be rescheduled by a main or supervisor 'agent'.

In the cloud era, the final touch of perfection would be interaction between servers, even moving or copying agents between servers to achieve load balancing.

It's my 'impossible dream' right now.

PS: I forgot to mention adding, as a sensor (or attribute, not sure), a 'physical or virtual position' in 3D, which is very much needed for simulations to calculate velocity and elapsed time and to give a sense of 'real time'.
+Alexandre Miguel de Andrade Souza That is a very interesting use case! I hadn't thought of simulations in this way but I believe that Stackless and Greenlets are indeed used for this purpose. I also do think that implementing certain behaviors as generators (used as coroutines, per PEP 380) makes a lot of sense.
You are right, I had looked at Stackless and Greenlets. I liked Stackless, except for two things: it's not "standard" Python (I hope a better solution can arrive from this discussion), and its way of working isn't elegant to my sense. What I want is:
a basic object (the actor/agent) with a self-schedule, sensors, and hopefully 3D awareness and networking, extensible to add attributes and behaviors; or better yet, one I can instantiate to read attributes and call behaviors, since behaviors can be shared (or learned) between agents/actors.

While I am just starting to research the subject, my 'grail' is an actor pattern with A.I./machine learning implemented via programmable sensors and behaviors, aware not only of the 3D environment but (in some cases) of other agents' behavior.

This way, we can:

1. In a system, use user profiles and choices to route processes, make system decisions, and schedule tasks.
2. Implement games (even MMOs) with complex A.I. (EVE Online used Stackless).
3. Create scenario applications, defining attributes and behaviors of agents/actors, and watch the results over time (including processes), even using real-life data to seed the scenario or to update it.

Equalizer++ is an OpenGL C++ library that looks very promising in some respects, except that its main focus is rendering 3D scenes using GPUs. While the end user might find it nice to implement a 3D interface/application in some cases, not all possible uses need a GPU. Its components Lunchbox and Collage could be a nice place to start looking.
The other components, GPU-SD and Sequel, could be interesting too, not only for GPU features but, if extended, for other resources as well: processors, memory, disks, etc.
I have just about finished preparing my TaskIt library. It has a simple same-process asynchronizer, a resynchronizer module for doing useful things with callbacks, and a powerful distributed processing framework. It does not use an event loop.
+Daniel Miller Can you summarize how Worq differentiates itself from PEP 3153 (which is implemented in Python 3.2 and up)?
It seems that PEP 3153 is (maybe necessarily?) an abstract discussion of the components needed to implement an async library, but it has few details of what the API would actually look like. Some code examples would be helpful. Is ``concurrent.futures`` the implementation in 3.2 that you're referring to? PEP 3153 does not refer to concurrent.futures, and the concurrent.futures docs do not refer to that PEP. I really need to make my way over to Python Ideas and do a bunch of reading; I'm sure most of these questions would be answered there, but it's going to take a bunch of time for me to catch up :)

The thing I like most about Worq is its very simple API for invoking asynchronous tasks. It tries hard to use reasonable defaults to keep the most common case simple. It also tries to make the more complicated things possible, if a bit more verbose. In the simplest, and hopefully most common, case, invoking a task is a function call.

# simple
deferred = q.task(*args, **kw)

# more complex
task = Task(q.task, **options)
deferred = task(*args, **kw)

The returned "deferred" object can be passed to another task as an argument, or one can wait for the real result to become available. The result-as-argument feature makes it easy to queue up a graph of tasks to be executed asynchronously.

Why does concurrent.futures.Executor provide a map() function when Python has moved toward recommending list comprehensions over map()? Worq does it like this:

results = [q.task(item) for item in items]

And then...

total = q.sum(results)
print total.value

Can I do that with a list of concurrent.future.Future objects? That is, can I pass a Future object as an argument to a task and have the task execution deferred until the Future result is available?
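For comparison, here is how that fan-out/reduce might look with the stdlib's concurrent.futures (a sketch; `square` is just a placeholder task of my own). As far as I can tell, a Future passed as an argument arrives as an ordinary object; nothing defers the receiving task until the Future resolves, so the reduce step has to call .result() itself.

```python
from concurrent.futures import ThreadPoolExecutor


def square(x):
    """Placeholder task for illustration."""
    return x * x


with ThreadPoolExecutor(max_workers=4) as pool:
    # Fan out with a list comprehension, as in the Worq example above.
    futures = [pool.submit(square, item) for item in range(5)]
    # The reduce step resolves each Future explicitly.
    total = sum(f.result() for f in futures)
```

So the result-as-argument chaining Worq describes would need to be layered on top; it is not something the executor does for you.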

I also think Worq is interesting because it expands the scope of task execution beyond a single machine when using a queue backend such as Redis. Someone has encouraged me to implement a concurrent.futures-compatible API for Worq, but I haven't gotten around to that yet ;-)
I don't know if somebody noticed, but uvent (a gevent core based on libuv) is cross-platform and delivers high-performance IO on Windows (not select) too.
+Guido van Rossum You have said before that cross-platform support is crucial and gevent is not cross-platform, so maybe this approach would be better?
Additionally, gevent would probably switch to libuv in the future as well.
Tony R.
Well, there is Python Futures (concurrent.futures).

But if you really want the best for async, you want to look to Node.js for inspiration. Don't just look at Node.js's standard library—check out the most popular packages on NPM.

It’s not always comfortable looking at JS, but you can bet that a programming community based around an inherently async language will have a rich set of perspectives about API design.