[PEAK] StopIteration: Unexpected reactor exit

Phillip J. Eby pje at telecommunity.com
Thu Mar 25 23:37:54 EST 2004


At 07:59 PM 3/25/04 -0800, John Landahl wrote:
>On Thu, 25 Mar 2004 19:35:56 -0500, Phillip J. Eby <pje at telecommunity.com> 
>wrote:
>>Only because you're using Twisted for both levels of event loop.  PEAK 
>>allows you to have as many event loops as you like, but Twisted supports 
>>only one.
>
>By this do you mean that there can be only one EventLoop instance per 
>application (which is the case in my app; it uses 
>"binding.Obtain(events.IEventLoop)"), or one active call to that 
>instance's runUntil()?

I believe that if you use PEAK's native event loop, you can actually have 
more than one 'runUntil()' simultaneously active on the same instance, 
although I wouldn't recommend doing that indiscriminately.  But you 
definitely can have multiple event loop instances.

However, Twisted supports only one reactor, so if you're using it, every 
PEAK IEventLoop object is a wrapper around that same reactor.  All your 
service areas are really sharing the same Twisted event loop, and I don't 
believe that will work properly with a nested event loop.


>Some further information gathering has revealed that the reactor does not 
>get stopped until the Task in question next tries to yield.  All interim 
>EventLoop.runUntil() invocations cause no problems.  In fact, if I change 
>the yields in the Task to eventLoop.runUntil() calls, the problem does not 
>occur.
>
>So to be a little more specific:
>
>    Task A
>      Creates a new Task B
>        Runs a method on object C (func1)
>          calls func2 (indirectly)
>            calls reactor.spawnProcess(), waits for completion with runUntil()
>            calls a remote PB method, waits for completion with runUntil()
>            returns data
>          data is used to setup another spawnProcess()
>        yields on a deferred returned by C
>        processes data from the deferred
>        reports results via a PB call

I'm lost here.  Why not just make func1 and func2 asynchronous?  You're 
still not showing anything that requires them to be synchronous.  Note that 
the first value yielded from a generator will be returned to the calling 
task, so e.g.:

yield func1(); result = events.resume()

is the same as:

result = func1()

if func1() is written to yield its result instead of returning it.

And the same for func2, of course.  Just yield the data instead of 
returning it.
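
For concreteness, here's a rough sketch of the shape I mean.  (The helper 
names spawnTheProcess(), callRemotePB(), and runNextSpawn() are invented, 
and I'm assuming each of them returns a Deferred; adjust to your actual 
code.)

    from peak.api import events

    def func2(self):
        # Instead of blocking with runUntil(), yield each Deferred and
        # pick up its result with events.resume() when the Task resumes:
        yield self.spawnTheProcess();  output = events.resume()
        yield self.callRemotePB(output);  data = events.resume()
        yield data   # a plain value is returned to the calling task

    def func1(self):
        yield self.func2();  data = events.resume()
        # set up the next spawnProcess() using 'data', wait for it,
        # and hand the final result back to the calling task:
        yield self.runNextSpawn(data);  result = events.resume()
        yield result

There are no runUntil() calls anywhere, so nothing ever re-enters the 
reactor.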


>Immediately after this yield, the runUntil() called by 
>EventDriven.mainLoop raises StopIteration.  Using runUntil() here instead 
>of yield does not cause the StopIteration.  Could there be something in 
>the Task class that causes B to end prematurely?

Which yield?  Your explanation ends with "reports results via a PB call".



>FWIW, the Twisted reactor's mainloop ends directly after the return on 
>line 462 of peak.events.event_threads.py, followed by the early demise of 
>EventDriven.mainLoop.eventLoop.runUntil().  Not knowing what calls 
>Task.step(), I'm not sure how to trace what happens between the return and 
>the end of the reactor's mainloop.

Presumably, it was the reactor.  Anyway, what you're seeing is that the 
reactor exited after the task suspended on whatever it was waiting 
for.  Why did it exit?  Because Twisted reactors don't support re-entrant 
run() calls, and nesting event loops makes exactly such calls.  The outer runUntil()
calls reactor.run(), after setting up a callback to crash the reactor when 
the condition is met.  So, the reactor sets self.running to true, and 
begins running.

Inside that run loop, you call runUntil() a second time, which makes a 
nested call to run(), and reactor.running is again set to true 
(redundantly, of course).  But that nested runUntil() also sets a callback 
to crash the reactor when its condition is met.  That condition gets met, 
the reactor "crashes", setting running to False, and runUntil() 
returns.  You then call runUntil() a third time (the second nested call), 
which calls reactor.run(), which sets reactor.running to True again.  Then 
it finishes, crashes, and sets running back to False.

Now the part that breaks it: your task finishes its current work, and 
suspends, returning control to the outer 'reactor.run()' invocation that it 
was called from.  But reactor.running is now false!  So the outer 
reactor.run() call immediately exits to the runUntil() that called 
it.  runUntil() detects that the reactor stopped early, i.e. without 
fulfilling the condition it was waiting for, so it raises an error.
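
If it helps to see the failure mode in miniature, here's a schematic (this 
is *not* Twisted's actual code, just the shared-flag pattern described 
above):

    class FlagBasedReactor:
        # Schematic only: one 'running' flag shared by every run() call.
        def __init__(self):
            self.running = False
            self.work = []               # stand-in for pending events

        def run(self):
            self.running = True
            while self.running and self.work:
                self.work.pop(0)()       # dispatch one "event"

        def crash(self):
            self.running = False         # clears the flag at *every* level

    reactor = FlagBasedReactor()

    def nested_runUntil():
        # pretend the awaited condition is met on the very next event:
        reactor.work.insert(0, reactor.crash)
        reactor.run()                    # nested run() exits via crash()

    reactor.work.append(nested_runUntil)
    reactor.work.append(lambda: print("never dispatched"))
    reactor.run()
    # The outer run() falls out of its loop as soon as it regains
    # control, because the nested crash() cleared self.running.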

And now you see why you can't nest event loops with Twisted: reactor.run() 
is simply not re-entrant.  You *must* change your code to be fully 
asynchronous if you want to work with Twisted.

By contrast, PEAK's native EventLoop implementation is re-entrant, because 
it doesn't use an attribute like 'self.running' to manage its loop 
state.  So, you can nest calls to 'runUntil()' on a PEAK EventLoop instance 
if you need to.  (Again, make sure you really need to do that, and that 
you're not just overlooking a way to make everything you're dealing with 
asynchronous.  Also check whether you really want to use the *same* 
EventLoop instance, rather than a separate one specifically for the 
nested activity.)
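
Schematically, the difference is that the loop's exit condition lives in 
the call, not on the instance (again, just a sketch, not PEAK's actual 
implementation):

    class ReentrantLoop:
        def __init__(self):
            self.work = []

        def runUntil(self, condition):
            # 'condition' is a local of *this* invocation; a nested
            # runUntil() checks its own condition, so finishing the
            # inner loop can't knock the outer one out of its loop.
            while not condition():
                if self.work:
                    self.work.pop(0)()   # real loops block on I/O/timers

Each invocation exits only when *its* condition is met, so nesting is 
safe.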


>>You'll have to make the synchronous functions asynchronous then.  What 
>>you do is take all the 100% synchronous parts (i.e. parts that *don't* 
>>call any asynchronous code), and farm them out to threads using 
>>reactor.deferToThread.  You'll then be left with a 100% asynchronous task.
>
>I don't know if it's possible to refactor it this way since the data func2 
>returns to its caller is dependent on the two Twisted deferreds.  One 
>possibility might be to have these run in the main thread through 
>callFromThread(), but that might be difficult in this scenario.  I'll look 
>into this option further.

I think you're making this too hard, and perhaps overlooking the fact that 
you can make methods of your other objects be generators, and yield calls 
to those generators within a Task.
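
Something like this, in other words (the names are invented; it's the 
shape that matters):

    from peak.api import events

    class Collector:                     # your object 'C'
        def getData(self):
            # A generator method: suspend on the Deferred, then yield
            # the result so the calling task receives it.
            yield self.makeRemoteCall();  result = events.resume()
            yield result

    # ...and inside Task B's generator:
    #
    #     yield someC.getData();  data = events.resume()
    #
    # where 'data' is the value getData() yielded.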



