|
From: Michiel de H. <mjl...@ya...> - 2009-07-17 01:57:49
|
The chunkiness probably comes from the fact that inputhook_wx is called repeatedly. This is different from how PyOS_InputHook is being used in Tkinter, PyGTK, and the Mac OS X backend.

Schematically, this is how the Tkinter/PyGTK/Mac OS X event loops work:

1) PyOS_InputHook is called when Python is waiting for the user to type in the next Python command.
2) The hook function sets up the event loop such that stdin is being monitored while the event loop is running.
3) The hook function then starts the event loop.
4) When input is available on stdin, the hook function exits the event loop and returns.

This is how the proposed wx event loop currently works:

1) PyOS_InputHook is called when Python is waiting for the user to type in the next Python command.
2) The hook function processes whatever events are available at the time.
3) The hook function returns.
4) If still no input is available on stdin, Python calls the hook function again via PyOS_InputHook after a timeout.

I believe the timeout is 0.1 seconds by default. However, Python may not call PyOS_InputHook repeatedly at all; this depends on which Python version is being used and on the version of the readline library. In some configurations (particularly on Windows), PyOS_InputHook is called only once, so wx will freeze between Python commands.

I am not familiar with wx, but hopefully there is some way to monitor stdin while the event loop is running?

--Michiel.

--- On Thu, 7/16/09, Brian Granger <ell...@gm...> wrote:

> From: Brian Granger <ell...@gm...>
> Subject: Re: [matplotlib-devel] [IPython-dev] [Enthought-Dev] Ctypes based prototype of PyOS_InputHook for wx 2.8 and 2.9
> To: "Robert Kern" <rk...@en...>
> Cc: ent...@en..., "matplotlib development list" <mat...@li...>, "IPython Development list" <ipy...@sc...>
> Date: Thursday, July 16, 2009, 6:57 PM
>
> Robert,
>
> Thanks for testing this so quickly. Performance is one of the big issues
> that I am concerned about. I will work on a Cython based version to see
> if that solves the problem.
>
> Cheers,
>
> Brian
>
> > Works for me with wx 2.8.8.1 on OS X 10.5 and Chaco. Pan and zoom
> > interactions are substantially chunky, though. I do not see such
> > chunkiness with -wthread. It would be worth exploring a Cython
> > alternative to see if it is just ctypes and general Python overhead to
> > blame.
> >
> > --
> > Robert Kern
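To make the wx scheme described above concrete, here is a minimal sketch of a hook of that kind, assuming wxPython 2.8 and an already-created wx.App; it is illustrative only and is not the actual prototype attached to this thread.

    import wx

    def inputhook_wx_once():
        """Process pending wx events, then return so readline can poll stdin.

        Python/readline is expected to call this hook again (via
        PyOS_InputHook) after its own timeout if no input has arrived.
        """
        app = wx.GetApp()
        if app is not None:
            evtloop = wx.EventLoop()
            activator = wx.EventLoopActivator(evtloop)
            # drain whatever events are queued right now, then hand control back
            while evtloop.Pending():
                evtloop.Dispatch()
            app.ProcessIdle()
            del activator
        return 0

The key difference from the Tkinter/PyGTK/Mac OS X hooks is that this function returns immediately instead of blocking inside the toolkit's event loop until stdin has input.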
|
From: Michiel de H. <mjl...@ya...> - 2009-07-17 06:32:11
|
Without monitoring stdin, you could do the following:

    while True:
        run the event loop for a specified duration (say, 0.1 seconds)
        check for input on stdin; if there is any: break

But you can only do this if wx has such a time-out capability. If not, you can do the following:

    while True:
        handle all accumulated events
        check for input on stdin; if there is any: break
        sleep for 0.1 seconds

The sleep is important, otherwise the CPU is busy 100% of the time, which will drain your battery.
This loop is essentially what you are doing in your current code, except that you're using Python/readline for the repeated calls into the hook function. It's better to have this loop explicitly inside your hook function, because of the variation in PyOS_InputHook behavior between different versions of Python/readline.
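As a concrete sketch of the second loop, here is roughly what it could look like with the wx 2.8 API, assuming a POSIX stdin that select() can poll and a running wx.App; this is an illustration of the idea, not the code that was actually posted to the thread.

    import select
    import sys
    import time

    import wx

    def stdin_ready():
        """Return True if input is waiting on stdin (POSIX only)."""
        ready, _, _ = select.select([sys.stdin], [], [], 0)
        return bool(ready)

    def inputhook_wx_poll():
        """Run wx events until stdin has input, sleeping between polls."""
        app = wx.GetApp()
        if app is not None:
            evtloop = wx.EventLoop()
            activator = wx.EventLoopActivator(evtloop)
            while not stdin_ready():
                # handle all accumulated events
                while evtloop.Pending():
                    evtloop.Dispatch()
                app.ProcessIdle()
                # sleep so the CPU is not busy 100% of the time
                time.sleep(0.1)
            del activator
        return 0

On Windows, select() does not work on stdin, so a different readiness check (for example msvcrt.kbhit()) would be needed there.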
--Michiel
--- On Fri, 7/17/09, Brian Granger <ell...@gm...> wrote:
> From: Brian Granger <ell...@gm...>
> Subject: Re: [matplotlib-devel] [IPython-dev] [Enthought-Dev] Ctypes based prototype of PyOS_InputHook for wx 2.8 and 2.9
> To: "Michiel de Hoon" <mjl...@ya...>
> Cc: "Robert Kern" <rk...@en...>, ent...@en..., "matplotlib development list" <mat...@li...>, "IPython Development list" <ipy...@sc...>
> Date: Friday, July 17, 2009, 12:59 AM
> Michiel,
>
> Thanks for the reply, this will help us to find a better
> approach. According to one of the wx devs, Robin Dunn, wx
> currently does not have the ability to monitor stdin in its
> event loop without polling. I guess there is a GSoC project
> to add this capability, but it is not there yet. Any
> thoughts on how this could be done without monitoring
> stdin? I will give the polling stdin approach a try
> though.
>
>
> Cheers,
>
> Brian
|
|
From: Robert K. <rk...@en...> - 2009-07-17 19:58:26
|
On Fri, Jul 17, 2009 at 14:48, Brian Granger <ell...@gm...> wrote:
> Michiel,
>
> Thanks for the ideas. I have implemented both of the approaches you
> describe and I am attaching a file that has all 3 approaches. At this
> point, all 3 approaches work on OS X, Python 2.5 with wx 2.8/2.9. What I
> most need is to find strenuous test cases that can probe which of these
> has the best performance. Robert, could you run the Chaco test again with
> approaches 2 and 3 and try tuning the parameters (see the docstrings)?

#2 was pretty good out-of-box. #3 was slightly better than #1 but still noticeably chunky. Reducing the sleep down to 0.01 instead of 0.05 made things appreciably smooth. I thought I noticed a tiny bit of chunkiness, but I certainly didn't do a double-blind trial.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
|
From: Ondrej C. <on...@ce...> - 2009-07-17 20:08:00
|
On Fri, Jul 17, 2009 at 1:57 PM, Robert Kern <rk...@en...> wrote:
> #2 was pretty good out-of-box. #3 was slightly better than #1 but
> still noticeably chunky. Reducing the sleep down to 0.01 instead of
> 0.05 made things appreciably smooth. I thought I noticed a tiny bit of
> chunkiness, but I certainly didn't do a double-blind trial.

Exactly the same observation on Linux: #1 is the slowest, #3 quite good, #2 perfect. However:

with #2, if I did copy and paste of some command into the python terminal, I could see how ipython was putting the command letter by letter on the prompt, e.g. by pasting "inputhook.remove_inputhook()" I could literally see:

    i
    in
    inp
    inpu
    ...

(everything on one line, as if there was a sleep(0.05) between each letter)

with #1 and #3, pasting was immediate.

Ondrej
|
From: Ondrej C. <on...@ce...> - 2009-07-17 20:13:16
|
On Fri, Jul 17, 2009 at 2:07 PM, Ondrej Certik <on...@ce...> wrote:
> with #1 and #3, pasting was immediate.

so I also reduced the sleep in #3 from 0.05 to 0.01, and then #3 is absolutely smooth for me and pasting into ipython is also immediate; this looks like a perfect solution to me.

Ondrej
|
From: Brian G. <ell...@gm...> - 2009-07-17 20:49:45
|
Ondrej and Robert,

Thanks for testing this. Some comments:

2) We can speed up pasting and general keyboard response by changing the polling time. Pasting is very slow at the original setting of 50. But if you make it smaller, pasting becomes faster (although still not instant).

3) We can speed up the GUI response by decreasing the time.sleep interval. The setting of 0.01 works pretty well.

Why not decrease the polling or sleep times even further? As you decrease either of these times, the idle CPU load starts to go up. Here is what I observe on my MacBook Pro (both 2 and 3 show the same result):

- polling/sleep time of 1 ms gives about 13% CPU load
- polling/sleep time of 5 ms gives about 3% CPU load
- polling/sleep time of 10 ms gives about 1.5% CPU load

In summary, method 3 with a time of 10 ms seems like the best overall approach. However, I am going to leave in the other methods and make it easy to set the time intervals. That way, people who want to optimize performance for particular usage cases can.

Now, on to testing for Windows. Can anyone help with that?

Thanks,

Brian

On Fri, Jul 17, 2009 at 1:13 PM, Ondrej Certik <on...@ce...> wrote:
> so I also reduced the sleep in #3 from 0.05 to 0.01, and then #3 is
> absolutely smooth for me and pasting into ipython is also immediate;
> this looks like a perfect solution to me.
>
> Ondrej
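As a rough sanity check of the CPU-load figures above: if each wake-up of the polling loop costs a fixed amount of CPU time, the idle load is approximately the per-wake cost divided by the sleep interval. The per-wake cost of about 0.15 ms below is an assumption inferred from those figures, not a measured value.

    # back-of-the-envelope model: idle_load ~= per_wake_cost / sleep_interval
    per_wake_cost_ms = 0.15  # assumed CPU time burned per poll iteration

    for interval_ms in (1, 5, 10):
        load_pct = per_wake_cost_ms / interval_ms * 100
        print("sleep %2d ms -> ~%4.1f%% idle CPU" % (interval_ms, load_pct))
    # prints roughly 15%, 3%, and 1.5%, close to the loads reported above

This also explains why halving the sleep interval roughly doubles the idle CPU load.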
|
From: Gael V. <gae...@no...> - 2009-07-17 21:44:47
|
On Fri, Jul 17, 2009 at 02:13:08PM -0600, Ondrej Certik wrote:
> so I also reduced the sleep in #3 from 0.05 to 0.01, and then #3 is
> absolutely smooth for me and pasting into ipython is also immediate;
> this looks like a perfect solution to me.

Polling at 100 Hz is a horrendous solution from a technical point of view. I typically have a dozen IPython instances open where I was working a while ago but am not doing anything right now, because I am planning to come back to them. Having these all poll at 100 Hz will keep my laptop hot, make it switch context all the time, and drain the battery. Adobe Flash works that way; I use it as seldom as possible.

One trick I sometimes play when I am developing software that needs to poll and cannot be event-driven is to enable polling when there is activity, but turn it off when there is none. I am not sure how you can adapt that idea here, though.

Gaël
|
From: Brian G. <ell...@gm...> - 2009-07-17 21:54:24
|
Gael,

> Polling at 100 Hz is a horrendous solution from a technical point of view.
> I typically have a dozen IPython instances open where I was working a
> while ago but am not doing anything right now, because I am planning to
> come back to them. Having these all poll at 100 Hz will keep my laptop
> hot, make it switch context all the time, and drain the battery.
> Adobe Flash works that way; I use it as seldom as possible.

I agree that polling is a non-optimal approach. But, until wx supports monitoring stdin from within the event loop, we are stuck with polling.

Because of usage cases like yours, I think it is important that users be able to tune these things. For example, slower polling intervals work just fine for many things (like basic matplotlib plots) and put essentially zero load on the CPU. It also depends on what type of compromises you are willing to make. If you don't mind slightly slower keyboard response but want super fast GUI response, then approach 2 will work great. Likewise, if you don't mind slow GUI response but want a fast keyboard, then approach 3 is best. Bottom line = we are into a position of compromise because of wx. The good news is that I think we can offer users a very flexible way of tuning all these things.

> One trick I sometimes play when I am developing software that needs to
> poll and cannot be event-driven is to enable polling when there is
> activity, but turn it off when there is none. I am not sure how you can
> adapt that idea here, though.

I will think about this.

Cheers,

Brian
|
From: Ville M. V. <viv...@gm...> - 2009-07-17 21:57:35
|
On Sat, Jul 18, 2009 at 12:54 AM, Brian Granger <ell...@gm...> wrote:

> best. Bottom line = we are into a position of compromise because of wx.
> The good news is that I think we can offer users a very flexible way of
> tuning all these things.

Perhaps an adaptive autotuning algorithm could help your case; if stdin came in rapidly, poll again very soon, otherwise adjust the delay.

--
Ville M. Vainio
http://tinyurl.com/vainio
|
From: Ondrej C. <on...@ce...> - 2009-07-17 23:15:26
|
On Fri, Jul 17, 2009 at 3:57 PM, Ville M. Vainio<viv...@gm...> wrote:
> On Sat, Jul 18, 2009 at 12:54 AM, Brian Granger<ell...@gm...> wrote:
>
>> best. Bottom line = we are into a position of compromise because of wx.
>> The good news is that I think we can offer users a very flexible way of
>> tuning all these things.
>
> Perhaps an adaptive autotuning algorithm could help your case; if stdin
> came in rapidly, poll again very soon, otherwise adjust the delay.
The following patch implements this in the #3 approach:
$ diff -Naur /home/ondrej/Desktop/inputhook.py inputhook.py
--- /home/ondrej/Desktop/inputhook.py 2009-07-17 14:09:34.000000000 -0600
+++ inputhook.py 2009-07-17 17:12:37.000000000 -0600
@@ -110,17 +110,26 @@
     This sleep time should be tuned though for best performance.
     """
     import wx
+    from timeit import default_timer as clock
     app = wx.GetApp()
     if app is not None:
         assert wx.Thread_IsMain()
         evtloop = wx.EventLoop()
         ea = wx.EventLoopActivator(evtloop)
+        t = clock()
         while not stdin_ready():
             while evtloop.Pending():
+                t = clock()
                 evtloop.Dispatch()
             app.ProcessIdle()
-            time.sleep(0.01) # Change this to tune performance
+            if clock() - t > 0.1:
+                # no input is happening, we can sleep as much as we want
+                time.sleep(0.05)
+            else:
+                # input is happening, either wx (e.g. mouse) or keyboard, so
+                # sleep only very little
+                time.sleep(0.001)
         del ea
     return 0
Now if no input is happening, the sleep(0.05) branch is taken, so the
hook has very low CPU usage. If, however, some input is happening
(either matplotlib or ipython), then we only sleep(0.001); maybe we
don't have to sleep at all, I am not sure about this.
In any case, this should fix Gael's objection.
Ondrej
|
|
From: Robert K. <rk...@en...> - 2009-07-17 22:02:01
|
On Fri, Jul 17, 2009 at 16:54, Brian Granger <ell...@gm...> wrote:
> I agree that polling is a non-optimal approach. But, until wx supports
> monitoring stdin from within the event loop, we are stuck with polling.

Can you describe the patch you are putting together for wxPython? Or is it wxWidgets? Perhaps there is a way for us to monkeypatch the same approach into old versions.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
|
From: Brian G. <ell...@gm...> - 2009-07-17 22:31:23
|
> Can you describe the patch you are putting together for wxPython? Or
> is it wxWidgets? Perhaps there is a way for us to monkeypatch the same
> approach into old versions.

There is *very* little difference between my ctypes prototype and the patch for wxPython. The only real differences are these:

* A few lines of C code that set PyOS_InputHook and handle threading
* A wx.App subclass called IApp that turns on the capability and has the implementation of the inputhook

We could definitely monkeypatch wx with this IApp class.

Cheers,

Brian
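For readers unfamiliar with the ctypes side of this, the sketch below shows the general trick of installing a Python callable into the interpreter's PyOS_InputHook slot via ctypes. It is an illustration of the technique, not the actual prototype attached to this thread, and the function name is made up.

    import ctypes

    # PyOS_InputHook has the C signature `int (*hook)(void)`
    HOOKFUNC = ctypes.PYFUNCTYPE(ctypes.c_int)

    def install_inputhook(callback):
        """Point the PyOS_InputHook slot of the running interpreter at callback.

        The returned ctypes function object must be kept alive by the caller,
        otherwise the callback is garbage collected and the hook dangles.
        """
        c_callback = HOOKFUNC(callback)
        hook_slot = ctypes.c_void_p.in_dll(ctypes.pythonapi, "PyOS_InputHook")
        hook_slot.value = ctypes.cast(c_callback, ctypes.c_void_p).value
        return c_callback

A wx.App subclass like the IApp mentioned above would then only need to call something like install_inputhook(inputhook_wx) when it starts, which is why monkeypatching an existing wx looks feasible.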
|
From: Robert K. <rk...@en...> - 2009-07-17 22:41:18
|
On Fri, Jul 17, 2009 at 17:31, Brian Granger <ell...@gm...> wrote:
> There is *very* little difference between my ctypes prototype and the
> patch for wxPython.

Which approach? #1?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
|
From: Brian G. <ell...@gm...> - 2009-07-17 22:58:59
|
The current patch for wxPython is based on approach 1, but that is obviously going to change given what we are seeing performance-wise. Once I have a ctypes version that is really well tested (also on Win32 and Linux), I will help create a patch for wx that implements that approach.

Cheers,

Brian

On Fri, Jul 17, 2009 at 3:40 PM, Robert Kern <rk...@en...> wrote:
> Which approach? #1?
|
From: Brian G. <ell...@gm...> - 2009-07-17 04:59:56
|
Michiel,

Thanks for the reply, this will help us to find a better approach. According to one of the wx devs, Robin Dunn, wx currently does not have the ability to monitor stdin in its event loop without polling. I guess there is a GSoC project to add this capability, but it is not there yet. Any thoughts on how this could be done without monitoring stdin? I will give the polling stdin approach a try though.

Cheers,

Brian

> The chunkiness probably comes from the fact that inputhook_wx is called
> repeatedly. This is different from how PyOS_InputHook is being used in
> Tkinter, PyGTK, and the Mac OS X backend.
>
> I am not familiar with wx, but hopefully there is some way to monitor
> stdin while the event loop is running?
>
> --Michiel.
 |