I was running benchmarks of various async web frameworks, à la http://brizzled.clapper.org/id/88.html.
I was using examples/hello and hitting http://localhost:8080/short (which serves a small file from disk via a standard blocking open() call) with 50 concurrent connections via ab. What happens, of course, is that the process quickly runs out of file descriptors.
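For reference, the handler boils down to something like this (the path and function name here are made up for illustration, not the exact examples/hello code), and I was driving it with roughly `ab -c 50 -n 10000 http://localhost:8080/short`:

```python
# Rough, hypothetical sketch of the /short handler -- not the actual
# examples/hello code.  A plain WSGI-style callable that does a blocking
# open()/read() on every request.
def short(environ, start_response):
    with open("static/short.txt", "rb") as f:  # made-up path; blocks while reading
        body = f.read()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```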
I wonder if the authors of this software have ideas about how to do file I/O in a non-blocking way from within a fapws3 server? Node.js (which I was also benchmarking) uses libeio with a thread pool to keep I/O from blocking the main thread, and handles large concurrent loads easily. I haven't found anything similar for Python, but I feel like it's probably out there somewhere.
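For what it's worth, the closest thing I've found in the standard library is a plain thread pool. This is only a sketch of the idea, not an fapws3 integration: blocking on result() inside the handler still stalls the libev loop, so a real solution would also need to hand the completion back to the event loop (for example via a pipe watcher).

```python
from concurrent.futures import ThreadPoolExecutor

# Shared pool so blocking reads run off the main thread, roughly what
# Node.js gets from libeio.
_io_pool = ThreadPoolExecutor(max_workers=8)

def _read_file(path):
    with open(path, "rb") as f:
        return f.read()

def short(environ, start_response):
    # Submit the blocking read to the pool.  Calling result() here still
    # blocks this thread, so by itself this doesn't free the event loop;
    # it only shows where the work would be offloaded.
    body = _io_pool.submit(_read_file, "static/short.txt").result()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```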