gh-134954: Hard-cap max file descriptors in subprocess test fd_status #134955
Conversation
On some systems, `SC_OPEN_MAX` may return a very large value (e.g. 10**30), leading to the subprocess test timing out (or running forever). Prevent this situation by applying a hard cap on how many file descriptors are checked.
Also fixes a typo in the usage docstring: s/fd_stats/fd_status/
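The capping approach described above can be sketched as follows. This is a minimal illustration, not the actual patch: the helper name `open_fds`, the cap constant `_MAX_FD_CHECKED`, and its value of 256 are assumptions for the sketch; the real test helper and the cap chosen in the merged change may differ.

```python
import os

# Hypothetical hard cap; the value chosen in the actual patch may differ.
_MAX_FD_CHECKED = 256

def open_fds(max_possible_fds=None):
    """Return the set of currently open file descriptors, probing at most
    a hard-capped number of them.

    On some systems sysconf("SC_OPEN_MAX") can report an enormous value
    (e.g. 10**30); iterating that far would effectively never finish, so
    the probe range is clamped.
    """
    if max_possible_fds is None:
        try:
            max_possible_fds = os.sysconf("SC_OPEN_MAX")
        except (ValueError, OSError):
            max_possible_fds = 256
    # The hard cap keeps the loop bounded regardless of what sysconf reports.
    limit = min(max_possible_fds, _MAX_FD_CHECKED)
    open_set = set()
    for fd in range(limit):
        try:
            os.fstat(fd)  # raises OSError if fd is not open
        except OSError:
            continue
        open_set.add(fd)
    return open_set
```

The key point is the `min()` clamp: even if `sysconf` reports an absurd limit, the probe loop stays bounded, so the test can no longer hang on such systems.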
🤖 New build scheduled with the buildbot fleet by @gpshead for commit c534fe5 🤖
Results will be shown at: https://buildbot.python.org/all/#/grid?branch=refs%2Fpull%2F134955%2Fmerge
If you want to schedule another build, you need to add the 🔨 test-with-buildbots label again.
Good to see some systems finally go the "no limits, it's not the '90s" route on that value. I'll skim through the buildbot results in a few hours; I expect they'll be fine. Then feel free to merge.
thanks @gpshead!
…status (pythonGH-134955)

* Hard-cap max file descriptors in subprocess test fd_status

On some systems, `SC_OPEN_MAX` may return a very large value (e.g. 10**30), leading to the subprocess test timing out (or running forever). Prevent this situation by applying a hard cap on how many file descriptors are checked.

* Fix typo in usage docstring: s/fd_stats/fd_status/

(cherry picked from commit f58873e)
Co-authored-by: Itamar Oren <[email protected]>

GH-134980 is a backport of this pull request to the 3.14 branch.
GH-134981 is a backport of this pull request to the 3.13 branch.