
asyncio warnings and errors triggered by stressful code #109490

Closed as duplicate of #114177

Description

@shoffmeister

Bug report

Bug description:

Running the code below yields many lines of unwanted error output of the kind

Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 842379 is closed

and also, after all user code has completed, runtime exceptions of the kind

Exception ignored in: <function BaseSubprocessTransport.__del__ at 0x7f9558756340>
Traceback (most recent call last):
  File "/usr/lib64/python3.11/asyncio/base_subprocess.py", line 126, in __del__
    self.close()
  File "/usr/lib64/python3.11/asyncio/base_subprocess.py", line 104, in close
    proto.pipe.close()
  File "/usr/lib64/python3.11/asyncio/unix_events.py", line 566, in close
    self._close(None)
  File "/usr/lib64/python3.11/asyncio/unix_events.py", line 590, in _close
    self._loop.call_soon(self._call_connection_lost, exc)
  File "/usr/lib64/python3.11/asyncio/base_events.py", line 761, in call_soon
    self._check_closed()
  File "/usr/lib64/python3.11/asyncio/base_events.py", line 519, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
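
The final RuntimeError suggests that the BaseSubprocessTransport finalizer only runs after asyncio.run() has already closed the event loop, so its call_soon fails. A minimal sketch of just that failure mode (my own construction, not taken from the report):

import asyncio

# Closing a loop and then scheduling a callback on it reproduces the exact
# error that the __del__ finalizer hits in the traceback above.
loop = asyncio.new_event_loop()
loop.close()
try:
    loop.call_soon(print, "too late")
except RuntimeError as exc:
    print(exc)  # -> Event loop is closed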

The misbehaviour is apparently caused by calling asyncio.wait_for with TIMEOUT_SECS set to an absurdly low value - it is that low only to provoke the problem reliably. In other words, the misbehaviour seems to depend on timeout errors actually being raised.

Of particular note is the time.sleep(2) at the end, which I use to delay process termination; this is what makes the problem visible on stderr. The exit code of the Python process is nevertheless always 0 (checked with watch --errexit --interval=0 python debugger-bug.py).

The implementation is cobbled together from an example in the Python documentation (with fixes applied) and from a StackOverflow answer on best practices for throttling with a Semaphore.

I am running this on Fedora Linux 38 with Python 3.11.5, on an Intel 11800H CPU with plenty of spare memory, inside VMware Workstation.

import asyncio
import sys
import time

TASK_COUNT: int = 50
TASK_THROTTLE_COUNT: int = 7

TIMEOUT_SECS: float = 0.00001


async def get_date(i: int) -> str:
    # print(f"Starting task #{i}")

    code = "import datetime; print(datetime.datetime.now())"

    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", code, stdout=asyncio.subprocess.PIPE
    )

    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), TIMEOUT_SECS)
    except asyncio.TimeoutError:
        # The subprocess is neither killed nor awaited here, so its pipe
        # transports are still open when asyncio.run() closes the loop.
        return "TIMEOUT"

    line = stdout.decode("ascii").rstrip()

    # Wait for the subprocess exit.
    await proc.wait()
    return line


async def throttled_get_date(sem: asyncio.Semaphore, i: int):
    async with sem:  # semaphore limits the number of concurrently running subprocesses
        return await get_date(i)


async def run_throttled():
    sem = asyncio.Semaphore(TASK_THROTTLE_COUNT)

    tasks = [asyncio.create_task(throttled_get_date(sem, i)) for i in range(TASK_COUNT)]

    return await asyncio.gather(*tasks)


async def run_unthrottled():
    tasks = [asyncio.create_task(get_date(i)) for i in range(TASK_COUNT)]
    return await asyncio.gather(*tasks)


def shell_to_get_date():

    start = time.time()
    date = asyncio.run(run_throttled())
    duration = time.time() - start

    print(f"Current dates: {date} - {duration} secs")
    if len(date) != TASK_COUNT:
        raise Exception


if __name__ == "__main__":
    shell_to_get_date()
    print("done")
    time.sleep(2)
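
For comparison, here is a sketch (my own variant, not part of the report) of get_date with explicit cleanup on timeout. Killing and awaiting the child while the loop is still running should let the transports close before asyncio.run() shuts the loop down, assuming the warnings indeed come from transports that are only finalized afterwards.

import asyncio
import sys

TIMEOUT_SECS: float = 0.00001


async def get_date_with_cleanup() -> str:
    code = "import datetime; print(datetime.datetime.now())"

    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", code, stdout=asyncio.subprocess.PIPE
    )

    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), TIMEOUT_SECS)
    except asyncio.TimeoutError:
        proc.kill()        # stop the child that is still running
        await proc.wait()  # reap it while the event loop is still alive
        return "TIMEOUT"

    return stdout.decode("ascii").rstrip()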

CPython versions tested on:

3.11

Operating systems tested on:

Linux
