Why?
Winit is big. A lot of software, and most importantly, a lot of people rely on it, yet some rather serious regressions have been introduced recently. The following issues practically render winit programs useless under Windows and XMonad:
#1429
#1461
#1474
The first two refer to groups of issues that affect the Windows platform. These are present in winit 0.21.0, meaning that everyone who targets Windows must skip this version of winit. Yet however impactful, such issues still slip through the cracks and can make it into multiple crates, the same way these made it into winit, glutin, and glium.
The purpose of this issue is to nudge the project towards more stable releases.
How?
This could be done by introducing more automated tests. Some rudimentary tests could be written as plain old cargo tests; however, I think tests that simulate keyboard and mouse input would be much more useful. Here are a few tools that could help us do that:
https://github.com/enigo-rs/enigo
https://pypi.org/project/pynput/
https://pypi.org/project/PyAutoGUI/
None of these seem to support Wayland, but I don't know what that means for us. PyAutoGUI seems to be the most mature, but I only did a quick search, so there could be better alternatives out there.
Currently, the way I imagine it is that each test has two executables:
a Rust binary that's built using winit,
and a separate tester executable that launches the Rust program.
The tester program gives the Rust app some simulated mouse and keyboard input, and the Rust program, in turn, gives some output from which the tester determines whether it passed the test. A rough sketch of such a pair is below.
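To make this concrete, here is a minimal sketch of what such a pair might look like. Everything in it is illustrative: the binary name `click_reporter` and all coordinates are made up, the winit side targets the 0.21-era API, and the tester uses enigo (one of the tools listed above), whose API may differ between versions.

```rust
// Hypothetical winit-side test binary ("click_reporter"): opens a window,
// prints PASS when it receives a left click, then exits. winit 0.21-era API.
use winit::{
    event::{ElementState, Event, MouseButton, WindowEvent},
    event_loop::{ControlFlow, EventLoop},
    window::WindowBuilder,
};

fn main() {
    let event_loop = EventLoop::new();
    let _window = WindowBuilder::new()
        .with_title("click_reporter")
        .build(&event_loop)
        .expect("failed to create window");

    event_loop.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Wait;
        if let Event::WindowEvent { event, .. } = event {
            match event {
                WindowEvent::MouseInput {
                    state: ElementState::Pressed,
                    button: MouseButton::Left,
                    ..
                } => {
                    // The tester watches stdout for this marker.
                    println!("PASS");
                    *control_flow = ControlFlow::Exit;
                }
                WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
                _ => (),
            }
        }
    });
}
```

And the tester that drives it, assuming enigo's 0.0.x-era `mouse_move_to`/`mouse_click` calls:

```rust
// Hypothetical tester: launches the winit binary, injects a click with
// enigo, and checks the child's stdout for the PASS marker.
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread::sleep;
use std::time::Duration;

use enigo::{Enigo, MouseButton, MouseControllable};

fn main() {
    let mut child = Command::new("target/debug/click_reporter")
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to launch test binary");

    // Crude synchronization: give the window time to appear. A real harness
    // should instead wait for a READY line from the child, with a timeout.
    sleep(Duration::from_secs(2));

    // Click somewhere assumed to be inside the window; these coordinates
    // are placeholders.
    let mut enigo = Enigo::new();
    enigo.mouse_move_to(200, 200);
    enigo.mouse_click(MouseButton::Left);

    let stdout = BufReader::new(child.stdout.take().unwrap());
    let passed = stdout.lines().filter_map(Result::ok).any(|l| l == "PASS");
    child.wait().ok();
    assert!(passed, "test binary did not report PASS");
}
```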
For all of this to work, there needs to be a testing environment that supports GUI tests. I can see that we are currently using Travis and AppVeyor. Do those support running graphical programs that expect and simulate keyboard and mouse input?
With all this said, I would be up for writing some tests and ensuring that they behave properly, on Windows at least, but of course I would need help with the rest.
Thanks for opening this issue! I'm definitely interested in the idea of having automated tests - I've noticed more and more issues slipping through the cracks of PRs, and that's not a sustainable situation for us.
We actually moved over to GitHub Actions for CI recently, so we're not using Travis or AppVeyor anymore. I don't know whether it supports GUI environments, so I've opened #1498 to test things out and get a sense of where that stands.
EDIT: It looks like the Windows and macOS environments have GUIs initialized. Linux doesn't, but we might be able to manually install and start X11 on those systems. Even if we aren't able to test Wayland right now, testing all the other desktop platforms would be a pretty major step forwards.
EDIT 2: I'm not entirely sure how we should go about making tests cross-platform, though - it's difficult to write consistent automated tests that interact with window decorations when each platform draws its decorations differently. What might work is having the tests reference a struct that specifies the positions of the window decoration elements, then selecting those values based on the platform we're running on; see the sketch below.
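As a rough sketch of that idea (all field names and offsets here are invented placeholders, not measured values):

```rust
// Hypothetical per-platform decoration layout. Tests click these points
// instead of hard-coding platform-specific coordinates. Offsets are
// placeholders in physical pixels, relative to the window's outer top-left
// corner; a negative x means "measured from the right edge".
pub struct DecorationLayout {
    pub close_button: (i32, i32),
    pub maximize_button: (i32, i32),
    pub title_bar_height: i32,
}

#[cfg(target_os = "windows")]
pub const LAYOUT: DecorationLayout = DecorationLayout {
    close_button: (-24, 16),    // caption buttons sit top-right on Windows
    maximize_button: (-70, 16),
    title_bar_height: 31,
};

#[cfg(target_os = "macos")]
pub const LAYOUT: DecorationLayout = DecorationLayout {
    close_button: (12, 12),     // "traffic light" buttons sit top-left
    maximize_button: (52, 12),
    title_bar_height: 28,
};
```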