
Add proper out-of-line test expectation files #9

Open
Tracked by #60
Gankra opened this issue Jun 28, 2022 · 2 comments · May be fixed by #70
Labels: enhancement (New feature or request)

Comments

Gankra commented Jun 28, 2022

You should be able to have static files checked in with your project that configure the expectation/execution of tests. As with the web-platform-tests (WPT), you want to be able to have a shared repository of tests that different projects can pull in. However, different projects may pass/fail different tests, so you want them to be able to apply their own set of expectations externally from the actual test decls.

The default for all tests is that they will be run and expected to pass. The expectation file should ideally allow (a concrete sketch follows the list below):

  • "selecting" a test or family of tests based on certain "selectors", in the same vein as CSS (name, abi, platform, ...)
    • selection should have some amount of ordering/specialization, so you can have a broad command like "these tests all fail" and a more specific command saying "but this subset still passes", inheriting everything from the broader selector that isn't overridden
    • the implicit semantics are that there is a root selector (all-tests: run, pass), and everything else just modifies that
  • optionally specifying how to run the selected tests
    • run (default)
    • compile-only(?)
    • skip
  • optionally specifying what to expect from the selected tests
    • pass (default)
    • random (must run successfully, don't care about result)
    • compile-fail (must not compile (or link?))
    • run-fail (must compile+link but have mismatched ABI)
    • fail (catchall for all failures)
    • busted (synonym for fail, but indicates that it's not supposed to fail, and is backlog)
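
To make the cascade concrete, here is a minimal sketch of how these semantics could be modeled, assuming a simple name-prefix selector. Every name below is hypothetical; none of this is abi-cafe's actual API.

// Hypothetical sketch of the cascade semantics described above.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq)]
enum RunMode { Run, CompileOnly, Skip }

#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq)]
enum Expect { Pass, Random, CompileFail, RunFail, Fail, Busted }

// A selector overrides only the fields it sets; unset fields inherit.
struct Selector {
    name_prefix: &'static str, // "" would match every test
    run: Option<RunMode>,
    expect: Option<Expect>,
}

// Start from the implicit "all-tests: run, pass" and apply matching
// selectors broadest-first, each overriding only what it specifies.
fn resolve(test_name: &str, selectors: &[Selector]) -> (RunMode, Expect) {
    let mut mode = RunMode::Run;
    let mut expect = Expect::Pass;
    // Assumes `selectors` is pre-sorted from least to most specific.
    for sel in selectors {
        if test_name.starts_with(sel.name_prefix) {
            if let Some(r) = sel.run { mode = r; }
            if let Some(e) = sel.expect { expect = e; }
        }
    }
    (mode, expect)
}

fn main() {
    // Broad rule: "the i128 tests all fail"; narrow rule: "but this subset passes".
    let selectors = [
        Selector { name_prefix: "i128", run: None, expect: Some(Expect::Fail) },
        Selector { name_prefix: "i128_by_val", run: None, expect: Some(Expect::Pass) },
    ];
    assert_eq!(resolve("i128_by_ref", &selectors), (RunMode::Run, Expect::Fail));
    assert_eq!(resolve("i128_by_val", &selectors), (RunMode::Run, Expect::Pass));
    assert_eq!(resolve("u64_simple", &selectors), (RunMode::Run, Expect::Pass));
}

The key property is that each matching selector overrides only the fields it sets, so a broad "these all fail" rule and a narrower "but these pass" rule compose exactly as described above.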

Possible sources of inspiration:

Gankra commented Aug 10, 2022

As of #13 I opted for a "make code be code" approach, and there is now a function in the code where you can designate to what extent a test should run, and whether a test is expected to pass/fail at a specific step or is busted/random.

There is an explicit place where one could vendor the code and apply a patch to this logic:

https://github.com/Gankra/abi-checker/blob/a2232d45f202846f5c02203c9f27355360f9a2ff/src/report.rs#L44-L50

Gankra commented Jun 30, 2024

As of #20 anyone using the vendor patch probably has a merge conflict of sorts from the test harness being completely rewritten, but the function and code are largely the same:

abi-cafe/src/report.rs, lines 35 to 50 at commit 6ba6865:

if cfg!(windows) && (test.test == "i128" || test.test == "u128") {
    result.check = Random;
}
// FIXME: investigate why this is failing to build
if cfg!(windows) && is_c && (test.test == "EmptyStruct" || test.test == "EmptyStructInside") {
    result.check = Busted(Build);
}
//
//
// THIS AREA RESERVED FOR VENDORS TO APPLY PATCHES
// END OF VENDOR RESERVED AREA
//
//
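
For illustration, a vendor patch dropped into that reserved area could take the same shape as the clauses above. The platform check and test name here are invented; result.check and Busted(Build) are taken from the snippet itself.

// Hypothetical vendor patch: "MyVendoredStruct" is a made-up test name.
if cfg!(target_os = "macos") && test.test == "MyVendoredStruct" {
    result.check = Busted(Build);
}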

Gankra added a commit that referenced this issue Jul 13, 2024
Gankra added a commit that referenced this issue Jul 13, 2024
Gankra linked a pull request Jul 13, 2024 that will close this issue