[Bug]: NaN values often yield very good interpretation metrics #575

Open
fhausmann opened this issue Dec 9, 2024 · 0 comments
fhausmann commented Dec 9, 2024

Contact details

No response

What happened?

In the case of a NaN value for a metric with an interpretation, a very good (or very bad) interpretation is often reported, due to a lack of NaN checks in the _analysis functions.

Please let me know if I should look deeper into this and provide a pull request.
A potential fix would be: https://github.com/fhausmann/pycm/tree/fix_na_interpretation
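
For illustration, a guard along the following lines would make the NaN case explicit. This is only a sketch: the function name auc_analysis_with_nan_check is hypothetical, and the thresholds and labels are assumptions inferred from the observed outputs ('Fair' for 0.6, 'Excellent' for NaN), not the code in the linked branch.

import math


def auc_analysis_with_nan_check(auc):
    # Hypothetical guarded variant: report "None" for NaN instead of
    # letting the value fall through to an interpretation label.
    if isinstance(auc, float) and math.isnan(auc):
        return "None"
    if auc < 0.6:
        return "Poor"
    if auc < 0.7:
        return "Fair"
    if auc < 0.8:
        return "Good"
    if auc < 0.9:
        return "Very Good"
    return "Excellent"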

Steps to reproduce

import pycm
import numpy as np

# A regular AUC value gets a sensible interpretation.
auc_values = 0.6
pycm.interpret.AUC_analysis(auc_values)
# 'Fair'

# A NaN value is reported as the best possible result.
auc_values = np.nan
pycm.interpret.AUC_analysis(auc_values)
# 'Excellent'
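
For context, this behavior is consistent with how NaN comparisons work in Python: every comparison involving NaN evaluates to False, so a chain of threshold checks presumably skips every branch and returns whatever label comes last (or first, depending on how the chain is written). A minimal demonstration, independent of PyCM:

import numpy as np

print(np.nan < 0.6)   # False
print(np.nan >= 0.9)  # False
# With all comparisons False, an if/elif threshold chain falls through
# to its final return value.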

Expected behavior

Some metrics, such as pearson_C_analysis, return "None" in this case, which I think is the correct behavior.

Actual behavior

Either the best or the worst interpretation is reported, depending on the metric.

Operating system

Linux

Python version

Python 3.12

PyCM version

PyCM 4.1

Relevant log output

No response
