🤗 Happy coding!

Organizations

@neurocats @somosnlp @bertin-project

Stars

⚖️ Bias in NLP

8 repositories

WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue in…

Python · 176 stars · 14 forks · Updated Jun 18, 2024
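
A minimal sketch of how a WEFE bias query is typically run, assuming the WEAT metric and a gensim-backed embedding model as in the project's documentation; the word sets below are purely illustrative:

```python
import gensim.downloader as api
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.query import Query
from wefe.metrics import WEAT

# Wrap a pretrained gensim model so WEFE can query it
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove-100")

# Define target and attribute word sets (illustrative example)
query = Query(
    target_sets=[["she", "woman", "girl"], ["he", "man", "boy"]],
    attribute_sets=[["science", "technology"], ["poetry", "art"]],
    target_sets_names=["Female terms", "Male terms"],
    attribute_sets_names=["Science", "Arts"],
)

# Run the WEAT bias metric on the query and inspect the score
result = WEAT().run_query(query, model)
print(result)
```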

Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰

Python · 96 stars · 22 forks · Updated Nov 17, 2023

Fair Embedding Engine

Python · 13 stars · 1 fork · Updated Oct 25, 2020

🤗 Disaggregators: Curated data labelers for in-depth analysis.

Python · 65 stars · 5 forks · Updated Feb 8, 2023
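
A short sketch of how the Disaggregators library is typically combined with 🤗 Datasets, assuming the "pronoun" module and the column and dataset names shown here (both illustrative):

```python
from datasets import load_dataset
from disaggregators import Disaggregator

# Build a disaggregation function for the "pronoun" module over the "text" column
disaggregator = Disaggregator("pronoun", column="text")

# Apply it to a dataset; new columns flag each pronoun category per row
ds = load_dataset("imdb", split="train[:100]")
ds = ds.map(disaggregator)
print(ds.column_names)
```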

A Python multilingual toolkit for Sentiment Analysis and Social NLP tasks

Jupyter Notebook · 573 stars · 65 forks · Updated Jul 9, 2024
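
A minimal usage sketch for this entry, assuming it is the pysentimiento toolkit (whose description this matches), following the analyzer-per-task pattern from its README; the task and language values are just one possible choice:

```python
from pysentimiento import create_analyzer

# Create a sentiment analyzer for Spanish (other tasks include emotion and hate_speech)
analyzer = create_analyzer(task="sentiment", lang="es")

result = analyzer.predict("Qué gran jugador es Messi")
print(result.output)   # e.g. "POS"
print(result.probas)   # probabilities for POS / NEU / NEG
```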

Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image …

Python · 2,033 stars · 269 forks · Updated Jan 30, 2025

The Foundation Model Transparency Index

74 stars · 8 forks · Updated May 23, 2024

A curated list of awesome responsible machine learning resources.

3,712 stars · 592 forks · Updated Jan 16, 2025