A web-based masked language model demo using the Transformers.js library. Built with React and Vite.
Alternatively, you can use the application directly on Hugging Face Spaces without any installation: huggingface.co/spaces/ysdede/fill-mask-demo
- Support for multiple BERT/RoBERTa models
- WebGPU and WASM backend support
- Multiple quantization options
- Sequential token prediction
  - Predict multiple masked tokens sequentially, using previous predictions
  - Option to toggle between sequential and parallel prediction modes
- Configurable mask placeholders
  - Use the model's mask token ([MASK])
  - Double period (..)
  - Single space
  - Custom placeholder text
- Real-time performance metrics
  - Model load time
  - Inference time per prediction
- Comprehensive prediction results
  - Shows the completed sentence
  - Displays the original mask pattern
  - Shows the inference text for each mask
  - Multiple token predictions per mask
- Clone the repository:
  git clone https://github.com/ysdede/fillmask-js.git
  cd fillmask-js
- Install dependencies:
  npm install
- Run the development server:
  npm run dev
- Build for production:
  npm run build
- Preview the production build:
  npm run preview
- Select a model, backend (WebGPU/WASM), and quantization level
- Load the model
- Enter text with masks (use ?? or [MASK] tokens)
- Choose a prediction mode:
  - Sequential: uses previous predictions for subsequent masks
  - Parallel: predicts all masks independently
- Select a placeholder type for unpredicted masks
- Click "Unmask" to get predictions
Contributions are welcome! Feel free to open issues or submit pull requests to improve the project.
- Fork the repository
- Create a new branch:
  git checkout -b feature/YourFeature
- Commit your changes:
  git commit -m 'Add some feature'
- Push to the branch:
  git push origin feature/YourFeature
- Open a Pull Request
This project is licensed under the MIT License.