## Local inference

You can run SmolLM2 models locally with frameworks such as Transformers.js, llama.cpp, and MLX.
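As a minimal sketch of the llama.cpp route, assuming llama.cpp and the Hugging Face CLI are already installed (the repository and file names below are assumptions; check the Hub for the exact quantized builds available):

```shell
# Download a GGUF quantization of SmolLM2 from the Hugging Face Hub
# (file name is an assumption -- list the repo to see available quants)
huggingface-cli download HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF \
  smollm2-1.7b-instruct-q4_k_m.gguf --local-dir .

# Chat with the model using llama.cpp's CLI in conversation mode
llama-cli -m smollm2-1.7b-instruct-q4_k_m.gguf \
  -p "You are a helpful assistant." -cnv
```

The quantized GGUF build trades a small amount of quality for a much lower memory footprint, which is what makes CPU-only local inference practical.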