Issues: huggingface/optimum-intel
#965 Follow-up request after #916 due to issues on the OpenVINO side (by andrei-kochin, closed Nov 25, 2024)
#947 Exporting tokenizers to OpenVINO is not supported for tokenizers version > 0.19? (by JamieVC, closed Nov 22, 2024)
#896 Need optimum-intel to keep up with the latest transformers update (by szeyu, closed Sep 13, 2024)
#811 Issue while exporting internlm2-chat-7b model to OpenVINO format (by lumurillo, closed Jul 11, 2024)
#810 Issues with IPEXModel.from_pretrained with sentence-transformer models (all-MiniLM-L6-v2, intfloat/e5-mistral-7b-instruct) (by rbrugaro, closed Jul 15, 2024)
#809 [model enable] Request to enable DeciLM-7B-instruct model exporting (by lumurillo, closed Sep 19, 2024)
#798 [model enable] Request to enable gemma2 model exporting [label: enhancement] (by peiqingj, closed Sep 5, 2024)
#790 Safety checker doesn't work with Latent Consistency Model pipeline (by adrianboguszewski, closed Jul 4, 2024)
#735 Pipeline accelerator defaults to 'ort' runtime instead of 'ipex' (by rbrugaro, closed May 30, 2024)