Issues: intel-analytics/ipex-llm
- Ubuntu 22.04 Kernel 6.8.0-45 cannot install intel-i915-dkms — #12156, opened Oct 2, 2024 by huiwangnick
- Will ipex-llm support the tiny-model Octopus series, and when? [user issue] — #12147, opened Sep 29, 2024 by yangqing-yq
- Why does ipex-llm not support fastllm for serving? [user issue] — #12140, opened Sep 27, 2024 by yangqing-yq
- codegeex-nano doesn't work on IPEX-LLM 2.1.0, even though it worked on 2.1.0b2 [user issue] — #12139, opened Sep 27, 2024 by moutainriver
- Error occurs when using a saved glm-4v-9b low-bit model with pictures [user issue] — #12138, opened Sep 27, 2024 by wluo1007
- MTL platform with Arc 770 cannot allocate a memory block larger than 4 GB when running vLLM Qwen2-VL-2B [user issue] — #12136, opened Sep 27, 2024 by weijiejx
- Upstream i915 GuC load failed on Ubuntu 24.04 Kernel 6.8.0-31 with Arc A770 [user issue] — #12122, opened Sep 26, 2024 by huiwangnick
- Can't run Ollama using llm-cpp on a 12th-gen iGPU under Linux [user issue] — #12120, opened Sep 25, 2024 by user7z
- An error occurred while running the qwen2.5:3b model [user issue] — #12118, opened Sep 25, 2024 by JerryXu2023
- Running Docker Ollama with an iGPU keeps failing to generate [user issue] — #12116, opened Sep 24, 2024 by Daniel-dev22