Issues: openvinotoolkit/openvino_notebooks
Request for a new notebook that converts a custom-trained YOLO-NAS model to FP32 and INT8
Labels: enhancement (new feature or request), no_stale (do not mark as stale)
#1066 · opened May 18, 2023 by kenilaivid · updated Aug 30, 2024
Request for a new notebook that implements CatBoost with OpenVINO
Labels: enhancement, no_stale
#1656 · opened Nov 7, 2023 by Fraapp24 · updated Aug 30, 2024
IP-Adapter Plus is not working with full face
Labels: PSE (escalate to PSE for further investigation), support_request
#2400 · opened Sep 19, 2024 by circuluspibo · updated Oct 10, 2024
TinyGPT-V: a new visual LLM project well suited for a new OpenVINO notebook
Labels: enhancement, Stale
#1648 · opened Jan 25, 2024 by sanbuphy · updated Nov 8, 2024
(Optimization of LLM inference) Does Intel OpenVINO support offloading LLM models, allowing some layers to remain on the SSD while the main layers are loaded into RAM during inference?
#2533 · opened Nov 19, 2024 by hsulin0806 · updated Nov 25, 2024
Generic NPU optimization notebook
Labels: category: NPU (OpenVINO NPU plugin), support_request
#2531 · opened Nov 18, 2024 by SRai22 · updated Dec 26, 2024
Notebook "llm rag llama-index" fails to initialize
Labels: support_request
#2587 · opened Dec 10, 2024 by JamieVC · updated Dec 26, 2024
InstantID returns a black image and reports "invalid value encountered in cast"
Labels: PSE
#2622 · opened Jan 2, 2025 by dannyweng88122 · updated Jan 9, 2025
[Feature Request] CogAgent-9B OpenVINO support
Labels: PSE
#2624 · opened Jan 4, 2025 by sanbuphy · updated Jan 13, 2025
Dynamic speculative decoding is significantly slower than both auto-regressive and speculative-decoding generation
Labels: category: GPU (OpenVINO GPU plugin), PSE
#2621 · opened Jan 1, 2025 by shira-g · updated Jan 14, 2025
Add mistral-7b-instruct v0.3 to the LLM config?
#2720 · opened Feb 6, 2025 by JamieVC · updated Feb 6, 2025
Unable to run Qwen2-VL inference on Intel integrated GPU; works fine on CPU
#2740 · opened Feb 11, 2025 by paks100 · updated Feb 13, 2025
Failure to convert the Qwen2.5-VL-7B-Instruct model to INT4
#2771 · opened Feb 25, 2025 by aixiwangintel · updated Feb 25, 2025
Unable to run LLM on NPU
Labels: category: NPU
#2758 · opened Feb 20, 2025 by Harsha0056 · updated Feb 26, 2025
DeepSeek R1 Distill Qwen 1.5B INT4 on NPU, or FP16
#2779 · opened Feb 26, 2025 by susantowijaya · updated Feb 26, 2025
"Create Function-calling Agent using OpenVINO and Qwen-Agent" notebook throws errors when running Gradio
#2778 · opened Feb 26, 2025 by antoniomtz · updated Feb 26, 2025
CatVTON notebook fails to safely serialize
#2797 · opened Mar 4, 2025 by nevakrien · updated Mar 5, 2025
Flux.1 was moved to OpenVINO GenAI, but the GPU still fails to generate correct text strings
#2792 · opened Mar 4, 2025 by JamieVC · updated Mar 6, 2025