A user reported this issue in the group chat.
```
ERROR: Model worker crashed: Llama.cpp failed decoding: Decode Error 1: NoKvCacheSlot
    at: <nobodywho::NobodyWhoChat as godot_core::gen::classes::node::re_export::INode>::physics_process (src\lib.rs:178)
ERROR: Model output channel died. Did the LLM worker crash?
    at: <nobodywho::NobodyWhoChat as godot_core::gen::classes::node::re_export::INode>::physics_process (src\lib.rs:184)
```
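For context: `Decode Error 1` (`NoKvCacheSlot`) is what llama.cpp's decode step returns when it cannot find a free KV-cache slot for the batch, typically because the context window has filled up. The second error is a consequence of the first: once the worker thread dies, the channel the node polls every frame is disconnected. Below is a minimal sketch of that polling pattern, assuming a `std::sync::mpsc` channel and hypothetical names; this is not NobodyWho's actual code.

```rust
use std::sync::mpsc::{channel, Receiver, TryRecvError};
use std::thread;

// Hypothetical per-frame poll, analogous to what a physics_process callback
// might do while a worker thread streams decoded tokens over a channel.
fn poll_worker(rx: &Receiver<String>) {
    match rx.try_recv() {
        // A token arrived; hand it on (e.g. emit a Godot signal).
        Ok(token) => println!("token: {token}"),
        // Worker is alive but has nothing ready yet; try again next frame.
        Err(TryRecvError::Empty) => {}
        // Sender dropped: the worker exited, e.g. after a failed decode
        // such as NoKvCacheSlot. Report it instead of panicking.
        Err(TryRecvError::Disconnected) => {
            eprintln!("Model output channel died. Did the LLM worker crash?");
        }
    }
}

fn main() {
    let (tx, rx) = channel::<String>();
    // Simulate a worker that emits one token and then dies (dropping tx).
    thread::spawn(move || {
        tx.send("Hello".to_string()).unwrap();
    })
    .join()
    .unwrap();
    poll_worker(&rx); // drains the buffered token
    poll_worker(&rx); // sender is gone -> reports the dead channel
}
```

In this pattern, `Disconnected` is the reliable signal that the worker is gone, at which point the node could restart the worker or surface the error to the game rather than crashing.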
Hm... this bug still appears for me. I'm not sure it's actually resolved.