
'dict' object has no attribute 'locals' #384

Open
filowsky opened this issue Jan 3, 2025 · 3 comments
filowsky commented Jan 3, 2025

The pipeline starts, but my module is not loaded. After the dependencies are downloaded, the following error appears, with no stack trace or explanation:

Error loading module: test-pipeline
**'dict' object has no attribute 'locals'**
WARNING:root:No Pipeline class found in test-pipeline
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9099 (Press CTRL+C to quit)
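For context, the quoted message is Python's generic `AttributeError` text for attribute access on a dict, which suggests some loader code expected a module-like object (with a `locals` attribute) but received a plain dict. A minimal reproduction of the message itself:

```python
# Attribute access on a plain dict raises exactly the error seen in the log.
d = {}
try:
    d.locals
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'locals'
```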

For requirements I use:

requests~=2.32.3
pydantic>=2.8.0 
llama-index==0.12.5
llama-index-llms-azure-openai==0.3.0 
llama-index-embeddings-azure-openai==0.3.0 
llama-index-vector-stores-qdrant==0.4.2

I am deploying with the Helm charts for pipelines (chart version 0.0.5) and the UI (chart version 4.0.6).

Worth adding that I managed to run it successfully as a standalone script on my local machine, using the same dependency versions. Also, I noticed the error shows up when I add this code to the pipeline:

from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

...

self.index = VectorStoreIndex.from_documents(
    documents=self.documents,
    show_progress=True,
    storage_context=StorageContext.from_defaults(
        vector_store=QdrantVectorStore(
            client=QdrantClient(url=QDRANT_URL),
            collection_name="collection_name"
        )
    ),
)
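Since the log shows only `str(e)` with no stack trace, one way to debug this is to reproduce the module load outside the pipelines server and print the full traceback. This is an illustrative loader sketch, not the pipelines repo's actual code; the file path and module name are assumptions:

```python
import importlib.util
import traceback

def load_pipeline_module(path: str):
    """Import a pipeline script from `path` and print the full traceback if
    loading fails, instead of only the exception message as in the log above.
    (Illustrative sketch; the pipelines repo's loader may differ.)"""
    spec = importlib.util.spec_from_file_location("pipeline_module", path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
    except Exception:
        traceback.print_exc()  # surfaces the real stack trace
        return None
    return module
```

Running the failing pipeline file through a loader like this should reveal which line inside the llama-index/qdrant imports actually raises.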

Full output log attached: scratch_142.txt


ezavesky commented Jan 5, 2025

You probably don't want to hear this, but since you've isolated it to specific code within llama_index or qdrant, that's probably where your issue lies -- not within anything in this pipelines repo. The code in this repo is very straightforward, and there is no mention of 'locals' in it.

Without having done specific research (you didn't include enough code), a few reports in qdrant's issue tracker and on Stack Overflow may point there -- langchain-ai/langchain#16962. There's also a small chance that something misbehaves when using async responses, but that's just a guess, since 'locals' implies some localized variable scope.

@paulinergt

Hello! I'm facing the same issue however I'm not using qdrant...
If anyone has an update on this it would be very much appreciated :)
Thank you!

@paulinergt

Hello! I resolved the error on my end. :)

It seems the issue was caused by the requirements being installed twice:

  1. From the requirements.txt file.
  2. Directly from the pipeline script header via the install_frontmatter_requirements function:
     title: Custom Llama Index Pipeline
     author: open-webui
     date: 2024-05-30
     version: 1.0
     license: MIT
     description: A pipeline for retrieving relevant information from a knowledge base using the Llama Index library.
     requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface, llama-index-readers-github, llama-index-vector-stores-postgres

This duplication led to dependency errors.
I removed the requirements section from the pipeline header, which fixed the issue.
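For readers unfamiliar with the mechanism: the `requirements:` line in the script header is parsed and pip-installed separately from requirements.txt, which is how the duplicate installation arises. The sketch below is a hypothetical illustration of such frontmatter parsing; `install_frontmatter_requirements` in the pipelines repo may work differently:

```python
import re

def parse_frontmatter_requirements(script_text: str) -> list[str]:
    """Extract the comma-separated `requirements:` entry from a pipeline
    script's frontmatter header. Hypothetical sketch of the mechanism the
    comment above describes, not the repo's actual implementation."""
    match = re.search(r"^requirements:\s*(.+)$", script_text, re.MULTILINE)
    if not match:
        return []
    return [pkg.strip() for pkg in match.group(1).split(",") if pkg.strip()]

header = """\
title: Custom Llama Index Pipeline
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface
"""
print(parse_frontmatter_requirements(header))
# ['llama-index-retrievers-bm25', 'llama-index-embeddings-huggingface']
```

If packages listed there are also pinned in requirements.txt at different versions, the second install can silently replace the first, which matches the symptom of imports failing only inside the pipelines server.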
