
feat(wren-ai-service): add litellm embedder #1247

Merged: 16 commits into main, Feb 3, 2025

Conversation

cyyeh (Member) commented Jan 28, 2025

issue #1078

Summary by CodeRabbit

  • New Features

    • Introduced asynchronous text and document embedding with enhanced error handling and real-time progress feedback.
    • Updated configuration examples to use a new embedding provider and revised model settings for a consistent user experience.
    • Enabled integration of a new dataset within the document store.
  • Tests

    • Expanded test coverage to validate the integration of the new embedding provider.
  • Chores

    • Optimized dependency constraints and streamlined installation processes in the CI workflow for improved stability.

@cyyeh cyyeh added module/ai-service ai-service related ci/ai-service ai-service related labels Jan 28, 2025
coderabbitai bot (Contributor) commented Jan 28, 2025

Caution

Review failed

The head commit changed during the review from ee0cd65 to ec13668.

Walkthrough

This pull request updates multiple configuration files to switch the embedding provider from the legacy ollama_embedder and openai_embedder to litellm_embedder. It revises model names, API field names, and pipeline references accordingly. New asynchronous embedding classes are introduced alongside changes to the provider's initialization code, test adjustments, and dependency updates in the project configuration. Minor modifications to the CI workflow and to document store initialization for SQL pairs are also included.

Changes

| File(s) | Change Summary |
| --- | --- |
| wren-ai-service/docs/config_examples/config.ollama.yaml, config.deepseek.yaml, config.google_ai_studio.yaml, config.groq.yaml | Updated embedder provider references from ollama_embedder/openai_embedder to litellm_embedder, adjusted model names (e.g., nomic-embed-text → openai/nomic-embed-text, text-embedding-004 → gemini/text-embedding-004), and replaced url with api_base where applicable. |
| wren-ai-service/src/providers/__init__.py | Introduced a new variable model_additional_params in the embedder_processor to dynamically capture and incorporate extra model parameters in the returned output. |
| wren-ai-service/src/providers/embedder/litellm.py | Added new asynchronous classes AsyncTextEmbedder and AsyncDocumentEmbedder, plus the provider class LitellmEmbedderProvider with methods to retrieve embedders. These classes include error handling (backoff) and batch processing. |
| wren-ai-service/tests/pytest/providers/test_loader.py | Updated tests to reflect the new provider: increased the provider count from 11 to 12 and added a test case for litellm_embedder. |
| .github/workflows/ai-service-test.yaml | Changed the dependency installation command from poetry install to poetry install --without eval to exclude evaluation dependencies. |
| wren-ai-service/pyproject.toml | Tightened the Python version constraint to <3.13, updated haystack-ai to version 2.7.0, moved tomlkit from evaluation to development, added new dev dependencies (deepeval, tomlkit, nltk), and removed some dependencies from the evaluation group. |
| wren-ai-service/src/providers/document_store/qdrant.py | Enhanced the _reset_document_store method to also initialize a document store for the sql_pairs dataset. |
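The config-example rows above all make the same edit. A hypothetical before/after fragment (the key layout, endpoint URL, and timeout value are illustrative assumptions, not copied from the repository):

```yaml
# Illustrative sketch only: shows the shape of the change described above.
# Before (ollama_embedder):
#   provider: ollama_embedder
#   models:
#     - model: nomic-embed-text
#       url: http://localhost:11434      # hypothetical endpoint
# After (litellm_embedder):
provider: litellm_embedder
models:
  - model: openai/nomic-embed-text       # provider-prefixed model name
    api_base: http://localhost:11434/v1  # replaces `url`; hypothetical endpoint
    timeout: 120
```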

Sequence Diagram(s)

sequenceDiagram
  participant U as User
  participant P as LitellmEmbedderProvider
  participant T as AsyncTextEmbedder
  participant API as Embedding API

  U->>P: Request text embedder instance
  P->>T: Return AsyncTextEmbedder
  U->>T: Invoke run(text)
  T->>API: Send text embedding request
  API-->>T: Return embedding result
  T-->>U: Deliver embedding data with metadata
sequenceDiagram
  participant U as User
  participant P as LitellmEmbedderProvider
  participant D as AsyncDocumentEmbedder
  participant API as Embedding API

  U->>P: Request document embedder instance
  P->>D: Return AsyncDocumentEmbedder
  U->>D: Invoke run(documents, batch_size)
  loop Batch processing
    D->>API: Send batch embedding request
    API-->>D: Return batch embedding result
  end
  D-->>U: Provide complete document embeddings
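The batched flow in the second diagram can be sketched as a minimal asyncio program; `fake_api` is a stand-in for the real embedding API call (the actual classes go through litellm), so names and vector values here are illustrative:

```python
import asyncio

async def fake_api(batch):
    # Stand-in for the embedding API: one vector per input text.
    return [[float(len(text))] for text in batch]

async def embed_documents(texts, batch_size=2):
    # Split into batches, send one request per batch, keep results in order.
    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    results = await asyncio.gather(*(fake_api(b) for b in batches))
    return [vec for batch_result in results for vec in batch_result]

vectors = asyncio.run(embed_documents(["a", "bb", "ccc"]))
print(vectors)  # [[1.0], [2.0], [3.0]]
```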

Suggested reviewers

  • paopa

Poem

Oh, what fun in code I see,
A rabbit hops with glee!
New embedders run with zest,
Async flows pass every test.
In configs and code, we play our part—
Hopping along with a joyful heart! 🐇
Let litellm lead, a work of art!



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (10)
wren-ai-service/src/providers/embedder/litellm.py (8)

1-3: Consider consolidating or reorganizing imports.
Imports from standard libraries (logging, os) and third-party libraries (openai, backoff, haystack, litellm, etc.) can be grouped by convention, improving readability.


5-9: Ensure consistent error-handling strategy.
Using backoff is great for resiliency, but confirm that other exceptions (e.g., network interruptions, HTTP 4xx errors) are also addressed. If not, you may consider broadening your exception handling strategy or logging additional diagnostic info.
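For reference, the retry pattern this comment discusses can be sketched with only the standard library; the real code uses the `backoff` decorator around litellm's async call, and `fake_embed`/`with_backoff` here are illustrative stand-ins:

```python
import asyncio

async def with_backoff(coro_fn, max_tries=3, base_delay=0.01):
    """Retry an async callable with exponential backoff on any exception."""
    for attempt in range(max_tries):
        try:
            return await coro_fn()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the error
            await asyncio.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

async def fake_embed():
    # Simulated flaky API: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return [0.1, 0.2, 0.3]

embedding = asyncio.run(with_backoff(fake_embed))
print(embedding)  # [0.1, 0.2, 0.3]
```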


18-30: Add optional multi-field metadata embedding support.
At present, _prepare_texts_to_embed only concatenates doc.content. Consider if you’d like to embed additional fields from the metadata in the same pass. This might enhance the embedding by providing more context.
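A hedged sketch of what multi-field preparation could look like (plain dicts stand in for haystack `Document` objects, and the `title` meta key is hypothetical):

```python
def prepare_texts_to_embed(docs, meta_fields=("title",)):
    # Prepend selected metadata fields to each document's content before embedding.
    texts = []
    for doc in docs:
        parts = [str(doc["meta"].get(field, "")) for field in meta_fields]
        parts.append(doc["content"])
        texts.append("\n".join(p for p in parts if p))
    return texts

docs = [{"content": "SELECT 1", "meta": {"title": "sample query"}}]
print(prepare_texts_to_embed(docs))  # ['sample query\nSELECT 1']
```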


33-48: Clarify or document default parameter values in constructor.
The constructor for AsyncTextEmbedder has parameters api_key, model, etc., which can be None. It might be helpful to include docstrings or default values in the signature to clarify usage, especially if they are required.


79-94: Update docstring for improved clarity.
AsyncDocumentEmbedder does not have a docstring explaining its usage or requirements. Adding a brief summary of its purpose and parameters helps future contributors.


95-125: Guard against missing usage keys in _embed_batch.
When appending usage statistics, the code assumes prompt_tokens and total_tokens always exist. Future changes to the provider or partial usage returns could break this. Consider a safe retrieval pattern, e.g. meta["usage"].get("prompt_tokens", 0).

- meta["usage"]["prompt_tokens"] += response.usage.prompt_tokens
- meta["usage"]["total_tokens"] += response.usage.total_tokens
+ meta["usage"]["prompt_tokens"] = meta["usage"].get("prompt_tokens", 0) + getattr(response.usage, "prompt_tokens", 0)
+ meta["usage"]["total_tokens"] = meta["usage"].get("total_tokens", 0) + getattr(response.usage, "total_tokens", 0)

127-154: Optionally allow chunking for large documents.
Embedding large documents, even in batches, can be expensive or lead to hitting model limits. Consider offering an optional chunk-based approach to further split very large documents.


156-197: Consider lazy loading or caching for scaled usage.
Initial logging in the provider is helpful, but if usage grows significantly (e.g., frequent model changes, dynamic environment changes), consider advanced patterns like caching or lazy loading. This can optimize repeated instantiations.

wren-ai-service/docs/config_examples/config.ollama.yaml (2)

17-21: Enhance configuration documentation for better user guidance.

Consider adding comments to explain:

  • The reason for using the openai/ prefix in the model name
  • Alternative api_base configurations for non-Docker environments
  • The implications of the 120-second timeout value

112-112: Remove trailing space.

There's a trailing space after nomic-embed-text that should be removed.

-    embedder: litellm_embedder.openai/nomic-embed-text 
+    embedder: litellm_embedder.openai/nomic-embed-text
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 112-112: trailing spaces

(trailing-spaces)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b80bf06 and bba5ae6.

📒 Files selected for processing (4)
  • wren-ai-service/docs/config_examples/config.ollama.yaml (4 hunks)
  • wren-ai-service/src/providers/__init__.py (1 hunks)
  • wren-ai-service/src/providers/embedder/litellm.py (1 hunks)
  • wren-ai-service/tests/pytest/providers/test_loader.py (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
wren-ai-service/docs/config_examples/config.ollama.yaml

[error] 112-112: trailing spaces

(trailing-spaces)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pytest
  • GitHub Check: pytest
  • GitHub Check: Analyze (go)
  • GitHub Check: build-image (linux_arm64_runner, linux/arm64)
🔇 Additional comments (4)
wren-ai-service/src/providers/embedder/litellm.py (1)

58-77: Confirm response.usage access safety.
While the code checks if response.usage, it later references usage keys (e.g., prompt_tokens) in other parts of the file. Ensure all code paths handle partial or missing usage data gracefully.

wren-ai-service/tests/pytest/providers/test_loader.py (1)

35-36: Great addition to test coverage.
Verifying "litellm_embedder" ensures that the new provider is properly loaded. This test helps confirm that the provider is wired up.

wren-ai-service/docs/config_examples/config.ollama.yaml (2)

43-43: LGTM! Consistent embedder configuration across all pipelines.

The embedder references have been correctly and consistently updated to use the new LiteLLM format throughout all pipeline configurations.

Also applies to: 46-46, 49-49, 53-53, 56-56, 92-92, 103-103, 109-109, 112-112, 115-115


Line range hint 1-141: Verify security considerations in the example configuration.

Please ensure:

  1. The example configuration doesn't expose any sensitive information
  2. Add a comment warning users about securing their API endpoints
  3. Consider adding a note about the implications of having development: true in production environments
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 42-42: wrong indentation: expected 0 but found 2

(indentation)

wren-ai-service/src/providers/__init__.py (review thread resolved)
@cyyeh cyyeh marked this pull request as draft January 28, 2025 23:48
@cyyeh cyyeh marked this pull request as ready for review February 1, 2025 07:43
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (6)
wren-ai-service/docs/config_examples/config.google_ai_studio.yaml (2)

16-21: Embedder Section Update & Indentation Warning:
The embedder configuration correctly updates the provider to litellm_embedder and the model to gemini/text-embedding-004. However, YAMLlint reports an indentation warning on line 19 (wrong indentation: expected 0 but found 2). Please review the list-item indentation under models: to ensure it meets your YAML style guidelines.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 19-19: wrong indentation: expected 0 but found 2

(indentation)


Line range hint 43-115: Pipeline Embedder References & Trailing Space Cleanup:
Pipeline entries have been consistently updated to use litellm_embedder.text-embedding-004. Additionally, static analysis detected trailing spaces on line 112. Removing these extra spaces will clean up the YAML formatting.

For example, update line 112 as follows:

-    embedder: litellm_embedder.text-embedding-004 
+    embedder: litellm_embedder.text-embedding-004
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 42-42: wrong indentation: expected 0 but found 2

(indentation)

wren-ai-service/docs/config_examples/config.groq.yaml (2)

17-23: Embedder Block Indentation Issue:
The embedder section is updated correctly with the provider set to litellm_embedder and model text-embedding-3-large. However, YAMLlint reports an indentation warning on line 21. Please adjust the indentation of the list item to conform to YAML standards (e.g., remove unnecessary leading spaces).

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 21-21: wrong indentation: expected 0 but found 2

(indentation)


Line range hint 45-117: Pipeline Embedder References & Trailing Space Removal:
Pipeline entries now correctly reference litellm_embedder.text-embedding-3-large. Note that a trailing space was detected on line 114; removing it will improve the YAML formatting.

For instance, update line 114 as follows:

-    embedder: litellm_embedder.text-embedding-3-large 
+    embedder: litellm_embedder.text-embedding-3-large
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 44-44: wrong indentation: expected 0 but found 2

(indentation)

wren-ai-service/docs/config_examples/config.deepseek.yaml (2)

33-39: Embedder Configuration Indentation Warning:
The embedder block now uses litellm_embedder with model text-embedding-3-large, which is correct. However, YAMLlint indicates an indentation issue on line 37 for the list item. Please adjust the indentation to meet YAML formatting standards.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 37-37: wrong indentation: expected 0 but found 2

(indentation)


Line range hint 61-133: Pipeline Embedder Updates & Trailing Space Cleanup:
Pipeline entries consistently use litellm_embedder.text-embedding-3-large, maintaining alignment with the new provider changes. Additionally, there is a trailing space on line 130. Removing that trailing space will improve file cleanliness.

For example, update line 130 as shown:

-    embedder: litellm_embedder.text-embedding-3-large 
+    embedder: litellm_embedder.text-embedding-3-large
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 60-60: wrong indentation: expected 0 but found 2

(indentation)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bba5ae6 and b1595e2.

📒 Files selected for processing (5)
  • wren-ai-service/docs/config_examples/config.deepseek.yaml (4 hunks)
  • wren-ai-service/docs/config_examples/config.google_ai_studio.yaml (4 hunks)
  • wren-ai-service/docs/config_examples/config.groq.yaml (4 hunks)
  • wren-ai-service/docs/config_examples/config.ollama.yaml (4 hunks)
  • wren-ai-service/tests/pytest/providers/test_loader.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • wren-ai-service/tests/pytest/providers/test_loader.py
🧰 Additional context used
🪛 YAMLlint (1.35.1)
wren-ai-service/docs/config_examples/config.google_ai_studio.yaml

[warning] 19-19: wrong indentation: expected 0 but found 2

(indentation)


[error] 112-112: trailing spaces

(trailing-spaces)

wren-ai-service/docs/config_examples/config.groq.yaml

[warning] 21-21: wrong indentation: expected 0 but found 2

(indentation)


[error] 114-114: trailing spaces

(trailing-spaces)

wren-ai-service/docs/config_examples/config.deepseek.yaml

[warning] 37-37: wrong indentation: expected 0 but found 2

(indentation)


[error] 130-130: trailing spaces

(trailing-spaces)

wren-ai-service/docs/config_examples/config.ollama.yaml

[error] 112-112: trailing spaces

(trailing-spaces)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: pytest
🔇 Additional comments (2)
wren-ai-service/docs/config_examples/config.ollama.yaml (2)

17-21: Embedder Configuration Update:
The embedder block now uses litellm_embedder with the updated model value openai/nomic-embed-text, which aligns with the new provider requirements.


Line range hint 43-115: Pipeline Embedder References Update:
All pipeline entries have been updated to reference litellm_embedder.openai/nomic-embed-text, ensuring consistency across configurations. Please verify that the associated llm and other service references are still aligned with your overall service architecture.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 42-42: wrong indentation: expected 0 but found 2

(indentation)

@cyyeh cyyeh requested a review from paopa February 3, 2025 01:38
wren-ai-service/pyproject.toml (review thread resolved)
docker/config.example.yaml (outdated; review thread resolved)
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
wren-ai-service/src/providers/document_store/qdrant.py (2)

244-247: Consider improving the error message specificity.

The error message could be more helpful by including the actual value that caused the error.

-                msg = f"DocumentStore.write_documents() expects a list of Documents but got an element of {type(doc)}."
+                msg = f"DocumentStore.write_documents() expects a list of Documents but got an element of type {type(doc)}: {repr(doc)}"

274-289: Consider handling the last batch size in progress bar update.

The progress bar update uses a fixed write_batch_size which might not be accurate for the last batch.

-                progress_bar.update(self.write_batch_size)
+                progress_bar.update(len(document_batch))
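A quick stand-alone check of why the suggested change is more accurate on the final, short batch (a plain counter stands in for the progress bar):

```python
def batched(items, size):
    """Yield successive fixed-size slices; the last one may be shorter."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

docs = list(range(10))
batch_size = 4

accurate = 0
fixed = 0
for batch in batched(docs, batch_size):
    accurate += len(batch)   # suggested update: 4 + 4 + 2 = 10
    fixed += batch_size      # original update:  4 + 4 + 4 = 12, overshoots

print(accurate, fixed)  # 10 12
```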
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6b66ef8 and ef91a18.

📒 Files selected for processing (1)
  • wren-ai-service/src/providers/document_store/qdrant.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: pytest
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
wren-ai-service/src/providers/document_store/qdrant.py (1)

357-362: LGTM! The addition follows existing patterns.

The new line follows the established pattern of initializing document stores for different datasets. The change is well-integrated with the existing code structure.

Let's verify the usage of this new document store:

✅ Verification successful

The "sql_pairs" document store initialization is properly integrated.

The shell script output confirms that the "sql_pairs" dataset is referenced in multiple parts of the codebase (including tests, pipelines, and configuration files), thereby validating the new initialization in the Qdrant provider.

  • Confirmed consistent usage across tests (e.g., test_sql_pairs_preparation.py, test_sql_pairs_deletion.py) and pipelines.
  • Configuration files and other modules (e.g., in src/config.py and src/web/v1/services/sql_pairs_preparation.py) reference "sql_pairs" appropriately.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for references to the new sql_pairs dataset
# Expected: Find references to confirm this dataset is used elsewhere in the codebase

# Search for direct references to sql_pairs
rg -l "sql_pairs"

# Search for any SQL-related dataset configurations
rg -l "dataset.*sql" 

Length of output: 1979

@cyyeh cyyeh force-pushed the feat/ai-service/litellm-embedder branch from ef91a18 to 1f0d299 Compare February 3, 2025 05:51
paopa (Member) left a comment

LGTM. thanks for the work!

@paopa paopa merged commit b3543c9 into main Feb 3, 2025
8 checks passed
@paopa paopa deleted the feat/ai-service/litellm-embedder branch February 3, 2025 06:10