[request] Feat: Automation Pipeline with Custom Models for a RAG Knowledge Base #2437

Open
onehr opened this issue Jan 4, 2025 · 1 comment
Labels: enhancement (New feature or request)

Comments

onehr commented Jan 4, 2025

Feature Request: Automation Pipeline with Custom Models for a RAG Knowledge Base

Overview

I would like to propose the development of an Automation Pipeline tailored for building a Retrieval-Augmented Generation (RAG) Knowledge Base. The envisioned pipeline follows a structured flow:

Information (RSS) → Processing (extensions) → Embedding (LLM) → RAG

The aim is a versatile, extensible platform that supports multiple workflows and integrations, improving the overall flexibility and functionality of the knowledge base.
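To make the flow concrete, here is a minimal TypeScript sketch of what the stages could look like. All of the names here (`FeedEntry`, `Processor`, `EmbeddingProvider`, `VectorStore`, `runPipeline`) are hypothetical illustrations, not existing Follow APIs:

```ts
// Hypothetical types sketching the proposed flow — not Follow's actual API.

interface FeedEntry {
  id: string;
  title: string;
  content: string; // raw text/HTML pulled from the RSS feed
}

// Processing (extensions): user-supplied transforms over feed entries.
type Processor = (entry: FeedEntry) => Promise<FeedEntry>;

// Embedding (LLM): anything that can turn text into a vector.
interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
}

// RAG: a store that the retrieval step can query later.
interface VectorStore {
  upsert(id: string, vector: number[], metadata: FeedEntry): Promise<void>;
}

// Information (RSS) → Processing (extensions) → Embedding (LLM) → RAG
async function runPipeline(
  entries: FeedEntry[],
  processors: Processor[],
  embedder: EmbeddingProvider,
  store: VectorStore,
): Promise<void> {
  for (const entry of entries) {
    let processed = entry;
    for (const proc of processors) {
      processed = await proc(processed); // extensions run in order
    }
    const vector = await embedder.embed(processed.content);
    await store.upsert(processed.id, vector, processed);
  }
}
```

The `Processor` type is also one possible answer to question 1 below: an extension would only need to implement a single function signature, without touching the rest of the codebase.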

Motivation

As the demand for sophisticated knowledge management systems grows, there's a need for a robust pipeline that not only handles data efficiently but also integrates seamlessly with various models and tools. By incorporating custom models and supporting both local and remote Large Language Models (LLMs), the platform can cater to diverse use cases and user requirements.

Key Questions

  1. How can users easily program against feed content?

    • What are the most effective methods to enable users to develop extensions, integrations, or add-ons without extensive knowledge of the codebase?
    • How can we balance flexibility with simplicity to cater to both technical and non-technical users?
  2. How can the system integrate with local or remote LLMs? (A sketch of one possible abstraction follows this list.)

    • What architectural considerations are necessary to support both local installations and remote API integrations?
    • How can we ensure security and scalability when connecting to diverse LLM sources?
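To illustrate question 2, one possible architecture is a single provider interface that hides whether the model runs locally or remotely. This is purely a sketch under my own assumptions; the endpoints shown are the public OpenAI and Ollama embedding APIs, used here only as examples of a remote and a local backend, not as confirmed integrations:

```ts
// Sketch: one interface, two interchangeable backends. Illustrative only.
interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
}

// Remote backend: an OpenAI-style HTTP embeddings API.
// The API key stays in app configuration, never inside extension code.
class OpenAIProvider implements EmbeddingProvider {
  constructor(
    private apiKey: string,
    private model = "text-embedding-3-small",
  ) {}

  async embed(text: string): Promise<number[]> {
    const res = await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model: this.model, input: text }),
    });
    const json = await res.json();
    return json.data[0].embedding;
  }
}

// Local backend: an Ollama instance running on the user's machine.
class OllamaProvider implements EmbeddingProvider {
  constructor(
    private baseUrl = "http://localhost:11434",
    private model = "nomic-embed-text",
  ) {}

  async embed(text: string): Promise<number[]> {
    const res = await fetch(`${this.baseUrl}/api/embeddings`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, prompt: text }),
    });
    const json = await res.json();
    return json.embedding;
  }
}
```

Because both classes satisfy the same interface, the pipeline sketched above never needs to know which kind of LLM it is talking to, which keeps the local-versus-remote decision a pure configuration choice.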

I look forward to the community's feedback, thanks!

Suggested solution

I don’t have a specific solution at the moment; as a regular user, I am not yet familiar with how Follow’s internals are implemented.

Alternative

No response

Additional context

No response

Validations

  • Check that there isn't already an issue that requests the same feature to avoid creating a duplicate.
  • This issue is valid