
Commit

docs: Fix typos
iusztinpaul committed Jul 31, 2024
1 parent 8634727 commit 1ce2297
Showing 2 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions INSTALL_AND_USAGE.md
@@ -4,8 +4,8 @@

Before starting to install the LLM Twin project, make sure you have installed the following dependencies on your system:

- - (Docker ">=v27.0.3")[https://www.docker.com/]
- - (GNU Make ">=3.81")[https://www.gnu.org/software/make/]
+ - [Docker ">=v27.0.3"](https://www.docker.com/)
+ - [GNU Make ">=3.81"](https://www.gnu.org/software/make/)

The whole LLM Twin application will be run locally using Docker.
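Both prerequisites can be verified from the command line before starting the stack. A minimal sketch, assuming only that each tool exposes a `--version` flag (Docker and GNU Make both do):

```shell
#!/bin/sh
# Print the installed version of each required tool, or flag it as missing.
for tool in docker make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found: $("$tool" --version | head -n 1)"
  else
    echo "missing: $tool" >&2
  fi
done
```

Compare the printed versions against the minimums listed above before continuing.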

@@ -92,7 +92,7 @@ docker logs llm-twin-bytewax
```
You should see logs reflecting the cleaning, chunking, and embedding operations (without any errors, of course).

- To check that the Qdrant `vector DB` is populated successfully, go to its dashboard at localhost:6333/dashboard. There, you should see the repositories or article collections created and populated.
+ To check that the Qdrant `vector DB` is populated successfully, go to its dashboard at [localhost:6333/dashboard](localhost:6333/dashboard). There, you should see the repositories or article collections created and populated.

> [!NOTE]
> If using the cloud version of Qdrant, go to your Qdrant account and cluster to see the same thing as in the local dashboard.
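Besides the dashboard, the same check can be done against Qdrant's REST API. A minimal sketch, assuming the default local port 6333 and that `curl` is installed; the collection names in the JSON response are whatever the ingestion pipeline created:

```shell
#!/bin/sh
# Ask the local Qdrant instance which collections exist. The response is
# JSON whose "collections" array carries one "name" entry per collection.
response=$(curl -s http://localhost:6333/collections)
if printf '%s' "$response" | grep -q '"name"'; then
  echo "Qdrant collections: $response"
else
  echo "no collections found; check the Bytewax logs" >&2
fi
```

An empty `collections` array usually means the ingestion pipeline has not run (or failed), so re-check the `llm-twin-bytewax` logs in that case.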
4 changes: 2 additions & 2 deletions README.md
@@ -1,6 +1,6 @@
<div align="center">
<h2>LLM Twin Course: Building Your Production-Ready AI Replica</h2>
- <h1>Learn to build an end-to-end system for production-ready LLM & RAG systems by building your LLM Twin</h1>
+ <h1>Learn to architect and implement a production-ready LLM & RAG system by building your LLM Twin</h1>
<h3>From data gathering to productionizing LLMs using LLMOps good practices.</h3>
<i>by <a href="https://github.com/iusztinpaul">Paul Iusztin</a>, <a href="https://github.com/alexandruvesa">Alexandru Vesa</a> and <a href="https://github.com/Joywalker">Alexandru Razvant</a></i>
</div>
@@ -153,7 +153,7 @@ To understand how to install and run the LLM Twin code, go to the [INSTALL_AND_U

The bonus Superlinked series has an extra dedicated [README](https://github.com/decodingml/llm-twin-course/blob/main/6-bonus-superlinked-rag/README.md) that you can access under the [6-bonus-superlinked-rag](https://github.com/decodingml/llm-twin-course/tree/main/6-bonus-superlinked-rag) directory.

- Here we explain all the changes made to the code to run it with the improved RAG layer powered by [Superlinked](https://rebrand.ly/superlinked-github).
+ In that section, we explain how to run it with the improved RAG layer powered by [Superlinked](https://rebrand.ly/superlinked-github).

## Meet your teachers!

