Cortex.cpp


Documentation - API Reference - Changelog - Bug reports - Discord

Cortex.cpp is currently in active development.

Overview

Cortex is a local AI API platform for running and customizing LLMs.

Key Features:

  • Pull models from Hugging Face or the Cortex Built-in Models hub
  • Models stored in universal file formats (vs blobs)
  • Swappable engines (default: llamacpp; future: ONNXRuntime, TensorRT-LLM)
  • Deployable as a standalone API server, or integrated into apps like Jan.ai

Coming soon (available now on cortex-nightly):

  • Engines management (install specific llama-cpp versions and variants)
  • Hardware detection & activation (current: Nvidia; future: AMD, Intel, Qualcomm)

Cortex's roadmap is to implement the full OpenAI API, including the Tools, Runs, Multi-modal, and Realtime APIs.

Local Installation

Cortex has a Local Installer that packages all required dependencies, so no internet connection is required during installation.

Cortex also has a Network Installer which downloads the necessary dependencies from the internet during the installation.

  • Windows: cortex.exe
  • macOS (Silicon/Intel): cortex.pkg
  • Linux (Debian-based distros): cortex-linux-local-installer.deb

  • For Linux: Download the installer and run the following command in a terminal:
    # Linux debian based distros
    curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local

    # Other Linux distros
    curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
  • The binary will be installed in the /usr/bin/ directory.

Usage

CLI

After installation, you can run Cortex.cpp from the command line by typing cortex --help.

# Run a Model
cortex pull llama3.2                                    
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
cortex run llama3.2                          

# Resource Management
cortex ps                               # view active models & RAM/VRAM usage
cortex models stop llama3.2

# Available on cortex-nightly:
cortex engines install llama-cpp -m     # list versions and variants
cortex hardware list                    # hardware detection
cortex hardware activate

cortex stop

Refer to our Quickstart and CLI documentation for more details.

API

Cortex.cpp includes a REST API accessible at localhost:39281.

Refer to our API documentation for more details.
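As an illustrative sketch (the API documentation above is authoritative): assuming the server is running on the default port and exposes the OpenAI-compatible /v1/chat/completions route, and that a llama3.2 model has already been pulled and started, a chat request might look like:

```shell
# Assumes a running Cortex server on localhost:39281 with llama3.2 loaded.
# Route and payload follow the OpenAI chat-completions convention.
curl -s http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [
          {"role": "user", "content": "Say hello in one sentence."}
        ]
      }'
```

The response is a JSON object in the OpenAI chat-completion shape; check the API reference for the exact fields Cortex returns.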

Models

Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access:

  • Hugging Face: GGUF models, e.g. author/Model-GGUF
  • Cortex Built-in Models

Once downloaded, the model's .gguf file and its model.yml are stored in ~/cortexcpp/models.
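The model.yml file describes how the engine should load the model. As a hypothetical sketch only (the field names below are illustrative assumptions, not copied from the Cortex docs; inspect the file Cortex actually generates for the real schema):

```yaml
# Illustrative only -- check the model.yml Cortex writes at download time.
model: llama3.2:3b-gguf-q4-km   # model id and quantization tag
engine: llama-cpp               # inference engine used to load this model
ctx_len: 4096                   # context window in tokens
ngl: 33                         # number of layers to offload to the GPU
```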

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
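These RAM figures can be sanity-checked with a rough rule of thumb (this is a general estimate, not a formula from the Cortex docs): weight memory is roughly parameter count times bits per weight, plus headroom for the KV cache and runtime.

```python
# Rough estimate of RAM needed for a quantized model's weights.
# Not from the Cortex docs: real usage varies with context length,
# engine settings, and quantization scheme.

def approx_model_ram_gb(params_billion: float, bits_per_weight: float,
                        overhead: float = 1.2) -> float:
    """Weight bytes = params * bits / 8, plus ~20% runtime headroom."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization needs roughly 4.2 GB, which is why
# 8 GB of system RAM is a comfortable floor for 7B models.
print(round(approx_model_ram_gb(7, 4), 1))   # → 4.2
```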

Cortex Built-in Models & Quantizations

| Model | Command (llama.cpp engine) |
|---|---|
| phi-3.5 | cortex run phi3.5 |
| llama3.2 | cortex run llama3.2 |
| llama3.1 | cortex run llama3.1 |
| codestral | cortex run codestral |
| gemma2 | cortex run gemma2 |
| mistral | cortex run mistral |
| ministral | cortex run ministral |
| qwen2 | cortex run qwen2.5 |
| openhermes-2.5 | cortex run openhermes-2.5 |
| tinyllama | cortex run tinyllama |

View all Cortex Built-in Models.

Cortex supports multiple quantizations for each model.

❯ cortex-nightly pull llama3.2
Downloaded models:
    llama3.2:3b-gguf-q2-k

Available to download:
    1. llama3.2:3b-gguf-q3-kl
    2. llama3.2:3b-gguf-q3-km
    3. llama3.2:3b-gguf-q3-ks
    4. llama3.2:3b-gguf-q4-km (default)
    5. llama3.2:3b-gguf-q4-ks
    6. llama3.2:3b-gguf-q5-km
    7. llama3.2:3b-gguf-q5-ks
    8. llama3.2:3b-gguf-q6-k
    9. llama3.2:3b-gguf-q8-0

Select a model (1-9): 

Advanced Installation

Network Installer (Stable)

Cortex.cpp is also available with a Network Installer: a smaller download that fetches the necessary dependencies from the internet during installation.

Linux debian based distros: cortex-linux-network-installer.deb

Beta & Nightly Versions (Local Installer)

Cortex releases Beta and Nightly versions for advanced users to try new features (we appreciate your feedback!).

  • Beta (early preview): CLI command: cortex-beta
  • Nightly (released every night): CLI command: cortex-nightly
    • Nightly automatically pulls the latest changes from the upstream llama.cpp repo, creates a PR, and runs tests.
    • If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.
| Version | Windows | macOS | Linux (Debian-based) |
|---|---|---|---|
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |

Network Installer

Network Installer downloads for each release channel:

| Version | Windows | macOS | Linux (Debian-based) |
|---|---|---|---|
| Stable (Recommended) | cortex.exe | cortex.pkg | cortex.deb |
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |

Build from Source

Windows

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
cmake --build . --config Release
  5. Verify that Cortex.cpp is installed correctly by printing the help text:
cortex -h

macOS

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by printing the help text:
cortex -h

Linux

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by printing the help text:
cortex -h

Uninstallation

Windows

  1. Open the Windows Control Panel.
  2. Navigate to Add or Remove Programs.
  3. Search for cortexcpp and double-click to uninstall. (For beta and nightly builds, search for cortexcpp-beta and cortexcpp-nightly respectively.)

macOS

An uninstaller script ships with the binary and is installed in the /usr/local/bin/ directory. It is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds. Run it with:

sudo sh cortex-uninstall.sh

Linux

sudo apt remove cortexcpp

Contact Support
