PrivateGPT v0.4.0

Mar 5, 2024

Today we are introducing PrivateGPT v0.4.0! In this release, we have made the project more modular, flexible, and powerful, making it an ideal choice for production-ready applications.

This version comes packed with big changes:

LlamaIndex v0.10 full migration

PrivateGPT utilizes LlamaIndex as part of its technical stack. LlamaIndex is a data framework for LLM-based applications that benefit from context augmentation.

Following the upgrade to LlamaIndex's recent v0.10, which included significant changes to both its library packaging and codebase organization, we decided to perform a similar cleanup to improve PrivateGPT's installation process and prepare the project to adopt new features.

With the help of our friends from the LlamaIndex team, PrivateGPT has gone through a full refactor to adapt its codebase to the new version of LlamaIndex. This new version comes with out-of-the-box performance improvements and opens the door to new functionalities we’ll be incorporating soon!

⚠️ Note: If you are updating from an existing PrivateGPT installation, you may need to perform a full clean install, resetting your virtual environment. Otherwise, cached artifacts from the previous LlamaIndex version may cause trouble.
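
If you do need that clean install, a minimal sketch of one way to do it with Poetry (assuming Poetry 1.2 or later, which is required for the --all flag):

git pull                       # update to the v0.4.0 codebase
poetry env remove --all        # delete the old virtual environment(s)
poetry install --extras "..."  # reinstall, picking the extras described below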

Revamped installation and dependency management

This new version makes PrivateGPT more modular to better align it with the different setups required by production-ready applications, whether they are local, cloud-based, or mixed. That modularization comes with a new installation process.

From now on, during PrivateGPT installation you’ll get to decide exactly which components to install:

LLM: choose the LLM provider from the following options:

  • llms-ollama: adds support for Ollama LLM, the easiest way to get a local LLM running, requires Ollama running locally

  • llms-llama-cpp: adds support for local LLM using LlamaCPP - expect a messy installation process on some platforms

  • llms-sagemaker: adds support for Amazon Sagemaker LLM, requires Sagemaker inference endpoints

  • llms-openai: adds support for OpenAI LLM, requires OpenAI API key

  • llms-openai-like: adds support for 3rd party LLM providers that are compatible with OpenAI's API

Embeddings: choose the Embeddings model provider:

  • embeddings-ollama: adds support for Ollama Embeddings, requires Ollama running locally

  • embeddings-huggingface: adds support for local Embeddings using HuggingFace

  • embeddings-sagemaker: adds support for Amazon Sagemaker Embeddings, requires Sagemaker inference endpoints

  • embeddings-openai: adds support for OpenAI Embeddings, requires OpenAI API key

Vector store: choose which vector database to use:

  • vector-stores-qdrant: adds support for Qdrant vector store

  • vector-stores-chroma: adds support for Chroma DB vector store

  • vector-stores-postgres: adds support for Postgres vector store

UI: whether to add support for PrivateGPT’s Gradio-based UI, or just go with the API

To install only the required dependencies, PrivateGPT offers different extras that can be combined during the installation process.

poetry install --extras "<extra1> <extra2>..."

For example:

poetry install --extras "ui llms-ollama embeddings-huggingface vector-stores-qdrant"

This will install PrivateGPT with support for the UI, Ollama as the local LLM provider, local HuggingFace embeddings, and Qdrant as the vector database.

You can mix and match as you need, ensuring that only the necessary dependencies are installed. We’ll be adding support for more options in upcoming versions!

Check all the information in the installation docs at https://docs.privategpt.dev/installation/getting-started/installation

More and better documented setup examples

We’ve added a set of ready-to-use example setups covering different needs.

  • Local, Ollama-powered setup: the easiest local setup to install

  • Private, Sagemaker-powered setup: using Sagemaker in a private AWS cloud

  • Non-Private, OpenAI-powered test setup: to try PrivateGPT powered by GPT-3.5/GPT-4

  • Local, LlamaCPP-powered setup: the usual local setup, hard to get running on certain systems

Every setup is backed by a settings-xxx.yaml file in the root of the project, where you can fine-tune the configuration to your needs (parameters like the model to be used, the embeddings dimensionality, etc.).
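
The setup to run is selected through the PGPT_PROFILES environment variable, which loads the matching settings-<profile>.yaml on top of the base settings.yaml. A quick sketch (the exact profile names are those of the settings-*.yaml files present in your project root):

PGPT_PROFILES=sagemaker make run   # loads settings-sagemaker.yaml over settings.yaml
PGPT_PROFILES=openai make run      # loads settings-openai.yaml over settings.yaml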

You can find a detailed guide in the installation section at https://docs.privategpt.dev/installation/getting-started/installation

Ollama as the new recommended local setup

We are recommending Ollama as both the LLM and Embeddings provider for local setups. Ollama simplifies the process of running language models locally; the project is focused on enhancing the experience of setting up local models and getting the most out of your local hardware. It is far easier than running on LlamaCPP, the method we’ve been using by default in the past, which has caused lots of headaches for PrivateGPT users.

We’ll share more about this partnership in a future blog post.

In order to use PrivateGPT with Ollama, follow these simple steps:

  • Go to ollama.ai and follow the instructions to install Ollama on your machine.

  • After the installation, make sure the Ollama desktop app is closed, since the steps below start the Ollama server manually and the desktop app’s own background server would occupy the same port.

  • Install the models to be used; the default settings-ollama.yaml is configured to use the Mistral 7B LLM (~4GB) and nomic-embed-text Embeddings (~275MB). Therefore:

ollama pull mistral
ollama pull nomic-embed-text
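
You can optionally verify that both models were downloaded correctly with the Ollama CLI:

ollama list   # both mistral and nomic-embed-text should appear in the list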


  • Start the Ollama service (it will start a local inference server, serving both the LLM and the Embeddings models):

ollama serve
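
To confirm the server is up before continuing, you can query its root endpoint (Ollama listens on port 11434 by default):

curl http://localhost:11434   # should respond with 'Ollama is running'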


  • Once done, in a different terminal, you can install PrivateGPT with the following command:

poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"


  • Once installed, you can run PrivateGPT. Make sure you have a working Ollama instance running locally before executing the following command.

PGPT_PROFILES=ollama make run
# On Windows you'll need to set the PGPT_PROFILES env var differently (see the sketch below)
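
For reference, a sketch of the Windows equivalents, assuming make is available in your shell (e.g. via Git Bash or a similar environment):

# PowerShell
$env:PGPT_PROFILES = "ollama"; make run

# cmd.exe (set the variable on its own line to avoid a trailing space in the value)
set PGPT_PROFILES=ollama
make run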


PrivateGPT will use the existing settings-ollama.yaml configuration file, which is preconfigured to use Ollama for both the LLM and Embeddings, and Qdrant as the vector database. Review it and adapt it to your needs (different models, a different Ollama port, etc.).
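
For reference, the Ollama-related parts of that file look roughly like the sketch below. This is illustrative; key names and defaults may differ between versions, so treat the settings-ollama.yaml shipped with the project as the source of truth.

llm:
  mode: ollama

embedding:
  mode: ollama

ollama:
  llm_model: mistral                  # any LLM you have pulled with `ollama pull`
  embedding_model: nomic-embed-text   # the Embeddings model
  api_base: http://localhost:11434    # change if Ollama runs on a different port

vectorstore:
  database: qdrant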

Conclusion

This update brings PrivateGPT up to date with the latest dependencies and makes it a more modular and flexible project. It sets the path for the big updates that are coming next. It is a breaking change though, so if you have any questions, come say hi on Discord!
