# Requirements
What it takes to run Kowalski
## Self-host requirements
You will only need all of the following if you are not running Kowalski in Docker. Read "Running with Docker" for more information.
- Bun (latest is suggested)
- A Telegram bot (create one at @BotFather)
- FFmpeg (only for the `/yt` command; a startup check is sketched below)
- Docker and Docker Compose (only required for the Docker setup)
- Postgres
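
Bun, FFmpeg, and Postgres must all be reachable from the environment the bot runs in. As a rough illustration (not part of Kowalski's codebase; `hasFfmpeg` is a hypothetical helper), a startup check for FFmpeg under Bun could look like this:

```ts
// Hypothetical startup check: verify FFmpeg is on PATH before enabling /yt.
function hasFfmpeg(): boolean {
  try {
    // Bun.spawnSync runs the command and blocks until it exits
    return Bun.spawnSync(["ffmpeg", "-version"]).exitCode === 0;
  } catch {
    return false; // spawning failed, so the binary is not installed
  }
}

if (!hasFfmpeg()) {
  console.warn("FFmpeg not found; the /yt command will not work.");
}
```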
## AI Requirements
Using AI features is not recommended for every user who plans to host Kowalski. They require a server or computer that can handle intense load while users are active. In the future, we plan to add support for LLM APIs to remove this requirement.
### CPU-Only
A CPU with at least 8 cores is recommended; otherwise, AI commands will be extremely slow and not worth the stress you are putting on the CPU.

If you plan to use the CPU, you will also need a lot of RAM to load the models themselves. 16GB is suggested at a minimum, and larger models can require upwards of 64-256GB of RAM. If you have a GPU available, you can use it to speed up inference.
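
To put those numbers in perspective, a common rule of thumb (our assumption, not an official sizing guide) is that a quantized model needs roughly parameters × bits-per-weight ÷ 8 bytes for its weights, plus some runtime overhead. A minimal sketch, with a hypothetical `estimateModelRamGB` helper:

```ts
// Rough rule of thumb (assumption, not an official figure): a quantized model
// needs about (parameters × bits per weight / 8) bytes for its weights.
function estimateModelRamGB(paramsBillions: number, bitsPerWeight = 4): number {
  const weightsGB = (paramsBillions * 1e9 * bitsPerWeight) / 8 / 1024 ** 3;
  return weightsGB * 1.2; // assume ~20% extra for KV cache and runtime
}

console.log(estimateModelRamGB(8).toFixed(1));  // ≈ 4.5 GB for an 8B model at 4-bit
console.log(estimateModelRamGB(70).toFixed(1)); // ≈ 39.1 GB for a 70B model at 4-bit
```

By this estimate, an 8B model fits comfortably in 16GB of RAM, while a 70B model already needs well over 32GB, which is why larger models call for server-class amounts of memory.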
### GPU-Only
GPU support has not been tested. With some extra configuration, you should have no problem using your GPU, as Ollama has excellent GPU support. Using a GPU will speed up the model's responses significantly. We are not rich enough to afford one, so if you have tested Kowalski with a GPU, please let us know.
Your GPU will require enough VRAM to load the models, which will limit the size of the models you can run. As mentioned above, these models can be quite large.
Please ensure your GPU is compatible with Ollama as well. Supported GPUs are listed in the Ollama documentation.
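
Once Ollama is running, you can confirm that a model is actually offloaded to the GPU. The sketch below queries Ollama's `/api/ps` endpoint (part of Ollama's REST API; the default listen address is assumed) and reports how much of each loaded model sits in VRAM:

```ts
// List the models Ollama currently has loaded and their VRAM usage.
// Assumes Ollama's default listen address of localhost:11434.
const res = await fetch("http://localhost:11434/api/ps");
const { models } = (await res.json()) as {
  models: { name: string; size: number; size_vram: number }[];
};

for (const m of models) {
  // size is the model's total memory footprint; size_vram is the GPU share
  const pct = m.size > 0 ? ((m.size_vram / m.size) * 100).toFixed(0) : "0";
  console.log(`${m.name}: ${pct}% in VRAM`);
}
```

If the VRAM share is well below 100%, Ollama is splitting the model between the GPU and system RAM, and responses will be noticeably slower.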