Installation & Upgrade¶
Install¶
Homebrew (macOS / Linux)¶
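A typical Homebrew install, assuming the project publishes a formula (the tap name `aallbrig/tap` is an assumption; check the project README for the actual tap):

```shell
# Assumed tap and formula name -- verify against the project README
brew install aallbrig/tap/assgen
```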
Chocolatey (Windows)¶
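A typical Chocolatey install, assuming an `assgen` package is published (the package id is an assumption; verify it on chocolatey.org first):

```shell
# Run from an elevated (Administrator) shell; package id is an assumption
choco install assgen
```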
pip (all platforms)¶
pip install assgen
# With GPU inference support (for the server machine)
pip install "assgen[inference]"
Windows with CUDA (server machine)¶
On a Windows machine with an NVIDIA GPU, install PyTorch with CUDA first:
python -m venv C:\assgen-venv
C:\assgen-venv\Scripts\activate
# Install CUDA-enabled PyTorch, then assgen
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install "assgen[inference]"
See Server Setup (Windows) for full details.
From a GitHub Release¶
Each tagged release publishes a Python wheel (.whl) on the
Releases page.
- Go to github.com/aallbrig/assgen/releases
- Find the latest release (or the version you want)
- Download the .whl file under Assets
- Install it:
pip install assgen-<version>-py3-none-any.whl
# With inference extras
pip install "assgen-<version>-py3-none-any.whl[inference]"
From source (development install)¶
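A minimal sketch of a development install from the repository on the Releases page; the editable (`-e`) install is standard pip practice, though the exact dev workflow may differ (check the project's contributing guide):

```shell
# Clone the repository and install it in editable mode
git clone https://github.com/aallbrig/assgen.git
cd assgen
pip install -e .
```

An editable install means changes to the source tree take effect without reinstalling.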
Upgrade¶
The easiest way to stay current is the built-in upgrade command:
# Check if a newer version exists, then prompt to upgrade
assgen upgrade
# Just check — exit 0 if up-to-date, exit 1 if outdated (useful in scripts)
assgen upgrade --check
# Upgrade without a confirmation prompt
assgen upgrade --yes
# Include pre-releases
assgen upgrade --pre
The upgrade command:
- Fetches the latest release info from the GitHub Releases API
- Compares it against the running version
- If newer, shows a summary of the release notes
- Runs pip install assgen==<version> in the same Python environment
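The `--check` exit code described above can drive a script or CI step. A small sketch using only the flags documented in this section:

```shell
# Exit 0 means up-to-date; exit 1 means a newer release exists
if assgen upgrade --check; then
  echo "assgen is up to date"
else
  # Upgrade non-interactively (no confirmation prompt)
  assgen upgrade --yes
fi
```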
Keeping inference extras
The upgrade command does not preserve pip extras. If you installed with
pip install "assgen[inference]", re-run
pip install "assgen[inference]==<new-version>" after upgrading. (Handling
extras automatically in the upgrade workflow is planned for a future release.)
Verify installation¶
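A quick sanity check that the package is installed and the CLI is on your PATH (`pip show` is standard pip; the `--version` flag is a common CLI convention and an assumption here):

```shell
# Confirm the package is installed and inspect its metadata
pip show assgen

# Confirm the CLI resolves and reports its version (flag is an assumption)
assgen --version
```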
Hardware recommendations¶
| GPU | VRAM | Suitable models |
|---|---|---|
| RTX 4070 / 3080 | 12 GB | SDXL, TripoSR, MusicGen-Medium, AudioLDM2 |
| RTX 4090 / 3090 | 24 GB | All catalog models |
| Any CPU | — | Orchestration, job queue, text models |
Set the inference device in ~/.config/assgen/server.yaml:
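A minimal sketch of that file; the key names below are assumptions based on common conventions, not a documented schema:

```yaml
# ~/.config/assgen/server.yaml (key names are assumptions)
inference:
  device: cuda   # "cuda" for an NVIDIA GPU, "cpu" otherwise
```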