
Installation & Upgrade

Install

Homebrew (macOS / Linux)

brew install aallbrig/tap/assgen

Chocolatey (Windows)

choco install assgen

pip (all platforms)

pip install assgen

# With GPU inference support (for the server machine)
pip install "assgen[inference]"

Windows with CUDA (server machine)

On a Windows machine with an NVIDIA GPU, install PyTorch with CUDA first:

python -m venv C:\assgen-venv
C:\assgen-venv\Scripts\activate

# Install CUDA-enabled PyTorch, then assgen
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install "assgen[inference]"

# Verify that PyTorch can see the GPU (should print True)
python -c "import torch; print(torch.cuda.is_available())"

See Server Setup (Windows) for full details.

From a GitHub Release

Each tagged release publishes a Python wheel (.whl) file on the Releases page.

  1. Go to github.com/aallbrig/assgen/releases
  2. Find the latest release (or the version you want)
  3. Download the .whl file under Assets
  4. Install it:
pip install assgen-<version>-py3-none-any.whl

# With inference extras
pip install "assgen-<version>-py3-none-any.whl[inference]"
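The manual download steps above can also be scripted. A minimal sketch using GitHub's standard Releases API (the repository path comes from this page; the helper names and URL-picking logic are illustrative assumptions, not part of assgen itself):

```python
import json
import subprocess
import sys
import urllib.request
from typing import Optional

API = "https://api.github.com/repos/aallbrig/assgen/releases/latest"

def wheel_url(release: dict) -> Optional[str]:
    # Pick the first .whl asset attached to a release (step 3 above)
    for asset in release.get("assets", []):
        if asset["name"].endswith(".whl"):
            return asset["browser_download_url"]
    return None

def install_latest_wheel() -> None:
    # Steps 1-2: fetch the latest release's metadata
    with urllib.request.urlopen(API) as resp:
        release = json.load(resp)
    url = wheel_url(release)
    if url is None:
        sys.exit("no wheel asset found on the latest release")
    # Step 4: pip can install a wheel directly from a URL
    subprocess.run([sys.executable, "-m", "pip", "install", url], check=True)
```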

From source (development install)

git clone https://github.com/aallbrig/assgen.git
cd assgen
pip install -e ".[dev]"

Upgrade

The easiest way to stay current is the built-in upgrade command:

# Check if a newer version exists, then prompt to upgrade
assgen upgrade

# Just check — exit 0 if up-to-date, exit 1 if outdated (useful in scripts)
assgen upgrade --check

# Upgrade without a confirmation prompt
assgen upgrade --yes

# Include pre-releases
assgen upgrade --pre

The upgrade command:

  1. Fetches the latest release info from the GitHub Releases API
  2. Compares it against the running version
  3. If newer, shows a summary of the release notes
  4. Runs pip install assgen==<version> in the same Python environment
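The check in steps 1-2 can be sketched as follows. This is a hypothetical outline of the comparison, not assgen's actual implementation; the endpoint follows GitHub's standard Releases API shape, and the helper names are mine:

```python
import json
import urllib.request

RELEASES_API = "https://api.github.com/repos/aallbrig/assgen/releases/latest"

def parse_version(tag: str) -> tuple:
    # "v1.2.3" or "1.2.3" -> (1, 2, 3); tuples compare element-wise
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def is_outdated(current: str, latest_tag: str) -> bool:
    # Step 2: compare the latest tag against the running version
    return parse_version(latest_tag) > parse_version(current)

def latest_tag() -> str:
    # Step 1: fetch the latest release metadata from the GitHub Releases API
    with urllib.request.urlopen(RELEASES_API) as resp:
        return json.load(resp)["tag_name"]
```

A real implementation would also need to handle pre-release tags (the `--pre` flag) and network failures.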

Keeping inference extras

If you installed with pip install "assgen[inference]", note that assgen upgrade reinstalls only the base package. To keep the inference extras, re-run pip install "assgen[inference]==<new-version>" after upgrading. (A flag to preserve extras during upgrade may be added in a future release.)


Verify installation

assgen version
# assgen  version: 0.0.1  python: 3.12.3  platform: Linux

assgen --help

Hardware recommendations

GPU               VRAM    Suitable models
RTX 4070 / 3080   12 GB   SDXL, TripoSR, MusicGen-Medium, AudioLDM2
RTX 4090 / 3090   24 GB   All catalog models
Any CPU           —       Orchestration, job queue, text models

Set the inference device in ~/.config/assgen/server.yaml:

device: "cuda"   # auto | cuda | cpu | mps (Apple Silicon)
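One plausible way an "auto" setting could resolve, shown as a sketch (this is an assumption about the behavior, not assgen's actual code; `resolve_device` is a made-up name, though the `torch` calls it uses are real PyTorch APIs):

```python
def resolve_device(setting: str = "auto") -> str:
    """Pass explicit values ("cuda", "cpu", "mps") through unchanged;
    for "auto", probe the available hardware via PyTorch."""
    if setting != "auto":
        return setting
    try:
        import torch  # only needed for auto-detection
        if torch.cuda.is_available():
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"  # Apple Silicon
    except ImportError:
        pass  # no torch installed: fall back to CPU
    return "cpu"
```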