AMD Credit Runbook
Use this checklist as soon as AMD Developer Cloud credits are approved.
1. Create Instance
Target:
- AMD Developer Cloud
- AMD Instinct MI300X
- ROCm 6.x image if available
- Enough disk for videos, model cache, and rendered clips
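Once the instance is up, free disk can be sanity-checked before downloading models. A minimal sketch; the 200 GB threshold is an assumption, adjust it to your video and model-cache sizes:

```python
import shutil

def check_disk(path: str = "/", min_free_gb: int = 200) -> bool:
    """Report free space on `path` and compare against a threshold.
    The 200 GB default is a guess, not a project requirement."""
    free_gb = shutil.disk_usage(path).free / 1e9
    print(f"free on {path}: {free_gb:.0f} GB (need >= {min_free_gb} GB)")
    return free_gb >= min_free_gb

check_disk()
```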
2. Clone Repository
git clone https://github.com/JakgritB/ElevenClip.AI.git
cd ElevenClip.AI
3. Configure Environment
cp .env.example .env
Edit .env:
DEMO_MODE=false
HF_TOKEN=<your-hugging-face-token>
WHISPER_MODEL_ID=openai/whisper-large-v3
QWEN_TEXT_MODEL_ID=Qwen/Qwen2.5-7B-Instruct
QWEN_VL_MODEL_ID=Qwen/Qwen2-VL-7B-Instruct
FFMPEG_VIDEO_CODEC=h264_amf
If h264_amf is not available in the instance's ffmpeg build, fall back to libx264.
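After editing .env, it is worth confirming that no required key was left empty. A small illustrative helper, not part of the repo; the key list mirrors the variables above:

```python
REQUIRED = [
    "DEMO_MODE", "HF_TOKEN", "WHISPER_MODEL_ID",
    "QWEN_TEXT_MODEL_ID", "QWEN_VL_MODEL_ID", "FFMPEG_VIDEO_CODEC",
]

def parse_env(path: str = ".env") -> dict:
    """Parse KEY=VALUE lines, skipping comments and blank lines."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def missing_keys(env: dict) -> list:
    """Return the required keys that are absent or empty."""
    return [k for k in REQUIRED if not env.get(k)]
```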
4. Verify ROCm
rocminfo | head
rocm-smi
Verify PyTorch:
python - <<'PY'
import torch
print("cuda available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
print("hip:", torch.version.hip)
PY
On ROCm builds, PyTorch exposes AMD GPUs through the torch.cuda API, so torch.cuda.is_available() should return True and torch.version.hip should report the HIP version.
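Beyond the availability check, a tiny matmul confirms the GPU actually executes kernels. A minimal sketch that falls back to CPU if no device is visible:

```python
import torch

def smoke_test(n: int = 256) -> str:
    """Run a small matmul on the GPU if one is visible, else on CPU."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    c = a @ b
    print(f"matmul ok on {device}, result shape {tuple(c.shape)}")
    return device

smoke_test()
```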
5. Start Backend And Frontend
Docker path:
docker compose build --build-arg INSTALL_EXTRAS=".[ai,rocm-inference]" backend
docker compose up
Manual backend path:
cd backend
python -m venv .venv
source .venv/bin/activate
pip install -e ".[ai,rocm-inference]"
uvicorn app.main:app --host 0.0.0.0 --port 8000
Manual frontend path:
cd frontend
npm install
npm run dev -- --host 0.0.0.0
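Once uvicorn is running, a quick probe confirms the backend answers before you open the frontend. A minimal sketch; it targets FastAPI's auto-generated /docs page, which is an assumption about app.main:app:

```python
import urllib.request
import urllib.error

def check_backend(url: str = "http://localhost:8000/docs") -> bool:
    """Return True if the URL answers with HTTP 200.
    The /docs path assumes a FastAPI app; adjust if needed."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{url} -> {resp.status}")
            return resp.status == 200
    except (urllib.error.URLError, OSError) as exc:
        print(f"{url} unreachable: {exc}")
        return False
```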
6. Run Benchmark
CPU baseline (an empty HIP_VISIBLE_DEVICES hides all GPUs, forcing CPU execution):
DEMO_MODE=false HIP_VISIBLE_DEVICES= python scripts/benchmark.py --youtube-url "<demo-video-url>" --language Thai --style informative --niche education --clip-length 60
AMD GPU:
DEMO_MODE=false python scripts/benchmark.py --youtube-url "<demo-video-url>" --language Thai --style informative --niche education --clip-length 60
Save the JSON outputs into:
data/benchmarks/cpu.json
data/benchmarks/mi300x.json
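With both files saved, the headline speedup can be computed from the two JSON outputs. A minimal sketch; the total_seconds field name is a guess about the benchmark schema, adapt it to the actual output:

```python
import json

def speedup(cpu_path: str, gpu_path: str, key: str = "total_seconds") -> float:
    """Compute the CPU/GPU wall-clock ratio from two benchmark JSON files.
    The `key` field name is an assumption, not a confirmed schema."""
    with open(cpu_path) as f:
        cpu = json.load(f)[key]
    with open(gpu_path) as f:
        gpu = json.load(f)[key]
    ratio = cpu / gpu
    print(f"CPU {cpu:.1f}s vs GPU {gpu:.1f}s -> {ratio:.1f}x speedup")
    return ratio
```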
7. Update Submission Materials
After the benchmark:
- Update README.md.
- Update docs/SUBMISSION.md.
- Update docs/PITCH_DECK.md.
- Update the Hugging Face Space.
- Record the final demo video.