# AMD Credit Runbook

Use this checklist as soon as AMD Developer Cloud credits are approved.

## 1. Create Instance

Target:

- AMD Developer Cloud
- AMD Instinct MI300X
- ROCm 6.x image if available
- Enough disk for videos, model cache, and rendered clips
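
Before downloading videos or model weights, it is worth confirming the instance actually has headroom. A minimal sketch using the standard library; the 200 GB threshold is an assumption, tune it to your workload:

```python
import shutil

MIN_FREE_GB = 200  # assumed threshold; adjust for your videos and model cache

# Free space on the filesystem holding the current working directory.
usage = shutil.disk_usage(".")
free_gb = usage.free / 1e9
print(f"free: {free_gb:.1f} GB")
if free_gb < MIN_FREE_GB:
    print("warning: consider attaching a larger disk")
```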

## 2. Clone Repository

```bash
git clone https://github.com/JakgritB/ElevenClip.AI.git
cd ElevenClip.AI
```

## 3. Configure Environment

```bash
cp .env.example .env
```

Edit `.env`:

```bash
DEMO_MODE=false
HF_TOKEN=<your-hugging-face-token>
WHISPER_MODEL_ID=openai/whisper-large-v3
QWEN_TEXT_MODEL_ID=Qwen/Qwen2.5-7B-Instruct
QWEN_VL_MODEL_ID=Qwen/Qwen2-VL-7B-Instruct
FFMPEG_VIDEO_CODEC=h264_amf
```
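
Before launching anything, it can help to check that every expected key is present in the file. A minimal sketch; the required-key list simply mirrors the example above:

```python
REQUIRED_KEYS = {
    "DEMO_MODE",
    "HF_TOKEN",
    "WHISPER_MODEL_ID",
    "QWEN_TEXT_MODEL_ID",
    "QWEN_VL_MODEL_ID",
    "FFMPEG_VIDEO_CODEC",
}

def missing_keys(env_text: str) -> set:
    """Return required keys absent from dotenv-style text."""
    present = {
        line.split("=", 1)[0].strip()
        for line in env_text.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    }
    return REQUIRED_KEYS - present

# In practice: missing_keys(open(".env").read())
sample = "DEMO_MODE=false\nHF_TOKEN=hf_xxx\n"
print(missing_keys(sample))  # the four model/codec keys are missing here
```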

## 4. Verify ROCm

```bash
rocminfo | head
rocm-smi
```

Verify PyTorch:

```bash
python - <<'PY'
import torch
print("cuda available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
print("hip:", torch.version.hip)
PY
```

On ROCm, PyTorch still exposes AMD GPUs through the `torch.cuda` API.

## 5. Start Backend and Frontend

Docker path:

```bash
docker compose build --build-arg INSTALL_EXTRAS=".[ai,rocm-inference]" backend
docker compose up
```

Manual backend path:

```bash
cd backend
python -m venv .venv
source .venv/bin/activate
pip install -e ".[ai,rocm-inference]"
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Manual frontend path:

```bash
cd frontend
npm install
npm run dev -- --host 0.0.0.0
```

## 6. Run Benchmark

CPU baseline (an empty `HIP_VISIBLE_DEVICES` hides the GPU, forcing a CPU-only run):

```bash
DEMO_MODE=false HIP_VISIBLE_DEVICES= python scripts/benchmark.py \
  --youtube-url "<demo-video-url>" \
  --language Thai --style informative \
  --niche education --clip-length 60
```

AMD GPU:

```bash
DEMO_MODE=false python scripts/benchmark.py \
  --youtube-url "<demo-video-url>" \
  --language Thai --style informative \
  --niche education --clip-length 60
```

Save the JSON outputs into:

```text
data/benchmarks/cpu.json
data/benchmarks/mi300x.json
```
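
Once both files exist, the headline speedup is just the ratio of the two timings. A sketch, assuming each JSON carries a top-level wall-clock field; the name `total_seconds` is hypothetical, so substitute whatever `scripts/benchmark.py` actually emits:

```python
import json

def speedup(cpu_json: str, gpu_json: str, field: str = "total_seconds") -> float:
    """Ratio of CPU to GPU wall-clock time; > 1 means the GPU run was faster."""
    cpu = json.loads(cpu_json)[field]
    gpu = json.loads(gpu_json)[field]
    return cpu / gpu

# Inline sample payloads; in practice, read data/benchmarks/cpu.json
# and data/benchmarks/mi300x.json from disk instead.
cpu_sample = '{"total_seconds": 840.0}'
gpu_sample = '{"total_seconds": 120.0}'
print(f"speedup: {speedup(cpu_sample, gpu_sample):.1f}x")  # → speedup: 7.0x
```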

## 7. Update Submission Materials

After the benchmark:

- Update `README.md`.
- Update `docs/SUBMISSION.md`.
- Update `docs/PITCH_DECK.md`.
- Update Hugging Face Space.
- Record the final demo video.