---
license: mit
---

# magicBERT

A masked-language-model-style transformer for Commander deck completion. Given a partial deck (the "context") and a sequence of masked slots, magicBERT predicts the full 100-card deck in a permutation-invariant way using Hungarian matching.

## Architecture

`magicBERT` uses a standard transformer encoder with one addition: after each encoder layer, a cross-attention layer attends to a set of **context card embeddings**. 
The context cards serve as the conditioning signal — "given these cards, complete the rest of the deck."

```
input_ids (masked slots to fill)
      |
[Token + Positional Embeddings]
      |
  Encoder Layer 1
      |
  Cross-Attention → context_cards
      |
  Encoder Layer 2
      |
  Cross-Attention → context_cards
      ...
      |
  LayerNorm → LM Head → logits (B, seq_len, vocab_size)
```
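
A minimal PyTorch sketch of this interleaving. Module names, sizes, and the residual/norm placement here are illustrative assumptions, not the actual implementation, and positional embeddings are omitted:

```python
import torch
import torch.nn as nn

class MagicBertSketch(nn.Module):
    """Illustrative only: encoder layers interleaved with cross-attention
    over context card embeddings, following the diagram above."""

    def __init__(self, vocab_size: int, d_model: int = 512,
                 n_heads: int = 8, n_layers: int = 6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.cross_norm = nn.ModuleList(
            [nn.LayerNorm(d_model) for _ in range(n_layers)]
        )
        self.final_norm = nn.LayerNorm(d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids: torch.Tensor, context_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(input_ids)       # (B, seq_len, d_model); positions omitted
        ctx = self.embed(context_ids)   # (B, C, d_model) context card embeddings
        for enc, attn, norm in zip(self.encoder_layers, self.cross_attn, self.cross_norm):
            x = enc(x)                                       # self-attention over slots
            attended, _ = attn(query=x, key=ctx, value=ctx)  # condition on context
            x = norm(x + attended)                           # residual connection
        return self.lm_head(self.final_norm(x))              # (B, seq_len, vocab_size)
```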

## Generation

Two generation modes are available:

- **`generate`** — single-pass: run one forward pass, apply a legality mask (Commander-legal cards only), then solve the global assignment problem with `linear_sum_assignment`. Basics are allowed to repeat; non-basics are constrained to appear at most once (see the assignment sketch after this list).

- **`iterative_generate`** — multi-pass refinement: after each step, the lowest-confidence slots are re-masked and the model is run again, allowing it to revise uncertain picks in light of its other choices (see the re-masking sketch after this list).
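
A rough sketch of the single-pass assignment step, assuming a NumPy logits array plus boolean `legal` and `is_basic` vocabulary masks. The names and the fallback-to-basics rule are illustrative, not the model's actual internals:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_cards(logits: np.ndarray, legal: np.ndarray, is_basic: np.ndarray) -> np.ndarray:
    """Pick one card per slot so that non-basics appear at most once.

    logits:   (seq_len, vocab_size) scores from one forward pass
    legal:    (vocab_size,) bool, True for Commander-legal cards
    is_basic: (vocab_size,) bool, True for basic lands (may repeat)
    """
    scores = np.where(legal, logits, -np.inf)  # legality mask

    # Basics may repeat, so every slot has a fallback: its best basic.
    basic_scores = np.where(is_basic, scores, -np.inf)
    best_basic = basic_scores.argmax(axis=1)        # (seq_len,)
    best_basic_score = basic_scores.max(axis=1)     # (seq_len,)

    # Non-basics are matched globally: Hungarian matching maximizes the
    # total score (scipy minimizes, so negate the finite legal scores).
    nonbasic_ids = np.flatnonzero(legal & ~is_basic)
    rows, cols = linear_sum_assignment(-scores[:, nonbasic_ids])

    picks = best_basic.copy()
    for r, c in zip(rows, cols):
        # Keep the matched non-basic only if it beats the slot's best basic.
        if scores[r, nonbasic_ids[c]] > best_basic_score[r]:
            picks[r] = nonbasic_ids[c]
    return picks
```

And a sketch of the re-masking loop behind `iterative_generate`. For brevity it commits plain argmax picks rather than the matched assignment above, `remask_fraction` is an assumed knob, and the model is assumed to return raw logits:

```python
import torch

@torch.no_grad()
def iterative_refine(model, input_ids, context_ids,
                     steps: int = 5, remask_fraction: float = 0.2):
    """Sketch of multi-pass refinement: commit predictions, then re-mask
    the lowest-confidence slots and run the model again."""
    mask_id = model.config.mask_token_id
    for step in range(steps):
        logits = model(input_ids, context_ids=context_ids)  # assumed (1, L, V) logits
        probs = logits.softmax(dim=-1)
        input_ids = probs.argmax(dim=-1)                    # commit every slot
        if step < steps - 1:
            conf = probs.max(dim=-1).values                 # per-slot confidence
            k = int(remask_fraction * input_ids.shape[1])
            low = conf.topk(k, largest=False).indices       # least confident slots
            input_ids[0, low[0]] = mask_id                  # re-mask for the next pass
    return input_ids
```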


## Usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "nishtahir/magicBERT"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)  # type: ignore[assignment]
model.eval()

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)


cards = ["Yuriko, the Tiger's Shadow"]

# Tokenize context cards
context_token_ids: list[int] = tokenizer.convert_tokens_to_ids(cards)  # type: ignore[assignment]
unknown = [
    name
    for name, tid in zip(cards, context_token_ids, strict=True)
    if tid == tokenizer.unk_token_id
]
if unknown:
    print(f"Warning: the following cards were not found in the vocabulary: {unknown}")

# Build (1, C) context tensor
context_ids = torch.tensor([context_token_ids], dtype=torch.long)

# Build the input tensor of masked slots.
seq_len: int = model.config.seq_len
input_ids = torch.full((1, seq_len), model.config.mask_token_id, dtype=torch.long)

# Make prediction
token_ids = model.generate(input_ids, context_ids=context_ids)  # (1, seq_len)

# Decode token IDs back into card names
slot_ids: list[int] = token_ids[0].tolist()
card_names: list[str] = tokenizer.convert_ids_to_tokens(slot_ids)  # type: ignore[assignment]

pad_token = tokenizer.pad_token
deck = [name for name in card_names if name != pad_token]

print(f"\nGenerated deck ({len(deck)} cards):")
for i, name in enumerate(deck, 1):
    print(f"  {i:>3}. {name}")

#  Generated deck (100 cards):
#    1. Watery Grave
#    2. Yuriko, the Tiger's Shadow
#    3. Verdant Catacombs
#    4. Island
#    5. Prosperous Thief
#    6. Clearwater Pathway // Murkwater Pathway
#    7. Island
#    8. Island
#    9. Mist-Syndicate Naga
#   10. Marsh Flats
```
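
The multi-pass mode is called analogously. Its exact signature isn't reproduced here, so treat this as a hypothetical invocation and check the model code for the actual parameters:

```python
# Hypothetical call: argument names beyond input_ids/context_ids are assumptions.
token_ids = model.iterative_generate(input_ids, context_ids=context_ids)
```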