phanerozoic committed
Commit 29db051 · verified · 1 parent: 1fd78aa

Upload folder using huggingface_hub
Files changed (4):
1. README.md +180 -0
2. config.json +9 -0
3. model.py +80 -0
4. model.safetensors +3 -0
README.md ADDED
---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- error-correction
- hamming-code
---

# threshold-hamming74encoder

A Hamming(7,4) encoder implemented as a threshold circuit. It transforms 4 data bits into a 7-bit codeword with single-error-correction capability.

## Circuit Overview

```
d1 ────────────────────────────► c3 = d1
d2 ────────────────────────────► c5 = d2
d3 ────────────────────────────► c6 = d3
d4 ────────────────────────────► c7 = d4

d1, d2, d4 ──► [ XOR ] ──► c1 = p1
d1, d3, d4 ──► [ XOR ] ──► c2 = p2
d2, d3, d4 ──► [ XOR ] ──► c4 = p3
```

## The Hamming(7,4) Code

Richard Hamming invented this code in 1950. It encodes 4 data bits into 7 bits such that any single-bit error can be detected and corrected.

**Codeword structure:**

| Position | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|----------|----|----|----|----|----|----|----|
| Bit | p1 | p2 | d1 | p3 | d2 | d3 | d4 |
| Type | parity | parity | data | parity | data | data | data |

**Parity equations:**
- p1 = d1 ⊕ d2 ⊕ d4 (covers positions 1, 3, 5, 7)
- p2 = d1 ⊕ d3 ⊕ d4 (covers positions 2, 3, 6, 7)
- p3 = d2 ⊕ d3 ⊕ d4 (covers positions 4, 5, 6, 7)
63
+ ## 3-Way XOR Implementation
64
+
65
+ Each parity bit requires a 3-input XOR. In threshold logic:
66
+
67
+ ```
68
+ XOR(a,b,c) = XOR(XOR(a,b), c)
69
+
70
+ a b
71
+ │ │
72
+ └─┬─┘
73
+
74
+ ┌───────┐
75
+ │ XOR │ (2 layers)
76
+ └───────┘
77
+
78
+ │ c
79
+ └─┬─┘
80
+
81
+ ┌───────┐
82
+ │ XOR │ (2 more layers)
83
+ └───────┘
84
+
85
+
86
+ XOR(a,b,c)
87
+ ```
88
+
89
+ Total depth: 4 layers per parity. All three parities compute in parallel.
90
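A minimal sketch of this cascade using hand-picked threshold weights (illustrative values only; the trained weights shipped in `model.safetensors` differ). Each 2-input XOR is AND(OR(a,b), NAND(a,b)), matching the OR/NAND/AND gate structure in `model.py`:

```python
# Hand-picked threshold weights for illustration; not the trained model values.
def gate(x, w, b):
    """Threshold neuron: fires (1) when w·x + b >= 0."""
    return int(sum(xi * wi for xi, wi in zip(x, w)) + b >= 0)

def xor2(a, b):
    or_out = gate([a, b], [1, 1], -1)        # OR: fires when a + b >= 1
    nand_out = gate([a, b], [-1, -1], 1.5)   # NAND: fires unless a = b = 1
    return gate([or_out, nand_out], [1, 1], -2)  # AND of the two -> XOR

def xor3(a, b, c):
    # Two cascaded 2-input XORs -> 4 threshold layers total
    return xor2(xor2(a, b), c)
```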

## Code Properties

| Property | Value |
|----------|-------|
| Data bits (k) | 4 |
| Codeword bits (n) | 7 |
| Parity bits (n − k) | 3 |
| Minimum distance | 3 |
| Error correction | 1 bit |
| Error detection | 2 bits |
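The single-error-correction figure comes from syndrome decoding. A sketch of the textbook decoder for this bit layout (a companion to the encoder, not part of this model's weights): recomputing each parity check over the received bits yields a 3-bit syndrome that equals the 1-based position of the flipped bit, or 0 if the codeword is clean.

```python
def hamming74_correct(c):
    """Correct up to one flipped bit in a 7-bit codeword [c1..c7]."""
    c1, c2, c3, c4, c5, c6, c7 = c
    s1 = c1 ^ c3 ^ c5 ^ c7  # recheck p1's group (positions 1, 3, 5, 7)
    s2 = c2 ^ c3 ^ c6 ^ c7  # recheck p2's group (positions 2, 3, 6, 7)
    s3 = c4 ^ c5 ^ c6 ^ c7  # recheck p3's group (positions 4, 5, 6, 7)
    pos = s1 + 2 * s2 + 4 * s3  # syndrome = 1-based error position (0 = no error)
    out = list(c)
    if pos:
        out[pos - 1] ^= 1
    return out
```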

## Example Encoding

```
Data: 1011   (d1=1, d2=0, d3=1, d4=1)

p1 = 1 ⊕ 0 ⊕ 1 = 0
p2 = 1 ⊕ 1 ⊕ 1 = 1
p3 = 0 ⊕ 1 ⊕ 1 = 0

Codeword: 0110011
          ↑↑ ↑
          p1p2p3   (parity bits at positions 1, 2, 4)
```

## Architecture

| Component | Neurons | Parameters |
|-----------|---------|------------|
| p1 (3-way XOR) | 6 | 22 |
| p2 (3-way XOR) | 6 | 22 |
| p3 (3-way XOR) | 6 | 22 |
| d1–d4 pass-through | 4 | 20 |
| **Total** | **22** | **86** |

**Layers: 4** (two cascaded XOR stages)

## All 16 Codewords

| Data | Codeword | Weight |
|------|----------|--------|
| 0000 | 0000000 | 0 |
| 1000 | 1110000 | 3 |
| 0100 | 1001100 | 3 |
| 1100 | 0111100 | 4 |
| 0010 | 0101010 | 3 |
| 1010 | 1011010 | 4 |
| 0110 | 1100110 | 4 |
| 1110 | 0010110 | 3 |
| 0001 | 1101001 | 4 |
| 1001 | 0011001 | 3 |
| 0101 | 0100101 | 3 |
| 1101 | 1010101 | 4 |
| 0011 | 1000011 | 3 |
| 1011 | 0110011 | 4 |
| 0111 | 0001111 | 4 |
| 1111 | 1111111 | 7 |

Note: the minimum Hamming distance between any two codewords is 3.
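That note can be checked exhaustively with a short pure-Python script (independent of the model weights):

```python
from itertools import combinations

# Reference encoder: codeword layout p1 p2 d1 p3 d2 d3 d4
def encode(d1, d2, d3, d4):
    return (d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4)

codewords = [encode((d >> 0) & 1, (d >> 1) & 1, (d >> 2) & 1, (d >> 3) & 1)
             for d in range(16)]

# Minimum pairwise Hamming distance over all 16 codewords
min_dist = min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))
print(min_dist)  # -> 3
```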

## Usage

```python
from model import load_model, hamming74_encode

w = load_model('model.safetensors')

# Encode data word 1011
codeword = hamming74_encode(1, 0, 1, 1, w)
# [0, 1, 1, 0, 0, 1, 1]
```

## Files

```
threshold-hamming74encoder/
├── model.safetensors
├── model.py
├── config.json
└── README.md
```

## License

MIT
config.json ADDED
{
  "name": "threshold-hamming74encoder",
  "description": "Hamming(7,4) encoder as threshold circuit",
  "inputs": 4,
  "outputs": 7,
  "neurons": 22,
  "layers": 4,
  "parameters": 86
}
model.py ADDED
import torch
from safetensors.torch import load_file


def load_model(path='model.safetensors'):
    return load_file(path)


def xor2_from_weights(a, b, or_w, or_b, nand_w, nand_b, and_w, and_b):
    """Compute XOR(a, b) from three threshold gates: AND(OR(a,b), NAND(a,b))."""
    inp = torch.tensor([float(a), float(b)])
    or_out = float((inp * or_w).sum() + or_b >= 0)
    nand_out = float((inp * nand_w).sum() + nand_b >= 0)
    l1 = torch.tensor([or_out, nand_out])
    return int((l1 * and_w).sum() + and_b >= 0)


def hamming74_encode(d1, d2, d3, d4, w):
    """Hamming(7,4) encoder: 4 data bits -> 7 coded bits."""
    inp = torch.tensor([float(d1), float(d2), float(d3), float(d4)])

    # p1 = d1 XOR d2 XOR d4
    or_out = float((inp * w['p1.xor12.layer1.or.weight']).sum() + w['p1.xor12.layer1.or.bias'] >= 0)
    nand_out = float((inp * w['p1.xor12.layer1.nand.weight']).sum() + w['p1.xor12.layer1.nand.bias'] >= 0)
    xor12 = int((torch.tensor([or_out, nand_out]) * w['p1.xor12.layer2.weight']).sum() + w['p1.xor12.layer2.bias'] >= 0)

    inp2 = torch.tensor([float(xor12), float(d4)])
    or_out = float((inp2 * w['p1.xor_final.layer1.or.weight']).sum() + w['p1.xor_final.layer1.or.bias'] >= 0)
    nand_out = float((inp2 * w['p1.xor_final.layer1.nand.weight']).sum() + w['p1.xor_final.layer1.nand.bias'] >= 0)
    p1 = int((torch.tensor([or_out, nand_out]) * w['p1.xor_final.layer2.weight']).sum() + w['p1.xor_final.layer2.bias'] >= 0)

    # p2 = d1 XOR d3 XOR d4
    or_out = float((inp * w['p2.xor13.layer1.or.weight']).sum() + w['p2.xor13.layer1.or.bias'] >= 0)
    nand_out = float((inp * w['p2.xor13.layer1.nand.weight']).sum() + w['p2.xor13.layer1.nand.bias'] >= 0)
    xor13 = int((torch.tensor([or_out, nand_out]) * w['p2.xor13.layer2.weight']).sum() + w['p2.xor13.layer2.bias'] >= 0)

    inp2 = torch.tensor([float(xor13), float(d4)])
    or_out = float((inp2 * w['p2.xor_final.layer1.or.weight']).sum() + w['p2.xor_final.layer1.or.bias'] >= 0)
    nand_out = float((inp2 * w['p2.xor_final.layer1.nand.weight']).sum() + w['p2.xor_final.layer1.nand.bias'] >= 0)
    p2 = int((torch.tensor([or_out, nand_out]) * w['p2.xor_final.layer2.weight']).sum() + w['p2.xor_final.layer2.bias'] >= 0)

    # p3 = d2 XOR d3 XOR d4
    or_out = float((inp * w['p3.xor23.layer1.or.weight']).sum() + w['p3.xor23.layer1.or.bias'] >= 0)
    nand_out = float((inp * w['p3.xor23.layer1.nand.weight']).sum() + w['p3.xor23.layer1.nand.bias'] >= 0)
    xor23 = int((torch.tensor([or_out, nand_out]) * w['p3.xor23.layer2.weight']).sum() + w['p3.xor23.layer2.bias'] >= 0)

    inp2 = torch.tensor([float(xor23), float(d4)])
    or_out = float((inp2 * w['p3.xor_final.layer1.or.weight']).sum() + w['p3.xor_final.layer1.or.bias'] >= 0)
    nand_out = float((inp2 * w['p3.xor_final.layer1.nand.weight']).sum() + w['p3.xor_final.layer1.nand.bias'] >= 0)
    p3 = int((torch.tensor([or_out, nand_out]) * w['p3.xor_final.layer2.weight']).sum() + w['p3.xor_final.layer2.bias'] >= 0)

    # Data pass-through neurons
    c3 = int((inp * w['d1.weight']).sum() + w['d1.bias'] >= 0)
    c5 = int((inp * w['d2.weight']).sum() + w['d2.bias'] >= 0)
    c6 = int((inp * w['d3.weight']).sum() + w['d3.bias'] >= 0)
    c7 = int((inp * w['d4.weight']).sum() + w['d4.bias'] >= 0)

    # Output order: c1=p1, c2=p2, c3=d1, c4=p3, c5=d2, c6=d3, c7=d4
    return [p1, p2, c3, p3, c5, c6, c7]


if __name__ == '__main__':
    w = load_model()
    print('Hamming(7,4) Encoder')
    print('Input (d1d2d3d4) -> Output (c1c2c3c4c5c6c7)')

    def ref_encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    errors = 0
    for d in range(16):
        d1, d2, d3, d4 = (d >> 0) & 1, (d >> 1) & 1, (d >> 2) & 1, (d >> 3) & 1
        result = hamming74_encode(d1, d2, d3, d4, w)
        expected = ref_encode(d1, d2, d3, d4)
        status = 'OK' if result == expected else 'FAIL'
        if result != expected:
            errors += 1
        r_str = ''.join(map(str, result))
        print(f'{d1}{d2}{d3}{d4} -> {r_str} {status}')

    print(f'\n{16 - errors}/16 correct')
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f6d774242ada4c6fad4ffaacfd2c5e17a07100cb3e6b4fe3a117180a08f4bbfd
size 3784