# CGN (Confluence Gate Network) – MNIST
A new standard architecture built on Gate Neurons, trained without backpropagation.
## What is CGN?
The Confluence Gate Network (CGN) is a network architecture built on a single primitive: the Gate Neuron (GN).
h_j = max(0, Σ x_i · W_ij − θ_j)

Multiple signals converge, sum, and fire above threshold: the same operation as a biological neuron. No filter size, no stride, no pooling, no weight sharing. Zero architectural hyperparameters.
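As a hedged illustration only (the repository's implementation is not public; the function and variable names here are assumptions), the Gate Neuron layer above can be sketched in NumPy:

```python
import numpy as np

def gate_layer(x, W, theta):
    """One layer of Gate Neurons: h_j = max(0, sum_i x_i * W_ij - theta_j).

    x:     (n_inputs,)          converging input signals
    W:     (n_inputs, n_gates)  convergence weights
    theta: (n_gates,)           firing thresholds
    """
    return np.maximum(0.0, x @ W - theta)

# Toy example: 4 inputs converging onto 2 gates.
x = np.array([1.0, 0.5, 0.0, 2.0])
W = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [0.1,  0.5],
              [0.0,  0.2]])
theta = np.array([0.1, 1.0])
h = gate_layer(x, W, theta)  # gate 0 fires (~0.3); gate 1 stays below threshold
```

Note that the only knobs are the weights and thresholds themselves, which is what the "zero architectural hyperparameters" claim refers to.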
## Key Results on MNIST
| Configuration | Accuracy | Gates | Parameters | Backward Pass | Hardware | Time |
|---|---|---|---|---|---|---|
| CGN (h=128) | 90.4% | 128 | 101,632 | No | 1 CPU core | 35s |
| CGN (256→96 pruned) | 88.8% | 96 | 76,224 | No | 1 CPU core | 35s |
- No backpropagation: forward-only learning (River Learning)
- No GPU: a single CPU core, 35 seconds
- No optimization tricks: no batch normalization, no data augmentation, no momentum
- Self-compressing: 256 gates automatically prune to 96 (160 of 256 removed, 62.5%)
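The parameter counts in the table are consistent with a single gate layer over the full 784-pixel input plus a 10-class vote layer (weights only; thresholds appear not to be counted). A quick arithmetic check, independent of any repository code:

```python
def cgn_params(n_inputs, n_gates, n_classes):
    # Dense input-to-gate weights plus gate-to-class vote weights.
    return n_inputs * n_gates + n_gates * n_classes

print(cgn_params(784, 128, 10))  # 101632 -> matches the h=128 row
print(cgn_params(784, 96, 10))   # 76224  -> matches the pruned 256->96 row
```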
## CNN vs CGN
| | CNN | CGN |
|---|---|---|
| Input information retained | ~3% (97% lost) | 100% |
| Architectural decisions per layer | 7+ | 0 |
| Learning | Backward pass | Forward only |
| Interpretability | Post-hoc tools (SHAP, LIME) | Read the weights |
| Filter shape | Prescribed | Discovered by data |
| Gate count | Prescribed | Found by convergence |
## What's in this repo
- `checkpoint/` – Trained weights (W1, W2) for the h=128 configuration
- `scripts/verify_mnist.py` – Inference-only verification script
- `scripts/visualize_gates.py` – Gate receptive field and vote visualization
- `scripts/compare_resolution.py` – CNN vs CGN resolution comparison
- `figures/` – Pre-generated visualizations
- `results/` – Training logs
## Verification
```bash
pip install numpy
python scripts/verify_mnist.py
```
Expected output: ~89.3% accuracy on the full 10K test set.
Note: the checkpoint was saved at a different epoch from the peak; the best test accuracy (90.4%) was reached at epoch 82.
## Visualizations

### Gate Receptive Fields
Each gate discovers its own spatial pattern from data; no filter shape is prescribed.
### CNN vs CGN: What Each Architecture Sees

CNN reduces 28×28 to 5×5 (97% information loss). CGN sees the full image.
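Treating retained information as the ratio of spatial positions, as the comparison table does, the ~3% / 97% figures follow directly:

```python
full = 28 * 28    # pixels visible to CGN (the whole image)
reduced = 5 * 5   # positions in the CNN's final 5x5 feature map
print(f"retained: {reduced / full:.1%}")  # retained: 3.2%
```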
### CGN Architecture

### Gate Neuron Detail
## Paper Series
- Forward-Only Path Carving Without Backpropagation (Zenodo, 2026)
- Inference Is Learning: No Phase Separation (Zenodo, 2026)
- One Gate, One Hundred Thousand Edges: Scaling to MNIST (Zenodo, 2026)
- The Converged Structure Is the Explanation (Zenodo, 2026)
- Confluence Gate Networks: From Biological Neuron to Standard Architecture (Zenodo, 2026)
- Template Sharing and Network Design from Learning (upcoming)
## Patent
Korean Patent Application 10-2026-0052624 (filed 2026). PCT filing planned.
## License
The checkpoint and inference scripts are provided for verification and research purposes only. The training algorithm (River Learning) is proprietary and not included in this repository.
## Contact
Yeonseong Cynn – whitepep@gmail.com



