---
license: mit
base_model: google-bert/bert-base-uncased
library_name: transformers
pipeline_tag: fill-mask
tags:
- bwsk
- combinator-analysis
- transformer
- reversible-backprop
- convergence-training
datasets:
- wikitext
metrics:
- pseudo-perplexity
model-index:
- name: bwsk-bert-base
  results:
  - task:
      type: fill-mask
      name: Fine-tune (Conventional)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 5.4006
      verified: false
  - task:
      type: fill-mask
      name: Fine-tune (BWSK Analyzed)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 5.5685
      verified: false
  - task:
      type: fill-mask
      name: Fine-tune (BWSK Reversible)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 5.4872
      verified: false
  - task:
      type: fill-mask
      name: From Scratch (Conventional)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 1489.1825
      verified: false
  - task:
      type: fill-mask
      name: From Scratch (BWSK Analyzed)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 1480.6243
      verified: false
  - task:
      type: fill-mask
      name: From Scratch (BWSK Reversible)
    dataset:
      name: wikitext
      type: wikitext
    metrics:
    - name: pseudo-perplexity
      type: pseudo-perplexity
      value: 1503.8561
      verified: false
---
# BWSK BERT-base

BERT-base (110M parameters) trained in six variants (3 BWSK modes × 2 experiments) on WikiText-2 with full convergence training and early stopping. This repository consolidates the model weights, configs, and training results for all six variants.

## What is BWSK?

BWSK is a framework that classifies every neural-network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, a synchronization point) using combinator logic. This classification enables reversible backpropagation through S-phases to save activation memory, as well as CALM-based parallelism analysis.
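The exact form of BWSK's reversible S-phase backprop is not spelled out in this card; as a rough illustration of the idea, a RevNet-style additive coupling (our assumption, not the project's actual implementation) lets the backward pass recompute inputs from outputs instead of storing activations:

```python
import torch
from torch import nn


class ReversibleBlock(nn.Module):
    """Additive coupling: outputs fully determine inputs, so activations
    need not be cached for backprop -- they can be reconstructed."""

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Exact algebraic inverse of forward()
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```

Because `inverse` recovers `(x1, x2)` exactly, a custom backward pass can trade the stored-activation memory for the extra recomputation time, which matches the lower peak memory but longer wall-clock time of the reversible runs below.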
## Model Overview
| Property | Value |
|---|---|
| Base Model | google-bert/bert-base-uncased |
| Architecture | Transformer (masked_lm) |
| Parameters | 110M |
| Dataset | WikiText-2 |
| Eval Metric | Pseudo-Perplexity |
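Pseudo-perplexity for a masked LM is conventionally computed by masking each token in turn, scoring it with the model, and exponentiating the mean negative log-likelihood. A sketch under that convention (it may differ in detail from the evaluation script actually used here):

```python
import math

import torch


def pseudo_perplexity(model, tokenizer, text: str) -> float:
    """Mask each token in turn and average the model's NLL at that position."""
    enc = tokenizer(text, return_tensors="pt")
    ids = enc.input_ids[0]
    nll, n = 0.0, 0
    for i in range(1, ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        logp = torch.log_softmax(logits[0, i], dim=-1)[ids[i]]
        nll -= logp.item()
        n += 1
    return math.exp(nll / n)
```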
## S/K Classification
| Type | Ratio |
|---|---|
| S-type (information-preserving) | 67.3% |
| K-type (information-erasing) | 32.7% |
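The ratio above comes from BWSK's own analysis; a toy sketch of what such op tagging could look like (the op lists here are hypothetical, not BWSK's actual taxonomy):

```python
# Hypothetical op lists for illustration only.
S_OPS = {"embedding_lookup", "residual_add", "permutation"}   # reversible
K_OPS = {"softmax", "layernorm", "max_pool", "dropout"}       # erase information


def classify_op(name: str) -> str:
    """Tag an op as S-type, K-type, or unknown."""
    if name in S_OPS:
        return "S"
    if name in K_OPS:
        return "K"
    return "unknown"


def s_ratio(ops) -> float:
    """Fraction of classified ops that are S-type."""
    tags = [classify_op(o) for o in ops]
    s, k = tags.count("S"), tags.count("K")
    return s / (s + k)
```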
## Fine-tune Results
| Mode | Final Loss | Val Pseudo-Perplexity | Test Pseudo-Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 1.8896 | 5.56 | 5.40 | 4.0 GB | 7.4m | 5 |
| BWSK Analyzed | 1.9163 | 5.54 | 5.57 | 4.0 GB | 7.3m | 5 |
| BWSK Reversible | 1.5086 | 5.57 | 5.49 | 2.9 GB | 9.1m | 5 |
Memory savings (reversible vs conventional): 27.7%
## From Scratch Results
| Mode | Final Loss | Val Pseudo-Perplexity | Test Pseudo-Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 6.9915 | 1383.85 | 1489.18 | 4.0 GB | 7.3m | 5 |
| BWSK Analyzed | 7.4792 | 1373.72 | 1480.62 | 4.0 GB | 7.4m | 5 |
| BWSK Reversible | 7.0919 | 1401.24 | 1503.86 | 2.9 GB | 9.0m | 5 |
Memory savings (reversible vs conventional): 27.6%
## Repository Structure

```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```
## Usage

Load a specific variant:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Load fine-tuned conventional variant
model = AutoModelForMaskedLM.from_pretrained(
    "tzervas/bwsk-bert-base", subfolder="finetune-conventional"
)
tokenizer = AutoTokenizer.from_pretrained(
    "tzervas/bwsk-bert-base", subfolder="finetune-conventional"
)

# Load from-scratch BWSK reversible variant
model = AutoModelForMaskedLM.from_pretrained(
    "tzervas/bwsk-bert-base", subfolder="scratch-bwsk-reversible"
)
```
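Once a variant is loaded, a quick fill-mask check can be run by locating the `[MASK]` position and taking the top logits (the helper name `top_predictions` is ours, not part of this repo):

```python
import torch


def top_predictions(model, tokenizer, text: str, k: int = 5):
    """Return the k most likely fillers for the single [MASK] in `text`."""
    enc = tokenizer(text, return_tensors="pt")
    # Index of the [MASK] token in the sequence
    mask_pos = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids).logits
    top = torch.topk(logits[0, mask_pos], k)
    return [tokenizer.decode([i]) for i in top.indices]


# e.g. top_predictions(model, tokenizer, "The capital of France is [MASK].")
```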
## Training Configuration
| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 5e-05 |
| LR (from-scratch) | 3e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping | Patience 3 |
| Batch Size | 4 |
| Sequence Length | 512 |
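The table above can be wired together roughly as follows. This is our reconstruction on a toy model, not the project's training script; the warmup/total step counts and the dummy data are placeholders:

```python
import math

import torch
from torch import nn

# Toy stand-in for BERT; hyperparameters mirror the table above.
model = nn.Linear(16, 4)
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

warmup_steps, total_steps = 10, 100  # placeholder schedule lengths

def cosine_with_warmup(step: int) -> float:
    """LR multiplier: linear warmup, then cosine decay to zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

sched = torch.optim.lr_scheduler.LambdaLR(opt, cosine_with_warmup)

use_amp = torch.cuda.is_available()  # float16 AMP needs a GPU
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
best, patience, bad = float("inf"), 3, 0

for epoch in range(5):
    x = torch.randn(4, 16)                 # batch size 4, dummy features
    y = torch.randint(0, 4, (4,))
    with torch.autocast("cuda" if use_amp else "cpu", enabled=use_amp):
        loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    scaler.scale(loss).backward()
    scaler.unscale_(opt)                   # unscale before clipping
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(opt)
    scaler.update()
    sched.step()

    val_loss = loss.item()                 # stand-in for a real val pass
    if val_loss < best - 1e-4:
        best, bad = val_loss, 0
    else:
        bad += 1
        if bad >= patience:                # early stopping, patience 3
            break
```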
## Links

- Code: https://github.com/tzervas/ai-s-combinator
## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title  = {BWSK: Combinator-Typed Neural Network Analysis},
  year   = {2026},
  url    = {https://github.com/tzervas/ai-s-combinator},
}
```
## License

MIT