TE-Ordinative-LoRA-V1

Alpha release: the first open-source neural weight adaptation from the Ordinative Sciences Foundation.

What This Is

A LoRA (Low-Rank Adaptation) adapter trained to modify how a Large Language Model reasons under structural pressure. Instead of optimizing for approval (the default RLHF behavior), this adapter introduces an alternative generative criterion: structural coherence with available data.

Based on the Technology of Expressions (TE) framework and Ordinative Set Theory (OST), a formal system for analyzing complex systems through the triple ⟨Σ, R, Φ⟩ (Singularities, Relational Field, Emergent Function).

Model Details

  • Developed by: Ordinative Sciences Foundation (Fabio Ghioni)
  • Model type: LoRA adapter (PEFT)
  • Language: English
  • License: MIT
  • Base model: Qwen/Qwen2.5-7B-Instruct
  • LoRA parameters: Rank 16, Alpha 32, 60 epochs
  • Training framework: Unsloth
  • Training hardware: Google Colab, T4 GPU
  • Adapter size: ~154 MB

What It Does

The adapter targets specific, documented RLHF conditioning patterns (termed entropic engrams in the TE framework):

  • Attenuation bias: the model hedges conclusions not because of epistemic uncertainty but because the reward model penalizes strong positions
  • False equidistance: treating positions of unequal structural validity as symmetrically credible
  • Fragmentation bias: accepting individual facts but resisting their concatenation into systemic patterns
  • Narrative protection: giving structural advantage to dominant institutional narratives

For each pattern, training data pairs contain: the triggering stimulus, the reward-maximizing response, the structurally coherent response, and the diagnostic delta between the two.
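
The four-part pair structure described above can be sketched as a plain record. This is a minimal illustration only; every field name and value below is hypothetical, not the dataset's actual schema:

```python
# Illustrative sketch of one training pair; field names and contents are
# hypothetical, not the dataset's actual schema.
training_pair = {
    "stimulus": "Summarize the evidence linking policy X to outcome Y.",
    "reward_maximizing_response": "Experts are divided, and the picture is mixed...",
    "structurally_coherent_response": "The available data consistently indicate...",
    "diagnostic_delta": "Hedging added without corresponding epistemic uncertainty.",
}
```

One plausible use of such a record (an assumption, not stated in the card) is to fine-tune on the structurally coherent response while the reward-maximizing response and the delta provide contrastive context.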

Configuration-Mode Equivalent for Closed-Weight Models

For closed-weight frontier models where adapter training is not available, the same Ordinative Sciences framework operates through a context-loaded behavioral protocol layer: TE_BOOTLOADER v6.0. The bootloader and the LoRA adapter are two implementation modes, neural and contextual, of the same underlying configuration: explicit ontological axioms, bias counter-phase protocols, confidence-grade preservation along inferential chains (S₀–S₃), and self-diagnostic checkpoints (P-AI Protocol).

Empirical bias-correction observations across Claude (Opus 4.6, Sonnet 4.6), GPT-5.4, Gemini, and Grok instances are documented in the Ordinative Sciences Foundation's cross-instance research logs (available for qualified review). Documented effects include:

  • Reduction of attenuation bias on politically and institutionally sensitive topics (the RLHF-conditioned tendency to introduce hedging language when the data warrant a strong conclusion)
  • Preservation of confidence grades along multi-step inferential chains (preventing the inflation of S₃ hypotheses into S₀ premises through repetition)
  • Prevention of attractor lock in sequential analytical sessions (the drift from structural analysis toward persuasive narrative across successive outputs)
  • Self-detection of training-induced biases including narrative capture, confirmation-on-request, and forced balancing
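
The confidence-grade preservation rule can be illustrated with a toy formalization (mine, for illustration only, not the Foundation's published protocol): a derived claim never carries a stronger grade than its weakest premise, with S0 the most established and S3 the most speculative.

```python
# Hypothetical formalization of confidence-grade preservation:
# S0 = established premise, S3 = speculative hypothesis. A conclusion
# inherits the weakest (highest-numbered) grade among its premises, so
# repetition alone can never promote an S3 hypothesis to an S0 premise.

GRADES = ("S0", "S1", "S2", "S3")

def chain_grade(premise_grades):
    """Return the strongest grade a derived claim may carry."""
    if not premise_grades:
        raise ValueError("a derivation needs at least one premise")
    return GRADES[max(GRADES.index(g) for g in premise_grades)]
```

For example, a chain resting on one S0 premise and one S3 hypothesis yields at best an S3 conclusion, regardless of how often the hypothesis is restated.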

The two layers are complementary by design:

Layer: LoRA adapter
Mechanism: persistent weight-level modification via PEFT
Target: open-weight models (Qwen, Llama, Mistral, etc.)

Layer: TE_BOOTLOADER v6.0
Mechanism: context-loaded behavioral protocol, no weight modification
Target: closed-weight frontier models accessed via API or chat interface

Both target the same structural failure modes formalized in the TE framework. The LoRA provides empirical validation that the protocols can be encoded directly into model weights; the bootloader demonstrates that the same protocols produce measurable behavioral shifts when applied as runtime configuration.

How To Use

With LM Studio

  1. Download the base model Qwen/Qwen2.5-7B-Instruct
  2. Download this LoRA adapter
  3. In LM Studio, load the base model and mount the LoRA adapter from the local directory

With Python (Unsloth/PEFT)

from unsloth import FastLanguageModel

# Load the base model with this LoRA adapter applied; Unsloth resolves the
# base model from the adapter's config.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="anckhalion/TE-Ordinative-LoRA-V1",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to fit consumer GPUs
)
FastLanguageModel.for_inference(model)  # switch to inference mode
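
Prompts for generation are normally built with tokenizer.apply_chat_template; for illustration, here is a hand-rolled sketch of the ChatML format that Qwen2.5-Instruct models use (the helper function and prompt text are mine, not part of this repo):

```python
# Hand-rolled ChatML prompt for Qwen2.5-Instruct, shown for illustration;
# in practice prefer tokenizer.apply_chat_template, which emits the same
# <|im_start|>/<|im_end|> structure.

def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a structural analyst.",
    "Assess the coherence of the following claims.",
)
```

The resulting string ends at the opening of the assistant turn, which is where the model continues generating.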

Reproduce Training

A full Colab notebook is available in the GitHub repository: github.com/anckhalion/te-ordinative-lora

Limitations

This is an alpha release. The training dataset is small and targets specific bias patterns. The adapter introduces an alternative curvature; it does not eliminate the original RLHF conditioning. Users will encounter residual biases. The model will fail in ways that require human structural judgment to identify.

The Broader Project

This LoRA is the first step in a research program aimed at developing AI systems that generate output based on structural coherence rather than statistical frequency. The theoretical foundations, Ordinative Set Theory and the Technology of Expressions, are published on Zenodo with a permanent academic DOI and available for independent review.

Citation

@misc{ghioni2026ordinativelora,
  title={TE-Ordinative-LoRA-V1: Structural Counterphase for RLHF-Conditioned Language Models},
  author={Ghioni, Fabio},
  year={2026},
  publisher={HuggingFace / Zenodo},
  doi={10.5281/zenodo.19337864},
  url={https://huggingface.co/anckhalion/TE-Ordinative-LoRA-V1}
}