KrizNore4-E2B-v1 Is Now Released!

(Cover image generated using ChatGPT)

KrizNore4-E2B-v1: A Brand-New Model for Strong Conversation and Efficient Roleplay

Following Google DeepMind's great models, we decided to benchmark the base model and found that, beyond being strong at invoking tools and agentic tasks, it managed to topple our latest model, ElaNore!

Following this, we trained the newest Gemma4 model on our new proprietary dataset, with a new pipeline and quality control, producing **KrizNore4-E2B-v1**!

  • Feast your eyes on KrizNore4's RP capabilities, achieved without sacrificing its agentic and instructional base!
  • The model is ENTIRELY UNCENSORED (credit to P-E-W) for better immersiveness. PLEASE USE ACCORDINGLY!
  • KrizNore4-E2B-v1 is STILL trained on Google Colab, with the goal of making the BEST small RP model that can run on almost any hardware!
  • KrizNore4-E2B-v1 specializes in roleplaying scenarios!
  • WE STRIPPED THE REQUIREMENT FOR CHATML! The model works well with Kobold's defaults or with other formats, depending on YOUR card.
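Since ChatML is no longer required, a character-card prompt can be assembled as plain text. A minimal sketch (the `build_prompt` helper and its field names are illustrative, not part of the model's tooling):

```python
# Hypothetical sketch: with no ChatML requirement, a character-card prompt
# can be plain text. Field names and this helper are illustrative only.
def build_prompt(card: dict, history: list) -> str:
    lines = [f"{card['name']}'s persona: {card['persona']}", ""]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"{card['name']}:")  # trailing cue for the model to continue
    return "\n".join(lines)

card = {"name": "Mira", "persona": "A wandering bard who speaks in riddles."}
prompt = build_prompt(card, [("User", "Sing me a song about the sea.")])
print(prompt)
```

Frontends like KoboldCpp or SillyTavern build an equivalent prompt from the card for you; the point is simply that no `<|im_start|>` / `<|im_end|>` control tokens are needed.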

READ ON FOR MORE INFO ON THIS 5-BILLION-PARAMETER MODEL

KrizNore4-E2B-v1 Model Procedure/Methodology:

  • KrizNore4-E2B-v1 is trained on P-E-W's Heretical version of the base model (Gemma4-E2B).
  • The RP-MIXED-V2 dataset was used for this training (PLEASE READ ElaNore's methodology for HOW WE BUILT RP-MIXED-V2). However, we decided to strip off all of its ChatML formatting, which hopefully removes the trained model's bias toward ChatML. (The newly created dataset is named RP-MIXED-V3.)
  • Multiple instances were then trained using different accounts on Google Colab, looking for the flavor, configuration, and combination that excels at our given target!
  • We concluded that the first version (out of 3 versions) was the best, balancing instruction-following and roleplaying capabilities!
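The ChatML-stripping step described above could look something like this sketch (the marker regex and record layout are assumptions; the actual RP-MIXED-V2 preprocessing may differ):

```python
import re

# Hypothetical sketch: assumes each RP-MIXED-V2 record is a ChatML string.
# RP-MIXED-V3 would then be the same text with the control tokens removed.
TURN_RE = re.compile(
    r"<\|im_start\|>(?P<role>system|user|assistant)\n(?P<body>.*?)<\|im_end\|>\n?",
    re.DOTALL,
)

def strip_chatml(record: str) -> str:
    """Drop ChatML control tokens, keeping each turn's text on its own line."""
    turns = [m.group("body").strip() for m in TURN_RE.finditer(record)]
    return "\n".join(turns)

sample = (
    "<|im_start|>user\n*waves* Hello there!<|im_end|>\n"
    "<|im_start|>assistant\n*smiles back* Well met, traveler.<|im_end|>\n"
)
print(strip_chatml(sample))
```

Dropping the role markers this way leaves only the turn text, so the finetune never learns to depend on ChatML's `<|im_start|>` / `<|im_end|>` tokens.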

(RP-MIXED-V3 composition)

  • 4k synthetically generated examples, containing:

    • Single-turn roleplay
    • Multi-turn roleplay
    • Narration roleplay
  • 2k human-written examples, containing:

    • Human-written roleplay
    • A small salvaged subset of Iris Uncensored Reformat R2
  • KrizNore4-E2B-v1 was trained using Unsloth via SFT for 3 epochs on the RP-MIXED-V3 dataset, reaching a final training loss of 2.3. It was trained on a GOOGLE COLAB FREE TIER T4 GPU, which took... long, since the DPO and adjustment runs for ElaNore had already consumed the resources I planned to allocate for the Gemma finetune.
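As a rough illustration only, an Unsloth SFT run of this shape (3 epochs on a T4) typically looks like the sketch below. Every hyperparameter, the base checkpoint name, and the dataset filename are placeholders, not the actual recipe used for KrizNore4:

```python
# Sketch only: assumes Unsloth + TRL installed and a GPU runtime.
# Checkpoint name, file paths, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/heretical-gemma4-e2b",  # placeholder base checkpoint
    max_seq_length=4096,
    load_in_4bit=True,                          # fits a free-tier T4
)
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

dataset = load_dataset("json", data_files="rp_mixed_v3.jsonl")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        num_train_epochs=3,              # matches the card's 3-epoch SFT
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        fp16=True,
        output_dir="kriznore4-sft",
    ),
)
trainer.train()
```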

  • KrizNore4-E2B-v1 is our brand-new, powerful model. If you encounter any issue, want to commission us, or have any suggestions, please email us directly at nexus.networkinteractives@gmail.com. We value any reports and suggestions on how to improve future models. Once again, feel free to finetune the model to your liking; however, please consider adding this page for CREDITS.

  • Please handle the AI with care and ethical consideration when FINETUNING this model, because of its UNCENSORED nature.

  • We are not responsible for what this model generates. Use it responsibly and legally. You downloaded it, you own what you do with it.

  • KrizNore4-E2B-v1 is:

    • Developed by: N-Bot-Int

    • License: AGPL-3.0

  • EQ-Bench 2.0 WAS NOT RUN ON THIS MODEL; we ran out of resources :( Feel free to benchmark the model yourself and publish the results, and we'll happily add them here!

      (PLACEHOLDER BENCHMARK)

  • Notice

    • For a good experience, please use:
      • (PLEASE CALIBRATE THE MODEL DEPENDING ON THE CHARACTER CARD YOU USE)
  • Detail card:

    • Parameters
      • 5 billion parameters
      • (Please check your GPU, VRAM, CPU, and RAM to see if you can comfortably run 5B models)
  • Finetuning tool:

    • Unsloth AI
    • This Gemma4 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
    • Fine-tuned using: Google Colab Free Tier
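The hardware note above ("check your GPU, VRAM, CPU, and RAM") can be sanity-checked with simple arithmetic: weight memory is roughly parameter count times bytes per weight. The figures below are approximations for the weights alone, ignoring KV cache and activations:

```python
# Back-of-envelope VRAM check for a ~5B-parameter model: weight memory is
# roughly parameter count x bytes per weight (KV cache/activations excluded).
PARAMS = 5_000_000_000
BYTES_PER_WEIGHT = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}  # approximate

estimates = {fmt: PARAMS * b / 1024**3 for fmt, b in BYTES_PER_WEIGHT.items()}
for fmt, gib in estimates.items():
    print(f"{fmt}: ~{gib:.1f} GiB of weights")
```

In rough terms: fp16 weights need about 9.3 GiB, while a 4-bit quantization drops that to about 2.3 GiB, which is why quantized builds run comfortably on modest consumer GPUs.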