Update README.md

Granite-4.1-30B is a 30B-parameter, long-context instruct model finetuned from *Granite-4.1-30B-Base* on a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets. Granite 4.1 models have gone through an improved post-training pipeline, including supervised finetuning and reinforcement learning alignment, resulting in enhanced tool-calling, instruction-following, and chat capabilities.

- **Developers:** Granite Team, IBM
- **HF Collection:** [Granite 4.1 Language Models HF Collection](https://huggingface.co/collections/ibm-granite/granite-41-language-models)
- **Technical Blog:** [Granite-4.1 Blog](https://huggingface.co/blog/ibm-granite/granite-4-1)
- **GitHub Repository:** [ibm-granite/granite-4.1-language-models](https://github.com/ibm-granite/granite-4.1-language-models)
- **Website:** [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date:** April 29th, 2026
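
For orientation, here is a minimal generation sketch using the Hugging Face `transformers` chat-template API. The repository id below is an assumption; confirm the exact name against the HF collection linked above.

```python
# Minimal chat sketch with transformers; the model id is a guess --
# check the Granite 4.1 HF collection for the exact repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-4.1-30b"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "user", "content": "List three uses of long-context models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```
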
Granite-4.1-30B baseline is built on a decoder-only dense transformer architecture.

**Training Data:**

Overall, our SFT data is largely drawn from three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

**Supervised Fine-Tuning and Reinforcement Learning:**

The instruct model has been fine-tuned with significantly improved SFT and reinforcement learning pipelines on a high-quality mix of the datasets described above. Through rigorous SFT-RL cycles, we have improved the Granite-4.1 model's tool-calling, instruction-following, and chat capabilities. For further details, please see our [Granite-4.1 Blog](https://huggingface.co/blog/ibm-granite/granite-4-1).
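
To make the SFT step concrete, the sketch below shows the standard supervised fine-tuning objective in plain PyTorch: next-token cross-entropy with prompt positions masked out of the loss. It is a generic toy illustration with invented token ids and a stand-in model, not the Granite training code.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for the language model: embedding plus linear head over a tiny vocab.
vocab, hidden = 100, 32
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab, hidden),
    torch.nn.Linear(hidden, vocab),
)

prompt = torch.tensor([5, 8, 13])         # instruction tokens (invented ids)
response = torch.tensor([21, 34, 55, 2])  # response tokens (invented ids)
input_ids = torch.cat([prompt, response]).unsqueeze(0)  # (1, seq)

# Mask the prompt so the loss is computed only on response tokens.
labels = input_ids.clone()
labels[:, : len(prompt)] = -100

logits = model(input_ids)  # (1, seq, vocab)
# Shift by one position: the logits at position t predict token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    labels[:, 1:].reshape(-1),
    ignore_index=-100,
)
loss.backward()
print(f"masked next-token loss: {loss.item():.3f}")
```
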
**Infrastructure:**

We trained the Granite 4.1 Language Models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides scalable and efficient infrastructure for training our models across thousands of GPUs.
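
As a small-scale analogue of such a setup, here is a minimal PyTorch `DistributedDataParallel` skeleton using the NCCL backend, which rides NVLink within a node and InfiniBand across nodes when available. This is a generic sketch, not the Granite training stack.

```python
# Generic DDP skeleton; launch with: torchrun --nproc_per_node=8 train.py
# (train.py is a hypothetical script name)
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # NCCL handles NVLink/InfiniBand transport
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).to(local_rank)  # stand-in for the real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 1024, device=local_rank)
    loss = model(x).pow(2).mean()  # dummy objective
    loss.backward()                # gradients are all-reduced across ranks here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```
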
**Ethical Considerations and Limitations:**

Granite 4.1 Instruction Models are finetuned primarily on instruction-response pairs in English, supplemented with multilingual data covering multiple languages. Although the model can handle multilingual dialog use cases, its performance on non-English tasks may not match its performance on English tasks; in such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs (see the sketch below). While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We urge the community to use this model with proper safety testing and tuning tailored to their specific tasks. To enhance safety in enterprise deployments, we recommend using Granite 4.1 Language Models alongside [Granite Guardian](https://huggingface.co/ibm-granite/granite-guardian-4.1-8b), a model designed to detect and flag risks in inputs and outputs across key dimensions outlined in the IBM AI Risk Atlas.
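
As a concrete illustration of the few-shot suggestion, the snippet below reuses `tokenizer` and `model` from the generation sketch earlier in this card and packs invented German sentiment examples into the chat template ahead of the real query.

```python
# Hypothetical few-shot prompt for a German sentiment task: earlier
# user/assistant turns serve as in-context examples (all pairs invented).
messages = [
    {"role": "user", "content": "Stimmung: 'Das Essen war hervorragend.'"},
    {"role": "assistant", "content": "positiv"},
    {"role": "user", "content": "Stimmung: 'Der Service war enttäuschend.'"},
    {"role": "assistant", "content": "negativ"},
    {"role": "user", "content": "Stimmung: 'Die Lieferung kam pünktlich an.'"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```
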
**Resources**

- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite