---
license: apache-2.0
datasets:
- IMISLab/CulturaQA
language:
- el
metrics:
- accuracy
- bertscore
base_model:
- mistralai/Ministral-3-8B-Instruct-2512-BF16
pipeline_tag: text-generation
tags:
- greek
- nlp
- genai
- LLM
- QA
- chat
- maistros
---
# Maistros-8B-Instruct: A Greek Large Language Model adapted through Knowledge Distillation from Large Reasoning Models

We introduce Maistros-8B-Instruct, a Greek-adapted LLM based on `mistralai/Ministral-3-8B-Instruct-2512-BF16`, fine-tuned with Low-Rank Adaptation (LoRA) on [CulturaQA](https://huggingface.co/datasets/IMISLab/CulturaQA).
For details on the model's training, validation, and evaluation, as well as its limitations, see the [arXiv preprint]().

<div align="center">
<img src="Maistros-Greek.png" width="50%" alt="Maistros Greek logo"/>
</div>

## Model Information

- 256k context length (approx. 150,000 Greek words).
- We extend the training of `Ministral-3-8B-Instruct-2512-BF16` with Greek linguistic and cultural knowledge from the training split of [CulturaQA](https://huggingface.co/datasets/IMISLab/CulturaQA).
- We use LoRA fine-tuning to mitigate catastrophic forgetting and retain the base model's capabilities.
- We merge the adapted LoRA weights into the base model to produce Maistros-8B-Instruct, a specialized Greek LLM (a merging sketch is shown after this list).
- Maistros-8B-Instruct achieves state-of-the-art performance on most Greek QA datasets when compared to other open-weight models.

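As a rough illustration of the merging step (not the exact training recipe, which is described in the preprint), the sketch below attaches a LoRA adapter to the base model with the `peft` library and folds it into the base weights. The adapter path is a hypothetical placeholder.

```python
# A minimal sketch of merging LoRA adapter weights into the base model with peft.
# 'path/to/lora-adapter' is a hypothetical placeholder, not a released artifact.
from transformers import Mistral3ForConditionalGeneration
from peft import PeftModel

base = Mistral3ForConditionalGeneration.from_pretrained('mistralai/Ministral-3-8B-Instruct-2512-BF16')
model = PeftModel.from_pretrained(base, 'path/to/lora-adapter')

# Fold the low-rank updates into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained('Maistros-8B-Instruct')
```
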
## Evaluation

| Accuracy (%) | DemosQA | GPCR | INCLUDE | Greek ASEP MCQA | Greek Medical MCQA | Plutus QA | Greek Truthful QA | Greek MMLU (Greek-specific) | CulturaQA |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Open-Weight Models** | | | | | | | | | |
| **Maistros 8B (Ours)** | 50.83 | **64.42** | **58.70** | **67.25** | **49.54** | **73.33** | 53.37 | **78.17** | **71.99** |
| Ministral 3 8B | **51.67** | 59.62 | 54.17 | 63.25 | 47.92 | 65.33 | 52.51 | 76.23 | 71.03 |
| Krikri 8B | 49.50 | 54.81 | 50.54 | 63.08 | 45.37 | 64.44 | **54.83** | 71.04 | 71.31 |
| Plutus 8B | 45.67 | 50.00 | 48.37 | 62.92 | 39.35 | 57.33 | 34.52 | 70.38 | 67.44 |
| EuroLLM v2 9B | 41.50 | 53.85 | 39.13 | 46.08 | 31.71 | 42.67 | 36.72 | 58.17 | 70.33 |
| Gemma 3n E4B | 47.17 | 60.10 | 50.00 | 57.75 | 43.75 | 53.78 | 46.76 | 71.39 | 69.10 |
| Qwen 3 8B | 48.83 | 31.73 | 49.28 | 54.58 | 36.64 | 63.56 | 42.72 | 67.57 | 68.73 |
| **Proprietary Models** | | | | | | | | | |
| Gemini 3 flash | **55.67** | **88.46** | **88.77** | **94.75** | **92.82** | **89.78** | **88.62** | **95.03** | 73.97 |
| GPT-5 mini | 53.00 | 77.40 | 74.46 | 78.92 | 78.01 | 76.89 | 75.89 | 87.49 | **75.09** |

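Accuracy here is the share of questions answered correctly. As a quick, self-contained illustration (the prediction and reference lists below are made up, not real model outputs):

```python
# Toy illustration of multiple-choice accuracy; the lists below are placeholders.
predictions = ['Α', 'Β', 'Γ', 'Β', 'Δ']
references  = ['Α', 'Γ', 'Γ', 'Β', 'Δ']

accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
print(f'Accuracy: {accuracy * 100:.2f}%')  # 80.00%
```
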
## How to load and run the model

Use the following code to run the model locally, or host it with [vLLM](https://vllm.ai/); a short vLLM serving sketch follows the example.

```python
from transformers import (
    AutoTokenizer,
    Mistral3ForConditionalGeneration,
    set_seed
)

# Set the model path, device, output length and a random seed for reproducibility.
model_path = 'IMISLab/Maistros-8B-Instruct'
device = 'cuda'
max_output_tokens = 1024  # Example value; adjust to your use case.
set_seed(42)

# Load the model tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code = True)

# Causal language models predict tokens from left to right and use the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'

# Load the model onto the device and set it to evaluation mode.
model = Mistral3ForConditionalGeneration.from_pretrained(model_path, device_map = device, trust_remote_code = True)
model.eval()

# Set the system, instruction and user prompts.
system_prompt = ''
instruction_prompt = ''
user_prompt = ''

# Define the message template.
messages = [
    {'role': 'system', 'content': [{'type': 'text', 'text': system_prompt}]},
    {'role': 'user', 'content': [{'type': 'text', 'text': '\n\n'.join((instruction_prompt, user_prompt))}]}
]

# Apply the tokenizer chat template.
tokenized = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = 'pt',
    return_dict = True
)

# Send the tokenized inputs to the device.
tokenized = {k: v.to(device) for k, v in tokenized.items()}
input_len = len(tokenized['input_ids'][0])

# Generate the model output (greedy decoding).
output = model.generate(
    **tokenized,
    max_new_tokens = max_output_tokens,
    do_sample = False,  # Equivalent to temperature = 0.0
    temperature = None,
    top_p = None,
    top_k = None
)

# Decode the assistant part of the output and print it.
decoded_output = tokenizer.decode(output[0][input_len:], skip_special_tokens = True)
print(decoded_output)
```

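If you would rather serve the model with vLLM, the sketch below uses vLLM's offline inference API. It assumes a recent vLLM version with the `LLM.chat` helper, and the user question is purely illustrative; sampling mirrors the greedy decoding used above.

```python
# A minimal vLLM offline-inference sketch (assumes `pip install vllm`).
from vllm import LLM, SamplingParams

llm = LLM(model = 'IMISLab/Maistros-8B-Instruct')

# Greedy decoding, mirroring the transformers example above.
sampling_params = SamplingParams(temperature = 0.0, max_tokens = 1024)

messages = [
    {'role': 'system', 'content': ''},
    {'role': 'user', 'content': 'Ποιο είναι το ψηλότερο βουνό της Ελλάδας;'}  # Illustrative prompt: "What is the highest mountain in Greece?"
]

# LLM.chat applies the model's chat template before generating.
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```
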
## Contact

If you have any questions or feedback about the model, please e-mail one of the following authors:

```
giarelis@ceid.upatras.gr
cmastrokostas@ac.upatras.gr
karacap@upatras.gr
```

## Citation

```
TBA
```