Commit 7099bd3 (verified, parent 6b32f93) by snavazio: Add model card (README.md, +62 lines)

---
language: en
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- project-management
- communication
- lora
- peft
- phi-3.5
pipeline_tag: text-generation
---

# PMCommunicator

PMCommunicator is a LoRA fine-tune of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
(3.8B parameters) specialized for generating professional project management communications.

Given a project context (from PMPlanner + PMReasoner), it generates stakeholder-ready prose:
kickoff emails, status reports, risk escalation memos, executive summaries, board updates,
and project closeout reports.

## Model Details

| Property | Value |
|---|---|
| Base model | microsoft/Phi-3.5-mini-instruct (3.8B) |
| Fine-tuning method | LoRA (PEFT) |
| LoRA rank | 16 (alpha 32) |
| Trainable params | 25M / 3.82B (0.65%) |
| Training data | 28,000+ PM communication examples |
| Validation loss | 0.0105 |
| License | MIT |

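The trainable-parameter count follows directly from the LoRA configuration: each adapted weight matrix of shape `(d_out, d_in)` adds `r * (d_in + d_out)` trainable parameters. A rough sketch, assuming rank-16 adapters on every linear projection in each decoder layer (the exact set of target modules is an assumption, not stated by this card):

```python
def lora_param_count(shapes, r=16):
    """Trainable parameters added by LoRA: r * (d_in + d_out) per adapted matrix."""
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Phi-3.5-mini per-layer linear shapes (d_out, d_in): fused qkv_proj, o_proj,
# fused gate_up_proj, down_proj -- hidden size 3072, intermediate size 8192.
per_layer = [(9216, 3072), (3072, 3072), (16384, 3072), (3072, 8192)]
total = lora_param_count(per_layer * 32, r=16)  # 32 decoder layers
print(total)  # 25_165_824
```

Under these assumptions the count lands at about 25.2M, roughly 0.65% of the 3.82B base parameters, consistent with the table above.
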
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "pmcore/pmcommunicator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in half precision and spread across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = (
    "You are PMCommunicator, an expert project manager and communications specialist. "
    "Generate professional, stakeholder-ready project communications based on the provided "
    "project context. Be specific — use the actual project name, numbers, and timeline. "
    "Write in clear business English. Output only the communication document itself."
)
user = (
    "Project: Cloud migration, 50 legacy apps, 18 months, $8M budget. 3 phases planned.\n\n"
    "Write a weekly status report for stakeholders."
)

# Phi-3.5 chat format: system turn, user turn, then an open assistant turn.
prompt = f"<|system|>\n{system}<|end|>\n<|user|>\n{user}<|end|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.3, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

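The hand-built prompt string above can be factored into a small helper (a sketch; `build_prompt` is not part of any released package):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Phi-3.5-style chat prompt: system, user, then an open assistant turn."""
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        "<|assistant|>\n"
    )

prompt = build_prompt(
    "You are PMCommunicator, an expert project manager and communications specialist.",
    "Write a kickoff email for the Cloud migration project.",
)
```

With a loaded tokenizer, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` on a list of `{"role": ..., "content": ...}` messages should produce an equivalent layout without hand-maintaining the special tokens.
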

## Full Pipeline

Use PMCommunicator as part of the full PMCore pipeline for best results.
See [PMCore on GitHub](https://github.com/snavazio/pmcore).