mzen committed · verified
Commit 500db8e · 1 Parent(s): 2c6835d

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -14,7 +14,8 @@ pipeline_tag: text-generation
 
 # Model Card for EventModel-1.2B
 
-This model is a fine-tuned version of [liquidai/LFM2-1.2B](https://huggingface.co/liquidai/LFM2-1.2B).
+EventModel is a 1.2B-parameter fine-tune of LFM2-1.2B trained on data extracted from r/Parenting. The goal is to generate problems that a child of a given age group might face: posts from r/Parenting are analyzed to extract the problem and the child's age group using iterative few-shot prompting, and a generative model is then fine-tuned on the results.
+
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -22,8 +23,8 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 ```python
 from transformers import pipeline
 
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="None", device="cuda")
+question = "13 year old, boy"
+generator = pipeline("text-generation", model="mzen/EventModel-1.2B", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
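The README describes an extraction step in which each r/Parenting post is analyzed via few-shot prompting to pull out the child's age group and the core problem. A minimal sketch of how such a few-shot prompt could be assembled is below; the example posts, labels, system instruction, and function name are all illustrative assumptions, not the author's actual pipeline.

```python
# Hypothetical sketch of the few-shot extraction step: pair a handful of
# labeled example posts with a new post, in chat-message format, so an
# instruction-tuned model can extract the age group and problem.
# All examples and names here are illustrative, not from the real pipeline.

FEW_SHOT_EXAMPLES = [
    {
        "post": "My daughter just started middle school and is anxious about making friends.",
        "label": "11 year old, girl — social anxiety about a new school",
    },
    {
        "post": "Our toddler refuses to sleep in his own bed and cries every night.",
        "label": "2 year old, boy — bedtime resistance",
    },
]

def build_extraction_messages(post: str) -> list[dict]:
    """Build a chat message list: a system instruction, few-shot pairs,
    then the new post to be analyzed."""
    messages = [{
        "role": "system",
        "content": "Extract the child's age group, sex, and the core problem "
                   "from the parenting post below.",
    }]
    for ex in FEW_SHOT_EXAMPLES:
        # Each example becomes a user/assistant turn the model can imitate.
        messages.append({"role": "user", "content": ex["post"]})
        messages.append({"role": "assistant", "content": ex["label"]})
    messages.append({"role": "user", "content": post})
    return messages

msgs = build_extraction_messages(
    "My 13 year old son is being left out by his friend group."
)
print(len(msgs))  # → 6: one system turn, two few-shot pairs, one query
```

The resulting message list has the same `[{"role": ..., "content": ...}]` shape the Quick start snippet passes to the `pipeline` call, so the same generator interface could serve both the extraction and generation stages.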