alextripplet committed (verified)
Commit 5ff5d0a · Parent: ac01ee8

Update README.md

Files changed (1): README.md (+2 −1)
@@ -9,13 +9,14 @@ pinned: false
 
 # Tripplet Research
 
-## Announcement 🚨
+## Announcement 🚨 | April 25, 2026
 We are soon releasing Suzhou 3.2. Suzhou 3.1 unfortunately performs extremely poorly on the ARC-AGI benchmark, scoring a whopping 0%!
 That is why we are releasing Suzhou 3.2, a smarter, more capable model. We are releasing Majuli 3.2 shortly after (2 weeks later, on May 16th).
 Taipei has yet to catch up: Taipei 3.1 is just GLM-5.1 with a different model card, and we are still training it, so we released a
 beta on Hugging Face. Taipei 3.2 will come out in June or July. The Tripplet Team (17 agents, 2 people) is working extremely hard to get
 these models out and is extremely excited to show them to the Tripplet Community!
 
+
 ## What Tripplet Is
 
 An independent research collective exploring the frontier of open language models through **weight-space merging**, efficient fine-tuning, and architectural experimentation.
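The weight-space merging the README mentions is, in its simplest form, linear interpolation between two checkpoints' parameter tensors. The sketch below is illustrative only — the function name, toy state dicts, and plain-list "tensors" are assumptions, not Tripplet's actual merging code, which would operate on real checkpoint state dicts (e.g. safetensors) with matching shapes:

```python
def merge_weights(state_a, state_b, alpha=0.5):
    """Linearly interpolate two state dicts: (1 - alpha) * A + alpha * B, per parameter.

    Both models must share the same parameter names and shapes.
    alpha = 0.0 returns model A's weights; alpha = 1.0 returns model B's.
    """
    if state_a.keys() != state_b.keys():
        raise ValueError("models must share the same parameter names")
    return {
        name: [(1 - alpha) * a + alpha * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }


# Toy example: flat lists of floats stand in for parameter tensors.
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0]}
merged = merge_weights(model_a, model_b, alpha=0.5)
# merged["layer.weight"] == [2.0, 3.0]; merged["layer.bias"] == [0.5]
```

Practical merges layer more machinery on top (per-layer alphas, task-vector arithmetic, sparsification), but all of them reduce to elementwise operations on aligned parameter tensors like this.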