# Appen AI Research
**The data behind the models you use every day.**
Appen sits at the intersection of human expertise and AI development – we've helped build training data for the world's leading search engines, ranking models, and frontier LLMs. Now we're publishing the research, benchmarks, and insights that come from working at that scale.
## What we publish here
- **Research** – data-centric AI, human-AI co-annotation, annotation methodology, and model evaluation
- **Benchmarks** – evaluation suites covering hallucination, bias, annotation quality, and model readiness
- **Whitepapers** – practical guides on responsible AI, data strategy, and the annual State of AI & ML report
## Where our expertise runs deep
- **Alignment & RLHF** – CoT reasoning traces, SME-led feedback, SFT demonstrations, adversarial red teaming
- **Multimodal** – VLM training data, image-text pairs, video annotation, structured document labelling
- **Speech & audio** – 500+ global locales, expressive TTS, dialectal and code-switched speech
- **Evaluation** – hallucination benchmarking, A/B arena testing, bias detection, regulatory audit support
- **Embodied AI** – world model data, egocentric data, robotics, LiDAR
## Learn more
[appen.com](https://www.appen.com) | [Whitepapers](https://www.appen.com/whitepapers) | [Data-Centric AI](https://www.appen.com/data-centric-ai)