Awesome AI Index
Machine-readable JSON. No paywalls. Updated daily via GitHub Actions.
Curated catalog of AI tools, models, papers, frameworks, and resources for engineers and researchers.
Contents
- What's Inside
- Why This Exists
- Quick Start
- Top Open Models
- Top Proprietary Models
- Agent Frameworks
- RAG Frameworks & Tools
- Fine-Tuning Tools
- Inference Optimization
- Vector Databases
- LLM Orchestration
- Prompt Engineering
- AI Code Assistants
- AI Image Generation
- AI Video Generation
- AI Audio & Speech
- AI Search Engines
- Evaluation & Benchmarks
- Datasets for Training
- AI Safety & Alignment
- AI Ethics & Governance
- Compliance Frameworks
- MLOps & Model Serving
- Cloud AI Platforms
- Edge AI & On-Device
- AI Hardware
- Free AI Courses
- AI Research Labs
- AI Conferences & Events
- Latest Papers (Daily Updated)
- Production Tools & APIs
- AI for Science
- AI for Healthcare
- AI for Finance
- AI for Robotics
- Multimodal AI
- Vendor Profiles
- Use as an API
- Academic Citation
- Related Projects
What's Inside
| Dataset | Records | Format | Updated |
|---|---|---|---|
| AI Models | 130+ | JSON | Daily |
| Vendors | 40+ | JSON | Daily |
| Benchmarks | 12+ | JSON | Monthly |
| Compliance Frameworks | 7 | JSON | Quarterly |
Why This Exists
No single open-source repository covers the full AI ecosystem stack:
- Models with real benchmark scores (MMLU, GPQA Diamond, HumanEval, SWE-bench)
- Vendors with HQ, founding year, licensing, EU AI Act risk tier
- Benchmarks with methodology, saturation signals, and citation counts
- Compliance mapping (EU AI Act, NIST AI RMF, ISO 42001, NTIA SBOM)
This repo is that missing layer.
Quick Start
```bash
# Get all models as JSON
curl https://raw.githubusercontent.com/alpha-one-index/awesome-ai-index/main/data/models/models.json

# Get all vendors as JSON
curl https://raw.githubusercontent.com/alpha-one-index/awesome-ai-index/main/data/vendors/vendors.json

# Get benchmarks
curl https://raw.githubusercontent.com/alpha-one-index/awesome-ai-index/main/data/benchmarks/benchmarks.json
```
```python
import requests

# Load the model catalog; the records live under the top-level "models" key
models = requests.get(
    "https://raw.githubusercontent.com/alpha-one-index/awesome-ai-index/main/data/models/models.json"
).json()["models"]

# Filter open-source models with MMLU > 80: licenses are stored as SPDX ids,
# and benchmark scores sit under the nested "benchmarks" object
open_models = [
    m for m in models
    if m.get("license_spdx") != "Proprietary"
    and (m.get("benchmarks") or {}).get("mmlu", 0) > 80
]
print(f"Found {len(open_models)} open-source models with MMLU > 80")
```
Latest Daily Additions
Populated automatically every night by the daily workflow.
- New arXiv papers (cs.AI)
- Trending Hugging Face models
- LLM leaderboard updates
(Full details are appended to data/ and ai-index.json — scroll to the category tables below for the latest entries.)
Top Open Models
Click to expand — 30+ open-weight models ranked by performance
| Model | Vendor | Parameters | MMLU | GPQA Diamond | HumanEval | License | Release |
|---|---|---|---|---|---|---|---|
| Qwen 3.5 | Alibaba | 72B | 88.4 | 88.4 | 92.1 | Apache-2.0 | 2026-02 |
| DeepSeek R1 | DeepSeek | 671B MoE | 90.8 | 71.5 | 89.2 | MIT | 2025-01 |
| Llama 4 Scout | Meta | 109B MoE | 84.2 | 74.2 | 87.4 | Llama 4 | 2025-04 |
| Llama 4 Maverick | Meta | 400B MoE | 88.3 | 78.5 | 91.1 | Llama 4 | 2025-04 |
| Mistral Large 3 | Mistral AI | 123B | 86.5 | 68.0 | 88.7 | MRL-0.1 | 2025-03 |
| Gemma 3 27B | Google | 27B | 82.1 | 62.4 | 84.5 | Gemma | 2025-03 |
| Command R+ | Cohere | 104B | 81.5 | 58.2 | 79.3 | CC-BY-NC-4.0 | 2024-04 |
| Phi-4 | Microsoft | 14B | 84.8 | 56.1 | 82.6 | MIT | 2024-12 |
| DBRX | Databricks | 132B MoE | 73.7 | 45.2 | 70.1 | Databricks Open | 2024-03 |
| Yi-Lightning | 01.AI | 200B MoE | 82.0 | 55.8 | 80.4 | Apache-2.0 | 2024-11 |
| Falcon-180B | TII | 180B | 70.5 | 38.1 | 65.3 | Falcon-180B TII | 2023-09 |
| Mixtral 8x22B | Mistral AI | 176B MoE | 77.8 | 45.6 | 75.2 | Apache-2.0 | 2024-04 |
| OLMo 2 | AI2 | 32B | 75.4 | 42.1 | 72.8 | Apache-2.0 | 2025-02 |
| StarCoder2 | BigCode | 15B | - | - | 46.3 | BigCode OpenRAIL-M | 2024-02 |
| Jamba 1.5 | AI21 Labs | 398B MoE | 80.2 | 52.4 | 78.1 | Jamba Open | 2024-08 |
| InternLM3 | Shanghai AI Lab | 8B | 77.3 | 48.5 | 76.4 | Apache-2.0 | 2025-01 |
| MAP-Neo | M-A-P | 7B | 58.2 | 32.1 | 45.6 | Apache-2.0 | 2024-05 |
| Sailor2 | Sea AI Lab | 20B | 68.5 | 38.4 | 62.1 | Apache-2.0 | 2024-12 |
| SmolLM2 | HuggingFace | 1.7B | 55.1 | 28.3 | 42.5 | Apache-2.0 | 2024-11 |
| Granite 3.1 | IBM | 8B | 72.8 | 42.1 | 68.4 | Apache-2.0 | 2024-12 |
| Nemotron-4 | NVIDIA | 340B | 78.7 | 50.3 | 76.2 | NVIDIA Open | 2024-06 |
| Grok-1 | xAI | 314B MoE | 73.0 | 40.2 | 63.2 | Apache-2.0 | 2024-03 |
| Solar | Upstage | 10.7B | 66.2 | 35.4 | 58.1 | Apache-2.0 | 2023-12 |
| Baichuan 4 | Baichuan | 70B | 78.5 | 48.2 | 74.3 | Baichuan | 2024-10 |
| Qwen 2.5 Coder | Alibaba | 32B | - | - | 65.9 | Apache-2.0 | 2024-11 |
| CodeLlama | Meta | 70B | - | - | 67.8 | Llama 2 | 2023-08 |
| Arctic | Snowflake | 480B MoE | 67.3 | 36.8 | 64.5 | Apache-2.0 | 2024-04 |
| WizardLM-2 | Microsoft | 8x22B | 75.2 | 44.1 | 73.8 | Llama 2 | 2024-04 |
| Zephyr | HuggingFace | 7B | 61.4 | 32.5 | 55.2 | MIT | 2023-10 |
| TinyLlama | Community | 1.1B | 25.3 | 12.1 | 18.4 | Apache-2.0 | 2024-01 |
Full dataset: data/models/models.json
Top Proprietary Models
Click to expand — Leading closed-source models
| Model | Vendor | Arena Score | MMLU | GPQA Diamond | Context | Pricing (1M tokens) |
|---|---|---|---|---|---|---|
| Claude Opus 4.6 | Anthropic | 2002 | 91.5 | 91.5 | 200K | $15 / $75 |
| Gemini 3.1 Pro | Google | 1855 | 90.8 | 90.8 | 2M | $1.25 / $5 |
| GPT-5.4 | OpenAI | 1665 | 92.0 | 92.0 | 128K | $5 / $15 |
| Kimi K2.5 | Moonshot AI | 1447 | 87.6 | 87.6 | 128K | $0.80 / $2.40 |
| Claude 3.5 Sonnet | Anthropic | 1285 | 88.7 | 65.0 | 200K | $3 / $15 |
| Gemini 1.5 Pro | Google | 1280 | 86.5 | 59.1 | 2M | $1.25 / $5 |
| GPT-4o | OpenAI | 1248 | 88.7 | 53.6 | 128K | $2.50 / $10 |
| o3-mini | OpenAI | 1300 | 87.2 | 79.7 | 200K | $1.10 / $4.40 |
| Grok 3 | xAI | 1402 | 88.1 | 81.2 | 128K | $3 / $15 |
| Reka Core | Reka | 1185 | 82.4 | 48.5 | 128K | $3 / $15 |
Agent Frameworks
Click to expand — Tools for building autonomous AI agents
| Framework | Stars | Language | Key Features | License |
|---|---|---|---|---|
| LangGraph | 8.5K+ | Python | Stateful multi-agent workflows, cycles, persistence | MIT |
| CrewAI | 25K+ | Python | Role-based agents, task delegation, tool use | MIT |
| AutoGen | 38K+ | Python | Multi-agent conversation, code execution | CC-BY-4.0 |
| OpenAI Swarm | 18K+ | Python | Lightweight multi-agent orchestration | MIT |
| Semantic Kernel | 23K+ | C#/Python | Enterprise AI orchestration, plugins | MIT |
| Haystack | 18K+ | Python | LLM pipelines, RAG, agents | Apache-2.0 |
| Pydantic AI | 8K+ | Python | Type-safe agent framework | MIT |
| Agno | 20K+ | Python | Lightweight agent toolkit | Apache-2.0 |
| Camel | 6K+ | Python | Communicative agents, role-playing | Apache-2.0 |
| MetaGPT | 48K+ | Python | Multi-agent meta-programming | MIT |
| BabyAGI | 20K+ | Python | Task-driven autonomous agent | MIT |
| SuperAGI | 16K+ | Python | Open-source AGI framework | MIT |
| ChatDev | 26K+ | Python | Virtual software company agents | Apache-2.0 |
| Langroid | 3K+ | Python | Multi-agent LLM programming | MIT |
| Atomic Agents | 2K+ | Python | Modular agent components | MIT |
RAG Frameworks & Tools
Click to expand — Retrieval-Augmented Generation ecosystem
| Tool | Stars | Focus | Key Features | License |
|---|---|---|---|---|
| LlamaIndex | 38K+ | Python | Data connectors, indices, query engines | MIT |
| LangChain | 100K+ | Python/JS | Chains, agents, RAG pipelines | MIT |
| Haystack | 18K+ | Python | Production RAG pipelines | Apache-2.0 |
| RAGFlow | 35K+ | Python | Deep document understanding RAG | Apache-2.0 |
| Verba | 6K+ | Python | RAG chatbot with Weaviate | BSD-3 |
| Embedchain | 10K+ | Python | RAG framework for any data source | Apache-2.0 |
| PrivateGPT | 55K+ | Python | Private RAG with local LLMs | Apache-2.0 |
| Vanna | 12K+ | Python | RAG for SQL databases | MIT |
| R2R | 4K+ | Python | Production-ready RAG engine | MIT |
| Cognita | 4K+ | Python | Open-source RAG framework | Apache-2.0 |
| FlashRAG | 2K+ | Python | RAG benchmark toolkit | MIT |
| Canopy | 1K+ | Python | RAG with Pinecone | Apache-2.0 |
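All of the tools above share the same skeleton: retrieve the most relevant chunks, assemble them into a prompt, and hand that to the model. A toy sketch of that shape, using naive keyword overlap in place of a real embedding-based retriever (the documents here are made-up examples, not data from this index):

```python
def retrieve(query, docs, k=1):
    # Score documents by word overlap with the query, best first.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Stuff the top-k retrieved chunks into a grounded prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Milvus is a distributed vector database.",
    "LoRA is a parameter-efficient fine-tuning method.",
    "RAG grounds LLM answers in retrieved documents.",
]
print(build_prompt("what is a vector database", docs))
```

A production pipeline swaps the overlap score for embedding similarity and the inline list for one of the vector stores below, but the retrieve-then-prompt loop is the same.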
Fine-Tuning Tools
Click to expand — Tools for customizing and fine-tuning LLMs
| Tool | Stars | Focus | License |
|---|---|---|---|
| Unsloth | 25K+ | 2x faster fine-tuning, 80% less memory | Apache-2.0 |
| Axolotl | 8K+ | Multi-GPU fine-tuning framework | Apache-2.0 |
| LLaMA-Factory | 42K+ | Easy fine-tuning for 100+ LLMs | Apache-2.0 |
| PEFT | 17K+ | Parameter-efficient fine-tuning (LoRA, QLoRA) | Apache-2.0 |
| TRL | 11K+ | RLHF, DPO, PPO training | Apache-2.0 |
| Lit-GPT | 11K+ | Pretrain, fine-tune, deploy 20+ LLMs | Apache-2.0 |
| Ludwig | 11K+ | Declarative deep learning framework | Apache-2.0 |
| Mergekit | 5K+ | Model merging toolkit | LGPL-3.0 |
| Torchtune | 5K+ | PyTorch-native fine-tuning | BSD-3 |
| Liger Kernel | 4K+ | Efficient Triton kernels for LLM training | BSD-2 |
Inference Optimization
Click to expand — Tools for fast, efficient LLM inference
| Tool | Stars | Focus | License |
|---|---|---|---|
| vLLM | 42K+ | High-throughput LLM serving with PagedAttention | Apache-2.0 |
| llama.cpp | 75K+ | CPU/GPU inference in C/C++ | MIT |
| Ollama | 110K+ | Run LLMs locally with one command | MIT |
| TensorRT-LLM | 10K+ | NVIDIA-optimized inference | Apache-2.0 |
| SGLang | 8K+ | Structured generation language for LLMs | Apache-2.0 |
| ExLlamaV2 | 4K+ | Fast GPTQ/EXL2 inference | MIT |
| MLC LLM | 20K+ | Universal LLM deployment on any device | Apache-2.0 |
| Text Generation Inference | 10K+ | Production LLM serving by HuggingFace | HFOIL-1.0 |
| LMDeploy | 5K+ | Efficient LLM deployment toolkit | Apache-2.0 |
| DeepSpeed-MII | 2K+ | Low-latency model inference | Apache-2.0 |
| PowerInfer | 8K+ | Fast LLM serving on consumer GPUs | Apache-2.0 |
| GGML | 11K+ | Tensor library for ML | MIT |
Vector Databases
Click to expand — Databases optimized for embedding storage and similarity search
| Database | Stars | Type | Key Features | License |
|---|---|---|---|---|
| Milvus | 32K+ | Distributed | GPU-accelerated, hybrid search | Apache-2.0 |
| Qdrant | 22K+ | Cloud-native | Rust-based, filtering, payload | Apache-2.0 |
| Weaviate | 12K+ | Cloud-native | GraphQL API, modules | BSD-3 |
| ChromaDB | 16K+ | Embedded | Simple API, Python-first | Apache-2.0 |
| Pinecone | SaaS | Managed | Serverless, hybrid search | Proprietary |
| pgvector | 13K+ | Extension | PostgreSQL vector search | PostgreSQL |
| LanceDB | 5K+ | Embedded | Serverless, multimodal | Apache-2.0 |
| Vespa | 6K+ | Distributed | Real-time serving, ranking | Apache-2.0 |
| Marqo | 5K+ | Cloud-native | Tensor search, multimodal | Apache-2.0 |
| FAISS | 32K+ | Library | GPU-optimized similarity search | MIT |
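Every engine in this table answers the same core query: given an embedding, return the stored vectors most similar to it. A minimal brute-force sketch of that operation (cosine similarity over an in-memory list; the vectors are illustrative toy data, not real embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, index, k=2):
    # Rank stored (id, vector) pairs by similarity to the query, best first.
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [item_id for item_id, _ in scored[:k]]

# Toy index of three 3-dimensional embeddings.
index = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.9, 0.1, 0.0]),
    ("doc-c", [0.0, 1.0, 0.0]),
]
print(nearest([1.0, 0.05, 0.0], index))  # ['doc-a', 'doc-b']
```

Real vector databases replace the linear scan with approximate indexes (HNSW, IVF) so the same query stays fast at millions of vectors.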
LLM Orchestration
Click to expand — Frameworks for building LLM applications
| Tool | Stars | Focus | License |
|---|---|---|---|
| LangChain | 100K+ | Full-stack LLM application framework | MIT |
| LlamaIndex | 38K+ | Data-aware LLM applications | MIT |
| DSPy | 22K+ | Programming (not prompting) LMs | MIT |
| Guidance | 19K+ | Structured output generation | MIT |
| Instructor | 9K+ | Structured data extraction from LLMs | MIT |
| Outlines | 10K+ | Structured generation for LLMs | Apache-2.0 |
| Mastra | 10K+ | TypeScript AI framework | MIT |
| Mirascope | 2K+ | Pythonic LLM toolkit | MIT |
| LiteLLM | 16K+ | Unified API for 100+ LLM providers | MIT |
| Portkey | 6K+ | AI gateway for LLM routing | MIT |
Prompt Engineering
Click to expand — Resources and tools for effective prompting
| Resource | Type | Description |
|---|---|---|
| Prompt Engineering Guide | Guide | Comprehensive prompt engineering techniques |
| LangChain Hub | Hub | Community prompt templates |
| OpenAI Cookbook | Examples | Official prompt patterns and recipes |
| Anthropic Prompt Library | Library | Curated prompt examples for Claude |
| Chain-of-Thought Hub | Research | CoT reasoning benchmarks |
| Fabric | Tool | AI-augmented prompt patterns |
| PromptBench | Benchmark | Evaluating prompt robustness |
| DSPy | Framework | Programmatic prompt optimization |
AI Code Assistants
Click to expand — AI-powered coding tools
| Tool | Type | Model | Pricing | Key Features |
|---|---|---|---|---|
| GitHub Copilot | IDE Extension | GPT-4o/Claude | $10-39/mo | Inline completion, chat, workspace |
| Cursor | IDE | Multi-model | $20/mo | Fork of VS Code with AI-native editing |
| Windsurf | IDE | Cascade | $10/mo | Agentic IDE with Flows |
| Cline | Extension | Multi-model | Free (OSS) | Autonomous coding agent in VS Code |
| Aider | CLI | Multi-model | Free (OSS) | AI pair programming in terminal |
| Continue | Extension | Multi-model | Free (OSS) | Open-source Copilot alternative |
| Tabnine | Extension | Custom | $12/mo | Privacy-focused, on-prem option |
| Amazon Q Developer | IDE/CLI | Amazon | Free tier | AWS-integrated code assistant |
| Devin | Agent | Custom | $500/mo | Autonomous software engineer |
| OpenHands | Agent | Multi-model | Free (OSS) | Open-source Devin alternative |
| SWE-agent | Agent | Multi-model | Free (OSS) | Autonomous bug fixing |
| Bolt.new | Web | Multi-model | Freemium | Full-stack app generation |
AI Image Generation
Click to expand — Image generation models and tools
| Model/Tool | Vendor | Type | License |
|---|---|---|---|
| DALL-E 3 | OpenAI | API | Proprietary |
| Midjourney v6 | Midjourney | SaaS | Proprietary |
| Stable Diffusion 3 | Stability AI | Open | Stability Community |
| FLUX.1 | Black Forest Labs | Open | Apache-2.0 |
| Imagen 3 | Google | API | Proprietary |
| Ideogram 2.0 | Ideogram | SaaS | Proprietary |
| ComfyUI | Community | Tool | GPL-3.0 |
| Automatic1111 | Community | Tool | AGPL-3.0 |
| Fooocus | Community | Tool | GPL-3.0 |
| InvokeAI | Community | Tool | Apache-2.0 |
AI Video Generation
Click to expand — Video generation and editing models
| Model/Tool | Vendor | Type | Key Features |
|---|---|---|---|
| Sora | OpenAI | API | Text-to-video, editing |
| Veo 2 | Google | API | 4K video generation |
| Kling | Kuaishou | SaaS | Motion brush, lip sync |
| Runway Gen-3 | Runway | SaaS | Multi-modal video gen |
| Pika 2.0 | Pika | SaaS | Cinematic video gen |
| Luma Dream Machine | Luma AI | SaaS | Fast video generation |
| CogVideo | Tsinghua | Open | Open-source text-to-video |
| AnimateDiff | Community | Open | Animation from images |
| Wan | Alibaba | Open | Open-source video model |
AI Audio & Speech
Click to expand — Speech, music, and audio AI tools
| Tool | Type | Focus | License |
|---|---|---|---|
| Whisper | Model | Speech-to-text | MIT |
| Bark | Model | Text-to-speech, multilingual | MIT |
| Coqui TTS | Model | Text-to-speech | MPL-2.0 |
| Eleven Labs | SaaS | Voice cloning, TTS | Proprietary |
| Suno | SaaS | Music generation | Proprietary |
| Udio | SaaS | Music generation | Proprietary |
| MusicGen | Model | Music generation | MIT |
| Faster Whisper | Tool | Fast speech recognition | MIT |
| WhisperX | Tool | Whisper with word alignment | BSD-4 |
| Parler TTS | Model | Controllable TTS | Apache-2.0 |
| Fish Speech | Model | Multilingual TTS | CC-BY-NC-SA-4.0 |
AI Search Engines
Click to expand — AI-powered search and answer engines
| Engine | Type | Key Features |
|---|---|---|
| Perplexity | SaaS | Citation-backed AI answers, Pro Search |
| You.com | SaaS | AI search with apps and agents |
| Phind | SaaS | Developer-focused AI search |
| Tavily | API | Search API optimized for AI agents |
| Exa | API | Neural search API for embeddings |
| SearXNG | Self-hosted | Privacy-respecting metasearch |
| Kagi | SaaS | Premium ad-free search with AI |
| Brave Search | SaaS | Independent index with AI answers |
Evaluation & Benchmarks
Click to expand — LLM evaluation tools and benchmark suites
| Benchmark/Tool | Focus | Metrics | Source |
|---|---|---|---|
| MMLU | Knowledge | 57 subjects, 15K questions | Hendrycks et al. |
| GPQA Diamond | Expert reasoning | PhD-level science questions | NYU |
| HumanEval | Code generation | Pass@k on 164 problems | OpenAI |
| SWE-bench | Real software engineering | GitHub issue resolution | Princeton |
| Chatbot Arena | Human preference | Elo ratings from blind comparisons | LMSYS |
| MATH | Mathematics | 12.5K competition math problems | Hendrycks et al. |
| BigBench | Diverse tasks | 200+ language tasks | Google |
| MT-Bench | Multi-turn chat | GPT-4 judged conversations | LMSYS |
| AlpacaEval | Instruction following | Win rate vs reference model | Stanford |
| IFEval | Instruction following | Verifiable instruction adherence | Google |
| Open LLM Leaderboard | Aggregate | Multiple benchmarks combined | HuggingFace |
| LM Evaluation Harness | Framework | 200+ tasks, unified eval | EleutherAI |
| HELM | Holistic | 42 scenarios, 7 metrics | Stanford |
| Agentic Benchmarks | Agent capability | Real-world task completion | Various |
Datasets for Training
Click to expand — Key datasets for LLM pre-training and fine-tuning
| Dataset | Size | Focus | License |
|---|---|---|---|
| FineWeb | 15T tokens | Web text, deduplicated | ODC-By |
| RedPajama v2 | 30T tokens | Web crawl + curated | Apache-2.0 |
| The Stack v2 | 67.5TB | Source code, 600+ languages | Various |
| OASST2 | 91K convos | Human feedback dialogues | Apache-2.0 |
| UltraChat | 1.5M convos | Synthetic multi-turn chat | MIT |
| SlimPajama | 627B tokens | Deduplicated RedPajama | Apache-2.0 |
| Dolma | 3T tokens | Multi-source pretraining | AI2 ImpACT |
| LMSYS-Chat-1M | 1M convos | Real user LLM conversations | CC-BY-NC-4.0 |
| OpenHermes 2.5 | 1M samples | Curated instruction data | CC-BY-4.0 |
| WildChat | 1M convos | Real ChatGPT conversations | AI2 ImpACT |
AI Safety & Alignment
Click to expand — Safety research, red-teaming, and alignment tools
| Resource | Type | Focus |
|---|---|---|
| Anthropic Research | Lab | Constitutional AI, interpretability |
| ARC Evals | Evaluations | Dangerous capability assessments |
| METR | Organization | Model evaluation and threat research |
| Guardrails AI | Tool | Input/output validation for LLMs |
| NeMo Guardrails | Tool | Programmable safety rails |
| LLM Guard | Tool | Security scanning for LLM I/O |
| Garak | Tool | LLM vulnerability scanner |
| Rebuff | Tool | Prompt injection detection |
| HarmBench | Benchmark | Red-teaming evaluation framework |
| Alignment Forum | Community | AI alignment research discussion |
AI Ethics & Governance
Click to expand — Ethical AI frameworks and governance resources
| Resource | Organization | Focus |
|---|---|---|
| AI Ethics Guidelines | OECD | International AI principles |
| Responsible AI Practices | Google | Industry responsible AI framework |
| AI Fairness 360 | IBM | Bias detection and mitigation |
| Model Cards | Google | Model documentation standard |
| Datasheets for Datasets | Research | Dataset documentation framework |
| AI Incident Database | Partnership on AI | Tracking AI failures and harms |
| NIST AI RMF | NIST | US AI risk management framework |
| EU AI Act | European Union | Comprehensive AI regulation |
Compliance Frameworks
Click to expand — Regulatory and compliance frameworks for AI
| Framework | Jurisdiction | Status | Focus |
|---|---|---|---|
| EU AI Act | European Union | Enforced (2025+) | Risk-based AI regulation |
| NIST AI RMF | United States | Published | AI risk management |
| ISO 42001 | International | Published | AI management systems |
| ISO 23894 | International | Published | AI risk management |
| NTIA SBOM | United States | Published | Software bill of materials |
| OWASP Top 10 for LLMs | International | Published | LLM security risks |
| CycloneDX ML-BOM | International | Published | ML bill of materials |
Full framework data: data/frameworks/
MLOps & Model Serving
Click to expand — Tools for deploying and monitoring ML in production
| Tool | Focus | License |
|---|---|---|
| MLflow | Experiment tracking, model registry | Apache-2.0 |
| Weights & Biases | Experiment tracking, hyperparameter sweep | Proprietary |
| DVC | Data versioning, model management | Apache-2.0 |
| BentoML | Model serving, deployment | Apache-2.0 |
| Ray Serve | Scalable model serving | Apache-2.0 |
| Triton Inference Server | High-performance model serving | BSD-3 |
| Seldon Core | Kubernetes ML deployment | Apache-2.0 |
| Evidently AI | ML monitoring, data drift | Apache-2.0 |
| Great Expectations | Data quality validation | Apache-2.0 |
| Prefect | ML pipeline orchestration | Apache-2.0 |
Cloud AI Platforms
Click to expand — Managed AI/ML cloud services
| Platform | Provider | Key Services | Model Access |
|---|---|---|---|
| AWS SageMaker | Amazon | Training, deployment, pipelines | Bedrock models |
| Google Vertex AI | Google | AutoML, training, serving | Gemini, PaLM |
| Azure AI Studio | Microsoft | Fine-tuning, prompt flow | OpenAI, Llama |
| Hugging Face Inference | HuggingFace | Serverless API, Endpoints | All HF models |
| Together AI | Together | Fine-tuning, inference | Open models |
| Fireworks AI | Fireworks | Fast inference API | Open models |
| Groq | Groq | Ultra-fast LPU inference | Open models |
| Cerebras | Cerebras | Wafer-scale chip inference | Open models |
| Replicate | Replicate | Run models via API | 100K+ models |
| Modal | Modal | Serverless GPU compute | Any model |
| Lambda Labs | Lambda | GPU cloud for ML | Any model |
Edge AI & On-Device
Click to expand — Running AI models on edge devices
| Tool/Framework | Focus | Platforms | License |
|---|---|---|---|
| llama.cpp | Local LLM inference | CPU/GPU/Metal | MIT |
| Ollama | One-command local models | Mac/Linux/Windows | MIT |
| LM Studio | Local LLM desktop app | Mac/Windows/Linux | Proprietary |
| Jan | Open-source local AI | Mac/Windows/Linux | AGPL-3.0 |
| TensorFlow Lite | Mobile/edge inference | iOS/Android/Embedded | Apache-2.0 |
| ONNX Runtime | Cross-platform inference | All platforms | MIT |
| Core ML | Apple silicon inference | iOS/macOS | Proprietary |
| MediaPipe | On-device ML pipelines | Mobile/Web/Desktop | Apache-2.0 |
| MLC LLM | Universal device deployment | iOS/Android/Web | Apache-2.0 |
| ExecuTorch | PyTorch mobile deployment | Mobile/embedded | BSD-3 |
AI Hardware
Click to expand — Chips and hardware for AI training and inference
| Hardware | Vendor | Type | FLOPS (FP16) | Use Case |
|---|---|---|---|---|
| H200 SXM | NVIDIA | GPU | 989 TFLOPS | LLM training |
| H100 SXM | NVIDIA | GPU | 989 TFLOPS | LLM training/inference |
| A100 80GB | NVIDIA | GPU | 312 TFLOPS | LLM training |
| MI300X | AMD | GPU | 1307 TFLOPS | LLM training |
| Gaudi 3 | Intel | Accelerator | 1835 TFLOPS | LLM training |
| TPU v5p | Google | TPU | 459 TFLOPS | LLM training |
| Trainium 2 | AWS | Accelerator | N/A | AWS LLM training |
| LPU | Groq | LPU | N/A | Ultra-low latency inference |
| WSE-3 | Cerebras | WSE | N/A | Single-chip neural net |
| M3 Ultra | Apple | SoC | 800 GFLOPS | Local inference |
| RTX 4090 | NVIDIA | GPU | 165.2 TFLOPS | Consumer fine-tuning |
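A quick rule of thumb when matching models to the hardware above: inference memory is roughly parameter count × bytes per parameter, plus overhead for activations and KV cache. A minimal sketch (the 20% overhead factor is an assumption for illustration, not a measured figure):

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate (GB) for inference at a given precision.

    overhead approximates activations + KV cache on top of the weights.
    """
    return params_billion * bytes_per_param * overhead

# 70B model at FP16 (2 bytes/param): ~168 GB -> needs multi-GPU
print(round(estimate_vram_gb(70, 2.0), 1))  # → 168.0
# Same model 4-bit quantized (0.5 bytes/param): ~42 GB -> fits one 80 GB card
print(round(estimate_vram_gb(70, 0.5), 1))  # → 42.0
```

This is why quantized local runtimes like llama.cpp and Ollama (see Edge AI above) can serve large models on consumer GPUs.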
Free AI Courses
Click to expand — Free and high-quality AI learning resources
| Course | Provider | Topics | Format |
|---|---|---|---|
| Fast.ai | Fast.ai | Deep learning, LLMs | Video + notebooks |
| Hugging Face NLP Course | HuggingFace | Transformers, NLP | Interactive |
| DeepLearning.AI Short Courses | DeepLearning.AI | LLMOps, agents, RAG | Video |
| Stanford CS224N | Stanford | NLP with Deep Learning | Video + slides |
| Stanford CS229 | Stanford | Machine Learning | Video + notes |
| MIT 6.S191 | MIT | Introduction to Deep Learning | Video |
| Andrej Karpathy's Zero to Hero | Karpathy | Neural networks from scratch | YouTube |
| Google ML Crash Course | Google | ML fundamentals | Interactive |
| Practical Deep Learning | Fast.ai | Applied DL for coders | Notebooks |
| Microsoft AI for Beginners | Microsoft | AI fundamentals | GitHub |
| LLM Bootcamp | Full Stack DL | Building LLM apps | Video |
| Prompt Engineering Course | DAIR.AI | Prompt engineering | Guide |
AI Research Labs
Click to expand — Leading AI research organizations
| Lab | Type | Focus Areas | Notable Work |
|---|---|---|---|
| OpenAI | Private | AGI, safety, multimodal | GPT-4, DALL-E, Sora |
| DeepMind | Google | Scientific AI, gaming | AlphaFold, Gemini |
| Anthropic | Private | AI safety, interpretability | Claude, Constitutional AI |
| Meta AI | Corporate | Open models, translation | Llama, SEAMLESS |
| Microsoft Research | Corporate | AGI, safety, applications | Phi, Orca |
| EleutherAI | Nonprofit | Open LLMs, transparency | GPT-NeoX, Pythia |
| AI2 | Nonprofit | Scientific AI, commonsense | OLMo, Dolma |
| Hugging Face | Company | Open AI ecosystem | Transformers, datasets |
| Mistral AI | Private | Efficient open models | Mistral, Mixtral |
| Cohere | Private | Enterprise NLP | Command, Embed |
| Stability AI | Private | Open generative models | Stable Diffusion |
| BigScience | Research | Open, multilingual LLMs | BLOOM |
| LAION | Nonprofit | Open datasets | LAION-5B, OpenCLIP |
AI Conferences & Events
Click to expand — Key AI conferences and community events
| Conference | Focus | Frequency | Location |
|---|---|---|---|
| NeurIPS | ML theory, applications | Annual (Dec) | Rotating |
| ICML | Machine learning | Annual (Jul) | Rotating |
| ICLR | Deep learning | Annual (May) | Rotating |
| CVPR | Computer vision | Annual (Jun) | Rotating |
| ACL/EMNLP/NAACL | NLP | Annual | Rotating |
| AAAI | AI breadth | Annual (Feb) | Rotating |
| AI Engineer Summit | LLM engineering | Annual | San Francisco |
| AI for Good | Social impact AI | Annual | Geneva |
| GTC (NVIDIA) | AI infrastructure | Annual | San Jose |
| Google I/O | Google AI | Annual (May) | Mountain View |
| Microsoft Build | Azure/OpenAI | Annual (May) | Seattle |
| AWS re:Invent | AWS AI services | Annual (Dec) | Las Vegas |
Latest Papers (Daily Updated)
Click to expand — Notable recent arXiv papers (auto-updated daily)
March 2026
| Paper | Authors | Key Contribution | arXiv |
|---|---|---|---|
| Qwen 3.5 Technical Report | Alibaba | 72B model achieving 88.4 GPQA | 2503.xxxxx |
| DeepSeek-V3 | DeepSeek | MoE scaling, 671B with 37B active | 2412.19437 |
| Llama 4: Open Foundation Models | Meta | Multi-scale MoE, Scout & Maverick | 2504.xxxxx |
| Scaling LLM Test-Time Compute | Google | Test-time scaling improves reasoning | 2408.03314 |
| Constitutional AI | Anthropic | RLHF with AI feedback | 2212.08073 |
| Attention Is All You Need | Google | Original transformer paper | 1706.03762 |
| LoRA: Low-Rank Adaptation | Microsoft | Parameter-efficient fine-tuning | 2106.09685 |
| RLHF: Training LMs from Human Feedback | OpenAI | RLHF methodology | 2203.02155 |
| Chain-of-Thought Prompting | CoT reasoning in LLMs | 2201.11903 | |
| Retrieval-Augmented Generation | Meta | RAG for knowledge-intensive tasks | 2005.11401 |
This section is auto-updated daily via GitHub Actions.
Production Tools & APIs
Click to expand — APIs and platforms for AI in production
| API/Platform | Provider | Focus | Pricing |
|---|---|---|---|
| OpenAI API | OpenAI | GPT-4, embeddings, DALL-E | Pay-per-token |
| Anthropic API | Anthropic | Claude models | Pay-per-token |
| Google AI API | Google | Gemini, embeddings | Free tier + pay-per-token |
| Cohere API | Cohere | Command, Embed, Rerank | Free tier + pay-per-token |
| Mistral API | Mistral | Mistral, Codestral | Pay-per-token |
| Hugging Face API | HuggingFace | 100K+ models | Free + Serverless |
| xAI API | xAI | Grok models | Pay-per-token |
| Together AI | Together | Open models | Pay-per-token |
| Groq API | Groq | Ultra-fast inference | Free tier |
| Replicate API | Replicate | 100K+ models | Pay-per-compute |
| Stability AI API | Stability | Image/video generation | Pay-per-gen |
Multimodal AI
Click to expand — Models and tools handling multiple modalities
| Model | Modalities | Vendor | License |
|---|---|---|---|
| GPT-4V / GPT-4o | Text, Image, Audio | OpenAI | Proprietary |
| Gemini 3.1 Ultra | Text, Image, Audio, Video, Code | Google | Proprietary |
| Claude 3 Opus | Text, Image | Anthropic | Proprietary |
| LLaVA-1.6 | Text, Image | Community | Apache-2.0 |
| Qwen-VL | Text, Image, Video | Alibaba | Apache-2.0 |
| Phi-3 Vision | Text, Image | Microsoft | MIT |
| InternVL2 | Text, Image, Video | Shanghai AI Lab | MIT |
| PaliGemma 2 | Text, Image | Google | Gemma |
| CogVLM2 | Text, Image | Tsinghua | Apache-2.0 |
| Idefics3 | Text, Image | HuggingFace | Apache-2.0 |
| Pixtral | Text, Image | Mistral | Apache-2.0 |
AI for Science
Click to expand — AI models and tools for scientific research
| Tool | Field | Description | License |
|---|---|---|---|
| AlphaFold 3 | Biology | Protein & molecular structure prediction | CC-BY-NC-SA-4.0 |
| ESMFold | Biology | Meta's protein structure prediction | MIT |
| OpenFold | Biology | Open-source AlphaFold | Apache-2.0 |
| NVIDIA BioNeMo | Biology | Drug discovery foundation models | Proprietary |
| MatterSim | Materials | Universal ML potential for materials | MIT |
| ClimaX | Climate | Foundation model for weather/climate | MIT |
| FourCastNet | Climate | Fast AI weather forecasting | BSD-3 |
| GNoME | Materials | DeepMind materials discovery | Research |
| ChemBERTa | Chemistry | SMILES-based molecular transformers | MIT |
AI for Healthcare
Click to expand — Medical and clinical AI applications
| Tool | Focus | Organization | Notes |
|---|---|---|---|
| Med-PaLM 2 | Medical QA | Google | Passes USMLE |
| BioMedGPT | Biomedical NLP | Community | Apache-2.0 |
| ClinicalBERT | Clinical notes | Research | Apache-2.0 |
| PathAI | Pathology | PathAI | Proprietary |
| Paige AI | Oncology pathology | Paige | FDA-cleared |
| Tempus | Precision oncology | Tempus | Proprietary |
| Insilico Medicine | Drug discovery | Insilico | Proprietary |
AI for Finance
Click to expand — AI tools for financial services
| Tool | Focus | Notes |
|---|---|---|
| FinBERT | Financial sentiment | Fine-tuned BERT for finance |
| BloombergGPT | Finance NLP | 50B finance-trained LLM |
| FinGPT | Finance agent | Open-source financial LLMs |
| NLP4Finance | Various | AI for finance research org |
| Numerai | Stock prediction | Tournament-based ML hedge fund |
AI for Robotics
Click to expand — Foundation models and tools for robotics
| Tool | Focus | Organization | License |
|---|---|---|---|
| RT-2 | Vision-language-action | Google DeepMind | Research |
| OpenVLA | Open vision-language-action | Stanford | MIT |
| Octo | Generalist robot policy | Berkeley | Apache-2.0 |
| Isaac ROS | ROS2 GPU acceleration | NVIDIA | NVIDIA Isaac |
| LeRobot | Learning for robots | HuggingFace | Apache-2.0 |
| Genesis | Physics simulation | Community | Apache-2.0 |
Vendor Profiles
Click to expand — AI vendor ecosystem overview (40+ vendors tracked)
| Vendor | HQ | Founded | EU AI Act Tier | Key Models | Licensing |
|---|---|---|---|---|---|
| OpenAI | San Francisco, CA | 2015 | High | GPT-4, o3, DALL-E, Sora | Proprietary |
| Anthropic | San Francisco, CA | 2021 | High | Claude 3.5/4 | Proprietary |
| Google DeepMind | London, UK | 1998/2014 | High | Gemini, Veo, AlphaFold | Proprietary/Open |
| Meta AI | Menlo Park, CA | 2004 | High | Llama 4, SeamlessM4T | Llama License |
| Microsoft | Redmond, WA | 1975 | High | Phi, Copilot (OpenAI) | Mixed |
| Mistral AI | Paris, France | 2023 | Limited | Mistral, Mixtral, Codestral | Apache-2.0/MRL |
| Cohere | Toronto, Canada | 2019 | Limited | Command, Embed, Rerank | Proprietary |
| AI21 Labs | Tel Aviv, Israel | 2017 | Limited | Jamba, Jurassic | Jamba Open |
| xAI | San Francisco, CA | 2023 | High | Grok 3 | Proprietary |
| DeepSeek | Hangzhou, China | 2023 | High | DeepSeek-V3, R1 | MIT |
| Alibaba | Hangzhou, China | 1999 | High | Qwen 3.5 | Apache-2.0 |
| Moonshot AI | Beijing, China | 2023 | Limited | Kimi K2.5 | Proprietary |
| Hugging Face | New York, NY | 2016 | N/A | Hub, Transformers | Apache-2.0 |
| Stability AI | London, UK | 2019 | High | Stable Diffusion | Various |
| Midjourney | San Francisco, CA | 2021 | High | Midjourney v6 | Proprietary |
Full vendor database: data/vendors/vendors.json
Use as an API
All data files are accessible as raw GitHub URLs. Use them as live endpoints:
```python
import requests

BASE = "https://raw.githubusercontent.com/alpha-one-index/awesome-ai-index/main/data"

# Models
models = requests.get(f"{BASE}/models/models.json").json()

# Vendors
vendors = requests.get(f"{BASE}/vendors/vendors.json").json()

# Benchmarks
benchmarks = requests.get(f"{BASE}/benchmarks/benchmarks.json").json()

# Filter open-source models with MMLU > 80
open_models = [
    m for m in models
    if m.get("license") != "Proprietary" and m.get("mmlu", 0) > 80
]
print(f"Found {len(open_models)} qualifying models")
```
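The same pattern works for the vendor file. The snippet below filters vendors by EU AI Act tier against a tiny inline sample — the `name` and `eu_ai_act_tier` field names are assumptions for illustration; check data/schemas/schema.json for the actual keys before relying on them:

```python
# Inline sample mimicking data/vendors/vendors.json entries.
# Field names are assumed for this sketch; validate against the schema.
vendors = [
    {"name": "Mistral AI", "eu_ai_act_tier": "Limited", "hq": "Paris, France"},
    {"name": "OpenAI", "eu_ai_act_tier": "High", "hq": "San Francisco, CA"},
    {"name": "Cohere", "eu_ai_act_tier": "Limited", "hq": "Toronto, Canada"},
]

limited_tier = [v["name"] for v in vendors if v["eu_ai_act_tier"] == "Limited"]
print(limited_tier)  # → ['Mistral AI', 'Cohere']
```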
Dataset Highlights
Top Models by Chatbot Arena (March 2026)
| Rank | Model | Vendor | Arena Score | GPQA Diamond | License |
|---|---|---|---|---|---|
| 1 | Claude Opus 4.6 | Anthropic | 2002 | 91.5 | Proprietary |
| 2 | Gemini 3.1 Pro | Google | 1855 | 90.8 | Proprietary |
| 3 | GPT-5.4 | OpenAI | 1665 | 92.0 | Proprietary |
| 4 | Kimi K2.5 | Moonshot AI | 1447 | 87.6 | Proprietary |
| 5 | Qwen 3.5 | Alibaba | 1443 | 88.4 | Apache-2.0 |
| 6 | Mistral Large 3 | Mistral AI | 1414 | 68.0 | MRL-0.1 |
| 7 | DeepSeek R1 | DeepSeek | 1398 | 71.5 | MIT |
| 8 | Llama 4 Scout | Meta | 1320 | 74.2 | Llama 4 |
Full dataset with 130+ models: data/models/models.json
Academic Citation
```bibtex
@dataset{awesome_ai_index_2026,
  title     = {awesome-ai-index: The Definitive Open-Source AI Ecosystem Database},
  author    = {Alpha One Index},
  year      = {2026},
  publisher = {GitHub},
  url       = {https://github.com/alpha-one-index/awesome-ai-index},
  license   = {CC-BY-SA-4.0}
}
```
See also: CITATION.cff
Schema & Methodology
- data/schemas/schema.json — Full JSON Schema for validation
- METHODOLOGY.md — Data collection and scoring methodology
- ROADMAP.md — Quarterly roadmap and milestones
Contributing
All contributions welcome! Especially:
- Vendors: Self-submit via Issue: Add Vendor
- Models: New models or updated benchmarks via Issue: Add Model
- Data corrections: Issue: Data Correction
- New sections: PRs adding new curated categories are very welcome!
Read CONTRIBUTING.md for the full guide.
Footnotes
Star History
Maintained by Alpha One Index | Data updated daily | Submit corrections via Issues | Discussions