Dataset preview (column schema and an example record):

| Column | Type |
|---|---|
| id | string (36 chars) |
| status | string (1 class) |
| inserted_at | timestamp[us] |
| updated_at | timestamp[us] |
| _server_id | string (36 chars) |
| title | string (11–142 chars) |
| authors | string (3–297 chars) |
| filename | string (5–62 chars) |
| content | string (2–64.1k chars) |
| content_class.responses | sequence (length 1) |
| content_class.responses.users | sequence (length 1) |
| content_class.responses.status | sequence (length 1) |
| content_class.suggestion | sequence (length 1–4) |
| content_class.suggestion.agent | null |
| content_class.suggestion.score | null |

Example record (content truncated): title "A Security Review of Gradio 5" by abidlabs and pngwn (`gradio-5-security.md`), with suggestion `["security", "tools", "implementation", "mlops"]` and a submitted response `["mlops", "implementation", "security", "tools"]`.
# Dataset Card for blog_posts_classified
This dataset has been created with Argilla. As shown in the sections below, it can be loaded into your Argilla server as explained in "Using this dataset with Argilla", or used directly with the `datasets` library as explained in "Using this dataset with `datasets`".
## Using this dataset with Argilla

To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.Dataset.from_hub("fdaudens/blog_posts_classified", settings="auto")
```

This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`

To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("fdaudens/blog_posts_classified")
```

This will only load the records of the dataset, but not the Argilla settings.
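Once loaded, the annotation columns are nested: `content_class.responses` holds the human answers (one list of labels per annotator), `content_class.responses.status` their review status, and `content_class.suggestion` the pre-annotated labels. A minimal sketch of resolving a record's final labels — preferring a submitted human response and falling back to the suggestion — assuming the flattened key layout shown in the preview (the sample record here is illustrative, not taken verbatim from the dataset):

```python
def final_labels(record: dict) -> list[str]:
    """Prefer the first submitted human response; fall back to the model suggestion."""
    responses = record.get("content_class.responses") or []
    statuses = record.get("content_class.responses.status") or []
    for labels, status in zip(responses, statuses):
        if status == "submitted":
            return labels
    # No submitted response: use the pre-annotated suggestion, if any.
    return record.get("content_class.suggestion") or []


record = {
    "title": "A Security Review of Gradio 5",
    "content_class.responses": [["mlops", "implementation", "security", "tools"]],
    "content_class.responses.status": ["submitted"],
    "content_class.suggestion": ["security", "tools", "implementation", "mlops"],
}
print(final_labels(record))  # -> ['mlops', 'implementation', 'security', 'tools']
```

The same function could be applied across the split with `ds["train"].map`, though the exact column nesting may differ from this sketch depending on how `datasets` flattens the Argilla export.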
## Dataset Structure

This dataset repo contains:

- Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
- The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
- A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields

The fields are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction following dataset.
| Field Name | Title | Type | Required |
|---|---|---|---|
| title | Blog Post Title | text | True |
| authors | Authors | text | True |
| filename | Source Filename | text | True |
| content | Blog Content | text | True |
### Questions

The questions are the questions that will be asked to the annotators. They can be of different types, such as `rating`, `text`, `label_selection`, `multi_label_selection`, or `ranking`.
| Question Name | Title | Type | Required | Description | Values/Labels |
|---|---|---|---|---|---|
| content_class | What topics does this blog post cover? | multi_label_selection | True | Select all topics that apply to this blog post | ['llm', 'computer_vision', 'audio', 'transformers', 'data', 'mlops', 'research', 'implementation', 'benchmarks', 'tutorial', 'community', 'security', 'optimization', 'deployment', 'tools', 'text_generation', 'text_classification', 'translation', 'image_generation', 'multi_modal', 'quantization', 'fine_tuning', 'integration', 'efficient_computing', 'robotics'] |
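Because `content_class` is a `multi_label_selection` question, each record's answer is a list of labels rather than a single class. For training a classifier on the annotations, such answers are typically encoded as multi-hot vectors over the full label set. A minimal sketch (the label list is copied from the question definition above; the encoding scheme itself is an assumption, not part of this dataset):

```python
# Label set of the `content_class` question, in the order listed in the card.
LABELS = [
    "llm", "computer_vision", "audio", "transformers", "data", "mlops",
    "research", "implementation", "benchmarks", "tutorial", "community",
    "security", "optimization", "deployment", "tools", "text_generation",
    "text_classification", "translation", "image_generation", "multi_modal",
    "quantization", "fine_tuning", "integration", "efficient_computing",
    "robotics",
]
INDEX = {label: i for i, label in enumerate(LABELS)}


def multi_hot(selected: list[str]) -> list[int]:
    """Encode a list of selected labels as a 0/1 vector over LABELS."""
    vec = [0] * len(LABELS)
    for label in selected:
        vec[INDEX[label]] = 1
    return vec


vec = multi_hot(["llm", "security", "tools"])
print(sum(vec))  # -> 3 labels set
```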
### Data Splits

The dataset contains a single split, which is `train`.
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

Pre-annotated blog posts with manual labels. Please verify and adjust the classifications as needed.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]