Columns: url, targets, authors, date, inputs
https://huggingface.co/blog/community-update
Introducing Pull Requests and Discussions 🥳
No authors found
May 25, 2022
We are thrilled to announce the release of our latest collaborative features: pull requests and discussions on the Hugging Face Hub! Pull requests and discussions are available today under the community tab for all repository types: models, datasets, and Spaces. Any member of the community can create and participate in ...
https://huggingface.co/blog/red-teaming
Red-Teaming Large Language Models
Nazneen Rajani, Nathan Lambert, Lewis Tunstall
February 24, 2023
Red-Teaming Large Language Models
https://huggingface.co/blog/diffusers-coreml
Using Stable Diffusion with Core ML on Apple Silicon
Pedro Cuenca
December 1, 2022
Thanks to Apple engineers, you can now run Stable Diffusion on Apple Silicon using Core ML! This Apple repo provides conversion scripts and inference code based on 🧨 Diffusers, and we love it! To make it as easy as possible for you, we converted the weights ourselves and put the Core ML versions of the models in the Hu...
https://huggingface.co/blog/the-age-of-ml-as-code
The Age of Machine Learning As Code Has Arrived
Julien Simon
October 20, 2021
The 2021 edition of the State of AI Report came out last week. So did the Kaggle State of Machine Learning and Data Science Survey. There's much to be learned and discussed in these reports, and a couple of takeaways caught my attention."AI is increasingly being applied to mission critical infrastructure like national ...
https://huggingface.co/blog/ethics-soc-4
Ethics and Society Newsletter #4: Bias in Text-to-Image Models
Sasha Luccioni, Giada Pistilli, Nazneen Rajani, Elizabeth Allendorf, Irene Solaiman, Nathan Lambert, Margaret Mitchell
June 26, 2023
TL;DR: We need better ways of evaluating bias in text-to-image models. Introduction: Text-to-image (TTI) generation is all the rage these days, and thousands of TTI models are being uploaded to the Hugging Face Hub. Each modality is potentially susceptible to separate sources of bias, which raises the question: how do we unc...
https://huggingface.co/blog/leaderboard-patronus
Introducing the Enterprise Scenarios Leaderboard: a Leaderboard for Real World Use Cases
Selvan Sunitha Ravi, Rebecca Qian, Anand Kannappan, Clémentine Fourrier
January 31, 2024
Today, the Patronus team is excited to announce the new Enterprise Scenarios Leaderboard, built using the Hugging Face Leaderboard Template in collaboration with their teams. The leaderboard aims to evaluate the performance of language models on real-world enterprise use cases. We currently support 6 diverse tasks - Fi...
https://huggingface.co/blog/setfit
SetFit: Efficient Few-Shot Learning Without Prompts
Unso Eun Seo Jo, Lewis Tunstall, Luke Bates, Oren Pereg, Moshe Wasserblat
September 26, 2022
Few-shot learning with pretrained language models has emerged as a promising solution to every data scientist's nightmare: dealing with data that has few to no labels 😱. Together with our research partners at Intel Labs and the UKP Lab, Hugging Face is excited to introduce SetFit: an efficient framework for few-shot fi...
https://huggingface.co/blog/hardware-partners-program
Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale
Morgan Funtowicz, Ella Charlaix, Michael Benayoun, Jeff Boudier
September 14, 2021
This post is the first step of a journey for Hugging Face to democratize state-of-the-art Machine Learning production performance. To get there, we will work hand in hand with our Hardware Partners, as we have with Intel below. Join us in this journey, and follow Optimum, our new open source library! Why 🤗 Optimum? 🤯 Scali...
https://huggingface.co/blog/autotrain-image-classification
Image Classification with AutoTrain
Nima Boscarino
September 28, 2022
So you’ve heard all about the cool things that are happening in the machine learning world, and you want to join in. There’s just one problem – you don’t know how to code! 😱 Or maybe you’re a seasoned software engineer who wants to add some ML to your side-project, but you don’t have the time to pick up a whole new te...
https://huggingface.co/blog/stable-diffusion-finetuning-intel
Fine-tuning Stable Diffusion Models on Intel CPUs
Julien Simon
July 14, 2023
Diffusion models helped popularize generative AI thanks to their uncanny ability to generate photorealistic images from text prompts. These models have now found their way into enterprise use cases like synthetic data generation or content creation. The Hugging Face hub includes over 5,000 pre-trained text-to-image mod...
https://huggingface.co/blog/intel-starcoder-quantization
Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding
Ofir Zafrir, Ella Charlaix, Igor Margulis, Jonathan Mamou, Guy Boudoukh, Oren Pereg, Moshe Wasserblat, Haihao Shen, Ahmad Yasin, FanZhao
January 30, 2024
Introduction: Recently, code generation models have become very popular, especially with the release of state-of-the-art open-source models such as BigCode’s StarCoder and Meta AI’s Code Llama. A growing number of works focus on making Large Language Models (LLMs) more optimized and accessible. In this blog, we are hap...
https://huggingface.co/blog/using-ml-for-disasters
Using Machine Learning to Aid Survivors and Race through Time
Merve Noyan, Alara Dirik
March 3, 2023
On February 6, 2023, earthquakes measuring 7.7 and 7.6 hit South Eastern Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injured as of February 21. A few hours after the earthquake, a group of programmers started a Discord server to roll out an application called afetharita, literally me...
https://huggingface.co/blog/how-to-train
How to train a new language model from scratch using Transformers and Tokenizers
Julien Chaumond
February 14, 2020
Over the past few months, we made several improvements to our transformers and tokenizers libraries, with the goal of making it easier than ever to train a new language model from scratch. In this post we’ll demo how to train a “small” model (84M parameters: 6 layers, 768 hidden size, 12 attention heads) – that’s the ...
https://huggingface.co/blog/getting-started-with-embeddings
Getting Started With Embeddings
Omar Espejel
June 23, 2022
Check out this tutorial with the Notebook Companion: Understanding embeddings. An embedding is a numerical representation of a piece of information, for example, text, documents, images, audio, etc. The representation captures the semantic meaning of what is being embedded, making it robust for many industry applications....
https://huggingface.co/blog/inference-update
An Overview of Inference Solutions on Hugging Face
Julien Simon
November 21, 2022
Every day, developers and organizations are adopting models hosted on Hugging Face to turn ideas into proof-of-concept demos, and demos into production-grade applications. For instance, Transformer models have become a popular architecture for a wide range of machine learning (ML) applications, including natural langua...
https://huggingface.co/blog/graphcore-update
Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers
Sally Doherty
May 26, 2022
Graphcore and Hugging Face have significantly expanded the range of Machine Learning modalities and tasks available in Hugging Face Optimum, an open-source library for Transformers performance optimization. Developers now have convenient access to a wide range of off-the-shelf Hugging Face Transformer models, optimised...
https://huggingface.co/blog/leaderboard-hebrew
Introducing the Open Leaderboard for Hebrew LLMs!
Shaltiel Shmidman, Tal Geva, Omer Koren, Clémentine Fourrier
May 5, 2024
This project addresses the critical need for advancement in Hebrew NLP. As Hebrew is considered a low-resource language, existing LLM leaderboards often lack benchmarks that accurately reflect its unique characteristics. Today, we are excited to introduce a pioneering effort to change this narrative — our new open LLM ...
https://huggingface.co/blog/tf-serving-vision
Deploying TensorFlow Vision Models in Hugging Face with TF Serving
Sayak Paul
July 25, 2022
In the past few months, the Hugging Face team and external contributors added a variety of vision models in TensorFlow to Transformers. This list is growing and already includes state-of-the-art pre-trained models like Vision Transformer, Masked Autoencoders, RegNet, ConvNeXt, and many others! When it comes to ...
https://huggingface.co/blog/ethical-charter-multimodal
Putting ethical principles at the core of the research lifecycle
Lucile Saulnier, Siddharth Karamcheti, Hugo Laurençon, Leo Tronchon, Thomas Wang, Victor Sanh, Amanpreet Singh, Giada Pistilli, Sasha Luccioni, Yacine Jernite, Margaret Mitchell, Douwe Kiela
May 19, 2022
Ethical charter - Multimodal project. Purpose of the ethical charter: It has been well documented that machine learning research and applications can potentially lead to "data privacy issues, algorithmic biases, automation risks and malicious uses" (NeurIPS 2021 ethics guidelines). The purpose of this short document is...
https://huggingface.co/blog/leaderboard-haizelab
Introducing the Red-Teaming Resistance Leaderboard
Steve Li, Richard, Leonard Tang, Clémentine Fourrier
February 23, 2024
Content warning: since this blog post is about a red-teaming leaderboard (testing elicitation of harmful behavior in LLMs), some users might find the content of the related datasets or examples unsettling. LLM research is moving fast. Indeed, some might say too fast. While researchers in the field continue to rapidly exp...
https://huggingface.co/blog/notebooks-hub
Jupyter X Hugging Face
Daniel van Strien, Vaibhav Srivastav, Merve Noyan
March 23, 2023
We’re excited to announce improved support for Jupyter notebooks hosted on the Hugging Face Hub! From serving as an essential learning resource to being a key tool used for model development, Jupyter notebooks have become a key component across many areas of machine learning. Notebooks' interactive and visual nature let...
https://huggingface.co/blog/Llama2-for-non-engineers
Non-engineers guide: Train a LLaMA 2 chatbot
Andrew Jardine, Abhishek Thakur
September 28, 2023
Introduction: In this tutorial we will show you how anyone can build their own open-source ChatGPT without ever writing a single line of code! We’ll use the LLaMA 2 base model, fine-tune it for chat with an open-source instruction dataset, and then deploy the model to a chat app you can share with your friends. All by ju...
https://huggingface.co/blog/lora
Using LoRA for Efficient Stable Diffusion Fine-Tuning
Pedro Cuenca, Sayak Paul
January 26, 2023
LoRA: Low-Rank Adaptation of Large Language Models is a novel technique introduced by Microsoft researchers to deal with the problem of fine-tuning large-language models. Powerful models with billions of parameters, such as GPT-3, are prohibitively expensive to fine-tune in order to adapt them to particular tasks or do...
https://huggingface.co/blog/gemma
Welcome Gemma - Google’s new open LLM
Philipp Schmid, Omar Sanseviero, Pedro Cuenca
February 21, 2024
Gemma, a new family of state-of-the-art open LLMs, was released today by Google! It's great to see Google reinforcing its commitment to open-source AI, and we’re excited to fully support the launch with comprehensive integration in Hugging Face. Gemma comes in two sizes: 7B parameters, for efficient deployment and devel...
https://huggingface.co/blog/amd_pervasive_developer_ai_contest
AMD Pervasive AI Developer Contest
Guruprasad MP
February 14, 2024
AMD and Hugging Face are actively engaged in helping developers seamlessly deploy cutting-edge AI models on AMD hardware. This year, AMD takes their commitment one step further by providing developers free, hands-on access to state-of-the-art AMD hardware through their recently announced Pervasive AI Developer Contest....
https://huggingface.co/blog/mteb
MTEB: Massive Text Embedding Benchmark
Niklas Muennighoff
October 19, 2022
MTEB is a massive benchmark for measuring the performance of text embedding models on diverse embedding tasks. The 🥇 leaderboard provides a holistic view of the best text embedding models out there on a variety of tasks. The 📝 paper gives background on the tasks and datasets in MTEB and analyzes leaderboard results! Th...
https://huggingface.co/blog/classification-use-cases
How Hugging Face Accelerated Development of Witty Works Writing Assistant
Julien Simon, Violette Lepercq, Florent Gbelidji, Elena Nazarenko, Lukas Kahwe Smith
March 1, 2023
The Success Story of Witty Works with the Hugging Face Expert Acceleration Program. If you're interested in building ML solutions faster, visit the Expert Acceleration Program landing page and contact us here! Business Context: As IT continues to evolve and reshape our world, creating a more diverse and inclusive environme...
https://huggingface.co/blog/evaluation-structured-outputs
Improving Prompt Consistency with Structured Generations
Will Kurt, Remi Louf, Clémentine Fourrier
April 30, 2024
Recently, the Leaderboards and Evals research team at Hugging Face ran small experiments that highlighted how fickle evaluation can be. For a given task, results are extremely sensitive to minuscule changes in prompt format! However, this is not what we want: a model prompted with the same amount of information as in...
https://huggingface.co/blog/deploy-deepfloydif-using-bentoml
Deploying Hugging Face Models with BentoML: DeepFloyd IF in Action
Sherlock Xu, Zhao Shenyang
August 9, 2023
Hugging Face provides a Hub platform that allows you to upload, share, and deploy your models with ease. It saves developers the time and computational resources required to train models from scratch. However, deploying models in a real-world production environment or in a cloud-native way can still present challenges....
https://huggingface.co/blog/datasets-docs-update
Introducing new audio and vision documentation in 🤗 Datasets
Steven Liu
July 28, 2022
Open and reproducible datasets are essential for advancing good machine learning. At the same time, datasets have grown tremendously in size as rocket fuel for large language models. In 2020, Hugging Face launched 🤗 Datasets, a library dedicated to: providing access to standardized datasets with a single line of code. T...
https://huggingface.co/blog/policy-blog
Public Policy at Hugging Face
Irene Solaiman, Yacine Jernite, Margaret Mitchell
April 8, 2024
AI Policy at Hugging Face is a multidisciplinary and cross-organizational workstream. Instead of being part of a vertical communications or global affairs organization, our policy work is rooted in the expertise of our many researchers and developers, from Ethics and Society Regulars and the legal team to machine learn...
https://huggingface.co/blog/hf-hub-glam-guide
The Hugging Face Hub for Galleries, Libraries, Archives and Museums
Daniel van Strien
June 12, 2023
What is the Hugging Face Hub? Hugging Face aims to make high-quality machine learning accessible to everyone. This goal is pursued in various ways, including developing open-source code libraries such as the widely-used Transformers library, offering fre...
https://huggingface.co/blog/tf_tpu
Training a language model with 🤗 Transformers using TensorFlow and TPUs
Matthew Carrigan, Sayak Paul
April 27, 2023
Introduction: TPU training is a useful skill to have: TPU pods are high-performance and extremely scalable, making it easy to train models at any scale from a few tens of millions of parameters up to truly enormous sizes: Google’s PaLM model (over 500 billion parameters!) was trained entirely on TPU pods. We’ve previousl...
https://huggingface.co/blog/content-guidelines-update
Announcing our new Community Policy
Giada Pistilli
June 15, 2023
As a community-driven platform that aims to advance Open, Collaborative, and Responsible Machine Learning, we are thrilled to support and maintain a welcoming space for our entire community! In support of this goal, we've updated our Content Policy. We encourage you to familiarize yourself with the complete document to ...
https://huggingface.co/blog/panel-on-hugging-face
Panel on Hugging Face
Rudiger, Sophia Yang
June 22, 2023
We are thrilled to announce the collaboration between Panel and Hugging Face! 🎉 We have integrated a Panel template in Hugging Face Spaces to help you get started building Panel apps and deploy them on Hugging Face effortlessly. What does Panel offer? Panel is an open-source Python library that lets you easily build po...
https://huggingface.co/blog/overview-quantization-transformers
Overview of natively supported quantization schemes in 🤗 Transformers
Younes Belkada, Marc Sun, Ilyas Moutawwakil, Clémentine Fourrier, Félix Marty
September 12, 2023
We aim to give a clear overview of the pros and cons of each quantization scheme supported in transformers to help you decide which one you should go for. Currently, quantized models are used for two main purposes: running inference of a large model on a smaller device, and fine-tuning adapters on top of quantized models. So far,...
https://huggingface.co/blog/ai-residency
Announcing the 🤗 AI Research Residency Program 🎉 🎉 🎉
Douwe Kiela
March 22, 2022
The 🤗 Research Residency Program is a 9-month opportunity to launch or advance your career in machine learning research 🚀. The goal of the residency is to help you grow into an impactful AI researcher. Residents will work alongside Researchers from our Science Team. Together, you will pick a research problem and then...
https://huggingface.co/blog/graphml-classification
Graph classification with Transformers
No authors found
April 14, 2023
In the previous blog, we explored some of the theoretical aspects of machine learning on graphs. This one will explore how you can do graph classification using the Transformers library. (You can also follow along by downloading the demo notebook here!)At the moment, the only graph transformer model available in Transf...
https://huggingface.co/blog/google-cloud-model-garden
Making thousands of open LLMs bloom in the Vertex AI Model Garden
Philipp Schmid, Jeff Boudier
April 10, 2024
Today, we are thrilled to announce the launch of Deploy on Google Cloud, a new integration on the Hugging Face Hub to deploy thousands of foundation models easily to Google Cloud using Vertex AI or Google Kubernetes Engine (GKE). Deploy on Google Cloud makes it easy to deploy open models as API Endpoints within your ow...
https://huggingface.co/blog/vlms
Vision Language Models Explained
Merve Noyan, Edward Beeching
April 11, 2024
Vision language models are models that can learn simultaneously from images and texts to tackle many tasks, from visual question answering to image captioning. In this post, we go through the main building blocks of vision language models: have an overview, grasp how they work, figure out how to find the right model, h...
https://huggingface.co/blog/deep-rl-q-part2
An Introduction to Q-Learning Part 2/2
Thomas Simonini
May 20, 2022
Unit 2, part 2 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction. This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here....
https://huggingface.co/blog/watermarking
AI Watermarking 101: Tools and Techniques
Sasha Luccioni, Yacine Jernite, Derek Thomas, Emily Witko, Ezi Ozoani, Josef Fukano, Vaibhav Srivastav, Brigitte Tousignant, Margaret Mitchell
February 26, 2024
In recent months, we've seen multiple news stories involving ‘deepfakes’, or AI-generated content: from images of Taylor Swift to videos of Tom Hanks and recordings of US President Joe Biden. Whether they are selling products, manipulating images of people without their consent, supporting phishing for private informat...
https://huggingface.co/blog/zero-deepspeed-fairscale
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
Stas Bekman
January 19, 2021
A guest blog post by Hugging Face fellow Stas Bekman. As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those huge models onto their hardware. While there is an ongoing effort to distill som...
https://huggingface.co/blog/owkin-substra
Creating Privacy Preserving AI with Substra
Ali Imran, Katie Link, Nima Boscarino, Thibault Fouqueray
April 12, 2023
With the recent rise of generative techniques, machine learning is at an incredibly exciting point in its history. The models powering this rise require even more data to produce impactful results, and thus it’s becoming increasingly important to explore new methods of ethically gathering data while ensuring that data ...
https://huggingface.co/blog/inferentia-llama2
Make your llama generation time fly with AWS Inferentia2
David Corvoysier
November 7, 2023
Update (02/2024): Performance has improved even more! Check our updated benchmarks. In a previous post on the Hugging Face blog, we introduced AWS Inferentia2, the second-generation AWS Inferentia accelerator, and explained how you could use optimum-neuron to quickly deploy Hugging Face models for standard text and visi...
https://huggingface.co/blog/vision-transformers
Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
Julien Simon
August 18, 2022
Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
https://huggingface.co/blog/safecoder-vs-closed-source-code-assistants
SafeCoder vs. Closed-source Code Assistants
Julien Simon
September 11, 2023
For decades, software developers have designed methodologies, processes, and tools that help them improve code quality and increase productivity. For instance, agile, test-driven development, code reviews, and CI/CD are now staples in the software industry. In "How Google Tests Software" (Addison-Wesley, 2012), Google ...
https://huggingface.co/blog/text-to-video
Text-to-Video: The Task, Challenges and the Current State
Alara Dirik
May 8, 2023
Text-to-video is next in line in the long list of incredible advances in generative models. As self-descriptive as it is, text-to-video is a fairly new computer vision task that involves generating a sequence of images from text descriptions that are both temporally and spatially consistent. While this task might seem ...
https://huggingface.co/blog/ethics-soc-5
Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings
Margaret Mitchell
September 29, 2023
Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings
https://huggingface.co/blog/wuerstchen
Introducing Würstchen: Fast Diffusion for Image Generation
Dominic Rampas, Pablo Pernías, Kashif Rasul, Sayak Paul, Pedro Cuenca
September 13, 2023
What is Würstchen? Würstchen is a diffusion model, whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by orders of magnitude. Training on 1024×1024 images is way more expensive than ...
https://huggingface.co/blog/zero-shot-eval-on-the-hub
Very Large Language Models and How to Evaluate Them
helen, Tristan Thrush, Abhishek Thakur, Lewis Tunstall, Douwe Kiela
October 3, 2022
Large language models can now be evaluated on zero-shot classification tasks with Evaluation on the Hub! Zero-shot evaluation is a popular way for researchers to measure the performance of large language models, as they have been shown to learn capabilities during training without explicitly being shown labeled example...
https://huggingface.co/blog/jat
Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
Quentin Gallouédec, Edward Beeching, Clément ROMAC, Thomas Wolf
April 22, 2024
Introduction: We're excited to share Jack of All Trades (JAT), a project that aims to move in the direction of a generalist agent. The project started as an open reproduction of the Gato (Reed et al., 2022) work, which proposed to train a Transformer able to perform both vision-and-language and decision-making tasks. We ...
https://huggingface.co/blog/sb3
Welcome Stable-baselines3 to the Hugging Face Hub 🤗
Thomas Simonini
January 21, 2022
At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That’s why we’re happy to announce that we integrated Stable-Baselines3 into the Hugging Face Hub. Stable-Baselines3 is one of the most popular PyTorch Deep Reinforcement Learning libraries, and it makes it easy t...
https://huggingface.co/blog/arena-tts
TTS Arena: Benchmarking Text-to-Speech Models in the Wild
mrfakename, Vaibhav Srivastav, Clémentine Fourrier, Lucain Pouget, Yoach Lacombe, Main Horse, Sanchit Gandhi
February 27, 2024
Automated measurement of the quality of text-to-speech (TTS) models is very difficult. Assessing the naturalness and inflection of a voice is a trivial task for humans, but it is much more difficult for AI. This is why today, we’re thrilled to announce the TTS Arena. Inspired by LMSys's Chatbot Arena for LLMs, we devel...
https://huggingface.co/blog/nystromformer
Nyströmformer: Approximating self-attention in linear time and memory via the Nyström method
Antoine SIMOULIN
August 2, 2022
Introduction: Transformers have exhibited remarkable performance on various Natural Language Processing and Computer Vision tasks. Their success can be attributed to the self-attention mechanism, which captures the pairwise interactions between all the tokens in an input. However, the standard self-attention mechanism ha...
https://huggingface.co/blog/ai-comic-factory
Deploying the AI Comic Factory using the Inference API
Julian Bilcke
October 2, 2023
We recently announced Inference for PROs, our new offering that makes larger models accessible to a broader audience. This opportunity opens up new possibilities for running end-user applications using Hugging Face as a platform. An example of such an application is the AI Comic Factory - a Space that has proved incredi...
https://huggingface.co/blog/arena-lighthouz
Introducing the Chatbot Guardrails Arena
Sonali Pattnaik, Rohan Karan, Srijan Kumar, Clémentine Fourrier
March 21, 2024
With the recent advancements in augmented LLM capabilities, deployment of enterprise AI assistants (such as chatbots and agents) with access to internal databases is likely to increase; this trend could help with many tasks, from internal document summarization to personalized customer and employee support. However, da...
https://huggingface.co/blog/leaderboard-vectara
A guide to setting up your own Hugging Face leaderboard: an end-to-end example with Vectara's hallucination leaderboard
Ofer Mendelevitch, Bae, Clémentine Fourrier
January 12, 2024
Hugging Face’s Open LLM Leaderboard (originally created by Ed Beeching and Lewis Tunstall, and maintained by Nathan Habib and Clémentine Fourrier) is well known for tracking the performance of open source LLMs, comparing their performance in a variety of tasks, such as TruthfulQA or HellaSwag. This has been of tremendou...
https://huggingface.co/blog/tgi-messages-api
From OpenAI to Open LLMs with Messages API on Hugging Face
Andrew Reed, Philipp Schmid, Joffrey THOMAS, David Holtz
February 8, 2024
We are excited to introduce the Messages API to provide OpenAI compatibility with Text Generation Inference (TGI) and Inference Endpoints. Starting with version 1.4.0, TGI offers an API compatible with the OpenAI Chat Completion API. The new Messages API allows customers and users to transition seamlessly from OpenAI mo...
https://huggingface.co/blog/gradio-reload
AI Apps in a Flash with Gradio's Reload Mode
Freddy Boulton
April 16, 2024
In this post, I will show you how you can build a functional AI application quickly with Gradio's reload mode. But before we get to that, I want to explain what reload mode does and why Gradio implements its own auto-reloading logic. If you are already familiar with Gradio and want to get to building, please skip to th...
https://huggingface.co/blog/galore
GaLore: Advancing Large Model Training on Consumer-grade Hardware
Titus von Koeller, Jiawei Zhao, Matthew Douglas, Yaowei Zheng, Younes Belkada, Zachary Mueller, Amy Roberts, Sourab Mangrulkar, Benjamin Bossan
March 20, 2024
The integration of GaLore into the training of large language models (LLMs) marks a significant advancement in the field of deep learning, particularly in terms of memory efficiency and the democratization of AI research. By allowing for the training of billion-parameter models on consumer-grade hardware, reducing memo...
https://huggingface.co/blog/gptq-integration
Making LLMs lighter with AutoGPTQ and transformers
Marc Sun, Félix Marty, 潘其威, Junjae Lee, Younes Belkada, Tom Jobbins
August 23, 2023
Large language models have demonstrated remarkable capabilities in understanding and generating human-like text, revolutionizing applications across various domains. However, the demands they place on consumer hardware for training and deployment have become increasingly challenging to meet. 🤗 Hugging Face's core miss...
https://huggingface.co/blog/playlist-generator
Building a Playlist Generator with Sentence Transformers
Nima Boscarino
July 13, 2022
A short while ago I published a playlist generator that I’d built using Sentence Transformers and Gradio, and I followed that up with a reflection on how I try to use my projects as effective learning experiences. But how did I actually build the playlist generator? In this post we’ll break down that project and look a...
https://huggingface.co/blog/bridgetower
Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2
Régis Pierrard, Anahita Bhiwandiwalla
June 29, 2023
Update (29/08/2023): A benchmark on H100 was added to this blog post. Also, all performance numbers have been updated with newer versions of software. Optimum Habana v1.7 on Habana Gaudi2 achieves 2.5x speedups compared to A100 and 1.4x compared to H100 when fine-tuning BridgeTower, a state-of-the-art vision-language mo...
https://huggingface.co/blog/policy-ntia-rfc
AI Policy @🤗: Response to the U.S. National Telecommunications and Information Administration’s (NTIA) Request for Comment on AI Accountability
Yacine Jernite, Margaret Mitchell, Irene Solaiman
June 20, 2023
AI Policy @🤗: Response to the U.S. NTIA's Request for Comment on AI Accountability
https://huggingface.co/blog/leaderboard-decodingtrust
An Introduction to AI Secure LLM Safety Leaderboard
Chenhui Zhang, Chulin Xie, Mintong Kang, Chejian Xu, Bo Li
January 26, 2024
Given the widespread adoption of LLMs, it is critical to understand their safety and risks in different scenarios before extensive deployments in the real world. In particular, the US White House has published an executive order on safe, secure, and trustworthy AI; the EU AI Act has emphasized the mandatory requirements...
https://huggingface.co/blog/interns-2023
We are hiring interns!
Lysandre, Douwe Kiela
November 29, 2022
Want to help build the future at -- if we may say so ourselves -- one of the coolest places in AI? Today we’re announcing our internship program for 2023. Together with your Hugging Face mentor(s), we’ll be working on cutting edge problems in AI and machine learning.Applicants from all backgrounds are welcome! Ideally,...
https://huggingface.co/blog/huggy-lingo
Huggy Lingo: Using Machine Learning to Improve Language Metadata on the Hugging Face Hub
Daniel van Strien
August 2, 2023
tl;dr: We're using machine learning to detect the language of Hub datasets with no language metadata, and librarian-bots to make pull requests to add this metadata. The Hugging Face Hub has become the repository where the community ...
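The detection step behind an effort like this can be sketched without any libraries: character n-gram profiles are a classic basis for language identification (in practice a trained classifier such as fastText would be used; the two-language "profiles" below are made up purely for illustration).

```python
# Toy character-trigram language identifier. Real language-ID models are
# trained classifiers; the two reference sentences here are illustrative.

def trigrams(text):
    """Set of character trigrams, with whitespace padding at the edges."""
    t = f"  {text.lower()}  "
    return {t[i:i + 3] for i in range(len(t) - 2)}

# Tiny made-up "training data": one sentence per language.
profiles = {
    "en": trigrams("the quick brown fox jumps over the lazy dog"),
    "fr": trigrams("le renard brun saute par dessus le chien paresseux"),
}

def detect(text):
    """Pick the language whose profile shares the most trigrams with `text`."""
    grams = trigrams(text)
    return max(profiles, key=lambda lang: len(grams & profiles[lang]))

guess = detect("the dog sleeps")  # "en"
```

A production system would train on far more text per language and score with weighted n-gram statistics rather than raw set overlap, but the shape of the computation is the same.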
https://huggingface.co/blog/leaderboard-contextual
Introducing ConTextual: How well can your Multimodal model jointly reason over text and image in text-rich scenes?
Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, NANYUN (Violet) PENG, Clémentine Fourrier
March 5, 2024
Models are becoming quite good at understanding text on its own, but what about text in images, which gives important contextual information? For example, navigating a map, or understanding a meme? The ability to reason about the interactions between the text and visual context in images can power many real-world appli...
https://huggingface.co/blog/leaderboard-upstage
Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem
Park, Sung Kim, Clémentine Fourrier
February 20, 2024
In the fast-evolving landscape of Large Language Models (LLMs), building an “ecosystem” has never been more important. This trend is evident in several major developments like Hugging Face's democratizing NLP and Upstage building a Generative AI ecosystem.Inspired by these industry milestones, in September of 2023, at ...
https://huggingface.co/blog/snorkel-case-study
Snorkel AI x Hugging Face: unlock foundation models for enterprises
Violette Lepercq
April 6, 2023
This article is a cross-post of a piece originally published on April 6, 2023 on Snorkel's blog, by Friea Berg. As OpenAI releases GPT-4 and Google debuts Bard in beta, enterprises around the world are excited to leverage the power of foundation models. As that excitement builds, so does the realization that most com...
https://huggingface.co/blog/intel-sapphire-rapids
Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 1
Julien Simon
January 2, 2023
About a year ago, we showed you how to distribute the training of Hugging Face transformers on a cluster of third-generation Intel Xeon Scalable CPUs (aka Ice Lake). Recently, Intel has launched the fourth generation of Xeon CPUs, code-named Sapphire Rapids, with exciting new instructions that speed up operations commo...
https://huggingface.co/blog/fasttext
Welcome fastText to the Hugging Face Hub
Sheon Han, Juan Pino
June 6, 2023
fastText is a library for efficient learning of text representation and classification. Open-sourced by Meta AI in 2016, fastText integrates key ideas that have been influential in natural language processing and machine learning over the past few decades: representing sentences using bag of words and bag of n-grams, u...
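The "bag of words and bag of n-grams" representation the excerpt mentions can be sketched in a few lines. The sketch below uses the hashing trick to map features into a fixed number of buckets, as fastText does; the bucket count, n-gram range, and choice of `crc32` as the hash are illustrative assumptions, not fastText's actual internals (which learn dense embeddings per bucket).

```python
# Toy fastText-style sentence featurization: words plus character n-grams,
# hashed into a fixed-size sparse count vector. Illustrative only.
import zlib

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of a word, with <> boundary markers as in fastText."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def hashed_bag(sentence, buckets=1024):
    """Hash each word and its n-grams into a bucket and count occurrences."""
    counts = {}
    for word in sentence.lower().split():
        for feat in [word] + char_ngrams(word):
            idx = zlib.crc32(feat.encode()) % buckets  # deterministic hash
            counts[idx] = counts.get(idx, 0) + 1
    return counts

vec = hashed_bag("fastText learns subword features")
```

The boundary markers mean that, e.g., the 3-gram `<ca` is distinct from `ca` appearing mid-word, which is part of why subword features generalize to unseen words.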
https://huggingface.co/blog/ram-efficient-pytorch-fsdp
Fine-tuning Llama 2 70B using PyTorch FSDP
Sourab Mangrulkar, Sylvain Gugger, Lewis Tunstall, Philipp Schmid
September 13, 2023
Introduction In this blog post, we will look at how to fine-tune Llama 2 70B using PyTorch FSDP and related best practices. We will be leveraging Hugging Face Transformers, Accelerate and TRL. We will also learn how to use Accelerate with SLURM. Fully Sharded Data Parallelism (FSDP) is a paradigm in which the optimizer ...
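The sharding idea at the heart of FSDP can be caricatured without PyTorch: each rank owns one contiguous shard of the parameters (and the matching optimizer state), applies the update only to its shard, and the full parameter vector is reassembled by concatenating shards (the all-gather step). The flat parameter list and SGD rule below are made up for illustration.

```python
# Library-free caricature of FSDP-style sharded updates. Real FSDP shards
# tensors, overlaps communication with compute, and handles gradients too.

def shard(params, world_size):
    """Split a flat parameter list into contiguous per-rank shards."""
    size = (len(params) + world_size - 1) // world_size  # ceil division
    return [params[i * size:(i + 1) * size] for i in range(world_size)]

def local_step(shard_params, shard_grads, lr=0.5):
    """Each rank updates only the shard it owns (toy SGD, exact in binary)."""
    return [p - lr * g for p, g in zip(shard_params, shard_grads)]

def all_gather(shards):
    """Reassemble the full parameter vector from every rank's shard."""
    return [p for s in shards for p in s]

params = [1.0, 2.0, 3.0, 4.0]
grads = [0.5, 0.5, 0.5, 0.5]
world_size = 2
new_shards = [local_step(p, g) for p, g in
              zip(shard(params, world_size), shard(grads, world_size))]
updated = all_gather(new_shards)  # [0.75, 1.75, 2.75, 3.75]
```

The memory win is that no rank ever materializes the full optimizer state, which is what makes a 70B-parameter model trainable on hardware that could not hold it replicated.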
https://huggingface.co/blog/your-first-ml-project
Liftoff! How to get started with your first ML project 🚀
Nima Boscarino
June 29, 2022
People who are new to the Machine Learning world often run into two recurring stumbling blocks. The first is choosing the right library to learn, which can be daunting when there are so many to pick from. Even once you’ve settled on a library and gone through some tutorials, the next issue is coming up with your first ...
https://huggingface.co/blog/ml-for-games-4
2D Asset Generation: AI for Game Development #4
Dylan Ebert
January 26, 2023
Welcome to AI for Game Development! In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for:Art Style...
https://huggingface.co/blog/eval-on-the-hub
Announcing Evaluation on the Hub
Lewis Tunstall, Abhishek Thakur, Tristan Thrush, Sasha Luccioni, Leandro von Werra, Nazneen Rajani, Aleksandra Piktus, Omar Sanseviero, Douwe Kiela
June 28, 2022
This project has been archived. If you want to evaluate LLMs on the Hub, check out this collection of leaderboards. TL;DR: Today we introduce Evaluation on the Hub, a new tool powered by AutoTrain that lets you evaluate any model on any dataset on the Hub without writing a single line of code! Evaluate all the models 🔥...
https://huggingface.co/blog/chatbot-amd-gpu
Run a Chatgpt-like Chatbot on a Single GPU with ROCm
Andy Luo
May 15, 2023
Introduction ChatGPT, OpenAI's groundbreaking language model, has become an influential force in the realm of artificial intelligence, paving the way for a multitude of AI applications across diverse sectors. With its staggering ability to comprehend and generate human-like text, ChatGPT has transformed industries, from cu...
https://huggingface.co/blog/series-c
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
Hugging Face
May 9, 2022
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
https://huggingface.co/blog/accelerated-inference
How we sped up transformer inference 100x for 🤗 API customers
No authors found
January 18, 2021
🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in...
https://huggingface.co/blog/ml-web-games
Making ML-powered web games with Transformers.js
Joshua
July 5, 2023
In this blog post, I'll show you how I made Doodle Dash, a real-time ML-powered web game that runs completely in your browser (thanks to Transformers.js). The goal of this tutorial is to show you how easy it is to make your own ML-powered web game... just in time for the upcoming Open Source AI Game Jam (7-9 July 2023)...
https://huggingface.co/blog/spaces_3dmoljs
Visualize proteins on Hugging Face Spaces
Simon Duerr
August 24, 2022
In this post we will look at how we can visualize proteins on Hugging Face Spaces.Motivation 🤗Proteins have a huge impact on our life - from medicines to washing powder. Machine learning on proteins is a rapidly growing field to help us design new and interesting proteins. Proteins are complex 3D objects generally com...
https://huggingface.co/blog/scalable-data-inspection
Interactively explore your Huggingface dataset with one line of code
Stefan Suwelack, Alexander Druz, Dominik H, Markus Stoll
October 25, 2023
The Hugging Face datasets library not only provides access to more than 70k publicly available datasets, but also offers very convenient data preparation pipelines for custom datasets.Renumics Spotlight allows you to create interactive visualizations to identify critical clusters in your data. Because Spotlight underst...
https://huggingface.co/blog/spacy
Welcome spaCy to the Hugging Face Hub
Omar Sanseviero, Ines Montani
July 13, 2021
spaCy is a popular library for advanced Natural Language Processing used widely across industry. spaCy makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part of speech tagging and more, and lets you build powerful applications to process and analyze large volumes of ...
https://huggingface.co/blog/leaderboard-medicalllm
The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare
Aaditya Ura (looking for PhD), Pasquale Minervini, Clémentine Fourrier
April 19, 2024
Over the years, Large Language Models (LLMs) have emerged as a groundbreaking technology with immense potential to revolutionize various aspects of healthcare. These models, such as GPT-3, GPT-4, and Med-PaLM 2, have demonstrated remarkable capabilities in understanding and generating human-like text, making them valuabl...
https://huggingface.co/blog/inference-endpoints
Getting Started with Hugging Face Inference Endpoints
Julien Simon
October 14, 2022
Training machine learning models has become quite simple, especially with the rise of pre-trained models and transfer learning. OK, sometimes it's not that simple, but at least, training models will never break critical applications or make customers unhappy about your quality of service. Deploying models, however......
https://huggingface.co/blog/ml-for-games-5
Generating Stories: AI for Game Development #5
Dylan Ebert
February 7, 2023
Welcome to AI for Game Development! In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for:Art Style...
https://huggingface.co/blog/openvino
Accelerate your models with 🤗 Optimum Intel and OpenVINO
Ella Charlaix, Julien Simon
November 2, 2022
Last July, we announced that Intel and Hugging Face would collaborate on building state-of-the-art yet simple hardware acceleration tools for Transformer models. Today, we are very happy to announce that we added Intel OpenVINO to Optimum Intel. You can now easily perform inference with OpenVINO Runtime on a variety o...
https://huggingface.co/blog/hugging-face-endpoints-on-azure
Hugging Face Collaborates with Microsoft to launch Hugging Face Model Catalog on Azure
Jeff Boudier, Philipp Schmid, Julien Simon
May 24, 2023
Hugging Face Collaborates with Microsoft to launch Hugging Face Model Catalog on Azure
https://huggingface.co/blog/vq-diffusion
VQ-Diffusion
Will Berman
November 30, 2022
Vector Quantized Diffusion (VQ-Diffusion) is a conditional latent diffusion model developed by the University of Science and Technology of China and Microsoft. Unlike most commonly studied diffusion models, VQ-Diffusion's noising and denoising processes operate on a quantized latent space, i.e., the latent space is com...
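The "quantized latent space" the excerpt describes comes from vector quantization: each continuous latent vector is replaced by the index of its nearest codebook entry, so the diffusion process operates over discrete tokens. A minimal sketch of that lookup, with a made-up 2-D codebook:

```python
# Toy vector quantization: map each continuous latent vector to the index of
# its nearest codebook entry (L2 distance). Codebook values are made up.

def quantize(vec, codebook):
    """Return the index of the codebook entry closest to `vec`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(vec, codebook[i]))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
latents = [[0.9, 0.1], [0.2, 0.8]]
tokens = [quantize(v, codebook) for v in latents]  # [1, 2]
```

In a real VQ-VAE the codebook is learned jointly with the encoder and decoder; here it is fixed only to show the discretization step itself.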
https://huggingface.co/blog/starcoder
StarCoder: A State-of-the-Art LLM for Code
Leandro von Werra, Loubna Ben Allal
May 4, 2023
Introducing StarCoder StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We f...
https://huggingface.co/blog/sdxl_jax
Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e
Pedro Cuenca, Juan Acevedo, Alex Spiridonov, Pate Motter, Yavuz Yetim, Vaibhav Singh, Vijaya Singh, Patrick von Platen
October 3, 2023
Generative AI models, such as Stable Diffusion XL (SDXL), enable the creation of high-quality, realistic content with wide-ranging applications. However, harnessing the power of such models presents significant challenges and computational costs. SDXL is a large image generation model whose UNet component is about thre...
https://huggingface.co/blog/hugging-face-wiz-security-blog
Hugging Face partners with Wiz Research to Improve AI Security
Josef Fukano, Guillaume Salou, Michelle Habonneau, Adrien, Luc Georges, Nicolas Patry, Julien Chaumond
April 4, 2024
We are pleased to announce that we are partnering with Wiz with the goal of improving security across our platform and the AI/ML ecosystem at large.Wiz researchers collaborated with Hugging Face on the security of our platform and shared their findings. Wiz is a cloud security company that helps their customers build a...
https://huggingface.co/blog/optimum-nvidia
Optimum-NVIDIA on Hugging Face enables blazingly fast LLM inference in just 1 line of code
Laikh Tewari, Morgan Funtowicz
December 5, 2023
Large Language Models (LLMs) have revolutionized natural language processing and are increasingly deployed to solve complex problems at scale. Achieving optimal performance with these models is notoriously challenging due to their unique and intense computational demands. Optimized performance of LLMs is incredibly val...
https://huggingface.co/blog/aws-marketplace
Hugging Face Hub on the AWS Marketplace: Pay with your AWS Account
Philipp Schmid, Simon Brandeis, Jeff Boudier
August 10, 2023
The Hugging Face Hub has landed on the AWS Marketplace. Starting today, you can subscribe to the Hugging Face Hub through AWS Marketplace to pay for your Hugging Face usage directly with your AWS account. This new integrated billing method makes it easy to manage payment for usage of all our managed services by all mem...
https://huggingface.co/blog/data-measurements-tool
Introducing the 🤗 Data Measurements Tool: an Interactive Tool for Looking at Datasets
Sasha Luccioni, Yacine Jernite, Margaret Mitchell
November 29, 2021
tl;dr: We made a tool you can use online to build, measure, and compare datasets. Click to access the 🤗 Data Measurements Tool here. As developers of a fast-growing unified repository for Machine Learning datasets (Lhoest et al. 2021), the 🤗 Hugging Face team has been working on supporting good practices for dataset do...
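To make "measuring a dataset" concrete, here is a toy version of the kind of statistics such a tool might report: vocabulary size, average text length, and label balance. The tiny labeled dataset and the metric choices are illustrative; this is not the tool's actual API.

```python
# Toy dataset measurements: vocabulary size, mean token length, label counts.
from collections import Counter

def measure(examples):
    """Compute simple descriptive statistics over (text, label) pairs."""
    vocab = set()
    lengths = []
    labels = Counter()
    for text, label in examples:
        tokens = text.lower().split()
        vocab.update(tokens)
        lengths.append(len(tokens))
        labels[label] += 1
    return {
        "vocab_size": len(vocab),
        "avg_length": sum(lengths) / len(lengths),
        "label_counts": dict(labels),
    }

stats = measure([("the cat sat", "pos"), ("the dog ran away", "neg")])
```

Even statistics this simple surface issues like class imbalance or suspiciously short texts before any model is trained, which is the point of measuring datasets up front.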
https://huggingface.co/blog/trl-ddpo
Finetune Stable Diffusion Models with DDPO via TRL
luke meyers, Sayak Paul, Kashif Rasul, Leandro von Werra
September 29, 2023
Introduction Diffusion models (e.g., DALL-E 2, Stable Diffusion) are a class of generative models that are widely successful at generating images, most notably of the photorealistic kind. However, the images generated by these models may not always be on par with human preference or human intention. Thus arises the align...
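DDPO treats the denoising process as a policy and improves it with a policy-gradient method so that high-reward images become more likely. A library-free caricature of the reward-weighted update at the core of such methods (a single scalar "parameter", made-up samples and rewards; real DDPO operates on a diffusion model's denoising steps):

```python
# Caricature of a REINFORCE-style update, the family DDPO belongs to:
# samples scoring above the reward baseline pull the parameter toward them,
# samples scoring below push it away. All numbers here are made up.

def reinforce_step(theta, samples, rewards, lr=0.1):
    """One toy policy-gradient step on a scalar parameter."""
    baseline = sum(rewards) / len(rewards)  # variance-reducing baseline
    grad = sum((r - baseline) * (s - theta)
               for s, r in zip(samples, rewards))
    return theta + lr * grad / len(samples)

theta = 0.0
samples = [1.0, -1.0]
rewards = [1.0, 0.0]   # the sample at +1.0 scored higher
theta = reinforce_step(theta, samples, rewards)
```

The baseline subtraction is what lets the update distinguish "better than average" from "worse than average" samples rather than reinforcing everything that was rewarded at all.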
https://huggingface.co/blog/chinese-language-blog
Introducing HuggingFace blog for Chinese speakers: Fostering Collaboration with the Chinese AI community
Tiezhen WANG, Adina Yakefu, Luke Cheng
April 24, 2023
Welcome to our blog for Chinese speakers!We are delighted to introduce Hugging Face’s new blog for Chinese speakers: hf.co/blog/zh! A committed group of volunteers has made this possible by translating our invaluable resources, including blog posts and comprehensive courses on transformers, diffusion, and reinforcement...
https://huggingface.co/blog/livebook-app-deployment
Deploy Livebook notebooks as apps to Hugging Face Spaces
José Valim
June 15, 2023
The Elixir community has been making great strides towards Machine Learning and Hugging Face is playing an important role in making it possible. To showcase what you can already achieve with Elixir and Machine Learning today, we use Livebook to build a Whisper-based chat app and then deploy it to Hugging Face Spaces. A...
https://huggingface.co/blog/graphcore
Hugging Face and Graphcore partner for IPU-optimized Transformers
Sally Doherty
September 14, 2021
Graphcore and Hugging Face are two companies with a common goal – to make it easier for innovators to harness the power of machine intelligence. Hugging Face’s Hardware Partner Program will allow developers using Graphcore systems to deploy state-of-the-art Transformer models, optimised for our Intelligence Processing ...