| url | targets | authors | date | inputs |
|---|---|---|---|---|
https://huggingface.co/blog/ethics-diffusers | Ethical guidelines for developing the Diffusers library | Giada Pistilli | March 2, 2023 | We are on a journey to make our libraries more responsible, one commit at a time! As part of the Diffusers library documentation, we are proud to announce the publication of an ethical framework. Given diffusion models' real-world applications and potential negative impacts on society, this initiative aims ... |
https://huggingface.co/blog/cloudflare-workers-ai | Bringing serverless GPU inference to Hugging Face users | Philipp Schmid, Jeff Boudier, Rita Kozlov, Nikhil Kothari | April 2, 2024 | Today, we are thrilled to announce the launch of Deploy on Cloudflare Workers AI, a new integration on the Hugging Face Hub. Deploy on Cloudflare Workers AI makes using open models as a serverless API easy, powered by state-of-the-art GPUs deployed in Cloudflare edge data centers. Starting today, we are integrating som... |
https://huggingface.co/blog/habana-gaudi-2-benchmark | Faster Training and Inference: Habana Gaudi®-2 vs Nvidia A100 80GB | Régis Pierrard | December 14, 2022 | In this article, you will learn how to use Habana® Gaudi®2 to accelerate model training and inference, and train bigger models with 🤗 Optimum Habana. Then, we present several benchmarks including BERT pre-training, Stable Diffusion inference and T5-3B fine-tuning, to assess the performance differences between first ge... |
https://huggingface.co/blog/duckdb-nsql-7b | Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B | Andrea Soria, Till Döhmen, Sen Wu, Laurel Orr | April 4, 2024 | Today, integrating AI-powered features, particularly leveraging Large Language Models (LLMs), has become increasingly prevalent across various tasks such as text generation, classification, image-to-text, image-to-image transformations, etc. Developers are increasingly recognizing these applications' potential benefits,... |
https://huggingface.co/blog/deep-rl-dqn | Deep Q-Learning with Space Invaders | Thomas Simonini | June 7, 2022 | Unit 3 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction. This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here. ... |
https://huggingface.co/blog/bloom | 🌸 Introducing The World's Largest Open Multilingual Language Model: BLOOM 🌸 | BigScience Workshop | July 12, 2022 | Introducing The World's Largest Open Multilingual Language Model: BLOOM |
https://huggingface.co/blog/leaderboard-artificial-analysis | Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face | Micah Hill-Smith, George Cameron, Clémentine Fourrier | May 3, 2024 | Building applications with LLMs requires considering more than just quality: for many use-cases, speed and price are equally or more important. For consumer applications and chat experiences, speed and responsiveness are critical to user engagement. Users expect near-instant responses, and delays can directly lead to r... |
https://huggingface.co/blog/eu-ai-act-oss | AI Policy @🤗: Open ML Considerations in the EU AI Act | Yacine Jernite | July 24, 2023 | AI Policy @🤗: Open ML Considerations in the EU AI Act |
https://huggingface.co/blog/os-llms | Open-Source Text Generation & LLM Ecosystem at Hugging Face | Merve Noyan | July 17, 2023 | [Updated on July 24, 2023: Added Llama 2.] Text generation and conversational technologies have been around for ages. Earlier challenges in working with these technologies were controlling both the coherence and diversity of the text through inference parameters and discriminative biases. More coherent outputs were less... |
https://huggingface.co/blog/introducing-doi | Introducing DOI: the Digital Object Identifier to Datasets and Models | Sasha Luccioni, Sylvestre Bcht, Christopher Akiki, Alix Leroy | October 7, 2022 | Our mission at Hugging Face is to democratize good machine learning. That includes best practices that make ML models and datasets more reproducible, better documented, and easier to use and share.To solve this challenge, we're excited to announce that you can now generate a DOI for your model or dataset directly from ... |
https://huggingface.co/blog/gradio-blocks | Gradio 3.0 is Out! | Abubakar Abid | May 16, 2022 | Machine learning demos are an increasingly vital part of releasing a model. Demos allow anyone — not just ML engineers — to try out a model in the browser, give feedback on predictions, and build trust in the model if it performs well. More than 600,000 ML demos have been built with the Gradio library since its first v... |
https://huggingface.co/blog/deep-rl-pg | Policy Gradient with PyTorch | Thomas Simonini | June 30, 2022 | Unit 5 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction. This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here. ... |
https://huggingface.co/blog/tapex | Efficient Table Pre-training without Real Data: An Introduction to TAPEX | Qian Liu | May 23, 2022 | In recent years, language model pre-training has achieved great success via leveraging large-scale textual data. By employing pre-training tasks such as masked language modeling, these models have demonstrated surprising performance on several downstream tasks. However, the dramatic gap between the pre-training task (e... |
https://huggingface.co/blog/sentence-transformers-in-the-hub | Sentence Transformers in the Hugging Face Hub | Omar Sanseviero, Nils Reimers | June 28, 2021 | Over the past few weeks, we've built collaborations with many Open Source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers. Sentence Transformers is a framework for sentence, paragraph and image embeddings. This makes it possible to derive semantically meaningful embeddin... |
https://huggingface.co/blog/mantis-case-study | Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too | Matthew Upson | February 15, 2023 | Hugging Face recently launched Inference Endpoints, which, as they put it, "solves transformers in production". Inference Endpoints is a managed service that allows you to: deploy (almost) any model on the Hugging Face Hub; to any cloud (AWS and Azure, with GCP on the way); on a range of instance types (including GPU). We’re switching s... |
https://huggingface.co/blog/ray-rag | Retrieval Augmented Generation with Huggingface Transformers and Ray | Ray Project (Anyscale) | February 10, 2021 | Huggingface Transformers recently added the Retrieval Augmented Generation (RAG) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state of the art results on knowledge-intensive tasks. In this blog post, we introduce the integration of Ray, a library ... |
https://huggingface.co/blog/diffusers-2nd-month | What's new in Diffusers? 🎨 | Omar Sanseviero | September 12, 2022 | A month and a half ago we released diffusers, a library that provides a modular toolbox for diffusion models across modalities. A couple of weeks later, we released support for Stable Diffusion, a high quality text-to-image model, with a free demo for anyone to try out. Apart from burning lots of GPUs, in the last thre... |
https://huggingface.co/blog/aws-partnership | Hugging Face and AWS partner to make AI more accessible | Jeff Boudier, Philipp Schmid, Julien Simon | February 21, 2023 | It’s time to make AI open and accessible to all. That’s the goal of this expanded long-term strategic partnership between Hugging Face and Amazon Web Services (AWS). Together, the two leaders aim to accelerate the availability of next-generation machine learning models by making them more accessible to the machine lear... |
https://huggingface.co/blog/unsloth-trl | Make LLM Fine-tuning 2x faster with Unsloth and 🤗 TRL | Daniel Han-Chen | January 10, 2024 | Pulling your hair out because LLM fine-tuning is taking forever? In this post, we introduce a lightweight tool developed by the community to make LLM fine-tuning go super fast! Before diving into Unsloth, it may be helpful to read our QLoRA blog post, or be familiar with LLM fine-tuning using the 🤗 PEFT library. Unsloth... |
https://huggingface.co/blog/community-datasets | Data is better together: Enabling communities to collectively build better datasets together using Argilla and Hugging Face Spaces | Daniel van Strien, Daniel Vila | March 4, 2024 | Recently, Argilla and Hugging Face launched Data is Better Together, an experiment to collectively build a preference dataset of prompt rankings. In a few days, we had: 350 community contributors labeling data and over 11,000 prompt ratings. See the progress dashboard for the latest stats! This resulted in the release of 10k_p... |
https://huggingface.co/blog/regions | Introducing Storage Regions on the Hub | Eliott Coyac, Remy TROMPIER, Adrien, Michelle Habonneau, Violette Lepercq, Julien Chaumond | November 3, 2023 | As part of our Enterprise Hub plan, we recently released support for Storage Regions. Regions let you decide where your org's models and datasets will be stored. This has two main benefits, which we'll briefly go over in this blog post: regulatory and legal compliance, and more generally, better digital sovereignty; perfor... |
https://huggingface.co/blog/carbon-emissions-on-the-hub | CO2 Emissions and the 🤗 Hub: Leading the Charge | Sasha Luccioni, Zachary Mueller, Nate Raw | April 22, 2022 | What are CO2 Emissions and why are they important? Climate change is one of the greatest challenges that we are facing and reducing emissions of greenhouse gases such as carbon dioxide (CO2) is an important part of tackling this problem. Training and deploying machine learning models will emit CO2 due to the energy usag... |
https://huggingface.co/blog/decision-transformers | Introducing Decision Transformers on Hugging Face 🤗 | Edward Beeching, Thomas Simonini | March 28, 2022 | At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as Stable-Baselines3. And today we are happy to announce that we integrated the Decision Transformer, an Offline Reinforcement Learning method, into the... |
https://huggingface.co/blog/model-cards | Model Cards | Ezi Ozoani, Marissa Gerchick, Margaret Mitchell | December 20, 2022 | Introduction Model cards are an important documentation framework for understanding, sharing, and improving machine learning models. When done well, a model card can serve as a boundary object, a single artefact that is accessible to people with different backgrounds and goals in understanding models - including develo... |
https://huggingface.co/blog/snowball-fight | Introducing Snowball Fight ☃️, our First ML-Agents Environment | Thomas Simonini | December 2, 2021 | We're excited to share our first custom Deep Reinforcement Learning environment: Snowball Fight 1vs1 🎉.Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs against a Deep Reinforcement Learning agent. The game is hosted on Hugging Face Spaces. 👉 You can play it online hereIn this post, we'll ... |
https://huggingface.co/blog/ambassadors | Student Ambassador Program’s call for applications is open! | Violette Lepercq | May 13, 2022 | Student Ambassador Program’s call for applications is open! |
https://huggingface.co/blog/peft | 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware | Sourab Mangrulkar, Sayak Paul | February 10, 2023 | Motivation Large Language Models (LLMs) based on the transformer architecture, like GPT, T5, and BERT have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. They have also started foraying into other domains, such as Computer Vision (CV) (VIT, Stable Diffusion, LayoutLM) and Audio (W... |
https://huggingface.co/blog/clipseg-zero-shot | Zero-shot image segmentation with CLIPSeg | Tobias Cornille, Niels Rogge | December 21, 2022 | This guide shows how you can use CLIPSeg, a zero-shot image segmentation model, using 🤗 transformers. CLIPSeg creates rough segmentation masks that can be used for robot perception, image inpainting, and many other tasks. If you need more precise segmentation masks, we’ll show how you can refine the results of CLIPSeg... |
https://huggingface.co/blog/infinity-cpu-performance | Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs | Philipp Schmid, Jeff Boudier, Morgan Funtowicz | January 13, 2022 | Inference Endpoints to easily deploy models on dedicated infrastructure managed by Hugging Face.Our open-source optimization libraries, 🤗 Optimum Intel and 🤗 Optimum ONNX Runtime, to get the highest efficiency out of training and running models for inference.Hugging Face Expert Acceleration Program, a commercial serv... |
https://huggingface.co/blog/ort-accelerating-hf-models | Accelerating over 130,000 Hugging Face models with ONNX Runtime | Sophie Schoenmeyer, Morgan Funtowicz | October 4, 2023 | What is ONNX Runtime? ONNX Runtime is a cross-platform machine learning tool that can be used to accelerate a wide variety of models, particularly those with ONNX support. Hugging Face ONNX Runtime support: there are over 130,000 ONNX-supported models on Hugging Face, an open source community that allows users to build, tr... |
https://huggingface.co/blog/inference-endpoints-llm | Deploy LLMs with Hugging Face Inference Endpoints | Philipp Schmid | July 4, 2023 | Open-source LLMs like Falcon, (Open-)LLaMA, X-Gen, StarCoder or RedPajama have come a long way in recent months and can compete with closed-source models like ChatGPT or GPT4 for certain use cases. However, deploying these models in an efficient and optimized way still presents a challenge. In this blog post, we will s... |
https://huggingface.co/blog/sc2-instruct | StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation | Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang | April 29, 2024 | Instruction tuning is an approach of fine-tuning that gives large language models (LLMs) the capability to follow natural and human-written instructions. However, for programming tasks, most models are tuned on either human-written instructions (which are very expensive) or instructions generated by huge and proprietar... |
https://huggingface.co/blog/leaderboard-nphardeval | NPHardEval Leaderboard: Unveiling the Reasoning Abilities of Large Language Models through Complexity Classes and Dynamic Updates | Lizhou Fan, Wenyue Hua, Haoyang Ling, Clémentine Fourrier | February 2, 2024 | We're happy to introduce the NPHardEval leaderboard, using NPHardEval, a cutting-edge benchmark developed by researchers from the University of Michigan and Rutgers University. NPHardEval introduces a dynamic, complexity-based framework for assessing Large Language Models' (LLMs) reasoning abilities. It poses 900 algor... |
https://huggingface.co/blog/fetch-eap-case-study | Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS | Violette Lepercq | February 23, 2023 | If you need support in using Hugging Face and AWS, please get in touch with us here - our team will contact you to discuss your requirements! Executive Summary Fetch, a consumer rewards company, developed about 15 different AI tools to help it receive, route, read, process, analyze, and store receipts uploaded by user... |
https://huggingface.co/blog/game-jam-first-edition-results | Results of the Open Source AI Game Jam | Thomas Simonini, Dylan Ebert, Omar Sanseviero | July 21, 2023 | From July 7th to July 11th, we hosted our first Open Source AI Game Jam, an exciting event that challenged game developers to create innovative games within a tight 48-hour window using AI.The primary objective was to create games that incorporate at least one Open Source AI Tool. Although proprietary AI tools were all... |
https://huggingface.co/blog/transformers-design-philosophy | Don't Repeat Yourself* | Patrick von Platen | April 5, 2022 | 🤗 Transformers Design Philosophy"Don't repeat yourself", or DRY, is a well-known principle of software development. The principle originates from "The pragmatic programmer", one of the most read books on code design.The principle's simple message makes obvious sense: Don't rewrite a logic that already exists somewhere... |
https://huggingface.co/blog/streamlit-spaces | Hosting your Models and Datasets on Hugging Face Spaces using Streamlit | Merve Noyan | October 5, 2021 | Showcase your Datasets and Models using Streamlit on Hugging Face SpacesStreamlit allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hugging Face Spaces. Building dem... |
https://huggingface.co/blog/asr-chunking | Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers | Nicolas Patry | February 1, 2022 | Wav2Vec2 is a popular pre-trained model for speech recognition. Released in September 2020 by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, e.g. G. Ng et al., 2021, Chen et al., 2021, Hsu et al., 2021 and Babu et al., 2021. On the Hugging Face Hub, Wav2Vec2's ... |
https://huggingface.co/blog/sd_distillation | Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny | Yatharth Gupta | August 1, 2023 | In recent times, the AI community has witnessed a remarkable surge in the development of larger and more performant language models, such as Falcon 40B, LLaMa-2 70B, MPT 30B, and in the imaging domain with models like SD2.1 and SDXL. These advancements have undoubtedly pushed the boundaries of what AI can a... |
https://huggingface.co/blog/gcp-partnership | Hugging Face and Google partner for open AI collaboration | Jeff Boudier, Philipp Schmid | January 25, 2024 | At Hugging Face, we want to enable all companies to build their own AI, leveraging open models and open source technologies. Our goal is to build an open platform, making it easy for data scientists, machine learning engineers and developers to access the latest models from the community, and use them within the platfo... |
https://huggingface.co/blog/accelerate-deepspeed | Accelerate Large Model Training using DeepSpeed | Sourab Mangrulkar, Sylvain Gugger | June 28, 2022 | In this post we will look at how we can leverage the Accelerate library for training large models, which enables users to leverage the ZeRO features of DeepSpeed. Motivation 🤗 Tired of Out of Memory (OOM) errors while trying to train large models? We've got you covered. Large models are very performant [1] but difficul... |
https://huggingface.co/blog/ryght-case-study | Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face | Andrew Reed, Johnny Crupi | April 16, 2024 | This is a guest blog post by the Ryght team. Who is Ryght? Ryght is building an enterprise-grade generative AI platform tailored for the healthcare and life sciences sectors. Today is their official launch of Ryght Preview, now publicly available for all. Life science companies are amassing a wealth of data from divers... |
https://huggingface.co/blog/1b-sentence-embeddings | Train a Sentence Embedding Model with 1 Billion Training Pairs | Antoine SIMOULIN | October 25, 2021 | Sentence embedding is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantics of a sentence and be highly generic. Such representations could then be used for many downstream applications such as clustering, text mining, or question answering. We developed state-of-the-... |
https://huggingface.co/blog/amused | Welcome aMUSEd: Efficient Text-to-Image Generation | Isamu Isozaki, Suraj Patil, Will Berman, Sayak Paul | January 4, 2024 | We’re excited to present an efficient non-diffusion text-to-image model named aMUSEd, so named because it’s an open reproduction of Google's MUSE. aMUSEd’s generation quality is not the best, and we’re releasing it as a research preview with a permissive license. In contrast to the commonly used latent diffusion approach... |
https://huggingface.co/blog/safecoder | Introducing SafeCoder | Jeff Boudier, Philipp Schmid | August 22, 2023 | Today we are excited to announce SafeCoder - a code assistant solution built for the enterprise.The goal of SafeCoder is to unlock software development productivity for the enterprise, with a fully compliant and self-hosted pair programmer. In marketing speak: “your own on-prem GitHub copilot”.Before we dive deeper, he... |
https://huggingface.co/blog/game-jam | Announcing the Open Source AI Game Jam 🎮 | Thomas Simonini | June 1, 2023 | Announcing the Open Source AI Game Jam 🎮 |
https://huggingface.co/blog/huggingface-and-ibm | Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders | Julien Simon | May 23, 2023 | Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders |
https://huggingface.co/blog/fellowship | Announcing the Hugging Face Fellowship Program | Merve Noyan, Omar Espejel | May 17, 2022 | The Fellowship is a network of exceptional people from different backgrounds who contribute to the Machine Learning open-source ecosystem 🚀. The goal of the program is to empower key contributors so they can scale their impact while inspiring others to contribute as well. How the Fellowship works 🙌🏻 This is H... |
https://huggingface.co/blog/us-national-ai-research-resource | AI Policy @🤗: Comments on U.S. National AI Research Resource Interim Report | Irene Solaiman | August 1, 2022 | Comments on U.S. National AI Research Resource Interim Report |
https://huggingface.co/blog/quanto-introduction | Quanto: a pytorch quantization toolkit | David Corvoysier, Younes Belkada, Marc Sun | March 18, 2024 | Quantization is a technique to reduce the computational and memory costs of evaluating Deep Learning Models by representing their weights and activations with low-precision data types like 8-bit integer (int8) instead of the usual 32-bit floating point (float32).Reducing the number of bits means the resulting model req... |
https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds | My Journey to a serverless transformers pipeline on Google Cloud | Dominici | March 18, 2021 | This article will discuss my journey to deploy the transformers sentiment-analysis pipeline on Google Cloud. We will start with a quick introduction to transformers and then move to the technical part of the implementation. Finally, we'll summarize this implementation and review what we have achieved.The GoalI wanted t... |
https://huggingface.co/blog/writer-case-study | Leveraging Hugging Face for complex generative AI use cases | Jeff Boudier, Waseem AlShikh | July 1, 2023 | Leveraging Hugging Face for complex generative AI use cases |
https://huggingface.co/blog/matryoshka | 🪆 Introduction to Matryoshka Embedding Models | Tom Aarsen, Joshua, Omar Sanseviero | February 23, 2024 | In this blogpost, we will introduce you to the concept of Matryoshka Embeddings and explain why they are useful. We will discuss how these models are theoretically trained and how you can train them using Sentence Transformers.Additionally, we will provide practical guidance on how to use Matryoshka Embedding models an... |
https://huggingface.co/blog/train-your-controlnet | Train your ControlNet with diffusers 🧨 | Apolinário from multimodal AI art, Pedro Cuenca | March 24, 2023 | Introduction: ControlNet is a neural network structure that allows fine-grained control of diffusion models by adding extra conditions. The technique debuted with the paper Adding Conditional Control to Text-to-Image Diffusion Models, and quickly took over the open-source diffusion community with the author's release of 8 differe... |