Apply for a GPU community grant: Personal project
InferencePort AI - Stable, Self‑Hosted Video Generation for the Open‑Source Community
Project Overview
InferencePort AI is an open‑source platform designed to make AI interaction and multimodal generation accessible to everyone. We provide a unified interface for running local models, hosting servers for other users to connect to, and offering free cloud‑backed generation for users who don’t have the hardware to run models themselves.
A major part of our roadmap is reliable, high‑quality video generation using modern diffusion‑based video models. Today, we rely on external services such as Pollinations and api.airforce to provide video inference. Unfortunately, both services have become increasingly unstable — frequent 429s, outages, inconsistent model behavior, and unpredictable latency. This instability directly affects our users and prevents us from offering dependable video generation inside InferencePort AI.
To solve this, we need to self‑host video models on Hugging Face using a Community GPU.
Why We Need a Hugging Face GPU
Stability for our users
InferencePort AI is committed to providing free, reliable access to text, video, audio, and image generation. External APIs are too unstable or too expensive to guarantee uptime. Self‑hosting on HF ensures predictable performance and eliminates our dependency on fragile third‑party services.
Video models require powerful GPUs
Models like CogVideoX, LTX‑Video, SVD, and MoVideo require high‑memory GPUs (A100/H100 class) to run efficiently. CPU or low‑VRAM environments are not viable for real‑time or batch video inference.
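To make the requirement concrete, here is a minimal sketch of a single generation with one of these pipelines, using the diffusers CogVideoX pipeline (the checkpoint and sampling settings are illustrative, not final deployment choices):

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# The 5B checkpoint alone is tens of GB in bf16; CPU offload trades speed
# for memory, but interactive latency still needs an A100/H100-class GPU.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="A drone shot over a coastline at sunset",
    num_frames=49,              # roughly 6 seconds at 8 fps
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]

export_to_video(frames, "output.mp4", fps=8)
```

Even with offloading enabled, a full 50‑step run can take many minutes on consumer hardware, which is why we cannot ask most users to run these models locally.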
We provide free cloud model access
InferencePort AI is built around open access. Many users cannot run these models locally, so our cloud endpoints act as a bridge. A Community GPU would allow us to keep these services free, stable, and open.
We contribute back to the ecosystem
We publish to GitHub and Hugging Face:
- All of our main app code
- All of our website code
- Nearly all of our server code (currently hosted on Hugging Face and Supabase)
A GPU grant would allow us to:
- Provide free, stable video generation to the 500+ users who rely on InferencePort AI
- Offer a publicly documented API for developers, students, and researchers (a sketch of the intended shape follows this list)
- Reduce reliance on unstable external providers
- Expand support for multimodal generation (video, audio, image, embeddings)
- Publish open‑source pipelines that help others deploy these models themselves
- Improve accessibility for users without GPUs or technical expertise
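For the API item above, the endpoint URL, route, and request fields in this sketch are hypothetical placeholders meant only to show the shape we would document, not a shipped interface:

```python
import requests

# Hypothetical URL and JSON fields -- illustrative only, not a live endpoint.
resp = requests.post(
    "https://inferenceport-video.hf.space/api/generate-video",
    json={"prompt": "A paper boat drifting down a rainy street", "num_frames": 49},
    timeout=600,  # video generation is slow; allow several minutes
)
resp.raise_for_status()
with open("clip.mp4", "wb") as f:
    f.write(resp.content)
```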
InferencePort AI is already used by creators, hobbyists, and developers who need a simple, reliable way to run AI models. Adding stable video generation dramatically expands what they can build.
Requested Resources
A single high‑memory GPU (A100/H100 or equivalent) would allow us to:
- Run modern video diffusion models at usable speeds
- Serve multiple concurrent users (see the serving sketch after this list)
- Maintain uptime without relying on unstable third‑party APIs
- Expand free access to multimodal generation tools
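As a rough sketch of how the GPU would be exposed from a Space, assuming a Gradio front end with request queueing (the pipeline choice and queue size are illustrative):

```python
import gradio as gr
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the pipeline once at startup so every request reuses the same weights.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

def generate(prompt: str) -> str:
    frames = pipe(prompt=prompt, num_frames=49, num_inference_steps=50).frames[0]
    return export_to_video(frames, "output.mp4", fps=8)

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Video(label="Generated video"),
)
# Gradio's queue holds concurrent users in order instead of returning 429s.
demo.queue(max_size=16).launch()
```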
Closing Statement
InferencePort AI exists to make advanced AI models accessible to everyone. A Hugging Face Community GPU would allow us to provide stable, self‑hosted video generation and eliminate our dependency on unreliable external APIs. With dedicated hardware, we could also raise the free‑tier rate limit well above the current 3 videos/day. We are committed to maintaining the service, documenting our work openly, and ensuring the entire community benefits from this resource.
Thank you for considering our request.