---
title: Human Activity Recognition
emoji: 🏃
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.29.0
app_file: app.py
pinned: false
license: unknown
models:
- Rishi2455/Human-Activity-Recognition
datasets:
- Bingsu/Human_Action_Recognition
tags:
- image-classification
- human-activity-recognition
- mobilenetv2
- tensorflow
short_description: Classify 15 human activities from images using MobileNetV2
---
# 🏃 Human Activity Recognition
A fine-tuned MobileNetV2 model that classifies 15 human activities from images.
## 🎯 Supported Activities
| # | Activity | Emoji |
|---|----------|-------|
| 1 | Calling | 📞 |
| 2 | Clapping | 👏 |
| 3 | Cycling | 🚴 |
| 4 | Dancing | 💃 |
| 5 | Drinking | 🥤 |
| 6 | Eating | 🍽️ |
| 7 | Fighting | 🥊 |
| 8 | Hugging | 🤗 |
| 9 | Laughing | 😂 |
| 10 | Listening to Music | 🎧 |
| 11 | Running | 🏃 |
| 12 | Sitting | 🪑 |
| 13 | Sleeping | 😴 |
| 14 | Texting | 📱 |
| 15 | Using Laptop | 💻 |
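Assuming the model's 15-way output vector follows the class names above in alphabetical, folder-name style order (a common convention for this dataset, but worth confirming against the model card), a prediction could be decoded with a small helper like this sketch:

```python
import numpy as np

# Class names in alphabetical order (assumption: the model's output
# indices follow this ordering; verify against the model card).
LABELS = [
    "calling", "clapping", "cycling", "dancing", "drinking",
    "eating", "fighting", "hugging", "laughing", "listening_to_music",
    "running", "sitting", "sleeping", "texting", "using_laptop",
]

def decode(probs):
    """Map a 15-way probability vector to (label, confidence)."""
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])
```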
## 🔧 Technical Details
- Architecture: MobileNetV2 (fine-tuned)
- Input: 224×224 RGB images
- Dataset: Human Action Recognition (12,600 train / 5,400 test images)
- Model: Rishi2455/Human-Activity-Recognition
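As a rough illustration of the input format, the sketch below resizes an RGB array to 224×224 and rescales pixels to [-1, 1], the standard `tf.keras` MobileNetV2 `preprocess_input` convention. It uses a dependency-free nearest-neighbor resize for the sake of the example; the actual preprocessing used by this Space lives in `app.py`.

```python
import numpy as np

def preprocess(img, size=224):
    """Resize an HxWx3 uint8 RGB array to size x size (nearest neighbor)
    and scale pixels to [-1, 1], as MobileNetV2 expects."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index per output row
    cols = np.arange(size) * w // size   # source column index per output column
    resized = img[rows][:, cols].astype(np.float32)
    x = resized / 127.5 - 1.0            # [0, 255] -> [-1, 1]
    return x[np.newaxis, ...]            # add batch dim -> (1, size, size, 3)
```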
## 🚀 API Usage
This Space exposes a REST API. Click the "Use via API" button at the bottom of the page to see the auto-generated client code.
```python
from gradio_client import Client, handle_file

client = Client("Rishi2455/Human-Activity-Recognition-Demo")
result = client.predict(
    handle_file("path/to/your/image.jpg"),  # local path or URL to an image
    api_name="/predict"
)
print(result)
```