---
title: Human Activity Recognition
emoji: πŸƒ
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.29.0
app_file: app.py
pinned: false
license: unknown
models:
- Rishi2455/Human-Activity-Recognition
datasets:
- Bingsu/Human_Action_Recognition
tags:
- image-classification
- human-activity-recognition
- mobilenetv2
- tensorflow
short_description: Classify 15 human activities from images using MobileNetV2
---
# πŸƒ Human Activity Recognition
A fine-tuned **MobileNetV2** model that classifies **15 human activities** from images.
## 🎯 Supported Activities
| # | Activity | Emoji |
|---|----------|-------|
| 1 | Calling | πŸ“ž |
| 2 | Clapping | πŸ‘ |
| 3 | Cycling | 🚴 |
| 4 | Dancing | πŸ’ƒ |
| 5 | Drinking | πŸ₯€ |
| 6 | Eating | 🍽️ |
| 7 | Fighting | πŸ₯Š |
| 8 | Hugging | πŸ€— |
| 9 | Laughing | πŸ˜‚ |
| 10 | Listening to Music | 🎧 |
| 11 | Running | πŸƒ |
| 12 | Sitting | πŸͺ‘ |
| 13 | Sleeping | 😴 |
| 14 | Texting | πŸ“± |
| 15 | Using Laptop | πŸ’» |
## πŸ”§ Technical Details
- **Architecture:** MobileNetV2 (fine-tuned)
- **Input:** 224Γ—224 RGB images
- **Dataset:** [Human Action Recognition](https://huggingface.co/datasets/Bingsu/Human_Action_Recognition) (12,600 train / 5,400 test images)
- **Model:** [Rishi2455/Human-Activity-Recognition](https://huggingface.co/Rishi2455/Human-Activity-Recognition)
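If you prefer to run the model locally instead of through this Space, the input pipeline can be sketched as below. This is a minimal example assuming the standard MobileNetV2 convention of scaling pixels to `[-1, 1]` (as `tf.keras.applications.mobilenet_v2.preprocess_input` does); check the model repo for the exact preprocessing it was trained with.

```python
import numpy as np
from PIL import Image


def preprocess(image: Image.Image) -> np.ndarray:
    """Resize to 224x224 RGB and scale pixels to [-1, 1] (MobileNetV2 convention)."""
    img = image.convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32)
    x = x / 127.5 - 1.0  # maps [0, 255] -> [-1, 1]
    return x[np.newaxis, ...]  # add batch dimension -> (1, 224, 224, 3)


# Demo with a synthetic image; in practice, open your own file with Image.open(...)
batch = preprocess(Image.new("RGB", (640, 480), color=(128, 64, 255)))
print(batch.shape)  # (1, 224, 224, 3)
```

The resulting `batch` array can then be fed to the downloaded Keras model's `predict` method.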
## πŸš€ API Usage
This Space exposes a REST API. Click the **"Use via API"** button at the bottom of the page to see the auto-generated client code.
```python
from gradio_client import Client, handle_file

client = Client("Rishi2455/Human-Activity-Recognition-Demo")
result = client.predict(
    handle_file("path/to/your/image.jpg"),  # wrap local files/URLs with handle_file
    api_name="/predict",
)
print(result)
```
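The exact shape of `result` depends on the app's output component; a classification endpoint typically returns per-class confidences. Assuming you end up with a label-to-probability mapping, a small helper like this (illustrative names only) picks the top predictions:

```python
def top_k(confidences: dict, k: int = 3):
    """Return the k (label, probability) pairs with the highest scores."""
    return sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)[:k]


# Example with made-up scores for three of the 15 activities
demo = {"running": 0.7, "cycling": 0.2, "sitting": 0.1}
print(top_k(demo, 2))  # [('running', 0.7), ('cycling', 0.2)]
```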