---
title: Human Activity Recognition
emoji: 🏃
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.29.0
app_file: app.py
pinned: false
license: unknown
models:
  - Rishi2455/Human-Activity-Recognition
datasets:
  - Bingsu/Human_Action_Recognition
tags:
  - image-classification
  - human-activity-recognition
  - mobilenetv2
  - tensorflow
short_description: Classify 15 human activities from images using MobileNetV2
---

# 🏃 Human Activity Recognition

A fine-tuned **MobileNetV2** model that classifies **15 human activities** from images.

## 🎯 Supported Activities

| # | Activity | Emoji |
|---|----------|-------|
| 1 | Calling | 📞 |
| 2 | Clapping | 👏 |
| 3 | Cycling | 🚴 |
| 4 | Dancing | 💃 |
| 5 | Drinking | 🥤 |
| 6 | Eating | 🍽️ |
| 7 | Fighting | 🥊 |
| 8 | Hugging | 🤗 |
| 9 | Laughing | 😂 |
| 10 | Listening to Music | 🎧 |
| 11 | Running | 🏃 |
| 12 | Sitting | 🪑 |
| 13 | Sleeping | 😴 |
| 14 | Texting | 📱 |
| 15 | Using Laptop | 💻 |

## 🔧 Technical Details

- **Architecture:** MobileNetV2 (fine-tuned)
- **Input:** 224×224 RGB images
- **Dataset:** [Human Action Recognition](https://huggingface.co/datasets/Bingsu/Human_Action_Recognition) (12,600 train / 5,400 test images)
- **Model:** [Rishi2455/Human-Activity-Recognition](https://huggingface.co/Rishi2455/Human-Activity-Recognition)

## 🚀 API Usage

This Space exposes a REST API. Click the **"Use via API"** button at the bottom of the page to see the auto-generated client code.

```python
from gradio_client import Client

client = Client("Rishi2455/Human-Activity-Recognition-Demo")
result = client.predict(
    "path/to/your/image.jpg",
    api_name="/predict"
)
print(result)
```
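For inference without the Space, a minimal local sketch is shown below. It assumes the model repo ships a Keras model file named `model.keras`, MobileNetV2-style pixel scaling to [-1, 1], and alphabetically ordered dataset labels — all of these are assumptions to verify against the model repo, not guarantees.

```python
# Sketch of local inference for the 224x224 MobileNetV2 classifier.
# Assumptions (verify in the model repo): weights file "model.keras",
# [-1, 1] input scaling, and this label order.
import numpy as np
from PIL import Image

LABELS = [
    "calling", "clapping", "cycling", "dancing", "drinking",
    "eating", "fighting", "hugging", "laughing", "listening_to_music",
    "running", "sitting", "sleeping", "texting", "using_laptop",
]

def preprocess(image_path: str) -> np.ndarray:
    """Resize to 224x224 RGB and scale pixels to [-1, 1] (MobileNetV2 convention)."""
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0
    return x[np.newaxis, ...]  # add batch dimension -> (1, 224, 224, 3)

def predict_local(image_path: str) -> str:
    """Download the model from the Hub and classify one image (hypothetical filename)."""
    import tensorflow as tf
    from huggingface_hub import hf_hub_download

    weights = hf_hub_download(
        repo_id="Rishi2455/Human-Activity-Recognition",
        filename="model.keras",  # assumption: check the actual filename in the repo
    )
    model = tf.keras.models.load_model(weights)
    probs = model.predict(preprocess(image_path))[0]
    return LABELS[int(np.argmax(probs))]
```

If the repo uses a different preprocessing (e.g. `[0, 1]` scaling) or a SavedModel directory instead of a single file, adjust `preprocess` and the `filename` argument accordingly.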