---
base_model: microsoft/resnet-18
license: mit
tags:
- image-classification
- pytorch
- computer-vision
- fall-detection
---
# Fall Detection Model (ResNet-18 Fine-tuned)

This model is a fine-tuned ResNet-18 for image classification, trained specifically to detect falls in images.
## Model Details

- **Base Model:** `microsoft/resnet-18`
- **Dataset:** `hiennguyen9874/fall-detection-dataset`
- **Task:** Binary image classification (fall/no_fall)
- **Classes:**
  - `0`: `no_fall`
  - `1`: `fall`
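The class indices above correspond to the model's `id2label` mapping. A minimal sketch of that mapping, hard-coded here for illustration (after loading the checkpoint it is read from `model.config.id2label` instead):

```python
# Label mapping as listed above; in practice this comes from
# model.config.id2label after the model is loaded.
id2label = {0: "no_fall", 1: "fall"}
label2id = {label: idx for idx, label in id2label.items()}

print(id2label[1])          # fall
print(label2id["no_fall"])  # 0
```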
## How to Use

### 1. Load the Model and Image Processor

```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image
import torch

# Run on GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

repo_id = "popkek00/fall_detection_model"  # Your model's repository ID

model = AutoModelForImageClassification.from_pretrained(repo_id).to(device)
image_processor = AutoImageProcessor.from_pretrained(repo_id)

model.eval()  # Set model to evaluation mode (disables dropout, freezes batch-norm stats)
```
41
+
### 2. Prepare an Image for Inference

```python
# Load an image (replace with your own image path or PIL Image object).
# Images can come from a URL, a local file, or a BytesIO buffer.

# Create a dummy image for demonstration
example_image = Image.new("RGB", (224, 224), color="red")

# Preprocess the image into model-ready tensors
inputs = image_processor(images=example_image, return_tensors="pt")
pixel_values = inputs["pixel_values"].to(device)
```
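The comments above mention loading from a `BytesIO` buffer (e.g. bytes received over HTTP). A self-contained sketch of that path, where the buffer is filled from a dummy image rather than a real download:

```python
from io import BytesIO
from PIL import Image

# Simulate receiving raw image bytes (e.g. from an HTTP response body)
buffer = BytesIO()
Image.new("RGB", (224, 224), color="red").save(buffer, format="JPEG")
buffer.seek(0)

# Decode the bytes back into a PIL image, ready for the image processor
example_image = Image.open(buffer).convert("RGB")
print(example_image.size)  # (224, 224)
```

The resulting `example_image` can be passed to `image_processor` exactly as in the snippet above.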
56
+
57
+ ### 3. Get Predictions
58
+
59
+ ```python
60
+ with torch.no_grad():
61
+ outputs = model(pixel_values)
62
+
63
+ logits = outputs.logits
64
+ probabilities = torch.softmax(logits, dim=1)
65
+ predicted_class_id = probabilities.argmax().item()
66
+
67
+ # Get the human-readable label from the model's config
68
+ predicted_label = model.config.id2label[predicted_class_id]
69
+ confidence = probabilities[0, predicted_class_id].item() * 100
70
+
71
+ print(f"Predicted label: {predicted_label} (Confidence: {confidence:.2f}%)")
72
+ ```
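The softmax step above turns raw logits into class probabilities. The arithmetic can be checked in plain Python with made-up logits (no model or GPU needed); the values here are hypothetical, not real model outputs:

```python
import math

# Hypothetical logits for one image: index 0 = no_fall, index 1 = fall
logits = [0.8, 2.3]

# Softmax: exp(x_i) / sum_j exp(x_j); subtracting the max keeps exp() numerically stable
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probabilities = [e / total for e in exps]

predicted_class_id = max(range(len(probabilities)), key=probabilities.__getitem__)
confidence = probabilities[predicted_class_id] * 100

print(predicted_class_id)   # 1 (i.e. "fall")
print(f"{confidence:.2f}%")
```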

---