How to Run and Test the Watermark Removal Model

Setup and Training

  1. Install dependencies (run once):

    !pip install -U gdown ultralytics wandb scikit-learn requests
    
  2. Mount Google Drive and set working directory:

    from google.colab import drive
    drive.mount('/content/drive', force_remount=False)
    import os
    os.chdir('/content/drive/MyDrive/Colab/Watermark_remover')
    
  3. Download and prepare datasets:
    The script downloads watermark datasets from Google Drive, extracts them, and collects the images that will be watermarked.
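The download-and-extract code is not included in this card; a minimal sketch of that step might look like the following (the function names, paths, and the commented `FILE_ID` are illustrative placeholders, not the project's actual values):

```python
# Hypothetical sketch of the download-and-extract step. All names and
# paths here are placeholders, not the project's actual values.
import zipfile
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def extract_archive(archive_path: str, extract_dir: Path) -> None:
    """Unpack a downloaded zip archive into extract_dir."""
    extract_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(extract_dir)

def collect_images(root: Path) -> list[Path]:
    """Recursively gather image files to be watermarked later."""
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in IMAGE_EXTS)

# Download step (requires network access; the file ID is a placeholder):
# import gdown
# gdown.download(id="YOUR_DRIVE_FILE_ID", output="watermark_dataset.zip")
# extract_archive("watermark_dataset.zip", Path("raw_images"))
# images = collect_images(Path("raw_images"))
```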

  4. Generate watermarked images and YOLO labels:
    Watermarks are pasted onto the collected images, and a bounding-box label in YOLO format is written for each one.
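The generation script itself is not shown here; the sketch below illustrates one way to blend a watermark at a random position and emit the matching YOLO label line (`add_watermark` and its parameters are hypothetical, not the project's actual code):

```python
# Hypothetical sketch: paste a watermark onto an image and compute the
# YOLO-format label for its bounding box. The real script may differ.
import random

import numpy as np

def add_watermark(img: np.ndarray, wm: np.ndarray, alpha: float = 0.5):
    """Blend watermark `wm` onto `img` at a random position; return the
    watermarked image and a YOLO label line (class 0, normalized coords)."""
    ih, iw = img.shape[:2]
    wh, ww = wm.shape[:2]
    x = random.randint(0, iw - ww)
    y = random.randint(0, ih - wh)
    out = img.copy().astype(np.float32)
    region = out[y:y + wh, x:x + ww]
    out[y:y + wh, x:x + ww] = (1 - alpha) * region + alpha * wm
    out = out.astype(np.uint8)
    # YOLO format: class x_center y_center width height, all in [0, 1]
    label = (f"0 {(x + ww / 2) / iw:.6f} {(y + wh / 2) / ih:.6f} "
             f"{ww / iw:.6f} {wh / ih:.6f}")
    return out, label
```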

  5. Split the dataset into training and validation sets and create data.yaml for YOLOv11 training.
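A sketch of what the split and data.yaml generation could look like (the directory layout, `split_dataset`, `write_data_yaml`, and the class name are assumptions; the project's actual script may organize files differently):

```python
# Hypothetical sketch of the train/val split and data.yaml creation,
# assuming images and matching .txt labels in flat source directories.
import random
import shutil
from pathlib import Path

def split_dataset(image_dir: Path, label_dir: Path, out_root: Path,
                  val_ratio: float = 0.2) -> None:
    """Copy images and their YOLO labels into train/val subfolders."""
    images = sorted(image_dir.glob("*.*"))
    random.shuffle(images)
    n_val = int(len(images) * val_ratio)
    for split, items in (("val", images[:n_val]), ("train", images[n_val:])):
        (out_root / "images" / split).mkdir(parents=True, exist_ok=True)
        (out_root / "labels" / split).mkdir(parents=True, exist_ok=True)
        for img in items:
            shutil.copy(img, out_root / "images" / split / img.name)
            lbl = label_dir / (img.stem + ".txt")
            if lbl.exists():
                shutil.copy(lbl, out_root / "labels" / split / lbl.name)

def write_data_yaml(out_root: Path, class_names=("watermark",)) -> None:
    """Write the dataset config consumed by Ultralytics training."""
    yaml_text = (
        f"path: {out_root.resolve()}\n"
        "train: images/train\n"
        "val: images/val\n"
        f"nc: {len(class_names)}\n"
        f"names: {list(class_names)}\n"
    )
    (out_root / "data.yaml").write_text(yaml_text)
```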

  6. Train the YOLOv11 model with augmentations and tuned hyperparameters:

    from ultralytics import YOLO
    import wandb
    
    wandb.login()  # Login to Weights & Biases for experiment tracking
    
    model = YOLO("yolo11m.pt")  # Load YOLOv11m base model
    
    model.train(
        data="data.yaml",
        epochs=100,
        batch=16,
        imgsz=640,
        project="logo_detection",
        name="yolo11m_logo_run",
        exist_ok=True,
        save=True,
        save_txt=True,
        augment=True,
        hsv_h=0.015,
        hsv_s=0.7,
        fliplr=0.5,
        mixup=0.1,
        mosaic=1.0,
        scale=0.5,
        shear=0.0,
        perspective=0.0,
        translate=0.1
    )
    

Testing and Visualization

  1. Load the trained model weights:

    from ultralytics import YOLO
    model = YOLO("logo_detection/yolo11m_logo_run/weights/best.pt")
    
  2. Select test images from the validation set:

    from pathlib import Path
    import random
    
    test_folder = Path("dataset/images/val")
    test_images = list(test_folder.glob("*.*"))
    test_images = random.sample(test_images, min(10, len(test_images)))
    
  3. Run detection and watermark removal with visualization:

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    
    def visualize_detection_and_removal(model, img_path):
        results = model(str(img_path))[0]
        img = cv2.imread(str(img_path))
        img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    
        # Draw detection boxes
        img_boxes = img.copy()
        for box in results.boxes:
            xyxy = box.xyxy[0].cpu().numpy().astype(int)
            cv2.rectangle(img_boxes, (xyxy[0], xyxy[1]), (xyxy[2], xyxy[3]), (0,255,0), 2)
    
        # Create mask for inpainting
        mask = np.zeros(img.shape[:2], dtype=np.uint8)
        for box in results.boxes:
            xyxy = box.xyxy[0].cpu().numpy().astype(int)
            x1, y1, x2, y2 = xyxy
            mask[y1:y2, x1:x2] = 255
    
        # Remove watermark using inpainting
        inpainted = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
        inpainted_rgb = cv2.cvtColor(inpainted, cv2.COLOR_BGR2RGB)
    
        # Display images
        plt.figure(figsize=(15,5))
        plt.subplot(1,3,1)
        plt.title("Original Image")
        plt.imshow(img_rgb)
        plt.axis('off')
    
        plt.subplot(1,3,2)
        plt.title("Detected Logos")
        plt.imshow(cv2.cvtColor(img_boxes, cv2.COLOR_BGR2RGB))
        plt.axis('off')
    
        plt.subplot(1,3,3)
        plt.title("Watermark Removed")
        plt.imshow(inpainted_rgb)
        plt.axis('off')
        plt.show()
    
    for img_path in test_images:
        print(f"Testing image: {img_path.name}")
        visualize_detection_and_removal(model, img_path)
    

Summary

  • This repository provides a pipeline to generate watermarked images with YOLO labels, train a YOLOv11 model to detect logos/watermarks, and remove them using inpainting.
  • Training is done in Colab with Google Drive for storage.
  • Testing visualizes detection and watermark removal results on sample validation images.

