LoViF: LatentQuery Fusion Baseline Checkpoint (DistilBERT + DINOv2)

This repository contains the official baseline checkpoint for the LoViF 2026 Efficiency Track multimodal rating prediction task.

Model Description

The model implements a LatentQuery Fusion architecture designed for high-efficiency multimodal inference. It fuses visual features from DINOv2 and textual features from DistilBERT through a cross-attention-based latent query mechanism.

  • Vision Backbone: facebook/dinov2-base (LoRA adapter)
  • Text Backbone: distilbert/distilbert-base-uncased (LoRA adapter)
  • Fusion Method: LatentQuery Fusion (LQF)
  • Total Parameters: 186.09M
  • Trainable Parameters (LoRA): 33.15M
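The core idea behind latent query fusion is that a small set of learnable query vectors cross-attends over the concatenated visual and textual token sequences, producing a fixed-size fused representation regardless of input length. The following minimal NumPy sketch illustrates this (single-head attention, random stand-in features; the actual LoViF implementation, dimensions, and head count may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_query_fusion(vis_tokens, txt_tokens, latent_queries):
    """Cross-attention from learnable latent queries over the
    concatenated visual and textual tokens (single-head, illustrative)."""
    kv = np.concatenate([vis_tokens, txt_tokens], axis=0)   # (Nv + Nt, d)
    d = latent_queries.shape[-1]
    attn = softmax(latent_queries @ kv.T / np.sqrt(d))      # (Q, Nv + Nt)
    return attn @ kv                                        # (Q, d)

rng = np.random.default_rng(0)
vis = rng.normal(size=(257, 64))   # stand-in for DINOv2 patch tokens
txt = rng.normal(size=(128, 64))   # stand-in for DistilBERT hidden states
queries = rng.normal(size=(8, 64)) # 8 learnable latent queries (hypothetical count)
fused = latent_query_fusion(vis, txt, queries)
print(fused.shape)  # (8, 64)
```

Because the output size depends only on the number of latent queries, the downstream rating head sees a constant-size input, which is what keeps inference cost low.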

Performance Metrics

On the SetA validation split:

  • PLCC (Pearson Linear Correlation Coefficient): 0.2627
  • MSE (Mean Squared Error): 0.5823
  • FLOPs per sample (k=2): 116.79G
  • FLOPs per sample (k=1): 70.58G
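PLCC measures the linear correlation between predicted and ground-truth ratings, while MSE measures their squared error. A self-contained sketch of both metrics (the values below are illustrative, not from the model):

```python
import numpy as np

def plcc(pred, target):
    """Pearson linear correlation coefficient between predictions
    and ground-truth ratings."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    pc, tc = pred - pred.mean(), target - target.mean()
    return float((pc @ tc) / (np.linalg.norm(pc) * np.linalg.norm(tc)))

def mse(pred, target):
    """Mean squared error."""
    diff = np.asarray(pred, float) - np.asarray(target, float)
    return float((diff ** 2).mean())

preds  = [3.1, 4.0, 2.2, 4.8, 3.5]  # toy example values
labels = [3.0, 4.2, 2.0, 5.0, 3.3]
print(plcc(preds, labels), mse(preds, labels))
```

PLCC ranges over [-1, 1] (higher is better), whereas lower MSE is better, so the two metrics are complementary views of prediction quality.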

Usage

To use this checkpoint, clone the LoViF repository and follow the reproduction steps in the README.

Download via CLI

huggingface-cli download nwpuluka/lovif-paper-submission-checkpoint best.ckpt --local-dir .

Loading in Python

import torch

# Load the checkpoint on CPU; map_location avoids requiring a GPU.
checkpoint = torch.load("best.ckpt", map_location="cpu")
model_state = checkpoint["model_state"]
# Load model_state into your LoViF model instance, e.g.:
# model.load_state_dict(model_state)

Reproduction

This checkpoint was trained using the baseline_lqf_distilbert_dinov2_train.yaml configuration for one round of optimization with modality dropout enabled.
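Modality dropout randomly suppresses one modality's features during training so the fusion module learns not to over-rely on either input. A minimal sketch of one common formulation (zeroing one modality with some probability; the exact LoViF scheme and hyperparameters may differ):

```python
import numpy as np

def modality_dropout(vis_feat, txt_feat, p_drop=0.3, rng=None):
    """With probability p_drop, zero out one randomly chosen modality's
    features (illustrative; the actual LoViF scheme may differ)."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            vis_feat = np.zeros_like(vis_feat)
        else:
            txt_feat = np.zeros_like(txt_feat)
    return vis_feat, txt_feat

v, t = modality_dropout(np.ones((4, 8)), np.ones((6, 8)),
                        p_drop=1.0, rng=np.random.default_rng(0))
```

With `p_drop=1.0` exactly one modality is zeroed on every call, which is handy for unit-testing the missing-modality path; at inference time dropout is disabled (`p_drop=0.0`).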

Citation

If you use this model in your research, please cite our LoViF 2026 paper.
