Dataset preview (first 10 rows; the `image` column and the 1,280-dimensional `attended_features` vectors are omitted from this view):

| input_tensor_file | expected_output | svd_entropy | class_token_disagreement | attention_hstate_entropy | cluster_tag | original_idx |
|---|---|---|---|---|---|---|
| cluster_00_tensor_0313.pt | tiger, Panthera tigris | 3.9481 | 0.712 | 5.5452 | Failure_Type_0 | 313 |
| cluster_00_tensor_0744.pt | macaw | 3.9433 | 0.7111 | 5.5452 | Failure_Type_0 | 744 |
| cluster_00_tensor_0040.pt | keeshond | 3.9367 | 0.7099 | 5.5452 | Failure_Type_0 | 40 |
| cluster_00_tensor_0309.pt | standard poodle | 3.9332 | 0.7093 | 5.5452 | Failure_Type_0 | 309 |
| cluster_00_tensor_0769.pt | wombat | 3.9321 | 0.7091 | 5.5452 | Failure_Type_0 | 769 |
| cluster_00_tensor_0330.pt | Rhodesian ridgeback | 3.9315 | 0.709 | 5.5452 | Failure_Type_0 | 330 |
| cluster_00_tensor_0522.pt | Irish terrier | 3.9315 | 0.709 | 5.5452 | Failure_Type_0 | 522 |
| cluster_00_tensor_0631.pt | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias | 3.931 | 0.7089 | 5.5452 | Failure_Type_0 | 631 |
| cluster_00_tensor_0824.pt | Irish water spaniel | 3.9307 | 0.7089 | 5.5452 | Failure_Type_0 | 824 |
| cluster_00_tensor_0653.pt | Appenzeller | 3.9297 | 0.7087 | 5.5452 | Failure_Type_0 | 653 |
The dataset now contains 990 images in addition to the previous 10 diverse samples.
Analysis of Blind Spots in Pixio (ViT-H/16) – A Vision Transformer
1. Model Selection
- Model: facebook/pixio-vith16
- Release Date: 17 Dec 2025
- Parameters: 631M
- Modality: Vision
- Type: Base model
The model is a Vision Transformer (ViT) with a patch size of 16 and a hidden size of 1280, using 32 layers. A distinctive feature of
Pixio is that it uses 8 class tokens instead of a single [CLS] token, each capturing different semantic patterns in the image.
This architectural choice is key to the blind‑spot analysis described below.
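As a quick sanity check on this geometry (a sketch using only the numbers stated above, not part of the original pipeline): a 256×256 crop split into 16×16 patches yields 256 patch tokens, so with the 8 class tokens prepended the token sequence has length 264.

```python
# Token layout implied by the architecture described above:
# 256x256 input, 16x16 patches, 8 class tokens prepended.
image_size = 256
patch_size = 16
num_cls_tokens = 8                            # Pixio's 8 class tokens

patches_per_side = image_size // patch_size   # 16
num_patches = patches_per_side ** 2           # 256
seq_len = num_cls_tokens + num_patches        # 264

print(seq_len)  # 264
```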
2. Loading the Model
I loaded the model with the Hugging Face transformers library. The snippet below loads the configuration, image processor, and model (the expected input pipeline). A Hugging Face token is required because the model is gated.
```python
import torch
import torchvision
from transformers import AutoImageProcessor, AutoModel, AutoConfig
from google.colab import userdata

device = 'cuda' if torch.cuda.is_available() else 'cpu'

config = AutoConfig.from_pretrained("facebook/pixio-vith16",
                                    token=userdata.get("TOKEN"))
config.output_attentions = True  # needed later for the attention-entropy metric

processor = AutoImageProcessor.from_pretrained("facebook/pixio-vith16",
                                               config=config,
                                               trust_remote_code=True,
                                               token=userdata.get("TOKEN"))
pixio_base = AutoModel.from_pretrained("facebook/pixio-vith16",
                                       config=config,
                                       token=userdata.get("TOKEN")).to(device)
pixio_base.eval()

def transform_image(ex):
    image = ex['image'].convert('RGB')
    transform = torchvision.transforms.Compose([
        torchvision.transforms.Resize(256),
        torchvision.transforms.CenterCrop(256),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                         std=[0.229, 0.224, 0.225])
    ])
    return transform(image), ex['label']
```
After loading, I processed 1000 images from the ImageNet‑1k (link) validation set and saved the results locally. For each image I computed the model’s hidden states and attentions once.
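The per-image forward pass can be sketched as follows (`forward_once` is a hypothetical helper name, not from the original pipeline; it assumes the model from section 2 with `output_attentions=True` set on the config):

```python
import torch

@torch.no_grad()
def forward_once(model, pixel_values, device='cpu'):
    """Run one forward pass and return the two fields the analysis needs.

    `pixel_values` is the [3, 256, 256] tensor produced by transform_image.
    Because output_attentions=True was set on the config, the returned
    `attentions` tuple holds one [1, heads, seq, seq] tensor per layer.
    """
    outputs = model(pixel_values.unsqueeze(0).to(device))
    return outputs.last_hidden_state, outputs.attentions
```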
3. Blind‑Spot Detection Method: A Hybrid SVD‑Entropy Approach
I developed a hybrid approach combining two complementary metrics to identify inputs where the model is uncertain or internally inconsistent. The forward pass returns a transformers.modeling_outputs.BaseModelOutputWithPooling object containing, among other fields, the last hidden state (taken before the final norm) and the attention weights of every layer; these two fields are all the method needs.
3.1 SVD Disagreement (using the 8 CLS tokens)
Pixio outputs 8 class tokens at the final layer: outputs.last_hidden_state[0, :8, :] has shape [8, 1280].
Each token learns to attend to different image patterns; ideally they should agree on a coherent representation. When they disagree, the embeddings point in different directions, which shows up in the singular-value spectrum of the token matrix.
I form a matrix from these 8 token embeddings, center it, and take a singular value decomposition. The singular values σ_1 ≥ σ_2 ≥ … ≥ σ_8 indicate how much variance is captured along each direction. If the tokens agree, σ_1 dominates; if they disagree, the energy is spread across many singular values.
I define SVD disagreement as:

D = 1 − σ_1 / (σ_1 + σ_2 + … + σ_8)

A value close to 1 means high disagreement; a value close to 0 means strong consensus.
```python
cls_tokens = outputs.last_hidden_state[0, :8, :]  # [8, 1280]
centered = cls_tokens - cls_tokens.mean(dim=0)
S = torch.linalg.svdvals(centered)                # torch.svd is deprecated
disagreement = 1.0 - (S[0] / S.sum())
```
3.2 Attention Entropy
The last‑layer attention weights indicate where each token looks. I average over the attention heads and over the 8 class tokens to obtain the attention that the class tokens collectively pay to the patch tokens (indices 8 and above), then normalize it with a softmax. This gives a probability distribution p over the patches:

p_j = softmax_j( (1 / (8·H)) Σ_h Σ_c A_{h,c,j} )

where H is the number of attention heads and A is the last‑layer attention tensor. The entropy of this distribution measures how diffuse the attention is:

H(p) = − Σ_j p_j log p_j

High entropy means the model is not focusing on any specific region – another indicator of uncertainty.
```python
last_layer_attn = outputs.attentions[-1]                       # [1, heads, seq, seq]
cls_to_patch = last_layer_attn[0, :, :8, 8:].mean(dim=(0, 1))  # [num_patches]
probs = torch.nn.functional.softmax(cls_to_patch, dim=0)
entropy = -torch.sum(probs * torch.log(probs + 1e-10))
```
3.3 Hybrid SVD‑Entropy Score
I combine the two metrics into a single hybrid SVD‑entropy score, the product of the two:

score = disagreement × entropy
Images with a high score are candidates for being “blind spots” – inputs where the model is likely to make mistakes.
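Putting the two snippets together, a per-image driver might look like the following sketch (`compute_and_save_metrics` is my own name for it; the on-disk layout of one `.pt` file per metric per image matches what is read back in section 4):

```python
import os
import torch

def compute_and_save_metrics(outputs, idx, base_dir):
    """Compute SVD disagreement and attention entropy for one forward pass,
    save both to disk, and return the hybrid score (their product)."""
    # SVD disagreement over the 8 class tokens (section 3.1)
    cls_tokens = outputs.last_hidden_state[0, :8, :]
    centered = cls_tokens - cls_tokens.mean(dim=0)
    S = torch.linalg.svdvals(centered)
    disagreement = 1.0 - (S[0] / S.sum())

    # Entropy of the class-token -> patch attention (section 3.2)
    last_layer_attn = outputs.attentions[-1]
    cls_to_patch = last_layer_attn[0, :, :8, 8:].mean(dim=(0, 1))
    probs = torch.nn.functional.softmax(cls_to_patch, dim=0)
    entropy = -torch.sum(probs * torch.log(probs + 1e-10))

    for name, value in [("disagreement", disagreement), ("entropy", entropy)]:
        os.makedirs(f"{base_dir}/{name}", exist_ok=True)
        torch.save(value, f"{base_dir}/{name}/{idx:04d}.pt")
    return (disagreement * entropy).item()
```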
4. Diverse Sampling via Clustering
To avoid selecting near‑duplicate images, I took the top‑1000 highest‑scoring images and clustered them by their attended features: the patch token embeddings, weighted by how much attention each patch receives from the 8 class tokens and summed.
```python
patch_tokens = outputs.last_hidden_state[0, 8:, :]  # [num_patches, 1280]
attended_features = (patch_tokens * cls_to_patch.unsqueeze(-1)).sum(dim=0)
```
I then applied k‑means clustering (k = 10) on these 1000 feature vectors. From each cluster I selected the image with the highest hybrid score. This yielded 10 diverse, high‑uncertainty samples.
The full selection function:
```python
import os
import numpy as np
import torch
from sklearn.cluster import KMeans

def get_svd_entropy_subset(base_dir, top_n=1000, final_k=10):
    scores = []
    indices = sorted(int(f.split('.')[0]) for f in os.listdir(f"{base_dir}/disagreement"))
    for idx in indices:
        d = torch.load(f"{base_dir}/disagreement/{idx:04d}.pt").item()
        e = torch.load(f"{base_dir}/entropy/{idx:04d}.pt").item()
        scores.append({'idx': idx, 'score': d * e, 'svd': d, 'entropy': e})
    scores.sort(key=lambda x: x['score'], reverse=True)
    candidates = scores[:top_n]
    feats = np.stack([torch.load(f"{base_dir}/attended_features/{c['idx']:04d}.pt").numpy()
                      for c in candidates])
    kmeans = KMeans(n_clusters=final_k, n_init=10).fit(feats)
    selected = []
    for i in range(final_k):
        cluster_members = [candidates[j] for j, label in enumerate(kmeans.labels_)
                           if label == i]
        if cluster_members:
            # keep the hardest example from each cluster
            selected.append(max(cluster_members, key=lambda x: x['score']))
    return selected
```
5. Recommendations for Collecting a Fine‑Tuning Dataset
The same hybrid SVD‑entropy methodology can be used to automatically mine a large, diverse set of hard examples from any image collection – for example, from the web, from existing datasets, or from a model’s own misclassifications.
My recommendation, however, is that the fine‑tuning strategy should be task‑specific. High entropy in the attention over patches is plausibly useful for other tasks, such as image‑to‑text, where the model must describe the whole image, so the behaviour we discovered is not always undesirable. High disagreement among the class tokens might likewise be useful there, since the model would need to pull information from very different parts of the image to describe it accurately.
Procedure:
- Gather a large image corpus, preferably labelled (or label it manually after collection) – e.g., CIFAR‑100, LAION‑5B, Open Images, or even random web crawls.
- Run the hybrid SVD‑entropy pipeline on each image: forward pass, compute SVD disagreement (using the 8 CLS tokens) and attention entropy.
- Select the top 0.1% highest‑scoring images – these are the candidates where the model is most uncertain.
- Cluster the selected images using the attended features (as in step 4) to ensure diversity.
- Sample from the clusters, uniformly or otherwise, to assemble a collection of high‑difficulty images, and fine‑tune on them.
Dataset size:
A few thousand to tens of thousands of such hard, diverse examples should suffice to noticeably improve the model's robustness.
A good starting point would be 5,000–10,000 images, balanced across the clusters.
One could then iteratively re‑run the pipeline after each fine‑tuning round to discover new blind spots.
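The cluster-balanced sampling in the last step of the procedure can be sketched as follows (`balanced_sample` is a hypothetical helper; with 10 clusters, a `per_cluster` between 500 and 1,000 gives the 5,000–10,000 image range suggested above):

```python
import numpy as np

def balanced_sample(labels, scores, per_cluster=500):
    """Given cluster labels (e.g. from the KMeans fit in step 4) and the
    aligned hybrid SVD-entropy scores, take the `per_cluster` hardest
    examples from each cluster. Returns indices into labels/scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # sort each cluster's members by descending score
        ranked = members[np.argsort(-scores[members])]
        selected.extend(ranked[:per_cluster].tolist())
    return selected
```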
Architectural modification suggestions:
I have recently been reading the Differential Transformer paper, and I think fine‑tuning a slightly altered model that uses differential attention, initialized with the pretrained Pixio weights, could also improve performance: differential attention significantly reduces noise in the attention maps, which may be one cause of the attention dispersion observed in the selected samples.
Fine‑tuning while minimizing the hybrid SVD‑entropy metric as an additional loss term could also be beneficial, though it would increase the computational cost (especially the SVD) and might require full fine‑tuning of all attention layers, which would be memory‑ and time‑intensive.
6. Visualizing the diverse samples with attention heatmap overlay
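A sketch of how such an overlay can be produced from the quantities computed in section 3.2 (`attention_heatmap` is a hypothetical helper; the rendered figures themselves are not reproduced here):

```python
import torch

def attention_heatmap(cls_to_patch, image_size=256, patch_size=16):
    """Turn the class-token -> patch attention vector from section 3.2
    into an [image_size, image_size] heatmap in [0, 1] that can be
    alpha-blended over the input image (e.g. matplotlib imshow, alpha=0.5)."""
    side = image_size // patch_size                  # 16 patches per side
    grid = cls_to_patch.reshape(side, side)
    # min-max normalize so the overlay uses the full color range
    grid = (grid - grid.min()) / (grid.max() - grid.min() + 1e-10)
    # upsample the 16x16 patch grid to pixel resolution
    heat = torch.nn.functional.interpolate(
        grid[None, None], size=(image_size, image_size),
        mode='bilinear', align_corners=False)[0, 0]
    return heat.numpy()
```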
References
Yang, L., Li, S.-W., Li, Y., Lei, X., Wang, D., Mohamed, A., Zhao, H., & Xu, H. (2025). In Pursuit of Pixel Supervision for Visual Pre-training. arXiv:2512.15715.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3), 211–252.
Ye, T., Dong, L., Xia, Y., Sun, Y., Zhu, Y., Huang, G., & Wei, F. (2024). Differential Transformer. arXiv:2410.05258.