JAFAR: Jack up Any Feature at Any Resolution

¹Sorbonne Université, ²Thales cortAIx Labs, ³Valeo.ai

JAFAR upsamples features from any foundation vision encoder to any image resolution, using the input image as high-resolution guidance. It generates sharp, boundary-aligned feature maps and serves as a versatile drop-in module for a variety of downstream tasks—including semantic segmentation, open-vocabulary segmentation, depth estimation, CAM evaluation, and bird’s-eye-view segmentation—consistently enhancing performance.

Abstract

Foundation Vision Encoders have become essential for a wide range of dense vision tasks. However, their low-resolution spatial feature outputs necessitate feature upsampling to produce the high-resolution feature maps required for downstream tasks. In this work, we introduce JAFAR—a lightweight and flexible feature upsampler that enhances the spatial resolution of visual features from any Foundation Vision Encoder to an arbitrary target resolution. JAFAR employs an attention-based module designed to promote semantic alignment between high-resolution queries—derived from low-level image features—and semantically enriched low-resolution keys, using Spatial Feature Transform (SFT) modulation. Notably, despite the absence of high-resolution supervision, we demonstrate that learning at low upsampling ratios and resolutions generalizes remarkably well to significantly higher output scales. Extensive experiments show that JAFAR effectively recovers fine-grained spatial details and consistently outperforms existing feature upsampling methods across a diverse set of downstream tasks.

Any Resolution

Any Backbone

JAFAR Architecture

[Figure: JAFAR architecture]

Overview of JAFAR. To construct the upsampling kernel, queries and keys are derived from a shared image representation. Queries are downsampled to match the target output resolution, while keys are downsampled to align with the spatial resolution of the vision encoder’s features. Keys are then semantically enriched via SFT modulation to promote semantic alignment with the queries. The resulting kernel is used to interpolate features from the foundation vision encoder.
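To make the data flow concrete, below is a minimal PyTorch sketch of this pipeline. The module names, layer widths, and the shallow shared image encoder are illustrative assumptions rather than the authors' implementation; only the query/key construction, SFT modulation, and attention-based kernel follow the description above.

# Minimal PyTorch sketch of a JAFAR-style attention upsampler.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SFT(nn.Module):
    # Spatial Feature Transform: scale/shift the keys with parameters
    # predicted from the low-resolution semantic features.
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.to_gamma = nn.Conv2d(cond_dim, dim, 1)
        self.to_beta = nn.Conv2d(cond_dim, dim, 1)

    def forward(self, x, cond):
        return x * (1 + self.to_gamma(cond)) + self.to_beta(cond)

class JAFARSketch(nn.Module):
    def __init__(self, dim=64, feat_dim=384):
        super().__init__()
        # Shallow encoder producing the shared high-resolution image representation.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.to_q = nn.Conv2d(dim, dim, 1)
        self.to_k = nn.Conv2d(dim, dim, 1)
        self.sft = SFT(dim, feat_dim)

    def forward(self, image, feats, out_size):
        # image: (B, 3, H, W); feats: (B, C, h, w) frozen encoder features.
        shared = self.image_encoder(image)
        # Queries at the target resolution, keys at the encoder resolution.
        q = F.adaptive_avg_pool2d(self.to_q(shared), out_size)
        k = F.adaptive_avg_pool2d(self.to_k(shared), feats.shape[-2:])
        k = self.sft(k, feats)  # semantically enrich the keys
        B, d, Hq, Wq = q.shape
        q = q.flatten(2).transpose(1, 2)      # (B, Hq*Wq, d)
        k = k.flatten(2)                      # (B, d, h*w)
        v = feats.flatten(2).transpose(1, 2)  # (B, h*w, C)
        attn = torch.softmax(q @ k / d ** 0.5, dim=-1)  # upsampling kernel
        out = attn @ v                        # (B, Hq*Wq, C)
        return out.transpose(1, 2).reshape(B, -1, Hq, Wq)

Note that in this sketch each query attends over only the h × w encoder tokens, so the kernel cost grows linearly with the number of output pixels; this is also why training at a small upsampling ratio and evaluating at a much larger one is a natural fit for the attention formulation.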

Features Visualization

[Figure: Feature visualizations]
DINOv2 ViT-S/14 features at 32 × 32 resolution from the ImageNet validation set are upsampled to 448 × 448. Baseline methods—whether training-free, task-dependent, or task-agnostic—introduce varying levels of blurriness and artifacts. Besides being task-agnostic, JAFAR produces sharp, content-aware feature maps with fewer artifacts.
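As a hypothetical end-to-end usage sketch matching this setup, the snippet below extracts DINOv2 ViT-S/14 patch features (32 × 32 tokens for a 448 × 448 input) and feeds them to the illustrative JAFARSketch module defined above. The DINOv2 calls follow the official torch.hub API; the upsampler interface is an assumption, not the released JAFAR code.

import torch

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()
image = torch.randn(1, 3, 448, 448)  # stand-in for a normalized input image

with torch.no_grad():
    tokens = backbone.forward_features(image)["x_norm_patchtokens"]  # (1, 1024, 384)
    feats = tokens.transpose(1, 2).reshape(1, 384, 32, 32)           # 448 / 14 = 32

upsampler = JAFARSketch(dim=64, feat_dim=384)
hires = upsampler(image, feats, out_size=(448, 448))  # (1, 384, 448, 448)
# A dense 448 x 448 kernel is memory-hungry; smaller out_size values
# exercise the same code path.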

Downstream Evaluation

BibTeX


@misc{couairon2025jafar,
  title={JAFAR: Jack up Any Feature at Any Resolution},
  author={Paul Couairon and Loick Chambon and Louis Serrano and Jean-Emmanuel Haugeard and Matthieu Cord and Nicolas Thome},
  year={2025},
  eprint={2506.11136},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.11136},
}