---
license: apache-2.0
task_categories:
- text-to-image
- image-segmentation
- image-to-text
language:
- en
tags:
- diffusion-models
- causal-inference
- physical-alignment
- synthetic
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: seed
    dtype: int32
  - name: model
    dtype: string
  - name: prompt
    dtype: string
  - name: category
    dtype: string
  - name: subcategory
    dtype: string
  - name: metadata
    dtype: string
  - name: image
    dtype: image
  - name: mask_object
    dtype: image
  - name: mask_mirror
    dtype: image
  - name: mask_on_mirror_reflection
    dtype: image
  splits:
  - name: train
    num_bytes: 1923017035.0
    num_examples: 28700
  download_size: 1861513544
  dataset_size: 1923017035.0
---

# LINA: Learning INterventions Adaptively for Physical Alignment and Generalization in Diffusion Models

<div align="center">

[[Website]](https://opencausalab.github.io/LINA)
[[ArXiv]](https://arxiv.org/abs/2512.13290)
[[GitHub]](https://github.com/OpenCausaLab/LINA)
[[PDF]](https://arxiv.org/pdf/2512.13290)

[![Python Version](https://img.shields.io/badge/Python-3.10-blue.svg)](https://github.com/OpenCausaLab/LINA)
[![GitHub license](https://img.shields.io/github/license/OpenCausaLab/LINA)](https://github.com/OpenCausaLab/LINA)

</div>

## 💡 Dataset Overview: The Physical Alignment Probe (PAP)

This repository contains the **Physical Alignment Probe (PAP)** dataset, the core diagnostic dataset introduced in our paper **LINA**.

To quantitatively measure the distortion of the prompt-to-image mapping, we construct the PAP dataset: a multimodal corpus built on Causal Scene Graphs (CSGs) and designed to diagnose physical reasoning and out-of-distribution (OOD) generation capabilities in diffusion models. It facilitates diagnostic interventions by providing structured prompts, generated images, and fine-grained segmentation masks.

### Key Components

The dataset comprises three core components:

1.  **Structured Prompt Library**: A collection of **287** structured prompts divided into three diagnostic subsets:
    * **Optics**: Probes adherence to implicit physical laws (e.g., reflection logic).
    * **Density**: Tests understanding of buoyancy, mass, and material interactions.
    * **OOD (Out-of-Distribution)**: Focuses on counterfactual attributes and complex spatial relations.

2.  **Large-Scale Image Corpus**: A corpus of **28,700** images, generated with 50 seeds per prompt for each of two SOTA diffusion models (SD-3.5-large and FLUX.1-Krea-dev), i.e., 287 × 50 × 2 = 28,700. These images serve as the basis for our baseline evaluation (see the generation sketch after this list).

3.  **Diagnostic Segmentation Masks**: Fine-grained segmentation masks for critical elements. We employ an MLLM-based evaluator to identify bounding boxes and point prompts, which are then used to generate precise masks via SAM2.
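
For reference, generating one prompt's image set could look like the following minimal sketch with the `diffusers` library. The pipeline class and checkpoint ID below are the standard ones for SD-3.5-large, but the exact sampling settings used for PAP are not specified here, so everything beyond the seed range is an assumption; FLUX.1-Krea-dev would use a `FluxPipeline` analogously.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Assumption: default sampling settings; the prompt and filename are hypothetical.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "A red apple in front of a mirror"  # illustrative PAP-style prompt
for seed in range(50):  # seeds 0-49 per prompt, matching the dataset's `seed` field
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"optics_0001_sd3.5_seed{seed:02d}.png")
```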

### Evaluation Metrics

We use this dataset to quantify failures based on our Causal Generative Model definitions (a scoring sketch follows this list):

* **Texture Alignment**: The success rate for generating *direct elements* (Y<sub>D</sub>), i.e., the object itself.
* **Physical Alignment**: The success rate for generating *indirect elements* (Y<sub>I</sub>), i.e., causal effects like reflections or buoyancy states.
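
Both metrics are plain success rates over per-sample verdicts. A minimal sketch, assuming a hypothetical evaluator output format (`direct_ok` and `indirect_ok` are illustrative field names, not dataset columns):

```python
def alignment_rates(verdicts):
    """Compute Texture and Physical Alignment as success rates.

    `verdicts` is an iterable of dicts with boolean fields (hypothetical
    outputs of an MLLM-based evaluator):
      - direct_ok:   the direct element Y_D (the object itself) was generated correctly
      - indirect_ok: the indirect element Y_I (e.g., the reflection) is physically consistent
    """
    n = texture_hits = physical_hits = 0
    for v in verdicts:
        n += 1
        texture_hits += v["direct_ok"]
        physical_hits += v["indirect_ok"]
    return texture_hits / n, physical_hits / n

texture, physical = alignment_rates([
    {"direct_ok": True, "indirect_ok": False},
    {"direct_ok": True, "indirect_ok": True},
])
print(f"Texture Alignment: {texture:.2f}, Physical Alignment: {physical:.2f}")
```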

---

## 🚀 Usage

You can load this dataset directly using the Hugging Face `datasets` library.

```python
from datasets import load_dataset
import json

# Load the full dataset
dataset = load_dataset("OpenCausaLab/LINA-PAP", split="train")

# Example: Accessing a single sample
sample = dataset[0]

print(f"ID: {sample['id']}")
print(f"Model: {sample['model']}")
print(f"Seed: {sample['seed']}")
print(f"Prompt: {sample['prompt']}")

# Parse metadata (stored as a JSON string for flexibility)
if sample['metadata']:
    meta = json.loads(sample['metadata'])
    print(f"Physical Attributes: {meta}")

# Display the generated image
sample['image'].show()

# Display a mask (e.g., if it exists in an Optics sample)
# Note: Masks are None for Density/OOD samples or if not detected
if sample['mask_on_mirror_reflection'] is not None:
    print("Displaying reflection mask...")
    sample['mask_on_mirror_reflection'].show()
```
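
To eyeball a mask against its source image, a small overlay helper is handy. A minimal sketch using PIL and NumPy, reusing `sample` from the snippet above (`overlay_mask` is our own helper, not part of the dataset tooling):

```python
import numpy as np
from PIL import Image

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Blend a binary mask onto an RGB image for visual inspection."""
    img = np.asarray(image.convert("RGB"), dtype=np.float32)
    m = np.asarray(mask.convert("L"), dtype=np.float32)[..., None] / 255.0
    tint = np.asarray(color, dtype=np.float32)
    blended = img * (1 - alpha * m) + tint * (alpha * m)
    return Image.fromarray(blended.astype(np.uint8))

if sample["mask_object"] is not None:
    overlay_mask(sample["image"], sample["mask_object"]).show()
```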

### Data Structure

The dataset features are organized as follows:

* **Core Info**:
    * `id` (str): Unique identifier for the prompt case (e.g., `optics_0001`).
    * `seed` (int): The random seed used for generation (0-49).
    * `model` (str): The source model (`sd3.5` or `flux`).
    * `prompt` (str): The full text prompt used for generation.
* **Images**:
    * `image` (Image): The original generated RGB image.
* **Categorization**:
    * `category` (str): Main diagnostic category (`optics`, `density`, `ood`).
    * `subcategory` (str): Specific sub-task (e.g., `buoyancy_correct`, `size_reversal`).
    * `metadata` (str): A JSON-formatted string containing detailed physical attributes (e.g., material, expected outcome).
* **Segmentation Masks (Optics Only)**:
    * `mask_object` (Image): Binary mask of the main object.
    * `mask_mirror` (Image): Binary mask of the mirror surface.
    * `mask_on_mirror_reflection` (Image): Binary mask of the reflection on the mirror.
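
For example, to restrict analysis to optics samples whose reflection mask was successfully detected, the standard `datasets` filter API works directly (note that decoding image columns inside `filter` can be slow on the full set):

```python
from datasets import load_dataset

dataset = load_dataset("OpenCausaLab/LINA-PAP", split="train")

# Keep only optics samples with a detected reflection mask.
optics = dataset.filter(
    lambda s: s["category"] == "optics" and s["mask_on_mirror_reflection"] is not None
)
print(f"{len(optics)} optics samples with reflection masks")
```
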
---

## 📖 About The Paper: LINA

We introduce **LINA** (Learning INterventions Adaptively), a novel framework that enforces physical alignment and out-of-distribution (OOD) instruction following in image and video Diffusion Models (DMs).

Diffusion models have achieved remarkable success but still struggle with physical alignment (e.g., correct reflections, gravity) and OOD generalization. We argue that these issues stem from the models' failure to learn causal directions and to disentangle causal factors. **LINA** addresses this by learning to predict prompt-specific interventions without altering pre-trained weights.

Our project page is at [https://opencausalab.github.io/LINA](https://opencausalab.github.io/LINA).

<div align="center">
  <img src="assets/overall_demo.webp" width="100%" alt="Overall Demo"/>
</div>

<br>

**Failures in DMs and LINA's improvement.**
(a) Baseline models often generate reflections extending beyond surfaces or produce texture errors.
(b) Baseline models fail to capture precise spatial prepositions.
By calibrating the sampling dynamics, **LINA** successfully aligns the generation with the intended causal graph while preserving original textures.

### Key Contributions

1.  **Causal Scene Graph (CSG):** We introduce a representation that unifies causal dependencies and spatial layouts, providing a basis for diagnostic interventions.
2.  **Physical Alignment Probe (PAP):** We construct this dataset consisting of structured prompts, SOTA-generated images, and fine-grained masks to quantify DMs' physical and OOD failures.
3.  **Diagnostic Analysis:** We perform CSG-guided masked inpainting, providing the first quantitative evaluation of DMs' multi-hop reasoning failures through bidirectional probing of edges in the CSG.
4.  **LINA Framework:** We propose a framework that learns to predict and apply prompt-specific guidance, achieving SOTA alignment on image and video DMs without MLLM inference or retraining.

### Architecture

**LINA** operates in two phases to calibrate the mapping from prompt to image:

* **Phase 1 (Offline):** We train an **Adaptive Intervention Module (AIM)** using a dataset of "hard cases" where baseline models fail. An MLLM evaluator identifies optimal intervention strengths.
* **Phase 2 (Online):** For new prompts, the pre-trained AIM predicts intervention parameters (γ₁, γ₂). LINA then applies token-level and latent-level interventions during a reallocated computation schedule to enforce causal structure.

<div align="center">
  <img src="assets/architecture.webp" width="100%" alt="LINA Architecture"/>
</div>

### Performance

Extensive experiments show that **LINA** achieves state-of-the-art performance on challenging causal generation tasks. It effectively repairs texture hallucinations and causal failures in both image and video models, significantly outperforming existing editing baselines and closed-source solutions.

---

## 📚 Citation

If you find our work or dataset useful in your research, please cite:

```bibtex
@article{yu2025lina,
  title={LINA: Learning INterventions Adaptively for Physical Alignment and Generalization in Diffusion Models},
  author={Shu Yu and Chaochao Lu},
  year={2025},
  journal={arXiv preprint arXiv:2512.13290},
  url={https://arxiv.org/abs/2512.13290},
}
```