---
dataset_name: LIBERO-Mem
pretty_name: 'LIBERO-Mem: Long-Horizon Object-Centric Kitchen Manipulation'
tags:
- robotics
- imitation-learning
- reinforcement-learning
- manipulation
- vision
- hdf5
- datasets
license: mit
task_categories:
- reinforcement-learning
- other
language:
- en
---
# LIBERO-Mem Dataset Specification

**Metadata (`metainfo.json`) + Demonstrations (`.hdf5`)**
This document presents the complete schema for the LIBERO-Mem dataset, covering both:
- `metainfo.json` — task-level metadata, bounding boxes, segmentation, and initial states
- `.hdf5` demonstration files — synchronized RGB-D observations, segmentation maps, proprioception, and control actions
Both sources together provide a time-aligned, object-centric, and pixel-level representation of robot manipulation trajectories.
## 📘 Part I — `metainfo.json` Metadata Format

The `metainfo.json` file contains all task-level metadata used to interpret and reconstruct demonstrations.
Each top-level key represents a task, and inside each task are one or more demonstration entries (`demo_1`, `demo_2`, …); a loading sketch follows the examples below.
### 🌟 Top-Level Structure
```json
{
  "<TASK_NAME>": {
    "demo_1": { ... },
    "demo_2": { ... }
  },
  "<TASK_NAME_2>": { ... }
}
```
Examples of task names:

- `KITCHEN_SCENE1_1_pick_up_the_bowl_and_place_it_back_on_the_plate`
- `KITCHEN_SCENE1_7_swap_the_2_bowls_on_their_plates_using_the_empty_plate`
- `KITCHEN_SCENE1_9_put_the_cream_cheese_in_the_nearest_basket_and_place_that_basket_in_the_center`
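As a quick orientation, here is a minimal loading sketch that walks the task → demo hierarchy using only the structure documented above; the file path is a placeholder for your local copy.

```python
import json

# Placeholder path: substitute the actual location of metainfo.json.
with open("metainfo.json") as f:
    metainfo = json.load(f)

for task_name, demos in metainfo.items():
    for demo_name, demo in demos.items():
        # Each demo carries the six fields documented below.
        print(f"{task_name}/{demo_name}: success={demo['success']}, "
              f"frames={len(demo['exo_boxes'])}")
```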
### 📁 Per-Demo Structure
Each demonstration contains six fields:
#### 1. `success`

- **Type:** `bool`

Indicates whether the demonstration completes its intended task.

#### 2. `initial_state`

- **Type:** `list[number]`

Simulator state vector for restoring the initial conditions (a restore sketch follows this list).

#### 3. `task_nouns`

- **Type:** `list[string]`

Core object references for the task.

#### 4. `task_description`

- **Type:** `string`

Natural-language description of the task.

#### 5. `exo_boxes`

- **Type:** `list[frame_dict]`

Bounding boxes from the exo-camera, one per timestep.

#### 6. `ego_boxes`

- **Type:** `list[frame_dict]`

Bounding boxes from the ego-camera, with the same structure as `exo_boxes`.
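Because `initial_state` is intended for resetting the simulator, the following hedged sketch shows one way to restore it, assuming the upstream LIBERO benchmark's `OffScreenRenderEnv` and its `set_init_state` method; the BDDL file path and JSON path are placeholders, and you should adapt the call to your LIBERO version.

```python
import json
import numpy as np
from libero.libero.envs import OffScreenRenderEnv  # assumes upstream LIBERO API

with open("metainfo.json") as f:  # placeholder path
    demo = json.load(f)["<TASK_NAME>"]["demo_1"]

# Placeholder BDDL file for the task; locate it in your LIBERO install.
env = OffScreenRenderEnv(bddl_file_name="<path/to/task>.bddl")
env.reset()
env.set_init_state(np.array(demo["initial_state"]))  # restore recorded initial conditions
```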
### 🧩 Metadata Summary

```text
<TASK>/<DEMO>/
  success:          bool
  initial_state:    number[N]
  task_nouns:       string[K]
  task_description: string
  exo_boxes:        list<frame_dict>
  ego_boxes:        list<frame_dict>

frame_dict:
  "<object>": [seg_id, [cx, cy, w, h], obj_subgoal]
```
## 📘 Part II — HDF5 Demonstration Format

Each `.hdf5` file stores raw observations, proprioception, and control data.
📁 File Structure
data/
demo_0/
demo_1/
demo_2/
🔧 Per-Demo Structure
data/demo_i/
actions (T, 7)
dones (T,)
obs/
agentview_rgb
agentview_depth
agentview_seg
eye_in_hand_rgb
eye_in_hand_depth
eye_in_hand_seg
gripper_states
joint_states
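A quick way to confirm this layout on disk is to walk the file with `h5py`; the filename below is a placeholder.

```python
import h5py

# Placeholder filename: substitute an actual demonstration file.
with h5py.File("demo.hdf5", "r") as f:
    f.visititems(
        lambda name, node: print(
            name, node.shape if isinstance(node, h5py.Dataset) else ""
        )
    )
```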
### 📊 Field Descriptions

- `actions` — `(T, 7)`, float64
- `dones` — `(T,)`, uint8

#### Observation Fields

- `agentview_rgb` — `(T, 256, 256, 3)`
- `agentview_depth` — `(T, 256, 256)`
- `agentview_seg` — `(T, 256, 256)`
- `eye_in_hand_rgb` — `(T, 256, 256, 3)`
- `eye_in_hand_depth` — `(T, 256, 256)`
- `eye_in_hand_seg` — `(T, 256, 256)`

#### Proprioception

- `gripper_states` — `(T, 2)`
- `joint_states` — `(T, 7)`
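Putting those shapes to use, here is a minimal `h5py` reading sketch; the filename and demo index are placeholders.

```python
import h5py

with h5py.File("demo.hdf5", "r") as f:  # placeholder filename
    demo = f["data/demo_0"]
    actions = demo["actions"][:]          # (T, 7) float64 control commands
    rgb = demo["obs/agentview_rgb"][:]    # (T, 256, 256, 3)
    joints = demo["obs/joint_states"][:]  # (T, 7)
    print(f"T={actions.shape[0]}, final done flag={demo['dones'][-1]}")
```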
## 📘 Combined Schema Overview

### `metainfo.json`

```text
<TASK>/<DEMO>/
  success:          bool
  initial_state:    number[N]
  task_nouns:       string[K]
  task_description: string
  exo_boxes:        list<frame_dict>
  ego_boxes:        list<frame_dict>
```
### HDF5

```text
data/demo_i/
  actions: (T, 7)
  dones:   (T,)
  obs/
    agentview_rgb
    agentview_depth
    agentview_seg
    eye_in_hand_rgb
    eye_in_hand_depth
    eye_in_hand_seg
    gripper_states
    joint_states
```
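Finally, a hedged end-to-end sketch tying the two sources together: the dataset is described above as time-aligned, so frame `t` in the box lists should correspond to frame `t` in the image streams. The ego ↔ `eye_in_hand` camera pairing, the demo-index mapping between the two files, and all paths are assumptions to verify.

```python
import json
import h5py

TASK = "<TASK_NAME>"  # placeholder task key from metainfo.json

with open("metainfo.json") as f:        # placeholder path
    meta = json.load(f)[TASK]["demo_1"]

with h5py.File("demo.hdf5", "r") as f:  # placeholder path
    obs = f["data/demo_0/obs"]          # index mapping to demo_1 is assumed
    rgb = obs["eye_in_hand_rgb"]        # assumed to pair with ego_boxes

    t = 0
    for obj, (seg_id, (cx, cy, w, h), subgoal) in meta["ego_boxes"][t].items():
        print(f"t={t} {obj}: seg_id={seg_id}, box=({cx}, {cy}, {w}, {h}), "
              f"subgoal={subgoal}, image={rgb[t].shape}")
```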