Goat-core Dataset

A Multi-scene, Multi-modal Dataset for Downstream Target Localization Tasks

Introduction

The Goat-core dataset provides comprehensive data for downstream target localization in robotics and computer vision. It features 4 distinct scenes (5cd, 4ok, nfv, tee).

The dataset records essential environmental data, including depth maps, RGB images, local positions (local_pos), and camera intrinsics. The ground-truth data defines downstream localization tasks, organized into episodes and sub-tasks.

  • 4 scenes
  • 6 episodes per scene
  • 20 tasks per episode
  • 3 task types

File Structure

The dataset is organized into two main directories: dataset (raw sensor data) and groundtruth (task definitions).

Goat-core
├── dataset
│   ├── 4ok
│   ├── 5cd
│   ├── nfv
│   └── tee
│       ├── depth                     # Depth maps
│       ├── images                    # RGB images
│       ├── sparse/0/cameras.txt      # Intrinsics
│       └── local_pos.txt             # Local position data
└── groundtruth
    ├── 4ok
    ├── 5cd
    ├── nfv
    └── tee
        ├── 0                         # Episode 0
        ├── 1                         # Episode 1
        ├── 2                         # Episode 2
        ├── 3                         # Episode 3
        ├── 4                         # Episode 4
        └── 5                         # Episode 5
            ├── 01clothes             # Specific sub-task folder
            │   ├── language.txt      # Task descriptions
            │   ├── pos.txt           # Position groundtruth
            │   ├── task_type.txt     # Task type labels
            │   ├── 01clothes_0.png   # Image index 0 (anchor)
            │   └── ...
            ├── 02towel
            └── 03bed
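Given this layout, a sub-task's groundtruth can be read with a few lines of Python. The sketch below is a minimal example, not part of the dataset's tooling; the `ROOT` path and helper names are hypothetical, and it assumes the directory structure shown above.

```python
from pathlib import Path

# Hypothetical root; point this at wherever Goat-core is extracted.
ROOT = Path("Goat-core")

def list_subtasks(scene: str, episode: int) -> list[str]:
    """List sub-task folder names (e.g. '01clothes') for one episode of a scene."""
    ep_dir = ROOT / "groundtruth" / scene / str(episode)
    return sorted(p.name for p in ep_dir.iterdir() if p.is_dir())

def load_subtask(scene: str, episode: int, name: str) -> dict:
    """Read the language description, position groundtruth, and task type
    for one sub-task, and locate its index-0 anchor image."""
    task_dir = ROOT / "groundtruth" / scene / str(episode) / name
    return {
        "language": (task_dir / "language.txt").read_text().strip(),
        "pos": (task_dir / "pos.txt").read_text().strip(),
        "task_type": (task_dir / "task_type.txt").read_text().strip(),
        "anchor_image": task_dir / f"{name}_0.png",  # image index 0 (anchor)
    }
```

For example, `load_subtask("tee", 0, "01clothes")` would return the task description, target position, and type label for that sub-task in one dictionary.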

Task Indexing Guide

Tasks are categorized into three types, each indexed differently:

  • Object Task: Indexed via language.txt.
  • Language Task: Indexed via language.txt.
  • Image Task: Indexed using the image with ID 0 (e.g., 00rack_0.png).
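The indexing rules above can be dispatched on programmatically. The sketch below is a hedged example, not official dataset code: it assumes `task_type.txt` contains one of the labels `object`, `language`, or `image` (the exact label strings are an assumption), and it follows the anchor-image naming convention `<folder>_0.png`.

```python
from pathlib import Path

def task_query(task_dir: Path):
    """Return (task_type, query) for one sub-task folder.

    Object and Language tasks are both indexed via language.txt;
    Image tasks are indexed by the anchor image with ID 0
    (e.g. 01clothes_0.png inside the 01clothes folder).
    Label strings in task_type.txt are assumed, not documented.
    """
    task_type = (task_dir / "task_type.txt").read_text().strip().lower()
    if task_type in ("object", "language"):
        return task_type, (task_dir / "language.txt").read_text().strip()
    if task_type == "image":
        return task_type, task_dir / f"{task_dir.name}_0.png"
    raise ValueError(f"unknown task type: {task_type!r}")
```

A caller can then treat the returned query uniformly, whether it is a text description or a path to the anchor image.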

Access the Data

The dataset is available for academic and research use.


Download Goat-core Dataset

Hosted on [Google Drive / Hugging Face] | License: CC BY-NC 4.0

Citation

If you use Goat-core in your research, please cite the following paper:

@misc{zhou2025lagmemo,
  title        = {LagMemo: Language 3D Gaussian Splatting Memory for Multi-modal Open-vocabulary Multi-goal Visual Navigation},
  author       = {Zhou, Haotian and Wang, Xiaole and Li, He and Sun, Fusheng and Guo, Shengyu and Qi, Guolei and Xu, Jianghuan and Zhao, Huijing},
  journal      = {arXiv preprint arXiv:2510.24118},
  year         = {2025},
  howpublished = {\url{https://weekgoodday.github.io/lagmemo/}},
}