dgural committed (verified)
Commit e22bdfd · 1 parent: f6c36d1

Update README.md

Files changed (1): README.md (+84 −157)
# Dataset Card for nvidia-physical-ai-sample

This dataset is a **small curated sample (100 items)** extracted from the full
[NVIDIA PhysicalAI Autonomous Vehicles dataset](https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles).
It is intended for **quick experimentation**, **tutorials**, and **FiftyOne integration demos** without requiring the multi-terabyte original dataset.

  ---

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

---

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the sample dataset
dataset = load_from_hub("dgural/PhysicalAI-Autonomous-Vehicles-Sample")

# Launch the App
session = fo.launch_app(dataset)
```

---

# Dataset Details

## Dataset Description

This dataset provides a **representative slice** of the NVIDIA PhysicalAI Autonomous Vehicles dataset, including:

- Camera data
- A structure identical to the full dataset, suitable for:
  - Pipeline prototyping
  - Instructional demos
  - AV data exploration with FiftyOne
  - Quick testing of loaders/adapters/exporters

The full dataset is available at:
**https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles**

### Curated by
Voxel51 (sample extraction), derived from NVIDIA’s original dataset.

### Language(s)
- en (metadata)

### License
Inherits licensing from the original NVIDIA dataset.
See the main dataset page for license details.

---

## Dataset Sources

- **Primary Dataset:** NVIDIA PhysicalAI Autonomous Vehicles
- **Sample Extraction:** Voxel51 using FiftyOne + Physical AI Workbench pipelines
- **Repository:** https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles
- **Demo Code:** https://github.com/voxel51/fiftyone

---

# Uses

## Direct Use
Appropriate uses of this dataset include:

- Testing dataset import/export mechanisms
- Unit tests for dataset auditing logic
- Teaching users how to navigate AV sensor datasets
- Lightweight experimentation
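
The auditing use case above can be illustrated with a short, self-contained check. This is a minimal sketch in plain Python; the record fields (`sample_id`, `filepath`) are hypothetical placeholders, not this dataset's actual schema:

```python
# Sketch of a dataset-audit check; field names are hypothetical.
REQUIRED_FIELDS = {"sample_id", "filepath"}

def audit_records(records):
    """Return indices of records that are missing any required field."""
    return [
        i for i, record in enumerate(records)
        if REQUIRED_FIELDS - record.keys()
    ]

records = [
    {"sample_id": "000", "filepath": "frames/000.jpg"},
    {"sample_id": "001"},  # missing 'filepath', so it gets flagged
]
print(audit_records(records))  # [1]
```

A real audit of this sample would run the same kind of check against whatever fields the FiftyOne loader actually produces.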
 
## Out-of-Scope Use
This sample is **not** suitable for:

- Training ML models
- Benchmarking performance
- Statistical analysis
- Scenario diversity evaluation
- Research intended to generalize across AV driving conditions
 
---

# Dataset Structure

This sample preserves the same organizational layout as the full PhysicalAI dataset:

- Per-sample grouped data

Each sample corresponds to a discrete AV sensor datapoint.
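
FiftyOne represents this layout as a grouped dataset, where each group collects the sensor records from one capture moment. As a plain-Python illustration of the grouping idea (the `group_id` and `slice` keys are hypothetical names, not the dataset's real fields):

```python
from collections import defaultdict

# Flat records tagged with a group id and a sensor slice (hypothetical names).
records = [
    {"group_id": "g0", "slice": "camera_front", "path": "g0_front.jpg"},
    {"group_id": "g0", "slice": "camera_rear", "path": "g0_rear.jpg"},
    {"group_id": "g1", "slice": "camera_front", "path": "g1_front.jpg"},
]

# Group records so each sample's sensor slices live together.
groups = defaultdict(dict)
for record in records:
    groups[record["group_id"]][record["slice"]] = record["path"]

print(sorted(groups))               # ['g0', 'g1']
print(groups["g0"]["camera_rear"])  # g0_rear.jpg
```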
 
---

# Dataset Creation

## Curation Rationale

The full PhysicalAI dataset is extremely large.
This sample provides a lightweight, highly portable subset that can be used for:

- Rapid experimentation
- Prototyping ingestion pipelines
- Teaching and demos
- Running on laptops or small instances

## Source Data

The underlying data originates from NVIDIA’s PhysicalAI dataset.
The sample was created by subselecting a limited number of frames and repacking them while preserving field structure.
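
The subselection step described above can be sketched as a seeded, reproducible draw. This is not the actual extraction pipeline, just an illustration of deterministically picking a fixed-size subset of frame IDs:

```python
import random

def select_sample(frame_ids, k=100, seed=51):
    """Deterministically pick k frame IDs (or all, if fewer exist)."""
    rng = random.Random(seed)
    k = min(k, len(frame_ids))
    return sorted(rng.sample(frame_ids, k))

all_frames = [f"frame_{i:06d}" for i in range(10_000)]
sample = select_sample(all_frames)
print(len(sample))  # 100
```

Fixing the seed means anyone rerunning the draw over the same frame list gets the identical subset.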
 
### Source data produced by
NVIDIA Autonomous Vehicles & PhysicalAI teams.

---

# Bias, Risks, and Limitations

Because this is a **non-representative sample**, it:

- Does *not* capture full scenario diversity
- Should *not* be used for model training
- Cannot support robust statistical evaluation
- May omit critical driving edge cases

It is designed solely for small-scale experimentation.

---

# Citation

If you use this dataset or sample, cite the original:

**NVIDIA PhysicalAI Autonomous Vehicles Dataset**
https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles

---

# Contact

For questions related to this sample or the Physical AI Workbench:
https://voxel51.com