Dataset metadata:
license: cc0-1.0
language:
- en
tags:
- climate
pretty_name: 'ARGO_Profiles: Argovis Argo Ocean Profiles'
Dataset summary:
This dataset contains ocean profile data collected by the international Argo float program and accessed via the Argovis API. Each record corresponds to a single profile measured by an autonomous drifting float, including time, location, basin, and associated profile metadata fields that can be joined to the underlying temperature and salinity data structures. The goal of this dataset is to provide a ready-to-use subset of Argo profiles for machine learning, geospatial analysis, and educational use.[1][2][3][4]
The files were programmatically fetched using the Argovis API in Python, converted to pandas DataFrames, and exported as CSV. This makes it easy to load the data in common data science environments (Python, R, Julia) without needing to write custom API integration code.[5][6][1]
Source and provenance:
- Original data source: Argo Global Data Assembly Centers (GDACs), accessed through the Argovis platform.[2][4]
- Access method: Argovis API (https://argovis-api.colorado.edu/argo) with query filters on time and optional geographic constraints.[1][5]
- Processing steps:
  - Profiles were requested via HTTP GET with an Argovis API key.
  - JSON responses were flattened using pandas.json_normalize.
  - Selected fields were exported to CSV for easier downstream use; a minimal code sketch is shown below.[6][7]
These data are a derivative, convenience-formatted view of the original Argo profile data; they do not modify scientific content, only representation (JSON → tabular CSV).
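For reference, here is a minimal sketch of the fetch-and-flatten workflow described above. The endpoint URL is the one listed in this card; the query parameter names (startDate, endDate) and the API-key header name (x-argokey) are assumptions and should be verified against the Argovis API documentation before use.
import pandas as pd
import requests

# Endpoint listed in this card; the parameter and header names below are
# assumptions and should be checked against the Argovis API documentation.
API_URL = "https://argovis-api.colorado.edu/argo"
API_KEY = "YOUR_ARGOVIS_API_KEY"  # placeholder for a personal Argovis API key

params = {
    "startDate": "2024-01-01T00:00:00Z",  # example time window (assumed parameter names)
    "endDate": "2024-01-08T00:00:00Z",
}
headers = {"x-argokey": API_KEY}  # assumed API-key header name

response = requests.get(API_URL, params=params, headers=headers, timeout=60)
response.raise_for_status()
profiles = response.json()  # list of profile records as JSON objects

# Flatten the nested JSON into a table and export, mirroring the steps above.
df = pd.json_normalize(profiles)
df.to_csv("argovis_profiles.csv", index=False)
print(df.shape)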
Files included:
Depending on the current upload, this repository may contain some or all of the following files:
- argovis_profiles.csv
  - Full export of profiles for the selected time range, including metadata arrays and nested columns flattened where possible.
- argovis_profiles_minimal.csv
  - Smaller view with key columns only, such as:
    - _id: unique profile identifier in Argovis.
    - timestamp: profile measurement time (UTC).
    - geolocation.coordinates: longitude and latitude pair.
    - basin: ocean basin index.
    - profile_direction: upcast/downcast indicator where available.[3][1]
If additional files are added later (e.g., separate files for metadata, variables, or different time windows), the filenames should clearly reflect their content and time coverage.
Data fields (high-level):
Typical columns in the main CSV include:[8][1]
- _id: Combined float ID and cycle number, uniquely identifying each profile.
- basin: Integer code identifying the ocean basin (e.g., Atlantic, Pacific).
- timestamp: ISO 8601 timestamp of the profile.
- date_updated_argovis: Time when the profile record was last updated in Argovis.
- source: Source metadata including upstream data center information.
- cycle_number: Profile cycle number for the float.
- geolocation.type: Geometry type (usually “Point”).
- geolocation.coordinates: [longitude, latitude] pair for the profile location.
- profile_direction: Profile direction (ascent/descent) when available.
- vertical_sampling_scheme: Description of vertical sampling strategy.
- data, data_info, metadata, data_warning: Nested structures with detailed variable and QC information as provided by Argovis. These may require additional parsing for advanced use cases.
Users are encouraged to consult Argovis and Argo documentation for full definitions of scientific variables, QC flags, and conventions.[4][3][1]
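As an illustration of working with the flattened columns, the sketch below splits geolocation.coordinates into numeric longitude and latitude columns. It assumes that after CSV export the coordinate pair is stored as a list-like string (e.g., "[20.5, -35.1]"); if the column is stored differently, adjust the parsing accordingly.
import ast

import pandas as pd

df = pd.read_csv("argovis_profiles.csv")

# After CSV export, list-valued cells are typically stored as strings
# (e.g., "[20.5, -35.1]"); parse them back into Python lists.
coords = df["geolocation.coordinates"].apply(ast.literal_eval)

# Argovis stores coordinates as [longitude, latitude].
df["longitude"] = coords.str[0]
df["latitude"] = coords.str[1]

print(df[["_id", "longitude", "latitude"]].head())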
Intended uses:
This dataset is useful for:
- Oceanography and climate research:
  - Exploring spatial and temporal patterns in upper-ocean temperature and salinity.
  - Studying variability in different basins or time periods.[3][4]
- Machine learning and AI:
  - Building models for ocean state estimation or anomaly detection.
  - Training geospatial–temporal models, sequence models, or clustering methods on Argo profiles.
- Education and teaching:
  - Demonstrations of working with real-world scientific sensor data.
  - Exercises on APIs, data wrangling, and data visualization in Python.
Because this dataset is derived from the operational Argo network, it reflects real measurement noise, missing data patterns, and QC flags that are valuable for realistic ML pipelines.[4][3]
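As one concrete starting point for the ML uses above, the sketch below clusters profile locations with scikit-learn's KMeans. The coordinate parsing mirrors the earlier sketch, the cluster count is arbitrary and purely illustrative, and plain KMeans on raw longitude/latitude ignores the Earth's spherical geometry.
import ast

import pandas as pd
from sklearn.cluster import KMeans

# Load the export and derive numeric coordinates (see the parsing sketch above).
df = pd.read_csv("argovis_profiles.csv")
coords = df["geolocation.coordinates"].apply(ast.literal_eval)
df["longitude"], df["latitude"] = coords.str[0], coords.str[1]

# Purely illustrative: group profile locations into 8 spatial clusters.
features = df[["longitude", "latitude"]].dropna().copy()
features["cluster"] = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(
    features[["longitude", "latitude"]]
)
print(features["cluster"].value_counts())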
How to load the data:
Example in Python with pandas:
import pandas as pd
# Replace FILE_NAME.csv with the actual file name in this repo
df = pd.read_csv("argovis_profiles.csv")
print(df.shape)
print(df.columns[:20])
print(df.head())
When using the datasets library:
from datasets import load_dataset
dataset = load_dataset("your-username/your-dataset-name")
df = dataset["train"].to_pandas()
Adjust split name and file mapping depending on how the dataset is configured on the Hub.[9][10]
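Once loaded, a quick exploratory pass might parse the timestamp column and count profiles per basin and per month, assuming the column names listed in the data fields section above:
import pandas as pd

df = pd.read_csv("argovis_profiles.csv")

# Parse the ISO 8601 timestamp column into timezone-aware datetimes.
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True, errors="coerce")

# Count profiles per ocean basin code and per calendar month.
print(df["basin"].value_counts())
print(df.set_index("timestamp").resample("MS").size())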
Limitations and caveats:
- This is a subset in time (and optionally space), not the full Argo archive.[2][4]
- Deep scientific interpretation (e.g., water mass analysis) requires careful handling of:
  - Quality flags (QC).
  - Pressure, temperature, and salinity calibration details.
  - Regional and temporal coverage biases.[3][4]
- Some columns, especially data, data_info, and metadata, may be nested or complex; advanced users may need custom parsing code to extract specific variables or levels.
Users should always refer to the official Argo documentation and Argovis API documentation for authoritative descriptions of variables and processing.[11][1][3]
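For the nested columns, a hedged starting point is to parse a cell back into a Python object and inspect it before writing any extraction logic; the exact structure of data and data_info is defined by Argovis and is not documented here, so the sketch below only shows how to peek at one record.
import ast
import json

import pandas as pd

df = pd.read_csv("argovis_profiles.csv")

def parse_cell(cell):
    # Nested structures exported to CSV usually arrive as strings;
    # try JSON first, then fall back to Python literal syntax.
    if not isinstance(cell, str):
        return cell
    try:
        return json.loads(cell)
    except ValueError:
        return ast.literal_eval(cell)

# Peek at one record's nested fields before writing extraction logic;
# consult the Argovis documentation for the meaning of each element.
example = parse_cell(df.loc[0, "data_info"])
print(type(example))
print(example)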
License and attribution:
The underlying Argo data are freely available without restriction and are treated as open data in many catalogs; however, proper acknowledgment of the Argo program is required in any work that uses them.[12][2]
If using this dataset, please include an acknowledgment along the lines of:
“These data were collected and made freely available by the international Argo program and the national programs that contribute to it. Argo data are freely available from the Global Data Assembly Centers. See https://doi.org/10.17882/42182 for original data access and documentation.”[11][2][3]
Also cite this Hugging Face dataset if it is used as a curated, preprocessed source in your workflows.
Contact and contributions:
If you find issues in the CSV export, want additional time ranges or regions, or would like to contribute parsing scripts or example notebooks (e.g., for plotting sections or training ML models), feel free to:
- Open an issue or discussion on this dataset’s Hugging Face page.[13][14]
- Propose a pull request with improved dataset cards, loaders, or examples if the dataset is maintained in a Git-based repository.
Contributions that improve documentation, reproducible data-fetching scripts, and example analyses are very welcome and help others use Argo and Argovis data more effectively.