Commit 47fb231 (verified) by fabiencasenave, parent 9881c74: Upload README.md with huggingface_hub
---
license: cc-by-sa-4.0
task_categories:
- graph-ml
pretty_name: 2D quasistatic non-linear structural mechanics solutions
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone

- split: OOD
  path: data/OOD-*
---
<p align='center'>
<img src='https://i.ibb.co/MDqsmb5H/Logo-Tensile2d-2-consolas-100.png' alt='https://i.ibb.co/MDqsmb5H/Logo-Tensile2d-2-consolas-100.png' width='1000'/>
<img src='https://i.ibb.co/Js062hF/preview.png' alt='https://i.ibb.co/Js062hF/preview.png' width='1000'/>
</p>

```yaml
legal:
  owner: Safran
  license: cc-by-sa-4.0
data_production:
  type: simulation
  physics: 2D quasistatic non-linear structural mechanics, small deformations, plane strain
num_samples:
  train: 500
  test: 200
  OOD: 2
storage_backend: hf_datasets
plaid:
  version: 0.1.13.dev36+g21db6656e.d20260302
```
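As a quick sanity check on the metadata above, the split sizes can be totalled in plain Python (the counts are copied from the `num_samples` entry; the variable names are just illustrative):

```python
# Split sizes copied from the dataset metadata above.
num_samples = {"train": 500, "test": 200, "OOD": 2}

total = sum(num_samples.values())
print(total)  # 702
```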
This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); see that documentation for additional details on how to extract data from `plaid_sample` objects.

The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```

Then, to iterate over the dataset and instantiate samples:
```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```

It is also possible to stream the data directly:
```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```

Features of `plaid` samples can be retrieved as follows:
```python
from plaid.storage import load_problem_definitions_from_disk
local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub
repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]

plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
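The nested loop above visits every (time, feature path) pair; gathering the results into a `{path: [values]}` dict is a common next step. A minimal stand-in sketch in plain Python, with hypothetical paths and values in place of real `plaid` calls:

```python
# Stand-in objects mimicking the pattern above: a "sample" exposing
# per-time, per-path feature values, as a plaid sample would.
sample = {
    0.0: {"inputs/pressure": 1.5, "outputs/displacement": 0.02},
    1.0: {"inputs/pressure": 3.0, "outputs/displacement": 0.05},
}
in_paths = ["inputs/pressure"]
out_paths = ["outputs/displacement"]

# Gather features into {path: [value per time step]}, time-ordered.
gathered = {p: [] for p in in_paths + out_paths}
for t in sorted(sample):
    for p in in_paths + out_paths:
        gathered[p].append(sample[t][p])

print(gathered["inputs/pressure"])  # [1.5, 3.0]
```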

For those familiar with the HF `datasets` library, raw data can be retrieved without using `plaid`:
```python
from datasets import load_dataset

repo_id = "channel/dataset"

datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.

### Dataset Sources

- **Papers:**
  - [arXiv:2305.12871](https://arxiv.org/pdf/2305.12871)
  - [arXiv:2505.02974](https://arxiv.org/abs/2505.02974)