sincerelin and thainasaraiva committed
Commit 8b722d1 · 0 Parent(s)

Duplicate from CelfAI/COOPER

Co-authored-by: Thaina Saraiva <thainasaraiva@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,61 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
+ synthetic_data_ydt.csv filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,3 @@
+ publish/
+ .vscode/
+ .env
README.md ADDED
@@ -0,0 +1,217 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - mobileNetwork
+ - 5G
+ task_ids:
+ - univariate-time-series-forecasting
+ - multivariate-time-series-forecasting
+ configs:
+ - config_name: measurements_by_cell
+   data_files:
+   - split: train
+     path: dataset/train_data.csv
+   - split: test
+     path: dataset/test_data.csv
+ - config_name: topology
+   data_files:
+   - split: main
+     path: metadata/topology.csv
+ - config_name: performance_indicators_meanings
+   data_files:
+   - split: main
+     path: metadata/performance_indicators_meanings.csv
+ ---
+
+ # 📡 COOPER
+ ### Cellular Operational Observations for Performance and Evaluation Research
+ **An Open Benchmark of Synthetic Mobile Network Performance Indicators for Reproducible Research**
+
+ ---
+
+ ## 🧭 Overview
+
+ **COOPER** is an open-source **synthetic dataset of mobile network performance measurement (PM) time series**, designed to support **reproducible AI/ML research** in wireless networks. The dataset is named in honor of **Martin Cooper**, a pioneer of cellular communications.
+
+ COOPER emulates the **statistical distributions, temporal dynamics, and structural patterns** of real 5G network PM data while containing **no sensitive or operator-identifiable information**.
+
+ The dataset is released together with a **reproducible benchmarking framework** used to evaluate synthetic data generation methods.
+
+ ---
+
+ ## 🎯 Motivation
+
+ Access to real telecom PM/KPI data is often restricted due to:
+
+ - Confidentiality agreements
+ - Privacy regulations
+ - Commercial sensitivity
+
+ This lack of open data limits **reproducibility** in AI-driven research for wireless networks. COOPER addresses this gap by providing a **realistic yet privacy-preserving synthetic alternative** suitable for:
+
+ - Network monitoring research
+ - KPI forecasting
+ - Anomaly detection
+ - AI-native network automation
+ - 5G/6G performance evaluation
+
+ ---
+
+ ## 🏗 Dataset Creation Methodology
+
+ To generate COOPER, three complementary synthetic data generation paradigms were evaluated:
+
+ 1. **Adversarial approaches**
+ 2. **Probabilistic models**
+ 3. **Model-based time-series methods**
+
+ These were benchmarked using a **unified quantitative and qualitative evaluation framework** considering:
+
+ - Distributional similarity
+ - Temporal fidelity
+ - Shape alignment
+ - Discriminative performance
+ - Downstream forecasting capability
+
+ The generator demonstrating the most **balanced and consistent performance** across these criteria was selected to produce COOPER.
+
+ ---
+
+ ## 📊 Source Data Characteristics
+
+ The real dataset used to model the synthetic data was:
+
+ - Fully **anonymized** to remove operator-sensitive information
+ - Cleaned and standardized for consistency
+
+ | Property | Value |
+ |----------|-------|
+ | Radio Access Technology | 5G |
+ | Number of PM Indicators | 45 |
+ | Total Number of Cells | 84 |
+ | Base Stations | 12 |
+ | Geographic Area | ~1.35 km² |
+ | Collection Period | 31 days |
+ | Sampling Interval | 1 hour |
+ | Data Representation | Multi-cell time series |
+
+ A **cell** is defined as a radiating unit within a specific RAT and frequency band. Each base station may host multiple cells.
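As an illustration of this hierarchy, the topology columns used by the example scripts in this repository (`SiteLabel`, `LocalCellName`, `Band`) let you count how many cells each base station hosts. A minimal sketch with hypothetical rows (not actual COOPER data):

```python
import pandas as pd

# Hypothetical mini-topology using the column names from metadata/topology.csv.
topology = pd.DataFrame({
    "SiteLabel": ["BS01", "BS01", "BS01", "BS02", "BS02"],
    "LocalCellName": ["c1", "c2", "c3", "c4", "c5"],
    "Band": ["N28", "N28", "N78", "N28", "N78"],
})

# Each base station (SiteLabel) hosts several cells, possibly on different bands.
cells_per_site = topology.groupby("SiteLabel")["LocalCellName"].nunique()
print(cells_per_site.to_dict())  # {'BS01': 3, 'BS02': 2}
```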
+
+ ---
+
+ ## 📡 Network Deployment Characteristics
+
+ The modeled network includes two frequency bands and two 5G architectures:
+
+ | Band | Architecture | Number of Cells |
+ |------|--------------|-----------------|
+ | N28 (700 MHz) | Option 2 (Standalone) | 6 |
+ | N28 (700 MHz) | Option 3 (Non-Standalone) | 48 |
+ | N78 (3500 MHz) | Option 2 (Standalone) | 6 |
+ | N78 (3500 MHz) | Option 3 (Non-Standalone) | 24 |
+
+ Most cells operate in **Option 3 (NSA)** mode, reflecting a typical **EN-DC deployment** where LTE provides the control-plane anchor.
+
+ ---
+
+ ## 📈 PM Indicator Categories
+
+ Indicators follow **3GPP TS 28.552** performance measurement definitions and are grouped into:
+
+ ### 1️⃣ Radio Resource Control (RRC) Connection
+ Procedures for establishing UE radio connections and tracking active users.
+ - `RRC.ConnEstabSucc`
+ - `RRC.ConnEstabAtt`
+ - `RRC.ConnMax`
+
+ ### 2️⃣ Mobility Management
+ Handover and redirection performance across frequencies.
+ - `MM.HoExeIntraFreqSucc`
+ - `MM.HoExeInterFreqSuccOut`
+
+ ### 3️⃣ Channel Quality Indicator (CQI)
+ Distribution of downlink channel quality reports (CQI 0–15).
+ - `CARR.WBCQIDist.Bin0`
+ - `CARR.WBCQIDist.Bin15`
+
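Each `CARR.WBCQIDist.Bin*` column is a count of CQI reports falling in that bin, so a per-sample expected CQI can be recovered as a weighted average over the 16 bins. A minimal sketch (bin column names as listed above; the demo values are hypothetical):

```python
import numpy as np
import pandas as pd

def expected_cqi(df: pd.DataFrame) -> pd.Series:
    """Per-row mean reported CQI, computed from the 16 WBCQI bin counters."""
    bins = [f"CARR.WBCQIDist.Bin{i}" for i in range(16)]
    counts = df[bins]
    total = counts.sum(axis=1).astype(float)
    # Weight each bin's count by its CQI index, then average.
    weighted = (counts * np.arange(16)).sum(axis=1)
    # Rows with no reports yield NaN rather than a division-by-zero artifact.
    return weighted / total.where(total > 0)

# Hypothetical sample: 3 reports at CQI 7, 1 report at CQI 9.
demo = pd.DataFrame({f"CARR.WBCQIDist.Bin{i}": [0] for i in range(16)})
demo.loc[0, "CARR.WBCQIDist.Bin7"] = 3
demo.loc[0, "CARR.WBCQIDist.Bin9"] = 1
print(expected_cqi(demo).iloc[0])  # (3*7 + 1*9) / 4 = 7.5
```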
+
+ ### 4️⃣ Throughput and Data Volume
+ Traffic volume and transmission duration.
+ - `ThpVolDl`
+ - `ThpTimeDl`
+
+ ### 5️⃣ Availability
+ Cell downtime due to failures or energy-saving mechanisms.
+ - `CellUnavail.System`
+ - `CellUnavail.EnergySaving`
+
+ ### 6️⃣ UE Context
+ User session establishment attempts and successes.
+ - `UECNTX.Est.Att`
+ - `UECNTX.Est.Succ`
+
+ ---
+
+ ## 🧪 Benchmarking Framework
+
+ COOPER is distributed with a **reproducible evaluation pipeline** that allows researchers to compare synthetic data generators using:
+
+ - Statistical similarity metrics
+ - Temporal alignment measures
+ - Shape-based similarity
+ - Classification distinguishability
+ - Forecasting task performance
+
+ This framework enables standardized evaluation of synthetic telecom datasets.
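As an illustration of the distributional-similarity component, a two-sample Kolmogorov–Smirnov statistic can compare a real and a synthetic indicator series. This is a minimal sketch of the idea, not the pipeline's actual implementation:

```python
import numpy as np

def ks_statistic(real: np.ndarray, synth: np.ndarray) -> float:
    """Two-sample KS statistic: the largest gap between the empirical CDFs."""
    grid = np.sort(np.concatenate([real, synth]))
    cdf_real = np.searchsorted(np.sort(real), grid, side="right") / len(real)
    cdf_synth = np.searchsorted(np.sort(synth), grid, side="right") / len(synth)
    return float(np.abs(cdf_real - cdf_synth).max())

# Hypothetical series: one generator matches the "real" distribution, one does not.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 1000)
good_synth = rng.normal(0.0, 1.0, 1000)
bad_synth = rng.normal(3.0, 1.0, 1000)
print(ks_statistic(real, good_synth))  # small: distributions match
print(ks_statistic(real, bad_synth))   # large: distributions differ
```

A value near 0 indicates that the synthetic indicator is distributionally close to the real one; a value near 1 indicates it is easily distinguishable.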
+
+ ---
+
+ ## 🔬 Intended Use Cases
+
+ COOPER is suitable for:
+
+ - Time-series forecasting research
+ - Network anomaly detection
+ - Root-cause analysis modeling
+ - RAN performance optimization studies
+ - Reproducible academic research in 5G/6G systems
+
+ ---
+
+ ## ⚠️ Data Notice for Dataset Users
+
+ **Because the source data comes from a real network, some inconsistent values were intentionally retained in this dataset.**
+ We recommend **preprocessing the data before use** (e.g., handling outliers, missing values, or domain-specific inconsistencies) according to your application and methodology.
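A minimal preprocessing sketch along these lines, using the column names from the example scripts; the percentile thresholds are illustrative assumptions, not a recommendation from the dataset authors:

```python
import numpy as np
import pandas as pd

def basic_clean(df: pd.DataFrame, pm_columns: list[str]) -> pd.DataFrame:
    """Per-cell cleaning: clip extreme outliers to the 1st/99th percentile,
    then linearly interpolate missing samples within each cell's series."""
    out = df.sort_values(["LocalCellName", "datetime"]).copy()
    for col in pm_columns:
        lo, hi = out[col].quantile([0.01, 0.99])
        out[col] = out[col].clip(lo, hi)
        out[col] = out.groupby("LocalCellName")[col].transform(
            lambda s: s.interpolate(limit_direction="both")
        )
    return out

# Hypothetical toy series with a gap and an implausible spike.
demo = pd.DataFrame({
    "LocalCellName": ["cell_A"] * 5,
    "datetime": pd.date_range("2024-01-01", periods=5, freq="h"),
    "ThpVolDl": [10.0, np.nan, 12.0, 1e9, 11.0],
})
print(basic_clean(demo, ["ThpVolDl"])["ThpVolDl"].tolist())
```

Which thresholds and imputation strategy are appropriate depends on the downstream task; counters, ratios, and availability indicators generally need different treatment.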
+
+ ---
+
+ ## 🤝 Contribution & Reproducibility
+
+ This project promotes **open and reproducible telecom AI research**.
+ Researchers are encouraged to:
+
+ - Benchmark new generation models using the provided framework
+ - Share improvements and derived datasets
+ - Compare methods under the same evaluation protocol
+
+ ---
+
+ ## 📜 License
+
+ This dataset is released under the **Apache 2.0** license (as declared in the dataset metadata) for research and educational purposes.
+
+ ---
+
+ ## 📖 Citation
+
+ If you use COOPER in your research, please cite:
+
+ > *COOPER: An Open Benchmark of Synthetic Mobile Network Performance Indicators for Reproducible Research*
+
+ (Full citation to be added)
Visualization_Metrics/PDF_by_cell/Log_PDF_N.User.RRCConn.Max_Cell_0.png ADDED

Git LFS Details

  • SHA256: 3098025a56babd6eed64fd0c0c0fadaad1453098147cb003c42f01b1a3e98202
  • Pointer size: 130 Bytes
  • Size of remote file: 63.3 kB
Visualization_Metrics/heatmap.png ADDED

Git LFS Details

  • SHA256: b290a43427de1b6a81dfe2bc73b95f407c47c21e54474e6155ca71bf0e376450
  • Pointer size: 131 Bytes
  • Size of remote file: 744 kB
dataset/test_data.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7cfca869710258d5fd746fbd0c2ae8812603c3a9cd7bad4797bc8b8228b0160
+ size 2294862
dataset/train_data.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6687a849574ad843bcac34ba4ad00a186bf09284cd0a1ab00b73cc7a368d6767
+ size 9129437
example/download_dataset.py ADDED
@@ -0,0 +1,31 @@
+ from datasets import load_dataset
+ import pandas as pd
+
+ ### Measurements by cell ###
+ measurements_by_cell = load_dataset('CelfAI/COOPER', 'measurements_by_cell')
+
+ measurements_by_cell_data_train = measurements_by_cell['train'].to_pandas()
+ measurements_by_cell_data_test = measurements_by_cell['test'].to_pandas()
+ measurements_by_cell_data = pd.concat([measurements_by_cell_data_train, measurements_by_cell_data_test])
+
+ ### Topology ###
+ topology = load_dataset('CelfAI/COOPER', 'topology')
+ topology_data = topology['main'].to_pandas()
+
+ ### Performance indicators meanings ###
+ performance_indicators_meanings = load_dataset('CelfAI/COOPER', 'performance_indicators_meanings')
+ performance_indicators_meanings_data = performance_indicators_meanings['main'].to_pandas()
+
+ ### Optionally join measurements by cell and topology ###
+ all_data = pd.merge(measurements_by_cell_data, topology_data, on='LocalCellName', how='left')
+ pm_columns = [x for x in measurements_by_cell_data.columns.tolist() if x not in ['LocalCellName', 'datetime']]
+
+ # Example aggregations over the PM indicator columns
+ mean_by_cell = measurements_by_cell_data.groupby('LocalCellName')[pm_columns].mean().reset_index()
+ min_by_cell = measurements_by_cell_data.groupby('LocalCellName')[pm_columns].min().reset_index()
+
+ mean_by_band = all_data.groupby('Band')[pm_columns].mean().reset_index()
+ mean_by_site = all_data.groupby('SiteLabel')[pm_columns].mean().reset_index()
example/save_in_postgress.py ADDED
@@ -0,0 +1,285 @@
+ """
+ Load COOPER datasets from Hugging Face and persist them into a PostgreSQL database.
+
+ This script:
+ - Loads measurements_by_cell, topology, and performance_indicators_meanings from CelfAI/COOPER.
+ - Optionally computes aggregated views (mean/min by cell, mean by band/site).
+ - Creates the database if missing, then writes the main tables via pandas to_sql.
+
+ Usage:
+     python save_in_postgress.py
+
+ Requires: datasets, pandas, sqlalchemy, psycopg2-binary
+ """
+
+ from datasets import load_dataset
+ import pandas as pd
+ from sqlalchemy import create_engine, text
+
+ # ---------------------------------------------------------------------------
+ # Constants
+ # ---------------------------------------------------------------------------
+
+ DATASET_REPO = "CelfAI/COOPER"
+ SPLITS_MEASUREMENTS = ("train", "test")
+
+ # Default PostgreSQL connection (override via env or arguments if needed).
+ DEFAULT_CONFIG = {
+     "USERNAME": "postgres",
+     "PASSWORD": "postgres",
+     "HOST": "localhost",
+     "PORT": "5432",
+     "DB_NAME": "cooper",
+ }
+
+
+ # ---------------------------------------------------------------------------
+ # Data loading
+ # ---------------------------------------------------------------------------
+
+
+ def load_measurements_by_cell() -> pd.DataFrame:
+     """Load measurements_by_cell from COOPER, merge train and test splits."""
+     ds = load_dataset(DATASET_REPO, "measurements_by_cell")
+     train = ds["train"].to_pandas()
+     test = ds["test"].to_pandas()
+     return pd.concat([train, test], ignore_index=True)
+
+
+ def load_topology() -> pd.DataFrame:
+     """Load topology from COOPER (main split)."""
+     ds = load_dataset(DATASET_REPO, "topology")
+     return ds["main"].to_pandas()
+
+
+ def load_performance_indicators_meanings() -> pd.DataFrame:
+     """Load performance_indicators_meanings from COOPER (main split)."""
+     ds = load_dataset(DATASET_REPO, "performance_indicators_meanings")
+     return ds["main"].to_pandas()
+
+
+ def prepare_measurements_for_db(df: pd.DataFrame) -> pd.DataFrame:
+     """Normalize column names for PostgreSQL (dots -> underscores)."""
+     out = df.copy()
+     out.columns = out.columns.str.replace(".", "_", regex=False)
+     return out
+
+
+ def prepare_performance_indicators_for_db(df: pd.DataFrame) -> pd.DataFrame:
+     """Rename 3GPP_reference to reference_3gpp for a valid SQL identifier."""
+     out = df.copy()
+     out.rename(columns={"3GPP_reference": "reference_3gpp"}, inplace=True)
+     return out
+
+
+ # ---------------------------------------------------------------------------
+ # Optional aggregated views (for analytics; not written to DB in this script)
+ # ---------------------------------------------------------------------------
+
+
+ def compute_aggregations(
+     measurements: pd.DataFrame,
+     topology: pd.DataFrame,
+ ) -> dict[str, pd.DataFrame]:
+     """
+     Join measurements with topology and compute mean/min by cell, mean by band/site.
+     Returns a dict of DataFrames for optional export or analysis.
+     """
+     all_data = pd.merge(measurements, topology, on="LocalCellName", how="left")
+     pm_columns = [
+         c for c in measurements.columns
+         if c not in ("LocalCellName", "datetime")
+     ]
+
+     return {
+         "mean_by_cell": measurements.groupby("LocalCellName")[pm_columns].mean().reset_index(),
+         "min_by_cell": measurements.groupby("LocalCellName")[pm_columns].min().reset_index(),
+         "mean_by_band": all_data.groupby("Band")[pm_columns].mean().reset_index(),
+         "mean_by_site": all_data.groupby("SiteLabel")[pm_columns].mean().reset_index(),
+     }
+
+
+ # ---------------------------------------------------------------------------
+ # Database setup and population
+ # ---------------------------------------------------------------------------
+
+
+ def ensure_database(engine_admin, db_name: str) -> None:
+     """Create the database if it does not exist (idempotent)."""
+     with engine_admin.connect() as conn:
+         conn = conn.execution_options(isolation_level="AUTOCOMMIT")
+         result = conn.execute(
+             text("SELECT 1 FROM pg_database WHERE datname = :name"),
+             {"name": db_name},
+         )
+         if result.scalar() is None:
+             conn.execute(text(f"CREATE DATABASE {db_name} TEMPLATE template0;"))
+
+
+ def get_engine(config: dict, database: str | None = None):
+     """Build a SQLAlchemy engine for the given database (default: postgres)."""
+     db = database or "postgres"
+     url = (
+         f"postgresql+psycopg2://{config['USERNAME']}:{config['PASSWORD']}"
+         f"@{config['HOST']}:{config['PORT']}/{db}"
+     )
+     return create_engine(url)
+
+
+ def write_tables(engine, measurements: pd.DataFrame, topology: pd.DataFrame, performance_indicators: pd.DataFrame) -> None:
+     """Write the three main DataFrames to PostgreSQL (replace existing tables)."""
+     measurements.to_sql("measurements", engine, if_exists="replace", index=False)
+     performance_indicators.to_sql(
+         "performance_indicators_meanings", engine, if_exists="replace", index=False
+     )
+     topology.to_sql("topology", engine, if_exists="replace", index=False)
+
+
+ def list_public_tables(engine) -> list[tuple]:
+     """Return the list of (table_name,) rows in the public schema."""
+     with engine.connect() as conn:
+         result = conn.execute(
+             text(
+                 "SELECT table_name FROM information_schema.tables "
+                 "WHERE table_schema = 'public';"
+             )
+         )
+         return result.fetchall()
+
+
+ # ---------------------------------------------------------------------------
+ # DDL: CREATE TABLE IF NOT EXISTS (run before loading data)
+ # ---------------------------------------------------------------------------
+
+ query_Performance_Indicators_meaning = """
+ CREATE TABLE IF NOT EXISTS performance_indicators_meanings (
+     name TEXT PRIMARY KEY,
+     category TEXT,
+     description TEXT,
+     unit TEXT,
+     collection_method TEXT,
+     collection_condition TEXT,
+     measurement_entity TEXT,
+     reference_3gpp TEXT
+ );
+ """
+
+ query_Topology = """
+ CREATE TABLE IF NOT EXISTS topology (
+     SiteLabel TEXT,
+     LocalCellName TEXT PRIMARY KEY,
+     Sector INT,
+     PCI INT,
+     DuplexMode TEXT,
+     Band TEXT,
+     dlBandwidth TEXT,
+     Azimuth NUMERIC,
+     MDT INT,
+     EDT INT,
+     HBeamwidth INT,
+     AntennaHeight NUMERIC,
+     GroundHeight INT,
+     OperationMode TEXT,
+     distance_X NUMERIC,
+     distance_Y NUMERIC
+ );
+ """
+
+ query_Measurements = """
+ CREATE TABLE IF NOT EXISTS measurements (
+     LocalCellName TEXT REFERENCES topology(LocalCellName) ON DELETE CASCADE,
+     datetime TEXT,
+     RRC_ConnEstabSucc INT,
+     RRC_ConnEstabAtt INT,
+     RRC_Setup INT,
+     RRC_ConnMax INT,
+     MM_HoExeIntraFreqSuccOut INT,
+     MM_HoExeIntraFreqReqOut INT,
+     MM_HoExeIntraFreqSucc INT,
+     MM_HoExeIntraFreqAtt INT,
+     MM_HoExecInterFreqReqOut_Cov INT,
+     MM_HoExeInterFreqSuccOut_Cov INT,
+     MM_HoPrepInterFreqReqOut_Cov INT,
+     MM_HoExeInterFreqReqOut INT,
+     MM_HoExeInterFreqSuccOut INT,
+     MM_HoPrepInterFreqReqOut INT,
+     MM_HoPrepIntraFreqReqOut INT,
+     MM_HoFailIn_Admit INT,
+     MM_HoExeIntraFreqPrepReqIn INT,
+     MM_Redirection_Blind INT,
+     MM_Redirection_Cov INT,
+     CARR_WBCQIDist_Bin0 INT,
+     CARR_WBCQIDist_Bin1 INT,
+     CARR_WBCQIDist_Bin2 INT,
+     CARR_WBCQIDist_Bin3 INT,
+     CARR_WBCQIDist_Bin4 INT,
+     CARR_WBCQIDist_Bin5 INT,
+     CARR_WBCQIDist_Bin6 INT,
+     CARR_WBCQIDist_Bin7 INT,
+     CARR_WBCQIDist_Bin8 INT,
+     CARR_WBCQIDist_Bin9 INT,
+     CARR_WBCQIDist_Bin10 INT,
+     CARR_WBCQIDist_Bin11 INT,
+     CARR_WBCQIDist_Bin12 INT,
+     CARR_WBCQIDist_Bin13 INT,
+     CARR_WBCQIDist_Bin14 INT,
+     CARR_WBCQIDist_Bin15 INT,
+     ThpVolDl NUMERIC,
+     ThpVolUl NUMERIC,
+     ThpTimeDl NUMERIC,
+     ThpTimeUl NUMERIC,
+     CellUnavail_System INT,
+     CellUnavail_Manual INT,
+     CellUnavail_EnergySaving INT,
+     UECNTX_Est_Att INT,
+     UECNTX_Est_Succ INT,
+     UECNTX_Rem INT
+ );
+ """
+
+
+ def create_tables_if_not_exist(engine) -> None:
+     """Create tables from DDL if they do not exist (topology first, for the measurements FK)."""
+     with engine.connect() as conn:
+         conn.execute(text(query_Performance_Indicators_meaning))
+         conn.execute(text(query_Topology))
+         conn.execute(text(query_Measurements))
+         conn.commit()
+
+
+ # ---------------------------------------------------------------------------
+ # Main
+ # ---------------------------------------------------------------------------
+
+
+ def main(config: dict | None = None) -> None:
+     config = config or DEFAULT_CONFIG
+     db_name = config["DB_NAME"]
+
+     # 1) Load and prepare data
+     measurements = load_measurements_by_cell()
+     topology = load_topology()
+     performance_indicators = load_performance_indicators_meanings()
+
+     measurements = prepare_measurements_for_db(measurements)
+     performance_indicators = prepare_performance_indicators_for_db(performance_indicators)
+
+     # 2) Create the database if it does not exist, then connect to it
+     engine_admin = get_engine(config, database="postgres")
+     ensure_database(engine_admin, db_name)
+     engine = get_engine(config, database=db_name)
+
+     # 3) Create tables from DDL if they do not exist
+     create_tables_if_not_exist(engine)
+
+     # 4) Load data into tables (replace existing data)
+     write_tables(engine, measurements, topology, performance_indicators)
+
+     # 5) Verify: list tables in public schema
+     tables = list_public_tables(engine)
+     print("Tables in public schema:", tables)
+
+
+ if __name__ == "__main__":
+     main()
metadata/performance_indicators_meanings.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc26a56260baf17c33c13c778155ce3b056f427f74c7d9263d3f0c0e0e3a0a2d
+ size 19664
metadata/topology.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ca3a92f13679b7886e2def9e696170d90ae2dce1ce0e00a5c679a9c2583f7da
+ size 8211