Add research report
# Risk-Control Sequence Models: Research Report & Code Templates

## 1. App Install Sequence Modeling — Literature Survey

### Key Takeaway

> **Conclusion of the large-scale EBES 2024 benchmark: GRU + CoLES contrastive learning > plain Transformer**, ranking first on average across financial behavior sequences. Do not reach for a Transformer by default; a GRU with self-supervised pretraining is stronger. Transformers only overtake GRUs at billion-scale pretraining (TransactionGPT / BehaveGPT territory).

### Recommended Methods, Ranked

| Priority | Method | Paper | Link | Core Idea | When to Use |
|---|---|---|---|---|---|
| ⭐1 | **CoLES + GRU** | CoLES: Contrastive Learning for Event Sequences (SIGMOD 2022) | https://arxiv.org/abs/2002.08232 | Label-free self-supervised pretraining: random time slices form contrastive pairs; a GRU encodes the sequence into a user vector fed to downstream LightGBM | Medium-length sequences (<500), few labels |
| ⭐2 | **Graph-Augmented CoLES** | Beyond Isolated Clients (2026) | https://arxiv.org/abs/2604.09085 | Builds a user-app bipartite graph on top of CoLES; GraphSAGE produces app embeddings that replace the raw ones; AUC +2.3% | When app co-occurrence matters |
| 3 | **LBSF hierarchical folding Transformer** | Financial Risk via Long-term Behavior Sequence Folding (IEEE 2024) | https://arxiv.org/abs/2411.15056 | Folds the sequence by app category: intra-category Transformer, then inter-category Transformer; sinusoidal time encoding | Very long sequences (180+ days / 500+ events) |
| 4 | **TabBERT** | Tabular Transformers for Multivariate Time Series (IBM 2021) | https://arxiv.org/abs/2011.01843 | Two-level Transformer: field level (attention across app attributes) + sequence level (attention across install events); MLM pretraining | When apps carry rich attribute fields |
| 5 | **BehaveGPT + DRO** | BehaveGPT: Foundation Model for User Behavior (2025) | https://arxiv.org/abs/2505.17631 | GPT-style autoregression; distributionally robust optimization (DRO) handles the long tail of app types | Large data (>100M events) with a heavy long tail |
| 6 | **TransactionGPT** | TransactionGPT (Visa 2025) | https://arxiv.org/abs/2511.08939 | 3D Transformer with three encoding streams (features / metadata / time); billion-scale pretraining | Visa-scale data; out of reach for most companies |
| 7 | **BTF Self-Supervised** | Self-Attention for Banking Transaction Flow (2024) | https://arxiv.org/abs/2410.08243 | Custom tokenizer (text / amount / date); BERT-MLM pretraining, then credit-risk fine-tuning | When transaction records include text descriptions |

### Key Hyperparameters (CoLES, the top recommendation)

```
Sequence encoder:  GRU (not LSTM, not Transformer)
Hidden size:       256–512
Event embeddings:  app_category embed_dim=16, app_id embed_dim=32
Contrastive setup: batch=256, sub-slices per user K=4
Optimizer:         Adam, lr=1e-3
Time encoding:     inter-event time deltas as a continuous numeric feature
Downstream:        frozen vectors → LightGBM, or fine-tune GRU + MLP head
```

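To ground the recipe, here is a minimal, self-contained sketch of CoLES-style pretraining with a GRU encoder. This is **not** the pytorch-lifestream API: the field names (`app_id`, `app_cat`, `delta_t`), vocabulary sizes, and the simplified two-slice InfoNCE objective are illustrative assumptions (the paper samples K sub-slices per user and uses a metric-learning loss).

```python
# Minimal CoLES-style pretraining sketch (assumptions: integer-coded
# app_id/app_cat fields, sequences of length >= 16; not the ptls API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUEventEncoder(nn.Module):
    def __init__(self, n_apps=50_000, n_cats=64, hidden=256):
        super().__init__()
        self.app_emb = nn.Embedding(n_apps, 32)   # Top-50K app_id vocab
        self.cat_emb = nn.Embedding(n_cats, 16)   # app category
        self.gru = nn.GRU(32 + 16 + 1, hidden, batch_first=True)

    def forward(self, app_id, app_cat, delta_t):
        # delta_t: (B, T) inter-event time gaps as a continuous feature
        x = torch.cat([self.app_emb(app_id),
                       self.cat_emb(app_cat),
                       delta_t.unsqueeze(-1)], dim=-1)
        _, h = self.gru(x)               # h: (1, B, hidden)
        return h.squeeze(0)              # user vector, (B, hidden)

def random_slice(app_id, app_cat, delta_t, min_len=16, max_len=64):
    """Sample one random contiguous sub-slice per sequence (CoLES-style)."""
    T = app_id.size(1)
    L = torch.randint(min_len, min(max_len, T) + 1, (1,)).item()
    s = torch.randint(0, T - L + 1, (1,)).item()
    return app_id[:, s:s+L], app_cat[:, s:s+L], delta_t[:, s:s+L]

def coles_infonce(z1, z2, tau=0.1):
    """Two slices of the same user are positives; the rest of the batch
    are negatives (a simplified InfoNCE variant of the CoLES objective)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau           # (B, B) pairwise similarities
    target = torch.arange(z1.size(0))
    return F.cross_entropy(logits, target)

encoder = GRUEventEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Dummy batch: 256 users x 200 installs (replace with a real loader).
B, T = 256, 200
app_id = torch.randint(0, 50_000, (B, T))
app_cat = torch.randint(0, 64, (B, T))
delta_t = torch.rand(B, T)

for step in range(2):                    # pretraining loop (truncated demo)
    v1 = encoder(*random_slice(app_id, app_cat, delta_t))
    v2 = encoder(*random_slice(app_id, app_cat, delta_t))
    loss = coles_infonce(v1, v2)
    opt.zero_grad(); loss.backward(); opt.step()
```

After pretraining, the frozen user vectors can be exported and fed to LightGBM, per the downstream row in the block above.
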
### Key Engineering Advice

1. **App vocabulary**: the app universe may span millions of IDs, yet ~80% of users only install Top-500 apps. Suggested: keep the Top-50K `app_id`s and merge the long tail into `<OTHER>`, or encode hierarchically as app → category (level 1 / level 2) — tips 1–3 are sketched in code after this list
2. **Time encoding**: skip the Transformer positional encoding; use the actual time delta `Δt = t_i - t_{i-1}` (day/hour granularity) as a continuous feature
3. **Sequence truncation**: keep the most recent 200–500 installs and drop the oldest events beyond that
4. **Label scarcity**: pretrain with CoLES unsupervised (no labels needed), then feed the resulting 256-d user vectors to LightGBM

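A hedged preprocessing sketch for tips 1–3; the column names (`user_id`, `app_id`, `ts`) and the `preprocess` helper are hypothetical, not taken from the report's code files.

```python
# Sketch of tips 1-3: Top-K vocabulary truncation, Δt features, and
# recency-based truncation (column names are illustrative assumptions).
import pandas as pd

TOP_K, MAX_LEN = 50_000, 500

def preprocess(events: pd.DataFrame) -> pd.DataFrame:
    """events: one row per install, columns user_id, app_id, ts (datetime)."""
    events = events.sort_values(["user_id", "ts"])

    # Tip 1: keep the Top-K app_ids, merge the long tail into <OTHER>.
    top = events["app_id"].value_counts().nlargest(TOP_K).index
    events["app_id"] = events["app_id"].where(events["app_id"].isin(top), "<OTHER>")

    # Tip 2: inter-event time delta in days as a continuous feature.
    events["delta_t"] = (
        events.groupby("user_id")["ts"].diff().dt.total_seconds().div(86400).fillna(0.0)
    )

    # Tip 3: keep only each user's most recent MAX_LEN installs.
    return events.groupby("user_id").tail(MAX_LEN)
```
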
### Open-Source Libraries

| Library | Install | Purpose |
|---|---|---|
| **pytorch-lifestream** | `pip install pytorch-lifestream` | CoLES implementation with full tutorials |
| IBM TabFormer | https://github.com/IBM/TabFormer | TabBERT implementation |
| EBES Benchmark | https://github.com/on-point-rnd/ebes | 10 financial sequence datasets + model zoo |

---

## 2. Structured Credit-Bureau Data Modeling — Literature Survey

### Key Takeaway

> **On small-to-medium datasets (<100K rows), LightGBM/XGBoost remain the kings.** Deep learning only pays off with >100K rows plus a large unlabeled pool for pretraining. Among DL options, **TabM (an ensembled MLP, ICLR 2025) is faster and more stable than Transformers**.

### Recommended Methods, Ranked

| Priority | Method | Paper | Link | Core Idea | When to Use |
|---|---|---|---|---|---|
| ⭐1 | **LightGBM/XGBoost** (baseline) | Why do tree-based models still outperform DL on tabular data? (NeurIPS 2022) | https://arxiv.org/abs/2207.08815 | Across 45 datasets, GBDT reliably beats DL at small-to-medium scale | n<100K, noisy features |
| ⭐2 | **TabM + PLE** | TabM: Advancing Tabular DL with Parameter-Efficient Ensembling (ICLR 2025) | https://arxiv.org/abs/2410.24210 | MLP + BatchEnsemble (k=32) + piecewise-linear numeric encoding; SOTA on 46 datasets; 5-30x faster than FT-Transformer | n>50K and a DL approach is wanted |
| 3 | **FT-Transformer** | Revisiting DL Models for Tabular Data (NeurIPS 2021) | https://arxiv.org/abs/2106.11959 | Tokenizes each feature independently, then learns feature interactions via Transformer attention | When attention-based interpretability is needed |
| 4 | **FT-T + PLE numeric encoding** | On Embeddings for Numerical Features in Tabular DL (2022) | https://arxiv.org/abs/2203.05556 | Piecewise-linear (quantile-binned) encoding of numeric features before the Transformer narrows the gap to GBDT | Many numeric features with irregular distributions |
| 5 | **SAINT** | SAINT: Improved Neural Networks for Tabular Data (2021) | https://arxiv.org/abs/2106.01342 | Intra-row attention + inter-row (intersample) attention; CutMix + contrastive pretraining | Self-supervised pretraining when labels are scarce |
| 6 | **FinPT** | FinPT: Financial Risk Prediction with Profile Tuning (2023) | https://arxiv.org/abs/2308.00065 | Renders each table row as a natural-language profile, then fine-tunes an LLM | Semantically meaningful column names, interpretability desired |

### Key Hyperparameters (TabM, the top DL recommendation)

```
Framework:       MLP + BatchEnsemble
Ensemble size:   k=32
Hidden width:    256–512
Depth:           3–5 layers
Numeric encoding: PLE, bins=32 (quantile binning)
Dropout:         0.0–0.3
Optimizer:       AdamW, lr∈[1e-4, 3e-4], weight_decay=1e-5
Early stopping:  patience=16, metric=val_AUC
Preprocessing:   QuantileTransformer(output_distribution='normal')
```

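To illustrate the BatchEnsemble idea behind TabM, here is a minimal sketch. It is **not** the official yandex-research/tabm code; the PLE numeric encoding is omitted (assume features were already quantile-transformed as above), and layer sizes follow the hyperparameter block.

```python
# Minimal BatchEnsemble MLP in the spirit of TabM (illustrative only):
# one shared weight matrix per layer plus k rank-1 adapters.
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Shared weight + k rank-1 adapters (r, s) and per-member biases."""
    def __init__(self, d_in, d_out, k=32):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(d_in, d_out))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.r = nn.Parameter(torch.ones(k, d_in))    # input scalers
        self.s = nn.Parameter(torch.ones(k, d_out))   # output scalers
        self.b = nn.Parameter(torch.zeros(k, d_out))  # per-member bias

    def forward(self, x):                 # x: (B, k, d_in)
        return (x * self.r) @ self.weight * self.s + self.b

class TabMLike(nn.Module):
    def __init__(self, d_in, k=32, width=256, depth=3):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth):
            layers += [BatchEnsembleLinear(d, width, k), nn.ReLU(), nn.Dropout(0.1)]
            d = width
        self.body = nn.Sequential(*layers)
        self.head = BatchEnsembleLinear(width, 1, k)
        self.k = k

    def forward(self, x):                 # x: (B, d_in)
        x = x.unsqueeze(1).expand(-1, self.k, -1)     # same input to all k members
        logits = self.head(self.body(x)).squeeze(-1)  # (B, k)
        return logits.mean(dim=1)         # average the ensemble's logits

model = TabMLike(d_in=120)                # e.g. 120 bureau features
probs = torch.sigmoid(model(torch.randn(8, 120)))  # (8,) default probabilities
```

Averaging member logits is one reasonable aggregation choice here; averaging member probabilities is an equally defensible alternative.
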
### Key Engineering Advice

1. **Missing values**: add an `is_missing` binary indicator column per feature, then impute the original value with the median/mean; the model learns on its own that "missing = informative"
2. **Class imbalance** (bad-debt rate of 1-3%) — see the sketch after this list:
   - Training: `BCEWithLogitsLoss(pos_weight=N_neg/N_pos)`
   - Inference: threshold calibration (Youden's J / KS statistic)
   - **Avoid SMOTE** (it destroys probability calibration)
3. **Temporal consistency**: split train/val/test by time (never randomly); credit data drifts over time
4. **Final recipe**: an ensemble of `0.5 * LightGBM + 0.5 * TabM` usually beats either model alone

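A sketch of the imbalance recipe in tip 2: `pos_weight` during training, then a Youden's J threshold chosen on validation scores. The counts and arrays below are placeholders, not the report's actual data.

```python
# Class-weighted loss + threshold calibration via Youden's J = TPR - FPR
# (whose maximum equals the KS statistic).
import numpy as np
import torch
from sklearn.metrics import roc_curve

# --- Training: up-weight the rare positive (bad-debt) class. ---
n_pos, n_neg = 3_000, 97_000                      # ~3% bad rate (placeholder)
loss_fn = torch.nn.BCEWithLogitsLoss(
    pos_weight=torch.tensor(n_neg / n_pos))       # pos_weight = N_neg / N_pos

# --- Inference: pick the threshold maximizing Youden's J on validation. ---
def calibrate_threshold(y_true: np.ndarray, p_pred: np.ndarray) -> float:
    fpr, tpr, thresholds = roc_curve(y_true, p_pred)
    return float(thresholds[np.argmax(tpr - fpr)])

y_val = np.random.binomial(1, 0.03, 5_000)        # placeholder labels
p_val = np.clip(y_val * 0.4 + np.random.rand(5_000) * 0.6, 0, 1)
tau = calibrate_threshold(y_val, p_val)
decisions = (p_val >= tau).astype(int)
```
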
### DL vs GBDT Decision Tree

```
Credit bureau data regime:
├── n < 10K labeled samples: → XGBoost/LightGBM wins clearly
├── n = 10K–100K: → GBDT baseline first; try TabM if GBDT saturates
├── n > 100K: → TabM competitive; FT-T with PLE can win
├── Large unlabeled pool available: → SAINT pretraining (contrastive) adds +2-3%
├── Column names semantic: → Consider FinPT/LLM approach
└── Out-of-time evaluation: → domain-aware splits critical
```

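The last leaf above (out-of-time evaluation) in code, assuming a hypothetical `app_date` column; cutoffs are illustrative.

```python
# Out-of-time split: train on history, validate on the next window,
# test on the newest data. Never split credit data randomly.
import pandas as pd

def out_of_time_split(df: pd.DataFrame, val_start: str, test_start: str):
    df = df.sort_values("app_date")
    train = df[df["app_date"] < val_start]
    val = df[(df["app_date"] >= val_start) & (df["app_date"] < test_start)]
    test = df[df["app_date"] >= test_start]
    return train, val, test

# e.g.: train, val, test = out_of_time_split(df, "2024-01-01", "2024-07-01")
```
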
### Open-Source Libraries

| Library | Install | Purpose |
|---|---|---|
| **TabM** (Yandex) | https://github.com/yandex-research/tabm | ICLR 2025, tabular DL SOTA |
| **rtdl_revisiting_models** | `pip install rtdl_revisiting_models` | Official FT-Transformer implementation |
| **rtdl_num_embeddings** | `pip install rtdl_num_embeddings` | PLE / periodic numeric encodings |
| **pytorch-tabular** | `pip install pytorch-tabular[extra]` | Unified framework (incl. FT-T, TabNet, SAINT) |

---

## 3. Fusion Strategy

```
Your data situation:
├── App install sequences:
│   ├── Plenty of unlabeled data (all users)? → CoLES self-supervised pretraining ⭐
│   ├── App co-occurrence matters? → Graph-Augmented CoLES
│   ├── Very long sequences (>500)? → LBSF hierarchical folding
│   └── Apps have rich attributes (category/permissions/size)? → TabBERT
│
├── Credit-bureau data:
│   ├── Sample size < 100K? → LightGBM baseline first
│   ├── Sample size > 100K and DL wanted? → TabM + PLE
│   ├── Few labels but many unlabeled rows? → SAINT pretraining
│   └── Needs to fuse with the app-sequence model? → each model outputs a vector → concat → MLP/LightGBM
│
└── How to combine the two models for the final decision?
    → App-sequence model outputs a 256-d user vector
    → Credit model outputs a predicted probability or intermediate features
    → Late fusion: concat → LightGBM / Logistic Regression
      (no early fusion — the two data sources differ too much in nature)
```

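A minimal late-fusion sketch matching the final branch above. The inputs are placeholders standing in for the outputs of `app_sequence_model.py` and `credit_bureau_model.py`.

```python
# Late fusion: concatenate the 256-d sequence embedding with the credit
# model's probability, then fit a logistic regression on top.
import numpy as np
from sklearn.linear_model import LogisticRegression

n = 10_000
seq_vec = np.random.randn(n, 256)          # 256-d user vectors (CoLES encoder)
credit_p = np.random.rand(n, 1)            # credit model probabilities
y = np.random.binomial(1, 0.03, n)         # default labels (placeholder)

X = np.hstack([seq_vec, credit_p])         # late fusion: simple concatenation
fuser = LogisticRegression(max_iter=1000, class_weight="balanced")
fuser.fit(X, y)
risk_score = fuser.predict_proba(X)[:, 1]  # final risk probability
```
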
---

## 4. Code Files

| File | Contents |
|---|---|
| `app_sequence_model.py` | Full app-install-sequence pipeline: CoLES+GRU pretraining → supervised fine-tuning → LightGBM → graph augmentation |
| `credit_bureau_model.py` | Credit-bureau modeling: TabM + PLE + FT-Transformer + LightGBM ensemble + threshold calibration + PSI monitoring |
| `fusion_model.py` | Late fusion: merges both models' outputs into the final risk decision |

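Since `credit_bureau_model.py` is described as including PSI monitoring, here is a generic PSI (Population Stability Index) computation for reference — a standard-formula sketch, not an excerpt from that file.

```python
# PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%).
# Common reading: < 0.1 stable, 0.1-0.25 watch, > 0.25 significant drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a baseline (expected) sample and a current (actual) one."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    e_frac = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))
```
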
Usage:

1. Replace the feature column names in each file's `CONFIG` with your actual fields
2. Swap the data-loading sections for your own data sources
3. Run Stage 1 of the app-sequence model first (unsupervised pretraining, no labels needed)
4. Then run the credit-bureau model (LightGBM baseline → TabM)
5. Finally run fusion_model for the fused risk decision