kangkangchen committed on
Commit 441286f · verified · 1 Parent(s): d730c93

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +70 -69
  2. config.json +28 -21
  3. model.pt +2 -2
  4. model.py +35 -286
  5. norm_stats.npz +3 -0
README.md CHANGED
@@ -1,105 +1,106 @@
- # LOBPatternNet - 主力下单模式识别模型
- # LOBPatternNet - Institutional Trading Pattern Detection from Level-2 Order Book

- ## 模型简介 / Model Overview

- 模型基于A股Level-2十档委托单数据,利用深度学习自动识别主力(机构投资者)的下单模式。
- 通过分析买卖委托的价格分布、挂单量、订单流不平衡(OFI)等微观结构特征,判断当前是否存在主力买入或卖出行为。

- This model detects institutional (主力) trading patterns from Level-2 order book data with 10 price levels.
- It analyzes bid/ask price distributions, order sizes, Order Flow Imbalance (OFI), and other microstructure
- features to classify market states into institutional buying, neutral, or institutional selling.

  ## 架构 / Architecture

  ```
- Input: (batch, 100, 40) - 100 consecutive LOB snapshots × 40 features
- BilinearNorm - 自适应归一化层 (adaptive normalization)
- Spatial CNN (Conv2d) - 提取价位间空间特征 (cross-level patterns)
- Inception Module × 2 - 多尺度时间特征提取 (multi-scale temporal)
- Transformer Attention × 2 - 时序依赖建模 (temporal dependencies)
- Fusion with Auxiliary Features:
-     - 订单流不平衡 (OFI)
-     - 价差动态 (Spread dynamics)
-     - 深度不平衡 (Depth imbalance)
-     - 大单集中度 (Volume concentration)
-     - 价格压力 (Price pressure)
-     - OFI波动率 (OFI volatility)
- 3-class Classification Head
  ```

- **Total Parameters**: 259,899

- ## 输出类别 / Output Classes

- | Label | 中文 | English | Description |
- |-------|------|---------|-------------|
- | 0 | 主力买入 | Institutional Buying | 检测到机构大量买入信号 |
- | 1 | 中性/散户 | Neutral/Retail | 无明显主力操盘迹象 |
- | 2 | 主力卖出 | Institutional Selling | 检测到机构大量卖出信号 |

- ## 性能指标 / Performance

  | Metric | Value |
  |--------|-------|
- | Test Accuracy | 0.4777 |
- | Test F1 (Macro) | 0.4127 |
- | Test F1 (Weighted) | 0.5091 |
- | 主力买入 Precision | 0.2369 |
- | 主力买入 Recall | 0.4251 |
- | 主力卖出 Precision | 0.2679 |
- | 主力卖出 Recall | 0.4983 |

  ## 使用方法 / Usage

  ```python
  import torch
- from model import LOBPatternNet

  # Load model
- model = LOBPatternNet(seq_len=100, num_classes=3, d_model=128, nhead=4, num_attn_layers=2)
  model.load_state_dict(torch.load("model.pt", weights_only=True))
  model.eval()

- # Input: 100 consecutive Level-2 snapshots
- # Each snapshot: [ask_p1, ask_s1, bid_p1, bid_s1, ask_p2, ask_s2, ..., bid_p10, bid_s10]
- # Features should be z-score normalized (see data_processor.py)
- x = torch.randn(1, 100, 40)  # example input

  with torch.no_grad():
      logits = model(x)
      probs = torch.softmax(logits, dim=1)
-     pred = logits.argmax(dim=1)

- # pred: 0=主力买入, 1=中性, 2=主力卖出
- labels = ["主力买入", "中性/散户", "主力卖出"]
- print(f"Prediction: {labels[pred.item()]}")
- print(f"Confidence: {probs[0, pred.item()]:.2%}")
- ```

- ## 数据格式 / Input Format

- 每个Level-2快照包含40个特征 (10档 × 4个字段):

- | Feature | Description | 说明 |
- |---------|-------------|------|
- | ask_price_i | Ask price at level i | 第i档卖出价 |
- | ask_size_i | Ask volume at level i | 第i档卖出量 |
- | bid_price_i | Bid price at level i | 第i档买入价 |
- | bid_size_i | Bid volume at level i | 第i档买入量 |

- ## 参考文献 / References

- - **DeepLOB**: Zhang et al., "DeepLOB: Deep Convolutional Neural Networks for Limit Order Books", IEEE TSP 2019 (arXiv:1808.03668)
- - **TLOB**: Berti & Kasneci, "TLOB: A Novel Transformer Model with Dual Attention for Stock Price Trend Prediction", 2025 (arXiv:2502.15757)
- - **Training Dataset**: [LeonardoBerti/TRADES-LOB](https://huggingface.co/datasets/LeonardoBerti/TRADES-LOB)

  ## 声明 / Disclaimer

- 本模型仅供研究学习使用,不构成任何投资建议。股市有风险,入市需谨慎。
- This model is for research purposes only and does not constitute investment advice.
 
+ ---
+ tags:
+ - finance
+ - order-book
+ - institutional-trading
+ - level-2
+ - A-share
+ - LOB
+ - pytorch
+ license: mit
+ ---

+ # LOBPatternNet V3 - 主力下单模式识别模型

+ ## 模型简介 / Overview

+ 基于A股Level-2十档委托单(LOB)数据,利用深度学习自动识别主力(机构)的下单模式。
+
+ Detects institutional trading patterns from Level-2 order book data (10-level bid/ask).

  ## 架构 / Architecture

  ```
+ Input: (batch, 100, 40) - 100 consecutive LOB snapshots
+ Each snapshot: [ask_p₁, ask_s₁, bid_p₁, bid_s₁, ..., ask_p₁₀, ask_s₁₀, bid_p₁₀, bid_s₁₀]
+   ↓ BilinearNorm (adaptive normalization)
+   ↓ Spatial CNN (cross-level patterns)
+   ↓ Temporal CNN (multi-scale time features)
+   ↓ Transformer Attention (temporal dependencies)
+   ↓ 3-class Classification
  ```

+ Parameters: 85,803
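
A quick shape and parameter-count sanity check (a minimal sketch; it assumes `model.py` from this repo is on the import path):

```python
import torch
from model import LOBPatternNetV3

model = LOBPatternNetV3(num_classes=3, d_model=64, nhead=4, dropout=0.4)
x = torch.randn(2, 100, 40)  # (batch, 100 snapshots, 40 LOB features)
print(model(x).shape)        # torch.Size([2, 3]) - one logit per class
print(sum(p.numel() for p in model.parameters()))  # 85803 with the defaults above
```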

+ ## 输出 / Output Classes

+ | ID | 中文 | English |
+ |----|------|---------|
+ | 0 | 主力买入 | Institutional Buying |
+ | 1 | 中性/散户 | Neutral / Retail |
+ | 2 | 主力卖出 | Institutional Selling |

+ ## 性能 / Performance

  | Metric | Value |
  |--------|-------|
+ | Test Accuracy | 0.1579 |
+ | Test F1 (Macro) | 0.1634 |
+ | Test F1 (Weighted) | 0.0725 |
+ | 主力买入 Precision | 0.1306 |
+ | 主力买入 Recall | 0.4739 |
+ | 主力卖出 Precision | 0.1876 |
+ | 主力卖出 Recall | 0.5947 |

  ## 使用方法 / Usage

  ```python
  import torch
+ import numpy as np
+ from model import LOBPatternNetV3

  # Load model
+ model = LOBPatternNetV3(num_classes=3, d_model=64, nhead=4, dropout=0.4)
  model.load_state_dict(torch.load("model.pt", weights_only=True))
  model.eval()

+ # Load normalization stats
+ stats = np.load("norm_stats.npz")
+ means, stds = stats["means"], stats["stds"]
+
+ # Prepare input: 100 consecutive Level-2 snapshots, shape (100, 40)
+ # Each snapshot: [ask_price_1, ask_size_1, bid_price_1, bid_size_1, ...]
+ # 1. Replace sentinel values (abs > 1e9) with 0
+ # 2. Normalize prices to basis points relative to mid-price
+ # 3. Log-transform sizes with log1p
+ # 4. Z-score normalize using means/stds
+ # (steps 1-3 are sketched after this block)
+ raw_data = ...  # your (100, 40) LOB snapshot array, steps 1-3 already applied
+ normalized = (raw_data - means) / stds
+ x = torch.from_numpy(normalized).unsqueeze(0).float()
+
  with torch.no_grad():
      logits = model(x)
      probs = torch.softmax(logits, dim=1)
+     pred = logits.argmax(dim=1).item()

+ labels = ["主力买入 (Institutional Buy)", "中性 (Neutral)", "主力卖出 (Institutional Sell)"]
+ print(f"预测: {labels[pred]}, 置信度: {probs[0, pred]:.1%}")
+ ```
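
The snippet above applies only the final z-score step; steps 1-3 are described in the comments but not shown. Below is a minimal preprocessing sketch. The interleaved `[price, size]` column layout and the 1e4 basis-point scale are assumptions inferred from the comments, not necessarily the exact training pipeline; `preprocess_lob` is an illustrative helper, not part of this repo:

```python
import numpy as np

def preprocess_lob(raw, means, stds):
    """Steps 1-4 for a (100, 40) raw snapshot array (assumed conventions)."""
    x = raw.astype(np.float64).copy()
    x[np.abs(x) > 1e9] = 0.0                     # 1. zero out sentinel values
    price_cols = np.arange(0, 40, 2)             # even columns: ask/bid prices (assumed)
    size_cols = np.arange(1, 40, 2)              # odd columns: ask/bid sizes (assumed)
    mid = (x[:, 0] + x[:, 2]) / 2.0              # mid-price from ask_price_1, bid_price_1
    # 2. prices -> basis points relative to the per-snapshot mid-price
    x[:, price_cols] = (x[:, price_cols] - mid[:, None]) / (mid[:, None] + 1e-8) * 1e4
    x[:, size_cols] = np.log1p(x[:, size_cols])  # 3. compress heavy-tailed sizes
    return (x - means) / stds                    # 4. z-score with the shipped stats
```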

+ ## 训练细节 / Training Details

+ - **Dataset**: [LeonardoBerti/TRADES-LOB](https://huggingface.co/datasets/LeonardoBerti/TRADES-LOB) (265K order events, 10-level LOB)
+ - **Label Construction**: Order Flow Imbalance (OFI) + Large Order Ratio + Cancellation Rate (see the labeling sketch after this list)
+ - **Loss**: Focal Loss (γ=2.0) + Label Smoothing (0.1) + Class Weighting (see the loss sketch after this list)
+ - **Regularization**: Dropout 0.4, Weight Decay 5e-4, Mixup Augmentation (α=0.3)
+ - **Optimizer**: AdamW, lr=3e-4, Cosine Annealing with Warm Restarts
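
A minimal sketch of what the label construction could look like, using the window and threshold values recorded in `config.json` (window=50, ofi_threshold=0.15, score_percentile=80). The way the three signals are combined and the helper `make_labels` are assumptions, not the repo's actual labeling code:

```python
import numpy as np

def make_labels(ofi, large_order_ratio, cancel_rate,
                window=50, ofi_threshold=0.15, score_percentile=80):
    """Hypothetical 3-class labeling: 0 = Buy, 1 = Neutral, 2 = Sell."""
    kernel = np.ones(window) / window
    ofi_smooth = np.convolve(ofi, kernel, mode="same")  # smooth OFI over the window
    # Assumed combination: OFI reinforced by large orders, damped by cancellations
    score = ofi_smooth * (1.0 + large_order_ratio) * (1.0 - cancel_rate)
    cutoff = max(ofi_threshold, np.percentile(np.abs(score), score_percentile))
    labels = np.full(score.shape, 1, dtype=np.int64)    # default: Neutral
    labels[score >= cutoff] = 0                         # Institutional Buy
    labels[score <= -cutoff] = 2                        # Institutional Sell
    return labels
```

The loss is standard enough to sketch: label-smoothed cross entropy with focal modulation and optional per-class weights (γ and the smoothing value come from the list above; the helper name is illustrative, not the repo's training code):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, weight=None, gamma=2.0, smoothing=0.1):
    n = logits.size(-1)
    logp = F.log_softmax(logits, dim=-1)
    # Label-smoothed target distribution
    true_dist = torch.full_like(logp, smoothing / (n - 1))
    true_dist.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    ce = -(true_dist * logp).sum(dim=-1)    # per-sample cross entropy
    pt = logp.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    loss = (1.0 - pt) ** gamma * ce         # focal down-weighting of easy samples
    if weight is not None:                  # per-class weighting
        loss = loss * weight[targets]
    return loss.mean()
```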
 
+ ## 参考 / References

+ - DeepLOB: Zhang et al., IEEE TSP 2019 (arXiv:1808.03668)
+ - TLOB: Berti & Kasneci, 2025 (arXiv:2502.15757)

  ## 声明 / Disclaimer

+ 本模型仅供研究学习使用,不构成任何投资建议。股市有风险,入市需谨慎。
+ This model is for research purposes only. Not investment advice.
config.json CHANGED
@@ -1,36 +1,43 @@
  {
-   "model_type": "LOBPatternNet",
-   "architecture": "CNN + Inception + Transformer Attention + Auxiliary Features",
    "num_levels": 10,
    "seq_len": 100,
    "num_classes": 3,
-   "d_model": 128,
    "nhead": 4,
-   "num_attn_layers": 2,
-   "dropout": 0.2,
    "class_names": [
-     "主力买入 (Institutional Buy)",
      "中性 (Neutral)",
-     "主力卖出 (Institutional Sell)"
    ],
    "class_names_zh": [
      "主力买入",
      "中性/散户",
      "主力卖出"
    ],
-   "total_parameters": 259899,
-   "training_dataset": "LeonardoBerti/TRADES-LOB",
-   "test_accuracy": 0.47769423558897245,
-   "test_f1_macro": 0.4126581408122072,
-   "test_f1_weighted": 0.5091308416210424,
-   "test_precision_per_class": [
-     0.23689320388349513,
-     0.7402173913043478,
-     0.26785714285714285
    ],
-   "test_recall_per_class": [
-     0.4250871080139373,
-     0.4840085287846482,
-     0.4983388704318937
-   ]
  }
 
  {
+   "model_type": "LOBPatternNetV3",
+   "architecture": "CNN (Spatial) + CNN (Temporal) + Transformer Attention",
    "num_levels": 10,
    "seq_len": 100,
    "num_classes": 3,
+   "d_model": 64,
    "nhead": 4,
+   "dropout": 0.4,
+   "total_parameters": 85803,
    "class_names": [
+     "主力买入 (Buy)",
      "中性 (Neutral)",
+     "主力卖出 (Sell)"
    ],
    "class_names_zh": [
      "主力买入",
      "中性/散户",
      "主力卖出"
    ],
+   "test_accuracy": 0.15789473684210525,
+   "test_f1_macro": 0.16335941375062,
+   "test_f1_weighted": 0.07250430112144952,
+   "test_precision": [
+     0.13064361191162344,
+     0.0,
+     0.18763102725366876
+   ],
+   "test_recall": [
+     0.4738675958188153,
+     0.0,
+     0.5946843853820598
    ],
+   "training_dataset": "LeonardoBerti/TRADES-LOB",
+   "normalization": "z-score (means/stds in norm_stats.npz)",
+   "label_construction": {
+     "method": "OFI + large_order_ratio + cancellation_rate",
+     "window": 50,
+     "ofi_threshold": 0.15,
+     "large_order_percentile": 85,
+     "score_percentile": 80
+   }
  }
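
The constructor hyperparameters above can also be read back from `config.json` instead of being hard-coded (a minimal sketch; assumes the file sits next to `model.py` and `model.pt` as in this repo):

```python
import json
import torch
from model import LOBPatternNetV3

with open("config.json") as f:
    cfg = json.load(f)
model = LOBPatternNetV3(num_classes=cfg["num_classes"], d_model=cfg["d_model"],
                        nhead=cfg["nhead"], dropout=cfg["dropout"])
model.load_state_dict(torch.load("model.pt", weights_only=True))
model.eval()
```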
model.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:aa391467f5bc207ba527cda22072d606488d8e3cb07b10e60512451e7bc8733b
- size 1073163

  version https://git-lfs.github.com/spec/v1
+ oid sha256:b8cba3876b1f6e97f0a5c424e4313e38fd8a83e5f5cb5550d2a0d55bd1d56feb
+ size 366176
model.py CHANGED
@@ -1,311 +1,60 @@
- """
- LOBPatternNet: Deep Learning Model for Institutional Trading Pattern Detection
- from Level-2 Order Book Data (10-level bid/ask)
-
- Architecture: CNN (spatial) + Inception (multi-scale) + Transformer Attention (temporal) + MLP Head
- Based on DeepLOB (Zhang et al. 2019) + TLOB (Berti & Kasneci 2025) design principles
-
- Input: (batch, seq_len, 40) - seq_len consecutive LOB snapshots, each with 40 features:
-     [ask_price_1..10, ask_size_1..10, bid_price_1..10, bid_size_1..10]
-
- Output: 3-class classification
-     0: 主力买入 (Institutional Buying)
-     1: 中性/散户 (Neutral/Retail)
-     2: 主力卖出 (Institutional Selling)
- """
-
  import torch
  import torch.nn as nn
- import torch.nn.functional as F
- import math
-
  class BilinearNorm(nn.Module):
-     """Bilinear normalization layer from TLOB - handles price/volume scale mismatch."""
      def __init__(self, num_features):
          super().__init__()
          self.gamma = nn.Parameter(torch.ones(1, 1, num_features))
          self.beta = nn.Parameter(torch.zeros(1, 1, num_features))
          self.gate = nn.Parameter(torch.ones(1, 1, num_features))
-
      def forward(self, x):
-         # x: (B, T, F)
          mean = x.mean(dim=1, keepdim=True)
          std = x.std(dim=1, keepdim=True) + 1e-8
          x_norm = (x - mean) / std
          gate = torch.sigmoid(self.gate)
          return gate * (self.gamma * x_norm + self.beta) + (1 - gate) * x
-
- class InceptionModule(nn.Module):
-     """Inception module for multi-scale temporal feature extraction."""
-     def __init__(self, in_channels, out_channels=32):
-         super().__init__()
-         self.branch1 = nn.Sequential(
-             nn.Conv1d(in_channels, out_channels, kernel_size=1),
-             nn.BatchNorm1d(out_channels),
-             nn.LeakyReLU(0.01)
-         )
-         self.branch3 = nn.Sequential(
-             nn.Conv1d(in_channels, out_channels, kernel_size=3, padding=1),
-             nn.BatchNorm1d(out_channels),
-             nn.LeakyReLU(0.01)
-         )
-         self.branch5 = nn.Sequential(
-             nn.Conv1d(in_channels, out_channels, kernel_size=5, padding=2),
-             nn.BatchNorm1d(out_channels),
-             nn.LeakyReLU(0.01)
-         )
-         self.pool_branch = nn.Sequential(
-             nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
-             nn.Conv1d(in_channels, out_channels, kernel_size=1),
-             nn.BatchNorm1d(out_channels),
-             nn.LeakyReLU(0.01)
-         )
-
-     def forward(self, x):
-         # x: (B, C, T)
-         return torch.cat([self.branch1(x), self.branch3(x),
-                           self.branch5(x), self.pool_branch(x)], dim=1)
-
- class TemporalAttention(nn.Module):
-     """Multi-head self-attention for temporal dependencies in order flow."""
-     def __init__(self, d_model, nhead=4, dropout=0.1):
-         super().__init__()
-         self.attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout, batch_first=True)
-         self.norm = nn.LayerNorm(d_model)
-         self.dropout = nn.Dropout(dropout)
-
-     def forward(self, x):
-         # x: (B, T, D)
-         attn_out, _ = self.attn(x, x, x)
-         return self.norm(x + self.dropout(attn_out))
-
- class LOBPatternNet(nn.Module):
-     """
-     Full model for institutional trading pattern detection from Level-2 LOB data.
-
-     Architecture:
-         1. BilinearNorm → normalize raw LOB features
-         2. CNN spatial encoder → extract cross-level order book patterns
-         3. Inception → multi-scale temporal features
-         4. Transformer attention → capture temporal dependencies
-         5. Classification head → 3-class output
-     """
-     def __init__(self,
-                  num_levels=10,       # number of price levels (10 for Level-2)
-                  seq_len=100,         # number of consecutive LOB snapshots
-                  num_classes=3,       # 主力买入, 中性, 主力卖出
-                  d_model=128,         # internal feature dimension
-                  nhead=4,             # attention heads
-                  num_attn_layers=2,   # number of attention layers
-                  dropout=0.2):
          super().__init__()
-
-         self.num_levels = num_levels
-         self.seq_len = seq_len
-         self.num_features = num_levels * 4  # 40 features: ask_p, ask_s, bid_p, bid_s × 10 levels
-
-         # 1. Bilinear normalization
-         self.norm = BilinearNorm(self.num_features)
-
-         # 2. Spatial CNN encoder - processes each snapshot across price levels
-         #    Reshape to (B, 1, T, 40) for 2D conv
-         self.spatial_cnn = nn.Sequential(
-             # Conv across features (price-volume pairing per level)
-             nn.Conv2d(1, 32, kernel_size=(1, 2), stride=(1, 2)),   # (B, 32, T, 20)
-             nn.BatchNorm2d(32),
-             nn.LeakyReLU(0.01),
-             nn.Conv2d(32, 32, kernel_size=(1, 2), stride=(1, 2)),  # (B, 32, T, 10)
-             nn.BatchNorm2d(32),
-             nn.LeakyReLU(0.01),
-             nn.Conv2d(32, 32, kernel_size=(1, 10)),                # (B, 32, T, 1)
-             nn.BatchNorm2d(32),
-             nn.LeakyReLU(0.01),
-         )
-
-         # 3. Inception module for multi-scale temporal features
-         self.inception1 = InceptionModule(32, 32)    # Output: 128 channels
-         self.inception2 = InceptionModule(128, 32)   # Output: 128 channels
-
-         # 4. Projection to d_model
-         self.proj = nn.Sequential(
-             nn.Linear(128, d_model),
-             nn.LayerNorm(d_model),
-             nn.LeakyReLU(0.01),
-             nn.Dropout(dropout)
-         )
-
-         # 5. Transformer attention layers
-         self.attention_layers = nn.ModuleList([
-             TemporalAttention(d_model, nhead, dropout)
-             for _ in range(num_attn_layers)
-         ])
-
-         # 6. Classification head
          self.classifier = nn.Sequential(
-             nn.Linear(d_model, 64),
-             nn.LeakyReLU(0.01),
              nn.Dropout(dropout),
-             nn.Linear(64, num_classes)
-         )
-
-         # Additional feature engineering layer
-         # Processes derived features: OFI, VPIN, spread, depth imbalance
-         self.aux_features_dim = 6  # number of derived features
-         self.aux_encoder = nn.Sequential(
-             nn.Linear(self.aux_features_dim, 32),
-             nn.LeakyReLU(0.01),
-             nn.Linear(32, d_model),
-             nn.LeakyReLU(0.01),
-             nn.Dropout(dropout)
-         )
-
-         # Fusion layer
-         self.fusion = nn.Sequential(
-             nn.Linear(d_model * 2, d_model),
-             nn.LeakyReLU(0.01),
-             nn.Dropout(dropout)
          )
-
-         self._init_weights()
-
-     def _init_weights(self):
-         for m in self.modules():
-             if isinstance(m, (nn.Linear, nn.Conv1d, nn.Conv2d)):
-                 nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='leaky_relu')
-                 if m.bias is not None:
-                     nn.init.constant_(m.bias, 0)
-             elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
-                 nn.init.constant_(m.weight, 1)
-                 nn.init.constant_(m.bias, 0)
-
-     def compute_aux_features(self, x):
-         """
-         Compute derived microstructure features from raw LOB data.
-         x: (B, T, 40) raw LOB features
-         Returns: (B, 6) aggregated auxiliary features
-         """
-         B, T, F = x.shape
-
-         # Parse LOB structure: ask_p(10), ask_s(10), bid_p(10), bid_s(10)
-         ask_prices = x[:, :, 0:10]   # (B, T, 10)
-         ask_sizes = x[:, :, 10:20]   # (B, T, 10)
-         bid_prices = x[:, :, 20:30]  # (B, T, 10)
-         bid_sizes = x[:, :, 30:40]   # (B, T, 10)
-
-         # 1. Order Flow Imbalance (OFI) - key institutional signal
-         total_bid = bid_sizes.sum(dim=-1)  # (B, T)
-         total_ask = ask_sizes.sum(dim=-1)  # (B, T)
-         ofi = (total_bid - total_ask) / (total_bid + total_ask + 1e-8)
-         ofi_mean = ofi.mean(dim=1, keepdim=True)  # (B, 1)
-
-         # 2. Spread dynamics
-         spread = ask_prices[:, :, 0] - bid_prices[:, :, 0]  # (B, T)
-         spread_mean = spread.mean(dim=1, keepdim=True)
-
-         # 3. Depth imbalance at top levels (1-3)
-         top_bid = bid_sizes[:, :, :3].sum(dim=-1)  # (B, T)
-         top_ask = ask_sizes[:, :, :3].sum(dim=-1)  # (B, T)
-         depth_imb = (top_bid - top_ask) / (top_bid + top_ask + 1e-8)
-         depth_imb_mean = depth_imb.mean(dim=1, keepdim=True)
-
-         # 4. Volume concentration (institutional = concentrated at few levels)
-         bid_concentration = bid_sizes[:, :, 0] / (bid_sizes.sum(dim=-1) + 1e-8)
-         bid_conc_mean = bid_concentration.mean(dim=1, keepdim=True)
-
-         # 5. Price pressure (volume weighted by distance from mid)
-         mid_price = (ask_prices[:, :, 0] + bid_prices[:, :, 0]) / 2
-         bid_pressure = (bid_sizes * (mid_price.unsqueeze(-1) - bid_prices).abs()).sum(dim=-1)
-         ask_pressure = (ask_sizes * (ask_prices - mid_price.unsqueeze(-1)).abs()).sum(dim=-1)
-         pressure_ratio = (bid_pressure - ask_pressure) / (bid_pressure + ask_pressure + 1e-8)
-         pressure_mean = pressure_ratio.mean(dim=1, keepdim=True)
-
-         # 6. Temporal volatility of OFI (sudden changes = institutional activity)
-         ofi_vol = ofi.std(dim=1, keepdim=True)
-
-         return torch.cat([ofi_mean, spread_mean, depth_imb_mean,
-                           bid_conc_mean, pressure_mean, ofi_vol], dim=1)  # (B, 6)
-
      def forward(self, x):
-         """
-         x: (B, T, 40) - batch of LOB snapshot sequences
-         Returns: (B, num_classes) logits
-         """
-         B, T, F = x.shape
-
-         # Compute auxiliary features before normalization
-         aux_feats = self.compute_aux_features(x)   # (B, 6)
-         aux_encoded = self.aux_encoder(aux_feats)  # (B, d_model)
-
-         # 1. Bilinear normalization
-         x = self.norm(x)  # (B, T, 40)
-
-         # 2. Spatial CNN
-         x = x.unsqueeze(1)       # (B, 1, T, 40)
-         x = self.spatial_cnn(x)  # (B, 32, T, 1)
-         x = x.squeeze(-1)        # (B, 32, T)
-
-         # 3. Inception modules
-         x = self.inception1(x)  # (B, 128, T)
-         x = self.inception2(x)  # (B, 128, T)
-
-         # 4. Transpose and project for attention
-         x = x.permute(0, 2, 1)  # (B, T, 128)
-         x = self.proj(x)        # (B, T, d_model)
-
-         # 5. Temporal attention
-         for attn_layer in self.attention_layers:
-             x = attn_layer(x)
-
-         # Global average pooling
-         x = x.mean(dim=1)  # (B, d_model)
-
-         # 6. Fusion with auxiliary features
-         x = self.fusion(torch.cat([x, aux_encoded], dim=1))  # (B, d_model)
-
-         # 7. Classification
-         return self.classifier(x)  # (B, num_classes)
-
-     def get_attention_weights(self, x):
-         """Get attention weights for interpretability."""
-         B, T, F = x.shape
-         aux_feats = self.compute_aux_features(x)
-
          x = self.norm(x)
          x = x.unsqueeze(1)
-         x = self.spatial_cnn(x)
          x = x.squeeze(-1)
-         x = self.inception1(x)
-         x = self.inception2(x)
          x = x.permute(0, 2, 1)
-         x = self.proj(x)
-
-         weights = []
-         for attn_layer in self.attention_layers:
-             _, w = attn_layer.attn(x, x, x)
-             weights.append(w)
-             x = attn_layer(x)
-
-         return weights
-
- def count_parameters(model):
-     return sum(p.numel() for p in model.parameters() if p.requires_grad)
-
- if __name__ == "__main__":
-     # Test model
-     model = LOBPatternNet(seq_len=100, num_classes=3)
-     print(f"Total trainable parameters: {count_parameters(model):,}")
-
-     # Test forward pass
-     x = torch.randn(4, 100, 40)
-     out = model(x)
-     print(f"Input shape: {x.shape}")
-     print(f"Output shape: {out.shape}")
-     print(f"Output: {out}")
+ """LOBPatternNet V3 - for loading saved model weights."""
  import torch
  import torch.nn as nn

  class BilinearNorm(nn.Module):
      def __init__(self, num_features):
          super().__init__()
          self.gamma = nn.Parameter(torch.ones(1, 1, num_features))
          self.beta = nn.Parameter(torch.zeros(1, 1, num_features))
          self.gate = nn.Parameter(torch.ones(1, 1, num_features))

      def forward(self, x):
          mean = x.mean(dim=1, keepdim=True)
          std = x.std(dim=1, keepdim=True) + 1e-8
          x_norm = (x - mean) / std
          gate = torch.sigmoid(self.gate)
          return gate * (self.gamma * x_norm + self.beta) + (1 - gate) * x

+ class LOBPatternNetV3(nn.Module):
+     def __init__(self, num_classes=3, d_model=64, nhead=4, dropout=0.4):
          super().__init__()
+         self.norm = BilinearNorm(40)
+         # Spatial CNN: collapses the 40 features per snapshot, (B, 1, T, 40) -> (B, 16, T, 1)
+         self.spatial = nn.Sequential(
+             nn.Conv2d(1, 16, kernel_size=(1, 2), stride=(1, 2)),
+             nn.BatchNorm2d(16), nn.ReLU(), nn.Dropout2d(dropout * 0.5),
+             nn.Conv2d(16, 16, kernel_size=(1, 2), stride=(1, 2)),
+             nn.BatchNorm2d(16), nn.ReLU(), nn.Dropout2d(dropout * 0.5),
+             nn.Conv2d(16, 16, kernel_size=(1, 10)),
+             nn.BatchNorm2d(16), nn.ReLU(),
+         )
+         # Temporal CNN: multi-scale 1D convolutions, (B, 16, T) -> (B, d_model, T)
+         self.temporal = nn.Sequential(
+             nn.Conv1d(16, 32, kernel_size=3, padding=1),
+             nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(dropout),
+             nn.Conv1d(32, 32, kernel_size=5, padding=2),
+             nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(dropout),
+             nn.Conv1d(32, d_model, kernel_size=3, padding=1),
+             nn.BatchNorm1d(d_model), nn.ReLU(), nn.Dropout(dropout),
+         )
+         encoder_layer = nn.TransformerEncoderLayer(
+             d_model=d_model, nhead=nhead, dim_feedforward=d_model * 2,
+             dropout=dropout, batch_first=True, activation="gelu"
+         )
+         self.attention = nn.TransformerEncoder(encoder_layer, num_layers=2)
          self.classifier = nn.Sequential(
+             nn.LayerNorm(d_model),
              nn.Dropout(dropout),
+             nn.Linear(d_model, 32),
+             nn.GELU(),
+             nn.Dropout(dropout),
+             nn.Linear(32, num_classes)
          )

      def forward(self, x):
          x = self.norm(x)
          x = x.unsqueeze(1)
+         x = self.spatial(x)
          x = x.squeeze(-1)
+         x = self.temporal(x)
          x = x.permute(0, 2, 1)
+         x = self.attention(x)
+         x = x.mean(dim=1)  # global average pooling over the time axis
+         return self.classifier(x)
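
To confirm that this slimmed-down `model.py` still matches the shipped checkpoint, a strict state-dict load is the quickest check (a minimal sketch, run from the repo root):

```python
import torch
from model import LOBPatternNetV3

model = LOBPatternNetV3()
state = torch.load("model.pt", weights_only=True)
model.load_state_dict(state, strict=True)  # raises if any key or shape mismatches
print("checkpoint matches LOBPatternNetV3")
```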
norm_stats.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:758b1a926ffbca5b299e000f68a6c7b66b4f448ca61d280515ee7de71a398718
+ size 824