Proceedings of Machine Learning Research vol 236:1–23, 2024 3rd Conference on Causal Learning and Reasoning

## Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations


**Atticus Geiger**˚♢ **, Zhengxuan Wu**˚ **, Christopher Potts, Thomas Icard, and Noah D. Goodman**

˚ Equal contribution. ♢ Pr(Ai)²R Group; Stanford University.

**{atticusg, wuzhengx, cgpotts, icard, ngoodman}@stanford.edu**

**Editors:** Francesco Locatello and Vanessa Didelez





**Abstract**

Causal abstraction is a promising theoretical framework for explainable artificial intelligence that defines when an interpretable high-level causal model is a faithful simplification of a low-level deep learning system. However, existing causal abstraction methods have two major limitations: they require a brute-force search over alignments between the high-level model and the low-level one, and they presuppose that variables in the high-level model will align with disjoint sets of neurons in the low-level one. In this paper, we present _distributed alignment search_ (DAS), which overcomes these limitations. In DAS, we find the alignment between high-level and low-level models using gradient descent rather than conducting a brute-force search, and we allow individual neurons to play multiple distinct roles by analyzing representations in non-standard bases, that is, _distributed_ representations. Our experiments show that DAS can discover internal structure that prior approaches miss. Overall, DAS removes previous obstacles to uncovering conceptual structure in trained neural nets.



**1. Introduction**





Can an interpretable symbolic algorithm be used to faithfully explain a complex neural network model? This is a key question for interpretability; a positive answer can provide guarantees about how the model will behave, and a negative answer could lead to fundamental concerns about whether the model will be safe and trustworthy.

Causal abstraction provides a mathematical framework for precisely characterizing what it means for any complex causal system (e.g., a deep learning model) to implement a simpler causal system (e.g., a symbolic algorithm) (Rubenstein et al., 2017; Beckers et al., 2019; Massidda et al., 2023). For modern AI models, the fundamental operation for assessing whether this relationship holds in practice has been the _interchange intervention_ (also known as activation patching), in which a neural network is provided a ‘base’ input, and sets of neurons are forced to take on the values they would have if different ‘source’ inputs were processed (Geiger et al., 2020; Vig et al., 2020; Finlayson et al., 2021; Meng et al., 2022). The counterfactuals that these interventions create are the basis for causal inferences about model behavior.



Geiger et al. (2021) show that the relevant causal abstraction relation obtains when interchange interventions on aligned high-level variables and low-level variables have equivalent effects. This ideal relationship rarely obtains in practice, but the proportion of interchange interventions with the same effect (_interchange intervention accuracy_; IIA) provides a graded notion, and Geiger et al. (2023) formally ground this metric in the theory of approximate causal abstraction. Geiger et al. also use causal abstraction theory as a unified framework for a wide range of recent intervention-based analysis methods (Vig et al., 2020; Csordás et al., 2021; Feder et al., 2021; Ravfogel et al., 2020; Elazar et al., 2020; De Cao et al., 2021; Abraham et al., 2022; Olah et al., 2020; Olsson et al., 2022; Chan et al., 2022).

Causal abstraction techniques have been applied to diverse problems (Geiger et al., 2019, 2020; Li et al., 2021; Huang et al., 2022). However, previous applications have faced two central challenges. First, causal abstraction requires a computationally intensive brute-force search process to find optimal alignments between the variables in the high-level model and the states of the low-level one. Where exhaustive search is intractable, we risk missing the best alignment entirely. Second, these prior methods are _localist_: they artificially limit the space of possible alignments by presupposing that high-level causal variables will be aligned with disjoint groups of neurons. There is no reason to assume this a priori, and indeed much recent work in model explanation (see especially Ravfogel et al. 2020, 2022; Elazar et al. 2020; Olah et al. 2020; Olsson et al. 2022) is converging on the insight of Smolensky (1986), Rumelhart et al. (1986), and McClelland et al. (1986) that individual neurons can play multiple conceptual roles. Smolensky (1986) identified _distributed neural representations_ as “patterns” consisting of linear combinations of unit vectors.

In the current paper, we propose distributed alignment search (DAS), which overcomes the above limitations of prior causal abstraction work. In DAS, we find the best alignment via _gradient descent_ rather than conducting a brute-force search. In addition, we use _distributed interchange interventions_, which are “soft” interventions in which the causal mechanisms of a group of neurons are edited such that (1) their values are rotated with a change-of-basis matrix, (2) the targeted dimensions of the rotated neural representation are fixed to be the corresponding values in the rotated neural representation created for the source inputs, and (3) the representation is rotated back to the standard neuron-aligned basis. The key insight is that viewing a neural representation through an alternative basis that is not aligned with individual neurons can reveal interpretable dimensions (Smolensky, 1986).

In our experiments, we evaluate the capabilities of DAS to provide faithful and interpretable explanations with two tasks that have obvious interpretable high-level algorithmic solutions with two intermediate variables. In both tasks, the distributed alignment learned by DAS is as good as or better than both the closest localist alignment and the best localist alignment in a brute-force search.

In our first set of experiments, we focus on a hierarchical equality task that has been used extensively in developmental and cognitive psychology as a test of relational reasoning (Premack, 1983; Thompson et al., 1997; Geiger et al., 2022a): the inputs are sequences $[w, x, y, z]$, and the label is given by $(w = x) = (y = z)$. We train a simple feed-forward neural network on this task and show that it perfectly solves the task. Our key question: does this model implement a program that computes $w = x$ and $y = z$ as intermediate values, as we might hypothesize humans do? Using DAS, we find a distributed alignment with 100% IIA. In other words, the network is perfectly abstracted by the high-level model; the distinction between the learned neural model and the symbolic algorithm is thus one of implementation.

Our second task models a natural language inference dataset (Geiger et al., 2020) where the inputs are premise and hypothesis sentences $(p, h)$ that are identical but for the words $w_p$ and $w_h$; the label is either _entails_ ($p$ makes $h$ true) or _contradicts_/_neutral_ ($p$ makes $h$ false). We fine-tune a pretrained language model to perfectly solve the task. With DAS, we find a perfect alignment (100% IIA) to a causal model with a binary variable for the entailment relation between the words $w_p$ and $w_h$ (e.g., _dog_ entails _mammal_).

In both our sets of experiments, the DAS analyses reveal perfect abstraction relations. However, we also identify an important difference between them. In the NLI case, the entailment relation can be decomposed into representations of $w_p$ and $w_h$. What appears to be a representation of lexical entailment is, in this case, a “data structure” containing two representations of word identity, rather than an encoding of their entailment relation. By contrast, the hierarchical equality models learn representations of $w = x$ and $y = z$ that cannot be decomposed into representations of $w$, $x$, $y$, and $z$. In other words, these relations are entirely abstracted from the entities participating in the relation; DAS reveals that the neural network truly implements a symbolic, tree-structured algorithm.





**2. Related Work**





A theory of _causal abstraction_ specifies exactly when a ‘high-level causal model’ can be seen as an abstract characterization of some ‘low-level causal model’ (Iwasaki and Simon, 1994; Chalupka et al., 2017; Rubenstein et al., 2017; Beckers et al., 2019). The basic idea is that high-level variables are associated with (potentially overlapping) sets of low-level variables that summarize their causal mechanisms with respect to a set of hard or soft interventions (Massidda et al., 2023). In practice, a graded notion of _approximate_ causal abstraction is often more useful (Beckers et al., 2019; Rischel and Weichwald, 2021; Geiger et al., 2023).

Geiger et al. (2023) argue that causal abstraction is a generic theoretical framework for providing _faithful_ (Jacovi and Goldberg, 2020; Lyu et al., 2022) and _interpretable_ (Lipton, 2018) explanations of AI models and show that LIME (Ribeiro et al., 2016), causal effect estimation (Abraham et al., 2022; Feder et al., 2021), causal mediation analysis (Vig et al., 2020; Csordás et al., 2021; De Cao et al., 2021), iterated nullspace projection (Ravfogel et al., 2020; Elazar et al., 2020), and circuit-based explanations (Olah et al., 2020; Olsson et al., 2022; Wang et al., 2022; Chan et al., 2022) can all be understood as causal abstraction analysis.

Interchange intervention training (IIT) objectives are minimized when a high-level causal model is an abstraction of a neural network under a given alignment (Geiger et al., 2022b; Wu et al., 2022; Huang et al., 2022). In this paper, we use IIT objectives to learn an alignment between a high-level causal model and a deep learning model.





**3. Methods**





We focus on acyclic causal models (Pearl, 2001; Spirtes et al., 2000) and seek to provide an intuitive overview of our method. An **acyclic causal model** consists of input, intermediate, and output **variables**, where each variable has an associated set of **values** it can take on and a **causal mechanism** that determines the value of the variable based on the values of its causal parents. For a simple running example, we modify the boolean conjunction models of Geiger et al. (2022b) to reveal key properties of DAS. A causal model $B$ for this problem can be defined as below, where the inputs and outputs are booleans $\mathsf{T}$ and $\mathsf{F}$. Alongside $B$, we also define a causal model $N$ of a linear feed-forward neural network that solves the task. Here we show the parameters of $N$:















$$W = \begin{bmatrix} \cos(20^\circ) & -\sin(20^\circ) \\ \sin(20^\circ) & \cos(20^\circ) \end{bmatrix} \qquad \mathbf{w} = \begin{bmatrix} 1 & 1 \end{bmatrix} \qquad b = -1.8$$

where the hidden representation is $[h_1, h_2]^\top = W [x_1, x_2]^\top$ and the output is $O = \mathbf{w} \cdot [h_1, h_2] + b$.

The model $N$ predicts $\mathsf{T}$ if $O > 0$ and $\mathsf{F}$ otherwise. This network solves the boolean conjunction problem perfectly in that all pairs of input boolean values are mapped to the intended output.

An input $\mathbf{x}$ of a model $M$ determines a unique total setting $M(\mathbf{x})$ of all the variables in the model. The inputs are fixed to be $\mathbf{x}$ and the causal mechanisms of the model determine the values of the remaining variables. We denote the values that $M(\mathbf{x})$ assigns to the variable or variables $\mathbf{Z}$ as $\textsc{GetVals}_{\mathbf{Z}}(M(\mathbf{x}))$. For example, $\textsc{GetVals}_{V_3}(B([\mathsf{T}, \mathsf{F}])) = \mathsf{F}$.



**3.1. Interventions**





Interventions are a fundamental building block of causal models, and of causal abstraction analysis in particular. An intervention $\mathbf{I} \leftarrow \mathbf{i}$ is a setting $\mathbf{i}$ of variables $\mathbf{I}$. Together, an intervention and an input setting $\mathbf{x}$ of a model $M$ determine a unique total setting that we denote as $M_{\mathbf{I} \leftarrow \mathbf{i}}(\mathbf{x})$. The inputs are fixed to be $\mathbf{x}$, and the causal mechanisms of the model determine the values of the non-intervened variables, with the intervened variables $\mathbf{I}$ being fixed to $\mathbf{i}$.

We can define interventions on both our causal model $B$ and our neural model $N$. For example, $B_{V_1 \leftarrow \mathsf{T}}([\mathsf{F}, \mathsf{T}])$ is our boolean model when it processes input $[\mathsf{F}, \mathsf{T}]$ but with variable $V_1$ set to $\mathsf{T}$. This has the effect of changing the output value to $\mathsf{T}$. Similarly, whereas $N([0, 1])$ leads to intermediate values $h_1 = -0.34$ and $h_2 = 0.94$ and output value $-1.2$, if we compute $N_{h_1 \leftarrow 1.34}([0, 1])$, then the output value is $0.48$. This has the effect of changing the predicted value to $\mathsf{T}$, because $0.48 > 0$.
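As a concrete illustration, here is a minimal NumPy sketch of the toy network $N$ and the hard intervention above; the helper `run` is ours, not from the paper:

```python
import numpy as np

theta = np.deg2rad(20.0)
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # hidden layer: h = W x
w, b = np.array([1.0, 1.0]), -1.8                # output layer: O = w . h + b

def run(x, intervention=None):
    """Run N on input x; `intervention` maps a hidden index to a fixed value."""
    h = W @ np.asarray(x, dtype=float)
    for i, value in (intervention or {}).items():
        h[i] = value                             # hard intervention on h_i
    return h, w @ h + b

h, O = run([0, 1])                # h = [-0.34, 0.94], O = -1.2, so N predicts F
_, O_iv = run([0, 1], {0: 1.34})  # N_{h1 <- 1.34}([0, 1]): O = 0.48 > 0, so T
```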





**3.2. Alignment**



In causal abstraction analysis, we ask whether a specific low-level model like $N$ implements a high-level algorithm like $B$. This is always relative to a specific _alignment_ of variables between the two models. An alignment $\Pi = (\{\Pi_X\}_X, \{\tau_X\}_X)$ assigns to each high-level variable $X$ a set of low-level variables $\Pi_X$ and a function $\tau_X$ that maps from values of the low-level variables in $\Pi_X$ to values of the aligned high-level variable $X$. One possible alignment between $B$ and $N$ pairs $V_1$ and $V_2$ with $H_1$ and $H_2$, respectively.

We immediately know what the functions for high-level input and output variables are. For the inputs, $\mathsf{T}$ is encoded as 1 and $\mathsf{F}$ is encoded as 0, meaning $\tau_P(1) = \tau_Q(1) = \mathsf{T}$ and $\tau_P(0) = \tau_Q(0) = \mathsf{F}$. For the output, the network only predicts $\mathsf{T}$ if $y > 0$, meaning $\tau_{V_3}(x) = \mathsf{T}$ if $x > 0$, else $\mathsf{F}$. This is simply a consequence of how a neural network is used and trained. The functions for high-level intermediate variables $\tau_{V_1}(x)$ and $\tau_{V_2}(x)$ must be discovered and verified experimentally.



**3.3. Constructive Causal Abstraction**





Relative to an alignment like this, we can define abstraction:

**Definition 1** (Constructive Causal Abstraction) _A high-level causal model $H$ is a constructive abstraction of a low-level causal model $L$ under alignment $\Pi$ exactly when the following holds for every low-level input setting $\mathbf{x}$ and low-level intervention $\mathbf{I} \leftarrow \mathbf{i}$:_

$$\tau\big(L_{\mathbf{I} \leftarrow \mathbf{i}}(\mathbf{x})\big) = H_{\tau(\mathbf{I} \leftarrow \mathbf{i})}(\tau(\mathbf{x}))$$

$H$ being a causal abstraction of $L$ under $\Pi$ guarantees that the causal mechanism for each high-level variable $X$ is a faithful rendering of the causal mechanisms for the low-level variables in $\Pi_X$. To assess the degree to which a high-level model is a constructive causal abstraction of a low-level model, we perform interchange interventions:





**Definition 2** (Interchange Interventions) _Given source input settings $\{\mathbf{s}_j\}_1^k$ and non-overlapping sets of intermediate variables $\{\mathbf{X}_j\}_1^k$ for model $M$, define the interchange intervention as the model_

$$\textsc{II}(M, \{\mathbf{s}_j\}_1^k, \{\mathbf{X}_j\}_1^k) = M_{\bigwedge_{j=1}^{k} \langle \mathbf{X}_j \leftarrow \textsc{GetVals}_{\mathbf{X}_j}(M(\mathbf{s}_j)) \rangle}$$

_where $\bigwedge_{j=1}^{k} \langle \cdot \rangle$ concatenates a set of interventions._










A _base_ input setting can be fed into the resulting model to compute the counterfactual output value. Consider the following interchange intervention:

$$\textsc{II}(B, \{[\mathsf{T}, \mathsf{T}]\}, \{\{V_1\}\}) = B_{\{V_1\} \leftarrow \textsc{GetVals}_{\{V_1\}}(B([\mathsf{T}, \mathsf{T}]))}$$

We process a base input and a source input, and then we intervene on a target variable, replacing it with the value obtained by processing the source. Our causal model is fully known, and so we know ahead of time that this interchange intervention yields $\mathsf{T}$. For our neural network, the corresponding behavior is not known ahead of time. The interchange intervention corresponding to the above (according to the alignment we are exploring) is as follows:

$$\textsc{II}(N, \{[1, 1]\}, \{\{H_1\}\}) = N_{\{H_1\} \leftarrow \textsc{GetVals}_{\{H_1\}}(N([1, 1]))}$$





And, indeed, the counterfactual behaviors of the model $B$ and the network $N$ are unequal. At the high level, $\textsc{II}(B, \{[\mathsf{T}, \mathsf{T}]\}, \{\{V_1\}\})([\mathsf{F}, \mathsf{T}])$ fixes $V_1 = \mathsf{T}$ alongside $V_2 = \mathsf{T}$, so the output is $V_3 = \mathsf{T}$. At the low level, the source run $N([1, 1])$ yields $H_1 = 0.6$ and $H_2 = 1.28$, so the intervened base run $N_{H_1 \leftarrow 0.6}([0, 1])$ has $H_1 = 0.6$ and $H_2 = 0.94$, giving $O = -0.26 < 0$ and a prediction of $\mathsf{F}$.







Under the given alignment, the interchange interventions at the low and high level have different

effects. Thus, we have a counterexample to constructive abstraction as given in Definition 1.

Although _N_ has perfect behavioral accuracy, its accuracy under the counterfactuals created by our

interventions is not perfect, and thus _B_ is not a constructive abstraction of _N_ under this alignment.
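To make the counterexample concrete, here is a self-contained sketch of this interchange intervention on the toy network, using the same parameters as above:

```python
import numpy as np

theta = np.deg2rad(20.0)
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def hidden(x):                     # h = W x
    return W @ np.asarray(x, dtype=float)

def output(h):                     # O = w . h + b with w = [1, 1], b = -1.8
    return h.sum() - 1.8

h_source = hidden([1, 1])          # source run: H1 = 0.60, H2 = 1.28
h_base = hidden([0, 1])            # base run:   H1 = -0.34, H2 = 0.94
h_base[0] = h_source[0]            # II: fix H1 to its source value
print(output(h_base))              # -0.26 < 0, so N predicts F while B outputs T
```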





**3.4. Distributed Interventions**





The above conclusion is based on the kind of localist causal abstraction explored in the literature to date. As noted in Section 1, there are two risks associated with this conclusion: (1) we may have chosen a suboptimal alignment, and (2) we may be wrong to assume that the relevant structure will be encoded in the standard basis we have implicitly assumed throughout.

If we simply rotate the representation $[H_1, H_2]$ by $-20^\circ$ to get a new representation $[Y_1, Y_2]$, then the resulting network has perfect behavioral and counterfactual accuracy when we align $V_1$ and $V_2$ with $Y_1$ and $Y_2$. What this reveals is that there is an alignment, but not in the basis we chose. Since the choice of basis was arbitrary, our negative conclusion about the causal abstraction relation was spurious.

This rotation localizes the information about the first and second arguments into separate dimensions. To understand this, observe that the weight matrix of the linear network rotates a two-dimensional vector by $20^\circ$ and the rotation matrix rotates the representation by $340^\circ$. The two matrices are inverses. Because this network is linear, there is no activation function, and so rotating the hidden representation “undoes” the transformation of the input by the weight matrix. Under this non-standard basis, the first hidden dimension is equal to the first input argument and the second hidden dimension is equal to the second input argument.

This reveals an essential aspect of distributed neural representations: there is a many-to-many mapping between neurons and concepts, and thus multiple high-level causal variables might be encoded in structures from overlapping groups of neurons (Rumelhart et al., 1986; McClelland et al., 1986). In particular, Smolensky (1986) proposes that viewing a neural representation under a basis that is not aligned with individual neurons can reveal the interpretable distributed structure of the neural representations.

Figure 1: A generic multi-source distributed interchange intervention. The base input and two source inputs create three total settings of a model. The top left (green) and right (blue) total model settings are determined by two source inputs, and the middle total model setting (red) is determined by the base input. Three hidden units from each total setting are rotated with an orthogonal matrix $\mathbf{R}: \mathbf{X} \to \mathbf{Y}$. Then we intervene on the rotated representation for the base input and fix two dimensions to be the value they take on for each source input, respectively. Then we unrotate the representation with $\mathbf{R}^{-1}$ and compute a counterfactual total model setting for the base input. In DAS, the orthogonal matrix is found with gradient descent using a high-level causal model to guide the search process.



To make good on this intuition we define a distributed intervention, which first transforms a set of variables to a vector space, then does interchange on orthogonal sub-spaces, before transforming back to the original representation space.

**Definition 3** (Distributed Interchange Interventions) _We begin with a causal model $M$ with input variables $\mathbf{S}$ and source input settings $\{\mathbf{s}_j\}_{j=1}^{k}$. Let $\mathbf{N}$ be a subset of variables in $M$, the_ target variables. _Let $\mathbf{Y}$ be a vector space with subspaces $\{\mathbf{Y}_j\}_0^k$ that form an orthogonal decomposition, i.e., $\mathbf{Y} = \bigoplus_{j=0}^{k} \mathbf{Y}_j$. Let $\mathbf{R}$ be an invertible function $\mathbf{R}: \mathbf{N} \to \mathbf{Y}$. Write $\mathrm{Proj}_{\mathbf{Y}_j}$ for the orthogonal projection operator of a vector in $\mathbf{Y}$ onto subspace $\mathbf{Y}_j$.¹ A_ **distributed interchange intervention** _yields a new model $\textsc{DII}(M, \mathbf{R}, \{\mathbf{s}_j\}_1^k, \{\mathbf{Y}_j\}_0^k)$ which is identical to $M$ except that the mechanisms $F_{\mathbf{N}}$ (which yield values of $\mathbf{N}$ from a total setting) are replaced by:_

$$F^{*}_{\mathbf{N}}(\mathbf{v}) = \mathbf{R}^{-1}\Bigg( \mathrm{Proj}_{\mathbf{Y}_0}\Big(\mathbf{R}\big(F_{\mathbf{N}}(\mathbf{v})\big)\Big) + \sum_{j=1}^{k} \mathrm{Proj}_{\mathbf{Y}_j}\Big(\mathbf{R}\big(F_{\mathbf{N}}(M(\mathbf{s}_j))\big)\Big) \Bigg)$$

1. Thus, Proj generalizes GetVals to arbitrary vector spaces.







Notice that in this definition the base setting is partially preserved through the intervention (in subspace $\mathbf{Y}_0$), and hence this is a _soft_ intervention on $\mathbf{N}$ that rewrites causal mechanisms while maintaining a causal dependence between parent and child.

Under this new alignment, the high-level interchange intervention $\textsc{II}(B, \{[\mathsf{T}, \mathsf{T}]\}, \{\{V_1\}\}) = B_{\{V_1\} \leftarrow \textsc{GetVals}_{\{V_1\}}(B([\mathsf{T}, \mathsf{T}]))}$ is aligned with the low-level distributed interchange intervention

$$\textsc{DII}\left(N, \begin{bmatrix} \cos(-20^\circ) & -\sin(-20^\circ) \\ \sin(-20^\circ) & \cos(-20^\circ) \end{bmatrix}, \{[1, 1]\}, \{\{Y_1\}\}\right)$$

and the counterfactual output behaviors of $B$ and $N$ are equal: intervening on the base run $N([0, 1])$ (with $H_1 = -0.34$ and $H_2 = 0.94$) in the rotated basis, using the source run $N([1, 1])$ (with $H_1 = 0.6$ and $H_2 = 1.28$), yields $H_1 = 0.6$, $H_2 = 1.28$, and $O = 0.08 > 0$, so both models output $\mathsf{T}$.







In what follows we will assume that $\mathbf{N}$ are already vector spaces (which is true for neural nets) and the functions $\mathbf{R}$ are rotation operators. In this case, the subspaces $\mathbf{Y}_j$ can be identified without loss of generality with those spanned by the first $|\mathbf{Y}_0|$ basis vectors for $\mathbf{Y}_0$, the next $|\mathbf{Y}_1|$ basis vectors for $\mathbf{Y}_1$, and so on. (The following methods would be well-defined for non-linear transformations, as long as they were invertible and differentiable, but efficient implementation becomes harder.)
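Here is a minimal NumPy sketch of the distributed interchange intervention above on the toy network: rotate by $-20^\circ$, swap the first rotated dimension, and rotate back (variable names are ours):

```python
import numpy as np

theta = np.deg2rad(20.0)
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # network weights: h = W x
R = np.array([[np.cos(-theta), -np.sin(-theta)],
              [np.sin(-theta),  np.cos(-theta)]])  # change of basis: rotate by -20 degrees

def hidden(x):
    return W @ np.asarray(x, dtype=float)

y_base = R @ hidden([0, 1])   # rotated base representation: [0.0, 1.0]
y_src  = R @ hidden([1, 1])   # rotated source representation: [1.0, 1.0]
y_base[0] = y_src[0]          # interchange on subspace Y1 (first rotated dimension)
h_cf = R.T @ y_base           # rotate back (R is orthogonal, so R^-1 = R^T)
print(h_cf.sum() - 1.8)       # O = 0.08 > 0, so N now predicts T, matching B
```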





**3.5. Distributed Alignment Search**





The question then arises of how to find good rotations. As we discussed above, previous causal abstraction analyses of neural networks have performed brute-force search through a discrete space of hand-picked alignments. In distributed alignment search (DAS), we find an alignment between one or more high-level variables and disjoint sub-spaces (but not necessarily subsets) of a large neural representation. We define a distributed interchange intervention training objective, use differentiable parameterizations for the space of orthogonal matrices (such as provided by PyTorch), and then optimize the objective with stochastic gradient descent. Crucially, the low-level and high-level models are frozen during learning, so we are only changing the alignment.

In the following definition we assume that a neural network specifies an output _distribution_ for a given input, which can then be pushed forward to a distribution on output values of the high-level model via an alignment function $\tau$. We may similarly interpret even a deterministic high-level model as defining a (e.g., delta) distribution on output values. We make use of these distributions, after interchange intervention, to define a differentiable loss for the rotation matrix which aligns intermediate variables.





**Definition 4** (Distributed Interchange Intervention Training Objective) _Begin with a low-level neural network $L$, with low-level input settings $\mathbf{Inputs}_L$, a high-level algorithm $H$, with high-level output settings $\mathbf{Out}_H$, and an alignment $\tau$ for their input and output variables. Suppose we want to align intermediate high-level variables $X_j \in \mathbf{Vars}_H$ with rotated subspaces $\mathbf{Y}_j$ of a neural representation $\mathbf{N} \subset \mathbf{Vars}_L$ with learned rotation matrix $\mathbf{R}^{\theta}: \mathbf{N} \to \mathbf{Y}$._

_In general, we can define a training objective using any differentiable loss function $\mathrm{Loss}$ that quantifies the distance between two total high-level settings:_

$$\sum_{\mathbf{b}, \mathbf{s}_1, \dots, \mathbf{s}_k \in \mathbf{Inputs}_L} \mathrm{Loss}\Big( \textsc{DII}(L, \mathbf{R}^{\theta}, \{\mathbf{s}_j\}_1^k, \{\mathbf{Y}_j\}_0^k)(\mathbf{b}),\; \textsc{II}(H, \{\tau(\mathbf{s}_j)\}_1^k, \{\mathbf{X}_j\}_1^k)(\tau(\mathbf{b})) \Big)$$

_For our experiments, we compute the cross-entropy loss $\mathrm{CE}(\cdot, \cdot)$ between the high-level output distribution $\mathbb{P}(\mathbf{out}_H \mid H(\tau(\mathbf{b})))$ and the push-forward under $\tau$ of the low-level output distribution $\mathbb{P}^{\tau}(\mathbf{out}_H \mid L(\mathbf{b}))$. The overall objective is:_

$$\sum_{\mathbf{b}, \mathbf{s}_1, \dots, \mathbf{s}_k \in \mathbf{Inputs}_L} \mathrm{CE}\Big( \mathbb{P}\big(\mathbf{out}_H \mid \textsc{II}(H, \{\tau(\mathbf{s}_j)\}_1^k, \{\mathbf{X}_j\}_1^k)(\tau(\mathbf{b}))\big),\; \mathbb{P}^{\tau}\big(\mathbf{out}_H \mid \textsc{DII}(L, \mathbf{R}^{\theta}, \{\mathbf{s}_j\}_1^k, \{\mathbf{Y}_j\}_0^k)(\mathbf{b})\big) \Big)$$

While we still have discrete hyperparameters $(\mathbf{N}, |\mathbf{Y}_0|, \dots, |\mathbf{Y}_k|)$ (the target population and the dimensionality of the sub-spaces used for each high-level variable), we may use stochastic gradient descent to determine the rotation that minimizes loss, thus yielding the best distributed alignment between $L$ and $H$.
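As a minimal PyTorch sketch of this objective: the frozen model, its split into `encode`/`decode`, and the counterfactual labels are illustrative stand-ins, not the paper's actual training setup. The orthogonal parametrization keeps $\mathbf{R}^{\theta}$ on the manifold of rotation-like matrices while gradients flow only to it:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class DASRotation(nn.Module):
    """Learnable orthogonal change of basis R^theta over an n-dimensional representation."""
    def __init__(self, n):
        super().__init__()
        self.rot = orthogonal(nn.Linear(n, n, bias=False))  # weight constrained to O(n)

    def dii(self, h_base, h_src, dims):
        """Distributed interchange: swap the first `dims` rotated coordinates."""
        R = self.rot.weight
        y_base, y_src = h_base @ R.T, h_src @ R.T
        y_cf = torch.cat([y_src[:, :dims], y_base[:, dims:]], dim=-1)
        return y_cf @ R               # rotate back (R orthogonal, so R^-1 = R^T)

# Stand-in frozen low-level model, split at the target representation N.
mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16), nn.ReLU(),
                    nn.Linear(16, 2))
for p in mlp.parameters():
    p.requires_grad_(False)           # the low-level model stays frozen
encode, decode = mlp[:4], mlp[4:]

das = DASRotation(n=16)
opt = torch.optim.Adam(das.parameters(), lr=1e-3)
base, src = torch.randn(32, 8), torch.randn(32, 8)
cf_label = torch.randint(0, 2, (32,))  # stand-in for high-level counterfactual labels
for _ in range(100):                   # only R^theta is updated
    logits = decode(das.dii(encode(base), encode(src), dims=8))
    loss = nn.functional.cross_entropy(logits, cf_label)
    opt.zero_grad()
    loss.backward()
    opt.step()
```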





**3.6. Approximate Causal Abstraction**





Perfect causal abstraction relationships are unlikely to arise for neural networks trained to solve complex empirical tasks. We use a graded notion of accuracy:

**Definition 5** (Distributed Interchange Intervention Accuracy) _Given low-level and high-level causal models $L$ and $H$ with alignment $(\Pi, \tau)$, rotation $\mathbf{R}: \mathbf{N} \to \mathbf{Y}$, and orthogonal decomposition $\{\mathbf{Y}_j\}_0^k$. If we let $\mathbf{Inputs}_L$ be low-level input settings and $\{\mathbf{X}_j\}_1^k$ be high-level intermediate variables, the_ **interchange intervention accuracy (IIA)** _is as follows:_

$$\frac{1}{|\mathbf{Inputs}_L|^{k+1}} \sum_{\mathbf{b}, \mathbf{s}_1, \dots, \mathbf{s}_k \in \mathbf{Inputs}_L} \mathbb{1}\Big[ \tau\big(\textsc{DII}(L, \mathbf{R}^{\theta}, \{\mathbf{s}_j\}_1^k, \{\mathbf{Y}_j\}_0^k)(\mathbf{b})\big) = \textsc{II}(H, \{\tau(\mathbf{s}_j)\}_1^k, \{\mathbf{X}_j\}_1^k)(\tau(\mathbf{b})) \Big]$$

IIA is the proportion of aligned interchange interventions that have equivalent high-level and low-level effects. In our example with $N$ and $B$, IIA is 100% and the high-level model is a perfect abstraction of the low-level model (Def. 1). When IIA is $\alpha < 100\%$, we rely on the graded notion of _α-on-average_ approximate causal abstraction (Geiger et al., 2023), which coincides with IIA.
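Continuing the sketch from Section 3.5, IIA can be estimated as the fraction of counterfactual predictions that match the high-level model (`das`, `encode`, `decode`, and the labels are the illustrative stand-ins introduced above):

```python
def iia(das, encode, decode, bases, sources, high_cf_labels, dims):
    """Fraction of distributed interchange interventions whose low-level effect
    matches the high-level counterfactual label."""
    with torch.no_grad():
        logits = decode(das.dii(encode(bases), encode(sources), dims))
        return (logits.argmax(dim=-1) == high_cf_labels).float().mean().item()

print(iia(das, encode, decode, base, src, cf_label, dims=8))
```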





**3.7. General Experimental Setup**





We illustrate the value of DAS by analyzing feed-forward networks trained on a hierarchical equality task and pretrained Transformer-based language models (Vaswani et al., 2017) fine-tuned on a natural language inference task. Our evaluation paradigm is as follows:





1. Train the neural network $N$ to solve the task. In all experiments, the neural models achieve perfect accuracy on both training and testing data.

2. Create interchange intervention training datasets using a high-level causal model. Each example consists of a base input, one or more source inputs, high-level causal variables targeted for intervention, and a counterfactual gold label that will be output by the network if the interchange intervention has the hypothesized effect on model behavior. This gold label is a counterfactual output of the high-level model we will align with the network. (See Appendix A.1 for details.)

3. Optimize an orthogonal matrix to learn a distributed alignment for each high-level model that maximizes IIA using the training objective in Def. 4. We experiment with different hidden dimension sizes for our low-level model and different intervention site sizes (dimensionality of low-level subspaces) and locations (the layer where the intervention happens). (See Appendix A.2 for details.)

4. Evaluate a baseline that brute-force searches through a discrete space of alignments and selects the alignment with the highest IIA. We search the space of alignments by aligning each high-level variable with groups of neurons in disjoint sliding windows. (See Appendix A.3 for details.)

5. Evaluate the localist alignment “closest” to the learned distributed alignment. The rotation matrix for the localist alignment will be axis-aligned with the standard basis, possibly permuting and reflecting unit axes. (See Appendix A.4 for details.)

6. Determine whether each distributed representation aligned with high-level variables can be decomposed into multiple representations that encode the identity of the input values to the variable's causal mechanism. We do this by learning a second rotation matrix that decomposes the learned distributed representation, holding the first rotation matrix fixed. (See Appendix A.5 for details.)





The codebase used to run these experiments is available online [2]. We have replicated the hierarchical equality experiment using the Pyvene library [3].



**4. Hierarchical Equality Experiment**





We now illustrate the power of DAS for analyzing networks designed to solve a hierarchical equality task. We concentrate on analyzing a trained feed-forward network.

A _basic_ equality task is to determine whether a pair of objects are the same ($x = y$). A _hierarchical_ equality task is to determine whether a pair of pairs of objects have identical relations: $(w = x) = (y = z)$. Specifically, the input to the task is two pairs of objects, and the output is True if both pairs are equal or both pairs are unequal, and False otherwise. For example, $(A, A, B, B)$ and $(A, B, C, D)$ are both assigned True, while $(A, B, C, C)$ is assigned False.





**4.1. Low-Level Neural Model**





We train a three-layer feed-forward network with ReLU activations to perform the hierarchical equality task. Each input object is represented by a randomly initialized vector. Specifically, our model has the following architecture, where $k$ is the number of layers:

$$h_1 = \mathrm{ReLU}([x_1; x_2; x_3; x_4] W_1 + b_1) \qquad h_j = \mathrm{ReLU}(h_{j-1} W_j + b_j) \qquad y = \mathrm{softmax}(h_k W_k + b_k)$$

The input vectors are in $\mathbb{R}^n$, the biases are in $\mathbb{R}^{4n}$, and the weights are in $\mathbb{R}^{4n \times 4n}$. We evaluate our model on held-out random vectors unseen during training, as in Geiger et al. 2022a.
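A minimal PyTorch sketch of this architecture follows; the choice $n = 4$ and the class name are ours for illustration:

```python
import torch
import torch.nn as nn

class EqualityMLP(nn.Module):
    """Feed-forward net for hierarchical equality: k - 1 ReLU layers of width 4n,
    then a softmax output layer over {True, False}."""
    def __init__(self, n, k=3):
        super().__init__()
        self.hidden = nn.ModuleList(nn.Linear(4 * n, 4 * n) for _ in range(k - 1))
        self.out = nn.Linear(4 * n, 2)

    def forward(self, w, x, y, z):
        h = torch.cat([w, x, y, z], dim=-1)   # [x1; x2; x3; x4]
        for layer in self.hidden:
            h = torch.relu(layer(h))
        return self.out(h).softmax(dim=-1)

n = 4                                         # assumed object-vector dimensionality
model = EqualityMLP(n)
w = torch.randn(1, n)
x, y, z = w.clone(), torch.randn(1, n), torch.randn(1, n)
# Here w = x is True and y = z is (almost surely) False, so the gold label is False.
print(model(w, x, y, z))                      # untrained output distribution
```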





[2. https://github.com/atticusg/InterchangeInterventions/tree/zen](https://github.com/atticusg/InterchangeInterventions/tree/zen)

[3. https://github.com/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_](https://github.com/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb)

[Introduction.ipynb](https://github.com/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb)










| Hidden size | Intervention size | Both Eq. L1 | Both Eq. L2 | Both Eq. L3 | Left Eq. L1 | Left Eq. L2 | Left Eq. L3 | Identity L1 | Identity L2 | Identity L3 | Subspace L1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $\lvert\mathbf{N}\rvert = 16$ | 1 | 0.88 | 0.51 | 0.50 | 0.85 | 0.54 | 0.50 | 0.51 | 0.52 | 0.50 | 0.51 |
| $\lvert\mathbf{N}\rvert = 16$ | 2 | 0.97 | 0.54 | 0.50 | 0.85 | 0.55 | 0.50 | 0.50 | 0.52 | 0.51 | 0.50 |
| $\lvert\mathbf{N}\rvert = 16$ | 8 | 1.00 | 0.57 | 0.50 | 0.90 | 0.56 | 0.50 | 0.52 | 0.53 | 0.51 | 0.51 |
| $\lvert\mathbf{N}\rvert = 32$ | 2 | 0.93 | 0.63 | 0.49 | 0.92 | 0.65 | 0.50 | 0.52 | 0.55 | 0.52 | 0.50 |
| $\lvert\mathbf{N}\rvert = 32$ | 4 | 0.97 | 0.63 | 0.49 | 0.94 | 0.65 | 0.50 | 0.51 | 0.55 | 0.52 | 0.51 |
| $\lvert\mathbf{N}\rvert = 32$ | 16 | 0.99 | 0.67 | 0.53 | 0.99 | 0.65 | 0.50 | 0.49 | 0.55 | 0.52 | 0.51 |
| Brute-Force Search | — | 0.60 | 0.56 | 0.52 | 0.64 | 0.64 | 0.57 | 0.50 | 0.51 | 0.54 | — |
| Localist Alignment | — | 0.73 | 0.56 | 0.48 | 0.60 | 0.50 | 0.49 | 0.46 | 0.47 | 0.48 | — |

Table 1: Hierarchical equality alignment learning results. Column groups give IIA for the four high-level models: Both Equality Relations (Both Eq.), Left Equality Relation (Left Eq.), Identity of First Argument (Identity), and Identity Subspace of Left Equality (Subspace); **L1**, **L2**, and **L3** indicate which layer of neurons is targeted. $\lvert\mathbf{N}\rvert$ is the number of neurons in a layer, and the intervention size $k$ is the number of neurons aligned with each intermediate variable, where our subspace model occupies $k/2$ (rounding up to the closest integer). The values in each cell are interchange intervention accuracies for the learned alignment on training data. We report the best results from three runs with distinct random seeds for training the rotation matrix (the same frozen low-level model is used for each seed).





**4.2. High-Level Models**





We use DAS to evaluate whether trained neural networks have achieved the natural solution to the hierarchical equality task where the left and right equality relations are computed and then used to predict the final label (Figure 2).

However, evaluating this high-level model alone is insufficient, as there are obviously many other high-level models of this task. To further contextualize our results, we also consider two alternatives: a high-level model where only the equality relation of the first pair is represented and a high-level model where the lone intermediate variable encodes the identity of the first input object (leaving all computation for the final step). These alternative high-level models also solve the task perfectly.

Figure 2: A causal model that computes the hierarchical equality task.

**4.3. Discussion**







The IIA results achieved by the best alignment for each high-level model can be seen in Table 1. The best alignments found are with the ‘Both Equality Relations’ model that is widely assumed in the cognitive science literature. For all causal models, DAS learns a more faithful alignment (higher IIA) than a brute-force search through localist alignments. This result is most pronounced for ‘Both Equality Relations’, where DAS learns perfect or near-perfect alignments under a number of settings, whereas the best brute-force alignment achieves only 0.60 and the best localist alignment achieves only 0.73. Finally, the distributed representation of left equality could not be decomposed into a representation of the first argument identity. We see this in the very low performance of the ‘Identity Subspace of Left Equality’ results. This indicates that models are truly learning to encode an abstract equality relation, rather than merely storing the identities of the inputs.










| **Sentence Pairs** | **Label** |
|---|---|
| _premise_: A man is talking to someone in a taxi.<br>_hypothesis_: A man is talking to someone in a car. | _entails_ |
| _premise_: The people are **not** playing sitars.<br>_hypothesis_: The people are **not** playing instruments. | _neutral_ |

(_a_) Two MoNLI examples.

```
MONLI(p, h)
1  lexrel ← GET-LEXREL(p, h)
2  neg ← CONTAINS-NOT(p, h)
3  if neg:
4      return REVERSE(lexrel)
5  return lexrel
```

(_b_) A simple program that solves MoNLI.

Figure 4: Monotonicity NLI task examples and high-level model.





**4.4. Analyzing a Randomly Initialized Network**







To calibrate intuitions about our method, we evaluate the ability of DAS to optimize for interchange intervention accuracy on a frozen, randomly initialized network that achieves chance accuracy (50%) on the hierarchical equality task. This investigates the degree to which random causal structures can be used to systematically manipulate the counterfactual behavior of the network. We evaluate networks with different hidden representation sizes while holding the four input vectors fixed at 4 dimensions, under the hypothesis that more hidden neurons create more random structure that DAS can search through. These results are summarized in Figure 3. Observe that, in small networks, there is no ability to increase interchange intervention accuracy. However, as we increase the size of the hidden representation to be orders of magnitude larger than the input dimension of 16, the interchange intervention accuracy increases. This confirms our hypothesis and serves as a check that demonstrates DAS cannot construct entirely new behaviors from random structure.

| Hidden size | Intervention size | Both Equality Relations (Layer 1) |
|---|---|---|
| $\lvert\mathbf{N}\rvert = 16$ | $k = 8$ | 0.50 |
| $\lvert\mathbf{N}\rvert = 64$ | $k = 32$ | 0.50 |
| $\lvert\mathbf{N}\rvert = 256$ | $k = 128$ | 0.51 |
| $\lvert\mathbf{N}\rvert = 1028$ | $k = 512$ | 0.55 |
| $\lvert\mathbf{N}\rvert = 4096$ | $k = 2048$ | 0.64 |

Figure 3: DAS on a random network with a 16-dimension input. An oversized hidden dimension allows DAS to manipulate the model behavior by searching through a large space of random mechanisms.

**5. Monotonicity NLI Experiment**







In our second experiment, we analyze a BERT model fine-tuned on the Monotonicity Natural Language Inference (MoNLI) benchmark (Geiger et al., 2020). A MoNLI example consists of a premise sentence and a hypothesis sentence, and the output label is _entails_ when the premise makes the hypothesis true, and _neutral_ otherwise. Two examples are in Figure 4(_a_). Every example is such that a single word $w_p$ in the premise sentence was changed to a hypernym (more general term) or hyponym (more specific term) $w_h$ to create the hypothesis. About half of MoNLI examples contain a negation that scopes over the word replacement site, and the remaining examples have no negation. When no negation is present, the label for a premise–hypothesis pair is the lexical relation. When negation is present, the label for a premise–hypothesis pair is the reverse of the lexical relation.










| Hidden size | Intervention size | Neg.+Ent. L7 | Neg.+Ent. L9 | Neg.+Ent. L11 | Ent. L7 | Ent. L9 | Ent. L11 | Identity L7 | Identity L9 | Identity L11 | Subspace L9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $\lvert\mathbf{N}\rvert = 768$ | 64 | 0.65 | 0.96 | 0.91 | 0.88 | 1.00 | 0.97 | 0.88 | 0.94 | 0.93 | 0.97 |
| $\lvert\mathbf{N}\rvert = 768$ | 128 | 0.65 | 0.99 | 0.92 | 0.88 | 1.00 | 0.99 | 0.89 | 0.93 | 0.92 | 0.97 |
| $\lvert\mathbf{N}\rvert = 768$ | 256 | 0.67 | 1.00 | 0.86 | 0.91 | 1.00 | 1.00 | 0.88 | 0.96 | 0.88 | 0.98 |
| Brute-Force Search | — | 0.60 | 0.56 | 0.52 | 0.64 | 0.64 | 0.57 | 0.50 | 0.51 | 0.54 | — |
| Localist Alignment | — | 0.51 | 0.51 | 0.51 | 0.47 | 0.47 | 0.47 | 0.50 | 0.50 | 0.50 | — |

Table 2: Monotonicity NLI results. Column groups give IIA for the four high-level models: Negation and Lexical Entailment (Neg.+Ent.), Lexical Entailment (Ent.), Identity of Lexeme (Identity), and Lexeme Subspace of Lexical Entailment (Subspace); **L7**, **L9**, and **L11** indicate which layer of neurons is targeted. $\lvert\mathbf{N}\rvert$ is the number of neurons in a layer, and the intervention size $k$ is the number of neurons aligned with each intermediate variable, where our subspace model occupies $k/2$. The values in each cell are interchange intervention accuracies for the learned alignment on training data. We report the best results from three runs with distinct random seeds.





**5.1. Low-Level Neural Model**





We fine-tune an uncased BERT-base model (Devlin et al., 2019) that was already fine-tuned on the MultiNLI dataset (Williams et al., 2018). [4] Our BERT model has 12 layers and 12 heads with a hidden dimension of 768. We concatenate the tokenized sequences of the premise sentence and hypothesis sentence with a [SEP] token. Because of the size of the rotation matrix, we can't look for distributed representations across all tokens; we look only at the representations of the [CLS] token, because the final classification is made from this token's representation in the last layer.
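To make the intervention site concrete, here is a hedged sketch of a distributed interchange on the [CLS] representation at one layer using a forward hook; the layer index is illustrative (0-indexed here), and the identity matrix stands in for the learned rotation $\mathbf{R}^{\theta}$:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "ishan/bert-base-uncased-mnli"             # checkpoint from footnote 4
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()
LAYER, DIMS = 9, 256                              # intervention site and subspace size
R = torch.eye(768)                                # placeholder for the learned rotation

captured = {}

def capture(module, inputs, outputs):             # record the source [CLS] vector
    captured["cls"] = outputs[0][:, 0].detach()

def intervene(module, inputs, outputs):           # distributed interchange on base [CLS]
    h = outputs[0].clone()
    y_base = h[:, 0] @ R.T                        # rotate into the learned basis
    y_src = captured["cls"] @ R.T
    y_base[:, :DIMS] = y_src[:, :DIMS]            # swap the aligned subspace
    h[:, 0] = y_base @ R                          # rotate back to the neuron basis
    return (h,) + outputs[1:]

layer = model.bert.encoder.layer[LAYER]
src = tok("The people are not playing sitars.",
          "The people are not playing instruments.", return_tensors="pt")
base = tok("A man is talking to someone in a taxi.",
           "A man is talking to someone in a car.", return_tensors="pt")
with torch.no_grad():
    handle = layer.register_forward_hook(capture)
    model(**src)                                  # source run populates `captured`
    handle.remove()
    handle = layer.register_forward_hook(intervene)
    print(model(**base).logits)                   # counterfactual prediction
    handle.remove()
```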





**5.2. High-Level Models**





We use DAS to evaluate whether BERT fine-tuned on MoNLI will represent two boolean intermediate variables. The first is an indicator variable for negation, which is true if and only if negation is present in the premise and hypothesis. The second is a variable that is true if $w_p$ entails $w_h$. This model is perhaps best expressed as a simple program (Figure 4(_b_)). Again, we also consider two alternative high-level models to contextualize our results. One model represents only lexical entailment and not negation. The other represents the identity of the premise word $w_p$.





**5.3. Results**





The IIA results achieved by the best alignment for each high-level model can be seen in Table 2. There is a perfect alignment between fine-tuned BERT and a symbolic algorithm with variables representing the presence of negation and the lexical entailment relation between $w_p$ and $w_h$. In Table 2, this is shown by the perfect IIA for layer 9 and intervention size 256, meaning 256 non-standard basis dimensions of the [CLS] token representation in layer 9 of BERT encode the relation between $w_p$ and $w_h$, and 256 other non-standard basis dimensions encode negation. Across all alignments and intervention types, DAS learns more faithful alignments (higher IIA) than a brute-force search through alignments, and no localist alignment comes close to the learned distributed alignments in terms of IIA.





4. The parameters are provided by the Hugging Face `transformers` library (Wolf et al., 2019), downloaded from [https://huggingface.co/ishan/bert-base-uncased-mnli](https://huggingface.co/ishan/bert-base-uncased-mnli).










However, the distributed representation of the lexical entailment relation between $w_p$ and $w_h$ can be nearly perfectly decomposed into two representations that encode the identity of the word $w_p$ and the identity of the word $w_h$, respectively. This result is shown by the near-perfect IIA in the final column of Table 2. This tells us that what appeared to be a representation of lexical entailment was in fact a “data structure” of two word-identity representations.



**6. Conclusion**





We introduce distributed alignment search (DAS), a method to align interpretable causal variables with distributed neural representations. We learn distributed alignments that are more interpretable than localist alignments and do so with a gradient-descent based search method that improves upon the state-of-the-art brute-force search. In our two experiments, we discovered perfect alignments of distributed neural representations to binary high-level variables encoding simple equality and lexical entailment relations. However, when we investigated the substructure of these representations, we found that the lexical entailment representations could be decomposed into sub-representations of word identity. This highlights the need to investigate the causal substructure of neural representations. On the other hand, the presence of perfect representations of simple equality relations that cannot be decomposed into representations of the entities in the relations is a foundational result that should inform our understanding of how and when symbolic and connectionist architectures coexist.



**Acknowledgments**





This research is supported in part by grants from Open Philanthropy, Meta AI, Amazon, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI).



**References**





Eldar David Abraham, Karel D’Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher

Potts, Roi Reichart, and Zhengxuan Wu. CEBaB: Estimating the causal effects of real-world

[concepts on NLP model behavior. arXiv:2205.14140, 2022. URL https://arxiv.org/abs/](https://arxiv.org/abs/2205.14140)

[2205.14140.](https://arxiv.org/abs/2205.14140)





Sander Beckers, Frederick Eberhardt, and Joseph Y. Halpern. Approximate causal abstractions. In

_Proceedings of The 35th Uncertainty in Artificial Intelligence Conference_, 2019.





Krzysztof Chalupka, Frederick Eberhardt, and Pietro Perona. Causal feature learning: an overview.

_Behaviormetrika_, 44:137–164, 2017.





Lawrence Chan, Adrià Garriga-Alonso, Nicholas Goldowsky-Dill, Ryan Greenblatt, Jenny Nitishinskaya, Ansh Radhakrishnan, Buck Shlegeris, and Nate Thomas. Causal scrubbing: a method for

rigorously testing interpretability hypotheses, 2022.





Róbert Csordás, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Are neural nets modular? inspecting

functional modularity through differentiable weight masks. In _9th International Conference on_

_Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_ . OpenReview.net,

[2021. URL https://openreview.net/forum?id=7uVcpu-gMD.](https://openreview.net/forum?id=7uVcpu-gMD)





Nicola De Cao, Leon Schmid, Dieuwke Hupkes, and Ivan Titov. Sparse interventions in language

[models with differentiable masking. arXiv:2112.06837, 2021. URL https://arxiv.org/abs/](https://arxiv.org/abs/2112.06837)

[2112.06837.](https://arxiv.org/abs/2112.06837)





13





Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep

bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of_

_the North American Chapter of the Association for Computational Linguistics: Human Language_

_Technologies, Volume 1 (Long and Short Papers)_, pages 4171–4186, Minneapolis, Minnesota,

June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL

[https://aclanthology.org/N19-1423.](https://aclanthology.org/N19-1423)





Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. Amnesic probing: Behavioral explanation with amnesic counterfactuals. In _Proceedings of the 2020 EMNLP Workshop BlackboxNLP:_

_Analyzing and Interpreting Neural Networks for NLP_ . Association for Computational Linguistics,

November 2020. doi: 10.18653/v1/W18-5426.





Amir Feder, Nadav Oved, Uri Shalit, and Roi Reichart. CausaLM: Causal Model Explanation

Through Counterfactual Language Models. _Computational Linguistics_, pages 1–54, 05 2021.

[ISSN 0891-2017. doi: 10.1162/coli_a_00404. URL https://doi.org/10.1162/coli_a_00404.](https://doi.org/10.1162/coli_a_00404)





Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. Causal analysis of syntactic agreement mechanisms in neural language models. In _Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP)_, 2021. URL https://aclanthology.org/2021.acl-long.144.





Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. Posing fair generalization tasks for natural language inference. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 4475–4485, Stroudsburg, PA, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1456. URL https://www.aclweb.org/anthology/D19-1456.





Atticus Geiger, Kyle Richardson, and Chris Potts. Neural natural language inference models partially embed theories of lexical entailment and negation. In _Proceedings of the 2020 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_. Association for Computational Linguistics, November 2020.





Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. Causal abstractions of neural networks. In _Advances in Neural Information Processing Systems_, volume 34, pages 9574–9586, 2021. URL https://papers.nips.cc/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html.





Atticus Geiger, Alexandra Carstensen, Michael C Frank, and Christopher Potts. Relational reasoning and generalization using nonsymbolic neural networks. _Psychological Review_, 2022a. doi: 10.1037/rev0000371. URL https://doi.org/10.1037/rev0000371.





Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah Goodman, and Christopher Potts. Inducing causal structure for interpretable neural networks. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 7324–7338. PMLR, 17–23 Jul 2022b. URL https://proceedings.mlr.press/v162/geiger22a.html.















Atticus Geiger, Chris Potts, and Thomas Icard. Causal abstraction for faithful interpretation of AI models. arXiv:2106.02997, 2023. URL https://arxiv.org/abs/2106.02997.

Jing Huang, Zhengxuan Wu, Kyle Mahowald, and Christopher Potts. Inducing character-level structure in subword-based language models with Type-level Interchange Intervention Training. Ms., Stanford University and UT Austin, 2022. URL https://arxiv.org/abs/2212.09897.





Yumi Iwasaki and Herbert A. Simon. Causality and model abstraction. _Artificial Intelligence_, 67(1):

143–194, 1994.





Alon Jacovi and Yoav Goldberg. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_, pages 4198–4205. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.acl-main.386. URL https://doi.org/10.18653/v1/2020.acl-main.386.





Belinda Z. Li, Maxwell I. Nye, and Jacob Andreas. Implicit representations of meaning in neural language models. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_, pages 1813–1827. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.acl-long.143. URL https://doi.org/10.18653/v1/2021.acl-long.143.





Zachary C. Lipton. The mythos of model interpretability. _Commun. ACM_, 61(10):36–43, 2018. doi: 10.1145/3233231. URL https://doi.org/10.1145/3233231.

Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Towards faithful model explanation in NLP: A survey. _CoRR_, abs/2209.11326, 2022. doi: 10.48550/arXiv.2209.11326. URL https://doi.org/10.48550/arXiv.2209.11326.





Riccardo Massidda, Atticus Geiger, Thomas Icard, and Davide Bacciu. Causal abstraction with soft interventions. In Mihaela van der Schaar, Cheng Zhang, and Dominik Janzing, editors, _Conference on Causal Learning and Reasoning, CLeaR 2023, Amazon Development Center, Tübingen, Germany, April 11-14, 2023_, volume 213 of _Proceedings of Machine Learning Research_, pages 68–87. PMLR, 2023. URL https://proceedings.mlr.press/v213/massidda23a.html.





J. L. McClelland, D. E. Rumelhart, and PDP Research Group, editors. _Parallel Distributed Processing._

_Volume 2: Psychological and Biological Models_ . MIT Press, Cambridge, MA, 1986.





Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f1d43d5a82a37e89b0665b33bf3a182-Paper-Conference.pdf.





Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter.

Zoom in: An introduction to circuits. _Distill_, 2020. doi: 10.23915/distill.00024.001.

https://distill.pub/2020/circuits/zoom-in.










Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan,

Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli,

Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane

Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish,

and Chris Olah. In-context learning and induction heads. _Transformer Circuits Thread_, 2022.

https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html.





Judea Pearl. Direct and indirect effects. In _Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence_, UAI’01, pages 411–420, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc. ISBN 1558608001.





David Premack. The codes of man and beasts. _Behavioral and Brain Sciences_, 6(1):125–136, 1983. doi: 10.1017/S0140525X00015077. URL https://doi.org/10.1017/S0140525X00015077.





Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_, pages 7237–7256. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.acl-main.647. URL https://doi.org/10.18653/v1/2020.acl-main.647.

Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan D Cotterell. Linear adversarial concept erasure. In _International Conference on Machine Learning (ICML)_, 2022. URL https://proceedings.mlr.press/v162/ravfogel22a.html.





Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, KDD ’16, pages 1135–1144, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450342322. doi: 10.1145/2939672.2939778. URL https://doi.org/10.1145/2939672.2939778.





Eigil F. Rischel and Sebastian Weichwald. Compositional abstraction error and a category of causal

models. In _Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)_,

2021.





Paul K. Rubenstein, Sebastian Weichwald, Stephan Bongers, Joris M. Mooij, Dominik Janzing,

Moritz Grosse-Wentrup, and Bernhard Schölkopf. Causal consistency of structural equation

models. In _Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)_,

2017.





D. E. Rumelhart, J. L. McClelland, and PDP Research Group, editors. _Parallel Distributed Processing._

_Volume 1: Foundations_ . MIT Press, Cambridge, MA, 1986.





P. Smolensky. Neural and conceptual interpretation of PDP models. In _Parallel Distributed Processing: Explorations in the Microstructure, Vol. 2: Psychological and Biological Models_, pages 390–431. MIT Press, Cambridge, MA, USA, 1986. ISBN 0262631105.





Peter Spirtes, Clark Glymour, and Richard Scheines. _Causation, Prediction, and Search_ . MIT Press,

2000.















Roger K R Thompson, David L Oden, and Sarah T Boysen. Language-naive chimpanzees (Pan troglodytes) judge relations between relations in a conceptual matching-to-sample task. _Journal of Experimental Psychology: Animal Behavior Processes_, 23(1):31–43, 1997.





Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems 30_, pages 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.





Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Causal mediation analysis for interpreting neural NLP: The case of gender bias, 2020.





Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the wild: a circuit for indirect object identification in GPT-2 small. _arXiv preprint arXiv:2211.00593_, 2022. URL https://arxiv.org/abs/2211.00593.





Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1112–1122. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/N18-1101.





Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. HuggingFace’s Transformers: State-of-the-art natural language processing. _ArXiv_, abs/1910.03771, 2019.





Zhengxuan Wu, Karel D’Oosterlinck, Atticus Geiger, Amir Zur, and Christopher Potts. Causal Proxy Models for concept-based model explanations. arXiv:2209.14279, 2022. URL https://arxiv.org/abs/2209.14279.










**Supplementary Materials**



**Appendix A. Experimental Setup Details**





**A.1. Training Data for distributed alignment search (DAS)**





For each task, we create training datasets for learning the rotation matrix of each high-level model. As defined in Definition 2, each input–output pair for training the rotation matrix consists of a base input with two pairs of input values, a set of source inputs mapped to interventions on different intermediate variables, and the corresponding counterfactual outputs (i.e., the updated outputs under the interventions). Only when multiple high-level intermediate variables are involved do we sample more than one source input; in such cases, we randomly choose between interchanging two variables together from two source inputs and swapping a single variable from a single source input.





**Hierarchical Equality Experiments** For our high-level models abstracting both equality relations and the left equality relation, we sample a set of source inputs and interchange the equality relations of the corresponding shape pairs from the source inputs with the equality relations from the base input. For our high-level model abstracting the identity of the first shape, we sample a source input and interchange the first shape from the source input with that of the base input.
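
To make this construction concrete, here is a minimal sketch, using a hypothetical discrete vocabulary as a stand-in for the task's randomly sampled input vectors; `counterfactual_example` and its helpers are illustrative names, not the released code.

```python
import random

def sample_input(vocab):
    # A base or source input: two pairs of objects (a, b, c, d).
    return [random.choice(vocab) for _ in range(4)]

def counterfactual_example(vocab):
    # Intervene on both high-level equality variables: the left relation
    # comes from the first source input, the right from the second.
    base = sample_input(vocab)
    sources = [sample_input(vocab), sample_input(vocab)]
    left = sources[0][0] == sources[0][1]
    right = sources[1][2] == sources[1][3]
    counterfactual_label = (left == right)
    return base, sources, counterfactual_label

vocab = list(range(10))  # hypothetical stand-in for continuous input vectors
print(counterfactual_example(vocab))
```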





**Monotonicity NLI Experiments** For our high-level models abstracting negation or lexical entailment, we sample a set of source inputs and interchange the Boolean value for negation, or the value for lexical entailment, from the source inputs with that of the base input. For our high-level model abstracting only the identity of the replaced lexeme in the _hypothesis_ sentence, we sample a _hypothesis_ sentence different from the one seen in the training set and interchange its lexeme with the base input. To avoid cases where entailment labels are invalid (e.g., the entailment relation between “car” and “tree” is ambiguous), we specifically sample a valid English word that is either a hypernym or a hyponym of the lexeme in the _premise_ sentence, drawn from a new lexeme pair. Then, we construct a new pair of _premise_ and _hypothesis_ sentences by sampling a sentence template (i.e., a sentence with a replaceable lexeme position, such as “a man is talking to someone in a [lexeme]”) from the training dataset and replacing the lexeme items with the new ones.
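
As an illustration of the hypernym/hyponym constraint (a sketch only; the actual data pipeline draws lexeme pairs from MoNLI itself), one could enumerate candidate substitute lexemes with NLTK's WordNet interface:

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def substitute_lexemes(premise_lexeme):
    # Collect hyponyms and hypernyms of the premise lexeme, so that the
    # entailment direction of the new premise/hypothesis pair is well defined
    # (unlike unrelated pairs such as "car"/"tree").
    candidates = set()
    for synset in wn.synsets(premise_lexeme, pos=wn.NOUN):
        for related in synset.hyponyms() + synset.hypernyms():
            candidates.update(lemma.name() for lemma in related.lemmas())
    candidates.discard(premise_lexeme)
    return sorted(candidates)

print(substitute_lexemes("dog")[:5])
```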





**A.2. Reproducibility**





**Hierarchical Equality Experiment** We randomly generate 1.92M input–output pairs for training

the model. We train our model for 10 epochs before reaching 100% training accuracy for the task.

We also evaluate model performance on a hold-out testing set with unseen input–output pairs, and our model achieves 100% testing accuracy. For each high-level model, we then generate a dataset for learning the rotation matrix, constructing 640K such input–output pairs as our training data and 19.2K pairs as our testing data.

For both training phases, we use a batch size of 6.4K with a maximum of 10 training epochs. We set the learning rate to 1e−3 with an early-stopping patience of 10K steps. Training with a single NVIDIA 2080 Ti RTX 11GB GPU takes less than ten minutes to converge. All datasets were balanced across the two labels for both the standard and interchange intervention training objectives. We run each experiment three times with distinct random seeds.





**Monotonicity NLI Experiment** We randomly sample 10K examples from the original MoNLI

dataset and use them to train our low-level models to solve MoNLI. We finetune our model for 5 epochs

before reaching 100% training accuracy for the task. We also evaluate model performance on a















hold-out testing set, and our model achieves 100% testing accuracy. For training and evaluating the

rotation matrix of each high-level model, we create 24K examples as our training dataset for the first high-level model, and 10K for each of the remaining two high-level models. For evaluation, we create 1.92K examples for the first high-level model, and 1K for each of the remaining two.

We finetune our model for 5 epochs with a learning rate of 2e−5 and a batch size of 32 before reaching 100% task accuracy. For learning the rotation matrix, we use a batch size of 64 with a learning rate of 2e−3 for a fixed 5 epochs. Training with a single NVIDIA 2080 Ti RTX 11GB GPU takes less than ten minutes to converge for both training phases. We run each experiment three times with distinct random seeds.





**A.3. Brute-Force Search Baseline**





Without additional training, our brute-force search baseline finds the best IIA by searching over possible alignments (Π, _τ_) as in Definition 5. For simple feed-forward networks, we map a high-level variable to a set of low-level variables within a sliding window whose size equals the intervention size. We then incrementally search for the sliding window achieving the best IIA score, starting from the first index of the intervened representation in the network. For Transformer-based networks, we make the computation tractable by avoiding a search over all possible windows, considering only windows whose starting index lies in {0, 64, 128, 256, 512} within the [CLS] token representation. Instead of targeting a specific set of layers, we perform searches over all layers. Note that in the worst case, the number of hypotheses for the brute-force search becomes intractable; it can be estimated as C(_n_, _m_), the number of ways to choose _m_ of _n_ dimensions, where _n_ is the total dimension size of the neural representation and _m_ is the variable dimension size.
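
A minimal sketch of this sliding-window search; `evaluate_iia` is a hypothetical stand-in for computing IIA when the window [start, start + size) of the representation is aligned with the high-level variable:

```python
def brute_force_search(repr_dim, intervention_size, evaluate_iia, starts=None):
    # Feed-forward case: slide the window over every valid starting index.
    if starts is None:
        starts = range(repr_dim - intervention_size + 1)
    best_start, best_iia = None, -1.0
    for start in starts:
        iia = evaluate_iia(start, intervention_size)
        if iia > best_iia:
            best_start, best_iia = start, iia
    return best_start, best_iia

# Transformer case: restrict starting indices to keep the search tractable.
# brute_force_search(768, 32, evaluate_iia, starts=[0, 64, 128, 256, 512])
```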










**A.4. Localist Alignment Baseline**





Without additional training, our localist alignment baseline finds a locally optimal localist alignment matrix based on the learned rotation matrix. We pick the rotation matrix with the best IIA result from each category for evaluation. To find a localist alignment matrix, we follow Algorithm 1, which derives a localist alignment matrix _L_ from any orthogonal matrix _R_. We then use _L_ as our rotation matrix and evaluate IIA following our evaluation paradigm.





**Algorithm 1 Finding Localist Alignment Matrix**

FINDLOCALISTALIGNMENT(R)

1  // R is an orthogonal matrix.
2  R_a = R.abs()
3  L = torch.zeros_like(R)
4  P = []
5  for i = 0; i < R.shape[0]; i++
6      P += [(R_a == torch.max(R_a)).nonzero()]
7      R_a[P[-1].row, :] = 0
8      R_a[:, P[-1].col] = 0
9  for p in P
10     L[p.row, p.col] = 1
11 L = L * get_sign(R)  // keep the sign of each selected entry
12 return L
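
The following is a runnable PyTorch rendering of Algorithm 1, a sketch under the assumption that ties in magnitude are broken by argmax order; `find_localist_alignment` is an illustrative name, not the released code.

```python
import torch

def find_localist_alignment(R: torch.Tensor) -> torch.Tensor:
    # Greedily pick the largest-magnitude entry, record its (row, col) with
    # its sign, then exclude that row and column from further selection.
    R_a = R.abs().clone()
    L = torch.zeros_like(R)
    for _ in range(R.shape[0]):
        row, col = divmod(torch.argmax(R_a).item(), R.shape[1])
        L[row, col] = torch.sign(R[row, col])
        R_a[row, :] = float("-inf")
        R_a[:, col] = float("-inf")
    return L

R = torch.nn.init.orthogonal_(torch.empty(4, 4))
L = find_localist_alignment(R)  # a signed permutation matrix
```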















**A.5. Subspace DAS**





After learning a rotation matrix, we can fix it and learn another rotation matrix on top of it to perform subspace high-level variable alignment. For instance, in our MoNLI experiment, we fix the rotation matrix aligning the Lexical Entailment representation and further test whether we can learn another rotation matrix to align word identity. To achieve this, we initialize the first rotation matrix, which aligns a larger subspace, and freeze its weights along with the rest of the model. Then, we train another rotation matrix that takes the output representations from the first one, using the same training objective defined in Definition 4. The training data for the second rotation matrix differs from that of the first: we use the training data for the high-level model hypothesized to align with the subspace (e.g., the identity of the first argument for the hierarchical equality task, and the identity of the lexeme for the MoNLI task). Note that in both of our experiments, the subspace dimension is half of its parent subspace, for simplicity.
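
A sketch of this two-stage setup with hypothetical sizes (a 16-dim intervention site with a 4-dim hypothesized subspace); the training loop and loss follow Definition 4 and are omitted here.

```python
import torch
import torch.nn as nn

dim, sub_dims = 16, 4  # hypothetical intervention-site and subspace sizes

# First rotation matrix: already trained, now frozen.
r1 = nn.utils.parametrizations.orthogonal(nn.Linear(dim, dim, bias=False))
for p in r1.parameters():
    p.requires_grad = False

# Second rotation matrix: trained on the first one's output representations.
r2 = nn.utils.parametrizations.orthogonal(nn.Linear(dim, dim, bias=False))

def subspace_interchange(base_h, source_h):
    base_r, source_r = r2(r1(base_h)), r2(r1(source_h))
    # Swap the first sub_dims rotated coordinates (the hypothesized subspace).
    mixed = torch.cat([source_r[..., :sub_dims], base_r[..., sub_dims:]], dim=-1)
    # Undo both rotations; orthonormal weights invert by transposition.
    return mixed @ r2.weight @ r1.weight
```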



**Appendix B. Runtime Comparison: Brute-force Search Baseline vs. DAS**





Table 3 shows the runtime comparison between our method and brute-force search under the same settings for each task. Only our approach requires training. We underestimate the runtime of the brute-force search approach by considering only a limited set of possible alignments rather than exhaustively searching over all combinations, which would be computationally intractable (see the BFSmax column of Table 3). The runtime of our approach can be further reduced with early stopping or an optimized training data size, and it is invariant to the number of hypotheses tested.





Table 3: Estimated runtime comparison, in seconds, between our method (DAS) and the brute-force search (BFS) baseline for finding an alignment in a single targeted layer, measured under the same settings. Parenthesized values give the number of hypotheses tested; C(_n_, _m_) denotes the number of ways to choose _m_ of _n_ dimensions. The runtime of DAS is invariant to the number of hypotheses tested.

| Task | BFS | BFSmax | DAS |
| --- | --- | --- | --- |
| Hierarchical Equality | 31 (32) | 6e8 (C(32, 16)) | 502 |
| Monotonicity NLI | 198 (5) | 2e58 (C(768, 32)) | 1105 |





**Appendix C. Remarks on Learned Rotation Matrix**



Figure 5 shows the rotation, in degrees, of the eigenvectors [5] of our learned rotation matrix for each task. We pick the best-performing oracle low-level model for each task for these analyses. Our results suggest that the learned rotations are not trivial, as the majority of basis vectors are rotated. This suggests that the representations of high-level variables are highly distributed, where direct probes over learned activations may fail to reveal the representation's actual causal role.

Figure 5: Rotation, measured in degrees, of the eigenvectors of the learned rotation matrix for each task.

5. The rotation angle associated with each eigenvector of a rotation matrix is the complex phase of its eigenvalue; eigenvectors with eigenvalue 1 remain unchanged by the rotation.
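
For reference, one way to compute such angles (a sketch, not necessarily the analysis script used for Figure 5): the complex phases of a rotation matrix's eigenvalues give the rotation angles, and phases near zero mark directions the rotation leaves unchanged.

```python
import torch

R = torch.nn.init.orthogonal_(torch.empty(16, 16))  # stand-in for a learned R
eigvals = torch.linalg.eigvals(R)     # complex eigenvalues on the unit circle
angles = torch.rad2deg(torch.angle(eigvals))
print(angles)  # mostly nonzero angles indicate a non-trivial rotation
```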



**Appendix D. Common Questions**





In this section, we answer common questions that may be raised while reading this report.





_Is the learned orthogonal matrix orthonormal?_




















Yes. We use the trainable orthogonal matrix implementation from PyTorch's torch.nn.utils.parametrizations. It guarantees that the resulting matrix is orthonormal when the rotation matrix is a full square matrix. Keeping the matrix orthonormal is crucial, since it ensures we focus on rotation rather than scaling. Details can be found at https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html.
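
A quick sanity check of this property, using the documented parametrization with a small hypothetical dimension:

```python
import torch
import torch.nn as nn

rotate = nn.utils.parametrizations.orthogonal(nn.Linear(8, 8, bias=False))
R = rotate.weight
assert torch.allclose(R.T @ R, torch.eye(8), atol=1e-5)  # orthonormal columns
x = torch.randn(8)
assert torch.allclose(rotate(x).norm(), x.norm(), atol=1e-5)  # no scaling
```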





_How stable is the optimization process of the orthogonal matrix?_





We rely on PyTorch's default initialization of the orthogonal matrix. The initialization step is important for finding a good local optimum of the rotation matrix. In our experiments, we use distinct random seeds and pick the best result across runs to address this issue. However, we may consider different initialization schemes in the future.





_Is an orthogonal matrix required to find distributed alignments?_





In principle, the transformation is not required to be an orthogonal matrix. In fact, an orthogonal matrix presumes a linear transformation prior to alignment with a high-level variable, which may not be optimal if the variable is represented on a non-linear sub-manifold of the representation space. In such cases, an orthogonal transformation yields imperfect interchange intervention accuracy, and an invertible, differentiable non-linear transformation may be more suitable (e.g., a normalizing flow or an invertible neural network). In practice, such transformations are computationally difficult to find, and the largely linear connectivity within neural networks makes them unlikely to be required for finding alignments. We leave these investigations to future work.





_What are the prerequisites to deploy this analysis method in practice?_





We assume a partial or complete causal graph of the data generation process. Specifically, we

assume we have interchangeable high-level variables defined for the causal graph. Additionally, we















assume we can sample counterfactual data (i.e., base and source inputs that differ in the values of high-level variables) based on the causal graph.





_How to interpret the result if the interchange intervention accuracy is not 100%?_





When IIA is _α_ < 100%, we rely on the graded notion of _α-on-average_ approximate causal abstraction of Geiger et al. (2023), which coincides directly with IIA. More importantly, the relative IIA rankings among high-level models also show which high-level model better approximates the low-level model.





_Does DAS scale with large foundation models?_





Currently, the number of learnable parameters of the rotation matrix grows polynomially with the size of the hidden representation. For instance, if our intervention site size is 512 in the low-level model, the rotation matrix has 512 × 512 parameters, about 0.26M. If we want to rotate the concatenated token sequence embeddings of a BERT-BASE model in any layer, the full rotation matrix has about 15.4B parameters, which is intractable for standard training infrastructure. To make computation tractable, DAS should be further reducible by representing only the aligned subspace rather than the full rotation matrix: to find a 2-dim distributed representation within a 512-dimensional representation space, we only need to learn approximately 512 × 2 parameters. In addition, we may use a low-rank approximation of the rotation matrix.
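
A sketch of this subspace idea with hypothetical sizes (illustrative names, not the released code): learn only a semi-orthogonal 2 × 512 basis for the aligned subspace and intervene via projection, instead of learning a full 512 × 512 rotation.

```python
import torch
import torch.nn as nn

# Rows of `basis.weight` form an orthonormal basis of the 2-dim subspace.
basis = nn.utils.parametrizations.orthogonal(nn.Linear(512, 2, bias=False))
print(sum(p.numel() for p in basis.parameters()))  # ~1K params vs. ~262K

def subspace_swap(base_h, source_h):
    B = basis.weight  # shape (2, 512), with B @ B.T = I
    project = lambda h: h @ B.T @ B  # projection onto the aligned subspace
    # Replace base's component in the subspace with source's component.
    return base_h - project(base_h) + project(source_h)
```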





_What are some practical usage of DAS?_





Practically, DAS transforms representations into a form that supports interventions: after the learned rotation, interchange interventions yield interpretable model behaviors. DAS is itself a powerful tool for conducting causal abstraction analysis of a neural network.



**Appendix E. Task Performance & Interchange Intervention Accuracy Over Training**

**Epochs**





We additionally measure task performance (Task Acc.) as well as IIA (Int. Acc.) of our alignments over training epochs, for both seen training examples and unseen testing examples. Our results are shown in Figures 6 through 11.










(a) |**N**| = 16





(b) |**N**| = 32





Figure 6: Accuracy over training epochs of the high-level model abstracting both equality relations for the hierarchical equality experiment.















(a) |**N**| = 16





(b) |**N**| = 32





Figure 7: Accuracy over training epochs of the high-level model abstracting the left equality relation for the hierarchical equality experiment.










(a) |**N**| = 16





(b) |**N**| = 32





Figure 8: Accuracy over training epochs of the high-level model abstracting the identity of the first argument for the hierarchical equality experiment.















Figure 9: Accuracy over training epochs of the high-level model abstracting both negation and lexical entailment with |**N**| = 768 for the monotonicity NLI experiment.





Figure 10: Accuracy over training epochs of the high-level model abstracting lexical entailment with |**N**| = 768 for the monotonicity NLI experiment.










Figure 11: Accuracy over training epochs of the high-level model abstracting the identity of the lexeme with |**N**| = 768 for the monotonicity NLI experiment.




