How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Aditya Yedetore∗1, Tal Linzen2, Robert Frank3, R. Thomas McCoy∗4
1Boston University, 2New York University, 3Yale University, 4Princeton University
yedetore@bu.edu, linzen@nyu.edu, robert.frank@yale.edu, tom.mccoy@princeton.edu

Abstract

When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities.
Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers—two types of neural networks without a hierarchical bias—on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.

1 Introduction

Syntax is driven by hierarchical structure, yet we typically encounter sentences as linear sequences of words.
How do children come to recognize the hierarchical nature of the languages they acquire? Some argue that humans must have a hierarchical inductive bias—an innate predisposition for hierarchical structure (Chomsky, 1965, 1980). An alternative view (e.g., Lewis and Elman, 2001) is that no such bias is necessary: there may be clear evidence for hierarchical structure in children's input, so that children would choose hierarchical rules even without a hierarchical bias.

∗ Work done while at Johns Hopkins University.

At first blush, recent work in natural language processing (NLP) may seem to indicate that no hierarchical bias is necessary. Neural networks trained on naturally-occurring text perform impressively on syntactic evaluations even though they have no explicit syntactic structure built into them (e.g., Gulordava et al., 2018; Wilcox et al., 2018; Warstadt et al., 2020a). However, these results do not provide strong evidence about the learning biases required to learn language from the data available to humans, because these models receive very different training data than humans do (Warstadt and Bowman, 2022). First, NLP models are typically trained on far more data than children receive, so models have more opportunities to encounter rare syntactic structures (Linzen, 2020). Second, most training sets in NLP are built from Internet text (e.g., Wikipedia), which differs qualitatively from the utterances that children typically hear; e.g., sentences in Wikipedia are on average 25 words long (Yasseri et al., 2012), compared to 5 words for sentences in the North American English subset of the CHILDES corpus of child-directed speech (MacWhinney, 2000).

In this work, to evaluate if neural networks without a hierarchical bias generalize like children do, we train models on text1 comparable to the sentences in children's linguistic input: English data from CHILDES. We then analyze what they have learned about the relationship between declarative sentences, such as (1a), and their corresponding yes/no questions, such as (1b):

(1) a. Those are your checkers.
    b. Are those your checkers?

Crucially, nearly all naturally-occurring yes/no questions are consistent with two rules: one based on hierarchical structure (2), and one based on linear order (3):2,3

(2) HIERARCHICALQ: The auxiliary at the start of a yes/no question corresponds to the main auxiliary of the corresponding declarative.

(3) LINEARQ: The auxiliary at the start of a yes/no question corresponds to the first auxiliary of the corresponding declarative.

1Section 6.5 discusses other input types (e.g., visual input).

arXiv:2301.11462v1 [cs.CL] 26 Jan 2023
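To make the contrast concrete, the two rules can be sketched as toy string transformations. This sketch is purely illustrative and not from the paper: the auxiliary list and the `main_aux_index` argument are our own stand-ins for the structural knowledge a learner would need in order to locate the main-clause auxiliary.

```python
# Illustrative sketch of the two candidate rules as toy transformations.
# AUXILIARIES and main_aux_index are hypothetical stand-ins; identifying
# the main auxiliary in general requires hierarchical structure.
AUXILIARIES = {"has", "can", "is", "are", "will", "does"}

def linear_q(words):
    # LINEARQ: front the FIRST auxiliary in the declarative.
    first = next(i for i, w in enumerate(words) if w in AUXILIARIES)
    return [words[first]] + words[:first] + words[first + 1:]

def hierarchical_q(words, main_aux_index):
    # HIERARCHICALQ: front the MAIN-CLAUSE auxiliary (index supplied here).
    return ([words[main_aux_index]] + words[:main_aux_index]
            + words[main_aux_index + 1:])

decl = "the boy who has talked can read".split()
print(" ".join(linear_q(decl)))           # has the boy who talked can read (ungrammatical)
print(" ".join(hierarchical_q(decl, 5)))  # can the boy who has talked read
```

On most naturally occurring declaratives the first auxiliary is also the main auxiliary, so the two functions agree; they diverge only on sentences with an auxiliary inside a subject relative clause, which is what makes such sentences diagnostic.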
Despite the scarcity of evidence disambiguating these rules, children reliably favor HIERARCHICALQ (Crain and Nakayama, 1987), albeit with occasional errors consistent with LINEARQ (Ambridge et al., 2008). Yes/no questions thus are a prime candidate for an aspect of English syntax for which human-like generalization requires a hierarchical bias. We evaluate yes/no question performance in LSTMs and Transformers, two neural-network architectures that have no inherent hierarchical inductive bias (McCoy et al., 2020; Petty and Frank, 2021). These architectures employ different computational mechanisms, so consistent results across both would indicate that our results are not due to idiosyncrasies of one particular architecture.

To investigate if models generalize more consistently with the hierarchical or linear rule, we evaluate them on cases where the rules make different predictions, such as (4): under HIERARCHICALQ, the question that corresponds to (4a) is (4b), whereas under LINEARQ it is (4c).

(4) a. The boy who has talked can read.
    b. Can the boy who has talked read?
    c. *Has the boy who talked can read?

We find that across several ways of framing the learning task, models fail to learn HIERARCHICALQ. Instead, they generalize in ways that depend on linear order and on the identities of specific words. These results suggest that children's training data, if taken to be words alone, may not contain enough hierarchical cues to encourage hierarchical generalization in a learner without a hierarchical bias. Thus, explaining human acquisition of syntax may require postulating that humans have stronger inductive biases than those of LSTMs and Transformers, or that information other than word sequences plays a crucial role.4

2In past work these rules have been framed as transformations named MOVE-FIRST and MOVE-MAIN (McCoy et al., 2020). We instead follow Berwick et al. (2011) and frame the child's knowledge as a relationship between sentences.

3Though these two rules are the most prominent in prior literature, other rules are possible; see Section 5.2.

2 Background

Though HIERARCHICALQ and LINEARQ often make the same predictions, the evidence in children's input may still favor HIERARCHICALQ. The most straightforward evidence would be utterances that directly disambiguate the rules, such as (4b). Pullum and Scholz (2002) show that disambiguating examples appear in the Wall Street Journal, in literature, and arguably in child-directed speech, but direct evidence may still be too rare to robustly support HIERARCHICALQ (Legate and Yang, 2002). Nonetheless, children might conclude that yes/no questions obey HIERARCHICALQ rather than LINEARQ based on indirect evidence—evidence that other syntactic phenomena are hierarchical (Mulligan et al., 2021). To test if the cues favoring HIERARCHICALQ render a hierarchical bias unnecessary, we study how well non-hierarchically-biased models acquire English yes/no questions.
Several prior papers have used this approach, but their training data differed from children's input in important ways: some used synthetic datasets (Lewis and Elman, 2001; Frank and Mathis, 2007; Clark and Eyraud, 2007; McCoy et al., 2020), others used massive Internet corpora (Lin et al., 2019; Warstadt and Bowman, 2020), and those that used child-directed speech simplified the data by replacing each word with its part of speech (Perfors et al., 2011; Bod et al., 2012). We used training data closer to children's input, namely sentences from CHILDES with word identities preserved, rather than being converted to parts of speech. Two other recent works have also trained neural networks on CHILDES data (Pannitto and Herbelot, 2020; Huebner et al., 2021), but neither investigated yes/no questions.

One particularly important reason for training models on CHILDES is that, in prior work, different types of training data have yielded diverging results: recent models trained on synthetic data failed to properly acquire yes/no questions (McCoy et al., 2020; Petty and Frank, 2021), whereas ones trained on large Internet corpora scored well on evaluations of yes/no questions (Lin et al., 2019; Warstadt and Bowman, 2020).
Given these differing results, it is not clear from past work how these models would generalize when faced with the type of data that children receive.

4Our datasets and models will be uploaded online soon to facilitate further research.

3 Overview of Experimental Setup

We evaluated models on yes/no questions in two ways. First, we used relative acceptability judgments (Experiment 1): we trained neural networks on the task of language modeling (predicting the next word at every point in the sentence) and evaluated whether they assigned a higher probability to sentences consistent with LINEARQ or HIERARCHICALQ. Our second approach was based on text generation (Experiment 2): we trained networks to take in a declarative sentence and output the corresponding question, and tested whether they generalized in a way more consistent with LINEARQ or HIERARCHICALQ. Under both framings, we trained models on data from CHILDES and evaluated them on targeted datasets constructed to differentiate LINEARQ and HIERARCHICALQ.
4 Experiment 1: Relative Acceptability

4.1 Dataset

To train models on data as similar as possible to the sentences children receive, we extracted data from CHILDES (MacWhinney, 2000). We used the North American English portion. We wished to replicate children's input, so we excluded the children's own utterances, leaving a 9.6-million-word corpus. We allocated 90% of the data to training, 5% to validation, and 5% to testing. We replaced words that appeared two or fewer times in the training set with <unk>, giving a replacement rate of 0.3%. See Appendix A for more details.
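The rare-word replacement step can be sketched as follows; the function name and the example sentences are our own, but the threshold (replace words appearing two or fewer times in training) matches the procedure described above:

```python
from collections import Counter

def replace_rare_words(train_tokens, tokens, min_count=3, unk="<unk>"):
    # Keep only words seen at least min_count times in the training set;
    # map everything else to the unk token.
    counts = Counter(train_tokens)
    vocab = {w for w, c in counts.items() if c >= min_count}
    return [w if w in vocab else unk for w in tokens]

train = "do you want the ball ? do you want the block ? do you want the cup ?".split()
print(" ".join(replace_rare_words(train, train)))
# "ball", "block", and "cup" each appear only once, so each becomes <unk>
```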
4.2 Task: Next-Word Prediction

We trained models on next-word prediction, also known as language modeling. We chose this task for two reasons. First, it is clear empirically that next-word prediction can teach neural networks a substantial amount about syntax (e.g., Hu et al., 2020). Second, it is plausible that humans perform some version of next-word prediction during sentence processing (Altmann and Kamide, 1999; Hale, 2001; Levy, 2008; Kutas et al., 2011) and that such prediction may play a role in acquisition (Elman, 1991). Thus, while next-word prediction is certainly not the only goal of human language learners, we view this task as a reasonable first step in emulating human language acquisition.

4.3 Architectures

We used two neural network architectures: LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017). We chose these models for two reasons. First, they have been the most successful architectures in NLP. Thus, we have reason to believe that, of the types of low-bias models invented, these two are the ones most likely to discover linguistic regularities in our CHILDES training data. Second, the two architectures process sequences very differently (via recurrence vs. via attention). Thus, if both generalize similarly, we would have evidence that what was learned is strongly evidenced in the data, rather than due to a quirk of one particular architecture.

For our LSTMs, we used 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10. For our Transformers, the corresponding values were 4, 800, 10, 0.2, and 5, and we used 4 attention heads. We chose these values based on a hyperparameter search described in Appendix B. All following results are averaged across 10 runs with different random seeds.

4.4 Results: Language Model Quality

Before testing models on questions, we used perplexity to evaluate how well they captured the basic structure of their training domain. As a baseline, we used a 5-gram model with Kneser-Ney smoothing (Kneser and Ney, 1995) trained with KenLM (Heafield, 2011). The test set perplexity for the 5-gram baseline was 24.37, while the average test set perplexity for the LSTMs and Transformers was 20.05 and 19.69, respectively. For perplexity, lower is better. Thus, both neural network types outperformed the strong baseline of a smoothed 5-gram model, showing that they performed well at capturing the basic statistics of their training domain.5
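For reference, perplexity is the exponential of the average per-token negative log-probability; the sketch below (with made-up probabilities, not our models' actual outputs) shows the computation:

```python
import math

def perplexity(token_probs):
    # exp of the mean negative log-probability the model assigns each token
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model assigning every token probability 1/24.37 has perplexity 24.37,
# so lower perplexity means higher probability on the observed text.
print(round(perplexity([1 / 24.37] * 50), 2))  # 24.37
```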
4.5 General Syntactic Evaluation

As an additional way to check the validity of our setup, we evaluated our models on the Zorro dataset (Huebner et al., 2021), which is based on BLiMP (Warstadt et al., 2020a). Zorro contains 24 evaluations, each of which targets one syntactic phenomenon (e.g., subject-verb agreement) and involves sentence pairs for which one sentence is grammatical and the other is minimally different but ungrammatical (e.g., by violating subject-verb agreement).

5 For an intuitive illustration of our model quality, see the sample text generated by them in Appendix H.
A model is said to get a sentence pair correct if it assigns a higher probability to the grammatical sentence than to the ungrammatical one. Huebner et al. (2021) showed that Transformers trained on CHILDES data can perform well on many of the Zorro categories, so if our setup is sound, our own models should also perform well on Zorro. See Appendix D for full results. For each syntactic phenomenon, most model re-runs scored above 0.9, though at least one scored near the chance level of 0.5. For each re-run of each architecture there is at least one phenomenon for which the model scores over 0.97, and many models score 1.00 on some phenomena.
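The forced-choice criterion can be sketched as follows; the scoring function and the minimal pair here are stand-ins for illustration, not drawn from Zorro itself:

```python
def pair_correct(logprob, grammatical, ungrammatical):
    """A minimal pair counts as correct when the model assigns the
    grammatical sentence the higher (log-)probability."""
    return logprob(grammatical) > logprob(ungrammatical)

# Stand-in total log-probabilities for one subject-verb agreement pair:
toy_scores = {
    "the dogs do bark .": -12.3,    # grammatical
    "the dogs does bark .": -15.8,  # ungrammatical
}
print(pair_correct(toy_scores.get, "the dogs do bark .", "the dogs does bark ."))  # True
```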
Thus, all models score well on at least some syntactic evaluations, attaining results comparable to those of Huebner et al. (2021) and providing additional support for the validity of our setup. We now test whether these models have also successfully learned the specific phenomenon that we focus on: yes/no questions, a phenomenon not included in the Zorro dataset.

4.6 Yes/No Questions

Evaluation Dataset: Forced-Choice Acceptability Judgments As a first way to test whether our models have learned HIERARCHICALQ, we evaluate whether they assign higher probabilities to sentences consistent with HIERARCHICALQ than to minimally different sentences that are ungrammatical. For this purpose, we create an evaluation dataset containing groups of 6 questions, each created by starting with a declarative sentence, such as (5), and then deleting the first, main, or neither auxiliary, and inserting the first or main auxiliary at the front of the sentence.6
For instance, in (6b), the first auxiliary has been preposed and the main auxiliary has been deleted.

(5) The dog who has seen a boy did try.

(6) a. Has the dog who seen a boy did try?
b. Has the dog who has seen a boy try?
c. Has the dog who has seen a boy did try?
d. Did the dog who seen a boy did try?
e. Did the dog who has seen a boy try?
f. Did the dog who has seen a boy did try?

6 It would be possible to also use a 'prepose other' category, where an auxiliary not in the input is inserted (McCoy et al., 2018). We excluded this category because using it would raise complications about which 'other' auxiliary to choose.

Within each group, we evaluate which question the model assigned the highest probability to. If a model has correctly learned HIERARCHICALQ, it should assign the highest probability to the question consistent with this rule, such as (6e). Several past papers about yes/no questions have used the same general approach (Lewis and Elman, 2001; Reali and Christiansen, 2005).
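The construction of each six-question group can be sketched mechanically: cross the two 'prepose' choices with the three 'delete' choices. This sketch (not the authors' code) assumes the token indices of the two auxiliaries are known from the grammar; applied to declarative (5), it reproduces candidates (6a) through (6f) in order:

```python
from itertools import product

def candidate_questions(tokens, first_aux_i, main_aux_i):
    """Build the 6 candidates: prepose the first or main auxiliary,
    and delete the first, main, or neither auxiliary."""
    candidates = []
    for prepose_i, delete_i in product([first_aux_i, main_aux_i],
                                       [first_aux_i, main_aux_i, None]):
        # Drop the final period, remove the deleted auxiliary (if any),
        # and place the preposed auxiliary at the front.
        body = [t for i, t in enumerate(tokens[:-1]) if i != delete_i]
        candidates.append(" ".join([tokens[prepose_i]] + body + ["?"]))
    return candidates

decl = "the dog who has seen a boy did try .".split()
for q in candidate_questions(decl, decl.index("has"), decl.index("did")):
    print(q)
```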
However, these papers considered only pairs of sentences, whereas we consider groups of 6 to allow for a wider range of possible generalizations that a model might have learned. To generate the declaratives from which we formed groups of 6 questions, we used the context-free grammar (CFG) in Appendix F, which has a vocabulary selected from the most common words in CHILDES. Each declarative generated by the CFG (e.g., (5)) contains two auxiliary verbs: one before the sentence's main verb and one inside a relative clause modifying the subject. One potential problem is that some questions are consistent with both HIERARCHICALQ and LINEARQ. For instance, (7a) can be formed from (7b) with the HIERARCHICALQ-consistent steps PREPOSE-MAIN, DELETE-MAIN, or from (7c) with the LINEARQ-consistent steps PREPOSE-FIRST, DELETE-MAIN.

(7) a.
Did the boy who did see the person laugh?
b. The boy who did see the person did laugh.
c. The boy who did see the person can laugh.

To avoid this problem, we required that the auxiliary before the main verb select for a different verb inflection than the one in the relative clause. For instance, in (5), did selects for the verb's bare form, while has selects for the past participle form. Thus, the auxiliary at the start of the question could only correspond to whichever auxiliary in the declarative has the same selectional properties.7

Results: Relative Question Acceptability For each sentence group, we used per-word perplexity to see which of the 6 candidates the models scored most highly.8
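Concretely, the selection criterion amounts to picking the candidate with the lowest per-word perplexity. A minimal sketch with toy per-token log-probabilities standing in for actual model scores:

```python
import math

def per_word_perplexity(token_logprobs):
    """Exp of the mean negative log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def preferred_question(candidates, score):
    """Pick the candidate the model scores most highly, i.e., the one
    with the lowest per-word perplexity."""
    return min(candidates, key=lambda q: per_word_perplexity(score(q)))

# Toy per-token log-probabilities for two of the six candidates:
toy = {
    "did the dog who has seen a boy try ?": [-2.0] * 10,
    "has the dog who seen a boy did try ?": [-3.5] * 10,
}
print(preferred_question(list(toy), toy.get))  # did the dog who has seen a boy try ?
```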
For both LSTMs and Transformers, the correct category (PREPOSE MAIN, DELETE MAIN) was the second-rarest choice, and the most frequent preference was for PREPOSE FIRST, DELETE MAIN, a category that is only partially correct because it references linear order in addition to hierarchical structure (Figure 1).

7 A model could succeed on this dataset with a rule that relates the auxiliary at the start of a question with the last auxiliary in the declarative form. Since our models fail on this dataset, this consideration is not relevant here.

8 We also explored evaluating the models with a more complex measure called SLOR, where we additionally normalized scores by word frequency (Pauls and Klein, 2012). Both metrics produced qualitatively similar results, so we only report the simpler metric here. See Appendix C.1.

[Figure 1 bar chart: preference proportions (0 to 1) for the six question types (Prepose First/Main crossed with Delete First/Main/None), shown separately for LSTMs and Transformers; example declarative: "The person who has seen this boy did try."]

Figure 1: The question types that models prefer when offered a choice between 6 questions. These 6 questions are formed by modifying a declarative with a relative clause on the subject according to 'prepose' and 'delete' rules. The correct category is PREPOSE MAIN, DELETE MAIN. Within each architecture, the proportions across all 6 question types necessarily sum to 1. Each bar shows the average across 10 model re-runs, with single-standard-deviation error bars.
Thus, neither model displays preferences consistent with the correct, fully hierarchical generalization. The two model types showed similar scores, which may mean that these results are largely driven by the statistics of the training data that both models share, rather than by the models' differing inductive biases. One of the incorrect categories, PREPOSE MAIN, DELETE NONE (e.g., (6f)), only requires reference to hierarchical structure, so it could be said to capture the hierarchical nature of yes/no questions. Nonetheless, this category was also relatively rare: combining the two fully hierarchical possibilities (PREPOSE MAIN, DELETE MAIN and PREPOSE MAIN, DELETE NONE) accounts for only 26% of LSTM preferences and 27% of Transformer preferences, meaning that both models favored a sentence generated at least partially based on linear order over 70% of the time. There are two likely reasons why our models performed so poorly on yes/no questions when they performed well on many of the phenomena in the Zorro dataset (Section 4.5).
First, yes/no questions may simply be harder to learn than the other phenomena; indeed, yes/no questions are often singled out as being likely to pose difficulties for a general-purpose learner (Section 1). Alternatively, it might be that the six-way evaluation we used for yes/no questions is stricter than the binary judgments used for the Zorro dataset.

5 Experiment 2: Question Formation

The previous experiment was designed to operate entirely in the next-word-prediction paradigm, motivated by arguments from past literature about the strength and relative ecological validity of next-word prediction as a training objective (see Section 4.2). However, one of this setup's shortcomings is that HIERARCHICALQ describes correspondences between questions and declaratives, but Experiment 1 focused on questions alone, with no consideration of declaratives.
In this second experiment, to better capture the fact that HIERARCHICALQ is defined over sentence pairs, we trained models on a sentence-pair task: transforming a declarative into a question (McCoy et al., 2020). For instance, given the child did learn, the model must produce did the child learn ? We evaluated models in two ways. First, we checked whether the models' predictions fully matched the correct questions. This full-sentence evaluation is demanding, and models might fail it for reasons unrelated to our core hypotheses. For instance, given the child did learn, the model might produce did the baby learn, which would be marked as incorrect even though this lexical error is not relevant to HIERARCHICALQ. As a metric that is less demanding and that also more directly targets HIERARCHICALQ, we measured whether the first word of the output question corresponded to the first or main auxiliary of the input.
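The two metrics can be sketched as follows (a schematic over whitespace-tokenized strings, not the authors' evaluation code); the lexical-error case above illustrates why the lenient metric is useful:

```python
def evaluate_question(predicted, gold):
    """Full-sentence accuracy requires an exact match; first-word accuracy
    only checks the first token of the generated question."""
    pred, ref = predicted.split(), gold.split()
    return {"full_sentence": pred == ref,
            "first_word": len(pred) > 0 and pred[0] == ref[0]}

# Wrong noun, but the correct (main) auxiliary was preposed:
print(evaluate_question("did the baby learn ?", "did the child learn ?"))
# {'full_sentence': False, 'first_word': True}
```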
Critically, LINEARQ and HIERARCHICALQ make different predictions for the first word of a question so long as the two auxiliaries are distinct: see (4). Because this framing lets the model freely generate its output (instead of choosing one option from a pre-specified set), we allow for the possibility that the rule learned by models may not be identical to any of our manually generated hypotheses. Solely training models to perform this transformation involves the implicit assumption that, when children acquire English yes/no questions, the only evidence they leverage is English yes/no questions. However, other types of sentences may also provide useful evidence (Pearl and Mis, 2016): e.g., wh-questions also illustrate subject-auxiliary inversion (Pullum and Scholz, 2002), while, more generally, many types of sentences could provide evidence that the syntax as a whole is hierarchical (Perfors et al., 2011).
To explore this possibility, we compared a condition in which models were only trained to perform question formation (the QUESTION FORMATION condition) to another in which models were first pre-trained on next-word prediction, with the exact same setup as in Experiment 1, before being further trained to perform question formation (the NEXT-WORD PREDICTION + QUESTION FORMATION condition).

5.1 Dataset

Training Set Our question formation dataset consisted of the yes/no questions in the CHILDES Treebank (Pearl and Sprouse, 2013a,b), a parsed subset of CHILDES containing 189,359 sentences. We used these parses to extract all yes/no questions from the CHILDES Treebank and derive their corresponding declarative forms. The resulting declarative was concatenated with the question. An example declarative/question pair is:

(8) you can spell your name . can you spell your name ?
The training set consisted of 10,870 declarative/question pairs, the validation set 1,360 pairs, and the test set 1,358 pairs (we will call this test set the randomly-partitioned test set to distinguish it from two other evaluation sets discussed below). We trained models to perform next-word prediction on such concatenated sentence pairs. The first-word accuracy of the trained model was then computed based on the model's prediction for the word after the period in each test example, while the full-sentence accuracy was computed based on its predictions for all tokens after the period. All questions in the randomly-partitioned test set were withheld from both the question-formation training set and the next-word-prediction training set. Thus, models had not seen these test examples in their training, even in the NEXT-WORD PREDICTION + QUESTION FORMATION condition, in which they were trained on both tasks.

Evaluation Sets In addition to the randomly-partitioned test set, we used CFGs to generate two targeted evaluation sets.
As in Experiment 1, we selected the CFGs' vocabulary from common words in our CHILDES data. In sentences generated from the first CFG, the sentence's first auxiliary was also its main auxiliary, so LINEARQ and HIERARCHICALQ make the same predictions. (8) exemplifies the type of declarative-question pair in this dataset. We call this dataset FIRST-AUX = MAIN-AUX. For sentences generated by the second CFG, the main auxiliary was the second auxiliary in the sentence; thus, these examples disambiguate LINEARQ and HIERARCHICALQ. Example (9) is a declarative-question pair from this evaluation set.

(9) a boy who is playing can try . can a boy who is playing try ?
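The HIERARCHICALQ transformation used to build these evaluation sets can be sketched as follows, assuming the main auxiliary's position is known from the parse (a simplification for illustration, not the authors' implementation); applied to (9)'s declarative, it yields (9)'s question:

```python
def hierarchicalq_question(declarative, main_aux_index):
    """Prepose the MAIN auxiliary, delete it from its original position,
    and swap the final period for a question mark."""
    tokens = declarative.split()
    aux = tokens.pop(main_aux_index)
    return " ".join([aux] + tokens[:-1] + ["?"])

decl = "a boy who is playing can try ."
print(hierarchicalq_question(decl, decl.split().index("can")))
# can a boy who is playing try ?
```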
We call this dataset FIRST-AUX ≠ MAIN-AUX. See Appendix F for the CFGs used. We sampled 10,000 declarative sentences from these grammars and transformed them into questions according to HIERARCHICALQ to create our evaluation sets.

5.2 Results

Randomly-Partitioned Test Set The LSTMs and Transformers in the QUESTION FORMATION condition performed well on the randomly-partitioned test set, with a full-question accuracy of 0.68 ± 0.014 and 0.87 ± 0.005 (averaged across 10 reruns with margins indicating one standard deviation).
The models in the NEXT-WORD PREDICTION + QUESTION FORMATION condition performed similarly well, with a full-question accuracy of 0.66 ± 0.008 for the LSTMs and 0.93 ± 0.004 for the Transformers. For both model types, the first-word accuracy for the question was nearly 1.00 across re-runs. We suspect that Transformers have a stronger full-question accuracy because producing the question requires copying all words from the declarative (but in a different order). Copying is likely easy for Transformers because they can attend to specific words in the prior context, while our LSTMs must compress the entire context into a fixed-size vector, which may degrade the individual word representations.
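The two accuracy metrics and the "mean ± one standard deviation" reporting style used above can be sketched as follows. The function names and the per-run numbers are our own illustrations, not the paper's code or results:

```python
from statistics import mean, stdev

def full_question_accuracy(predicted: str, gold: str) -> bool:
    # Strict metric: the entire output question must match exactly.
    return predicted == gold

def first_word_accuracy(predicted: str, gold: str) -> bool:
    # Lenient metric: only the first word of the question must match,
    # which is the word that distinguishes LINEARQ from HIERARCHICALQ.
    return predicted.split()[0] == gold.split()[0]

# Aggregating over model re-runs (made-up accuracies for illustration);
# stdev() is the sample standard deviation across runs.
run_accuracies = [0.67, 0.66, 0.69, 0.70, 0.68]
report = f"{mean(run_accuracies):.2f} ± {stdev(run_accuracies):.3f}"
# report == "0.68 ± 0.016"
```

Each accuracy in the text is such a per-run proportion averaged over 10 re-runs.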
Because both model types achieved near-perfect performance on the crucial first-word accuracy metric, we conclude that our models have successfully learned how to handle the types of declarative/question pairs that we extracted from the CHILDES Treebank.

Targeted Evaluation Sets
On our two targeted evaluation sets, models almost never produced the complete question correctly. Turning to the more lenient measure of first-word accuracy, for examples on which LINEARQ and HIERARCHICALQ predict the same first output word (FIRST-AUX = MAIN-AUX), the Transformer trained only on question formation performed strongly, while the Transformer trained on both tasks, and both LSTMs, performed reasonably well (Figure 2; note that models could choose any word in their vocabulary to begin the output, so chance performance is near 0.00).

[Figure 2: Proportion of model-produced questions that were consistent with the linear rule LINEARQ and/or the hierarchical rule HIERARCHICALQ. In the FIRST-AUX = MAIN-AUX dataset, the first auxiliary is the main auxiliary, so both LINEARQ and HIERARCHICALQ produce the correct question string. The FIRST-AUX ≠ MAIN-AUX dataset disambiguates the two rules. Each bar shows the average across 10 model re-runs, with error bars showing one standard deviation.]

For the crucial cases that disambiguate the two rules (FIRST-AUX ≠ MAIN-AUX), both model types in both conditions performed more consistently with LINEARQ than with HIERARCHICALQ. Training on next-word prediction before question formation had inconsistent effects: it modestly increased the likelihood of hierarchical generalization in LSTMs, yet it decreased that likelihood in Transformers.

Lexical Specificity
In Appendix G, we further break down the FIRST-AUX ≠ MAIN-AUX results based on the auxiliaries' identity. The generalization pattern varied considerably across auxiliary pairs. For some auxiliary pairs, the auxiliary chosen to begin the question was usually neither auxiliary in the input (Figure 3, left facet).
For other pairs, models usually chose the first auxiliary, regardless of lexical identity (Figure 3, middle facet). Finally, for some pairs, the auxiliary chosen was usually the same one, regardless of whether it was the first or main auxiliary (Figure 3, right facet).

[Figure 3: Lexical specificity in model behavior. Each facet considers only the evaluation examples containing the two auxiliaries in the facet heading; e.g., the "can and do" facet includes, for example, the inputs "the children who can play do learn" and "the children who do play can learn". The bars show the proportion of model predictions for the first word of the output that are consistent with four potential movement rules, averaged across 10 model re-runs and with error bars showing one standard deviation above and below the mean. This plot shows only an illustrative subset of auxiliary pairs for one model type (Transformers in the NEXT-WORD PREDICTION + QUESTION FORMATION condition); see Appendix G for the full results.]

Generalization based on lexical identity is rarely considered in past discussions of English yes/no question acquisition. Of the papers on this phenomenon (see Clark and Lappin (2010), Lasnik and Lidz (2017), and Pearl (2021) for overviews), the only one to our knowledge that discusses lexical specificity is Frank and Mathis (2007), which studied models trained on synthetic data.
Our results highlight the importance of testing for a broad range of generalizations: Lexically-specific hypotheses appear attractive for our low-bias learners, so an account of what biases can yield human-like learning should rule out these lexically-specific hypotheses along with linear ones.

6 Discussion

We have found that, when trained on child-directed speech, two types of standard neural networks performed reasonably well at capturing the statistical properties of the dataset, yet their handling of English yes/no questions was more consistent with the linear rule LINEARQ than with the correct hierarchical rule HIERARCHICALQ. These results support the hypothesis that a learner requires a hierarchical bias to consistently learn hierarchical rules when learning from the linguistic data children receive.

6.1 Takeaways for LSTMs and Transformers

When trained on massive corpora, LSTMs and Transformers perform impressively on some syntactic evaluations. Based on such results, it is tempting to conclude that the general-purpose biases of these architectures suffice to yield human-like syntax acquisition.
Our results caution against this interpretation: When we trained the same architectures on data more similar to children's input, they failed to learn the structure of English yes/no questions. Thus, at least when learning from text alone, LSTMs and Transformers do not display human-like language learning: they do not generalize as humans do from the data that humans receive.

6.2 Takeaways for the Poverty of the Stimulus Debate

Below we specify four possible positions in the poverty-of-the-stimulus debate about the adequacy of children's input for inducing hierarchical rules in low-bias learners, arranged from assuming the most limited to the most expansive innate component:

(10) Any inductive biases: Any learner trained on CHILDES will generalize like humans do.

(11) Any inductive biases that enable in-distribution learning: Any learner that captures the statistical patterns of the training distribution will generalize to HIERARCHICALQ.

(12) Some non-hierarchical inductive biases: Some general-purpose learners will generalize as humans do, but others will not.
(13) Only a hierarchical inductive bias: No general-purpose learners will generalize as humans do: hierarchical biases are necessary.

Position (10) is clearly false: many learners cannot learn certain aspects of syntax, no matter their training data (e.g., bigram models cannot capture long-distance dependencies). Our work shows that position (11) is also false: Though our models performed well on the in-distribution test sets of Experiments 1 and 2, they did not generalize in human-like ways. This leaves positions (12) and (13), which our existing results cannot differentiate. It is possible that only learners with hierarchical inductive biases can demonstrate human-like language learning (position (13)), but also that some learners without this bias can succeed (position (12)), just not the learners we tested.
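The bigram limitation invoked above can be made concrete with a small sketch (the sentences below are our own toy examples, not from the paper's data): a bigram model conditions each word only on its immediate predecessor, so the information needed to enforce a long-distance dependency never enters its conditioning context.

```python
def bigrams(words):
    # Adjacent word pairs: the only context a bigram model conditions on.
    return list(zip(words, words[1:]))

# A long-distance subject-verb dependency: "boys ... are" vs. "*boys ... is".
good = "the boys who the girl likes are tall".split()
bad  = "the boys who the girl likes is tall".split()

# The two strings differ only in bigrams local to the verb; the distant
# subject "boys" appears in none of the differing pairs, so a bigram
# model's preference between them cannot depend on the subject at all.
diff = set(bigrams(good)) ^ set(bigrams(bad))
```

However the model's bigram probabilities are estimated, its score for each sentence factors over these adjacent pairs, which is why the agreement fact is invisible to it.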
For further discussion of how computational modeling can bear on learnability arguments, see Wilcox et al. (2021).

One potential solution supporting position (12) would be that learners leverage the hierarchical structure of some syntactic phenomenon to help conclude that other, impoverished phenomena are hierarchical (Perfors et al., 2011; Mulligan et al., 2021). However, our results from Experiment 2 show that giving learners access to a wider range of phenomena does not automatically improve hierarchical generalization: Models' performance on question formation was not substantially improved (and in some cases was even harmed) when they were trained not just on question formation but also on next-word prediction on the entire CHILDES corpus.
Thus, although training on text that contains many linguistic phenomena can give models a hierarchical inductive bias when the training is done over large Internet corpora (Warstadt and Bowman, 2020; Mueller et al., 2022), our results provide evidence that this conclusion does not extend to models trained on child-directed speech.

Though both (12) and (13) remain as possibilities, we believe that our results more strongly support (13). Of all currently available general-purpose learners, LSTMs and Transformers are the best at modeling the probabilistic structure of linguistic data. Therefore, if child-directed speech contains clear evidence for the hierarchical nature of yes/no questions (evidence so clear that at least some general-purpose learners could recognize it), it is likely that LSTMs and Transformers would be among the set of general-purpose learners that could use this evidence to make hierarchical generalizations in our experiments.
The fact that these architectures instead predominantly favored linear generalizations therefore supports position (13).

6.3 How to test for HIERARCHICALQ

We have argued that an ideal simulation of the acquisition of English yes/no questions would have the following properties:

(14) The training data should be similar to children's linguistic input.

(15) The training task should be ecologically valid.

(16) The evaluation method should focus on correspondences between pairs of sentences rather than the acceptability of individual sentences.

Property (14) motivated our use of text from CHILDES as the training data. We are not aware of a single experimental setup that fully satisfies both Property (15) and Property (16), so we instead used two experiments, each one focusing on one property at the cost of satisfying the other one less well.
Experiment 1 works entirely in the context of the relatively ecologically valid task of next-word prediction, motivated by Property (15), but its evaluation is only based on the acceptability of individual sentences, failing to satisfy Property (16). Experiment 2 fully satisfies Property (16) by using an evaluation based on sentence pairs, at the cost of including a less ecologically-valid training component based on sentence transformations. Both experiments yielded qualitatively similar conclusions (failure of models to learn HIERARCHICALQ).

6.4 Quantity of Training Data

The size of our training set was plausibly within the range from which children can acquire HIERARCHICALQ. Crain and Nakayama (1987) found that children between ages 3 and 5 behaved much more consistently with HIERARCHICALQ than LINEARQ. Though these children made many errors, their errors were usually compatible with a hierarchical rule (e.g., PREPOSE MAIN, DELETE NONE errors: see Section 4.6). By age 3, American children receive approximately 10 to 33 million words of input (Hart and Risley, 1995), and the 8.5 million words of our training set is close to the lower end of that range. Thus, it is reasonable to suppose that a learner that generalizes as children do would favor HIERARCHICALQ after being trained on our training set. Our models, in contrast, regularly preferred sentences generated in ways based on linear order (Figures 1 and 2), a category of error that is very rare in children (Crain and Nakayama, 1987; Ambridge et al., 2008).
In order to give our models the strongest chance of generalizing correctly, it would have been ideal to provide a quantity of data closer to 33 million words, the high end of Hart and Risley's range. Our data source did not contain enough text to make this possible, but future work could investigate ways to augment the data using other sources.

6.5 Type of Training Data

Our training set was both qualitatively and quantitatively closer to children's input than the massive Internet corpora standardly used to train models in NLP (Linzen, 2020). This difference is important: Lin et al. (2019), Warstadt and Bowman (2020), and Mueller et al. (2022) all found evidence that models trained on large Internet corpora performed well on yes/no question evaluations, whereas our models trained on CHILDES performed poorly, though we cannot be certain the differences in results are solely due to differences in the training data, since these prior papers used different model architectures, training tasks, and evaluation setups.
Though our training data are more similar to children's input than massive Internet corpora are, differences remain. Our experiments omit several aspects of a child's experience that might help them acquire syntax, such as prosody (Morgan and Demuth, 1996), visual information (Shi et al., 2019), and meaning (Fitz and Chang, 2017; Abend et al., 2017), all of which might correlate with syntactic structure and thus provide cues to the correct hierarchical generalization. On the other hand, our dataset might present an easier learning scenario than children are faced with, because children must learn to segment the speech stream into words (Lakhotia et al., 2021), while our models do not need to.
Further, though real-world grounding could provide helpful information, learners might struggle to leverage this information due to difficulty determining what is being discussed in the physical world (Gleitman et al., 2005).

7 Conclusion

In this work, we trained two types of neural networks (LSTMs and Transformers) on sentences of the types available to children and then analyzed what they had learned about English yes/no questions. Across several evaluation paradigms, these models failed to generalize in human-like ways: Humans display hierarchical generalization, while the models' generalization was instead based on linear order and individual words' identities. Our results support the hypothesis that human-like linguistic generalization requires biases stronger than those of LSTMs and Transformers. Future work should investigate what inductive biases enable successful generalization.
One approach would be to test architectures with built-in hierarchical structure; past work has shown that such architectures have a hierarchical bias (McCoy et al., 2020) and generalize better on the hierarchical phenomenon of subject-verb agreement (Kuncoro et al., 2018; Lepori et al., 2020), so they may also generalize better on English yes/no questions. A final direction would be to expand the input beyond words alone so that learners can leverage hierarchical structure that is present in other modalities, such as hierarchical structure in visual scenes.

Ethics Statement

Use of human data: While we did not collect any new human data ourselves, many of our analyses involved the use of prior datasets within the CHILDES database.
All of these datasets were collected in accordance with IRB policies at the institutions of the data collectors, and all followed standard practices in obtaining informed consent and deidentifying data.9

Risks and limitations: The main risk of our proposed analyses is that future work using the same analyses might draw overly strong conclusions based on increased model performance, leading to overestimates of model strength. Such overestimates are an issue because they can lead users to place more trust in a model than is warranted. To clarify, we view strong performance on our evaluation datasets as necessary but not sufficient to demonstrate human-like learning. Thus, if models perform poorly on our datasets (as the models we evaluated did), then we have strong reason to conclude that the models are not learning in human-like ways. If future models perform better, such results would be consistent with human-like learning but would not conclusively establish that models learn as humans do, as they might instead be using some shallow heuristic that is not controlled for in our datasets.
In other words, a criterion that is necessary but not sufficient licenses strong conclusions about failure but not strong conclusions about success. If future papers are faced with models that are more successful, such papers would ideally supplement results based on our datasets with analyses of the models' internal strategies, in order to establish more conclusively that what the models have learned is not a spurious heuristic.

9 https://talkbank.org/share/irb/

References

Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116–143.

Gerry T. M. Altmann and Yuki Kamide. 1999. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3):247–264.

Ben Ambridge, Caroline F. Rowland, and Julian M. Pine. 2008. Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1):222–255.

Robert Berwick, Paul Pietroski, Beracah Yankama, and Noam Chomsky. 2011. Poverty of the stimulus revisited. Cognitive Science, 35:1207–1242.

Rens Bod, Margaux Smets, et al. 2012. Empiricist solutions to nativist problems using tree-substitution grammars. In EACL Workshop on Computational Models of Language Acquisition and Loss.

Noam Chomsky. 1965. Aspects of the Theory of Syntax, 50th edition. The MIT Press.

Noam Chomsky. 1980. Rules and Representations. Columbia University Press.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alexander Clark and Rémi Eyraud.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Polynomial identification in the limit of substitutable context- free languages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Journal of Machine Learning Re- search, 8(8).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alexander Clark and Shalom Lappin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Linguis- tic Nativism and the Poverty of the Stimulus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' John Wiley & Sons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Stephen Crain and Mineharu Nakayama.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1987.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Struc- ture dependence in grammar formation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Language, pages 522–543.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Jeffrey L Elman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1991.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Distributed representations, simple recurrent networks, and grammatical struc- ture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Machine learning, 7(2):195–225.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Hartmut Fitz and Franklin Chang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Cog- nition, 166:225–250.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Robert Frank and Donald Mathis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2007.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Transforma- tional networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Models of Human Language Acqui- sition, 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Lila R Gleitman, Kimberly Cassidy, Rebecca Nappa, Anna Papafragou, and John C Trueswell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Hard words.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Language learning and development, 1(1):23–64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Colorless green recurrent networks dream hierarchically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' John Hale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2001.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' A probabilistic Earley parser as a psy- cholinguistic model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Second Meeting of the North American Chapter of the Association for Computa- tional Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Betty Hart and Todd R Risley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Meaningful differ- ences in the everyday experience of young American children.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Paul H Brookes Publishing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Kenneth Heafield.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' KenLM: Faster and smaller language model queries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Sepp Hochreiter and Jürgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Long short-term memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Neural computation, 9(8):1735–1780.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' A systematic assessment of syntactic generalization in neural language mod- els.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Compu- tational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Philip A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Huebner, Elior Sulem, Cynthia Fisher, and Dan Roth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' BabyBERTa: Learning more gram- mar with small-scale child-directed language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of CoNLL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Xuân-Nga Cao Kam, Iglika Stoyneshka, Lidiya Torny- ova, Janet D Fodor, and William G Sakas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Bi- grams and the richness of the stimulus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Cognitive Science, 32(4):771–787.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Reinhard Kneser and Hermann Ney.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Improved backing-off for m-gram language modeling.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1995 International Conference on Acoustics, Speech, and Signal Processing, 1:181–184 vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yo- gatama, Stephen Clark, and Phil Blunsom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Aus- tralia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Marta Kutas, Katherine A DeLong, and Nathaniel J Smith.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' A look around at what lies ahead: Pre- diction and predictability in language processing.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Predictions in the brain: Using our past to generate a future.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' On generative spoken lan- guage modeling from raw audio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Transactions of the Association for Computational Linguistics, 9:1336– 1354.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Howard Lasnik and Jeffrey L Lidz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The argu- ment from the poverty of the stimulus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The Oxford handbook of universal grammar, pages 221–248.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Julie Anne Legate and Charles D Yang.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Em- pirical re-assessment of stimulus poverty arguments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The Linguistic Review, 19(1-2):151–162.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Michael Lepori, Tal Linzen, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thomas McCoy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3306–3316, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Roger Levy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2008.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Expectation-based syntactic com- prehension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Cognition, 106(3):1126–1177.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' John Lewis and Jeffrey Elman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Learnability and the statistical structure of language: Poverty of stim- ulus arguments revisited.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Proceedings of the 26th Annual Boston University Conference on Language Development, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Yongjie Lin, Yi Chern Tan, and Robert Frank.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Open sesame: Getting inside BERT’s linguistic knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241–253, Florence, Italy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Tal Linzen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' How can we accelerate progress to- wards human-like linguistic generalization?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5210– 5217, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Lin- guistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Brian MacWhinney.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The CHILDES project: Tools for analyzing talk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Lawrence Erlbaum Asso- ciates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thomas McCoy, Robert Frank, and Tal Linzen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Revisiting the poverty of the stimulus: hier- archical generalization without a hierarchical bias in recurrent neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thomas McCoy, Robert Frank, and Tal Linzen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Does syntax need to grow on trees?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' sources of hierarchical inductive bias in sequence-to-sequence networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' James L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Morgan and Katherine Demuth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 1996.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Signal to syntax: Bootstrapping from speech to grammar in early acquisition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Psychology Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Coloring the blank slate: Pre-training imparts a hierarchical in- ductive bias to sequence-to-sequence models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Findings of the Association for Computational Lin- guistics: ACL 2022, pages 1352–1368, Dublin, Ire- land.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Karl Mulligan, Robert Frank, and Tal Linzen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Structure here, bias there: Hierarchical generaliza- tion by jointly learning syntactic transformations.' 
In Proceedings of the Society for Computation in Linguistics 2021, pages 125–135, Online. Association for Computational Linguistics.

Ludovica Pannitto and Aurélie Herbelot. 2020. Recurrent babbling: Evaluating the acquisition of grammar from limited input data. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 165–176, Online. Association for Computational Linguistics.

Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets.
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959–968, Jeju Island, Korea. Association for Computational Linguistics.

Lisa Pearl. 2021. Poverty of the stimulus without tears. Language Learning and Development, pages 1–40.

Lisa Pearl and Benjamin Mis. 2016. The role of indirect positive evidence in syntactic acquisition: A look at anaphoric one. Language, 92:1–30.

Lisa Pearl and Jon Sprouse.
2013a. Computational models of acquisition for islands. Experimental Syntax and Island Effects, pages 109–131.

Lisa Pearl and Jon Sprouse. 2013b. Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. Language Acquisition, 20(1):23–68.

Andrew Perfors, Josh Tenenbaum, and Terry Regier. 2011. The learnability of abstract syntactic principles. Cognition, 118:306–338.
Jackson Petty and Robert Frank. 2021. Transformers generalize linearly. arXiv preprint arXiv:2109.12036.

Geoffrey K. Pullum and Barbara C. Scholz. 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review, 18(1-2):9–50.

Florencia Reali and Morten H.
Christiansen. 2005. Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science, 29(6):1007–1028.

Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax acquisition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1842–1861, Florence, Italy. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin.
2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Alex Warstadt and Samuel R. Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? In Proceedings of the 42nd Annual Conference of the Cognitive Science Society.

Alex Warstadt and Samuel R. Bowman. 2022. What artificial neural networks can tell us about human language acquisition. arXiv preprint arXiv:2208.07998.

Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.

Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online.
Association for Computational Linguistics.

Ethan Wilcox, Richard Futrell, and Roger Levy. 2021. Using computational models to test syntactic learnability. lingbuzz preprint lingbuzz/006327.

Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221, Brussels, Belgium. Association for Computational Linguistics.
Taha Yasseri, András Kornai, and János Kertész. 2012. A practical approach to language complexity: A Wikipedia case study. PLoS ONE, 7(11):e48386.

A CHILDES preprocessing details

The train, test, and validation split kept each document in the corpora intact to allow for learning of context. Since a document roughly corresponds to a single recording session, and the sentence order within each document was not randomized, the networks could use cross-sentence context when predicting the next word.

Generally, we kept the data as close as possible to the actual input that a child receives. However, in some cases we modified the tokenization to match the CHILDES Treebank, a syntactically parsed subset of the CHILDES corpora. For instance, contractions were split: e.g., we replaced don't with do n't.

The ages of the children vary by corpus, ranging from six months to twelve years. Almost 95% (49/52) of the corpora consist of transcriptions with children between one and six years of age.

Note that for Experiment 2 we used the same vocabulary as in Experiment 1, which means that words not present in Experiment 1's vocabulary were replaced with unknown-word tokens.

The unprocessed CHILDES datasets were downloaded in XML format from the online XML version10 of the CHILDES database (MacWhinney, 2000).11 A modified NLTK CHILDESCorpusReader12 was used to parse the XML into plain text for training.

The CHILDES dataset is licensed for use under a CC BY-NC-SA 3.0 license.13 Under the terms of this license, the data can be freely used and adapted, as long as it is not used for commercial purposes and as long as attribution is provided.14 Our usage fits these criteria.

Though CHILDES contains corpora of many languages, we use only corpora from the North American English subset of CHILDES, which contains child-directed speech involving many different North American children. See the CHILDES database for more details.

By the CHILDES rules for data citation,15 research that relies on more than 6 of the corpora need only cite the overall database, not each individual corpus.

10 https://childes.talkbank.org/data-xml/
11 https://childes.talkbank.org
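The contraction splitting described above can be sketched with a small regex-based tokenizer. This is an illustrative approximation, not the actual preprocessing code; the function name and pattern list are our own, and the patterns are not exhaustive.

```python
import re

def split_contractions(text):
    """Split common English contractions into two tokens,
    e.g. "don't" -> "do n't", in the style of CHILDES Treebank
    tokenization. Illustrative sketch only."""
    # n't splits off as its own token (don't -> do n't, can't -> ca n't)
    text = re.sub(r"(\w+)n't\b", r"\1 n't", text)
    # clitics 's, 're, 'll, 've, 'd, 'm split off as separate tokens
    text = re.sub(r"(\w+)'(s|re|ll|ve|d|m)\b", r"\1 '\2", text)
    return text
```

For example, `split_contractions("don't")` yields `"do n't"`, and `split_contractions("it's")` yields `"it 's"`.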
All the data on CHILDES must adhere to IRB guidelines,16 including a requirement for anonymity. The final dataset will be included in our GitHub repository, to be released soon. This dataset is not intended for commercial use.

CHILDES corpora included

The CHILDES corpora that we used were: Bates, Bernstein, Bliss, Bloom70, Bloom73, Bohannon, Braunwald, Brent, Brown,
Carterette, Clark, Cornell, Demetras1, Demetras2, EllisWeismer, Evans, Feldman, Garvey, Gathercole, Gelman, Gillam, Gleason, HSLLD,
Haggerty, Hall, Higginson, Kuczaj, MacWhinney, McCune, McMillan, Morisset, NH, Nelson, NewEngland, NewmanRatner, Normal,
POLER, Peters, Post, Rollins, Sachs, Sawyer, Snow, Soderstrom, Sprott, Suppes, Tardif, Valian, VanHouten, VanKleeck,
Warren, Weist.

B Hyperparameter Search and Model Implementation

We conducted a hyperparameter search for each of the architectures we investigated (LSTMs and Transformers). Our broad goal in this paper is to investigate the extent to which capturing the statistical properties of the CHILDES dataset naturally leads a learner to capture the structure of yes/no questions. Therefore, we sought to find the hyperparameter settings that made models most effective at capturing the statistical properties of CHILDES data, a goal which we operationalized as finding the model with the lowest perplexity.

12 https://www.nltk.org/howto/childes.html
13 https://talkbank.org/share/rules.html
14 https://creativecommons.org/licenses/by-nc-sa/3.0/
15 https://talkbank.org/share/citation.html
16 https://talkbank.org/share/irb/

B.1 Hyperparameter search

LSTMs: For the LSTMs we explored the following hyperparameters via a grid search, for a total of 144 models:

1. layers: 2
2. hidden and embedding size: 200, 800
3. batch size: 20, 80
4. dropout rate: 0.0, 0.2, 0.4, 0.6
5. learning rate: 5.0, 10.0, 20.0
6. random seed: 3 seeds per parameter combination, each unique to one LSTM

The LSTM model with the lowest perplexity on the validation set after training had 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10.0.^17 An LSTM model with these hyperparameters has 37,620,294 parameters.
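The grid search and selection-by-validation-perplexity procedure described above can be sketched as follows. The `train_and_eval` function is a hypothetical stand-in for actually training a model on CHILDES; its hash-based score is only a deterministic placeholder, not a real perplexity.

```python
from itertools import product

def train_and_eval(config):
    # Hypothetical stand-in: in the real search this would train an LSTM
    # on CHILDES with these settings and return its validation perplexity.
    # The hash-based score below is only a deterministic placeholder.
    return hash(tuple(sorted(config.items()))) % 1000

grid = {
    "layers": [2],
    "hidden_size": [200, 800],
    "batch_size": [20, 80],
    "dropout": [0.0, 0.2, 0.4, 0.6],
    "learning_rate": [5.0, 10.0, 20.0],
    "seed": [0, 1, 2],  # 3 seeds per parameter combination
}

# Cartesian product over all hyperparameter values: 1*2*2*4*3*3 = 144 configs
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
print(len(configs))  # 144, matching the number of LSTMs trained

best = min(configs, key=train_and_eval)  # lowest validation perplexity wins
```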
Transformers: For the Transformers we performed a hyperparameter sweep over the following hyperparameters, training a total of 84 models:

1. layers: 2, 4, 8, 16
2. context size: 50, 100, 500
3. hidden and embedding size: 200, 800, 1600
4. heads: 2, 4, 8, 16
5. batch size: 20, 80, 160
6. dropout rate: 0.0, 0.2, 0.4, 0.6
7. learning rate: 0.5, 1.0, 5.0, 10.0, 20.0
8. random seed: 3 per parameter combination

The Transformer model with the lowest perplexity after training had 4 layers, a context size of 500, a hidden size of 800, a batch size of 10, 4 heads, a dropout rate of 0.2, and a learning rate of 5.0. A Transformer model with these parameters has 42,759,494 parameters.

17 The hyperparameters we explored for the LSTMs were those of Gulordava et al. (2018), the code for which can be found at https://github.com/facebookresearch/colorlessgreenRNNs

LSTMs           Prepose First   Prepose Main
Delete First    0.01            0.14
Delete Main     0.39            0.12
Delete None     0.20            0.14

Table 1: Numerical results for LSTMs' preference for questions consistent with combinations of 'prepose' and 'delete' rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

B.2 Comment on model size

Although neural networks generally perform better as they increase in size, the best-performing models that we found were not the largest ones. This result is consistent with the finding of Warstadt et al. (2020b) that, for small training sets, smaller language models sometimes outperform larger ones. Thus, it is unlikely that scaling up models beyond the range we investigated would have yielded better CHILDES language models than the ones we trained.

B.3 Implementation

All models were implemented in PyTorch by building on code from https://github.com/facebookresearch/colorlessgreenRNNs and https://github.com/pytorch/examples/tree/main/word_language_model, and were trained using Nvidia K80 GPUs.
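The reported LSTM parameter count can be reproduced with simple arithmetic, assuming the standard PyTorch `nn.LSTM` parameterization and untied input/output embeddings. The vocabulary size of 17,094 used below is not restated in this appendix; it is inferred here as the value that makes the total come out to the reported figure.

```python
def lstm_lm_param_count(vocab_size, hidden_size, num_layers):
    embedding = vocab_size * hidden_size             # input embedding matrix
    decoder = hidden_size * vocab_size + vocab_size  # output projection + bias
    # Each layer holds W_ih (4h x input), W_hh (4h x h), and two 4h bias
    # vectors; with the embedding size equal to the hidden size, every
    # layer has the same number of parameters.
    per_layer = 4 * hidden_size * (2 * hidden_size) + 8 * hidden_size
    return embedding + decoder + num_layers * per_layer

print(lstm_lm_param_count(17094, 800, 2))  # 37620294
```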
The final models will be included in our GitHub repository, which will be released soon. These models are not intended for commercial use.

C PREPOSE-ONE&DELETE-ONE Full Results

See Table 1 and Table 2 for these results.

C.1 Results using SLOR

See Table 3 and Table 4 for these results, which score sentences using SLOR (the syntactic log-odds ratio, which normalizes a model's log probability for a sentence by unigram frequency and sentence length).

Transformers    Prepose First   Prepose Main
Delete First    0.01            0.16
Delete Main     0.31            0.06
Delete None     0.25            0.21

Table 2: Numerical results for Transformers' preference for questions consistent with combinations of 'prepose' and 'delete' rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

LSTMs           Prepose First   Prepose Main
Delete First    0.01            0.14
Delete Main     0.33            0.08
Delete None     0.26            0.18

Table 3: Analysis of LSTMs' preference for questions consistent with combinations of 'prepose' and 'delete' rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

Transformers    Prepose First   Prepose Main
Delete First    0.01            0.15
Delete Main     0.27            0.04
Delete None     0.29            0.24

Table 4: Analysis of Transformers' preference for questions consistent with combinations of 'prepose' and 'delete' rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

D BabyBERTa dataset evaluation

For an illustrative subset of the results on the Zorro evaluation dataset (discussed in Section 4.5), see Figure 4. For the full results, see Figure 5.

E Move-One Dataset Results

One approach used in several past papers (e.g., Lewis and Elman (2001) and Reali and Christiansen (2005)) is to evaluate models using pairs of sentences that can be formed by starting with a declarative sentence (e.g., (17)) and moving one of its auxiliaries to the front of the sentence. The first sentence in each pair (e.g., (18a)) follows HIERARCHICALQ, because the main auxiliary is moved, while the second (e.g., (18b)) follows LINEARQ, because the first auxiliary is moved.

(17) The children who are talking are sleeping.

(18) a. Are the children who are talking sleeping?
     b. Are the children who talking are sleeping?

Figure 4: The performance of a selected subset of model re-runs (LSTM 02, LSTM 03, LSTM 08, Transformer 02, Transformer 03, and Transformer 08) on a selected subset of the Zorro evaluations (irreg_v, sv_agr_rc, and swap_arg), measured as proportion correct. Each Zorro evaluation targets a specific syntactic phenomenon: in the cases shown here, irregular verbs, subject-verb agreement across relative clauses, and correct argument ordering.

If a model assigns a higher probability to (18a) than to (18b), that is evidence that the model favors HIERARCHICALQ over LINEARQ. While this preference is a necessary component of correctly learning HIERARCHICALQ, it is by no means sufficient: indeed, Kam et al. (2008) showed that models can prefer sentences consistent with HIERARCHICALQ over sentences consistent with LINEARQ due to shallow n-gram statistics rather than due to knowledge of hierarchical structure.
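Kam et al.'s point can be illustrated with a toy example: an add-one-smoothed bigram model trained only on declaratives (the hypothetical three-sentence corpus below, not the paper's data) already prefers (18a) over (18b), simply because the bigram "who are" is attested in the corpus while "who talking" is not.

```python
import math
from collections import Counter

# Hypothetical toy corpus of declaratives only; not the paper's training data.
corpus = [
    "the children who are talking are sleeping",
    "the dogs who are barking are eating",
    "the girls who are reading are smiling",
]

tokens = [w for s in corpus for w in ("<s> " + s + " </s>").split()]
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)
V = len(unigrams)

def score(sentence):
    # Add-one-smoothed bigram log probability of the sentence.
    words = ("<s> " + sentence + " </s>").split()
    return sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
        for a, b in zip(words, words[1:])
    )

hier = "are the children who are talking sleeping"  # consistent with HIERARCHICALQ
lin = "are the children who talking are sleeping"   # consistent with LINEARQ
print(score(hier) > score(lin))  # True: surface statistics alone favor (18a)
```

No hierarchical knowledge is involved: the preference falls out of which word pairs happen to be frequent.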
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' More generally, there are infinitely many other incorrect hypotheses besides LINEARQ, and demonstrating successful learning of HIERARCHICALQ would require ruling out all of them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Investigating all possibilities is intractable, but we can at least investigate a few additional plausible ones.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thus, in the main paper we depart from prior work by considering a greater number of candidate sentences than just the pairs of sentences used in prior work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' To create the MOVE-ONE dataset, we ran- domly sampled 10,000 declarative sentences from our CFGs for which the first and main auxiliary were identical and then modified them to give 10,000 sentence pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' To create the PREPOSE- ONE&DELETE-ONE dataset, we randomly sam- pled a different 10,000 declarative sentences from our CFGs for which the first and main auxiliary were different and then we modified them to give 10,000 6-tuples of sentences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' See Appendix F for more details about the CFGs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' F Context Free Grammars Figure 6 contains the context-free grammar used for the analyses in Section 4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Figures 7 and 8 con- tain the context-free grammars used for the targeted evaluation sets in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Figure 9 contains the vocabulary used for all of these datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' G Breakdown by lexical identity Here we further break down models’ predictions for the FIRST-AUX ̸= MAIN-AUX evaluation set based on the identities of the two auxiliaries in the input sentence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Figure 10 gives the results for the LSTM in the QUESTION FORMATION condi- tion;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Figure 11 for the LSTM in the NEXT-WORD PREDICTION + QUESTION FORMATION condi- tion;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Figure 12 for the Transformer in the QUES- TION FORMATION condition;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' and Figure 13 for the for the Transformer in the NEXT-WORD PREDIC- TION + QUESTION FORMATION condition.' 
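Sampling declaratives from a CFG of the kind used for these datasets can be sketched as follows. The toy grammar below is purely illustrative; the actual grammars and vocabulary are those given in Figures 6-9.

```python
import random

# Illustrative toy grammar; the paper's actual CFGs appear in Figures 6-9.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "N", "RC"]],  # optional relative clause
    "RC": [["who", "Aux", "Ving"]],
    "VP": [["Aux", "Ving"]],
    "Det": [["the"]],
    "N": [["children"], ["dogs"]],
    "Aux": [["are"]],
    "Ving": [["talking"], ["sleeping"]],
}

def generate(symbol="S", rng=random):
    # Recursively expand nonterminals; anything not in GRAMMAR is a terminal.
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [word for child in expansion for word in generate(child, rng)]

print(" ".join(generate()))  # e.g. 'the children who are talking are sleeping'
```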
H Example generated text

Figure 14 gives some example text generated by our models. Models trained on next-word prediction produce their predictions as a probability distribution over the vocabulary. To use such models to generate text, we sample a word from this distribution and then use that word as the model's input for the next time step.
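This sampling loop (ancestral sampling) can be sketched as below. The hard-coded bigram table is a hypothetical stand-in for a trained model's next-word distribution.

```python
import random

# Hypothetical next-word distributions standing in for a trained LM.
NEXT_WORD = {
    "<s>": {"the": 0.6, "you": 0.4},
    "the": {"dog": 0.5, "ball": 0.5},
    "you": {"want": 1.0},
    "want": {"the": 0.5, "</s>": 0.5},
    "dog": {"</s>": 1.0},
    "ball": {"</s>": 1.0},
}

def generate(max_len=20, rng=random):
    words, current = [], "<s>"
    while len(words) < max_len:
        dist = NEXT_WORD[current]
        # Sample a word from the model's distribution, then feed it back
        # in as the input for the next time step.
        current = rng.choices(list(dist), weights=list(dist.values()))[0]
        if current == "</s>":
            break
        words.append(current)
    return " ".join(words)

print(generate())
```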
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='88 98 96 74 78 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='60 88 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='75 74 61 92 51 63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='95 84 100 88 98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='93 73 80 53 38 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='78 42 53 83 73 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='70 95 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='81 99 83 98 95 74 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='77 54 36 50 43 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='47 83 65 58 90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='78 72 82 92 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='74 76 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='53 44 55 42 47 80 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='72 59 90 78 72 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='87 92 58 66 96 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 99 88 98 94 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 73 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='61 93 77 72 90 91 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='53 65 90 80 98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='90 98 95 73 83 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='55 43 44 44 33 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='91 67 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='64 96 83 98 81 100 97 72 79 54 40 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='58 40 54 80 71 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='60 92 83 69 38 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='98 96 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='71 75 55 47 53 42 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='49 79 86 58 88 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='79 81 64 91 61 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='62 92 84 97 85 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='42 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='86 74 63 88 76 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='93 90 53 63 94 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='83 99 86 96 96 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='75 82 55 50 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 01 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 02 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 03 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 04 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 05 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 06 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 07 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 08 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 09 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 01 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 02 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 03 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 04 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 05 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 06 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 07 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} 
+page_content='Transformer 08 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 09 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_determiner_noun−across_1_adjective ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_determiner_noun−between_neighbors ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−across_prepositional_phrase ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−across_relative_clause ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−in_question_with_aux ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−in_simple_question ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='anaphor_agreement−pronoun_gender ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−dropped_argument ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−swapped_arguments ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−transitive ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='binding−principle_a ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='case−subjective_pronoun ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='ellipsis−n_bar ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='filler−gap−wh_question_object ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='filler−gap−wh_question_subject ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='irregular−verb ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='island−effects−adjunct_island ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='island−effects−coordinate_structure_constraint ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='local_attractor−in_question_with_aux ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='npi_licensing−matrix_question ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='npi_licensing−only_npi_licensor ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='quantifiers−existential_there ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='quantifiers−superlative ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Evaluation ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Model ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='40 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='60 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='% Correct ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Figure 5: Results on the targeted syntactic evaluations in Huebner et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' (2021) in percent accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Evaluation names in Figure 4 were shortened.' 
S → {NP_S RC_S_BARE MAIN-AUX VP_S_PAST}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_BARE MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_PAST}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PAST}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_PAST}
NP_S → {Det_S N_S}
NP_P → {Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
VP_S_BARE → {Aux_S IV}
VP_S_BARE → {Aux_S TV NP_O}
VP_S_PROG → {Aux_S_BE IV_IS}
VP_S_PROG → {Aux_S_BE TV_IS NP_O}
VP_S_PAST → {Aux_S_HAS IV_HAS}
VP_S_PAST → {Aux_S_HAS TV_HAS NP_O}
VP_P_BARE → {Aux_P IV}
VP_P_BARE → {Aux_P TV NP_O}
VP_P_PROG → {Aux_P_BE IV_IS}
VP_P_PROG → {Aux_P_BE TV_IS NP_O}
VP_P_PAST → {Aux_P_HAS IV_HAS}
VP_P_PAST → {Aux_P_HAS TV_HAS NP_O}
RC_S_BARE → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S_PROG → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S_PAST → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P_BARE → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P_PROG → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P_PAST → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 6: CFG used to generate PREPOSE-ONE-AND-DELETE-ONE evaluation dataset

S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S |
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_P_HAS TV_HAS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Figure 7: CFG used to generate FIRST-AUX = MAIN-AUX evaluation dataset ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_M_S VP_M_S | NP_M_P VP_M_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_M_S→ {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_M_P→ {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_O ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='N_S RC_S | Det_P N_P RC_P } ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S IV } ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S TV NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S_BE IV_IS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S_BE TV_IS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S_HAS IV_HAS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_S→ {Aux_S_HAS TV_HAS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P IV} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P TV NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P_BE IV_IS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P_BE TV_IS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P_HAS IV_HAS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_M_P→ {Aux_P_HAS TV_HAS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='TV_IS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_S_HAS TV_HAS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='TV_IS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_P_HAS TV_HAS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Figure 8: CFG used to generate FIRST-AUX ̸= MAIN-AUX evaluation dataset ' metadata={'source': 
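The CFGs above, combined with the vocabulary in Figure 9, fully determine the evaluation sentences: a sentence is produced by recursively expanding nonterminals until only words remain. The sketch below is a hypothetical illustration of that sampling procedure, using only a small subset of the rules and vocabulary (not the paper's generation code or the full grammars).

```python
import random

# Illustrative fragment of the CFGs above; each nonterminal maps to a list of
# possible right-hand sides, and terminals are plain words.
GRAMMAR = {
    "S": [["NP_M_S", "VP_M_S"]],
    "NP_M_S": [["Det_S", "N_S"], ["Det_S", "N_S", "Prep", "Det_P", "N_P"]],
    "VP_M_S": [["Aux_S", "IV"], ["Aux_S_BE", "IV_IS"]],
    "Det_S": [["the"], ["some"], ["this"]],
    "N_S": [["baby"], ["girl"], ["boy"]],
    "Det_P": [["the"], ["those"]],
    "N_P": [["babies"], ["girls"], ["boys"]],
    "Prep": [["by"], ["behind"]],
    "Aux_S": [["does"], ["can"], ["would"]],
    "IV": [["play"], ["read"], ["sleep"]],
    "Aux_S_BE": [["is"], ["was"]],
    "IV_IS": [["playing"], ["reading"], ["sleeping"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol by choosing a random production."""
    if symbol not in GRAMMAR:  # terminal word: return it unchanged
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for child in expansion for word in generate(child)]

print(" ".join(generate()))  # prints one random sentence from the fragment
```

Sampling repeatedly from grammars like these (and deduplicating) is one straightforward way such evaluation datasets can be constructed.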
Det_S → {the | some | this}
Det_P → {the | some | those}
N_S → {baby | girl | boy | animal | child | person | horse}
N_P → {babies | girls | boys | animals | children | people | horses}
IV → {play | read | draw | sit | fall | talk | sleep | try | work | walk}
IV_IS → {playing | reading | drawing | sitting | falling | talking | sleeping | trying | working | walking}
IV_HAS → {played | read | drawn | sat | fallen | talked | slept | tried | worked | walked}
TV → {call | see | find | help | feed | know | pick | visit | watch | reach}
TV_IS → {calling | seeing | finding | helping | feeding | knowing | picking | visiting | watching | reaching}
TV_HAS → {called | seen | found | helped | fed | known | picked | visited | watched | reached}
Aux_P → {do | did | can | would | shall}
Aux_S → {does | did | can | would | shall}
Aux_S_BE → {is | was}
Aux_P_BE → {are | were}
Aux_S_HAS → {has}
Aux_P_HAS → {have}
Prep → {by | behind}
Rel → {who | that}

Figure 9: Vocabulary used for the PREPOSE-ONE-AND-DELETE-ONE, FIRST-AUX ≠ MAIN-AUX, and FIRST-AUX = MAIN-AUX evaluation datasets

Figure 10: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
Figure 11: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Figure 12: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Figure 13: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Text from CHILDES:
do you eat in nursery school too ? oh . it 's nursery school . xxx .
what do you get to eat ? what else do you get to eat besides a book ? do n’t play with those books . what else do you have ? what do you have in nursery school ? alright - you can take it . why do n’t you put the turtle back . well - you ’re not going to change his water now . tonight . tonight - you change it about ten times a day . no - you can do it later . forget it now .
too bad . too bad . that poor turtle . you do n’t let him sit there five minutes . that was five minutes ?

LSTM generated text:
do you eat in nursery school too ? oh . it ’s nursery school . no ! tomorrow ’s Friday . right . yeah - yeah .
yeah I know what . I told you that he liked snipsnap that way . well this is address the grayish kings . ok . what kind of bird do you want to play with ? what else do you want to do ? your crayons . got a circle . I ’m going to use these . there . remember ? ok do you want to put all of these there ?
ok let ’s put everything away . what do you want to do ? you want to play in the box ? it ’s just like a ball because it has a ball on it .

Transformer generated text:
do you eat in nursery school too ? oh . it ’s nursery school . mm-hm . and it looks like a little girl . mm-hm . like a little girl . hm , mm-hm .
looks like she ’s concerned about baseball this is a picture of a double football player . mm-hm . um , it has a uniform on it . mm-hm . anything more ? no . ok great . got a hat on . and a flower and a little flower . remember that ? do you remember that word ? what do you remember that word ?
do you remember what it word says ? it says seven - eight . what ’s that word ? it says eight . look at that word . three - four - six ... eight - nine ...

Figure 14: Comparison of text generated by the LSTM and Transformer models with a block of text chosen randomly from the training data. The LSTMs and Transformers were both seeded with the first three sentences of the text taken from CHILDES, which is underlined in the two model-generated texts.
Note that neither of the model-generated texts was cherry-picked, either for quality or to be representative of the models’ usual output: rather, they were the first things the models generated when seeded with the underlined portion above.

[Figure plot: per-auxiliary breakdown (AuxX = was, have, can, were, shall, did, would, does, do, are) of the proportion of behavior consistent with the AuxY rule, with First-vs-main and AuxY-vs-AuxX comparisons.]