Byte Pair Encoding for Symbolic Music

Nathan Fradet 1 2  Jean-Pierre Briot 1  Fabien Chhel 3  Amal El Fallah Seghrouchni 1  Nicolas Gutowski 4

Abstract

The symbolic music modality is nowadays mostly represented as discrete data and used with sequential models such as Transformers for deep learning tasks. Recent research has put effort into tokenization, i.e., the conversion of data into sequences of integers intelligible to such models. This can be achieved in many ways, as music can be composed of simultaneous tracks and of simultaneous notes with several attributes. Until now, the proposed tokenizations have been based on small vocabularies describing the note attributes and time events, resulting in fairly long token sequences. In this paper, we show how Byte Pair Encoding (BPE) can improve the results of deep learning models while improving their efficiency. We experiment on music generation and composer classification, study the impact of BPE on how models learn the embeddings, and show that it can help to increase their isotropy, i.e., the uniformity of the variance of their positions in the space.

1. Introduction

Deep learning tasks on symbolic music are nowadays mostly tackled by sequential models, such as Transformers (Vaswani et al., 2017). These models receive sequences of tokens as input and convert them to learned embedding vectors. A token is an integer associated with a high-level element, such as a word or sub-word in natural language, and the two are linked in a vocabulary that acts as a look-up table. An embedding represents the semantic information of a token as a fixed-size vector, and is learned contextually by the model. To use such models for symbolic music, one needs to tokenize the data, i.e., convert it to sequences of tokens that can be decoded back. This can be achieved in several ways, as music can be composed of simultaneous tracks and of simultaneous notes with several attributes such as

1LIP6, Sorbonne University - CNRS, Paris, France 2Aubay, Boulogne-Billancourt, France 3ESEO-TECH / ERIS, Angers, France 4University of Angers, Angers, France. Correspondence to: Nathan Fradet
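To make the BPE idea concrete in this setting, the sketch below learns merges over sequences of integer token ids, as produced by a symbolic music tokenization: the most frequent adjacent pair is repeatedly replaced by a new token id appended after the base vocabulary. This is a minimal illustrative sketch of generic BPE, not the authors' implementation; the function names and the toy sequences are assumptions for illustration.

```python
from collections import Counter

def most_frequent_pair(seqs):
    """Return the most frequent adjacent token pair across all sequences (or None)."""
    counts = Counter()
    for seq in seqs:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None

def merge_pair(seq, pair, new_token):
    """Replace every non-overlapping occurrence of `pair` in `seq` with `new_token`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def learn_bpe(seqs, base_vocab_size, num_merges):
    """Learn up to `num_merges` BPE merges; new ids start after the base vocabulary."""
    merges, next_id = {}, base_vocab_size
    for _ in range(num_merges):
        pair = most_frequent_pair(seqs)
        if pair is None:
            break
        merges[pair] = next_id
        seqs = [merge_pair(s, pair, next_id) for s in seqs]
        next_id += 1
    return merges, seqs

# Toy corpus: two token sequences over a base vocabulary of size 10.
merges, encoded = learn_bpe([[1, 2, 3, 1, 2], [1, 2, 4]], base_vocab_size=10, num_merges=1)
# The pair (1, 2) is the most frequent, so it is merged into new token 10:
# merges == {(1, 2): 10}, encoded == [[10, 3, 10], [10, 4]]
```

Each merge shortens the sequences while growing the vocabulary, which is exactly the trade-off the paper studies for music tokens: fewer, higher-level tokens in exchange for a larger embedding table.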