Parkinson gait modelling from an anomaly deep representation

Edgar Rangel (a), Fabio Martinez (a,*)

(a) Biomedical Imaging, Vision and Learning Laboratory (BIVL2ab), Universidad Industrial de Santander, 680002, Bucaramanga, Colombia

Abstract

Parkinson's Disease is associated with gait movement disorders, such as postural instability, stiffness, and tremors. Today, some approaches implement learning representations to quantify kinematic patterns during locomotion, supporting clinical procedures such as diagnosis and treatment planning. These approaches assume a large amount of stratified and labeled data to optimize discriminative representations. Nonetheless, such considerations may restrict the operability of these approaches in real scenarios during clinical practice. This work introduces a self-supervised generative representation, trained under a video-reconstruction pretext within an anomaly detection framework.
The architecture is trained following a one-class weakly supervised learning scheme, avoiding inter-class variance and approaching the multiple relationships that represent locomotion. For validation, 14 PD patients and 23 control subjects were recorded; the model was trained with the control population only, achieving an AUC of 86.9%, a homoscedasticity level of 80%, and a shapeness level of 70% in the classification task, considering its generalization.

Keywords: Anomaly detection, Deep Learning, Weakly Supervised, Parkinson Disease

1. Introduction

Parkinson's Disease (PD) is the second most common neurodegenerative disorder, affecting more than 6.2 million people worldwide [1, 2]. According to the World Health Organization, this number will increase by more than 12 million by 2030 [3]. PD is characterized by the progressive loss of dopamine, a neurotransmitter involved in the execution of voluntary movements.
For this reason, the main diagnostic support is based on the observation and analysis of progressive motor disorders, such as tremor, rigidity, slowness of movement (bradykinesia), and postural instability, among many other related symptoms [4]. Despite important advances in determining the sources of the disease and its multiple symptoms, today there is no definitive and universal biomarker to characterize, diagnose, and follow the progression of PD patients.

Particularly, gait is a multi-factorial and complex locomotion process that involves several subsystems. The associated kinematic patterns are typically recovered with standard marker-based setups that only coarsely approximate complex motion behaviors, resulting in restrictive, intrusive protocols that alter natural postural gestures for PD description. Alternatively, markerless video strategies together with discriminative learning approximations have emerged as key solutions to support PD characterization and its classification from other diseases [5-9]. These methodologies have been successful in controlled studies but strongly require a stratified, balanced, and well-labeled dataset to avoid overfitting.

* Corresponding author. Email addresses: edgar.rangel@correo.uis.edu.co (Edgar Rangel), famarcar@saber.uis.edu.co (Fabio Martinez). URL: https://bivl2ab.uis.edu.co/ (Fabio Martinez)

Preprint submitted to Pattern Recognition, January 30, 2023. arXiv:2301.11418v1 [cs.CV] 26 Jan 2023
Besides, these approaches are biased toward the physicians' experience in determining the disease, limiting the quantification to general-scale indexes [10]. Even worse, these approaches solve classification tasks but remain limited in explaining the data representation, and thus in defining the generalization capability w.r.t. new data.

This work introduces a deep generative and anomaly architecture to learn a hidden descriptor that represents locomotion patterns. Following a weakly supervised methodology, a 3D net is self-trained under a gait video-reconstruction pretext. The resultant embedding representation then encodes complex dynamic gait relationships, captured from the control population, that allow discriminating Parkinson patients.
The main contributions of this work are summarized as follows:

- A new digital biomarker, coded as an embedding vector, with the capability to represent hidden kinematic relationships of Parkinson's disease.
- A 3D convolutional GAN dedicated to learning spatio-temporal patterns of gait video sequences. This architecture integrates an auto-encoder that learns video patterns in reconstruction tasks and a complementary network that discriminates between reconstructed and original video sequences.
- A statistical test framework to validate the capability of the approach in terms of generalization, data coverage, and discrimination for any class with different groups between them, i.e., evaluating the generalization of Parkinsonian patients, at different stages of the disease, with respect to a control population.

2. Current Work

Deep discriminative learning is nowadays the standard methodology in much of computer vision, demonstrating remarkable results in very different domains. For instance, Parkinson characterization has been achieved from sensor-based and vision-based approaches, following a supervised scheme to capture the main observed relationships and to generate a particular prediction about the condition of the patients [5]. These approaches are generally dedicated to classifying and discriminating between a control population and patients with the Parkinson condition. Sensor-based approaches capture kinematics from motion signals, approximating PD classification, but in many cases they are marker-invasive, alter natural gestures, and only have recognition capabilities in advanced stages of the disease [11]. On the contrary, vision-based approaches exploit postural and dynamic features from video recordings, but their representations rely on supervised schemes that require a large amount of labeled data to learn the inter- and intra-class variability [6-9].
Also, these learning methodologies require training data with well-balanced conditions among classes, i.e., the same proportion of sample observations for each considered class [12].

Unsupervised, semi-supervised, and weakly supervised approaches have emerged as a key alternative to model biomedical problems with significant variability among observations but limited training samples. However, to the best of our knowledge, these learning methods have been poorly explored and exploited in Parkinson characterization, with some preliminary alternatives using principles of Minimum Distance Classifiers and K-means Clustering [5, 13-17]. In this sense, PD modelling from a non-supervised perspective may be addressed through reconstruction, prediction, and generative tasks [18], which help to determine sample distributions and to anticipate future postural and kinematic events.
In fact, the PD pattern distribution is key to understanding the multi-factorial nature of PD, being determinant to define variations such as the laterality affectation of the disease and the sources of abnormality, but also to define patient prognosis by emulating the evolution of a particular patient during gait.

3. Proposed approach

This work introduces a digital PD biomarker that embeds gait motor patterns from an anomaly video-reconstruction task. Contrary to typical classification modeling, we deal with one-class learning, i.e., we learn only control gait patterns, approaching the high variability of the training samples without using explicit disease labels. Hence, we hypothesize that a digital biomarker of the disease can be modeled as a mixture of distributions, composed of the samples labeled as outliers by the learned representation.
In consequence, we analyze the embedding, reconstruction, and discrimination spaces to later define rules that separate Parkinson from control vectors during test validation. The general pipeline of the proposed approach is illustrated in Figure 1.

3.1. A volumetric autoencoder to recover gait embedding patterns

Here, we are interested in capturing complex dynamic interactions during locomotion, observed in videos as spatio-temporal textural interactions.
Figure 1: Pipeline of the proposed model, separated into: (a) a volumetric auto-encoder to recover gait patterns, (b) the digital gait biomarker, (c) an auxiliary task to discriminate reconstructions, and (d) the statistical validation of the learned class distributions.

From a self-supervised strategy (a video-reconstruction task), we implemented a 3D deep autoencoder that projects videos into low-dimensional vectors, learning the complex gait dynamics into a latent space (see the architecture in Figure 1-a). For doing so, 3D convolutional blocks were implemented and structured hierarchically, with the main purpose of carrying out a spatio-temporal reduction while increasing the feature descriptions. Formally, a gait sequence is defined as x ∈ N^{f×h×w×c}, where f denotes the number of temporal frames, (h × w) are the spatial dimensions, and c is the number of color channels of the video.
This sequence is received as input by the convolutional block, which convolves it with a kernel κ of dimensions (kt, kh, kw), where kt convolves along the temporal axis and kh, kw along the spatial axes. At each processing level l, we obtain a new volume x_l ∈ Z^{f/2^l × h/2^l × w/2^l × 2^l·c} that represents a bank of spatio-temporal feature maps. Each of these volumetric features is dedicated to standing out relevant gait patterns in a reduced projection zG that summarizes a multiscale gait motion representation. The resultant embedding vector zG encodes the principal dynamic non-linear correlations, which are necessary to achieve a video reconstruction x′. In this study, the validated datasets were recorded against a relatively static background, so the major dependency for achieving an effective reconstruction lies in the temporal and dynamic information expressed during gait. Here, we adopt zG as a digital gait biomarker that, among others, allows studying motion abnormalities associated with Parkinson's disease.
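The halving scheme above fixes the shape of every encoder level. A minimal sketch of that shape rule (the function name and the example clip dimensions are ours, for illustration only):

```python
def level_shape(f, h, w, c, l):
    """Shape of the feature volume x_l after l encoder levels:
    each level halves the three spatio-temporal axes and doubles the channels."""
    return (f // 2**l, h // 2**l, w // 2**l, c * 2**l)

# e.g. a 64-frame, 128x128 RGB clip through three encoder levels
shapes = [level_shape(64, 128, 128, 3, l) for l in range(4)]
```

The deepest volume is what gets summarized into the embedding vector zG.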
To complete the end-to-end learning, 3D transposed convolutional blocks were implemented as the decoder, positioned in a symmetrical configuration with respect to the encoder levels, and upsampling the spatio-temporal dimensions to recover the original video sequence. Formally, from the embedded feature vector zG ∈ Z^n with n coded features, we obtain volumes x′_l ∈ Z^{2^l·f × 2^l·h × 2^l·w × c/2^l} from the transposed convolutional blocks until obtaining a video reconstruction x′ ∈ N^{f×h×w×c}. The quality of the reconstruction is key to guarantee the deep representation learning in the autoencoder part of the generator. To do so, an L1 loss, named the contextual loss, is computed between x and x′: Lcon = ∥x − x′∥1.

3.2. Auxiliary task to discriminate reconstructions

From a generative learning perspective, the capability of the deep representations to code locomotion patterns may be expressed in the quality of the video reconstructions x′. Hence, we hypothesize that embedding descriptors zG that properly reproduce videos x′ should encode sufficient kinematic information of the trained class, allowing discrimination among locomotion populations, i.e., between control and Parkinson samples.

To measure this reconstruction capability, an auxiliary task is introduced that receives tuples of original and reconstructed videos (x, x′) and outputs a discriminatory decision y = {y, y′} regarding the video source. In such a case, y corresponds to the label for real videos, while y′ labels the embeddings from reconstructed sequences. For doing so, we implement an adversarial L2 loss, expressed as: Ladv = ∥zD − z′D∥2.
In such a case, a large difference between (zD, z′D) produces a significant error that is propagated to the generator. It should be noted that this minimization rule optimizes only the generator. The discriminator is instead minimized following a classical equally weighted cross-entropy rule: Ldisc = −(log(y) + log(1 − y′))/2.

The auxiliary task that monitors the video reconstruction is implemented as a discriminatory convolutional net that follows the same structure as the encoder in Figure 1-a: it halves the spatio-temporal dimensions while increasing the features, and a final dense layer determines the realness level (see Figure 1-c). Interestingly, through this deep convolutional representation the input videos are projected to an embedding vector zD ∈ Z^m with m coded features, which may thereafter be used as a latent descriptor that also encodes motion and realness information.
To guarantee an optimal coding into low-dimensional embeddings, the reconstructed video x′ is mapped through an additional encoder that projects the representation basis into an embedding z′G. In this sense, zG and z′G must be similar, leading x and x′ to be equal, which helps the generalization of the generator, following an encoder L2 loss: Lenc = ∥zG − z′G∥2.

3.3. A digital gait biomarker from anomaly embeddings

The video samples are high-dimensional motor observations that can be projected into a low-dimensional embedding space through the proposed model. Formally, each video sample is an independent random variable x(i)_ℓ from class (i) that follows a distribution x(i)_ℓ ∈ Ψ(i)[µ(x(i)), σ(x(i))] with mean µ(x(i)) and standard deviation σ(x(i)). We then consider the proposed model as an operator F that transforms each sample F(x(i)_ℓ) into a low-dimensional space while preserving the original distribution: F(x(i)_ℓ) ∈ Ψ(i)[F(µ(x(i))), F(σ(x(i)))].
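Taken together, the four objectives defined so far (Lcon, Ladv, Lenc, Ldisc) can be sketched over array placeholders. This is a hedged NumPy illustration, not the paper's implementation; reduction choices (sum for the L1 norm, Euclidean norm for the L2 terms) are our assumptions, since the text only fixes the norms:

```python
import numpy as np

def l_con(x, x_rec):
    # contextual L1 loss between original and reconstructed video
    return np.abs(x - x_rec).sum()

def l_adv(z_d, z_d_rec):
    # adversarial L2 loss on discriminator features; propagated to the generator
    return np.linalg.norm(np.asarray(z_d) - np.asarray(z_d_rec))

def l_enc(z_g, z_g_rec):
    # encoder L2 loss between the embeddings of x and x'
    return np.linalg.norm(np.asarray(z_g) - np.asarray(z_g_rec))

def l_disc(y, y_rec):
    # equally weighted cross-entropy for the discriminator:
    # push y (real) toward 1 and y_rec (reconstruction) toward 0
    return -(np.log(y) + np.log(1.0 - y_rec)) / 2.0
```

The generator minimizes a combination of the first three terms, while the discriminator minimizes only the last one.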
From this assumption, we can measure statistical properties over the low-dimensional space and explore properties such as the generalization of the modeling.

Figure 2: Field of action of the standard metrics of the model: the dataset used only covers the intersection area, so the model performance for new samples is not being evaluated.

Hence, we can adopt a new digital kinematic descriptor by considering the embedding-vector difference between (zG, z′G). For instance, a large difference between zG and z′G may suggest a new motion class with respect to the original training distribution. From this approximation, we can model a one-class learning scheme (in this case, anomaly learning) over the video distributions from the low-embedding difference observations. This scheme learns the data distribution without any label constraint. Furthermore, if we train the architecture only with videos of a control population (c), we can define a discriminatory problem from the reconstruction by inducing: ∥zG − z′G∥2 ≤ τ → c ∧ ∥zG − z′G∥2 > τ → p, where p is the label imposed on a video with a significant reconstruction error, projected to the Parkinson population.
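The induced decision rule amounts to thresholding the embedding distance. A minimal sketch (τ and the example vectors are illustrative; the paper does not fix how τ is chosen here):

```python
import numpy as np

def anomaly_score(z_g, z_g_rec):
    # L2 distance between the two embeddings of a test video
    return np.linalg.norm(np.asarray(z_g) - np.asarray(z_g_rec))

def classify(z_g, z_g_rec, tau):
    # score <= tau -> control ('c'); score > tau -> Parkinson ('p')
    return 'c' if anomaly_score(z_g, z_g_rec) <= tau else 'p'
```

A well-reconstructed control video yields a small score and falls below τ, while a Parkinson video, poorly reproduced by the control-trained generator, exceeds it.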
3.4. Statistical validation setup

This new discriminatory descriptor can be validated following standard metrics on the binary projection ŷ = {c, p}. For a particular threshold τ, we can recover metrics such as accuracy, precision, and recall. Also, the ROC-AUC (Area Under the Curve) can estimate the performance by iterating over different τ values. However, these metrics tell us about the capability of the proposed approach to discriminate classes, but not about the data distribution among classes [19, 20]. To robustly characterize a Parkinson digital biomarker, it is then demanding to explore more robust statistical alternatives that evidence the generalization of the embedded descriptor and estimate the performance for new samples (Figure 2 illustrates typical limitations of standard classification metrics for unseen data positioned in unknown places).
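One way to read the ROC-AUC mentioned above: iterating over all thresholds τ is equivalent to estimating the probability that a randomly chosen Parkinson sample scores higher than a randomly chosen control sample (the Mann-Whitney form of the AUC). A minimal sketch on synthetic scores, not the paper's data:

```python
def roc_auc(pos_scores, neg_scores):
    """Pairwise (Mann-Whitney) estimate of the ROC-AUC: the fraction of
    (parkinson, control) score pairs ranked correctly; ties count as 0.5."""
    wins = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 1.0 means every Parkinson anomaly score exceeds every control score, regardless of which single τ is deployed.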
In fact, we hypothesize that Parkinson and control distributions, observed from an embedding representation, should keep equal properties across training and test samples. To address this assumption, this work explores two statistical properties to validate the shape and variance of the motor population distributions:

3.4.1. Variance analysis from homoscedasticity

Here, equality among the variances of the data distributions is estimated through homoscedasticity operators. Particularly, this analysis is carried out for two independent groups ⟨k⟩, ⟨u⟩ with cardinalities |x(i)⟨k⟩|, |x(j)⟨u⟩| of classes (i), (j). Two dispersion metrics were considered, regarding the Levene mean (∆⟨g⟩ℓ = |x⟨g⟩ℓ − µ(x⟨g⟩)|) and the Brown-Forsythe median (∆⟨g⟩ℓ = |x⟨g⟩ℓ − med(x⟨g⟩)|).
From such dispersion distances, the test statistic W between x(i)⟨k⟩ and x(j)⟨u⟩ can be defined as:

W = \frac{N - |P|}{|P| - 1}\,
    \frac{\sum_{g \in P} |x^{\langle g\rangle}|\,\left(\mu(\Delta^{\langle g\rangle}) - \mu(\Delta)\right)^2}
         {\sum_{g \in P} \sum_{\ell \in x^{\langle g\rangle}} \left(\Delta^{\langle g\rangle}_{\ell} - \mu(\Delta^{\langle g\rangle})\right)^2}    (1)

where P = {x(i)⟨k⟩, x(j)⟨u⟩, ...} is the union set of every data group from all classes, |P| is the cardinality of P, N is the sum of all |x⟨g⟩| cardinalities, µ(∆⟨g⟩) corresponds to the mean of the ∆⟨g⟩ℓ values of group ⟨g⟩, and µ(∆) is the overall mean of every ∆⟨g⟩ℓ value in P. This estimation evaluates whether the samples of two different groups of the same class are equal in variance, giving a first step toward model generalization for any new sample related to the trained data. Additionally, the homoscedasticity property is useful to check whether two groups remain in the same distribution range, because two distributions can have the same shape (frequency) but be placed at different domain ranges, indicating a weakness of the model in new data domains. From a statistical test perspective, the value W rejects the null hypothesis of homoscedasticity when W > f(α, |P|−1, N−|P|), where f(α, |P|−1, N−|P|) is the upper critical value of the Fisher distribution with |P|−1 and N−|P| degrees of freedom at a significance level α (generally 5%).
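Equation (1) can be checked numerically with a direct transcription; `levene_w` is our hypothetical helper, where `center=np.mean` gives the Levene variant and `center=np.median` the Brown-Forsythe one:

```python
import numpy as np

def levene_w(groups, center=np.mean):
    """Test statistic W of Eq. (1) over a list of 1-D sample groups."""
    # Dispersion distances Delta per group (Levene mean or B-F median).
    d = [np.abs(np.asarray(g, float) - center(g)) for g in groups]
    n_total = sum(len(g) for g in groups)   # N
    n_groups = len(groups)                  # |P|
    grand = np.mean(np.concatenate(d))      # overall mean of Delta
    between = sum(len(di) * (di.mean() - grand) ** 2 for di in d)
    within = sum(((di - di.mean()) ** 2).sum() for di in d)
    return (n_total - n_groups) / (n_groups - 1) * between / within
```

Groups with identical dispersion give W = 0, while heteroscedastic groups push W above the Fisher critical value.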
This metric allows estimating the clustering level of the model and determining whether new data samples from another domain are contained in the data distributions of control or Parkinson patients. Then, the homoscedasticity value of x(i)⟨k⟩ against x(j)⟨u⟩ is defined as follows:

H(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle}) =
\begin{cases}
\dfrac{W(\mu(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle})) + W(\mathrm{med}(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle}))}{2} & i = j \land k \neq u \\
0 & i = j \land k = u \\
\dfrac{2 - \left(W(\mu(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle})) + W(\mathrm{med}(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle}))\right)}{2} & i \neq j
\end{cases}    (2)

3.4.2. Shapeness analysis from Chi-Square

Here, we quantify the "shapeness", focused on having equal distributions, following the Chi-Square test χ² between x(i)⟨k⟩ and x(j)⟨u⟩:

\chi^2 = \sum_{\ell} \frac{(x^{\langle k\rangle}_{\ell} - x^{\langle u\rangle}_{\ell})^2}{x^{\langle u\rangle}_{\ell}}    (3)

From this rule, it should be considered that both groups must have the same cardinality (|x⟨k⟩| = |x⟨u⟩|), and that the respective data sorting determines the direction of comparison (i.e., the direction goes from group ⟨k⟩ towards the distribution of ⟨u⟩). To address these issues, the elements of the smaller group are repeated, without adding new unknown data, to preserve its mean and standard deviation; secondly, we evaluate both directions to quantify the similarity, computing χ²(x(i)⟨k⟩ → x(j)⟨u⟩) and χ²(x(j)⟨u⟩ → x(i)⟨k⟩). The value χ² rejects the null hypothesis of equal distributions when χ² > χ²(α, |x⟨g⟩|−1), where χ²(α, |x⟨g⟩|−1) is the upper critical value of the Chi-Square distribution with |x⟨g⟩|−1 degrees of freedom at a significance level α. We define the shapeness value as:

Sh(x^{(i)}_{\langle k\rangle}, x^{(j)}_{\langle u\rangle}) =
\begin{cases}
\dfrac{\chi^2(x^{(i)}_{\langle k\rangle} \to x^{(j)}_{\langle u\rangle}) + \chi^2(x^{(j)}_{\langle u\rangle} \to x^{(i)}_{\langle k\rangle})}{2} & i = j \land k \neq u \\
0 & i = j \land k = u \\
\dfrac{2 - \left(\chi^2(x^{(i)}_{\langle k\rangle} \to x^{(j)}_{\langle u\rangle}) + \chi^2(x^{(j)}_{\langle u\rangle} \to x^{(i)}_{\langle k\rangle})\right)}{2} & i \neq j
\end{cases}    (4)

This test can be used directly as an indicator of how relatively far the samples are from each other. Hence, a higher value of this metric means that the samples are clearly different and separated; however, the control patients' distribution may still lie near the Parkinson one while the Parkinson distribution is clearly far.
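A minimal sketch of the directed Chi-Square of Eq. (3), assuming both groups already share the same cardinality (the names are ours):

```python
import numpy as np

def chi2_directed(x_k, x_u):
    """Directed Chi-Square of Eq. (3): distance of group <k> towards the
    distribution of group <u>, after rank-aligning both samples by sorting."""
    x_k = np.sort(np.asarray(x_k, float))
    x_u = np.sort(np.asarray(x_u, float))
    return float(((x_k - x_u) ** 2 / x_u).sum())
```

The symmetric shapeness of Eq. (4) then averages chi2_directed(x_k, x_u) and chi2_directed(x_u, x_k).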
Finally, Algorithm 1 shows the steps to calculate the proposed homoscedasticity and shapeness levels for the model.

4. Experimental setup

4.1. Datasets

This study recruited 37 patients: a control population (23 subjects with an average age of 64.7 ± 13) and a Parkinson population (14 subjects with an average age of 72.8 ± 6.8). The patients were invited to walk (without any marker protocol), developing a natural locomotion gesture. Parkinson participants were evaluated by a physiotherapist (with more than five years of experience) and stratified according to the H&Y scale (level 1.0 = 2, level 1.5 = 1, level 2.5 = 5, and level 3.0 = 6 participants). These patients signed an informed consent, and the dataset counts with the approval of the Ethics Committee of Universidad Industrial de Santander.

For recording, during natural walking along around 3 meters, the locomotion was registered 8 times from a sagittal view, following semi-controlled conditions (a green background). In this study we used a conventional optical camera, a Nikon D3500, that outputs sequences at 60 fps with a spatial resolution of 1080p. The camera was located to cover the whole participant silhouette. Every sequence was spatially resized to 64×64 pixels and temporally cropped to 64 frames. Besides, the videos were normalized and a subsequent subsampling was carried out to ensure a complete gait cycle.

To follow one-class learning, the proposed approach was trained only with control subjects. In such a case, the set of control patients was split into common train, validation and test partitions of 11, 3 and 9 randomly selected patients, respectively. For Parkinson participants, we took validation and test partitions of 3 and 11 randomly selected patients to complement the validation and test control sets.

Algorithm 1 Calculation of the homoscedasticity and shapeness metrics for any quantity of data groups with any classes
Require: C = {c0, c1, ..., cn}                               ▷ Classes in the dataset
Require: Gci = {x(i)⟨0⟩, x(i)⟨1⟩, ..., x(i)⟨mi⟩} ∀ ci ∈ C    ▷ Partitions per class
  h ← 0
  s ← 0
  for any pair (ci, cj) in C do
      for any pair (x(i)⟨k⟩, x(j)⟨u⟩) in (Gci, Gcj) do
          h ← h + H(x(i)⟨k⟩, x(j)⟨u⟩)    ▷ H defined in Eq. (2)
          s ← s + Sh(x(i)⟨k⟩, x(j)⟨u⟩)   ▷ Sh defined in Eq. (4)
      end for
  end for
  N ← Σi |Gci|
  d ← C(N, 2)    ▷ Combinations of N in groups of 2
  h ← h / d      ▷ Homoscedasticity level metric
  s ← s / d      ▷ Shapeness level metric
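The pair-iteration skeleton of Algorithm 1 can be sketched as below; `pairwise_level` and the dummy metric are illustrative names, with Eqs. (2) and (4) plugged in through `pair_metric`:

```python
from itertools import combinations

def pairwise_level(partitions, pair_metric):
    """Algorithm 1 skeleton: accumulate a pairwise metric (H or Sh) over
    every pair of groups across all classes, then average by the number
    of pairs C(N, 2)."""
    # Flatten {class: [groups...]} into (class, group) tuples.
    groups = [(c, g) for c, gs in partitions.items() for g in gs]
    pairs = list(combinations(groups, 2))
    total = sum(pair_metric(ci, gi, cj, gj) for (ci, gi), (cj, gj) in pairs)
    return total / len(pairs)
```

With a toy metric returning 1 for same-class pairs and 0 otherwise, {"c": [[1], [2]], "p": [[3]]} yields one same-class pair out of three, i.e. 1/3.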
Hence, we balanced the data for standard and statistical validation purposes.

4.1.1. External dataset validation

A main interest in this work is to measure the capability to generalize motion patterns from anomaly deep representations. Also, we are interested in measuring the capability of the embedding descriptors to discriminate PD from other classes, even for videos captured with external protocols. Hence, in this work we only evaluate (without retraining) the proposed approach on a public dataset of walking videos that includes knee-osteoarthritis (50 subjects with an average age of 56.7 ± 12.7), Parkinson (16 subjects with an average age of 68.6 ± 8.3) and control (30 subjects with an average age of 43.7 ± 9.3) patients [21]. The 96 participants were recorded with a static green background, blurred faces and markers on their bodies. Following the same methodology as for our own data, each sequence was spatially resized to 64×64 pixels, temporally cropped to 64 frames, and finally normalized and subsampled ensuring a complete gait cycle.

4.2. Model configuration

The introduced strategy has an autoencoder and an encoder net in the generator, while the discriminator has an encoder net. The encoders use three layers per level, including 3D convolutions (4×4×4 kernels with stride 2×2×2), BatchNormalization (momentum of 0.1 and epsilon of 1 × 10⁻⁵) and LeakyReLU (α = 0.2). At each progressive level, the input is reduced to half in the spatial and temporal dimensions while the features are doubled. The decoder network follows a configuration symmetrical to the encoder, with the same layers (replacing 3D convolutions by 3D transpose convolutions). The overall structure is summarized in Table 1.

Table 1: Generator and Discriminator Networks structure summary

  Module         Network   Levels   Input         Output
  Generator      Encoder   5        64×64×64×1    1×1×1×n
                 Decoder   5        1×1×1×n       64×64×64×1
  Discriminator  Encoder   5        64×64×64×1    1×1×1×1

5. Evaluations and Results

The proposed strategy was exhaustively validated with respect to its capability to recognize parkinsonian inputs as abnormal class patterns in architectures trained only with control patterns, under challenging unbalanced and scarce scenarios.
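The level-by-level tensor sizes of Table 1 can be reproduced with standard convolution arithmetic; the padding scheme (1 on the first four levels, 0 on the last) and the channel doubling from 1 are our assumptions, chosen so a 4×4×4 stride-2 kernel maps 64³×1 to 1×1×1×n in five levels:

```python
def conv3d_out(n, k=4, s=2, p=1):
    """Output length of one dimension: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def encoder_shapes(size=64, chans=1, levels=5):
    """Shape progression of the 5-level encoder (Table 1): each level
    halves time/space and doubles the feature channels."""
    shapes = []
    for level in range(levels):
        # Assumed: padding 1 on intermediate levels, 0 on the last one.
        size = conv3d_out(size, p=1 if level < levels - 1 else 0)
        chans *= 2
        shapes.append((size, size, size, chans))
    return shapes
```

Under these assumptions, encoder_shapes() ends at (1, 1, 1, 32), matching the 1×1×1×n bottleneck of Table 1 for n = 32.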
Hence, in the first experiment, the proposed strategy was trained only with control samples from our own dataset, following a video reconstruction pretext task. Then, encoder (‖zG − z′G‖₂), contextual (‖x − x′‖₁) and adversarial (‖zD − z′D‖₂) embedding errors were recovered as locomotor descriptors of the observed sequences. For classification purposes, these errors were binarized by imposing a threshold value: τzG = 1.768 for the encoder, τx = 0.147 for the contextual, and τzD = 0.429 for the adversarial errors. Table 2 summarizes the achieved performance of the three locomotor descriptors according to standard classification metrics. In general, the proposed strategy reports a remarkable capability to label Parkinson patterns as abnormal samples, which are excluded from the trained representation.
Interestingly, the contextual errors reach the highest value among the descriptors to classify between control and Parkinson patients, reporting a remarkable 86.9% AUC, with mistakes in only 64 video clips (approximately 3 patients). For robustness validation, we are also interested in the distribution of the prediction outputs, which may suggest the generalization capability of the model. For doing so, we also validate the locomotion descriptors with respect to the introduced homoscedasticity and shapeness validation.

Table 2: Model performance for encoder, contextual and adversarial losses using standard metrics when the model trains with control patients. Acc, Pre, Rec, Spe, F1 stand for accuracy, precision, recall, specificity and F1 score, respectively.

  Loss         Acc     Pre     Rec     Spe     F1      ROC-AUC
  Encoder      53.8%   89.5%   20.4%   96.9%   33.2%   58.7%
  Contextual   85.7%   96.6%   77.4%   96.4%   85.7%   86.9%
  Adversarial  75.5%   94.3%   60%     95.4%   73.3%   77.7%

Table 3 summarizes the results achieved by each locomotion embedding descriptor, contrasted with the reported results from standard metrics. In such a case, the validated metrics suggest that the contextual errors may be overfitted to the trained dataset and the recording conditions, which may be restrictive for generalizing the architecture to other datasets. On the contrary, the encoder descriptor shows evident statistical robustness in variance and shapeness distributions. Furthermore, the encoder losses evidence a clear separation between the control and Parkinson distributions in Figure 3, where the proposed model can even separate Hoehn & Yahr stages, with the exception of the 2.5 and 3.0 levels, where the Chi-Square test shows that both distributions remain equal, meaning that both stages are difficult to model.
Table 3: Model performance for encoder, contextual and adversarial losses using the proposed statistical metrics when the model trains with control patients.

  Loss         Homoscedasticity   Shapeness
  Encoder      80%                70%
  Contextual   50%                40%
  Adversarial  50%                45%

Figure 3: Data distribution given by the proposed model for control and parkinson samples by Hoehn & Yahr levels.

Following one of the main interests of this work, i.e., the generalization capability, the proposed strategy was validated with an external public dataset (without any extra training) that includes Parkinson (16 patients), knee-osteoarthritis (50 patients) and control (30 patients) subjects [21]. Table 4 summarizes the achieved results to discriminate among the three unseen classes, evidencing a notable performance following the encoder embedding representation. It should be noted that the Encoder achieves the highest ROC-AUC, reporting an average of 75%, being the most robust representation, as suggested by the statistical homoscedasticity and shapeness validation.
The contextual and adversarial losses have better accuracy, precision and recall, but the specificity suggests that there is no evidence of correctly classifying control subjects. In such a sense, the model labels all samples as abnormal with respect to the trained representation. In contrast, the encoder element in the network (Figure 1-a) captures relevant gait patterns to distinguish between control, Parkinson and knee-osteoarthritis patients.

Table 4: Model performance for encoder, contextual and adversarial losses using the proposed model without retraining and the same thresholds as Table 2. Acc, Pre, Rec, Spe, F1 stand for accuracy, precision, recall, specificity and F1 score, respectively.

  Loss         Acc     Pre     Rec     Spe     F1      ROC-AUC
  Encoder      62.6%   97.9%   58.1%   91.9%   72.9%   75%
  Contextual   86.7%   86.7%   100%    0%      92.9%   50%
  Adversarial  87.8%   89.4%   97.4%   24.9%   93.3%   61.2%

Along the same line, the external dataset was also validated with respect to the homoscedasticity and shapeness metrics. Table 5 summarizes the achieved results from the distribution representation of the output probabilities.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' As expected, the results enforce the fact that embeddings from the Encoder have much better generalization against the other losses, allowing to discriminate among three different unseen classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Remarkably, the results suggest that control subjects of the external dataset belong to the trained control set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' This fact is relevant because indicates that architecture is principally dedicated to coded locomotor patterns without strict restrictions about captured conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' To complement such results, output probabilities from three classes are summarized in violin plots, as illustrated in Figure 4 which shows the separation between the classes of parkinson and knee-osteoarthritis, also, between levels of the diseases, being remarkable the locomotor affectations produced by the patients diagnosed with knee-Osteoarthritis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' 12 25 20 p< 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='05 p< 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='05 15 Encoder Errors p<0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='05 p< 0.' 
(Figure: violin plots of the encoder errors for control subjects and Parkinson stages 1.0, 1.5, 2.5 and 3.0; most pairwise comparisons reach p < 0.05.)

Table 5: Model performance for the encoder, contextual and adversarial losses using the proposed statistical metrics and the same model as in Table 2.

Loss         Homoscedasticity   Shapeness
Encoder      66.7%              66.7%
Contextual   83.4%              0%
Adversarial  16.7%              16.7%

Figure 4: Data distribution given by the proposed model for control, Parkinson (PD) and knee-osteoarthritis (KOA) samples by levels, where EL is early, MD is medium and SV is severe.

Alternatively, in an additional experiment, we trained using only patients diagnosed with Parkinson, to force the architecture to extract these abnormal locomotion patterns. In such a case, the videos from control subjects are associated with abnormal responses of the trained architecture. Table 6 summarizes the results achieved from standard and statistical distribution metrics. As expected, this configuration of the architecture achieves a lower classification performance, because of the high variability and complexity involved in coding the disease. In fact, Parkinson patients may manifest totally different locomotion affectations at the same stage. For this reason, the architecture faces major challenges in discriminating control subjects and therefore shows a lower agreement with the ground-truth labels.
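In either training configuration, classification ultimately reduces to thresholding an anomaly score computed over the learned representation. The sketch below assumes a GANomaly-style score, i.e. the L2 distance between the embedding of the input and the embedding of its reconstruction; this definition, and the synthetic embeddings, are illustrative assumptions rather than the study's exact formulation. The ROC-AUC is computed with the standard rank-based definition.

```python
import numpy as np

def encoder_score(z_input, z_recon):
    # Anomaly score: L2 distance between the embedding of the input clip
    # and the embedding of its reconstruction (assumed definition).
    return np.linalg.norm(z_input - z_recon, axis=-1)

def roc_auc(neg_scores, pos_scores):
    # Rank-based ROC-AUC: probability that a randomly chosen patient score
    # exceeds a randomly chosen control score (ties counted as one half).
    neg = np.asarray(neg_scores, dtype=float)
    pos = np.asarray(pos_scores, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(1)
# Synthetic 128-d embeddings: controls reconstruct well (small embedding
# gap), patients poorly (larger gap), mimicking one-class training.
z_ctrl = rng.normal(size=(23, 128))
z_park = rng.normal(size=(14, 128))
ctrl_scores = encoder_score(z_ctrl, z_ctrl + 0.1 * rng.normal(size=z_ctrl.shape))
park_scores = encoder_score(z_park, z_park + 0.6 * rng.normal(size=z_park.shape))

auc = roc_auc(ctrl_scores, park_scores)
print(f"ROC-AUC = {auc:.3f}")
```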
The statistical homoscedasticity and shapeness metrics confirm this issue, achieving scores lower than 50% and indicating that the model, under this configuration, is not generalizable. This configuration would demand a larger number of Parkinson patients to deal with the variability of the disease.

Table 6: Model performance for the encoder, contextual and adversarial losses using standard metrics when the model is trained with Parkinson patients. Acc, Pre, Rec, Spe, Homo and Shape stand for accuracy, precision, recall, specificity, homoscedasticity and shapeness, respectively.

Loss         Acc     Pre     Rec     Spe     Homo   Shape   ROC-AUC
Encoder      62.5%   55.2%   88.9%   40.9%   45%    50%     64.9%
Contextual   71.5%   93.5%   73.7%   50%     50%    40%     61.9%
Adversarial  68.8%   64.1%   69.4%   68.2%   45%    40%     68.8%

6. Discussion

This work presented a deep generative scheme, designed under the one-class-learning methodology, to model gait locomotion patterns in markerless video sequences. The proposed architecture is trained under the video reconstruction pretext task, being critical to capture kinematic behaviors without the association of expert diagnosis criteria. From an exhaustive experimental setup, the proposed approach was trained with videos recorded from a control population, and parkinsonian patterns were then associated with anomalies through the design of a discrimination metric that operates on the embedding representations. On our own dataset, the proposed approach achieves a ROC-AUC of 86.9%, while on an external dataset, whose videos were unseen during training, it achieved an average ROC-AUC of 75%. One of the main issues addressed in this work was the effort to train a generative architecture with sufficient generalization capability to capture kinematic patterns without a bias associated with the capture setups. To carefully select such architectures, this study introduced homoscedasticity and shapeness as complementary statistical rules to validate the models.
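The text does not give closed-form definitions for these two rules, so the sketch below encodes one plausible reading, stated here as an assumption: homoscedasticity as the fraction of subject pairs whose score distributions pass Levene's equal-variance test, and shapeness as the fraction of subjects whose scores pass a Shapiro-Wilk normality test.

```python
from itertools import combinations

import numpy as np
from scipy.stats import levene, shapiro

def homoscedasticity_level(samples, alpha=0.05):
    # Fraction of subject pairs for which Levene's test does not reject
    # equal variances at level alpha (hypothetical definition).
    pairs = list(combinations(samples, 2))
    passed = sum(levene(a, b).pvalue >= alpha for a, b in pairs)
    return passed / len(pairs)

def shapeness_level(samples, alpha=0.05):
    # Fraction of subjects whose score distribution is compatible with a
    # normal shape under the Shapiro-Wilk test (hypothetical definition).
    passed = sum(shapiro(s).pvalue >= alpha for s in samples)
    return passed / len(samples)

rng = np.random.default_rng(2)
# Synthetic per-subject anomaly scores drawn with a shared variance, so
# both levels should come out high.
subjects = [rng.normal(0.0, 1.0, 80) for _ in range(10)]
print(homoscedasticity_level(subjects), shapeness_level(subjects))
```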
From these metrics, it was evidenced that the encoder embeddings bring major capabilities to generalize the models, compared with the contextual and adversarial losses, achieving on average 80% and 70% for homoscedasticity and shapeness, respectively. Once these metrics defined the best architecture and embedding representation, we confirmed the selection by using the external dataset, with different capture conditions and even with the study of a new disease class in the population, i.e., knee osteoarthritis. Remarkably, the proposed approach generates embeddings with sufficient capabilities to discriminate among different unseen populations. In the literature, different efforts have been reported to develop computational strategies to discriminate Parkinson from control patterns, following markerless and sensor-based observations [6–9, 22]. For instance, volumetric architectures have been adjusted from discriminatory rules, taking minimization rules associated with expert diagnosis annotations [6, 8].
These approaches have reported remarkable results (on average a 95% ROC-AUC with 22 patients). Also, Sun et al. proposed an architecture that takes frontal gait views and, together with volumetric convolutional layers, discriminates the level of freezing of gait for Parkinson patients with an accuracy of 79.3%. Likewise, Kour et al. [22] developed a sensor-based approach to correlate postural relationships with several annotated disease groups (reporting an accuracy of 92.4% and a precision of 90.0% with 50 knee-osteoarthritis, 16 Parkinson and 30 control patients).
Nonetheless, such schemes are restricted to a specific recording scenario and pose observational configurations. Besides, the minimization of these representations may be biased by the label annotations associated with expert diagnostics. On the contrary, the proposed approach adjusts the representation using only control video sequences, without any expert label intervention during the tuning of the architecture. In such a case, the architecture has major flexibility to code potential hidden relationships associated with locomotor patterns. In fact, the proposed approach was validated with the raw video sequences reported in [22], surpassing precision scores without any additional training to observe such videos. Moreover, the proposed approach uses video sequences instead of representations from key points, which coarsely minimize the dynamic complexity during locomotion.
The recovered generalization metric scores (homoscedasticity = 80%, shapeness = 70%) suggest that some patients have different statistical distributions, an expected result from the variability in the control population, as well as the variability associated with Parkinson disease phenotyping. In this sense, a larger set of training data is demanded to capture additional locomotion components, together with a sufficient variability spectrum. Nonetheless, the re-training of the architecture should be supervised from the output population distributions, to avoid overfitting to specific training scenarios. The output reconstruction may also be extended to anomaly maps that evidence, in the spatial domain, the regions with anomalies, which may further represent some association with the disease and help experts in the correct identification of the patient prediction.

7. Conclusions

This work presented a deep generative architecture with the capability of discovering anomalous locomotion patterns, convolving entire video sequences in a 3D scheme.
Interestingly, a Parkinson disease population was projected onto the architecture, returning not only outlier rejection but also coding a new locomotion distribution, with patterns separable from those of the trained control population. These results evidence a potential use of this learning and architecture scheme to recover potential digital biomarkers, coded in the embedding representations. The proposed approach was validated with standard classification rules, but also with statistical measures, to validate the capability of generalization. Future work includes the validation of the proposal among different stages and the use of federated scenarios, with different experimental capture setups, to test the performance in real scenarios.

8. Acknowledgements

The authors thank the Ministry of Science, Technology and Innovation of Colombia (MINCIENCIAS) for supporting this research work through the project "Mecanismos computacionales de aprendizaje profundo para soportar tareas de localización, segmentación y pronóstico de lesiones asociadas con accidentes cerebrovasculares isquémicos", with code 91934.
References

[1] T. Vos, A. A. Abajobir, K. H. Abate, C. Abbafati, K. M. Abbas, F. Abd-Allah, R. S. Abdulkader, A. M. Abdulle, T. A. Abebo, S. F. Abera, et al., Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016, The Lancet 390 (10100) (2017) 1211–1259.

[2] E. R. Dorsey, B. R. Bloem, The Parkinson pandemic: a call to action, JAMA Neurology 75 (1) (2018) 9–10.

[3] World Health Organization, Neurological disorders: public health challenges, World Health Organization, 2006.

[4] R. Balestrino, A. Schapira, Parkinson disease, European Journal of Neurology 27 (1) (2020) 27–42.

[5] N. Kour, S. Arora, et al., Computer-vision based diagnosis of Parkinson's disease via gait: a survey, IEEE Access 7 (2019) 156620–156645.

[6] L. C. Guayacán, E. Rangel, F. Martínez, Towards understanding spatio-temporal parkinsonian patterns from salient regions of a 3D convolutional network, in: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), IEEE, 2020, pp. 3688–3691.

[7] R. Sun, Z. Wang, K. E. Martens, S. Lewis, Convolutional 3D attention network for video based freezing of gait recognition, in: 2018 Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2018, pp. 1–7.

[8] L. C. Guayacán, F. Martínez, Visualising and quantifying relevant parkinsonian gait patterns using 3D convolutional network, Journal of Biomedical Informatics 123 (2021) 103935.

[9] M. H. Li, T. A. Mestre, S. H. Fox, B. Taati, Vision-based assessment of parkinsonism and levodopa-induced dyskinesia with pose estimation, Journal of NeuroEngineering and Rehabilitation 15 (1) (2018) 1–13.

[10] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, C. I. Sánchez, A survey on deep learning in medical image analysis, Medical Image Analysis 42 (2017) 60–88.

[11] K. Sugandhi, F. F. Wahid, G. Raju, Feature extraction methods for human gait recognition: a survey, in: International Conference on Advances in Computing and Data Sciences, Springer, 2016, pp. 377–385.

[12] R. Chalapathy, S. Chawla, Deep learning for anomaly detection: a survey, arXiv preprint arXiv:1901.03407 (2019).

[13] L. Schmarje, M. Santarossa, S.-M. Schröder, R. Koch, A survey on semi-, self- and unsupervised learning for image classification, IEEE Access 9 (2021) 82146–82168.

[14] C.-W. Cho, W.-H.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Chao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Lin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Chen, A vision-based analysis system for gait recognition in patients with parkinson’s disease, Expert Systems with applications 36 (3) (2009) 7033–7039.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [15] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Lin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Liao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Lai, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Pei, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Kuo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Lin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Chang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Lo, et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=', Quantification and recogni- tion of parkinsonian gait from monocular video imaging using kernel-based principal component analysis, Biomedical engineering online 10 (1) (2011) 1–21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [16] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' N˜omm, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Toomela, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Vaske, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Uvarov, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Taba, An alternative approach to distinguish movements of parkinson disease patients, IFAC- PapersOnLine 49 (19) (2016) 272–276.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [17] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Soltaninejad, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Rosales-Castellanos, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Ba, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Ibarra-Manzano, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Cheng, Body movement monitoring for parkinson’s disease patients us- ing a smart sensor based non-invasive technique, in: 2018 IEEE 20th In- ternational Conference on e-Health Networking, Applications and Services (Healthcom), IEEE, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [18] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Kiran, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Thomas, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Parakkal, An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos, Journal of Imaging 4 (2) (2018) 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [19] J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Demˇsar, Statistical comparisons of classifiers over multiple data sets, The Journal of Machine Learning Research 7 (2006) 1–30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [20] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Luengo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Garc´ıa, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Herrera, A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test condi- tions and non-parametric tests, Expert Systems with Applications 36 (4) (2009) 7798–7808.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [21] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Kour, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Arora, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=', A vision-based gait dataset for knee osteoarthritis and parkinson’s disease analysis with severity levels, in: International Con- ference on Innovative Computing and Communications, Springer, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' 303–317.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' [22] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Kour, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Gupta, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' Arora, A vision-based clinical analysis for classifica- tion of knee osteoarthritis, parkinson’s disease and normal gait with severity based on k-nearest neighbour, Expert Systems 39 (6) (2022) e12955.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'} +page_content=' 17' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9FJT4oBgHgl3EQfACzo/content/2301.11418v1.pdf'}