Differentially Private Distributed Bayesian Linear Regression with MCMC

Barış Alparslan 1, Sinan Yıldırım 1,2, and Ş. İlker Birbil 3

1 Faculty of Engineering and Natural Sciences, Sabancı University, İstanbul, Turkey ∗
2 Center of Excellence in Data Analytics (VERİM), Sabancı University, İstanbul, Turkey
3 Department of Business Analytics, University of Amsterdam, Amsterdam, The Netherlands

February 1, 2023

Abstract

We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression. Bayesian estimation of the regression coefficients is conducted mainly using Markov chain Monte Carlo algorithms, while we also provide a fast version to perform Bayesian estimation in one iteration.
The proposed methods have computational advantages over their competitors. We provide numerical results on both real and simulated data, which demonstrate that the proposed algorithms provide well-rounded estimation and prediction.

Keywords: Differential privacy, linear regression, distributed learning, MCMC

1 Introduction

Linear regression is a mathematical method that lies at the core of statistical research. Many researchers have been working on linear regression since the 19th century, and hence, many well-known solution methods exist. On a separate note, privacy-preserving statistical learning has gained popularity and importance in recent years, with differential privacy prevailing as the most commonly used definition of privacy (Dwork, 2006; Dwork et al., 2014a; Dankar and El Emam, 2013).
As a result, there is a recent but growing interest in differentially private linear regression. Many works in the data privacy literature do not mainly focus on regression but are motivated by, or can be applied to, regression. As an example, differentially private empirical risk minimisation (Chaudhuri et al., 2009; Bassily et al., 2014; Abadi et al., 2016; Kuru et al., 2022) can be applied to regression once it is cast as a data-driven optimisation problem.
Many general-purpose Bayesian differentially private estimation methods can also be used in regression problems. Williams and McSherry (2010) is one of the first works that considered a hierarchical model for the privatised data and Bayesian estimation for the model parameters. Zhang et al. (2016) analyse several differential privacy mechanisms for posterior sampling and suggest using these mechanisms also for linear regression.

∗ The study was funded by the Scientific and Technological Research Council of Turkey (TÜBİTAK) ARDEB Grant No 120E534. Barış Alparslan and Sinan Yıldırım were supported by the project.

arXiv:2301.13778v1 [stat.ML] 31 Jan 2023

Dimitrakakis et al.
(2017) developed a posterior sampling query algorithm to combine differential privacy and Bayesian inference. Contrary to those one-sample approaches, general-purpose differentially private Markov chain Monte Carlo (MCMC) algorithms, which aim to identify the posterior distribution via iterative sampling, can also be applied to regression (Wang et al., 2015; Foulds et al., 2016; Yıldırım and Ermiş, 2019; Heikkilä et al.
, 2019; Gong, 2022; Alparslan and Yıldırım, 2022; Ju et al., 2022). Several works in the literature are somewhat more directly related to differentially private regression. Zhang et al. (2012) suggested a functional mechanism method, which is based on perturbing polynomial objective functions with privacy-preserving noise. As an alternative, Dwork et al. (2014b) and Wang (2018) considered perturbation of summary statistics.
Alabi et al. (2022) provide a technical discussion of different point estimation methods for differentially private simple linear regression, that is, when there is a single feature. Ferrando et al. (2022) present a method to compute confidence intervals for the coefficients of linear regression. Cai et al. (2021) study the rates of convergence for parameter estimation with differential privacy via output perturbation, where a non-private estimator is perturbed. All those works consider point estimation of the linear regression parameters. In this paper, we focus on differentially private distributed Bayesian inference for the parameters of linear regression.
We use a novel hierarchical model that relies on a distributional relationship (Proposition 1) between the summary statistics of linear regression which, to the best of our knowledge, has not been exploited so far. We propose Bayesian inference algorithms that take perturbations of summary statistics as observations. The general inferential tool we pick in this paper is MCMC, a well-known framework for iterative sampling from posterior distributions. As we shall see, the proposed MCMC algorithms in this paper already have lower computational complexities per iteration than their closest competitors in Bernstein and Sheldon (2019). Additionally, we also propose much faster Bayesian estimation methods that perform estimation in one iteration. Finally, we assume a distributed setting where the total dataset is shared among multiple parties (data nodes), who want to collaborate for the inference of a common parameter; see, e.g.
, Heikkilä et al. (2017) for such a setting. The non-distributed setting is just a special case (a single data holder) of our methodology. This paper has connections with several works in the literature, yet it has significant differences from each of them, as we shall explain below. For the privacy-preserving mechanism, we consider adding noise to the summary statistics of linear regression, similarly to Wang (2018) and Bernstein and Sheldon (2019). The adaSSP framework of Wang (2018) motivates the fast Bayesian estimation methods developed in this paper. However, adaSSP is a point estimation method, while we aim for a posterior distribution.
The latter work, Bernstein and Sheldon (2019), is particularly related to this paper as they also study Bayesian linear regression with differential privacy using perturbed statistics of data. However, there are some important differences between our work and that of Bernstein and Sheldon (2019). These differences stem from the choice of summary statistics and the consequent hierarchical structure used for modelling linear regression. Those modelling differences lead to significant differences in the inference methods as well as significant computational advantages for our methods. Specifically, the computational complexity of our methods is O(d^3), where d is the number of features. This order is much less than the O(d^6) of Bernstein and Sheldon (2019). Finally, neither Wang (2018) nor Bernstein and Sheldon (2019) has considered a distributed learning setting like we do in this paper, although both works can be adapted to the distributed setting with moderate modifications.
Foulds et al. (2016) and Heikkilä et al. (2017) are other differentially private Bayesian inference methods that target posterior distributions given perturbed summary statistics of sensitive data. The one by Heikkilä et al. (2017) is particularly interesting because they consider a distributed setting and present linear regression as their showcase example. However, we differ from those works in the way we model the perturbed statistics and in the choice of inference methods. Specifically, Foulds et al. (2016) and Heikkilä et al.
(2017) treat the perturbed statistics as if they were not perturbed, while we incorporate the effect of perturbation in our model. Recently, Alparslan and Yıldırım (2022) and Ju et al. (2022) employed data augmentation for modelling sensitive and privatised data and proposed MCMC for Bayesian inference, the latter work having linear regression as a major application. Their methods have O(n) complexity per iteration in general, where n is the number of instances in the data set, which can be slow when n is large. In contrast, our methods are scalable in data size since their computational complexities do not depend on n. We note that Alparslan and Yıldırım (2022, Section 4.2) also present an MCMC method scalable with n that exploits the approximate normality of additive summary statistics.
However, a direct application of that method would lead to an algorithm with O(d^6) computational complexity per iteration, as in Bernstein and Sheldon (2019).

The paper is organised as follows: In Section 2, we review differential privacy. In Section 3, we lay out the hierarchical model for differentially private distributed linear regression with perturbed summary statistics. In Section 4, we present and discuss the aspects of the proposed inference algorithms. In Section 5, we provide numerical experiments. We conclude in Section 6.

Notation: Matrices and vectors are shown in bold-face notation. For a matrix A, its transpose, trace, and determinant (whenever they exist) are A^T, tr(A), and |A|, respectively. For any sequence {ai}_{i≥0}, we let ai:j = (ai,
. . . , aj). We write x ∼ P to mean that the random variable x has distribution P. N(m, Σ) stands for the multivariate normal distribution with mean m and covariance Σ. Wishart and inverse-Wishart distributions with scale matrix Λ and κ degrees of freedom are shown as W(Λ, κ) and IW(Λ, κ), respectively. IG(a, b) stands for the inverse-gamma distribution with shape and scale parameters a and b. We augment those notations with x to denote the respective probability density functions (pdf), e.g., as N(x; m, Σ).
2 Differential Privacy

Differential privacy (Dwork, 2006, 2008) concerns randomised algorithms that run on sensitive, or usually private, data. A randomised algorithm takes an input data set D ∈ D and returns a random output in O, where the randomness is intrinsic to the algorithm. A differentially private algorithm constrains the difference between the probability distributions of the outputs obtained from neighbouring data sets. We say two data sets are neighbours if they differ by one individual's piece of data.

Definition 1 (Differential privacy). A randomised algorithm M : D → O is (ϵ, δ)-differentially private (DP) if, for any pair of neighbouring data sets D, D′ ∈ D and for any subset O ⊆ O of the support domain, it satisfies

P[M(D) ∈ O] ≤ e^ϵ P[M(D′) ∈ O] + δ.

The definition implies that smaller (ϵ, δ) leads to more privacy. Privacy-preserving algorithms often use noise-adding mechanisms.
A popular noise-adding mechanism is the Gaussian mechanism (Dwork et al., 2006), which perturbs a function f : D → R^k of the sensitive data, for some k ≥ 1, with random noise drawn from the Gaussian distribution. The amount of added noise depends on the L2-sensitivity of the function, given by

∆f = max_{neighbouring D1, D2 ∈ D} ∥f(D1) − f(D2)∥2.

An (ϵ, δ)-DP Gaussian mechanism returns

f(D) + ∆f σ(ϵ, δ) v,  v ∼ N(0, Ik),   (1)

upon taking D as the input, where the quantity σ(ϵ, δ) ensures (ϵ, δ)-DP. In this work, we take σ(ϵ, δ) as the analytical solution given in Balle and Wang (2018, Algorithm 1) due to its tightness. The Gaussian mechanism is also central to other forms of privacy, such as zero-concentrated DP (Bun and Steinke, 2016) and Gaussian DP (Dong et al., 2022). In this paper, we consider (ϵ, δ)-DP as the type of privacy and use the Gaussian mechanism to generate noisy observations.
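As an illustration, a minimal sketch of the Gaussian mechanism in (1) follows. The paper uses the tighter analytic calibration of Balle and Wang (2018, Algorithm 1) for σ(ϵ, δ); the sketch below instead substitutes the classical calibration σ(ϵ, δ) = sqrt(2 ln(1.25/δ))/ϵ (valid for ϵ ≤ 1) as a simplifying assumption, and the function names are ours.

```python
import numpy as np

def classical_sigma(eps, delta):
    # Classical Gaussian-mechanism calibration (valid for eps <= 1); the paper
    # instead uses the tighter analytic calibration of Balle and Wang (2018).
    return np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def gaussian_mechanism(f_value, l2_sensitivity, eps, delta, rng):
    # Release f(D) + Delta_f * sigma(eps, delta) * v with v ~ N(0, I_k), as in (1).
    v = rng.standard_normal(np.shape(f_value))
    return np.asarray(f_value) + l2_sensitivity * classical_sigma(eps, delta) * v

rng = np.random.default_rng(0)
f_D = np.array([1.0, 2.0, 3.0])   # a hypothetical query answer f(D) with k = 3
noisy = gaussian_mechanism(f_D, l2_sensitivity=1.0, eps=1.0, delta=1e-5, rng=rng)
```

Note that the noise scale grows with the sensitivity ∆f and shrinks as the privacy budget (ϵ, δ) is relaxed.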
Moreover, the proposed methods in this paper never use the sensitive data once given the noisy observations generated using the Gaussian mechanism, hence exploiting the post-processing property of differential privacy (Dwork and Roth, 2014).

Theorem 1 (Post-processing). Let M : D → O be (ϵ, δ)-DP and let f : O → O′ be another mapping independent of D given M(D). Then fM : D → O′ with fM(D) = f(M(D)) is (ϵ, δ)-DP.

3 Differentially Private Distributed Linear Regression

In this section, we present a new hierarchical model for differentially private distributed linear regression. For ease of exposition, we first present a model with a single data holder, then generalise the model to the distributed setting.

3.1 Basic Model and Privacy Setup

Suppose we have a sequence of random variables {(xi, yi) : i = 1,
. . . , n}, where xi ∈ X ⊆ R^{d×1} are the feature vectors and yi ∈ Y ⊆ R is the i'th response variable. We consider the normal linear regression to model the dependency between xi and yi. Specifically,

yi = xi^T θ + ei,  ei ∼ i.i.d. N(0, σy^2),  i = 1, . . . , n,

where θ ∈ R^d is the vector of the linear regression coefficients. We assume that the feature vectors xi are i.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' with distribution Px.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Below, we will particularly focus on the case when Px can be assumed to be a normal distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' However, we will also present algorithms for general Px.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In matrix notation, the above can shortly be expressed as y = Xθ + e, e ∼ N(0, σ2 yIn), where X = � xT 1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' xT n �T is the so-called design matrix, y = � y1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' yn �T .' 
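To make this setup concrete, the model above can be simulated in a few lines, together with the summary statistics S = X^T X and z = X^T y used in the sequel (a minimal sketch; the values of n, d, θ, and σ_y are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
sigma_y = 0.5

theta = rng.normal(size=d)              # regression coefficients
X = rng.normal(size=(n, d))             # feature vectors x_i ~ P_x (here standard normal)
e = rng.normal(scale=sigma_y, size=n)   # i.i.d. N(0, sigma_y^2) noise
y = X @ theta + e                       # y = X theta + e

# Summary statistics used throughout
S = X.T @ X
z = X.T @ y
```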
Additionally, we define the summary statistics of X and y as

    S := X^T X,   z := X^T y,

respectively. We assume a setup where S and z are privately released through the noisy summary statistics Ŝ and ẑ, constructed as

    Ŝ = S + σ_s M,   (2)
    ẑ = z + σ_z v,   v ~ N(0, I_d),   (3)

where M is a d × d symmetric matrix whose upper-triangular elements are drawn from N(0, 1). Dwork et al. (2014b) arrange σ_s and σ_z so that both (2) and (3) are (ϵ/2, δ/2) differentially private, leading to (ϵ, δ)-DP overall. Differently from Dwork et al. (2014b), we set σ_s = σ_z = Δ_sz σ(ϵ, δ), where σ(ϵ, δ) is given in Balle and Wang (2018, Algorithm 1) and Δ_sz is the overall L2 sensitivity of [S, z], given by

    Δ_sz = sqrt(∥X∥^4 + ∥X∥^2 ∥Y∥^2),

with ∥X∥ = max_{x∈X} ∥x∥_2 and ∥Y∥ = max_{y∈Y} |y|. Based on the above relations, we shall present a hierarchical model that enables Bayesian inference of θ given Ŝ and ẑ.
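The release mechanism in (2)-(3) can be sketched as follows. The function name and inputs are our own illustration, and the noise scales sigma_s, sigma_z are taken as given (computing σ(ϵ, δ) as in Balle and Wang (2018) is omitted):

```python
import numpy as np

def release_noisy_stats(S, z, sigma_s, sigma_z, rng):
    """Gaussian-mechanism release of the summary statistics (S, z)."""
    d = S.shape[0]
    # Symmetric noise matrix M: draw the upper triangle (incl. diagonal) from
    # N(0, 1) and mirror it below the diagonal.
    U = np.triu(rng.normal(size=(d, d)))
    M = U + np.triu(U, k=1).T
    S_hat = S + sigma_s * M                       # equation (2)
    z_hat = z + sigma_z * rng.normal(size=d)      # equation (3)
    return S_hat, z_hat

rng = np.random.default_rng(1)
S = 10.0 * np.eye(3)
z = np.ones(3)
S_hat, z_hat = release_noisy_stats(S, z, sigma_s=0.5, sigma_z=0.5, rng=rng)
```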
One important element of our modelling approach is the following result, which establishes the conditional distribution of z given S, θ, and σ_y^2.

Proposition 1. For the normal linear regression model, we have

    z | S, θ, σ_y^2 ~ N(Sθ, S σ_y^2).

Proof. First, note that

    E[z | X, θ, σ_y^2] = E[X^T Xθ + X^T e] = Sθ,   (4)
    Cov(z | X, θ, σ_y^2) = X^T X σ_y^2 = S σ_y^2,   (5)

and observe that both moments depend on X only through its statistic S. Therefore, the conditional density of z given X, θ, and σ_y^2 is p(z | X, θ, σ_y^2) = N(z; Sθ, S σ_y^2). Next, define the function f : R^{n×d} → [0, ∞) with f(X) = p(z | X, θ, σ_y^2) and let C_{S,θ,σ_y^2} = {X : X^T X = S}. Since f is constant over C_{S,θ,σ_y^2}, we can write

    p(z | S) = ∫_{C_{S,θ,σ_y^2}} f dP_x = N(z; Sθ, S σ_y^2),

where the second equality follows from the moment equations (4) and (5) above. This concludes the proof.

Finally, we assign prior distributions for θ and σ_y^2 as

    θ ~ N(m, C),   σ_y^2 ~ IG(a, b).   (6)

At this point, it is worth discussing some important modelling differences between our work and Bernstein and Sheldon (2019). In Bernstein and Sheldon (2019), the central limit theorem (CLT) is applied to [S, z, y^T y], leading to a normality assumption for the whole vector. In contrast, we use the exact conditional distribution p(z | S, θ, σ_y^2) thanks to Proposition 1.
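Proposition 1 can be checked empirically: holding X (hence S) fixed and redrawing the noise e, the replicates of z = X^T y should have mean Sθ and covariance S σ_y^2. A small Monte Carlo sketch (dimensions, seed, and tolerances are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 2
sigma_y = 1.0
X = rng.normal(size=(n, d))
S = X.T @ X                      # fixed across replicates
theta = np.array([1.0, -2.0])

# Many replicates of z = X^T y, with X (hence S) held fixed.
reps = 20000
E = rng.normal(scale=sigma_y, size=(reps, n))
Y = (X @ theta) + E              # each row is one y-vector
Z = Y @ X                        # each row is z^T = y^T X
emp_mean = Z.mean(axis=0)
emp_cov = np.cov(Z.T)
```

With enough replicates, `emp_mean` approaches S θ and `emp_cov` approaches σ_y^2 S, in line with Proposition 1.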
Moreover, unlike Bernstein and Sheldon (2019), we do not require a noisy version of y^T y, and hence have the slight advantage of using less privacy-preserving noise. In summary, our model has a different hierarchical structure and requires less privacy-preserving noise.

3.2 Distributed Setting

Next, we extend our model to the distributed setting, where the total data are shared among J ≥ 1 data holders as

    (X, y) = {(X_j, y_j) : j = 1, ..., J}.   (7)

We let n_j be the number of rows of X_j, so that n = n_1 + ... + n_J. Each data holder j shares their own summary statistics S_j = X_j^T X_j and z_j = X_j^T y_j with privacy-preserving noise:

    Ŝ_j = S_j + σ_s M_j,   ẑ_j = z_j + σ_z v_j,   v_j ~ N(0, I_d).   (8)

Note that, to preserve a given (ϵ, δ)-DP overall, each party must provide that level of privacy for their data; hence σ_s and σ_z are the same as before. The hierarchical structure of the overall model (specified for normally distributed x_i's) is shown in Figure 1.

Figure 1: Differentially private distributed linear regression model (specified for normally distributed x_i's).

The distributed setting deserves separate consideration from the single data holder case for a couple of reasons. Firstly, the node-specific observations (Ŝ_1, ẑ_1), ..., (Ŝ_J, ẑ_J) are altogether statistically more informative about θ than their aggregates Σ_{j=1}^J Ŝ_j and Σ_{j=1}^J ẑ_j. This is because the aggregates are not sufficient statistics of the node-specific observations (Ŝ_1, ẑ_1), ..., (Ŝ_J, ẑ_J) with respect to θ (even when σ_y^2 is known). Therefore, when the node-specific observations are available, one should not, in principle, trivially aggregate them and apply an inference method designed for J = 1 to those aggregates. Secondly, the partitioning of data as in (7) can be relevant to data privacy applications even outside the distributed learning framework, rendering the methodology in Section 4 useful in a broader sense. For example, batches of (x, y)-type data may be donated to a common data collector as in (8).
At this point, a particular and interesting relation exists with pan-privacy applications (Dwork et al., 2010). Imagine that sensitive data from individuals are collected sequentially in time, and the data holder is concerned about possible intrusions into the memory where the sensitive data are stored. One possible way to ensure the privacy of the data against such intrusions, which is the promise of pan-privacy, is to store the noisy statistics of every new batch of data and erase the original sensitive data. Then, at any time, the data collector holds data of the form (Ŝ_1, ẑ_1), ..., (Ŝ_J, ẑ_J), each pair corresponding to a batch. As a result, inference algorithms as in Section 4 can be applied.
4 Algorithms for Bayesian Inference

Bayesian inference targets the posterior distribution of the latent variables of the model, in particular θ, given the observations Ŝ_{1:J} and ẑ_{1:J}. We present several Bayesian inference algorithms for the hierarchical model described in the previous section. In addition to other concerns such as the computational budget, the choice among those approaches mainly depends on the specification of P_x, since the distribution of S directly depends on it. In this paper, we consider the following two cases and devise algorithms for each of them:

1. In some cases it may be adequate to specify P_x = N(0, Σ_x). This leads to S | Σ_x ~ W(Σ_x, n). Further, to account for the uncertainty about the covariance Σ_x, one can treat it as a random variable with Σ_x ~ IW(Λ, κ). Figure 1 shows the hierarchical structure of the distributed setting with those specifications.
We defer the discussion of the conflict between the normality and boundedness assumptions to Remark 1 towards the end of Section 4.1.

2. As the second case, we assume a general (non-normal) P_x. A normal approximation, based on the CLT, could be considered for the distribution of S (Wilson and Ghahramani, 2011). However, this would require knowledge (or accurate estimation) of up to the fourth moments of P_x, as well as expensive computations for sampling S. We circumvent those difficulties by plugging in a point estimate of S given Ŝ and using it during the sampling process as if it were the true S itself. We then develop two different algorithms for inference of θ, one being an MCMC algorithm and the other providing a closed-form solution for the posterior of θ following a rough point-wise estimate of σ_y^2.
Note that these algorithms with fixed S do not require a distribution for x. Next, we provide the details of our approaches and the resulting algorithms.

4.1 Normally Distributed Features

In this section, we present an MCMC algorithm for Bayesian inference in the differentially private distributed linear regression model when P_x = N(0, Σ_x) and Σ_x ~ IW(Λ, κ). The latent variables involved in this variant are θ, Σ_x, σ_y^2, S_{1:J}, and z_{1:J}. Their posterior distribution given Ŝ_{1:J}, ẑ_{1:J} can be written as

    p(θ, σ_y^2, Σ_x, z_{1:J}, S_{1:J} | ẑ_{1:J}, Ŝ_{1:J}) ∝ p(θ) p(σ_y^2) p(Σ_x) Π_{j=1}^J p(z_j | θ, σ_y^2, S_j) p(S_j | Σ_x) p(Ŝ_j | S_j) p(ẑ_j | z_j).   (9)

One could design an MCMC algorithm for this posterior distribution that updates θ, σ_y^2, Σ_x, z_{1:J}, and S_{1:J} in turn based on their full conditional distributions.
However, such an algorithm suffers from poor convergence because of the high posterior correlation between θ and z_{1:J} (as verified in our numerical studies). It is well known that highly correlated variables result in poor convergence when they are updated one conditional on the other. To alleviate this problem, we work with the reduced model in which z_{1:J} are integrated out. The reduced model has θ, Σ_x, σ_y^2, and S_{1:J} as its latent variables, whose joint posterior distribution can be written as

    p(θ, σ_y^2, Σ_x, S_{1:J} | ẑ_{1:J}, Ŝ_{1:J}) ∝ p(θ) p(σ_y^2) p(Σ_x) Π_{j=1}^J p(S_j | Σ_x) p(Ŝ_j | S_j) p(ẑ_j | S_j, θ, σ_y^2),   (10)

where

    p(ẑ_j | S_j, θ, σ_y^2) = N(ẑ_j; S_j θ, σ_y^2 S_j + σ_z^2 I_d).

We would like to sample from the posterior distribution in (10) via an MCMC algorithm that updates θ, σ_y^2, Σ_x, and S_{1:J} in turn based on their full conditional distributions.
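Each factor p(ẑ_j | S_j, θ, σ_y^2) in the reduced model is an ordinary multivariate normal density and can be evaluated directly, e.g. inside an MH acceptance ratio for σ_y^2. A sketch (the function name is our own):

```python
import numpy as np
from scipy.stats import multivariate_normal

def loglik_zhat(z_hat, S, theta, sig2_y, sig2_z):
    """log N(z_hat; S theta, sig2_y * S + sig2_z * I_d)."""
    d = S.shape[0]
    cov = sig2_y * S + sig2_z * np.eye(d)
    return multivariate_normal.logpdf(z_hat, mean=S @ theta, cov=cov)

# Tiny illustration: the density is maximal at the mean S theta.
S = np.array([[4.0, 1.0], [1.0, 3.0]])
theta = np.array([0.5, -0.5])
val_at_mean = loglik_zhat(S @ theta, S, theta, sig2_y=1.0, sig2_z=0.1)
val_shifted = loglik_zhat(S @ theta + 2.0, S, theta, sig2_y=1.0, sig2_z=0.1)
```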
The variables θ and Σ_x enjoy closed-form full conditional distributions (see Appendix A for the derivations):

    Σ_x | S_{1:J}, Ŝ_{1:J}, ẑ_{1:J} ~ IW(Λ + Σ_{j=1}^J S_j, κ + n),   (11)
    θ | σ_y^2, ẑ_{1:J}, S_{1:J} ~ N(m_p, Σ_p),   (12)

where the posterior moments for θ are

    Σ_p^{-1} = Σ_{j=1}^J S_j (σ_y^2 S_j + σ_z^2 I)^{-1} S_j + C^{-1},
    m_p = Σ_p [ Σ_{j=1}^J S_j (σ_y^2 S_j + σ_z^2 I)^{-1} ẑ_j + C^{-1} m ].

The full conditional distributions of S_{1:J} and σ_y^2 have no closed form; hence we design Metropolis-Hastings (MH) moves to update them. For σ_y^2, one can simply use a random-walk MH move targeting p(σ_y^2 | θ, S_{1:J}, ẑ_{1:J}). For S_{1:J}, the full conditional distribution can be factorised as

    p(S_{1:J} | Ŝ_{1:J}, ẑ_{1:J}, Σ_x, σ_y^2, θ) = Π_{j=1}^J p(S_j | Ŝ_j, ẑ_j, Σ_x, σ_y^2, θ),

where each factor is given by

    p(S_j | Ŝ_j, ẑ_j, Σ_x, σ_y^2, θ) ∝ p(ẑ_j | S_j, θ, σ_y^2) p(S_j | Σ_x) p(Ŝ_j | S_j).

Thanks to this factorised form, each S_j can be updated with an MH move independently and in parallel.
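The posterior moments (m_p, Σ_p) of θ in (12) require only d × d operations per node; a sketch of their computation (the illustrative inputs below, a single node with a vague prior, are our own):

```python
import numpy as np

def theta_posterior(S_list, z_hat_list, sig2_y, sig2_z, m, C):
    """Closed-form moments (m_p, Sigma_p) of theta given sigma_y^2 and S_{1:J}."""
    d = m.shape[0]
    prec = np.linalg.inv(C)                 # C^{-1}
    rhs = np.linalg.inv(C) @ m              # C^{-1} m
    for S, z_hat in zip(S_list, z_hat_list):
        A = S @ np.linalg.inv(sig2_y * S + sig2_z * np.eye(d))
        prec = prec + A @ S                 # + S (sig2_y S + sig2_z I)^{-1} S
        rhs = rhs + A @ z_hat               # + S (sig2_y S + sig2_z I)^{-1} z_hat
    Sigma_p = np.linalg.inv(prec)
    return Sigma_p @ rhs, Sigma_p

# Illustration: one node, nearly noiseless, vague prior; m_p should approach
# the generalised-least-squares solution S^{-1} z_hat.
S = 2.0 * np.eye(2)
z_hat = S @ np.array([1.0, 2.0])
m_p, Sigma_p = theta_posterior([S], [z_hat], sig2_y=1.0, sig2_z=1e-6,
                               m=np.zeros(2), C=1e6 * np.eye(2))
```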
For the MH move that updates a given S_j, we propose a new value from a Wishart distribution as S_j′ ~ W(S_j/α, α), which has mean S_j and a variance determined by α. In our experiments, we adjust α using ideas from the adaptive MCMC framework (Andrieu and Thoms, 2008) to target an acceptance rate of around 0.2. Algorithm 1 presents the overall MCMC algorithm for the hierarchical model for differentially private Bayesian distributed linear regression when P_x is a normal distribution with a random covariance matrix having an inverse-Wishart distribution. We call this algorithm MCMC-normalX.
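One MH update with the proposal S_j′ ~ W(S_j/α, α) can be sketched as follows. The acceptance ratio includes the Wishart proposal densities in both directions since the proposal is not symmetric; the toy target and the Robbins-Monro-style tuning rule in the comment are our own illustrations, not necessarily the exact scheme used in the paper:

```python
import numpy as np
from scipy.stats import wishart

def mh_update_S(S, log_target, alpha, rng):
    """One MH update of S with proposal S' ~ W(S / alpha, alpha), which has
    mean S; larger alpha means smaller proposal variance."""
    S_prop = wishart.rvs(df=alpha, scale=S / alpha, random_state=rng)
    log_ratio = (log_target(S_prop) - log_target(S)
                 + wishart.logpdf(S, df=alpha, scale=S_prop / alpha)   # q(S | S')
                 - wishart.logpdf(S_prop, df=alpha, scale=S / alpha))  # q(S' | S)
    if np.log(rng.uniform()) < log_ratio:
        return S_prop, True
    return S, False

# Toy target for demonstration: S ~ W(I, 10) in d = 2.
rng = np.random.default_rng(4)
log_target = lambda S: wishart.logpdf(S, df=10, scale=np.eye(2))
S = 10.0 * np.eye(2)
for _ in range(20):
    S, accepted = mh_update_S(S, log_target, alpha=100.0, rng=rng)
    # Adaptive tuning (illustrative): log(alpha) -= gamma_t * (accept_rate - 0.2)
```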
Algorithm 1: MCMC-normalX - one iteration
Input: Current values of S_{1:J}, θ, σ_y^2, Σ_x; observations Ŝ_{1:J}, ẑ_{1:J}; noise variances σ_s^2, σ_z^2; proposal parameters α, σ_q^2; hyperparameters a, b, κ, Λ, m, C.
Output: New sample of Σ_x, S_{1:J}, σ_y^2, θ.
1 Sample Σ_x using (11).
2 for j = 1, 2, ..., J do
3     Update S_j via an MH move targeting p(S_j | Ŝ_j, ẑ_j, Σ_x, σ_y^2, θ).
4 Sample θ using (12).
5 Update σ_y^2 via an MH move targeting p(σ_y^2 | θ, S_{1:J}, ẑ_{1:J}).

Remark 1. Admittedly, a potential concern is the conflict between the normality and boundedness assumptions (both for x and y). However, we also note that collected data often have some natural boundaries (which can be exploited to determine the sensitivity of the shared statistics), and yet the normal distribution is still used for modelling and subsequent inference, mainly for the sake of tractability. With the normality assumption, one can implement computationally efficient algorithms at the expense of minor modelling inaccuracies. While we acknowledge the methodologies in Alparslan and Yıldırım (2022, Section 4.2) and Ju et al. (2022), which can correctly incorporate the effect of truncation into the inference, we remark that those methods pay the price of exactness by having O(n) computational complexity per iteration.
4.2 Features with a General Distribution

The normality assumption for the x_i's in Section 4.1 may not be adequate for some data sets. Moreover, when d is large, updating the S_j's can be the bottleneck of MCMC-normalX in Algorithm 1 in terms of computation time and convergence. We propose two algorithms that address both of those concerns. As it turns out, those algorithms provide accurate estimations even in the case of normally distributed features; see Section 5.1. Our approach for x_i's with a general distribution is based on estimating the S_j's at the outset, using a principled estimation method, and fixing the S_j's to those estimates during the whole course of the inference procedure.
In that way, we obtain a faster MCMC algorithm at the expense of targeting an approximate posterior distribution. Moreover, we have observed in our experiments that this variant is quite competitive in terms of accuracy, especially when the total number of nodes J increases. We call this variant MCMC-fixedS and present it in Algorithm 2.

As for estimating the S_j's, one could simply consider taking the privately shared ˆS_j as an estimator for S_j, but ˆS_j is not necessarily a positive (semi-)definite matrix. Instead, we propose the nearest positive semi-definite matrix to ˆS_j, in terms of the Frobenius norm, as the estimator of S_j. (The nearest positive definite matrix to ˆS_j does not exist.) To find the nearest positive semi-definite matrix, we follow Higham (1988) and apply the following procedure for each j = 1, . . . , J:

(i) Calculate the eigendecomposition ˆS_j = EDE^T, where E is a matrix of eigenvectors and D is a diagonal matrix consisting of the eigenvalues λ_i.

(ii) The nearest symmetric positive semi-definite matrix is ˜S_j = ED_+E^T, where D_+ is a diagonal matrix with D_+(i, i) = max{D(i, i), 0}.

Note that the ˜S_j found above is the maximum likelihood estimator of S_j given ˆS_j (over the set of positive semi-definite matrices), since the conditional distribution of ˆS_j given S_j is a normal distribution with mean S_j.

Algorithm 2: MCMC-fixedS - one iteration
Input: Current values of θ, σ_y^2; estimates ˆS_{1:J}, observations ˆz_{1:J}; noise variance σ_z^2, and hyperparameters a, b, m, C.
Output: New sample of σ_y^2, θ.
1 Use S_{1:J} = ˜S_{1:J} throughout.
2 Sample θ using (12).
3 Update σ_y^2 via an MH move targeting p(σ_y^2 | θ, S_{1:J}, ˆz_{1:J}).

Algorithm 3: Bayes-fixedS-fast
Input: ˆS_{1:J}, ˆz_{1:J}; noise variance σ_z^2; estimate ˜σ_y^2 of σ_y^2; hyperparameters m, C.
Output: Estimate ˆθ.
1 for j = 1, 2, . . . , J do
2     Calculate the estimate ˜S_j for S_j using ˆS_j.
3     Calculate Σ_j = ˜S_j(˜σ_y^2 ˜S_j + σ_z^2 I)^{-1} ˜S_j.
4     Calculate m_j = ˜S_j(˜σ_y^2 ˜S_j + σ_z^2 I)^{-1} ˆz_j.
5 return Posterior moments of θ: Σ_post^{-1} = ∑_{j=1}^{J} Σ_j + C^{-1}, m_post = Σ_post (C^{-1} m + ∑_{j=1}^{J} m_j).

MCMC-fixedS in Algorithm 2 is faster than MCMC-normalX in Algorithm 1, since it avoids the step to update the S_j's, which constitutes the main computational burden of Algorithm 1. However, MCMC-fixedS can be made even faster by fixing σ_y^2 as well. As a crude estimator, we used ˜σ_y^2 = ∥Y∥/3 throughout the experiments. When σ_y^2 is fixed in addition to S_{1:J}, we end up with a non-iterative method where the posterior distribution of θ is calculated in closed form. We call the resulting algorithm Bayes-fixedS-fast and present it in Algorithm 3. Algorithm 3 does nothing but return the moments of the posterior distribution of θ given the ˜S_j's, ˆz_j's, ˜σ_y^2, and the prior parameters for θ.
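As an illustrative sketch (not the authors' MATLAB implementation), the eigenvalue-clipping projection of step (ii) and the closed-form posterior moments of Algorithm 3 can be written as follows. The function names and the toy inputs are our own; the per-party quantities ˆS_j, ˆz_j and the fixed estimate ˜σ_y^2 are assumed to be given, and the sampling step using equation (12) is not reproduced here.

```python
import numpy as np

def nearest_psd(S_hat):
    """Nearest (in Frobenius norm) symmetric positive semi-definite matrix
    to S_hat, via eigenvalue clipping (Higham, 1988)."""
    S_sym = (S_hat + S_hat.T) / 2          # symmetrise against numerical noise
    eigvals, E = np.linalg.eigh(S_sym)     # S_sym = E @ diag(eigvals) @ E.T
    return E @ np.diag(np.maximum(eigvals, 0.0)) @ E.T

def bayes_fixed_s_fast(S_hats, z_hats, sigma2_y, sigma2_z, m, C):
    """Closed-form posterior moments of theta with S_j's fixed to their
    nearest-PSD estimates (mirrors Algorithm 3; names are illustrative)."""
    d = m.shape[0]
    Sigma_post_inv = np.linalg.inv(C)      # accumulate Sigma_post^{-1}
    rhs = np.linalg.solve(C, m)            # accumulate C^{-1} m + sum_j m_j
    for S_hat, z_hat in zip(S_hats, z_hats):
        S_t = nearest_psd(S_hat)
        A = np.linalg.inv(sigma2_y * S_t + sigma2_z * np.eye(d))
        Sigma_post_inv += S_t @ A @ S_t    # Sigma_j
        rhs += S_t @ A @ z_hat             # m_j
    Sigma_post = np.linalg.inv(Sigma_post_inv)
    return Sigma_post @ rhs, Sigma_post    # (m_post, Sigma_post)
```

Clipping the eigenvalues at zero is exactly the Frobenius-norm projection onto the PSD cone for a symmetric input, which is why no iterative optimisation is needed.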
4.3 Computational Cost

All our methods described in this section require O(d^3) computation (per iteration for the iterative ones in Algorithms 1 and 2, or as a whole for the fast version in Algorithm 3), since they deal with d × d matrices. In contrast, as Bernstein and Sheldon (2019) apply the CLT to the vector [S, z, y^T y], their methods deal with covariance matrices of size (d^2 + d + 1) explicitly, which leads to O(d^6) computation per MCMC iteration. For even moderate d, this computational difference becomes dramatic, and the latter may be prohibitive. Moreover, the complexity of our methods does not depend on n. This is in contrast to the O(n) complexity of general-purpose methods, such as Alparslan and Yıldırım (2022, Section 4.3) and Ju et al. (2022), that can be applied to linear regression.
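To make the gap concrete, here is a back-of-the-envelope comparison (our own sketch, not from the paper): a Cholesky factorisation of an n × n matrix costs on the order of n^3/3 flops, so moving from d × d matrices to (d^2 + d + 1) × (d^2 + d + 1) covariance matrices multiplies the per-iteration work by roughly ((d^2 + d + 1)/d)^3, i.e. about d^3 for large d.

```python
import numpy as np

d = 10
n_ours = d              # our methods factorise d x d matrices -> O(d^3)
n_bs = d * d + d + 1    # covariance of [S, z, y^T y] -> O(d^6) per iteration

# Both factorisations succeed on well-conditioned matrices; the flop-count
# ratio grows like (n_bs / n_ours)^3, roughly d^3 for large d.
np.linalg.cholesky(np.eye(n_ours))
np.linalg.cholesky(np.eye(n_bs))
flop_ratio = (n_bs / n_ours) ** 3
print(f"matrix size {n_ours} vs {n_bs}: ~{flop_ratio:.0f}x more work per factorisation")
```

Already at d = 10 the size-(d^2 + d + 1) matrix is 111 × 111, a three-orders-of-magnitude difference in factorisation cost.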
4.4 Extensions

We mention two other variants of our methodology, deferring the details to Appendix B. Another solution for dealing with a non-normal P_x could be to average the feature vectors in X (and the corresponding response variables in y), so that the averaged rows of X can be modelled as approximately normal, due to the CLT. This enables using the methods devised for normally distributed features. For the details of this approach, see Appendix B.1.

Secondly, if the features are normally distributed but the data are not centred, we need to include the intercept parameter, which corresponds to appending x_i with a one from the left, and MCMC-normalX does not directly apply. In that case, we can modify the hierarchical model to accommodate the non-centralised features and the intercept parameter, and still benefit from the sampling techniques involved in MCMC-normalX in Algorithm 1.
Appendix B.2 contains the details of the modified hierarchical model.

5 Numerical Experiments

We present several numerical evaluations of the proposed methods, MCMC-normalX, MCMC-fixedS, and Bayes-fixedS-fast, with simulated and real data. We compare our algorithms with two methods: adaSSP of Wang (2018) and the MCMC method of Bernstein and Sheldon (2019) for differentially private linear regression, which we call MCMC-B&S. Note that adaSSP and MCMC-B&S are originally proposed for the non-distributed setting, that is, J = 1. For a comprehensive comparison, we have implemented their extensions for J ≥ 1. The details of those extensions are provided in Appendix C. In particular, we have carefully generalised the model in Bernstein and Sheldon (2019) for J ≥ 1, similarly as we have done for our model in Section 3.2. What we call MCMC-B&S is the adaptation of Bernstein and Sheldon (2019, Algorithm 1) for this generalised model (and (ϵ, δ)-DP). The code to replicate all of the experiments in this section can be found at https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git.

5.1 Experiments with Simulated Data

We have considered two different configurations, (n = 10^5, d = 2) and (n = 10^5, d = 5), for the problem size. For each (n, d), we have simulated the data as follows: We have generated θ ∼ N(0, I_d) and x_i ∼ N(0, Σ_x), where Σ_x ∼ IW(Λ, κ) with κ = d + 1, and selected the scale matrix randomly as Λ = V^T V, where V is a d × d matrix of i.i.d. variables from N(0, 1). The response variables y have been generated with σ_y^2 = 1. For inference, we have used the same Λ, κ as above and a = 20, b = 0.5, m = 0_{d×1}, C = ((a − 1)/b) I_d for the other hyperparameters. We have evaluated the methods at all combinations of J ∈ {1, 5, 10} and ϵ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}. All the MCMC algorithms have been run for 10^4 iterations. For each (J, ϵ) pair, we have tried each method 50 times (each with different noisy observations) to obtain average performances.
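A minimal sketch of this data-generating process, assuming SciPy's `invwishart` for the IW(Λ, κ) draw (the sample size is reduced to n = 1000 here for illustration; the paper uses n = 10^5):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
n, d = 1000, 2                     # the paper uses n = 10^5; reduced here

# Scale matrix Lambda = V^T V, with V a d x d matrix of i.i.d. N(0, 1) entries.
V = rng.standard_normal((d, d))
Lam = V.T @ V
kappa = d + 1

theta = rng.standard_normal(d)                               # theta ~ N(0, I_d)
Sigma_x = invwishart.rvs(df=kappa, scale=Lam, random_state=0)
X = rng.multivariate_normal(np.zeros(d), Sigma_x, size=n)    # x_i ~ N(0, Sigma_x)
y = X @ theta + rng.standard_normal(n)                       # sigma_y^2 = 1
```

Drawing Σ_x from an inverse-Wishart with a randomly generated scale matrix varies the feature correlation structure across repetitions, so the averaged results are not tied to one particular covariance.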
For performance metrics, we have looked at the mean squared errors (MSE) of (i) the estimates ˆθ and (ii) the predictions ˆy(x_test) generated by the methods. For the Bayesian methods, ˆθ is taken as the posterior mean, which can be numerically estimated for the MCMC algorithms. For prediction performance, we have calculated E[ˆy(x_test) − y_test]^2. For the Bayesian methods, ˆy(x_test) is the posterior predictive expectation of y_test at x_test. For adaSSP, we simply take ˆy(x_test) = x_test^T ˆθ.

The results are summarised in Figure 2. We observe that MCMC-fixedS and Bayes-fixedS-fast outperform adaSSP and MCMC-B&S in almost all cases, both in terms of estimation and prediction. Comparing the full-scale algorithms MCMC-normalX and MCMC-B&S (that involve updates of S), we observe a clear advantage of MCMC-normalX at d = 2, but MCMC-B&S becomes more competitive at d = 5.
This can be attributed to the fact that MCMC-B&S requires the extra statistic y^T y, unlike MCMC-normalX, which causes MCMC-B&S to use more noisy statistics. This difference becomes more significant at small d, where the relative effect of the presence of y^T y on the sensitivity is larger. Finally, all methods improve as ϵ grows, which is expected.

[Figure 2: Averaged prediction and estimation performances (over 50 runs) of MCMC-normalX, MCMC-fixedS, Bayes-fixedS-fast, MCMC-B&S, and adaSSP, shown as (log-)MSE of prediction and estimation versus ϵ for J ∈ {1, 5, 10}. Top row: n = 10^5, d = 2. Bottom row: n = 10^5, d = 5.]

[Figure 3: Run times per iteration for the MCMC algorithms (MCMC-normalX, MCMC-fixedS, MCMC-B&S) versus d, for J ∈ {1, 5, 10}.]

We also compare the computation times of the MCMC algorithms MCMC-normalX, MCMC-fixedS, and MCMC-B&S.^1 Figure 3 shows the run times of the algorithms versus d. The drastic difference in computational loads explained in Section 4.3 is also visible in the figure. While MCMC-B&S may improve in terms of accuracy as d increases, the O(d^6) computation dramatically slows it down.

^1 The algorithms were run in MATLAB 2021b on an Apple M1 chip with 8 cores and 16 GB LPDDR4 memory.
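The two performance metrics above can be computed as in the following sketch (the helper names are our own; for the Bayesian methods, `theta_hat` would be the posterior mean and the prediction the posterior predictive expectation rather than the plain point prediction shown here):

```python
import numpy as np

def estimation_mse(theta_hat, theta_true):
    """Mean squared error of the coefficient estimate."""
    return float(np.mean((np.asarray(theta_hat) - np.asarray(theta_true)) ** 2))

def prediction_mse(theta_hat, X_test, y_test):
    """Mean squared prediction error of y_hat(x) = x^T theta_hat
    (the adaSSP-style point prediction)."""
    return float(np.mean((X_test @ theta_hat - y_test) ** 2))
```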
5.2 Experiments with Real Data

For the real data case, we have used four different data sets from the UCI Machine Learning Repository. We have disregarded the columns including string data or key values (ID, name, date, etc.), and we have considered the right-most column as y. The finalised data sets are summarised below.

data set             n        d
power plant energy   7655     4
bike sharing         13904    14
air quality          7486     12
3d road              347900   3

For prediction, we have taken 80% of the data for training and the rest for testing. We present the average prediction performances (out of 50 runs) in Table 1 for each data set and J, with ϵ = 1. We observe that the prediction performances of the compared methods are close, while MCMC-fixedS and Bayes-fixedS-fast are arguably the most stable ones.
When J > 1 (the distributed data setting), those two methods beat adaSSP and MCMC-B&S more clearly.

Table 1: Averaged prediction performances (over 50 runs) for the real data sets, ϵ = 1

J       data set     MCMC-normalX  MCMC-fixedS  Bayes-fixedS-fast  MCMC-B&S  adaSSP
J = 1   PowerPlant   0.0129        0.0129       0.0129             0.0128    0.0139
        BikeSharing  0.0024        0.0021       0.0021             0.0020    0.0107
        AirQuality   0.0060        0.0057       0.0057             0.0062    0.0066
        3droad       0.0229        0.0229       0.0229             0.0229    0.0229
J = 5   PowerPlant   0.0133        0.0134       0.0134             0.0136    0.0235
        BikeSharing  0.0174        0.0045       0.0045             0.0086    0.0382
        AirQuality   0.0142        0.0100       0.0099             0.0130    0.0227
        3droad       0.0229        0.0229       0.0229             0.0229    0.0229
J = 10  PowerPlant   0.0142        0.0143       0.0143             0.0143    0.0351
        BikeSharing  0.0812        0.0082       0.0082             0.0137    0.0526
        AirQuality   0.0985        0.0117       0.0117             0.0216    0.0314
        3droad       0.0229        0.0229       0.0229             0.0229    0.0229

6 Conclusion

We propose a novel Bayesian inference framework, with MCMC as its main workhorse, for a differentially private distributed linear regression setting where the data are partitioned among the data holders. We provide several Bayesian inference algorithms suited to the developed hierarchical model for linear regression.
One of those algorithms may be preferred over the others depending on the computational budget, the specifics of the model, or how much is known about the underlying statistics of the data. We exploit the conditional structure between the summary statistics of linear regression, as given in Proposition 1, which leads to feasible algorithms with computational advantages over their competitors. The numerical experiments show that the proposed methods are competitive with their state-of-the-art alternatives in terms of accuracy. The extensions mentioned in Section 4.4 indicate potential future directions. There is also room for improvement of MCMC-normalX: we chose the most common MH moves to update σ²_y and the S_j's, without paying much attention to their efficiency.
Especially for large d, more advanced techniques, such as those stemming from Hamiltonian Monte Carlo (Neal, 2001) or pseudo-marginal MCMC (Andrieu and Roberts, 2009), may be employed to facilitate the mixing of the algorithm.

7 Acknowledgement

The study was funded by the Scientific and Technological Research Council of Turkey (TÜBİTAK) ARDEB Grant No 120E534.

Supplementary material: The code to replicate the experiments in Section 5 can be found at https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git.

References

Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pages 308–318, New York, NY, USA. ACM.

Alabi, D., McMillan, A., Sarathy, J., Smith, A., and Vadhan, S. (2022). Differentially private simple linear regression. Proceedings on Privacy Enhancing Technologies, 2022:184–204.

Alparslan, B. and Yıldırım, S. (2022). Statistic selection and MCMC for differentially private Bayesian estimation. Statistics and Computing, 32(5):66.

Andrieu, C. and Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. Annals of Statistics, 37(2):697–725.

Andrieu, C. and Thoms, J. (2008). A tutorial on adaptive MCMC. Statistics and Computing, 18(4):343–373.

Balle, B. and Wang, Y.-X. (2018). Improving the Gaussian mechanism for differential privacy: Analytical calibration and optimal denoising. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 394–403. PMLR.

Bassily, R., Smith, A., and Thakurta, A. (2014). Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 464–473.

Bernstein, G. and Sheldon, D. R. (2019). Differentially private Bayesian linear regression. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Bun, M. and Steinke, T. (2016). Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Proceedings, Part I, of the 14th International Conference on Theory of Cryptography - Volume 9985, pages 635–658, New York, NY, USA. Springer-Verlag New York, Inc.

Cai, T. T., Wang, Y., and Zhang, L. (2021). The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. The Annals of Statistics, 49(5):2825–2850.

Chaudhuri, K., Monteleoni, C., and Sarwate, A. D. (2009). Differentially private empirical risk minimization.

Dankar, F. K. and El Emam, K. (2013). Practicing differential privacy in health care: A review. Transactions on Data Privacy, 6(1):35–67.

Dimitrakakis, C., Nelson, B., Zhang, Z., Mitrokotsa, A., and Rubinstein, B. I. (2017). Differential privacy for Bayesian inference through posterior sampling. Journal of Machine Learning Research, 18(11):1–39.

Dong, J., Roth, A., and Su, W. J. (2022). Gaussian differential privacy. Journal of the Royal Statistical Society Series B, 84(1):3–37.

Dwork, C. (2006). Differential privacy. In Bugliesi, M., Preneel, B., Sassone, V., and Wegener, I., editors, Automata, Languages and Programming, pages 1–12, Berlin, Heidelberg. Springer Berlin Heidelberg.

Dwork, C. (2008). Differential privacy: A survey of results. In Agrawal, M., Du, D., Duan, Z., and Li, A., editors, Theory and Applications of Models of Computation, pages 1–19, Berlin, Heidelberg. Springer Berlin Heidelberg.

Dwork, C., McSherry, F., Nissim, K., and Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pages 265–284. Springer.

Dwork, C., Naor, M., Pitassi, T., Rothblum, G. N., and Yekhanin, S. (2010). Pan-private streaming algorithms. In ICS, pages 66–80.

Dwork, C. and Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407.

Dwork, C., Roth, A., et al. (2014a).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' The algorithmic foundations of differential privacy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Dwork, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Talwar, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Thakurta, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Zhang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2014b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Analyze gauss: Optimal bounds for privacy-preserving principal component analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’14, page 11–20, New York, NY, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1, 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='1 Ferrando, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Wang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Sheldon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Parametric bootstrap for differentially private confidence intervals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Camps-Valls, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Ruiz, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Valera, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', editors, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 1598–1618.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' PMLR.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Foulds, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Geumlek, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and an Kamalika Chaudhuri, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' On the theory and practice of privacy-preserving Bayesian data analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Technical report, arxiv:1603.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='07294.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Gong, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Exact inference with approximate computation for differentially private data via perturbations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Journal of Privacy and Confidentiality, 12(2).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 15 Heikkil¨a, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Lagerspetz, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Kaski, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Shimizu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Tarkoma, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Honkela, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Differentially private bayesian learning on distributed data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Guyon, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Luxburg, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Bengio, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Wallach, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Fergus, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Vishwanathan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Garnett, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', editors, Advances in Neural Information Processing Systems, volume 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Heikkil¨a, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', J¨alk¨o, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Dikmen, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Honkela, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Differentially private Markov chain Monte Carlo.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In NeurIPS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Higham, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (1988).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Computing a nearest symmetric positive semidefinite matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Linear Algebra and its Applications, 103:103–118.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='2 Ju, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Awan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Gong, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Rao, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2022).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Data augmentation MCMC for bayesian inference from privatized data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Oh, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Agarwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Belgrave, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', editors, Advances in Neural Information Processing Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1, 1, 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='3 Kuru, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Birbil, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', G¨urb¨uzbalaban, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Yıldırım, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Differentially private accelerated optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' SIAM Journal on Optimization, 32(2):795–821.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Neal, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Annealed importance sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Statistics and Computing, 11:125–139.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 6 Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Revisiting differentially private linear regression: optimal and adaptive prediction & estimation in unbounded domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In UAI.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1, 5, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='2, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='2 Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Fienberg, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Smola, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Privacy for free: Posterior sampling and stochastic gradient monte carlo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Bach, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' and Blei, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2493–2502, Lille, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' PMLR.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Williams, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' and Mcsherry, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Probabilistic inference and differential privacy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Lafferty, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Williams, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Shawe-Taylor, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Zemel, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Culotta, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', editors, Advances in Neural Information Processing Systems, volume 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Wilson, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' and Ghahramani, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Generalised wishart processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, UAI’11, page 736–744, Arlington, Virginia, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' AUAI Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 2 Yıldırım, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' and Ermi¸s, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Exact MCMC with differentially private moves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Statistics and Computing, 29(5):947–963.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Zhang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Xiao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Winslett, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Functional mechanism: Regression analysis under differential privacy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' VLDB Endow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', 5(11):1364–1375.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' 1 Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', Rubinstein, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=', and Dimitrakakis, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFST4oBgHgl3EQfVziF/content/2301.13778v1.pdf'} +page_content=' (2016).' 
On the differential privacy of Bayesian inference. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1).

A Derivations for MCMC-normalX

We reserve this section for the derivations required for our algorithm MCMC-normalX.

Full Conditional Distribution of $\Sigma_x$: We note that
\[
p(\Sigma_x \mid S_{1:J}, \hat{S}_{1:J}, \hat{z}_{1:J}) \propto p(\Sigma_x) \prod_{j=1}^{J} p(S_j \mid \Sigma_x)
= \frac{|\Lambda|^{\kappa/2}}{2^{\kappa d/2} \Gamma_d(\frac{\kappa}{2})} |\Sigma_x|^{-(d+\kappa+1)/2} e^{-\frac{1}{2} \operatorname{tr}(\Lambda \Sigma_x^{-1})}
\prod_{j=1}^{J} \frac{|S_j|^{(n_j-d-1)/2} e^{-\frac{1}{2} \operatorname{tr}(\Sigma_x^{-1} S_j)}}{2^{n_j d/2} |\Sigma_x|^{n_j/2} \Gamma_d(n_j/2)}
\]
\[
\propto |\Sigma_x|^{-\frac{n}{2} - \frac{d+\kappa+1}{2}} e^{-\frac{1}{2} \left( \sum_j \operatorname{tr}(\Sigma_x^{-1} S_j) + \operatorname{tr}(\Lambda \Sigma_x^{-1}) \right)}
\propto |\Sigma_x|^{-\frac{d+\kappa+n+1}{2}} e^{-\frac{1}{2} \operatorname{tr}\left( \left( \sum_j S_j + \Lambda \right) \Sigma_x^{-1} \right)}.
\]
Therefore, we have
\[
\Sigma_x \mid S_{1:J}, \hat{S}_{1:J}, \hat{z}_{1:J} \sim \mathcal{IW}\left( \Lambda + \sum_{j=1}^{J} S_j, \; \kappa + n \right).
\]

Full Conditional Distribution of $\theta$: The posterior of $\theta$ is proportional to
\[
p(\theta \mid S_{1:J}, \sigma_y^2, \hat{z}_{1:J}) \propto \mathcal{N}(\theta; m, C) \, p(\hat{z}_{1:J} \mid S_{1:J}, \theta, \sigma_y^2).
\]
For the second factor, we have
\[
p(\hat{z}_{1:J} \mid S_{1:J}, \theta, \sigma_y^2) \propto \prod_{j=1}^{J} p(\hat{z}_j \mid S_j, \theta, \sigma_y^2)
= \prod_{j=1}^{J} \mathcal{N}\left( \hat{z}_j; S_j \theta, \sigma_y^2 S_j + \sigma_z^2 I \right)
\propto \prod_{j=1}^{J} \exp\left( -\frac{1}{2} (\hat{z}_j - S_j \theta)^T (\sigma_y^2 S_j + \sigma_z^2 I)^{-1} (\hat{z}_j - S_j \theta) \right)
\]
\[
\propto \exp\left( -\frac{1}{2} \left[ \theta^T \left( \sum_j S_j (\sigma_y^2 S_j + \sigma_z^2 I)^{-1} S_j \right) \theta - 2 \theta^T \sum_j S_j (\sigma_y^2 S_j + \sigma_z^2 I)^{-1} \hat{z}_j \right] \right).
\]
Reorganising the terms, we end up with
\[
p(\theta \mid S_{1:J}, \sigma_y^2, \hat{z}_{1:J}) \propto \exp\left( -\frac{1}{2} \left[ \theta^T \Sigma_{\mathrm{post}}^{-1} \theta - 2 \theta^T \Sigma_{\mathrm{post}}^{-1} m_{\mathrm{post}} \right] \right),
\]
where $\Sigma_{\mathrm{post}}^{-1} = \sum_j S_j (\sigma_y^2 S_j + \sigma_z^2 I)^{-1} S_j + C^{-1}$ and $m_{\mathrm{post}} = \Sigma_{\mathrm{post}} \left[ \sum_j S_j (\sigma_y^2 S_j + \sigma_z^2 I)^{-1} \hat{z}_j + C^{-1} m \right]$. Therefore, $\theta \mid S_{1:J}, \sigma_y^2, \hat{z}_{1:J} \sim \mathcal{N}(m_{\mathrm{post}}, \Sigma_{\mathrm{post}})$.

Acceptance Ratio for the MH Update of $S_j$: We drop the index $j$ from $S_j$ for simplicity. When $S' \sim \mathcal{W}(S/\alpha, \alpha)$, the proposal density is
\[
q(S' \mid S) = \frac{|S'|^{(\alpha-d-1)/2} e^{-\operatorname{tr}[\alpha S^{-1} S']/2}}{|S/\alpha|^{\alpha/2} \, 2^{\alpha d/2} \Gamma_d(\frac{\alpha}{2})}
= \frac{|S'|^{(\alpha-d-1)/2} e^{-\operatorname{tr}[\alpha S^{-1} S']/2}}{|S|^{\alpha/2} \, 2^{\alpha d/2} \Gamma_d(\frac{\alpha}{2})} \, \alpha^{d\alpha/2}.
\]
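As a quick numerical illustration (a sketch with entirely hypothetical inputs, not part of the derivation), the proposal $\mathcal{W}(S/\alpha, \alpha)$ is centered at the current state, since a Wishart with scale $S/\alpha$ and $\alpha$ degrees of freedom has mean $\alpha \cdot (S/\alpha) = S$, and its spread shrinks as $\alpha$ grows:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)

# Hypothetical current state S of the chain (any symmetric PD matrix works).
S = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])

for alpha in (20.0, 200.0):
    # S' ~ W(S/alpha, alpha): the sample mean should be close to S itself,
    # and the entrywise standard deviation should decrease with alpha.
    draws = wishart.rvs(df=alpha, scale=S / alpha, size=20_000, random_state=rng)
    mean_err = np.abs(draws.mean(axis=0) - S).max()
    spread = draws.std(axis=0).max()
    print(f"alpha={alpha:6.0f}  max|mean - S|={mean_err:.3f}  max sd={spread:.3f}")
```

Larger $\alpha$ therefore yields conservative local moves, while smaller $\alpha$ gives bolder proposals with lower acceptance rates.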
Therefore, the acceptance ratio corresponding to this proposal is
\[
\min\left\{ 1, \; \frac{q(S \mid S')}{q(S' \mid S)} \cdot \frac{\mathcal{W}(S'; \Sigma_x, n_j) \, p(\hat{S} \mid S') \, \mathcal{N}(\hat{z}; S' \theta, \sigma_y^2 S' + \sigma_z^2 I_d)}{\mathcal{W}(S; \Sigma_x, n_j) \, p(\hat{S} \mid S) \, \mathcal{N}(\hat{z}; S \theta, \sigma_y^2 S + \sigma_z^2 I_d)} \right\},
\]
where the ratio of proposals becomes
\[
\frac{q(S \mid S')}{q(S' \mid S)} = \frac{|S|^{(\alpha-d-1)/2} |S|^{\alpha/2} e^{-\operatorname{tr}[\alpha S'^{-1} S]/2}}{|S'|^{(\alpha-d-1)/2} |S'|^{\alpha/2} e^{-\operatorname{tr}[\alpha S^{-1} S']/2}}
= \left( \frac{|S|}{|S'|} \right)^{\alpha - (d+1)/2} e^{\alpha \left( \operatorname{tr}[S^{-1} S'] - \operatorname{tr}[S'^{-1} S] \right)/2}.
\]

Acceptance Ratio for the MH Update of $\sigma_y^2$: To update $\sigma_y^2$, we use a random walk proposal $\sigma_y^{2\prime} \sim \mathcal{N}(\sigma_y^2, \sigma_q^2)$.
The resulting acceptance ratio is
\[
\min\left\{ 1, \; \frac{\mathcal{IG}(\sigma_y^{2\prime}; a, b) \prod_{j=1}^{J} \mathcal{N}(\hat{z}_j; S_j \theta, \sigma_y^{2\prime} S_j + \sigma_z^2 I_d)}{\mathcal{IG}(\sigma_y^{2}; a, b) \prod_{j=1}^{J} \mathcal{N}(\hat{z}_j; S_j \theta, \sigma_y^{2} S_j + \sigma_z^2 I_d)} \right\}.
\]

B Other Variants

This appendix is reserved for the details of the other variants mentioned in Section 4.4. For simplicity, we will assume a single data holder, i.e., J = 1; the extension to J > 1 should be straightforward.
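For concreteness, one full sweep of the closed-form updates derived in Appendix A can be sketched as below. This is not the authors' implementation; every numerical input (the statistics $S_j$, the noisy summaries $\hat{z}_j$, and the hyperparameters $\Lambda, \kappa, m, C, a, b, \sigma_z^2, \sigma_q^2$) is a hypothetical stand-in chosen only so the code runs:

```python
import numpy as np
from scipy.stats import invwishart, wishart, invgamma, multivariate_normal

rng = np.random.default_rng(1)

# Hypothetical problem dimensions and hyperparameters.
d, J = 3, 4
n_j = np.full(J, 100)
n = int(n_j.sum())
Lambda, kappa = np.eye(d), d + 2.0        # IW prior on Sigma_x
m, C = np.zeros(d), 10.0 * np.eye(d)      # normal prior on theta
a, b = 2.0, 2.0                           # IG prior on sigma_y^2
sigma2_z, sigma2_q = 0.5, 0.2
sigma2_y = 1.0

# Stand-ins for the latent statistics S_j and the shared noisy z_hat_j.
S = [wishart.rvs(df=int(n_j[j]), scale=np.eye(d), random_state=rng) for j in range(J)]
z_hat = [10.0 * rng.normal(size=d) for _ in range(J)]

# Gibbs update of Sigma_x: IW(Lambda + sum_j S_j, kappa + n).
Sigma_x = invwishart.rvs(df=kappa + n, scale=Lambda + sum(S), random_state=rng)

# Gibbs update of theta: N(m_post, Sigma_post) as derived above.
prec = np.linalg.inv(C)
rhs = np.linalg.solve(C, m)
for j in range(J):
    M = np.linalg.inv(sigma2_y * S[j] + sigma2_z * np.eye(d))
    prec += S[j] @ M @ S[j]
    rhs += S[j] @ M @ z_hat[j]
Sigma_post = np.linalg.inv(prec)
Sigma_post = (Sigma_post + Sigma_post.T) / 2  # symmetrize against round-off
theta = rng.multivariate_normal(Sigma_post @ rhs, Sigma_post)

# Random-walk MH update of sigma_y^2 with an IG(a, b) prior.
def log_target(s2y):
    if s2y <= 0:
        return -np.inf
    lp = invgamma.logpdf(s2y, a, scale=b)
    for j in range(J):
        cov = s2y * S[j] + sigma2_z * np.eye(d)
        lp += multivariate_normal.logpdf(z_hat[j], mean=S[j] @ theta, cov=cov)
    return lp

prop = sigma2_y + sigma2_q * rng.standard_normal()
if np.log(rng.uniform()) < log_target(prop) - log_target(sigma2_y):
    sigma2_y = prop  # accept
```

The MH update of each $S_j$ via the Wishart proposal $\mathcal{W}(S_j/\alpha, \alpha)$ would slot in between the $\Sigma_x$ and $\theta$ steps in the same fashion.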
B.1 Approximating Normality by Averaging

When $x_i$, $i = 1, \ldots, n$, are not normal, another approach that we propose is based on modifying the data such that the rows of the modified feature matrix, called $X^{\mathrm{av}}$, are averages of $k > 1$ original features in $X$, and thus approximately normal by the CLT. Specifically, let $n$ be divisible by $k$ so that $m = n/k$ is an integer. Consider the $m \times n$ matrix
\[
A = \frac{1}{\sqrt{k}} \begin{pmatrix} 1_{1\times k} & 0_{1\times k} & \cdots & 0_{1\times k} \\ 0_{1\times k} & 1_{1\times k} & \cdots & 0_{1\times k} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{1\times k} & 0_{1\times k} & \cdots & 1_{1\times k} \end{pmatrix}_{m \times n}.
\]
Then the matrix $X^{\mathrm{av}} = AX$ corresponds to constructing a shorter $m \times d$ matrix whose $i$'th row is the average of rows $(i-1)k + 1, \ldots, ik$ of $X$ (scaled by $1/\sqrt{k}$ to preserve the norm). When $k$ is large enough, we can make normality assumptions for the rows of $X^{\mathrm{av}}$. Further, we consider $y^{\mathrm{av}} := Ay = X^{\mathrm{av}}\theta + Ae$, whose mean is $X^{\mathrm{av}}\theta$ and covariance is $AA^T \sigma_y^2$. But we have $AA^T = I_m$, so the covariance is $\sigma_y^2 I_m$. Therefore, the same hierarchical model in Figure 1 can be used for $X^{\mathrm{av}}$, $y^{\mathrm{av}}$ with their respective summary statistics $z^{\mathrm{av}} = (X^{\mathrm{av}})^T y^{\mathrm{av}}$, $S^{\mathrm{av}} = (X^{\mathrm{av}})^T X^{\mathrm{av}}$, as well as the noisy versions of those summary statistics to provide a given level of privacy. Note that $S^{\mathrm{av}}$ and $z^{\mathrm{av}}$ have the same sensitivities as $S$ and $z$; hence, the same noise variances are needed for privacy. However, there is less information in $S^{\mathrm{av}}$ and $z^{\mathrm{av}}$ due to averaging.
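As a concrete illustration, the averaging construction above can be sketched as follows. This is a minimal sketch with hypothetical variable and function names; the non-normal (exponential) features stand in for the setting where the CLT argument is needed.

```python
import numpy as np

def average_rows(X, y, k):
    """Average non-overlapping blocks of k rows, scaled by 1/sqrt(k).

    Returns (X_av, y_av) with m = n // k rows each; for large enough k,
    the rows of X_av are approximately normal by the CLT.
    """
    n, d = X.shape
    assert n % k == 0, "n must be divisible by k"
    m = n // k
    # A is the m x n block matrix whose i'th row is 1_{1xk}/sqrt(k) in block i.
    A = np.kron(np.eye(m), np.ones((1, k))) / np.sqrt(k)
    # A A^T = I_m, so Cov(A e) = sigma_y^2 I_m as in the text.
    assert np.allclose(A @ A.T, np.eye(m))
    return A @ X, A @ y

rng = np.random.default_rng(0)
X = rng.exponential(size=(120, 3))          # deliberately non-normal features
y = X @ np.ones(3) + rng.normal(size=120)
X_av, y_av = average_rows(X, y, k=10)       # m = 12 averaged rows
S_av = X_av.T @ X_av                        # summary statistics of averaged data
z_av = X_av.T @ y_av
```

The same noise mechanism used for $S$ and $z$ can then be applied to $S^{\mathrm{av}}$ and $z^{\mathrm{av}}$, since the sensitivities are unchanged.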
B.2 Including the Intercept

If we include the intercept parameter, which corresponds to appending each $x_i$ with a 1 from the left, the design matrix changes from $S$ to
\[
S_0 = \begin{pmatrix} n & n\bar{x}^T \\ n\bar{x} & S \end{pmatrix},
\]
where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$. Also, note that $S = (n-1)\hat{\Sigma}_x + n\bar{x}\bar{x}^T$, where $\hat{\Sigma}_x$ is the sample covariance. Under the normality assumption for the $x_i$'s, $\bar{x} \sim \mathcal{N}(\mu, \Sigma_x/n)$ and $(n-1)\hat{\Sigma}_x \sim \mathcal{W}(n-1, \Sigma_x)$ are independent and have known distributions. Therefore, we can write a model that includes $b = \bar{x}$, $\hat{\Sigma}_x$, and $S_0$, where $S_0$ replaces $S$ in the standard model. More specifically, we have the following hierarchical model:
\[
\theta \sim \mathcal{N}(m, C), \quad \Sigma_x \sim \mathcal{IW}(\Lambda, \kappa), \quad \hat{\Sigma}_x \mid \Sigma_x \sim \mathcal{W}(n-1, \Sigma_x), \quad b \mid \Sigma_x \sim \mathcal{N}(\mu, \Sigma_x/n),
\]
\[
z \mid \theta, \sigma_y^2, \hat{\Sigma}, b \sim \mathcal{N}(S_0\theta, \sigma_y^2 S_0), \quad \hat{S} \mid \hat{\Sigma}, b \sim \mathcal{N}(S_0, \sigma_s^2 I), \quad \hat{z} \mid z \sim \mathcal{N}(z, \sigma_z^2 I),
\]
with
\[
S_0 = \begin{pmatrix} n & nb^T \\ nb & (n-1)\hat{\Sigma} + nbb^T \end{pmatrix}.
\]

C Compared Methods

Here, we provide the details of the methods that we compare with the proposed methods in this paper. Those methods were originally proposed for J = 1.
However, for comparison, we implemented their natural extensions to the general (distributed) case J ≥ 1. The implementations of those methods can also be found in the code package provided for this paper.

C.1 MCMC-B&S Adapted to the Distributed Setting

In Bernstein and Sheldon (2019), only J = 1 is considered, and the vector $ss = [\mathrm{vec}(S),\, z = X^T y,\, u = y^T y]$ is perturbed with privacy-preserving noise to generate the observations of the model. For J ≥ 1, we consider the following natural extension for generating the perturbed observations $\hat{ss} = [\mathrm{vec}(\hat{S}_j),\, \hat{z}_j,\, \hat{u}_j]$:
\[
\hat{S}_j = S_j + \sigma_{\mathrm{dp}} M_j, \quad \hat{z}_j = z_j + v_j, \;\; v_j \sim \mathcal{N}(0, \sigma_{\mathrm{dp}}^2 I_d), \quad \hat{u}_j = u_j + w_j, \;\; w_j \sim \mathcal{N}(0, \sigma_{\mathrm{dp}}^2), \tag{13}
\]
where $\sigma_{\mathrm{dp}} = \sigma(\epsilon, \delta)\Delta_{ss}$ with $\Delta_{ss} = \sqrt{\|X\|^4 + \|X\|^2\|Y\|^2 + \|Y\|^4}$. For completeness, we provide the further specifics of the model: we take $(\theta, \sigma_y^2) \sim \mathcal{NIG}(a_0, b_0, m, \Lambda_0)$, where $\Lambda_0 = C^{-1}$, and $P_x = \mathcal{N}(0, \Sigma_x)$ with $\Sigma_x \sim \mathcal{IW}(\Lambda, \kappa)$. During the comparisons, we set $a_0, b_0, m, C, \Lambda, \kappa$ to the same values for both this model and our proposed model that assumes normally distributed features, i.e., $P_x = \mathcal{N}(0, \Sigma_x)$. Then, we apply an extension of Bernstein and Sheldon (2019, Algorithm 1) suited to those observations. One iteration of that algorithm includes the following steps in order:

1. Calculate the $D \times 1$ mean vector and $D \times D$ covariance matrix $\mu_{ss} = \mathrm{E}[ss]$, $\Sigma_{ss} = \mathrm{Cov}[ss]$. This step requires the fourth moments of $\mathcal{N}(0, \Sigma_x)$.

2. Sample $ss_j \sim \mathcal{N}(\mu^{(j)}_{\mathrm{post},ss}, \Sigma^{(j)}_{\mathrm{post},ss})$ with $\Sigma^{(j)}_{\mathrm{post},ss} = (n_j \Sigma_{ss}(\theta)^{-1} + (1/\sigma_{\mathrm{dp}}^2) I)^{-1}$ and $\mu^{(j)}_{\mathrm{post},ss} = \Sigma^{(j)}_{\mathrm{post},ss}(\Sigma_{ss}(\theta)^{-1}\mu_{ss} + \hat{ss}_j/\sigma_{\mathrm{dp}}^2)$.

3. Sample $\Sigma_x \sim \mathcal{IW}\left(\Lambda + \sum_{j=1}^{J} S_j,\, n + \kappa\right)$.

4. Sample $(\theta, \sigma_y^2) \sim \mathcal{NIG}(a_n, b_n, m_n, \Lambda_n)$ by sampling $\sigma_y^2 \sim \mathcal{IG}(a_n, b_n)$, followed by sampling $\theta \sim \mathcal{N}(m_n, \sigma_y^2 \Lambda_n^{-1})$, with $a_n = a_0 + n/2$, $b_n = b_0 + \left(u + m^T C^{-1} m - m_n^T \Lambda_n m_n\right)/2$, and
\[
\Lambda_n = \Lambda_0 + \sum_{j=1}^{J} S_j, \quad m_n = \Lambda_n^{-1}\left(\sum_{j=1}^{J} z_j + \Lambda_0 m\right).
\]
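As an illustration of the final conjugate step, the following sketch draws $(\theta, \sigma_y^2)$ from the normal-inverse-gamma posterior given aggregated summary statistics. The function and variable names are ours, and we write $b_n$ with the prior term $b_0$ and the factor of $1/2$ as in the standard conjugate update.

```python
import numpy as np

def sample_theta_sigma2(S_sum, z_sum, u, n, a0, b0, m, Lambda0, rng):
    """One draw of (theta, sigma_y^2) ~ NIG(a_n, b_n, m_n, Lambda_n), given
    the aggregated statistics S_sum = sum_j S_j, z_sum = sum_j z_j, u = y^T y."""
    Lambda_n = Lambda0 + S_sum
    m_n = np.linalg.solve(Lambda_n, z_sum + Lambda0 @ m)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (u + m @ Lambda0 @ m - m_n @ Lambda_n @ m_n)
    # sigma_y^2 ~ IG(a_n, b_n), i.e. the reciprocal of a Gamma(a_n, 1/b_n) draw.
    sigma2_y = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    # theta | sigma_y^2 ~ N(m_n, sigma_y^2 * Lambda_n^{-1}).
    theta = rng.multivariate_normal(m_n, sigma2_y * np.linalg.inv(Lambda_n))
    return theta, sigma2_y

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=200)
theta, sigma2_y = sample_theta_sigma2(
    X.T @ X, X.T @ y, y @ y, n=200, a0=2.0, b0=1.0,
    m=np.zeros(2), Lambda0=np.eye(2), rng=rng)
```

In the algorithm above, this step would be run with the latent (sampled) summary statistics rather than with $X$ and $y$ directly.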
C.2 A Variant of adaSSP for the Distributed Setting

The adaSSP algorithm of Wang (2018) is originally designed for a single data holder, i.e., J = 1. In adaSSP, a differentially private estimate of $\theta$ is released as
\[
\hat{\theta} = (\hat{S} + \lambda I)^{-1} \hat{z}. \tag{14}
\]
Here, $\hat{S}$ and $\hat{z}$ are the privatised versions of $S$ and $z$ as in (2) and (3), except that $\epsilon$ and $\delta$ must be changed to $2\epsilon/3$ and $2\delta/3$ in those equations to provide $(\epsilon, \delta)$-differential privacy. This is because adaSSP uses another parameter, $\lambda$, which is also calculated from the sensitive data, and a third of the privacy budget is spent on privatising that calculation. With $v \sim \mathcal{N}(0, 1)$, $\lambda$ is calculated as
\[
\lambda = \max\left\{0,\; \sigma\sqrt{d\ln(6/\delta)}\,\ln(2d^2/\rho) - \tilde{\lambda}_{\min}\right\}
\]
with $\sigma = \|X\|^2/(\epsilon/3)$, $\lambda_{\min} = \min(\mathrm{eig}(S))$, and $\tilde{\lambda}_{\min} = \max\left\{\lambda_{\min} + \sqrt{\ln(6/\delta)}\,\sigma v - \ln(6/\delta)\,\sigma,\; 0\right\}$.
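The $\lambda$ computation can be sketched as follows. This is our reading of the formulas above with hypothetical names; the feature norm bound $\|X\|$ and the privacy parameters are assumed given.

```python
import numpy as np

def adassp_lambda(S, x_norm, eps, delta, rho, rng):
    """Sketch of the adaSSP regularisation parameter: a third of the
    (eps, delta) budget is spent on privatising lambda_min = min(eig(S))."""
    d = S.shape[0]
    sigma = x_norm**2 / (eps / 3.0)             # noise scale for lambda_min
    lam_min = np.linalg.eigvalsh(S).min()       # smallest eigenvalue of S
    v = rng.normal()                            # v ~ N(0, 1)
    lam_min_tilde = max(
        lam_min + np.sqrt(np.log(6.0 / delta)) * sigma * v
        - np.log(6.0 / delta) * sigma, 0.0)
    return max(0.0,
               sigma * np.sqrt(d * np.log(6.0 / delta)) * np.log(2.0 * d**2 / rho)
               - lam_min_tilde)

rng = np.random.default_rng(0)
lam = adassp_lambda(np.eye(3), x_norm=1.0, eps=1.0, delta=1e-5, rho=0.05, rng=rng)
```

The two `max` operations keep $\tilde{\lambda}_{\min}$ and $\lambda$ non-negative, so the released regulariser never makes $\hat{S} + \lambda I$ "less" positive definite than $\hat{S}$ itself.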
We consider an extension of Wang (2018) for J ≥ 1. To perform the extension, we reflect on its tendency to approximate a (regularised) least-squares solution and consider the following estimate:
\[
\hat{\theta} = \left( \sum_{j=1}^{J} \hat{S}_j + I \sum_{j=1}^{J} \lambda_j \right)^{-1} \left( \sum_{j=1}^{J} \hat{z}_j \right). \tag{15}
\]
Here, $\hat{S}_j$, $\hat{z}_j$, and $\lambda_j$ are calculated in data node $j$ separately from the other nodes. The estimation procedure in (15) does not properly account for the Bayesian paradigm but aggregates the shared $\hat{S}_j$'s and $\hat{z}_j$'s to approximate the (regularised) least-squares solution.
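The aggregate estimator (15) can be sketched as follows; the names are ours, and each node is assumed to have computed its own $\hat{S}_j$, $\hat{z}_j$, and $\lambda_j$ before aggregation.

```python
import numpy as np

def distributed_adassp(S_hats, z_hats, lambdas):
    """Estimate theta as in (15): pool the privatised statistics from all
    J nodes and solve the regularised normal equations once."""
    d = S_hats[0].shape[0]
    A = sum(S_hats) + np.eye(d) * sum(lambdas)
    return np.linalg.solve(A, sum(z_hats))

# Toy check with two noiseless "nodes" and no regularisation:
# A = I + 2I = 3I and sum of z's = [3, 3], so theta_hat = [1, 1].
theta_hat = distributed_adassp(
    [np.eye(2), 2.0 * np.eye(2)], [np.ones(2), 2.0 * np.ones(2)], [0.0, 0.0])
```

With noisy $\hat{S}_j$'s the pooled matrix may still be poorly conditioned, which is exactly what the per-node $\lambda_j$ terms guard against.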