Understanding the gravitational-wave Hellings and Downs curve for pulsar timing arrays in terms of sound and electromagnetic waves

Fredrick A. Jenet*

University of Texas at Brownsville, Department of Physics and Astronomy,
and Center for Advanced Radio Astronomy, Brownsville, TX 78520

Joseph D. Romano

University of Texas at Brownsville, Department of Physics and Astronomy,
and Center for Gravitational-Wave Astronomy, Brownsville, TX 78520

(Dated: March 30, 2015)

Searches for stochastic gravitational-wave backgrounds using pulsar timing arrays look for correlations in the timing residuals induced by the background across the pulsars in the array. The correlation signature of an isotropic, unpolarized gravitational-wave background predicted by general relativity follows the so-called Hellings and Downs curve, which is a relatively simple function of the angle between a pair of Earth-pulsar baselines. In this paper, we give a pedagogical discussion of the Hellings and Downs curve for pulsar timing arrays, considering simpler analogous scenarios involving sound and electromagnetic waves. We calculate Hellings-and-Downs-type functions for these two scenarios and develop a framework suitable for doing more general correlation calculations.

I. INTRODUCTION

A pulsar is a rapidly-rotating neutron star that emits a beam of electromagnetic radiation (usually in the form of radio waves) from its magnetic poles.1 If the beam of radiation crosses our line of sight, we see a flash of radiation, similar to that of a lighthouse beacon. These flashes can be thought of as ticks of a giant astronomical clock, whose regularity rivals that of the best human-made atomic clocks. By precisely monitoring the arrival times of the pulses, astronomers can determine: (i) intrinsic properties of the pulsar—e.g., its rotational period and whether it is spinning up or spinning down; (ii) extrinsic properties of the pulsar—e.g., whether it is in a binary system, and if so what are its orbital parameters; and (iii) properties of the intervening ‘stuff’ between us and the pulsar—e.g., the column density of electrons in the interstellar medium.2,3 Indeed, it was the precise monitoring (for over 30 years) of the pulses from binary pulsar PSR B1913+16 that has given us the most compelling evidence to date for the existence of gravitational waves.4 The measured decrease in the orbital period of binary pulsar PSR B1913+16 agrees precisely with the predictions of general relativity for the energy loss due to gravitational-wave emission. (See Fig. 1.) This was a path-breaking result, with the discovery of the binary pulsar5 being worthy of a Nobel Prize in Physics for Joseph Taylor and Russell Hulse in 1993.

Monitoring the gravitational-wave-induced decay of a binary system, like PSR B1913+16, is one method for detecting gravitational waves. Another method is to look for the effect of gravitational waves on the radio pulses that propagate from a pulsar to a radio antenna on Earth. The basic idea is that when a gravitational wave transits the Earth-pulsar line of sight, it creates a perturbation in the intervening spatial metric, causing a change in the propagation time of the radio pulses emitted by the pulsar.6–8 (This is the timing response of the Earth-pulsar baseline to a gravitational wave.) One can then compare the measured and predicted times of arrival (TOAs) of the pulses, using timing models that take into account the various intrinsic and extrinsic properties of the pulsar. Since standard timing models factor in only deterministic influences on the arrival times of the pulses, the difference between the measured and predicted TOAs will result in a stream of timing residuals, which encode the influence of both deterministic and stochastic (i.e., random) gravitational waves as well as any other random noise processes on the measurement.9 If one has a set of radio pulsars, i.e., a pulsar timing array (PTA), one can correlate the residuals across pairs of Earth-pulsar baselines, leveraging the common influence of a background of gravitational waves against unwanted, uncorrelated noise. The key property of a PTA is that the signal from a stochastic gravitational-wave background will be correlated across the baselines, while that from the other noise processes will not. This is what makes a PTA function as a galactic-scale, gravitational-wave detector.10

For an isotropic, unpolarized stochastic background of quadrupole gravitational radiation composed of the plus (+) and cross (×) polarization modes predicted by general relativity, the expected correlated response of a pair of Earth-pulsar baselines to the background follows the so-called Hellings and Downs curve, named after the two authors who first calculated it in 1983.11 A plot of the Hellings and Downs curve as a function of the angle between a pair of baselines is shown in Fig. 2. Searches for stochastic gravitational-wave backgrounds using pulsar timing arrays effectively compare the measured correlations with the expected values from the Hellings and Downs curve to determine whether or not a signal from an isotropic, unpolarized background is present (or absent) in the data. Gravitational-wave backgrounds predicted by alternative theories of gravity, which have different polarization modes,12 or backgrounds that have an anisotropic distribution of gravitational-wave energy on the sky,13–15 will induce different correlation signatures and must be searched for accordingly. To date no detections have been made, but upper limits on the strength of the background have been set16 that constrain certain models of gravitational-wave backgrounds produced by the inspirals of binary supermassive black holes (SMBHs) in merging galaxies throughout the universe.

FIG. 1: Decrease in the orbital period of binary pulsar PSR B1913+16.4 The measured data points and error bars agree with the prediction of general relativity (parabola) for the rate of orbital decay due to gravitational-wave emission.

Mathematically, the Hellings and Downs curve is the sky-averaged and polarization-averaged product of the response of a pair of Earth-pulsar baselines to a plane wave propagating in a particular direction with either $+$ or $\times$ polarization. It has the analytic form

$$\chi(\zeta) = \frac{1}{2} - \frac{1}{4} \left( \frac{1 - \cos \zeta}{2} \right) + \frac{3}{2} \left( \frac{1 - \cos \zeta}{2} \right) \ln \left( \frac{1 - \cos \zeta}{2} \right), \tag{1}$$

where $\zeta$ is the angle between two Earth-pulsar baselines.17 (See Fig. 3 for the Earth-pulsar baseline geometry.) The integration that one must do in order to obtain the above expression is non-trivial enough that Hellings and Downs originally used the symbolic manipulation computer system MACSYMA to do the calculation.11 It turns out that it is also possible to evaluate the integral by hand, using contour integration for part of the integration (see, e.g., Appendix A).

FIG. 2: Hellings and Downs curve for the expected correlated response of a pair of Earth-pulsar baselines to an isotropic, unpolarized stochastic gravitational-wave background, plotted as a function of the angle between the baselines, cf. Eq. (1).

But, perhaps owing to the difficulty of analytically evaluating the sky integral, students or beginning researchers who are first introduced to the Hellings and Downs curve often see it as a somewhat mysterious object, intimately connected to the realm of pulsar timing. Granted, the precise analytic form in (1) is specific to the response of a pair of Earth-pulsar baselines to an isotropic, unpolarized stochastic gravitational-wave background, but Hellings-and-Downs-type functions show up in any scenario where one is interested in the dependence of the correlated response of a pair of receivers on the geometrical configuration of the two receivers. The geometry relating the configuration of one receiver to another might be more complicated (or simpler) than that for the pulsar timing case, but the basic idea of correlation across receivers is exactly the same.
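As a quick numerical sanity check, the closed form in Eq. (1) is straightforward to code up. A minimal sketch (the function name `hd_curve` is our choice), guarding the $x \ln x \to 0$ limit at $\zeta = 0$:

```python
import numpy as np

def hd_curve(zeta):
    """Hellings and Downs curve chi(zeta) of Eq. (1); zeta in radians."""
    x = (1.0 - np.cos(zeta)) / 2.0
    # x*ln(x) -> 0 as zeta -> 0 (x -> 0), so guard the logarithm there
    safe_x = np.where(x > 0, x, 1.0)
    xlogx = np.where(x > 0, x * np.log(safe_x), 0.0)
    return 0.5 - 0.25 * x + 1.5 * xlogx
```

At zero separation $\chi(0) = 1/2$; at $\zeta = \pi$ (antipodal pulsars) $x = 1$, the logarithmic term drops out, and $\chi(\pi) = 1/4$; in between, the curve dips below zero near $\zeta = \pi/2$, as in Fig. 2.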

The purpose of this paper is to emphasize this commonality, and to calculate Hellings-and-Downs-type functions for two simpler scenarios. Scenario 1 will be for a pair of receivers constructed from omni-directional microphones responding to an isotropic stochastic sound field. Scenario 2 will be for a pair of receivers constructed from electric dipole antennas responding to an isotropic and unpolarized stochastic electromagnetic field. These two scenarios were chosen since the derivation of the corresponding Hellings-and-Downs-type functions (cf. Eqs. (27) and (48)) and the evaluation of the necessary sky integral and polarization averaging (for the electromagnetic-wave case) are relatively simple. But the steps that one must go through to obtain these results are identical to those for the gravitational-wave pulsar timing Hellings and Downs function, even though the mathematics needed to derive the relevant expression for the pulsar timing case (cf. Eq. (58)) is more involved. Hopefully, after reading this paper, the reader will understand the pulsar timing Hellings and Downs curve in its proper context, and appreciate that it is a special case of a general correlation calculation.

The rest of the paper is organized as follows: In Section II, we describe a general mathematical formalism for working with random fields, which we will use repeatedly in the following sections. In Section III we apply this formalism to calculate a Hellings-and-Downs-type function for the case of omni-directional microphones in an isotropic stochastic sound field. Section IV extends the calculation to electric dipole antennas in an isotropic and unpolarized stochastic electromagnetic field, which requires us to deal with the polarization of the component waves. Finally, in Section V, we summarize the basic steps needed to calculate Hellings-and-Downs-type functions in general, and then set up the calculation for the actual pulsar timing Hellings and Downs curve, leaving the evaluation of the final integral to the motivated reader. (We have included details of the calculation in Appendix A, in case the reader has difficulty completing the calculation.)

II. RANDOM FIELDS AND EXPECTATION VALUES

Probably the most important reason for calculating Hellings-and-Downs-type functions is to determine the correlation signature of a signal buried in noisy data. The situation is tricky when the signal is associated with a random field (e.g., for a stochastic gravitational-wave background), since then one is effectively trying to detect “noise in noise.” Fortunately, it turns out that there is a way to surmount this problem. The key idea is that although the signal associated with a random field is typically indistinguishable from noise in a single detector or receiving system, it is correlated between pairs of detectors or receiving systems in ways that differ, in general, from instrumental or measurement noise. (In other words, by using multiple detectors, one can leverage the common influence of the background field against unwanted, uncorrelated noise processes.) At each instant of time, the measured correlation is simply the product of the output of two detectors. But since both the field and the instrumental noise are random processes, the measured correlation will fluctuate with time as dictated by the statistical properties of the field and noise. By averaging the correlations over time, we obtain an estimate of the expected value of the correlation, which we can then compare with predicted values assuming the presence (or absence) of a signal. The purpose of this section is to develop the mathematical machinery that will allow us to perform these statistical correlation calculations.

FIG. 3: Geometry for the calculation of the Hellings and Downs function for the correlated response of a pair of Earth-pulsar baselines to an isotropic, unpolarized stochastic gravitational-wave background. The coordinate system is chosen so that the Earth is located at the origin and pulsar 1 is located on the $z$-axis, a distance $D_1$ from the origin. Pulsar 2 is located in the $xz$-plane, a distance $D_2$ from the origin. The two Earth-pulsar baselines point along the unit vectors $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$. The angle between the two baselines is denoted by $\zeta$, and is given by $\cos \zeta = \hat{\mathbf{u}}_1 \cdot \hat{\mathbf{u}}_2$. (Actually, the origin of the coordinate system for the calculation is the fixed solar system barycenter (SSB), and not the moving Earth. But since the distance between the Earth and SSB (1 au) is much smaller than the typical distance to the two pulsars, $D_{1,2} \sim 1 \text{ kpc} = 2 \times 10^8 \text{ au}$, there is no practical difference between the Earth-pulsar and SSB-pulsar baselines.)

In the following three sections of the paper, we will be working with fields (sound, electromagnetic, and gravitational fields) that are made up of waves propagating in all different directions. These waves, having been produced by a large number of independent and uncorrelated sources, will have, in general, different frequencies, amplitudes, and phases. (In the case of electromagnetic and gravitational waves, they will also have different polarizations.) Such a superposition of waves is most conveniently described statistically, in terms of a Fourier integral whose Fourier coefficients are random variables. The statistical properties of the field will then be encoded in the statistical properties of the Fourier coefficients, which are much simpler to work with, as we shall show below, cf. Eqs. (13) and (14).

To illustrate these ideas as simply as possible, we will do the calculations in this section for an arbitrary scalar field $\Phi(t, \mathbf{x})$ . Analogous calculations would also go through for vector and tensor fields (e.g., electromagnetic and gravitational fields) with mostly just an increase in notational complexity, coming from the vector and tensor nature of these fields and their polarization properties. Scalar fields are particularly simple since they are described by a single real (or complex) number at each point in space $\mathbf{x}$ , at each instant of time $t$ . Sound waves, which we will discuss in detail in Section III, are an example of a scalar field. The Fourier integral for a scalar field $\Phi(t, \mathbf{x})$ has the form

$$\tilde{\Phi}(t, \mathbf{x}) = \int d^3\mathbf{k}\, \tilde{A}(\mathbf{k})\, e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}, \qquad \Phi(t, \mathbf{x}) = \text{Re}[\tilde{\Phi}(t, \mathbf{x})], \tag{2}$$

with $\omega/k = v$, where $v$ is the speed of wave propagation and $k = |\mathbf{k}|$. The relation $\omega/k = v$ is required for $e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}$ to be a solution of the wave equation. The Fourier coefficients $\tilde{A}(\mathbf{k})$ are complex-valued random variables, and can be written as

$$\tilde{A}(\mathbf{k}) = A(\mathbf{k})\, e^{i\alpha(\mathbf{k})} = a(\mathbf{k}) + i\, b(\mathbf{k}), \tag{3}$$

where $A$ , $\alpha$ , $a$ , and $b$ are all real-valued functions of $\mathbf{k}$ .

The statistical properties of the field $\Phi(t, \mathbf{x})$ are completely determined by the joint probability distributions

$$p_n(\Phi_1, t_1, \mathbf{x}_1;\ \Phi_2, t_2, \mathbf{x}_2;\ \dots;\ \Phi_n, t_n, \mathbf{x}_n), \qquad n = 1, 2, \dots \tag{4}$$

in terms of which one can calculate the expectation values

$$\langle \Phi(t_1, \mathbf{x}_1) \rangle, \qquad \langle \Phi(t_1, \mathbf{x}_1)\, \Phi(t_2, \mathbf{x}_2) \rangle, \qquad \text{etc.} \tag{5}$$

For example, the expectation value of the field at spatial location $\mathbf{x}_1$ at time $t_1$ is defined by

$$\langle \Phi(t_1, \mathbf{x}_1) \rangle \equiv \int_{-\infty}^{\infty} d\Phi_1\, \Phi_1\, p_1(\Phi_1, t_1, \mathbf{x}_1). \tag{6}$$

Equivalently, the expectation values can be defined in terms of an ensemble average, e.g.,

$$\langle \Phi(t, \mathbf{x}) \rangle \equiv \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^N \Phi^{(i)}(t, \mathbf{x}), \tag{7}$$

where $\Phi^{(i)}(t, \mathbf{x})$ denotes a particular realization of $\Phi(t, \mathbf{x})$ . The usefulness of knowing the expectation values given in (5) is that such knowledge is equivalent to knowing the joint probability distributions (4) and hence the complete statistical properties of the field.18 These expectation values in turn are completely encoded in the expectation values of the products of the Fourier coefficients $\tilde{A}(\mathbf{k})$ .

The simplest case, which is also the one we consider, is for a multivariate Gaussian-distributed field, since knowledge of the quadratic expectation values is sufficient to determine all higher-order moments. Without loss of generality, we will work with the real random variables $a(\mathbf{k})$ and $b(\mathbf{k})$ , and assume that any non-zero constant value has been subtracted from the field:

$$\langle a(\mathbf{k}) \rangle = 0, \qquad \langle b(\mathbf{k}) \rangle = 0. \tag{8}$$

We will also assume that the field $\Phi(t, \mathbf{x})$ is stationary in time and spatially homogeneous—i.e., that the statistical properties of the field are unaffected by a change in either the origin of time or the origin of spatial coordinates:

$$t \rightarrow t + t', \qquad \mathbf{x} \rightarrow \mathbf{x} + \mathbf{x}'. \tag{9}$$

This means that the quadratic expectation values can depend only on the difference between these coordinates:

$$\langle \Phi(t_1, \mathbf{x}_1)\, \Phi(t_2, \mathbf{x}_2) \rangle = C(t_1 - t_2, \mathbf{x}_1 - \mathbf{x}_2). \tag{10}$$

Such behavior follows from

$$\begin{aligned} \langle a(\mathbf{k})\, a(\mathbf{k}') \rangle &= \frac{1}{2} B(\mathbf{k})\, \delta^3(\mathbf{k} - \mathbf{k}'), \\ \langle b(\mathbf{k})\, b(\mathbf{k}') \rangle &= \frac{1}{2} B(\mathbf{k})\, \delta^3(\mathbf{k} - \mathbf{k}'), \\ \langle a(\mathbf{k})\, b(\mathbf{k}') \rangle &= 0, \end{aligned} \tag{11}$$

with

$$C(t_1 - t_2, \mathbf{x}_1 - \mathbf{x}_2) = \frac{1}{2} \int d^3\mathbf{k}\, B(\mathbf{k}) \cos[\mathbf{k} \cdot (\mathbf{x}_1 - \mathbf{x}_2) - \omega(t_1 - t_2)]. \tag{12}$$

For readers interested in proving this last statement, write $\Phi$ as the sum $\Phi = (\tilde{\Phi} + \tilde{\Phi}^*)/2$ and then use Eqs. (2) and (3) to expand the left-hand-side of Eq. (10) in terms of expectation values of $a(\mathbf{k})$ and $b(\mathbf{k})$ . Given Eq. (11), Eq. (10) then follows with $C(t_1 - t_2, \mathbf{x}_1 - \mathbf{x}_2)$ given by Eq. (12).

The physical meaning of the Dirac delta functions that appear in the expectation values of Eq. (11) is that waves propagating in different directions $\hat{\mathbf{k}}$ and $\hat{\mathbf{k}}'$ and having different angular frequencies $\omega = kv$ and $\omega' = k'v$ are statistically independent of one another. In other words, the expected correlations are non-zero only for waves traveling in the same direction and having the same frequency. Using Eqs. (8) and (11), it is also straightforward to show that the complex Fourier coefficients $\tilde{A}(\mathbf{k}) = a(\mathbf{k}) + ib(\mathbf{k})$ satisfy

$$\langle \tilde{A}(\mathbf{k}) \rangle = 0, \qquad \langle \tilde{A}^*(\mathbf{k}) \rangle = 0, \tag{13}$$

and that

$$\begin{aligned} \langle \tilde{A}(\mathbf{k})\, \tilde{A}(\mathbf{k}') \rangle &= 0, \\ \langle \tilde{A}^*(\mathbf{k})\, \tilde{A}^*(\mathbf{k}') \rangle &= 0, \\ \langle \tilde{A}(\mathbf{k})\, \tilde{A}^*(\mathbf{k}') \rangle &= B(\mathbf{k})\, \delta^3(\mathbf{k} - \mathbf{k}'). \end{aligned} \tag{14}$$

These two sets of expectation values for the Fourier coefficients $\tilde{A}(\mathbf{k})$ are the main results of this section. The vanishing of the first two expectation values in (14) implies
$$\begin{aligned} \langle \Phi^2(t, \mathbf{x}) \rangle &= \frac{1}{4} \left\langle \left( \tilde{\Phi}(t, \mathbf{x}) + \tilde{\Phi}^*(t, \mathbf{x}) \right) \left( \tilde{\Phi}(t, \mathbf{x}) + \tilde{\Phi}^*(t, \mathbf{x}) \right) \right\rangle \\ &= \frac{1}{4} \left\{ \langle \tilde{\Phi}(t, \mathbf{x})\tilde{\Phi}(t, \mathbf{x}) \rangle + \langle \tilde{\Phi}^*(t, \mathbf{x})\tilde{\Phi}^*(t, \mathbf{x}) \rangle + \langle \tilde{\Phi}(t, \mathbf{x})\tilde{\Phi}^*(t, \mathbf{x}) \rangle + \langle \tilde{\Phi}^*(t, \mathbf{x})\tilde{\Phi}(t, \mathbf{x}) \rangle \right\} \\ &= \frac{1}{2} \langle \tilde{\Phi}(t, \mathbf{x})\tilde{\Phi}^*(t, \mathbf{x}) \rangle, \end{aligned} \tag{15}$$

which we will use repeatedly in the following sections.
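The content of Eqs. (13)–(15) can be illustrated with a single discrete mode: draw many samples of independent Gaussian coefficients $a$ and $b$ with equal variance $B/2$ and form $\tilde{A} = a + ib$. (In this discrete setting the Dirac delta functions of Eq. (14) play no role; the variance $B = 2$, sample count, and phase $\theta$ below are arbitrary choices of ours.) A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 500_000, 2.0
a = rng.normal(0.0, np.sqrt(B / 2.0), N)   # <a a> = B/2, cf. Eq. (11)
b = rng.normal(0.0, np.sqrt(B / 2.0), N)
A = a + 1j * b                             # Eq. (3)

AA = np.mean(A * A)              # discrete analogue of <A A>   -> 0
AAc = np.mean(A * np.conj(A))    # discrete analogue of <A A*>  -> B

# Eq. (15): Phi = Re[A e^{i theta}] at a fixed phase theta = k.x - w t
theta = 0.7
Phi = np.real(A * np.exp(1j * theta))
msq = np.mean(Phi**2)            # -> (1/2) <A A*> = B/2
```

Up to sampling error, `AA` vanishes while `AAc` equals $B$, and the mean-square of the real field is half the mean-square modulus of its complex representation, exactly as in Eq. (15).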

As discussed at the start of this section, we are ultimately interested in calculating the expected correlation $\langle r_1(t)r_2(t) \rangle$ of the responses $r_1(t)$ , $r_2(t)$ of two receiving systems $R_1$ , $R_2$ to the field $\Phi(t, \mathbf{x})$ . It is this expected correlation that we can compare against the actual measured correlation, assuming that the other noise processes are uncorrelated across different receiving systems. The response of the receiving systems will be linear in the field, given by a convolution of $R_1$ and $R_2$ with $\Phi$ . Since $\Phi(t, \mathbf{x})$ is a random field, $r_1(t)$ and $r_2(t)$ will be random functions of time. In addition, the expectation value $\langle r_1(t_1)r_2(t_2) \rangle$ will depend only on the time difference $t_1 - t_2$ , as a consequence of our assumption regarding stationarity of the field, cf. (10). Hence, the expected correlation $\langle r_1(t)r_2(t) \rangle$ will be independent of time, and we expect to be able to estimate this correlation by averaging together measurements made at different instants of time:

$$\langle r_1(t)\, r_2(t) \rangle \equiv \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^N r_1(t_i)\, r_2(t_i). \tag{16}$$

Random processes for which this is true—i.e., for which time averages equal ensemble averages over different realizations of the field—are said to be ergodic.

In what follows we will assume that all our random processes are ergodic so that ensemble averages can be replaced by time averages (and/or spatial averages) if desired. This will allow us to calculate expectation values by averaging over segments of a single realization, which is usually all that we have in practice. Although ergodicity is often a good assumption to make, it is important to note that not all stationary random processes are ergodic. An example19 of a stationary random process that is not ergodic is an ensemble of constant time-series $x(t) = a$, where the values of $a$ are uniformly distributed between $-1$ and $1$. The ensemble average $\langle x(t) \rangle = 0$ for all $t$, but the time-average of a single realization equals the value of $a$ for whichever time-series is drawn from the ensemble. For simplicity of presentation in the remainder of this paper, we will continue to treat the Fourier expansion coefficients as random variables and calculate ensemble averages of these quantities, rather than time (and/or spatial) averages of products of the plane wave components $e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}$.
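The non-ergodic example just described is easy to simulate; a minimal sketch (the ensemble size and series length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, T = 20_000, 100                      # realizations, time samples
a = rng.uniform(-1.0, 1.0, M)           # one constant value per realization
x = np.repeat(a[:, None], T, axis=1)    # x[i, t] = a_i: constant in time

ens_avg = x.mean(axis=0)    # ensemble average at each time: -> 0
time_avg = x.mean(axis=1)   # time average of each realization: equals a_i
```

The ensemble average vanishes (up to sampling error), but the time average of any single realization is just whatever value of $a$ was drawn, so time and ensemble averages disagree: the process is stationary but not ergodic.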

III. SCENARIO 1: SOUND WAVES

The first scenario we consider involves sound. Mathematically, sound waves in air are pressure deviations (relative to atmospheric pressure) that satisfy the 3-dimensional wave equation. If we denote the pressure deviation at time $t$ and spatial location $\mathbf{x}$ by $p(t, \mathbf{x})$ , then

$$\nabla^2 p - \frac{1}{c_s^2} \frac{\partial^2 p}{\partial t^2} = 0, \tag{17}$$

where $\nabla^2$ denotes the Laplacian20 and $c_s$ denotes the speed of sound in air (approximately 340 m/s at room temperature). The most general solution of the 3-dimensional wave equation is a superposition of plane waves:

$$p(t, \mathbf{x}) = \int d^3\mathbf{k}\, A(\mathbf{k}) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t + \alpha(\mathbf{k})), \tag{18}$$

where the wave vector $\mathbf{k}$ and angular frequency $\omega$ are related by $\omega/k = c_s$ in order that (17) be satisfied for each $\mathbf{k}$ . As discussed in Section II, it will be more convenient to work with the complex-valued solution

$$\tilde{p}(t, \mathbf{x}) = \int d^3\mathbf{k}\, \tilde{A}(\mathbf{k})\, e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}, \qquad \tilde{A}(\mathbf{k}) = A(\mathbf{k})\, e^{i\alpha(\mathbf{k})}, \tag{19}$$

for which $p(t, \mathbf{x})$ is the real part.

For a stochastic sound field, the Fourier coefficients $\tilde{A}(\mathbf{k})$ are random variables. We will assume that these coefficients satisfy (13) and (14), with the additional requirement that the function $B(\mathbf{k})$ be independent of direction $\hat{\mathbf{k}}$, which is appropriate for a statistically isotropic sound field. (This means there is no preferred direction of wave propagation at any point in the field.) As we shall now show, the function $B(\mathbf{k}) \equiv B(k)$ is simply related to the power per unit frequency in the sound field integrated over all directions. To prove this last claim we calculate the mean-squared pressure deviations:
$$\begin{aligned} \langle p^2(t, \mathbf{x}) \rangle &= \frac{1}{2} \langle \tilde{p}(t, \mathbf{x})\, \tilde{p}^*(t, \mathbf{x}) \rangle \\ &= \frac{1}{2} \left\langle \int d^3\mathbf{k}\, \tilde{A}(\mathbf{k})\, e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)} \int d^3\mathbf{k}'\, \tilde{A}^*(\mathbf{k}')\, e^{-i(\mathbf{k}' \cdot \mathbf{x} - \omega' t)} \right\rangle \\ &= \frac{1}{2} \int d^3\mathbf{k} \int d^3\mathbf{k}'\, \langle \tilde{A}(\mathbf{k})\, \tilde{A}^*(\mathbf{k}') \rangle\, e^{i(\mathbf{k} - \mathbf{k}') \cdot \mathbf{x}}\, e^{-i(\omega - \omega') t} \\ &= \frac{1}{2} \int d^3\mathbf{k}\, B(k) \\ &= \frac{1}{2} \int_0^\infty k^2\, dk \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}}\, B(k) \\ &= 2\pi \int_0^\infty k^2\, dk\, B(k). \end{aligned} \tag{20}$$

Thus, if we write

$$\langle p^2(t, \mathbf{x}) \rangle = \int_0^\infty d\omega\, \frac{d\langle p^2 \rangle}{d\omega}, \tag{21}$$

then

$$\frac{d\langle p^2 \rangle}{d\omega} = \frac{2\pi k^2}{c_s}\, B(k), \tag{22}$$

as claimed.
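Equation (20) says that statistically independent wave components add in power: each component of amplitude $A$ contributes $A^2/2$ to the mean-square pressure. A discrete random-phase sketch of this (the component count, common amplitude, and realization count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N, A = 1000, 0.1     # number of plane-wave components, common amplitude
S = 2000             # independent realizations of the field

# At a fixed spacetime point, k.x - w t + alpha is just a random phase,
# so each realization of p in Eq. (18) is a sum of N independent cosines.
phases = rng.uniform(0.0, 2.0 * np.pi, (S, N))
p = (A * np.cos(phases)).sum(axis=1)

msq = np.mean(p**2)   # expected: N * A**2 / 2, since each cosine averages to A**2/2
```

Up to sampling error, `msq` equals $N A^2/2$: the cross terms between components average to zero, which is the discrete counterpart of the delta function collapsing the double integral in Eq. (20).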

To determine the acoustical analogue of the pulsar timing Hellings and Downs function, we need to calculate the expected correlation of the responses $r_1(t)$ and $r_2(t)$ of two receiving systems to an isotropic stochastic sound field. A single receiving system will consist of a pair of omni-directional (i.e., isotropic) microphones that are separated in space as shown in Fig. 4. For simplicity, we will assume that the microphones are identical and have a gain $G$ that is independent of frequency. The response $r_1(t)$ of receiving system 1, consisting of microphones $A$ and $B$, is defined to be the real part of

$$\tilde{r}_1(t) = \tilde{V}_A(t) - \tilde{V}_B(t), \tag{23}$$

where

$$\tilde{V}_A(t) = G\, \tilde{p}(t, \mathbf{x}_A), \qquad \tilde{V}_B(t) = G\, \tilde{p}(t, \mathbf{x}_B). \tag{24}$$

The response of receiving system 2, consisting of microphones $A$ and $C$ , is defined similarly,

$$\tilde{r}_2(t) = \tilde{V}_A(t) - \tilde{V}_C(t), \tag{25}$$

with microphone $C$ replacing microphone $B$ . Note that microphone $A$ is common to both receiving systems, and that we have taken the time of the measurement to be the same at both microphones, which physically corresponds to running equal-length wires from each microphone to our receiving system.

The expected value of the correlated response is then

$$\begin{aligned} \langle r_1(t)\, r_2(t) \rangle &= \frac{1}{2} \text{Re} \{ \langle \tilde{r}_1(t)\, \tilde{r}_2^*(t) \rangle \} \\ &= \frac{1}{2} \text{Re} \left\{ \left\langle \left( \tilde{V}_A(t) - \tilde{V}_B(t) \right) \left( \tilde{V}_A^*(t) - \tilde{V}_C^*(t) \right) \right\rangle \right\} \\ &= \frac{1}{2} \text{Re} \left\{ G^2 \int d^3\mathbf{k} \int d^3\mathbf{k}'\, \langle \tilde{A}(\mathbf{k})\, \tilde{A}^*(\mathbf{k}') \rangle\, e^{-i(\omega - \omega') t} [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k}' \cdot \mathbf{x}_C}] \right\} \\ &= \frac{1}{2} \text{Re} \left\{ G^2 \int d^3\mathbf{k}\, B(k) [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k} \cdot \mathbf{x}_C}] \right\} \\ &= \frac{1}{2} \text{Re} \left\{ G^2 \int_0^\infty k^2\, dk\, B(k) \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B} - e^{-i\mathbf{k} \cdot \mathbf{x}_C} + e^{i\mathbf{k} \cdot (\mathbf{x}_B - \mathbf{x}_C)}] \right\} \\ &= \int_0^\infty d\omega\, \frac{d\langle p^2 \rangle}{d\omega}\, \Gamma_{12}(\omega), \end{aligned} \tag{26}$$

where the correlation function

$$\Gamma_{12}(\omega) = \text{Re} \left\{ G^2\, \frac{1}{4\pi} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B} - e^{-i\mathbf{k} \cdot \mathbf{x}_C} + e^{i\mathbf{k} \cdot (\mathbf{x}_B - \mathbf{x}_C)}] \right\}. \tag{27}$$

FIG. 4: Geometry for the calculation of the Hellings-and-Downs-type function for a pair of receivers constructed from omnidirectional microphones responding to an isotropic stochastic sound field. Receiving system 1 is constructed from microphones $A$ and $B$, and points along the unit vector $\hat{\mathbf{u}}_1$. Receiving system 2 is constructed from microphones $A$ and $C$, and points along the unit vector $\hat{\mathbf{u}}_2$. The coordinate system is chosen so that microphone $A$, which is common to both receiving systems, is located at the origin. Microphone $B$ is located on the $z$-axis, a distance $D_B$ from the origin, while microphone $C$ is located in the $xz$-plane, a distance $D_C$ from the origin. The angle between the two receiving systems is denoted by $\zeta$, and is given by $\cos \zeta = \hat{\mathbf{u}}_1 \cdot \hat{\mathbf{u}}_2$. (Note the similarity of this figure and Fig. 3.)

The integrals of the exponentials over all directions $\hat{\mathbf{k}}$ are of the form $\int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} e^{i\mathbf{k}\cdot\mathbf{x}}$ , where $\mathbf{x}$ is a fixed vector. Such an integral is most easily evaluated in a frame in which the $z$ -axis is directed along $\mathbf{x}$ . In this frame,

$$\begin{aligned} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}}\, e^{i\mathbf{k} \cdot \mathbf{x}} &= \int_0^{2\pi} d\phi \int_{-1}^1 d(\cos\theta)\, e^{ikD\cos\theta} \\ &= 2\pi\, \frac{1}{ikD} \left( e^{ikD} - e^{-ikD} \right) \\ &= 4\pi \operatorname{sinc}(kD), \end{aligned} \tag{28}$$

where $D = |\mathbf{x}|$ and $\operatorname{sinc}(x) \equiv \sin x/x$ . Since the sinc function rapidly approaches zero for $x \gg 1$ as shown in Fig. 5, we can ignore the contribution from the last three integrals in (27) provided $kD \gg 1$ , or equivalently, provided $D \gg 1/k = c_s/\omega$ . This is called the short-wavelength approximation. For audible sound, which has frequencies $f \equiv \omega/(2\pi)$ in the range $\sim 20$ Hz to $\sim 20$ kHz, this condition becomes

$$D \gg \frac{c_s}{\omega} = \frac{340 \text{ m/s}}{2\pi \cdot 20 \text{ Hz}} = 2.7 \text{ m}. \tag{29}$$

So assuming that the individual microphones are separated by more than this amount, we have

$$\chi(\zeta) \equiv \Gamma_{12}(\omega) \simeq G^2. \tag{30}$$

FIG. 5: Plot of $\operatorname{sinc}(kD)$ versus $kD$ .

In other words, the Hellings and Downs function for an isotropic stochastic sound field is simply a constant, independent of the angle between the two receiving systems. The expected correlation is thus

r1r2G2p2,(31)\langle r_1 r_2 \rangle \simeq G^2 \langle p^2 \rangle, \quad (31)

which is the mean power in the sound field multiplied by a constant $G^2$. This result is to be expected for omnidirectional microphones in an isotropic stochastic sound field.

Although this was a somewhat long calculation to obtain an answer that, in retrospect, did not require any calculation, the formalism developed here can be applied with rather minor modifications to handle more complicated scenarios as we shall see below.
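As a sanity check on this scenario, the angular integral in Eq. (28) and the short-wavelength threshold of Eq. (29) can be verified numerically. The following Python sketch is purely illustrative (the function names, grid size, and test value of $kD$ are ours, not from the paper); it compares a midpoint-rule evaluation of $\int_{S^2} d^2\Omega_{\hat{\mathbf{k}}}\, e^{i\mathbf{k}\cdot\mathbf{x}}$ against $4\pi\,\operatorname{sinc}(kD)$:

```python
import cmath
import math

def plane_wave_sphere_integral(kD, n=20000):
    # Integrate e^{i k·x} over the sphere with the z-axis along x;
    # the phi integral gives 2*pi, leaving a 1-D integral over cos(theta).
    total = 0.0 + 0.0j
    dc = 2.0 / n
    for j in range(n):
        c = -1.0 + (j + 0.5) * dc          # midpoints in cos(theta)
        total += cmath.exp(1j * kD * c) * dc
    return 2.0 * math.pi * total

kD = 7.3                                   # arbitrary test value
lhs = plane_wave_sphere_integral(kD)
rhs = 4.0 * math.pi * math.sin(kD) / kD    # 4*pi*sinc(kD), Eq. (28)
print(abs(lhs - rhs))                      # should be tiny

# Short-wavelength threshold of Eq. (29) for audible sound:
c_s, f_low = 340.0, 20.0                   # speed of sound (m/s), lowest audible frequency (Hz)
print(c_s / (2.0 * math.pi * f_low))       # ~2.7 m
```

The same quadrature also shows directly that $\operatorname{sinc}(kD)$ is strongly suppressed once $kD \gg 1$, which is what justifies dropping the exponential terms in Eq. (27).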

IV. SCENARIO 2: ELECTROMAGNETIC WAVES

The second example we consider involves electromagnetic waves. Similar to sound, electromagnetic waves are solutions to a 3-dimensional wave equation, but with the speed of light $c = 2.998 \times 10^8$ m/s replacing the speed of sound $c_s$:

2E1c22Et2=0,2B1c22Bt2=0.(32)\begin{aligned}\nabla^2 \mathbf{E} - \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} &= 0, \\ \nabla^2 \mathbf{B} - \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} &= 0.\end{aligned}\quad (32)

The most general solution to the wave equation for the electric and magnetic fields is given by a sum of plane waves similar to that in Eq. (2),

E~(t,x)=d3k{E~1(k)ϵ^1(k^)+E~2(k)ϵ^2(k^)}ei(kxωt),B~(t,x)=d3kk^c×{E~1(k)ϵ^1(k^)+E~2(k)ϵ^2(k^)}ei(kxωt)(33)\begin{aligned}\tilde{\mathbf{E}}(t, \mathbf{x}) &= \int d^3 \mathbf{k} \left\{ \tilde{E}_1(\mathbf{k}) \hat{\epsilon}_1(\hat{\mathbf{k}}) + \tilde{E}_2(\mathbf{k}) \hat{\epsilon}_2(\hat{\mathbf{k}}) \right\} e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}, \\ \tilde{\mathbf{B}}(t, \mathbf{x}) &= \int d^3 \mathbf{k} \frac{\hat{\mathbf{k}}}{c} \times \left\{ \tilde{E}_1(\mathbf{k}) \hat{\epsilon}_1(\hat{\mathbf{k}}) + \tilde{E}_2(\mathbf{k}) \hat{\epsilon}_2(\hat{\mathbf{k}}) \right\} e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned}\quad (33)

with

E(t,x)=Re[E~(t,x)],B(t,x)=Re[B~(t,x)],(34)\mathbf{E}(t, \mathbf{x}) = \text{Re}[\tilde{\mathbf{E}}(t, \mathbf{x})], \quad \mathbf{B}(t, \mathbf{x}) = \text{Re}[\tilde{\mathbf{B}}(t, \mathbf{x})], \quad (34)

and $\omega/k = c$ . In the above expressions, $\hat{\epsilon}_\alpha(\hat{\mathbf{k}})$ ( $\alpha = 1, 2$ ) are two unit polarization vectors, orthogonal to one another and to the direction of propagation:

ϵ^α(k^)ϵ^β(k^)=δαβ,k^ϵ^α(k^)=0.(35)\hat{\epsilon}_\alpha(\hat{\mathbf{k}}) \cdot \hat{\epsilon}_\beta(\hat{\mathbf{k}}) = \delta_{\alpha\beta}, \quad \hat{\mathbf{k}} \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) = 0. \quad (35)

Note that there is freedom to rotate the polarization vectors in the plane orthogonal to $\hat{\mathbf{k}}$. For simplicity, we will choose

ϵ^1(k^)=cosθcosϕx^+cosθsinϕy^sinθz^=θ^,ϵ^2(k^)=sinϕx^+cosϕy^=ϕ^,(36)\begin{aligned}\hat{\epsilon}_1(\hat{\mathbf{k}}) &= \cos \theta \cos \phi \hat{\mathbf{x}} + \cos \theta \sin \phi \hat{\mathbf{y}} - \sin \theta \hat{\mathbf{z}} = \hat{\boldsymbol{\theta}}, \\ \hat{\epsilon}_2(\hat{\mathbf{k}}) &= -\sin \phi \hat{\mathbf{x}} + \cos \phi \hat{\mathbf{y}} = \hat{\boldsymbol{\phi}},\end{aligned}\quad (36)

whenever $\hat{\mathbf{k}}$ points in the direction given by the standard angular coordinates $(\theta, \phi)$ on the sphere:

k^=sinθcosϕx^+sinθsinϕy^+cosθz^.(37)\hat{\mathbf{k}} = \sin \theta \cos \phi \hat{\mathbf{x}} + \sin \theta \sin \phi \hat{\mathbf{y}} + \cos \theta \hat{\mathbf{z}}. \quad (37)
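These conventions are easy to check numerically. The short Python sketch below (an illustration only; the helper names are ours) builds $\hat{\mathbf{k}}$, $\hat{\epsilon}_1 = \hat{\boldsymbol{\theta}}$, and $\hat{\epsilon}_2 = \hat{\boldsymbol{\phi}}$ from Eqs. (36) and (37) and verifies the orthonormality conditions of Eq. (35) at an arbitrary sky direction:

```python
import math

def khat(theta, phi):
    # Eq. (37): unit vector in the propagation direction
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def eps1(theta, phi):
    # Eq. (36): theta-hat polarization vector
    return (math.cos(theta) * math.cos(phi),
            math.cos(theta) * math.sin(phi),
            -math.sin(theta))

def eps2(theta, phi):
    # Eq. (36): phi-hat polarization vector
    return (-math.sin(phi), math.cos(phi), 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

theta, phi = 1.1, 2.4                      # an arbitrary direction on the sky
k, e1, e2 = khat(theta, phi), eps1(theta, phi), eps2(theta, phi)
assert abs(dot(e1, e1) - 1.0) < 1e-12      # unit norm, Eq. (35)
assert abs(dot(e2, e2) - 1.0) < 1e-12
assert abs(dot(e1, e2)) < 1e-12            # mutually orthogonal
assert abs(dot(k, e1)) < 1e-12             # transverse to k-hat
assert abs(dot(k, e2)) < 1e-12
```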

Since the receiving systems that we shall consider below are constructed from electric dipole antennas, which respond only to the electric part of the field, we will ignore the magnetic field for the remainder of this discussion.

For a stochastic field, the Fourier coefficients are complex-valued random variables. We will assume that they have expectation values (cf. (13) and (14)):

E~α(k)=0,E~α(k)=0,(38)\langle \tilde{E}_\alpha(\mathbf{k}) \rangle = 0, \quad \langle \tilde{E}_\alpha^*(\mathbf{k}) \rangle = 0, \quad (38)

and

E~α(k)E~β(k)=0,E~α(k)E~β(k)=0,E~α(k)E~β(k)=δ3(kk)Pαβ(k).(39)\begin{aligned}\langle \tilde{E}_\alpha(\mathbf{k}) \tilde{E}_\beta(\mathbf{k}') \rangle &= 0, \\ \langle \tilde{E}_\alpha^*(\mathbf{k}) \tilde{E}_\beta^*(\mathbf{k}') \rangle &= 0, \\ \langle \tilde{E}_\alpha(\mathbf{k}) \tilde{E}_\beta^*(\mathbf{k}') \rangle &= \delta^3(\mathbf{k} - \mathbf{k}') P_{\alpha\beta}(\mathbf{k}).\end{aligned}\quad (39)

As before, the Dirac delta function ensures that radiation components propagating in different directions or having different angular frequencies are statistically independent of one another. If the field is also statistically isotropic and unpolarized, then the polarization tensor $P_{\alpha\beta}(\mathbf{k})$ will be proportional to the identity matrix $\delta_{\alpha\beta}$, with a proportionality constant independent of direction on the sky:

Pαβ(k)=P(k)δαβ.(40)P_{\alpha\beta}(\mathbf{k}) = P(k) \delta_{\alpha\beta}. \quad (40)

Similar to the case for sound, the function $P(k)$ turns out to be simply related to the power per unit frequency in the electric field when summed over both polarization modes and integrated over all directions. To see this, we calculate the mean-squared electric field:

\begin{aligned}\langle E^2(t, \mathbf{x}) \rangle &= \frac{1}{2} \langle \tilde{\mathbf{E}}(t, \mathbf{x}) \cdot \tilde{\mathbf{E}}^*(t, \mathbf{x}) \rangle \\ &= \frac{1}{2} \left\langle \int d^3 \mathbf{k} \sum_{\alpha=1,2} \tilde{E}_\alpha(\mathbf{k}) \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)} \cdot \int d^3 \mathbf{k}' \sum_{\beta=1,2} \tilde{E}_\beta^*(\mathbf{k}') \hat{\epsilon}_\beta(\hat{\mathbf{k}}') e^{-i(\mathbf{k}' \cdot \mathbf{x} - \omega' t)} \right\rangle \\ &= \frac{1}{2} \int d^3 \mathbf{k} \int d^3 \mathbf{k}' \sum_{\alpha=1,2} \sum_{\beta=1,2} \langle \tilde{E}_\alpha(\mathbf{k}) \tilde{E}_\beta^*(\mathbf{k}') \rangle \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) \cdot \hat{\epsilon}_\beta(\hat{\mathbf{k}}') e^{i(\mathbf{k} - \mathbf{k}') \cdot \mathbf{x}} e^{-i(\omega - \omega')t} \\ &= \frac{1}{2} \int d^3 \mathbf{k} P(k) \sum_{\alpha=1,2} \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) \\ &= 4\pi \int_0^\infty k^2 dk P(k) = \int_0^\infty d\omega \frac{d\langle E^2 \rangle}{d\omega},\end{aligned}\quad (41)

from which we obtain

dE2dω=4πk2cP(k)(42)\frac{d\langle E^2 \rangle}{d\omega} = \frac{4\pi k^2}{c} P(k) \quad (42)

as claimed. Note that this has the same form as that for sound, cf. (22), with the speed of light $c$ replacing the speed of sound $c_s$ , and the extra factor of two coming from the summation over the two (assumed statistically equivalent) polarization modes for the electromagnetic field.

To determine the electromagnetic analogue of the pulsar timing Hellings and Downs function, we need to calculate the expected correlation of the responses $r_1(t)$ and $r_2(t)$ of two receiving systems to an isotropic, unpolarized stochastic electromagnetic field. A single receiving system will consist of a pair of electric dipole antennas that are separated in space as shown in Fig. 6. For simplicity, we will assume that the electric dipole antennas are identical and short relative to the wavelengths that make up the electric field. The response $r_1(t)$ of receiving system 1, consisting of electric dipole antennas $A$ and $B$, is defined to be the real part of

r~1(t)=V~A(t)V~B(t),(43)\tilde{r}_1(t) = \tilde{V}_A(t) - \tilde{V}_B(t), \quad (43)

where

V~A(t)=u^1E~(xA,t),V~B(t)=u^1E~(xB,t).(44)\tilde{V}_A(t) = \hat{\mathbf{u}}_1 \cdot \tilde{\mathbf{E}}(\mathbf{x}_A, t), \quad \tilde{V}_B(t) = \hat{\mathbf{u}}_1 \cdot \tilde{\mathbf{E}}(\mathbf{x}_B, t). \quad (44)

The response $r_2(t)$ of receiving system 2, consisting of electric dipole antennas $A'$ and $C$ , is defined similarly

r~2(t)=V~A(t)V~C(t),(45)\tilde{r}_2(t) = \tilde{V}_{A'}(t) - \tilde{V}_C(t), \quad (45)

where

V~A(t)=u^2E~(xA,t),V~C(t)=u^2E~(xC,t).(46)\tilde{V}_{A'}(t) = \hat{\mathbf{u}}_2 \cdot \tilde{\mathbf{E}}(\mathbf{x}_A, t), \quad \tilde{V}_C(t) = \hat{\mathbf{u}}_2 \cdot \tilde{\mathbf{E}}(\mathbf{x}_C, t). \quad (46)

Note that $\tilde{V}_{A'}(t)$ differs from $\tilde{V}_A(t)$ since the dipole antenna for $A'$ points along $\hat{\mathbf{u}}_2$, while that for $A$ points along $\hat{\mathbf{u}}_1$.

The expected value of the correlated response is then

r1(t)r2(t)=12Re{r~1(t)r~2(t)}=12Re{(V~A(t)V~B(t))(V~A(t)V~C(t))}=12Re{d3kd3kα=1,2β=1,2E~α(k)E~β(k)u^1ϵ^α(k^)u^2ϵ^β(k^)×ei(ωω)t[1eikxB][1eikxC]}=12Re{d3kP(k)α=1,2(u^1ϵ^α(k^))(u^2ϵ^α(k^))[1eikxB][1eikxC]}=12Re{0k2dkP(k)S2dΩk^α=1,2(u^1ϵ^α(k^))(u^2ϵ^α(k^))[1eikxB][1eikxC]}=0dωdE2dωΓ12(ω),(47)\begin{aligned} \langle r_1(t)r_2(t) \rangle &= \frac{1}{2} \text{Re} \{ \langle \tilde{r}_1(t) \tilde{r}_2^*(t) \rangle \} \\ &= \frac{1}{2} \text{Re} \left\{ \left\langle \left( \tilde{V}_A(t) - \tilde{V}_B(t) \right) \left( \tilde{V}_{A'}^*(t) - \tilde{V}_C^*(t) \right) \right\rangle \right\} \\ &= \frac{1}{2} \text{Re} \left\{ \int d^3\mathbf{k} \int d^3\mathbf{k}' \sum_{\alpha=1,2} \sum_{\beta=1,2} \left\langle \tilde{E}_\alpha(\mathbf{k}) \tilde{E}_\beta^*(\mathbf{k}') \right\rangle \hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}}) \hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\beta(\hat{\mathbf{k}}') \right. \\ &\quad \left. \times e^{-i(\omega-\omega')t} [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k}' \cdot \mathbf{x}_C}] \right\} \\ &= \frac{1}{2} \text{Re} \left\{ \int d^3\mathbf{k} P(k) \sum_{\alpha=1,2} (\hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) (\hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k} \cdot \mathbf{x}_C}] \right\} \\ &= \frac{1}{2} \text{Re} \left\{ \int_0^\infty k^2 dk P(k) \int_{S^2} d\Omega_{\hat{\mathbf{k}}} \sum_{\alpha=1,2} (\hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) (\hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k} \cdot \mathbf{x}_C}] \right\} \\ &= \int_0^\infty d\omega \frac{d\langle E^2 \rangle}{d\omega} \Gamma_{12}(\omega), \end{aligned} \quad (47)

where

Γ12(ω)=Re{18πS2dΩk^α=1,2(u^1ϵ^α(k^))(u^2ϵ^α(k^))[1eikxB][1eikxC]}.(48)\Gamma_{12}(\omega) = \text{Re} \left\{ \frac{1}{8\pi} \int_{S^2} d\Omega_{\hat{\mathbf{k}}} \sum_{\alpha=1,2} (\hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) (\hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) [1 - e^{i\mathbf{k} \cdot \mathbf{x}_B}] [1 - e^{-i\mathbf{k} \cdot \mathbf{x}_C}] \right\}. \quad (48)

If we ignore the contribution of the integrals involving $e^{i\mathbf{k} \cdot \mathbf{x}_B}$, $e^{-i\mathbf{k} \cdot \mathbf{x}_C}$, and $e^{i\mathbf{k} \cdot (\mathbf{x}_B - \mathbf{x}_C)}$, assuming as we did for sound that we are working in the short-wavelength approximation, then

Γ12(ω)18πS2d2Ωk^α=1,2(u^1ϵ^α(k^))(u^2ϵ^α(k^)),(49)\Gamma_{12}(\omega) \simeq \frac{1}{8\pi} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} \sum_{\alpha=1,2} (\hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) (\hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})), \quad (49)

which is the sky-averaged and polarization-averaged product of the inner products of $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$ with the polarization vectors $\hat{\epsilon}_\alpha(\hat{\mathbf{k}})$.

FIG. 6: Geometry for the calculation of the Hellings-and-Downs-type function for a pair of receivers constructed from electric dipole antennas responding to an isotropic, unpolarized stochastic electromagnetic field. Receiving system 1 is constructed from dipole antennas $A$ and $B$, which are both directed along $\hat{\mathbf{u}}_1$, the unit vector pointing from $A$ to $B$. Receiving system 2 is constructed from dipole antennas $A'$ and $C$, which are both directed along $\hat{\mathbf{u}}_2$, the unit vector pointing from $A'$ to $C$. The coordinate system is chosen so that the two dipole antennas $A$ and $A'$ are located at the origin. Dipole antenna $B$ is located on the $z$-axis, a distance $D_B$ from the origin, while dipole antenna $C$ is located in the $xz$-plane, a distance $D_C$ from the origin. The angle between the two receiving systems is denoted by $\zeta$, and is given by $\cos \zeta = \hat{\mathbf{u}}_1 \cdot \hat{\mathbf{u}}_2$. (Again, note the similarity of this figure and Fig. 3.)

The above integral for the correlation function $\Gamma_{12}(\omega)$ can easily be evaluated in the coordinate system shown in Fig. 6. In these coordinates, $\hat{\mathbf{u}}_1 = \hat{\mathbf{z}}$ and $\hat{\mathbf{u}}_2 = \sin \zeta \hat{\mathbf{x}} + \cos \zeta \hat{\mathbf{z}}$ . Using the expressions for the polarization vectors given in (36) it follows that

\begin{aligned} \hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_1(\hat{\mathbf{k}}) &= -\sin \theta, \\ \hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_2(\hat{\mathbf{k}}) &= 0, \\ \hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_1(\hat{\mathbf{k}}) &= \sin \zeta \cos \theta \cos \phi - \cos \zeta \sin \theta, \\ \hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_2(\hat{\mathbf{k}}) &= -\sin \zeta \sin \phi, \end{aligned} \quad (50)

so that

\begin{aligned} \Gamma_{12}(\omega) &\simeq \frac{1}{8\pi} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} \sum_{\alpha=1,2} (\hat{\mathbf{u}}_1 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}}))(\hat{\mathbf{u}}_2 \cdot \hat{\epsilon}_\alpha(\hat{\mathbf{k}})) \\ &= \frac{1}{8\pi} \int_0^{2\pi} d\phi \int_{-1}^1 d(\cos \theta) (-\sin \theta) \\ &\quad \times (\sin \zeta \cos \theta \cos \phi - \cos \zeta \sin \theta) \\ &= \frac{1}{4} \cos \zeta \int_{-1}^1 dx (1 - x^2) = \frac{1}{3} \cos \zeta. \end{aligned} \quad (51)

Thus

χ(ζ)Γ12(ω)13cosζ(52)\chi(\zeta) \equiv \Gamma_{12}(\omega) \simeq \frac{1}{3} \cos \zeta \quad (52)

and

r1r213cosζE2.(53)\langle r_1 r_2 \rangle \simeq \frac{1}{3} \cos \zeta \langle E^2 \rangle. \quad (53)

So the Hellings and Downs function for an isotropic, unpolarized stochastic electromagnetic field is simply proportional to the cosine of the angle between the two receiving systems.
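The closed-form result in Eq. (51) can also be checked by brute-force sky averaging. The Python sketch below is an illustrative check (the function name and grid sizes are ours, chosen arbitrarily); it evaluates the integral in Eq. (49) on a midpoint grid and compares it to $\frac{1}{3}\cos\zeta$:

```python
import math

def chi_em(zeta, n_theta=200, n_phi=200):
    """Midpoint-rule evaluation of the sky average in Eq. (49),
    with u1 along z and u2 = sin(zeta) x + cos(zeta) z as in Fig. 6."""
    u1 = (0.0, 0.0, 1.0)
    u2 = (math.sin(zeta), 0.0, math.cos(zeta))
    total = 0.0
    for i in range(n_theta):
        c = -1.0 + (i + 0.5) * 2.0 / n_theta          # midpoints in cos(theta)
        s = math.sqrt(1.0 - c * c)
        for j in range(n_phi):
            phi = (j + 0.5) * 2.0 * math.pi / n_phi
            e1 = (c * math.cos(phi), c * math.sin(phi), -s)   # theta-hat
            e2 = (-math.sin(phi), math.cos(phi), 0.0)         # phi-hat
            for e in (e1, e2):                                # sum over polarizations
                total += (sum(a * b for a, b in zip(u1, e))
                          * sum(a * b for a, b in zip(u2, e)))
    d_omega = (2.0 / n_theta) * (2.0 * math.pi / n_phi)
    return total * d_omega / (8.0 * math.pi)

zeta = 1.0
print(chi_em(zeta), math.cos(zeta) / 3.0)   # the two values should agree closely
```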

V. SUMMARY AND DISCUSSION

In the preceding two sections, we calculated Hellings-and-Downs-type functions $\chi(\zeta)$ for two simple scenarios: (i) omnidirectional microphones in an isotropic stochastic sound field, and (ii) electric dipole antennas in an isotropic, unpolarized stochastic electromagnetic field. The result for sound was trivial, $\chi(\zeta) = \text{const}$, and in retrospect did not even require a calculation. The result for the electromagnetic case was slightly more complicated, $\chi(\zeta) = \frac{1}{3} \cos(\zeta)$, as we had to take account of the polarization of the electromagnetic waves as well as the direction of the electric dipole antennas. But the basic steps that we went through to obtain the results were the same in both cases, and, in fact, can be abstracted to work for receivers in a general field, which we will denote here by $\Phi(t, \mathbf{x})$:21

1. Write down the most general expression for the field in terms of a Fourier expansion. Let the Fourier coefficients be random variables whose expectation values encode the statistical properties of the field—e.g., isotropic, unpolarized, $\dots$.

2. Using the expectation values of the Fourier coefficients, calculate $\langle \Phi^2(t, \mathbf{x}) \rangle$. Use this expression to determine how the power in the field is distributed as a function of frequency

Φ2(t,x)=0dωdΦ2dω.(54)\langle \Phi^2(t, \mathbf{x}) \rangle = \int_0^\infty d\omega \frac{d\langle \Phi^2 \rangle}{d\omega}. \quad (54)

3. Write down the response $r_I(t)$ of receiver $I$ to the field $\Phi(t, \mathbf{x})$. For a linear receiving system, the response will take the form of a convolution:

rI(t)=(RIΦ)(t)=dτd3yRI(τ,y)Φ(tτ,xIy).(55)\begin{aligned} r_I(t) &= (R_I * \Phi)(t) \\ &= \int d\tau \int d^3y R_I(\tau, \mathbf{y}) \Phi(t - \tau, \mathbf{x}_I - \mathbf{y}). \end{aligned} \quad (55)

For the simple examples we considered in Sections III and IV, $R_I(\tau, \mathbf{y})$ was proportional to a sum of a product of delta functions like $\delta(\tau)\delta^3(\mathbf{y})$ , but that need not be the case in general.

4. Using the expectation values of the Fourier coefficients, calculate the expected value of the correlated response $\langle r_1(t)r_2(t) \rangle$ for any pair of receivers. Use this expression to determine the correlation function $\Gamma_{12}(\omega)$ defined by

r1(t)r2(t)=0dωdΦ2dωΓ12(ω).(56)\langle r_1(t)r_2(t) \rangle = \int_0^\infty d\omega \frac{d\langle \Phi^2 \rangle}{d\omega} \Gamma_{12}(\omega). \quad (56)

5. For fixed frequency $\omega$, the correlation $\Gamma_{12}(\omega)$ is, by definition, the value of the Hellings and Downs function evaluated for the relative configuration of the two receiving systems. For example,

χ(ζ)=Γ12(ω)(57)\chi(\zeta) = \Gamma_{12}(\omega) \quad (57)

for the simple examples that we considered in Sections III and IV, where $\zeta$ is the angle between the two receiving systems, relative to an origin defined by the common microphone $A$ for sound, or the co-located dipole antennas $A$ and $A'$ for the electromagnetic field. For more complicated receivers, such as ground-based laser interferometers like LIGO, Virgo, etc., $\chi$ will be a function of several variables: the separation vector between the vertices of the two interferometers $\mathbf{s} \equiv \mathbf{x}_1 - \mathbf{x}_2$, as well as the unit vectors $\hat{\mathbf{u}}_1$, $\hat{\mathbf{v}}_1$, and $\hat{\mathbf{u}}_2$, $\hat{\mathbf{v}}_2$, which point along the arms of the two interferometers.

The above five steps are generic and will work for any scenario.

We conclude this paper by stating without proof the expression for the actual gravitational-wave pulsar timing Hellings and Downs function:

χ(ζ)18πS2d2Ωk^α=+,×12(u^1u^11+k^u^1):ϵα(k^)12(u^2u^21+k^u^2):ϵα(k^),(58)\chi(\zeta) \equiv \frac{1}{8\pi} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} \sum_{\alpha=+, \times} \frac{1}{2} \left( \frac{\hat{\mathbf{u}}_1 \otimes \hat{\mathbf{u}}_1}{1 + \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_1} \right) : \epsilon_\alpha(\hat{\mathbf{k}}) \frac{1}{2} \left( \frac{\hat{\mathbf{u}}_2 \otimes \hat{\mathbf{u}}_2}{1 + \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_2} \right) : \epsilon_\alpha(\hat{\mathbf{k}}), \quad (58)

where

u^Iu^I:ϵα(k^)a=13b=13uIauIbϵα,ab(k^),I={1,2}(59)\hat{\mathbf{u}}_I \otimes \hat{\mathbf{u}}_I : \epsilon_\alpha(\hat{\mathbf{k}}) \equiv \sum_{a=1}^3 \sum_{b=1}^3 u_I^a u_I^b \epsilon_{\alpha,ab}(\hat{\mathbf{k}}), \quad I = \{1, 2\} \quad (59)

and

ϵ+(k^)=θ^θ^ϕ^ϕ^,ϵ×(k^)=θ^ϕ^+ϕ^θ^,(60)\begin{aligned} \epsilon_+(\hat{\mathbf{k}}) &= \hat{\boldsymbol{\theta}} \otimes \hat{\boldsymbol{\theta}} - \hat{\boldsymbol{\phi}} \otimes \hat{\boldsymbol{\phi}}, \\ \epsilon_\times(\hat{\mathbf{k}}) &= \hat{\boldsymbol{\theta}} \otimes \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \otimes \hat{\boldsymbol{\theta}}, \end{aligned} \quad (60)

are the two gravitational-wave polarization tensors. (Here $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$ are unit vectors pointing from Earth to the two pulsars, and $\zeta$ is the angle between $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{u}}_2$ as shown in Fig. 3.) The extra factors of $1/(1 + \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_1)$ and $1/(1 + \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_2)$ that appear in (58)—as compared to the analogous electromagnetic expression (49)—come from the calculation of the timing residual response of an Earth-pulsar baseline to the gravitational-wave field, when integrating the metric perturbations $h_{ab}(t, \mathbf{x})$ along the photon world-line from the pulsar to Earth. This is a non-trivial example of the convolution described in Step 3 above, and the mathematical details needed to derive the precise form of (58) are outside the scope of this paper. (Readers who are interested in seeing a derivation of (58) are encouraged to consult Ref. 22.) But all in all, the pulsar timing Hellings and Downs function is just a sky-averaged and polarization-averaged product of two geometrical quantities, as is the case for any Hellings-and-Downs-type function. It is now just a matter of doing the integrations, which we leave to the motivated reader.23 The final result should be proportional to (1), which has been normalized by an overall multiplicative factor of 3.

Acknowledgments

The content of this paper was originally presented by FAJ during the student week of the 2011 IPTA summer school at the University of West Virginia. JDR acknowledges support from NSF Awards PHY-1205585 and CREST HRD-1242090. We also thank the referees for numerous suggestions that have improved the presentation of the paper.

Appendix A: Details of the calculation for the pulsar timing Hellings and Downs function

Here we fill in some of the details of the integration of the pulsar timing Hellings and Downs function (58), following the hints given in Endnote 23. The approach that we follow is based on similar presentations found in the appendices of Refs. 13, 15, and 22.

In the coordinate system shown in Fig. 3, the two pulsars are located in directions $\hat{\mathbf{u}}_1 = \hat{\mathbf{z}}$ and $\hat{\mathbf{u}}_2 = \sin \zeta \hat{\mathbf{x}} + \cos \zeta \hat{\mathbf{z}}$ , so that

k^u^1=cosθ,k^u^2=cosζcosθ+sinζsinθcosϕ.(A1)\begin{aligned}\hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_1 &= \cos \theta, \\ \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_2 &= \cos \zeta \cos \theta + \sin \zeta \sin \theta \cos \phi.\end{aligned}\tag{A1}

Using the definition (60) of the gravitational-wave polarization tensors $\epsilon_\alpha(\hat{\mathbf{k}})$ with $\hat{\theta}, \hat{\phi}$ defined in (36), it is fairly easy to show that

u^1u^1:ϵ+(k^)=sin2θ,u^1u^1:ϵ×(k^)=0,u^2u^2:ϵ+(k^)=(sinζcosθcosϕcosζsinθ)2sin2ζsin2ϕ,u^2u^2:ϵ×(k^)=2(sinζcosθcosϕcosζsinθ)sinζsinϕ.(A2)\begin{aligned}\hat{\mathbf{u}}_1 \otimes \hat{\mathbf{u}}_1 : \epsilon_+(\hat{\mathbf{k}}) &= \sin^2 \theta, \\ \hat{\mathbf{u}}_1 \otimes \hat{\mathbf{u}}_1 : \epsilon_\times(\hat{\mathbf{k}}) &= 0, \\ \hat{\mathbf{u}}_2 \otimes \hat{\mathbf{u}}_2 : \epsilon_+(\hat{\mathbf{k}}) &= (\sin \zeta \cos \theta \cos \phi - \cos \zeta \sin \theta)^2 - \sin^2 \zeta \sin^2 \phi, \\ \hat{\mathbf{u}}_2 \otimes \hat{\mathbf{u}}_2 : \epsilon_\times(\hat{\mathbf{k}}) &= -2(\sin \zeta \cos \theta \cos \phi - \cos \zeta \sin \theta) \sin \zeta \sin \phi.\end{aligned}\tag{A2}
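The contractions in (A2) can be verified directly by building the polarization tensors of Eq. (60) as outer products and contracting them against $\hat{\mathbf{u}}_I \otimes \hat{\mathbf{u}}_I$ as in Eq. (59). A small illustrative Python check (the variable names and test angles are ours):

```python
import math

def contract(u, eps):
    # u (x) u : eps = sum_ab u_a u_b eps_ab, cf. Eq. (59)
    return sum(u[i] * u[j] * eps[i][j] for i in range(3) for j in range(3))

zeta, theta, phi = 0.9, 1.3, 2.2          # arbitrary angles
u1 = (0.0, 0.0, 1.0)
u2 = (math.sin(zeta), 0.0, math.cos(zeta))
th = (math.cos(theta) * math.cos(phi), math.cos(theta) * math.sin(phi), -math.sin(theta))
ph = (-math.sin(phi), math.cos(phi), 0.0)

# Polarization tensors of Eq. (60) as outer products:
eps_plus = [[th[i] * th[j] - ph[i] * ph[j] for j in range(3)] for i in range(3)]
eps_cross = [[th[i] * ph[j] + ph[i] * th[j] for j in range(3)] for i in range(3)]

A = math.sin(zeta) * math.cos(theta) * math.cos(phi) - math.cos(zeta) * math.sin(theta)
B = math.sin(zeta) * math.sin(phi)

assert abs(contract(u1, eps_plus) - math.sin(theta) ** 2) < 1e-12   # first line of (A2)
assert abs(contract(u1, eps_cross)) < 1e-12                         # second line
assert abs(contract(u2, eps_plus) - (A * A - B * B)) < 1e-12        # third line
assert abs(contract(u2, eps_cross) - (-2.0 * A * B)) < 1e-12        # fourth line
```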

The quantities

FIα(k^)12(u^Iu^I1+k^u^I):ϵα(k^),I={1,2},α={+,×},(A3)F_I^\alpha(\hat{\mathbf{k}}) \equiv \frac{1}{2} \left( \frac{\hat{\mathbf{u}}_I \otimes \hat{\mathbf{u}}_I}{1 + \hat{\mathbf{k}} \cdot \hat{\mathbf{u}}_I} \right) : \epsilon_\alpha(\hat{\mathbf{k}}), \quad I = \{1, 2\}, \quad \alpha = \{+, \times\},\tag{A3}

which appear in (58) are then given by

F1+(k^)=12(1cosθ),F1×(k^)=0,F2+(k^)=12[(1cosζcosθsinζsinθcosϕ)2sin2ζsin2ϕ1+cosζcosθ+sinζsinθcosϕ],F2×(k^)=12[sin2ζcosθsin(2ϕ)sin(2ζ)sinθsinϕ1+cosζcosθ+sinζsinθcosϕ],(A4)\begin{aligned}F_1^+(\hat{\mathbf{k}}) &= \frac{1}{2}(1 - \cos \theta), \\ F_1^\times(\hat{\mathbf{k}}) &= 0, \\ F_2^+(\hat{\mathbf{k}}) &= \frac{1}{2} \left[ (1 - \cos \zeta \cos \theta - \sin \zeta \sin \theta \cos \phi) - \frac{2 \sin^2 \zeta \sin^2 \phi}{1 + \cos \zeta \cos \theta + \sin \zeta \sin \theta \cos \phi} \right], \\ F_2^\times(\hat{\mathbf{k}}) &= -\frac{1}{2} \left[ \frac{\sin^2 \zeta \cos \theta \sin(2\phi) - \sin(2\zeta) \sin \theta \sin \phi}{1 + \cos \zeta \cos \theta + \sin \zeta \sin \theta \cos \phi} \right],\end{aligned}\tag{A4}

where for $F_2^+(\hat{\mathbf{k}})$ we cancelled the denominator with part of the numerator to isolate the complicated $\phi$ -dependence.
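The cancellation just described can be confirmed numerically: building $F_2^+$ directly from (A1)-(A3) and comparing with the partially cancelled form quoted in (A4) gives identical values at arbitrary angles. A short illustrative sketch (function names and test angles are ours):

```python
import math

def F2_plus_direct(zeta, theta, phi):
    # (1/2)(u2 (x) u2 : eps_+)/(1 + k.u2), built directly from (A1)-(A3)
    A = math.sin(zeta) * math.cos(theta) * math.cos(phi) - math.cos(zeta) * math.sin(theta)
    B = -math.sin(zeta) * math.sin(phi)   # u2 . phi-hat
    mu = math.cos(zeta) * math.cos(theta) + math.sin(zeta) * math.sin(theta) * math.cos(phi)
    return 0.5 * (A * A - B * B) / (1.0 + mu)

def F2_plus_simplified(zeta, theta, phi):
    # the partially cancelled form quoted in (A4)
    mu = math.cos(zeta) * math.cos(theta) + math.sin(zeta) * math.sin(theta) * math.cos(phi)
    return 0.5 * ((1.0 - mu)
                  - 2.0 * (math.sin(zeta) * math.sin(phi)) ** 2 / (1.0 + mu))

for args in [(0.7, 1.2, 0.3), (2.1, 0.4, 4.0)]:
    assert abs(F2_plus_direct(*args) - F2_plus_simplified(*args)) < 1e-12
```

The cancellation works because $A^2 + B^2 + \mu^2 = 1$ for the orthonormal triad $\{\hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}}, \hat{\mathbf{k}}\}$, so $(A^2 - B^2)/(1 + \mu) = (1 - \mu) - 2B^2/(1 + \mu)$.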

In this reference frame, the pulsar timing Hellings and Downs function (58) simplifies to

χ(ζ)=18πS2d2Ωk^F1+(k^)F2+(k^)=116π11dx(1x)I(x,ζ),(A5)\chi(\zeta) = \frac{1}{8\pi} \int_{S^2} d^2\Omega_{\hat{\mathbf{k}}} F_1^+(\hat{\mathbf{k}}) F_2^+(\hat{\mathbf{k}}) = \frac{1}{16\pi} \int_{-1}^1 dx (1-x) I(x, \zeta),\tag{A5}

where $x \equiv \cos \theta$ , and

I(x,ζ)02πdϕF2+(k^)=1202πdϕ[(1xcosζ1x2sinζcosϕ)2sin2ζsin2ϕ1+xcosζ+1x2sinζcosϕ].(A6)\begin{aligned}I(x, \zeta) &\equiv \int_0^{2\pi} d\phi F_2^+(\hat{\mathbf{k}}) \\ &= \frac{1}{2} \int_0^{2\pi} d\phi \left[ (1 - x \cos \zeta - \sqrt{1-x^2} \sin \zeta \cos \phi) - \frac{2 \sin^2 \zeta \sin^2 \phi}{1 + x \cos \zeta + \sqrt{1-x^2} \sin \zeta \cos \phi} \right].\end{aligned}\tag{A6}

The first part of the integral for $I(x, \zeta)$ is trivial:

I1(x,ζ)1202πdϕ(1xcosζ1x2sinζcosϕ)=π(1xcosζ).(A7)I_1(x, \zeta) \equiv \frac{1}{2} \int_0^{2\pi} d\phi (1 - x \cos \zeta - \sqrt{1-x^2} \sin \zeta \cos \phi) = \pi(1 - x \cos \zeta).\tag{A7}

The second part can be evaluated using contour integration,24 which we illustrate below. Making the usual substitutions $z = e^{i\phi}$, $\cos \phi = \frac{1}{2}(z + z^{-1})$, etc., we obtain

I2(x,ζ)sin2ζ02πdϕsin2ϕ1+xcosζ+1x2sinζcosϕ=sin2ζCdzf(z),(A8)\begin{aligned} I_2(x, \zeta) &\equiv -\sin^2 \zeta \int_0^{2\pi} d\phi \frac{\sin^2 \phi}{1 + x \cos \zeta + \sqrt{1 - x^2} \sin \zeta \cos \phi} \\ &= -\sin^2 \zeta \oint_C dz f(z), \end{aligned} \quad (\text{A8})

where

f(z)=i(z21)2z2[4z(1+xcosζ)+21x2sinζ(z2+1)](A9)f(z) = \frac{i(z^2 - 1)^2}{z^2 [4z(1 + x \cos \zeta) + 2\sqrt{1 - x^2} \sin \zeta (z^2 + 1)]} \quad (\text{A9})

and $C$ is the unit circle in the complex $z$ -plane. The denominator of $f(z)$ can be factored using the quadratic formula for the expression in square brackets:

4z(1+xcosζ)+21x2sinζ(z2+1)=21x2sinζ(zz+)(zz),(A10)\begin{aligned} 4z(1 + x \cos \zeta) + 2\sqrt{1 - x^2} \sin \zeta (z^2 + 1) \\ = 2\sqrt{1 - x^2} \sin \zeta (z - z_+)(z - z_-), \end{aligned} \quad (\text{A10})

where

z+(1cosζ1±cosζ)(1x1±x),z1z+.(A11)z_+ \equiv -\sqrt{\left(\frac{1 \mp \cos \zeta}{1 \pm \cos \zeta}\right) \left(\frac{1 \mp x}{1 \pm x}\right)}, \quad z_- \equiv \frac{1}{z_+}. \quad (\text{A11})

In the above expression, the top signs correspond to the region $-\cos \zeta \leq x \leq 1$ and the bottom signs to the region $-1 \leq x \leq -\cos \zeta$. One can show that for both of these regions, $z_+$ is inside the unit circle $C$ (i.e., $|z_+| \leq 1$) and hence contributes to the contour integral, while $z_-$ is outside the unit circle and does not contribute. In addition, $z = 0$ lies inside the unit circle and contributes to the contour integral as a pole of order two. Using the residue theorem24

Cf(z)dz=2πiiRes(f,zi),(A12)\oint_C f(z) dz = 2\pi i \sum_i \text{Res}(f, z_i), \quad (\text{A12})

with

Res(f,z+)=limzz+{(zz+)f(z)}=i(z+z)21x2sinζ,Res(f,0)=limz0{ddz[z2f(z)]}=i(z++z)21x2sinζ,(A13)\begin{aligned} \text{Res}(f, z_+) &= \lim_{z \rightarrow z_+} \{(z - z_+) f(z)\} = \frac{i(z_+ - z_-)}{2\sqrt{1 - x^2} \sin \zeta}, \\ \text{Res}(f, 0) &= \lim_{z \rightarrow 0} \left\{ \frac{d}{dz} [z^2 f(z)] \right\} = \frac{i(z_+ + z_-)}{2\sqrt{1 - x^2} \sin \zeta}, \end{aligned} \quad (\text{A13})

it follows that

Cf(z)dz=2π(1±x)(1±cosζ),(A14)\oint_C f(z) dz = \frac{2\pi}{(1 \pm x)(1 \pm \cos \zeta)}, \quad (\text{A14})

from which

I(x,ζ)=π(1xcosζ)2π(1cosζ)(1±x).(A15)I(x, \zeta) = \pi(1 - x \cos \zeta) - 2\pi \frac{(1 \mp \cos \zeta)}{(1 \pm x)}. \quad (\text{A15})
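The contour-integration result (A15) can be cross-checked against a direct numerical $\phi$-integration of (A6). The sketch below is illustrative only (function names and sample points are ours); both branches of the sign convention are exercised:

```python
import math

def I_direct(x, zeta, n=4096):
    # Direct midpoint quadrature of the phi integral in (A6)
    s = math.sqrt(1.0 - x * x)
    total = 0.0
    dphi = 2.0 * math.pi / n
    for j in range(n):
        phi = (j + 0.5) * dphi
        mu = x * math.cos(zeta) + s * math.sin(zeta) * math.cos(phi)
        total += 0.5 * ((1.0 - mu)
                        - 2.0 * (math.sin(zeta) * math.sin(phi)) ** 2 / (1.0 + mu)) * dphi
    return total

def I_closed(x, zeta):
    # Closed form (A15); top signs for -cos(zeta) <= x <= 1, bottom otherwise
    c = math.cos(zeta)
    if x >= -c:
        return math.pi * (1.0 - x * c) - 2.0 * math.pi * (1.0 - c) / (1.0 + x)
    return math.pi * (1.0 - x * c) - 2.0 * math.pi * (1.0 + c) / (1.0 - x)

print(I_direct(0.3, 1.0), I_closed(0.3, 1.0))    # top-sign region, should agree
print(I_direct(-0.9, 1.0), I_closed(-0.9, 1.0))  # bottom-sign region, should agree
```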

It is now a relatively simple matter to evaluate the integral over $x$ to obtain $\chi(\zeta)$ :

χ(ζ)=116{11dx(1x)(1xcosζ)2(1+cosζ)1cosζdx2(1cosζ)cosζ1dx(1x)(1+x)}=116{2+23cosζ2(1+cosζ)(1cosζ)2(1cosζ)[2ln(21cosζ)(1+cosζ)]}=18+124cosζ+14(1cosζ)ln(1cosζ2)=16112(1cosζ2)+12(1cosζ2)ln(1cosζ2).(A16)\begin{aligned} \chi(\zeta) &= \frac{1}{16} \left\{ \int_{-1}^1 dx (1 - x)(1 - x \cos \zeta) - 2(1 + \cos \zeta) \int_{-1}^{-\cos \zeta} dx - 2(1 - \cos \zeta) \int_{-\cos \zeta}^1 dx \frac{(1 - x)}{(1 + x)} \right\} \\ &= \frac{1}{16} \left\{ 2 + \frac{2}{3} \cos \zeta - 2(1 + \cos \zeta)(1 - \cos \zeta) - 2(1 - \cos \zeta) \left[ 2 \ln \left( \frac{2}{1 - \cos \zeta} \right) - (1 + \cos \zeta) \right] \right\} \\ &= \frac{1}{8} + \frac{1}{24} \cos \zeta + \frac{1}{4} (1 - \cos \zeta) \ln \left( \frac{1 - \cos \zeta}{2} \right) \\ &= \frac{1}{6} - \frac{1}{12} \left( \frac{1 - \cos \zeta}{2} \right) + \frac{1}{2} \left( \frac{1 - \cos \zeta}{2} \right) \ln \left( \frac{1 - \cos \zeta}{2} \right). \end{aligned} \quad (\text{A16})
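As a final consistency check, the closed form in (A16) can be compared against a direct numerical $x$-integration of (A5) using the piecewise result (A15). The Python sketch below is illustrative (function names and grid size are ours); it also verifies the normalization remark that $3\chi(\zeta) \to 1/2$ as $\zeta \to 0$:

```python
import math

def chi_closed(zeta):
    # Last line of (A16), written in terms of u = (1 - cos(zeta))/2
    u = (1.0 - math.cos(zeta)) / 2.0
    return 1.0 / 6.0 - u / 12.0 + 0.5 * u * math.log(u)

def chi_numeric(zeta, n=100000):
    # Midpoint quadrature of chi = (1/16 pi) * Int_{-1}^{1} (1 - x) I(x, zeta) dx, Eq. (A5)
    c = math.cos(zeta)
    total, dx = 0.0, 2.0 / n
    for j in range(n):
        x = -1.0 + (j + 0.5) * dx
        if x >= -c:   # top signs in (A15)
            I = math.pi * (1.0 - x * c) - 2.0 * math.pi * (1.0 - c) / (1.0 + x)
        else:         # bottom signs
            I = math.pi * (1.0 - x * c) - 2.0 * math.pi * (1.0 + c) / (1.0 - x)
        total += (1.0 - x) * I * dx
    return total / (16.0 * math.pi)

zeta = 2.0
print(chi_numeric(zeta), chi_closed(zeta))   # should agree closely
```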

Note that the above expression differs from (1) by an overall normalization factor of 1/3. The normalization used in (1) was chosen so that for zero angular separation, $\chi(\zeta)|_{\zeta=0} = 1/2$ for two distinct pulsars. This was purely an aesthetic choice, which does not change the angular dependence (i.e., shape) of the Hellings and Downs curve.

* Electronic address: merlyn@alum.mit.edu

† Electronic address: joseph.romano@utb.edu

1 D. R. Lorimer and M. Kramer, Handbook of Pulsar Astronomy. Cambridge University Press, Cambridge, UK, Dec. 2004.

2 D. R. Lorimer, “Binary and Millisecond Pulsars,” Living Reviews in Relativity, vol. 11 (8), pp. 1–90, Nov. 2008.

3 I. H. Stairs, "Testing General Relativity with Pulsar Timing," Living Reviews in Relativity, vol. 6 (5), pp. 1–49, Sept. 2003.

4 J. M. Weisberg, D. J. Nice, and J. H. Taylor, “Timing Measurements of the Relativistic Binary Pulsar PSR B1913+16,” Astrophys. J., vol. 722, pp. 1030–1034, Oct. 2010.

5 R. A. Hulse and J. H. Taylor, “Discovery of a pulsar in a binary system,” Astrophys. J., vol. 195, no. 2, pp. L51–L53, 1975.

6 F. B. Estabrook and H. D. Wahlquist, “Response of Doppler spacecraft tracking to gravitational radiation,” General Relativity and Gravitation, vol. 6, pp. 439–447, Oct. 1975.

7 M. V. Sazhin, “Opportunities for detecting ultralong gravitational waves,” Soviet Ast., vol. 22, pp. 36–38, Feb. 1978.

8 S. Detweiler, “Pulsar timing measurements and the search for gravitational waves,” Astrophys. J., vol. 234, pp. 1100–1104, Dec. 1979.

9 A pulsar timing model predicts the arrival times of the pulses given values for the pulsar’s spin frequency, frequency derivative, location on the sky, proper motion with respect to the solar system barycenter, its orbital parameters if the pulsar is in a binary, etc.2 The values of these parameters are typically determined by an iterative least-squares fitting procedure, which minimizes the root-mean-squared (rms) deviation of the resultant timing residuals. Systematic errors in the timing model parameters can usually be identified by this iterative procedure, but unmodelled processes in the timing model will lead to errors in the timing residuals that cannot easily be removed.

10 R. S. Foster and D. C. Backer, “Constructing a pulsar timing array,” Astrophys. J., vol. 361, pp. 300–308, Sept. 1990.

11 R. W. Hellings and G. S. Downs, “Upper limits on the isotropic gravitational radiation background from pulsar timing analysis,” Astrophys. J., vol. 265, pp. L39–L42, 1983.

12 K. J. Lee, F. A. Jenet, and R. H. Price, “Pulsar Timing as a Probe of Non-Einsteinian Polarizations of Gravitational Waves,” Astrophys. J., vol. 685, pp. 1304–1319, Oct. 2008.

13 C. M. F. Mingarelli, T. Sidery, I. Mandel, and A. Vecchio, “Characterizing gravitational wave stochastic background anisotropy with pulsar timing arrays,” Phys. Rev. D, vol. 88, p. 062005(17), Sept. 2013.

14 S. R. Taylor and J. R. Gair, “Searching for anisotropic gravitational-wave backgrounds using pulsar timing arrays,” Phys. Rev. D, vol. 88, p. 084001(25), Oct. 2013.

15 J. Gair, J. D. Romano, S. Taylor, and C. M. F. Mingarelli, “Mapping gravitational-wave backgrounds using methods from CMB analysis: Application to pulsar timing arrays,” Phys. Rev. D, vol. 90, p. 082001(44), Oct. 2014.

16 R. M. Shannon, V. Ravi, W. A. Coles, G. Hobbs, M. J. Keith, R. N. Manchester, J. S. B. Wyithe, M. Bailes, N. D. R. Bhat, S. Burke-Spolaor, J. Khoo, Y. Levin, S. Osłowski, J. M. Sarkissian, W. van Straten, J. P. W. Verbiest, and J.-B. Wang, “Gravitational-wave limits from pulsar timing constrain supermassive black hole evolution,” Science, vol. 342, no. 6156, pp. 334–337, 2013.

17 We are assuming here that the two pulsars—even for the case $\zeta = 0$ —are distinct (i.e., they do not occupy the same physical location in space). If we consider the same pulsar, as would be the case for an autocorrelation calculation, then the right-hand side of (1) should have an extra term equal to $\delta(\zeta)/2$ .

18 This statement is a generalization (to fields) of the mathematical result that the Fourier transform of the probability distribution $p(x)$ for a random variable $x$ (i.e., the so-called characteristic function of the random variable) can be written as a power series with coefficients given by the expectation values $\langle x^k \rangle$ for $k = 1, 2, \dots$ .19 Thus, the expectation values $\langle x^k \rangle$ for $k = 1, 2, \dots$ completely determine the probability distribution $p(x)$ and hence the statistical properties of the random variable $x$ .
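The moment-series claim in note 18 is easy to check numerically for a concrete case. The sketch below (our illustration, not part of the paper) builds the characteristic function of a standard normal random variable purely from its moments $\langle x^k \rangle$ and compares it to the known closed form $e^{-t^2/2}$.

```python
import math

def normal_moment(k):
    # Moments of the standard normal: odd moments vanish;
    # even moments are (k-1)!! = k! / (2^(k/2) (k/2)!).
    if k % 2 == 1:
        return 0.0
    half = k // 2
    return math.factorial(k) / (2**half * math.factorial(half))

def char_func_from_moments(t, n_terms=20):
    # Characteristic function phi(t) = sum_k (i t)^k <x^k> / k!.
    # Only even k contribute here, and (i t)^(2m) = (-1)^m t^(2m),
    # so the truncated series is real.
    total = 0.0
    for k in range(0, n_terms, 2):
        m = k // 2
        total += (-1)**m * t**k * normal_moment(k) / math.factorial(k)
    return total

# Exact characteristic function of the standard normal is exp(-t^2/2);
# the truncated moment series reproduces it to high accuracy.
t = 0.7
print(abs(char_func_from_moments(t) - math.exp(-t**2 / 2)))
```

This is the sense in which the moments completely determine the probability distribution: they fix the characteristic function, whose inverse Fourier transform is $p(x)$.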

19 C. W. Helstrom, Statistical Theory of Signal Detection. Pergamon Press, Oxford, United Kingdom, 1968.

20 Recall: $\nabla^2 = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right)$ in Cartesian coordinates $(x, y, z)$ .

21 The field might actually be a tensor field, like the gravitational-wave field $h_{ab}(t, \mathbf{x})$ , and hence should have tensor indices in general. But for simplicity, we will ignore that complication here.

22 M. Anholm, S. Ballmer, J. D. E. Creighton, L. R. Price, and X. Siemens, “Optimal strategies for gravitational wave stochastic background searches in pulsar timing data,” Phys. Rev. D, vol. 79, p. 084030(19), Apr. 2009.

23 Hint: Work in the coordinate system shown in Fig. 3 with the Earth at the origin and the two pulsars located along the $z$ -axis and in the $xz$ -plane, respectively. Evaluate (59) in this frame using (60) and the expressions for $\hat{\theta}$ , $\hat{\phi}$ given in (36). Finally, use contour integration to do the integral over the azimuthal angle $\phi$ . It is a long calculation, but worth the effort. If you have trouble completing the calculation, see Appendix A for more details.
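The azimuthal integrals referred to in note 23 are of the type handled by contour integration on the unit circle. As a warm-up (a standard textbook example, not the paper’s specific integrand), one can verify numerically that for $a > 1$, $\int_0^{2\pi} d\phi/(a + \cos\phi) = 2\pi/\sqrt{a^2 - 1}$, the closed form obtained by substituting $z = e^{i\phi}$ and summing residues inside $|z| = 1$.

```python
import math

def integral_numeric(a, n=2000):
    # Midpoint-rule evaluation of I(a) = int_0^{2pi} dphi / (a + cos phi);
    # for smooth periodic integrands this converges extremely fast.
    h = 2 * math.pi / n
    return sum(h / (a + math.cos((k + 0.5) * h)) for k in range(n))

def integral_contour(a):
    # Closed form from the residue theorem: 2 pi / sqrt(a^2 - 1), a > 1.
    return 2 * math.pi / math.sqrt(a * a - 1)

a = 2.0
print(abs(integral_numeric(a) - integral_contour(a)))  # agreement to high accuracy
```

The same substitution $z = e^{i\phi}$ turns the azimuthal integral in the Hellings and Downs calculation into a residue computation, which is the route suggested by the hint.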

24 M. L. Boas, Mathematical methods in the physical sciences. John Wiley & Sons, Inc., Hoboken, NJ, 2006.
