Feedback Policies for Measurement-based Quantum State Manipulation

Shuangshuang Fu

College of Engineering and Computer Science
The Australian National University, Canberra, Australia

E-mail: shuangshuang.fu@anu.edu.au

Guodong Shi

College of Engineering and Computer Science
The Australian National University, Canberra, Australia

E-mail: guodong.shi@anu.edu.au

Alexandre Proutiere

School of Electrical Engineering
Royal Institute of Technology, Stockholm, Sweden

E-mail: alepro@kth.se

Matthew R. James

College of Engineering and Computer Science
The Australian National University, Canberra, Australia

E-mail: matthew.james@anu.edu.au

Abstract. In this paper, we propose feedback designs for manipulating a quantum state to a target state by performing sequential measurements. In light of Belavkin's quantum feedback control theory, for a given set of (projective or non-projective) measurements and a given time horizon, we show that finding the measurement selection policy that maximizes the probability of successful state manipulation is an optimal control problem for a controlled Markovian process. The optimal policy is Markovian and can be obtained by dynamic programming. Numerical examples indicate that making use of feedback information significantly improves the success probability compared to the classical scheme without feedback. We also consider other objective functionals, including maximizing the expected fidelity to the target state and minimizing the expected arrival time; the connections and differences among these objectives are also discussed.

PACS numbers: 03.67.Ac

Keywords: Feedback Policy, Quantum State Manipulation, Quantum Measurement

1. Introduction

One fundamental difference between classical and quantum mechanics is the unavoidable back-action of quantum measurement. On the one hand, this back-action is generally considered detrimental to the implementation of effective quantum control. On the other hand, it also offers the possibility of using the change caused by measurement as a new route to manipulating the state of the system [1, 9]. A basic problem in quantum physics and engineering is how to drive a quantum system to a desired target state. In recent years there have been studies on preparing a given target state from a given initial state using sequential (projective or non-projective) measurements [13, 14, 15, 16, 17].

A quantum measurement $E$ is described by a collection of measurement operators

$$\left\{ \mathbf{M}_E(m) \right\}_{m \in \mathcal{Y}},$$

where $\mathcal{Y}$ is an index set for measurement outcomes and the measurement operators satisfy

$$\sum_{m \in \mathcal{Y}} \mathbf{M}_E(m)^\dagger \mathbf{M}_E(m) = I.$$

Suppose we perform the quantum measurement $E$ on a density operator $\rho$. The probability of obtaining result $m \in \mathcal{Y}$ is $\text{tr}(\mathbf{M}_E(m)\rho\mathbf{M}_E(m)^\dagger)$, and when outcome $m$ occurs, the post-measurement state of the quantum system becomes

$$\mathcal{M}_E^m(\rho) = \frac{\mathbf{M}_E(m)\rho\mathbf{M}_E(m)^\dagger}{\text{tr}(\mathbf{M}_E(m)\rho\mathbf{M}_E(m)^\dagger)}.$$

If we are unaware of the measurement result, the unconditional state of the quantum system after the measurement can be expressed as

$$\mathcal{M}_E(\rho) = \sum_{m \in \mathcal{Y}} \mathbf{M}_E(m)\rho\mathbf{M}_E(m)^\dagger.$$

If $\{\mathbf{M}_E(m)\}_{m \in \mathcal{Y}}$ are orthogonal projectors, i.e., the $\mathbf{M}_E(m)$ are Hermitian and $\mathbf{M}_E(l)\mathbf{M}_E(m) = \delta_{lm}\mathbf{M}_E(m)$, then $E$ is a projective measurement. The idea of quantum state manipulation using sequential measurements [13, 14, 15, 16, 17] is as follows. By consecutively performing the measurements $E_1, \dots, E_N$, the unconditional state of a quantum system with initial state $\rho_0$ can be expressed as

$$\rho_N^u = \mathcal{M}_{E_N} \circ \mathcal{M}_{E_{N-1}} \circ \dots \circ \mathcal{M}_{E_1}(\rho_0).$$

It has been shown, analytically or numerically, how to select the measurements $E_1, \dots, E_N$ so that $\rho_N^u$ asymptotically tends to a desired target state [13, 14, 15, 16, 17].
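As a concrete numerical illustration (not from the paper), the measurement rules above can be sketched in a few lines of NumPy. The two-outcome projective measurement at angle $\theta$ and the horizon $N = 3$ are illustrative choices:

```python
import numpy as np

def measurement(theta):
    """Two-outcome projective measurement onto the orthonormal qubit
    basis at angle theta (an illustrative choice of measurement)."""
    phi = np.array([np.cos(theta), np.sin(theta)])
    psi = np.array([-np.sin(theta), np.cos(theta)])
    return [np.outer(phi, phi), np.outer(psi, psi)]

def outcome_probs(rho, ops):
    """p(m) = tr(M(m) rho M(m)^dagger)."""
    return [np.trace(M @ rho @ M.conj().T).real for M in ops]

def conditional_update(rho, M):
    """Post-measurement state M rho M^dagger / tr(M rho M^dagger)."""
    out = M @ rho @ M.conj().T
    return out / np.trace(out).real

def unconditional_update(rho, ops):
    """M_E(rho) = sum_m M(m) rho M(m)^dagger."""
    return sum(M @ rho @ M.conj().T for M in ops)

rho0 = np.diag([1.0, 0.0])                  # initial state |0><0|
E = measurement(np.pi / 6)
p = outcome_probs(rho0, E)                  # [0.75, 0.25]
rho_post = conditional_update(rho0, E[0])   # pure state at angle pi/6

# unconditional state after performing measurements at angles
# pi/6, pi/3, pi/2 in turn (the composition rho_N^u with N = 3)
rho_u = rho0
for i in range(1, 4):
    rho_u = unconditional_update(rho_u, measurement(np.pi * i / 6))
print(p, rho_u[1, 1].real)                  # population of |1> is 0.5625
```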

Making use of feedback information for quantum measurement and detection actually has a long history; quantum detection can be viewed as the dual problem of state manipulation. Dolinar's receiver is a feedback strategy for discriminating between two possible quantum states with a given prior distribution at the minimum probability of error [4]. This is known as the quantum detection problem, and Helstrom's bound characterizes the minimum probability of error for discriminating any two non-orthogonal states [6]. Quantum detection identifies uncertain quantum states via projective measurements, whereas the quantum state manipulation considered here drives a known quantum state to a given target, again via projective measurements. Dolinar's scheme follows a similar structure, in which each measurement is selected based on previous measurement results on different segments of the pulse; it was recently realized experimentally [5]. See [8] for a survey of the extensive studies of feedback (adaptive) design in quantum tomography.

In this paper, we propose a feedback design for quantum state manipulation via sequential measurements. For a given set of measurements and a given time horizon, we show that finding the measurement selection policy that maximizes the probability of successful state manipulation can be solved by dynamic programming. The derivation of the optimal policy falls within Belavkin's quantum feedback theory [1]. Numerical examples are given which indicate that the proposed feedback policy significantly improves the success probability compared to the classical policy of consecutive projections without feedback. In particular, the probability of reaching the target state $|1\rangle$ from the initial state $|0\rangle$ under the feedback policy reaches 0.9968 using merely 10 measurements. Other optimality criteria, such as maximal expected fidelity and minimal arrival time, are also discussed, along with the connections and differences among the different criteria.

The remainder of the paper is organised as follows. In the first part of Section 2, we revisit a simple example of reaching $|1\rangle$ from $|0\rangle$ using sequential projective measurements [17], and show how feedback policies work: even a little feedback can make a nontrivial improvement. The rest of Section 2 is devoted to a rigorous treatment of the problem definition and to finding the optimal feedback policy from classical quantum feedback theory; numerical examples are also given there. Section 3 investigates other optimality criteria, and Section 4 concludes the paper.

2. Quantum State Manipulation by Feedback

2.1. A Simple Example: Why Feedback?

Consider now a qubit system, i.e., a two-dimensional Hilbert space. The initial state of the quantum system is $|0\rangle\langle 0|$ , and the target state is $|1\rangle\langle 1|$ . Given $T \geq 2$ projective measurements from the set

$$\mathcal{E} = \left\{ E_i, \quad i = 1, 2, \dots, T \right\}, \quad (1)$$

where $E_i = \{|\phi_i\rangle\langle\phi_i|, |\psi_i\rangle\langle\psi_i|\}$ with

$$|\phi_i\rangle = \cos\left(\frac{\pi i}{2T}\right)|0\rangle + \sin\left(\frac{\pi i}{2T}\right)|1\rangle$$

and

$$|\psi_i\rangle = -\sin\left(\frac{\pi i}{2T}\right)|0\rangle + \cos\left(\frac{\pi i}{2T}\right)|1\rangle.$$

Note that the choice of $E_i$ follows the optimal selection given in [17].

The strategy in [16, 17] is simply to perform the $T$ measurements in turn, from $E_1$ to $E_T$; we call this the naive policy. The probability of successfully driving the state from $|0\rangle$ to $|1\rangle$ in $T$ steps under this naive policy is denoted by $p(T)$. One can easily calculate that $p(3) \approx 0.56$ and $p(10) \approx 0.8$.

Let $T = 3$ . We next show that even only a bit of measurement feedback can improve the performance of the strategy significantly.

S1. After the first measurement $E_1$ has been made, perform $E_3$ at the second step if the outcome is $|\psi_1\rangle$, and follow the naive policy for all other actions.

Following this scheme, the probability of arriving at $|1\rangle$ in three steps becomes about 0.66, in contrast with $p(3) \approx 0.56$ under the naive scheme. The improvement in the probability of success comes from the fact that a feedback decision is made based on the outcome of $E_1$, so that in S1 a better measurement is selected between $E_2$ and $E_3$.
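The two success probabilities above can be reproduced by enumerating all measurement branches. The sketch below is our own (not from the paper); it tracks pure states by their angle on the real unit circle, so the pair $E_i$ acts as a projection onto the basis at angle $\theta_i = \pi i/2T$:

```python
import numpy as np

T = 3
thetas = [np.pi * i / (2 * T) for i in range(1, T + 1)]  # E_1, E_2, E_3

def branch(a, theta):
    """Measuring a pure state at angle a with the projective pair at
    angle theta: list of (probability, post-measurement angle)."""
    return [(np.cos(theta - a) ** 2, theta),
            (np.sin(theta - a) ** 2, theta + np.pi / 2)]

def success(a, policy, k=0):
    """Probability of ending in |1> (angle pi/2 mod pi) after step T,
    following policy(k, a) -> measurement angle at step k."""
    if k == T:
        return float(np.isclose(np.cos(a) ** 2, 0.0))
    return sum(p * success(b, policy, k + 1)
               for p, b in branch(a, policy(k, a)))

naive = lambda k, a: thetas[k]             # perform E_1, E_2, E_3 in turn

def s1(k, a):
    """S1: after E_1, perform E_3 at the second step if the outcome
    was |psi_1> (angle theta_1 + pi/2); otherwise act as naive."""
    if k == 1 and np.isclose(np.cos(a - thetas[0] - np.pi / 2) ** 2, 1.0):
        return thetas[2]
    return thetas[k]

p_naive = success(0.0, naive)              # 0.5625
p_s1 = success(0.0, s1)                    # 0.65625, i.e. ~0.66
print(p_naive, p_s1)
```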

2.2. Optimal Policy from Quantum Feedback Control

We now present the solution to the optimal policy for the considered quantum state manipulation, in light of the classical quantum feedback control theory derived by Belavkin [1] (see also [2] and [3] for a thorough treatment).

Consider a quantum system whose state is described by density operators over the qubit space. Let $\mathcal{E}$ be a given finite set of measurements serving as all feasible control actions. For each $E \in \mathcal{E}$ , we write

$$E = \left\{ \mathbf{M}_E(y) \right\}_{y \in \mathcal{Y}},$$

where $\mathcal{Y}$ is a finite index set of measurement outputs and $\mathbf{M}_E(y)$ is the measurement operator corresponding to outcome $y \in \mathcal{Y}$ . Time is slotted with a horizon $N \geq 1$ . The initial state of the quantum system is $\rho_0$ , and the target state is assumed to be, for the ease of presentation, $|1\rangle\langle 1|$ .

For $0 \leq k \leq N - 1$, we denote by $u_k \in \mathcal{E}$ the measurement performed at time $k$, and the post-measurement state after $u_k$ has been performed is denoted $\rho_{k+1}$. Let $y_k \in \mathcal{Y}$ be the outcome of $u_k$. The measurement sequence $\{u_k\}_{k=0}^{N-1}$ is selected by a policy $\pi = \{\pi_k\}_{k=0}^{N-1}$, where each $\pi_k$ takes values in the set $\mathcal{E}$ such that $u_k = \pi_k(y_0, \dots, y_{k-1}; u_0, \dots, u_{k-1})$ can depend on all previously selected measurements and their outcomes, for all $k = 0, \dots, N - 1$. Here for convenience we have denoted $u_{-1} = y_{-1} = \emptyset$. We can now express the closed-loop evolution of $\{\rho_k\}_{k=0}^N$ as

$$\rho_{k+1} = \mathcal{M}_{u_k}^{y_k}(\rho_k) = \frac{\mathbf{M}_{u_k}(y_k)\rho_k\mathbf{M}_{u_k}^\dagger(y_k)}{\text{tr}\left(\mathbf{M}_{u_k}(y_k)\rho_k\mathbf{M}_{u_k}^\dagger(y_k)\right)}, \quad (2)$$

where $k = 0, \dots, N-1$ . The distribution of $y_k$ is given by

$$\mathbb{P}\left(y_k = y \in \mathcal{Y} \mid u_k, \rho_k\right) = \text{tr}\left(\mathbf{M}_{u_k}(y)\rho_k\mathbf{M}_{u_k}^\dagger(y)\right),$$

where $k = 0, \dots, N-1$ . Clearly ${\rho_k}_0^N$ defines a Markov chain.

We define

$$\mathbf{J}_\pi(N) := \mathbb{P}_\pi\left(\rho_N = |1\rangle\langle 1|\right)$$

as the probability of successfully manipulating the quantum state to the target density matrix $|1\rangle\langle 1|$, where $\mathbb{P}_\pi$ is the probability measure induced by $\pi$. We also define the cost-to-go function

$$\mathbf{V}(t, x) = \max_{\pi} \mathbb{P}\left(\rho_N = |1\rangle\langle 1| \mid \rho_{N-t} = x\right)$$

for $t = 0, 1, \dots, N$ . Following standard theories for controlled Markovian process [12, 10], the following conclusion holds.

Proposition 1 The cost-to-go function $\mathbf{V}(t, x)$ satisfies the following recursion

$$\mathbf{V}(t, x) = \max_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}\left(y \mid u, x\right) \mathbf{V}\left(t-1, \mathcal{M}_u^y(x)\right), \quad (3)$$

where $t = 1, \dots, N$ , with boundary condition $\mathbf{V}(0, x) = 1$ if $x = |1\rangle\langle 1|$ , and $\mathbf{V}(0, x) = 0$ otherwise. The maximum arrival probability $\max_{\pi} \mathbf{J}\pi(N)$ is given by $\max{\pi} \mathbf{J}\pi(N) = \mathbf{V}(N, \rho_0)$ . The optimal policy $\pi^* = {\pi_k^*}{k=0}^{N-1}$ is Markovian, and is given by

$$\pi_k^*(\rho_k) = \arg \max_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}\left(y \mid u, \rho_k\right) \mathbf{V}\left(N-k-1, \mathcal{M}_u^y(\rho_k)\right) \quad (4)$$

for $k = 0, \dots, N-1$ .

2.3. Numerical Examples

We now compare the performance of the policies with and without feedback. Again we consider driving a two-level quantum system from state $|0\rangle$ to $|1\rangle$ . The available measurements are in the set

$$\mathcal{E} = \left\{E_i, \quad i = 1, 2, \dots, T\right\},$$

as given in Eq.(1).

It is clear from this objective that $E_* = \{|0\rangle\langle 0|, |1\rangle\langle 1|\}$ must be a measurement in the set $\mathcal{E}$ for $\mathbf{J}_\pi(N)$ to be a non-trivial function if all measurements in $\mathcal{E}$ are projective.

Figure 1. The probabilities of successfully reaching $|1\rangle$ from the initial state $|0\rangle$ using the naive policy $\pi^n$ and the optimal feedback policy $\pi^*$, respectively.

2.3.1. Feedback vs. Non-Feedback First of all, we take $T = N$. The naive policy takes projections in turn from $E_1$ to $E_N$, denoted $\pi^n = \{\pi_k^n\}_{k=0}^{N-1}$. We solve for the optimal feedback policy $\pi^* = \{\pi_k^*\}_{k=0}^{N-1}$ using Eq. (4). It is clear that $\pi^n$ is deterministic with $\pi_k^n = E_{k+1}$, while $\pi^*$ is Markovian with $\pi_k^*$ depending on $\rho_k$. Correspondingly, their arrival probabilities in $N$ steps are given by $J_{\pi^n}(N)$ and $J_{\pi^*}(N)$, respectively. In Figure 1, we plot $J_{\pi^n}(N)$ and $J_{\pi^*}(N)$ for $N = 3, \dots, 10$. As the figure clearly shows, the probability of success is improved significantly. In fact, for $N = 10$ we already have $J_{\pi^*}(N) = 0.9968$.

Moreover, as an illustration of the different actions between the naive and feedback strategies, we plot their policies for $N = 5$ in Tables I and II, respectively.

2.3.2. Influence of Measurement Set We now investigate how the size of the available measurement set $\mathcal{E}$ influences the successful arrival probability in $N$ steps under optimal feedback. In this case, the optimal arrival probability $J_{\pi^*}(N)$ is also a function of $T$ , and we therefore rewrite $J_{\pi^*}(N) = J_{\pi^*}^T(N)$ .

In Figure 2, we plot $J_{\pi^*}^T(N)$ for $T = 10, 100, 1000$. The numerical results show that as $T$ increases, $J_{\pi^*}^T(N)$ quickly tends to a limiting curve, suggesting the existence of a fundamental upper bound on the arrival probability in $N$ steps using sequential projections from an arbitrarily large measurement set.

3. More Optimality Criteria

In this section, we discuss two other useful optimality criteria, to maximize the expected fidelity with the target state, or to minimize the expected time it takes to arrive at the

\pi^n k = 0 k = 1 k = 2 k = 3 k = 4
|0\rangle E_1 * * * *
|1\rangle * * * * *
|\phi_1\rangle * E_2 * * *
|\psi_1\rangle * E_2 * * *
|\phi_2\rangle * * E_3 * *
|\psi_2\rangle * * E_3 * *
|\phi_3\rangle * * * E_4 *
|\psi_3\rangle * * * E_4 *
|\phi_4\rangle * * * * E_5
|\psi_4\rangle * * * * E_5

Table 1. The actions of the naive strategy $\pi^n$ for preparing the target state $|1\rangle$, starting from $|0\rangle$, for $N = 5$. Here $E_i$ denotes the measurement that the policy chooses, and * means that the state cannot occur at the corresponding step.

\pi^* k = 0 k = 1 k = 2 k = 3 k = 4
|0\rangle E_2 E_2 E_3 E_3 E_5
|1\rangle E_5 E_5 E_5 E_5 E_5
|\phi_1\rangle E_3 E_3 E_3 E_3 E_5
|\psi_1\rangle E_5 E_5 E_5 E_5 E_5
|\phi_2\rangle E_4 E_4 E_3 E_3 E_5
|\psi_2\rangle E_1 E_1 E_1 E_1 E_5
|\phi_3\rangle E_4 E_4 E_4 E_4 E_5
|\psi_3\rangle E_1 E_1 E_2 E_2 E_5
|\phi_4\rangle E_5 E_5 E_5 E_5 E_5
|\psi_4\rangle E_2 E_2 E_2 E_2 E_5

Table 2. The actions using optimal feedback policy $\pi^*$ to prepare the target state $|1\rangle$ for $N = 5$ .

target state.

3.1. Maximal Expected Fidelity

Given two density operators $\rho$ and $\sigma$ , their fidelity is defined by [7]

F(ρ,σ)=trρσρ.F(\rho, \sigma) = \text{tr} \sqrt{\sqrt{\rho} \sigma \sqrt{\rho}}.Figure 2. The probabilities of successfully reaching $|1\rangle$ from the initial state $|0\rangle$ using different sizes of measurement set by feedback strategy.

Fidelity measures the closeness of two quantum states. Since our target state $|1\rangle\langle 1|$ is a pure state, we have

$$\text{tr} \sqrt{\sqrt{|1\rangle\langle 1|}\, \sigma \sqrt{|1\rangle\langle 1|}} = \sqrt{\langle 1 | \sigma | 1 \rangle}.$$

Alternatively, we can consider the following objective functional

$$\tilde{J}_\pi(N) = \mathbb{E}_\pi \left[ \langle 1 | \rho_N | 1 \rangle \right],$$

and the goal is to find a policy that maximizes $\tilde{J}_\pi(N)$ .

For the two objective functionals $J_\pi(N)$ and $\tilde{J}_\pi(N)$, we denote the corresponding optimal policies by $\pi^*(N) = \{\pi_k^*(N)\}_{k=0}^{N-1}$ and $\pi^\diamond(N) = \{\pi_k^\diamond(N)\}_{k=0}^{N-1}$, respectively, where the time horizon $N$ is also indicated.

Let $\pi^\diamond(N-1) \oplus E_*$ be the policy that follows $\pi^\diamond(N-1)$ for $k = 0, \dots, N-2$ and takes value $E_*$ for $k = N-1$ . Let $\rho_k^u$ be the unconditional density operator at step $k$ for $k = 0, \dots, N-1$ . The following equations hold:

$$\begin{aligned} \tilde{J}_\pi(N-1) &= \mathbb{E}_\pi \left[ \langle 1 | \rho_{N-1} | 1 \rangle \right] \\ &= \text{tr} \left( \rho_{N-1}^u |1\rangle\langle 1| \right) \\ &= \mathbb{P}_{\pi'} \left( \rho_N = |1\rangle\langle 1| \right), \end{aligned} \quad (5)$$

for any $\pi = \{\pi_k\}_{k=0}^{N-2}$, where $\pi' = \pi \oplus E_* = \{\pi_k\}_{k=0}^{N-1}$ with $\pi_{N-1} = E_*$. As a result, the following relation holds between the optimal policies under the two objectives $J_\pi(N)$ and $\tilde{J}_\pi(N)$.

Proposition 2 It holds that $\max_{\pi} J_{\pi}(N) = \max_{\pi} \tilde{J}_{\pi}(N - 1)$. In fact, $\pi^{*}(N) = \pi^{\diamond}(N - 1) \oplus E_*$ with $E_* = \{|0\rangle\langle 0|, |1\rangle\langle 1|\}$.

The intuition behind Proposition 2 is that if one is to successfully project onto the target state at step $N$, one should get as close as possible to the target state at step $N - 1$. We also know from Proposition 2 that the maximal expected fidelity problem in $N$ steps can be solved via the solution of maximizing the arrival probability in $N + 1$ steps.

Similarly, we can find the optimal policy $\pi^{\diamond}$ for the objective $\tilde{J}_{\pi}(N)$ by dynamic programming. Define the cost-to-go function $\tilde{V}(k, x)$ for $\tilde{J}_{\pi}(N)$ as

$$\tilde{V}(k, x) = \max_{\pi} \mathbb{E}_{\pi} \left[ \langle 1 | \rho_N | 1 \rangle \mid \rho_k = x \right] \quad (6)$$

for $k = 0, \dots, N$ . Then $\tilde{V}(k, x)$ satisfies the following recursive equation

$$\tilde{V}(k, x) = \max_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}(y \mid u, x)\, \tilde{V}(k + 1, \mathcal{M}_u^y(x)), \quad (7)$$

for $k = 0, \dots, N - 1$ , with terminal condition

$$\tilde{V}(N, x) = \text{tr}(x\, |1\rangle\langle 1|). \quad (8)$$

The optimal policy $\pi^{\diamond}$ can be obtained by solving

$$\pi_k^{\diamond}(\rho_k) = \arg \max_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}(y \mid u, \rho_k)\, \tilde{V}(k + 1, \mathcal{M}_u^y(\rho_k))$$

for $k = 0, \dots, N - 1$ . The maximal expected fidelity $\tilde{J}_{\pi^{\diamond}}(N) = \tilde{V}(0, \rho_0)$ .

3.2. Minimal Arrival Time

In the previous discussions the deadline $N$ plays an important role in the objective functionals as well as in their solutions. We now consider the case where the deadline is flexible, and we aim to minimize the average number of steps it takes to arrive at the target state. The control policy is now denoted $\pi = \{\pi_k\}_{k=0}^{\infty}$, where each $\pi_k$ selects a measurement from the set $\mathcal{E}$. Associated with $\pi$, we define

$$\mathcal{A}_{\pi} := \inf \left\{ k : \rho_k = |1\rangle\langle 1| \right\}. \quad (9)$$

Note that $\mathcal{A}_{\pi}$ is a stopping time (cf. [11]) associated with the random process $\{\rho_k\}_{k=0}^{\infty}$, and we assume that $\pi$ is proper in the sense that

$$\mathbb{P}_{\pi} \left( \mathcal{A}_{\pi} < \infty \right) = 1.$$

We further introduce

$$J_{\pi}^b = \mathbb{E}_{\pi}[\mathcal{A}_{\pi}] \quad (10)$$

as the objective functional, which is the expected time it takes for the quantum state to reach the target $|1\rangle\langle 1|$ following policy $\pi$ . Minimizing $J_{\pi}^b$ is a stochastic shortest path problem [18].

x |0\rangle |1\rangle |\phi_1\rangle |\psi_1\rangle |\phi_2\rangle |\psi_2\rangle |\phi_3\rangle |\psi_3\rangle |\phi_4\rangle |\psi_4\rangle
\pi^\natural(x) E_2 E_5 E_3 E_5 E_4 E_5 E_5 E_1 E_5 E_2

Table 3. The optimal policy $\pi^\natural$ minimizing the expected time it takes for the quantum state to reach the target state $|1\rangle\langle 1|$ for control set $\mathcal{E}_*$ with $T = 5$ .

We introduce $\mathcal{B}_\pi(x) := \inf \left\{ k : \rho_k = |1\rangle\langle 1| \mid \rho_0 = x \right\}$ and

$$\mathbf{V}^b(x) = \min_{\pi} \mathbb{E}_\pi \left[ \mathcal{B}_\pi(x) \right]. \quad (11)$$

The Markov property of $\{\rho_k\}_{k=0}^\infty$ implies that the optimal policy $\pi^\natural$ is stationary, in the sense that $\pi_k^\natural = \pi^\natural(x)$ for all $k$. The following conclusion holds by directly applying the results of [18].

Proposition 3 The cost-to-go function $\mathbf{V}^b$ satisfies the following recursion

$$\mathbf{V}^b(x) = 1 + \min_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}(y \mid u, x)\, \mathbf{V}^b(\mathcal{M}_u^y(x)), \quad (12)$$

for all $x \neq |1\rangle\langle 1|$ , with boundary condition $\mathbf{V}^b(|1\rangle\langle 1|) = 0$ . The optimal policy $\pi^\natural$ is given by

$$\pi^\natural(x) = \arg \min_{u \in \mathcal{E}} \sum_{y \in \mathcal{Y}} \mathbb{P}(y \mid u, x)\, \mathbf{V}^b(\mathcal{M}_u^y(x)). \quad (13)$$

The optimal value is given by $J_{\pi^\natural}^b = \mathbf{V}^b(\rho_0)$.

Technically, it cannot be guaranteed that for an arbitrary measurement set $\mathcal{E}$ there always exists at least one policy $\pi$ under which $J_\pi^b$ is finite. However, straightforward calculations indicate that for the set $\mathcal{E}$ of projective measurements given in Eq. (1), a finite $J_\pi^b$ can always be achieved by a class of policies.
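For the set of Eq. (1), the fixed point of recursion (12) can be computed by value iteration over a finite angle representation of the reachable pure states. A sketch (ours; the number of sweeps is an arbitrary choice, more than enough for convergence here):

```python
import numpy as np

def key(a):
    """Pure-state angle as a dictionary key, identified modulo pi."""
    a = a % np.pi
    return 0.0 if np.isclose(a, 0.0) or np.isclose(a, np.pi) else round(a, 12)

def min_expected_arrival(T, sweeps=200):
    """Value iteration for recursion (12) over the projective set of
    Eq. (1); the target |1><1| is the angle pi/2."""
    thetas = [np.pi * i / (2 * T) for i in range(1, T + 1)]
    states = {0.0} | {key(t + s) for t in thetas for s in (0.0, np.pi / 2)}
    target = next(a for a in states if np.isclose(a, np.pi / 2))
    V = {a: 0.0 for a in states}
    for _ in range(sweeps):
        V = {a: 0.0 if a == target else
                1.0 + min(np.cos(t - a) ** 2 * V[key(t)]
                          + np.sin(t - a) ** 2 * V[key(t + np.pi / 2)]
                          for t in thetas)
             for a in states}
    return V[0.0]

print(min_expected_arrival(5))  # expected arrival time from |0><0|, ~3.8
```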

3.3. Numerical Example: Minimal Arrival Time

Again, consider $T$ projective measurements from the set [17]

$$\mathcal{E} = \left\{ E_i, \quad i = 1, 2, \dots, T \right\}.$$

In Figure 3, we plot $J_{\pi^\natural}^b(T)$ as a function of $T$, for $T = 2, 3, \dots, 30$. Numerical calculations show that the minimized average number of steps for driving $|0\rangle\langle 0|$ to $|1\rangle\langle 1|$ does not depend much on the size of the control set; it oscillates around 3.8 for control sets of reasonable size. For the measurement set $\mathcal{E}_*$ with $T = 5$, we show the optimal policy $\pi^\natural$ in Table 3.

Figure 3. The minimized average number of steps it takes to arrive at the target state $|1\rangle\langle 1|$ from the initial state $|0\rangle\langle 0|$, employing the control set $\mathcal{E}_*$ of size $T$.

4. Conclusions

We have proposed feedback designs for manipulating a quantum state to a target state by performing sequential measurements. Making use of Belavkin's quantum feedback control theory, we showed that finding the measurement selection policy that maximizes the probability of successful state manipulation is an optimal control problem which can be solved by dynamic programming for any given set of measurements and a given time horizon. Numerical examples indicate that making use of feedback information significantly improves the success probability compared to the classical scheme without feedback. It was shown that the probability of reaching the target state via the feedback policy reaches 0.9968 using merely 10 steps, while classical results [16, 17] suggested that the naive strategy of consecutive measurements performed in turn reaches success probability one only as the number of steps tends to infinity. Maximizing the expected fidelity to the target state and minimizing the expected arrival time were also considered, and some connections and differences among these objectives were discussed.

Acknowledgments

We gratefully acknowledge support by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (project number CE110001027) and AFOSR Grant FA2386-12-1-4075.

References

  • [1] V. P. Belavkin, Towards control theory of quantum observable systems, Automation and Remote Control, vol. 44, s188, 1983.
  • [2] M. R. James, Risk-sensitive optimal control of quantum systems, Physical Review A, vol. 69, 032108, 2004.
  • [3] L. Bouten, R. Van Handel, and M. R. James, A discrete invitation to quantum filtering and feedback control, SIAM Review, 51(2), 239-316, 2009.
  • [4] S. J. Dolinar, An optimum receiver for the binary coherent state quantum channel, MIT Res. Lab. Electron. Quart. Progr. Rep., 111, pp. 115–120, 1973.
  • [5] R. L. Cook, P. J. Martin, and J. M. Geremia, Optical coherent state discrimination using a closed-loop quantum measurement, Nature, vol. 446, pp. 774–777, 2007.
  • [6] C. W. Helstrom. Quantum Detection and Estimation Theory. Academic press, 1976.
  • [7] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge university press. 2010.
  • [8] H. M. Wiseman, D. W. Berry, S. D. Bartlett, B. L. Higgins, and G. J. Pryde, Adaptive measurements in the optical quantum information laboratory, IEEE Journal of Selected Topics in Quantum Electronics, vol. 15, no. 6, pp. 1661–1672, 2009.
  • [9] H. M. Wiseman and G. J. Milburn, Quantum theory of optical feedback via homodyne detection, Physical Review Letters, vol. 70, no. 5, 548, 1993.
  • [10] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. New York : Wiley, 1994.
  • [11] R. Durrett. Probability: Theory and Examples, Duxbury advanced series, Third Edition, Thomson Brooks/Cole, 2005.
  • [12] D. P. Bertsekas. Dynamic Programming and Optimal Control. Vol. II, 4th Edition. Athena Scientific, 2012.
  • [13] S. Ashhab and F. Nori, Control-free control: manipulating a quantum system using only a limited set of measurements, Physical Review A, 82(6), 062103, 2010.
  • [14] K. Jacobs, Feedback control using only quantum back-action, New Journal of Physics, 12(4), 043005, 2010.
  • [15] H. M. Wiseman, Quantum control: Squinting at quantum systems, Nature, vol. 470, no. 7333, pp. 178–179, 2011.
  • [16] L. Roa, M. L. de Guevara, A. Delgado, G. Olivares-Rentería, and A. Klimov, Quantum evolution by discrete measurements, Journal of Physics: Conference Series, vol. 84, 012017, 2007.
  • [17] A. Pechen, N. Il'in, F. Shuang, and H. Rabitz, Quantum control by von Neumann measurements, Physical Review A, vol. 74, no. 5, 052102, 2006.
  • [18] D. P. Bertsekas and J. N. Tsitsiklis, An analysis of stochastic shortest path problems, Mathematics of Operations Research, 16(3), pp. 580–595, 1991.
