Variants of the Empirical Interpolation Method: symmetric formulation, choice of norms and rectangular extension

Fabien Casenave1,2, Alexandre Ern3 and Tony Lelièvre3

1 IGN LAREG, Univ Paris Diderot, Sorbonne Paris Cité,
5 rue Thomas Mann, 75205 Paris Cedex 13, France

2 currently at SafranTech, Rue des Jeunes Bois,
Châteaufort, CS 80112, 78772 Magny-Les-Hameaux, France

3 Université Paris-Est, CERMICS (ENPC),
77455 Marne la Vallée Cedex 2, France

Abstract

The Empirical Interpolation Method (EIM) is a greedy procedure that constructs approximate representations of two-variable functions in separated form. In its classical presentation, the two variables play a non-symmetric role. In this work, we give an equivalent definition of the EIM approximation in which the two variables play symmetric roles. Then, we prove the existence of this approximation up to the convergence of the EIM, and for any norm chosen to compute the error in the greedy step. Finally, we introduce a way to compute a separated representation in the case where the number of selected values is different for each variable. In the case of a physical field measured by sensors, this is useful to discard a broken sensor while keeping the information provided by the associated selected field.

MSC2010 classification

65D05, 65D15, 68W25

Keywords

Empirical Interpolation Method, rectangular case, sensor failure, symmetry

1 Introduction

Consider a function $f : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$ . The Empirical Interpolation Method (EIM) [1, 4] is an offline/online procedure that provides an approximate representation $I_d(f)$ of $f$ in separated form, where the integer $d$ denotes the number of terms in the representation. The offline stage of the EIM consists in selecting some points $(x_1, \dots, x_d) \in \mathcal{X}^d$ and $(y_1, \dots, y_d) \in \mathcal{Y}^d$ in a greedy fashion, such that

$$x_{k+1} = \arg \max_{x \in \mathcal{X}} \| (f - I_k(f))(x, \cdot) \|_{L^\infty(\mathcal{Y})}, \quad (1a)$$

$$y_{k+1} = \arg \max_{y \in \mathcal{Y}} | (f - I_k(f))(x_{k+1}, y) |, \quad (1b)$$

where $I_k(f)$ denotes the separated representation constructed with the $k$ first points $(x_l, y_l)$, $l \in \{1, \dots, k\}$. The method is efficient when the error $f - I_d(f)$ is small for reasonably small values of $d$. Some functions $q_l(y)$, $l \in \{1, \dots, d\}$, and a matrix $B$ of size $d \times d$ depending on these points are also constructed; see [1] and [4] for details. The separated representation is then obtained as

$$I_d(f)(x, y) = \sum_{1 \leq l \leq d} \lambda_l(x) q_l(y), \quad (2)$$

where the functions $\lambda_l(x)$, $l \in \{1, \dots, d\}$, solve the linear system $\sum_{m=1}^d B_{l,m} \lambda_m(x) = f(x, y_l)$, $l \in \{1, \dots, d\}$. The function $I_d(f)$ satisfies the following interpolation property [4, Lemma 1]: for all $m \in \{1, \dots, d\}$,

$$I_d(f)(x, y_m) = f(x, y_m), \quad \text{for all } x \in \mathcal{X}, \quad (3a)$$

$$I_d(f)(x_m, y) = f(x_m, y), \quad \text{for all } y \in \mathcal{Y}. \quad (3b)$$

In practice, the size $d$ is not chosen a priori, and the greedy procedure stops when $|(f - I_k(f))(x_{k+1}, y_{k+1})|$ is small enough. Define $\mathcal{U} := \{f(x, \cdot),\; x \in \mathcal{X}\}$. Elements of $\mathcal{U}$ are functions from $\mathcal{Y}$ to $\mathbb{R}$.
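In a fully discrete setting, where $\mathcal{X}$ and $\mathcal{Y}$ are replaced by finite grids and $f$ by its matrix of values, the greedy selection (1a)-(1b) with the stopping criterion above can be sketched as follows. The grids and the test function are illustrative assumptions; the separated representation is assembled here directly from the matrix of selected values, which is algebraically equivalent to (2).

```python
import numpy as np

# Discrete sketch of the greedy selection (1a)-(1b): X and Y are finite
# grids, and f is represented by the matrix A[i, j] = f(x_i, y_j).
X = np.linspace(0.0, 1.0, 50)
Y = np.linspace(0.0, 1.0, 60)
A = np.cos(np.outer(1.0 + X, Y))          # A[i, j] = f(x_i, y_j) (assumption)

xi, yi = [], []                            # indices of the selected points
tol = 1e-8
for k in range(min(A.shape)):
    if xi:
        F = A[np.ix_(xi, yi)]              # F_{l,m} = f(x_l, y_m)
        Ik = A[:, yi] @ np.linalg.solve(F, A[xi, :])   # I_k(f) on the grid
    else:
        Ik = np.zeros_like(A)
    R = A - Ik                             # residual f - I_k(f)
    i = int(np.argmax(np.max(np.abs(R), axis=1)))   # (1a): L^inf in y, max in x
    j = int(np.argmax(np.abs(R[i])))                # (1b)
    if abs(R[i, j]) < tol:                 # stopping criterion
        break
    xi.append(i)
    yi.append(j)

print(len(xi), abs(R[i, j]))               # number of terms, residual at stop
```

For this smooth test function the residual drops below the tolerance after a handful of terms, illustrating that the method is efficient when $f - I_d(f)$ is small for reasonably small $d$.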

Theorem 1.1 (Existence of the decomposition, [4, Theorem 1]) Assume that the interpolation points are chosen according to (1a)-(1b) and that $d < \dim \text{span}(\mathcal{U})$ . Then, the separated representation (2) is well-defined.

In [2], it is observed that $I_d(f)$ can be rewritten in an algebraically equivalent form as

$$I_d(f)(x, y) = \sum_{1 \leq l, m \leq d} D_{l,m} f(x_l, y) f(x, y_m), \quad (4)$$

where the matrix $D$ depends on the points $x_l, y_m$, $l, m \in \{1, \dots, d\}$, and can be constructed recursively during the offline stage of the EIM. It is easy to check that (3a)-(3b) is satisfied if and only if $D = F^{-T}$, where $F_{l,m} = f(x_l, y_m)$, which motivates an alternative presentation of the EIM based on Equation (4) where the variables $x$ and $y$ play symmetric roles. The double summation in (4) emphasizes this symmetric role; note that, for instance, $I_d(f)(x, y) = \sum_{1 \leq l \leq d} \tilde{\lambda}_l(x) \tilde{q}_l(y)$, with $\tilde{\lambda}_l(x) = \sum_{1 \leq m \leq d} D_{l,m} f(x, y_m)$ and $\tilde{q}_l(y) = f(x_l, y)$.
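Once the points are fixed, the symmetric form (4) with $D = F^{-T}$ is straightforward to evaluate: $I_d(f)(x, y)$ is a bilinear expression in the vectors $(f(x_l, y))_l$ and $(f(x, y_m))_m$. A minimal sketch, where the function and the points are illustrative assumptions:

```python
import numpy as np

# Symmetric form (4) with D = F^{-T}; f and the selected points are
# illustrative assumptions (any points making F invertible would do).
def f(x, y):
    return np.exp(x * y)

xs = np.array([0.1, 0.5, 0.9])             # selected points in X
ys = np.array([0.2, 0.6, 1.0])             # selected points in Y

F = f(xs[:, None], ys[None, :])            # F_{l,m} = f(x_l, y_m)
D = np.linalg.inv(F).T                     # D = F^{-T}

def I_d(x, y):
    # sum_{l,m} D_{l,m} f(x_l, y) f(x, y_m)
    return f(xs, y) @ D @ f(x, ys)

# Interpolation property (3a)-(3b): exact on the lines y = y_m and x = x_l.
print(abs(I_d(0.37, ys[1]) - f(0.37, ys[1])))   # close to zero (round-off)
print(abs(I_d(xs[2], 0.81) - f(xs[2], 0.81)))   # close to zero (round-off)
```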

Let $\|\cdot\|_{\mathcal{Y}}$ be a norm on functions defined over $\mathcal{Y}$ and suppose that the interpolation points are now selected as

$$x_{k+1} = \arg \max_{x \in \mathcal{X}} \|(f - I_k(f))(x, \cdot)\|_{\mathcal{Y}}, \quad (5a)$$

$$y_{k+1} = \arg \max_{y \in \mathcal{Y}} |(f - I_k(f))(x_{k+1}, y)|, \quad (5b)$$

the difference with (1a)-(1b) being the arbitrary choice of the norm $\|\cdot\|_{\mathcal{Y}}$ in the first line, instead of just $\|\cdot\|_{L^\infty(\mathcal{Y})}$. One can exchange the roles of $x$ and $y$ in the previous algorithm, leading to

$$y_{k+1} = \arg \max_{y \in \mathcal{Y}} \|(f - I_k(f))(\cdot, y)\|_{\mathcal{X}}, \quad (6a)$$

$$x_{k+1} = \arg \max_{x \in \mathcal{X}} |(f - I_k(f))(x, y_{k+1})|, \quad (6b)$$

for an arbitrary norm $\|\cdot\|_{\mathcal{X}}$ on functions defined over $\mathcal{X}$. In general, the couple $(x_{k+1}, y_{k+1})$ resulting from (6a)-(6b) differs from the one obtained with (5a)-(5b). Choosing the $L^\infty$-norm in $\mathcal{Y}$ and $\mathcal{X}$ in (5a) and (6a) respectively, we infer that

$$(x_{k+1}, y_{k+1}) = \arg \max_{(x,y) \in \mathcal{X} \times \mathcal{Y}} |(f - I_k(f))(x, y)|,$$

thus recovering the choice made in (1a)-(1b). For this specific choice of $L^\infty$-norms on $\mathcal{X}$ and $\mathcal{Y}$, (5a)-(5b) is actually equivalent to (6a)-(6b).

The first contribution of this work is the following result:

Theorem 1.2 Assume that the interpolation points are chosen according to (5a)-(5b) or (6a)-(6b) and that $d \leq \dim \text{span}(\mathcal{U})$ . Then, the separated representation (4) with $D = F^{-T}$ , where $F_{l,m} = f(x_l, y_m)$ , is well-defined and satisfies (3a)-(3b).

Theorem 1.2 extends Theorem 1.1 in two ways. First, it allows for a more general selection of the interpolation points. Second, the existence of the decomposition is ensured up to convergence. Since $\dim \text{span}(\mathcal{U})$ is usually unknown, a practical consequence is that for all $k \in \mathbb{N}$ , either $f(x, y) = I_k(f)(x, y)$ for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$ , or the greedy procedure can be pursued. Moreover, we observe that the proof of Theorem 1.2 given below is straightforward and does not rely explicitly on the Kolmogorov $n$ -width.

The second contribution of this work is that starting from (4), we devise a rectangular extension of EIM. An interesting application of this extension is an improved recovery procedure if it is decided a posteriori that the values of $f$ at some points $y_l$ are not reliable and should be discarded. This situation is motivated for instance by sensor failure in the context of the Generalized Empirical Interpolation Method (GEIM) [3], which we address at the end of this work. Note that discarding a posteriori some interpolation point $y_l$ is not straightforward in the standard EIM setting since the whole construction relies on couples of points and functions defined by induction. In the present setting, the interpolation points $x_l$ can be kept even if the points $y_l$ are discarded, thereby leading to a rectangular formulation. The key idea is to replace the matrix inversion by a pseudo-inverse to compute the rectangular matrix $D$ to be used in (4).

2 Symmetric formulation of EIM and proof of Theorem 1.2

2.1 Symmetric formulation

Consider some interpolation points $(x_1, \dots, x_d) \in \mathcal{X}^d$ and $(y_1, \dots, y_d) \in \mathcal{Y}^d$, and define the matrix $F \in \mathbb{R}^{d \times d}$ such that $F_{l,m} = f(x_l, y_m)$, for all $l, m \in \{1, \dots, d\}$.

Lemma 2.1 (Existence) If the matrix $F$ is invertible, there exists a function $I_d(f) : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$ in the form (4) satisfying the interpolation property (3a)-(3b).

Proof We observe that for all $m_0 \in \{1, \dots, d\}$ and all $x \in \mathcal{X}$,

$$I_d(f)(x, y_{m_0}) = \sum_{1 \leq l, m \leq d} D_{l,m} f(x_l, y_{m_0}) f(x, y_m) = \sum_{1 \leq m \leq d} \left[ \sum_{1 \leq l \leq d} D_{l,m} f(x_l, y_{m_0}) \right] f(x, y_m).$$

Hence, if $F^T D = \text{Id}$ , (3a) holds. Similarly, if $D F^T = \text{Id}$ , (3b) holds. Since $F$ is invertible, we see that we can choose $D = F^{-T}$ . $\square$

Note that the EIM interpolation is thus given by $I_d(f)(x, y) = \sum_{1 \leq l, m \leq d} (F^{-1})_{m,l} f(x_l, y) f(x, y_m)$ .

2.2 Proof of Theorem 1.2

We begin with the following result on the stopping criterion of the greedy procedure.

Lemma 2.2 (Stopping criterion) Consider a set of points $(x_l, y_m)_{1 \leq l, m \leq k+1}$ and assume that the matrix $F^k := (f(x_l, y_m))_{1 \leq l, m \leq k}$ is invertible. If

$$(f - I_k(f))(x_{k+1}, y_{k+1}) \neq 0,$$

then the matrix $F^{k+1} := (f(x_l, y_m))_{1 \leq l, m \leq k+1}$ is also invertible.

Proof Assume that $F^{k+1}$ is not invertible. This means that there exist $(\lambda_1, \dots, \lambda_{k+1})$, not all zero, such that, $\forall l \in \{1, \dots, k+1\}$, $\sum_{m=1}^{k+1} \lambda_m f(x_l, y_m) = 0$, which can be rewritten as

$$\sum_{1 \leq m \leq k} \lambda_m f(x_l, y_m) = -\lambda_{k+1} f(x_l, y_{k+1}). \quad (7)$$

Note that necessarily, $\lambda_{k+1} \neq 0$, since otherwise the matrix $F^k$ would not be invertible. Let us introduce the matrix $D^k = (F^k)^{-T}$, so that $I_k(f)(x, y) = \sum_{1 \leq l, m \leq k} D_{l,m}^k f(x_l, y) f(x, y_m)$. Let $m_0 \in \{1, \dots, k\}$. From (7), we infer that

$$\sum_{1 \leq l \leq k} D_{l,m_0}^k \sum_{1 \leq m \leq k} \lambda_m f(x_l, y_m) = -\lambda_{k+1} \sum_{1 \leq l \leq k} D_{l,m_0}^k f(x_l, y_{k+1}),$$

and since, for all $m, m_0 \in \{1, \dots, k\}$, $\sum_{1 \leq l \leq k} D_{l,m_0}^k f(x_l, y_m) = \delta_{m,m_0}$, exchanging summations in the left-hand side leads to

$$\lambda_{m_0} = -\lambda_{k+1} \sum_{1 \leq l \leq k} D_{l,m_0}^k f(x_l, y_{k+1}).$$

We deduce that

$$\sum_{1 \leq m_0 \leq k} \lambda_{m_0} f(x, y_{m_0}) = -\lambda_{k+1} \sum_{1 \leq l \leq k} \sum_{1 \leq m_0 \leq k} D_{l,m_0}^k f(x_l, y_{k+1}) f(x, y_{m_0}) = -\lambda_{k+1} I_k(f)(x, y_{k+1}).$$

Taking $x = x_{k+1}$ and using again (7), we infer that

$$-\lambda_{k+1} f(x_{k+1}, y_{k+1}) = -\lambda_{k+1} I_k(f)(x_{k+1}, y_{k+1}),$$

and recalling that $\lambda_{k+1} \neq 0$ , we conclude that

$$f(x_{k+1}, y_{k+1}) = I_k(f)(x_{k+1}, y_{k+1}),$$

which contradicts the assumption. $\square$

It is clear from (5a)-(5b) or (6a)-(6b) that for all $k < \dim \text{span}(\mathcal{U})$ , $(f - I_k(f))(x_{k+1}, y_{k+1}) \neq 0$ . Then, from Lemma 2.2, $I_{k+1}$ is well-defined, and Theorem 1.2 is proved.

Corollary 2.3 (Termination) If $(x_{k+1}, y_{k+1})$ is selected according to (5a)-(5b) or (6a)-(6b), then $|(f - I_k(f))(x_{k+1}, y_{k+1})| = 0$ implies that $(f - I_k(f))(x, y) = 0$ for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$.

Proof Suppose that $(x_{k+1}, y_{k+1})$ is selected according to (5a)-(5b). Then $|(f - I_k(f))(x_{k+1}, y_{k+1})| = 0$ implies by definition of $y_{k+1}$ that for all $y \in \mathcal{Y}$, $|(f - I_k(f))(x_{k+1}, y)| = 0$. This in turn means that $\|(f - I_k(f))(x_{k+1}, \cdot)\|_{\mathcal{Y}} = 0$. Then, by definition of $x_{k+1}$, for all $x \in \mathcal{X}$, $\|(f - I_k(f))(x, \cdot)\|_{\mathcal{Y}} = 0$, and thus $f = I_k(f)$. The proof is similar in the case where $(x_{k+1}, y_{k+1})$ is selected according to (6a)-(6b). $\square$

The practical consequence of the above results is that, for all $k \in \mathbb{N}$, either $|(f - I_k(f))(x_{k+1}, y_{k+1})| = 0$ and $f = I_k(f)$ (Corollary 2.3), or $I_{k+1}$ can be constructed (Lemma 2.2) and the greedy procedure can be pursued.

Remark 1 (Goal-oriented approximation) In a reduced-basis context, the EIM procedure is typically used to obtain a separated representation of the coefficients of a parametrized partial differential equation. As an example, let us consider the right-hand side $f(x, y)$ of a PDE discretized using a Galerkin basis $\{\varphi_1, \dots, \varphi_N\}$. Here, $x$ is the parameter and $y$ the space variable. The discretized right-hand side is then $b(x)_j = \int_{\Omega} f(x, y) \varphi_j(y) dy$ for all $j \in \{1, \dots, N\}$, and using the EIM approximation, it can be written as a linear combination of functions of $x$: $(I_d(b)(x))_j = \int_{\Omega} I_d(f)(x, y) \varphi_j(y) dy = \sum_{1 \leq l, m \leq d} D_{l,m} f(x, y_m) \int_{\Omega} f(x_l, y) \varphi_j(y) dy$. This separated representation is very important for the overall efficiency of the greedy procedure (see [1]). If the EIM approximation $I_d$ has been constructed using (5a)-(5b), the error in the greedy procedure is assessed on the quality of the approximation of $f(x, y)$. A goal-oriented alternative is to assess the quality of the approximation of $f$ on the object $b$ to be approximated:

$$x_{k+1} = \arg \max_{x \in \mathcal{X}} \| (b - I_k(b))(x) \|, \quad (8a)$$

$$y_{k+1} = \arg \max_{y \in \mathcal{Y}} | (f - I_k(f))(x_{k+1}, y) |, \quad (8b)$$

where $\|\cdot\|$ can be any norm on $\mathbb{R}^N$.
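A discrete sketch of this goal-oriented selection: $x_{k+1}$ is chosen from the error on the vector $b(x)$ rather than on $f$ itself. The grids, the basis functions $\varphi_j$, and the test function below are illustrative assumptions, and the integrals are replaced by a uniform-weight quadrature.

```python
import numpy as np

# Goal-oriented greedy (8a)-(8b), discrete sketch: x_{k+1} maximizes the
# error on b(x)_j = int f(x, y) phi_j(y) dy instead of the error on f.
# Grids, basis functions and f are illustrative assumptions.
X = np.linspace(0.0, 1.0, 40)
Y = np.linspace(0.0, 1.0, 80)
A = np.exp(-np.outer(1.0 + 2.0 * X, Y))          # A[i, j] = f(x_i, y_j)
Phi = np.array([np.sin((j + 1) * np.pi * Y) for j in range(5)])  # phi_j on grid
w = np.full_like(Y, Y[1] - Y[0])                  # quadrature weights
b = A @ (Phi * w).T                               # b[i, :] = b(x_i)

xi, yi = [], []
for k in range(6):
    if xi:
        F = A[np.ix_(xi, yi)]
        Ikf = A[:, yi] @ np.linalg.solve(F, A[xi, :])   # I_k(f) on the grid
        Ikb = Ikf @ (Phi * w).T                         # I_k(b)
    else:
        Ikf, Ikb = np.zeros_like(A), np.zeros_like(b)
    i = int(np.argmax(np.linalg.norm(b - Ikb, axis=1)))  # (8a), Euclidean norm
    j = int(np.argmax(np.abs((A - Ikf)[i])))             # (8b)
    xi.append(i)
    yi.append(j)
```

By the interpolation property, already selected parameter values have zero residual on $b$ as well, so the greedy step keeps picking new points as long as the error is nonzero.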

3 Extension to rectangular cases

3.1 Different number of interpolation points $x$ and $y$

Consider a given EIM approximation, where $d$ couples $(x_m, y_m)$, $m \in \mathcal{N} := \{1, \dots, d\}$, have been selected. Suppose now that for some reason, the points $y_{m_0}$, $m_0 \in \mathcal{N} \setminus \mathcal{N}_0$ with $\mathcal{N}_0 \subsetneq \mathcal{N}$, must be discarded. For instance, consider that $f(x, y)$ represents the evaluation of a physical field parametrized by $x$ by a sensor located at a point $y$. Discarding $y_{m_0}$ would model the failure of the corresponding sensor. One might still want to use the information provided by the physical field parametrized by $x_{m_0}$, which has been selected together with the sensor $y_{m_0}$ during the greedy procedure. The motivation is that even if the selected sensor fails, we can still use the information from the associated physical field in the construction of the approximate separated representation.

Our idea, inspired by Lemma 2.1, is to choose $D = (F^T)^\dagger$, where $\cdot^\dagger$ denotes the pseudo-inverse. The matrices $D$ and $F$ are both rectangular and in $\mathbb{R}^{d \times d_0}$ with $d_0 = \text{card}(\mathcal{N}_0)$. We recall that if $A \in \mathbb{R}^{d_1 \times d_2}$, then $A^\dagger \in \mathbb{R}^{d_2 \times d_1}$ is the unique matrix satisfying (i) $AA^\dagger A = A$, (ii) $A^\dagger AA^\dagger = A^\dagger$, (iii) $(AA^\dagger)^T = AA^\dagger$, (iv) $(A^\dagger A)^T = A^\dagger A$. Choosing $D = (F^T)^\dagger$, the interpolation properties (3a)-(3b) are no longer satisfied in general, but since the goal is to find an approximation formula and not necessarily an interpolation formula, we can define the rectangular EIM approximation by

$$\sum_{l \in \mathcal{N}} \sum_{m \in \mathcal{N}_0} (F^T)^\dagger_{l,m} f(x_l, y) f(x, y_m). \quad (9)$$
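A minimal numerical sketch of the rectangular formula (9), with `np.linalg.pinv` playing the role of the pseudo-inverse; the function and the selected points are illustrative assumptions. When no point is discarded ($\mathcal{N}_0 = \mathcal{N}$), the pseudo-inverse reduces to $F^{-T}$ and the square formula (4) is recovered.

```python
import numpy as np

# Rectangular EIM (9): keep all points x_l, l in N, but only the points
# y_m, m in N0; D = (F^T)^+ is computed with a pseudo-inverse.
# The function f and the selected points are illustrative assumptions.
def f(x, y):
    return np.exp(x * y)

xs = np.array([0.05, 0.3, 0.55, 0.8, 0.95])   # all selected x_l (l in N)
ys = np.array([0.1, 0.35, 0.6, 0.85, 0.98])   # all selected y_m
keep = [0, 1, 3, 4]                            # N0: index 2 discarded (broken sensor)

F = f(xs[:, None], ys[None, keep])             # F in R^{d x d0}
D = np.linalg.pinv(F.T)                        # D = (F^T)^+, also d x d0

def I_rect(x, y):
    return f(xs, y) @ D @ f(x, ys[keep])       # formula (9)

# Approximation (not interpolation) error at a few sampling points:
pts = [(0.2, 0.4), (0.7, 0.9)]
errs = [abs(I_rect(x, y) - f(x, y)) for (x, y) in pts]
```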

As a numerical illustration, fix $\vec{v} = (1 \ 2 \ 3)^T$ and consider the function $\mathbb{R}^3 \times \mathbb{R} \ni (\vec{x}, y) \mapsto f(\vec{x}, y) := \cos((\vec{v} \cdot \vec{x})y) \in [-1, 1]$. Suppose that $d = 8$ couples of points $(x_k, y_k)$, $k \in \mathcal{N} = \{1, \dots, d\}$, have been selected by the greedy procedure according to (1a)-(1b). Consider any pair $i, j \in \mathcal{N}$, $i \neq j$, and set $\mathcal{N}_0 = \mathcal{N} \setminus \{i, j\}$. The relative $\ell^2$-norm error on 1000 sampling points in $(0, 1)^3 \times (0, 1)$ is computed by the two following algorithms: (i) using the classical EIM approximation with the 6 couples of points $(x_k, y_k)$, $k \in \mathcal{N}_0$, and (ii) using the approximation (9) with the 8 points $\{x_k\}_{k \in \mathcal{N}}$ in the $x$-dimension and the 6 points $\{y_k\}_{k \in \mathcal{N}_0}$ in the $y$-dimension. The relative $\ell^2$-norm error has been computed for all $\mathcal{N}_0 = \mathcal{N} \setminus \{i, j\}$, $i, j \in \mathcal{N}$ such that $i \neq j$. In case (i), the maximum, minimum, and mean relative $\ell^2$-norm errors are respectively $6.8 \times 10^{-4}$, $3.0 \times 10^{-6}$, and $3.6 \times 10^{-5}$, whereas in case (ii), these errors are respectively $2.3 \times 10^{-5}$, $7.6 \times 10^{-7}$, and $2.4 \times 10^{-6}$. This example illustrates the fact that using the rectangular approximation so as to keep the information from $x_i$ and $x_j$, $i, j \in \mathcal{N}$ such that $i \neq j$, yields better accuracy than discarding this information and using the square approximation.

3.2 Application to GEIM [3]

Consider a set $\mathcal{F}$ of functions $f$ defined over $\Omega \subset \mathbb{R}$ , and a dictionary $\Sigma$ of linear forms $\sigma$ defined over $\mathcal{F}$ . Consider $d$ functions $(f_1, \dots, f_d) \in \mathcal{F}^d$ and $d$ linear forms $(\sigma_1, \dots, \sigma_d) \in \Sigma^d$ and define the matrix $\hat{F} = (\sigma_l(f_m))_{1 \leq l, m \leq d}$ .

Lemma 3.1 (Existence for GEIM) Assume that $\hat{F}$ is invertible. Then, setting $D = \hat{F}^{-T}$ , the linear combination

$$I_d(f) = \sum_{1 \leq l, m \leq d} D_{l, m} \sigma_l(f) f_m \quad (10)$$

satisfies, for all $m \in \{1, \dots, d\}$,

$$\sigma_m(f) = \sigma_m(I_d(f)), \quad \text{for all } f \in \mathcal{F}, \quad (11a)$$

$$\sigma(f_m) = \sigma(I_d(f_m)), \quad \text{for all } \sigma \in \Sigma. \quad (11b)$$

Proof Property (11a) amounts to $\sigma_{m_0}(f) = \sigma_{m_0}(I_d(f)) = \sum_{1 \leq l \leq d} \left( \sum_{1 \leq m \leq d} D_{l, m} \sigma_{m_0}(f_m) \right) \sigma_l(f)$ and (11b) amounts to $\sigma(f_{m_0}) = \sigma(I_d(f_{m_0})) = \sum_{1 \leq m \leq d} \left( \sum_{1 \leq l \leq d} D_{l, m} \sigma_l(f_{m_0}) \right) \sigma(f_m)$, so that $D = \hat{F}^{-T}$ can be chosen. $\square$

Lemma 3.2 (Stopping criterion for GEIM) Assume that the matrix $\hat{F}^k := (\sigma_l(f_m))_{1 \leq l, m \leq k}$ is invertible. If $|\sigma_{k+1}(f_{k+1}) - \sigma_{k+1}(I_k(f_{k+1}))| \neq 0$, then the matrix $\hat{F}^{k+1} := (\sigma_l(f_m))_{1 \leq l, m \leq k+1}$ is invertible.

Proof Assume that the matrix $\hat{F}^{k+1}$ is not invertible. Then there exist $(\lambda_1, \dots, \lambda_{k+1})$, not all zero and with $\lambda_{k+1} \neq 0$, such that $\sum_{1 \leq m \leq k} \lambda_m \sigma_l(f_m) = -\lambda_{k+1} \sigma_l(f_{k+1})$, $\forall l \in \{1, \dots, k+1\}$. We have $\sum_{1 \leq l \leq k} D_{l, m_0}^k \sum_{1 \leq m \leq k} \lambda_m \sigma_l(f_m) = -\lambda_{k+1} \sum_{1 \leq l \leq k} D_{l, m_0}^k \sigma_l(f_{k+1})$, where $D^k = (\hat{F}^k)^{-T}$. From the definition of $D^k$ and (10), we deduce that $\sum_{1 \leq m_0 \leq k} \lambda_{m_0} f_{m_0} = -\lambda_{k+1} I_k(f_{k+1})$. Applying $\sigma_{k+1}$ to both sides of this relation gives $\sigma_{k+1}(f_{k+1}) = \sigma_{k+1}(I_k(f_{k+1}))$, yielding the desired contradiction. $\square$

The generalization of Theorem 1.2 to GEIM, for which any norm on $\mathcal{F}$ can be taken, follows immediately.

Let us now return to the setting of sensor failure discussed in Section 3.1. This setting nicely fits the GEIM. Suppose that for all $\sigma \in \Sigma$, there exists a unique $x_\sigma \in \Omega$ such that $\sigma(f) = f(x_\sigma)$. In this setting, the functions $f$ represent the parametrized physical field depending on $x$, and $\sigma(f) = f(x_\sigma)$ represents a sensor located at the point $x_\sigma$ that measures the value of $f$ at this point. Discarding the broken sensors $\sigma_{m_0}$, $m_0 \in \mathcal{N} \setminus \mathcal{N}_0$, we can produce the following rectangular approximation

$$\sum_{l \in \mathcal{N}_0} \sum_{m \in \mathcal{N}} (\hat{F}^T)^\dagger_{l,m} \sigma_l(f) f_m, \quad (12)$$

which should deliver a more accurate representation of $f$ than if the functions $f_{m_0}$ were also discarded as in the classical (square) EIM context.
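The point-evaluation setting above can be sketched discretely: functions are vectors of values on a grid, sensors are grid evaluations, and the broken-sensor variant (12) uses a pseudo-inverse. The grid, the parameters of the selected functions, and the sensor locations are all illustrative assumptions.

```python
import numpy as np

# Discrete GEIM sketch with point-evaluation sensors sigma_l(g) = g(t_l).
# Fhat_{l,m} = sigma_l(f_m); the square formula (10) uses D = Fhat^{-T};
# the broken-sensor variant (12) keeps all functions f_m but only the
# sensors in N0, with a pseudo-inverse. All data below are assumptions.
t = np.linspace(0.0, 1.0, 101)
params = [1.0, 2.5, 4.0, 5.5]                    # parameters of the selected f_m
fs = np.array([np.cos(p * t) for p in params])   # f_m as grid vectors
sensors = np.array([10, 35, 60, 85])             # grid indices of the sensors

Fhat = fs[:, sensors].T                          # Fhat_{l,m} = sigma_l(f_m)
D = np.linalg.inv(Fhat).T                        # D = Fhat^{-T}

def I_d(g):                                      # square GEIM (10)
    return (D.T @ g[sensors]) @ fs

keep = [0, 1, 3]                                 # N0: sensor 2 is broken
Drect = np.linalg.pinv(Fhat[keep, :].T)          # (Fhat^T)^+, rows l in N0

def I_rect(g):                                   # rectangular GEIM (12)
    return (Drect.T @ g[sensors[keep]]) @ fs

g = np.cos(3.2 * t)                              # a field to reconstruct
print(np.max(np.abs(I_d(g)[sensors] - g[sensors])))   # (11a): round-off level
```

In the square case, property (11a) makes the reconstruction exact at the sensor locations, and (11b) makes it exact on the selected functions $f_m$; the rectangular variant trades these exactness properties for robustness to the discarded sensor.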

Acknowledgements

The authors would like to thank Y. Maday (Laboratoire Jacques-Louis Lions) for fruitful discussions. The work of T. Lelièvre is supported by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement number 614492.

References

  • [1] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera. An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Mathematique, 339(9):667–672, 2004.
  • [2] F. Casenave, A. Ern, and T. Lelièvre. A nonintrusive reduced basis method applied to aeroacoustic simulations. Advances in Computational Mathematics, pages 1–26, 2014.
  • [3] Y. Maday and O. Mula. A generalized empirical interpolation method: Application of reduced basis techniques to data assimilation. In Franco Brezzi, Piero Colli Franzone, Ugo Gianazza, and Gianni Gilardi, editors, Analysis and Numerics of Partial Differential Equations, volume 4 of Springer INdAM Series, pages 221–235. Springer Milan, 2013.
  • [4] Y. Maday, N. C. Nguyen, A. T. Patera, and S. Pau. A general multipurpose interpolation procedure: the magic points. Communications On Pure And Applied Analysis, 8(1):383–404, 2008.
