Local well-posedness problem
In working with my collaborators to justify the diffusion equation that arises from wave turbulence systems dominated by non-local interactions, we need to prove the local well-posedness of a certain wave kinetic equation. The system under consideration is the three-dimensional Majda-McLaughlin-Tabak (MMT) equation, with its wave kinetic equation given in the form
$\frac{\partial n_k}{\partial t} = 4\pi {\large{\int}}_{\mathbb{R}^9} |k|^{2\beta} |k_1|^{2\beta} |k_2|^{2\beta} |k_3|^{2\beta} (n_1n_2n_3 + n_2n_3n_k - n_2n_1n_k - n_3n_1n_k)$
$\delta(k+k_1-k_2-k_3) \delta(\omega+\omega_1-\omega_2-\omega_3) dk_1dk_2dk_3$, with $k\in \mathbb{R}^3$ and $\omega=|k|^\alpha$.
This equation describes the evolution of the wave action spectrum as a result of four-wave interactions. I will not detail the physical meaning of the equation and the notation used therein, because that would distract readers from the main theme of the post. One of our tasks is to understand the local well-posedness of this equation for certain ranges of $\alpha$ and $\beta$. I will use this post to summarize what I learned from this problem, in terms of the fundamental building blocks that allow us to prove local well-posedness for ODEs/PDEs. These building blocks include the Picard iteration, the Picard existence and uniqueness theorems, the contraction mapping theorem, and Grönwall's inequality.
Before starting the technical details, I would like to make two remarks:
- For readers with a background similar to mine, the most important shift in mindset for understanding this material is to think of functions (say $u(x,t)$ of space and time) as points in an infinite-dimensional space. This becomes very natural once one views the Picard iteration as a mapping between functions, as shown below. However, the idea was not immediately clear to me at the beginning, since I was immersed in the world of numerical simulations, where $u(x,t)$ is treated separately at each time step as $u(x,t_1)$, $u(x,t_2)$, etc.
- Within this post we will formulate the well-posedness problem for finite-dimensional ODEs, i.e., we consider $u(t):\mathbb{R}\rightarrow\mathbb{R}^n$ as the solution. The analysis, however, is also (almost) directly applicable to the case of PDEs, where $n\rightarrow \infty$.
What is local well-posedness?
We start from a finite-dimensional ODE
$\frac{\partial u}{\partial t} = F(u)$, (1)
where $u:I\rightarrow V$ with $I\subset\mathbb{R}$ some time interval and $V=\mathbb{R}^n$ a normed finite-dimensional vector space (we will denote the norm generically as $||\cdot||$). The initial condition of (1) is given as
$u(0)=u_0\in V$. (2)
For local well-posedness, we are interested in finding a time interval $I$ such that the solution $u(t)\in V$ to (1) and (2) exists and is unique. If such $I$ can be found, then we say the ODE (1) with initial condition (2) is locally well-posed for $t\in I$.
Existence
Theorem 1 (Picard Existence Theorem) Let $R>0$, and let $u_0$ lie in the closed ball $\overline{B(R)}\equiv \{u\in V: ||u||\leq R\}$. Let $F:V\rightarrow V$ be Lipschitz continuous with Lipschitz constant $K$ on the ball $\overline{B(2R)}$. If one sets
$T=\frac{1}{2K+||F(0)||/R}$,
then there exists a solution $u\in C^1([0,T]\rightarrow V)$ to (1) and (2) such that $u(t)\in \overline{B(2R)}$ for all $t\in [0,T]$.
Reminder before the proof: (1) A function $F(u)$ is called Lipschitz continuous if there exists a real constant $K$ such that $||F(u_1)-F(u_2)||\leq K||u_1-u_2||$ for all $u_1$, $u_2$ in the domain. (2) $C^1$ denotes continuously differentiable functions in time, i.e., functions $u(t)$ whose first derivative exists and is continuous.
Proof: We first write (1) and (2) in integral form as
$u(t)=u_0+\int_0^t F(u(s))ds$ (3)
Note that if $u$ is continuous and solves (3) on $[0,T]$, then the right hand side of (3) (hence $u(t)$ itself) is continuously differentiable. Such continuously differentiable (i.e., in $C^1$) $u(t)$ therefore also solves (1) and (2). Because of this, it suffices to prove that a solution $u(t)\in \overline{B(2R)}$ to (3) exists for $t\in [0,T]$.
The basic idea for this proof is to formulate (3) through the Picard iteration and then find a fixed point in the iteration. Recall that the Picard iteration can be written as
$u^{n+1}=\Phi(u^n), n=0,1,2,...$ with $\Phi(u)(t)=u_0+\int_0^t F(u(s))ds$ and $u^0=u_0$.
In other words, each iteration applies a mapping $\Phi:X\rightarrow X$, with $X=C([0,T] \rightarrow \overline{B(2R)})$ denoting the space of continuous functions from $[0,T]$ to $\overline{B(2R)}$ (that $\Phi$ indeed maps $X$ into $X$ is yet to be proved). In order to prove that some $u(t)$ solves (3), we only need to prove that the mapping $\Phi$ has a fixed point $u^*=\Phi(u^*)$ with $u^*\in X$. Two steps are needed for this purpose.
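As a minimal concrete sketch of the Picard iteration (using the scalar toy ODE $u'=u$, $u(0)=1$ rather than the kinetic equation; the grid resolution and iteration count below are arbitrary choices), one can apply the map $\Phi$ on a time grid and watch the iterates converge to the fixed point $u(t)=e^t$:

```python
import numpy as np

# Picard iteration for u' = u, u(0) = 1, whose exact solution is exp(t).
# The map Phi(u)(t) = u0 + \int_0^t u(s) ds is applied repeatedly on a
# time grid; the iterates are the Taylor partial sums of exp(t) and
# converge to the fixed point of Phi.
t = np.linspace(0.0, 1.0, 1001)   # time grid on [0, T] with T = 1
u = np.ones_like(t)               # u^0(t) = u0 = 1

def phi(u):
    # Phi(u)(t) = u0 + \int_0^t u(s) ds, via the trapezoidal rule
    du = np.diff(t) * 0.5 * (u[:-1] + u[1:])
    return 1.0 + np.concatenate(([0.0], np.cumsum(du)))

for n in range(20):
    u = phi(u)

err = np.max(np.abs(u - np.exp(t)))
print(f"max deviation from exp(t) after 20 iterations: {err:.1e}")
```

After one application the iterate is $1+t$, after two it is $1+t+t^2/2$, and so on; the remaining error above is dominated by the quadrature rule, not the iteration.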
Step 1: We need to show that $\Phi$ indeed maps $X$ to $X$ (so that the solution $u(t)$ stays in the ball $\overline{B(2R)}$). If $u\in X$, then for any $t\in [0,T]$, we can start from the triangle inequality and show that
$||\Phi(u)(t)|| \leq ||u_0|| + T \sup_{s\in[0,T]} ||F(u(s))||$
$ \leq R + T(||F(0)||+2RK) \leq 2R$ (4)
by the choice of $T$ stated in Theorem 1. We note that the second inequality above comes from the Lipschitz condition on $F$, i.e., for $u\in X$, $\sup_{s\in[0,T]}||F(u(s))-F(0)|| \leq \sup_{s\in[0,T]}K||u(s)-0|| \leq 2RK$. Since (4) is valid for all $t\in [0,T]$, step 1 is complete.
Step 2: We need to show that the Picard iteration yields a fixed point. For this purpose it suffices to show that $\Phi$ is a contraction map (and then apply the contraction mapping theorem detailed below). Let's first define a distance metric on space $X$ so that the "contraction" of $\Phi$ can be measured:
$d(u,v)=\sup_{t\in[0,T]} ||u(t)-v(t)||$.
Then, from a similar argument as in step 1, we have
$||\Phi(u)(t)-\Phi(v)(t)||=||\int_0^t F(u(s))-F(v(s))\, ds|| \leq T\sup_{s\in[0,T]} K||u(s)-v(s)||$.
Since this inequality is valid for all $t\in[0,T]$ (i.e., a $\sup$ can be taken on the LHS), we have $d(\Phi(u),\Phi(v))\leq d(u,v)/2$ by the choice of $T$ stated in Theorem 1, which guarantees $TK\leq 1/2$. Hence step 2 is complete upon application of the contraction mapping theorem.
End of proof of Theorem 1. $\square$
Contraction mapping theorem: Let $\Phi: X\rightarrow X$ be a contraction map, then $\Phi$ admits a unique fixed point $u^*$, i.e., $\Phi(u^*)=u^*$. Also, $u^*$ can be found from the iteration $u^{n+1}=\Phi(u^n)$ (Picard iteration in this case).
Here we show an intuitive "proof" of the contraction mapping theorem. For a contraction map $\Phi$, we have $d(\Phi(u),\Phi(v))\leq q\,d(u,v)$ with $q<1$. Applying the contraction iteratively, we have $d(u^{n+1},u^n)\leq q^n d(u^1, u^0)$. Since $\sum_n q^n<\infty$, the sequence $(u^n)$ is Cauchy; completeness of the space $X$ (which holds for $X=C([0,T]\rightarrow \overline{B(2R)})$) then provides a limit $u^*$, and passing to the limit in $u^{n+1}=\Phi(u^n)$ yields $\Phi(u^*)=u^*$. Uniqueness follows because two fixed points $u^*$, $v^*$ would satisfy $d(u^*,v^*)\leq q\,d(u^*,v^*)$, forcing $d(u^*,v^*)=0$. $\square$
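For a self-contained illustration of the contraction mapping theorem (the map $\Phi(x)=\cos x$ is a toy example of my own choosing, unrelated to the ODE setting above): on $[0,1]$ it is a contraction, since $|\Phi'(x)|=|\sin x|\leq \sin(1)<1$, so the iterates converge to the unique fixed point regardless of the starting point:

```python
import math

# Iterating the contraction Phi(x) = cos(x) on [0, 1]. The iterates
# x_{n+1} = cos(x_n) converge to the unique fixed point x* = cos(x*)
# (~0.739085, sometimes called the Dottie number).
x = 0.0
for n in range(100):
    x = math.cos(x)

print(f"fixed point: {x:.6f}")
print(f"residual |cos(x) - x|: {abs(math.cos(x) - x):.1e}")
```

Each iteration shrinks the distance to the fixed point by at least the factor $q=\sin(1)\approx 0.84$, so 100 iterations are far more than enough for machine precision.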
Uniqueness
Before stating the uniqueness theorem, we need to discuss why it is necessary to consider uniqueness of the solution at all. From a numerical simulation perspective, if we have an ODE with a given initial condition, it seems that the solution is unique and is approached as the time step $\Delta t \rightarrow 0$. From an intuitive perspective, if we throw a stone with a certain initial direction and velocity, the trajectory of the stone should also be unique. So why do we need to prove the uniqueness of the ODE solution in the first place?
To understand this, we need to be aware that, unfortunately, not all ODEs behave as nicely as "throwing a stone". As an example, consider the ODE $y'=2y^{1/2}$ with initial condition $y(0)=0$. From an undergraduate differential equations class, we learn to solve it by separation of variables: $dy/y^{1/2}=2dt \Rightarrow 2y^{1/2}=2t+C \Rightarrow y=t^2$. However, this is not the unique solution of the ODE. It is easy to see that there exists a family of solutions
$y_a(t)=\begin{cases} 0, & \text{if $t<a$},\\ (t-a)^2, & \text{if $t\geq a$},
\end{cases}$
which is valid for any $a\geq 0$ (the case $a=0$ recovers $y=t^2$). This family of solutions is missed by the undergraduate technique because the separation step requires $y\neq 0$, which rules out the $y=0$ branch, itself a valid solution. Moreover, if we examine the right hand side of this ODE, we find that the function $y^{1/2}$ is in fact not Lipschitz continuous near $y=0$, since its derivative blows up there. Hence, demonstrating the uniqueness of solutions to initial value problems of ODEs is genuinely necessary.
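The non-uniqueness above can be checked numerically; the grid and the sample values of $a$ below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check that y' = 2*sqrt(y), y(0) = 0, has many solutions:
# for each shift a >= 0, y_a(t) = 0 for t < a and (t - a)^2 for t >= a
# satisfies both the ODE and the initial condition. We compare a
# finite-difference derivative of y_a against the right hand side.
t = np.linspace(0.0, 3.0, 3001)

residuals = []
for a in [0.0, 0.5, 1.0]:
    y = np.where(t < a, 0.0, (t - a) ** 2)
    dydt = np.gradient(y, t)                    # centered finite differences
    residuals.append(np.max(np.abs(dydt - 2.0 * np.sqrt(y))))
    print(f"a = {a}: max |y' - 2 sqrt(y)| = {residuals[-1]:.1e}")
```

All residuals are at the level of the finite-difference error, confirming that every member of the family solves the same initial value problem.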
Theorem 2 (Picard Uniqueness Theorem) Let $I$ be an interval containing $t_0$, and let $u,v:I\rightarrow V$ be continuously differentiable solutions to (1), i.e., $\partial_t u=F(u)$, $\partial_t v=F(v)$, with $F$ Lipschitz continuous. If $u(t_0)=v(t_0)$, then $u$ and $v$ are identical on $I$.
Proof: Without loss of generality we consider $t_0=0$. We will also prove only for the part of $I$ with $t>0$ because the other part can be argued from symmetry (by replacing $u(t)$ with $u(-t)$ and $F$ by $-F$).
We again start from the integral form
$u(t)=u_0+\int_0^t F(u(s))ds$, $v(t)=v_0+\int_0^t F(v(s))ds$
so that
$u(t)-v(t)=\int_0^t F(u(s)) - F(v(s))ds$.
From the Lipschitz condition on $F$, we have
$||u(t)-v(t)||\leq K\int_0^t ||u(s)-v(s)|| ds$
Further by the integral form of the Grönwall’s inequality (detailed below), we conclude that
$||u(t)-v(t)||\leq exp(\int_0^t K ds)\,||u_0-v_0||=0$.
End of proof of Theorem 2. $\square$
We next review the Grönwall’s inequality, which is a useful tool to bound solutions of ODEs.
Grönwall’s inequality (differential form) Let
$\partial_t f(t) \leq A(t) f(t)$, (5)
with $f:I\rightarrow \mathbb{R}$ a continuously differentiable scalar function and $A:I\rightarrow \mathbb{R}$ a continuous function. Set the initial condition $f(0)=f_0\in \mathbb{R}$. Then
$f(t) \leq exp(\int_0^t A(s)ds) f_0$. (6)
Proof: Applying the integrating factor scheme to (5), we obtain
$\partial_t{\large(}exp(-\int_0^t A(s) ds) f(t){\large)}\leq 0$,
which means
$exp(-\int_0^t A(s) ds) f(t) \leq f_0$,
and thus
$f(t) \leq exp(\int_0^t A(s)ds) f_0$. $\square$
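A quick numerical sanity check of the differential form, with illustrative choices $A(t)=\cos t$, a nonnegative slack term $g(t)=1$ (so that $f'\leq Af$ as in (5)), and $f_0=1$, none of which come from the equations above:

```python
import numpy as np

# Integrate f' = A(t) f - g(t) with A(t) = cos(t) and g(t) = 1 >= 0
# (hence f' <= A f, i.e., inequality (5) holds) by forward Euler with a
# small step, and verify the Gronwall bound
# f(t) <= exp(\int_0^t A(s) ds) f0 = exp(sin t) at every step.
f0, dt, T = 1.0, 1e-4, 5.0
f, ok = f0, True
for t in np.arange(0.0, T, dt):
    bound = np.exp(np.sin(t)) * f0     # \int_0^t cos(s) ds = sin(t)
    ok = ok and (f <= bound + 1e-6)    # small tolerance for Euler error
    f += dt * (np.cos(t) * f - 1.0)    # f' = A f - g

print("Gronwall bound holds at every step:", ok)
```

The bound is not tight here (the slack $g$ pushes $f$ well below it), which is exactly the point: any nonnegative slack preserves the inequality.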
Grönwall’s inequality (integral form) Let $I$ be an interval with left endpoint $0$, and let $f: I\rightarrow \mathbb{R}$ and $A: I\rightarrow [0,\infty)$ be continuous functions obeying the inequality
$ f(t) \leq f_0 + \int_0^t A(s) f(s) ds $, (7)
then one has (6) for $t\in I$.
Proof: From (7) and that the function $t\mapsto f_0 + \int_0^t A(s) f(s) ds$ is continuously differentiable, we obtain
$\frac{d}{dt}(f_0 + \int_0^t A(s) f(s) ds) = A(t)f(t) \leq A(t) (f_0 + \int_0^t A(s) f(s) ds)$.
Applying the differential form of the Grönwall’s inequality (with $f_0 + \int_0^t A(s) f(s)\, ds$ as a whole playing the role of $f(t)$), we have
$f_0 + \int_0^t A(s) f(s) ds \leq exp(\int_0^t A(s)ds) f_0$.
The claim then follows by applying (7) to the above equation. $\square$
We remark that the integral form of the Grönwall’s inequality cannot be directly obtained from the differential form. This is because (7) does not imply (5), i.e., an inequality on the integral of a function does not imply the same inequality on the function itself. Starting from the weaker condition (7), we additionally need $A(t)\geq 0$ in the proof of the integral form. Furthermore, it is now clear that the proof of Theorem 2 only needs the integral form of the Grönwall’s inequality, with $||u(t)-v(t)||$ playing the role of $f(t)$.
Back to the kinetic equation problem
The Picard existence and uniqueness theorems guarantee the local well-posedness of ODEs/PDEs. The critical condition is that $F$ be Lipschitz continuous. Going back to the wave kinetic equation problem, this is indeed what we need to prove (which is the difficult part) to understand its local well-posedness. In the PDE setting of the kinetic equation, we first set up an appropriate functional norm for $n(k)$ and then apply the Picard theorems to establish local well-posedness. I will update this post with a preprint of the paper when it is available.
References
Terry Tao's blog: 254A, Notes 1: Local well-posedness of the Navier-Stokes equations.
Wikipedia; StackExchange