Kreyszig-2.2-Normed Space, Banach Space

Problem 1. Show that the norm \(\|x\|\) of x is the distance from x to O.

Definition:

A norm on a (real or complex) vector space \(X\) is a real-valued function on \(X\) whose value at an \(x \in X\) is denoted by \(\|x\|\) (read "norm of x") and which has the properties

  • (N1) \(\|x\| \geq 0\) (Non-negativity)

  • (N2) \(\|x\| = 0 \iff x = 0\) (Definiteness)

  • (N3) \(\|a x\| = |a| \|x\|\) (Homogeneity)

  • (N4) \(\|x + y\| \leq \|x\| + \|y\|\) (Triangle Inequality);

here \(x\) and \(y\) are arbitrary vectors in \(X\) and \(a\) is any scalar.

Solution:

The norm \(\|x\|\) of a vector \(x\) in a vector space is a generalization of the notion of "length" of a vector. It measures the size of vectors and is consistent with our geometric intuition.

In a normed vector space, the distance \(d\) between two vectors \(x\) and \(y\) is defined as:

\begin{equation*} d(x, y) = \|x - y\| \end{equation*}

The distance from any vector \(x\) to the origin \(O\) (the zero vector \(0\)) is then:

\begin{equation*} d(x, O) = \|x - 0\| \end{equation*}

Since subtracting the zero vector does not change the vector \(x\), we have:

\begin{equation*} d(x, O) = \|x\| \end{equation*}

Thus, the norm \(\|x\|\) is the distance from the vector \(x\) to the origin \(O\) in the vector space \(X\). This relationship holds in any normed vector space, whether it be a space of real numbers, complex numbers, or more abstract objects.
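This identity is easy to sanity-check numerically. Below is a minimal sketch using the Euclidean norm on \(\mathbb{R}^3\); the helper names `norm` and `dist` are our own and not part of the text:

```python
import math

def norm(v):
    """Euclidean norm ||v|| = sqrt(sum of squares)."""
    return math.sqrt(sum(c * c for c in v))

def dist(u, v):
    """Induced metric d(u, v) = ||u - v||."""
    return norm([a - b for a, b in zip(u, v)])

x = [3.0, -4.0, 12.0]
origin = [0.0, 0.0, 0.0]

# d(x, O) = ||x - 0|| = ||x||: both evaluate to 13.0 here.
assert dist(x, origin) == norm(x)
```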


Problem 2. Verify that the usual length of a vector in the plane or in three-dimensional space has the properties (N1) to (N4) of a norm.

Solution:

In the Plane (\(\mathbb{R}^2\))

For a vector \(x = (x_1, x_2)\) in \(\mathbb{R}^2\), the usual length (Euclidean norm) is defined as:

\begin{equation*} \|x\| = \sqrt{x_1^2 + x_2^2} \end{equation*}

Property (N1): Non-negativity

\begin{equation*} \|x\| = \sqrt{x_1^2 + x_2^2} \geq 0 \end{equation*}

The square of any real number is non-negative, and the square root of a non-negative number is also non-negative.

Property (N2): Definiteness

\begin{equation*} \|x\| = 0 \iff x_1^2 + x_2^2 = 0 \iff x_1 = 0 \text{ and } x_2 = 0 \iff x = (0, 0) \end{equation*}

The norm is zero if and only if both components of the vector are zero.

Property (N3): Homogeneity

For any scalar \(a\) and vector \(x = (x_1, x_2)\),

\begin{equation*} \|a \cdot x\| = \| (a x_1, a x_2) \| = \sqrt{(a x_1)^2 + (a x_2)^2} = |a| \cdot \sqrt{x_1^2 + x_2^2} = |a| \cdot \|x\| \end{equation*}

The norm of a scaled vector is the absolute value of the scalar times the norm of the vector.

Property (N4): Triangle Inequality

For any vectors \(x = (x_1, x_2)\) and \(y = (y_1, y_2)\), let's consider the norm of their sum:

\begin{equation*} \|x + y\| = \| (x_1 + y_1, x_2 + y_2) \| = \sqrt{(x_1 + y_1)^2 + (x_2 + y_2)^2} \end{equation*}

To prove the triangle inequality, we expand the square of the norm of \(x + y\):

\begin{equation*} \|x + y\|^2 = x_1^2 + 2x_1y_1 + y_1^2 + x_2^2 + 2x_2y_2 + y_2^2 \end{equation*}

We can rewrite this as:

\begin{equation*} \|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2 \end{equation*}

By the Cauchy-Schwarz inequality, we know:

\begin{equation*} |\langle x, y \rangle| \leq \|x\| \cdot \|y\| \end{equation*}

So we have:

\begin{equation*} \|x + y\|^2 \leq \|x\|^2 + 2\|x\| \cdot \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2 \end{equation*}

Taking the square root of both sides (and remembering that the square root function is increasing), we get:

\begin{equation*} \|x + y\| \leq \|x\| + \|y\| \end{equation*}

This completes the proof of the triangle inequality for vectors in \(\mathbb{R}^2\).
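Both inequalities used above, Cauchy-Schwarz and the resulting triangle inequality, can be spot-checked on random vectors in \(\mathbb{R}^2\). This is an illustrative numerical sketch, not a substitute for the proof:

```python
import math
import random

def norm2(v):
    """Euclidean norm in the plane."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def inner(u, v):
    """Standard inner product <u, v> = u1*v1 + u2*v2."""
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    # Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
    assert abs(inner(x, y)) <= norm2(x) * norm2(y) + 1e-9
    # Triangle inequality: ||x + y|| <= ||x|| + ||y||
    s = (x[0] + y[0], x[1] + y[1])
    assert norm2(s) <= norm2(x) + norm2(y) + 1e-9
```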

In Three-Dimensional Space (\(\mathbb{R}^3\))

The verification of properties (N1) to (N4) in three-dimensional space follows similarly to that in the plane, with the addition of the third component for each vector. The proof of the triangle inequality in \(\mathbb{R}^3\) follows the same steps as above.

Concavity of the Square Root Function

The function \(f(t) = \sqrt{t}\) is concave on \([0, \infty)\) because its second derivative is negative:

\begin{equation*} f''(t) = -\frac{1}{4t^{3/2}} < 0 \text{ for all } t > 0 \end{equation*}

Concavity alone is not the triangle inequality, but combined with \(f(0) = 0\) it makes the square root subadditive: \(\sqrt{s + t} \leq \sqrt{s} + \sqrt{t}\) for all \(s, t \geq 0\). This subadditivity is in the same spirit as the triangle inequality we have just proven.

The above reasoning solidifies that the Euclidean norm satisfies the triangle inequality, completing the verification that it indeed constitutes a norm in both \(\mathbb{R}^2\) and \(\mathbb{R}^3\).

Detailed Transition

Consider two vectors \(x\) and \(y\) in \(\mathbb{R}^n\). The squared norm of their sum is:

\begin{equation*} \|x + y\|^2 = \langle x + y, x + y \rangle \end{equation*}

Expanding the inner product:

\begin{equation*} \|x + y\|^2 = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle \end{equation*}

The inner product of a vector with itself is the square of its norm:

\begin{equation*} \|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2 \end{equation*}

By the Cauchy-Schwarz inequality:

\begin{equation*} |\langle x, y \rangle| \leq \|x\| \cdot \|y\| \end{equation*}

This implies:

\begin{equation*} 2|\langle x, y \rangle| \leq 2\|x\| \cdot \|y\| \end{equation*}

Since \(\langle x, y \rangle \leq |\langle x, y \rangle|\), we may drop the absolute value on the left-hand side:

\begin{equation*} 2\langle x, y \rangle \leq 2\|x\| \cdot \|y\| \end{equation*}

Substituting back into our expanded norm equation:

\begin{equation*} \|x + y\|^2 \leq \|x\|^2 + 2\|x\| \cdot \|y\| + \|y\|^2 \end{equation*}

The right-hand side is the square of \(\|x\| + \|y\|\):

\begin{equation*} \|x + y\|^2 \leq (\|x\| + \|y\|)^2 \end{equation*}

Taking the square root of both sides, since the square root function is monotonically increasing:

\begin{equation*} \|x + y\| \leq \|x\| + \|y\| \end{equation*}

This is the triangle inequality for norms, demonstrating that the Euclidean norm satisfies property (N4).


Problem 4. Show that we may replace (N2) by \(\|x\| = 0 \implies x = 0\) without altering the concept of a norm. Show that non-negativity of a norm also follows from (N3) and (N4).

Solution:

Part 1: Replacing (N2)

Suppose the definiteness axiom (N2) is weakened to the single implication \(\|x\| = 0 \implies x = 0\). The converse implication comes for free: by (N3) with scalar \(a = 0\), \(\|0\| = \|0 \cdot x\| = |0| \, \|x\| = 0\), so \(x = 0 \implies \|x\| = 0\). Hence the weakened condition, together with (N3), recovers the full equivalence \(\|x\| = 0 \iff x = 0\), and the concept of a norm is unaltered.

Part 2: Non-negativity from (N3) and (N4)

Property (N3) states that \(\|a x\| = |a| \|x\|\) for any scalar \(a\) and any vector \(x\). This property is known as absolute homogeneity or scalability.

Property (N4) is the triangle inequality, which states that \(\|x + y\| \leq \|x\| + \|y\|\) for any vectors \(x\) and \(y\).

To show that non-negativity follows from (N3) and (N4), consider the following:

For any vector \(x\) in the vector space, by property (N3), we have:

\begin{equation*} \|0 \cdot x\| = |0| \|x\| = 0 \end{equation*}

Here we used the fact that multiplying any vector by zero yields the zero vector, and the absolute value of zero is zero. This gives us the result that \(\|0\| = 0\).

Now note that (N3) with scalar \(-1\) gives \(\|-x\| = |-1| \, \|x\| = \|x\|\). Using the triangle inequality (N4), for any vector \(x\):

\begin{equation*} 0 = \|0\| = \|x + (-x)\| \leq \|x\| + \|-x\| = 2\|x\| \end{equation*}

Dividing both sides by 2 yields \(\|x\| \geq 0\). Thus non-negativity is a consequence of (N3) and (N4) alone and need not be assumed separately.

Together, these parts demonstrate that (N2) may be weakened to the single implication \(\|x\| = 0 \implies x = 0\) without changing the concept of a norm, and that non-negativity can be derived from (N3) and (N4), confirming that the remaining properties suffice to define a norm.
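As a concrete illustration (using the \(\ell^1\) norm on \(\mathbb{R}^3\) as a stand-in for an abstract norm; any norm would serve), the facts \(\|0\| = 0\) and \(\|x\| \geq 0\) can be checked numerically:

```python
def norm1(v):
    """l^1 norm; an arbitrary concrete norm chosen for illustration."""
    return sum(abs(c) for c in v)

x = [2.5, -1.0, 4.0]
neg_x = [-c for c in x]

# (N3) with scalar 0: ||0 * x|| = |0| * ||x|| = 0
assert norm1([0.0 * c for c in x]) == 0.0
# (N4): 0 = ||x + (-x)|| <= ||x|| + ||-x|| = 2 ||x||, hence ||x|| >= 0
zero = [a + b for a, b in zip(x, neg_x)]
assert norm1(zero) <= norm1(x) + norm1(neg_x)
assert norm1(x) >= 0.0
```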


Problem 5. Show that the Euclidean norm with components \(x_i\) replaced by \(\xi_i\) and scalar \(a\) replaced by \(\alpha\) defines a norm on the vector space \(\mathbb{R}^n\).

Solution:

To demonstrate that the Euclidean norm defines a norm on \(\mathbb{R}^n\) with the components \(x_i\) replaced by \(\xi_i\) and the scalar \(a\) replaced by \(\alpha\), we must verify that it satisfies the following properties:

  1. Non-negativity: For any vector \(x\), since each component \(\xi_i\) is squared, the sum is non-negative. Therefore, \(\|x\| \geq 0\).

  2. Definiteness: The norm \(\|x\|\) equals zero if and only if every \(\xi_i\) is zero, which implies that \(x\) is the zero vector.

  3. Homogeneity (or scalability): For any scalar \(\alpha\) and vector \(x\), the norm of the scaled vector is given by:

    \begin{equation*} \|\alpha x\| = \sqrt{\sum_{i=1}^{n} (\alpha \xi_i)^2} = |\alpha| \sqrt{\sum_{i=1}^{n} \xi_i^2} = |\alpha| \|x\| \end{equation*}
  4. Triangle Inequality: For vectors \(x = (\xi_1, \xi_2, \ldots, \xi_n)\) and \(y = (\eta_1, \eta_2, \ldots, \eta_n)\), we need to show that \(\|x + y\| \leq \|x\| + \|y\|\).

    Starting with the left side of the inequality:

    \begin{equation*} \|x + y\|^2 = \sum_{i=1}^{n} (\xi_i + \eta_i)^2 = \sum_{i=1}^{n} (\xi_i^2 + 2\xi_i\eta_i + \eta_i^2) \end{equation*}

    Applying the Cauchy-Schwarz inequality:

    \begin{equation*} \left| \sum_{i=1}^{n} \xi_i\eta_i \right| \leq \sqrt{\sum_{i=1}^{n} \xi_i^2} \cdot \sqrt{\sum_{i=1}^{n} \eta_i^2} \end{equation*}

    We then have:

    \begin{equation*} 2\sum_{i=1}^{n} \xi_i\eta_i \leq 2\sqrt{\sum_{i=1}^{n} \xi_i^2} \cdot \sqrt{\sum_{i=1}^{n} \eta_i^2} \end{equation*}

    Substituting this back into the squared norm of \(x + y\), we get:

    \begin{equation*} \|x + y\|^2 \leq \sum_{i=1}^{n} \xi_i^2 + 2\sqrt{\sum_{i=1}^{n} \xi_i^2} \cdot \sqrt{\sum_{i=1}^{n} \eta_i^2} + \sum_{i=1}^{n} \eta_i^2 \end{equation*}

    Which simplifies to:

    \begin{equation*} \|x + y\|^2 \leq \left( \sqrt{\sum_{i=1}^{n} \xi_i^2} + \sqrt{\sum_{i=1}^{n} \eta_i^2} \right)^2 \end{equation*}

    Taking the square root of both sides:

    \begin{equation*} \|x + y\| \leq \sqrt{\sum_{i=1}^{n} \xi_i^2} + \sqrt{\sum_{i=1}^{n} \eta_i^2} \end{equation*}

    Thus, we have proven the triangle inequality:

    \begin{equation*} \|x + y\| \leq \|x\| + \|y\| \end{equation*}

By confirming these properties, we have shown that the Euclidean norm with substitutions \(\xi_i\) for \(x_i\) and \(\alpha\) for \(a\) indeed defines a norm on the vector space \(\mathbb{R}^n\).
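The four properties can also be spot-checked for random vectors in \(\mathbb{R}^n\); the following sketch (the helper `euclid` is our own) tests (N1)-(N4) numerically up to floating-point tolerance:

```python
import math
import random

def euclid(v):
    """||x|| = (sum of xi^2)^(1/2), the Euclidean norm on R^n."""
    return math.sqrt(sum(c * c for c in v))

random.seed(1)
n = 7
for _ in range(500):
    x = [random.uniform(-3, 3) for _ in range(n)]
    y = [random.uniform(-3, 3) for _ in range(n)]
    a = random.uniform(-4, 4)
    assert euclid(x) >= 0.0                                  # (N1)
    scaled = [a * c for c in x]
    assert math.isclose(euclid(scaled), abs(a) * euclid(x))  # (N3)
    s = [p + q for p, q in zip(x, y)]
    assert euclid(s) <= euclid(x) + euclid(y) + 1e-9         # (N4)
assert euclid([0.0] * n) == 0.0                              # (N2)
```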


Problem 6. Let \(X\) be the vector space of all ordered pairs \(x = (\xi_1, \xi_2)\), \(y = (\eta_1, \eta_2)\), ... of real numbers. We are to show that norms on \(X\) are defined by:

\begin{equation*} \|x\|_1 = |\xi_1| + |\xi_2| \end{equation*}
\begin{equation*} \|x\|_2 = (\xi_1^2 + \xi_2^2)^{1/2} \end{equation*}
\begin{equation*} \|x\|_{\infty} = \max\{|\xi_1|, |\xi_2|\} \end{equation*}

Solution:

  1. For the \(L^1\) norm:

    • Non-negativity: Since absolute values are always non-negative, we have \(|\xi_1| + |\xi_2| \geq 0\).

    • Definiteness: \(\|x\|_1 = 0\) if and only if \(|\xi_1| = 0\) and \(|\xi_2| = 0\), which occurs if and only if \(\xi_1 = 0\) and \(\xi_2 = 0\), hence \(x = 0\).

    • Scalar multiplication: For any scalar \(\alpha\), \(\|\alpha x\|_1 = |\alpha \xi_1| + |\alpha \xi_2| = |\alpha|(|\xi_1| + |\xi_2|) = |\alpha| \|x\|_1\).

    • Triangle inequality: For any vectors \(x = (\xi_1, \xi_2)\) and \(y = (\eta_1, \eta_2)\), \(\|x + y\|_1 = |(\xi_1 + \eta_1)| + |(\xi_2 + \eta_2)| \leq (|\xi_1| + |\eta_1|) + (|\xi_2| + |\eta_2|) = \|x\|_1 + \|y\|_1\).

  2. For the \(L^2\) norm:

    • Non-negativity: The sum of squares is non-negative, and so is their square root, hence \(\|x\|_2 \geq 0\).

    • Definiteness: \(\|x\|_2 = 0\) if and only if \(\xi_1^2 + \xi_2^2 = 0\), which occurs only when \(\xi_1 = 0\) and \(\xi_2 = 0\), thus \(x = 0\).

    • Scalar multiplication: \(\|\alpha x\|_2 = ((\alpha \xi_1)^2 + (\alpha \xi_2)^2)^{1/2} = |\alpha| (\xi_1^2 + \xi_2^2)^{1/2} = |\alpha| \|x\|_2\).

    • Triangle inequality: This follows from the Minkowski inequality, which is a general result and holds for the \(L^2\) norm.

  3. For the \(L^\infty\) norm:

    • Non-negativity: The maximum of absolute values is non-negative, so \(\|x\|_{\infty} \geq 0\).

    • Definiteness: \(\|x\|_{\infty} = 0\) if and only if both \(|\xi_1| = 0\) and \(|\xi_2| = 0\), which means \(x = 0\).

    • Scalar multiplication: For any scalar \(\alpha\), \(\|\alpha x\|_{\infty} = \max\{|\alpha \xi_1|, |\alpha \xi_2|\} = |\alpha| \max\{|\xi_1|, |\xi_2|\} = |\alpha| \|x\|_{\infty}\).

    • Triangle inequality: For any vectors \(x\) and \(y\), \(\|x + y\|_{\infty} \leq \|x\|_{\infty} + \|y\|_{\infty}\) because the maximum absolute value of the sum of components is less than or equal to the sum of the maximum absolute values.

Triangle inequality

For the \(L^2\) norm, we want to prove the triangle inequality:

\begin{equation*} \|x + y\|_2 \leq \|x\|_2 + \|y\|_2 \end{equation*}

where \(x = (\xi_1, \xi_2)\) and \(y = (\eta_1, \eta_2)\). We start by squaring both sides of the inequality:

\begin{equation*} (\|x + y\|_2)^2 \leq (\|x\|_2 + \|y\|_2)^2 \end{equation*}

Expanding the left-hand side, we have:

\begin{equation*} (\xi_1 + \eta_1)^2 + (\xi_2 + \eta_2)^2 \end{equation*}

And the right-hand side becomes:

\begin{equation*} (\|x\|_2)^2 + 2\|x\|_2\|y\|_2 + (\|y\|_2)^2 \end{equation*}

Simplifying the norms, we obtain:

\begin{equation*} \xi_1^2 + 2\xi_1\eta_1 + \eta_1^2 + \xi_2^2 + 2\xi_2\eta_2 + \eta_2^2 \leq \xi_1^2 + \xi_2^2 + 2\sqrt{(\xi_1^2 + \xi_2^2)(\eta_1^2 + \eta_2^2)} + \eta_1^2 + \eta_2^2 \end{equation*}

The inequality holds due to the Cauchy-Schwarz inequality, which asserts:

\begin{equation*} (\sum a_i b_i)^2 \leq (\sum a_i^2)(\sum b_i^2) \end{equation*}

In our case, it implies:

\begin{equation*} (2\xi_1\eta_1 + 2\xi_2\eta_2)^2 \leq (2\sqrt{(\xi_1^2 + \xi_2^2)(\eta_1^2 + \eta_2^2)})^2 \end{equation*}

Since the right-hand side is non-negative, taking square roots gives \(2\xi_1\eta_1 + 2\xi_2\eta_2 \leq 2\sqrt{(\xi_1^2 + \xi_2^2)(\eta_1^2 + \eta_2^2)}\). Adding \(\xi_1^2 + \xi_2^2 + \eta_1^2 + \eta_2^2\) to both sides establishes the squared comparison above, and taking square roots once more gives the triangle inequality for the \(L^2\) norm:

\begin{equation*} \|x + y\|_2 \leq \|x\|_2 + \|y\|_2 \end{equation*}
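All three norms can be checked against the axioms on random pairs. A numerical sketch (the function names are our own):

```python
import random

def norm_1(x):
    return abs(x[0]) + abs(x[1])

def norm_2(x):
    return (x[0] ** 2 + x[1] ** 2) ** 0.5

def norm_inf(x):
    return max(abs(x[0]), abs(x[1]))

random.seed(2)
for norm in (norm_1, norm_2, norm_inf):
    assert norm((0.0, 0.0)) == 0.0                      # definiteness at 0
    for _ in range(300):
        x = (random.uniform(-5, 5), random.uniform(-5, 5))
        y = (random.uniform(-5, 5), random.uniform(-5, 5))
        a = random.uniform(-3, 3)
        assert norm(x) >= 0.0                           # non-negativity
        ax = (a * x[0], a * x[1])
        assert abs(norm(ax) - abs(a) * norm(x)) < 1e-9  # homogeneity
        s = (x[0] + y[0], x[1] + y[1])
        assert norm(s) <= norm(x) + norm(y) + 1e-9      # triangle inequality
```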

Problem 7. Prove that the vector space of all continuous real-valued functions on \([a, b]\) forms a normed space \(X\) with norm defined by

\begin{equation*} \|x\| = \left( \int_a^b x(t)^2 \, dt \right)^{1/2} \end{equation*}

i.e., show that this norm satisfies the properties (N1) to (N4).

Solution:

To prove that the given norm satisfies the properties (N1) to (N4), we consider two functions \(x(t)\) and \(y(t)\) from the vector space, and a scalar \(\alpha\).

(N1) Non-negativity: Since \(x(t)^2 \geq 0\) for all \(t\), it follows that

\begin{equation*} \|x\| = \left( \int_a^b x(t)^2 \, dt \right)^{1/2} \geq 0. \end{equation*}

(N2) Definiteness: If \(\|x\| = 0\), then

\begin{equation*} \int_a^b x(t)^2 \, dt = 0. \end{equation*}

Since \(x(t)^2 \geq 0\) and \(x\) is continuous, the integrand must vanish identically: if \(x(t_0) \neq 0\) for some \(t_0 \in [a, b]\), continuity would force \(x(t)^2 > 0\) on a whole subinterval around \(t_0\), making the integral positive. Hence \(x(t) = 0\) for all \(t \in [a, b]\), that is, \(x = 0\).

(N3) Scalar multiplication: We have

\begin{equation*} \|\alpha x\| = \left( \int_a^b (\alpha x(t))^2 \, dt \right)^{1/2} = |\alpha| \left( \int_a^b x(t)^2 \, dt \right)^{1/2} = |\alpha| \|x\|. \end{equation*}

(N4) Triangle inequality: The proof of the triangle inequality for this norm involves the Cauchy-Schwarz inequality for integrals. We start by expanding the square of the norm of the sum:

\begin{equation*} \|x + y\|^2 = \int_a^b (x(t) + y(t))^2 \, dt. \end{equation*}

Expanding the integrand and applying the Cauchy-Schwarz inequality, we get:

\begin{equation*} \int_a^b (x(t) + y(t))^2 \, dt = \int_a^b x(t)^2 \, dt + 2\int_a^b x(t)y(t) \, dt + \int_a^b y(t)^2 \, dt \leq \int_a^b x(t)^2 \, dt + 2\left(\int_a^b x(t)^2 \, dt\right)^{1/2} \left(\int_a^b y(t)^2 \, dt\right)^{1/2} + \int_a^b y(t)^2 \, dt. \end{equation*}

This implies:

\begin{equation*} \|x + y\|^2 \leq \left( \left( \int_a^b x(t)^2 \, dt \right)^{1/2} + \left( \int_a^b y(t)^2 \, dt \right)^{1/2} \right)^2. \end{equation*}

Taking the square root of both sides, we obtain the triangle inequality:

\begin{equation*} \|x + y\| \leq \|x\| + \|y\|. \end{equation*}

This completes the proof that the vector space of all continuous real-valued functions on \([a, b]\) with the given norm is a normed space.

In more detail, the triangle inequality is obtained as follows:

\begin{equation*} \|x+y\|^2 = \int_a^b (x(t) + y(t))^2 \, dt \end{equation*}

We expand the integrand:

\begin{equation*} \int_a^b (x(t) + y(t))^2 \, dt = \int_a^b (x(t)^2 + 2x(t)y(t) + y(t)^2) \, dt \end{equation*}

We then split the integral:

\begin{equation*} \int_a^b (x(t)^2 + 2x(t)y(t) + y(t)^2) \, dt = \int_a^b x(t)^2 \, dt + 2\int_a^b x(t)y(t) \, dt + \int_a^b y(t)^2 \, dt \end{equation*}

Using the Cauchy-Schwarz inequality for integrals to handle the cross-term:

\begin{equation*} \left(\int_a^b x(t)y(t) \, dt\right)^2 \leq \left(\int_a^b x(t)^2 \, dt\right) \left(\int_a^b y(t)^2 \, dt\right) \end{equation*}

This implies that:

\begin{equation*} 2\int_a^b x(t)y(t) \, dt \leq 2\left(\int_a^b x(t)^2 \, dt\right)^{1/2} \left(\int_a^b y(t)^2 \, dt\right)^{1/2} \end{equation*}

Combine the results to get an upper bound for the integral of the sum:

\begin{equation*} \|x+y\|^2 \leq \int_a^b x(t)^2 \, dt + 2\left(\int_a^b x(t)^2 \, dt\right)^{1/2} \left(\int_a^b y(t)^2 \, dt\right)^{1/2} + \int_a^b y(t)^2 \, dt \end{equation*}

Recognizing that the right-hand side is a perfect square:

\begin{equation*} \|x+y\|^2 \leq \left( \left( \int_a^b x(t)^2 \, dt \right)^{1/2} + \left( \int_a^b y(t)^2 \, dt \right)^{1/2} \right)^2 \end{equation*}

Since both sides are non-negative, we can take the square root:

\begin{equation*} \|x+y\| \leq \left( \int_a^b x(t)^2 \, dt \right)^{1/2} + \left( \int_a^b y(t)^2 \, dt \right)^{1/2} \end{equation*}

Which simplifies to the triangle inequality for the \(L^2\) norm:

\begin{equation*} \|x+y\| \leq \|x\| + \|y\| \end{equation*}

This completes the proof for the triangle inequality of the \(L^2\) norm in the vector space of continuous real-valued functions on \([a, b]\).
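The integral norm can be approximated numerically with a quadrature rule; the sketch below uses a midpoint Riemann sum (the helper `l2_norm` and the grid size are our own choices), so the inequality is verified only up to quadrature error:

```python
import math

def l2_norm(f, a, b, n=20000):
    """Approximate (integral_a^b f(t)^2 dt)^(1/2) by a midpoint Riemann sum."""
    h = (b - a) / n
    total = sum(f(a + (i + 0.5) * h) ** 2 for i in range(n))
    return math.sqrt(total * h)

a, b = 0.0, 1.0
x, y = math.sin, math.cos
s = lambda t: x(t) + y(t)

# Triangle inequality ||x + y|| <= ||x|| + ||y|| for x = sin, y = cos on [0, 1]
assert l2_norm(s, a, b) <= l2_norm(x, a, b) + l2_norm(y, a, b) + 1e-9
```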


Problem 8. There are several norms of practical importance on the vector space of ordered \(n\)-tuples of numbers:

  • \(\|x\|_1 = |\xi _1| + |\xi _2| + \ldots + |\xi _n|\)

  • \(\|x\|_p = (|\xi _1|^p + |\xi _2|^p + \ldots + |\xi _n|^p)^{1/p}\) for \(p \geq 1\)

  • \(\|x\|_\infty = \max\{|\xi _1|, |\xi _2|, \ldots, |\xi _n|\}\)

Verify that each of these defines a norm.

Solution:

To verify that each of these functions is a norm, we need to show they satisfy the four properties of norms:

  1. Non-negativity: \(||x|| \geq 0\) for all \(x \in X\).

  2. Definiteness: \(||x|| = 0\) if and only if \(x\) is the zero vector.

  3. Homogeneity (or scalability): \(||\alpha x|| = |\alpha| ||x||\) for any scalar \(\alpha\) and any \(x \in X\).

  4. Triangle inequality: \(||x + y|| \leq ||x|| + ||y||\) for all \(x, y \in X\).

For the \(p\)-norm, the first three properties are straightforward to verify. The triangle inequality for the \(p\)-norm is established by Minkowski's inequality.

\begin{equation*} \left(\sum_{i=1}^{n} |\xi _i + \eta _i|^p\right)^{1/p} \leq \left(\sum_{i=1}^{n} |\xi _i|^p\right)^{1/p} + \left(\sum_{i=1}^{n} |\eta _i|^p\right)^{1/p} \end{equation*}

This is the triangle inequality for the \(p\)-norms. To prove Minkowski's inequality, we consider:

  • For \(p=1\), the inequality reduces to the triangle inequality for absolute values, which is trivially true.

  • For \(p>1\), we use Hölder's inequality, which for \(\frac{1}{p} + \frac{1}{q} = 1\) (where \(p,q>1\)), states:

\begin{equation*} \sum_{i=1}^{n} |\xi _i \eta _i| \leq \left(\sum_{i=1}^{n} |\xi _i|^p\right)^{1/p} \left(\sum_{i=1}^{n} |\eta _i|^q\right)^{1/q} \end{equation*}

By applying Hölder's inequality, we rewrite the left side of Minkowski's inequality as follows:

\begin{equation*} \sum_{i=1}^{n} |\xi _i + \eta _i|^p = \sum_{i=1}^{n} |\xi _i + \eta _i|^{p-1} |\xi _i + \eta _i| \end{equation*}

We now bound the sum. By the triangle inequality for real numbers, \(|\xi _i + \eta _i| \leq |\xi _i| + |\eta _i|\), so

\begin{equation*} \sum_{i=1}^{n} |\xi _i + \eta _i|^p \leq \sum_{i=1}^{n} |\xi _i| \, |\xi _i + \eta _i|^{p-1} + \sum_{i=1}^{n} |\eta _i| \, |\xi _i + \eta _i|^{p-1} \end{equation*}

We apply Hölder's inequality to each sum on the right, pairing \(|\xi _i|\) (respectively \(|\eta _i|\)) with \(|\xi _i + \eta _i|^{p-1}\); since \(\frac{1}{p} + \frac{1}{q} = 1\) gives \((p-1)q = p\), the common second factor is \(\left(\sum_{i=1}^{n} |\xi _i + \eta _i|^p\right)^{1/q}\). Summing the two resulting inequalities, we obtain:

\begin{equation*} \sum_{i=1}^{n} |\xi _i + \eta _i|^p \leq \left(\left(\sum_{i=1}^{n} |\xi _i|^p\right)^{1/p} + \left(\sum_{i=1}^{n} |\eta _i|^p\right)^{1/p}\right) \left(\sum_{i=1}^{n} |\xi _i + \eta _i|^p\right)^{1/q} \end{equation*}

If \(\sum_{i=1}^{n} |\xi _i + \eta _i|^p = 0\) the inequality is trivial; otherwise, dividing both sides by \(\left(\sum_{i=1}^{n} |\xi _i + \eta _i|^p\right)^{1/q}\) and using \(1 - \frac{1}{q} = \frac{1}{p}\) completes the proof of Minkowski's inequality and establishes the triangle inequality for the \(p\)-norm.

By verifying that each function satisfies all four norm properties, we show that \(\|x\|_1\), \(\|x\|_p\), and \(\|x\|_\infty\) each define a norm on the vector space \(X\).
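Minkowski's inequality can be spot-checked numerically for several exponents \(p\), including \(p = \infty\). An illustrative sketch:

```python
import random

def p_norm(x, p):
    """||x||_p for p >= 1, with p = inf giving the max norm."""
    if p == float("inf"):
        return max(abs(c) for c in x)
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

random.seed(3)
for p in (1, 1.5, 2, 3, 7, float("inf")):
    for _ in range(300):
        x = [random.uniform(-4, 4) for _ in range(5)]
        y = [random.uniform(-4, 4) for _ in range(5)]
        s = [u + v for u, v in zip(x, y)]
        # Minkowski: ||x + y||_p <= ||x||_p + ||y||_p
        assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-9
```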


Problem 9. Verify that the space \(C[a, b]\) with the norm given by

\begin{equation*} \|x\| = \max_{t \in [a, b]} |x(t)| \end{equation*}

where \([a, b]\) is the interval, defines a norm.

Solution:

To verify that the given formula defines a norm on the space \(C[a, b]\), we need to check that it satisfies the following properties for all functions \(x, y \in C[a, b]\) and all scalars \(\lambda\):

  1. Non-negativity: \(\|x\| \geq 0\), and \(\|x\| = 0\) if and only if \(x(t) = 0\) for all \(t \in [a, b]\).

  2. Absolute scalability: \(\|\lambda x\| = |\lambda| \|x\|\).

  3. Triangle inequality: \(\|x + y\| \leq \|x\| + \|y\|\).

Non-negativity

For any \(x \in C[a, b]\), since \(x(t)\) is a continuous function on a closed interval, it will attain a maximum absolute value which is non-negative. Thus, \(\|x\| = \max_{t \in [a, b]} |x(t)| \geq 0\). Also, \(\|x\| = 0\) if and only if \(|x(t)| = 0\) for all \(t\), which means \(x(t) = 0\) for all \(t \in [a, b]\).

Absolute scalability

For any scalar \(\lambda\) and any \(x \in C[a, b]\), we have:

\begin{equation*} \|\lambda x\| = \max_{t \in [a, b]} |\lambda x(t)| = |\lambda| \max_{t \in [a, b]} |x(t)| = |\lambda| \|x\| \end{equation*}

This follows because the absolute value is multiplicative: \(|ab| = |a||b|\) for all \(a, b\).

Triangle inequality

The triangle inequality states that for any \(x, y \in C[a, b]\), the norm of their sum is less than or equal to the sum of their norms:

\begin{equation*} \|x + y\| = \max_{t \in [a, b]} |x(t) + y(t)| \leq \max_{t \in [a, b]} (|x(t)| + |y(t)|) \leq \max_{t \in [a, b]} |x(t)| + \max_{t \in [a, b]} |y(t)| = \|x\| + \|y\| \end{equation*}

The inequality \(|x(t) + y(t)| \leq |x(t)| + |y(t)|\) follows from the triangle inequality for absolute values, and we use the fact that the maximum value of a sum is less than or equal to the sum of the maximum values.

Since the given norm satisfies all three properties, it is indeed a norm on the space \(C[a, b]\).
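The maximum norm can be approximated by evaluating on a fine grid; the sketch below (the helper `sup_norm` and the grid size are our own) checks the triangle inequality up to discretization error:

```python
import math

def sup_norm(f, a, b, n=10001):
    """Approximate max_{t in [a, b]} |f(t)| by sampling a uniform grid."""
    h = (b - a) / (n - 1)
    return max(abs(f(a + i * h)) for i in range(n))

a, b = 0.0, 2.0 * math.pi
x, y = math.sin, math.cos
s = lambda t: x(t) + y(t)

# max|sin + cos| = sqrt(2), while max|sin| + max|cos| = 2
assert sup_norm(s, a, b) <= sup_norm(x, a, b) + sup_norm(y, a, b) + 1e-9
```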

Clarification of Non-negativity Property:

For any function \(x\) in \(C[a, b]\), the norm is defined as

\begin{equation*} \|x\| = \max_{t \in [a, b]} |x(t)| \end{equation*}

Since \(x(t)\) is a continuous function on the closed interval \([a, b]\), it has the following properties:

  1. Boundedness: A continuous function on a closed interval is bounded. That is, there exists a real number \(M\) such that \(|x(t)| \leq M\) for all \(t \in [a, b]\).

  2. Attainment of Bounds: By the extreme value theorem, a continuous function on a closed interval attains its maximum and minimum values at least once within that interval. Therefore, there exists some \(t_{\text{max}} \in [a, b]\) where \(|x(t_{\text{max}})| = \max_{t \in [a, b]} |x(t)|\).

With these properties, the non-negativity of the norm can be discussed in detail:

  • Non-negativity: The norm \(\|x\|\) is always non-negative because absolute values are non-negative, and because \(x\) is continuous, it achieves a maximum absolute value on \([a, b]\). This maximum is the value of the norm and cannot be negative.

  • Zero Norm: The norm \(\|x\|\) is zero if and only if the maximum absolute value that \(x(t)\) achieves over the interval \([a, b]\) is zero. If \(\|x\| = 0\), then \(\max_{t \in [a, b]} |x(t)| = 0\), implying that \(|x(t)| = 0\) for all \(t \in [a, b]\). Since a real number's absolute value is zero if and only if the number itself is zero, it follows that \(x(t) = 0\) for all \(t \in [a, b]\). Conversely, if \(x(t) = 0\) for all \(t \in [a, b]\), then clearly \(\|x\| = 0\).

These points confirm the non-negativity of the norm and the condition under which the norm of a function is zero.

Why a Continuous Function on a Closed Interval is Bounded:

A continuous function on a closed interval \([a, b]\) is guaranteed to be bounded. This assertion is supported by the Boundedness Theorem, which is a direct consequence of the Extreme Value Theorem. The reasoning is as follows:

  • Closed Interval: A closed interval \([a, b]\) includes its endpoints, making it a compact set in the real numbers. Compactness in real numbers implies that the set is both closed and bounded.

  • Continuity: A function \(f\) is continuous on \([a, b]\) if, for every point \(c\) in the interval and every \(\epsilon > 0\), there exists a \(\delta > 0\) such that for all \(x\) within \(\delta\) of \(c\), the value of \(f(x)\) is within \(\epsilon\) of \(f(c)\). This means the function does not exhibit jumps, breaks, or infinite behavior within the interval.

  • Extreme Value Theorem: Due to continuity and the closed nature of the interval, the Extreme Value Theorem ensures that a continuous function on a closed interval will attain both its maximum and minimum values within that interval. This theorem does not hold for open intervals or functions that are not continuous.

Intuitive Explanation:

If a continuous function were not bounded on a closed interval, it would suggest that the function could assume arbitrarily large or small values. However, continuity ensures a gradual change without sudden leaps. As the interval is closed, the function cannot 'escape' to infinity at the endpoints, because these points are part of the interval and the function must be defined and finite at them. If the function were unbounded, there would exist points where the function's values would become arbitrarily large, contradicting the very definition of continuity.

Thus, the interplay between the function's continuity (precluding abrupt changes or infinite values) and the interval's closed nature (disallowing endpoints from being unbounded) ensures that the function must be bounded.


Problem Statement. Show that the closed unit ball \(\tilde{B}_1(0)\) in a normed space \(X\) is convex.

Solution: To prove that the closed unit ball \(\tilde{B}_1(0)\) is convex, we need to demonstrate that for any two points \(x, y \in \tilde{B}_1(0)\), the line segment joining them is entirely contained within \(\tilde{B}_1(0)\). A line segment in a vector space can be represented as the set of all convex combinations of \(x\) and \(y\), which is given by

\begin{equation*} z = \alpha x + (1 - \alpha) y \end{equation*}

where \(0 \leq \alpha \leq 1\). The point \(z\) is a point on the line segment between \(x\) and \(y\), varying smoothly from one to the other as \(\alpha\) goes from 0 to 1.

Now, we must show that \(z\) also belongs to \(\tilde{B}_1(0)\), which means that \(\|z\| \leq 1\). Given that \(x, y \in \tilde{B}_1(0)\), we have \(\|x\| \leq 1\) and \(\|y\| \leq 1\). The norm of \(z\) is computed as follows:

\begin{equation*} \|z\| = \|\alpha x + (1 - \alpha) y\| \leq \alpha \|x\| + (1 - \alpha) \|y\| \end{equation*}

Here we have used the triangle inequality and the property of absolute scalability of norms. Because \(\|x\| \leq 1\) and \(\|y\| \leq 1\), it follows that:

\begin{equation*} \alpha \|x\| + (1 - \alpha) \|y\| \leq \alpha \cdot 1 + (1 - \alpha) \cdot 1 = \alpha + 1 - \alpha = 1 \end{equation*}

Hence, \(\|z\| \leq 1\), which implies that \(z\) is in \(\tilde{B}_1(0)\). This confirms that \(\tilde{B}_1(0)\) is convex, as every point \(z\) formed as a convex combination of any two points \(x\) and \(y\) in \(\tilde{B}_1(0)\) also lies within \(\tilde{B}_1(0)\).

Explanation:

The point \(z = \alpha x + (1 - \alpha) y\) is crucial in the definition of a convex set because it represents any point on the line segment between two points \(x\) and \(y\) within a vector space \(X\). The scalar \(\alpha\) ranges from 0 to 1 and determines the position of \(z\) on the line segment:

  • When \(\alpha = 0\), the expression becomes \(z = 0 \cdot x + (1 - 0) \cdot y = y\), placing \(z\) at the point \(y\).

  • When \(\alpha = 1\), it simplifies to \(z = 1 \cdot x + (1 - 1) \cdot y = x\), positioning \(z\) at the point \(x\).

  • For values of \(\alpha\) between 0 and 1, \(z\) lies within the line segment connecting \(x\) and \(y\).

A set is convex if, for every pair of points within the set, the entire line segment that connects them also lies within the set. The point \(z\) symbolizes a general point on the line segment between \(x\) and \(y\). Demonstrating that for all values of \(\alpha\) in the closed interval [0, 1], \(z\) remains within the set proves the set's convexity. This is the essence of why the expression \(z = \alpha x + (1 - \alpha) y\) is used: it is a generic representation of any point on the line segment, and verifying that all such points are contained within the set for all \(\alpha\) in [0, 1] affirms the convexity of the set.
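The convexity property can be illustrated by sampling random points of the closed unit ball in \(\mathbb{R}^3\) and checking that every convex combination stays inside. The sampling scheme below is our own (it need not be uniform; it only needs to land in the ball):

```python
import math
import random

def norm2(v):
    """Euclidean norm on R^3."""
    return math.sqrt(sum(c * c for c in v))

def sample_unit_ball(rng, dim=3):
    """A point with ||v|| <= 1: a random direction scaled by r in [0, 1]."""
    v = [rng.uniform(-1, 1) for _ in range(dim)]
    n = norm2(v)
    if n == 0.0:
        return v
    r = rng.uniform(0, 1)
    return [r * c / n for c in v]

rng = random.Random(4)
for _ in range(1000):
    x = sample_unit_ball(rng)
    y = sample_unit_ball(rng)
    t = rng.uniform(0, 1)
    z = [t * p + (1 - t) * q for p, q in zip(x, y)]
    # z = t*x + (1-t)*y stays in the closed unit ball.
    assert norm2(z) <= 1.0 + 1e-12
```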


Problem 15. Show that a subset \(M\) in a normed space \(X\) is bounded if and only if there is a positive number \(c\) such that \(\|x\| \leq c\) for every \(x \in M\). The diameter \(\delta(A)\) of a nonempty set \(A\) in a metric space \((X, d)\) is defined to be \(\delta(A) = \sup \{d(x, y) : x, y \in A\}\). A set \(A\) is said to be bounded if \(\delta(A) < \infty\).

Solution:

If \(M\) is bounded, then there exists a \(c\) such that \(\|x\| \leq c\) for every \(x \in M\):

Step 1: Assume \(M\) is bounded.

By definition of boundedness in the context of a normed space, this means that the diameter \(\delta(M)\), which is the supremum of the distances between all pairs of points in \(M\), is less than infinity. In mathematical terms, \(\delta(M) = \sup \{\|x - y\| : x, y \in M\} = b < \infty\).

Step 2: Choose any \(x \in M\) and also take a fixed element \(x_0 \in M\).

We need a reference point in \(M\) to compare all other points to. The choice of \(x_0\) is arbitrary but will be used to help establish a universal bound for the norm of any element in \(M\).

Step 3: Define \(c = b + \|x_0\|\), which is a positive number.

Here \(b\) is the diameter \(\delta(M)\) we defined earlier, which captures the maximum distance between any two points in \(M\). We are defining a new constant \(c\) that not only accounts for this maximum distance but also adds the norm of our reference point \(x_0\) to ensure that \(c\) will be an upper bound for the norm of any point in \(M\).

Step 4: For any \(x \in M\), estimate \(\|x\|\) using the triangle inequality:

\begin{equation*} \|x\| = \|x - x_0 + x_0\| \leq \|x - x_0\| + \|x_0\| \leq b + \|x_0\| = c. \end{equation*}

The last step uses the fact that \(\|x - x_0\|\) is the distance between two points of \(M\), and therefore cannot exceed the supremum \(\delta(M) = b\) of all such distances.

Step 5: This shows that for every \(x \in M\), \(\|x\| \leq c\).

The definition of \(c\) was constructed to be a bound for the norms of all points in \(M\) relative to the fixed point \(x_0\) and the diameter of \(M\).

Conversely, if there exists a \(c\) such that \(\|x\| \leq c\) for every \(x \in M\), then \(M\) is bounded:

Step 1: Assume that for every \(x \in M\), \(\|x\| \leq c\) for some positive number \(c\).

This is the hypothesis that there is a uniform bound on the norms of all elements in the set \(M\).

Step 2: For any \(x, y \in M\), using the triangle inequality we have:

\begin{equation*} \|x - y\| \leq \|x\| + \|y\|. \end{equation*}

This is the triangle inequality applied to the points \(x\) and \(y\) in \(M\).

Step 3: Since \(\|x\| \leq c\) and \(\|y\| \leq c\), it follows that:

\begin{equation*} \|x - y\| \leq c + c = 2c. \end{equation*}

This step uses the bound for the norms of \(x\) and \(y\) to establish a bound for the distance between them.

Step 4: This inequality holds for all \(x, y \in M\), so \(\delta(M)\), the supremum of all such distances, is at most \(2c\).

Here we use the definition of \(\delta(M)\) again, which is the supremum of all distances between points in \(M\). Since we've shown that every such distance is bounded by \(2c\), it follows that \(\delta(M) \leq 2c\).

Step 5: Therefore, \(\delta(M) \leq 2c < \infty\), which means \(M\) is bounded.

Since \(2c\) is a finite number, the supremum of the set of distances (the diameter) is also finite, confirming that \(M\) is bounded by definition.

In both directions of the proof, the definition of \(\delta(M)\) as the supremum of distances \(\|x - y\|\) for \(x, y \in M\) is crucial for establishing the boundedness of \(M\).
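Both directions of the argument can be illustrated on a finite sample set \(M \subset \mathbb{R}^2\): if \(\|x\| \leq c\) for every \(x \in M\), then every pairwise distance is at most \(2c\). A numerical sketch (the construction of \(M\) is our own):

```python
import math
import random

def norm2(v):
    """Euclidean norm on R^2."""
    return math.sqrt(sum(c * c for c in v))

random.seed(5)
c = 3.0
# Build a finite set M with ||x|| <= c for every x in M.
M = []
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in range(2)]
    n = norm2(v)
    scale = random.uniform(0, c) / n if n > 0.0 else 0.0
    M.append([scale * t for t in v])

# delta(M) = sup ||x - y|| over pairs; here a max over the finite sample.
diam = max(norm2([p - q for p, q in zip(x, y)]) for x in M for y in M)
assert diam <= 2.0 * c + 1e-12
```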