Kreyszig 2.1, Normed Spaces - Vector Space

Problem 1. Show that the set of real numbers, with the usual addition and multiplication, constitutes a one-dimensional real vector space, and the set of all complex numbers constitutes a one-dimensional complex vector space.

Proof for Real Numbers as a One-Dimensional Real Vector Space

  1. Additive Identity: There exists a number \(0\) such that for every real number \(x\), \(x + 0 = x\).

    Example:

    \begin{equation*} 5 + 0 = 5 \end{equation*}
  2. Additive Inverse: For every real number \(x\), there exists a number \(-x\) such that \(x + (-x) = 0\).

    Example:

    \begin{equation*} 5 + (-5) = 0 \end{equation*}
  3. Closure under Addition: For every pair of real numbers \(x\) and \(y\), their sum \(x + y\) is also a real number.

    Example:

    \begin{equation*} 5 + 3 = 8 \end{equation*}
  4. Closure under Scalar Multiplication: For every real number \(x\) and every scalar \(a\), the product \(ax\) is also a real number.

    Example:

    \begin{equation*} 3 \times 5 = 15 \end{equation*}
  5. Distributivity of Scalar Multiplication with respect to Vector Addition: For every real number \(x\) and \(y\) and every scalar \(a\), \(a(x + y) = ax + ay\).

    Example:

    \begin{equation*} 3(5 + 2) = 3 \times 5 + 3 \times 2 \end{equation*}
  6. Distributivity of Scalar Multiplication with respect to Scalar Addition: For every real number \(x\) and scalars \(a\) and \(b\), \((a + b)x = ax + bx\).

    Example:

    \begin{equation*} (3 + 2) \times 5 = 3 \times 5 + 2 \times 5 \end{equation*}
  7. Associativity of Scalar Multiplication: For every real number \(x\) and scalars \(a\) and \(b\), \(a(bx) = (ab)x\).

    Example:

    \begin{equation*} 3(2 \times 5) = (3 \times 2) \times 5 \end{equation*}
  8. Multiplicative Identity of Scalar Multiplication: There exists a scalar \(1\) such that for every real number \(x\), \(1 \times x = x\).

    Example:

    \begin{equation*} 1 \times 5 = 5 \end{equation*}

\(\blacksquare\)
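These eight properties can be spot-checked numerically. A minimal Python sketch over randomly chosen sample reals (the sample size and tolerance are arbitrary choices of mine, and a finite check is of course an illustration, not a proof):

```python
import random

random.seed(0)
samples = [random.uniform(-10, 10) for _ in range(20)]

for x in samples:
    for y in samples:
        for a in samples[:5]:
            for b in samples[:5]:
                assert x + 0 == x                                # additive identity
                assert x + (-x) == 0                             # additive inverse
                assert isinstance(x + y, float)                  # closure under addition
                assert isinstance(a * x, float)                  # closure under scaling
                assert abs(a * (x + y) - (a * x + a * y)) < 1e-9  # a(x+y) = ax + ay
                assert abs((a + b) * x - (a * x + b * x)) < 1e-9  # (a+b)x = ax + bx
                assert abs(a * (b * x) - (a * b) * x) < 1e-9      # a(bx) = (ab)x
                assert 1 * x == x                                # multiplicative identity
print("all real-axiom spot checks passed")
```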

Proof for Complex Numbers as a One-Dimensional Complex Vector Space

  1. Additive Identity: There exists a complex number \(0\) such that for every complex number \(z\), \(z + 0 = z\).

    Example:

    \begin{equation*} (3 + 2i) + 0 = 3 + 2i \end{equation*}
  2. Additive Inverse: For every complex number \(z\), there exists a complex number \(-z\) such that \(z + (-z) = 0\).

    Example:

    \begin{equation*} (3 + 2i) + (-3 - 2i) = 0 \end{equation*}
  3. Closure under Addition: For every pair of complex numbers \(z_1\) and \(z_2\), their sum \(z_1 + z_2\) is also a complex number.

    Example:

    \begin{equation*} (3 + 2i) + (1 + 4i) = 4 + 6i \end{equation*}
  4. Closure under Scalar Multiplication: For every complex number \(z\) and every scalar \(c\) in the complex numbers, the product \(cz\) is also a complex number.

    Example:

    \begin{equation*} (2 + i) \times (3 + 2i) = 4 + 7i \end{equation*}
  5. Distributivity of Scalar Multiplication with respect to Vector Addition: For every complex number \(z_1\) and \(z_2\) and every scalar \(c\) in the complex numbers, \(c(z_1 + z_2) = cz_1 + cz_2\).

    Example:

    \begin{equation*} (2 + i)((3 + 2i) + (1 + 4i)) = (2 + i)(3 + 2i) + (2 + i)(1 + 4i) \end{equation*}
  6. Distributivity of Scalar Multiplication with respect to Scalar Addition: For every complex number \(z\) and scalars \(c_1\) and \(c_2\) in the complex numbers, \((c_1 + c_2)z = c_1z + c_2z\).

    Example:

    \begin{equation*} ((2 + i) + 1)(3 + 2i) = (2 + i)(3 + 2i) + 1 \times (3 + 2i) \end{equation*}
  7. Associativity of Scalar Multiplication: For every complex number \(z\) and scalars \(c_1\) and \(c_2\) in the complex numbers, \(c_1(c_2z) = (c_1c_2)z\).

    Example:

    \begin{equation*} (2 + i)(i(3 + 2i)) = ((2 + i) \times i)(3 + 2i) \end{equation*}
  8. Multiplicative Identity of Scalar Multiplication: There exists a scalar \(1\) in the complex numbers such that for every complex number \(z\), \(1 \times z = z\).

    Example:

    \begin{equation*} 1 \times (3 + 2i) = 3 + 2i \end{equation*}

\(\blacksquare\)
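The complex examples above can likewise be verified with Python's built-in complex type (same illustrative values as in the text; exact equality is safe here because all the products have small integer parts):

```python
z1, z2 = 3 + 2j, 1 + 4j        # vectors: complex numbers
c1, c2 = 2 + 1j, 1j            # scalars: also complex numbers

assert z1 + 0 == z1                          # additive identity
assert z1 + (-z1) == 0                       # additive inverse
assert z1 + z2 == 4 + 6j                     # closure under addition
assert c1 * z1 == 4 + 7j                     # closure under scalar multiplication
assert c1 * (z1 + z2) == c1 * z1 + c1 * z2   # distributivity over vector addition
assert (c1 + 1) * z1 == c1 * z1 + 1 * z1     # distributivity over scalar addition
assert c1 * (c2 * z1) == (c1 * c2) * z1      # associativity of scaling
assert 1 * z1 == z1                          # multiplicative identity
print("all complex-axiom checks passed")
```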


Problem 2. Proof for Properties of the Zero Vector

Given that \(\theta\) is the zero vector in a vector space \(X\), and \(\mathbf{x}\) is any vector in \(X\).

  1. Proof for \(0 \cdot \mathbf{x} = \theta\):

    Using the distributive property of scalar multiplication over vector addition, we have:

    \begin{equation*} 0 \cdot \mathbf{x} = (0 + 0) \cdot \mathbf{x} = 0 \cdot \mathbf{x} + 0 \cdot \mathbf{x} \end{equation*}

    Adding the additive inverse \(-(0 \cdot \mathbf{x})\) to both sides:

    \begin{equation*} \theta = 0 \cdot \mathbf{x} + 0 \cdot \mathbf{x} - 0 \cdot \mathbf{x} = 0 \cdot \mathbf{x} \end{equation*}

    Thus, \(0 \cdot \mathbf{x} = \theta\).

    \(\blacksquare\)

  2. Proof for \(\alpha \cdot \theta = \theta\):

    For any scalar \(\alpha\):

    \begin{equation*} \alpha \cdot \theta = \alpha \cdot (0 \cdot \mathbf{x}) = (\alpha \cdot 0) \cdot \mathbf{x} = 0 \cdot \mathbf{x} = \theta \end{equation*}

    Therefore, \(\alpha \cdot \theta = \theta\).

    \(\blacksquare\)

Proof for the Property \((-1) \cdot \mathbf{x} = -\mathbf{x}\)

Given a vector \(\mathbf{x}\) in a vector space \(X\).

To prove: \((-1) \cdot \mathbf{x} = -\mathbf{x}\)

Proof:

Using the identity \(1 \cdot \mathbf{x} = \mathbf{x}\) and the distributive property of scalar multiplication over scalar addition, we have:

\begin{equation*} \mathbf{x} + (-1) \cdot \mathbf{x} = 1 \cdot \mathbf{x} + (-1) \cdot \mathbf{x} = (1 + (-1)) \cdot \mathbf{x} = 0 \cdot \mathbf{x} \end{equation*}

From a previous proof, we know that:

\begin{equation*} 0 \cdot \mathbf{x} = \theta \end{equation*}

Where \(\theta\) is the zero vector.

Therefore:

\begin{equation*} \mathbf{x} + (-1) \cdot \mathbf{x} = \theta \end{equation*}

This implies that \((-1) \cdot \mathbf{x}\) is the additive inverse of \(\mathbf{x}\), which is denoted as \(-\mathbf{x}\).

Hence, \((-1) \cdot \mathbf{x} = -\mathbf{x}\).

\(\blacksquare\)
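All three identities can be sanity-checked in \(\mathbb{R}^3\); the component-wise helpers below are my own scaffolding, not anything from the text:

```python
def add(u, v):            # component-wise vector addition in R^n
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):          # scalar multiplication in R^n
    return tuple(c * a for a in u)

theta = (0.0, 0.0, 0.0)   # the zero vector
x = (2.0, -1.0, 5.0)      # an arbitrary sample vector

assert scale(0, x) == theta              # 0 * x = theta
assert scale(7.5, theta) == theta        # alpha * theta = theta
assert add(x, scale(-1, x)) == theta     # x + (-1) * x = theta, so (-1) * x = -x
print("zero-vector identities hold for the sample vector")
```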


Problem 3. Span of the Set M in \(\mathbb{R}^3\)

The span of a set of vectors is the set of all linear combinations of those vectors. In other words, it's the set of all vectors that can be obtained by taking weighted sums of the vectors in the set.

Given the set \(M = \{ (1,1,1), (0,0,2) \}\) in \(\mathbb{R}^3\), any vector in the span of \(M\) can be written as:

\begin{equation*} \alpha (1,1,1) + \beta (0,0,2) = (\alpha, \alpha, \alpha + 2\beta) \end{equation*}

From the above expression, we can see that:

  1. The first and second components of any vector in the span are always equal.

  2. The third component can be any real number: once \(\alpha\) is fixed by the first two components, \(\alpha + 2\beta\) ranges over all of \(\mathbb{R}\) as \(\beta\) varies.

Thus, the span of \(M\) in \(\mathbb{R}^3\) is the set of all vectors of the form \((a, a, b)\) where \(a\) and \(b\) are real numbers. This is a plane in \(\mathbb{R}^3\) that passes through the origin and is defined by the equation \(x = y\).

Visualization

[Figure: 3D visualization of the plane \(x = y\) in \(\mathbb{R}^3\)]

This plane passes through the origin and spans infinitely in all directions within the plane. The vectors \((1,1,1)\) and \((0,0,2)\) from the set \(M\) lie on this plane, and their linear combinations fill out the entire plane.

\(\blacksquare\)
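Membership in \(\text{span}(M)\) can be tested directly: \((x, y, z)\) lies in the span iff \(x = y\), with \(\alpha = x\) and \(\beta = (z - \alpha)/2\). A small sketch (the tolerance is an arbitrary choice of mine):

```python
def in_span_M(v, tol=1e-9):
    """Check whether v = (x, y, z) equals alpha*(1,1,1) + beta*(0,0,2)."""
    x, y, z = v
    if abs(x - y) > tol:              # first two components must agree
        return False
    alpha = x
    beta = (z - alpha) / 2            # third component: alpha + 2*beta = z
    w = (alpha, alpha, alpha + 2 * beta)   # reconstruct and compare
    return all(abs(a - b) <= tol for a, b in zip(v, w))

assert in_span_M((1, 1, 1))           # a generator of M
assert in_span_M((0, 0, 2))           # the other generator
assert in_span_M((4, 4, -3))          # any (a, a, b) works
assert not in_span_M((1, 2, 0))       # x != y: outside the plane
print("span membership checks passed")
```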


Problem 4. Determination of Subspaces in \(\mathbb{R}^3\)

To determine whether a subset of \(\mathbb{R}^3\) constitutes a subspace, it must satisfy the following three properties:

  1. The zero vector of \(\mathbb{R}^3\) is in the subset.

  2. The subset is closed under vector addition.

  3. The subset is closed under scalar multiplication.

Given the subsets:

  (a) All \(x\) with \(\xi_1 = \xi_2\) and \(\xi_2 = 0\)

Evaluation:

This means \(x\) is of the form \((0, 0, \xi_3)\).

  1. The zero vector \((0, 0, 0)\) is in this subset.

  2. Sum of any two vectors in this subset will also be in this subset.

  3. Scalar multiplication of any vector in this subset will also be in this subset.

Thus, (a) is a subspace of \(\mathbb{R}^3\).

  (b) All \(x\) with \(\xi_1 = \xi_2 + 1\)

Evaluation:

  1. This subset doesn't contain the zero vector.

  2. It's not closed under scalar multiplication: if \(\xi_1 = \xi_2 + 1\), then the components of \(2x\) satisfy \(2\xi_1 = 2\xi_2 + 2 \neq 2\xi_2 + 1\), so \(2x\) lies outside the subset.

Thus, (b) is not a subspace of \(\mathbb{R}^3\).

  (c) All \(x\) with positive \(\xi_1, \xi_2, \xi_3\)

Evaluation:

  1. This subset doesn't contain the zero vector.

  2. It's not closed under scalar multiplication since multiplying by a negative scalar will result in a vector outside this subset.

Thus, (c) is not a subspace of \(\mathbb{R}^3\).

  (d) All \(x\) with \(\xi_1 - \xi_2 + \xi_3 = \text{const}\)

Evaluation:

If the constant is zero, the subset is the plane \(\xi_1 - \xi_2 + \xi_3 = 0\): it contains the zero vector, and sums and scalar multiples of solutions are again solutions, so it is a subspace. If the constant is any other value, the subset does not contain the zero vector, so it is not a subspace.

Conclusion:

    (a) is a subspace of \(\mathbb{R}^3\).

    (b) is not a subspace of \(\mathbb{R}^3\).

    (c) is not a subspace of \(\mathbb{R}^3\).

    (d) is a subspace if and only if the constant is zero.

\(\blacksquare\)
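The three criteria can be mechanized for subsets given as membership predicates; the encodings of (a)-(c) below are my own, and a finite spot check over sample vectors is an illustration, not a proof:

```python
def is_plausible_subspace(pred, samples, scalars):
    """Spot-check the three subspace criteria for a predicate on R^3."""
    if not pred((0, 0, 0)):                                     # zero vector
        return False
    members = [v for v in samples if pred(v)]
    for u in members:
        for v in members:
            if not pred(tuple(a + b for a, b in zip(u, v))):    # closure: addition
                return False
        for c in scalars:
            if not pred(tuple(c * a for a in u)):               # closure: scaling
                return False
    return True

samples = [(0, 0, 3), (0, 0, -1), (1, 2, 0), (2, 3, 4), (1, 1, 1)]
scalars = [-2, 0, 0.5, 3]

sub_a = lambda v: v[0] == v[1] == 0          # (a): xi1 = xi2 = 0
sub_b = lambda v: v[0] == v[1] + 1           # (b): xi1 = xi2 + 1
sub_c = lambda v: all(x > 0 for x in v)      # (c): all components positive

assert is_plausible_subspace(sub_a, samples, scalars)
assert not is_plausible_subspace(sub_b, samples, scalars)     # misses zero vector
assert not is_plausible_subspace(sub_c, samples, scalars)     # misses zero vector
print("subspace spot checks agree with the analysis")
```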


Problem 5. The space \(C[a,b]\) consists of all continuous functions defined on the closed interval \([a, b]\). That is, \(C[a,b]\) is the set of functions \(f: [a, b] \to \mathbb{R}\) such that \(f\) is continuous on \([a, b]\).

Proof of Linear Independence

To show that the set \(\{ x_1, ..., x_n \}\), where \(x_j(t) = t^j\) for \(j = 1, ..., n\), is linearly independent in \(C[a,b]\), we need to show that the only scalars \(c_1, ..., c_n\) that satisfy the equation

\begin{equation*} c_1 x_1(t) + c_2 x_2(t) + ... + c_n x_n(t) = 0 \end{equation*}

for all \(t\) in \([a, b]\) are \(c_1 = c_2 = ... = c_n = 0\).

Given the functions \(x_j(t) = t^j\), the above equation becomes:

\begin{equation*} c_1 t + c_2 t^2 + ... + c_n t^n = 0 \end{equation*}

This is a polynomial of degree at most \(n\). If it is identically zero on the interval \([a, b]\), then all its coefficients must be zero: a non-zero polynomial of degree at most \(n\) has at most \(n\) roots, whereas \([a, b]\) contains infinitely many points.

Therefore, \(c_1 = c_2 = ... = c_n = 0\), which proves that the set \(\{ x_1, ..., x_n \}\) is linearly independent in \(C[a,b]\).

\(\blacksquare\)

Clarification on Polynomials and Their Roots

A polynomial of degree \(n\) is an expression of the form:

\begin{equation*} p(t) = c_0 + c_1 t + c_2 t^2 + \dots + c_n t^n \end{equation*}

where \(c_0, c_1, \dots, c_n\) are coefficients and \(n\) is a non-negative integer.

The Fundamental Theorem of Algebra states that a non-zero polynomial of degree \(n\) has exactly \(n\) complex roots, counting multiplicities. In particular, such a polynomial can be zero at no more than \(n\) points.

However, a polynomial that is zero for every value of \(t\) in an interval like \([a, b]\) vanishes at infinitely many points, not just at isolated ones. This is impossible for a polynomial with some non-zero coefficient, since such a polynomial cannot be zero at more than \(n\) points.

Therefore, the only way a polynomial can be zero for all \(t\) in a continuous interval is if it's the zero polynomial, which means all its coefficients \(c_0, c_1, \dots, c_n\) are zero.

In simpler terms: If you have a polynomial that's zero everywhere in an interval, then it's actually the zero polynomial, and all its coefficients are zero.

The crux of the proof is:

If a polynomial is zero for every value of \(t\) within an interval such as \([a, b]\), this cannot be explained by the polynomial merely having some roots in that interval; the polynomial must be the zero polynomial, with all coefficients zero.

In essence, a non-zero polynomial can be zero only at finitely many points, bounded by its degree. Being zero everywhere on a continuous interval contradicts this property, so it must be the zero polynomial.
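The crux can be illustrated numerically: sampling the relation \(c_1 t + c_2 t^2 + c_3 t^3 = 0\) at three distinct points gives a Vandermonde-type linear system whose only solution is the trivial one. A sketch with exact rational arithmetic (the sample points \(t = 1, 2, 3\) are my own choice):

```python
from fractions import Fraction

# Relation c1*t + c2*t^2 + c3*t^3 = 0 sampled at 3 distinct nonzero points.
ts = [Fraction(1), Fraction(2), Fraction(3)]
A = [[t ** j for j in range(1, 4)] for t in ts]   # Vandermonde-type matrix
b = [Fraction(0)] * 3                             # the relation equals zero everywhere

# Gauss-Jordan elimination in exact arithmetic.
for col in range(3):
    pivot = next(r for r in range(col, 3) if A[r][col] != 0)
    A[col], A[pivot] = A[pivot], A[col]
    b[col], b[pivot] = b[pivot], b[col]
    for r in range(3):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] -= f * b[col]

coeffs = [b[i] / A[i][i] for i in range(3)]
assert coeffs == [0, 0, 0]      # only the trivial combination vanishes
print("c1 = c2 = c3 = 0 is the only solution")
```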


Problem 6. Show that in an \(n\)-dimensional vector space \(X\), the representation of any vector \(x\) as a linear combination of given basis vectors \(e_1, \dots, e_n\) is unique.

Proof:

Assume, for the sake of contradiction, that there are two different representations of the vector \(x\) in terms of the basis vectors.

Let these representations be:

\begin{equation*} x = a_1 e_1 + a_2 e_2 + \dots + a_n e_n \end{equation*}
\begin{equation*} x = b_1 e_1 + b_2 e_2 + \dots + b_n e_n \end{equation*}

where \(a_i\) and \(b_i\) are scalars, and at least one \(a_i\) is not equal to \(b_i\).

Subtracting the second equation from the first, we get:

\begin{equation*} 0 = (a_1 - b_1) e_1 + (a_2 - b_2) e_2 + \dots + (a_n - b_n) e_n \end{equation*}

Now, since \(\{ e_1, e_2, \dots, e_n \}\) is a basis for \(X\), these vectors are linearly independent. This means that the only way the above equation can hold is if each coefficient \((a_i - b_i)\) is zero.

Thus, \(a_i - b_i = 0\) for all \(i\), which implies \(a_i = b_i\) for all \(i\).

This contradicts our assumption that the two representations were different. Therefore, our original assumption was false, and the representation of any vector \(x\) as a linear combination of the basis vectors is unique.

\(\blacksquare\)

Truth Table for the Proof's Logic

https://www.wolframcloud.com/obj/845f9add-0989-4031-9cc4-aaec65b61ba3

The logical structure of the proof can be summarized as:

  1. Assumption: There are two different representations of \(x\).

  2. Implication: Subtracting the two representations yields a linear combination of the basis vectors equal to the zero vector, with at least one non-zero coefficient.

  3. Contradiction: The basis vectors are linearly independent, so only the trivial combination (all coefficients zero) can equal the zero vector.

  4. Conclusion: The assumption is false; the representation is unique.
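Concretely, with the non-orthogonal basis \(e_1 = (1, 0)\), \(e_2 = (1, 1)\) of \(\mathbb{R}^2\) (a sample basis of my own choosing), the coordinates of a vector are forced one at a time, so no second representation exists:

```python
from fractions import Fraction as F

# Basis of R^2 and a target vector, in exact arithmetic.
e1, e2 = (F(1), F(0)), (F(1), F(1))
x = (F(3), F(5))

# x = a*e1 + b*e2 gives the triangular system: a + b = 3, b = 5.
b = x[1] / e2[1]            # second component involves only e2, forcing b
a = x[0] - b * e2[0]        # first component then forces a

assert (a * e1[0] + b * e2[0], a * e1[1] + b * e2[1]) == x
assert (a, b) == (-2, 5)    # the unique coordinate pair
print(f"x = {a}*e1 + {b}*e2, uniquely")
```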


Problem 7: Basis and Dimension of Complex Vector Space \(X\)

Problem Statement:

Let \(\{e_1,...,e_n\}\) be a basis for a complex vector space \(X\). Find a basis for \(X\) regarded as a real vector space. What is the dimension of \(X\) in either case?

Solution:

  1. Basis for \(X\) as a Complex Vector Space:

Given that \(\{e_1, \dots, e_n\}\) is a basis for the complex vector space \(X\), any vector \(v\) in \(X\) can be expressed as:

\begin{equation*} v = a_1 e_1 + a_2 e_2 + \dots + a_n e_n \end{equation*}

where \(a_i\) are complex numbers.

  2. Basis for \(X\) as a Real Vector Space:

When we regard \(X\) as a real vector space, every complex coefficient splits as \((a_k + i b_k) e_k = a_k e_k + b_k (i e_k)\) with \(a_k, b_k\) real, so a basis for \(X\) is:

\begin{equation*} \{e_1, i e_1, e_2, i e_2, \dots, e_n, i e_n\} \end{equation*}

  3. Dimension of \(X\) in Either Case:

  • As a complex vector space, the dimension of \(X\) is \(n\).

  • As a real vector space, the dimension of \(X\) is \(2n\).

\(\blacksquare\)
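The dimension doubling is already visible for \(X = \mathbb{C}\) itself, the simplest case \(n = 1\); a minimal sketch:

```python
# X = C^1 with complex basis {1}. As a real vector space the basis is {1, i}.
z = 3 + 2j

# Over C: one scalar coordinate suffices.
complex_coords = [z]                   # z = (3 + 2i) * 1
# Over R: two scalar coordinates, one per real basis vector.
real_coords = [z.real, z.imag]         # z = 3 * 1 + 2 * i

assert len(complex_coords) == 1        # dim over C is n = 1
assert len(real_coords) == 2           # dim over R is 2n = 2
assert real_coords[0] * 1 + real_coords[1] * 1j == z
print("dim_C = 1, dim_R = 2 for the same space")
```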


Problem 8. If \(M\) is a linearly dependent set in a complex vector space \(X\), is \(M\) linearly dependent in \(X\), regarded as a real vector space?

Solution

If \(M\) is linearly dependent in a complex vector space \(X\), then there exist complex scalars, not all zero, such that:

\begin{equation*} c_1 v_1 + c_2 v_2 + \dots + c_n v_n = 0 \end{equation*}

where \(v_1, v_2, \dots, v_n\) are vectors in \(M\) and at least one of the \(c_i\) is non-zero.

Now, when we regard \(X\) as a real vector space, each complex scalar \(c_i\) can be expressed as:

\begin{equation*} c_i = a_i + b_i i \end{equation*}

where \(a_i\) and \(b_i\) are real numbers.

Substituting this into our linear combination, we get:

\begin{equation*} (a_1 + b_1 i) v_1 + (a_2 + b_2 i) v_2 + \dots + (a_n + b_n i) v_n = 0 \end{equation*}

This can be rearranged as:

\begin{equation*} a_1 v_1 + a_2 v_2 + \dots + a_n v_n + i(b_1 v_1 + b_2 v_2 + \dots + b_n v_n) = 0 \end{equation*}

However, this last step requires care: in an abstract complex vector space, \(i(b_1 v_1 + \dots + b_n v_n)\) is simply another vector, not an "imaginary part" that must vanish separately. When \(X\) is regarded as a real vector space, only real scalars are available, so the complex relation does not translate into a real one, and in fact the conclusion fails in general.

Counterexample: Let \(X = \mathbb{C}\), a one-dimensional complex vector space, and let \(M = \{1, i\}\). Over \(\mathbb{C}\), \(M\) is linearly dependent:

\begin{equation*} i \cdot 1 + (-1) \cdot i = 0 \end{equation*}

with non-zero complex coefficients. But regarding \(\mathbb{C}\) as a real vector space, the relation \(a \cdot 1 + b \cdot i = 0\) with real scalars \(a, b\) forces \(a = b = 0\); indeed, \(\{1, i\}\) is a basis of \(\mathbb{C}\) over \(\mathbb{R}\). So \(M\) is linearly independent over \(\mathbb{R}\).

Conclusion

No, not necessarily: a set that is linearly dependent in a complex vector space \(X\) need not be linearly dependent in \(X\) regarded as a real vector space. (The converse does hold: real scalars are also complex scalars, so linear dependence over \(\mathbb{R}\) implies linear dependence over \(\mathbb{C}\).)

\(\blacksquare\)


Problem 9 Statement

On a fixed interval \([a, b] \subset \mathbb{R}\), consider the set \(X\) consisting of all polynomials with real coefficients and of degree not exceeding a given \(n\), together with the polynomial \(x = 0\) (whose degree is conventionally left undefined). Show that \(X\), with the usual addition and the usual multiplication by real numbers, is a real vector space of dimension \(n+1\).

Solution

Vector Space Axioms Verification:

  1. Closure under Addition: For any two polynomials \(p(t), q(t) \in X\), their sum \(p(t) + q(t)\) is also a polynomial with real coefficients, and its degree does not exceed \(n\). Therefore, \(p(t) + q(t) \in X\).

  2. Closure under Scalar Multiplication: For any polynomial \(p(t) \in X\) and any real number \(c\), the product \(c \cdot p(t)\) is also a polynomial with real coefficients, and its degree does not exceed \(n\). Therefore, \(c \cdot p(t) \in X\).

  3. Associativity of Addition: For any \(p(t), q(t), r(t) \in X\), \((p(t) + q(t)) + r(t) = p(t) + (q(t) + r(t))\).

  4. Commutativity of Addition: For any \(p(t), q(t) \in X\), \(p(t) + q(t) = q(t) + p(t)\).

  5. Identity Element of Addition: The zero polynomial \(0\) acts as the additive identity in \(X\), since for any \(p(t) \in X\), \(p(t) + 0 = p(t)\).

  6. Inverse Elements of Addition: For every \(p(t) \in X\), its additive inverse is \(-p(t)\), which is also in \(X\). Thus, \(p(t) + (-p(t)) = 0\).

  7. Compatibility of Scalar Multiplication with Field Multiplication: For any real numbers \(a, b\) and any \(p(t) \in X\), \(a \cdot (b \cdot p(t)) = (a \cdot b) \cdot p(t)\).

  8. Identity Element of Scalar Multiplication: For any \(p(t) \in X\), \(1 \cdot p(t) = p(t)\), where 1 is the multiplicative identity in \(\mathbb{R}\).

  9. Distributivity of Scalar Multiplication with respect to Vector Addition: For any real number \(a\) and any \(p(t), q(t) \in X\), \(a \cdot (p(t) + q(t)) = a \cdot p(t) + a \cdot q(t)\).

  10. Distributivity of Scalar Multiplication with respect to Scalar Addition: For any real numbers \(a, b\) and any \(p(t) \in X\), \((a + b) \cdot p(t) = a \cdot p(t) + b \cdot p(t)\).

Basis and Dimension:

A basis for \(X\) can be the set of monomials \(\{e_0, e_1, \ldots, e_n\}\), where \(e_j(t) = t^j\) for \(t \in [a, b]\) and \(0 \leq j \leq n\). This set is linearly independent and spans \(X\), as any polynomial of degree not exceeding \(n\) can be written as a linear combination of these monomials.

The dimension of \(X\) is the number of vectors in its basis, which is \(n+1\).

Conclusion:

\(X\) is a real vector space of dimension \(n+1\) on the interval \([a, b]\), with a basis \(\{e_0, e_1, \ldots, e_n\}\).

Problem 9 (Real Polynomial Example) Statement

On a fixed interval \([a, b] \subset \mathbb{R}\), consider two specific real polynomials \(p(t) = 2t^2 + 3t + 1\) and \(q(t) = t^2 - 2t + 4\). Show that the set \(X\) of all real polynomials of degree not exceeding a given \(n\) with the usual addition and scalar multiplication is a real vector space.

Solution

Vector Space Axioms Verification:

  1. Closure under Addition:

    \begin{equation*} p(t) + q(t) = (2t^2 + 3t + 1) + (t^2 - 2t + 4) = 3t^2 + t + 5 \end{equation*}

    The result is a polynomial of degree 2, which does not exceed \(n\) provided \(n \geq 2\).

  2. Closure under Scalar Multiplication: Let's take a real number \(c = 3\).

    \begin{equation*} c \cdot p(t) = 3 \cdot (2t^2 + 3t + 1) = 6t^2 + 9t + 3 \end{equation*}

    The result is a polynomial of degree 2, which does not exceed \(n\) provided \(n \geq 2\).

  3. Associativity of Addition: Let's take another polynomial \(r(t) = 4t - 1\).

    \begin{equation*} (p(t) + q(t)) + r(t) = (3t^2 + t + 5) + (4t - 1) = 3t^2 + 5t + 4 \end{equation*}
    \begin{equation*} p(t) + (q(t) + r(t)) = (2t^2 + 3t + 1) + (t^2 + 2t + 3) = 3t^2 + 5t + 4 \end{equation*}

    Both results are equal.

  4. Commutativity of Addition:

    \begin{equation*} p(t) + q(t) = 3t^2 + t + 5 \end{equation*}
    \begin{equation*} q(t) + p(t) = 3t^2 + t + 5 \end{equation*}

    Both results are equal.

  5. Identity Element of Addition: The zero polynomial is \(0\).

    \begin{equation*} p(t) + 0 = 2t^2 + 3t + 1 \end{equation*}
    \begin{equation*} 0 + p(t) = 2t^2 + 3t + 1 \end{equation*}

    Both results are equal to \(p(t)\).

  6. Inverse Elements of Addition: The additive inverse of \(p(t)\) is \(-p(t) = -2t^2 - 3t - 1\).

    \begin{equation*} p(t) + (-p(t)) = 2t^2 + 3t + 1 + (-2t^2 - 3t - 1) = 0 \end{equation*}
  7. Compatibility of Scalar Multiplication with Field Multiplication: For real numbers \(a = 2\) and \(b = 3\),

    \begin{equation*} a \cdot (b \cdot p(t)) = 2 \cdot (3 \cdot (2t^2 + 3t + 1)) = 12t^2 + 18t + 6 \end{equation*}
    \begin{equation*} (a \cdot b) \cdot p(t) = (2 \cdot 3) \cdot (2t^2 + 3t + 1) = 12t^2 + 18t + 6 \end{equation*}

    Both results are equal.

  8. Identity Element of Scalar Multiplication:

    \begin{equation*} 1 \cdot p(t) = 1 \cdot (2t^2 + 3t + 1) = 2t^2 + 3t + 1 \end{equation*}

    The result is equal to \(p(t)\).

  9. Distributivity of Scalar Multiplication with respect to Vector Addition: Let \(a = 3\).

    \begin{equation*} a \cdot (p(t) + q(t)) = 3 \cdot (3t^2 + t + 5) = 9t^2 + 3t + 15 \end{equation*}
    \begin{equation*} a \cdot p(t) + a \cdot q(t) = 3 \cdot (2t^2 + 3t + 1) + 3 \cdot (t^2 - 2t + 4) = 9t^2 + 3t + 15 \end{equation*}

    Both results are equal.

  10. Distributivity of Scalar Multiplication with respect to Scalar Addition: Let \(a = 3\) and \(b = 2\).

    \begin{equation*} (a + b) \cdot p(t) = (3 + 2) \cdot (2t^2 + 3t + 1) = 10t^2 + 15t + 5 \end{equation*}
    \begin{equation*} a \cdot p(t) + b \cdot p(t) = 3 \cdot (2t^2 + 3t + 1) + 2 \cdot (2t^2 + 3t + 1) = 10t^2 + 15t + 5 \end{equation*}

    Both results are equal.

Basis and Dimension:

A basis for \(X\) can be the set of monomials \(\{1, t, t^2, \ldots, t^n\}\). In our specific example, a basis for polynomials of degree not exceeding 2 is \(\{1, t, t^2\}\).

The dimension of \(X\) is the number of vectors in its basis, which is \(n+1\). In our specific example, the dimension is 3.

Conclusion:

The set \(X\) of all real polynomials of degree not exceeding \(n\) on the interval \([a, b]\) is a real vector space of dimension \(n+1\), with the usual addition and scalar multiplication.
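The computations above can be mechanized by representing polynomials as coefficient lists, constant term first; the helper functions below are my own sketch:

```python
def padd(p, q):
    """Add two polynomials given as coefficient lists (constant term first)."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))   # pad to equal length
    return [a + b for a, b in zip(p, q)]

def pscale(c, p):
    """Multiply a polynomial by the scalar c."""
    return [c * a for a in p]

p = [1, 3, 2]     # 2t^2 + 3t + 1
q = [4, -2, 1]    # t^2 - 2t + 4
r = [-1, 4]       # 4t - 1

assert padd(p, q) == [5, 1, 3]                              # 3t^2 + t + 5
assert pscale(3, p) == [3, 9, 6]                            # 6t^2 + 9t + 3
assert padd(padd(p, q), r) == padd(p, padd(q, r))           # associativity
assert padd(p, q) == padd(q, p)                             # commutativity
assert padd(pscale(3, p), pscale(2, p)) == pscale(3 + 2, p) # (a+b)p = ap + bp
print("polynomial vector-space checks passed")
```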

Problem Statement

Show that we can obtain a complex vector space \(\tilde{X}\) in a similar fashion if we let the coefficients be complex. Also, determine if \(X\) is a subspace of \(\tilde{X}\).

Solution

Part 1: Constructing \(\tilde{X}\)

  1. Set Definition: Let \(\tilde{X}\) be the set of all polynomials with complex coefficients of degree not exceeding \(n\), along with the zero polynomial. A general element of \(\tilde{X}\) can be represented as:

    \begin{equation*} p(t) = c_0 + c_1t + c_2t^2 + \ldots + c_nt^n \end{equation*}

    where \(c_0, c_1, \ldots, c_n\) are complex numbers.

  2. Vector Space Operations:

    • Addition: For any two polynomials \(p(t), q(t) \in \tilde{X}\), their sum \(p(t) + q(t)\) is also a polynomial in \(\tilde{X}\) with complex coefficients.

    • Scalar Multiplication: For any complex number \(\alpha\) and any polynomial \(p(t) \in \tilde{X}\), the product \(\alpha p(t)\) is also a polynomial in \(\tilde{X}\).

  3. Verification of Vector Space Axioms: Similar to the real case, one can verify that \(\tilde{X}\) satisfies all the vector space axioms under these operations.

Part 2: Is \(X\) a Subspace of \(\tilde{X}\)?

  1. Subspace Criteria: A subset \(Y\) of a vector space \(Z\) is a subspace of \(Z\) if:

    • The zero vector of \(Z\) is in \(Y\).

    • For every \(u, v \in Y\), the sum \(u + v\) is in \(Y\).

    • For every \(u \in Y\) and scalar \(c\), the product \(cu\) is in \(Y\).

  2. Application to \(X\) and \(\tilde{X}\):

    • The zero polynomial is in both \(X\) and \(\tilde{X}\).

    • The sum of any two polynomials in \(X\) (with real coefficients) is a polynomial with real coefficients, which is in \(X\).

    • The product of any polynomial in \(X\) by any real number is a polynomial with real coefficients, which is in \(X\).

  3. Failure under Complex Scalar Multiplication:

    However, if we consider scalar multiplication by complex numbers (as is allowed in \(\tilde{X}\)), \(X\) is not closed under this operation. For example, if \(p(t) \in X\) and \(i\) is the imaginary unit, \(i \cdot p(t)\) will be a polynomial with complex coefficients, which is not in \(X\) but is in \(\tilde{X}\).

  4. Conclusion: While \(X\) satisfies the criteria for being a subspace under real scalar multiplication, it does not satisfy the criteria under complex scalar multiplication. Therefore, \(X\) is not a subspace of \(\tilde{X}\) when \(\tilde{X}\) is considered as a complex vector space.


Problem 10.

If \(Y\) and \(Z\) are subspaces of a vector space \(X\), show that \(Y \cap Z\) is a subspace of \(X\), but \(Y \cup Z\) need not be one. Provide three examples to illustrate the concepts.

Solution

Part 1: \(Y \cap Z\) is a Subspace

To show that \(Y \cap Z\) is a subspace of \(X\), we need to verify the subspace criteria:

  1. Non-emptiness: Since \(Y\) and \(Z\) are subspaces, they both contain the zero vector. Therefore, \(Y \cap Z\) is non-empty as it at least contains the zero vector.

  2. Closed under addition: Let \(u\) and \(v\) be any vectors in \(Y \cap Z\). Since \(u\) and \(v\) are in both \(Y\) and \(Z\), and since \(Y\) and \(Z\) are subspaces (and thus closed under addition), \(u + v\) must be in both \(Y\) and \(Z\). Therefore, \(u + v\) is in \(Y \cap Z\).

  3. Closed under scalar multiplication: Let \(u\) be any vector in \(Y \cap Z\) and let \(c\) be any scalar. Since \(u\) is in both \(Y\) and \(Z\), and since \(Y\) and \(Z\) are subspaces (and thus closed under scalar multiplication), \(c \cdot u\) must be in both \(Y\) and \(Z\). Therefore, \(c \cdot u\) is in \(Y \cap Z\).

Part 2: \(Y \cup Z\) Need Not Be a Subspace

To show that \(Y \cup Z\) need not be a subspace, consider the following examples:

  1. Example 1: Let \(X = \mathbb{R}^2\), \(Y = \{(x, 0) \mid x \in \mathbb{R}\}\) (the x-axis), and \(Z = \{(0, y) \mid y \in \mathbb{R}\}\) (the y-axis). \(Y\) and \(Z\) are both subspaces of \(X\), but \(Y \cup Z\) is not because it is not closed under addition. For example, \((1, 0) \in Y\) and \((0, 1) \in Z\), but \((1, 0) + (0, 1) = (1, 1) \notin Y \cup Z\).

  2. Example 2: Let \(X = \mathbb{R}^3\), \(Y = \{(x, 0, 0) \mid x \in \mathbb{R}\}\), and \(Z = \{(0, y, 0) \mid y \in \mathbb{R}\}\). \(Y\) and \(Z\) are both subspaces of \(X\), but \(Y \cup Z\) is not a subspace because it is not closed under addition: \((1, 0, 0) \in Y\) and \((0, 1, 0) \in Z\), but \((1, 0, 0) + (0, 1, 0) = (1, 1, 0) \notin Y \cup Z\). (Note that the union of two subspaces is always closed under scalar multiplication; it is closure under addition that fails.)

  3. Example 3: Let \(X = \mathbb{R}^2\), \(Y = \{(t, t) \mid t \in \mathbb{R}\}\) (the line \(y = x\)), and \(Z = \{(t, -t) \mid t \in \mathbb{R}\}\) (the line \(y = -x\)). Both are subspaces of \(X\), but \(Y \cup Z\) is not a subspace because it is not closed under addition: \((1, 1) \in Y\) and \((1, -1) \in Z\), yet \((1, 1) + (1, -1) = (2, 0) \notin Y \cup Z\).

Conclusion

While the intersection of two subspaces is always a subspace, the union of two subspaces is generally not a subspace unless one of the subspaces is contained within the other. The provided examples illustrate scenarios where the union of two subspaces fails to satisfy the subspace criteria.
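Example 1's failure of closure can be checked mechanically; the predicates below are my own encoding of the two coordinate axes:

```python
def in_Y(v): return v[1] == 0        # the x-axis in R^2
def in_Z(v): return v[0] == 0        # the y-axis in R^2
def in_union(v): return in_Y(v) or in_Z(v)
def in_intersection(v): return in_Y(v) and in_Z(v)

u, w = (1, 0), (0, 1)
s = (u[0] + w[0], u[1] + w[1])       # (1, 1)

assert in_union(u) and in_union(w)
assert not in_union(s)               # the union is not closed under addition
assert in_intersection((0, 0))       # the intersection contains the zero vector
print("union fails closure; intersection passes")
```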


Problem 11

If \(M \neq \emptyset\) is any subset of a vector space \(X\), show that \(\text{span}(M)\) is a subspace of \(X\).

Solution

To prove that \(\text{span}(M)\) is a subspace of \(X\), we need to verify the following properties:

  1. Non-emptiness: Since \(M \neq \emptyset\), there is at least one vector in \(M\). The zero vector of \(X\) can be represented as a linear combination of vectors in \(M\) with all coefficients being zero. Hence, the zero vector is in \(\text{span}(M)\), ensuring that \(\text{span}(M)\) is non-empty.

    \begin{equation*} \text{Let } \mathbf{v} \in M, \text{ then } 0 \cdot \mathbf{v} = \mathbf{0} \in \text{span}(M) \end{equation*}
  2. Closure under Addition: Let \(\mathbf{u}\) and \(\mathbf{v}\) be any two vectors in \(\text{span}(M)\). This means that \(\mathbf{u}\) and \(\mathbf{v}\) can be expressed as linear combinations of vectors from \(M\); by taking the union of the two finite lists of vectors and padding with zero coefficients, we may write both combinations over the same \(\mathbf{m}_i\). The sum \(\mathbf{u} + \mathbf{v}\) is then also a linear combination of vectors from \(M\) and is therefore in \(\text{span}(M)\).

    \begin{equation*} \text{Let } \mathbf{u} = \sum_{i=1}^{n} a_i \mathbf{m}_i \text{ and } \mathbf{v} = \sum_{i=1}^{n} b_i \mathbf{m}_i, \text{ where } \mathbf{m}_i \in M. \end{equation*}
    \begin{equation*} \text{Then } \mathbf{u} + \mathbf{v} = \sum_{i=1}^{n} (a_i + b_i) \mathbf{m}_i \in \text{span}(M). \end{equation*}
  3. Closure under Scalar Multiplication: Let \(\mathbf{u}\) be a vector in \(\text{span}(M)\) and \(c\) be any scalar. The product \(c\mathbf{u}\) is also a linear combination of vectors from \(M\) and is therefore in \(\text{span}(M)\).

    \begin{equation*} \text{Let } \mathbf{u} = \sum_{i=1}^{n} a_i \mathbf{m}_i, \text{ where } \mathbf{m}_i \in M. \end{equation*}
    \begin{equation*} \text{Then } c\mathbf{u} = c\sum_{i=1}^{n} a_i \mathbf{m}_i = \sum_{i=1}^{n} (ca_i) \mathbf{m}_i \in \text{span}(M). \end{equation*}

By verifying these properties, we have shown that \(\text{span}(M)\) is a subspace of \(X\).
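Closure of \(\text{span}(M)\) under both operations can be spot-checked for the concrete \(M\) from Problem 3 (the random coefficients and tolerance are arbitrary choices of mine):

```python
import random

random.seed(1)
M = [(1, 1, 1), (0, 0, 2)]           # the spanning set from Problem 3

def combo(coeffs):
    """Form the linear combination sum_i coeffs[i] * M[i] in R^3."""
    return tuple(sum(c * m[k] for c, m in zip(coeffs, M)) for k in range(3))

for _ in range(100):
    a = [random.uniform(-5, 5) for _ in M]
    b = [random.uniform(-5, 5) for _ in M]
    c = random.uniform(-5, 5)
    u, v = combo(a), combo(b)
    # u + v is the combination with coefficients a_i + b_i ...
    w1 = combo([x + y for x, y in zip(a, b)])
    w2 = tuple(p + q for p, q in zip(u, v))
    assert all(abs(p - q) < 1e-9 for p, q in zip(w1, w2))
    # ... and c*u is the combination with coefficients c*a_i.
    s = combo([c * x for x in a])
    assert all(abs(p - c * q) < 1e-9 for p, q in zip(s, u))
print("span closure checks passed")
```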