Kreyszig 1.2, Metric Spaces

Problem 1. Show that in the metric \(d(x,y) = \sum_{j=1}^{\infty} \frac{1}{2^j} \frac{|x_j - y_j|}{1 + |x_j - y_j|}\) we can obtain another metric by replacing \(\frac{1}{2^j}\) by \(\mu_j > 0\) such that \(\sum \mu_j\) converges.

Proof:

Given the metric: \(d(x,y) = \sum_{j=1}^{\infty} \frac{1}{2^j} \frac{|x_j - y_j|}{1 + |x_j - y_j|}\)

We want to replace \(\frac{1}{2^j}\) with \(\mu_j > 0\) such that \(\sum \mu_j\) converges. Let's denote the new metric as \(d'(x,y)\).

\(d'(x,y) = \sum_{j=1}^{\infty} \mu_j \frac{|x_j - y_j|}{1 + |x_j - y_j|}\)

To show that \(d'\) is a metric, it must satisfy the metric axioms:

  1. Non-negativity: \(d'(x,y) \geq 0\) for all \(x, y\) and \(d'(x,y) = 0\) if and only if \(x = y\).

    Proof: Each term in the series is non-negative because of the absolute value and the fact that \(\mu_j > 0\). The sum is zero if and only if each term is zero, which holds precisely when \(x_j = y_j\) for all \(j\), that is, when \(x = y\).

  2. Symmetry: \(d'(x,y) = d'(y,x)\) for all \(x, y\).

    Proof: This follows because \(|x_j - y_j| = |y_j - x_j|\) for every \(j\), so each term of the series is unchanged when \(x\) and \(y\) are interchanged.

  3. Triangle Inequality: \(d'(x,z) \leq d'(x,y) + d'(y,z)\) for all \(x, y, z\).

    Proof: The function \(f(t) = \frac{t}{1+t}\) is increasing for \(t \geq 0\), so the triangle inequality for the absolute value, \(|x_j - z_j| \leq |x_j - y_j| + |y_j - z_j|\), gives for each \(j\): \(\frac{|x_j - z_j|}{1 + |x_j - z_j|} \leq \frac{|x_j - y_j| + |y_j - z_j|}{1 + |x_j - y_j| + |y_j - z_j|} \leq \frac{|x_j - y_j|}{1 + |x_j - y_j|} + \frac{|y_j - z_j|}{1 + |y_j - z_j|}\), where the last step splits the fraction into two terms and decreases each denominator, which can only enlarge each term. Multiplying by \(\mu_j\) and summing over all \(j\) gives \(d'(x,z) \leq d'(x,y) + d'(y,z)\).

Given that the series \(\sum \mu_j\) converges, the series defining \(d'\) will also converge for any \(x\) and \(y\) (by the comparison test, since each term of the metric series is bounded by \(\mu_j\)).

Thus, \(d'\) defined with \(\mu_j\) is a valid metric on the space as long as \(\sum \mu_j\) converges.
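
As a concrete illustration (this particular choice of weights is not part of the problem statement), take \(\mu_j = \frac{1}{j^2}\), for which \(\sum \mu_j\) is a convergent \(p\)-series. The resulting metric satisfies

\begin{equation*} d'(x,y) = \sum_{j=1}^{\infty} \frac{1}{j^2} \frac{|x_j - y_j|}{1 + |x_j - y_j|} \leq \sum_{j=1}^{\infty} \frac{1}{j^2} = \frac{\pi^2}{6} \end{equation*}

for every pair of sequences \(x\) and \(y\), since each fraction \(\frac{|x_j - y_j|}{1 + |x_j - y_j|}\) is less than 1.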


Problem 2: Show that the geometric mean of two positive numbers does not exceed the arithmetic mean using the given inequality.

Given:

\(\alpha \beta \leq \int_0^{\alpha} t^{p-1} dt + \int_0^{\beta} u^{q-1} du = \frac{\alpha^p}{p} + \frac{\beta^q}{q}\)

where \(\alpha\) and \(\beta\) are positive numbers, and \(p\) and \(q\) are conjugate exponents such that \(\frac{1}{p} + \frac{1}{q} = 1\).

Proof:

To show that the geometric mean of two positive numbers does not exceed the arithmetic mean, consider two positive numbers \(a\) and \(b\). The geometric mean is \(\sqrt{ab}\) and the arithmetic mean is \(\frac{a+b}{2}\).

Let's set \(\alpha = \sqrt{a}\) and \(\beta = \sqrt{b}\), and choose \(p = 2\) and \(q = 2\) (conjugate exponents, since \(\frac{1}{2} + \frac{1}{2} = 1\)).

Using the given inequality: \(\sqrt{a}\,\sqrt{b} \leq \frac{(\sqrt{a})^2}{2} + \frac{(\sqrt{b})^2}{2} = \frac{a}{2} + \frac{b}{2}\)

That is: \(\sqrt{ab} \leq \frac{a+b}{2}\)

This shows that the geometric mean of \(a\) and \(b\) is less than or equal to their arithmetic mean.

Thus, the geometric mean of two positive numbers does not exceed the arithmetic mean.
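
As a quick numerical check (an illustration, not part of the proof), take \(a = 1\) and \(b = 9\):

\begin{equation*} \sqrt{ab} = \sqrt{9} = 3 \leq 5 = \frac{1+9}{2} = \frac{a+b}{2} \end{equation*}

with equality in general occurring only when \(a = b\).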


Problem 3: Show that the Cauchy-Schwarz inequality implies the given inequality for sequences.

Given:

\(\left| \sum_{j=1}^{\infty} \xi_j \eta_j \right| \leq \sqrt{ \sum_{k=1}^{\infty} |\xi_k|^2 } \sqrt{ \sum_{m=1}^{\infty} |\eta_m|^2 }\)

To Prove:

\((\left| \xi_1 \right| + \dots + \left| \xi_n \right|)^2 \leq n (\left| \xi_1 \right|^2 + \dots + \left| \xi_n \right|^2)\)

Proof:

Let's consider two sequences:

  1. \(a_j = |\xi_j|\) for \(j = 1, 2, ..., n\)

  2. \(b_j = 1\) for all \(j\)

Using the Cauchy-Schwarz inequality, we have:

\(\left( \sum_{j=1}^{n} a_j b_j \right)^2 \leq \left( \sum_{j=1}^{n} a_j^2 \right) \left( \sum_{j=1}^{n} b_j^2 \right)\)

Substituting in our choices for \(a_j\) and \(b_j\), we get:

\(\left( \sum_{j=1}^{n} |\xi_j| \right)^2 \leq \left( \sum_{j=1}^{n} |\xi_j|^2 \right) \left( \sum_{j=1}^{n} 1^2 \right)\)

Since \(\sum_{j=1}^{n} 1^2 = n\), our inequality becomes:

\(\left( \sum_{j=1}^{n} |\xi_j| \right)^2 \leq n \left( \sum_{j=1}^{n} |\xi_j|^2 \right)\)

This completes the proof.
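
For instance (a worked special case, not required by the problem), when \(n = 2\) the inequality reads

\begin{equation*} (|\xi_1| + |\xi_2|)^2 \leq 2\,(|\xi_1|^2 + |\xi_2|^2), \end{equation*}

which is equivalent to \(0 \leq (|\xi_1| - |\xi_2|)^2\), since \(2(|\xi_1|^2 + |\xi_2|^2) - (|\xi_1| + |\xi_2|)^2 = (|\xi_1| - |\xi_2|)^2\).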


Problem 4: Find a sequence that converges to 0 but is not in any \(l^p\) space, where \(1 \leq p < +\infty\).

Given:

Consider the sequence \((x_n)\) defined as:

\begin{equation*} x_n = \frac{1}{\ln(n+1)}, \qquad n = 1, 2, 3, \dots \end{equation*}

The first few terms of the sequence are:

\begin{equation*} x_1 = \frac{1}{\ln 2}, \quad x_2 = \frac{1}{\ln 3}, \quad x_3 = \frac{1}{\ln 4}, \quad \dots \end{equation*}

Proof:

1. Convergence to 0: Since \(\ln(n+1) \to \infty\) as \(n \to \infty\), we have \(x_n = \frac{1}{\ln(n+1)} \to 0\).

2. Not in any \(l^p\) space: To see why \((x_n)\) is not in any \(l^p\) space with \(1 \leq p < +\infty\), consider the series defining the \(p\)-norm of \(x\):

\begin{equation*} \|x\|_p = \left( \sum_{n=1}^{\infty} |x_n|^p \right)^{\frac{1}{p}} = \left( \sum_{n=1}^{\infty} \frac{1}{(\ln(n+1))^p} \right)^{\frac{1}{p}} \end{equation*}

For any fixed \(p\) with \(1 \leq p < +\infty\), the logarithm grows more slowly than any positive power, so \((\ln(n+1))^p \leq n\) for all sufficiently large \(n\), and therefore:

\begin{equation*} \frac{1}{(\ln(n+1))^p} \geq \frac{1}{n} \quad \text{for all sufficiently large } n \end{equation*}

Since the harmonic series \(\sum_{n=1}^{\infty} \frac{1}{n}\) diverges, the comparison test shows that \(\sum_{n=1}^{\infty} |x_n|^p\) diverges for every \(p\) with \(1 \leq p < +\infty\). Hence \((x_n)\) converges to 0 but belongs to no \(l^p\) space.


Problem 5: Find a sequence \(x\) which is in \(l^p\) for some \(p > 1\) but \(x\) is not in \(l^1\) .

Solution:

Consider the sequence \(x_n\) defined by:

\begin{equation*} x_n = \frac{1}{n^{\alpha}} \end{equation*}

where \(0 < \alpha < 1\).

1. \(x\) is in \(l^p\) for some \(p > 1\):

For the sequence to be in \(l^p\), the series \(\sum_{n=1}^{\infty} |x_n|^p\) must converge. In this case:

\begin{equation*} \sum_{n=1}^{\infty} \left( \frac{1}{n^{\alpha}} \right)^p = \sum_{n=1}^{\infty} \frac{1}{n^{p\alpha}} \end{equation*}

Given that \(0 < \alpha < 1\), the exponent \(p\alpha\) exceeds 1 precisely when \(p > \frac{1}{\alpha}\). Since the series \(\sum_{n=1}^{\infty} \frac{1}{n^s}\) converges for \(s > 1\), our series converges for every \(p > \frac{1}{\alpha}\). Because \(\frac{1}{\alpha} > 1\), such values of \(p\) exist and satisfy \(p > 1\), so \(x \in l^p\) for some \(p > 1\).

Proof: Convergence of the series \(\sum_{n=1}^{\infty} \frac{1}{n^s}\) for \(s > 1\)

Integral Test:

To determine the convergence of the series \(\sum_{n=1}^{\infty} \frac{1}{n^s}\), we can compare it to the improper integral:

\begin{equation*} \int_{1}^{\infty} \frac{1}{x^s} \, dx \end{equation*}

  1. Evaluate the integral:

\begin{equation*} \int_{1}^{\infty} \frac{1}{x^s} \, dx = \lim_{{b \to \infty}} \int_{1}^{b} x^{-s} \, dx \end{equation*}

Using the power rule for integration:

\begin{equation*} \lim_{{b \to \infty}} \left[ \frac{x^{-s+1}}{-s+1} \right]_1^b = \lim_{{b \to \infty}} \left[ \frac{1}{(1-s)b^{s-1}} - \frac{1}{1-s} \right] \end{equation*}

For \(s > 1\), the term \(\frac{1}{(1-s)b^{s-1}}\) approaches 0 as \(b\) approaches infinity. Thus, the integral converges to:

\begin{equation*} \frac{1}{s-1} \end{equation*}

  2. Comparison with the series:

Since the improper integral converges, the series \(\sum_{n=1}^{\infty} \frac{1}{n^s}\) also converges by the integral test.

In conclusion, the series \(\sum_{n=1}^{\infty} \frac{1}{n^s}\) converges for all \(s > 1\).
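
As an illustration (the value \(s = 2\) is chosen here only as an example), the integral evaluates to \(\frac{1}{s-1} = 1\), and the corresponding series converges as well:

\begin{equation*} \int_{1}^{\infty} \frac{dx}{x^2} = 1, \qquad \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} < \infty \end{equation*}

(the integral test certifies convergence of the series, though not the value of its sum).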

2. \(x\) is not in \(l^1\):

For the sequence to be in \(l^1\), the series \(\sum_{n=1}^{\infty} |x_n|\) must converge. In this case:

\begin{equation*} \sum_{n=1}^{\infty} \frac{1}{n^{\alpha}} \end{equation*}

Given that \(0 < \alpha < 1\), this is a \(p\)-series with exponent \(\alpha < 1\), and such a series diverges whenever its exponent is at most 1. Thus, the sequence \(x_n\) is not in \(l^1\).

Proof: Divergence of the series \(\sum_{n=1}^{\infty} \frac{1}{n^p}\) for \(p \leq 1\)

Integral Test:

To determine the convergence of the series \(\sum_{n=1}^{\infty} \frac{1}{n^p}\), we can compare it to the improper integral:

\begin{equation*} \int_{1}^{\infty} \frac{1}{x^p} \, dx \end{equation*}

  1. Evaluate the integral:

\begin{equation*} \int_{1}^{\infty} \frac{1}{x^p} \, dx = \lim_{{b \to \infty}} \int_{1}^{b} x^{-p} \, dx \end{equation*}

Using the power rule for integration:

\begin{equation*} \lim_{{b \to \infty}} \left[ \frac{x^{-p+1}}{-p+1} \right]_1^b = \lim_{{b \to \infty}} \left[ \frac{1}{(1-p)b^{p-1}} - \frac{1}{1-p} \right] \end{equation*}

For \(p < 1\), the term \(\frac{1}{(1-p)b^{p-1}} = \frac{b^{1-p}}{1-p}\) grows without bound as \(b\) approaches infinity. For \(p = 1\) the power rule does not apply; instead \(\int_1^b \frac{dx}{x} = \ln b\), which also tends to infinity. Thus, the integral diverges for all \(p \leq 1\).

  2. Comparison with the series:

Since the improper integral diverges, the series \(\sum_{n=1}^{\infty} \frac{1}{n^p}\) also diverges by the integral test.

In conclusion, the series \(\sum_{n=1}^{\infty} \frac{1}{n^p}\) diverges for all \(p \leq 1\).

Therefore, the sequence \(x_n = \frac{1}{n^{\alpha}}\) with \(0 < \alpha < 1\) is in \(l^p\) for every \(p > \frac{1}{\alpha}\) (in particular, for some \(p > 1\)) but is not in \(l^1\).
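
For a concrete instance (the specific value \(\alpha = \frac{1}{2}\) is chosen here only for illustration), take \(x_n = \frac{1}{\sqrt{n}}\). Then \(\frac{1}{\alpha} = 2\), and for example \(p = 3 > 2\) gives

\begin{equation*} \sum_{n=1}^{\infty} |x_n|^3 = \sum_{n=1}^{\infty} \frac{1}{n^{3/2}} < \infty, \qquad \text{while} \qquad \sum_{n=1}^{\infty} |x_n| = \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} = \infty, \end{equation*}

so \(x \in l^3\) but \(x \notin l^1\).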


Problem 6: Show that if \(A \subset B\) in a metric space \((X,d)\), then \(\delta(A) \leq \delta(B)\).

Given: The diameter \(\delta(A)\) of a nonempty set \(A\) in a metric space \((X,d)\) is defined as:

\begin{equation*} \delta(A) = \sup_{x, y \in A} d(x, y) \end{equation*}

A set \(A\) is said to be bounded if \(\delta(A) < \infty\).

Intuition: Consider two nested sets \(A\) and \(B\) in a metric space. The inner set represents \(A\) and the outer set represents \(B\).

https://www.wolframcloud.com/obj/8284851e-93d0-4a74-90e1-459e0333670c

The maximum distance between any two points in \(A\) will always be less than or equal to the maximum distance between any two points in \(B\). This is because every point in \(A\) is also in \(B\), and the supremum of distances in \(B\) must account for all distances in \(A\) as well as additional distances between points exclusive to \(B\) or between points in \(A\) and points exclusive to \(B\).

Proof:

  1. Consider any two points \(x, y \in A\). Since \(A \subset B\), both \(x\) and \(y\) are also in \(B\), so \(d(x, y)\) is a distance between two points of \(B\) and therefore \(d(x, y) \leq \delta(B)\).

  2. Thus \(\delta(B)\) is an upper bound for the set of all distances between points of \(A\), because the set of distances realized in \(A\) is a subset of the set of distances realized in \(B\).

  3. Taking the supremum over all \(x, y \in A\) gives \(\delta(A) \leq \delta(B)\).
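
For example (an illustration in \(\mathbb{R}\) with the usual metric, not part of the problem), take \(A = [0,1]\) and \(B = [0,2]\), so \(A \subset B\). Then

\begin{equation*} \delta(A) = \sup_{x,y \in [0,1]} |x - y| = 1 \leq 2 = \sup_{x,y \in [0,2]} |x - y| = \delta(B). \end{equation*}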


Problem 7: The diameter \(\delta(A)\) of a nonempty set \(A\) in a metric space \((X,d)\) is defined as:

\begin{equation*} \delta(A) = \sup_{x, y \in A} d(x, y) \end{equation*}

Show that \(\delta(A) = 0\) if and only if \(A\) consists of a single point.

Proof:

(=>) Direction: Assume \(\delta(A) = 0\), i.e., the supremum of the distances between all pairs of points in \(A\) is 0. Then for any \(x, y \in A\) we have \(d(x, y) \leq \delta(A) = 0\), so \(d(x, y) = 0\), and in a metric space this forces \(x = y\). Hence \(A\) contains no pair of distinct points, and since \(A\) is nonempty, it consists of a single point.

(<=) Direction: Assume \(A\) consists of a single point, say \(a\). Then, for any \(x, y \in A\), \(x = y = a\). The distance \(d(x, y) = d(a, a) = 0\). Since this is the only possible distance between points in \(A\), the supremum of these distances is also 0. Therefore, \(\delta(A) = 0\).

Combining both directions, we conclude that \(\delta(A) = 0\) if and only if \(A\) consists of a single point.


Problem 8: Given two nonempty subsets \(A\) and \(B\) of a metric space \((X,d)\), the distance \(D(A,B)\) between \(A\) and \(B\) is defined as:

\begin{equation*} D(A,B) = \inf_{a \in A, b \in B} d(a, b) \end{equation*}

Show that \(D\) does not define a metric on the power set of \(X\).

Proof: To show that \(D\) does not define a metric on the power set of \(X\), we need to show that at least one of the metric properties is violated by \(D\). The metric properties are:

  1. Non-negativity: For all sets \(A, B\) in the power set of \(X\), \(D(A,B) \geq 0\).

  2. Identity of indiscernibles: \(D(A,B) = 0\) if and only if \(A = B\).

  3. Symmetry: For all sets \(A, B\) in the power set of \(X\), \(D(A,B) = D(B,A)\).

  4. Triangle inequality: For all sets \(A, B, C\) in the power set of \(X\), \(D(A,C) \leq D(A,B) + D(B,C)\).

We will focus on the second property, the identity of indiscernibles.

Consider two distinct sets \(A\) and \(B\) such that \(A\) is a proper subset of \(B\), so \(B\) contains at least one point not in \(A\). For any point \(a \in A\) we also have \(a \in B\), so the pair \((a, a)\) is admissible in the infimum and \(d(a, a) = 0\). Hence \(D(A,B) = 0\) even though \(A \neq B\). This violates the identity of indiscernibles, since \(D(A,B) = 0\) does not imply \(A = B\).

Therefore, \(D\) does not define a metric on the power set of \(X\).
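
For a concrete instance (an example in \(\mathbb{R}\) with the usual metric, added for illustration), take \(A = \{0\}\) and \(B = \{0, 1\}\). Then

\begin{equation*} D(A,B) = \inf\{d(0,0),\, d(0,1)\} = \inf\{0, 1\} = 0, \qquad \text{yet } A \neq B. \end{equation*}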

Visualization:

https://www.wolframcloud.com/obj/b078151a-22ad-4290-b859-657a3777ff26

In the above visualization:

  • The inner circle represents the set \(A\).

  • The outer circle represents the set \(B\).

  • The point labeled "a" is a point in \(A\).

  • The point labeled "b" is a point in \(B\) but not in \(A\).

As visualized, \(A\) (the inner circle) is a proper subset of \(B\) (the outer circle); the point \(a\) lies in both sets, while \(b\) lies only in \(B\). Since \(a \in A\) and \(a \in B\), the pair \((a, a)\) already forces \(D(A,B) = 0\) even though \(A \neq B\), which is exactly the failure of the identity of indiscernibles established above.


Problem 9: The distance \(D(A,B)\) between two nonempty subsets \(A\) and \(B\) of a metric space \((X,d)\) is defined as:

\begin{equation*} D(A,B) = \inf_{a \in A, b \in B} d(a, b) \end{equation*}

Show that if \(A \cap B \neq \emptyset\), then \(D(A,B) = 0\). What about the converse?

Proof with Visualization:

https://www.wolframcloud.com/obj/15f2f93c-50a2-4687-b6cb-0b8b26a2a6f2

In the above visualization:

  • The circle on the left represents the set \(A\).

  • The circle on the right represents the set \(B\).

  • The point labeled "x" is a point that belongs to both \(A\) and \(B\), i.e., \(x \in A \cap B\).

  1. If \(A \cap B \neq \emptyset\) then \(D(A,B) = 0\) :

    If \(A \cap B \neq \emptyset\), then there exists at least one point \(x\) such that \(x \in A\) and \(x \in B\). For this point, \(d(x, x) = 0\). Since \(D(A,B)\) is the infimum of the distances between all pairs of points where one is from \(A\) and the other is from \(B\), and since 0 is a possible distance (because of the point \(x\)), the infimum is 0. Therefore, \(D(A,B) = 0\).

  2. Converse: If \(D(A,B) = 0\), does it follow that \(A \cap B \neq \emptyset\)?

    No; the converse is false in general. \(D(A,B) = 0\) only says that the infimum of the distances \(d(a, b)\) with \(a \in A\) and \(b \in B\) is 0; an infimum need not be attained, so there need not exist points \(a \in A\) and \(b \in B\) with \(d(a, b) = 0\). For example, in \(\mathbb{R}\) with the usual metric, let \(A = (0,1)\) and \(B = (1,2)\). Points of \(A\) and points of \(B\) can be chosen arbitrarily close to 1, so \(D(A,B) = 0\), yet \(A \cap B = \emptyset\).
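
To spell out why \(D(A,B) = 0\) in this example (a routine verification, included for completeness): for any \(\varepsilon\) with \(0 < \varepsilon < 1\), the points \(a = 1 - \frac{\varepsilon}{2} \in A\) and \(b = 1 + \frac{\varepsilon}{2} \in B\) satisfy

\begin{equation*} d(a, b) = \left(1 + \tfrac{\varepsilon}{2}\right) - \left(1 - \tfrac{\varepsilon}{2}\right) = \varepsilon, \end{equation*}

so the infimum of \(d(a, b)\) over \(a \in A\), \(b \in B\) is 0, even though no pair of points attains it.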


Problem 10: The distance \(D(x,B)\) from a point \(x\) to a nonempty subset \(B\) of a metric space \((X,d)\) is defined as:

\begin{equation*} D(x,B) = \inf_{b \in B} d(x, b) \end{equation*}

Show that for any \(x, y \in X\):

\begin{equation*} |D(x,B) - D(y,B)| \leq d(x,y) \end{equation*}

Proof:

For any \(b \in B\):

  1. \(d(x, b) \leq d(x, y) + d(y, b)\) (by the triangle inequality)

Since \(D(x,B) \leq d(x, b)\) by the definition of the infimum, this gives:

  2. \(D(x,B) \leq d(x, y) + d(y, b)\) for every \(b \in B\)

Taking the infimum over all \(b \in B\) on the right-hand side:

  3. \(D(x,B) \leq d(x, y) + D(y,B)\), that is, \(D(x,B) - D(y,B) \leq d(x,y)\)

Similarly, by interchanging \(x\) and \(y\):

  4. \(d(y, b) \leq d(y, x) + d(x, b)\) (by the triangle inequality)

Since \(D(y,B) \leq d(y, b)\) by the definition of the infimum, this gives:

  5. \(D(y,B) \leq d(y, x) + d(x, b)\) for every \(b \in B\)

Taking the infimum over all \(b \in B\) on the right-hand side:

  6. \(D(y,B) \leq d(y, x) + D(x,B)\), that is, \(D(y,B) - D(x,B) \leq d(y,x) = d(x,y)\)

From (3) and (6), we get:

\begin{equation*} |D(x,B) - D(y,B)| \leq d(x,y) \end{equation*}
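
As a quick numerical check (an example in \(\mathbb{R}\) with the usual metric, chosen only for illustration), take \(B = [0,1]\), \(x = 3\), and \(y = 2\). Then \(D(x,B) = 2\), \(D(y,B) = 1\), and

\begin{equation*} |D(x,B) - D(y,B)| = |2 - 1| = 1 \leq 1 = d(x,y). \end{equation*}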

Visualization Explanation:

https://www.wolframcloud.com/obj/2f65e7f6-4afe-448d-93ed-5cf2d939da51

In the above visualization:

  • The circle represents the set \(B\).

  • The points labeled "x" and "y" are two arbitrary points in \(X\).

  • The point labeled "b" is an arbitrary point in \(B\).

  • The solid line between "x" and "y" represents the distance \(d(x,y)\).

  • The dashed lines from "x" and "y" to "b" represent the distances \(d(x,b)\) and \(d(y,b)\) respectively.

From the triangle inequality, the direct distance between "x" and "y" (i.e., \(d(x,y)\)) is always less than or equal to the sum of their distances to any point "b" in \(B\). This is visually evident as the direct path (solid line) between "x" and "y" is never longer than the path that goes through "b" (dashed lines).

This visualization supports the proof that for any two points \(x\) and \(y\) in \(X\), the difference in their distances to set \(B\) is bounded by their direct distance, i.e., \(|D(x,B) - D(y,B)| \leq d(x,y)\).


Problem 12:

Given the definition of a bounded set from Problem 6, where the diameter \(\delta(A)\) of a nonempty set \(A\) in a metric space \((X, d)\) is defined by:

\begin{equation*} \delta(A) = \sup_{x,y \in A} d(x, y) \end{equation*}

A set is said to be bounded if \(\delta(A) < \infty\).

Show that the union of two bounded sets \(A\) and \(B\) in a metric space is a bounded set.

Visualization of the union of two bounded sets

Proof:

Let's assume that both \(A\) and \(B\) are bounded sets. This means:

\begin{equation*} \delta(A) = \sup_{x,y \in A} d(x, y) < \infty \qquad \text{and} \qquad \delta(B) = \sup_{x,y \in B} d(x, y) < \infty \end{equation*}

Now, consider any two points \(p\) and \(q\) in \(A \cup B\). There are three possible scenarios:

  1. Both \(p\) and \(q\) are in \(A\).

  2. Both \(p\) and \(q\) are in \(B\).

  3. \(p\) is in \(A\) and \(q\) is in \(B\) or vice versa.

For the first scenario, \(d(p, q) \leq \delta(A)\) since \(A\) is bounded.

For the second scenario, \(d(p, q) \leq \delta(B)\) since \(B\) is bounded.

For the third scenario, fix a point \(a_0 \in A\) and a point \(b_0 \in B\) (both sets are nonempty) and use the triangle inequality twice:

\begin{equation*} d(p, q) \leq d(p, a_0) + d(a_0, b_0) + d(b_0, q) \end{equation*}

Since \(p, a_0 \in A\) and \(q, b_0 \in B\) (up to swapping the roles of \(p\) and \(q\)), we have \(d(p, a_0) \leq \delta(A)\) and \(d(b_0, q) \leq \delta(B)\), so:

\begin{equation*} d(p, q) \leq \delta(A) + d(a_0, b_0) + \delta(B) \end{equation*}

Combining all three scenarios, the supremum of the distances between any two points in \(A \cup B\) satisfies:

\begin{equation*} \delta(A \cup B) \leq \delta(A) + \delta(B) + d(a_0, b_0) \end{equation*}

Since \(\delta(A)\), \(\delta(B)\), and \(d(a_0, b_0)\) are all finite, \(\delta(A \cup B) < \infty\), which means \(A \cup B\) is bounded.

This completes the proof.
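
As a quick check (an example in \(\mathbb{R}\) with the usual metric, chosen only for illustration), take \(A = [0,1]\) and \(B = [5,6]\), and fix \(a_0 = 1 \in A\), \(b_0 = 5 \in B\). Then \(\delta(A) = \delta(B) = 1\), \(d(a_0, b_0) = 4\), and

\begin{equation*} \delta(A \cup B) = 6 - 0 = 6 \leq \delta(A) + \delta(B) + d(a_0, b_0) = 1 + 1 + 4 = 6, \end{equation*}

so the bound derived above is consistent with (and in this case attained by) the example.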


Problem 11:

Given a metric space \((X,d)\), another metric on \(X\) is defined by:

\begin{equation*} \tilde{d}(x,y) = \frac{d(x,y)}{1+d(x,y)} \end{equation*}

Show that \(\tilde{d}\) is a metric and that \(X\) is bounded in this metric.

Proof:

Part 1: Show that \(\tilde{d}(x,y)\) is a metric:

  1. Non-negativity: For any \(x, y \in X\),

    \begin{equation*} \tilde{d}(x,y) = \frac{d(x,y)}{1+d(x,y)} \geq 0 \end{equation*}

    since \(d(x,y) \geq 0\) by the definition of a metric.

  2. Identity of indiscernibles: For any \(x \in X\),

    \begin{equation*} \tilde{d}(x,x) = \frac{d(x,x)}{1+d(x,x)} = 0 \end{equation*}

    since \(d(x,x) = 0\). Conversely, if \(\tilde{d}(x,y) = 0\), then \(d(x,y) = 0\), and hence \(x = y\) because \(d\) is a metric.

  3. Symmetry: For any \(x, y \in X\),

    \begin{equation*} \tilde{d}(x,y) = \frac{d(x,y)}{1+d(x,y)} = \frac{d(y,x)}{1+d(y,x)} = \tilde{d}(y,x) \end{equation*}

    because \(d(x,y) = d(y,x)\).

  4. Triangle Inequality: For any \(x, y, z \in X\),

    \begin{equation*} \tilde{d}(x,y) = \frac{d(x,y)}{1+d(x,y)} \leq \frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)} \end{equation*}

    because \(d(x,y) \leq d(x,z) + d(z,y)\) and the function \(t \mapsto \frac{t}{1+t}\) is increasing for \(t \geq 0\). Splitting the right-hand side into two fractions and decreasing each denominator (which can only enlarge each fraction):

    \begin{equation*} \frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)} = \frac{d(x,z)}{1+d(x,z)+d(z,y)} + \frac{d(z,y)}{1+d(x,z)+d(z,y)} \leq \frac{d(x,z)}{1+d(x,z)} + \frac{d(z,y)}{1+d(z,y)} = \tilde{d}(x,z) + \tilde{d}(z,y) \end{equation*}

    Therefore \(\tilde{d}(x,z) + \tilde{d}(z,y) \geq \tilde{d}(x,y)\).

Part 2: Show that \(X\) is bounded in the metric \(\tilde{d}(x,y)\):

Given the nature of the fraction, as \(d(x,y)\) increases, the value of \(\tilde{d}(x,y)\) also increases, but at a diminishing rate due to the growing denominator. As \(d(x,y)\) approaches infinity, \(\tilde{d}(x,y)\) approaches but never reaches 1. Hence for all pairs \(x, y \in X\) we have \(0 \leq \tilde{d}(x,y) < 1\). Therefore the supremum of \(\tilde{d}(x,y)\) over all \(x, y \in X\) is at most 1, which means that \(X\) is bounded in the metric \(\tilde{d}\), with diameter \(\delta(X) \leq 1\).

This completes the proof.
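
As a numerical illustration (in \(\mathbb{R}\) with \(d(x,y) = |x - y|\), chosen only as an example), even very distant points stay within \(\tilde{d}\)-distance 1 of each other:

\begin{equation*} \tilde{d}(0, 10) = \frac{10}{11} \approx 0.91, \qquad \tilde{d}(0, 1000) = \frac{1000}{1001} \approx 0.999, \end{equation*}

so the \(\tilde{d}\)-distances approach 1 but never reach it.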
