
Theorem: Let $ V $ be a Euclidean space of dimension $ n $ and let $ v_1,...,v_m \in V $ s.t. $ \langle v_i,v_j \rangle < 0 $ for all $ i \neq j $.
Show that $ m \leq n + 1 $. (Hint: project $ v_1,...,v_{m-1} $ onto $ \{ v_m \}^{\perp} $.)

I have no idea what to do. I feel stuck because I don't fully understand how to apply the definition of projection in this problem; nevertheless, this is an exercise I must do.

I was given the following (assume all the vector spaces we are talking about are finitely generated, i.e. finite-dimensional):

(Definition) Let $ W \subseteq V $ be a subspace and let $ w_1,...,w_r $ be an orthonormal basis of $ W $. Then for all $ v \in V $ we'll define the projection of $ v $ on $ W$ as $ w = \langle v,w_1 \rangle w_1 + ... + \langle v,w_r \rangle w_r $ .

(Definition) Let $ W \subseteq V $ be a subspace. We'll define the orthogonal complement of $ W $ as $ W^{\perp} = \{ v \in V \mid \forall w \in W .\ \langle v,w \rangle = 0 \} $ .

(Theorem) Let $ W \subseteq V $ be a subspace. Then $ V = W \oplus W^{\perp} $.

I thought I'd prove the theorem by induction on $ n $: in both the base case and the induction step I'd look at an orthonormal basis $ ( \tau_1, ..., \tau_q ) $ of $ \{ v_m \}^{\perp} $, where $ q = \dim \{ v_m \}^{\perp} $. Then, using the hint, the projection of $ v_1,...,v_{m-1} $ onto $ \{ v_m \}^{\perp} $ gives me the vectors:
$ \delta_1 = \langle v_1,\tau_1 \rangle \tau_1 + ... + \langle v_1,\tau_q \rangle \tau_q $
$ \vdots $
$ \delta_{m-1} = \langle v_{m-1},\tau_1 \rangle \tau_1 + ... + \langle v_{m-1},\tau_q \rangle \tau_q $
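
One thing I noticed while playing with the definitions (I'm not sure yet whether it helps): since $ V = \operatorname{span}\{v_m\} \oplus \{v_m\}^{\perp} $ by the theorem above, the same projection can also be written without choosing a basis at all:

$$ \delta_i = v_i - \frac{\langle v_i, v_m \rangle}{\langle v_m, v_m \rangle} v_m . $$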

But I don't know whether any of this leads anywhere... Can you please help? I don't know how to prove the theorem.

Thanks in advance!

hazelnut_116
  • 1,699

4 Answers


Here's a way to visualize it; hopefully you can turn it into a proof.

First rotate coordinates so that $v_m$ is a positive multiple of ${\bf e}_n$, the $n$th unit coordinate vector. Then the vectors $v_1,...,v_{m-1}$ all have negative $n$th entry. Then repeat on the vectors $v_1,...,v_{m-1}$: rotate in the first $n-1$ variables so that the second-to-last entry of $v_{m-1}$ is positive and its earlier entries are all zero. (You don't have to worry about $v_{m-1}$ being a negative multiple of ${\bf e}_n$, since if that were the case there would be no other vectors; you can't have negative inner products with both a positive and a negative multiple of ${\bf e}_n$.)

Then each of $v_1,...,v_{m-2}$ will have their last two entries negative. Now repeat the above procedure, each time rotating so that the $k$th to last vector has its $k$th to last entry positive, its later entries negative, and its earlier entries zero. You can do this at most $n$ times. The remaining $m - n$ vectors will have to have all negative entries. Since any two such vectors will have positive inner product with each other, there can be at most one of them. Hence a maximum of $n + 1$ total vectors.
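
Incidentally, the bound is sharp: the $n+1$ vectors pointing from the center of a regular simplex to its vertices have pairwise negative inner products. Here is a quick numerical sketch of that configuration (my own illustration, not part of the proof: I center the standard basis of $\mathbb R^{n+1}$, which puts the vectors in the $n$-dimensional hyperplane where the coordinates sum to zero):

```python
import numpy as np

n = 4                                          # ambient dimension; any n works
# Rows of U are u_i = e_i - mean(e_1, ..., e_{n+1}): the n+1 vertices of a
# regular simplex centered at the origin, inside the hyperplane sum(x) = 0.
U = np.eye(n + 1) - 1.0 / (n + 1)

G = U @ U.T                                    # Gram matrix of the n+1 vectors
off_diag = G[~np.eye(n + 1, dtype=bool)]
print(np.allclose(off_diag, -1.0 / (n + 1)))   # True: all pairwise products negative
print(np.linalg.matrix_rank(U) == n)           # True: they fit in n dimensions
```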

Zarrax
  • 44,950

Let $m > n + 1$ and $v_1, \ldots, v_m$ be any vectors in $\mathbb R^n$. Let $w_j$ equal $v_j$ extended with an additional $1$, so $w_j \in \mathbb R^{n+1}$. Then the $w_j$ are linearly dependent: there are coefficients $c_j$, not all zero, such that both

  1. $\sum c_j v_j = 0$
  2. $\sum c_j = 0$

The second equality implies that not all non-zero $c_j$ can have the same sign. Let $I$ be the set of indices for which $c_j > 0$ and $J$ the set for which $c_j < 0$. Then 1. can be written as $$v = \sum_I c_j v_j = -\sum_J c_j v_j.$$ It follows that $$0 \leq \langle v, v \rangle = \sum_{(j,k) \in I\times J} -c_j c_k \langle v_j, v_k \rangle.$$ Now all coefficients $-c_j c_k$ are positive, and therefore not all $\langle v_j, v_k \rangle$ can be negative.
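
If it helps to see the two identities concretely, here is a small numerical sketch (the sizes $n$, $m$ and the random vectors are arbitrary choices, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 6                               # any sizes with m > n + 1
V = rng.standard_normal((m, n))           # rows are v_1, ..., v_m

W = np.hstack([V, np.ones((m, 1))])       # w_j = (v_j, 1) lives in R^{n+1}
# m > n + 1 vectors in R^{n+1} must be dependent: take c from the null space
# of W^T, i.e. a right-singular vector for a zero singular value.
c = np.linalg.svd(W.T)[2][-1]

print(np.allclose(c @ V, 0))              # 1. sum_j c_j v_j = 0
print(np.isclose(c.sum(), 0))             # 2. sum_j c_j = 0
```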

WimC
  • 32,192

Here's the result via Gram Matrices and Perron Theory, which gets at the heart of $\langle v_i,v_j \rangle < 0$ for $i\neq j$.

Let $G\in \mathbb R^{m \times m}$ be the associated Gram Matrix, i.e. $g_{i,j}=\langle v_i, v_j\rangle$. Since the inner product is positive definite (and in particular, non-degenerate), we can observe that
$\mathbf 0 = G\mathbf y \iff \mathbf 0 = \begin{bmatrix} v_1 & v_2 & \cdots & v_m \end{bmatrix}\mathbf y = \sum_{j=1}^m y_j\cdot v_j$
which implies
$\text{rank } G = \text{rank}\begin{bmatrix} v_1 & v_2 & \cdots & v_m \end{bmatrix} \leq \dim V = n$


alternative justification:
write out $\mathbf x^T G\mathbf x = \sum_{i=1}^m x_{i} \sum_{j=1}^m x_{j}\langle \mathbf v_i, \mathbf v_j \rangle = \big\langle\big(\sum_{j=1}^m x_{j} \mathbf v_j\big), \big(\sum_{j=1}^m x_{j} \mathbf v_j \big)\big\rangle\geq 0$
with equality iff $\big(\sum_{j=1}^m x_{j} \mathbf v_j \big)=\mathbf 0$ by positive definiteness of the inner product. Thus an $m\times m$ Gram matrix with rank $m$ implies all $m$ vectors are linearly independent. In general if $\text{rank }G=r$ then $G$ is congruent to a matrix with the leading $r\times r$ principal submatrix being an invertible Gram Matrix and all other components zero, which implies those $r$ vectors are linearly independent. See Prove the existence of a principal submatrix of order $r$ in $M\in\Bbb F^{n\times n}, M=-M^T,\ \operatorname{rank}(M)=r$ for the congruence argument.
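
As a quick numerical sanity check of the rank identity (again with arbitrary sizes and random vectors, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 6                               # arbitrary sizes
V = rng.standard_normal((n, m))           # columns are v_1, ..., v_m
G = V.T @ V                               # g_ij = <v_i, v_j>

print(np.linalg.matrix_rank(G) == np.linalg.matrix_rank(V))   # True
print(np.linalg.matrix_rank(G) <= n)                          # True (here rank = 4)
```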

What follows shows $m-1 \leq \text{rank } G$, thus at least $m-1$ of the vectors $\big\{v_1, v_2,\dots, v_m\big\}$ are linearly independent so $m-1\leq\dim V= n$.


Being a Gram Matrix, we know $G\succeq \mathbf 0$, and given the structure of this problem, all diagonal entries of $G$ are positive and all off-diagonal entries are negative. We can homogenize the diagonal via congruence by defining the diagonal matrix $D$ with $d_{i,i}:=g_{i,i}$:

$G' := D^{-1/2} G D^{-1/2} = 2I - M$
where $G'$ has all diagonal entries equal to $1$ and all off-diagonal entries negative, and thus $M$ is a positive matrix (in the Perron Theory sense of all components being positive). By Perron Theory, $M$ has a positive eigenvalue $\lambda_1$ that is simple and of strictly maximum modulus -- i.e. $\lambda_1 \gt \vert \lambda_j\vert$ for $j\in\big\{2,3,...,m\big\}$. Thus $G'$ has a simple minimum eigenvalue $2-\lambda_1 \geq 0$, nonnegative since $G'\succeq \mathbf 0$ because $G'$ is congruent to $G$. Thus $1\geq \dim \ker G' =\dim \ker G$ and by rank-nullity $m-1\leq \text{rank } G$.

Putting the rank inequalities together gives:
$m-1\leq \text{rank } G\leq n\implies m \leq n+1$
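
Here is a small numerical sketch of this chain (my illustration: I use the centered-simplex Gram matrix as a stand-in, since it has the required sign pattern and realizes the extreme case $\dim \ker G = 1$; the size $m$ is arbitrary):

```python
import numpy as np

m = 5                                      # arbitrary size
# Centered-simplex Gram matrix: positive diagonal, negative off-diagonal, PSD,
# with a one-dimensional kernel (spanned by the all-ones vector).
U = np.eye(m) - 1.0 / m
G = U @ U.T

Dis = np.diag(1.0 / np.sqrt(np.diag(G)))   # D^{-1/2}
Gp = Dis @ G @ Dis                         # unit diagonal, negative off-diagonal
M = 2 * np.eye(m) - Gp                     # entrywise positive: Perron applies

lam = np.linalg.eigvalsh(M)                # eigenvalues in ascending order
print(lam[-1] > np.max(np.abs(lam[:-1])))  # True: Perron root simple and dominant
print(np.linalg.matrix_rank(G) == m - 1)   # True: dim ker G = 1, rank G = m - 1
```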

user8675309
  • 10,034

I think $m \leq n$, and strictly $m < n+1$. The comments above imply that none of the $v_j$ can be zero.

Indeed, $\langle v_i, v_j \rangle = 0$ iff $v_i$ is perpendicular to $v_j$; hence $v_i$ and $v_j$ are pairwise linearly independent.

Suppose there were a linear dependency among $v_1, ..., v_m$, say $\sum_i u_i v_i = 0$ for some real numbers $u_i$, not all zero. Then WLOG $v_1 = -\sum_{j=2}^{m} k_j v_j$, where $k_j = u_j / u_1$.

However, if $v_1$ is orthogonal to each $v_j$ ($j \neq 1$), then $v_1$ is orthogonal to $r v_j$ for every scalar $r$, and so $v_1$ is orthogonal to $-\sum_{j=2}^{m} k_j v_j$. (This is a separate lemma that one might want to prove, and I can try to prove it if you like.)

This is a contradiction, because $v_1$ can't both be a linear combination of $\{v_2, ..., v_m\}$ and perpendicular to that same linear combination.

Hence the set $\{v_1, ..., v_m\}$ is linearly independent in $\mathbb R^n$, and so $m \leq n$.

I'm new to this stuff, so please correct me if I'm wrong.