
Let $P$ be a finite set of vectors in $\mathbb R_{\geq 0}^n$, each with $1$-norm equal to $1$; i.e. for each $p\in P$ we have $\sum_{i=1}^n p_i = 1$.

I have the following statement (a rough numerical sketch of the condition follows the two bullet points):

A particular such set $P$ satisfies the property $\phi(P)$ iff: for every pair of vectors $d,e\in\mathbb R^n$, if ($d_1>d_2$ and $e_1\leq e_2$) or ($d_2>d_1$ and $e_2\leq e_1$), then there exist $p,q\in P$ such that

  • $d\cdot q>d\cdot p$, and

  • For all $r\in P$, it holds that $e\cdot r\leq e\cdot p$
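
Here is a rough, purely illustrative Python sketch of how I read the condition (the helper names `phi_holds_on` and `phi_sampled` are just labels I made up, and random sampling over $d,e$ can only refute $\phi(P)$ for a given $P$, never certify it):

```python
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def phi_holds_on(P, d, e):
    """Check the inner condition of phi(P) for a single pair (d, e)."""
    # Hypothesis on (d, e): one of the first two coordinates of d is strictly
    # larger than the other, and e does not strictly agree with that ordering.
    hypothesis = (d[0] > d[1] and e[0] <= e[1]) or (d[1] > d[0] and e[1] <= e[0])
    if not hypothesis:
        return True  # nothing to verify for this pair
    # Need p, q in P with d.q > d.p, where p maximizes the dot product with e.
    return any(
        any(dot(d, q) > dot(d, p) for q in P)
        and all(dot(e, r) <= dot(e, p) for r in P)
        for p in P
    )

def phi_sampled(P, trials=10_000):
    """Randomized sanity check: it can refute phi(P), but never prove it."""
    n = len(P[0])
    for _ in range(trials):
        d = [random.uniform(-1, 1) for _ in range(n)]
        e = [random.uniform(-1, 1) for _ in range(n)]
        if not phi_holds_on(P, d, e):
            return False, (d, e)  # a witnessing pair (d, e)
    return True, None
```

For example, `phi_sampled([(0.5, 0.5), (1.0, 0.0)])` probes a two-element $P$ in $\mathbb R^2$.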

I am looking for a simplified statement of $\phi(P)$ that doesn't refer to the vectors $d,e$; i.e. a statement that is equivalent to it (assuming that $P$ is a set of vectors in $\mathbb R_{\geq 0}^n$ with $1$-norm $1$), but simplified in that way.

I've been looking for a simplified form for a while now, but can't find it.

I have two questions:

My main question: a methodological one. I am looking for a way to prove, or at least make a good educated guess about, whether such a simplified statement even exists in principle. Is there a principle by which we can reason that the statement can or cannot possibly be simplified in that way? (I asked this question about this; also see this question for motivation.)

A practical question: if it is possible, how do we actually go about simplifying it?

user56834
  • It seems like an odd condition: you are using $n$-dimensional vectors, but part of the condition only refers to the first and second coordinates of $d$ and $e$? – Joppy Sep 30 '18 at 11:36
  • @Joppy, yes that's correct. This makes it hard to simplify it seems. (and that's also one of the reasons I want to get rid of the $d,e$ stuff altogether, if possible). Basically the subcondition is: if one of the first two elements of the first vector is strictly bigger than the other, but this doesn't hold for the other vector. – user56834 Sep 30 '18 at 11:45
  • That condition is precisely the fact that $d$ and $e$ have opposite (or zero, for $e$) signs when dotted with $(1, -1, 0, 0, \ldots, 0)$, but I'm not sure if that fact can be used to simplify the statement. – Joppy Sep 30 '18 at 11:53

1 Answer


This is one way to present your condition in a point-free way:

  1. Let $R(a,b)$ be a relation on vectors defined by the condition $(a_1-a_2)(b_1-b_2) > 0$ or $(b_1-b_2) = 0$.
  2. For each $p\in P$, let $A_p \subseteq \mathbb{R}^n$ denote the set of vectors for which $p$ gives a maximal dot product relative to any other $q\in P$:

    $$A_p \equiv \{x \in \mathbb{R}^n : \forall q \in P, (x\cdot p \geq x\cdot q) \}.$$

    Interestingly, the sets $A_p$ are not necessarily disjoint (maxima are not necessarily unique), and they are closed under positive linear combinations: if $x,y\in A_p$ and $\alpha,\beta>0$, then $(\alpha x+\beta y)\cdot p = \alpha\, x\cdot p + \beta\, y\cdot p \geq \alpha\, x\cdot q + \beta\, y\cdot q = (\alpha x+\beta y)\cdot q$ for every $q\in P$.

  3. Let $S_P(a,b)$ be another relation on vectors defined by $\forall p \in P, (a \in A_p \Rightarrow b\in A_p).$

  4. Then your condition $\phi(P)$ is equivalent to the condition that $P$ is nonempty and that $S_P\subseteq R$; a small illustrative code sketch of this check is given below, before the proof.

$\square$
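
To make the reformulation concrete, here is a minimal Python sketch (the function names `in_A`, `R_rel`, `S_rel`, and `check_reformulation` are my own labels, and the sampling loop only spot-checks $S_P(a,b)\Rightarrow R(a,b)$ on finitely many random pairs; it is a heuristic probe, not a proof of the inclusion):

```python
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def in_A(p, x, P):
    """x lies in A_p iff p maximizes the dot product with x over P."""
    return all(dot(x, p) >= dot(x, q) for q in P)

def R_rel(a, b):
    """R(a, b): (a1 - a2)(b1 - b2) > 0, or b1 = b2."""
    return (a[0] - a[1]) * (b[0] - b[1]) > 0 or b[0] == b[1]

def S_rel(P, a, b):
    """S_P(a, b): every A_p that contains a also contains b."""
    return all((not in_A(p, a, P)) or in_A(p, b, P) for p in P)

def check_reformulation(P, trials=10_000):
    """Heuristically test 'P is nonempty and S_P is contained in R'."""
    if not P:
        return False, None
    n = len(P[0])
    for _ in range(trials):
        a = [random.uniform(-1, 1) for _ in range(n)]
        b = [random.uniform(-1, 1) for _ in range(n)]
        if S_rel(P, a, b) and not R_rel(a, b):
            return False, (a, b)  # a pair related by S_P but not by R
    return True, None
```

Note that `in_A` uses the non-strict inequality from the definition of $A_p$, while `R_rel` uses the strict inequality from step 1.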


Proof:

  • Abstractly, your condition reads: $\phi(P)$ holds iff for every pair of vectors $d$ and $e$, the condition $f(\ldots)$ implies that there exist $p,q\in P$ for which $g(\ldots)$ and $h(\ldots)$.

  • Filling in arguments, $\phi(P)$ holds iff for every pair of vectors $d$ and $e$, the condition $f(d,e)$ implies that there exist $p,q\in P$ for which $g(d,p,q)$ and $h(e,p)$.

  • The condition $h(e,p)$ is just that $p$ maximizes the dot product with $e$ over $P$. As long as $P$ is nonempty, some $p$ satisfies $h(e,p)$: since $P$ is finite, the finitely many values $e\cdot r$ for $r\in P$ attain their maximum. So we don't have to worry about the existence of a $p$ satisfying $h(e,p)$ as long as $P$ is nonempty.

  • The notation $A_p$ actually makes these two conditions simpler. The condition $h(e,p)$ is just $e \in A_p$. The condition that there exists $q$ such that $g(d,p,q)$ is just $d \notin A_p$ (!): indeed, $d \notin A_p$ means precisely that some $q\in P$ satisfies $d\cdot q > d\cdot p$.

  • So, we can rewrite your condition as:

    ... if $P$ is nonempty, then for every pair $d, e$, the condition $f(d,e)$ implies that there's a $p$ for which $(e\in A_p)$ and $(d \notin A_p)$.

    A great simplification, given that we've now eliminated reference to $q$.

  • Taking the contrapositive, we can equivalently write:

    ... if $P$ is nonempty, then for every pair $d, e$, the condition $\forall p\in P, [(e\notin A_p)$ or $(d \in A_p)]$ implies that $f(d,e)$ fails.

    or just

    ... if $P$ is nonempty, then for every pair $d, e$, the condition $\forall p\in P, [(e\in A_p)\Rightarrow (d \in A_p)]$ implies that $f(d,e)$ fails.

  • Let $S_P(a,b)$ be the antecedent of this last implication (which does depend on $P$), and let $R(a,b)$ be its consequent, the failure of $f$ (which does not); here the pair $(a,b)$ plays the role of $(e,d)$, which is why $R$ was defined above with that order of arguments.
  • We can use arithmetic to define $R$ succinctly, and, viewing $S_P$ and $R$ as sets of pairs, we can replace the universally quantified implication with the set inclusion $S_P\subseteq R$ for further brevity; the full chain of equivalences is assembled below.
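
Assembled in one place (and assuming $P$ is nonempty throughout), the bullet points above amount to the chain of equivalences

$$\begin{aligned} \phi(P) &\iff \forall d,e:\; f(d,e)\Rightarrow \exists\, p,q\in P:\; g(d,p,q)\wedge h(e,p)\\ &\iff \forall d,e:\; f(d,e)\Rightarrow \exists\, p\in P:\; e\in A_p \wedge d\notin A_p\\ &\iff \forall d,e:\; \bigl[\forall p\in P:\; e\in A_p \Rightarrow d\in A_p\bigr]\Rightarrow \neg f(d,e)\\ &\iff \forall e,d:\; S_P(e,d)\Rightarrow R(e,d)\;\iff\; S_P\subseteq R. \end{aligned}$$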

We find that $\phi(P)$ is equivalent to the condition that $P$ is nonempty and that $S_P\subseteq R$.

user326210
  • This is not a solution. If you expand your condition $S_P\subseteq R$ into a first-order statement, you just get forall $d,e\in \mathbb R^n$, .... In fact, it seems like your condition is just restating my condition into set-relations. It doesn't actually simplify it, it just "hides" the complexity into the definitions of $S_P$, $A_p$, and $R$. I am looking for a statement that is actually simpler than the beginning one, and more importantly, an understanding of why this is or is not even theoretically possible. – user56834 Oct 06 '18 at 07:16