18

Turing proved that not all real numbers are effectively computable, in the sense that for some real numbers no algorithm exists that computes their digits.

Here is Turing's definition: A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer $n \geq 1$ as input and produces the $n$-th digit of the real number's decimal expansion as output.
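For concreteness, here is a minimal sketch of a real number that is computable in this sense, using the rational $1/7$ (the function name is illustrative):

```python
def nth_digit_of_one_seventh(n: int) -> int:
    """Return the n-th decimal digit (n >= 1) of 1/7 = 0.142857...

    Uses exact integer arithmetic: the n-th digit of p/q after the
    decimal point is floor(10^n * p / q) mod 10.
    """
    return (10**n // 7) % 10

# 1/7 = 0.142857142857..., so the first six digits are:
assert [nth_digit_of_one_seventh(n) for n in range(1, 7)] == [1, 4, 2, 8, 5, 7]
```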

I am interested in what is known about the efficiency of representing real numbers and the speed of decoding them. Specifically, I want to know the relationship between the compactness of the algorithm that encodes a real number and the speed of producing its digits (the decoding speed as a function of $n$). A real number is efficiently computable if the number of steps needed to compute digit $n$ is bounded by a polynomial in $n$; it is hard to compute if no such polynomial bound exists.

Are there any real numbers that are hard to compute (not necessarily computable ones)? What are the references?

I have no formal definitions in mind.

Michael Hardy
  • 11,922
  • 1
    You might find some useful discussion here: http://mathoverflow.net/questions/26402/transcendental-numbers-computable-transcendental-numbers – D. Ror. Oct 21 '16 at 13:13
  • 2
    Let $f(n)$ be a (presumably fast-growing) computable function. Consider the number $r$ defined by: the $n$th digit of $r$ is $1$ iff the $n$th Turing machine (under some fixed ordering) halts within $f(n)$ steps, and $0$ otherwise. Then $r$ is computable, but for "natural" (time-constructible) $f(n)$, computing digits of $r$ takes "about" $f(n)$ steps, by the time hierarchy theorem. – Wojowu Oct 21 '16 at 13:35
  • 4
    I am a bit confused by the last part of your question: "Are there any real numbers that are hard to compute (not necessarily computable ones)?" I mean, if a number is not computable, it is obviously very, very hard to compute (in the sense that it is impossible to compute it!). Am I missing something? – Francesco Polizzi Oct 21 '16 at 15:13
  • @MohammadAl-Turkistany But then the question becomes trivial - any finite number of digits of an uncomputable number may be computed, in constant time even. See the (new) first part of my answer. – Noah Schweber Oct 21 '16 at 16:29
  • Take a hard problem $X$, preferably undecidable, with a Boolean answer. If $X$ is false, let $R=0$; otherwise let $R=1.111\ldots111$. $X$ could be the twin prime conjecture, the Riemann hypothesis, or the consistency of ZFC. Compute the bits of $R$. – joro Oct 21 '16 at 18:12
  • 3
    @joro That's a bad example; $X$ is efficiently computable, we just don't know what Turing machine does the job. – Noah Schweber Oct 21 '16 at 18:13
  • @NoahSchweber Well, it could be a bad example. $X$ could be efficiently computable, but are the twin prime/Goldbach/3x+1 conjectures true? Just compute $R$. – joro Oct 21 '16 at 18:16

6 Answers

23

EDIT: This was in a comment below, but I now think it should be part of the main answer:

There are two different ways to ask the question in the OP:

  • Is there a real number $r$ such that no polytime algorithm computes all the bits of $r$?

  • Is there a real number $r$ such that no individual bit of $r$ can be computed by a polytime algorithm?

The former is the question I answer below; the latter is trivial! Given any real $r$ and any $n$, there is a polytime (indeed, constant-time) algorithm $p_{r, n}$ which computes the first $n$ bits of $r$ correctly. (Consider the silly algorithm which has the string $\langle r(0), r(1), \ldots, r(n)\rangle$ "hard-coded" in: on input $k$ for $k\le n$ this algorithm outputs $r(k)$, and on input $k$ for $k>n$ it outputs $0$, say.) So in order to get anything interesting, we need to look at algorithms which attempt to compute all the bits simultaneously.
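A minimal sketch of this silly hard-coding trick (the helper name is illustrative):

```python
def make_hardcoded_algorithm(first_bits):
    """Return a constant-time algorithm that outputs the hard-coded
    bits r(0), ..., r(n) on inputs k <= n, and 0 on inputs k > n."""
    table = list(first_bits)  # the string <r(0), ..., r(n)>, hard-coded

    def algorithm(k: int) -> int:
        return table[k] if k < len(table) else 0  # O(1) lookup

    return algorithm

# An algorithm correct on the first five bits of some real r = 0.10110...:
p = make_hardcoded_algorithm([1, 0, 1, 1, 0])
assert p(2) == 1 and p(99) == 0
```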

Note that noncomputable reals trivially satisfy the first question, so the right question to ask is:

Is there a computable real number $r$ such that no polytime algorithm computes all the bits of $r$?


Here's an explicit construction of a computable real which is hard to compute:

Let $\{\varphi_e\}$ list all (partial) computable functions, and $\{p_e\}$ all polynomials.

We define a real number $r$ as follows: the $n$th bit of the binary expansion of $r$ is a $1$ iff $\varphi_i(n)$ does not halt and output $1$ in $\le p_j(n)$ steps (so, either doesn't halt in that time, or does halt and outputs something $\not=1$) - where $n=\langle i, j\rangle$. (Here "$\langle\cdot,\cdot\rangle$" denotes the Cantor pairing function.)

$r$ is computable (note that we run each computable function for only a bounded amount of time before deciding what to do for that bit), but $r$ is not computable in polynomial time since it diagonalizes against all polynomial-time computations: if $\varphi_i$ has running time bounded by $p_j$, then $\varphi_i(\langle i, j\rangle)\not=r(\langle i, j\rangle)$.
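Here is a schematic sketch of the diagonalization. The step-bounded interpreter `run_with_step_limit` and the enumeration `poly` are toy stand-ins of my own (a genuine construction would run a universal machine over all partial computable functions and enumerate all polynomials):

```python
from math import isqrt

def cantor_unpair(n: int) -> tuple[int, int]:
    """Invert the Cantor pairing function n = (i+j)(i+j+1)/2 + j."""
    w = (isqrt(8 * n + 1) - 1) // 2
    j = n - w * (w + 1) // 2
    return w - j, j

# Toy stand-ins, for illustration only:
TOY_PROGRAMS = [lambda x: 1, lambda x: 0, lambda x: x % 2]

def run_with_step_limit(i, x, steps):
    """Run 'machine' i on input x for at most `steps` steps; return its
    output, or None if it has not halted in time. (Toy version: the
    stand-in programs all halt in one step; indices past the end of the
    list stand in for machines that never halt.)"""
    if steps < 1 or i >= len(TOY_PROGRAMS):
        return None
    return TOY_PROGRAMS[i](x)

def poly(j):
    """The j-th polynomial in a fixed enumeration (toy: n -> n**j + j)."""
    return lambda n: n**j + j

def bit_of_r(n: int) -> int:
    """n-th bit of r: 1 iff phi_i(n) does NOT halt and output 1
    within p_j(n) steps, where n = <i, j>."""
    i, j = cantor_unpair(n)
    return 0 if run_with_step_limit(i, n, poly(j)(n)) == 1 else 1

print([bit_of_r(n) for n in range(10)])
```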


Note that nothing special about polynomials was used here: given any reasonable complexity class (for instance, any complexity class of the form "Running time bounded by some $f\in\mathcal{F}$" for $\mathcal{F}$ a computable set of computable total functions), there is a computable real whose bits are not computable by any algorithm in that class.


Going back to the second, "trivial" version of the question, it can actually be "de-trivialized" in an interesting way: look at how hard it is to get successively longer approximations to $r$. That is, the silly algorithm I described, which gets the first $n$ bits correctly, has length $\approx n$; can we do better?

This question forms the basis of Kolmogorov complexity. Roughly speaking, given a real $r$, let $K^r$ be the function whose $n$th value is the length of the shortest Turing machine which computes the first $n$ bits of $r$ correctly. Then we can ask (even for noncomputable $r$!): how quickly does $K^r$ grow? If $r$ is computable, then $K^r$ is eventually constant; if $r$ is not computable, however, things get really interesting. (See e.g. the notion of K-trivials, which are reals which are "almost computable" in the sense of Kolmogorov complexity.)

Now, this isn't quite what you're asking about - you'd want to look at $K_{poly}^r$, the function whose $n$th value is the length of the shortest polytime Turing machine which computes the first $n$ bits of $r$ correctly. This, and other resource-bounded variations of Kolmogorov complexity, don't appear to be as well studied - but see this article.

Noah Schweber
  • 18,932
  • 2
    Maybe I am missing something, but I don't see why your real number is hard to compute. – Mohammad Al-Turkistany Oct 21 '16 at 15:04
  • 3
    @MohammadAl-Turkistany See my last paragraph before the line. No algorithm running in polynomial time correctly computes every bit of this real - in particular, if the $i$th algorithm runs in time bounded by the $j$th polynomial, then it gets the $\langle i, j\rangle$th bit wrong. This is an instance of a very general and powerful technique - diagonalization. – Noah Schweber Oct 21 '16 at 15:05
  • @MohammadAl-Turkistany I have edited my answer to comment on the different versions of the question you ask (see the beginning and end sections). – Noah Schweber Oct 21 '16 at 16:28
  • This post shows that deciding whether a TM runs in polynomial time is undecidable. So a list of all polynomial-time TMs does not exist, right? http://cstheory.stackexchange.com/questions/5004/are-runtime-bounds-in-p-decidable-answer-no/5006#5006 – Mohammad Al-Turkistany Oct 21 '16 at 16:35
  • @MohammadAl-Turkistany That's true - which is why that's not what I did. I listed all TMs, as well as all polynomials, and looked at "cut-off TMs": the $\langle i, j\rangle$th "cut-off TM" is the TM you get by taking the $i$th TM and forcing it to halt in $p_j$-many steps (where $p_j$ is the $j$th polynomial). Recall the definition of $r$: "the $n$th bit of the binary expansion of $r$ is a $1$ iff $\varphi_i(n)$ does not halt and output $1$ in $\le p_j(n)$ steps". At no point do I need to know if a machine runs in polytime - I just look at the first few steps of the computation. – Noah Schweber Oct 21 '16 at 16:37
  • Note that this trick does not allow me to tell if a given TM runs in polynomial time: rather, it just gives me a list of new TMs $\psi_i$, each of which runs in polynomial time, such that every polytime $\varphi_m$ is equivalent to some $\psi_i$. It's these that I diagonalize against. Note that this same trick works for much broader classes - e.g. it lets me diagonalize against all the exponential-time computable functions. – Noah Schweber Oct 21 '16 at 16:38
  • Are you aware of any natural real number whose digits are very hard to compute (not necessarily all of the digits)? – Mohammad Al-Turkistany Oct 21 '16 at 16:51
  • @MohammadAl-Turkistany What do you mean by "compute its digits", if you're not asking to compute all the digits? As I wrote in my answer, you have to be very precise here, otherwise the question trivializes. Are you asking for a natural real with high (polytime) Kolmogorov complexity, or something else? – Noah Schweber Oct 21 '16 at 16:54
  • I mean it may compute some of the bits incorrectly. – Mohammad Al-Turkistany Oct 21 '16 at 16:57
  • @MohammadAl-Turkistany If you allow it to compute some bits incorrectly, then - as I wrote in my answer above - every real is easily computed. So you need to give some more restriction in order for that question to not be trivial. – Noah Schweber Oct 21 '16 at 16:58
  • Oops. I suggest a polynomial bound on the number of incorrectly computed bits. Is that still trivial? – Mohammad Al-Turkistany Oct 21 '16 at 17:02
  • @MohammadAl-Turkistany I'm not sure what "a polynomial bound on the number of incorrectly computed bits" means. We have one function, and one real - if the real is incomputable, the function gets infinitely many bits wrong: is that polynomial or not? (Are you asking something about the proportion of bits $f$ guesses correctly?) – Noah Schweber Oct 21 '16 at 17:44
  • I think you answered the original question in my post. – Mohammad Al-Turkistany Oct 21 '16 at 18:36
10

This is not an answer to your exact question but it is a closely related topic that you might find interesting. The Hartmanis–Stearns conjecture states that the binary expansion of an algebraic irrational number cannot be computed by a real-time Turing machine. Dick Lipton has blogged about this conjecture here and here.

Timothy Chow
  • 78,129
10

Noah Schweber has answered the question, but here is a simple example that may help drive the point home more intuitively.

List all graphs. Let $r$ be the real number whose $n$th binary digit is $1$ if and only if the $n$th graph is Hamiltonian.

In other words, any computational problem with a yes/no answer can be encoded in the binary expansion of a real number. Since there exist hard computational problems, there exist hard-to-compute real numbers.
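A toy sketch of such an encoding (the enumeration convention and names here are illustrative: graphs on four vertices are read off from the bits of $n$):

```python
from itertools import permutations

def nth_graph(n: int, k: int = 4):
    """Toy enumeration: bit idx of n decides whether the idx-th
    vertex pair of a k-vertex graph is an edge."""
    pairs = [(u, v) for u in range(k) for v in range(u + 1, k)]
    edges = {pair for idx, pair in enumerate(pairs) if (n >> idx) & 1}
    return k, edges

def is_hamiltonian(k: int, edges: set) -> bool:
    """Brute-force search for a Hamiltonian cycle (exponential in k)."""
    def adj(u, v):
        return (min(u, v), max(u, v)) in edges
    for perm in permutations(range(1, k)):
        cycle = (0,) + perm + (0,)
        if all(adj(cycle[i], cycle[i + 1]) for i in range(k)):
            return True
    return False

def nth_bit_of_r(n: int) -> int:
    """n-th binary digit of r: 1 iff the n-th graph is Hamiltonian."""
    return 1 if is_hamiltonian(*nth_graph(n)) else 0

print([nth_bit_of_r(n) for n in range(16)])
```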

Timothy Chow
  • 78,129
  • 1
    For this formulation, it seems we have no idea which bits are hard to compute. Could it be that almost all of them are easy? In fact, if we compute the bits in order and the graphs are suitably ordered, could all the bits be easy to compute? – Simd Oct 22 '16 at 08:17
  • Isn't this conditional on $P \ne NP$? (If $P=NP$, then Hamiltonian cycle is in $P$.) – Mohammad Al-Turkistany Oct 22 '16 at 10:42
  • 2
    @MohammadAl-Turkistany I'm afraid you are right. But I feel like the point is not so much about the specific example of Hamiltonian graphs. You can replace it with any problem known not to be in P. – Wojowu Oct 22 '16 at 14:32
  • @Lembik It is true for nearly every "natural" decision problem that there are some instances where the problem is easy. This is even true for halting problem - there are a lot of machines for which we easily see whether they halt or not (let me also mention this paper). You are also right this depends on the ordering - we can define an ordering such that odd-numbered graphs have cycles and even-numbered ones don't. It is necessary for us to specify that the ordering is "simpler" than the problem at hand, so to say. This is usually meant implicitly. – Wojowu Oct 22 '16 at 14:36
  • 2
    @Lembik : If you want a computational problem for which all the bits are (believed to be) hard to compute, try a cryptographic problem. Anyway, the point is that the whole business about real numbers is a red herring. The question reduces to, "Do there exist hard computational problems?" The answer is yes. – Timothy Chow Oct 22 '16 at 22:56
  • Problems that take exponential time to compute aren't hard enough to be good examples, because to compute a number to precision $2^{-n}$, you need to compute all the first $n$ bits, not just the $n$th bit. This will already take exponential time if the bits are given by an easy computational problem, and if the bits are given by a problem that takes exponential time, then computing the first $n$ of them takes... still exponential time. To get a number that is hard to compute, you need to use a decision problem that cannot be solved in exponential time. – Alex Mennen May 27 '18 at 17:17
10

You ask about formal definitions, so let me mention a certain issue regarding Turing's definition. The fact is that although Turing had introduced the notion of computable number, his definition, which appears in the first sentence of his 1936 paper, is not the definition that is typically given today.

[Image: the opening sentence of Turing's 1936 paper, giving his definition of a computable number]

The problem is that if one wants to regard the computable numbers as represented by their programs, as Turing did, for the purpose of further computation, then with Turing's definition many seemingly computable operations on real numbers will not actually be computable. For example, there is no computable procedure to compute the sum of two computable real numbers, given programs for the summands, if one uses Turing's definition.

I explain more at length in my blog post, Alan Turing, on computable numbers. An excerpt:

One specific problem with Turing's approach is that on this account, it turns out that the operations of addition and multiplication for computable real numbers are not computable operations. Of course this is not what we want.

The basic mathematical fact in play is that the digits of the sum $a+b$ of two real numbers are not a continuous function of the digits of $a$ and $b$ separately; in some cases, one cannot say with certainty the initial digits of $a+b$, knowing only finitely many digits, as many as desired, of $a$ and $b$.

To see this, consider the following sum $a+b$ $$\begin{align*} &0.343434343434\cdots \\ +\quad &0.656565656565\cdots \\[-7pt] &\hskip-.5cm\rule{2in}{.4pt}\\ &0.999999999999\cdots \end{align*}$$ If you add up the numbers digit-wise, you get $9$ in every place. That much is fine, and of course we should accept either $0.9999\cdots$ or $1.0000\cdots$ as correct answers for $a+b$ in this instance, since those are both legitimate decimal representations of the number $1$.

The problem, I claim, is that we cannot assign the digits of $a+b$ in a way that will depend only on finitely many digits each of $a$ and $b$. The basic problem is that if we inspect only finitely many digits of $a$ and $b$, then we cannot be sure whether that pattern will continue, whether there will eventually be a carry or not, and depending on how the digits proceed, the initial digits of $a+b$ can be affected.

In detail, suppose that we have committed to the idea that the initial digits of $a+b$ are $0.999$, on the basis of sufficiently many digits of $a$ and $b$. Let $a'$ and $b'$ be numbers that agree with $a$ and $b$ on those finite parts of $a$ and $b$, but afterwards have all $7$s. In this case, the sum $a'+b'$ will involve a carry, which will turn all the nines up to that point to $0$, with a leading $1$, making $a'+b'$ strictly greater than $1$ and having decimal representation $1.000\cdots00005555\cdots$. Thus, the initial-digits answer $0.999$ would be wrong for $a'+b'$, even though $a'$ and $b'$ agreed with $a$ and $b$ on the sufficiently many digits supposedly justifying the $0.999$ answer. On the other hand, if we had committed ourselves to $1.000$ for $a+b$, on the basis of finite parts of $a$ and $b$ separately, then let $a''$ and $b''$ be all $2$s beyond that finite part, in which case $a''+b''$ is definitely less than $1$, making $1.000$ wrong.
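One can check the two cases numerically; here is a minimal sketch in exact rational arithmetic (the variable names are illustrative):

```python
from fractions import Fraction

# a and b agree with 0.3434... and 0.6565... for twelve digits, then
# continue with all 7s (forcing a carry) or with all 2s (forbidding one).
prefix_a = Fraction(343434343434, 10**12)
prefix_b = Fraction(656565656565, 10**12)
all_7s = Fraction(7, 9) / 10**12  # 0.000000000000777...
all_2s = Fraction(2, 9) / 10**12  # 0.000000000000222...

print(float((prefix_a + all_7s) + (prefix_b + all_7s)))  # 1.0000000000005555..., so > 1
print(float((prefix_a + all_2s) + (prefix_b + all_2s)))  # 0.9999999999994444..., so < 1
```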

Therefore, there is no algorithm to compute the digits of $a+b$ continuously from the digits of $a$ and $b$ separately. It follows that there can be no computable algorithm for computing the digits of $a+b$, given the programs that compute $a$ and $b$ separately, which is how Turing defines computable functions on the computable reals. (This consequence is a subtly different and stronger claim, but one can prove it using the Kleene recursion theorem. Namely, let $a=.343434\cdots$ and then consider the program to enumerate a number $b$, which will begin with $0.656565$ and keep repeating $65$ until it sees that the addition program has given the initial digits for $a+b$, and at this moment our program for $b$ will either switch to all $7$s or all $2$s in such a way so as to refute the result. The Kleene recursion theorem is used in order to know that indeed there is such a self-referential program enumerating $b$.)

One can make similar examples showing that multiplication and many other very simple functions are not computable, if one insists that a computable number is an algorithm enumerating the digits of the number.

The definition of computable number used in computable analysis is that a real number is computable, if there is a computable procedure to produce approximations to it, to within any desired given accuracy. You don't have to get the digits right — you just have to be close enough by the metric, and the main point is that this isn't the same thing.

In regard to this question, my main point is that the relevant discontinuity issue will apply just as much in the feasible realm. In particular, one should modify the definition to say a number is polytime computable, if there is a polytime algorithm to produce approximations to the real, to within a given accuracy. For example, I would find it reasonable to say that, given $n$, one wants a rational approximation within $1/2^n$ of the target real, by a polytime algorithm in $n$.
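Under the approximation-based definition, familiar irrational numbers are easily seen to be polytime computable; here is a minimal sketch for $\sqrt{2}$ (the function name is illustrative):

```python
from math import isqrt

def approx_sqrt2(n: int) -> tuple[int, int]:
    """Return integers (p, q) with |p/q - sqrt(2)| < 1/2^n.

    Sets q = 2^n and p = floor(sqrt(2) * 2^n), computed by an exact
    integer square root; this runs in time polynomial in n."""
    q = 2**n
    p = isqrt(2 * q * q)
    return p, q

p, q = approx_sqrt2(10)
assert abs(p / q - 2**0.5) < 2**-10
```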

6

Also, the set of computable numbers is countable, so you should be able to find many real numbers that are not computable: take Chaitin's constant.

T. Amdeberhan
  • 41,802
  • I know about $\Omega$, but I have not seen any proof that computing the $n$-th digit needs more than polynomially many steps in $n$. – Mohammad Al-Turkistany Oct 21 '16 at 15:58
  • 6
    @MohammadAl-Turkistany First let me second a comment which Noah Schweber made above - if we want, for every $n$, a machine which outputs the $n$th digit of our number, where for every $n$ this machine can be different, then we can always choose one of two suitable machines and be happy with polynomial time. If we want a single machine which works for all $n$, then, as Chaitin's constant is uncomputable, there is no such machine, let alone a polynomial time one! – Wojowu Oct 21 '16 at 16:02
0

Yes. As a trivial example, take the following real number:

0.1121213221312..., where the $n$th decimal place is the number of prime factors of the $(n+1)$th integer (counting repeated factors). This assumes that computing the number of prime factors of a natural number is hard (it might not be; we don't know). Another example would be

$$\sum^{\infty}_{n=0} \frac{1}{\mathrm{Ack}(n, n)},$$ where $\mathrm{Ack}(x, y)$ is the Ackermann function. In general, you can just add up the reciprocals of the values of arbitrarily hard to compute (but still computable) functions and obtain an arbitrarily hard to compute real number.
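A minimal sketch of the Ackermann function, just to indicate the growth involved (the memoization and recursion-limit tweak are practical conveniences, not part of the definition):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ack(m: int, n: int) -> int:
    """Two-argument Ackermann function: total and computable, but its
    values grow faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# ack(4, 4) is already astronomically large, so even locating the
# contribution of 1/ack(4, 4) in the sum is hopeless in practice.
print([ack(k, k) for k in range(4)])  # [1, 3, 7, 61]
```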

  • 2
    I think your definition of "hard" differs from that in the question; it's trivial to find the number of factors of an integer in polynomial time in that integer (but not in polynomial time in the number of digits in that integer). – user44191 Nov 15 '22 at 02:15