95

For a long time I have been thinking about two problems that I have not been able to solve. It seems that one of them was recently solved. I have been thinking a lot about its motivation and its consequences, mostly because people used to motivate it with some very interesting implications. My conclusion, however, is that there is a mistake in the motivation of the problem, and that, while the result is really interesting, it does not make sense in the setting in which it is formulated. As my opinion carries little weight compared to that of the experts in the area, I have not said anything.

My question is whether you can provide some examples of conjectures that were believed to be interesting in the mathematical community for a specific reason, but where, once the proof was found, people realized that the reason motivating the problem was not truly related to its solution. In other words, the solution of the problem gives no clues about the original motivation.

user39115
  • 1,785
  • 5
    Mathematical logic in its modern form came into being basically as a response to Hilbert's illusions (is "wishful thinking" a better term?) about the deductive power of formal systems. – Christian Remling Jun 03 '14 at 19:14
  • 15
    Regarding the title, do you really mean a false illusion, or rather do you just mean "illusion"? To my way of reading, a false illusion would not really be an illusion at all, but real. – Joel David Hamkins Jun 03 '14 at 21:22
  • 5
    So this is really about bad/outdated goals in research rather than actual mistakes? – darij grinberg Jun 03 '14 at 22:11
  • 7
    Would be interesting, how the P=?NP question is judged if it were settled; to me it seems a good candidate for unfulfilled expectations. – Manfred Weis Jun 04 '14 at 13:52
  • 2
    nice question but dislike the vagueness of the initial paragraph. why not cite the example? sounds like you could be referring to an important open problem... – vzn Jun 06 '14 at 23:14
  • @JoelDavidHamkins An example of a real illusion: https://i.imgur.com/i3eLl.gif – Steven Gubkin Jul 17 '20 at 14:46

8 Answers

94

Computer designers and programmers dreamed, from the earliest days of the computer, of a computer that could play chess and win. Even Alan Turing had that dream, and designed Turochamp, the first chess-playing computer algorithm (it was executed on paper by hand at first, since in 1948 no device could yet run it).

As researchers realized the difficulty of playing chess well, the chess challenge was taken on in earnest. The conventional view was that to design a computer that could play chess and win would partake in the essence of artificial intelligence, and in the 1970s, computer chess was described as the Drosophila melanogaster of artificial intelligence, because the work on computer chess was thought to be laying the foundations of artificial intelligence.

The basic conjecture, that computers would play chess well, turned out to be true. But the way that computers played chess well was by brute-force methods, rather than with the kind of subtle intelligence that had been expected to be necessary, and so many artificial intelligence researchers were disappointed, and lost interest in chess.
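The "brute force" at issue is, at bottom, exhaustive minimax search of the game tree (real engines add alpha-beta pruning, depth limits, and fast evaluation functions, but the skeleton is the same). A toy sketch for illustration, with the tree given as nested lists of leaf scores rather than actual chess positions:

```python
def minimax(node, maximizing=True):
    """Exhaustively score a game tree.

    A leaf is a number (the position's score for the maximizing player);
    an internal node is a list of child positions.
    """
    if not isinstance(node, list):
        return node  # leaf: static evaluation
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizer picks the branch whose worst case is best:
# min(3, 5) = 3 beats min(2, 9) = 2, so the value of the tree is 3.
print(minimax([[3, 5], [2, 9]]))  # -> 3
```

No subtle intelligence is involved; the strength of chess programs came largely from searching this tree very fast and very deep.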

Meanwhile, the situation has led to debate in the AI community, as some researchers have argued that AI research should in fact follow the computer chess paradigm.

  • 2
    Well -- even what insects do requires considerably more intelligence than playing chess. -- For example so far nobody can construct a collection of robots who act like an ant population and collaborate in building an anthill, or like a hive which collects nectar and builds honeycombs, etc.. – Stefan Kohl Jun 03 '14 at 13:13
  • 6
@StefanKohl I agree with that, but that issue is orthogonal to the point of the metaphor. If you follow the link, you'll see that Donald Michie was saying that research on computer chess was fundamental, like Mendel's genetic experiments on peas or the work on drosophila. He says, "The use of chess now as a preliminary to the knowledge engineering and cognitive engineering of the future is exactly similar, in my opinion, to the work on drosophila. It should be encouraged in a very intense way, for these reasons." The subsequent computer chess developments reveal that he was probably wrong. – Joel David Hamkins Jun 03 '14 at 16:02
  • However, it seems that computers are still very bad at go... – David Fernandez-Breton Jun 03 '14 at 17:18
  • 6
@DavidFernandezBreton Computers are now simply "mediocre" at Go (Wikipedia lists a timeline of progress). To my knowledge, the best algorithms can reasonably compete against most amateurs now. Hilariously, the AI breakthrough that has allowed this is the progression from "brute force search" to "guess and check". – Alexis Beingessner Jun 03 '14 at 18:21
  • 5
    @Alexis the recent breakthrough is a bit more than just "guess and check". A big part of it comes from studying the explore/exploit trade off (applied to deciding which branches of the game tree you should think hardest about). – zeb Jun 03 '14 at 19:59
  • 38
    John McCarthy, the inventor of LISP, has a good quote about this: “Computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies." – Simon Lyons Jun 04 '14 at 00:13
  • 4
    @Alexis Beingessner: Go programs have gotten much stronger in the last 5 years. It's a misrepresentation of the progress in go to say that the best computer programs "can reasonably compete against most amateurs" and that the algorithms are "guess and check." Computers have reached the top amateur levels and perform at the level of some professional players. One of the key ideas since 2006 was to randomize go, to predict which plays are strong on average if you turn go into a game of skill and chance. The analogue in chess would be IM or weak GM, roughly 1000th in the world. – Douglas Zare Jun 04 '14 at 14:48
  • @DouglasZare "Computers have reached the top amateur levels and perform at the level of some professional players." seems to match my claim of "can reasonably compete against most amateurs". Regardless, I was just poking a bit fun at Monte-Carlo techniques in the same vein as the answer (which summarizes mini-max and other related search techniques as "brute-force"). – Alexis Beingessner Jun 04 '14 at 15:32
  • 5
    @Alexis Beingessner: What percentile do you think corresponds to "can reasonably compete against most amateurs?" 50th? 99.99th? The normal meaning would be somewhere between 50th and 90th, which describes go programs from 20 years ago. More accurate is to say programs now "routinely beat almost all amateurs." Brute force is an accurate description of chess algorithms, which primarily improved from C player in 1975 to grandmaster around 1990 due to computing speed, not good algorithms. "Guess and check" doesn't describe the breakthroughs in go algorithms. – Douglas Zare Jun 04 '14 at 16:41
@DouglasZare Wouldn't we also have to decide what an amateur is? Everybody who knows the rules? Everybody who plays on a regular basis? Everybody who competes regularly in amateur tournaments? This hugely changes the percentile-wise strength of an amateur player. So what is your notion of an amateur? I ask since, for a notion of amateur I consider reasonable here, your claim regarding programs 20 years ago seems false to me, and this seems in line with claims on the page above that in 1994 the best (or one of the best) programs repeatedly lost against a youth player giving fifteen stones. –  Jun 07 '14 at 10:39
Furthermore, following a link on Wikipedia, there is "The strongest Go programs currently[1] play at around 8 - 13 kyu against human opponents" ([1] appears to have been written slightly less than twenty years ago). I would not consider a strength of 8 - 13 kyu as reasonably paraphrased by "can reasonably compete against most amateurs". (Of course, depending on what notion of amateur you have, it might still be in line with your percentile claims. Yet they would then miss the point a bit and not be in line with what I would commonly understand under the quoted claims.) [@Douglas ] –  Jun 07 '14 at 10:56
  • 1
    @quid: Of course the definition of the player pool matters if you need the exact percentile, and I wasn't trying to be precise. I probably should have said something like 15 years ago instead of 20. It sounds like you are underestimating the strength of the kids who represented humans against the computers. Many professional go players turn pro at about 14 after years of intense training. The top 12-year-olds are close to professional level. Regardless, it's absurd to call current go programs "mediocre" and say that they "can reasonably compete against most amateurs" ... – Douglas Zare Jun 07 '14 at 14:05
  • 1
    ... since they have reached 6d on KGS (which is stronger than 6d in the AGA), and they have won against 9th dan professional players with handicaps of 4 stones multiple times, and with 3 stones at least once. That suggests they would be about 1000th in the world now. The Monte Carlo algorithms scale with parallel computing power, so you can quibble again about the precise definitions of the computer competitors, but if you find one where Alexis Beingessner's statements aren't grossly misleading let me know. – Douglas Zare Jun 07 '14 at 14:11
@DouglasZare I emphasized the 15 stones, not the player being young; 15 stones is pretty extreme. I am not sure why you assume I underestimate something here. Anyway, I provided a ranking estimate too, and I consider describing something of at best 8 kyu strength as "can reasonably compete against most amateurs" (what you claimed) stranger than describing a 6 dan that way. But I agree that AB's statement in isolation is misleading (though so were yours); they did provide a link from which you seem to quote, and clarified that they were not being that precise. –  Jun 07 '14 at 15:22
But since you have now also clarified that you were not trying to be precise, we can leave it there, and everybody interested can study the link provided by Alexis Beingessner. –  Jun 07 '14 at 15:23
  • 1
    I made two mistakes. First, I said 20 years ago when I should have said something like 15. My comments on the algorithms used, the strengths of go programs, and the meaning of the progress against human players were accurate and informative other than this slip. Second, I responded to quid. I'm done. – Douglas Zare Jun 07 '14 at 19:01
  • 1
    Regarding the discussion in this thread, the following article seems relevant. http://www.wired.com/2014/05/the-world-of-computer-go/ – Joel David Hamkins Jun 09 '14 at 14:39
  • From said article: "Go is the only one in which computers don’t stand a chance against humans." –  Jun 09 '14 at 16:38
This is not strictly a mathematical problem, but rather a technological one. Even before computers were available, one could argue that a sufficiently powerful machine could play chess well using brute force. – Anixx Jun 09 '14 at 16:51
  • 20
    Maybe it's time to update the comments on Go. – Gerry Myerson Apr 14 '16 at 00:06
  • 3
    My answer doesn't mention Go, although the comments do. But indeed, the recent triumph of Alpha-Go (https://en.wikipedia.org/wiki/AlphaGo) seems to have been based in part on neural learning algorithms, although still involving extensive tree searching. – Joel David Hamkins Jun 13 '16 at 10:24
94

The three-body problem is one of the most famous problems in the history of mathematics, which also has an important application in science: it was supposed to explain the Moon's motion, among other things. Enormous effort was spent on this problem by many famous mathematicians of the 18th and 19th centuries. Since Newton's time it was clear that there was no simple closed-form solution. (The problem also had an important practical application in the 18th century, namely to navigation. If you can predict the motion of the Moon a few years ahead with sufficient accuracy, you can determine longitude at sea without a chronometer, just by observing the Moon's position with respect to the stars.)

In the middle of the 19th century, an exact mathematical formulation of what was desired was achieved: to express the motions of the bodies in the form of convergent series of functions of time, valid for all times. Few people remember nowadays that in this precise form the problem was actually solved, by Sundman, at the beginning of the 20th century. This solution can be found in Siegel's book on celestial mechanics.
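To give the flavor of the result (in the standard summary; Sundman's construction assumes nonzero angular momentum, which excludes triple collisions): the coordinates of the bodies can be represented by series, often quoted as expansions in powers of $t^{1/3}$ in a regularizing variable,
$$x_i(t) = \sum_{k \ge 0} a_{ik}\, t^{k/3},$$
convergent for all real $t$. The catch is that the convergence is extraordinarily slow, so an astronomically large number of terms would be needed for any practical computation.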

But by that time it was already understood that this solution was useless for practical purposes, namely for prediction of the Moon's motion over long time periods. It was also useless for understanding the qualitative features of the motion.

78

The simplex method for linear programming was published by Dantzig in 1947. For years many variants were known to provide good performance in practice, but it was unknown whether any of these ran in polynomial time. Klee and Minty showed in 1972 that at least one such variant takes exponential time in the worst case, but the question of whether linear programs are solvable in polynomial time remained open. Given the wide-ranging applications of linear programs, this was viewed not just as a theoretical question, but an important practical one as well.

In 1979 Khachiyan published a version of the ellipsoid method which solves linear programs in polynomial time. This was viewed as an enormous breakthrough, but it turned out that while it answered the theoretical question of polynomial-time solvability, it did so in a way which was completely impractical. It shed no light on the efficiency of the simplex algorithm, which almost always runs much faster than the ellipsoid algorithm. The ellipsoid algorithm typically requires many more iterations and is numerically unstable to boot. Instability can be handled by keeping track of more and more bits of precision at each iteration, but this makes the algorithm even slower, especially in comparison to the simplex method, which does not have these instability problems.

In the years since, a variety of so-called "interior point" algorithms have appeared which are both efficient in theory and practice, but the simplex method remains competitive. There has also been work on smoothed complexity explaining why this is so.

I do not mean to diminish the theoretical importance of the ellipsoid method, but I think it is a good example of the type you ask for. While it technically answered the question at hand, it was in many ways a disappointment.

Noah Stein
  • 8,403
  • 1
  • 32
  • 55
58

Before Erdős and Selberg found an elementary proof of the prime number theorem, G. H. Hardy had predicted that the discovery of such an elementary proof would be cause "for the books to be cast aside and for the theory to be rewritten." Although Erdős and Selberg's work was a major accomplishment, it did not revolutionize number theory in the way that Hardy envisaged.
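For context, the theorem in question asserts that the prime-counting function satisfies
$$\pi(x) \sim \frac{x}{\log x} \qquad (x \to \infty).$$
The original proofs of Hadamard and de la Vallée Poussin (1896) relied on complex analysis; "elementary" here means avoiding complex-analytic methods, not that the proof is easy.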

See also this related MSE question.

Timothy Chow
  • 78,129
31

The simplicial homology of a polyhedron is not, a priori, a function of the underlying space, but of the space together with a simplicial decomposition. The Hauptvermutung ("main conjecture") stated that any two simplicial decompositions of the same space have a common refinement; this would have shown directly that simplicial homology is a topological invariant.

When Milnor disproved this conjecture in 1961, the invariance of simplicial homology had long since been proved using singular homology, and simplicial homology itself was regarded as obsolete by many.

20

It was conjectured for quite a long time that all simple $D$-modules are holonomic (see comments on Is simple non-holonomic D-module a local concept? for example) until Stafford gave counterexamples. I have been told that after this was proved, people decided that there had been no particular reason to believe it in the first place, other than that "holonomic" meant having the smallest possible dimension in some sense, and "simple" things ought to be small.

14

A famous example of a "false illusion" is Hilbert's second problem: to prove that the axioms of arithmetic are consistent. Hilbert did not even state this as a question (are they consistent or not?), so sure was he that consistency could be proved. Gödel's second incompleteness theorem later showed that no such proof can be carried out within arithmetic itself.

13

In knot theory, the motivation came from outside mathematics, but it is a good example of what you discuss. The subject was first developed to help study atoms, following the vortex-atom model of William Thomson (Lord Kelvin); this model was later completely discarded, but knot theory took on a life of its own and is now a thriving field of mathematics.