20

Can modern AI be used to program impressive graphics effects on very low-performance home retrocomputers, in particular a 3D rotating cube: effects that would be too difficult for a human to program on the cheapest machines?

(The time range of the home retrocomputers I'm referring to is approximately 1975-1985.)

NOTE: I'm using a 3D rotating cube, rotating and displaying satisfactorily, as a benchmark of achievement on very low-performance home retrocomputers (from 1975-1985). On many of these machines it seems impossible to achieve this effect satisfactorily, so I am wondering whether modern AI could succeed in the challenge where humans have failed.

texttext
  • 15
    Can you give an example of a retrocomputer on which it is 'commonly assumed will never be able to successfully display a 3D rotating cube'? – Bruce Abbott Mar 22 '18 at 07:56
  • 9
    The obvious answer is yes, of course, the underlying question, which would be off-topic, is how you do it. – Chenmunka Mar 22 '18 at 09:41
  • 15
    You can't overcome hardware limitations - anything that inherently requires more colours than the display, or more calculations than are possible in a particular period of time. Use of AI to write programs is not really an advanced field either. – pjc50 Mar 22 '18 at 14:16
  • 2
    It could, but you would first need to define "impressive" in terms of a cost function the AI can understand. – allo Mar 22 '18 at 14:48
  • 3
    @BruceAbbott Maybe something like Babbage's Difference Engine or the Colossus. – Omar and Lorraine Mar 22 '18 at 16:53
  • @pjc50 True but often there are tricks to really push the hardware to its limits and some of them were discovered quite some time after the days when the machines in question were popular. Things like NUFLI graphic mode (2009) or playing 8 bit samples (2007/2008) on a C64 for instance. – BlackJack Mar 22 '18 at 18:52
  • There are too many cheap (1980 onwards) home computers from 1975-1985 on which it is too difficult to satisfactorily display a 3D rotating cube – texttext Mar 22 '18 at 21:53
  • This may be irrelevant, but I think the introduction of colors and sound to cheap (1980 onwards) home computers from 1975 to 1985 was the reason (or a reason) why they were so slow or incapable of satisfactorily displaying a 3D rotating cube, and yet this was the sort of effect for which many people bought the machines. I have always wondered if there was a secret 'Monochrome Mode' available in all of these machines from 1975 to 1985 – texttext Mar 22 '18 at 21:56
  • 1
    You could calculate the minimum time it would take to merely write the necessary amount of screen memory (based on how many pixels change from each frame to the next). If that's still too slow, no algorithm can be fast enough. (But based on @introspec's idea, an algorithm might choose a better trade-off between speed and quality.) – John B. Lambe Mar 22 '18 at 22:03
  • @texttext: Some of them did have a monochrome mode. The BBC Micro (all versions and the Acorn Electron) could do 1, 2 or 4-bit colour (with only 8 physical colours). – John B. Lambe Mar 22 '18 at 22:06
  • @BlackJack: How did the "Radioactivity" demo on the C64 do its digitized drums, if not using the ramp wave reset/sample/hold trick? – supercat Mar 22 '18 at 22:34
  • I'm also curious about which computer was not able to display a rotating cube. If a TRS80 model I could render the Dancing Demon, it is hard to imagine what problem a cube would pose. – Martin Argerami Mar 23 '18 at 04:52
  • 1
    @texttext colour does not intrinsically "slow things down", although increased bit depth can indirectly. And many of the home computers did have a monochrome mode (e.g. BBC Micro). And hardly anyone bought a computer for the specific purpose of rendering rotating cubes. – pjc50 Mar 23 '18 at 09:46
  • 1
    Why would it be difficult to do a rotating 3D cube in that time frame? I knocked up a quick demo of this when the place I worked at got GINO-F for our superminis. – Neuromancer Mar 23 '18 at 12:52
  • So this question is asking if AI would be able to write a conventional program to do something a human programmer could not? It seems a matter of identifying the most efficient algorithmic representation of the problem - in the case of 3D rotating cubes, something I am relatively convinced humans already understand in this domain. That said, AI could probably push the boundaries of what is possible, but the same techniques could, at least in many domains, also be applied by a human programmer. An interesting answer to this question would hopefully address where the advantages may lie. – Darren Mar 23 '18 at 13:42
  • @Darren The most efficient algorithmic representation might depend on the target hardware. Different computers have different ways to program the video hardware. The algorithm for effect X written for a linear addressable frame buffer might not be the best for bit planes, or the way the C64 organises its graphic memory, or hardware with extra chips that can be programmed to change video registers or fill memory regions, or a vector display. Maybe an AI could help adapt to those differences or special features. I personally have doubt and voted for introspec's answer. – BlackJack Mar 24 '18 at 03:41
  • @BlackJack Good point, I meant the most efficient algorithmic representation for a particular machine. An earlier revision of the question specifically mentioned using AI to write in the language of the computer, like writing a program in C64 basic or x86 assembly. Now the whole thing is rather out of context... but thank you. My thinking was along the lines of yours, and that AI could help find a program for an architecture that a human didn't know or couldn't grasp all the intricacies of. – Darren Mar 24 '18 at 16:10
  • Hey! I actually wrote a computer program, in the summer of 1965, to rotate a stick figure on a 2D display, in order to give an illusion of depth. This is ten years earlier than your time frame. I used assembly language, not AI, and the computer was a PDP-6. Not what you're looking for but it brought back memories. – Walter Mitty Mar 24 '18 at 16:39
  • @Walter Mitty, what you made would have looked like the "Dancing Demon" animation mentioned in a comment further above; a link to see it would be good – texttext Mar 24 '18 at 20:51
  • BTW, I called it a stick figure, but it was really a stick figure representation of a complex protein molecule. – Walter Mitty Mar 26 '18 at 17:07
  • @supercat As far as I can see there's only one song (Nightdawn) with samples in Radioactivity. Nightdawn plays 4 bit samples via the master volume (bug). That sample playing technique was definitely known long before Radioactivity's release in 1990. The commercial software S.A.M. used it back in 1984. – BlackJack Mar 27 '18 at 22:22
  • You could easily draw a rotating cube with a 4MHz ZX Spectrum, giving >10 frames/s, and that suggests you could do it (40% slower) on an original 2.5MHz Z80 circa 1976. – SusanW Oct 08 '19 at 23:19

9 Answers

50

Modern AI on its own is not really advanced enough to be able to make a meaningful difference in this sense. However, there are many situations where the massive asymmetry between computational capabilities of modern computers and computational capabilities of the old computers can be exploited in ways that were not feasible decades ago.

For example, some types of modern AI are driven by, effectively, optimization algorithms (genetic algorithms etc). So, what if you use some vastly inefficient rendering strategies on old computers, but guide them by optimizing the results of their rendering on a modern computer? This gives you vastly non-symmetric compression schemes, which were for example exploited by demoscener ilmenit, who got his Atari XL/XE to generate this image

[Image: the picture generated on the Atari XL/XE by ilmenit's demo]

using 128 bytes of data and less than 128 bytes of code. (You can read more about this, as well as find a link to the video recording, on Pouet.) I personally was so inspired by this demo that I created my own kind of a similar asymmetric compression scheme; you can, if you want, have a look at it here.
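
To make the idea concrete, here is a minimal sketch of the modern-computer side of such a scheme, along the lines described in the comments below: the final picture is treated as the sum of many pseudo-random layers, and a fast machine greedily searches 8-bit seeds so that the retro machine only has to store the winning seeds and re-run the same tiny PRNG. The image size, PRNG and error metric are illustrative stand-ins, not ilmenit's actual code.

#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int W = 64, H = 64, LAYERS = 64;

// A PRNG cheap enough to re-implement in a few bytes of 6502/Z80 code.
static uint8_t rnd(uint32_t &s) { s = s * 1103515245u + 12345u; return uint8_t(s >> 16); }

// One layer: a faint pseudo-random brightness pattern derived from the seed.
static void addLayer(uint8_t seed, std::vector<int> &acc) {
    uint32_t s = seed;
    for (int i = 0; i < W * H; ++i) acc[i] += rnd(s) >> 6;        // adds 0..3 per pixel
}

static long error(const std::vector<int> &acc, const std::vector<int> &target) {
    long e = 0;
    for (int i = 0; i < W * H; ++i) { long d = acc[i] - target[i]; e += d * d; }
    return e;
}

int main() {
    std::vector<int> target(W * H), acc(W * H, 0);
    for (int i = 0; i < W * H; ++i) target[i] = (i % W) * 255 / (W - 1);   // placeholder gradient

    std::vector<uint8_t> seeds;
    for (int l = 0; l < LAYERS; ++l) {               // greedy search: best seed for this layer
        int best = 0; long bestErr = -1;
        for (int cand = 0; cand < 256; ++cand) {
            std::vector<int> trial = acc;
            addLayer(uint8_t(cand), trial);
            long e = error(trial, target);
            if (bestErr < 0 || e < bestErr) { bestErr = e; best = cand; }
        }
        addLayer(uint8_t(best), acc);
        seeds.push_back(uint8_t(best));
    }
    printf("%d seed bytes chosen, final error %ld\n", (int)seeds.size(), error(acc, target));
    // The retro-side player would just loop over 'seeds' and run its own addLayer().
    return 0;
}

The retro-side program is then nothing but the PRNG plus an adder loop, which is what makes the scheme so asymmetric: all the expensive searching happens on the modern machine.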

So, my answer is yes, people do exploit the difference in processing power to create programs for retrocomputers that were not feasible before. However, I think that although the computational techniques to achieve this are often not dissimilar from the methods used for AI, most of the people doing asymmetric schemes of this kind would probably refuse to call them AI (and rightly so).

introspec
  • 4
    Could you explain how the guy generated the code that made that image? – Omar and Lorraine Mar 22 '18 at 16:20
  • @introspec please can you explain how that code was generated? that site is hard to navigate and many of the comments are just talking about how Russia invaded Ukraine... – Krupip Mar 22 '18 at 18:10
  • 3
    This is what ilmenit actually did: http://www.pouet.net/prod.php?which=62917#c681845 I did something similar pretty much based on this brief explanation of his. My own brief explanation looks like this: http://www.pouet.net/prod.php?which=63074#c686535 Ask if you need more details about what I did (I cannot explain the details of what ilmenit did because the comment above is all I had to go by). – introspec Mar 22 '18 at 18:16
  • 11
    I feel a code golf challenge coming on... – Darren H Mar 22 '18 at 19:20
  • My apologies for not checking your links on how you achieved that, but I am going to take a guess and say: you gave the picture to an AI on a modern computer, the AI reduced the picture to a mathematical algorithm (formula?), sort of like the formulas for producing 'fractals' back then, except this picture is a 'non-symmetrical' fractal, and then you just put the algorithm onto the retrocomputer – texttext Mar 22 '18 at 22:06
  • 3
    @texttext, hmm, no. – introspec Mar 23 '18 at 00:18
  • 3
    @texttext The picture is the sum of 64 different random pictures and s/he used a more powerful computer to search through all the possible random seeds to find the best match. – user253751 Mar 23 '18 at 07:24
  • 2
    @DarrenH, that "code challenge" is called demoscene. They make graphics demos with very small and strict code size limits. It's pretty amazing what they can do in 16k or 64k, etc. – JPhi1618 Mar 23 '18 at 16:17
  • 3
    @DarrenH Not sure if you've seen it, but Paint Starry Night, objectively, in 1kB of code might be up your alley. Neat to look through what people came up with. – brhfl Mar 23 '18 at 18:54
  • @DarrenH we've had some like that :) – hobbs Mar 23 '18 at 22:49
  • 1
    @introspec: Thanks for this. I liked Mona so much that I ported it to the Apple II. (And only discovered the link to the original's source code afterwards! ; - ) There's a link to my version in this thread. Cheers. – Nick Westgate Mar 25 '18 at 05:52
  • 1
    @Nick Westgate, thank you so much, this is amazing, esp. since someone else has got the code golf challenge after all: https://codegolf.stackexchange.com/questions/126738/lets-draw-mona-lisa – introspec Mar 25 '18 at 08:18
  • @introspec: My version is probably not worth submitting to codegolf, but I hope someone in comp.sys.apple2 will take up the challenge on the Apple IIgs. – Nick Westgate Mar 25 '18 at 08:59
25

This is perhaps obvious, but AI isn’t magic. This has two relevant consequences here: it can’t add new hardware capabilities, and it can’t find techniques which humans absolutely couldn’t.

What “AI” is good at is trying lots of things, fast, without getting bored. So it can try lots of different ways of implementing an algorithm, in the hope of finding one which is better than anything we’ve come up with so far (this is called super-optimisation). It can also try to find simplifications of a given algorithm or implementation which don’t have too bad effects on the result; this might be the most useful approach to find ways of doing “impossible” tasks on a retro-computer.

In both cases the typical technique we’d use is known as genetic programming, or genetic algorithms: you view algorithms as a sort of DNA, and mutate and combine different DNA strands (algorithm implementations) to try to find one which has a lower “cost” (in this case, runs faster) while still producing the desired output. Evaluating the output could involve machine vision, although using humans would likely produce more pleasing results (albeit with the speed and boredom issues alluded to above).
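
As a toy illustration of that genetic-algorithm idea (the instruction set and fitness function below are invented purely for the example, not any real super-optimiser), this sketch evolves a short instruction sequence for an imaginary 8-bit accumulator machine so that it computes x*3, scoring candidates on correctness first and brevity second:

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <random>
#include <vector>

enum Op : uint8_t { NOP, ADD_X, SUB_X, SHL, OP_COUNT };
using Program = std::vector<Op>;

// Interpret a candidate program on the imaginary machine: the accumulator starts at x.
static uint8_t run(const Program &p, uint8_t x) {
    uint8_t a = x;
    for (Op op : p) switch (op) {
        case ADD_X: a = uint8_t(a + x); break;
        case SUB_X: a = uint8_t(a - x); break;
        case SHL:   a = uint8_t(a << 1); break;
        default: break;
    }
    return a;
}

// Lower is better: total error over all inputs, with program length as a tie-breaker.
static long fitness(const Program &p) {
    long err = 0;
    for (int x = 0; x < 256; ++x)
        err += std::abs(int(run(p, uint8_t(x))) - int(uint8_t(x * 3)));
    return err * 100 + long(p.size());
}

int main() {
    std::mt19937 rng(1);
    std::uniform_int_distribution<int> pickOp(0, OP_COUNT - 1), pickLen(1, 6);

    std::vector<Program> pop(200);                   // random initial population
    for (auto &p : pop) { p.resize(pickLen(rng)); for (auto &o : p) o = Op(pickOp(rng)); }

    for (int gen = 0; gen < 200; ++gen) {
        std::sort(pop.begin(), pop.end(),
                  [](const Program &a, const Program &b) { return fitness(a) < fitness(b); });
        for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
            pop[i] = pop[i - pop.size() / 2];                      // clone one of the better half...
            pop[i][pickLen(rng) % pop[i].size()] = Op(pickOp(rng)); // ...and mutate one instruction
        }
    }
    printf("best fitness %ld with %d instructions\n", fitness(pop[0]), (int)pop[0].size());
    return 0;
}

A real super-optimiser would search over actual 6502 or Z80 opcodes and score by cycle counts, but the shape of the search is the same.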

However, don’t expect miracles — you won’t go from Alley Cat to 8088 MPH using AI...

Stephen Kitt
17

Cynically, the modern usage of "AI" is little more than marketing fluff, like "cloud" and "blockchain". Doing a heavy handwave, it is the name that has been used for a lot of not-quite-there (or never-will-be-there) data-analysis techniques that attempt to get a useful answer in a timely manner from a large database of facts that would be prohibitive to do a brute-force search on. They tend to lose the "AI" moniker once they work and are routinely used. For example, most programs are too large for a compiler to brute-force search the optimal set of instructions, but it can certainly use tricks like simulated annealing to get close.

They very much can be used for throwing lots of modern CPU at optimisation for a very small machine. It is too extensive a field to list everything here, but I rather like the O'Reilly book Programming Collective Intelligence which is a nice introduction to a number of techniques which should give you plenty of ideas.

In your specific query, "AI" can be (and is) used to heavily optimise demos by throwing a lot of modern CPU at a relatively small problem. I've already given the compiler example, and in a similar vein one can skip writing the program in the first place and search for a suitable AST to generate the desired effect. It can also be used to perform extreme compression to get an image or video to fit in small hardware.

As to the likes of a 3D rotating cube, that is a simple problem which doesn't need AI. 3D games were not exactly uncommon in the early 1980s, with Battlezone and Elite being just two famous examples of the genre. The mathematics are not too strenuous on the CPU if one keeps the number of vertices down—a cube has just eight—and most of the time is spent drawing lines. Sometimes, you just need to sit down and really understand the hardware and have a good hard think about the code you need to write rather than hope a computer can be convinced to write it for you.
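
For a sense of how little per-frame arithmetic is involved, here is a rough sketch of the wireframe-cube maths: rotate eight vertices, project them, and hand twelve segments to a line-drawing routine. Floating point is used for readability and the screen size and scale factors are arbitrary; an 8-bit implementation would use sine tables and fixed-point integers instead.

#include <cmath>
#include <cstdio>

struct Pt2 { int x, y; };

// Rotate the cube's eight vertices about the Y axis and project them to screen space.
void cubeFrame(double angle, Pt2 out[8]) {
    static const double v[8][3] = {                  // unit cube centred on the origin
        {-1,-1,-1},{ 1,-1,-1},{ 1, 1,-1},{-1, 1,-1},
        {-1,-1, 1},{ 1,-1, 1},{ 1, 1, 1},{-1, 1, 1}};
    const double c = std::cos(angle), s = std::sin(angle);
    for (int i = 0; i < 8; ++i) {
        double x = v[i][0] * c - v[i][2] * s;
        double z = v[i][0] * s + v[i][2] * c + 4.0;  // push the cube away from the camera
        double y = v[i][1];
        out[i].x = int(128 + 90.0 * x / z);          // simple perspective divide,
        out[i].y = int( 96 + 90.0 * y / z);          // centred on a 256x192 screen
    }
}

int main() {
    static const int edge[12][2] = {{0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},
                                    {0,4},{1,5},{2,6},{3,7}};
    Pt2 p[8];
    cubeFrame(0.5, p);                               // one frame at an arbitrary angle
    for (const auto &e : edge)                       // a real program would call a line routine here
        printf("line (%d,%d)-(%d,%d)\n", p[e[0]].x, p[e[0]].y, p[e[1]].x, p[e[1]].y);
    return 0;
}

That is roughly sixteen multiplications and a handful of divides per frame; the bulk of the work really is the line drawing.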

pndc
  • 2
    Battlezone's probably not a fair comparison, as it was based on a vector drawing system not the raster system in such computers, and therefore didn't need to perform the line interpolations and raster image manipulations that would have been the main source of CPU usage on a typical computer of the era. – Jules Mar 23 '18 at 11:59
  • Indeed - the Vectrex (1982) can put out 3D tunnels and starfields (demo by cmucc), so a cube shouldn't be out of reach. – cxw Mar 24 '18 at 18:34
  • @Jules ... though worth noting that Battlezone game clones were produced on home micros like the ZX Spectrum, and they really did have to crunch the 3D numbers and handle the raster line drawing in code - which of course showed up in their substantially lower frame rates! :-) – SusanW Oct 08 '19 at 22:44
4

Hmm, why AI? I think it might be much more work to code and train an AI to do something like that than to write it on your own...

I usually use a different approach. If the stuff I need to code gets hideous, then I simply write a small program that will create the source code for me. The same goes for data tables etc. For example, this was coded in C++ on a PC but runs on an AVR32:

MCU2VGA

The MCU simply generates the VGA image signal using its SDRAM interface and DMA; the circuit consists of just the MCU, a few resistors, capacitors and diodes, a voltage regulator and a crystal. The image is hard-coded as C++ source generated by the script I mentioned before (some of you may recognize it as the loading screen from one ZX Spectrum game).

Another example of this approach is the auto-generated code/templates I use for GLSL-style code running on the CPU (also coded on the CPU). Here is the vec2 template example:

template <class T> class _vec2
    {
public:
    T dat[2];
    _vec2(T _x,T _y) { x=_x; y=_y; }
    _vec2() { for (int i=0;i<2;i++) dat[i]=0; }
    _vec2(const _vec2& a) { *this=a; }
    ~_vec2() {}
    // 1D
    T get_x() { return dat[0]; } void set_x(T q) { dat[0]=q; }
    T get_y() { return dat[1]; } void set_y(T q) { dat[1]=q; }
    __declspec( property (get=get_x, put=set_x) ) T x;
    __declspec( property (get=get_y, put=set_y) ) T y;
    __declspec( property (get=get_x, put=set_x) ) T r;
    __declspec( property (get=get_y, put=set_y) ) T g;
    __declspec( property (get=get_x, put=set_x) ) T s;
    __declspec( property (get=get_y, put=set_y) ) T t;
    // 2D
    _vec2<T> get_xy() { return _vec2<T>(x,y); } void set_xy(_vec2<T> q) { x=q.x; y=q.y; }
    _vec2<T> get_yx() { return _vec2<T>(y,x); } void set_yx(_vec2<T> q) { y=q.x; x=q.y; }
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> xy;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> xg;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> xt;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> yx;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> yr;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> ys;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> ry;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> rg;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> rt;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> gx;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> gr;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> gs;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> sy;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> sg;
    __declspec( property (get=get_xy, put=set_xy) ) _vec2<T> st;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> tx;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> tr;
    __declspec( property (get=get_yx, put=set_yx) ) _vec2<T> ts;
    // operators
    _vec2* operator = (const _vec2 &a) { for (int i=0;i<2;i++) dat[i]=a.dat[i]; return this; }                              // =a
    T& operator [](const int i)     { return dat[i]; }                                                                      // a[i]
    _vec2<T> operator + ()          { return *this; }                                                                       // +a
    _vec2<T> operator - ()          { _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=      -dat[i];           return q; } // -a
    _vec2<T> operator ++ ()         {                   for (int i=0;i<2;i++)                 dat[i]++;     return *this; } // ++a
    _vec2<T> operator -- ()         {                   for (int i=0;i<2;i++)                 dat[i]--;     return *this; } // --a
    _vec2<T> operator ++ (int)      { _vec2<T> q=*this; for (int i=0;i<2;i++)                 dat[i]++;         return q; } // a++
    _vec2<T> operator -- (int)      { _vec2<T> q=*this; for (int i=0;i<2;i++)                 dat[i]--;         return q; } // a--

    _vec2<T> operator + (_vec2<T>&v){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=       dat[i]+v.dat[i];  return q; } // a+b
    _vec2<T> operator - (_vec2<T>&v){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=       dat[i]-v.dat[i];  return q; } // a-b
    _vec2<T> operator * (_vec2<T>&v){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=       dat[i]*v.dat[i];  return q; } // a*b
    _vec2<T> operator / (_vec2<T>&v){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=divide(dat[i],v.dat[i]); return q; } // a/b

    _vec2<T> operator + (const T &c){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=dat[i]+c;                return q; } // a+c
    _vec2<T> operator - (const T &c){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=dat[i]-c;                return q; } // a-c
    _vec2<T> operator * (const T &c){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=dat[i]*c;                return q; } // a*c
    _vec2<T> operator / (const T &c){ _vec2<T> q;       for (int i=0;i<2;i++) q.dat[i]=divide(dat[i],c);        return q; } // a/c

    _vec2<T> operator +=(_vec2<T>&v){ this[0]=this[0]+v; return *this; };
    _vec2<T> operator -=(_vec2<T>&v){ this[0]=this[0]-v; return *this; };
    _vec2<T> operator *=(_vec2<T>&v){ this[0]=this[0]*v; return *this; };
    _vec2<T> operator /=(_vec2<T>&v){ this[0]=this[0]/v; return *this; };

    _vec2<T> operator +=(const T &c){ this[0]=this[0]+c; return *this; };
    _vec2<T> operator -=(const T &c){ this[0]=this[0]-c; return *this; };
    _vec2<T> operator *=(const T &c){ this[0]=this[0]*c; return *this; };
    _vec2<T> operator /=(const T &c){ this[0]=this[0]/c; return *this; };
    // members
    int length() { return 2; }  // dimensions
    };

As you can see, the getters/setters tend to be hideous to code (and this is just 2D; now imagine 4D), so the full code for the vec classes is around 228 KByte of source. Here is the script I generated it with:

void _vec_generate(AnsiString &txt,const int n) // generate _vec(n)<T> get/set source code n>=2
    {
    int i,j,k,l;
    int i3,j3,k3,l3;
    const int n3=12;
    const char x[n3]="xyzwrgbastpq";
    txt+="template <class T> class _vec"+AnsiString(n)+"\r\n";
    txt+="\t{\r\n";
    for (;;)
        {
        if (n<1) break;
        txt+="\t// 1D\r\n";
        for (i=0;i<n;i++) txt+=AnsiString().sprintf("\tT get_%c() { return dat[%i]; } void set_%c(T q) { dat[%i]=q; }\r\n",x[i],i,x[i],i);
        for (j=0;j<12;j+=4) for (i=0;i<n;i++) txt+=AnsiString().sprintf("\t__declspec( property (get=get_%c, put=set_%c) ) T %c;\r\n",x[i],x[i],x[i+j]);
        if (n<2) break;
        txt+="\t// 2D\r\n";
        for (i=0;i<n;i++)
         for (j=0;j<n;j++) if (i!=j)
            {
            txt+=AnsiString().sprintf("\t_vec2<T> get_%c%c() { return _vec2<T>(%c,%c); } ",x[i],x[j],x[i],x[j]);
            txt+=AnsiString().sprintf("void set_%c%c(_vec2<T> q) { %c=q.%c; %c=q.%c; }\r\n",x[i],x[j],x[i],x[0],x[j],x[1]);
            }
        for (i=i3=0;i3<n3;i3++,i=i3&3) if (i<n)
         for (j=j3=0;j3<n3;j3++,j=j3&3) if (j<n) if (i!=j)
            {
            txt+=AnsiString().sprintf("\t__declspec( property (get=get_%c%c, put=set_%c%c) ) _vec2<T> %c%c;\r\n",x[i],x[j],x[i],x[j],x[i3],x[j3]);
            }
        if (n<3) break;
        txt+="\t// 3D\r\n";
        for (i=0;i<n;i++)
         for (j=0;j<n;j++) if (i!=j)
          for (k=0;k<n;k++) if ((i!=k)&&(j!=k))
            {
            txt+=AnsiString().sprintf("\t_vec3<T> get_%c%c%c() { return _vec3<T>(%c,%c,%c); } ",x[i],x[j],x[k],x[i],x[j],x[k]);
            txt+=AnsiString().sprintf("void set_%c%c%c(_vec3<T> q) { %c=q.%c; %c=q.%c; %c=q.%c; }\r\n",x[i],x[j],x[k],x[i],x[0],x[j],x[1],x[k],x[2]);
            }
        for (i=i3=0;i3<n3;i3++,i=i3&3) if (i<n)
         for (j=j3=0;j3<n3;j3++,j=j3&3) if (j<n) if (i!=j)
          for (k=k3=0;k3<n3;k3++,k=k3&3) if (k<n) if ((i!=k)&&(j!=k))
            {
            txt+=AnsiString().sprintf("\t__declspec( property (get=get_%c%c%c, put=set_%c%c%c) ) _vec3<T> %c%c%c;\r\n",x[i],x[j],x[k],x[i],x[j],x[k],x[i3],x[j3],x[k3]);
            }
        if (n<4) break;
        txt+="\t// 4D\r\n";
        for (i=0;i<n;i++)
         for (j=0;j<n;j++) if (i!=j)
          for (k=0;k<n;k++) if ((i!=k)&&(j!=k))
           for (l=0;l<n;l++) if ((i!=l)&&(j!=l)&&(k!=l))
            {
            txt+=AnsiString().sprintf("\t_vec4<T> get_%c%c%c%c() { return _vec4<T>(%c,%c,%c,%c); } ",x[i],x[j],x[k],x[l],x[i],x[j],x[k],x[l]);
            txt+=AnsiString().sprintf("void set_%c%c%c%c(_vec4<T> q) { %c=q.%c; %c=q.%c; %c=q.%c; %c=q.%c; }\r\n",x[i],x[j],x[k],x[l],x[i],x[0],x[j],x[1],x[k],x[2],x[l],x[3]);
            }
        for (i=i3=0;i3<n3;i3++,i=i3&3) if (i<n)
         for (j=j3=0;j3<n3;j3++,j=j3&3) if (j<n) if (i!=j)
          for (k=k3=0;k3<n3;k3++,k=k3&3) if (k<n) if ((i!=k)&&(j!=k))
           for (l=l3=0;l3<n3;l3++,l=l3&3) if (l<n) if ((i!=l)&&(j!=l)&&(k!=l))
            {
            txt+=AnsiString().sprintf("\t__declspec( property (get=get_%c%c%c%c, put=set_%c%c%c%c) ) _vec4<T> %c%c%c%c;\r\n",x[i],x[j],x[k],x[l],x[i],x[j],x[k],x[l],x[i3],x[j3],x[k3],x[l3]);
            }
        break;
        }
    txt+="\t};\r\n";
    }

Half of the core of my Z80 CPU emulator code is auto-generated this way and is fully configurable from a MySQL database dumped to a text file.

In the demoscene it was usual to code some effects on a better computer, or in advance, and encode the result so it could be "played" back instead of generated in real time on the target platform.
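
A tiny example of that idea, in the same spirit as the generators above: a PC-side program that precomputes a 256-entry signed sine table and emits it as assembler data, so the 8-bit target never has to evaluate sin() at run time (output format and scaling are arbitrary choices for the sketch).

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    printf("sine_table:\n");
    for (int i = 0; i < 256; i += 8) {               // eight entries per 'db' line
        printf("    db ");
        for (int j = 0; j < 8; ++j) {
            // 8-bit signed sine, one full period over 256 steps
            int v = int(std::lround(127.0 * std::sin(2.0 * PI * (i + j) / 256.0)));
            printf("%d%s", v, j == 7 ? "\n" : ",");
        }
    }
    return 0;
}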

So as you can see, there are easier ways than AI to overcome the brick wall...

Spektre
  • 9
    How does this answer the question? "I don't know, but here's a lot of code I wrote that doesn't use AI" – pipe Mar 23 '18 at 09:49
  • @pipe it is kind of "proof" of concept that these sorts of coding problems are usually much easier to do without an AI. As in the real world AIs are not used for such purpose at least to my knowledge. – Spektre Mar 26 '18 at 09:54
4

Your timeframe is huge

The increase in home computer capability from 1975 to 1985 is massive - at least as much as the increase in capability in cell phones from 2000 to 2010, if not even greater.

In 1975 I don't think there were any home computers that had any bitmapped graphics capability at all, and even if they had, they wouldn't have had enough memory for the image anyway. In 1985, on the other hand, you could buy an Amiga with a megabyte of RAM and near-VGA graphics which would be capable of a lighted, shaded rotating cube without any heroic effort.

Computers of the early 80s typically had limitations in their CPU and graphics that would make full-color 3D rendering impractical. Realtime 3D lighting requires per-pixel calculations that just can't be done fast enough on 8-bit CPUs. Without lighting, 3D rendering looks better in wireframe.

Wireframe rendering is much easier, as you need only a few calculations to project the vertices of the polygons you are rendering, and then you connect them with lines. Line drawing is easily handled on all common 8-bit systems.
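
As a sketch of how cheap that projection can be on integer-only hardware, the fragment below does the rotate-and-project step for one frame using an 8-bit sine table and shifts instead of floating point; the table scale and screen-centring constants are illustrative assumptions, not any particular machine's values.

#include <cmath>
#include <cstdint>
#include <cstdio>

int8_t sine[256];   // on the real target this would be precomputed data, not filled at run time

int main() {
    const double PI = 3.14159265358979323846;
    for (int i = 0; i < 256; ++i)                     // offline table-filling step
        sine[i] = int8_t(std::lround(127.0 * std::sin(2.0 * PI * i / 256.0)));

    static const int v[8][3] = {{-64,-64,-64},{64,-64,-64},{64,64,-64},{-64,64,-64},
                                {-64,-64,64},{64,-64,64},{64,64,64},{-64,64,64}};
    uint8_t angle = 32;                               // 0..255 covers one full turn
    int c = sine[uint8_t(angle + 64)];                // cos(a) = sin(a + 90 degrees)
    int s = sine[angle];

    for (int i = 0; i < 8; ++i) {
        // rotate about Y in fixed point: coordinates scaled by 64, sine values by 127
        int x = (v[i][0] * c - v[i][2] * s) >> 7;
        int z = ((v[i][0] * s + v[i][2] * c) >> 7) + 256;   // push the cube away from the viewer
        int sx = 128 + (x * 64) / z;                  // cheap perspective divide
        int sy =  96 + (v[i][1] * 64) / z;
        printf("vertex %d -> (%d,%d)\n", i, sx, sy);
    }
    return 0;
}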

In the early 80s many computers were capable of realtime animation of wire-frame 3D. The classic game "Elite," released in 1984 and ported to the majority of common 8-bit platforms, demonstrated this. This game could have theoretically existed in 1979 or so (the earliest you could get a 64K Apple), but most of the platforms it would be developed for would not exist until 1981 or later. A simple wireframe rotating cube could have probably been made for a 16K Apple, available (if expensive) in 1977.

There are many demos for the Commodore 64 that appear to have full-color 3D elements, although I bet dollars to donuts they are done with various tricks rather than a true rasterizing renderer, OpenGL style. A quick search for "C64 demo" will turn up several.

In the 2D category, introspec's answer shows something good for the Atari, but here's one for the Apple II: BMP2DHR. This is closer to what you're asking, since it uses a modern PC's processing power to run advanced dithering algorithms in order to get high-quality color (at the cost of contrast). To the human eye, low contrast with accurate color just looks better than high contrast with inaccurate color. GIF viewers for the Apple II were available; they took a significant amount of time to display an image, and they did not look that good.
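
To illustrate the dithering principle only (BMP2DHR's real algorithms target the Apple II Double Hi-Res palette and are considerably more sophisticated), the sketch below applies Floyd-Steinberg error diffusion to reduce an 8-bit greyscale ramp to 1 bit per pixel:

#include <cstdio>
#include <vector>

int main() {
    const int W = 16, H = 8;
    std::vector<int> img(W * H);
    for (int y = 0; y < H; ++y)                       // placeholder: a horizontal grey ramp
        for (int x = 0; x < W; ++x) img[y * W + x] = x * 255 / (W - 1);

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int old = img[y * W + x];
            int q = old < 128 ? 0 : 255;              // quantise to the nearest output level
            img[y * W + x] = q;
            int err = old - q;                        // push the error onto unprocessed neighbours
            if (x + 1 < W)              img[y * W + x + 1]       += err * 7 / 16;
            if (y + 1 < H && x > 0)     img[(y + 1) * W + x - 1] += err * 3 / 16;
            if (y + 1 < H)              img[(y + 1) * W + x]     += err * 5 / 16;
            if (y + 1 < H && x + 1 < W) img[(y + 1) * W + x + 1] += err * 1 / 16;
        }

    for (int y = 0; y < H; ++y, putchar('\n'))        // '#' = white pixel, '.' = black pixel
        for (int x = 0; x < W; ++x) putchar(img[y * W + x] ? '#' : '.');
    return 0;
}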

fluffysheap
  • 1
    "In 1975 I don't think there were any home computers that had any bitmapped graphics capability at all, and even if they had, they wouldn't have had enough memory for the image anyway." -- in 1975, ownership of home computers was dominated by the homebrew market, and homebrewers were definitely doing graphical display, as evidenced by this article in Byte issue #2 (October 1975). 4K of RAM was reasonably affordable at the time, and would be enough for simple graphic applications. – Jules Mar 25 '18 at 13:44
3

Nope.

All early home computers were fundamentally limited on graphics performance by their video outputs. Pixel resolutions were generally low. The ZX Spectrum managed 256x192, and the Commodore 64 managed 320x200. Worse than that though, their colour depth was tiny. 6-bit colour (i.e. a palette of up to 64 colours) was normal, but graphics were often also limited to how many colours could be present in any given group of pixels.

It was certainly possible to draw wireframe 3D, and many games did. However "filled" 3D graphics were completely beyond these machines, because they did not have the colour palette to shade different faces of a shape accurately enough. It was not until the Acorn Archimedes, Commodore Amiga and Atari ST that regular home computers gained access to 16-bit and 24-bit colour which made this possible. (Specialist graphics cards did exist for PCs - at enormous cost.)

So it doesn't matter what processing you use - or even if you hooked up a hybrid old-school home computer with a modern processor. The video hardware on all old home computers simply was not up to the job. Even the kind of images you'd get from an early-90s 486 with a basic graphics card were far beyond what an early/mid-80s home computer could manage.

Graham
  • Well I think the original question is assuming that there's a little room for optimization (for example in the 3D calculations), not that you're just drawing pre-determined lines "as is" with absolutely no logic and therefore no room for optimization. – jeancallisti Mar 23 '18 at 15:49
  • 1
    @jeancallisti The problem is that when your palette and resolution are so limited, you can't optimise it. The problem isn't processing, it's simply what you can put on the screen. Anyone who remembers the loading screens on 80s games will remember the valiant efforts they made to show something halfway close to the cover art - and will also remember how far away from the cover art it always was! – Graham Mar 23 '18 at 15:55
  • I find it very intriguing how adamantly you state that on a simple machine -- no matter how simple it is -- there's no room for optimization. It seems common knowledge that there's always room for optimization. As long as there are more than 2 instructions, there's room for optimization. – jeancallisti Mar 23 '18 at 16:33
  • 3
    @jeancallisti You misunderstand. Like I said, you can optimise the processing as much as you want, or even put in the latest version of Deep Blue to do the processing. It doesn't matter - what's displayed is limited by the video hardware. If the video hardware can only produce this bit depth, at this resolution, at this refresh rate, then you have a physical limit on what kind of pictures you can ever get. And like I said, coders back in the 80s did understand the hardware enough to reach those physical limits. This is "laws of physics" stuff which ye cannae change. :) – Graham Mar 23 '18 at 17:14
  • 3
    @jeancallisti no, there isn't always room for optimization. The solution space is finite, and by any given metric there will be a set of solutions that are optimal, as in not able to be optimized any further. On a small enough machine it's actually feasible to exhaustively search that space and come up with a routine that just can't be improved on, except by changing the criteria. – hobbs Mar 24 '18 at 05:45
  • 1
    Pedantic re: lack of filled 3d; there are titles that use various degrees of shading in lieu of colours e.g. https://youtu.be/LPXcX72uTGo?t=15s but it doesn't contradict your main point that some limits are absolute, pixel density and colour depth being unambiguously so. – Tommy Mar 27 '18 at 18:11
2

I'm not satisfied by the answers above. They either assume that there's absolutely no room for optimization (because the target CPU is too simple), or they present automated tools to produce better code but leave out the AI part, or they just state that AI is not magic. They're all leaving part of the question out.

If the question is: "Could you use AI to achieve this goal?", then the answer would be: probably not. As an individual you would never have access to the resources and knowledge to set up a complex AI able to achieve this incredibly complex task for an AI. And even if you owned the resources, you'd be better off using your brain instead, or at worst you'd be better off using semi-automated tools to optimize the code -- which doesn't count as an AI.

But, if the question is: "What if Google or any heavyweight company suddenly decided that they want to produce the best possible code for a 1980 machine, and use AI to achieve this? Could they do it, would it help, would it work?", then the answer is probably yes.

The resulting code might only help you move from an "almost completely optimized code" to a "completely optimized code" (hence making the execution only 5% faster if you're extremely lucky -- maybe), but it's still a "yes, probably".

Let me explain how:

  • we know how automated tools can be used to produce extremely optimized code for old hardware. For example, there are tools to achieve extremely good compression ratios for hardware with tiny storage capability, or compilers that produce the best assembly code for a given CPU.

  • the core of these techniques is that the machine producing the optimized code has huge memory or computation capabilities compared to the target machine. But it's not enough. It all relies on someone who perfectly understood how the target machine works, and was able to provide the tool with some specialized pieces of code --templates if you will-- that are the best in a given context. That person created algorithms that abstract what's really needed in the end, do the calculation on some inflated temporary data aiming at this goal, and at the last moment fill in all the templates and keep only the core of what's needed.

In a way, they're "rendering" the template just like you would render 3D, but instead you're "rendering" assembly or data.

The thing to understand is that the person who created those tools provided two things: the so-called templates and the ways of using them. In other words, they spotted patterns in what they think is the most optimized code or data, and then they created rules to insert them in the right places with the right parameters.

Now, the question is: "Would an AI be able to spot those patterns and create the rules to inject them?"

Well, with machine learning, it certainly would. You have a very simple heuristic at the end to evaluate the quality of the result: its execution speed.

The machine would need to learn to spot the coding patterns. It can do that if you feed it lots and lots of well-optimized existing code for the target machine, taken from the software of the time. Then, if you introduce mechanisms of randomness and genetic algorithms, the AI can try variations and find even better optimizations that no one had thought about.

Then the machine would need to create its own rules to wire all the patterns together and achieve the goal that you have set (by the way, you'd need a language to tell the AI what the goal actually is, in abstract terms that it can "understand" and toy around with. It's hard to tell a machine "do me a rotating cube" without actually doing it yourself -- but then again, machine learning to the rescue to learn to recognize abstract concepts).

Finally, you'd end up with code that a human would hardly understand, and that would be literally impossible to maintain without breaking everything. But it would work -- you'd have the fastest code possible for this one specific task.

jeancallisti
  • 2
    I don't think so. The "optimization for a CPU" kind of breaks down with old 8bit CPUs. They are very simple; their whole instruction set fits on a single-sided A4/letter page of paper, and there is very little (or rather, no) hidden complexity behind it. All the things making modern CPUs complex and good candidates for optimizing are missing from those old beauties. No parallel pipes, no complex timings, no pre-fetch, no branch magic, no cache, no nothing... the old machines were very well able to be really grasped fully by single human brains. – AnoE Mar 23 '18 at 14:33
  • You're making a mistake though: you're assuming that optimizations rely only on the instruction set. As demonstrated, there are many other ways of optimizing, the first of all being the way you compress data. Another example out of many: repeating sets of instructions several times instead of implementing a loop. Etc. So many ways that you, as a human, can't think about -- or even if you can think about them, can't balance in just the perfect way. – jeancallisti Mar 23 '18 at 15:32
  • 2
    @jeancallisti You're making a mistake though that you think humans couldn't grasp that. The evidence of the 70s and 80s was that because these processors were fairly simple, talented humans certainly could work it out. – Graham Mar 23 '18 at 16:09
  • @Graham the question is not whether or not humans could work it out, but if a machine loaded with the right AI could produce the best possible code at all times. – jeancallisti Mar 23 '18 at 16:19
  • Proof? @jeancallisti – AnoE Mar 23 '18 at 17:55
  • @jeancallisti, the question is exactly whether an AI could be better than a human, on an archaic CPU: effects that would be too difficult for a human to program on the cheapest machines – AnoE Mar 23 '18 at 21:11
  • The #1 thing I can see a national-commitment level of AI doing is reducing code size. Program/data size was everything; ROM cost money. Realistically it would have been an advisor to help you write tight code. – Harper - Reinstate Monica Mar 24 '18 at 23:03
  • I feel like reading comments from Go players stating that humans have reached perfection in playing Go and that a machine will never do better. – jeancallisti Mar 27 '18 at 13:01
  • @jeancallisti Go is hard and complicated and those old CPUs are not. That's the point in the comments. Think more about how you would feel reading comments from Tic Tac Toe players that it is impossible to loose or win and a machine can't do better, i.e. win where a human reaches a draw. They would be right. ☺ – BlackJack Mar 28 '18 at 09:03
-1

It's not that we couldn't do it. It's that we couldn't do it in real time. I have had 8-bit processors grinding for 12 hours to precompute graphic elements, data then used to "do the impossible".

As far as exceeding hardware ability, we did a lot of stuff the designers never conceived, 4-way fine scrolling on VIC-20 for instance. That was half the fun. Now they don't let you play with bare metal anymore.

Not seeing how "AI" would have helped though.

-2

A genetic algorithm paired with machine vision for feedback should be able to do the job.

snips-n-snails
  • You typed "genetic algorithm"; did you mean an algorithm with some sort of biological characteristics, or did you mean to type "generic algorithm"? And yes, I was also thinking of machine vision for feedback; however, the end quality would always have to be approved by a human I think, but maybe not – texttext Mar 22 '18 at 06:14
  • 4
    @texttext: Genetic algorithms are quite common. It is clear that genetic is meant, not generic. – Chenmunka Mar 22 '18 at 09:38
  • 1
    A bit pedantic perhaps, but genetic algorithms are not AI. In fact, they are quite the opposite. – JeremyP Mar 22 '18 at 09:45
  • 3
    @JeremyP It seems most people here would disagree with you. No clue what "the opposite of AI" is, or why you'd consider genetic algorithms to be entirely outside the scope of AI. – Nuclear Hoagie Mar 22 '18 at 12:46
  • 3
    @NuclearWang Genetic algorithms are the opposite of intelligence. There's nothing intelligent about the process of testing a whole load of similar models, throwing away the ones that perform worse, creating some new models that vary randomly from the best ones and repeating. – JeremyP Mar 22 '18 at 13:29
  • 5
    @JeremyP Nevertheless this isn’t what the terms usually mean. A GA is a nonlinear optimisation problem, and as such falls squarely under the definition of machine learning (which is just a fancy word for [usually nonlinear] optimisation). And machine learning is a strict subset of AI. — Furthermore, your judgement that there’s “nothing intelligent about [GAs]” is a bit bizarre, given that the vast majority of successful learning algorithms similarly rely on very simple principles that paradoxically generate perceived complexity. – Konrad Rudolph Mar 22 '18 at 14:29
  • 5
    @JeremyP If the process of generating new models was done entirely randomly, I'd agree with you, but it's not. There's definitely something "smart" about the way in which models are selected for breeding and combined with one another. Just because the algorithm is stochastic and isn't doing directed search toward a known goal doesn't mean the process as a whole isn't "intelligent". The intelligence of the algorithm comes from its evaluation metric and breeding process. – Nuclear Hoagie Mar 22 '18 at 14:33
  • 1
    @KonradRudolph who cares about your fancy definitions that you plucked from the air. There is no way that genetic algorithms are intelligent. It is true though that some of what people label as AI also involves no intelligence. – JeremyP Mar 22 '18 at 15:25
  • @NuclearWang There's nothing necessarily smart about the way algorithms are selected. You just measure them against the criteria you choose. In fact, the whole point of using genetic algorithms is that you can come up with fairly optimal solutions without using any real intelligence. – JeremyP Mar 22 '18 at 15:28
  • @JeremyP They’re not “plucked from the air” at all. These are the established definitions within the field. By all means use different definitions but don’t pretend that these are canonical, or be surprised when they cause confusion. – Konrad Rudolph Mar 22 '18 at 16:18
  • 4
    @JeremyP: current AI research is mostly not "understand how human intelligence works, then manually implement that". It's mostly about building systems with simple rules that allow complexity to emerge. (e.g. neural networks). GAs fall into that category, although using them to optimize a short function is a known super-optimization technique, and not really AI. If you could somehow get a GA to work with whole programs, and avoid having it get stuck in local optima all the time, then sure you could call it AI. A whole new implementation for something often can't be reached gradually. – Peter Cordes Mar 23 '18 at 03:21
  • @JeremyP you're thinking in a way that's too abstract. It doesn't matter if "genetic algorithm" is not "intelligent". It's still a form of AI because there are so many places where it gives up on deterministic, Turing-style thinking, and enters the realm of fuzzy thought – jeancallisti Mar 23 '18 at 15:36