*Anyone* can waste lots of CPU time by doing something stupid; the art is wasting CPU time on a task while performing it efficiently.

For an oldie: on the factorial page, FPX writes about his efforts at computing these. The account ends with:

- Eventually, I extended my program to work over a network, and had 70 workstations grinding away for 48 hours on the computation of 1000000!. Never was CPU time more pointlessly wasted.
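FPX's distributed program isn't shown here; as a small modern illustration (hypothetical, not FPX's code), Python's arbitrary-precision integers make even very large factorials routine on a single machine:

```python
import math

# math.factorial handles arbitrary-precision integers, so a large
# factorial like FPX's 1000000! is feasible on one machine today.
n = 100_000  # a tenth of FPX's exponent, to keep the run short
f = math.factorial(n)

# Stirling's formula predicts the digit count without computing f:
# digits = floor(log10(n!)) + 1, where log(n!) = lgamma(n + 1).
predicted = math.floor(math.lgamma(n + 1) / math.log(10)) + 1
print(len(str(f)), predicted)  # the two counts agree
```

The digit count grows roughly like n·log10(n), which is why a million-factorial was such a grind on 1990s workstations.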

Lars H: In a paper mentioned in Scientific papers mentioning Tcl, Daniel Andrén, Klas Markström, and I ran some very large jobs to determine the diameter of some Cayley graphs. Our main results (apart from a new fast algorithm for row-reduction of matrices over finite fields, which we found using pen and paper) are summarised in the table:

Diameter(n,q) | n=2 | n=3 | n=4 | n=5 | n=6 |
---|---|---|---|---|---|
q=2 | 2 | 4 | 7 | 10 | 13 |
q=3 | 3 | 6 | 9 | 12 | |
q=4 | 4 | 7 | 11 | | |
q=5 | 4 | 7 | 11 | | |
7≤q≤23 | 4 | 8 | | | |

These were *really* large problems. The last one we ran — (n,q) = (3,23) — required 430GB of RAM (that's only 2 bits per vertex), caused us to crash the then-largest computer in Norway, ran for several days on over 400 processors, and accumulated a runtime of 4.1 CPU-years. (Regarding the crash: maybe it was rather the manufacturer, SGI, that crashed it. At first we couldn't run the program at all, since the OS had a 256GB limit on the memory allocated to any single process, so the nice sysops asked SGI for a new OS version without that limit. After a couple of weeks SGI delivered such an update, which they however hadn't been able to fully test, since they didn't themselves have any machine with this much memory. And sure enough, there was a bug in it which caused the machine to crash.) Never was CPU time more pointlessly wasted: we still don't know at what value of q the asymptotic diameter 9 for n=3 is attained! (The next possibility is q=25, which would require 954GB of RAM to run.)
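The "2 bits per vertex" figure can be sanity-checked with back-of-the-envelope arithmetic. Assuming (my guess, not stated above — the paper has the actual construction) that the vertex set is GL(3,q), the group of invertible 3×3 matrices over the field with q elements:

```python
# Sanity check of "430GB at 2 bits per vertex", under the assumption
# that the vertices are the elements of GL(3, q).
def gl3_order(q: int) -> int:
    """Order of GL(3, q): (q^3 - 1)(q^3 - q)(q^3 - q^2)."""
    return (q**3 - 1) * (q**3 - q) * (q**3 - q**2)

bytes_needed = gl3_order(23) // 4   # 2 bits per vertex = 1/4 byte
print(bytes_needed / 1e9)           # ≈ 430 GB, matching the text
```

Under this assumption the (3,23) case has about 1.7×10¹² vertices, which at 2 bits each comes out almost exactly at the quoted 430GB.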