Wrestling With Infinity

22 Apr 2025

Art of Approximation

There are few things in life as vexing as the unsolved.

Simplicity, complexity, randomness, spectra, fields. Standalone words which, without context, provide no motivation for deeper consideration. But what if these words held a key that would unravel even the most intractable of problems?

On May 4, 1971, Stephen Cook published a 7-page paper, "The Complexity of Theorem-Proving Procedures," that revolutionized the field of computer science.

In the paper, Cook laid the foundation for computational complexity theory by studying the polynomial-time reducibility of logical statements called tautologies. Cook proved that determining 'tautologyhood' (whether a propositional formula is true under every assignment of its variables) is equivalently difficult to determining whether one graph is isomorphic to a subgraph of a different graph. In 1973, Leonid Levin would independently formulate the problem. This problem would later become known as P ?≠ NP, and was one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which promises a prize of one million dollars to the author of a valid solution.
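To see why 'tautologyhood' resists brute force, note that the obvious check enumerates every assignment of the variables - exponential in their number. A minimal sketch in Python (my own illustration, not Cook's formulation):

```python
from itertools import product

def is_tautology(formula, variables):
    """True if the formula holds under every assignment of its
    variables - an O(2^n) enumeration for n variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# (p -> q) or (q -> p) is a classic tautology.
print(is_tautology(lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
                   ["p", "q"]))  # True
```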

It is here we will begin to pull at the threads of the impossible.

Why P ?≠ NP? Well, if P, the set of decision problems solvable in polynomial time by a deterministic algorithm, were equal to NP, the set of problems whose solutions can be verified in polynomial time (equivalently, solved in polynomial time by a nondeterministic machine), then many real-world optimization problems would become tractable: new drugs; improved sensing technologies; and, perhaps most alarming, the nullification of many cryptographic standards. It is commonly believed that P ≠ NP. But there have been isolated instances of problem misclassification before. In 2004, Agrawal, Kayal, and Saxena published "PRIMES is in P," exhibiting a deterministic polynomial-time algorithm for a problem that many had believed to lie outside P.
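The asymmetry behind the question is easy to show in code: verifying a proposed solution to a Boolean satisfiability instance takes one linear scan, while every known general decision procedure takes exponential time in the worst case. A toy sketch (the clause encoding is mine, purely illustrative):

```python
from itertools import product

def verify(clauses, assignment):
    """Polynomial-time verification: a clause is a list of
    (variable, polarity) pairs, satisfied if any literal matches."""
    return all(any(assignment[var] == polarity for var, polarity in clause)
               for clause in clauses)

def brute_force_sat(clauses, variables):
    """The naive decision procedure tries all 2^n assignments; no
    deterministic polynomial-time replacement is known."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if verify(clauses, assignment):
            return assignment
    return None

# (x or not y) and (y or z)
clauses = [[("x", True), ("y", False)], [("y", True), ("z", True)]]
print(brute_force_sat(clauses, ["x", "y", "z"]))
```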

Practically speaking, it is not always necessary to derive a deterministic polynomial-time algorithm for a problem. Often, it is entirely sufficient to use some kind of approximation technique instead. Even though you may not reach a globally optimal solution in all cases, it is usually possible to find a reasonably good one - sometimes even with guaranteed bounds on time, memory, and error, as in the sketch below. The fields of AI and scientific computing exist to develop and apply such methods.
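As one concrete example of such a guarantee, the classic greedy heuristic for minimum vertex cover (an NP-hard problem) repeatedly picks an uncovered edge and takes both endpoints; the result is provably at most twice the optimal size:

```python
def vertex_cover_2approx(edges):
    """Maximal-matching heuristic for minimum vertex cover: take both
    endpoints of any uncovered edge. The chosen edges form a matching,
    and any cover needs one endpoint of each, so the result is at most
    twice the optimum - a guaranteed error bound in linear time."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(edges))  # a valid cover, e.g. {1, 2, 3, 4}
```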

In the 2000s, two separate research efforts emerged which made significant contributions to the theory and application of approximation methods. The first group consisted of researchers like Terence Tao, Emmanuel Candès, Justin Romberg, David Donoho, and Richard Baraniuk, who applied results from random matrix theory to signal processing for the purpose of sparse signal recovery. Popularized as the field of 'compressed sensing,' this work revolutionized medical image processing by greatly reducing the time it takes to acquire a set of measurements. The same technology has also been applied in other domains such as astronomy and remote sensing. Tao has a blog post which provides an approachable, though likely outdated, overview of the topic.
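The computational heart of compressed sensing is replacing the intractable search for the sparsest solution with a convex L1-penalized problem. Below is a rough sketch using iterative soft-thresholding (ISTA); the sizes and penalty are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse signal observed through far fewer random measurements
# than its length - the compressed-sensing setup.
n, m, k = 200, 60, 5                        # length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true

def ista(A, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding for the LASSO objective
    min_x 0.5*||Ax - y||^2 + lam*||x||_1, the convex surrogate
    for sparse recovery."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L       # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

x_hat = ista(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```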

The second research effort originated in Richard Sutton's lab at the University of Alberta, where learned heuristic search techniques were applied to combinatorial games like chess, Go, and shogi. David Silver's university research went on to form the basis of Google DeepMind's AlphaZero model, built with co-contributors Demis Hassabis, Aja Huang, Oriol Vinyals, Ilya Sutskever, and many more. The application of Monte Carlo Tree Search to games later led to the development of the AlphaFold model, which delivered a state-of-the-art solution to the protein folding problem - resulting in a Nobel Prize for Demis Hassabis and John Jumper. Additionally, Sutton and Barto recently won a Turing Award for their work on reinforcement learning.
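For a sense of what these systems build on, here is a bare-bones Monte Carlo Tree Search - plain UCT with random rollouts - applied to the toy game of Nim (take 1-3 stones; taking the last stone wins). Where this sketch rolls out randomly, AlphaZero substitutes a neural network for move priors and position values:

```python
import math, random

class Node:
    def __init__(self, stones, player, parent=None):
        self.stones, self.player = stones, player   # player to move
        self.parent, self.children = parent, {}
        self.visits, self.wins = 0, 0.0

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones, player):
    """Random playout; whoever takes the last stone wins."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player                               # last mover wins

def mcts(stones, player=0, iters=5000, c=1.4):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend fully expanded nodes via UCB1.
        while node.stones > 0 and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(),
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child, if not terminal.
        if node.stones > 0:
            m = random.choice([m for m in legal_moves(node.stones)
                               if m not in node.children])
            node.children[m] = Node(node.stones - m, 1 - node.player, node)
            node = node.children[m]
        # 3. Simulation from the new node (or score the terminal one).
        winner = rollout(node.stones, node.player) if node.stones else 1 - node.player
        # 4. Backpropagation: wins counted from the parent's perspective.
        while node:
            node.visits += 1
            node.wins += winner != node.player
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts(10))  # optimal play takes 2, leaving a multiple of 4
```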

For problems in NP, we would like to learn transition functions for nondeterministic Turing machines in a way that solves these problems in polynomial time. Perhaps just as important, we would like to know which physical computing paradigms can be used to accelerate implementations of these models. Even if it is difficult for a model to learn a provably optimal transition function, or to create an algorithm that finds one (e.g., AlphaTensor), it may be possible to speed up training and/or inference with highly parallelized bitwise operations or analog arithmetic.
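As a hedged sketch of the bitwise idea: pack 64 candidate variable assignments into the bits of machine words, and a single pass of AND/OR/NOT then evaluates a Boolean formula on all 64 at once. The CNF encoding here is the same illustrative one as above:

```python
import random

def eval_cnf_batch(clauses, var_bits, width=64):
    """Evaluate a CNF formula on `width` assignments at once.
    var_bits[v] holds variable v's value in assignment i at bit i,
    so each AND/OR below processes 64 assignments in one operation."""
    mask = (1 << width) - 1
    result = mask
    for clause in clauses:
        clause_bits = 0
        for var, polarity in clause:
            bits = var_bits[var] if polarity else ~var_bits[var] & mask
            clause_bits |= bits                 # OR over the literals
        result &= clause_bits                   # AND over the clauses
    return result  # bit i set => assignment i satisfies the formula

# (x or not y) and (y or z), tested on 64 random assignments at once.
clauses = [[("x", True), ("y", False)], [("y", True), ("z", True)]]
var_bits = {v: random.getrandbits(64) for v in "xyz"}
print(bin(eval_cnf_batch(clauses, var_bits)))
```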

In the end, it may not even be necessary to prove P ?≠ NP. If we could find a clever way to apply our existing knowledge of approximation methods and pair it with highly specialized hardware, then we may be able to solve many of the large, unruly optimization problems we encounter in industry - ideally more cheaply than the services widely available today.


