Computational complexity theory is math putting us mere mortals in our place.
Complexity theory got off to a bizarre, albeit beautiful, start. Born of Hilbert's majestic notion that, for mathematics, there is no ignorabimus, the race was on.
The race to show that we could answer any mathematical question systematically. Algorithmically.
The race to champion Hilbert’s slogan “Wir müssen wissen — wir werden wissen!” (“We must know — we will know!”).
The gods answered. They said “No.”
And so we learned that there are some things that mere mortals, even if they’re giants, will never know. In fact, we learned that there are things that even the gods, with all the power and time in the aether, will never know.
This is computability theory. That there are problems that even the gods can't compute. Math always has some tricks up her sleeves, and there is no algorithm that can solve all problems. Moreover, there are mathematical statements that are true but entirely impossible, even for the gods, to prove.
But amidst all this impossibility, we started to get a better idea of what the gods could and couldn’t do. The obvious question that came next was “What can we mere mortals do?”
This is complexity theory. If we don’t have all the power and time in the aether, what things can we do? What things can’t we do? How complex are things?
But this is where complexity theory gets bizarre. Because of this history, it spawned from computability theory, which was concerned with which mathematical statements could be proved. That is, which problems have an algorithmic way to prove or disprove instances of that problem; a way to simply say YES or NO as to whether a statement is true. Decision problems.
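To make the distinction concrete, here is a toy sketch (in Python, with function names of my own choosing) of one underlying problem, subset-sum, in both guises: the decision version only says YES or NO, while the computational (search) version actually hands you a solution.

```python
from itertools import combinations

def subset_sum_decision(nums, target):
    """Decision problem: does SOME subset of nums sum to target? YES or NO."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def subset_sum_search(nums, target):
    """Search problem: actually produce such a subset, or None if none exists."""
    for r in range(len(nums) + 1):
        for c in combinations(nums, r):
            if sum(c) == target:
                return list(c)
    return None
```

Both sketches are brute force, of course; the point is only the shape of the answer, a bit versus an object.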
And so new computer science undergraduates, whose thoughts are rightly bouncing with computational problems such as sorting lists and factoring numbers, are blindsided when they study complexity and are handed decision problems instead of the computational problems we know we want to solve as computer scientists. They are given formal definitions that they eventually kinda sorta get used to but that still feel different from the computational problems they had in mind. From the remnants of the computability theory it grew out of, we get problems that only want YES/NO answers and, even worse, we get the vestigial and redundant terminology of recursive and recognizable and recursively enumerable and computable and listable and decidable and co-recognizable and computably enumerable.
Worse yet, it all gets mixed in with formal language theory, which Noam Chomsky developed to study human language, and so complexity theory is riddled with terminology about languages and grammars and automata.
And they’re all related in beautiful and cool ways. But we cried “Wir müssen wissen — wir werden wissen!” and we tried to create a pinnacle to conquer math from, and so math broke even the gods and scattered us like Babel to speak in confused tongues and rendered our complexity classes an alphabet soup.
So we open our mouths and out comes #P and TQBF is PSPACE-hard and PPAD and RP and coRP and BPP and PPP and SOS and gap-SVP reduces to average-case DLWE and PCP (Probabilistically Checkable Proofs to complexity theorists, Post’s Correspondence Problem to computability theorists, and Phenyl Cyclohexyl Piperidine to Berkeley residents). And we flounder and don’t come close (although this week has seen the “whopping 3.011n” bound, mentioned in the previous link, become a whopping 3.11n bound) to putting a dent in what we had the hubris to think we could topple.
And the majority of the time all we consider are decision problems, when what we really want to study are the computational problems we actually have in mind. The search problems.
Yet we talk about the P vs. NP million-dollar problem and its profundity (and it is truly profound) always in informal terms of whether it’s as easy to solve a problem as it is to check that a given solution is correct. We talk about it in terms of computational problems and not in the decision-problem framework it’s actually posed in!
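That informal "checking is easy" half of the slogan is easy to demonstrate. A hedged sketch (my own function name) for subset-sum: verifying a claimed solution, a certificate, takes time polynomial in the input, even though the obvious way to *find* one tries exponentially many subsets.

```python
from collections import Counter

def verify_subset_sum(nums, target, certificate):
    """Polynomial-time check of a claimed solution (the 'certificate')."""
    # Certificate must be a sub-multiset of nums...
    is_subset = not (Counter(certificate) - Counter(nums))
    # ...and must actually hit the target sum.
    return is_subset and sum(certificate) == target
```

Whether every problem whose solutions can be checked this quickly can also be *solved* comparably quickly is, informally, the P vs. NP question.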
And you can hear sometimes undergrads lament this hypocrisy. And they lash out and yell F*@k P and F#&k NP.
Hence FP and FNP. The search (function) problem versions of P and NP.
Luckily our history hasn’t hamstrung us too badly, and FP=FNP if and only if P=NP. Our question was the correct one to ask! But these FNP classes are still poorly studied and the relationship between search and decision problems is tenuous in subtle ways.
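One reason the question survives the translation between frameworks is self-reducibility. For SAT, a decision oracle can be bootstrapped into a search procedure using only one extra oracle call per variable. A hedged sketch (a brute-force stand-in plays the oracle, and the clause encoding is my own):

```python
from itertools import product

def sat_decision(clauses, num_vars, partial):
    """Decision oracle (brute-force stand-in): is some assignment extending
    `partial` satisfying?  Literals are signed ints: 3 means x3, -3 means NOT x3."""
    free = [v for v in range(1, num_vars + 1) if v not in partial]
    for bits in product([False, True], repeat=len(free)):
        assign = {**partial, **dict(zip(free, bits))}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def sat_search(clauses, num_vars):
    """Search-to-decision: n oracle calls pin down a satisfying assignment."""
    if not sat_decision(clauses, num_vars, {}):
        return None  # the decision version already says no solution exists
    partial = {}
    for v in range(1, num_vars + 1):
        partial[v] = True                      # tentatively set x_v = True
        if not sat_decision(clauses, num_vars, partial):
            partial[v] = False                 # forced: x_v must be False
    return partial
```

The subtlety is that this trick leans on special structure of SAT; it is not obvious, and in general not known, that every search problem reduces so cleanly to its decision version.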
We started by thinking of decision problems and then moved to search problems. We inherited a lot of insight and intuitions from our thoughts on decision problems but in what ways are we stymied by this? In what ways are we not thinking of search problems unto themselves and instead thinking of them in terms of decision problems first?
Karp reductions, Cook reductions, non-determinism. All well-defined, with clear intuitions, for decision problems. They have corresponding ideas for search problems, but with subtleties and bizarre mismatches of concepts (try to think about it for a moment; a cursory answer that seems right is likely missing some subtleties). The main problem is that we try to reconcile search problems with our notions of decision problems instead of first and foremost thinking about search problems as their own beast.
The formalism of formal languages gives us a rich landscape to wander but how do we make sure to not be consumed by the formalism of decision problems? How do we keep the nature of computational search problems in our soul?
And to what degree do we babble through languages and formalisms as we try to talk about search problems ever since we were scattered far and wide after we challenged the majesty of math?