## He’s Told Us Not To Blow It…

…’Cause he knows it’s all worthwhile

-Starman (David Bowie)

David Bowie meant a lot of things to a lot of people.  I’m trying to find out what he meant to me.  I’m trying to find out how and why his death is affecting me.

Is it because I like his sound?  I’ll admit, it’s good music and I’ve always liked some of it but I haven’t listened to much of it until, by happenstance, this past semester.  Is it because I like his acting?  It’s certainly always a fun novelty when David Bowie is in a film, but I’ve never seen him have a chance to particularly stand out hugely (except for his bulge in Labyrinth) as an actor.  Is it because I like his lyrics?  Now they are certainly damn good and have connected with me, so this probably has something to do with it.  Is it because of his paintings?  I hadn’t seen them before his death.  Is it his public stance on social and racial issues in crucial times and places?  I can only hope to be as aware and active.  Is it because he had orgies in a fur pit?  Maybe.  Is it because of his artistry – blending his lyrics with impassioned vocals and cinematic, evocative, and spanning sounds?  He certainly has songs that resonate with me.  Is it because he’s a rock star and captures a childhood ideal and aspiration?  Well, it definitely would’ve been cool to be David Bowie.  Am I just feeding into the hype around his death and caring more than I have a right to?  I hope not.  Is it his innovation and ability to create new sounds and trends and reinvent himself?  I’m not going to lie, I wildly admire it.  Is it his comfort within his strangeness that makes us all wish we were that strange and that we had different colored eyes and a permanently dilated pupil?  I wish I was strange, I wish I was comfortable with it.  
Is it his created personas that are larger-than-life that offer something otherworldly whose presence you’ve felt before – that you’ve almost tasted – that’s “all worthwhile?”  His meta-aware flamboyance and knowing theatricality did give me the sense that there was something supremely grander we were audience to and conjured memories of the sweet waves I’ve intercepted from the beautiful people and concepts I’ve come across that seemed to have come from a more beautiful and pure Platonic world just to blow our minds.  Is it the immense humanity that swarmed in him and flooded out of him and into us, toying with our humanity and joys and fears and hopes while also making us more human?  I wouldn’t doubt it.

These all affect me.  These all connect with me and make me feel something at his abrupt departure.  But what affects me most is his courage.  What inspires me most is his courage.

The ability to reinvent himself continuously throughout his life is impressive, but what makes David Bowie’s death affect me most was his courage to reinvent himself continuously throughout his life.  He did not choose well-trodden paths as he uprooted his proven successes to go into the unknown and bizarre, but he went into these new directions with zeal and attitude.

To go into the unknown and bizarre.  When I wrote my very first post of this blog (rife with fear and uncertainty) I listened to Space Oddity before and during, multiple times.  I had to.

While Bowie burst into the unknown with his zeal and attitude, it was not without fear.  And I had to hear the fear and estrangement and aspirations he felt through Major Tom.  I had to hear him overcome them in his bittersweet creation as I screwed up whatever courage I could borrow from him.

Academia is an exponentially branching beast and the paths you can take become a labyrinth.  I, of course, want to be a successful researcher, but I have aspirations of having my own humanity come out in some way.  To have a human impact on the culture of academia and philosophy of research.  To have some influence through the way I live, research, teach, blog, make videos, make music, ask questions, wax poetic.  To reinvent myself throughout my life and have my person and research emanate into untrodden humanity and knowledge.  But these are pies in the sky and seem impossible to get to, even when (especially when) taking perilous leaps into the unknown.  And I sometimes need to know there’s a Starman in that sky who’s been under pressure, with the gall to believe that they can do something new in those bizarre leaps, telling me not to blow it.  Telling me it’s all worthwhile.

So I listened to Space Oddity.

David Bowie is not done inspiring me or making me more human yet.  But this is already getting long and I have a few more specific ways in which he’s affected and influenced me.  So I’ll save those for the next post.  For now, David, I’m sure you would have stayed longer but you thought you’d blow our minds.

Ashes to ashes, funk to funky


## Well, Obviously…

So there was a math professor lecturing to their class.  The class had just started but they were already getting into some heavy-duty stuff.

No more than five minutes into the class, some smart-ass pipes up, interrupting the professor’s unfurling grandiosity: “Wait, why is what you just wrote there true?”

The professor, interrupted from their unveiling of knowledge, scoffs back: “Well, that’s obvious!  You just have to…  Well, you just… Huh, well if you… Mmmm….”

The professor scratched their head.  The professor twiddled the chalk.  The professor scribbled some scratch into the corner of the board, hiding it from onlookers with their body.  The professor forgot the students in the class and paced back and forth muttering trains of thought.  The professor spent the entire class period trying to figure out why what they wrote was true.  The professor was very bothered.

At the end of the class the professor mumbled: “I can’t think of why this is true.  We’ll meet again on Thursday.”  And then hobbled out the door leaving the students sitting there.

So finally, come Thursday, all the students shuffled in for their next class.  Then the professor burst into the room with a wide gait.

The professor swoops up the chalk and exclaims: “I’ve thought about why that statement is true.  I was right!  It’s obvious.”

The professor then picks up where they left off with the obvious statement now “settled”, unfurling their grandiosity onto the passive students…

Russell Impagliazzo told this joke last semester during a course he was teaching for the Fine-Grained Complexity seminar at the Simons Institute.  While it was certainly a funny and somewhat relatable commentary on the culture of mathematics and academia, it also reminded me of my philosophy on the nature of mathematics.

To aphorism-ify a thought by another friend of mine who is named Russell: Math is the art of turning the non-obvious into the obvious.

Now, this isn’t necessarily a new thought.  In fact, it seems that most mathematicians are already Platonists.  That is, they view math as an act of discovery rather than creation.  It’s actually the reason I fell in love with math: I’m not just hacking together some ad hoc creation that’s limited by my imagination, but am discovering pieces of something that is pre-existing.  Something that is universal and immutable and, best of all, bigger than me!

But even with all this Platonism-awareness, it may be easy to forget (or it may not be obvious) that the notion of obviousness is relative.  In fact, the notion of obviousness is all there is, to me, in mathematics.

For a very simple example, let me ask you this easy question: What are the roots to $x^2 -5x+6=0$?

Some of you may say, Oh, well, obviously the roots are 2 and 3.  You’d of course be right.  So my next question to you is Why was that obvious?

Did you simply just look at the equation and the numbers flew at you from the screen?  Or did you do some remodeling first?

I’d bet the majority of you (almost subconsciously) factored the equation in your head to say $x^2 -5x+6=(x-3)(x-2)$ reaching way back into the junior high or even elementary school days.  And, for some reason, $(x-3)(x-2)=0$ makes it more obvious that the roots are 2 and 3.  I mean, hell, they’re right there!

But what did we actually do?  We rewrote the same equation in a different way.  How silly and time-wasting is that?!  $x^2 -5x+6$ was always equal to $(x-2)(x-3)$, so we didn’t create anything new.  We just rewrote it (maybe just in our heads and maybe not even fully out).  But then, BAM!, it was obvious to us what the answer should be.
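To belabor the point in code (a toy illustration of my own; the function names are mine, not anything canonical): the two forms are literally the same function, but only the rewritten one makes the roots jump out.

```python
def original(x):
    """The polynomial as first presented: x^2 - 5x + 6."""
    return x**2 - 5*x + 6

def rewritten(x):
    """The same polynomial, merely rewritten in factored form."""
    return (x - 2) * (x - 3)

# Same function, different form: they agree everywhere we bother to check...
assert all(original(x) == rewritten(x) for x in range(-100, 100))

# ...but only the factored form makes the roots obvious: they're right there!
roots = [2, 3]
assert all(original(r) == 0 for r in roots)
```

Nothing was created in the rewrite; we only moved the polynomial into a form our brains can read the answer off of.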

The point is, that is all we ever do in math.  We take something and just rewrite it a bunch of times until we finally get it into a form that’s obvious for our little mortal brains!

This says to me that there are two possible philosophies for math if you really understand its essence: Either it is the most pointless and futile of all things a human can do, or it is the most pure and profound of all things a human can do.

Pointless and futile because anything that we ever prove was always and will always be true.  Our “discovery” of it changes absolutely nothing.  We were just simply able to bang our heads on the wall enough to make something that was always just utterly and completely obvious to the universe fit into our human heads.

But pure and profound because there can be no feigning at human grandiosity of creation or superiority over the universe.  Our discovery of a proof changes nothing except our human connection with mathematics.  It means that math is nothing except the human experience that we get out of it.  The awe, the wonder, the purpose.  The pursuit!

Ah, the pursuit.  When you realize that all a proof is is taking a true statement that is not obvious to us and simply rewriting it until we (finally) get to a point where the concluding rewriting is obvious to our little brains (Q.E.D.), is when you realize that math is all about the human endeavor and pursuit of the beast.  We simultaneously become humble and shuffle through the towering majesty of math while becoming boisterous and with purpose trying to hunt down and pursue the beast, pridefully winning a match against the Supreme Fascist.

The only problem is that this truth doesn’t stop people from feigning grandiosity of creation or superiority over the universe.  While most mathematicians may be Platonist, it’s not always obvious to follow Platonism’s logical conclusion to the relativity of obviousness.

In fact, I think most of math’s problems of “academic dryness” or pedagogy boil down to forgetting that there is no such thing as obvious.  That “obviousness” is not just entirely relative to humans, but is a completely individual experience.

Recognizing that there is no such thing as obvious (or that everything is obvious in the sense that it’s either true or not) lets us realize that math is all about finding ways, that are as human as possible, to fit it into our mortal context.  To fit it into our historical context, to fit it into our poetic allusions, to make it visual, audible, and spatial simply because that’s how we operate as humans.  To make analogies to events in our life and childhood.  To pinch the air as we think about topology.  To tilt our heads to see hallucinatory linear transformations on the wall.  To talk about mesh frames and soap, and donuts and coffee mugs, and peas turning into suns, and cats turning to static and then back again, and ham sandwiches, and snakes swallowing themselves whole.

Math is inextricably tied to our human bodies and histories and literature and politics and hobbies and fears and food.  The more we realize this, the more of this human endeavor we can explore and exalt in.

So when we say that it’s obvious what the roots to $x^2 -5x+6=0$ are, we should recognize that we can only do so because we memorized that we should factor a quadratic equation to get the zeroes.  And that’s because we memorized that $(x-a)(x-b)=0$ implies $x-a=0$ or $x-b=0$ (and maybe we learned much later that this works since $\mathbb{C}$ is an integral domain and so has no zero-divisors).  Just looking at $x^2 -5x+6=0$ doesn’t make it obvious what the roots are, even though we consider it an exceedingly simple problem; it’s the fact that we have techniques built up over the centuries that fit it into our human context and lets us have a place for it in our small brains.

It’s only obvious to us because we stand on the shoulders of giants!

Although, in retrospect, I don’t know how we ever thought they were giants.  They’re obviously windmills…

## A Circuitous Parable

Last week I accidentally built a bookcase that didn’t fit my bed.

Obviously, that sentence doesn’t make sense.  Do you know what else doesn’t make much sense?  Research.

Actually, research does make sense.  In fact, it makes perfect sense (assuming it’s mathematically rigorous research).  And the first statement I made actually makes sense too.

Theorem: Last week I accidentally built a bookcase that didn’t fit my bed.

This is the quintessence of research:  A true statement that makes absolutely no sense out of context.

So what’s the context?

Proof: I got a bed frame from IKEA.  I finally got myself around to (another quintessential phrase of research) putting it together around 11pm.  I kind of knew this was a mistake and that it would take quite a while.

Well, after an hour or so of labor, I had just a piece of the bed frame put together and I was pretty sleepy and not too happy that the rest would probably take another couple hours.  So I decided to finish the rest the next day.

But, hey, at least I built this thing… Actually, what the hell is this thing?

It turned out it was a bookcase.  It was just a superfluous part of the bed frame design that you put the bed frame up against but that isn’t even connected to it.  I didn’t even build any of the parts that I actually cared about that would support the bed!

And then, on a second look, the bookcase seemed pretty narrow.  Too narrow for my bed that was supposed to lie up against it in the actual bed frame.  Huh, that’s weird.

So I measured my bed and, sure enough, it was too big for my bookcase.  My bed was a queen-size and I measured the remaining bed frame parts to find out that they were for a full-size.

So, I accidentally built a bookcase that didn’t fit my bed. $\blacksquare$

So there, a clear goal that failed and was recast as a confusing, out-of-context result: the quintessence of research.

All joking aside, theory research is immensely exciting and profound, as are its results.  Research generally aims to answer questions, and it finds partial but crucial answers that piece together the landscape of our collective understanding.  It is not just recasting failures as results or missing the big picture of what it’s supposed to be answering.

## Scream When You Burn

if it doesn’t come bursting out of you
in spite of everything,
don’t do it.

-Charles Bukowski

This blog has become a chore for me.

Grad school has become a chore for me.  I’ve barely started it (both grad school and the blog) and the spirit wanes.

These past few weeks have been unmotivated and stagnant.  Of course, immersed as a grad student at Berkeley, that still means I have been reading and learning a great deal and have been having great conversations and new experiences but, relative to what I could and should be doing, it’s certainly unmotivated and stagnant in my eyes.  But, mostly, it’s unmotivated.

It’s not bursting out of me.

I’ve been having to force myself to get to campus and force myself to work and even force myself to blog.  Blog!  What was supposed to be my outlet and human endeavor, especially for times like these, is another chore.

The last couple of blogs were meant to capture a simple thought I was kicking around that I thought could be fun to put down and think about.  But it turned into multiple posts and seems to hardly capture what I want.  I can’t put what’s in my head into your head.  Not without bastardizing it.  Through a glass, darkly.

And so finishing this thought process in the next couple of posts intimidates me.  And I have to force myself to sit down and finish it.  And often fail at that.

So, what’s been up with me?  Why can I not get my blog and my work and my life to come bursting out of me?

Maybe it doesn’t have to.  Maybe Bukowski’s strong belief, that you shouldn’t try to be creative or try to do what you’re passionate about, that it should come bursting out of you, is over-simplistic.

Ben Caulfield said that one thing he learned in grad school is that research isn’t just a stream of immense passion.  That it ends up being more about forcing yourself to get out of bed, and to do work for an hour when you don’t want to.  Of course, there are periods of immense passion that burst through throughout this experience, but it seems like research needs a great deal of trying and pushing.

So should I just keep myself on schedule and get my work done incrementally, not caring about being soulful and artistic enough for it to all come bursting through me and out of me because it’s just that powerful?  It’s certainly less poetic that way.  But maybe Bukowski’s claim is just an oversimplification and unnecessarily causes feelings of spiritual inadequacy.

Woke up this morning and it seemed to me,
That every night turns out to be
A little more like Bukowski.
And yeah, I know he’s a pretty good read.
But God who’d want to be?
God who’d want to be such an asshole?

-Modest Mouse

(Sorry, after reading some of my past few posts a friend said “Dude I liked your old posts about insecurity and modest mouse.”  So there we go.)

But I do agree with Bukowski.  At least partially.

I’m in theory.  I’m not doing research to create applications or have my work used in the “real world” or have impacts.  I personally do my research for the human experience.  For the hedonism of having that human experience of the Platonic and immutable and beautiful truth to come bursting through me.  To discover something much bigger than myself, have it revealed to me.  Something perfect.

It’s the artistic element and creativity and humanity I yearn for and I very much need it to burst through me.

But I also very much agree with Ben.  I’ve also learned since starting grad school that research isn’t a stream of passion you can glide on; self-discipline, and forcing yourself to start something that either frightens or bores you, must constantly be exercised (after you are able to get yourself out of bed).

So it is over-simplistic and unhealthy to think everything must come bursting forth for it to be worth doing.  But this post has unprecedentedly burst forth because there has been something that has scared me since I began grad school.

I’ve been terrified of drifting passively through my life and my research.  I need the beauty of the questions and the truth to overtake me and it terrifies me that I might go through life trying to make improvements on research simply because it’s generally good to get more publications and there’s more improvements to make.

Not only would research become circular to me, it would become pointless.  In theory, we do nothing but prove things true that were already true and will always be true or we prove things false that were already false and will always be false.  We’ve only made it palpable to our mortal minds.  This is pointless.

To me, research is the discovery, the chase, the humanity.  It’s all about the joys we take in the chase and how we share those moments with others.

I definitely make myself aware that I will have to schedule myself and that I will have to make myself work and be productive when I just feel apathetic, and I think that it’s good to acknowledge that this is a part of being a researcher.  But let’s also be aware of what research is to us.

Why are we doing it and is something bursting forth?  Maybe not day-to-day, but on a larger scale is there an actual reason bubbling in us that makes us take this huge journey?

Camus talked about anguish and terror and the miserable condition of Man but he…wrote like a man who had just finished a large dinner of steak and french fries, salad, and had topped it with a bottle of good French wine. Humanity may have been suffering but not him. A wise man, perhaps, but Henry preferred somebody who screamed when they burned.

-Charles Bukowski

I still don’t know what this blog is meant to be, but I know I want to keep the human experience at the forefront of my pursuit.  Maybe not geared towards the tortured genius motif that Hollywood loves in order to scream while we burn.  But something human should be bursting and bustling within us.  Something that needs to be shared.  With as many people, in as many ways as possible, all for the human connections and moments that combust.  Shared with naked hedonism and playfulness and wonderment and poeticism and fear and conviction and commiseration and irrepressible awe.

But if you’re going to drift through this world.  Giving talks and writing papers because that’s simply what you do as an academic and not because there’s beauty and humanity that needs to be had and shared.  If it doesn’t come bursting out of you, in spite of everything, don’t do it.


## A Roll In The Hay

Haystacks.  Haystacks, everywhere.

Let’s get something straight, all algorithms are exhaustive search.  We just need to find the right haystack.

In a previous post I talked about complexity theory getting off to a bizarre, albeit beautiful, start that ended us up in the land of decision problems.  So now we have decision problems and languages and grammars, when what we really wanted to be talking about were the inherent search problems they corresponded to.

Now some of us may have gotten very used to decision problems and they now seem natural, but there are some subtleties to keep in mind.  To get a taste of some of the weirdness of this contrast of decision vs. search, let’s take a look at a specific problem.  Let’s consider the classic NP-complete decision problem of 3-coloring a graph.

For completeness, 3-COL is the following problem: given a graph $G=(V,E)$, decide whether each vertex in $V$ can be colored one of our 3 colors so that no two vertices connected by an edge in $E$ have the same color.

So, of course, we may say exhaustive search for this problem is the naïve algorithm that takes $3^{|V|}$ time by trying every possible coloring.  And since this problem is NP-complete we don’t expect to be able to do something really clever that allows us to only take polynomial time.  Moreover, the Exponential Time Hypothesis (ETH) conjectures that this exhaustive search is essentially the best we can do.  That is, to see if there’s a needle in this haystack, we have to go through every single piece of the $3^{|V|}$ pieces of hay!
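For concreteness, here is a minimal sketch of that naïve $3^{|V|}$ search (the code and names are my own illustration, not from any particular source):

```python
from itertools import product

def naive_3col(vertices, edges):
    """Decide 3-colorability the naive way: try all 3^|V| colorings
    and check each one for a monochromatic edge."""
    vertices = list(vertices)
    for coloring in product(range(3), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(color[u] != color[v] for u, v in edges):
            return True  # found a proper 3-coloring
    return False  # every piece of hay turned over; no needle

# A triangle is 3-colorable; the complete graph K4 is not.
triangle = [(0, 1), (1, 2), (0, 2)]
k4 = triangle + [(0, 3), (1, 3), (2, 3)]
assert naive_3col(range(3), triangle)
assert not naive_3col(range(4), k4)
```

This is the big haystack: every one of the $3^{|V|}$ assignments is a piece of hay.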

But not exactly.  We can do a little better (even if it’s still an exponential run-time):

Instead of looking through every possible coloring to see if it’s a proper coloring, we could instead look for independent sets (a set of vertices that have no edges between them) where the rest of the graph is 2-colorable.  The trick here is that checking whether a graph is 2-colorable can be done in polynomial time!  And so, such an independent set exists if and only if the graph is 3-colorable since the independent set can be colored one color (by definition there’s no edges between these vertices and so there’s no clash of color) and then the rest can be colored with the other two colors.

Why is this useful?  I’m going to be really hand-wavy here and say we just need to look for independent sets of size less than or equal to $\frac{1}{3}|V|$ and so it takes (crassly) approximately $\binom{|V|}{\leq \frac{1}{3}|V|} \approx 3^{\frac{1}{3}|V|} \approx 1.45^{|V|}$ time to decide whether a graph is 3-colorable with this approach (for more detail that’s still at a high level, see this).  So this is still exponential, but $1.45^{|V|}$ is still much better than the naïve $3^{|V|}$.  (Note that this doesn’t contradict ETH since it’s still an exponentially big haystack in which we need to turn over every single piece of hay.)
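To make the trick concrete, here’s a sketch in Python (my own illustration; the graph representation and function names are assumptions of mine, not from anywhere official).  The key facts it leans on: 2-colorability is just bipartiteness, checkable in polynomial time, and if $G$ is 3-colorable then its smallest color class is an independent set of size at most $\frac{1}{3}|V|$.

```python
from collections import deque
from itertools import combinations

def two_colorable(vertices, edges):
    """Polynomial-time 2-colorability (bipartiteness) check via BFS."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False  # odd cycle: not 2-colorable
    return True

def three_col_via_indep_sets(vertices, edges):
    """G is 3-colorable iff some independent set S with |S| <= |V|/3
    (the smallest color class) leaves G - S 2-colorable."""
    vertices = list(vertices)
    for k in range(len(vertices) // 3 + 1):
        for S in combinations(vertices, k):
            S = set(S)
            # S must be independent (no edge inside S)...
            if any(u in S and v in S for u, v in edges):
                continue
            # ...and what's left must be 2-colorable.
            rest = [v for v in vertices if v not in S]
            rest_edges = [(u, v) for u, v in edges
                          if u not in S and v not in S]
            if two_colorable(rest, rest_edges):
                return True
    return False
```

So we still exhaustively search, but over a smaller haystack: independent sets of size at most $\frac{1}{3}|V|$ rather than all $3^{|V|}$ colorings.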

This type of work generally lies in the area of Exact Exponential Algorithms which is mainly concerned with finding improvements on exhaustive search for NP-complete problems.

But this is where things begin to rub me the wrong way.  We certainly got an improvement, but we did nothing clever with our independent sets once we formulated the fact that they were useful to look for; we just did exhaustive search on them!  And there’s a sentiment I have that this isn’t really an improvement on exhaustive search since maybe the original exhaustive search for a 3-coloring was the wrong notion of exhaustive search and the right one was the independent set technique the entire time.  Or at least that exhaustive search isn’t well-defined since there’s a bunch of things we might be able to exhaustively search over to decide this problem.

And this is where the weird connection of search and decision comes in:  For a decision problem we just need to correctly say YES or NO regardless of how we get there.  And really the only way to get there is by going through some search problem and that will let us make that decision.  That is, to every decision problem there is a slew of corresponding search problems that decide it.

In fact, not only is exhaustive search actually well-defined, it’s all there is!

That is, when we talk about improving exhaustive search, we really mean improving naïve search.  And what we do from there is find a different exhaustive search that is less naïve than the original one.

Find a smaller haystack.

So every decision problem is really a search problem.  A search for a witness as to whether we should decide YES.  The only trick is to cleverly swap out our big naïve haystack for a small one with a key property: there is a needle in the big naïve haystack if and only if there is one in the small haystack.

That way, we can just sift through a small (hopefully polynomially-sized) haystack with the confidence that what we find there will tell us what we would have found if we took the trouble to go through the giant one.

The real distinction isn’t between exhaustive search and something more clever, it’s between naïve search and all other types of (possibly more clever) exhaustive search.  It’s all exhaustive search.

And all solutions to decision problems are just some form of haystack-swapping.

But this isn’t really a new idea.  It might be a nice framework to think in at times but the idea that everything boils down to exhaustive search was thought of before.  It even caused the Soviets to think the P vs. NP problem was solved (the part that was missed was that exhaustive search may not be so bad if the haystacks we get down to are small) and caused a bunch of hoopla and controversy, especially when Leonid Levin defined NP-completeness.

Nonetheless, it’s haystacks all the way down.  And all decision problems come down to search problems (albeit, exhaustive search problems) for a witness.

But then what about search problems themselves?  Is it identical there?  Are there subtleties I’m leaving out?  And, most importantly, where is the disparity between decision and search problems I keep alluding to?

To be continued…

## Cold Off The Press!

Big news!  It looks like Laci Babai has a quasipolynomial time algorithm for Graph Isomorphism (GI).

That is, GI has jumped from the best known algorithm being almost exponential, to being almost polynomial!

This was certainly exciting and I immediately posted the result on Facebook last night only to see it on other main TCS blogs and on twitter the next morning.

This was the point when I realized that “Oh yeah, I have a blog too.  And it’s kinda supposed to be about TCS.  And a good deal of people follow TCS blogs because they talk about recent TCS results and give a feel for the landscape…”

For some reason that wasn’t immediately obvious to me and I just posted it to Facebook and unconsciously assumed the big boys would take care of it in the blogs (we seriously need more women bloggers…assuming we need any bloggers at all).  But I suppose if I want this blog to have any semblance of legitimacy I should note results that excite me.

Well I guess it’s not a big deal I missed this one result…It only might be “the theoretical computer science result of the decade.”  Well, better late than never I suppose.  Here it is, cold off the press!

So GI is one of those weird problems that’s not NP-complete (unless the polynomial hierarchy collapses) but also isn’t known to be in P.  It’s this intermediate problem.  And it’s been an open question for a long time as to where exactly GI is.

It’s generally been the flagship of intermediate problems of NP.  Some of its bed-mates are the well-loved problems of factoring and discrete log and lattice problems.  Thankfully, unlike all of these bed-mates, GI is not used as the basis of any cryptographic protocol…otherwise this would be a scary result.  But this is because we haven’t really thought of GI as a hard problem for quite some time.  So Laci’s result isn’t all that conceptually surprising (although a large chunk of complexity theory is trying to prove things that we all believe are true and wouldn’t be that surprised when we confirm them, yet we consistently fail, and so this result is quite exciting).

GI is solved pretty easily in practice.  That is, instances of GI that arise in the real world seem to be solved pretty nicely and so it is much unlike its cryptographic bed-mates since they seem to be hard on average.

So, now that GI is within striking distance of a polynomial time result, it should be noted that there wouldn’t be much change to the complexity landscape if we get GI in P.  It would be a huge result!  But mostly because it has been a longstanding open question and would give us a definitive and well established place to put GI.

The biggest consequence (besides any technical insights the proof gives us) is that pedagogy would be a little bit trickier in trying to introduce the notion of Zero Knowledge Proof systems.  I’m currently TA’ing a graduate crypto course and just last week was able to teach one of the lectures and, surprise surprise, it was on Interactive Proofs and Zero Knowledge.  Oh man, that was fun!

But GI was such a great flagship problem for that!  All of the intrigue for its use there kind of fades if we have GI in P.  Well, it’s only (possibly) quasipolynomial for now…

That being said, now that it’s almost polynomial, I really want to see it land in P now!

Feel free to post any questions or thoughts on this exciting result!

## Alphabet Soup

Computational complexity theory is math putting us mere mortals in our place.

Complexity theory got off to a bizarre, albeit beautiful, start.  Birthing from Hilbert’s majestic notion that, for mathematics, there is no ignorabimus, the race was on.

The race to show that we could answer any mathematical question systematically.  Algorithmically.

The race to champion Hilbert’s slogan “Wir müssen wissen — wir werden wissen!” (“We must know — we will know!”).

The gods answered.  They said “No.”

And so we learned that there are some things that mere mortals, even if they’re giants, will never know.  In fact, we learned that there are things that even the gods, with all the power and time in the aether, will never know.

This is computability theory.  That there are problems that even the gods can’t compute.  Math always has some tricks up her sleeves and there is no algorithm that can solve all problems.  Moreover, there are mathematical statements that are true but entirely impossible, even for the gods, to prove.

But amidst all this impossibility, we started to get a better idea of what the gods could and couldn’t do.  The obvious question that came next was “What can we mere mortals do?”

This is complexity theory.  If we don’t have all the power and time in the aether, what things can we do?  What things can’t we do?  How complex are things?

But this is where complexity theory gets bizarre.  Because of this history, it spawned from computability theory, which was concerned with which mathematical statements could be proved.  That is, which problems have an algorithmic way to prove or disprove instances of that problem: simply saying YES or NO as to whether a statement is true.  Decision problems.

And so new computer science undergraduates, whose thoughts are rightly bouncing with computational problems such as sorting lists and factoring numbers, are blindsided when they study complexity and are given decision problems to study instead of the computational problems we know we want to solve as computer scientists.  They are given formal definitions that they eventually kinda sorta get used to but that still feel different from the computational problems they had in mind.  From the remnants of the computability theory it grew out of, we get problems that only want YES/NO answers and, even worse, we get the vestigial and redundant terminology of recursive and recognizable and recursively enumerable and computable and listable and decidable and co-recognizable and computably enumerable.
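To make the contrast concrete, here is a toy sketch with factoring (the function names are my own illustration, not standard code).  The decision version only ever answers YES or NO, yet binary search over the threshold recovers an actual factor from those answers alone, which is one reason the decision framing is less restrictive than it first appears:

```python
def has_factor_at_most(n, k):
    """Decision version: does n have a nontrivial factor d with 2 <= d <= k?"""
    return any(n % d == 0 for d in range(2, k + 1))

def find_factor(n):
    """Search version: return a nontrivial factor of n, or None if none exists."""
    for d in range(2, n):
        if n % d == 0:
            return d
    return None

def find_factor_via_decision(n):
    """Recover a factor using only YES/NO answers, by binary search on k."""
    if n < 4 or not has_factor_at_most(n, n - 1):
        return None                    # prime or too small: the oracle says NO
    lo, hi = 2, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_at_most(n, mid):
            hi = mid                   # a factor exists at or below mid
        else:
            lo = mid + 1               # all factors are above mid
    return lo                          # the smallest nontrivial factor
```

The binary search makes only logarithmically many oracle calls, so here the search problem is efficiently solvable whenever the decision problem is.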

Worse yet, it all gets mixed in with formal language theory, where Noam Chomsky was trying to study human language, and so complexity theory is riddled with terminology about languages and grammars and automata.

And they’re all related in beautiful and cool ways.  But we cried “Wir müssen wissen — wir werden wissen!” and we tried to create a pinnacle to conquer math from, and so math broke even the gods and scattered us like Babel to speak in confused tongues and rendered our complexity classes an alphabet soup.

So we open our mouths and out comes #P and TQBF is PSPACE-hard and PPAD and RP and coRP and BPP and PPP and SOS and gap-SVP reduces to average-case DLWE and PCP (Probabilistically Checkable Proofs to complexity theorists, Post’s Correspondence Problem to computability theorists, and Phenyl Cyclohexyl Piperidine to Berkeley residents).  And we flounder and don’t come close (although this week has seen the “whopping 3.011n” bound, mentioned in the previous link, become a whopping 3.11n bound) to putting a dent in what we had the hubris to think we could topple.

And the majority of the time all we consider are decision problems when all we really want to study is the computational problems we really have in mind.  The search problems.

Yet we talk about the P vs. NP million-dollar problem and its profundity (and it is truly profound) always in informal terms of whether it’s as easy to solve a problem as it is to check that a solution is correct when given one.  We talk about it in terms of computational problems and not in the decision problem framework that it’s in!

And you can sometimes hear undergrads lament this hypocrisy.  And they lash out and yell F*@k P and F#&k NP.

Hence FP and FNP.  The search (function) problem versions of P and NP.

Luckily our history hasn’t hamstrung us too badly, and FP=FNP if and only if P=NP.  Our question was the correct one to ask!  But these FNP classes are still poorly studied and the relationship between search and decision problems is tenuous in subtle ways.

We started by thinking of decision problems and then moved to search problems.  We inherited a lot of insight and intuitions from our thoughts on decision problems but in what ways are we stymied by this?  In what ways are we not thinking of search problems unto themselves and instead thinking of them in terms of decision problems first?

Karp reductions, Cook reductions, non-determinism.  All well-defined with intuitions for decision problems.  They have corresponding ideas for search problems but with subtleties and bizarre mismatchings of concepts (try to think about it for a moment; a cursory response that seems right is likely missing some subtleties).  The main problem is that we try to reconcile search problems with our notions of decision problems instead of first and foremost thinking about search problems as their own beast.

The formalism of formal languages gives us a rich landscape to wander but how do we make sure to not be consumed by the formalism of decision problems?  How do we keep the nature of computational search problems in our soul?

And to what degree do we babble through languages and formalisms as we try to talk about search problems ever since we were scattered far and wide after we challenged the majesty of math?

To be continued…