The Little Prince in 2017

The Little Prince had two main things to worry about on Asteroid B-612.

One was to uproot baobab trees.  And the other was to tend his rose.

If he didn’t uproot the baobab trees, the results would be a catastrophe for his small asteroid.

[Image: baobabs and the Little Prince, fan art by celaenotheharpy]

And if he didn’t tend to the precious rose, fragile thing, it would die.

The asteroid wouldn’t be destroyed of course if the rose died, but the poor prince’s heart likely would be.

So the Little Prince had two main responsibilities: keep the asteroid intact by uprooting baobabs, and tend to the rose so that the asteroid was worth living on.


 

If you haven’t read The Little Prince, it’s a very tender and interesting piece worth reading.  What I want to talk about today, though, is progress.  I think everyone, in some form or another, wants progress.  And, with New Year’s Eve a day away and a pretty insane 2016 behind us, I’ve been thinking a bit about what progress I want to make in 2017.  There’s a categorization of progress that was helpful to me in making goals, and I think it merits sharing.

I believe there are two main types of progress to be made in this world: Baobab uprooting and Rose planting.

Baobab uprooting is the work of fixing broken things and making sure the lower end of Maslow’s hierarchy of needs is being met.  Here we battle famine, social injustice, cancer, poverty, pain.

Rose planting is the work of introducing new things as improvements, and it gets at the upper levels of Maslow.  Here we attempt to flourish in creative arts, self-care, theoretical research, space exploration, personal goals, happiness.

 

It seems like these are often taken in a very binary this-or-that way.  Being in the business of theoretical research, I’ve heard many stories of battles (and have participated in some small ones) between these two sects.  Baobab uprooters berate, say, NASA for exploring space when there’s so much poverty and hunger on Earth, while rose planters wax poetic (justifiably) about the importance of exploration and discovery to the human spirit (and about how sometimes rose planting leads to baobab uprooting).  And, in general, I hear this recurring scuffle of each side talking about how important their notion of progress is and how silly it is to waste time on the other’s goals (or, at the very least, they implicitly put themselves into one category or the other, never considering that they could help in both).

But with a very unpredictable 2017 around the corner, I think it’s worth making this distinction.  There will be enough fighting in the coming year.  For now we all want the same thing: progress.  Let’s look at its two faces clearly.

 

For me, this categorization makes it easier to write New Year’s resolutions, since it lets me set goals in both categories.  Without acknowledging both of these as valid and important parts of progress, I feel guilty on both ends: if I focus on rose planting, I feel privileged to be able to do so and guilty that I’m not doing more to help with the massive amount of pain in the world.  And if I focus on baobab uprooting, I feel guilty in graduate school that I’m not accomplishing what’s expected of me and what I’m getting paid for, and that I’m not keeping up with my peers; it also means neglecting myself, even though I know I need to be happy on my own end and do things that make me feel fulfilled.

Making these two notions of progress into sections that I populate with New Year’s resolutions helps on both ends of this guilt.

One reason I’m writing this is to maybe help alleviate some guilt of the hardcore baobab uprooters.  Keep doing your thing.  But don’t feel guilty for making your own personal asteroid worth living on too.  It’s necessary.

The larger reason I think this matters, though, is for the rose planters: You have time for some baobab uprooting.  Even just an hour a day (which is often more than Facebook steals from me) accumulates to quite a bit (see Vi Hart for more inspiration).  You learn a lot along the way, it’s important, and it doesn’t stop you from planting more roses.  Just a little bit more scheduling.  It’s necessary.

So, all of us resolution writers, let’s try to set some goals for both baobab uprooting and rose planting.  2017 is finally here and we’ve got some Little Prince-ing to do.  We’re gonna need it.


OK…So, What Do We Do From Here?

It’s been a while since I’ve posted.

Part of that is, fortunately, because I’ve been busy doing research.  But it’s also because a lot of the issues occupying my brain lately have been overwhelming.

With my first trip to Israel this past summer, I spent a lot of time learning and thinking about the Arab-Israeli conflict.  The Black Lives Matter movement also had quite a bit of activity that summer, another highly politicized and divisive conflict.  And, finally, the primaries and beyond for the US presidential election were ripping through the nation.

Each of these was overwhelming.  Because I had my own opinions on each and each had an enormous amount of gravity attached to it.  But also because they were so divisive and people were in such conflict, staying deep within their own side and harboring such vitriol towards an other side they hardly had any contact with.  And there was such pain.  And such fear.

The election has happened now.  And I feel I should have done more on my part to talk more about it.  I feel guilty that I was overwhelmed and didn’t try to start more conversations outside of my own immediate bubble.

Bubbles

I may talk about the election in a future post but, here, I want to talk about bubbles.  Choose your favorite issue or, better yet, leave all issues at the door.  I want to understand how we become so divided and isolated from each other in general and see if there are ways to change that.

The main relevant ideas I’ll focus on are in this video (don’t get scared of its title…it’s much less upsetting than much of the current coverage of specific issues).  Most everything I want to say is in the video, so please give it a watch…it’s important, and it’s worth it.

It’s scary.  We’re scared and hurt and angry animals.  We’ve always been.  But now we have the internet.  And what I want to talk about is if there are any ways to address what’s happening in the video.  Namely, I want to address this part of the video:

[Screenshot from the video]

These are bubbles.  These are terrifying.  How do we avoid them?  How do we avoid building “totems” of the other side?  How do we avoid building straw men?

I think there is a lot to say about actively doing this socially:
I know someone who, after the election, looked for a post she didn’t agree with (or even thought was terrible) and ‘liked’ it to better diversify her page and pop the bubble.  I’ve seen others promote this bubble-popping with recent posts and have discussions.  And I’ve seen attempts to match people who voted differently in order to connect them for conversation and learning from each other.
Luca Trevisan, while acknowledging the value of safe spaces, posted a call-to-action to “occupy unsafe spaces,” since permanently staying in safe spaces allows for bubbles to persist.
And, if you’re against Trump, this is a (very informal) masterclass in empathy (and this is also a slightly more formal, important insight into many Trump supporters, which I’m happy I did end up sharing this past summer).
And I’ve tried to talk about context and empathy before.

But I want to abstract a bit more than these social techniques and consider technology’s responsibility in bubbles in general.

(EDIT: After seeing how people on the same side have interacted with each other since this election, I am very scared that the social techniques will not happen.  People seem not to be bothered (myself often included) to even take a step within their own bubble to have discussions about how to band together or how to fight or how to react, much less take steps to actually get out of that subdivided bubble.  I’m scared.  I think this is an emergent property of how people interact.  And I’m scared nothing will change.  And it is why I needed to write this post for my own sanity.  Even if we can’t change the social aspect, maybe we can abstract to the design issues that set the mold and dynamics for the emergent social aspects.  Can we be constructive here?)

The Thought Police Are Here!

The ability of these “thought germs” to spread as they currently do is a direct consequence of the technology and social media platforms that arose in the past decade or so.  As in the video above, can they be like a harmful virus?  Do we have a responsibility to regulate this?

First off, the idea of regulating thoughts might (and maybe should) sound horrifying.  We do not want “thought police.”  So the question becomes, are there ethical ways to foster bubble-popping and can there be bubble-resistant social media platforms?

A simple idea I’ve heard, which feels ethical on its face (please comment if there are reasons it may not be), is to have a simple fact-checker built into Facebook that rates the statements of a post according to their veracity.  I remember seeing this ad against Trump this election season, which I took for granted as true but which has recently been pointed out to me to be false.  A built-in fact-checker, I believe, is a simple and ethical way to help prevent the type of ‘totems’ that can be built in isolated bubbles, as the video talks about.

Another simple way to help prevent bubbles, one that has actually been implemented (maybe without that intention), is that you can now ‘follow’ a public figure like Trump without ‘liking’ them, which allows you to diversify your Facebook newsfeed without fear of ‘liking’ something your friends may not agree with.

Both of these, I think, do not stifle free speech, yet might be steps to being responsible in our design of systems.  Systems that can otherwise cause divides.

Just as a pure free market was seen to have horrible consequences in the industrial revolution of America, we may also have to start thinking of the negative effects of social media and if there are ethical ways to help prevent them.

The Responsibilities of the Tech Community

The technology community can often sprint forward without much consciousness of consequences that arise.  With some apps furthering gentrification and social media allowing the type of vitriol and divisiveness we’ve seen this election, the technology community has a responsibility to at least address those tech-caused consequences when they arise.

I think it’s important to start asking questions about the mediums we output.  Should releasing platforms that are predisposed to exacerbating social problems like gentrification, social segregation, and the propagation of false information be akin to selling non-FDA approved food?

As an extreme, what if a platform is released whose inherent structure is designed to cluster people into bubbles quickly and to favor false information?  What if it even randomly sends fake messages to its users from the ‘opposing side’ to be incendiary?  Do we consider that dangerous?  Do we consider it just another platform that anyone can use and practice ‘free’ speech on?  More to the point, should platforms be regulated, and can we consider them sometimes to be yelling “Fire!” in a massively crowded theater?

With that in mind, should existing platforms, such as Facebook, be examined for what tendencies of that nature they have?  If we consider a social media platform as having a certain set of dynamics and properties – e.g. susceptibility to clustering, diffusion time of ideas, lifespan of false information – do we have a responsibility to build ‘safe’ and socially responsible platforms?

Much as we’d prefer the CDC to exist before airplanes are built, we may want a way to analyze “thought germs” before we have the internet (obviously a bit late for that).  How can we do this ethically?

OK…So, What Do I Do From Here?

Lots of questions.  I just give lots of questions.  Few answers.

I think I’m still trying to process the election results and the current state of the country.  I’m not sure what I do from here or what I’m doing with this post.  This is some sort of fuzzy outlet.  I do know, though, that I want to be constructive.  I know that I want to contribute in whatever way I can.  And thinking about this technological aspect seems, to me, the closest thing that isn’t already being heavily talked about, as far as I know.

What should we do?

Should we consider moral responsibilities in technology and social media design?

Are there small changes (like a fact-checker) that can be incorporated into existing systems?

For theory folk, are there graph-theoretic questions to ask under this motivation?  That is, the picture above suggests questions about a graph’s connectivity, its poor expansion, and the related notions of its mixing time and diffusion properties; these are well-studied, but maybe there are questions motivated by our current issues and ways they can be used and explored more.
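As a toy illustration of the kind of quantity in play, here’s a minimal sketch (assuming Python with networkx and numpy; the two-community graph and its parameters are made up for illustration) that measures how “bubbly” a graph is via the conductance of a cut and the spectral gap that governs mixing:

```python
# Build a made-up graph with two dense "bubbles" joined by a few bridge edges, then
# measure the conductance of the cut and the spectral gap of the normalized Laplacian
# (a small gap means poor expansion and slow mixing between the bubbles).
import networkx as nx
import numpy as np

G = nx.planted_partition_graph(l=2, k=50, p_in=0.5, p_out=0.005, seed=0)
bubble = set(range(50))                        # the first planted community

phi = nx.conductance(G, bubble)                # how much of the bubble's edge volume leaves it
lam2 = np.sort(nx.normalized_laplacian_spectrum(G))[1]

print(f"conductance of the bubble cut: {phi:.4f}")
print(f"spectral gap of the normalized Laplacian: {lam2:.4f}")
```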

Are the social aspects of bubble-breaking a lost cause?  Should we instead put a significant amount of energy into the platforms and mediums that set the dynamics of those social aspects?  Do you have suggestions for the social changes too?

What ideas do you have?  What do you agree with here?  What do you disagree with?

We often think of the internet as having an amazing ability to connect us.  But we have seen that it also has an amazing ability to divide us.  How do we address this?

Greatest Hits of TCS: SL=L

Alright!  As promised, here’s the first of the Greatest Hits of TCS.

To kick it off, Prasad Raghavendra presented Omer Reingold’s celebrated result of SL=L.  Prasad not only gave a beautiful tour through the work and threw some great perspectives on it, but he also set a strong precedent for embedding the work within its cultural context.

The Outline

I’ll follow the talk loosely and give some of my own musings as we go.  But as this is the first in the Greatest Hits series, I’ll lay down a format that I hope to follow for the rest of the series.

We’ll start off with a good dose of the human element of research in Teetering Bulbs, where we’ll talk about the humans involved.  Of course, these aren’t greatly relevant to the result but, as my blog hopefully gets across, I’m in this business for the human journey and for research as a social activity where we learn not only about Platonic truths but about each other and ourselves as humans (if you’re an AI or not a human, feel free to skip this portion).

We’ll then move to Once Upon A Time… where we’ll look at the landscape the result sits in and the narrative that leads up to it and that it continues.  Context!

The next section, If You Get Nothing Else…, is almost a TL;DR for Once Upon A Time…  This section may not even be specifically related to the result, but it should capture the main paradigm (shift) that the result embodies or falls into.  If you get nothing else out of this post, at least get the big landscape picture If You Get Nothing Else… tries to capture.

The Result section is pretty self-explanatory.  It states the result and fits it into the story Once Upon A Time… sets up.

Finally, we’ll sketch the proof or main ideas in The Sketch.  These may be to varying degrees of depth but some of them may end up being in the following spirit:

[Image: the “how to draw an owl” meme]

The rest of the series should follow this outline so read in whatever order and whichever subset of sections you want, although I won’t redefine the outline in the future.  Feel free to post questions or comments, especially if you want me to elaborate on something or if there’s an error or to just show off your knowledge!  Have fun!


 

Teetering Bulbs

Omer

Omer Reingold is a great human.  He is, in fact, one of my inspirations for this blog.  The popular Windows On Theory blog was in part created by him and he wrote its very first post; his introspection, humility, self-deprecation, aspiration, and overwhelming humanity in that post was consciously in my mind as I corralled courage for my own first post.

Omer was also at Berkeley for the whole summer of 2015 for the Simons Institute’s Cryptography program.  While here, he carried on Shafi Goldwasser and Nir Shavit’s practice of playback theater, getting it started here and leading most of the sessions.  Playback is a form of improv theater where a participant tells some personal story and the performers play it back to them in some improvised way under some format.  It can be funny, poignant, personal, sad, inspiring, and always human.  I was very happy to participate in this with Omer and the lively crypto community.  And it did leave an impression on me how human and enthusiastic and open Omer was as a person.  He seems to embody research as a human experience and allows himself to be human in all aspects of his life.  And, as he confided in his blog posts, his research is marked with points of inactivity; in fact, the result we’re about to talk about here was the culmination of a previous visit to Berkeley, and it was not one he originally thought to accomplish while there but manifested as a surprising success.

Once Upon A Time…

The power of randomness!

OK, so let’s actually get to some material (or at least some context for the material).  Randomness gives us power!

The ability to flip coins and (most importantly) allow errors relaxes our notion of what it means to “solve” a problem: we don’t always have to get the right answer, we just have to get it right with overwhelming probability (like, the probability we mess up is lower than the probability that we win the lottery six times in a row).  And so, using randomness as a resource, we can widen the scope of what we consider solvable by allowing (negligible) errors.

And, in the early days of computation, it became quickly apparent that randomness gives us a great deal of power.  By creating randomized algorithms we could (easily and elegantly) solve problems that have generally seemed very hard to pin down.

Primality Testing, for example, is the decision problem of determining whether or not a given number is prime.  An efficient deterministic algorithm was elusive for a long time, yet simple randomized algorithms were created and widely used that decided the problem (with a negligible probability of failure).

Similarly, the problem of Polynomial Identity Testing (PIT) also evaded deterministic attacks, yet yields easily to a randomized technique (to see if a polynomial simplifies to be just zero/identity, just plug in random values to see if it ever outputs something not zero!).
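To make that randomized technique concrete, here’s a minimal sketch (assuming Python with sympy; the polynomials and the helper probably_zero are made up for illustration).  Of course sympy could just expand these tiny examples symbolically; the point of the randomized test is that it works even with only black-box evaluation access, with error controlled by the Schwartz-Zippel lemma:

```python
# Randomized identity test: a nonzero polynomial of total degree deg evaluated at a
# uniformly random point from a range of size N vanishes with probability <= deg/N.
import random
import sympy as sp

x, y = sp.symbols("x y")
p = (x + y) ** 2 - (x ** 2 + 2 * x * y + y ** 2)   # secretly the zero polynomial
q = (x + y) ** 2 - (x ** 2 + y ** 2)               # not identically zero

def probably_zero(poly, variables, trials=20, field_size=10 ** 9):
    """Report 'zero' unless some random evaluation comes out nonzero."""
    for _ in range(trials):
        point = {v: random.randrange(field_size) for v in variables}
        if poly.subs(point) != 0:
            return False        # a nonzero evaluation certifies the polynomial isn't zero
    return True                 # zero with overwhelming probability

print(probably_zero(p, [x, y]))  # True
print(probably_zero(q, [x, y]))  # False (with overwhelming probability)
```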

The power of randomness?

But, wait! – you say – we have a deterministic algorithm solving Primality Testing.

And, right you are.  We do have an algorithm for Primality Testing that doesn’t use a single coin flip.  In fact, many of the problems that were captured with randomness’ power in the ’70s were later toppled entirely deterministically.

It began to look like randomness’ power wasn’t all that powerful…

PIT became the main stronghold of problems that were efficient when randomized but had not yet succumbed to a deterministic approach.  But, besides these loose ends, it seemed that randomness gave us no power.  That is, BPP (the version of P where we can flip coins) captures no new problems that P can’t capture: BPP=P.

This was addressed rigorously in the beautiful work of Russell Impagliazzo and Avi Wigderson, which shows we can derandomize BPP (i.e. BPP=P) based on worst-case hardness assumptions (a little stronger than P\neqNP, but ones we nonetheless very much believe).

So, there we have it!  Randomness seems to not give us mortals (i.e. P) any power!

Space Complexity

So far we’ve talked about randomness in terms of time-bounded complexity.  More specifically, we’ve talked about it in terms of the small (polynomial) lifespans of mortals: P.  (Note that this automatically puts a polynomial bound on space as well since you can’t write down more than polynomial information before your lifespan runs out and you drop dead!)

So, with mortals (which is what we really care about in complexity theory) still in mind, we can ask about the problems we can do super efficiently.  In this case let’s look at what we can do with just a little bit of memory on hand; that is, what can we do with just a logarithmic amount of space, L (note that it’s easy to show that L\subseteqP which, because of mortals, is why we care about log space and not polylog space).

Just because we can’t save time using randomness doesn’t mean we can’t be clever and use less space when flipping coins!  So the question still remains: does BPL=L?

We’re back to our original ponderings: does randomness give us power?

If You Get Nothing Else…

Randomness is a resource.

When we allow ourselves randomness, we try to leverage a helpful resource for power.  Derandomization results are striking in that they tell us that the coins we flipped were placebos: we had the power in us all along and didn’t need that resource.

But, mostly, these results are profound philosophically.  We start poking at the nature of randomness and try to pin this age-old notion down rigorously and, surprisingly, often find that a random world is not more robust than a deterministic one.  So when people try to fight the Church-Turing thesis that humans are essentially just Turing machines by saying our brains are doing non-Turing stuff, this takes the wind out of one more of their arguments: “But I have free will and stuff.  I don’t do things deterministically!  There’s at least random occurrences happening in the marvelous biological machinery of the brain.”  Doesn’t matter!  You can still be modeled by P whether or not you live in a fatalistic world!

The Result

Theorem: Undirected ST-Connectivity is in Log-Space

OK…So what is this and what does it have to do with randomness and what we have been talking about?

Undirected ST-Connectivity (call it USTCON) is the decision problem of, given an undirected graph G=(V,E) and vertices s,t\in V, deciding whether t is reachable from s – i.e. are s and t connected.  So the theorem can be stated as USTCON\inL.

Again, what does this have to do with randomness?  Well, USTCON has a very simple randomized algorithm: just start at vertex s and take a random walk for n^5 steps.  With high probability you’ll hit t if they’re connected, so just accept if you hit t and reject if you don’t, and you’ll very likely be right.  Moreover, this is in logspace since you just need to store which node you’re on and that node’s neighbors (every graph can easily be made d-regular in logspace while preserving connected components), which take logarithmically many bits to identify.  So what Omer did is akin to taking Primality Testing (which had an efficient randomized algorithm – i.e. it was in BPP) and finding a deterministic algorithm for it (i.e. putting it in P); he moved USTCON from BPL to L.  We’re knocking off randomized problems one-by-one to show they can be derandomized (hinting that randomness doesn’t give any power)!
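Here’s a minimal sketch (assuming Python with networkx; the toy graphs are made up for illustration) of that randomized algorithm.  A Python simulation says nothing about space, of course, but notice that the walk only ever remembers the current vertex (plus a step counter), which is the heart of why it fits in randomized logspace:

```python
import random
import networkx as nx

def ustcon_random_walk(G, s, t, steps=None):
    """Accept iff a polynomially long random walk from s ever visits t."""
    n = G.number_of_nodes()
    steps = steps if steps is not None else n ** 5   # the safe polynomial bound from above
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(list(G.neighbors(v)))      # only the current vertex is remembered
    return v == t

G1 = nx.path_graph(8)                                        # connected: 0 - 1 - ... - 7
G2 = nx.disjoint_union(nx.path_graph(4), nx.path_graph(4))   # 0..3 and 4..7, disconnected
print(ustcon_random_walk(G1, 0, 7))   # True with overwhelming probability
print(ustcon_random_walk(G2, 0, 7))   # always False: no walk can cross components
```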

But, more than that, USTCON was an important problem to derandomize!  It has its own class defined by it!  So some background here: like the super-important P vs. NP problem, there’s the analogous L vs. NL problem.  And, just like 3-SAT is the flagship complete problem for NP, STCON (the directed version of USTCON) is the flagship complete problem for NL.  So, naturally, if STCON is so important and foundational to the L vs. NL problem, we wonder about the special case of USTCON.  It seems easier, like it doesn’t need nondeterminism.  Where does it lie?  Does it shed light on the STCON problem?

At a loss for where USTCON should sit, Harry Lewis and Christos Papadimitriou captured it with the class SL.  By defining a notion of a “symmetric” Turing machine, they showed that USTCON is complete for SL, the class of problems decidable by logspace symmetric Turing machines or, equivalently, just the class of decision problems reducible to USTCON (Christos has told me before that he thinks of complexity classes in terms of their complete problems, as they exactly capture the notion of that class).

So, noting that USTCON\inBPL, we have a picture of L\subseteqSL\subseteqBPL (and, since USTCON trivially reduces to STCON, also SL\subseteqNL).  Omer’s result not only puts USTCON in L, but collapses the first containment so that SL=L, bringing BPL ever closer to L.

The Sketch

The main idea of solving USTCON in logspace is, in Omer’s own words, that “if you want to solve a connectivity problem on your input graph, first improve its connectivity.”

The Dream

Imagine for a moment how simple life would be if all of our graphs were “well connected.”  What if, no matter what node I set you on, you knew you could always get to any of the other nodes in just 3 hops!  That is, there was always some path from whichever node you’re on to any other node you’d want to be at that was only 3 edges long.
Then USTCON would be so simple in this dream world!

All we’d have to do, to see if s is connected to t, is start at node s and go through all length-3 paths from there to see if we ever hit t!  If we hit t then they’re certainly connected, but if we don’t then they can’t be connected, since every node s is connected to is reachable from it by a length-3 path.  And since each node necessarily has fewer than n neighbors, you can branch into at most n possible choices at each step of your length-3 path; this comes down to n^3 possible length-3 paths in total, which are entirely enumerable with \text{log} (n^3)=3 \text{log} (n) bits.  Thus we easily solve USTCON in logspace!

Expand Your Reality

You wake with an excited jolt, your mind lingering in a dream while reality slips over it.  Your lips thin with disappointment as you are reminded that reality is not a dream world and you are not always given such idyllic graphs.

But, as your more hopeful dream-persona takes its leave, it stumbles over a synapse and ignites a spark: what if we lived in a dreamy enough world?

Of course not every node is just 3 hops away from every other node in all graphs, but what if you could make them not too far?  We call the maximum distance between any two nodes in a graph that graph’s diameter.  Our dream was a world full of only graphs of diameter 3.  And, while that extreme is ludicrous, what if we could at least have a decently small diameter?  I mean, we saw how useful having a small diameter was; that type of connectivity makes testing connectivity a lot easier!

So, Omer’s idea is to find a way to make a graph much more connected (in terms of its diameter) so that we can easily decide whether s and t are connected.

In fact, we have a suite of graphs that have very nice connectivity properties: expanders.  Intuitively, expanders are highly connected, but they’re also very “sparse”; that is, they act like a complete graph while having very few edges (which makes things simple).  Best of all, expanders have logarithmic diameter!

If we’re given a constant-degree expander then we can start at node s and look at all logarithmic-length (instead of length-3) paths to see if we hit t!  And, since the degree is constant, we only have a constant number of branches to choose from at each of our logarithmic number of steps…a constant to a logarithm will be polynomial.  Thus, again, we can entirely enumerate all paths with just a logarithmic number of bits and thus go through them and test s-t connectivity in log space!
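For a sanity check of that counting argument, here’s a minimal sketch (assuming Python; the toy graph and the helper reachable_within are made up for illustration) of the brute force from “The Dream,” generalized to at most d branch choices per step for k steps.  With d constant and k logarithmic there are only d^k = poly(n) choice sequences, each describable with O(\text{log} (n)) bits, which is exactly what a log-space machine can cycle through:

```python
from itertools import product
import networkx as nx

def reachable_within(G, s, t, k, d):
    """Brute-force test: is t reachable from s in at most k hops,
    assuming every vertex has at most d neighbors?"""
    if s == t:
        return True
    # Each choice sequence in {0,...,d-1}^k is itself the logarithmically-sized
    # counter a log-space machine would maintain.
    for choices in product(range(d), repeat=k):
        v = s
        for c in choices:
            nbrs = sorted(G.neighbors(v))
            v = nbrs[c % len(nbrs)]          # interpret choice c as "take the c-th edge out of v"
            if v == t:
                return True
    return False

G = nx.path_graph(8)                          # 0 - 1 - ... - 7, max degree 2
print(reachable_within(G, 0, 4, k=4, d=2))    # True: 0 -> 1 -> 2 -> 3 -> 4
print(reachable_within(G, 0, 7, k=4, d=2))    # False: 7 is seven hops away
```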

Now we’ve got to scratch our heads a little bit.  This is less unrealistic than our dream…but we’re still not always just handed constant-degree expanders on a platter.  How do we get such an enviable critter when all we have to work with is an arbitrary crude graph?

First off, every graph is already an expander!  Just maybe not a very good one…

That is, each graph is either really well connected or it has bottlenecks in it or is somewhere in between, and we have a way to measure that: (vertex) expansion.  This expansion measure of a graph G, called the Cheeger constant \phi(G), is between 0 and 1 and informally is bigger if things are more well connected and smaller if there are bottlenecks.  And, loosely, we call a graph a \lambda-expander if its Cheeger constant is about \lambda (we actually often use the second largest eigenvalue of the graph’s adjacency matrix to define expanders since it’s a good approximation to the Cheeger constant as shown in the Cheeger inequalities).  So every graph will be a \lambda-expander for some \lambda, it’s just that some may have particularly poor \lambda\text{'}s.
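For reference, one standard form of the Cheeger inequalities (stated for a d-regular graph G in terms of the edge expansion h(G)=\min_{|S|\le n/2}|E(S,\bar{S})|/|S| and the second largest adjacency eigenvalue \lambda_2, a slightly different normalization than the \phi(G) above) is \frac{d-\lambda_2}{2} \le h(G) \le \sqrt{2d(d-\lambda_2)}: a large spectral gap d-\lambda_2 certifies good expansion, and vice versa.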

Think Within The Square

The goal is to take our graph (which is a poor expander) and turn it into a better expander.  And the easiest way to do this is to raise the graph to some high power!

If you haven’t played much with spectral graph theory, this may seem like a weird thing to say – raise a graph to a power – but the whole crux of spectral graph theory is relating graphs to matrices/linear algebra.  And we certainly know how to raise a matrix to a power!

So simply, look at a graph’s adjacency matrix, raise that to some high power, and reinterpret that new matrix as the (weighted) adjacency matrix for a graph that we just “operated on” (note that I’m being really glib in all of this, add all your self-loops and whatever you’ve got to do, people).
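Here’s a minimal sketch (assuming Python with numpy and networkx; the path graph is just a made-up toy) of exactly that operation: take the adjacency matrix, raise it to a power, and reinterpret nonzero entries as edges:

```python
import numpy as np
import networkx as nx

G = nx.path_graph(5)                       # 0 - 1 - 2 - 3 - 4
A = nx.to_numpy_array(G, dtype=int)

def graph_power(A, k):
    Ak = np.linalg.matrix_power(A, k)      # entry (i, j) counts length-k walks from i to j
    return (Ak > 0).astype(int)            # keep only "is there a length-k walk?"

print(graph_power(A, 2))
# Vertex 2 is now adjacent to 0 and 4 (its 2-hop neighbors) and picks up a self-loop
# (go out and come back).  This is also where the degree blow-up discussed below comes
# from: squaring a d-regular graph roughly squares its degree.
```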

But, why does this help?

Well, our resulting graph is a lot more connected and has much better expansion.  In fact, raising our graph to the k^{\text{th}} power results in the graph that has an edge between two nodes iff there was a walk of length k between them in the original graph.  This allows us to connect disparate parts of the original graph so that our new graph is well connected and has high expansion (if we properly normalize the original adjacency matrix, this can also be thought of as taking a random walk on the graph and mixing this Markov chain).  Note that t is still reachable from s iff it was in the original graph.

OK!  So now we’re getting ourselves closer to USTCON with these nicer to handle expander graphs (with small diameter)!

In fact, it can be shown that any (connected, non-bipartite) graph can be turned into a (good) expander by raising it to just a polynomial power.  Which, lucky for us, can be done by squaring the graph a logarithmic number of times!

So it looks like we’re in the clear and everything boils down to squaring a graph.

What’s the problem then?  Let’s just take any graph and square away until we have an expander with small diameter reminiscent of our dream and then just go ahead and brute-force solve USTCON!

Well, we need to remember that we need constant-degree expanders for our brute-force technique to work.  But if we look at the operation of squaring the graph (say it’s d-regular), we get the new graph that connects each node to its 2-hop-away neighbors in the original graph; this is more connected (the diameter roughly halves), but the degree of each node shot up from d to d^2.  Every time we square the graph (each of the logarithmically many times) we square the degree!  That certainly doesn’t leave us with a constant-degree expander, and it ruins our approach.

This leaves us at the crux of Omer’s insight.  He gets around this problem by finding a new notion of squaring the graph to increase its expansion properties, yet leave its degree low.

Ziggy Zagdust

After squaring a graph we now have better expansion but have increased the degree.  We would like a way to keep that oh-so-nice expansion gain but be able to decrease our degree again.  To do this, Omer repurposes his zig-zag product, which was introduced in 2000 for other reasons (and which went on to win a Gödel Prize).

This allows us to take our squared graph G, which is kind of expand-y but has large degree, and zig-zag product it with a small (good) expander H so that the resulting graph inherits G\text{'}s expansion along with H\text{'}s degree.  So we don’t lose much of the expansion gains we got from squaring G but, as long as H is small and has small degree, we can keep the degree small.

The main idea is to take our squared graph G and replace each of its vertices by a copy of H!  The notion is that this explodes each of G\text{'}s large-degree vertices into a bunch of little vertices, so that its high degree is spread amongst them and no vertex has too many edges on it.  And, if H is a good expander, then these replacements don’t injure G\text{'}s connectivity by much: the previous connectivity is only stymied by having to step through H a little bit on your path between nodes but, since H is well connected, that’s not much of a problem.  (Technically this is the replacement product, and the zig-zag product connects edges of the replacement product a little more cleverly to really preserve G\text{'}s expansion: the edges come from taking little zigs within a copy of H with big zags between copies of H.  See Luca’s treatment for more detail and some nice visuals.)

So, if our graph G was D-regular (and D is just too damn big for us) and we have a smaller expander H that is d-regular (where d is much smaller and nicer), then the zig-zag product ends up giving us a larger graph (with |G||H| vertices) which is only d^2-regular.  (I’ve really glossed over it, but we can easily, in logspace, turn any graph G into a regular graph whose connected components are non-bipartite, and finding some small constant-degree expander H is also doable in logspace, as this was actually one of the original uses of the zig-zag product.)
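Here’s a minimal sketch (assuming Python with networkx) of the replacement product just described, the simpler cousin of the zig-zag product; the zig-zag product wires the inter-cloud steps more cleverly to preserve expansion (and is d^2-regular rather than (d+1)-regular), but the “explode each big vertex into a copy of H” picture is the same:

```python
import networkx as nx

def replacement_product(G, H):
    """Replacement product of a D-regular graph G with a (small) graph H on D vertices."""
    D = G.degree(next(iter(G.nodes())))
    assert all(G.degree(v) == D for v in G), "G must be D-regular"
    assert H.number_of_nodes() == D, "H needs one vertex per 'port' of G"
    H = nx.convert_node_labels_to_integers(H)        # name H's vertices 0..D-1 (the ports)
    ports = {v: sorted(G.neighbors(v)) for v in G}   # a fixed numbering of each vertex's edges

    R = nx.Graph()
    for v in G:                                      # each vertex of G becomes a "cloud": a copy of H
        R.add_edges_from(((v, i), (v, j)) for i, j in H.edges())
    for v, w in G.edges():                           # one inter-cloud edge per edge of G,
        R.add_edge((v, ports[v].index(w)), (w, ports[w].index(v)))   # joining the matching ports
    return R

G, H = nx.complete_graph(5), nx.cycle_graph(4)       # G is 4-regular; H is 2-regular on 4 vertices
R = replacement_product(G, H)
print(R.number_of_nodes(), set(dict(R.degree()).values()))   # 20 vertices, all of degree 3 (= 2 + 1)
```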

And so we have a way to square the graph while keeping the degree constant; we can repeat this and end up with a constant-degree expander that preserves connected components, and so we can answer USTCON on this logarithmic-diameter graph in logspace!

USTCON\inL \blacksquare

Well, That Was Random…

OK, so hopefully it seems a little more natural and a little less esoteric to be making a big deal out of something abstruse-sounding like SL=L.  To some, it can be easy to see results like this superficially as just theoreticians doing weird, inbred things; so I hope that putting it in the context of randomness makes it feel more like a natural progression and less, well, random.

But, as far as derandomization goes, it does feel a little different from how many of our other (time-based complexity) derandomization results go.  BPP=P type results are, first of all, typically conditional, whereas Omer’s result uses no assumptions; and, second of all, they often use a notion of pseudo-random number generators to derandomize.  That is, we often use good-enough randomness to make things deterministic, whereas with SL=L we just talked about graphs and spectral and combinatorial properties to derandomize all of SL.  It feels different.

So, before we finish here, I want to give a rough idea of how this result fits into the usual derandomization framework and context.  In fact, Prasad mostly presented from a paper, Derandomized Squaring of Graphs, that recast Omer’s result in terms of derandomization with some simplifications and some insights as to what was really happening in his combinatorial argument.

The key notion is the idea that an expander can be used as a pseudo-random generator.  Let’s look at a complete graph for example: if we have a complete graph on 2^r nodes, then we can think of each node as representing a possible r-bit sequence.  To choose a completely random r-bit string, we can just take a random walk on the graph and end up at a completely random node, which is an r-bit string.  But there’s something very silly about this.  To take that random walk we need to flip some coins, and at each of the, say, t steps we have to choose one of our current node’s (roughly) 2^r neighbors, which needs to be specified with r bits.  So taking the random walk takes rt bits just to get out r random bits!

But!  If we had a d-regular expander instead, then it only takes \text{log} (d) bits to specify a neighbor of our current node.  So we’d only have to generate \text{log}(d)t bits to take a random walk and end up with a “random” r-bit string.  Yet, since expanders are well connected, they act like a complete graph, so you still end up at a pretty random node after t steps.  So expanders allow us to take a small truly random seed and blow it up into a larger pseudo-random string.
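To put illustrative numbers on that accounting: to land on a (pseudo-)random r=40-bit string after a t=20-step walk, the complete graph costs rt=800 truly random bits (40 bits per step), while a d=16-regular expander costs only \text{log}(d)\cdot t=4\cdot 20=80 bits, and expansion is exactly what guarantees the walk’s endpoint is still close to uniform.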

What does this have to do with Omer’s approach?  Well, remember how we knew USTCON was already in BPL by simply taking a random walk from s and hoping to stumble on t?  This interpretation of Omer’s result says that Omer just found a fancy way to take pseudo-random walks on the graph and, most importantly, these walks are latently spawned by a pseudorandom generator with only logarithmically sized seeds: since there are only polynomially many such paths, we can enumerate them with logarithmically many bits (these paths are the ones Omer talks about taking on our expander with logarithmic diameter).  And then we can just brute force through all of them in logspace and not leave it up to chance; deterministic!

So, given a d-regular graph G, we want to sample a neighbor of our current node from G^2 (i.e. start our random walk with 2 hops!).  We can do this by sampling a random one of our d neighbors and then doing that again for our second step.  But that would take 2\text{log}(d) bits of randomness.  We’d like to use less (especially since we’re doing this recursively as we raise the graph to a power and grow our walk exponentially).  How do we use fewer random bits?  A pseudo-random generator of course!

And what better than an expander!  So on a vertex, when we want to walk to a random neighbor, let’s use some expander H to make that pseudo-random decision for us.  And as we get further into this idea, we might see that this is kind of like replacing the node we’re on with our expander H.  Omer’s zig-zag product is just kind of a combinatorial way of taking a pseudo-random walk on our graph being squared where the copies of H are implicitly the pseudo-random generator plopped right into our actual graph’s structure!

In this view we don’t actually use the zig-zag product, but instead realize it to be more of a conceptual framework that lets us take pseudo-random walks; we now don’t even have to explicitly take the product and grow the number of nodes.  We can just operate on the same number of nodes but sample from G^2\text{'}s edges recursively as we walk (and the exponentiated graph is implicit as we walk).

So, on a high level, that’s it.  Omer found a way to use expanders to take pseudo-random walks on graphs with a logarithmic seed to brute force everything in logspace and be sure everything is hit in that walk.  So, without any assumptions, we have a brute-forceable pseudo-random generator that allows us to derandomize SL.  And thus SL=L falls nicely into our context of derandomization.

While this ended up being long, there are still lots of details I left out.  But that’s the main skeleton of the proof.

Draw the rest of the damn owl.


Help

My uncle died yesterday.  From obesity and depression.  But mostly from living in a culture where asking for help is a sign of weakness and emotional needs are strangled with stigma.

So, this post is in memory of my uncle.  And, sure, while I didn’t know him particularly well, there are beautiful things to say about him and many reasons to talk about why he was a good person.
But what I’m feeling right now – what’s upsetting me right now – is a disgust for our culture.

I need to write this post because our culture wastes lives and it wastes happiness and it wastes humanity.

We live in a culture where asking for help is a sign of weakness.  And emotional needs are strangled with stigma.
I have to force myself to write this post now because of the “shame” associated with talking about help.  But it’s talked about too rarely.

My partner and I are getting help.  She and I have been seeing a “communications coach” for the past month or so.  And immediately I feel like I have to defend myself because this culture somehow makes this feel like an admission of defeat.  I feel like I’d have to make you aware that we were probably doing the best we’d ever been when we decided to see our coach.  I feel like I’d have to make you aware that we were happy and felt very solid in our relationship.  I feel like I’d have to make you aware that our recent engagement did not put any strain on our relationship and that we would have felt solid going into our marriage without getting help.  I feel like I’d have to make you aware that we were very proud of our communication and felt that we could (and did) communicate openly about anything before seeing our coach.  I feel like I’d have to make you aware that we were excited by new ways to communicate and connect and were curious about ways to improve and grow what we already felt strong about.  I feel like I’d have to make you aware that there was no impending urgency to this and that we would have stayed together whether or not we did this.
I feel like I’d have to make a whole slew of excuses to make you aware that there was no “defeat” here.

I feel like I’d have to argue that we should always be looking for ways to improve and grow, no matter how solid things currently are.  I feel like I’d have to argue that it’s actually probably even best to look for help and advice when we are at our most solid; or, as Ze Frank puts it on his thoughts on self-help books, “when we’re most in need of help, we’re the most vulnerable to bad advice.  So probably the best time to read self-help is when you least need it.”  I feel like I’d have to argue that we don’t need it, we want it.  And then I’d feel like I’d have to retract my defensive statement and argue that, even if we did need it, then that should be even less “shameful” to look for help and to improve.  I feel like I’d have to argue – to defend myself – that we made it (note the competitive language still ingrained in me as if there was a battle to be fought) 8 years just fine and that we could have made it another 8 years and so on without seeing anyone and that the impetus here was not difficulty but a state of mind that improvement is good and that asking for help is not shameful.  I feel like I’d have to argue that your connotations – that this is an intervention where someone comes in and fixes things that were failed at – are wrong and that we instead are looking for diverse perspectives and techniques to how we could grow and make life easier and more fun.  Life!

I feel like I’d have to despair that we allow ourselves to be helped in almost every aspect other than the emotional.  I feel like I’d have to despair that every athlete has a coach and every academic has an advisor (if there’s one thing I’ve learned in grad school, it’s to ask questions often and without shame) and every human has a doctor, and we readily take their advice and drills and techniques from their wealth of knowledge that has accumulated throughout history and we use it to grow and improve.  I feel like I’d have to despair that we don’t allow ourselves the same when it relates to our emotional and interpersonal health.  I feel like I’d have to despair that “coach” seems like a euphemism when applied to a “communications coach” but that there isn’t a whiff of stigma when applied to the physical realm (hell, half of each Rocky movie is Rocky trying to convince someone, who he realizes has experience and tools and will provide accountability, to be his coach).  I feel like I’d have to despair that we recognize that we stand on the shoulders of giants in most everything we wish to learn and improve on but don’t extend that humility – we just nip at the heels of giants – to emotional fitness.  I feel like I’d have to despair that our culture shames asking for help as a sign of weakness and strangles emotional health with stigma.  I feel like I’d have to despair about hypermasculinity and that there’s a need to be “strong.”  I feel like I’d have to despair that my uncle was too damn strong to be weak.

 

 

And, finally, I feel like I’d have to temper all my exaltations of my relationship with the fact that we are still human.  We did have insecurities and “problems” and recurring arguments.  And, while we did have good communication and we were solid, there were things that were still difficult for us.  I feel like I’d have to say this so that people are aware that not everything has to be perfect before help is asked for (the opposite should be true!).  And I’d have to say that the communication we had took work and time and wasn’t always easy.  And, for some reason, saying this, regardless of the fact that every relationship between two people will have some difficulties, still feels like an admission of defeat.

And it is all of these thoughts and excuses and admissions – piling up, avalanching over – that overwhelm and bury whenever somebody thinks about getting help.
This is why people don’t get help in the first damned place.  This is why we don’t even talk about it.

We live in a culture where asking for help is a sign of weakness.  And emotional needs are strangled with stigma.

My uncle was willing to take medicine for diabetes and high blood pressure.  He was willing to do physical therapy after a gastric bypass surgery.  But as soon as the illness to consider was no longer corporeal, as soon as it was emotional, he was at the mercy of a culture whose lumbering heft left him unable to ask for or accept help.

Introspection and communication are problems that can be helped and improved.  Depression is a disease.  It can be helped.  It can be improved.

Pride and shame are home-wreckers.  Pride and shame brew stagnation.  Pride and shame grow ignorance.  Pride and shame are isolators.  Pride and shame are killers.

 

My partner and I are getting help.  My partner and I are getting help in the same way I ask for help opening a door when my hands are full.  In the same way I tried a half-hour class on bouldering techniques when they happened to offer it while I was climbing at the gym – to see if there was something new I could learn or improve on.  In the same way as when I need to borrow toll money to cross the SF bridge.  In the same way I go to some talks on topics I’m already pretty familiar with.  In the same way I do for anything that doesn’t involve a bizarre and suffocating stigma.

Help me understand why we can’t ask for help without admitting some sort of defeat.

Help me understand why we have to be ashamed of talking about it.

Help me understand why I wouldn’t have said any of this if my uncle didn’t die.  Why there’s such a stigma around help that I would be ashamed to confess that I seek it.  Why I had to be slapped in the face with a death to be angered enough with this stigma to say anything.

Help me understand why the culture and stigma is so prevalent and undiscussed that I feel I need to post this to at least attempt a conversation.

Help me understand why we live in a culture where asking for help is a sign of weakness.  And emotional needs are strangled with stigma.

Help me make my uncle’s death not in vain.

Help me talk about this.

Help me change this.

Help me ask for help.

Help


Two Horses’ Asses

Uh-oh…
Have I learned nothing from seeing Scott Aaronson repeatedly vilified for posting about sociopolitical views to the point that he created a whole category for the posts that set inflamed readers at his throat?!
And I don’t even have the clout to weather such onslaughts!

Yet, here’s a post about gender and racial (in)equity, along with its manifestation in academia while my blog is still in its meandering infancy…

I started this post about four months ago but then, like many things in grad school, it fell by the wayside.

What initially prompted me to start this post was that I was GSI’ing (TAs at Berkeley are called Graduate Student Instructors, since we’re not using enough acronyms yet) a graduate cryptography course last semester.  It being a graduate course, my workload was fairly low, and when deadlines weren’t near I sometimes didn’t get any students coming to office hours.  This was one of those times.

There were some undergrads hanging around the area though.  Since I didn’t expect to have any of my students coming and needing the space, I didn’t care to kick the undergrads out so they just stayed and studied for some systems course.

This was all fine and dandy until near the end when they started talking about women’s clubs on campus.  One of the students (a male) was talking about how it didn’t make sense to have clubs and conferences just for women and how he should be able to start a club for men.

To be fair, he seemed to be doing it partially for shock value and to be controversial and impressively opinionated in front of the two female students and one other male student studying with him.  But, fashionable jocularity aside, I’ve continued to hear enough about men’s activism and clubs, and how affirmative action isn’t necessary anymore, and how words can be used differently than the ways that hurt people, and how all lives matter, and how using terms like unconscious bias when discussing gender equity in academia is just feminists blowing off steam.  And after a month (with an extra day due to leap year) of hearing how there should be a white history month, I feel I finally need to get this post out here.

So all I have is a really short, really simple question:

Why are road lanes the width they are?

 

 

Too vague?  Too philosophical?
It’s just a simple question.  When we make lanes on a road, why do we make them the width we make them?

OK, the obvious answer.  Because we drive cars on them and they are wide enough for a car (with a little wiggle room).  OK.  Good.  New question.

Why are cars the width they are?

 

 

Still too vague?  I mean, we could have 4 seats up front, or the car could be a whole single-file line of people.  There’s a whole slew of designs.  OK, but before you start trying to argue about centers of mass and material expenses of surface area vs. volume and aerodynamics, I’ll say it’s much simpler than that.  In fact, the answer is that cars were based off of wagons and were originally built with some of the same tools that were used to build wagons.

Ah, but you say you now have a question for me?  You want to know why wagons are the width they are?  OK, simple: they needed to be a certain width to fit all the hand-paved roads back in ancient-er times, especially where there were rails for the wheels to ride on, lest the wheels not fit or break…and all of the rails were about the same width (pretty crazy coincidence).  Another question for me?  Why were the rails the width they were?

OK, so you may have guessed by now, but I’ve already answered this question for you.  It’s in the title!  (No, I wasn’t calling the two male undergrads horses’ asses.)  We had to power wagons, and two horses were a pretty good source of power back then.  So the width of rails, and thus of wagons, and thus of cars, and thus of lanes on our current roads, is about the width of two horses’ asses.

OK, fun fact.  Cute.  I see, you’re saying our current world is the way it is because of a history that we hardly think about and just take our lane width to be the “way of the world” without ever questioning or thinking why or how it got there.  Nice parable.

But, Manuel, – I pretend my few readers say – we’re a little beyond that point now.  OK, – these readers protest – the roads have some historical story…But we’ve been to the damn moon now!  With new and unrelated technology far beyond those silly and distant wagon drivers.

OK, OK, you got me.  Space Shuttles are pretty amazing.  Maybe we’ve moved past our silly beginnings…But!  If all you have is a space shuttle, then you are having a bad problem and you will not go to space today.  You also need the two big booster rockets to actually blast off and be shot into space.  Those are made in Utah.  And the first space shuttles ever launched were in Florida.  You gotta get the booster rockets to the shuttle!

How do you do that?  My preference would be to put them on a rocket and launch them to the shuttle…but they just (boringly) put them on a train.  And that train passes through tunnels that are carved through mountains.  And those tunnels are just a little wider than the railroad tracks.  And guess how wide the railroad tracks are.  Yup, two horses’ asses.

So a major design feature of what is arguably the world’s most advanced transportation system was originally determined by the width of a horse’s ass.


 

Holy hell, that turned out to be long.  The point, though, is that context is huge.  Regardless of how removed we’d like to be from our history, it is inextricably tied to our current world no matter how advanced we think we are.

Of course a black history month shouldn’t exist when other races don’t have one, if the playing field were currently level and the world had been created yesterday.  But!  The playing field isn’t level, and the world was created a good little while ago, with a big, messy, ugly, intricate history in between that embeds black history month in a context.  And that context makes it so that our current world needs some fixing.  It no longer makes sense to ask for a white history month when history has led us to the point that black youth in inner cities have a higher prevalence of PTSD than soldiers.  It no longer makes sense to say affirmative action is no longer necessary when there are zero or one black students in every graduate class I’ve taken at Berkeley.  It no longer makes sense to say all-lives-matter-too when all lives generally have different media coverage and aren’t systemically being taken in the same numbers and questionable circumstances as black lives, held at the end point of a long and leering history.  It no longer makes sense to get offended by Kendrick Lamar’s Grammy performance for being accusatory or making us uncomfortable when we can leave that discomfort after the 6-minute performance ends, while that discomfort is his and many others’ entire lives because of the point in history they were dropped into.

Context context context

It no longer makes sense to argue that words meant to subjugate a race or gender or sexual orientation have taken on a new meaning and should be able to be used newly and freely when that new meaning was birthed from the hateful meaning and is still dripping from the same painful history line.  It no longer makes sense to say that the r-word is now synonymous with and as harmless as the word stupid when moron, idiot, cretin, dolt, and imbecile all used to be clinical terms for people with intellectual disabilities but all succumbed to the same path because of a persistent stigma.  All these words are steeped in the inextricable context of malice they were forged in, and to give the r-word a free pass into the innocuous territory that the rest reside in continues the cycle and shuns a conversation to confront the continuing stigma.  The oppressors or privileged cannot be the ones to reclaim a word.

Context. Context. Context.

It no longer makes sense to ask why there aren’t men’s clubs when most of history has been all men’s clubs and the gender gap in STEM fields is still large and lonely.  It no longer makes sense to write off new terminology as feminist-steam-blowing when the term impostor syndrome, which most every grad student has and is helped by knowing it has a name, was originally defined to describe women in the workplace because of internalized history and stigma.  It no longer makes sense to say the best person should get the job when we don’t have a good way to measure the best person, when there are gender and racial performance gaps on standardized tests that completely disappear when the only change to the test is being told that it’s a problem-solving exercise instead of a measurement of ability; stereotype threat is real.  It no longer makes sense to just tell a female student to raise her hand more and ask more questions in class when she’s the only woman in the class and may (subconsciously) feel herself to be the representative or token woman, and was born into a history where interacting with computers in youth has a full culture around it, and that culture is male.

Context!

And while many individuals may not feel like victims of this or see anything at the micro scale, it is precisely the subtle and unconscious biases that produce effects at the macro scale.  Not only is there poor representation in bachelor’s programs in STEM fields, but the gender gap widens as we look at grad school and widens even more as we move up the chain to assistant professors, then to tenured professors, then to department chairs.  The pipeline to leadership is leaking!
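To put a toy number on that compounding, here’s a back-of-the-envelope sketch in Python.  The rates are completely made up for illustration (they are not data from anywhere); the only point is that a small, easy-to-dismiss difference in how often each group advances at every stage compounds into a big gap by the top of the pipeline.

```python
# Toy illustration (not data): how a small per-stage bias can compound into a
# large gap at the end of a multi-stage pipeline.  All numbers are made up.

stages = ["bachelor's", "grad school", "postdoc", "assistant prof", "tenure", "chair"]

def survivors(start, advance_rate, n_stages):
    """Return the fraction remaining after each of n_stages transitions."""
    frac = start
    history = [frac]
    for _ in range(n_stages):
        frac *= advance_rate
        history.append(frac)
    return history

# Suppose group A advances at 50% per stage and group B at 45% -- a subtle,
# easy-to-miss 5-point difference at any single stage.
a = survivors(1.0, 0.50, len(stages) - 1)
b = survivors(1.0, 0.45, len(stages) - 1)

for stage, frac_a, frac_b in zip(stages, a, b):
    ratio = frac_b / frac_a
    print(f"{stage:>15}: group B at {ratio:.0%} of group A's representation")

# By the last stage the per-stage difference has compounded to
# (0.45 / 0.50) ** 5, roughly 59% -- a macro-scale gap from micro-scale bias.
```

With these illustrative numbers, the disadvantaged group ends up at roughly 59% of the other’s representation after five stages, even though no single stage looked dramatically unfair.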

This past year at FOCS, a random person staying at the hotel asked what the conference was about, looked at the schedule, and commented on how few women were presenting.  It wasn’t even something I had noticed until she said it, and then I realized how dismal it was.  Of course we have female rock stars in TCS, and it makes a big difference (there are a lot of women cryptographers, and I’ve always been curious whether it’s a sort of “founder’s effect” from Shafi Goldwasser being such a star), but overall there are still far more men than women.  It’s certainly not because of a lack of smart women or because the field is uninteresting.  And I can’t think of any reason other than historic, systemic, and, most importantly, subtle issues that cause us to hemorrhage talented women.

We’re losing talent!

And I’m not claiming to be innocent of missing these subtle issues myself, or of always practicing what I preach, but I think it is crucial to at least recognize that our world has a rich, impoverished history that leaves us on a very skewed playing field.  And as nice as it would be to be color-blind, or impartial towards gender, or free in how words evolve, our current situation is inextricably clawed to history and context.

I read Sophocles’ Oedipus Rex in high school (yeah, the one where he accidentally kills his dad and has children with his mother before finding out and gouging his eyes out…spoiler alert).  And, being a Greek play, it had its share of symbolism, and one symbol that struck me was that “Know Thyself” was inscribed over the temple at Delphi.  We argued that Oedipus’ main downfall was that he didn’t follow this maxim, and that this was the hamartia that led him to tragedy.

How unfair!  I usually think of a protagonist’s tragic flaw as hubris or greed or wrath or jealousy or, you know, any of those petty vices that could actually be thought of as flaws.  But just not knowing himself?  That could cause such a tragedy?!

I mean, couldn’t a person redefine themselves in life?  Don’t we have such admiration and poetic attachment to the notion of rebirth and cleansing and being a self-made person?  But his downfall was just that he didn’t know his history or where he came from?

I think the reason it bothered me so much is that the flaw was so subtle it seemed almost unreal.  Like we could always start fresh and remove ourselves from our history.  But the fact that this parable’s lesson is so subtle yet so monumentally tragic makes it one of the most important moral lessons I know of.

No matter how much we’d like to think we are above what’s come before us or the context we are thrust into.  No matter how advanced we think ourselves to be.  We cannot remove ourselves from context, and ignoring it can lead to huge tragedy.  The suffocation of half of humanity’s knowledge.  The deriding of groups of people least able to advocate for themselves.  Child PTSD.

So my hope and goal for the future is to, before deciding whether or not to give a rat’s ass, think about two horses’ asses.


Greatest Hits of TCS (or This Really IS a TCS Blog, I Swear!)

This semester Prasad Raghavendra is holding a course called Theoretical Computer Science’s Greatest Hits.  It seems like it’s going to be a lot of fun!

Anywho, I thought I’d keep you folks all up to date and give a synopsis of each session.  This is of course for your benefit to get an idea of the landscape of TCS and not at all my selfish attempt to force myself to have a responsibility to read and understand the material…

(I hope to blog about things other than this in the meantime too…but that may fall by the wayside a little bit this semester.  That’s why categories and tags are great!  You can always find the topics you like me blogging about, even when I go on a binge about one you might not be interested in!)

This course is essentially a reading group; there’s a (subjective) list of the greatest hits of TCS (roughly within the last 10 or 15 years), and each student reads whichever one they’re most curious about and presents it to the class.  It’s a 3-hour Monday class, and we generally plan to go over one paper (greatest hit) per week.

My plan is to make a post once a week giving a summary of the greatest hit and what was covered.  This is my plan…so I’m sure God is having a little laugh to herself right now.

But, if things go well, I’ll be learning, you’ll be learning, everybody wins.

A note about the class: What I love most about it is that Prasad is putting heavy weight on learning about the results in the context of the TCS landscape, and he wants us to paint the picture of why each result is considered important, what led up to it, and what it inspired.  So a good portion of presenting these results should be rooted in TCS culture and the human aspect, which I absolutely love.

And Prasad has set a strong example right out of the gate by giving the first presentation himself (on the SL=L result, which I’ll discuss in my next post) with a beautiful amount of background and context.  This is going to be exciting!

That being said, it’s hard to say who the audience is.  Well…actually, it’s not hard to say.  The audience is the people who gather in Soda 320 at 3pm on Mondays.  That’s the audience; this is what Prasad said.  With a title like TCS’s Greatest Hits, the course got a bit of interest, from hardcore theory people to systems people who thought it would be a good survey class.  So it’s a wide-ish audience, and the TCS background is meant to be minimal-ish, but some heavy-duty TCS ideas and intuitions are still being thrown around (even if they are kinda built from the ground up).  Still, Prasad told us we should make our presentations with this audience in mind…“this audience” meaning these specific people sitting in the room.  So my blogging about this may be a bit of an audience mismatch, but we’ll bear through it.

Finally, as a disclaimer, I’m going to relieve myself of all responsibility and say I may not be entirely faithful to either punctuality in blogging or the presentations themselves.  I may skip things, I may add things, I may focus on what I know least about, I may focus on what I know most about.  I hope to really bring out the context of these works and give a sketch of how the result was achieved (which is what we’re meant to do in the course) but the format and focuses of how I do that will emerge as I start doing them.

Prasad has started the course off by proving Omer Reingold’s Gödel-Prize-winning result that SL=L (although we’re doing it from the perspective of Rozenman and Vadhan’s “simplified” proof).  This first talk will span two classes, and after it’s done I will post about it.  I’m thinking about following up the next week by presenting (and then blogging) the QIP=PSPACE result.  Below is the current (preliminary) list of papers we may want to read.

PLEASE LET ME KNOW IF THERE’S A PAPER THAT YOU THINK SHOULD BE ADDED!

Note that this is not the order they will be in (the list and order may be updated as they happen though).  Let’s have fun!

  • Solving Laplacian Linear Systems

A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, Zeyuan Allen Zhu

  • SL = L

Undirected ST-Connectivity in Log Space
Omer Reingold

  • Homomorphic Encryption

Efficient Fully Homomorphic Encryption from (Standard) LWE
Zvika Brakerski, Vinod Vaikuntanathan.

  • Unique Games Conjecture & Optimal Hardness for MaxCut

Optimal Inapproximability Results for MaxCut and other 2-Variable CSPs?
Subhash Khot, Guy Kindler, Elchanan Mossel, Ryan O’Donnell

  • Computational Phase Transition

Computational Transition at the Uniqueness Threshold
Allan Sly

  • QIP = PSPACE

QIP = PSPACE
Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay, John Watrous

  • ACC lower bounds

Non-Uniform ACC Circuit Lower Bounds
Ryan Williams

  • Interlacing Polynomials

Interlacing Families I: Bipartite Ramanujan Graphs of All Degrees
Adam Marcus, Dan Spielman, Nikhil Srivastava

  • Information Complexity vs Communication complexity

How to Compress Interactive Communication
Boaz Barak, Mark Braverman, Xi Chen, Anup Rao

  • Sparsest Cut

Expander Flows, Geometric Embeddings and Graph Partitioning
Sanjeev Arora, Satish Rao, Umesh Vazirani

  • Nash is PPAD complete

The complexity of computing Nash equilibria
Constantinos Daskalakis, Paul W. Goldberg, Christos H. Papadimitriou

  • Oblivious Routing

Optimal hierarchical decompositions for congestion minimization in networks
Harald Räcke

  • Computing Maximum Flow

Nearly Maximum Flows in Nearly Linear Time
Jonah Sherman

  • Alternate Proof of PCP Theorem

The PCP Theorem by Gap Amplification.
Irit Dinur

  • Indistinguishability Obfuscation

Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits
Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, Brent Waters

  • k-SAT Threshold

Proof of the Satisfiability Conjecture for Large k
Jian Ding, Allan Sly, Nike Sun

  • Two Source Extractors

Explicit Two-Source Extractors & Resilient Functions
Eshan Chattopadhyay, David Zuckerman

  • Lower bounds on size of LPs

Exponential Lower Bounds for Polytopes in Combinatorial Optimization
Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary, Ronald de Wolf

  • Lower bounds on size of SDPs

Lower bounds on the size of semidefinite programming relaxations
James Lee, Prasad Raghavendra, David Steurer


Bowie In Berkeley

In my last post I gave my little tribute to David Bowie.  Now I want to drop him in Berkeley and see how he’d fare.

I spend a lot of time (maybe too much time) thinking about how to navigate academia and how to grow continuously and in ways that frighten me.  Of course Bowie would do just fine in Berkeley, with his weirdness being exalted, but what I’m curious about is how his way of living would survive academia.  I want to see how his forceful soul would brave the culture of academia and how I can channel it as I go forward.

In my tribute, I singled out Bowie’s courage as the aspect that I admire most and hope to learn the most from.  Oh, that courage.  To consistently reinvent himself and his sounds into something unknown and bizarre, leaving behind the well-trod and proven success he could have clung to.  This notion of reinvention terrifies and excites me greatly.  And inspires me.  It is a courage that I know will pump courage into me as I trek through academia.

In a well-circulated talk, Dick Hamming has similar views on self-reinvention: “Somewhere around every seven years make a significant, if not complete, shift in your field… When you go to a new field, you have to start over as a baby. You are no longer the big mukity muk and you can start back there and you can start planting those acorns which will become the giant oaks.”

Start over as a baby.  How terrifying!  After all the clout and momentum you built up.  Just to jump into a possible failure.  This is the courage I admire so much in Bowie.  And to see him succeed with such humanity in his leaps!  This is the Bowie I need to channel through me as fear coos to me under the guise of practicality or convention or exhaustion or contentment.

But, besides sheer courage, Bowie also had some practical ways that would have come in handy in academic Berkeley.  Creativity and productivity are evasive critters, scurrying in our chests, teasing the heart.  Discipline is a necessary stimulant, and techniques are needed to coerce these critters to play within the heart.  And Bowie had some techniques up his sleeve:

“Writing a song for me always felt…it never rang true.  I had no problem writing something for Iggy Pop or working with Lou Reed…I can get into their mood and what they want to do.  But I find it extremely hard to write for me.  So I found it quite easy to write for the artists that I would create.  I did find it much easier, having created the Ziggy [Stardust], to then write for him…even though it’s me doing it.”

I’ve heard it said that Walt Disney had three different personalities: “the dreamer, the realist, and the spoiler.  You never knew which one was coming to your meeting.”  Like Bowie, Disney compartmentalized his creativity into different personas that were each a breadcrumb, leading those manic critters of creativity and productivity to the heart to rage within.

Academia overwhelms me.  Often.  The sheer amount of things that could be (should be!) learned and the myriad of creative directions to pursue.  Each of them is paralyzing.  Forcing myself to work on a specific thing at given times has been an informal technique I’ve adopted to spur some productivity or creativity, because it is extremely hard to choose what would be good for me or worthwhile.  So creating personas with their own interests seems like a technique that could thrive in academia.  Disney’s success came through his juggling of his personas, and Bowie brought to life fantastical Starmen brimming with creativity even when he may have felt devoid of creativity himself.

I’ve tried to create (though have yet to implement) some personas of my own to juggle and I’m curious to see how Bowie’s lead will influence them in the future.

  • The Dreamer
  • The Worker
  • The Pedant
  • The Landscaper

Although less flashy than Bowie’s personas, hopefully one of them is a Starman…

As a slight digression, there is something that Bowie, in his death, has captured that I never expected to really see:

In the Johnny Cash biopic, Walk the Line, a young Cash is trying to record his first record when the producer stops him in the middle of a dime-a-dozen Gospel song Cash is singing.  He says to Cash “If you was hit by a truck and you was lying out there in that gutter dying, and you had time to sing one song. Huh? One song that people would remember before you’re dirt. One song that would let God know how you felt about your time here on Earth. One song that would sum you up. You tellin’ me that’s the song you’d sing?… Or… would you sing somethin’ different.  Somethin’ real. Somethin’ you felt. Cause I’m telling you right now, that’s the kind of song people want to hear. That’s the kind of song that truly saves people.”  Johnny Cash then sings Folsom Prison Blues.

You don’t come across many artists or songs that can fit that description.  For a long while now, I’ve actually classified artists by whether they overflow with enough humanity and beauty to fit it, and songs that feel like they could be that “one song” remain among my favorites and most human.  David Bowie has been doing this his whole career.  Hell, his first hit was one of these!  But now he has actually done it.

His final album, Blackstar, was actually made with the knowledge that he was dying and that this would be his last album.  Voltaire said “God is a comedian, playing to an audience too afraid to laugh.”  Bowie did more than laugh.  He sang, he wailed, he posed and pouted, he created, he danced.  He screamed while he burned.  And he burned through this world on a space-path with fervor and hunger.  And I’m looking forward to seeing how the humanity exuding from his last album “truly saves people.”

Humanity!  As much as this was a digression, I can only hope to have as much humanity in my life and to have the courage to let that into my academic scramble.

So, I’m sure there’s much to learn from Bowie in Berkeley.  His courage and creative techniques and humanity, his ambition and fear and lust for life.  This last week of winter break I plan to become the Landscaper to map out the terrain to explore for this upcoming semester, and I hope to channel at least a polynomial fraction of Bowie as I learn these new personas and what courage they have.

So, David, you’re stuck here in Berkeley so long as I’m here and have humanity left in me.  I’ll need all the courage I can get, so let’s turn on and be not alone.
