It’s been a while since I’ve posted.
Part of that is, fortunately, because I’ve been busy doing research. But it’s also because a lot of the issues occupying my brain lately have been overwhelming.
With my first trip to Israel this past summer, I spent a lot of time learning and thinking about the Arab-Israeli conflict. The Black Lives Matter movement was also highly active that summer, another highly politicized and divisive issue. And, finally, the primaries and beyond for the US presidential election were ripping through the nation.
Each of these was overwhelming, partly because I had my own opinions on each and each carried an enormous amount of gravity, but also because they were so divisive: people stayed deep within their own side, with such vitriol toward the other side that they hardly had any contact with it. And there was such pain. And such fear.
The election has happened now. And I feel I should have done more on my part to talk about it. I feel guilty that I was overwhelmed and didn’t try to start more conversations outside of my own immediate bubble.
I may talk about the election in a future post but, here, I want to talk about bubbles. Choose your favorite issue or, better yet, leave all issues at the door. I want to understand how we become so divided and isolated from each other in general and see if there are ways to change that.
The main relevant ideas I’ll focus on are in this video (don’t get scared off by its title…it’s much less upsetting than much of the current coverage of specific issues). Most everything I want to say is in the video, so please give it a watch…it’s important, and it’s worth it.
It’s scary. We’re scared and hurt and angry animals. We’ve always been. But now we have the internet. And what I want to talk about is if there are any ways to address what’s happening in the video. Namely, I want to address this part of the video:
These are bubbles. These are terrifying. How do we avoid them? How do we avoid building “totems” of the other side? How do we avoid building straw men?
I think there is a lot to say about actively doing this socially:
I know someone who, after the election, looked for a post she didn’t agree with, or even thought was terrible, and ‘liked’ it to diversify her page and pop the bubble. I’ve seen others promote this bubble-popping in recent posts and start discussions around it. And I’ve seen attempts to match people who voted differently, connecting them for conversation so they can learn from each other.
Luca Trevisan, while acknowledging the value of safe spaces, posted a call-to-action to “occupy unsafe spaces,” since permanently staying in safe spaces allows for bubbles to persist.
And, if you’re against Trump, this is a (very informal) masterclass in empathy (and this is also a slightly more formal, important insight into many Trump supporters that I’m happy I did end up sharing this past summer).
And I’ve tried to talk about context and empathy before.
But I want to abstract a bit more than these social techniques and consider technology’s responsibility in bubbles in general.
(EDIT: After seeing people interact with each other, who are on the same side, after this election, I am very scared that the social techniques will not happen. People seem to not be bothered (myself often included) to even take a step within their own bubble to have discussions about how to band together or how to fight or how to react, much less take steps to actually get out of that subdivided bubble. I’m scared. I think this is an emergent property of how people interact. And I’m scared nothing will change. And it is why I needed to write this post for my own sanity. Even if we can’t change the social aspect, maybe we can abstract to design issues that set the mold and dynamics for the emergent social aspects. Can we be constructive here?)
The Thought Police Are Here!
The ability of these “thought germs” to spread as they currently do is a direct consequence of the technology and social media platforms that arose in the past decade or so. As in the video above, can they be like a harmful virus? Do we have a responsibility to regulate this?
First off, the idea of regulating thoughts might (and maybe should) sound horrifying. We do not want “thought police.” So the question becomes, are there ethical ways to foster bubble-popping and can there be bubble-resistant social media platforms?
A simple idea I’ve heard, which feels ethical on its face (please comment if there are reasons it may not be), is to build a fact-checker into Facebook that rates a post’s claims according to their veracity. I remember seeing an ad against Trump this election season which I took for granted as true, but which, it was recently pointed out to me, is false. A built-in fact-checker, I believe, is a simple and ethical way to help prevent the kind of ‘totems’ that get built in isolated bubbles, as the video discusses.
Another simple change that helps prevent bubbles, and has actually been made (maybe without that intention), is that you can now ‘follow’ a public figure like Trump without ‘liking’ them, which lets you diversify your Facebook newsfeed without fear of ‘liking’ something your friends may not agree with.
Both of these, I think, avoid stifling free speech while still being steps toward responsibility in the design of systems that can otherwise cause divides.
Just as a purely free market was seen to have horrible consequences during America’s industrial revolution, we may also have to start thinking about the negative effects of social media and whether there are ethical ways to prevent them.
The Responsibilities of the Tech Community
The technology community often sprints forward without much consciousness of the consequences that arise. With some apps furthering gentrification and social media enabling the kind of vitriol and divisiveness we’ve seen this election, the technology community has a responsibility to at least address those tech-caused consequences when they arise.
I think it’s important to start asking questions about the media we put out. Should releasing platforms that are predisposed to exacerbating social problems, like gentrification, social segregation, and the propagation of false information, be akin to selling non-FDA-approved food?
As an extreme, what if a platform were released whose inherent structure were designed to cluster people into bubbles quickly and to favor false information? What if it even randomly sent its users fake, incendiary messages from the ‘opposing side’? Do we consider that dangerous? Or do we consider it just another platform that anyone can use and practice ‘free’ speech on? More to the point, should platforms be regulated, and can we sometimes consider them to be yelling “Fire!” in a massively crowded theater?
With that in mind, should existing platforms, such as Facebook, be examined for what tendencies of that nature they have? If we consider a social media platform as having a certain set of dynamics and properties – e.g., susceptibility to clustering, diffusion time of ideas, lifespan of false information – do we have a responsibility to build ‘safe’ and socially responsible platforms?
Much as we’d prefer the CDC to exist before airplanes are built, we may want a way to analyze “thought germs” before we have the internet (obviously a bit late for that). How can we do this ethically?
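As a toy illustration of why this kind of analysis might matter, here is a minimal Python sketch (the two networks and all names are hypothetical, invented for illustration) that spreads an idea deterministically through a “bubbly” network of two tight communities joined by a single bridge, versus a well-mixed network of the same size. The bridge acts as a bottleneck for the “thought germ”:

```python
def spread_steps(adj, seed=0):
    """Deterministic 'thought germ' spread: each round, every
    neighbor of an informed node becomes informed. Returns the
    number of rounds until everyone has heard the idea."""
    informed = {seed}
    steps = 0
    while len(informed) < len(adj):
        informed |= {v for u in informed for v in adj[u]}
        steps += 1
    return steps

def clique(nodes):
    """Adjacency list where every node links to every other."""
    return {u: [v for v in nodes if v != u] for u in nodes}

# "Bubbly" network: two tight cliques bridged only by the edge (4, 5).
bubbly = {**clique(range(5)), **clique(range(5, 10))}
bubbly[4] = bubbly[4] + [5]
bubbly[5] = bubbly[5] + [4]

# Well-mixed network: everyone connected to everyone.
mixed = clique(range(10))

print(spread_steps(bubbly))  # 3 rounds: the bridge is a bottleneck
print(spread_steps(mixed))   # 1 round
```

In a real model the spread would be probabilistic and the graphs far larger, but even this caricature shows how a platform’s wiring, not just its users, controls how ideas move.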
OK…So, What Do I Do From Here?
Lots of questions, I know. Few answers.
I think I’m still trying to process the election results and the current state of the country. I’m not sure what I do from here or what I’m doing with this post. This is some sort of fuzzy outlet. I do know, though, that I want to be constructive. I know that I want to contribute in whatever way I can. And thinking about this technological aspect seems, as far as I know, the closest thing to me that isn’t already being heavily talked about.
What should we do?
Should we consider moral responsibilities in technology and social media design?
Are there small changes (like a fact-checker) that can be incorporated into existing systems?
For theory folk, are there graph-theoretic questions to ask under this motivation? That is, the picture above invites questions about the graph’s connectivity, its poor expansion, and the related notions of mixing time and diffusion; these are well-studied, but maybe there are new questions motivated by our current issues, and new ways these tools can be used and explored.
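For instance, one well-studied quantity in this vein is the conductance of a cut: a low-conductance cut is essentially a “bubble” boundary, and (by Cheeger-type arguments) it also forces slow mixing. A minimal sketch, with a toy graph and names of my own invention rather than anything from a real platform:

```python
def clique(nodes):
    """Adjacency list where every node links to every other."""
    return {u: [v for v in nodes if v != u] for u in nodes}

# Two tight communities of 5 nodes, bridged only by the edge (4, 5).
adj = {**clique(range(5)), **clique(range(5, 10))}
adj[4] = adj[4] + [5]
adj[5] = adj[5] + [4]

def conductance(adj, S):
    """phi(S) = (# edges leaving S) / min(vol(S), vol(complement)),
    where vol is the sum of degrees. A low value means S is a
    bottlenecked community: a 'bubble'."""
    S = set(S)
    crossing = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    return crossing / min(vol_S, vol_rest)

print(round(conductance(adj, range(5)), 3))  # 0.048: a near-perfect bubble
```

One could imagine asking, say, which small interventions (added edges, reweighted feeds) raise the conductance of a platform’s worst cuts the fastest; I don’t know of work framed exactly that way.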
Are the social aspects of bubble-breaking a lost cause? Should we instead put significant energy into the platforms and media that set the dynamics of those social aspects? Do you have suggestions for the social changes too?
What ideas do you have? What do you agree with here? What do you disagree with?