A Better Social Media

Some find it strange that I am not on any social network, a decision I made out of privacy concerns. Every now and then we hear about the data abuses of social media companies, yet users are so hooked on the benefits of social networks that they simply cannot leave. Everybody's there.

But I am not everybody. It is a wonderful thing that social media allows us to connect, communicate and share information with many friends and acquaintances, each at a different level of intimacy. But I am willing to forgo these benefits because of the baggage they come with. And from a social standpoint, I suspect that social networks do more harm than good; the consequences are already evident.

Speaking of consequences, where do I start? First, as a user, you do not own your own data, and this allows companies to intrude into your life as much as they can, sometimes without consent. Second, social networks are highways for fake news, which manipulates social psychology and destabilises our society. Third, they employ algorithms and addictive designs to prolong usage, which contributes to mental health issues among users. I think an ideal social network must be designed as follows.

I own the moral right to my personal data, and if my data can be monetised, I should profit from it. But social networks today profit from users' personal data in return for nothing but a free account, some features and tons of privacy abuses. An ideal social network must offer users 100% privacy and 100% ownership of their data. And by ownership, I also mean full control over what is done with one's data: whether it is monetised, reset, wiped, and so on.
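To make "full control" concrete, here is a minimal sketch in Python of owner-gated data controls. Everything in it, from the class names to the broker_sale stub and the revenue split, is a hypothetical illustration of the principle, not a description of any existing platform.

```python
from dataclasses import dataclass, field

def broker_sale(records: dict) -> float:
    # Stand-in for a real monetisation pipeline; returns revenue earned.
    return 0.01 * len(records)

@dataclass
class DataControls:
    allow_monetisation: bool = False  # owner must opt in before any data sale
    revenue_share: float = 1.0        # fraction of the proceeds paid to the owner

@dataclass
class PersonalData:
    owner_id: str
    records: dict = field(default_factory=dict)
    controls: DataControls = field(default_factory=DataControls)

    def monetise(self) -> float:
        """Return the owner's share of revenue; refuse without opt-in."""
        if not self.controls.allow_monetisation:
            raise PermissionError("owner has not opted in to monetisation")
        revenue = broker_sale(self.records)
        return revenue * self.controls.revenue_share

    def reset(self) -> None:
        """Owner-initiated reset: discard everything collected so far."""
        self.records.clear()

    def wipe(self) -> None:
        """Owner-initiated wipe: reset the data and revoke all permissions."""
        self.reset()
        self.controls = DataControls()
```

The point of the sketch is that every operation on personal data is a method on an object the owner controls, so monetisation without opt-in is impossible by construction.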

An ideal social network must take action to curb the fake news and fake narratives spread through it. Fake news and fake narratives are a grave social concern today, and social networks are only feeding them. The root of this feeding is that social media allows the creation of fake accounts: most posters of fake news or disinformation campaigns know what they are doing and want to ensure they are not tracked down, so they create and operate fake accounts.

So we need a mechanism that ensures such fake accounts are not created, and this is within the purview of social media companies. They can and must implement an account management system that enforces a certain level of verification. This alone would curb a good amount of the fake news, verbal assault and propaganda on social media. Given user bases that run into the billions, the practicality of verifying every account is a fair question. It would not have been hard had such measures been implemented from the beginning, as the user base grew. Now that users number in the billions, it is a tough but possible job.
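As one sketch of what "a certain level of verification" could mean in practice, consider a hypothetical tiering in which an account's reach grows with how much of its identity has been confirmed. The tiers and the gating rules below are my assumptions for illustration, not any real platform's policy.

```python
from enum import Enum

class VerificationLevel(Enum):
    UNVERIFIED = 0  # fresh account, nothing confirmed
    CONTACT = 1     # e-mail or phone number confirmed
    IDENTITY = 2    # government ID or equivalent confirmed

def may_post_publicly(level: VerificationLevel) -> bool:
    """Gate public reach behind verification: unverified accounts can
    still read and message friends, but cannot broadcast to strangers."""
    return level.value >= VerificationLevel.CONTACT.value

def may_run_campaigns(level: VerificationLevel) -> bool:
    """Paid promotion or mass messaging requires the strongest tier."""
    return level is VerificationLevel.IDENTITY
```

Gating reach rather than access keeps the entry barrier low while making mass disinformation expensive: a throwaway account can still read and talk to friends, but it cannot broadcast.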

Besides account verification, there are many measures social media companies can undertake to curb fake news. The only problem is that they are resource intensive, including some measures I am not at liberty to discuss here. The important point is that there are ways social media companies can curb fake news.

Speaking of algorithms, they contribute to quite a number of problems because of the manner in which they are deployed today. Take, for instance, the mental health issues arising from social media. Feeds, which are largely curated by algorithms to offer users engaging and relevant content, drown many posts, leaving them with few likes and shares. And this is emotionally destructive to many of today's youth, who are fragile and live on social media. It is destructive because likes and shares, introduced to show appreciation and exchange information, are now interpreted by vulnerable youth as validation of who they are.

This is, in fact, a self-inflicted injury; the blame lies with the parenting and the educational system that cultivated such idiotic mindsets, where people are pushed into depression simply because no one liked or shared their posts. I am not in the habit of blaming the tool for the consequences of its misuse, like blaming a knife maker for a murder committed with a knife. So the like and share functionalities must stay and serve their purpose. But algorithms play a bad role here, like an accomplice to a crime.

And this is how: content is decisive in making a post popular, but algorithms pick slightly popular posts and make them more popular. The young know this; if their posts are not popularised by the algorithm, it acts as validation that their content is unpopular. They conclude that the posts were not good enough to kickstart virality. So we must remove the attractiveness of the post from the equation that makes a post popular, and put luck or timing in its place.

This, I agree, is a change meant to mend a self-inflicted injury. There is a reason popular posts get the limelight: they are popular, and people like to engage with such topics. It's a human thing. But the consequences are dire, and I wish an exception were made here so that we get a fair chance to fix the parenting and the educational system.

As a tentative replacement, I propose that feeds be chronological, i.e., posts are shown in the time and order they were posted. When feeds are powered by algorithms, the 'attractiveness' of a post acts as a launchpad for further popularity. But when feeds are powered by chronology, what serves as the launchpad is whether people happen to be online at that particular time: what can generally be termed the timing of the post.

Yes, timing already plays a role, but I am talking about giving it more weight. To posters, their content is already good; that is why they are posting it in the first place. The problem is not whether the content is good, but whether others validate it as good. It is the lack of validation that troubles young minds. So we remove the quality of the post from the 'what makes it popular' equation and instead insert factors the poster cannot blame themselves for. Now, because there is no algorithm popularising 'good content', if you post a selfie that turns out to be unpopular, nothing tells you that the reason is your looks. Instead, everything comes down to timing and sheer luck, and so your ego is not hurt, at least in theory.
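To make the contrast concrete, here is a minimal sketch in Python of the two feed policies. The Post fields and the engagement formula are assumptions for illustration, not any platform's actual ranking.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_at: float  # Unix timestamp
    likes: int
    shares: int

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Status quo: early engagement acts as a launchpad, so already
    # popular posts are ranked higher and become more popular still.
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Proposed: newest first, regardless of likes or shares. Visibility
    # now depends on timing (who happens to be online), not on content.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The difference to note is the feedback loop: in the engagement feed, a post's current popularity feeds back into its future visibility; in the chronological feed, it never does.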

The use of algorithms also manipulates our beliefs and thoughts: they feed us only what we want to see and hear, only what makes us happy, and push us further into the hole we have dug ourselves into. In other words, algorithms tribalise us. It is human nature to tribalise, to 'clan' together around shared ideas, thoughts and identities, but this nature is stretched to its extreme on the Internet, so much so that it, and its systematic exploitation by those with vested interests, has become destructive to society.

Stopping this exploitation is almost impossible. Suppose we try to educate people about it: education requires an open mind; only then can a person assimilate the lesson and guard himself against such exploitation. But toxic, tribalised people have neither an open mind nor a willingness to introspect; if they had, they would not be in the ditch in the first place.

Although the onus to know the other side of the story is on the individual, the consequences of failing to do so are too dire for our society. To sit here and blame a person for not doing what he ought to have done is nothing more than an excuse. But it is also near impossible to grab such people's attention, because most tribes are echo chambers. There are plenty of posts and articles on the Internet explaining each side of the story, but such opposing views never receive attention from either side; people prefer staying in their echo chambers. As for how we grab their attention, there are ways, but they are extremely expensive in money, energy and time. Note that we are fighting psychological tendencies, not a mere lack of attention. There may be a few controversial methods of grabbing attention, but a more practical way is to redesign the weapons that lead us to tribalise blindly in today's world: weapons like algorithms.

The fix I propose is that information flow be designed to follow common sense: to seek more information. What algorithms do now is mirror our instinctive or emotional response and feed us content of the same narrative, thus triggering the consistency¹ principle among already opinionated individuals and acting like an echo chamber. In the case of unopinionated minds, they are fed content of a similar narrative so as to reinforce it. What algorithms must do instead is feed us the other side of the story, because that is the rational thing to do. Algorithms must be designed to recommend related content of the opposite view, not similar, agreeable content.
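As a sketch of what such a recommender could look like, suppose each item carries a topic label and a stance score in [-1, 1], negative for one side of a debate and positive for the other. Both fields and the selection rule below are hypothetical simplifications for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    stance: float  # -1.0 .. 1.0; sign encodes which side of the debate

def recommend_counterview(just_read: Item, catalogue: list[Item], k: int = 3) -> list[Item]:
    """Recommend related items from the opposite side of the debate."""
    opposite = [
        item for item in catalogue
        if item.topic == just_read.topic
        and item.stance * just_read.stance < 0  # opposite sign = opposite side
    ]
    # Prefer the items that differ most strongly from what was just read.
    opposite.sort(key=lambda item: abs(item.stance - just_read.stance), reverse=True)
    return opposite[:k]
```

An engagement-maximising recommender would instead drop the sign filter and pick the items whose stance is closest to the one just consumed; that small selection rule is the entire difference between an echo chamber and its opposite.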

Algorithms are currently designed to show agreeable content because the free business model requires organisations to design products that maximise user engagement. And the designers know how human psychology works: we tend to listen to and watch content that agrees with us and feeds our emotions, while content from the other side often disagrees with us, so we might not watch it fully. They do not care what side you are on. All they care about is gluing you to their platform so they can collect more data from you. So they use algorithms to suggest content that keeps you glued, and it works. The more you are glued, the more data they can collect about you, the more they can exploit it, and the more profit they can make.

Profit at our expense: this is the business model of free products and services. If you are not paying for the product, you are the product, and they exploit you as one. Take, for instance, the addictive designs implemented to complement algorithms and glue users even further to their applications, which inhibits users' social skills in real life.

I do not think social media companies can adopt any measure that eliminates the need for user engagement, so long as their business model is based on user data. Their profitability depends on how much they know about a person, so they employ addictive designs to make people spend more and more time on the app, giving the companies as much of their data as possible. The tactic is simple. As long as the business model of a social media company remains anchored on the abuse of data, such tactics will continue.

So it is fruitless to expect technology companies that generate their profits from user data to change their algorithms and addictive designs. The solution is an entirely new business model. But it is very unlikely that the current social media companies will adopt one, given their responsibility towards investors. The profits from abusing data are simply too high to be matched by other business models, barring some sophisticated model yet to be invented. So chances are that social media companies will not drop their addictive designs.

There is one thing I do expect from social media companies. It has nothing to do with the product, but with the functioning of the company itself: I want them to respect and honour the free speech of their users.

I agree that speech such as posts that target vulnerable children for child pornography, lure vulnerable people into sex trafficking or slavery, or promote terrorism must be censored. But no post openly announces that it is recruiting for child pornography and that interested children should respond, or openly encourages vulnerable adults to join a sex racket or be trafficked into some form of slavery. All such motives are masqueraded as normal posts, so you cannot deploy a straightforward censorship program to prevent these evils. In fact, the use case for straightforward censorship is limited to rare cases, such as a terrorist organisation having the audacity to openly recruit on social media. In such cases, I wholeheartedly agree that censorship is warranted, because the stakes are simply too high.

But this is not what is happening at large in the social media space today. Accounts are banned and posts are removed in the name of hate speech. I do not contest that there is hate speech on the Internet, from every political and ideological group, nor do I condone such speech. But it is ridiculous what constitutes hate speech today. Any post that does not agree with a particular group becomes hate speech, liable to be removed, with the user possibly banned as well. It gets worse: say something they do not like and you can be labelled a racist, misogynist, homophobe, xenophobe and what not, even if you are none of those.

I believe this tendency has everything to do with politics: many political and social groups are pumping massive amounts of hypersensitivity and a sense of victimisation into people's minds. Social media companies play along for two reasons. First, some souls in management share a particular political and social view and therefore use company assets to advance it. Second, political parties in power pressure social media companies to eliminate content that is inconvenient to their ecosystem.

So perhaps only a new player can come in and change things. But the big tech companies simply crush such new players, as in the case of Parler. Glenn Greenwald wrote a post explaining how Parler was decimated by the big tech companies; it is worth reading.

Social media is fundamentally a medium for the exchange of speech and information, and companies must engage in activities that facilitate this exchange. Even when there is true hate speech, my opinion is that it must never be censored but kept, to serve as evidence to prosecute or discredit the poster. And this is not as expensive or resource intensive as it sounds. Opinions on the Internet are generally information warfare between two sides; it makes sense to take advantage of their mutual hostility and let them audit each other, with the social media company acting as the middle party between them.
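Here is a minimal sketch of that middle-party role, assuming a hypothetical platform where each flagged post is reviewed by volunteers from camps other than the flagger's. The names, the camp labels and the majority rule are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    post_id: str
    flagged_by_camp: str               # self-declared camp of the flagger
    reviews: list[bool] = field(default_factory=list)  # True = reviewer objects too

def assign_reviewers(flag: Flag, volunteers_by_camp: dict[str, list[str]], n: int = 3) -> list[str]:
    """Route the flag to reviewers from camps opposed to the flagger's,
    so neither side can unilaterally bury the other's speech."""
    pool = [
        user
        for camp, users in volunteers_by_camp.items()
        if camp != flag.flagged_by_camp
        for user in users
    ]
    return pool[:n]

def verdict(flag: Flag) -> str:
    """Keep the post unless a majority of opposing reviewers also object."""
    if not flag.reviews:
        return "pending"
    return "escalate" if sum(flag.reviews) > len(flag.reviews) / 2 else "keep"
```

Note that the outcome of a successful flag here is escalation, for prosecution or public discrediting, not removal, in keeping with the no-censorship position above.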

And yet, after all this, there will still be some souls who are influenced by such posts, either becoming more hateful towards others or getting hurt themselves. They are testimony to the ineffectiveness of our education system and parenting, and to the decadence of our society: we have brought up a generation so miserable and weak that they are easily driven to hate others, or to be hurt themselves, by mere posts on the Internet. If you are socialising, whether on the Internet or otherwise, you need to have a thick skin, dear friend.


  1. Refer to the commitment and consistency principle explained in the book Influence by Robert Cialdini.