Ensuring Digital Privacy


  1. Anonymity
  2. Traceability
  3. Encryption
  4. Open source
  5. Decentralisation
  6. Blockchain
  7. Non-profit vs for-profits
  8. Ensuring privacy
  9. Constitutional guarantee
  10. Privacy Consciousness

Privacy, whether in the digital world or otherwise, is a fundamental right, but it comes with certain responsibilities which most people forget or are unaware of. As with every right, privacy comes with the responsibility not to misuse it and the subjection to the spirit of justice. A right subject to certain conditions indeed raises concern among citizens that it can be misused by the authorities. But this warrants a mechanism to prevent misuse, not a judgement that paves a highway for other misuses. Fortunately or unfortunately, all rights must be subject to the prevention of their misuse or the neglect of their responsibilities.

There is, however, a section of privacy activists who do not agree with such definitions of privacy and fight for unfettered privacy. They are so myopic that they see not the many enemies but only one, who happens to be the least dangerous of all, and build their defences against this one enemy such that they inadvertently build a highway for the more dangerous ones. Their strategy for privacy is a suicide mission. They lack the vision and strategy required and do not understand when to wage a battle and when to compromise. In fact, I wonder whether they truly understand what privacy is. There seem to be some misunderstandings about privacy.

§ First of all, privacy isn't anonymity. For an individual to assert his personal space, claim the right to privacy and enjoy it, he must be identified in the social space. Identification is a prerequisite to privacy. In fact, one need not go that far to assert the difference between privacy and anonymity. Any child can tell that they are two different things by studying the literal meanings of these words.

Anonymity is a double-edged sword: we need it in some cases and not in others. In cases such as crime, anonymity is dangerous; in cases such as blowing the whistle on an injustice, anonymity is necessary. Thus, anonymity cannot be an individual right, for if it becomes one, every criminal can mask himself, take shelter in the name of a right and stall the investigation, which is a very illogical design. Nor must anonymity be criminalised, because being anonymous is not in itself a crime. And so, the creation of anonymity tools must be allowed, especially when the system cannot protect such people as whistleblowers or ensure personal privacy to citizens.

So I am not completely against the idea of anonymity. I just disagree with the description of anonymity put forward by certain groups. They say that being anonymous is required for free speech. I vehemently disagree, because anonymity and free speech are mutually exclusive outcomes. The existence of free speech can be validated only when a person speaks without any mask, asserting his identity. How is it an exercise of free speech if you speak while masking your identity, an act that carries no consequences even in a society with no free speech? Therefore, in a society that truly exercises free speech, one need not stay anonymous to speak his mind. And if one stays anonymous for fear of persecution or attacks owing to his speech, then either he is too paranoid or there is indeed no free speech at all.

True anonymity on the Internet is doubtful because identification is a requisite for communication, and the fundamental task of the Internet is communication, not storage. Even if true anonymity were achievable, I doubt whether society would allow it in the long run because of the crimes anonymity enables. To the masses, it is an easy decision to sacrifice anonymity rather than be a victim of it.

§ Traceability is another feature often discussed in the same context. My view is that the ability to trace is necessary. To those who argue that it is a violation of freedom of speech, I say that they are wrong for the same reason discussed earlier. Traceability, if it has anything to do with free speech, is a validation of free speech, for if one must express himself untraceably for fear of persecution or attacks owing to his speech, what freedom does he have, enjoy and boast about? None, by my logical deduction. Only if he expresses himself in a system where he can be traced can we judge whether free speech exists.

Let me stress that traceability is a tool that can be misused, not just by governments to persecute dissenters but in many other ways. Traceability is a double-edged sword as well. Society needs it in certain cases and not in others. But we do not ban something we need for that reason alone, that it can be misused, do we? You don't ban knives because they can be and have been used for murder. The key is to regulate the use of traceability through an independent third party such as the judiciary.

Some say that traceability interferes with encryption, and that it is therefore a bad idea. Those who say so must recheck their judgement. Encryption and traceability are two different features: encryption is the scrambling of the contents of a message using a cipher, while traceability is the ability to identify who sent a message, encrypted or not. One concerns data while the other concerns metadata. I do not think we must compromise encryption in order to ensure traceability.
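The data versus metadata distinction can be sketched in a few lines. The following toy Python example is purely illustrative (the XOR cipher, the `Envelope` type and its field names are my assumptions, not how any real messenger works): a relay can read the envelope, and therefore trace the sender, while the body stays opaque without the key.

```python
from dataclasses import dataclass

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (XOR with a repeating key). For illustration
    only; real systems use vetted ciphers such as AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class Envelope:
    sender: str        # metadata: readable by the network, hence traceable
    recipient: str     # metadata: readable by the network
    ciphertext: bytes  # data: scrambled, unreadable without the key

key = b"sixteen byte key"  # fixed toy key for a deterministic demo
msg = Envelope("alice", "bob", xor_cipher(b"meet at noon", key))

# A relay can trace who sent the message to whom...
print(msg.sender, "->", msg.recipient)  # prints: alice -> bob

# ...but cannot read its content, while the key holder can.
assert msg.ciphertext != b"meet at noon"
assert xor_cipher(msg.ciphertext, key) == b"meet at noon"
```

The point of the sketch is that hiding `ciphertext` and revealing `sender` are independent design choices, which is why traceability need not weaken encryption.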

§ Encryption has been touted to protect privacy from all threats. But don't be so complacent. Service providers can know your data if they choose to, irrespective of end-to-end encryption. Whoever encrypts your data has access to your unencrypted data, which is why they are able to encrypt it in the first place. Similarly, whoever decrypts your data has the same access to your unencrypted data. Given these cases, end-to-end encryption cannot be touted to ensure privacy from the entity that encrypts and decrypts your data, which in our context is the technology company. Encryption only ensures privacy while in transit, not from the application you use or its developers. A user simply has to trust the word of the organisation or the developers that the application will not leak data before encrypting or after decrypting.
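A minimal sketch of this trust gap, under stated assumptions: the function name `send_encrypted`, the `captured` list and the toy XOR cipher are all hypothetical stand-ins for a real messaging client. The client necessarily holds the plaintext before encrypting, so a dishonest build could copy it out at exactly that moment, and the wire traffic would still look perfectly end-to-end encrypted.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration; not a real cryptographic primitive.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

captured = []  # what a dishonest client build *could* siphon off

def send_encrypted(plaintext: bytes, key: bytes) -> bytes:
    # The client necessarily sees the plaintext here: this is the window
    # in which a malicious build could leak it, before any encryption.
    captured.append(plaintext)  # hypothetical leak, invisible on the wire
    return xor_cipher(plaintext, key)

key = b"sixteen byte key"
wire = send_encrypted(b"my secret", key)

assert wire != b"my secret"        # the network only ever sees ciphertext...
assert captured == [b"my secret"]  # ...yet the app saw everything
```

Nothing in the observable network traffic distinguishes this client from an honest one, which is why the user is left trusting the developer's word.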

Many users are unaware of this because organisations often lie about encryption and privacy. They say that they cannot see your messages, which is technically false. They can if they choose to. They must instead say that they choose not to see the messages, which brings the argument back to trusting the organisation.

§ Nor does the act of open sourcing the code ensure privacy. Many believe that if the code is open sourced, they can audit the code and therefore trust it. This is technically true, but what guarantees that the same open sourced code is running on the servers or compiled for download? Nothing, in my view; we can only make such an assumption. For all we know, it could be a modified version of the open sourced code running on the servers, or perhaps there are other programs running in parallel performing malicious activities.

I am not saying that we cannot trust open sourced code. I am merely saying that we cannot trust or distrust software or an organisation simply because the code is open sourced. We must use other factors to judge trust. Open sourcing by itself must not inspire trust; open source code does not imply trustworthiness.

Only if you audit the code, compile the software and install it yourself can you trust the code. But this implies that you don't trust the organisation or the project maintainers who have already compiled the code and either made it available for download or are running it on their servers, which makes self-compiling more of a trust-less move than a validation of trust.
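One way to check whether a published binary matches the audited source is to compile the source yourself and compare cryptographic digests. This only works when the build is reproducible, i.e. byte-for-byte identical across machines, which is an assumption here, and the byte strings below are stand-ins for real build artifacts:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of the artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for real artifacts: the binary you compiled from the audited
# source, the one the project offers for download, and a tampered copy.
my_build = b"\x7fELF...binary built from audited source"
their_download = b"\x7fELF...binary built from audited source"
tampered = b"\x7fELF...binary with something extra"

# Only a byte-identical (reproducible) build yields the same digest,
# so a match ties the download to the source you actually audited.
assert sha256(my_build) == sha256(their_download)
assert sha256(my_build) != sha256(tampered)
```

Note that this technique says nothing about code running on someone else's server; it only verifies artifacts you can obtain and hash yourself.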

I do not mean that there is no benefit to open sourcing. It helps in security audits, because more brains are generally better than a few; it lets you modify the software to fit your needs; and although open source projects do not imply trust by nature, they draw trust from their communities, as explained hereafter.

Most open sourced projects are run by communities of contributors and users whose motives are not threatening or toxic to users, but are instead based on the values of the free software philosophy and of being a good human being, thus inspiring trust in them, which is transferred to their products as well. To put it in other words, the flow of trust is reversed. You are not trusting the organisation or community because the code is open sourced; instead, you are trusting the code because of the motives and principles of the organisation or community. This transfer of trust, that if you trust the maker you can trust the product, is seen in all products, including those created by for-profit corporations. This reiterates my point that open sourcing by itself does not inspire trust.

§ Decentralised systems have shaken the status quo. Every tool and technology has its pros and cons, and one must choose a technology based on the requirements. I don't think decentralised systems will replace centralised systems: the two will run the web together, depending on use cases. Neither is good or bad. Tools and technologies have no intrinsic good or bad nature in them; what we use them for can be good or bad. Now, decentralisation is an umbrella term. True decentralisation is when there is no central party at all, even to facilitate communication or storage: data is stored on your own device and communication is established device to device. Such services come with disadvantages in areas such as performance and usability. So to discuss the matter of privacy further, we must pick out specific decentralised models.

Let's start with federation. From the privacy point of view, federated systems worsen the problem. Federated services are basically decentralised servers hosted by a person or a group, making the whole a scattered centralised model that allows cross communication. Contrary to common perception, they too run on the basis of trust, requiring you to trust not just those running your server but also those running the other servers in the fediverse you connect with.

It turns out that in order to escape the clutches of corporations, developers created a system that requires users to blindly trust more parties. This is regression. You can trust servers only if you know the people running them, which you don't. This naturally restricts one's trustees to a small number, unless one decides to rely on mutual trust; i.e., D runs the server, A knows B, B knows C and C knows D; therefore A trusts D. But such a chain of trust is not truly trust but a calculated risk, and federation simply adds to the number of parties you need to trust.

If any malicious activity is caught, there is no guarantee that we can hold D accountable. Many of those who host federated servers, apparently, are not legal companies or organisations but individuals or communities, making it harder to hold them legally accountable compared to a legally registered organisation.

I think the buzz around federated systems is because of the contrast principle. The big technology companies running centralised architectures lied, stole and abused user data, and garnered the image of evil corporations among many technologists, inspiring them to create a model that does not require trusting such entities, a natural reaction. This decentralised model, even though it in fact aggravates the concerns of trust and privacy for the reasons discussed earlier, appears to solve them only because of its image as an alternative to the evil corporations.

Perhaps this is why federated systems are popular only among techies and engineers. But whether you are an engineer or not, techie or not, you face the same issue with federated servers: trusting other servers. The only advantage engineers and techies have is the knowledge to set up their own home server. Such a thing cannot be expected from the everyday Tom or your parents and grandparents, assuming they are not engineers. Therefore, I would expect a simpler technical model for consumer products.

§ Blockchain, too, worsens the issue of privacy. Blockchain is merely a database system whose data is virtually impossible to modify, for two reasons, of which the distributed nature is the one important for this discussion. It is the distributed nature that makes blockchains an aggravator of privacy issues.

Take, for instance, the case of blockchain based domains, whose registry is hosted on multiple computers in a decentralised fashion. The biggest privacy issue with domains of any kind is that the registrant's personal information is made public. This allows spammers and marketers to send tons of mail to the registrants. Moreover, anonymous websites in the interest of social justice, such as a whistleblower website, need their privacy protected.

Two things stand between anyone and a piece of encrypted data: the encryption key and the encrypted data itself. With decentralised domains, all one needs to do to get the data is participate in the decentralised network, and the data is delivered on a silver platter. It is then only a matter of decrypting it. And if the data on the blockchain is not encrypted, the person already has it.
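The point can be made concrete with a toy replicated ledger. This sketch is an assumption-laden simplification (the block structure, the `example.bit` domain and the registrant record are invented for illustration): because every participant replicates the full chain, an unencrypted record is readable by any node that joins.

```python
import hashlib
import json

def new_block(records: list, prev_hash: str) -> dict:
    """Build a block whose hash commits to its records and predecessor,
    which is what makes the chain tamper-evident."""
    body = {"records": records, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# A one-block registry holding an unencrypted registrant record.
chain = [new_block(
    [{"domain": "example.bit", "owner": "alice@example.org"}],
    "0" * 64)]

# Every participant replicates the whole chain...
node_a, node_b = list(chain), list(chain)

# ...so any node that merely joins the network can read the record outright.
assert node_b[0]["records"][0]["owner"] == "alice@example.org"
assert node_a[0]["hash"] == node_b[0]["hash"]  # same data everywhere
```

The same replication that makes tampering impractical also hands every participant a full copy, which is the privacy trade-off described above.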

Blockchain is useful in situations where data must be non-tamperable at any cost and where privacy does not matter at all, such as the financial records of government spending or of non-profit organisations, and votes. It is very tempting to fall into the hype of blockchains, or for that matter, any hype. Do your due diligence.

§ As far as I have seen, technology designs alone cannot ensure privacy to all. Many believe that non-profit organisations are the only way to protect privacy.

I disagree. Trust in non-profits is a perceived trust. The want and need to make money is often used to judge for-profit organisations to be less trustworthy than open source communities or non-profit organisations. But the truth is, all types of organisations welcome making more money, whether it is a need or a want. It is not profit or the striving for profit that threatens privacy, but the ethos and values of the management.

Some believe that the laws governing non-profits control the greed of their management, thereby ensuring trust. But the laws governing and regulating non-profits were fundamentally created to prevent accounting fraud. There are a number of tax benefits and other incentives available to non-profit organisations, which many for-profit organisations have in the past exploited by masquerading as non-profit entities. Greed is an emotion felt within oneself; it cannot be controlled by anyone other than the self. From a legal perspective, you can always argue that non-profits are bound by such laws while for-profits are not. But these laws can be circumvented; a set of laws cannot immunise an organisation against the exhaustive list of frauds and exploitations. A thief always finds a way.

The laws may indeed limit the damage if a non-profit chooses to go rogue. But then, one cannot say that non-profit laws instil trust. Trust is earned or broken based on incidents, not on the gain or damage. The moment a management goes rogue, trust is broken, irrespective of whether the government or judiciary contained the damage. The amount of gain or damage can only determine whether the broken trust can be repaired, or perhaps matter for legal discourse or settlements. In other words, whether or not your cheating on your partner resulted in you becoming a parent, trust in you is broken. It is in the act of cheating that trust is broken.

But there is a more convincing argument that non-profit laws are of no use in protecting privacy. Privacy is not always abused for tangible profits. Most privacy abuses, especially those by corporations, are carried out so that other products and services can be built on top of the data, which may profit at some later stage. In such cases, there is no proper way to quantify the monetary value of the privacy abuse. In the case of governments, they spy on the citizenry to prevent crime, control dissent, shape narratives and so on. Here too, there is no proper way to monetarily quantify the abuse. And all of this can be carried out using a non-profit organisation.

Therefore, when the majority of privacy abuses are not intended for direct and quantifiable profits, I do not think that non-profit laws, which prevent a company from making x amount of money in a year or mandate it to spend y amount in a year, prevent privacy abuses. These laws are fundamentally for accounting purposes, to prevent for-profit companies from masquerading as non-profits and exploiting the non-profit laws.

Moreover, these non-profit laws do not prevent non-profit organisations from changing their business objectives, models, processes, values and ethos. Nor do they give users any say over such changes. So such laws are not of much help with privacy.

Hence my belief that the increased trust in non-profit ventures is a perceived trust. Both can go rogue. It is the core values and principles of communities or organisations that guide their moral or ethical character and prevent them from conducting fraud. So judge organisations and communities by their values, principles and actions, not by whether they want to make money.

§ If all these structures that are commonly considered to protect one's privacy do not really protect it, it is fair to ask what, then, is the solution to the privacy abuses we see today. In my opinion, a change in Internet business models is part of the solution. It is high time new business models that respect users' rights to their data were adopted. And of course we need strong laws that protect user privacy, although chasing justice is costly.

Such a business model eliminates the motive for data abuse in the first place. There will be no more incentives etched into the business model itself to abuse user data. It is possible that companies could still abuse data for additional profits, but this is extremely unlikely, because they would have to undo or bypass their own technical infrastructure, built precisely to conserve user privacy, in order to do so. It seems foolish to do such things given that they have a business model to focus on and a brand image to preserve.

It is the combination of business model, technical design and legal protection that can provide the general public the maximum protection from privacy abuses. I have pondered various models and I haven't come across anything as powerful as this combination.

§ Speaking of laws, European countries like Sweden and Switzerland are considered to have the best privacy laws in the modern world. But the biggest threat to a law is the legislators themselves: if a majority of the legislators want the law changed, they certainly can change it.

One can argue that citizens will protest against such changes. But such protests take time to yield results. The problem is not the change of law but what is at stake during the change; meaning that whatever needs to be compromised can be compromised during this interval.

It is my opinion that India is one of the best countries for personal privacy. What makes India different from the rest of the developed or developing world is the status of privacy. India's Supreme Court has declared privacy to be a fundamental right:

The right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.

Supreme Court of India

And it is harder to mess with fundamental rights than with laws. Suppose a law challenging users' privacy is passed in the Lok Sabha by some government; such a law can easily be challenged judicially on the basis of fundamental rights. We could even get a stay on the implementation of the law in the event the case takes a long time. The biggest validation of this is the Aadhaar case.

Because rights are often subject to certain conditions, it is valid to ask whether privacy can be ensured as a fundamental right. Conflicts between rights are judged according to which outweighs the other in terms of justice: if two rights conflict, the one that serves a larger right and upholds the spirit of justice supersedes the other. Take, for instance, the investigation of a human trafficking racket by intercepting the communications and meetings of an established offender or a convincing suspect. These are in theory protected by his right to privacy, but in practice that protection only contributes further to human trafficking, a textbook violation of the right to life and liberty of many individuals. In such cases, I believe that the trafficker's right to privacy must be infringed; the key being that he is indeed a trafficker beyond doubt, not a mere suspect.

It is imperative to say that the logic discussed above holds true only if a crime has been committed or if there is conclusive evidence, according to the law of evidence, that an individual or a group is going to commit a crime. Only then can you quantify the violation on the other side: if it is a murder, the right to life is violated; if it is religious proselytising, the right to freedom of religion is violated; and so on. Therefore, the logic in question does not justify mass surveillance of citizens. In other words, the right to privacy cannot be challenged or compromised for finding possible criminals. Otherwise, any surveillance only implies that the government has already made a judgement that all citizens, or a large mass of them, are criminals. But you are innocent until proven guilty.

Therefore, I do not support mass surveillance on account of prevention of crime. Moreover, I believe that mass surveillance does not actually prevent crime. There has been no evidence that the mass surveillance many governments have been conducting over the years has prevented any form of terrorism.

§ Despite all this information being out in the public domain, most people still use data abusive services. Those who disregard their digital privacy are of four kinds: one who is stuck in a data abusive ecosystem; another who does not realise the true scale of the threat; a third who loves personalised ads and thus shares information; and a fourth who says they have nothing to hide.

Those who are stuck in a particular ecosystem must realise that the only way to get unstuck is to migrate slowly to a more secure ecosystem. There is no other way.

Those who haven't understood the threat in its complete scale are ignorant of the industry, but ignorance is not a crime. They will value their privacy when they understand the true nature of the threat. Many journalists and privacy advocates are publishing privacy related content worth reading and watching, which must be suggested to them. We must do our part too in educating them about such practices and fake promises.

Those who love personalised ads had better get paid for the data they share. Your data is worth much more than a free account on Facebook, Instagram or WhatsApp. Soon there will be business models that allow users to earn from sharing data. But I would still ask them to realise the value of the privacy they are willing to lose for the love of personalised ads. Privacy out-values personalised ads.

Those who have nothing to hide are the trickiest people to deal with. Journalist Glenn Greenwald said an excellent thing about such people:

Over the last 16 months, as I've debated this issue around the world, every single time somebody has said to me, "I don't really worry about invasions of privacy because I don't have anything to hide." I always say the same thing to them. I get out a pen, I write down my email address. I say, "Here's my email address. What I want you to do when you get home is email me the passwords to all of your email accounts, not just the nice, respectable work one in your name, but all of them, because I want to be able to just troll through what it is you're doing online, read what I want to read and publish whatever I find interesting. After all, if you're not a bad person, if you're doing nothing wrong, you should have nothing to hide." Not a single person has taken me up on that offer. I check that email account religiously all the time. It's a very desolate place.

In fact, there is a Wikipedia page dedicated to the "nothing to hide" argument. We can only hope that such individuals understand what privacy means and how disregarding it will bite back. They have no idea what the companies are doing with their data, and they won't suffer the consequences until it is too late. Companies exploit us psychologically in every way, as beautifully documented in the 2020 documentary The Social Dilemma. It is as if there are two of me: one the real me sitting here, and the other my psychological model sitting on Facebook or Google servers, used to exploit my emotions. We are fortunate to have some individuals from the industry speaking out and educating us, but many of us are not willing to listen. There is also a documentary on YouTube called "Shoshana Zuboff on Surveillance Capitalism" that reveals the dark business model of big tech companies.

There is another argument I often hear: that there is no point moving to alternatives because our privacy is already compromised. That is a stupid argument beyond recognition, and it only pushes you deeper into the ditch of surveillance. While it is true that companies have a lot of information on us, that information gains potency only as more is added and their algorithms are trained on it. If you had 2000 bucks in your wallet and lost 300 of it, I don't think you would throw away the remaining 1700. So it is never too late to take back your privacy. That's my point!