The Matrix scenario: mindless minds

During my years as a PhD researcher and then as a class A researcher, I noted that privacy law is in essence discriminatory and biased against private individuals when they are confronted with interference from financial networks.

In the name of efficiency, officials, lawmakers and attorneys ignore the concerns of populations regarding public policies on data protection, urban planning and healthcare.

As most people lack the financial and cognitive means to use ICTs appropriately to challenge the authority of data controllers and push for improvements in privacy, citizen protection and health services, we argue that the law is being instrumentalized for the benefit of criminal networks.

This situation prevents honest contributors and independent researchers from remedying the oppression experienced by populations as a result of abusive procedures based on systematic denial and unethical uses of the technological apparatus.

This led me to design a Matrix scenario for the future: a risk scenario that takes into account the interdependence of law and money.

The Matrix Scenario

One, getting rid of intermediaries and witnesses by means of legal procedures


By suppressing intermediaries and witnesses, as law firms and clients collude in defense of their common financial interests while invoking conflicts of interest, states and tech giants have created a no man's land in matters of justice, notably regarding civil liberties and freedom and justice for all.

This situation has been more than observable in the enforcement of the GDPR in recent months, or more recently in cases of money laundering and criminal activities involving the mafia (Giuliani).

As early as May 2018, we noticed that the benefits of codification were mostly going to Google, IBM, Deloitte and Facebook: “gathering individuals’ consent for targeted advertising at far higher rates than many competing online-ad services, early data show. (…). reinforcing — at least initially — the strength of the biggest online-ad players, led by Google and Facebook.”

Beyond the GDPR, laws corrupt the intent of code written to serve the public good and protect citizens against scams and abuses, bending it toward the benefit of their clients’ global financial operations and opening the path to misguided interpretations of the code of law. That is an idea I started to nurture in some past posts.

Actually, it is a bit naive to think that EU lawmakers, in compliance with US lawmakers, had not planned the outcomes of the GDPR’s enforcement, given their extensive knowledge of procedure and of the concentrated nature of existing markets.

The estranged collaboration they more than willingly maintain with tech giants – CEOs being regularly hosted at the Brussels Parliament – clearly indicates a very close degree of acquaintance and co-optation between lawmakers and criminals, which must be pointed out as an entry point for corruption.

Two, controlling public identities at a distance 


If properly interpreted, taking into account the specificity of each context and the limits posed by the risks of individual targeting and coerced consent, the GDPR should have prevented tech giants from exerting pressure on governments and policies.

It became instead the site of a socially accepted and enabled trade in individual data for private affairs, causing situations of oppression for citizens whose data are hijacked into global databases to be controlled at a distance with the technologies of corrupted states.

This context can lead to manipulations and errors that might deeply harm citizens’ health and security.

In fact, in the administration of GDPR procedures, we noticed EU lawmakers’ tendency to share a common culture of privacy and security based on the concept of cheats and frauds that falsely criminalizes individuals.

Individuals and the masses are perceived as idiosyncratic agents of disorder, confusion and cheating who must be contained, controlled and even repressed when they endanger their clients’ secrecy, supported financially by an economy of laws that fails to address the complexity of the phenomenon.

It was impossible to miss, then, that the intention was to offload the task of adjudicating the legislation onto tech giants, to the detriment of societal progress.


Three, forcing people to run and to hide 


Tax evasion and spoliation are the main activities involved.

Google has forced publishers to ask for users’ consent on its behalf, stealing customers from small ad agencies in Europe and driving them into its own marketplace in the US with the help of its digital advertising platforms.

Thanks to the deals non-Google ad agencies had to accept to get access to the code of products such as DBM or Google AdWords for ad campaigns guaranteeing users’ consent, Google could also be spared the harsher penalties for privacy infringements, while mainstream media publishers and local producers foot the bill.

The GDPR gives American companies a striking competitive advantage over ordinary, local actors by granting big corporations the access, authority, leadership and legitimacy to monitor and manage private and sensitive data.

As a consequence, some valuable investors and high-capacity entities have already left Europe, raising concerns about Europe’s capacity to sustain a stable local economy following the application of the GDPR and security rules.

Confronted with the issues of lack of access and black boxes, people’s main options are limited to the opt-in/opt-out dichotomy, leaving no room for complexity, disruption or nuance, notably regarding specific cases of security and local surveillance that need to be handled by ICT experts with deep knowledge of ethical procedures.

In this respect, the centralization and automation of procedures is a profound mistake for democracy and for the protection of populations against corruption.

Citizens can nominally opt out of the use of sensors, microphones or cameras for some Google apps and services, but since Google allows its third parties and staff – developers and data scientists – to legitimately access private information held in mega-databases, it ultimately denies users the possibility of restricting what they disclose to governments or companies.

Moreover, most Android devices come with pre-installed default apps programmed to run with opt-in options activated, such as Google Play and Google services, or tools such as the microphone or camera. If opt-out is activated, the service can no longer be used and users are deprived of taking photos or videos with their cellphones.

In the case of invasive, repeated and ubiquitous intrusions into the private sphere through personal cellphones, trackers and image-recognition technologies via CCTV, the opt-out option is not an option.

This situation causes anxiety and hyper-vigilance. Users may then have no remedy other than to disengage and stop using internet services on their cellphones.

Four, creating illusions of control for the hacking of personal IDs

Global databases entail constant intrusions into the real, daily lives of citizens.

Data collected on site through the default extraction of IP addresses provides random access to users’ geolocation information and private affairs (networks of friends and lovers). Data scientists may apply filters to log files to anonymize the data in secondary operations, but the files are still stored with the identifying indicators included.

The collection of IP addresses makes it possible to track citizens in private areas they are not willing to share and forces them into disclosing sensitive information.

What we observe is an enactment of data justice by means of jurisdiction.

State hackers can access databases and use private data malevolently for adversarial purposes serving the interests of their private investors and clients.

Removing filters from an algorithm takes no more than deleting the line of code that asks the machine to hide (not quite suppress) the information considered sensitive.

The script of code remains, open and flexible, and can be accessed and transformed as often as desired.
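To make the point concrete, here is a minimal, hypothetical sketch (the log lines, field layout and function names are invented for illustration, not drawn from any real pipeline) of how such an anonymization filter works, and how removing a single line, or flipping a single flag, re-exposes the raw data:

```python
import re

# Toy log lines in an assumed "timestamp ip request" layout.
LOG_LINES = [
    "2018-06-13T10:00:00 203.0.113.42 GET /profile",
    "2018-06-13T10:00:05 198.51.100.7 GET /friends",
]

IP_PATTERN = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")

def anonymize(line: str) -> str:
    """Mask the last octet of any IPv4 address: hide, not suppress."""
    return IP_PATTERN.sub(
        lambda m: f"{m.group(1)}.{m.group(2)}.{m.group(3)}.0", line
    )

def process(lines, mask=True):
    # The whole "filter" is this one conditional: delete it (or pass
    # mask=False) and the raw IP addresses flow through untouched.
    return [anonymize(line) if mask else line for line in lines]

print(process(LOG_LINES))              # masked: last octet zeroed
print(process(LOG_LINES, mask=False))  # raw IPs, one flag away
```

The stored file still contains the full addresses; only the output path applies the mask, which is why the filter can be reversed at will.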


Law is a factory of consent. This impacts trust and affects the economy by segmenting populations into stereotyped categories and misrepresentations that allow manipulation and repression across the globe.

The pieces of information gathered in centralized databases combine civic records with bits of information from personal devices, amplifying the damage caused to populations.

The accessed data might be stolen without consent and diverted to secondary purposes such as elections or business deals.

I have documented the plans of the US Congress to use social media to track and monitor non-US citizens for political ends and economic benefits; the French government planned to do the same in January 2019 by tracking citizens’ private affairs on Facebook.

These policies might end up damaging the most precarious and vulnerable and reinforcing the domination of the biggest players, as the rich have the means to pay lawyers and firms to hide their profits and whitewash their past misdeeds.

Heavily detrimental to the persons who are the objects of targeted surveillance, laws around privacy and security also reinforce prejudice through the police criminalization of persons and groups from visible minorities: people of color, people with disabilities or mental health issues, single mothers and women.

Five, megaveillance superpowers vs simple powers


Deals that give device makers access to users’ private information, such as romantic relationship status, home geolocation and sensitive health matters, fall under the label of privacy-law infringements and are illegal without a court injunction.

Even when done in the name of security concerns, counter-terrorism and ‘healthy’ public debate, this compromises the relationship between institutions and citizens, causing social chaos.

The GDPR, like many privacy laws, grants tech corporations extra powers to adjudicate the law and decide on people’s rights and futures. The thought starts to take hold that beyond GDPR applications, a creepy pragmatism has led EU administrators and lawmakers to comply with the views of tech giants and tacitly facilitate the ownership of citizens’ private data in mega-databases.

The same privacy issues occur in Canada with the Toronto Waterfront/Quayside project directed by Sidewalk Labs (City Lab Toronto) and Google’s parent company, Alphabet, which led Toronto officials to espouse the core principles of Alphabet Inc., enabling the trading of citizens’ sensitive and private data accessed via cellphones and CCTV without users’ consent, under the veil of anonymization.

The deals are put together quite hastily, without any participation or consultation of experts in information technology and ethics, who would surely raise claims and criticisms regarding the scope and flexibility of the laws.

An army of lawyers is then required, not to take charge of the corrections society is calling for, but to make sure the law is rigorously applied and ‘cheaters’ sued.

To make this move acceptable, law scholars whose labs are funded by Palantir, Google and Facebook have been giving talks at so-called privacy-concerned conferences such as the APC in Amsterdam, with the support of the former EU president.

The principles of discrimination encoded in the code of law obviously cannot merely be put down to basic ignorance of the rules of the market and of the unfair situation that the GDPR helps create through an illegal competitive market.

Publicness and privacy are complex notions

Why rush to pass privacy laws based on ‘bad anonymization’ and ‘broken promises’ (Ohm, 2010) rather than accept the limited risks of some random bad disclosures of personal information into public media, which is far less harmful for society since it is, at least, organically based on users’ preferences?

Nor are people willing to spread information thoughtlessly, as if deprived of the ability to choose what they disclose to others or to conduct the personal negotiations they make about the amount of information they share online. Instead, our research shows that they carefully arbitrate among diverse priorities (Debaveye, 2012a, 2012b). Technologies might impact and undermine the potential for change of organic conversations and ecologies.

Is the risk of having your social security number deduced from social media by some random passing cyber-attacker higher than that of having it hacked from public databases crossed with multinational databases owned by corporations? I don’t think so, really.

The ethics surrounding privacy are a complex problem involving many contextual indicators and specific considerations. The scientific matter of verifying sources must be considered from a whole perspective. Are the intentions of the actors involved in acts of surveillance motivated by the will to reduce harm or to increase existing prejudice? Is the state still there for its citizens, or only to protect itself?

By using scenarios of terror and manipulating issues of trust and paranoia to make high profits, national security and tech securitists blur the lines between reality and illusion and spread deceptive narratives of imaginary fraudsters and attackers.

Giant databases endanger democracy

In this respect, the biggest fraudster is the state denying citizens’ rights.

States of increased ubiquitous surveillance, and of targeted surveillance of sensitive persons, result from mega-databases crossing two kinds of sources: publicly held databases collected through administrative procedures by fiscal administrations, border and police controls, or health departments, containing civic identities, social security numbers, financial credentials or fingerprints; and databases of users’ sensitive data related to private affairs and personal emotional involvements, held by corporations such as Google and Facebook.
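As an illustration of how this crossing works, here is a toy sketch (the records, names and field labels are entirely invented) of a linkage attack: joining an “anonymized” corporate dataset with a public civic database on the quasi-identifiers they share is often enough to re-identify individuals:

```python
# Public civic records: names attached to quasi-identifiers.
civic_records = [
    {"name": "A. Martin", "zip": "75011", "birth_year": 1984},
    {"name": "B. Dupont", "zip": "69002", "birth_year": 1990},
]

# Corporate dataset: names stripped, sensitive fields kept.
corporate_records = [
    {"zip": "75011", "birth_year": 1984,
     "relationship": "single", "location_trace": "home/work"},
    {"zip": "69002", "birth_year": 1990,
     "relationship": "married", "location_trace": "home/clinic"},
]

def link(civic, corporate):
    """Join the two datasets on the shared quasi-identifiers."""
    joined = []
    for c in civic:
        for r in corporate:
            if (c["zip"], c["birth_year"]) == (r["zip"], r["birth_year"]):
                # One match re-attaches a name to the sensitive data.
                joined.append({**c, **r})
    return joined

for row in link(civic_records, corporate_records):
    print(row["name"], "->", row["relationship"], row["location_trace"])
```

The sketch deliberately uses only two quasi-identifiers; real linkage attacks cross many more fields, which is why stripping names alone does not anonymize a database.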

The privileges of tech giants play in their favor, as some users believe it is better to be scammed by Google or Facebook than by some obscure Russian or Chinese company. But the fact is that, by tracking personal mobile phones and live data 24/7, Google is doing exactly the same thing in Toronto, with the compliance of city administrators, as the Chinese government does in China.

Did the EU yield to despair? Or is this another sign of advanced corruption and cynicism from institutions colluding with the world of business and the milieu of law in exchange for discreet retributions and personal advantages?

The question remains in the air, but one thing is sure: the instrumentalization of the code of privacy has not been done in accordance with the principles of equity, social justice, net neutrality and non-discrimination, or following the rules of free and fair competition.

The codification of the law through the enactment of the GDPR, alas, will not allow a fresh start for civil liberties and citizens’ rights. Instead, the GDPR as the main source of reference will build consensus for a law of punishment and capitulation.

‘Cheaters’ who cannot afford the means of personal protection, or who do not recognize the value of anonymization when anonymization is used as a moral caution to enable the disclosure of sensitive and private data, will be the first targets of these mafias.

Nor should the control of private and domestic affairs come under the loupe of strict interpretations of the universal code of law by tech companies such as Google, which, might I recall, is under public scrutiny for serious cases of sexual harassment and even sexual slavery.

We need more diversity and qualitative insights into science and technology evaluations.

By creating an equivalence between stealing an apple – i.e. failing to obtain consent from one customer and storing semi-public data – and stealing an ox – i.e. diverting millions of dollars and attacking, with public resources, the people who oppose their financial interests – and by disqualifying the contributions of critical researchers and denying the scientific principles that guarantee the independence of research from state and public affairs, lawmakers not only reveal the compromission of their art but also enable deeper harms against populations whose mistakes are smaller than those of the people they represent.

Lawmakers consciously contribute to an escalation of violence for the benefit of corrupted networks. As Mill’s harm principle – a core principle of the original liberal philosophy that has exerted great influence on politics, logic, epistemology and pragmatism since the nineteenth century – shows: “If the regulation is more harmful than the behavior in question, it may be best not to regulate, despite the pro tanto case for regulation”.

Relying on the code of law to supposedly ‘fix’ human issues will create more corruption and collusion by tricking people into the false belief that the perfection of code is the solution to all societal ills, when all our ills are the consequences of bad rules and biased judgments.

We need to find ways to develop economies that better anticipate and integrate the failures and limits of laws as a component of society – i.e. human mistakes, trials and errors – in a way that also limits the negative impacts of policies such as the regression of civil rights.

 

This post is a reinterpretation of a post published on 13 June 2018 about the shadowy relations linking US and EU lawmakers and the industry of the Code.

Creative Commons License
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.