
Data Science in Society: Is “Dataism” Becoming a Religion?

For individuals, businesses and research institutes working with emerging technologies, it is important to follow and shape the societal debates revolving around their field. Sooner or later, such debates are likely to translate into political action, which may greatly impact work on these technologies – for better or worse. Moreover, research institutes and businesses that aim for more than research results and profit are in a unique position to contribute expert knowledge to the debate about how to avoid technological risks and ensure good outcomes for the world at large.

Yuval Harari’s theory in “Homo Deus”

When it comes to the societal relevance of big data and data science, the Israeli historian Yuval Noah Harari has recently become a leading influence on public discourse. In his bestselling book “Homo Deus – A Brief History of Tomorrow”, he sketches a future ruled by a new religion: Dataism. Narrating mankind’s history through its golden and dark ages, Harari assembles an extensive picture of what we have been, and then takes a step further to describe what we might become. Just as people used to believe in traditional religions such as Christianity, in recent history they came to believe in humanism, communism and capitalism. Speaking of these world-views and visions as “religions” – and thus using a rather broad definition of the term – Harari points out their susceptibility to blind spots born of ideologization. According to Harari, a devout humanist or a devout capitalist can become as uncritical and politically extreme about their mindset as a Christian casting out his gay daughter because he believes homosexuality to be a sin.

Considering the unprecedented techno-scientific pace at which today’s world is changing, Harari expects these traditional visions to gradually fade and give way to a new religion of “Dataism.” He claims that, just like capitalism and communism, Dataism starts off on scientific grounds but harbors great ideological risks. He goes on to outline what this development might look like: the world-view shifts from theo- or homo-centric to data-centric, now holding 1) that the universe consists of data flows and 2) that value is determined by one’s contribution to maximizing the global data flow.

The Internet-of-All-Things: Praise the new Lord?

Accordingly, there is room for new systems and strategies of value production, e.g. the Internet-of-All-Things as an overarching optimizer of data flow and, consequently, of freedom of information – understood not as a right granted to human persons but to information itself. Morphing into a religion, Dataism will begin to authoritatively tell right from wrong, leading to the proclamation of practical commandments: its believers or subordinates must produce, consume and connect ever more information. Nothing – literally no thing – should be exempt from optimally connecting and contributing to the data flow, for disrupting it is the mortal sin directly opposed to Dataism’s supreme value.

Harari reckons with the possibility that algorithms might soon know us better than we know ourselves, which would make us ever more dependent on data. Any decision taken by a human being will be outperformed by decisions based on big data and intelligent algorithms, implying a massive shift in power. Existing decision-making institutions that are inefficient at processing data will undergo drastic reforms or vanish. Currently, it is still mostly humans who write and guide algorithms, but with progress in machine learning and artificial intelligence, algorithms will come up with new algorithms themselves, taking paths no human can possibly follow.

An objection: Who believes that data flow is valuable for its own sake?

Dataism isn’t directly anti-humanist, but it values human happiness, suffering and preferences only to the extent that they optimally contribute to improving the data flow. Needless to say, this moral tenet of Dataism – introduced above under 2) – misses the mark dangerously: it would imply, for instance, that if a human or any other conscious creature were suffering horribly while excellently contributing to the data flow, there would be nothing bad about the situation at all, and the world would be perfect. While Dataism’s factual tenet 1) – that everything in the universe can be construed as a data flow – seems innocuous and is widely shared, probably no one alive today actually believes Dataism’s moral tenet 2). So who is Harari criticizing? He could reply that he’s talking about a future ideology, or about the goals of the agents that will dominate the future, whether human or artificial. In this regard, however, Harari’s analysis isn’t very instructive: to what end will future agents want to let the data flow? Surely the scenario where data flow is pursued for its own sake is far less likely than all the combined scenarios where data is used to optimally achieve some other goal.

The actual challenge of big data and AI: Aligning them with human values

The main potential problem with big data and algorithms taking over doesn’t seem to be with the specific, improbable scenario of data flow being pursued for its own sake, but with all scenarios where the goals pursued differ from our human goals. In the space of all possible goals, “good human goals” (whichever precise definition we opt for) are a tiny subset. So the risks are certainly there, and given the global stakes we should work very hard to reduce them (cf. our previous post about the AI control problem).

On the other hand, the potential upsides are very significant as well, both in the short and the long run. Harari argues that democracy is about to recede as a consequence of power shifting to “data,” and judges this development to be very negative. In reply, one could point to the flaws and dangers of current democracy, and to information technology’s potential to greatly improve democracy too (cf. the concepts of “e-democracy” and “liquid democracy” in particular).

Furthermore, valuing the preservation of humankind exactly as it exists today is itself debatable. If humanity were replaced by an artificial intelligence that is wiser and more compassionate, the world might be greatly improved. Needless to say, the likelihood of such a scenario is uncertain in turn – which gives us all the more reason to work towards it.