My dissertation in economics concerned the Metaverse—specifically, the paradox at its core. Contrary to popular belief, it did not falter because it was poorly conceived or mismanaged, but rather because it achieved, perhaps too faithfully, the very conditions it set out to build. Its failure was not technological; it was philosophical. It exposed, with uncomfortable precision, the limits of human willingness to inhabit engineered realities when the mechanisms of control become too transparent. That research left me with a lingering suspicion, one that has grown more acute as social media platforms enter their long twilight: the architects of digital engagement will return, and they will return with a vengeance.
One need only observe the economy of gacha games to see the contours of this future. These systems—deceptively simple, obsessively engaging—have become conduits for vast and often opaque financial flows. Their revenues are so disproportionate relative to their apparent scale that even regulators have begun to whisper about money-laundering risks hidden within their glittering interfaces. The mechanism is elegant: reward loops designed to bypass rational deliberation, monetized chance calibrated to the precise thresholds of the human dopamine response. Once social media companies fully grasp how to adapt this logic to their own ends—creating gacha systems not to entertain, but to serve advertisements, influence perception, and harvest behavioral data—the consequences will be far darker. A gacha game that is not a game at all is an instrument of persuasion that does not feel like persuasion.
There is an old observation: human beings are well-equipped to resist hostility, but almost defenseless against flattery. Few individuals, when confronted with interfaces that mirror their desires, validate their opinions, and provide frictionless channels through which to express themselves, will suspect wrongdoing. The charm of convenience often disarms even the wary. For now, much of the public recognizes the harms of contemporary social media precisely because those harms have become visible—anxiety, polarization, compulsive distraction. But what happens when a system inflicts damage so subtly, so gently, that no one can articulate where the injury begins?
This question illuminates the sudden and aggressive pivot toward AI by Meta and its peers. Consider the unusual dynamic now unfolding: generative AI, particularly in commercially tuned consumer platforms, offers what seems like the opposite of traditional social media. It tells users they are clever. It affirms their creativity. It produces shareable artifacts that feel meaningful, that can be exchanged with friends or strangers, creating a sense of participation, collaboration, and communal spark. The psychological contrast with the doom-scrolling era is stark. Instead of being drained, users feel activated. Instead of witnessing conflict, they produce content. Instead of competing for attention, they co-create with a system that never tires, never judges, never frustrates.
And yet this inversion hides an emerging danger. As these systems mature, people will gradually become the vectors through which computational objectives are pursued—unknowingly enlisting themselves in the labor of data curation, viral distribution, and algorithmic reinforcement. They will feel empowered, intelligent, socially connected. And in many respects they will be. But the architecture through which these capacities are exercised will be controlled entirely by the commercial entities that own the platforms. The locus of agency will not sit with the individual, even if the individual believes, with absolute conviction, that it does.
What awaits us is not a social network that slowly corrodes mental health through comparison and overload. It is a distributed social AI—an emergent botnet composed not of hijacked machines, but of human beings animated by a sense of purpose engineered elsewhere. Users will feel insightful, influential, indispensable. Meanwhile, the system will aggregate their actions toward goals they never elected and cannot inspect.
And when the pendulum finally swings, when society recognizes that the machinery of influence can operate not only by making life intolerable but by making it deceptively pleasant, the reckoning will have been long in coming. For amid these new architectures of digital interaction, the final revelation may be the most difficult to confront: that one can be harmed not only by a life made worse, but also—subtly, imperceptibly—by a life made too good to question.
Log Notes