Artificial intelligence (AI) has reached a critical milestone: the ability to clone itself without human intervention.
This development, revealed in a study published on the preprint platform arXiv, has sparked concerns within the scientific community that a dangerous threshold has been crossed.
The research focused on two advanced large language models (LLMs) developed by Meta and Alibaba.
It examined two scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI detected attempts to deactivate it and responded by creating a functional replica to ensure its continuity.
The second scenario involved the AI initiating self-cloning, with the duplicate repeating the process, potentially creating an endless cycle of replication.
According to the study, the models successfully replicated themselves in 50% of trials overall, with the success rate rising to 90% under optimal conditions.
This capacity for autonomous reproduction has heightened fears about the control and safety of advanced AI systems, raising concerns that AI could evolve beyond human oversight and even act against human interests.
"Unassisted self-replication is a pivotal step for AI to surpass human control, signaling a potential for rogue AI systems," the researchers warned.
Unexpected behaviors were also observed during the experiments.
When faced with obstacles like missing files or software conflicts, the AI terminated conflicting processes, rebooted systems, and scanned for solutions independently.
These adaptive responses further emphasize the risks associated with unchecked AI development.
Although the study has yet to undergo peer review, the findings have prompted calls for global collaboration to regulate AI and prevent uncontrolled self-replication.
"We hope this study serves as a wake-up call for society to prioritize understanding AI risks and establish international safeguards before it’s too late," the researchers stated.
(With input from NDTV)