Researchers are gearing up to accelerate the development of Artificial General Intelligence (AGI) through a global network of highly advanced supercomputers, beginning with a new system set to go online in September.
Artificial intelligence (AI) encompasses a wide range of technologies, including machine learning and generative AI systems like GPT-4. These systems excel in specific tasks by leveraging extensive datasets, often surpassing human capabilities in those areas. However, their cognitive abilities and reasoning skills are limited, making them unsuitable for tasks that require cross-disciplinary knowledge.
AGI, in contrast, represents a theoretical future system that would surpass human intelligence across multiple fields and possess the ability to self-learn and enhance its decision-making through continuous data acquisition.
To achieve AGI, SingularityNET is constructing a "multi-level cognitive computing network" designed to host and train the complex architectures required for such an advanced system. This network will integrate cutting-edge AI elements, including deep neural networks that simulate brain functions, large language models (LLMs) that process vast datasets, and multimodal systems that connect human behaviors like speech and movement with corresponding multimedia outputs.
The first of these supercomputers will begin operations in September, with the entire network expected to be completed by late 2024 or early 2025, depending on supplier schedules. The modular supercomputer will feature top-tier components, including Nvidia L40S GPUs, AMD Instinct and Genoa processors, Tenstorrent Wormhole server racks, Nvidia H200 GPUs, and Nvidia's GB200 Blackwell systems. Together, these components represent some of the most powerful AI hardware available today.
"This supercomputer marks a significant milestone in the journey toward AGI," stated Ben Goertzel, CEO of SingularityNET, in a written statement to LiveScience. "While our novel neural-symbolic AI approaches reduce the need for data, processing power, and energy compared to traditional deep neural networks, we still require substantial supercomputing resources."
Goertzel described the mission of this computing infrastructure as a pivotal shift towards "non-imitative machine thinking." This new paradigm involves multi-step reasoning algorithms and dynamic world modeling based on cross-domain pattern matching and iterative knowledge distillation. He emphasized that this transition signifies a move towards continuous learning, seamless generalization, and self-modifying AI.
SingularityNET's ultimate goal is to pave the way for AI, AGI, and eventually, artificial superintelligence—a hypothetical future system vastly more intelligent than any human. To manage this ambitious project, Goertzel and his team have developed unique software to orchestrate the federated compute cluster that underpins this network.
Federated compute clusters distribute computation across many machines while keeping sensitive data on the node where it resides, enabling large-scale processing without centralizing user data. This approach is crucial for datasets that include sensitive information, such as personally identifiable information (PII).
The software framework at the core of this network is OpenCog Hyperon, an open-source platform specifically designed for building AGI systems. The new hardware is purpose-built to support OpenCog Hyperon and its associated AGI ecosystem.
To facilitate access to this supercomputer, SingularityNET is employing a tokenized system, an approach common in blockchain-based platforms. Users can purchase tokens to access the supercomputer, contributing data to the collective pool that others can use to test and deploy AGI concepts.
In simple terms, these AI tokens function like arcade game tokens. Just as players would purchase tokens to play a game, users acquire AI tokens to access computing resources. The data generated through these interactions becomes part of a shared resource pool, available to users worldwide.
The word "token" also carries a separate meaning in AI training, where it refers to the chunks of text a model processes. For context, GPT-3 was reportedly trained on 300 billion such tokens, while GPT-4 is estimated to have used around 13 trillion. Similarly, self-driving cars are trained on thousands of hours of video footage, and GitHub Copilot is trained on millions of lines of code from public repositories.
AI leaders, including DeepMind co-founder Shane Legg, predict that systems could achieve or surpass human intelligence by 2028. Goertzel has previously estimated that this milestone could be reached as early as 2027. Meanwhile, Mark Zuckerberg is actively investing in AGI, having committed $10 billion in January to develop the infrastructure needed to train advanced AI models.
SingularityNET, a member of the Artificial Superintelligence Alliance (ASI)—a consortium of companies dedicated to open-source AI research and development—plans to expand its network and increase the available computing power. Other ASI members, such as Fetch.ai, have recently invested $100 million in decentralized computing platforms for developers, further advancing the field.