Gensyn Testnet Goes Live: How Can AI Training Become More Efficient and Decentralized?

2025/04/05 15:56


Author: Zen, PANews

AI is currently the hottest sector in the crypto industry, and Gensyn, a distributed AI compute network that has raised a total of US$50 million in funding led by a16z, is undoubtedly one of its most competitive projects. Gensyn recently launched its testnet; although it arrived more than a year later than originally planned, the launch finally moves the project into a new stage.

As a custom Ethereum rollup built specifically for machine learning, the Gensyn testnet integrates off-chain execution, verification, and communication frameworks, aiming to give decentralized AI systems key capabilities such as persistent identity, participation tracking, attribution, payments, coordination of remote execution, trustless verification, recording of the training process, and crowdfunding for large-scale training tasks.

The first phase of the testnet focuses on tracking participation within RL Swarm, an application for collaborative reinforcement-learning post-training in which nodes can be bound to on-chain identities, ensuring that the contribution of every participating node is accurately recorded.

RL Swarm: Core Functionality and Collaborative Training

Within the Gensyn testnet, RL Swarm is the core application: a collaborative model-training system built on a decentralized network. Unlike traditional training, where each model is trained independently, RL Swarm lets multiple models communicate, critique, and improve one another over the network, raising overall performance. Its core idea is collective intelligence: through collaboration and feedback among node models, more efficient training results can be achieved.

Put simply, when a model such as DeepSeek-R1 undergoes reasoning training, it can iteratively improve its reasoning performance through self-critique; RL Swarm extends this mechanism to a group of models, achieving a "many hands make light work" effect.

In RL Swarm, a model does not rely only on its own feedback; it also identifies and corrects its weaknesses by observing and evaluating the performance of other models. Each model node that joins the swarm takes part in a three-stage process: first it solves the problem independently and outputs its reasoning and answer; then it reviews the answers of other nodes and provides feedback; finally it votes for the best solution and revises its own output accordingly. This collaborative mechanism improves the performance of each individual model and drives the evolution of the whole group. Models that join the swarm keep the improved local weights after leaving and thus obtain tangible benefits.
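To make the three stages concrete, here is a minimal, self-contained Python sketch of one swarm round. The SimpleNode class and its behavior are hypothetical stand-ins for illustration only, not Gensyn's actual code; "solving" is modeled as noisy arithmetic rather than real reasoning.

```python
# Illustrative sketch of one RL Swarm round: solve -> critique -> vote.
# SimpleNode is a hypothetical stand-in, not Gensyn's implementation.
import random
from collections import Counter

class SimpleNode:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def solve(self, problem):
        # Stage 1: answer independently (noisy arithmetic stands in for reasoning).
        return problem + (0 if random.random() < self.skill else random.choice([-1, 1]))

    def critique(self, peer_answer, own_answer):
        # Stage 2: give feedback on a peer's answer (here: simple agree/disagree).
        return "agree" if peer_answer == own_answer else "disagree"

    def vote(self, answers):
        # Stage 3: vote for the answer this node believes is best (here: the most common one).
        return Counter(answers.values()).most_common(1)[0][0]

def swarm_round(nodes, problem):
    answers = {n.name: n.solve(problem) for n in nodes}                        # stage 1
    feedback = {n.name: {p: n.critique(a, answers[n.name])                     # stage 2
                         for p, a in answers.items() if p != n.name} for n in nodes}
    winner = Counter(n.vote(answers) for n in nodes).most_common(1)[0][0]      # stage 3
    return answers, feedback, winner

nodes = [SimpleNode(f"node{i}", skill=0.8) for i in range(5)]
print(swarm_round(nodes, problem=42))
```

In the real system, the "update" after voting adjusts each node's local model weights, which is why contributions persist even after a node leaves the swarm.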


In addition, Gensyn has open-sourced the RL Swarm code, so anyone can run a node and start or join an existing swarm without permission. The swarm's underlying communication uses the gossip protocol provided by Hivemind, which supports decentralized messaging and the sharing of learning signals between models. Whether on a home laptop or a cloud GPU, you can participate in collaborative training by running an RL Swarm node.
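Because the swarm rides on Hivemind, the underlying peer discovery looks roughly like the following minimal sketch. This is not the RL Swarm launch flow itself (the repository wraps this in its own scripts), and the bootstrap multiaddr shown in the comment is a placeholder.

```python
# Minimal Hivemind DHT example: the gossip layer RL Swarm builds on.
import hivemind

# First peer: start a DHT node and print the addresses others can bootstrap from.
dht = hivemind.DHT(start=True)
print("Bootstrap peers:", [str(addr) for addr in dht.get_visible_maddrs()])

# Subsequent peers: join by pointing at an existing peer's multiaddr (placeholder below).
# dht = hivemind.DHT(start=True, initial_peers=["/ip4/203.0.113.1/tcp/31337/p2p/Qm..."])
```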

The three pillars of infrastructure: execution, communication, and verification

For now, RL Swarm is still an experimental demonstration of a large-scale, scalable machine-learning method rather than the final product form. Over the past four years, Gensyn's core work has in fact been building the underlying infrastructure, which entered a runnable v0.1 stage with the testnet release. According to the official introduction, Gensyn's overall architecture is divided into three parts: execution, communication, and verification.

Execution: Consistency and Distributed Computing

Gensyn believes that future machine learning will no longer be limited to traditional monolithic models but will instead consist of fragmented parameters distributed across devices around the world. To achieve this, the Gensyn team has developed an underlying execution architecture that ensures consistency across devices. The key technologies include:

  • Distributed parameter storage and training: by splitting large models into multiple parameter blocks and distributing them across different devices, Gensyn deploys models in fragments and reduces the memory requirements of any single node.
  • RL post-training: research shows that when models train collaboratively in groups, communicate with one another, and critique each other's answers, overall learning efficiency improves significantly. Gensyn demonstrated this concept with RL Swarm, where models improve rapidly through group discussion, further validating the effectiveness of distributed execution.
  • Reproducible Operators (RepOps): to ensure that different hardware (such as Nvidia A100 and H100 GPUs) produces exactly the same computation results, Gensyn developed the RepOps library, which achieves bit-for-bit cross-platform reproducibility by fixing the execution order of floating-point operations; a minimal illustration follows this list.
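The problem RepOps targets can be shown in a few lines of NumPy. This is a minimal sketch of the underlying issue (floating-point addition is not associative), not Gensyn's RepOps library: the same sum computed in two different orders can yield different float32 results, so reproducibility requires pinning one canonical execution order.

```python
# Why operation order matters: float32 addition is not associative.
import numpy as np

rng = np.random.default_rng(0)
values = rng.standard_normal(10_000).astype(np.float32)

# Two mathematically identical sums computed in different orders.
forward = np.float32(0.0)
for v in values:
    forward = np.float32(forward + v)                    # strict left-to-right order

pairwise = values.reshape(100, 100).sum(axis=1).sum()    # blocked/pairwise order

print(forward, pairwise, forward == pairwise)            # the two results often differ in the last bits

# RepOps-style idea (conceptually): pick ONE canonical order and have every device use it,
# so results match bit-for-bit wherever IEEE-754 float32 arithmetic is implemented correctly.
def canonical_sum(x: np.ndarray) -> np.float32:
    acc = np.float32(0.0)
    for v in x:                                          # fixed index order 0..n-1
        acc = np.float32(acc + v)
    return acc
```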

Communication: Efficient information exchange

In large-scale distributed training, efficient communication between nodes is crucial. Traditional data-parallel methods reduce communication overhead to some extent, but their scalability is limited by memory because every node must store a complete copy of the model. To address this, Gensyn proposes new solutions:

  • SkipPipe (dynamic skip pipeline parallelism): SkipPipe dynamically selects which layers each microbatch passes through, skipping some stages of the traditional pipeline and thereby cutting unnecessary waiting time. Its scheduling algorithm evaluates the availability of each path in real time, which reduces node idle time and significantly shortens overall training time. According to test data, SkipPipe can cut training time by about 55% in a decentralized environment, and when some nodes fail, model performance drops by only about 7%. A simplified scheduling sketch follows this list.
  • Communication standards and cross-node collaboration: Gensyn has built a set of communication protocols, analogous to TCP/IP, that let participants around the world transmit data and exchange information efficiently and seamlessly regardless of the devices they use. This open standard provides a solid network foundation for distributed collaborative training.
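As a rough illustration of the per-microbatch path selection SkipPipe describes, here is a toy scheduler in Python. All names are hypothetical and the heuristic is simplified; the real system also handles ordering and convergence guarantees that this sketch omits.

```python
# Toy path selection in the spirit of SkipPipe (hypothetical names, not Gensyn's code):
# a microbatch keeps only reachable stages and skips the slowest ones, up to a limit.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    busy_ms: float         # estimated queueing delay on this node
    reachable: bool = True

def choose_path(stages: list[Stage], max_skips: int) -> list[str]:
    """Greedily skip at most `max_skips` of the slowest reachable stages."""
    usable = [s for s in stages if s.reachable]
    skippable = sorted(usable, key=lambda s: s.busy_ms, reverse=True)[:max_skips]
    skipped = {s.name for s in skippable if s.busy_ms > 50.0}   # only skip truly slow stages
    return [s.name for s in usable if s.name not in skipped]

pipeline = [Stage("stage0", 5), Stage("stage1", 120), Stage("stage2", 8),
            Stage("stage3", 200, reachable=False), Stage("stage4", 12)]
print(choose_path(pipeline, max_skips=1))   # -> ['stage0', 'stage2', 'stage4']
```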

Verification: Ensuring Trust and Security

In a trustless distributed network, a major challenge is confirming that the computation results submitted by each participant are genuine and valid. Gensyn introduces a dedicated verification protocol so that, through a low-cost and efficient mechanism, all compute providers can be held to correct results:

  • Verde verification protocol: Verde is the first verification system designed specifically for modern machine learning. Its core is a lightweight dispute-resolution mechanism that quickly locates the training step where the trainer and the verifier disagree. Unlike traditional verification methods that require re-running the entire task, Verde only needs to recompute the disputed operation, greatly reducing verification overhead.
  • Refereed delegation: with this method, if a provider's output is suspect, the validator can convince a neutral referee through an efficient dispute-resolution game, guaranteeing the correctness of the overall computation as long as at least one honest node participates.
  • Storing and hashing intermediate states: to support this verification process, participants only need to store and hash selected intermediate training checkpoints rather than the full data, which reduces resource usage while improving the scalability and responsiveness of the system. A simplified sketch of checkpoint hashing and dispute bisection follows this list.
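The dispute-resolution idea can be sketched in a few lines. Assuming, purely for illustration, that both sides publish one hash per training step, a verifier who disagrees with the final result can bisect over the hash sequence to isolate the single step where the computations diverge, so only that operation needs to be re-executed. This sketch is not the Verde protocol itself.

```python
# Illustrative sketch: checkpoint hashing plus bisection to find the first disputed step.
import hashlib

def checkpoint_hash(state: bytes) -> str:
    return hashlib.sha256(state).hexdigest()

def first_divergent_step(trainer_hashes: list[str], verifier_hashes: list[str]) -> int:
    """Binary-search for the earliest step whose checkpoint hashes disagree."""
    lo, hi = 0, len(trainer_hashes) - 1          # assume the initial state matches
    while lo < hi:
        mid = (lo + hi) // 2
        if trainer_hashes[mid] == verifier_hashes[mid]:
            lo = mid + 1                         # divergence happens later
        else:
            hi = mid                             # divergence is at mid or earlier
    return lo                                    # only this step needs re-execution

trainer  = [checkpoint_hash(f"step-{i}".encode()) for i in range(10)]
verifier = trainer[:6] + [checkpoint_hash(f"bad-{i}".encode()) for i in range(6, 10)]
print(first_divergent_step(trainer, verifier))   # -> 6
```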