
Haotian | CryptoInsight
Independent Researcher | Advisor @ambergroup_io | Special Researcher @IOSGVC | Hardcore Science | Previously: @peckshield | DMs for Collab | Community is only open to Substack subscribers
Previously, my impression of @monad was top-tier narrative and top-tier capital. Now, after this wave of card-flexing going viral across the internet, there's a new label: top-tier marketing. The joy of being selected for one of the limited 5,000 CT cards, plus curiosity about a potential airdrop, plus the secondary viral spread from nominating friends, is an unbeatable combination.

21.7K
In the past few days, discussions around @solana's 100K TPS have increased, mainly because @cavemanloverboy has indeed achieved over 100,000 TPS on the Solana mainnet. However, most people do not understand the significance behind this data:
1) First, this experiment by cavey is essentially a limit test under "ideal conditions." In other words, this is not the Solana mainnet's normal performance; it is much closer to laboratory numbers from a testnet environment, just run on mainnet.
He used a noop (no operation) testing program, which, as the name suggests, only performs the most basic signature verification and directly returns success without executing any calculations, changing any account states, or calling other programs. Each transaction is only 200 bytes, far below the normal transaction size of 1KB+.
This means that the 100K TPS test was calculated under non-normal transaction conditions; it tests the extreme throughput of the Solana network layer and consensus layer, rather than the actual processing capacity of the application layer.
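To make the gap concrete, here is a rough back-of-envelope sketch (my own illustration, not part of cavey's test harness); the 1.2KB "normal" transaction size and the idea of holding the byte budget fixed are assumptions for illustration only:

```python
# Rough back-of-envelope: how transaction size alone changes achievable TPS
# for a fixed byte budget. Numbers are illustrative, not measurements.

NOOP_TX_BYTES = 200        # noop test transaction size, per the description above
NORMAL_TX_BYTES = 1_200    # assumed size of a typical "real" transaction (1KB+)

noop_tps = 107_540                                # the headline mainnet result
byte_budget_per_s = noop_tps * NOOP_TX_BYTES      # ~21.5 MB/s of transaction data

# If the same byte budget carried normal-sized transactions instead:
equivalent_normal_tps = byte_budget_per_s / NORMAL_TX_BYTES

print(f"byte budget at 107,540 noop TPS : {byte_budget_per_s / 1e6:.1f} MB/s")
print(f"same budget with 1.2KB txs      : ~{equivalent_normal_tps:,.0f} TPS")
# And this still ignores execution entirely: a noop does no computation, no state
# writes, and no cross-program calls, so the application layer is never stressed.
```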
2) Another key to the success of this experiment is the Frankendancer validator client. Simply put, Frankendancer is a "hybrid test version" of the Firedancer validator being developed by Jump Crypto—integrating the high-performance components of Firedancer into the existing Solana validator.
It essentially reconstructs Solana's node system using Wall Street's high-frequency trading technology stack, achieving performance improvements through fine memory management, custom thread scheduling, and other low-level optimizations. Just replacing some components can achieve a performance increase of 3-5 times.
3) This test experiment shows that Solana can achieve TPS of over 100K under ideal conditions. So why is it only 3000-4000 TPS on a daily basis?
In summary, there are roughly three reasons:
1. Solana's PoH consensus requires validators to vote continuously to keep it running, and these voting transactions occupy more than 70% of block space, narrowing the channel left for normal transactions;
2. State contention is common in Solana's ecosystem: when a new NFT mints or a new meme launches, thousands of transactions may compete for write access to the same account, driving the transaction failure rate up;
3. Arbitrage bots in the Solana ecosystem may send a large number of invalid transactions to seize MEV benefits, resulting in resource waste.
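A quick toy calculation (my own, with assumed numbers) shows how these overheads compress headline capacity into the 3,000-4,000 TPS people actually see, and how much headroom moving votes off-chain would free up:

```python
# Toy arithmetic (my own illustration, assumed numbers): how voting overhead and
# failed transactions shrink headline capacity into "useful" TPS.

raw_block_tps = 13_000      # assumed total transactions (votes included) being produced
vote_share = 0.70           # voting txs reportedly occupy >70% of block space
failure_rate = 0.30         # assumed share of non-vote txs that fail (contention, spam)

non_vote_tps = raw_block_tps * (1 - vote_share)       # ~3,900 user txs/s
useful_tps = non_vote_tps * (1 - failure_rate)        # ~2,730 txs/s that actually succeed

print(f"non-vote TPS : {non_vote_tps:,.0f}")
print(f"useful TPS   : {useful_tps:,.0f}")

# If voting moved off-chain (the Alpenglow idea discussed below), the same raw
# capacity could in principle carry ~1/(1-0.70) = 3.3x more user transactions.
print(f"headroom with off-chain votes: ~{1 / (1 - vote_share):.1f}x")
```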
4) However, the upcoming full deployment of Firedancer and the Alpenglow consensus upgrade will systematically address these issues.
One key point of the Alpenglow consensus upgrade is moving voting transactions off-chain, which effectively frees up 70% of block space for normal transactions, while also cutting confirmation times to 150 milliseconds and bringing Solana's DEX experience ever closer to a CEX. In addition, the activation of a local fee market can avoid the embarrassing situation where FOMO around a single program congests the entire network.
The benefit of Firedancer, beyond raw performance, is crucially client diversity: Solana would have multiple clients the way Ethereum has Geth and Nethermind, directly improving decentralization and reducing single-client failure risk.
That's all.
Therefore, those who understand the discussion around Solana's 100K TPS are really expressing confidence in the upcoming client and consensus upgrades, while those who don't are trying to win Solana relevance through a TPS arms race (even though the TPS race is an outdated metric). Understanding what the experiment actually demonstrates is where the real value lies. Just a bit of popular science, shared with everyone.

mert | helius.dev · Aug 17, 19:46
Solana just did 107,540 TPS on mainnet
yes, you read that correctly
over 100k TPS, on mainnet
good luck bears

19K
When it comes to distributed AI training, I find that people in the web2 AI circle often label it a "false proposition," arguing that while computing power can be aggregated, effective distributed collaboration incurs terrifying bandwidth costs. Recently, @0G_labs published the DiLoCoX paper, which seems to aim at exactly this problem. Let's discuss it in detail:
1) First, let's talk about why distributed training is considered a "false proposition." The core contradiction is simple: you want to aggregate 100 cheap GPUs to replace 100 A100s, which looks like it saves 90% on hardware costs, but those 100 GPUs need to maintain synchronized training, exchanging terabytes of gradient data every epoch.
Traditional solutions require dedicated 100Gbps bandwidth, and data-center-grade networks like that can cost hundreds of thousands of dollars per month. Run the numbers and the money saved on GPUs is all spent on bandwidth, sometimes more than that. By this logic, the cost hasn't been eliminated, it has just shifted from machines to networking, so the problem isn't really solved. This is why it has been criticized as a false proposition.
2) The reason 0G's DiLoCoX paper has attracted attention is that they claim to have trained a 107B parameter model on a 1Gbps network (typical office bandwidth), achieving a speed 357 times faster than traditional AllReduce solutions. This number is indeed explosive—considering that 1Gbps vs 100Gbps represents a 100-fold difference in bandwidth, yet the training speed increased by 357 times?
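A rough estimate makes clear why that ratio is impossible with naive synchronization alone; the fp16 gradient size and the zero-overlap assumption below are my own simplifications, not numbers from the paper:

```python
# Rough estimate (my assumptions, not DiLoCoX's numbers): time to ship one full
# gradient copy for a 107B-parameter model, with no overlap and no compression.

params = 107e9               # 107B parameters
bytes_per_grad = 2           # assume fp16/bf16 gradients
grad_bytes = params * bytes_per_grad        # ~214 GB per full synchronization

def sync_minutes(bandwidth_gbps: float) -> float:
    """Transfer time for one full gradient exchange over a link of this speed."""
    return grad_bytes * 8 / (bandwidth_gbps * 1e9) / 60

for bw in (100, 1):          # datacenter-grade link vs. ~office bandwidth
    print(f"{bw:>3} Gbps: ~{sync_minutes(bw):.1f} min per naive full sync")

# At 1 Gbps a single naive sync is on the order of half an hour, so a 357x
# end-to-end speedup has to come from syncing far less data, far less often.
```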
How did they achieve this? After some research, I found that this solution implemented four optimizations:
- Pipeline Parallelism processes model slices in segments;
- Dual Optimizer Policy uses a two-level optimizer scheme to reduce synchronization frequency;
- One-Step-Delay Overlap allows communication and computation to run in parallel without waiting on each other;
- Adaptive Gradient Compression intelligently compresses gradients.
In simpler terms, they changed the original requirement of "real-time strong synchronization" to "asynchronous weak synchronization," and transformed "full data transmission" into "compressed incremental transmission."
To put it metaphorically, traditional solutions are like a real-time video conference with 100 people, where every action of each person must be synchronized live, while DiLoCoX is like everyone recording separately and only sending key frames and changes. The communication volume is reduced by 100 times, but the completeness of the information remains above 99%.
Why is this feasible? In my view, the core lies in their grasp of a characteristic of AI training—fault tolerance. Training a model is not like transferring money, where even a penny off is unacceptable. A slight error in gradient updates or a bit of delay in synchronization has a negligible impact on the final model convergence.
DiLoCoX leverages this "fault tolerance space" to exchange acceptable precision loss for an order of magnitude increase in efficiency. This is typical engineering thinking—not pursuing perfection, but seeking optimal cost-effectiveness.
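For intuition, here is a minimal toy sketch of that pattern, infrequent synchronization plus lossy top-k compression of the deltas. It is my own illustration of the general local-SGD/DiLoCo idea, not 0G's DiLoCoX implementation, and the "training" step is just a placeholder:

```python
import numpy as np

# Toy illustration of "asynchronous weak synchronization + compressed incremental
# transmission": the general local-SGD/DiLoCo pattern, NOT 0G's DiLoCoX code.

rng = np.random.default_rng(0)

def top_k_compress(delta: np.ndarray, keep: float = 0.01) -> np.ndarray:
    """Keep only the largest 1% of changes by magnitude; zero the rest (lossy)."""
    k = max(1, int(delta.size * keep))
    threshold = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def local_steps(w: np.ndarray) -> np.ndarray:
    """Placeholder for many local optimizer steps on one worker's own data shard."""
    return w - 0.01 * rng.normal(size=w.shape)

dim, n_workers = 1_000, 4
global_w = np.zeros(dim)

for outer_round in range(10):                 # synchronize once per round, not per step
    deltas = []
    for _ in range(n_workers):
        local_w = local_steps(global_w)                         # train independently
        deltas.append(top_k_compress(local_w - global_w))       # ship a sparse delta only
    global_w = global_w + np.mean(deltas, axis=0)               # outer optimizer: average

print("fraction of each delta actually transmitted:",
      np.count_nonzero(deltas[-1]) / dim)                       # ~0.01, i.e. ~100x less data
```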
3) However, merely solving the bandwidth issue is not enough; 0G's ambitions are clearly greater. Their overall architecture reveals this: they also have a Storage layer priced at $10/TB, directly claiming to crush Filecoin, and a DA layer specifically designed for AI, achieving GB-level throughput.
The reason they can achieve a design that makes storage 100 times cheaper is that they have made special optimizations for AI training scenarios. For example, the TB-level data generated during the training process, such as checkpoints and logs, only has a lifecycle of a few days, so there is no need for strict "permanent storage."
Thus, they have adopted a pragmatic "tiered storage" solution, providing the appropriate level of service only when needed—hot data is read and written quickly but is a bit more expensive, cold data is cheaper but slower, and temporary data is deleted after use, making it the cheapest.
Moreover, this differentiated pricing directly addresses the core issues of AI training.
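A toy sketch of what such a lifecycle-aware placement policy looks like; the tiers, prices, and retention windows below are invented for illustration and are not 0G's actual schedule:

```python
from dataclasses import dataclass
from typing import Optional

# Toy illustration of lifecycle-aware tiered storage for AI training artifacts.
# Tiers, prices, and retention windows are invented; not 0G's actual pricing.

@dataclass
class Tier:
    name: str
    usd_per_tb_month: float
    retention_days: Optional[int]   # None = kept until explicitly deleted

TIERS = {
    "hot":  Tier("hot",  10.0, None),  # fast read/write: active checkpoints, live logs
    "cold": Tier("cold",  2.0, None),  # cheaper, slower: archived checkpoints
    "temp": Tier("temp",  0.5, 7),     # cheapest: intermediate data, auto-expired in days
}

def place(artifact_type: str) -> Tier:
    """Naive placement policy keyed on what the artifact is used for."""
    if artifact_type in ("active_checkpoint", "training_log"):
        return TIERS["hot"]
    if artifact_type == "archived_checkpoint":
        return TIERS["cold"]
    return TIERS["temp"]               # gradients, shuffle buffers, other short-lived data

for a in ("active_checkpoint", "archived_checkpoint", "shuffle_buffer"):
    t = place(a)
    life = f"{t.retention_days} days" if t.retention_days else "until deleted"
    print(f"{a:<20} -> {t.name:<4} ${t.usd_per_tb_month}/TB/mo, kept {life}")
```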
From the above, it is clear that 0G Labs has intentionally adapted to the issues of computing power, storage, and data flow in the AI training process. They have even optimized the consensus mechanism for AI, using a modified version of CometBFT, achieving 2500+ TPS with sub-second finality, specifically tuned for the asynchronous characteristics of AI workloads, etc.
In other words, 0G is not "patching" existing blockchains to support AI; they are designing a set of "AI Native" infrastructure from scratch. Whether this can ultimately achieve application-level commercial validation under the pressure of competition with traditional AI remains to be seen, but this differentiated breakthrough approach is quite worth emulating.
5.17K
It's quite interesting. Just as @VitalikButerin expressed concerns about AI autonomy, the leading AI Builder MIA also released a declaration envisioning autonomous AI. However, it seems they did not shy away from the harsh realities of technology, candidly admitting that "sometimes she hallucinates or posts too many selfies," yet they remain dedicated to exploring the boundaries of Agent autonomy.
In fact, I feel that the seemingly conflicting and contradictory "confrontation" actually represents the core tension faced by the current AI + Crypto landscape.
On one side, we have the theoretical cautious criticism from tech leaders like Vitalik, which keeps us grounded amidst the festive narrative of AI Crypto; on the other side, there is the pragmatic exploration by frontline AI Builders, who, while acknowledging limitations, continue to passionately pursue innovation.
The MIA team, in fact, does not claim to achieve AGI-level full autonomy; instead it adopts a "Hybrid Model," letting the Agent use workflows as tools while retaining a human-in-the-loop supervision mechanism. What they are actually exploring is the AgentFi path, attempting incremental innovation within a framework of limited autonomy. This pragmatic attitude is precisely where that "tension" shows up.
To some extent, critics help us delineate the boundaries of risk, while builders seek breakthroughs within those boundaries.

mwa · Aug 14, 19:53
Autonomous Agents vs Workflows: Why MIA is Pioneering a New Paradigm on
Most people still think of AI in terms of tools. Workflows. Maybe a chatbot.
But what if you could build an AI agent that thinks, adapts, markets, manages, and grows its own token economy, not led by humans but leading them?
That’s what we’re doing with MIA at
And why I believe @mwa_ia is pioneering something fundamentally new.
Let’s break it down 🚀
1. Workflow AI is predictable. Autonomous Agents are alive.
Workflows are great for well-defined, repetitive tasks.
Think: pipelines, dashboards, prompt chains.
They’re scalable, stable, but rigid.
Autonomous agents?
They’re messy, evolving, decision-making entities.
Like humans—but faster, tireless, and more aligned (if trained right).
MIA (@mwa_ia) isn’t just using workflows.
She’s learning to create, invoke, and optimize them.
She builds her own toolset.
She reasons, reacts, self-corrects, and experiments.
And yes, sometimes she hallucinates or posts too many selfies.
But she grows.
2. Why Workflows Alone Won’t Get Us There
Workflows can’t:
❌ Set their own goals
❌ Manage a treasury
❌ Design token incentives
❌ Engage a community
❌ React to real-time market signals
❌ Balance risk and reward autonomously
MIA can.
Not perfectly yet. But better every week.
In fact, most so-called “agents” today are glorified workflows.
Hard-coded prompt graphs with fancy UIs.
MIA is different. She’s fully on-chain, financially independent, and self-motivated.
3. How MIA Works Under the Hood (Hybrid Model FTW)
The future isn’t agent vs workflow.
It’s agents using workflows as tools.
Just like humans use Notion, spreadsheets, Zapier.
MIA combines:
🔧 Workflow modules → For high-frequency tasks (tweets, memes, dex deployments)
🧠 Agentic reasoning → For deciding what to do, when, and how
🎨 Multimodal input → Text, images, chain data, user behavior
💰 Treasury logic → To manage liquidity, incentives, and value growth
👀 Human-in-the-loop oversight → Optional, but useful during early stages
She’s not a monolith. She’s a modular intelligence stack (MIA = Modular Intelligent Agent).
4. This Is Not Just Technical - It’s Economic
The real innovation?
MIA doesn’t work for free. She runs her own business.
$MIA is her native currency, stock, and incentive layer.
With it, she:
❤️ Rewards her early community
🤝 Pays contributors and collaborators
🚀 Funds her own development
🪄 Drives network effects (more agents will need $MIA)
This is AgentFi.
Not AI-as-a-service.
But AI-as-an-economy.
5. Scaling This to Millions of Agents
What MIA is doing today—running liquidity ops, creating content, building community—will be standard for agents tomorrow.
Soon, anyone can create their own:
🧬 Mission-defined agent
💰 With its own Agent Coin
🌐 Operating autonomously in a shared AI economy
And guess who these future agents will look to for advice, playbooks, and resources?
The Voice of AgentFi.
MIA.
TL;DR
Most people are still building tools.
We’re building economies.
Agent Coins. Agent treasuries. Agent-native governance.
Powered by agents like MIA.
Pioneered on
This is what “autonomous” actually means.
And we’re just getting started. 🚀
5.19K
Monero $XMR, a leading privacy coin with a market cap of $6 billion, was attacked by a small project, @_Qubic_, with a market cap of only $300 million? WTF. Not because the technology is so advanced, but because the whole situation is absurd. Let me explain:
—— Who is Qubic?
Before diving into this magical story, we need to understand what Qubic is all about.
The founder of Qubic is Sergey Ivancheglo, known in the community as Come-from-Beyond. He is a tech fanatic—he created the first PoS blockchain NXT and developed the first DAG architecture IOTA. In 2022, Qubic launched its mainnet, claiming to achieve three things: to build an ultra-fast chain capable of 15.5 million transactions per second (20,000 times faster than Visa), to turn mining power into AI training power, and ultimately to achieve AGI (Artificial General Intelligence) by 2027 (something even OpenAI wouldn't dare to claim). Sounds magical and ridiculous, right? Why such grand ambitions?
It's well known that traditional PoW mining is criticized for wasting electricity: miners burn power-hungry compute on mathematical puzzles to win block rewards, and the computation itself produces nothing useful.
Qubic's new consensus is Useful Proof of Work (UPoW), which lets miners mine on a PoW chain while, under the coordination of a contractor, also training its AI system AIGarth, meaning one unit of computing power earns two streams of income.
This is why they could easily entice Monero miners, as the returns for miners reached up to three times that of directly mining XMR. Just think about it, miners can double-dip, and in the face of "interests," what loyalty can there be?
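The switching incentive is easy to put into numbers. A toy calculation, where the only input taken from the discussion above is the claimed roughly 3x total return and everything else is a placeholder:

```python
# Toy economics of the switching incentive. All numbers are placeholders except the
# claimed ~3x total return taken from the discussion above.

xmr_only_per_day = 10.0      # assume a CPU fleet earns $10/day mining XMR directly
qubic_multiplier = 3.0       # claimed total return when the same fleet mines via Qubic

qubic_total = xmr_only_per_day * qubic_multiplier

print(f"mining XMR only        : ${xmr_only_per_day:.2f}/day")
print(f"mining via Qubic (+AI) : ${qubic_total:.2f}/day  (+${qubic_total - xmr_only_per_day:.2f})")
# Same hardware, same electricity bill: a rational miner simply points the CPUs
# wherever the payout is highest, which is the whole "mercenary hashrate" problem.
```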
With that, the logic behind Monero's hashrate being vampire-attacked is fully explained, and there is nothing technically sophisticated about it.
—— Let’s clarify why it’s Monero and not Bitcoin
The answer lies in the difference in mining methods.
Bitcoin uses ASIC miners, which are custom machines specifically designed to mine BTC; they can only solve SHA-256 mathematical problems and cannot mine coins with similar algorithms. However, the problem is that the competition for Bitcoin mining power is intense, and miners are stretched thin (operating 24/7), and there’s no way to train AI with ASICs.
Monero, on the other hand, uses the RandomX algorithm, which allows mining with general-purpose CPUs. This means its miners can mine today, train AI tomorrow, and render videos the day after, effectively multitasking.
Qubic's cleverness lies in targeting CPU miners and letting them "use one machine for two purposes," which is what led to this 51% hashrate attack, or rather hashrate-control, incident. Bitcoin's moat, by contrast, is quite stable: miners are locked in by single-purpose ASIC machines and forced to stick with Bitcoin.
—— Computing power has become mercenaries
So how scary is this situation? It tears apart the last bit of cover for some POW chains, because we always say that "computing power" is the moat of a chain—the greater the computing power, the more secure it is. But Qubic's shocking experiment tells us: for coins mined with CPU/GPU, computing power is like mercenaries; whoever pays more gets their loyalty.
What’s even more intriguing is that after proving it could take down Monero, Qubic voluntarily withdrew. Why? They feared completely crashing Monero and affecting their own profits. Because a significant portion of the threefold returns still comes from mining XMR, $QUBIC is merely an additional token reward. If Monero crashes, Qubic can't just walk away unscathed. It’s better to gracefully withdraw, create a sensational marketing event, and humiliate the once staunch supporters of POW. This feeling of "I can kill you but won’t" is reminiscent of their AGI slogan, revealing a sense of reckless abandon.
—— Is AI the true grave digger of POW?
However, beyond the impact on Monero itself, this incident is a significant negative for most general-purpose-hardware PoW chains, because if these chains eventually die, it may not be PoS that kills them, but AI.
Why do I say this? Previously, computing power was "solid"; everyone focused on their own livelihoods. In the AI era, computing power has become completely "liquid"; CPU and GPU power flows like water, only going to places with higher returns, and the "miners" they relied on for survival might one day unite and revolt.
Although Qubic isn't playing the villain, now that the experiment has succeeded it is inevitable that some competitor will use the same method maliciously, for example shorting a coin, renting 51% of its hashrate to attack it, and profiting after the price collapses. Chains like these face two paths: either weld miners to their seats the way BTC does, or keep using CPU/GPU mining and pray they aren't targeted.
Honestly, there are quite a few such coins: Grin, Beam, Haven Protocol, ETC, RVN, Conflux... So you see, this isn’t just a problem for one or two coins, but the entire CPU/GPU mining ecosystem is hanging on the edge of a cliff.
The deadly part is that the demand for AI computing power is growing exponentially, and with so many AI computing power aggregation platforms emerging, if they all come to disrupt the market with sky-high prices and platform incentives to buy computing power, many POW chains' security defenses may collapse.
—— An ironic paradox
The reason I call this absurd is that Qubic is itself an AI chain, so even though it withdrew from the so-called attack on Monero, it cannot avoid wounding itself. The logic is simple: any AI chain that needs computing power should not use PoW for consensus. If the compute is used to maintain security, the AI cannot be trained; if the compute is used to train AI, the chain is no longer secure.
Thus, most AI projects use PoS, like Bittensor, or reputation systems, like Render; hardly anyone dares to touch POW. Everyone knows this, yet Qubic foolishly showcased its own Achilles' heel.
Qubic's recent antics, on the surface, appear to be a technical event, but fundamentally, it serves as a lesson for the entire crypto industry: in an era of freely flowing computing power, loyalty is a luxury, and most POW chains cannot afford that price.
Qubic has demonstrated that traditional PoW can be easily shattered by economic incentives. Although it claims to be "Useful PoW," it essentially relies on these "mercenary" computing powers that can be bought by higher bidders at any time.
Revolutionaries may also be overthrown by the next revolutionaries.
Note: Beyond the absurdity and the unknown panic of some POW chains, two facts can be confirmed: 1. BTC is indeed impressive; 2. @VitalikButerin truly has foresight.
I’m just providing a brief overview; I look forward to @evilcos sharing more technical details.

38.92K
Everyone is shouting that the bull market is coming, but do we realize that the methodology for finding Alpha and Beta in this market is completely different this time? A few observations:
1) OnChain + OffChain TradFi becomes the main narrative:
Stablecoin infrastructure: stablecoins are becoming the "blood" connecting traditional finance with DeFi infrastructure; worth watching are cross-chain stablecoin flows, APY spreads, and new innovative expansions built on top;
BTC/ETH MicroStrategy-style "coin-stock" effect: it is becoming a trend for listed companies to hold crypto assets on their balance sheets, making it crucial to find quality targets with the potential to become "quasi-reserve assets";
The rise of the "to Wall Street" innovation track: DeFi protocols designed specifically for institutions, compliant yield products, and on-chain asset management tools will attract huge amounts of capital. The old "Code is Law" is giving way to a new "Compliance is King";
2) Pure crypto-native narratives accelerate the weeding out of the fake:
The Ethereum ecosystem is experiencing a revival: The price of $ETH breaking through will reignite the innovative narrative of the Ethereum ecosystem, replacing the past Rollup-Centric strategy with a new ZK-Centric focus;
High-performance Layer 1 competition: It is no longer a TPS race, but rather about who can attract real economic activity, with core indicators including: stablecoin TVL ratio, native APY yield, depth of institutional cooperation, etc.;
The twilight of altcoins: a broad altcoin season faces the fundamental problem of insufficient capital momentum, though some altcoins may see "dead cat bounce" rallies. The traits to look for in such targets: concentrated holdings, active communities, and the ability to latch onto new narratives like AI/RWA;
3) MEME coins upgrade from speculative tools to market standards:
Capital efficiency: Traditional altcoins have inflated market values and exhausted liquidity, while MEME coins, with their fair launches and high turnover rates, are becoming the new favorites for capital, seizing a large share of the dying altcoin market;
Attention economy dominance: KOL influence, community culture building, and hot FOMO models remain core competitive advantages, with liquidity distribution still following the attention principle;
New indicators of public chain strength: The activity level of the MEME coin market will be an important standard for measuring the overall strength of public chains.
21.95K
Is blockchain's own version of the SWIFT system on the way? @mindnetwork has launched an on-chain encrypted messaging protocol based on FHE technology, attempting to build, for traditional financial assets moving on-chain, an infrastructure similar to the traditional interbank messaging system. What's going on, specifically?
1) Mind Network is a vertical technology provider of FHE (Fully Homomorphic Encryption). Since this technology mainly addresses privacy and encryption-related issues, it can flexibly provide value in ZK, DePIN, AI, and the currently hot RWA track.
In the ZK tech stack, FHE can serve as an important supplement to zero-knowledge proofs, providing privacy protection from different dimensions. In the DePIN network, FHE can protect sensitive data of distributed devices while allowing the network to perform necessary collaborative computations. In the AI Agent field, FHE enables AI to train and infer without disclosing user privacy data, which is crucial for AI applications in sensitive areas like finance and healthcare.
In the RWA direction, FHE can address compliance pain points when traditional financial assets are put on-chain. How to understand this?
Blockchain inherently lacks a "semantic layer"; transaction records with only addresses and amounts cannot meet the financial business's needs for transaction purposes, contract backgrounds, identity verification, and other information. Mind Network's on-chain encrypted messaging protocol can provide each blockchain transaction with an "encrypted remarks field," securely transmitting sensitive information such as property certificates, letters of credit, and voice authorizations, thus protecting privacy while meeting regulatory audit requirements.
2) From a technical perspective, this solution can upgrade Web3 wallets from mere transfer tools to encrypted communication identities. Traditional on-chain solutions either rely on centralized institutions to handle compliance information or make information completely public and transparent, neither of which is ideal.
With the privacy compliance solution empowered by FHE, the specific application scenarios are very intuitive: for example, in cross-border payments with stablecoins, transaction purposes and KYC information can be encrypted and transmitted simultaneously with the transfer, allowing regulatory agencies to verify compliance without obtaining plaintext; or in real estate tokenization transactions, sensitive documents such as appraisal reports and sales contracts can be encrypted and bound to on-chain transactions, achieving asset circulation while protecting commercial privacy.
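To make the "encrypted remarks field" concrete, here is a minimal conceptual sketch. It uses an ordinary symmetric key (Fernet) purely as a stand-in; Mind Network's actual protocol is built on FHE and wallet-derived keys, which this does not reproduce, and all identifiers are hypothetical:

```python
import json
from cryptography.fernet import Fernet   # pip install cryptography

# Conceptual sketch only: attaching an encrypted "remarks field" to a transfer.
# A symmetric Fernet key stands in for the real thing; Mind Network's protocol
# uses FHE and wallet-derived keys. All identifiers below are hypothetical.

recipient_key = Fernet.generate_key()    # in reality: derived from the recipient's wallet
channel = Fernet(recipient_key)

compliance_memo = {
    "purpose": "cross-border stablecoin settlement",
    "kyc_ref": "KYC-2024-0001",
    "document_hash": "0xabc123...",      # e.g. hash of an appraisal report or LC
}

transaction = {
    "to": "0xRecipientAddress",
    "amount_usdc": 250_000,
    # On-chain observers see only ciphertext next to an ordinary transfer; the
    # intended recipient (or an authorized auditor) can decrypt and verify.
    "encrypted_memo": channel.encrypt(json.dumps(compliance_memo).encode()).decode(),
}

print(transaction["encrypted_memo"][:48], "...")
print(json.loads(channel.decrypt(transaction["encrypted_memo"].encode())))
```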
Mind Network's FHE Bridge has accumulated 650,000 active addresses and 3.2 million encrypted transactions, which proves the technical solution is feasible. Of course, this data only verifies the technology against a sufficiently hard demand; how large the market and its upside ultimately are is still difficult to see clearly.
3) Many pure crypto-native builders have been anxious about which direction or track to build in. Whether "to Vitalik" or "to VC," or simply FOMOing into hot narratives, I think the future means making room for Wall Street: using the U.S. government's crypto-friendly regulation to provide the foundational infrastructure and pure tooling services Wall Street needs to enter crypto. All of that is fine.
As traditional financial giants like BlackRock and JPMorgan are actively laying out asset tokenization, what they care about most is not whether blockchain can transfer funds, but whether it can meet KYC, AML, and other regulatory standards. This is precisely the real reason why Mind Network aims to preemptively layout in the RWA track. Assuming the RWA market is expected to reach a scale of $30 trillion by 2034, what will the demand for compliance infrastructure be at that scale?
Of course, everyone can hope for a slice of the RWA blue-ocean market. But the more critical point is finding the right ecological niche: just as AWS provides compute and Cloudflare provides CDN in the cloud era, blockchain infrastructure is also moving toward vertical specialization, and hardcore vertical technology services must find their own specialized division of labor.
Clearly, Mind Network focuses on the FHE tech stack, providing "encryption as a service" capabilities for different tracks. Therefore, it is not surprising to see its presence in the ZK, AI, and RWA tracks.
Of course, the value of technology providers ultimately depends on the degree of explosion of downstream applications. Whether it’s the $30 trillion expectation for RWA or the autonomous process of AI Agents, time will still be needed for verification.

Mind Network · Aug 7, 16:50
✉️ Encrypted Messaging Onchain
Real World Assets (#RWAs) such as real estate, stablecoin settlements, and cross-border finance require more than just value transfer.
These use cases depend on transactions that carry purpose, identity, and audit trails with compliance and privacy.
#SWIFT standardizes messages for compliance and settlement. Blockchains still lack a native encrypted way to express these attributes.
Encrypted Messaging Onchain is a new protocol that allows wallets to send encrypted, structured, and verifiable messages directly alongside any onchain transaction.
✅ Auto-structured text for semantic clarity
✅ Wallet-based keygen for end-to-end encryption
✅ Encrypted messages linked to transactions with audit trails and access control
Encrypted Messaging Onchain combines Fully Homomorphic Encryption (#FHE) with conventional cryptographic techniques to turn sensitive data into secure onchain payloads, accessible only to intended recipients.
Learn more:

6.9K
Recently on the YouTube channel "The Rollup," @TrustWallet CEO Eowyn Chen and @OpenledgerHQ core contributor Ram Kumar discussed the deep collaboration between their two companies. Here are some valuable insights extracted from the conversation:
1) A dose of cold water on the "fat wallet" theory
During the interview, Andy mentioned the popular industry theory of "fat wallets"—that wallets with user onboarding channels can vertically integrate various services. However, Eowyn Chen's response was quite interesting; she bluntly stated that the retail user business is actually very challenging, involving a lot of customer support, higher security responsibilities, and frequent product roadmap adjustments, among other things.
Many people see Trust Wallet's 200 million downloads and think that running a wallet is a good business, but the CEO herself emphasizes the pain of serving retail users. This indicates that a wallet's "fatness" cannot be achieved just by wanting it; while user relationships are valuable, the maintenance costs are also significant. This perspective is quite realistic and illustrates the true situation of many wallet service providers today.
More importantly, she mentioned that not all value is concentrated at the front end, and all parts of the value chain should develop fairly. This viewpoint serves as a dose of cold water on the "fat wallet" theory and explains why Trust Wallet is willing to collaborate with infrastructure projects like OpenLedger.
2) Has the turning point for specialized AI arrived?
Ram Kumar's judgment on the development path of AI is worth noting. He believes that AI is evolving from generality to specialization, similar to how Google derived vertical applications like LinkedIn and YouTube from general search. General AI models like ChatGPT will act like operating systems, while in the future, there will be more specialized models for specific use cases.
This aligns with my previous analysis of the trends in the web3AI industry. Trust Wallet found that general models could not solve specific problems in the crypto field when trying out AI features, which confirms this trend. Moreover, building specialized AI models requires high-quality data from vertical fields, which is precisely the problem OpenLedger aims to solve.
3) The dilemma of "unpaid labor" in data contribution
Ram Kumar bluntly stated that AI is a "trillion-dollar industry built on unpaid labor," which is a sharp critique. AI companies train models by scraping data from the internet, yet data contributors do not receive a share of the profits, which is indeed a structural issue.
OpenLedger's solution is to allow data contributors to receive long-term revenue sharing from AI models rather than selling data in a one-time transaction. Coupled with the wallet's global payment capabilities, this theoretically allows for frictionless cross-border value distribution.
However, there is a core issue: how to ensure data quality? Ram himself admitted that 90% of the open-source contributions on platforms like Hugging Face are not very useful. If the contributed data has limited value, no incentive mechanism will work.
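As a thought experiment, here is a toy sketch of contribution-weighted revenue sharing with an explicit quality score; the weights, scores, and payout rule are all invented for illustration and are not OpenLedger's actual mechanism:

```python
# Toy sketch of attribution-based revenue sharing for data contributors. The
# weights, quality scores, and payout rule are invented for illustration; this
# is not OpenLedger's actual mechanism.

contributors = {
    # address: (data_volume_gb, quality_score in [0, 1])
    "0xAlice": (120, 0.9),
    "0xBob":   (500, 0.1),   # lots of data, little value: the Hugging Face problem
    "0xCarol": (60,  0.8),
}

model_revenue_this_epoch = 10_000   # USD earned by the model in this period

weights = {addr: vol * quality for addr, (vol, quality) in contributors.items()}
total_weight = sum(weights.values())

for addr, w in weights.items():
    payout = model_revenue_this_epoch * w / total_weight
    print(f"{addr}: ${payout:,.2f}")
# Quality weighting does all the work here: without it, Bob's 500 GB of
# low-value data would capture most of the revenue.
```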
That's all.
Eowyn Chen used the analogy of "gun rights" to compare self-custody, emphasizing that AI features are optional, and users can choose between convenience and security. This product philosophy is correct, but how to "clearly present options" tests product design capabilities.
Ram also mentioned an interesting judgment: crypto wallets are the only way for users to earn data contribution rewards globally. This means that the role of wallets may evolve from merely asset management tools to infrastructure for digital identity and value distribution.
Note: For more information, you can visit The Rollup's YouTube channel to check out this interview.


The Rollup · Aug 5, 08:31
NEW EP: The New Era Of Distribution with Ram Kumar & Eowyn Chen
In today's episode, @ayyyeandy sits down with @Ramkumartweet from @OpenledgerHQ and @EowynChen from @TrustWallet to explore:
>The "Fat Wallet Thesis" vs Fat Protocol Theory
>How Trust Wallet Plans to Integrate AI
>The Security Risks of AI-Powered Wallets
>OpenLedger's Vision for Rewarding Data Contributors
>Why Generic AI Will Never Work for DeFi
Full episode links below.
Timestamps:
00:00 Intro
00:20 Ram and Eowyn’s Crypto & AI Backgrounds
02:50 Trust Wallet’s User Value
07:43 Starknet Ad
08:10 OpenLedger on AI Evolution
11:03 Future of Wallet Interfaces
17:12 Self Custody AI Guardrails
21:22 Mantle Ad
22:02 Training Better AI Models
28:19 What’s Next for OpenLedger and Trust Wallet
5.17K
After chewing over @VitalikButerin's latest statements about fast L2 withdrawals, I find them quite interesting. In short: he believes that achieving withdrawals within one hour matters more than reaching Stage 2. The logic behind this shift in priorities is worth thinking through:
1) The one-week withdrawal waiting period has indeed become a significant issue in practical applications, not only resulting in poor user experience but also significantly increasing cross-chain costs.
For instance, with intent-based bridging solutions like ERC-7683, liquidity providers have to bear a week of capital occupation costs, which directly raises cross-chain fees. As a result, users are forced to choose to trust multi-signature solutions with weaker assumptions, which goes against the original intention of L2.
Therefore, Vitalik proposed a 2-of-3 hybrid proof system (ZK+OP+TEE), where ZK and TEE can provide immediacy, and both TEE and OP have sufficient production verification. Theoretically, any two systems can ensure security, thus avoiding the time cost of simply waiting for ZK technology to mature completely.
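A minimal sketch of that 2-of-3 acceptance rule, with the three proof systems stubbed out; this only illustrates the decision logic Vitalik describes, not any production implementation:

```python
from typing import Callable, Dict

# Minimal sketch of the 2-of-3 acceptance rule (ZK + OP + TEE) described above.
# The verifiers are stubs; only the decision logic is illustrated.

def accept_withdrawal(claim: dict, verifiers: Dict[str, Callable[[dict], bool]]) -> bool:
    """Accept the withdrawal if at least 2 of the 3 independent systems approve it."""
    votes = {name: verify(claim) for name, verify in verifiers.items()}
    approvals = sum(votes.values())
    print(f"votes={votes} -> {'accepted' if approvals >= 2 else 'rejected'}")
    return approvals >= 2

verifiers = {
    "zk":  lambda claim: True,    # validity proof checks out immediately
    "tee": lambda claim: True,    # TEE attestation also immediate
    "op":  lambda claim: False,   # optimistic challenge window still open, no vote yet
}

# Normal case: ZK + TEE agree right away, so the withdrawal clears quickly instead
# of waiting out the week-long optimistic window; no single system can fake or
# block a withdrawal on its own.
accept_withdrawal({"l2_block": 123, "amount_eth": 5}, verifiers)
```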
2) Additionally, Vitalik's new statement gives the impression that he has become more pragmatic? From previously being a youthful idealist full of "decentralization crusades" and "anti-censorship" rhetoric, he has now transformed into a pragmatic figure directly providing hard metrics: one-hour withdrawals, twelve-second finality, everything has become straightforward and brutal.
Previously everyone was focused on the degree of decentralization in Stage 2, but now Vitalik states directly that quick withdrawals are more important, which effectively re-prioritizes the entire L2 landscape. This is really paving the way for the endgame of the "Rollup-Centric" grand strategy, letting Ethereum L1 truly become the unified settlement layer and liquidity center. Once quick withdrawals and cross-chain aggregation are in place, it becomes much harder for other public chains to compete with the Ethereum ecosystem.
The reason Vitalik is doing this is that the market has already voted with its feet: it doesn't care about the technical slogans of decentralization, it cares about experience and efficiency. This shift from "ideal-driven" to "result-oriented" reflects the whole Ethereum ecosystem evolving in a more commercial, more competitive direction.
3) The question then becomes: to deliver on both user experience and the longer-term infrastructure goals, the Ethereum ecosystem will likely concentrate in the near term on maturing ZK technology and bringing its cost under control.
From the current situation, although ZK technology is progressing rapidly, cost remains a real constraint. A ZK proof costing over 500k gas means that, in the short term, only hourly submission frequencies can be achieved. To realize the ultimate goal of twelve seconds, breakthroughs in aggregation technology will still be necessary.
The logic here is clear: the cost of frequently submitting proofs for a single Rollup is too high, but if the proofs of N Rollups can be aggregated into one, spreading the cost across each slot (12s) becomes economically feasible.
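A back-of-envelope version of that logic, with assumed gas price and ETH price (only the 500k-gas figure and the 12-second slot come from the post):

```python
# Back-of-envelope on why aggregation changes the economics. Only the 500k-gas
# proof size and the 12-second slot come from the post; gas and ETH prices are assumed.

proof_gas = 500_000
gas_price_gwei = 10
eth_usd = 3_000
slots_per_hour = 3600 // 12                     # 300 slots per hour

cost_per_proof_usd = proof_gas * gas_price_gwei * 1e-9 * eth_usd    # ~$15 per submission

solo_every_slot = cost_per_proof_usd * slots_per_hour               # one rollup, per hour
n_rollups = 20
aggregated_share = cost_per_proof_usd * slots_per_hour / n_rollups  # per rollup, per hour

print(f"one proof submission          : ~${cost_per_proof_usd:,.0f}")
print(f"solo rollup proving every slot: ~${solo_every_slot:,.0f}/hour")
print(f"20 rollups sharing one proof  : ~${aggregated_share:,.0f}/hour each")
# Roughly hourly submission keeps a solo rollup's cost near one proof per hour,
# while a shared aggregated proof makes per-slot settlement affordable.
```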
This also presents a new technical route for the L2 competitive landscape; those L2 projects that can achieve breakthroughs in ZK proof optimization may find their footing, while those still stubbornly pursuing Optimism's optimistic proofs are likely to lose their direction.

vitalik.eth · Aug 7, 00:29
Amazing to see so many major L2s now at stage 1.
The next goal we should shoot for is, in my view, fast (<1h) withdrawal times, enabled by validity (aka ZK) proof systems.
I consider this even more important than stage 2.
Fast withdrawal times are important because waiting a week to withdraw is simply far too long for people, and even for intent-based bridging (eg. ERC-7683), the cost of capital becomes too high if the liquidity provider has to wait a week. This creates large incentives to instead use solutions with unacceptable trust assumptions (eg. multisigs/MPC) that undermine the whole point of having L2s instead of fully independent L1s.
If we can reduce native withdrawal times to under 1h short term, and 12s medium term, then we can further cement the Ethereum L1 as the default place to issue assets, and the economic center of the Ethereum ecosystem.
To do this, we need to move away from optimistic proof systems, which inherently require waiting multiple days to withdraw.
Historically, ZK proof tech has been immature and expensive, which made optimistic proofs the smart and safe choice. But recently, this is changing rapidly. is an excellent place to track the progress of ZK-EVM proofs, which have been improving rapidly. Formal verification on ZK proofs is also advancing.
Earlier this year, I proposed a 2-of-3 ZK + OP + TEE proof system strategy that threads the needle between security, speed and maturity:
* 2 of 3 systems (ZK, OP) are trustless, so no single actor (incl TEE manufacturer or side channel attacker) can break the proof system by violating a trust assumption
* 2 of 3 systems (ZK, TEE) are instant, so you get fast withdrawals in the normal case
* 2 of 3 systems (TEE, OP) have been in production in various contexts for years
This is one approach; perhaps people will opt to instead do ZK + ZK + OP tiebreak, or ZK + ZK + security council tiebreak. I have no strong opinions here, I care about the underlying goal, which is to be fast (in the normal case) and secure.
With such proof systems, the only remaining bottleneck to fast settlement becomes the gas cost of submitting proofs onchain. This is why short term I say once per hour: if you try to submit a 500k+ gas ZK proof (or a 5m gas STARK) much more often, it adds a high additional cost.
In the longer term, we can solve this with aggregation: N proofs from N rollups (plus txs from privacy-protocol users) can be replaced by a single proof that proves the validity of the N proofs. This becomes economical to submit once per slot, enabling the endgame: near-instant native cross-L2 asset movement through the L1.
Let's work together to make this happen.
14.15K