
The Intelligence We Rent

2026/02/10 15:44
5 min read

How borrowed AI becomes leverage over the people who use it.

With new capabilities come new responsibilities, and AI is a capability that spreads fast because it fits inside everything we already do. We can’t view it as a single invention. It’s a total disruptor of the way we do things, already integrated into search, customer support, design, trading, education, hiring, and governance. Each of these individual integrations is small enough to accept without debate, and together they are large enough to change how society makes decisions and, ultimately, how society works.

With new knowledge come new forms of authority, because whoever controls the production of knowledge eventually controls the terms of reality. AI compresses expertise into a tool that anyone can use. At the same time, it concentrates leverage in whoever owns the training system behind it, the distribution channels, and the permissions that decide how the tool can be used: by the public, and, more importantly, by the owners to influence the public.

These layers share a tension at the center of AI’s rapid expansion: it democratizes capability while centralizing control. And most of us experience only the first half — the convenience, the speed, the usefulness — while losing sight of the second half.

Researchers have been documenting and debating these layers for some years now. Shoshana Zuboff’s work on surveillance capitalism traces how platforms turned human behavior into raw material for prediction — extracting data far beyond what’s needed to provide a service, then selling those predictions to advertisers, insurers, employers, and governments.

Kate Crawford’s Atlas of AI follows the supply chains behind the clean interfaces: the mines, the data centers, the underpaid workers labeling images so the systems appear to run themselves. Stuart Russell, one of the field’s most respected voices, warns that the standard approach to AI development — define an objective, optimize for it — breaks down when the objective doesn’t actually align with human preferences, which are uncertain, contextual, and often contradictory.

What connects these different critiques is a shared observation: the way AI is currently being built serves particular interests, and those interests are not primarily yours. The convenience is real, but it’s not the point. The point is the data, the predictions, the leverage. You get a better search result while they get a more accurate model of your behavior. When a service is free, the question to ask is what’s being sold instead. In most cases, it’s access to you: your attention, your patterns, your future decisions. The AI gets smarter with every interaction, and that intelligence becomes an asset owned by whoever controls the platform. You contribute to it constantly. You don’t own any of it.

The concentration aspect deepens the problem. Right now, a handful of companies control the foundational models that everyone else builds on. OpenAI, Google, Anthropic, Meta are not just tech companies anymore. They’re becoming infrastructure providers, and the rest of the economy is starting to depend on them the way it depends on electricity or telecommunications. When OpenAI’s API goes down, thousands of applications break. When a model gets updated and its behavior shifts, products built on top of it fail in ways their developers didn’t anticipate. We’re constructing dependencies on systems we don’t control, maintained by companies whose priorities are not transparent and whose decisions are not accountable to the people affected by them.

This is simply a call for transparency about what’s being built and who it serves. AI infrastructure is taking shape right now, and infrastructure is sticky: once it’s in place, everything else gets built on top of it. The assumptions encoded today become the defaults of tomorrow.

This is the context in which SourceLess has been integrating AI into its Web3 ecosystem, which connects digital identity, communication, and finance within an infrastructure that provides and protects ownership and privacy.

The problems that Crawford, Zuboff, and Russell describe are structural, and no single project resolves them. But we do think the design choices matter, and we’ve tried to make different ones.

ARES AI is built as an assistive layer, not a prediction engine. It connects to your STR Domain — your self-owned digital identity within the SourceLess ecosystem — which means it doesn’t need to harvest behavioral data to function. It’s not optimizing for engagement or time-on-platform. It’s not selling predictions about you to third parties. The goal is to help you navigate complexity: answer questions, guide onboarding, automate repetitive tasks, support decision-making. Infrastructure that works for the user, not on the user.

This doesn’t make it neutral or perfect. Every system encodes choices, and those choices have consequences. But we believe there’s a difference between AI designed around extraction and AI designed around assistance, and that difference matters more as these systems become foundational to how we live and work.

This article is the first in a series where we’ll explore these questions in more depth.

We’ll look at what it means for intelligence to become infrastructure — who controls it, what happens when it fails, what alternatives are possible. We’ll draw on the work of researchers like Crawford, Zuboff, Russell, and Jaron Lanier, who has spent years arguing that “free” AI services are never actually free. We’ll examine the alignment problem, the concentration of power in a handful of companies, and the choices that are still available before the architecture locks in.

And we’ll share more about how we’re trying to build differently with ARES AI as a case study in what it looks like to take these questions seriously.

More soon.

Learn more about SourceLess and ARES AI: sourceless.net and SourcelessAres ai


The Intelligence We Rent was originally published in Coinmonks on Medium.

