
Yupp: A Refreshing Take on "Train-to-Earn"

By rewarding users transparently, Yupp reframes AI training as participatory, auditable labor.
Jun 27, 2025 · 4 min read

A few weeks back, Yupp announced a $33M seed round led by a16z's crypto arm, with participation from heavyweights like Google Chief Scientist Jeff Dean, Twitter co-founder Biz Stone, Pinterest co-founder Evan Sharp, and Coinbase Ventures. 

Beyond being a notable raise, the platform's core premise is interesting: Yupp aims to improve the human-feedback process in AI training and repurpose that feedback into a more objective benchmark for rating AI performance.

Human feedback is already essential to refining AI models, but today it's largely uncredited, unrewarded, and controlled by a handful of tech giants. Yupp's approach is to flip this dynamic, using onchain infra to make feedback transparent, auditable, and financially rewarding for users.

The Human Feedback Problem

While human feedback drives AI training, the current system is fundamentally broken. 

Companies don't share their training processes or feedback pipelines due to competitive pressures, limiting external input and review while creating opaque development cycles. And the users who do provide feedback never see how their interactions shape future models, nor are they compensated for what is essentially unpaid labor.

How Yupp Addresses This

Yupp's solution is straightforward: reward users for quality feedback while creating a transparent, blockchain-recorded system that documents all contributions.

It does this by baking feedback directly into the act of prompting, rather than tacking it on as a follow-up task at the end, when users are already ready to sign off.

Here's how it works:

1. On Yupp’s home page, enter prompts for whatever you need AI help with. Prompts cost 50 Yupp credits by default, with the cost varying by model and use case; credits are provided on sign-up and earned through the feedback process.

2. See multiple AI-generated responses side-by-side from different models.

3. Pick the best option and provide feedback under categories like "better style" or "faster," with detailed reasoning. Your choices generate digitally signed preference packets recorded onchain for transparent AI training and reward attribution (see the sketch after this list).

4. Get rewarded with either credits to use more AI models for free or direct cash payouts via Stripe, PayPal, Coinbase, and/or stablecoins on Base and Solana. (You need a minimum of 6,000 credits to cash out, though.) You can also earn bonus credits via referrals.
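For the technically curious, here's a minimal TypeScript sketch of what a signed preference packet might contain. Yupp hasn't published its packet schema or signing scheme, so the field names and the Ed25519 signature below are assumptions for illustration only.

```typescript
// A minimal sketch of a signed preference packet.
// Field names and the signing scheme are assumptions, not Yupp's actual format.
import { createHash, generateKeyPairSync, sign, KeyObject } from "node:crypto";

interface PreferencePacket {
  promptId: string;         // prompt the comparison was generated from
  comparedModels: string[]; // models shown side-by-side
  winner: string;           // the model the user picked
  tags: string[];           // feedback categories, e.g. "better style", "faster"
  reasoning: string;        // the user's detailed explanation
  timestamp: number;        // unix time of the vote
}

// Hash and sign the packet so the vote can later be attributed and audited.
function signPacket(packet: PreferencePacket, privateKey: KeyObject) {
  const payload = Buffer.from(JSON.stringify(packet));
  return {
    packet,
    digest: createHash("sha256").update(payload).digest("hex"),
    signature: sign(null, payload, privateKey).toString("base64"), // Ed25519
  };
}

// Example vote: the user preferred model-b for style.
const { privateKey } = generateKeyPairSync("ed25519");
const signed = signPacket(
  {
    promptId: "prompt-123",
    comparedModels: ["model-a", "model-b"],
    winner: "model-b",
    tags: ["better style"],
    reasoning: "Tighter wording, fewer filler sentences.",
    timestamp: Date.now(),
  },
  privateKey
);
console.log(signed.digest, signed.signature);
```

The point of the signature is attribution: anyone can later verify that a given vote came from a given user at a given time, which is what makes the feedback auditable and rewardable.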

Yupp Leaderboard: A Better Way to Rank Models

As mentioned, beyond rewards, Yupp is also tackling AI benchmarking.

When companies announce their newest "best-ever" model that outperforms on every benchmark, you should expect it to, since the vast majority of those benchmarks are set internally. For example, controversy stirred when OpenAI's o3 model scored lower on independent FrontierMath tests than it had internally. This isn't to say AI companies are lying; more likely, they're moving the goalposts.

Even external benchmarks, like LM Arena, have issues. There, reports suggest leading AI companies were allowed to privately test multiple versions of their models before public release. It's like letting the richest students retake a test until they get the grade they want.

Amid all this mess, Yupp's leaderboard offers an alternative: ranking models based on aggregate user feedback rather than opaque company benchmarks. The system uses a metric called VIBE (Vibe Intelligence Benchmark), which rates models on their popularity in everyday use according to real Yupp users.

The platform provides both overall rankings and granular categories like image generation, speed, reasoning ability, and best open models. This approach moves evaluation from corporate-controlled environments to actual user preferences, aiming to create a more democratic and transparent ranking system.
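Yupp hasn't disclosed the VIBE formula, but here's a minimal sketch of how pairwise "this response won" votes could be rolled up into a leaderboard, assuming an Elo-style rating update of the kind arena-style leaderboards commonly use.

```typescript
// A minimal sketch of aggregating pairwise preference votes into a ranking.
// The Elo-style update and starting rating are assumptions, not the VIBE formula.
type Vote = { winner: string; loser: string };

function rankModels(votes: Vote[], k = 32): Map<string, number> {
  const ratings = new Map<string, number>();
  const get = (m: string) => ratings.get(m) ?? 1000; // every model starts at 1000

  for (const { winner, loser } of votes) {
    const rw = get(winner);
    const rl = get(loser);
    // Expected probability that the winner beats the loser under the Elo model.
    const expectedWin = 1 / (1 + 10 ** ((rl - rw) / 400));
    // Upsets (low expectedWin) move ratings more than predictable outcomes.
    ratings.set(winner, rw + k * (1 - expectedWin));
    ratings.set(loser, rl - k * (1 - expectedWin));
  }
  return ratings;
}

// Example: three head-to-head votes between two models.
const votes: Vote[] = [
  { winner: "model-b", loser: "model-a" },
  { winner: "model-b", loser: "model-a" },
  { winner: "model-a", loser: "model-b" },
];
console.log([...rankModels(votes).entries()].sort((a, b) => b[1] - a[1]));
```

A production system would presumably also weight votes by category, recency, and voter quality; the key idea is simply that rankings fall out of user preferences rather than vendor-chosen test suites.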

The Train-to-Earn Evolution

Far beyond a flashy raise announcement, Yupp stands out in the emerging “Train-to-Earn” vertical by smoothly integrating feedback into the everyday activity of prompting, rather than as a tedious data-labeling carousel. The design aligns model training with users' goals — people can use AI for their actual needs while contributing to feedback loops simultaneously.

For me, the side-by-side comparison format adds practical value beyond just earning rewards. Yupp users get something like built-in fact-checking when sourcing information, making the experience feel productive rather than like unpaid labor. Onboarding is smooth, too: users can start by simply connecting their Google account.

Compared to other platforms in this space, Yupp's feedback mechanism feels less extractive and more integrated into normal AI usage patterns. Rather than asking users to complete artificial tasks for rewards, it captures genuine preferences during real interactions.

However, the reward system isn't perfect. Credits are distributed sporadically rather than consistently, which can be frustrating. And it's hard not to think of the "only jobs left in the future" meme.

Still, in its current iteration, the experience is polished and intuitive, offering a way to access higher-tier models (albeit somewhat at random) at no cost. With its significant funding and experienced team, Yupp is an intriguing way to prompt and earn.

Not financial or tax advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This newsletter is not tax advice. Talk to your accountant. Do your own research.

Disclosure. From time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here.