Every Investment Record You've Ever Seen Is a Lie

Madoff faked returns for decades and nobody checked. The system that let him do it hasn't been fixed. Here's why almost every investment record you've seen is a lie.

In 2001, a journalist at Barron's named Erin Arvedlund published an article called "Don't Ask, Don't Tell." It was about a fund manager named Bernie Madoff. Her sources, all of them sophisticated investors, couldn't explain how Madoff generated his returns. One of them told Barron's, "Even knowledgeable people can't really tell you what he's doing." That same year, a hedge fund publication called MARHedge pointed out that Madoff had racked up seventy-two consecutive winning months. Not seventy-two good months. Seventy-two months without a single loss. In any market.

Nobody cared. Madoff had the best track record on Wall Street and people were making money.

Madoff's bookkeeper, a woman named Annette Bongiorno, was fabricating the entire thing. She'd read the Wall Street Journal after the market closed, pick which stocks had gone up that day, and then backdate trades to make it look like Madoff's fund had bought those stocks before the move. That was the strategy. Read today's newspaper. Pretend you traded yesterday. She did this for decades.

When it finally collapsed in 2008, investors lost sixty-five billion dollars. The SEC had received six separate complaints about Madoff between 1992 and 2008 and investigated none of them seriously. His auditor was a one-man accounting firm working out of a thirteen-by-eighteen-foot office in a strip mall. Nobody checked. Nobody verified. Because the track record was so good that questioning it felt stupid.

I bring this up not just because Madoff is a famous story. I bring it up because the mechanism he exploited, the fact that there is no universal system for verifying investment performance, has not been fixed. It's 2026 and we still have the same fundamental problem. Anyone can claim anything about their returns, and proving them wrong is nearly impossible.

THE GRAVEYARD YOU NEVER SEE

S&P Global publishes something called the SPIVA Scorecard. It's been running for over twenty years. The most recent edition, covering data through December 2024, found that sixty-five percent of actively managed large-cap U.S. equity funds underperformed the S&P 500 in 2024 alone. That's already bad. But it gets so much worse.

Over fifteen years, more than ninety percent of large-cap funds underperformed. And here's the number that should really bother you: over a twenty-year horizon, nearly sixty-four percent of domestic stock funds that existed at the start of the period were gone entirely by the end. Not underperforming. Gone. Merged into other funds, liquidated, quietly shut down.

This is survivorship bias, and while most people have heard the term, I don't think most people understand how extreme it is in practice.

Morningstar studied every U.S. mutual fund and ETF that launched between 2005 and 2020. They found that forty-four percent of funds didn't survive to their fifth birthday. Almost half. And the ones that die aren't random. They die because they're losing money. So when you look at the funds that are still around and think, "Okay, the average fund returned X percent," you're only looking at the winners. The losers have been deleted from the record.

Dimensional Fund Advisors quantified this exactly. They looked at 1,557 surviving funds and 2,545 non-surviving funds. That means there were more dead funds than living ones. When they measured alpha, the ability to beat the market after adjusting for risk factors, the median surviving fund had an alpha of negative seven basis points per month. Already negative. Include the dead funds and it drops to negative twelve basis points per month. Survivorship bias was overstating the median alpha by roughly fifty percent.

But the really devastating number was this: looking only at survivors, four and a half percent of funds earned a statistically significant positive alpha. Include the dead funds, and it drops to two point four percent. Which is actually less than the three point two percent you'd expect from pure chance alone.

Let me say that again. When you account for every fund that has ever existed, the number of funds with statistically proven skill is lower than what you'd get from flipping coins. The entire active management industry, at the aggregate level, shows less evidence of skill than random chance.
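
To see why pure luck produces a baseline of "statistically proven" winners, here is a minimal simulation. The numbers (10,000 funds, ten years of monthly data, 2% monthly volatility) are illustrative assumptions, not the Dimensional study's methodology; the point is only that a universe with zero real skill still produces funds that clear the usual significance bar.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative universe: 10,000 funds, 120 monthly alpha observations,
# and NO fund has any real skill (true alpha is exactly zero for all).
n_funds, n_months = 10_000, 120
alphas = rng.normal(0.0, 0.02, size=(n_funds, n_months))  # pure noise

# t-statistic of each fund's average alpha
t = alphas.mean(axis=1) / (alphas.std(axis=1, ddof=1) / np.sqrt(n_months))

# Share of zero-skill funds that clear the usual two-sided 5% bar anyway
lucky = (t > 1.96).mean()
print(f"{lucky:.1%} of zero-skill funds look statistically 'skilled'")
```

Run it and a few percent of the funds "prove" positive skill by chance alone. That's the yardstick the 2.4 percent figure has to beat, and it doesn't.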

THE HEDGE FUND ILLUSION

Mutual funds are at least regulated. They have to report to databases. Hedge funds are worse. Way worse.

Hedge fund returns are reported voluntarily. Think about that for a second. If your fund is doing well, you report your returns because it's free marketing. If your fund is tanking, you just stop reporting. Nobody makes you disclose that you lost thirty percent. You go quiet, you close the fund, and your bad numbers disappear.

Bing Liang at the University of Massachusetts compared two major hedge fund databases and found that survivorship bias exceeded two percent per year. That might sound small, but it's enormous. Ibbotson and Chen looked at the Tremont TASS database from 1995 to 2006 and found a survivorship bias of 2.74% per year. And when they added in backfill bias, which occurs when a fund joins a database and gets to retroactively add its good early returns, the total overstatement jumped to 5.68% per year.

5.68% per year. That's not a rounding error. That's the difference between a strategy that's printing money and a strategy that's doing literally nothing.

So when you see a headline like "Hedge funds returned twelve percent last year," what that actually means is: the hedge funds that chose to report their returns, which excludes the ones that blew up and the ones that were too embarrassed to share, returned twelve percent. The actual average experience of someone who put money into a hedge fund that year was meaningfully worse than that number.
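
The mechanics of voluntary reporting are easy to reproduce. This sketch uses made-up numbers (10,000 funds, a 5% true mean return, a 15% spread, and an assumed rule that funds losing more than 15% simply go quiet); it is not calibrated to any of the studies above, but it shows how a percentage-points-per-year bias falls straight out of selective disclosure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers, not from any study: 10,000 funds whose true annual
# returns average 5% with a 15% standard deviation.
true_returns = rng.normal(0.05, 0.15, size=10_000)

# Voluntary reporting: assume funds that lose more than 15% stop reporting.
reported = true_returns[true_returns > -0.15]

bias = reported.mean() - true_returns.mean()
print(f"true mean return:     {true_returns.mean():+.1%}")
print(f"reported mean return: {reported.mean():+.1%}")
print(f"survivorship bias:    {bias:+.1%} per year")
```

Deleting less than a tenth of the universe is enough to shift the "industry average" by a couple of percentage points a year, which is exactly the order of magnitude the database studies found.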

WHEN EVEN REAL TRACK RECORDS LIE TO YOU

Okay. So fake track records and survivorship bias are obvious problems. But here's where it gets genuinely interesting. Even completely real, fully verified, honestly reported track records can be deeply misleading. And the story of Bill Miller is the clearest example of this in modern finance.

Bill Miller managed the Legg Mason Capital Management Value Trust. From 1991 to 2005, his fund beat the S&P 500 every single year. Fifteen consecutive years. Michael Mauboussin, then the chief investment strategist at Legg Mason, estimated the probability of that streak at one in 2.3 million.

That stat made Miller a financial god. Fortune magazine. Conference keynotes. Billions in inflows. He was the greatest mutual fund manager of his generation.

Then in 2007, Miller loaded up on financial stocks. Bear Stearns, Citigroup, AIG. On the Friday before Bear Stearns collapsed, Miller was publicly bragging about buying shares at thirty dollars, down from a hundred and fifty-four. The Wall Street Journal wrote that he "spent nearly two decades building his reputation as the era's greatest mutual fund manager. Then over the past year, he destroyed it." His fund lost two thirds of its value. Over twelve billion in assets gone between losses and redemptions.

But here's the part almost nobody talks about, the part that actually matters. After Miller's streak ended, the physicist Leonard Mlodinow reframed the question in his book The Drunkard's Walk. Mauboussin's one-in-2.3-million stat asked: "What are the odds that this specific manager beats the market fifteen years in a row?" That's the wrong question. The right question is: "Given the thousands of fund managers operating over many decades, what are the odds that at least one of them beats the market fifteen years in a row at some point?"

The answer? About seventy-five percent. It would have been more surprising if nobody had done it.
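
You can check the shape of this argument yourself with a Monte Carlo simulation. The inputs here (3,000 managers, a 50-year span, a coin-flip 50% chance of beating the index each year) are illustrative assumptions rather than Mlodinow's exact figures, but under anything in that ballpark, a fifteen-year streak happening to someone becomes the likely outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: 3,000 managers, 50-year span, and a pure
# coin-flip 50% chance of beating the index in any given year.
n_sims, n_managers, n_years, streak = 400, 3_000, 50, 15

hits = 0
for _ in range(n_sims):
    beats = rng.random((n_managers, n_years)) < 0.5
    # Check every 15-year window of every career for an unbroken run of wins
    windows = np.lib.stride_tricks.sliding_window_view(beats, streak, axis=1)
    if windows.all(axis=2).any():
        hits += 1

print(f"P(someone beats the market {streak} years straight) ≈ {hits / n_sims:.0%}")
```

With these inputs the probability comes out well above one half, even though any individual manager's chance is a few in ten thousand. The streak belongs to the crowd, not the person.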

Bill Miller himself seemed to understand this better than anyone. He once said, "As for the so-called streak, that's an accident of the calendar. If the year ended on different months it wouldn't be there. We've been lucky. Well, maybe it's not 100% luck. Maybe 95% luck."

And this gets to the fundamental statistical problem that most people don't grasp. Financial returns are incredibly noisy. The signal-to-noise ratio is terrible. Andrew Lo at MIT showed that for a fund with a Sharpe ratio of 0.5, which is actually pretty good, you need years and years of data before you can say with statistical confidence that the returns aren't just random. Multiple researchers have put the number at roughly sixteen years for ninety-five percent confidence. Sixteen years. Most hedge funds don't survive five.

So even in the absolute best case, where someone is being completely honest with fully audited returns, three years of great performance is not a track record. It's a single data point. Five years is barely better. The math doesn't care how impressive the numbers look.
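
The back-of-the-envelope version of this math is short. The t-statistic of an annualized Sharpe ratio SR measured over T years is roughly SR times the square root of T, so two-sided 95% significance requires T of at least (1.96/SR) squared. This simple approximation lands just under the sixteen-year figure; the corrections Lo derived for things like return autocorrelation push it higher.

```python
# Years of data needed before an annualized Sharpe ratio is statistically
# distinguishable from zero: t-stat ≈ SR * sqrt(T), so T >= (Z / SR)^2.
Z = 1.96  # two-sided 95% critical value

for sr in (0.25, 0.5, 1.0, 2.0):
    years = (Z / sr) ** 2
    print(f"Sharpe {sr:4.2f}: ~{years:5.1f} years of data needed")
```

A genuinely good fund (Sharpe 0.5) needs about fifteen years of evidence; a mediocre one (Sharpe 0.25) needs over sixty. Only a strategy with a Sharpe near 2, which is rare and usually short-lived, can prove itself inside a typical fund's lifespan.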

HOW THE GAME IS PLAYED

Once you understand the track record problem, you start to see the tricks absolutely everywhere. And I don't just mean retail traders photoshopping screenshots. I mean institutional, SEC-registered, supposedly legitimate operations.

Incubation bias. This is the most elegant trick in fund management and it's completely legal. A firm launches ten funds simultaneously with ten different aggressive strategies. After two years, eight of them are flat or down. Two of them, purely by chance, are up significantly. The firm closes the eight losers. Nobody ever hears about them. The two winners get marketed as the firm's flagship products with "verified" two-year track records. Academics call this incubation bias. The industry calls it business development.
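
Here is the trick in ten lines, with made-up numbers (ten funds, 24 months, 4% monthly volatility, and every fund pure noise with zero expected return):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical incubation program: 10 funds, 24 months, every one of them
# pure noise -- zero expected return, 4% monthly volatility (illustrative).
monthly = rng.normal(0.0, 0.04, size=(10, 24))
cumulative = (1 + monthly).prod(axis=1) - 1  # two-year track records

# Market the top two, quietly liquidate the other eight
survivors = np.sort(cumulative)[-2:]

print(f"average across all ten incubated funds: {cumulative.mean():+.1%}")
print(f"'flagship' records shown to clients: {survivors[0]:+.1%}, {survivors[1]:+.1%}")
```

The two survivors carry genuinely real, auditable two-year returns. The selection step is the lie, and it's invisible in the numbers that remain.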

Cherry-picked time windows. "Our fund returned 47% in 2020." Sure. So did basically everything with equity exposure. The S&P 500 returned over 18% that year after recovering from a crash. Picking the starting and ending dates that make your returns look best is the oldest trick in fund marketing, and it works because most people never ask "compared to what?" or "starting when?"

Benchmark manipulation. A fund that invests in small-cap tech stocks compares itself to the S&P 500 instead of a small-cap tech index. Of course it outperformed during a tech bull market. It was taking way more risk in a sector that happened to run. The right benchmark would show mediocre or negative alpha, but that comparison never shows up in the pitch deck.

Backtest marketing. "Our model would have returned 200% over the last ten years." Backtests are not track records. I know this firsthand because I have built strategies that returned a thousand percent in backtesting and did absolutely nothing in live trading. You can overfit a model to any historical dataset until it shows incredible returns. The gap between backtest and live performance is one of the most reliable phenomena in quantitative finance. It almost always gets worse.

THE SOCIAL MEDIA LAYER

Now take all of those institutional tricks and apply them to social media, where there are zero reporting requirements, zero audits, and zero consequences for lying.

The incentive structure on financial social media is perfectly designed to produce false information about performance. If you post a big win, you get followers, engagement, subscribers, maybe course sales or Discord revenue. If you post a loss, you get silence or mockery. So every rational person does the same thing: post the wins, hide the losses. Over time, your public feed becomes a highlight reel that makes it look like you never lose.

This connects to something that Delta Trend has made really clear in his work on trading edges. If someone genuinely had a profitable strategy, they would guard it. They'd sign NDAs. They'd raise capital from investors. They would not be selling it in a course or posting it on YouTube for their 1.2 million followers to replicate. That's not how edges work. Edges get crowded and die. The existence of the course is evidence against the strategy working.

But the track record problem goes deeper than that. Even if someone isn't selling anything. Even if they're just posting honestly about their trades. The audience has no way to verify anything. A tweet saying "Called the $NVDA breakout at $450, now it's at $900" gets ten thousand likes. Nobody goes back and checks whether that same person also called fifteen other breakouts that never happened. Nobody calculates the ratio of correct calls to total calls. There's no mechanism for that. The wins go viral. The misses disappear.

Daniel Kahneman won a Nobel Prize partly for documenting exactly this kind of judgment error. Humans are wired to remember their wins more vividly than their losses and to construct narratives in which they're smarter than they actually are. Social media takes that natural bias and amplifies it with financial incentives until the information environment becomes completely useless for evaluating skill.

And then there are the people who are just straight-up lying. Fake screenshots. Paper trading accounts presented as real money. Demo accounts with zero capital at risk. I've seen all of it. And the audience, especially newer investors, has essentially no way to tell the difference.

WHY THIS HASN'T BEEN FIXED

The technology to solve this has existed for years. Brokerage APIs exist. Plaid exists. OAuth exists. You could build a system where someone's actual trades are pulled directly from their brokerage, cryptographically verified, time-stamped, and displayed with full transparency. Real returns. Real drawdowns. Real risk-adjusted metrics. No screenshots. No self-reporting. The brokerage confirms the data is real.
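
The metrics side of such a system is not the hard part. As a sketch: given a broker-confirmed daily equity curve (simulated here as a stand-in; in a real system the series would come from the brokerage API), total return, maximum drawdown, and a Sharpe ratio are a few lines each. These are standard formulas, not any particular platform's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for broker-confirmed data: one year of daily returns.
daily_returns = rng.normal(0.0003, 0.01, size=252)
equity = np.cumprod(1 + daily_returns)  # daily account value, starting at 1

# Metrics no screenshot can fake once the underlying series is verified:
total_return = equity[-1] - 1
running_peak = np.maximum.accumulate(equity)
max_drawdown = ((equity - running_peak) / running_peak).min()
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)

print(f"total return: {total_return:+.1%}")
print(f"max drawdown: {max_drawdown:+.1%}")
print(f"Sharpe ratio: {sharpe:.2f}")
```

The hard part, as the rest of this section argues, is getting anyone with a bad track record to plug their account in.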

When I was building a platform that aggregates stock data and research, this was one of the first problems I ran into. You can build the most sophisticated analysis tools in the world, but if the underlying performance data isn't verified, the whole foundation is sand.

The resistance to solving this comes from two places. First, people with real money and real edge genuinely don't want to share their returns publicly. That's legitimate. Competitive reasons, privacy, not wanting to attract copycats. Fair enough.

But the second reason is the uncomfortable one. A huge number of people in the finance content space, and honestly in the institutional space too, directly benefit from the current system where nothing is verified. They benefit from survivorship bias. They benefit from the ability to cherry-pick windows. They benefit from the fact that most people can't distinguish a good backtest from a real track record. Verification would expose that many of the loudest voices have mediocre or losing track records. And the people who would be hurt by transparency are the same people who would need to adopt it.

This is the same dynamic that Delta Trend talks about with trading courses. The incentive of the person selling the product is directly opposed to the interest of the person buying it. If verification existed, a lot of revenue streams would evaporate overnight. So nobody builds it.

WHAT TO ACTUALLY DO WITH THIS INFORMATION

Here is the framework. It's simple, but if you actually apply it, you will immediately see the investing world differently.

One. Treat all unverified performance claims as entertainment, not evidence. Screenshots. Self-reported returns. "I called it" tweets. Fun to look at. Zero informational value. Enjoy them the way you'd enjoy a poker player's story about a big hand. It might be true. It might not. Either way, it tells you nothing about their edge.

Two. Understand that even verified short-term records are mostly noise. The Dimensional study found that the proportion of funds with statistically significant alpha was lower than chance alone would produce. Three years of great returns doesn't mean someone is skilled. The math on this is brutal, and it doesn't make exceptions because someone sounds confident on camera.

Three. Pay more attention to process than to outcomes. This sounds like a cliche but it's statistically correct. A good process with a bad short-term outcome is more likely to indicate skill than a bad process with a good short-term outcome. How does someone think about risk? How do they size positions? What's their framework for being wrong? These questions tell you more than any P&L screenshot ever will.

Four. Be deeply skeptical of anyone who financially benefits from you believing they're a great investor. Course sellers. Signal services. Paid Discord groups. Funds raising capital. Their incentive is to appear skilled. And as we've covered, appearing skilled in a system with no verification is trivially easy.

And five. Always ask the questions that almost nobody asks. Is this verified? Over what time period? Compared to what benchmark? What's the sample size? What am I not being shown? What happened to the funds and the traders who aren't here anymore?

CONCLUSION

Madoff's bookkeeper fabricated trades by reading yesterday's Wall Street Journal. She did it for decades. The SEC received six complaints and investigated none of them seriously. Renaissance Technologies, arguably the most sophisticated hedge fund in the world, quietly cut its exposure to Madoff because the options volume didn't add up. They noticed. Almost nobody else did.

Bill Miller beat the S&P 500 for fifteen straight years, was called a genius, and then lost two thirds of his fund's value in a single year. A physicist later showed that someone, somewhere, pulling off that streak was more likely than not, purely by chance. Miller himself said it was ninety-five percent luck.

Over twenty years, sixty-four percent of U.S. stock funds were shut down or merged away. The ones that survived showed less statistically significant skill than you'd expect from a coin flip.

Every one of these facts points to the same conclusion: the investing world has a verification problem so deep that it distorts almost everything you see. The track records that look best are often the ones that hide the most. The survivors look inevitable only because the failures have been erased. And the loudest voices are usually the ones with the least incentive to tell you the truth.

You don't solve this by trusting the right person. You solve it by demanding proof. And until the infrastructure exists to provide that proof at scale, the single most valuable skill you can develop as an investor is the ability to look at an impressive track record and ask: what am I not being shown?