Three deals in your inbox. One shows 5% vacancy. One shows 8%. One says "market standard" without defining what market or what standard.
You want to know which deal is better. You can't tell. Not because you lack the skill. Because the inputs aren't comparable.
The Inconsistency Problem Made Concrete
Imagine three deals, all multifamily, all in secondary Sunbelt markets, all showing projected returns in a similar range:
- Deal A: 5% vacancy assumption. IRR 18.2%. Cap rate 6.8%.
- Deal B: 8% vacancy assumption. IRR 16.4%. Cap rate 7.1%.
- Deal C: "Market vacancy rates apply." IRR 17.9%. Cap rate 6.9%.
Which deal has the best risk-adjusted return? You can't determine that from this information, because the vacancy assumptions (the single input that most commonly separates aggressive from conservative underwriting) aren't defined the same way or benchmarked against the same data.
Deal A's 18.2% IRR might require below-market vacancy to hold. Deal B's 16.4% might survive a 10-point vacancy stress test with DSCR still above 1.25. Deal C's "market vacancy" might mean 5%, or 8%, or something the operator defines only inside their full documentation.
The deal with the best-looking IRR might simply have the most aggressive assumptions. You have no way to know without rebuilding each model from scratch.
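To make the gap concrete, here's a minimal sketch of how the vacancy assumption alone moves NOI and implied value. All figures are hypothetical illustrations, not drawn from any of the three deals:

```python
def noi(gross_potential_rent, vacancy_rate, expense_ratio):
    """Net operating income from gross potential rent."""
    effective_gross_income = gross_potential_rent * (1 - vacancy_rate)
    return effective_gross_income * (1 - expense_ratio)

GPR = 1_000_000      # gross potential rent, $/yr (assumed)
EXPENSES = 0.40      # operating expense ratio (assumed)
CAP_RATE = 0.068     # entry cap rate

noi_5 = noi(GPR, 0.05, EXPENSES)   # Deal A's 5% vacancy
noi_8 = noi(GPR, 0.08, EXPENSES)   # Deal B's 8% vacancy

print(f"NOI at 5% vacancy: ${noi_5:,.0f}")   # $570,000
print(f"NOI at 8% vacancy: ${noi_8:,.0f}")   # $552,000
print(f"Implied value gap at a {CAP_RATE:.1%} cap: "
      f"${(noi_5 - noi_8) / CAP_RATE:,.0f}")  # ≈ $264,706
```

Three percentage points of vacancy on the same hypothetical property is roughly a quarter-million dollars of implied value. When each operator picks that input themselves, the IRRs downstream aren't measuring the same thing.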
Why This Happens
There's no standardization requirement for private real estate deal presentation. Operators self-report their projections. No independent verification layer exists by default.
Each operator builds their model with their own definitions. Vacancy is whatever they've decided is appropriate — which creates an incentive to optimize for a number that makes the deal look competitive. Management fees might be 8% or might be excluded from the expense line entirely. CapEx reserves might be $200/unit or $600/unit.
CrowdStreet didn't just create fraud victims. It created a generation of investors who know — in their bones — that they can't trust what operators put in a pro forma. The fraud produced direct harm: capital distributed across deals that were not what they appeared to be. But the more durable damage was epistemic. CrowdStreet revealed that the platform's model — operator submissions with no independent verification layer — was structurally incapable of catching misrepresentation before capital was deployed.
This isn't unique to CrowdStreet. It's the default architecture of operator-submitted platforms. Any platform that relies entirely on operator submissions for deal information shares a version of this structural vulnerability.
Post-CrowdStreet, investors have adapted by cross-checking against 3–5 external sources per deal. This is rational behavior, but it means the investor is doing the platform's job.
The 5-Point Post-Fraud Verification Checklist
If you're evaluating deals on any platform that relies primarily on operator-submitted information, here are five independent verification steps that should be standard practice.
Verify NOI against actual rent rolls, not pro forma projections.
Request current rent rolls and trailing 12-month operating statements, not just the pro forma. Compare actual gross rental income and current occupancy against projected figures. A deal projecting $600,000 NOI on a property showing $520,000 trailing NOI requires the investor to underwrite the gap — what changes, when, and on what basis.
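A quick sketch of that gap check, using the figures above (the 10% flag threshold is an arbitrary assumption, not an industry standard):

```python
def noi_gap(pro_forma_noi, trailing_noi):
    """Fractional gap the operator's projection must bridge."""
    return (pro_forma_noi - trailing_noi) / trailing_noi

gap = noi_gap(600_000, 520_000)
print(f"Projected NOI is {gap:.1%} above trailing actuals")  # 15.4%
if gap > 0.10:  # flag threshold is an illustrative assumption
    print("Ask: what changes, when, and on what basis?")
```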
Check occupancy trend, not just current snapshot.
Current occupancy of 93% tells you where the property is today. Occupancy trend — 88% six months ago, 90% three months ago, 93% today — tells you if improvement is real and sustainable. Conversely, 96% six months ago and 93% today signals the beginning of a softening trend.
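The trend read can be sketched as a simple classifier over ordered snapshots (a deliberately crude heuristic; real occupancy series are noisier than three points):

```python
def occupancy_trend(snapshots):
    """snapshots: occupancy rates ordered oldest -> newest."""
    pairs = list(zip(snapshots, snapshots[1:]))
    if all(later >= earlier for earlier, later in pairs):
        return "improving or stable"
    if all(later <= earlier for earlier, later in pairs):
        return "softening"
    return "mixed"

print(occupancy_trend([0.88, 0.90, 0.93]))  # improving or stable
print(occupancy_trend([0.96, 0.94, 0.93]))  # softening
```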
Benchmark the cap rate against submarket comparables.
Pull cap rates on comparable recent transactions in the same submarket, asset class, and vintage. Commercial broker market reports cover major markets quarterly. A deal projecting exit at a 5.5% cap rate in a market where comparable transactions are clearing at 6.5–7.0% requires a specific thesis for why this deal exits at a tighter cap.
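The stakes of that cap-rate spread are easy to quantify. A sketch, with a hypothetical exit NOI:

```python
def exit_value(exit_noi, exit_cap_rate):
    """Direct-capitalization exit value: NOI / cap rate."""
    return exit_noi / exit_cap_rate

EXIT_NOI = 700_000  # projected NOI at sale (assumed)

optimistic = exit_value(EXIT_NOI, 0.055)  # operator's 5.5% exit cap
market = exit_value(EXIT_NOI, 0.065)      # where comps are clearing

print(f"At 5.5% cap: ${optimistic:,.0f}")  # $12,727,273
print(f"At 6.5% cap: ${market:,.0f}")      # $10,769,231
print(f"Gap the thesis must justify: ${optimistic - market:,.0f}")
```

Nearly $2 million of projected sale proceeds hinge on one percentage point of exit cap. If the operator can't articulate why this deal exits tighter than its comps, the IRR built on that exit is decorative.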
Research the operator's track record independently.
Don't rely on the operator's own performance summary. Search SEC EDGAR for prior Reg D filings. Look for any news coverage of prior deals, especially defaults or extensions. Prior LP investors in prior deals — if you can find them — are the most reliable source of execution quality data.
Verify the debt structure and maturity timeline.
Know the loan amount, rate, maturity, and extension options. A deal with a 3-year bridge loan at 7.5% with one extension option and no guaranteed permanent financing has a very different risk profile than a deal with agency permanent financing locked. Understand exactly what happens to this deal if rates stay flat for another 24 months.
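A sketch of the flat-rates question, assuming an interest-only bridge loan and hypothetical terms:

```python
def dscr(noi, loan_amount, rate):
    """Debt service coverage ratio for an interest-only loan."""
    return noi / (loan_amount * rate)

NOI = 600_000        # stabilized NOI, $/yr (assumed)
LOAN = 6_500_000     # bridge loan balance (assumed)
LENDER_MIN = 1.25    # a common refinance DSCR covenant (assumption)

today = dscr(NOI, LOAN, 0.075)       # 7.5% bridge rate
print(f"DSCR at 7.5%: {today:.2f}")  # 1.23
print("Refinanceable at flat rates?", today >= LENDER_MIN)  # False
```

If rates don't fall, this hypothetical deal can't clear a 1.25x refinance covenant at maturity, and the one extension option is the only thing standing between the sponsor and a forced sale.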
The 4–8 Hour Underwriting Tax
Serious individual investors spend 4–8 hours per deal building their own underwriting model from scratch.
The goal isn't to reproduce the operator's model — it's to build a model with defensible assumptions that the investor can actually rely on. Market-based vacancy. Realistic expense ratios. Current debt market rate assumptions. Stress test applied. This takes hours because it requires pulling market data, comping rents, verifying the expense structure, and modeling downside scenarios against an independently-built baseline.
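The core of the downside modeling can be sketched in a few lines: bump vacancy, recompute NOI, check coverage against your independently built baseline. All inputs below are assumptions for illustration:

```python
def stressed_dscr(gpr, vacancy, expense_ratio,
                  annual_debt_service, vacancy_bump):
    """DSCR after adding vacancy_bump to the baseline vacancy rate."""
    egi = gpr * (1 - (vacancy + vacancy_bump))
    noi = egi * (1 - expense_ratio)
    return noi / annual_debt_service

# Hypothetical baseline: market-based 7% vacancy, 40% expenses,
# $420k/yr debt service on a $1M gross-potential-rent property.
base = stressed_dscr(1_000_000, 0.07, 0.40, 420_000, 0.00)
stress = stressed_dscr(1_000_000, 0.07, 0.40, 420_000, 0.10)

print(f"Baseline DSCR: {base:.2f}")    # 1.33
print(f"+10pt vacancy: {stress:.2f}")  # 1.19
```

The hard part isn't this arithmetic; it's sourcing defensible values for every input, which is where the hours go.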
And the bigger cost: because each model is built from scratch, no two use quite the same inputs. The 12th model you build differs from the 7th: different vacancy assumptions based on market data pulled at different times, different CapEx reserves because you found conflicting guidance. The cross-deal comparison problem doesn't go away just because you build your own model; it multiplies.
The Hidden Cost: Conviction Without Confidence
There's a third cost that rarely gets named: the investor knows the model is probably off but can't easily verify it.
This creates a specific kind of paralysis. You've done the work. You have a number. But you don't have enough confidence in the number to act with conviction. So you keep researching. You read more. You look for a second opinion. You find another comparable deal to test your assumptions.
The work multiplies. The conviction doesn't keep pace.
Saturday becomes Sunday. Sunday becomes the next deal. The 12th model you build is slightly different from the 7th, and you're not sure which one was right. The five-step checklist above will catch the most common problems with operator-submitted underwriting — but running it on every deal is itself the platform's job, not yours.
What Standardized Underwriting Looks Like
The fix isn't more sophisticated investors. It's a consistent methodology applied before deals reach the investor.
Every ProperLocating deal is underwritten on the same model, with the same definitions, the same documented assumptions, and the same stress test applied. Vacancy is benchmarked to current market data in the specific submarket, not chosen by the operator. Expense ratios are verified against comparable property operating data. NOI is built from actual rent rolls, not pro forma projections. Cap rate is benchmarked against submarket comps. The stress test runs ProperLocating's scenarios, not the operator's.
The result: deals from the ProperLocating pipeline are directly comparable. When the vacancy assumption on Deal A is 7% and Deal B is 9%, both numbers mean the same thing, applied the same way, verified against the same methodology. The five-step checklist becomes confirmation, not due diligence.
Your role changes. You're not building the model. You're evaluating fit. Does this deal match my target market? My return threshold? My hold period? My current portfolio concentration? Those are questions about you, not about the deal's arithmetic. They take minutes, not hours. And they produce a decision, not a research project.
The Saturday hours don't disappear — they shift to the decision itself, which is where they should be.