We review 100 deals to send you 3.
That's not a marketing number — it's the actual rejection rate. For every 100 properties that enter our screening process, 97 don't make it. Here's exactly what they failed on, and why the criteria matter more than the count.
Why the Rejection Rate Is the Feature
Most deal platforms work in reverse. A property gets listed. Investors see it. Investors evaluate it. Some invest. The platform takes a fee.
That model puts the burden of vetting entirely on the investor — after the deal is already in front of them. It means you're evaluating everything that showed up, not everything that deserved to show up. You're doing the filtering work the platform should have done upstream.
Our model is different. We screen before you see anything. The 3% pass rate isn't a bottleneck — it's the product. Every deal in your inbox has already survived seven elimination stages that most individual investors don't have the time, data access, or frameworks to apply on their own.
Here's what those seven stages actually look at.
1. NOI Accuracy and Rent Roll Verification
The first thing to fail in a deal is the income number. Pro forma NOIs are built on assumptions — projected rents, projected occupancy, projected expense ratios. Our first cut verifies actual collected income against trailing 12-month rent rolls.
Take occupied units, multiply by average in-place rent, and add other income (parking, storage, pet fees). That gives effective gross income as the rent roll actually stands, with vacancy reflected at its current level rather than the pro forma assumption. Then subtract operating expenses using reported actuals, not the operator's projection.
If the NOI doesn't match the documentation within 5%, the deal doesn't move forward. This single filter eliminates a significant portion of submissions — particularly from operators who've built pro formas on optimistic assumptions or who are obscuring below-market collections.
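To make the mechanics concrete, here is a minimal Python sketch of that first cut. The field names, the sample figures, and the 5% tolerance as coded are illustrative assumptions, not our production tooling.

```python
def verified_noi(rent_roll, other_income_annual, actual_expenses_annual):
    """Compute in-place NOI from a rent roll: occupied units only,
    current rents, actual other income, and actual operating expenses."""
    collected_rent_annual = sum(
        unit["current_rent"] * 12 for unit in rent_roll if unit["occupied"]
    )
    return collected_rent_annual + other_income_annual - actual_expenses_annual

def passes_noi_check(pro_forma_noi, rent_roll, other_income_annual,
                     actual_expenses_annual, tolerance=0.05):
    """Stage 1 filter: the pro forma NOI must sit within the tolerance band
    around the NOI supported by the trailing documentation."""
    actual = verified_noi(rent_roll, other_income_annual, actual_expenses_annual)
    return abs(pro_forma_noi - actual) / actual <= tolerance

# Hypothetical example: a 10-unit property, 9 units occupied at $1,400/month
rent_roll = [{"occupied": i < 9, "current_rent": 1_400} for i in range(10)]
print(passes_noi_check(
    pro_forma_noi=180_000,           # operator's projection
    rent_roll=rent_roll,
    other_income_annual=6_000,       # parking, storage, pet fees
    actual_expenses_annual=62_000,   # reported actuals
))  # False: the documentation supports roughly $95k, far below the $180k pro forma
```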
2. Occupancy Trends — Not Just the Current Snapshot
Current occupancy is a lagging indicator. A property at 92% today could be at 76% in six months if leases are rolling and the submarket is softening. We look at trailing 24-month occupancy trends: are units being filled, held, or churning? Is occupancy being maintained through concessions that aren't reflected in headline rent?
A current occupancy of 93% looks solid on its own. The same 93% looks different if it was 96% six months ago and is still sliding. The trend matters more than the snapshot.
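One way to express that check: fit a simple slope to the monthly occupancy series and flag anything drifting down, regardless of where it sits today. A minimal sketch with a shortened, hypothetical series; a real review uses the full trailing 24 months.

```python
def occupancy_trend(monthly_occupancy):
    """Least-squares slope of a monthly occupancy series (fraction per month).
    A negative slope means occupancy is eroding even if the current level looks fine."""
    n = len(monthly_occupancy)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_occupancy) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_occupancy))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Two hypothetical properties, both at 93% today
stable  = [0.92, 0.93, 0.92, 0.94, 0.93, 0.93]
eroding = [0.96, 0.96, 0.95, 0.94, 0.94, 0.93]
print(occupancy_trend(stable))    # roughly +0.002 per month: holding
print(occupancy_trend(eroding))   # roughly -0.006 per month: about 0.6 points lost per month
```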
3. Cap Rate vs. Submarket Comps
A deal priced at a 6.2% cap rate in a submarket where comparable assets trade at 5.8% looks attractive. The same deal in a submarket where quality assets trade at 7.1% is overpriced — or has something wrong with it.
We benchmark every deal's cap rate against verified comps in its specific submarket, not the metro average. If the asking cap rate is at the low end of the comp range, the deal is priced optimistically — and the underwriting reflects that risk.
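The arithmetic behind the benchmark is simple; the judgment is in assembling the comp set. A small sketch with invented comp trades and asking terms:

```python
def cap_rate(noi, price):
    """Capitalization rate: in-place NOI divided by asking price."""
    return noi / price

def position_in_comp_range(deal_cap, comp_caps):
    """Where the deal's cap rate falls within the submarket comp range
    (0 = lowest cap / richest pricing, 1 = highest cap / cheapest pricing)."""
    lo, hi = min(comp_caps), max(comp_caps)
    return (deal_cap - lo) / (hi - lo)

comp_caps = [0.058, 0.061, 0.064, 0.067, 0.071]    # verified submarket trades (hypothetical)
deal = cap_rate(noi=1_240_000, price=20_000_000)   # 6.2% asking cap
print(f"{deal:.3f}", f"{position_in_comp_range(deal, comp_caps):.2f}")
# 0.062 at position 0.31: low end of the comp range, i.e., priced optimistically
```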
4. Operator Track Record and LP History
The asset is only as good as the operator managing it. We review the sponsor's full track record: prior deals, exit history, LP return delivery, and whether promised returns matched actual distributions. We look for operators who have managed through a cycle, not just during one.
Operators who can't share their complete track record don't pass this stage. Operators who present selectively constructed track records — highlighting successes while omitting near-misses — fail at this stage too.
5. Debt Structure and Maturity Profile
One of the most common hidden risks in deals entering the market right now is floating-rate debt originated in 2021–2022. We verify the capital stack: fixed vs. floating, maturity timeline, refinance exposure, and current debt service coverage ratios.
A deal with solid fundamentals but a bridge loan maturing in 18 months at 70% LTV in the current rate environment is a different investment than it appears on the surface. Debt structure is underwriting — we don't treat it as a footnote.
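Here is a sketch of how those capital-stack checks can be expressed. The thresholds, field names, and figures are illustrative assumptions, not our actual cutoffs.

```python
from dataclasses import dataclass

@dataclass
class DebtStack:
    balance: float
    rate_type: str             # "fixed" or "floating"
    months_to_maturity: int
    annual_debt_service: float
    ltv: float                 # current loan-to-value

def debt_flags(debt: DebtStack, noi: float,
               min_dscr=1.25, refi_window_months=24, max_ltv_at_refi=0.65):
    """Flag the coverage and refinance risks described above."""
    flags = []
    dscr = noi / debt.annual_debt_service
    if dscr < min_dscr:
        flags.append(f"DSCR {dscr:.2f}x below {min_dscr:.2f}x")
    if debt.rate_type == "floating" and debt.months_to_maturity <= refi_window_months:
        flags.append("floating-rate debt maturing inside the refi window")
    if debt.months_to_maturity <= refi_window_months and debt.ltv > max_ltv_at_refi:
        flags.append(f"refinance exposure at {debt.ltv:.0%} LTV")
    return flags

bridge = DebtStack(balance=14_000_000, rate_type="floating",
                   months_to_maturity=18, annual_debt_service=1_050_000, ltv=0.70)
print(debt_flags(bridge, noi=1_260_000))
# 1.20x coverage, floating debt maturing in 18 months, 70% LTV: three flags
```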
6. Submarket Fundamentals
We don't evaluate assets in isolation. Every deal is reviewed in the context of its submarket: rent growth trends, new supply pipeline, population and employment dynamics, and comparable transaction volume.
A multifamily asset in a submarket with 4,000 new units coming online in the next 18 months is competing in a different environment than its trailing cap rate suggests. We flag supply risk as a first-order concern.
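One rough way to quantify that concern is a pipeline-to-inventory ratio. A sketch with invented figures and an assumed threshold:

```python
def supply_risk(pipeline_units, existing_units, threshold=0.05):
    """Flag submarkets where near-term deliveries are large relative to standing stock."""
    ratio = pipeline_units / existing_units
    return ratio, ratio > threshold

print(supply_risk(pipeline_units=4_000, existing_units=52_000))
# (~0.077, True): roughly 7.7% of the standing stock delivering in the pipeline window
```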
7. Exit Scenario Viability
Every entry needs a plausible exit. We model the deal across multiple scenarios — base case, downside, and extended hold — and require that the downside scenario still produces acceptable returns without assuming cap rate compression or market appreciation.
If the deal only works in the optimistic scenario, it doesn't pass.
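A simplified sketch of that scenario test, with the hold periods, cash flows, and cap rates all hypothetical. The point is the structure: the downside case assumes no cap rate compression and still has to clear the bar.

```python
def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Simple bisection IRR for annual cash flows (index 0 = initial outlay)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def scenario_irr(equity, annual_cash, hold_years, exit_noi, exit_cap):
    """Unlevered IRR for one scenario: annual cash flow plus sale proceeds at exit."""
    sale_proceeds = exit_noi / exit_cap
    flows = [-equity] + [annual_cash] * (hold_years - 1) + [annual_cash + sale_proceeds]
    return irr(flows)

# Hypothetical scenarios; the downside carries no cap rate compression
base     = scenario_irr(equity=10_000_000, annual_cash=650_000, hold_years=5,
                        exit_noi=780_000, exit_cap=0.060)
downside = scenario_irr(equity=10_000_000, annual_cash=560_000, hold_years=5,
                        exit_noi=680_000, exit_cap=0.070)
extended = scenario_irr(equity=10_000_000, annual_cash=600_000, hold_years=8,
                        exit_noi=720_000, exit_cap=0.068)
print(f"base {base:.1%}  downside {downside:.1%}  extended {extended:.1%}")
# Roughly 11% base, 5% downside, and 6-7% extended: the downside has to stand on its own
```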
What Rejection Looks Like in Practice
Showing what doesn't pass is more credible than showing what does. Here are three rejection patterns from recent reviews, fully anonymized, with the exact trigger in each case.
Pattern 1: Aggressive Occupancy Assumptions
A suburban multifamily offering in a secondary market. Strong trailing cash flow metrics: 95% occupancy and $220 NOI per unit per month. The offering was priced on a 2-year stabilized basis.
The stress test: we pulled submarket-level occupancy data for comparable properties. Trailing 12-month average occupancy in the submarket was 89%, not 95%. The 6-point difference wasn't a projection error — it was the current gap between this property and the market.
Running the underwriting at submarket occupancy dropped NOI by 11%. DSCR fell from 1.31x to 1.17x at the proposed debt load. In a downside scenario where occupancy compressed to 85%, the deal fell below 1.0x DSCR.
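To show the mechanics of that re-underwrite with rounded stand-in figures (not the actual deal's numbers): hold expenses and debt service fixed, reprice revenue at submarket occupancy, and recompute coverage.

```python
def reunderwrite(units, avg_rent_month, other_income, opex,
                 annual_debt_service, occupancy):
    """NOI and DSCR at a given occupancy, holding expenses and debt service fixed."""
    egi = units * avg_rent_month * 12 * occupancy + other_income
    noi = egi - opex
    return noi, noi / annual_debt_service

# Hypothetical stand-in figures shaped like Pattern 1 (not the actual deal)
for occ in (0.95, 0.89):
    noi, dscr = reunderwrite(units=200, avg_rent_month=850, other_income=120_000,
                             opex=950_000, annual_debt_service=846_000, occupancy=occ)
    print(f"occupancy {occ:.0%}: NOI ${noi:,.0f}, DSCR {dscr:.2f}x")
# Roughly an 11% NOI drop and DSCR falling from ~1.31x to ~1.17x, mirroring the pattern above
```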
Rejection trigger: Occupancy assumption not supportable by submarket data.
Pattern 2: Operator Track Record Gaps
A value-add industrial deal with strong projected IRR — 18% over a 4-year hold. The operator had two prior deals. On paper, both had exited at or above projection.
The gap: we requested LP communications for both prior exits. One deal exited on schedule. The second had extended twice and ultimately returned capital at a 9% IRR, below the projected 14%. That outcome wasn't in the operator's marketing materials.
The risk isn't that the operator performed below projection. Markets move. The risk is that the track record was selectively constructed.
Rejection trigger: Track record not verifiable to LP communication level.
Pattern 3: Margin Too Thin to Absorb Variance
A ground-floor retail conversion in an infill market. Underwriting showed a 7.8% cash-on-cash return in the base case. The submarket had strong fundamentals and the operator was credible.
The stress test: we ran the model with a 200 bps cap rate expansion at exit (from a projected 6.2% to a realized 8.2%). IRR dropped from 14% to 4%. Under this scenario the deal produced almost no return margin above a Treasury equivalent. The retail component also carried single-tenant exposure: if the anchor tenant vacated, the NOI scenario would shift from base to worst case within a single quarter.
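The exit-value arithmetic behind that stress is straightforward. With illustrative stand-in figures, the same exit NOI capitalized 200 bps higher produces a roughly 24% lower sale price, which is where the IRR compression comes from.

```python
def exit_value(exit_noi, exit_cap):
    """Sale price implied by capitalizing exit-year NOI at the exit cap rate."""
    return exit_noi / exit_cap

projected = exit_value(exit_noi=620_000, exit_cap=0.062)   # $10.0M projected exit
stressed  = exit_value(exit_noi=620_000, exit_cap=0.082)   # ~$7.56M at +200 bps
print(f"projected ${projected:,.0f}, stressed ${stressed:,.0f}, "
      f"haircut {1 - stressed / projected:.0%}")
# Roughly a 24% lower exit value; on a levered deal, enough to pull a mid-teens IRR
# down to low single digits, which is the margin problem described above
```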
Rejection trigger: Return structure had no margin for variance.
Why These Patterns Recur
These three rejection patterns appear in roughly 60–70% of deals we review. The root cause is the same each time: operators underwrite to maximize attractiveness, not to tell you where the deal is fragile.
That's not fraud. It's incentive structure. The operator's incentive is to close the deal. The investor's interest is to understand the downside. Those interests diverge — and without an independent filter, the investor is reading a presentation, not a balanced analysis.
What the 3% Pass Rate Means for You
When a deal reaches your inbox, it has already cleared all seven stages. You're not evaluating whether this is a real deal — we've done that. You're evaluating whether it fits your portfolio, your timeline, and your capital position.
That's a materially different question, and it's the question you should be spending your time on.
The alternative — evaluating everything yourself from unvetted listings across fragmented platforms — is how most investors operate. It's time-intensive, inconsistent, and vulnerable to the kind of operator opacity that created outcomes like the CrowdStreet collapse.
Vetting opacity isn't an accident on most platforms. It's the norm. Listing platforms generally don't screen before you see the deal. We decided to make the process visible, because transparency about how deals get filtered is what makes the filter trustworthy.