Each content asset in WisePilot receives an optimization score — a weighted composite (0–100) that reflects how well it’s performing against your defined objectives. Scores update automatically every night after the daily performance rollup completes — typically by early morning UTC. See Automation & Data Freshness for the full schedule.
## How Scores Are Calculated
The score for any asset is:
Score = (Weighted Metric Sum) × Confidence Multiplier
Where:
- Weighted Metric Sum — Each metric is normalized and multiplied by its configured weight
- Confidence Multiplier — A factor (0.0–1.0) based on data quality, applied to reduce scores when data is unreliable
A score of 75 with high confidence means “this asset is genuinely performing well.” A score of 75 with low confidence means “early signals look good, but we need more data to be sure.”
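To make the formula concrete, here is a minimal sketch in Python. It assumes each metric has already been normalized to a 0–1 range and that the weighted sum is scaled to 0–100 before the confidence multiplier is applied; the function and metric names are illustrative, not WisePilot internals.

```python
# Minimal sketch of the scoring formula (illustrative, not WisePilot's code).
# Assumes each metric is already normalized to 0-1 and the weighted sum is
# scaled to 0-100 before the confidence multiplier is applied.

def optimization_score(normalized_metrics: dict, weights: dict, confidence: float) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    weighted_sum = sum(normalized_metrics[m] * w for m, w in weights.items())
    return weighted_sum * 100 * confidence

weights = {"metric_a": 0.6, "metric_b": 0.4}
metrics = {"metric_a": 0.9, "metric_b": 0.5}

print(round(optimization_score(metrics, weights, confidence=1.0), 1))  # 74.0 at full confidence
print(round(optimization_score(metrics, weights, confidence=0.5), 1))  # 37.0 when data quality is poor
```

The same weighted sum can land very differently depending on confidence, which is exactly the distinction between the two readings of “75” above.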
## Optimization Objectives
You can configure scoring rules for different optimization objectives. Each objective uses different metrics and weights:
| Objective | Focus | Key Metrics |
|---|---|---|
| Visibility | Being found | Search impressions, rankings, organic traffic |
| Engagement | Being consumed | Pageviews, time on page, scroll depth |
| Offer Attention | Being acted on | CTA view rates, CTA click-through rates |
| Conversion | Generating results | Form submissions, lead attribution, revenue |
### Example: Configuring a “Conversion” Objective
A conversion-focused scoring rule might use these weights:
| Metric | Weight | Why |
|---|---|---|
| Form submissions | 0.40 | Primary conversion signal |
| CTA click-through rate | 0.25 | Indicates offer relevance |
| CTA view rate | 0.15 | Indicates CTA visibility |
| Pageviews | 0.10 | Traffic baseline |
| Avg. time on page | 0.10 | Engagement quality signal |
| Total | 1.00 | Must sum to 1.0 |
This gives the highest weight to actual conversions, moderate weight to CTA engagement, and low weight to traffic metrics.
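To see the weighting in action, here is a hypothetical asset scored against this rule. The normalized metric values are invented for illustration (0 is the worst observed value, 1 the best):

```python
weights = {"form_submissions": 0.40, "cta_ctr": 0.25, "cta_view_rate": 0.15,
           "pageviews": 0.10, "avg_time_on_page": 0.10}
# Hypothetical normalized metric values for one asset (0 = worst, 1 = best).
asset = {"form_submissions": 0.9, "cta_ctr": 0.6, "cta_view_rate": 0.8,
         "pageviews": 0.3, "avg_time_on_page": 0.4}

weighted_sum = sum(asset[m] * w for m, w in weights.items())
# 0.9*0.40 + 0.6*0.25 + 0.8*0.15 + 0.3*0.10 + 0.4*0.10 = 0.70
print(round(weighted_sum * 100, 1))  # 70.0, before the confidence multiplier
```

Strong conversion signals dominate: even with weak traffic (0.3), the asset still scores 70 before the confidence multiplier is applied.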
### Example: Configuring a “Visibility” Objective
| Metric | Weight |
|---|---|
| Search impressions | 0.35 |
| Average position | 0.30 |
| Organic clicks | 0.20 |
| CTR | 0.15 |
| Total | 1.00 |
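One subtlety with this objective: average position is a lower-is-better metric (position 1 is the top result), so it must be inverted during normalization before it can be combined with higher-is-better metrics like impressions. The sketch below shows one common way to do this; WisePilot’s exact normalization may differ.

```python
def normalize_position(avg_position: float, max_position: float = 100.0) -> float:
    """Map average search position (1 = best) onto 0-1, higher = better.
    One common convention; WisePilot's internal normalization may differ."""
    clamped = min(max(avg_position, 1.0), max_position)
    return 1.0 - (clamped - 1.0) / (max_position - 1.0)

print(round(normalize_position(1.0), 2))    # 1.0  (top ranking)
print(round(normalize_position(10.5), 2))   # 0.9  (roughly page one)
print(round(normalize_position(100.0), 2))  # 0.0  (deep in the results)
```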
## Confidence Tiers
Not all scores are equally reliable. WisePilot assigns a confidence tier based on three data quality factors:
| Tier | Badge | Join Coverage | Freshness | Sample Size |
|---|---|---|---|---|
| High | Green | > 75% | < 24 hours | ≥ 100 events |
| Medium | Yellow | 50–75% | 24–48 hours | 50–99 events |
| Low | Red | ≤ 50% | > 48 hours | < 50 events |
What each factor means:
- Join coverage — What % of events in the pipeline are successfully attributed? Low coverage means you’re missing data. See Data Quality.
- Freshness — How recently was data last collected? Stale data means the score may not reflect current reality.
- Sample size — How many events does this score draw from? Small samples are statistically unreliable.
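The table’s thresholds suggest how a tier could be derived in code. The sketch below assumes the overall tier is the weakest of the three factors, which is a conservative reading consistent with the table, not confirmed behavior:

```python
# Sketch of tier assignment, assuming the overall tier is the weakest of the
# three factors. Thresholds mirror the table above; the "weakest factor wins"
# rule is an assumption, not confirmed WisePilot behavior.

def confidence_tier(join_coverage: float, hours_since_data: float, sample_size: int) -> str:
    def coverage(c):  return "high" if c > 0.75 else "medium" if c > 0.50 else "low"
    def freshness(h): return "high" if h < 24 else "medium" if h <= 48 else "low"
    def sample(n):    return "high" if n >= 100 else "medium" if n >= 50 else "low"

    tiers = [coverage(join_coverage), freshness(hours_since_data), sample(sample_size)]
    for level in ("low", "medium"):  # weakest factor wins
        if level in tiers:
            return level
    return "high"

print(confidence_tier(0.90, 6, 250))   # high   -> green badge
print(confidence_tier(0.90, 30, 250))  # medium -> yellow badge (stale data drags it down)
print(confidence_tier(0.40, 6, 250))   # low    -> red badge (poor join coverage)
```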
Low-confidence scores are visually flagged in the UI with a red badge. Don’t make optimization decisions based on low-confidence data — wait for more events to accumulate or fix the data quality issue first.
## Configuring Scoring Rules
1. Go to Settings → Optimization → Scoring Rules.
2. Click Create Rule or edit an existing rule.
3. Select the objective (Visibility, Engagement, Offer Attention, or Conversion).
4. Set metric weights using the sliders or by entering values directly; they must sum to 1.0.
5. Set priority thresholds:
   - High performer — scores above this threshold are flagged green (e.g., > 70)
   - Low performer — scores below this threshold are flagged red (e.g., < 30)
6. Save the rule.
You can have multiple scoring rules active. Each asset shows its score for every active objective.
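If you draft rule definitions outside the UI, a quick validation catches the two most common mistakes: weights that do not sum to 1.0 and inverted thresholds. The rule structure below is hypothetical, used only to illustrate the constraints, and is not a WisePilot API payload:

```python
# Hypothetical rule definition for illustrating the configuration constraints;
# this is not a WisePilot API payload format.
rule = {
    "objective": "conversion",
    "weights": {"form_submissions": 0.40, "cta_ctr": 0.25, "cta_view_rate": 0.15,
                "pageviews": 0.10, "avg_time_on_page": 0.10},
    "thresholds": {"high_performer": 70, "low_performer": 30},
}

total = sum(rule["weights"].values())
assert abs(total - 1.0) < 1e-9, f"weights sum to {total}, must sum to 1.0"
assert rule["thresholds"]["low_performer"] < rule["thresholds"]["high_performer"]
```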
## Score Snapshots and Trends
Scores are saved as daily snapshots, enabling trend analysis:
- Score trend chart — See how an asset’s score changes over time
- Portfolio dashboard — Aggregate score distribution across all assets
- Delta column — In the asset list, see the score change since the last snapshot (↑ or ↓)
A steadily improving score indicates your content and optimization efforts are working. A declining score warrants investigation — check Data Quality first, then Revision Impact.
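For reference, here is how the delta column’s values could be derived from daily snapshots; the snapshot structure below is illustrative, not WisePilot’s export format:

```python
# Day-over-day score deltas from daily snapshots (illustrative data).
snapshots = [
    {"date": "2024-06-01", "score": 58.0},
    {"date": "2024-06-02", "score": 61.5},
    {"date": "2024-06-03", "score": 60.0},
]

for prev, curr in zip(snapshots, snapshots[1:]):
    delta = curr["score"] - prev["score"]
    arrow = "↑" if delta > 0 else "↓" if delta < 0 else "→"
    print(f'{curr["date"]}: {curr["score"]:.1f} ({arrow}{abs(delta):.1f})')
```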
Scores reflect yesterday’s data. If you published content today, expect meaningful scores to appear in 24–48 hours, once enough events accumulate. See Automation & Data Freshness for details on data timing.