Each content asset in WisePilot receives an optimization score — a weighted composite (0–100) that reflects how well it’s performing against your defined objectives. Scores update automatically every night after the daily performance rollup completes — typically by early morning UTC. See Automation & Data Freshness for the full schedule.

How Scores Are Calculated

The score for any asset is:

Score = (Weighted Metric Sum) × Confidence Multiplier

Where:
  • Weighted Metric Sum — Each metric is normalized and multiplied by its configured weight
  • Confidence Multiplier — A factor (0.0–1.0) based on data quality, applied to reduce scores when data is unreliable
A score of 75 with high confidence means “this asset is genuinely performing well.” A score of 75 with low confidence means “early signals look good, but we need more data to be sure.”
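
To make the formula concrete, here is a minimal sketch in Python. The function name, metric keys, and the assumption that each metric is already normalized to a 0.0–1.0 range are illustrative, not WisePilot's actual implementation.

```python
def optimization_score(
    normalized_metrics: dict[str, float],  # each metric already scaled to 0.0-1.0
    weights: dict[str, float],             # configured weights, summing to 1.0
    confidence_multiplier: float,          # 0.0-1.0, derived from data quality
) -> float:
    """Illustrative sketch: Score = (Weighted Metric Sum) x Confidence Multiplier,
    reported on a 0-100 scale."""
    weighted_sum = sum(
        weight * normalized_metrics.get(metric, 0.0)
        for metric, weight in weights.items()
    )
    return round(100 * weighted_sum * confidence_multiplier, 1)

# The same underlying performance scored under high vs. low confidence:
print(optimization_score({"form_submissions": 0.8}, {"form_submissions": 1.0}, 0.95))  # 76.0
print(optimization_score({"form_submissions": 0.8}, {"form_submissions": 1.0}, 0.50))  # 40.0
```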

Optimization Objectives

You can configure scoring rules for different optimization objectives. Each objective uses different metrics and weights:
| Objective | Focus | Key Metrics |
| --- | --- | --- |
| Visibility | Being found | Search impressions, rankings, organic traffic |
| Engagement | Being consumed | Pageviews, time on page, scroll depth |
| Offer Attention | Being acted on | CTA view rates, CTA click-through rates |
| Conversion | Generating results | Form submissions, lead attribution, revenue |

Example: Configuring a “Conversion” Objective

A conversion-focused scoring rule might use these weights:
| Metric | Weight | Why |
| --- | --- | --- |
| Form submissions | 0.40 | Primary conversion signal |
| CTA click-through rate | 0.25 | Indicates offer relevance |
| CTA view rate | 0.15 | Indicates CTA visibility |
| Pageviews | 0.10 | Traffic baseline |
| Avg. time on page | 0.10 | Engagement quality signal |
| Total | 1.00 | Must sum to 1.0 |
This gives the highest weight to actual conversions, moderate weight to CTA engagement, and low weight to traffic metrics.
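
Expressed in the same illustrative Python as the sketch above, the conversion rule's weights might look like this. The snake_case metric keys and sample values are assumptions, and the example reuses the hypothetical optimization_score helper.

```python
conversion_weights = {
    "form_submissions": 0.40,
    "cta_click_through_rate": 0.25,
    "cta_view_rate": 0.15,
    "pageviews": 0.10,
    "avg_time_on_page": 0.10,
}
assert abs(sum(conversion_weights.values()) - 1.0) < 1e-9  # weights must sum to 1.0

# Hypothetical normalized metrics (0.0-1.0) for a single asset
sample_metrics = {
    "form_submissions": 0.6,
    "cta_click_through_rate": 0.5,
    "cta_view_rate": 0.7,
    "pageviews": 0.8,
    "avg_time_on_page": 0.4,
}

print(optimization_score(sample_metrics, conversion_weights, confidence_multiplier=0.9))
# 0.40*0.6 + 0.25*0.5 + 0.15*0.7 + 0.10*0.8 + 0.10*0.4 = 0.59 -> 53.1
```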

Example: Configuring a “Visibility” Objective

| Metric | Weight |
| --- | --- |
| Search impressions | 0.35 |
| Average position | 0.30 |
| Organic clicks | 0.20 |
| CTR | 0.15 |
| Total | 1.00 |
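
A visibility rule fits the same shape. A brief sketch under the same assumptions (illustrative metric keys, weights summing to 1.0):

```python
visibility_weights = {
    "search_impressions": 0.35,
    "average_position": 0.30,
    "organic_clicks": 0.20,
    "ctr": 0.15,
}
assert abs(sum(visibility_weights.values()) - 1.0) < 1e-9  # weights must sum to 1.0
```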

Confidence Tiers

Not all scores are equally reliable. WisePilot assigns a confidence tier based on three data quality factors:
| Tier | Badge | Join Coverage | Freshness | Sample Size |
| --- | --- | --- | --- | --- |
| High | Green | > 75% | < 24 hours | ≥ 100 events |
| Medium | Yellow | > 50% | < 48 hours | ≥ 50 events |
| Low | Red | ≤ 50% | > 48 hours | < 50 events |
What each factor means:
  • Join coverage — What % of events in the pipeline are successfully attributed? Low coverage means you’re missing data. See Data Quality.
  • Freshness — How recently was data last collected? Stale data means the score may not reflect current reality.
  • Sample size — How many events does this score draw from? Small samples are statistically unreliable.
Low-confidence scores are visually flagged in the UI with a red badge. Don’t make optimization decisions based on low-confidence data — wait for more events to accumulate or fix the data quality issue first.
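
The tier thresholds above amount to a simple decision rule. A minimal sketch, assuming an asset must meet all three criteria to earn a tier; the function and argument names are illustrative, and WisePilot's exact tie-breaking logic is not shown here.

```python
def confidence_tier(join_coverage: float, freshness_hours: float, sample_size: int) -> str:
    """Classify data quality into High / Medium / Low per the table above.
    join_coverage is a fraction (0.0-1.0); freshness_hours is the age of the newest data."""
    if join_coverage > 0.75 and freshness_hours < 24 and sample_size >= 100:
        return "High"    # green badge
    if join_coverage > 0.50 and freshness_hours < 48 and sample_size >= 50:
        return "Medium"  # yellow badge
    return "Low"         # red badge

print(confidence_tier(0.82, 12, 140))  # High
print(confidence_tier(0.60, 30, 75))   # Medium
print(confidence_tier(0.45, 60, 20))   # Low
```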

Configuring Scoring Rules

  1. Go to Settings → Optimization → Scoring Rules
  2. Click Create Rule or edit an existing one
  3. Select the objective (Visibility, Engagement, Offer Attention, or Conversion)
  4. Set metric weights — use the sliders or enter values directly. They must sum to 1.0 (see the validation sketch after these steps).
  5. Set priority thresholds:
    • High performer — Score above this threshold is flagged green (e.g., > 70)
    • Low performer — Score below this threshold is flagged red (e.g., < 30)
  6. Save the rule
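
As referenced in step 4, a short sketch of the two checks a rule implies: the metric weights must sum to 1.0, and the priority thresholds classify each score. The function names and the example thresholds (> 70, < 30) are illustrative.

```python
def validate_weights(weights: dict[str, float]) -> None:
    """Reject a rule whose metric weights do not sum to 1.0."""
    total = sum(weights.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"Metric weights must sum to 1.0, got {total}")

def performance_flag(score: float, high: float = 70, low: float = 30) -> str:
    """Flag an asset against the rule's priority thresholds."""
    if score > high:
        return "high performer"  # flagged green
    if score < low:
        return "low performer"   # flagged red
    return "neutral"

validate_weights({"form_submissions": 0.40, "cta_click_through_rate": 0.25,
                  "cta_view_rate": 0.15, "pageviews": 0.10, "avg_time_on_page": 0.10})
print(performance_flag(82))  # high performer
print(performance_flag(21))  # low performer
```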
You can have multiple scoring rules active. Each asset shows its score for every active objective. Scores are saved as daily snapshots, enabling trend analysis:
  • Score trend chart — See how an asset’s score changes over time
  • Portfolio dashboard — Aggregate score distribution across all assets
  • Delta column — In the asset list, see the score change since the last snapshot (↑ or ↓); a snapshot-delta sketch follows this list
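
The delta column is simply the difference between the two most recent daily snapshots. A hedged sketch, assuming snapshots are available as (date, score) pairs; the data shape and values are hypothetical, not WisePilot's export format.

```python
snapshots = [            # hypothetical daily score snapshots for one asset, oldest first
    ("2024-05-01", 58.2),
    ("2024-05-02", 61.7),
    ("2024-05-03", 64.9),
]

def score_delta(snaps: list[tuple[str, float]]) -> float:
    """Score change since the previous snapshot (positive means improving)."""
    if len(snaps) < 2:
        return 0.0
    return round(snaps[-1][1] - snaps[-2][1], 1)

delta = score_delta(snapshots)
print(f"{'↑' if delta >= 0 else '↓'} {abs(delta)}")  # ↑ 3.2
```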
A steadily improving score indicates your content and optimization efforts are working. A declining score warrants investigation — check Data Quality first, then Revision Impact.
Scores reflect yesterday’s data. If you published content today, expect meaningful scores to appear in 24–48 hours once enough events accumulate. See Automation & Data Freshness for details on data timing.