
How NavBoost Works: Technical Architecture of Google's Click Re-Ranking

NavBoost is the system Google uses to re-rank search results based on real user click behavior. This article maps its architecture end to end: raw click collection via Chrome and cookies, click classification, signal normalization through the squashing function, 13-month aggregation, and final integration into ranking output.

Overview: What NavBoost Does in the Ranking Pipeline

NavBoost is not a standalone ranking algorithm. It is a re-ranking system that sits downstream of Google's primary retrieval and scoring mechanisms. After Google's core systems produce an initial set of ranked results for a query, NavBoost adjusts those rankings based on historical click behavior data collected from real users.

The system was first disclosed publicly during the 2023 U.S. Department of Justice antitrust trial against Google, when Pandu Nayak, Google's Vice President of Search, testified that NavBoost is one of the "most important" ranking signals in Google Search. The 2024 Google API leak subsequently revealed specific implementation details, including the exact click types NavBoost tracks and the normalization mechanisms it applies.

At the highest level, NavBoost operates on a simple premise: if users consistently click on a result and stay there, that result is probably good; if users consistently click on a result and quickly return to the search results page, that result is probably not satisfying their intent. The technical implementation of this premise, however, involves a multi-stage pipeline with several layers of data collection, classification, normalization, and aggregation.

Understanding this architecture is essential for anyone who depends on organic search traffic. NavBoost determines not just whether click signals matter, but precisely how they matter, which types of clicks carry weight, and why short-term manipulation attempts tend to fail against the system's design.

The Data Pipeline: Where Click Data Comes From

NavBoost's effectiveness depends on the volume and reliability of its input data. Google collects user interaction data from multiple sources, each contributing different levels of signal quality and trustworthiness.

Chrome Browser Data

Google Chrome, which holds approximately 65% global browser market share as of early 2026, is the richest source of user behavior data feeding into NavBoost. Chrome provides Google with several categories of interaction data:

  • Search result clicks: Which results a user clicks on from a Google SERP, including the position of the result and the timestamp of the click.
  • Post-click behavior: What the user does after clicking a result — how long they remain on the page, whether they scroll, whether they interact with page elements, and crucially, whether and when they return to the SERP.
  • Navigation patterns: Whether the user navigates to other pages on the same site (indicating deeper engagement) or immediately hits the back button.
  • Session context: Whether the user refines their query, clicks additional results, or abandons the search entirely after visiting a result.

Chrome data from logged-in Google users is considered the highest-trust signal. When a user is signed into their Google account in Chrome, Google can associate their click behavior with a persistent identity, making it significantly harder to fake or manipulate. The leaked API documentation references user-level data tracking that aligns with this logged-in model.

Cookie-Based Tracking

For users who are not logged into a Google account, Google relies on cookie-based tracking to maintain session continuity. This allows NavBoost to track the sequence of clicks within a single search session — for example, a user clicking result #3, returning to the SERP, clicking result #1, and then staying there. That sequence tells NavBoost that result #1 was the satisfying answer for that query and result #3 was not.

Cookie-based data is considered lower-trust than logged-in user data because cookies can be cleared, blocked, or manipulated. However, the sheer volume of cookie-based sessions — billions per day — provides statistical significance that compensates for the lower per-session trust level.

Android and Google App Data

Google also collects interaction data from searches conducted on Android devices and through the Google app on iOS. These mobile interactions are particularly valuable because mobile search behavior tends to be more intent-driven and action-oriented than desktop search. A user searching on their phone while walking down a street has different behavioral patterns than someone casually browsing at a desktop, and NavBoost can incorporate these signals into its models.

Data Volume and Scale

Google processes approximately 8.5 billion searches per day as of 2025. Even if only a fraction of these produce usable click signals for NavBoost, the system operates on a data scale that is difficult to comprehend. This volume is one of the reasons NavBoost is so resistant to manipulation: any artificially generated click signal must compete against billions of genuine interactions.

The following diagram illustrates the high-level data collection pipeline:

NavBoost Data Collection Pipeline

Chrome (Logged-in)     Chrome (Cookies)     Android / Google App
        |                     |                      |
        v                     v                      v
[High-Trust Data]    [Medium-Trust Data]   [Mobile Behavior Data]
        |                     |                      |
        +----------+----------+-----------+----------+
                   |                      |
                   v                      v
        [ Click Event Stream ]    [ Session Context ]
                   |                      |
                   +----------+-----------+
                              |
                              v
                 [ Raw Click Data Store ]
                              |
                              v
                 [ Click Classification ]

Figure 1: NavBoost ingests user interaction data from multiple sources, each with different trust levels. The raw data is then classified into distinct click types.

Click Classification: How Clicks Are Categorized

Once raw click data enters the NavBoost pipeline, the system classifies each click interaction into one of several categories. These categories, revealed in the 2024 Google API leak, determine how each click affects ranking. For a comprehensive breakdown of each type, see NavBoost Click Types: goodClicks, badClicks, and lastLongestClicks Explained.

goodClicks

A click is classified as a "goodClick" when the user clicks on a search result and demonstrates satisfaction. The primary behavioral signal for this classification is dwell time — the user clicks the result and remains on the destination page for a meaningful period. There is no publicly confirmed threshold for what constitutes "meaningful," but analysis of the API leak and expert interpretation suggests it is likely in the range of 30 seconds to several minutes, depending on query type and content length.

goodClicks serve as positive ranking signals. When a result consistently receives goodClicks across many users and sessions, NavBoost interprets this as evidence that the result satisfies the query intent, and the result's ranking is reinforced or improved.

badClicks

A "badClick" occurs when a user clicks on a search result and quickly returns to the SERP. This behavior — commonly known as pogo-sticking — signals dissatisfaction. The user found the result's title and snippet promising enough to click, but the actual page content did not match their expectations or needs.

badClicks serve as negative ranking signals. A result that consistently generates badClicks will see its NavBoost score decrease over time, potentially leading to lower rankings for the queries where pogo-sticking occurs.

lastLongestClicks

The "lastLongestClick" designation applies to the final result in a search session where the user dwells the longest. This is considered the strongest positive signal in NavBoost's classification system. The reasoning is straightforward: if a user searched, clicked several results, and ultimately settled on one where they spent the most time, that result most likely answered their question or fulfilled their need.

In the context of a search session where a user clicks results #3, #1, and #5 in that order, spending 5 seconds on #3, 12 seconds on #1, and 3 minutes on #5, result #5 would receive the lastLongestClick designation. This is true even though #1 might have received a goodClick classification as well — the lastLongestClick carries additional weight.
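The session walkthrough above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Google's implementation; the field names (`url`, `dwell_seconds`) are invented for the example.

```python
# Hypothetical sketch: pick the lastLongestClick from one search session.
# A click qualifies only if it is BOTH the final click of the session AND
# the one with the longest dwell time.

def last_longest_click(session_clicks):
    """Return the session's lastLongestClick, or None if none qualifies."""
    if not session_clicks:
        return None
    last = session_clicks[-1]
    longest = max(session_clicks, key=lambda c: c["dwell_seconds"])
    return last if last is longest else None

# The session from the paragraph above: results #3, #1, #5 in click order.
session = [
    {"url": "result-3", "dwell_seconds": 5},
    {"url": "result-1", "dwell_seconds": 12},
    {"url": "result-5", "dwell_seconds": 180},
]
print(last_longest_click(session)["url"])  # result-5: last AND longest dwell
```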

The Classification Process

Click Classification Logic

User clicks search result
          |
          v
Measure dwell time + post-click behavior
          |
          +--- Quick return to SERP? --------> badClick (negative signal)
          |
          +--- Stays on page? ---------------> goodClick (positive signal)
          |
          +--- Last click in session AND
          |    longest dwell time? ----------> lastLongestClick (strongest signal)
          |
          v
Record as raw click event (unsquashedClick)
          |
          v
Apply squashing function
          |
          v
Store as normalized click event (squashedClick)

Figure 2: Each click event is classified based on user behavior, then passes through the squashing function before being stored for aggregation.

It is important to note that these classifications are not mutually exclusive in terms of what the system records. A single click can be both a goodClick and a lastLongestClick. The unsquashed and squashed versions represent the same click data before and after normalization, which is covered in the next section.
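The classification flow can be sketched as a simple labeling function. The 30-second "good" threshold below is an assumption for illustration only; as noted earlier, Google's actual cutoffs are not public.

```python
# Illustrative classifier mirroring the click classification flow.
# GOOD_DWELL_SECONDS is a hypothetical threshold, not a confirmed value.

GOOD_DWELL_SECONDS = 30

def classify_click(dwell_seconds, returned_to_serp_quickly,
                   is_last_in_session, is_longest_in_session):
    labels = []
    if returned_to_serp_quickly:
        labels.append("badClick")          # pogo-sticking: negative signal
    elif dwell_seconds >= GOOD_DWELL_SECONDS:
        labels.append("goodClick")         # meaningful dwell: positive signal
    if is_last_in_session and is_longest_in_session:
        labels.append("lastLongestClick")  # strongest positive signal
    return labels

print(classify_click(180, False, True, True))
# -> ['goodClick', 'lastLongestClick']  (labels are not mutually exclusive)
print(classify_click(3, True, False, False))
# -> ['badClick']
```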

The Squashing Function: Signal Normalization

One of the most technically significant components revealed in the API leak is the squashing function. This is a mathematical normalization mechanism that Google applies to raw click data before it is used in ranking calculations.

What Squashing Does

The squashing function compresses click signals so that extreme values — whether very high or very low — are brought closer to a central range. In mathematical terms, this is conceptually similar to applying a logarithmic function or a sigmoid curve to the raw data. A result that receives 10,000 clicks does not get a signal that is 100 times stronger than a result that receives 100 clicks. Instead, the signal might be compressed to something like 3-5 times stronger.

This compression has several effects:

  • Prevents single-spike manipulation: A sudden flood of clicks to a result does not produce a proportionally large ranking boost. The squashing function compresses the spike so that its impact is moderated.
  • Normalizes across query volumes: Without squashing, results for high-volume queries would always have stronger click signals than results for low-volume queries. The compression ensures that click quality matters more than raw click quantity.
  • Reduces noise: Short-term fluctuations in click behavior — caused by trending news, seasonal events, or random variation — are dampened so they do not cause erratic ranking changes.
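A minimal sketch can make the compression concrete. Google's actual curve is not public; the sub-linear power function below (with an arbitrarily chosen exponent) is just one plausible shape consistent with the behavior described above.

```python
# Conceptual squashing sketch: a hypothetical sub-linear power curve.
# The exponent is invented; the real normalization function is unknown.

def squash(raw_clicks: float, exponent: float = 0.3) -> float:
    """Compress raw click counts so extremes are pulled toward the middle."""
    return raw_clicks ** exponent

for raw in (100, 1_000, 10_000):
    print(f"{raw:>6} raw clicks -> squashed signal {squash(raw):.1f}")

# 100x more raw clicks yields only ~4x more squashed signal:
print(round(squash(10_000) / squash(100), 1))  # 4.0
```

With this curve, a result receiving 10,000 clicks ends up roughly 4 times stronger than one receiving 100, in line with the "3-5 times" compression described above.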

unsquashedClicks vs. squashedClicks

The API leak revealed two distinct data fields: unsquashedClicks and squashedClicks. The unsquashed version represents the raw, unprocessed count of click events. The squashed version represents the same data after the normalization function has been applied.

The existence of both fields suggests that Google may use unsquashed data for some purposes (such as anomaly detection or internal analysis) while using squashed data for the actual ranking calculations. The dual tracking also enables Google to compare raw patterns against normalized patterns, which is useful for identifying manipulation attempts that produce abnormal raw signals but normal-looking squashed signals.
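The dual bookkeeping implied by the two leaked fields might look something like the sketch below. The `log1p` normalization and the storage layout are assumptions; only the field names `unsquashedClicks` and `squashedClicks` come from the leak.

```python
import math

# Hypothetical dual storage: retain the raw count (e.g. for anomaly
# detection) while the normalized value feeds ranking calculations.

def record_clicks(store, pair, raw_clicks):
    store[pair] = {
        "unsquashedClicks": raw_clicks,                      # raw event count
        "squashedClicks": round(math.log1p(raw_clicks), 3),  # normalized value
    }

store = {}
record_clicks(store, ("navboost architecture", "example.com/guide"), 10_000)
print(store[("navboost architecture", "example.com/guide")])
```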

For a deeper technical analysis of this mechanism, see The NavBoost Squashing Function: How Google Normalizes Click Data.

The 13-Month Rolling Window

NavBoost does not operate on real-time click data alone. Instead, it aggregates click signals over a 13-month rolling window. This design decision has profound implications for both ranking stability and manipulation resistance.

How the Window Works

At any given point in time, NavBoost's ranking contribution for a particular URL-query pair is based on the accumulated click data from approximately the previous 13 months. As new data enters the window, the oldest data falls off. This creates a continuously updating but temporally smoothed view of user behavior.

The 13-month duration is notable because it encompasses a full annual cycle plus one additional month. This means seasonal variations — such as increased searches for tax software in April or holiday gift guides in December — are captured within the window. The extra month provides overlap that prevents seasonal data from completely dropping off right when it might be relevant again.
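The rolling-window mechanics can be sketched with a fixed-length queue of monthly buckets. The monthly bucketing and field names are assumptions for illustration; the 13-month length is from the source.

```python
from collections import deque

# Sketch of a 13-month rolling aggregate for one query-URL pair.

WINDOW_MONTHS = 13

class RollingClickWindow:
    def __init__(self):
        # deque(maxlen=13): appending a 14th month silently drops the oldest
        self.months = deque(maxlen=WINDOW_MONTHS)

    def add_month(self, good, bad, last_longest):
        self.months.append({"good": good, "bad": bad, "ll": last_longest})

    def totals(self):
        return {k: sum(m[k] for m in self.months) for k in ("good", "bad", "ll")}

w = RollingClickWindow()
for _ in range(13):                                 # organic monthly behavior
    w.add_month(good=100, bad=20, last_longest=10)
w.add_month(good=1_100, bad=20, last_longest=10)    # one manipulated spike
# The spike displaced one organic month and is now 1 of 13 buckets; as more
# organic months arrive, the spike itself will age out of the window.
print(w.totals()["good"])  # 2300 = 12 organic months + the spike
```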

Implications for Ranking Changes

The 13-month window means that ranking changes driven by NavBoost signals tend to be gradual rather than sudden. When a website improves its content and user experience, the resulting improvement in click behavior (more goodClicks, fewer badClicks, more lastLongestClicks) accumulates over months. Similarly, when a formerly good result becomes outdated or degraded, the negative signals also accumulate gradually.

This design provides inherent stability to rankings. Pages do not jump dramatically based on a single week of unusual click activity. Instead, sustained patterns of user satisfaction (or dissatisfaction) drive ranking changes over time.

Implications for Manipulation

The 13-month window is one of NavBoost's most effective defenses against manipulation. Consider a hypothetical attempt to boost rankings through artificial clicks:

  • If an actor generates 1,000 artificial goodClicks for a result in a single month, those clicks represent only one-thirteenth of the total data window.
  • After the squashing function compresses the signal, the impact is further reduced.
  • The artificial signal must compete against 12 months of genuine historical behavior.
  • If the artificial clicking stops, the one month of manipulated data will gradually fall off the window over the following months, and rankings will revert.

This is consistent with reports from SEO practitioners who have experimented with click manipulation: rankings often improve while the artificial clicking is active but revert after it stops, and the improvement magnitude is typically modest rather than dramatic.

13-Month Rolling Window

Month:   1   2   3   4   5   6   7   8   9  10  11  12  13
       |---|---|---|---|---|---|---|---|---|---|---|---|---|
       [================== Active Window ==================]
         ^                                               ^
    Oldest data                                     Newest data
   (falling off)                                     (entering)

Each month's click data contributes ~1/13 of the signal.
A single month of manipulation = ~7.7% of the total window.
After squashing, the effective contribution is even smaller.

Figure 3: The 13-month rolling window ensures that no single month of click data dominates the overall NavBoost signal for a URL-query pair.

How NavBoost Feeds into Final Rankings

NavBoost does not determine rankings in isolation. It is one of many signals that contribute to Google's final ranking output. Understanding where NavBoost fits in the broader ranking pipeline provides important context for its influence.

The Multi-Stage Ranking Pipeline

Google's ranking system operates in multiple stages:

  1. Retrieval: Google's index identifies a candidate set of potentially relevant pages for a query. This stage is primarily based on content relevance, keyword matching, and basic quality signals.
  2. Initial Scoring: The candidate set is scored using hundreds of ranking signals, including content quality, backlink authority, page experience, and topical relevance. Systems like BERT and MUM help with understanding query intent and content semantics.
  3. Re-Ranking (NavBoost): The initially scored results are re-ranked based on historical click behavior data. This is where NavBoost operates. It adjusts the ordering based on how real users have historically interacted with these results for this query (or similar queries).
  4. Final Assembly: The re-ranked results are assembled into the final SERP, incorporating SERP features (featured snippets, knowledge panels, People Also Ask, etc.).
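The four stages above can be composed as a minimal sketch. The scoring functions here are placeholders (real systems combine hundreds of signals) and every name is invented for illustration.

```python
# Toy four-stage pipeline: retrieval -> scoring -> NavBoost re-rank -> assembly.

def retrieve(query, index):
    return [doc for doc in index if query in doc["text"]]

def initial_score(docs):
    return sorted(docs, key=lambda d: d["content_score"], reverse=True)

def navboost_rerank(docs, click_profiles):
    # Historical click behavior adjusts the content-based ordering.
    return sorted(
        docs,
        key=lambda d: d["content_score"] + click_profiles.get(d["url"], 0.0),
        reverse=True,
    )

def assemble_serp(docs):
    return [d["url"] for d in docs]

index = [
    {"url": "a", "text": "navboost guide", "content_score": 0.9},
    {"url": "b", "text": "navboost overview", "content_score": 0.8},
]
clicks = {"b": 0.3}  # page b has the stronger historical click profile

ranked = assemble_serp(navboost_rerank(initial_score(retrieve("navboost", index)), clicks))
print(ranked)  # ['b', 'a']: re-ranking overrides the initial content order
```

Note how the re-ranking stage can invert the content-based order, which is what makes NavBoost a primary signal rather than a tiebreaker.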

NavBoost's Relative Weight

While Pandu Nayak described NavBoost as one of Google's "most important" ranking signals during the antitrust trial, the exact weight it carries relative to other signals is not publicly known. However, the testimony and the API leak suggest several things about its influence:

  • NavBoost is a primary signal, not a tiebreaker. Its placement in the re-ranking stage means it can override initial scoring decisions, not merely adjust them at the margins.
  • Its influence varies by query type. For queries where users have strong behavioral patterns (commercial queries, navigational queries), NavBoost likely carries significant weight. For queries where click behavior is less differentiated (very new queries, ambiguous queries), its influence may be lower.
  • It interacts with other systems. NavBoost's output is one input among many. Systems like RankBrain (query understanding), BERT (semantic matching), and quality raters (content evaluation) all contribute to the final ranking. NavBoost may reinforce or counteract these other signals depending on the specific URL and query.

Interaction with Other Ranking Systems

The relationship between NavBoost and other ranking components is not purely additive. There are interaction effects:

  • Content quality + positive click signals = reinforcement. A page with strong content that also receives goodClicks and lastLongestClicks will receive reinforcing signals from multiple systems, producing stable high rankings.
  • Content quality + negative click signals = conflicting signals. A page that quality raters score highly but that generates badClicks (pogo-sticking) may create a conflict that results in ranking instability. This can happen when content is technically comprehensive but does not match user intent for a specific query.
  • Weak content + positive click signals = potential manipulation flag. If a page has thin or low-quality content but somehow generates positive click signals, this mismatch may trigger additional scrutiny from Google's click manipulation detection systems.

NavBoost in the Ranking Pipeline

Query Entered
      |
      v
[ Retrieval ] ------> Candidate pages from index
      |
      v
[ Initial Scoring ]
  - Content quality
  - Backlinks
  - BERT / MUM (semantics)
  - Page experience
  - E-E-A-T signals
      |
      v
[ NavBoost Re-Ranking ] <---- 13-month click data
  - goodClicks boost          (squashed + aggregated)
  - badClicks demote
  - lastLongestClicks strongest boost
      |
      v
[ Final Assembly ]
  - SERP features
  - Personalization
  - Freshness adjustments
      |
      v
Search Results Page

Figure 4: NavBoost operates in the re-ranking stage, adjusting results that have already been scored by content-based and authority-based signals.

How NavBoost Operates at the Query Level

NavBoost does not evaluate pages in isolation. It operates at the intersection of queries and URLs. The same page may have a strong NavBoost score for one query and a weak or negative score for another, depending on how users interact with that page when arriving from each query.

Query-URL Pairs

For each combination of a query (or query cluster) and a URL, NavBoost maintains a behavioral profile based on the accumulated click data within the 13-month window. This profile includes the proportion of goodClicks vs. badClicks, the frequency of lastLongestClicks, and the overall click volume (after squashing).

This query-level operation means that a page can rank well for queries where it genuinely satisfies user intent and rank poorly for queries where it does not — even if the page content is the same. NavBoost is, in effect, a user-vote-based measure of relevance at the query level.
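A per-pair profile might be modeled as below. The field names and the scoring formula are invented for this sketch; only the idea of keyed query-URL profiles comes from the source.

```python
# Illustrative per-(query, URL) profiles: the SAME page carries different
# behavioral data, and hence different scores, for different queries.

profiles = {
    ("tax software reviews", "example.com/guide"): {"good": 800, "bad": 100, "ll": 120},
    ("free tax filing",      "example.com/guide"): {"good": 90,  "bad": 600, "ll": 5},
}

def navboost_score(p):
    total = p["good"] + p["bad"]
    if total == 0:
        return 0.0
    # goodClicks help, badClicks hurt, lastLongestClicks help the most
    return (p["good"] - p["bad"] + 2 * p["ll"]) / total

for (query, url), p in profiles.items():
    print(f"{query!r}: {navboost_score(p):+.2f}")
# Same URL, opposite signals: positive for the first query, negative for the
# second, so rankings diverge per query even though the page is identical.
```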

Query Clustering

Google does not treat every unique query string as an entirely separate entity. Queries with very similar intent — such as "how does navboost work," "navboost technical overview," and "google navboost architecture" — are likely clustered together for the purposes of NavBoost data aggregation. This clustering ensures that NavBoost has sufficient data to make meaningful ranking adjustments even for less common query variations.

The clustering mechanism also means that NavBoost signals from a high-volume query can influence rankings for related lower-volume queries, providing coverage for long-tail variations where individual click data might be sparse.

Geographic and Device Segmentation

NavBoost does not treat all clicks identically regardless of context. Evidence from the API leak and antitrust trial testimony suggests that click data is segmented along at least two important dimensions: geography and device type.

Geographic Segmentation

Click behavior varies significantly by region. A search result about local regulations in the United Kingdom may generate goodClicks from UK-based users but badClicks from users in other countries who find the content irrelevant. NavBoost accounts for this by segmenting click data geographically, so that the click behavior of users in a particular country or region primarily influences rankings for users in the same area.

This geographic segmentation is particularly relevant for local SEO, where the relevance of results is inherently tied to the user's location.

Device Segmentation

Mobile and desktop users exhibit different click behaviors. Mobile users tend to have shorter sessions, are more likely to engage in single-click search (finding one result and stopping), and have different scrolling patterns. Desktop users may be more likely to open multiple results in tabs, which creates a different dwell-time profile.

NavBoost appears to account for these differences by maintaining separate or segmented click profiles for mobile and desktop interactions. This prevents the different behavioral norms of each device category from conflating the signal.
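The segmentation described above amounts to keying click data by more than just the query and URL. The key structure below is an assumption for illustration; the source confirms only that geographic and device segmentation exist.

```python
from collections import defaultdict

# Sketch of segmented aggregation: counts keyed by region and device as
# well as query and URL, so one segment's behavior does not bleed into
# another's.

counts = defaultdict(int)

def record_click(query, url, region, device, label):
    counts[(query, url, region, device, label)] += 1

record_click("uk visa rules", "gov.uk/visas", "GB", "mobile", "goodClick")
record_click("uk visa rules", "gov.uk/visas", "US", "desktop", "badClick")

# UK mobile satisfaction and US desktop dissatisfaction land in separate
# buckets and influence only their own segment's rankings:
print(counts[("uk visa rules", "gov.uk/visas", "GB", "mobile", "goodClick")])  # 1
print(counts[("uk visa rules", "gov.uk/visas", "US", "desktop", "badClick")])  # 1
```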

Built-in Manipulation Resistance

NavBoost's architecture includes multiple layers of defense against click manipulation, some explicit and some inherent to the system's design. For a detailed treatment, see How Google Detects Artificial Clicks.

Structural Defenses

Several features of NavBoost's architecture make manipulation inherently difficult:

  • The squashing function compresses extreme signals, meaning that even a large volume of artificial clicks produces a disproportionately small ranking effect.
  • The 13-month window dilutes any short-term manipulation across a long time horizon.
  • Trust-level weighting means that clicks from logged-in Chrome users carry more weight than anonymous clicks, and generating authentic-looking logged-in user behavior at scale is extremely difficult.
  • Multi-signal correlation allows Google to cross-reference click behavior against other signals (content quality, backlinks, user engagement metrics) to identify results where click signals are inconsistent with other quality indicators.

Active Detection

Beyond structural defenses, Google also employs active detection mechanisms. The API leak revealed fields related to click quality assessment and filtering, suggesting that Google actively evaluates whether click patterns appear organic or artificial. Signals that may trigger detection include:

  • Abnormal consistency in click timing (real users have variable response times)
  • Unusual geographic clustering of clicks
  • Device and browser fingerprint uniformity
  • Click patterns that do not include natural behaviors like scrolling, mouse movement, or page interaction
  • Sudden CTR spikes that do not correlate with changes to the page's title, meta description, or snippet
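The first of these signals, timing consistency, can be illustrated with a toy heuristic. Both the approach and the variance threshold below are invented for illustration; Google's actual detection mechanisms are not public.

```python
import statistics

# Toy heuristic: real users show variable response times, while scripted
# clicks are often suspiciously uniform. The threshold is invented.

def looks_scripted(inter_click_seconds, min_stdev=0.5):
    """Flag a click stream whose inter-click timing is abnormally consistent."""
    if len(inter_click_seconds) < 3:
        return False  # too little data to judge
    return statistics.stdev(inter_click_seconds) < min_stdev

print(looks_scripted([5.0, 5.1, 5.0, 4.9]))    # True: near-identical timing
print(looks_scripted([2.0, 14.5, 6.3, 41.0]))  # False: human-like variance
```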

Practical Implications for Search Visibility

Understanding NavBoost's architecture leads to several practical conclusions for anyone focused on organic search performance.

Click-Through Rate Matters, But Contextually

CTR from the SERP is an input to NavBoost, but it is not the only — or even the most important — input. Post-click behavior (dwell time, pogo-sticking, session continuation) is equally or more significant. A high CTR achieved through sensationalized titles that lead to disappointing content will generate badClicks, which is worse than having a moderate CTR with strong satisfaction signals.

Content-Click Alignment Is Critical

The most effective way to generate positive NavBoost signals is to ensure that page content genuinely matches the expectations set by its title and meta description in the SERP. When users click and find exactly what they expected — or better — the result is goodClicks and lastLongestClicks. When users click and find a mismatch, the result is badClicks.

Long-Term Consistency Outweighs Short-Term Spikes

Because of the 13-month window and the squashing function, sustained improvements in user satisfaction produce much stronger NavBoost signals than any short-term tactic. A website that gradually improves its content quality, page speed, and user experience over 6-12 months will build a durable NavBoost advantage that is difficult for competitors to erode quickly.

Different Queries Require Different Optimization

Because NavBoost operates at the query-URL level, optimizing for NavBoost means understanding the specific intent behind each target query and ensuring the landing page matches that intent precisely. A page that ranks for multiple queries may need to be split into more focused pages if the queries have different intents, because a single page is unlikely to satisfy all intents equally well.

Technical Summary

The following table summarizes the key architectural components of NavBoost and their functions:

Component               | Function                                       | Key Detail
------------------------|------------------------------------------------|------------------------------------------------------
Data Collection         | Gathers user click and post-click behavior     | Chrome (logged-in), cookies, Android, Google app
Click Classification    | Categorizes each interaction by satisfaction   | goodClicks, badClicks, lastLongestClicks
Squashing Function      | Normalizes raw click signals                   | Compresses extremes; reduces manipulation effectiveness
13-Month Window         | Aggregates data over time                      | Full annual cycle + overlap; dilutes short-term spikes
Query-URL Pairing       | Associates click data with query-result pairs  | Same page can have different scores for different queries
Geographic Segmentation | Segments behavior by region                    | Prevents irrelevant cross-region signal pollution
Device Segmentation     | Separates mobile and desktop behavior          | Accounts for different usage patterns
Re-Ranking Integration  | Adjusts initial rankings based on click signals| Operates after content-based scoring, before final assembly

Sources and Further Reading

The technical details in this article are drawn from the following primary sources:

  • 2024 Google API Leak: Thousands of pages of internal Google API documentation that were inadvertently made public in May 2024, detailing NavBoost click types and normalization mechanisms. See full analysis.
  • U.S. v. Google (2023 Antitrust Trial): Testimony from Pandu Nayak, Google VP of Search, confirming NavBoost's existence and importance. See Google Antitrust Trial.
  • Rand Fishkin: Initial public analysis of the leaked API documentation (SparkToro, May 2024).
  • Mike King: Technical analysis of the leaked API fields and their implications for click-based ranking (iPullRank, 2024).
  • RESONEO: "Google Leak Part 5: Click-data, NavBoost, Glue, and Beyond" — detailed breakdown of click signal handling.
  • Hobo Web: "NavBoost: How Google Uses Large-Scale User Interaction Data to Rank Websites" — comprehensive contextual analysis.

For related topics, see What is NavBoost? for a foundational overview, NavBoost Click Types for detailed click classification analysis, and NavBoost vs. RankBrain for a comparison of Google's major ranking systems.
