NavBoost Click Types: goodClicks, badClicks, and lastLongestClicks Explained

The 2024 Google API leak revealed that NavBoost classifies user clicks into distinct categories, each carrying different weight in Google's ranking calculations. This article examines all five documented click types, how they are determined, and what they mean for search result rankings.

Introduction: Why Click Types Matter

For years, the SEO industry debated whether Google used click data as a ranking signal. Google representatives repeatedly denied or deflected, most notably in a 2016 statement where a Google Search spokesperson said that click-through rate was not used as a ranking signal. The 2024 Google API leak ended that debate definitively.

Among the thousands of internal API fields documented in the leak, several belonged to a system called NavBoost — Google's primary mechanism for re-ranking search results based on user behavior. These fields described specific click categories with distinct names and distinct roles in ranking. The leak confirmed not only that Google tracks clicks, but that it classifies them into multiple types, each serving a different purpose in the ranking algorithm.

Understanding these click types is fundamental to understanding how NavBoost works and, more broadly, how user behavior shapes the search results billions of people see every day. The five documented click types are: goodClicks, badClicks, lastLongestClicks, unsquashedClicks, and squashedClicks.

The first three (goodClicks, badClicks, lastLongestClicks) are behavioral classifications that describe the quality of a click interaction. The last two (unsquashedClicks, squashedClicks) are processing states that describe the click data before and after normalization. Together, they form the vocabulary of NavBoost's click evaluation system.

goodClicks: The Satisfaction Signal

Definition

A "goodClick" is recorded when a user clicks on a search result and subsequently demonstrates satisfied behavior. The primary indicator of satisfaction is dwell time — the user remains on the clicked page for a period that exceeds the threshold for a quick return. In practical terms, the user clicked, found what they were looking for (or at least something worth reading), and stayed.

How goodClicks Are Determined

While Google has not disclosed the exact thresholds or heuristics used to classify a click as "good," the behavioral signals that likely contribute to this classification include:

  • Dwell time above a minimum threshold: The user stays on the page for a meaningful duration. Estimates from researchers who have analyzed the API leak suggest this threshold could be as low as 30 seconds or as high as several minutes, potentially varying by query type. An informational query about a complex topic might have a higher dwell-time threshold than a quick factual lookup.
  • No return to SERP for the same query: The user does not hit the back button and return to Google's search results page to try another result. Returning to the SERP is a signal of dissatisfaction (see badClicks below).
  • On-page engagement: Scrolling behavior, clicks on internal links, and other interaction signals may contribute to the "good" classification, though the relative weight of these signals compared to dwell time is unknown.
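The heuristics above can be sketched as a simple classifier. Everything here is an illustrative assumption: the threshold values, the idea of per-query-type thresholds, and the field names are invented for demonstration, not taken from the leak.

```python
# Illustrative sketch of a goodClick heuristic. Thresholds and the
# per-query-type structure are assumptions, not documented values.

# Hypothetical dwell-time thresholds (seconds) by query type.
DWELL_THRESHOLDS = {
    "navigational": 10,
    "factual": 30,
    "informational": 120,  # complex topics may warrant longer reading
}

def is_good_click(dwell_seconds: float, returned_to_serp: bool,
                  query_type: str = "factual") -> bool:
    """Classify a single click as 'good' under the sketched heuristic:
    the user stayed past the dwell threshold and did not bounce back
    to the search results page for the same query."""
    threshold = DWELL_THRESHOLDS.get(query_type, 30)
    return dwell_seconds >= threshold and not returned_to_serp

print(is_good_click(240, returned_to_serp=False, query_type="informational"))  # True
print(is_good_click(8, returned_to_serp=True))  # False
```

On-page engagement signals (scrolling, internal clicks) are omitted here because their relative weight is unknown; a fuller model would fold them into the same boolean decision.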

Ranking Impact

goodClicks serve as a positive ranking signal. When a URL consistently receives goodClicks for a particular query or query cluster, NavBoost interprets this as evidence that the result satisfies user intent. Over time, accumulated goodClicks contribute to maintaining or improving the result's ranking position.

It is important to note that a single goodClick does not meaningfully move rankings. NavBoost operates on aggregate patterns across thousands or millions of user interactions, evaluated over a 13-month rolling window. The signal comes from the consistent proportion of goodClicks relative to total clicks for a given query-URL pair.

Real-World Example

A user searches for "how to propagate succulents." They click on a result titled "Complete Guide to Succulent Propagation — Methods, Timing, and Care." The page loads quickly, contains a comprehensive guide with step-by-step instructions and photos, and the user spends 4 minutes reading through the content. They then close the browser tab or navigate to a different site. NavBoost records this interaction as a goodClick for that URL-query pair.

badClicks: The Dissatisfaction Signal

Definition

A "badClick" is recorded when a user clicks on a search result and quickly returns to the search results page. This behavior, commonly known in the SEO industry as pogo-sticking, is a strong signal that the clicked result did not satisfy the user's query. The result's title and snippet were enticing enough to earn the click, but the actual page content failed to deliver.

How badClicks Are Determined

The classification of a badClick is primarily based on a rapid return-to-SERP pattern:

  • Short dwell time: The user spends very little time on the page — potentially just a few seconds — before pressing the back button or returning to Google.
  • Return to SERP: The user goes back to the same search results page and continues browsing other results. This is the defining behavioral characteristic. A user who closes their browser entirely does not produce the same signal as one who explicitly returns to try another result.
  • Subsequent clicks on other results: The user clicks on a different search result after returning, indicating they are still looking for an answer. This continuation behavior reinforces the badClick classification for the original result.
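As a sketch, the return-to-SERP pattern can be detected from an ordered click log for one query session. The data layout and the quick-return cutoff are hypothetical:

```python
# Hypothetical pogo-sticking detector. A click is flagged 'bad' when the
# user returns to the SERP quickly and then clicks another result.
QUICK_RETURN_SECONDS = 15  # assumed cutoff, not a documented value

def find_bad_clicks(clicks):
    """clicks: ordered list of dicts with 'url', 'dwell_seconds',
    and 'returned_to_serp' keys for one query session."""
    bad = []
    for i, click in enumerate(clicks):
        quick_return = (click["returned_to_serp"]
                        and click["dwell_seconds"] < QUICK_RETURN_SECONDS)
        # A later click on another result reinforces the classification.
        tried_another = i + 1 < len(clicks)
        if quick_return and tried_another:
            bad.append(click["url"])
    return bad

session = [
    {"url": "a.example", "dwell_seconds": 8, "returned_to_serp": True},
    {"url": "b.example", "dwell_seconds": 300, "returned_to_serp": False},
]
print(find_bad_clicks(session))  # ['a.example']
```

Note that a user who closes the browser after a short visit produces no later click, so this sketch, like the described signal, treats that case more leniently than an explicit return-and-retry.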

Ranking Impact

badClicks serve as a negative ranking signal. When a URL consistently generates badClicks for a particular query, NavBoost interprets this as evidence that the result does not satisfy user intent for that query. Over time, accumulated badClicks contribute to a decline in the result's ranking position.

Importantly, badClicks for one query do not necessarily affect a page's ranking for other queries. A page might generate badClicks when it appears for the query "best budget laptops 2026" (because it covers laptops generally, not budget models) but generate goodClicks for the query "laptop buying guide" (because that broader topic is what the page actually addresses). NavBoost evaluates click quality at the query-URL pair level.

Real-World Example

A user searches for "python list comprehension syntax." They click on a result titled "Python List Comprehension — Everything You Need to Know." The page loads slowly, displays a large interstitial ad, and when the content finally appears, it begins with a 500-word introduction about Python's history before addressing list comprehensions. After 8 seconds, the user hits the back button and clicks a different result. NavBoost records this interaction as a badClick.

lastLongestClicks: The Strongest Signal

Definition

A "lastLongestClick" is the final click in a search session that also has the longest dwell time of any click in that session. This click type carries the strongest positive weight in NavBoost's ranking calculations. The logic is intuitive: if a user clicks multiple results for the same query and ultimately settles on one (the last one they visit, and the one where they spend the most time), that result was most likely the most satisfying answer.

How lastLongestClicks Are Determined

This classification depends on the full session context, not just a single click event:

  • Session tracking: NavBoost must track the user's entire search session — all the results they click for a given query, in what order, and for how long.
  • Comparative dwell time: The result where the user spends the longest time is identified. This is measured relative to the other results clicked in the same session, not against an absolute threshold.
  • Finality: The click must also be the last in the session. The user does not return to the SERP after visiting this result. The combination of being both the longest-dwelt and the final result strongly indicates that the user found their answer.
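Because the classification is session-level, a sketch needs the whole ordered click list rather than a single event. The data structure below is an assumption about how such logic could look:

```python
def last_longest_click(session_clicks):
    """Return the URL that qualifies as the lastLongestClick, or None.
    session_clicks: ordered list of (url, dwell_seconds, returned_to_serp)
    tuples for one query session. The qualifying click must be both the
    final click (no return to the SERP afterwards) and the longest-dwelt
    click in the session."""
    if not session_clicks:
        return None
    last_url, last_dwell, returned = session_clicks[-1]
    if returned:
        return None  # the user kept searching; no clear winner
    longest_dwell = max(dwell for _, dwell, _ in session_clicks)
    return last_url if last_dwell == longest_dwell else None

session = [
    ("review-site.example", 45, True),
    ("retailer.example", 20, True),
    ("comparison.example", 360, False),
]
print(last_longest_click(session))  # 'comparison.example'
```

Sessions whose last click is not also the longest-dwelt yield no lastLongestClick in this sketch; how Google resolves that case is not documented.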

Ranking Impact

lastLongestClicks carry more weight than standard goodClicks. While a goodClick indicates "this result was satisfactory," a lastLongestClick indicates "this result was the best answer among all the options the user tried." It is a comparative signal, not just an absolute one.

Because lastLongestClicks are inherently session-level signals, they provide information that individual-click metrics cannot. They capture the user's revealed preference after sampling multiple alternatives, which is a more robust indicator of result quality than a single click-and-stay event.

Real-World Example

A user searches for "best noise cancelling headphones under $200." They click result #2 (a tech review site), spend 45 seconds scanning the page, return to the SERP. They click result #4 (a retailer's buying guide), spend 20 seconds, return to the SERP. They click result #1 (a detailed comparison article), spend 6 minutes reading reviews, comparing specs, and checking prices. They then close the tab. Result #1 receives the lastLongestClick designation — it was both the last result clicked and the one where the user dwelled the longest.

unsquashedClicks: Raw Click Data

Definition

The "unsquashedClicks" field, as revealed in the API leak, represents the raw, unprocessed count of click events for a given query-URL pair. This is the click data before Google's squashing function has been applied.

Purpose

unsquashedClicks represent the literal count of user interactions. If 5,000 users clicked on a particular result for a particular query over a given time period, the unsquashedClicks value would reflect that number (or a value derived directly from it).

This raw data serves several potential purposes:

  • Baseline for normalization: The squashing function requires raw data as its input. unsquashedClicks are the "before" to squashedClicks' "after."
  • Anomaly detection: Comparing raw click volumes against expected baselines allows Google to identify unusual spikes that might indicate manipulation attempts.
  • Internal analysis: Google's search quality teams likely use raw data for research, debugging, and system evaluation, even if the raw values are not directly used in ranking.

Why Raw Data Is Not Used Directly for Ranking

If Google used raw, unsquashed click counts directly in ranking calculations, the system would be trivially manipulable. An actor could generate thousands of artificial clicks and achieve a proportional ranking boost. The squashing function exists precisely to prevent this. Raw click data is also inherently biased toward high-volume queries, which would create unfair advantages for results on popular topics over equally relevant results on niche topics.

squashedClicks: Normalized Click Data

Definition

The "squashedClicks" field represents click data after Google's normalization function — the squashing function — has been applied. This is the processed form of the data that is actually used in NavBoost's ranking calculations.

How Squashing Works

The squashing function is a mathematical compression mechanism that reduces the range of click values. Conceptually, it operates like a logarithmic or sigmoid function:

  • Small click volumes are relatively preserved (a result with 10 clicks vs. 5 clicks maintains a meaningful difference).
  • Large click volumes are compressed (a result with 100,000 clicks is not treated as 10x more significant than one with 10,000 clicks).
  • Extreme spikes are flattened (a sudden jump from 100 to 10,000 clicks does not produce a 100x signal increase).

This normalization ensures that the quality and consistency of click signals matter more than the raw volume, and that manipulation through sheer click volume yields rapidly diminishing returns.
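The compression behavior described above can be illustrated with a simple logarithmic curve. This is a toy stand-in: the leak names the squashing function but does not reveal its actual form.

```python
import math

def squash(raw_clicks: float) -> float:
    """Toy logarithmic squashing: small counts stay distinguishable,
    large counts are heavily compressed. A stand-in for Google's
    undisclosed squashing function."""
    return math.log1p(raw_clicks)

# 10 vs. 5 clicks keeps a visible gap; 100,000 vs. 10,000 does not
# produce anywhere near a 10x difference after squashing.
for raw in (5, 10, 10_000, 100_000):
    print(f"{raw:>7} raw clicks -> {squash(raw):.2f} squashed")
```

A sigmoid would flatten the top end even harder; the shared property, and the point of the sketch, is that equal multiplicative increases in raw clicks buy less and less squashed signal.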

Relationship to unsquashedClicks

The existence of both unsquashedClicks and squashedClicks as separate API fields is significant. It indicates that Google stores both versions — the raw and the processed — rather than discarding the raw data after normalization. This dual storage enables comparison: if the relationship between a URL's unsquashed and squashed values deviates significantly from the expected pattern (for example, if the unsquashed value is abnormally high relative to the squashed value), this could flag the URL for additional scrutiny.
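Storing both values enables a simple consistency check of the kind described above. This sketch assumes a toy log-style squashing model and a hypothetical tolerance; neither is documented.

```python
import math

def flag_anomalous(unsquashed: float, squashed: float,
                   tolerance: float = 1.5) -> bool:
    """Flag a query-URL pair whose stored squashed value deviates from
    what a toy log1p squashing model predicts for its raw count. The
    model and the tolerance are assumptions for illustration only."""
    expected = math.log1p(unsquashed)
    return squashed < expected / tolerance or squashed > expected * tolerance

# A pair whose squashed value matches the model passes...
print(flag_anomalous(10_000, math.log1p(10_000)))  # False
# ...while a raw spike without a matching squashed value is flagged.
print(flag_anomalous(10_000, 2.0))  # True
```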

Comparison of All Five Click Types

Click Type        | Category         | Description                                             | Ranking Impact                | Signal Strength
goodClicks        | Behavioral       | User clicks and stays; indicates satisfaction           | Positive                      | Standard positive
badClicks         | Behavioral       | User clicks and quickly returns to SERP (pogo-sticking) | Negative                      | Standard negative
lastLongestClicks | Behavioral       | Last result clicked in session, with longest dwell time | Positive                      | Strongest positive
unsquashedClicks  | Processing state | Raw click count before normalization                    | Indirect (input to squashing) | N/A (raw data)
squashedClicks    | Processing state | Normalized click count after squashing function         | Direct (used in ranking)      | Depends on input

Table 1: Summary of all five NavBoost click types revealed in the 2024 Google API leak, showing their category, description, ranking impact, and relative signal strength.

How Click Types Interact in the Ranking Algorithm

The five click types do not operate independently. They interact within NavBoost's ranking calculations in several important ways.

Ratio-Based Evaluation

NavBoost does not simply count goodClicks and subtract badClicks. The system likely evaluates the ratio of click types for each query-URL pair. A result with 800 goodClicks and 200 badClicks (an 80% satisfaction rate) and a result with 400 goodClicks and 100 badClicks (also 80%) differ in raw volume, but after the squashing function compresses those volumes, the shared proportion matters at least as much as the absolute numbers.
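A ratio-plus-squashing score could look like the following sketch. The formula is invented for illustration; only the ingredients (a good/bad ratio and a squashing step) come from the leak's field names.

```python
import math

def navboost_score(good_clicks: int, bad_clicks: int) -> float:
    """Toy score: satisfaction ratio weighted by squashed total volume,
    so proportion dominates and raw volume has diminishing returns.
    Invented formula; not Google's actual calculation."""
    total = good_clicks + bad_clicks
    if total == 0:
        return 0.0
    ratio = good_clicks / total
    return ratio * math.log1p(total)

# Same 80% ratio at double the volume: scores stay close, not 2x apart.
print(round(navboost_score(800, 200), 2))  # 5.53
print(round(navboost_score(400, 100), 2))  # 4.97
```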

lastLongestClicks as a Weighting Factor

The lastLongestClick signal likely amplifies the effect of goodClicks. A result that frequently receives both goodClicks and lastLongestClicks is signaling to NavBoost that it is not merely satisfactory but is the preferred result among the options users try. This combination would produce the strongest possible positive signal in the system.

Conversely, a result that receives goodClicks but rarely or never receives lastLongestClicks may be adequate but not exceptional. Users are satisfied enough not to pogo-stick, but they often find a better answer elsewhere in the same session.

The Squashing Layer

All behavioral classifications (goodClicks, badClicks, lastLongestClicks) pass through the squashing function before being used in ranking. This means that the interaction between click types occurs in the squashed (normalized) space, not the raw space. Two practical consequences follow:

  1. Volume manipulation is doubly suppressed: Even if an actor could generate a disproportionate number of artificial goodClicks, those clicks would be compressed by the squashing function, and their effect would be further diluted by the proportion calculation against existing badClicks and other data.
  2. Quality differentiation is preserved: Because squashing compresses volume but preserves relative differences at smaller scales, the behavioral distinction between a truly great result (high lastLongestClick rate) and a merely adequate result (moderate goodClick rate) remains detectable even after normalization.

Behavioral Scenarios Mapped to Click Types

The following scenarios illustrate how common user behaviors translate into NavBoost click classifications.

Scenario 1: The immediate answer. A user searches for "capital of Australia," clicks the first result, reads "Canberra," and closes the tab in 5 seconds. Despite the short dwell time, this is likely classified as a goodClick because the user did not return to the SERP. The query was fully satisfied.

Scenario 2: The deep reader. A user searches for "comprehensive guide to retirement planning," clicks a result, and spends 12 minutes reading the content. They do not return to the SERP. This is a goodClick and a lastLongestClick (as the only click in the session, it is by definition both the last and the longest).

Scenario 3: The comparison shopper. A user searches for "best standing desk 2026," clicks result #3 (30-second dwell), returns to SERP, clicks result #1 (15-second dwell), returns to SERP, clicks result #5 (4-minute dwell). Results #3 and #1 receive badClicks. Result #5 receives a goodClick and a lastLongestClick.

Scenario 4: The misleading snippet. A user searches for "free PDF editor online," clicks a result that says "Free PDF Editor" in the title, but the page requires a $30/month subscription. The user returns to the SERP in 4 seconds. This is a badClick. If repeated across many users, this consistently negative signal will accumulate in NavBoost and suppress the result's ranking for this query.

Scenario 5: The satisfied-but-not-best answer. A user searches for "how to fix a leaky faucet," clicks result #2 (a short article, 90-second dwell), returns to the SERP, clicks result #4 (a video tutorial, 8-minute dwell). Result #2 might receive a goodClick or a borderline classification — the 90-second dwell is not trivial, but the user still returned. Result #4 receives the lastLongestClick.

Implications for Search Optimization

The click type framework has direct implications for how publishers and SEO practitioners should approach their work.

Optimize for lastLongestClicks, Not Just Clicks

The most valuable click type is the lastLongestClick. Achieving this classification requires creating content that is not merely clickable but genuinely the best answer available. This means:

  • Comprehensive coverage that reduces the need to check other sources
  • Fast page load times so users do not abandon before engaging
  • Clear content structure so users can find what they need quickly
  • Accurate and current information that fully satisfies the query

Reduce badClicks by Aligning Titles with Content

badClicks are generated when users feel misled by a search snippet. Reducing them requires honest, accurate title tags and meta descriptions that set appropriate expectations. Clickbait — titles that promise more than the content delivers — will generate short-term clicks but long-term badClick signals that suppress rankings.

Understand That CTR Alone Is Insufficient

A high click-through rate in Google Search Console does not necessarily translate to a positive NavBoost signal. If a high percentage of those clicks are classified as badClicks (users clicking but quickly returning), the net NavBoost effect could be negative. The most meaningful metric is not CTR alone but the combination of CTR and post-click satisfaction.

Think in 13-Month Cycles

Because NavBoost aggregates click data over a 13-month window, the click type composition of a URL's profile changes gradually. Improvements to content quality today will not produce immediate NavBoost benefits. Instead, the positive click signals will accumulate month over month, gradually shifting the URL's NavBoost profile from adequate to strong. Patience and sustained quality are structurally rewarded by the system's design.
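The 13-month aggregation can be sketched as a rolling-window filter over timestamped click records. The 13-month figure comes from reporting on the leak; the data layout is an assumption.

```python
from datetime import date, timedelta

WINDOW_DAYS = 396  # roughly 13 months

def clicks_in_window(records, today):
    """records: list of (date, click_type) tuples for one query-URL pair.
    Keep only clicks inside the rolling window, so old signals age out
    as new ones accumulate."""
    cutoff = today - timedelta(days=WINDOW_DAYS)
    return [(d, t) for d, t in records if d >= cutoff]

records = [
    (date(2024, 1, 10), "goodClick"),   # outside the window; aged out
    (date(2025, 6, 1), "goodClick"),
    (date(2025, 8, 15), "badClick"),
]
print(clicks_in_window(records, today=date(2025, 12, 1)))
```

This aging-out is why sustained quality is structurally rewarded: each month of better post-click behavior displaces a month of the old profile.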

Sources

The click types described in this article were documented in the 2024 Google API leak, first reported by Rand Fishkin (SparkToro) and analyzed in depth by Mike King (iPullRank). Additional context comes from:

  • Pandu Nayak's testimony during the U.S. v. Google antitrust trial (2023), confirming NavBoost's existence and significance.
  • RESONEO, "Google Leak Part 5: Click-data, NavBoost, Glue, and Beyond" — detailed field-level analysis of click signal handling.
  • Hobo Web, "NavBoost: How Google Uses Large-Scale User Interaction Data to Rank Websites."
  • Top of the Results, "The Google NavBoost Leak That Validated CTR Manipulation Techniques."

For related reading, see What is NavBoost? for foundational context, How NavBoost Works for the full technical architecture, and The NavBoost Squashing Function for details on how click data is normalized before use in ranking.

About this site: NavBoost.com is an independent resource on Google's click-based ranking systems.