Improve Your Autocomplete Timing with Debouncing

Article summary

We have an application with an autosuggestion search box that’s driven by a query to our GraphQL server. When the user types in a string, such as “at,” we can query our server for the term our search engine thinks we should suggest—maybe “atomic.”

We wanted this search box to be fast and efficient; what’s the point of an autosuggestion if it takes as long to generate as it does to type? So, we had to be thoughtful about how to power the search box. Here’s how we went about designing it.

(If you’d like to skip ahead to the implementation, jump to my follow-up post about Autocomplete with React-Redux & Apollo.)

The Problem

Every time the user types a new letter, we need to make a query to update the autosuggestion in the box.

But what if the user types really fast? That’s a lot of queries being fired off in a short amount of time, and there’s no need to show the suggestion “apple” for “a” if you’ve already typed “atomic” by the time the request comes back. That’s going to look terrible, kind of like this:

00:00:00.000

  • User presses the “a” key.
  • The search box makes a query to know what to autosuggest for “a” and waits for it to come back.
  • Search box text: “a”

00:00:00.150

  • User presses the “t” key.
  • The search box makes a query for “at,” and waits for it to come back.
  • Search box text: “at”

00:00:00.300

  • The first autosuggestion for “a” comes back: { suggestion: "apples" }
  • The search box gets its autosuggestion and updates the text.
  • Search box text: “atples.” Oh no!

00:00:00.450

  • The second autosuggestion comes back: { suggestion: "atomic" }
  • The search box text flickers to “atomic.”

The Solution

Cleaning it up

One way we can handle this a little more intelligently is to keep track of the latest search term the user has typed and update the search box’s autosuggestion only if we get a result for that term.

To do this, we’ll package the autosuggestion result with the term it was made for, like so:

{ term: "a", suggestion: "apples" }

That will solve the overlap problem! Now the workflow looks like this:

00:00:00.000

  • User presses the “a” key.
  • The search box makes a query to know what to autosuggest for “a” and waits for it to come back.
  • Search box text: “a”

00:00:00.150

  • User presses the “t” key.
  • The search box makes a query for “at” and waits for it to come back.
  • Search box text: “at”

00:00:00.300

  • The first autosuggestion for “a” comes back: { term: "a", suggestion: "apples" }
  • We note the current search term: “at.” That doesn’t match the term field of the autosuggestion result, so we throw this result out.
  • Search box text: “at”

00:00:00.450

  • The second autosuggestion comes back: { term: "at", suggestion: "atomic" }
  • We note the current search term: “at.” That matches the term field of the autosuggestion result, so we update the search box.
  • Search box text: “atomic.”
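
In code, the term-matching check above might look something like this (a sketch in TypeScript; `fetchSuggestion` and `applySuggestion` are hypothetical stand-ins for the actual query and UI-update code, not our real implementation):

```typescript
// The result comes back packaged with the term it was queried for.
type SuggestionResult = { term: string; suggestion: string };

// Track the latest term the user has typed.
let currentTerm = "";

async function updateSuggestion(
  term: string,
  fetchSuggestion: (term: string) => Promise<SuggestionResult>,
  applySuggestion: (suggestion: string) => void
): Promise<void> {
  currentTerm = term;
  const result = await fetchSuggestion(term);
  // Throw the result out unless it still matches what the user has typed.
  if (result.term === currentTerm) {
    applySuggestion(result.suggestion);
  }
}
```

A stale result for “a” arriving after the user has typed “at” fails the comparison and is discarded, so the box never shows “atples.”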

This is a little better, but we threw out one of the query results. Autosuggestion isn’t cheap; we have a lot of potential suggestions. How can we minimize the load on the server for autosuggestions that we’re never going to use?

Debouncing the query

One way to minimize the number of rapid requests a UI element makes is by debouncing, where we wrap the autosuggestion function in another function (the debounce function) that we call every time the user types a new character. Before making the autosuggestion query, the debounce function waits a few hundred milliseconds to see if any more calls come through.

If not, debounce makes the autosuggestion query and the search box gets updated. But if any new calls do come through in that time, it throws out the old calls and starts waiting again. Here’s what that looks like:

00:00:00.000

  • User presses the “a” key.
  • The search box asks the debounce function to query for an autosuggestion for “a.”
  • The debounce function starts a 300ms timer, waiting to see if any more requests come through.
  • Search box text: “a”

00:00:00.150

  • User presses the “t” key.
  • The search box tells the debounce function to query for “at.”
  • The debounce function forgets about the query for an autosuggestion for “a” and starts a new 300ms timer.
  • Search box text: “at”

00:00:00.450

  • No new requests have come through, and the debounce function’s 300ms timer ends.
  • The debounce function makes the query for an autosuggestion for “at” and waits for a response.
  • Search box text: “at”

00:00:00.600

  • The autosuggestion for “at” comes back: { term: "at", suggestion: "atomic" }
  • We look at the current search term: “at.” That matches the term field of the autosuggestion result, so we update the search box.
  • Search box text: “atomic.”
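
The debounce wrapper described above can be sketched in a few lines (a simplified version for illustration; a library helper such as lodash’s `debounce` does the same job with more options):

```typescript
// Wrap fn so that rapid calls collapse into one: each new call cancels
// the pending one and restarts the wait.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer); // forget the earlier call
    timer = setTimeout(() => fn(...args), waitMs); // start a fresh timer
  };
}

// Hypothetical wiring: query only after the user pauses for 300ms.
// const debouncedQuery = debounce(queryAutosuggestion, 300);
// input.addEventListener("input", () => debouncedQuery(input.value));
```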

With this solution, we can make only the one query we actually need! When you multiply out this workflow to typing whole words and phrases, you can save a lot of network requests and confusion.

This approach doesn’t eliminate every unnecessary request; if the user takes more than 300ms to type the next character, we’ll still make an extra query. But that might not be the wrong thing to do. Maybe they paused because they couldn’t think of the spelling of the term they needed. Maybe…they needed a suggestion?

Next, read all the gory details on how we implemented this behavior in React/Redux and GraphQL!

Conversation
  • Anders Baumann says:

    Have a look at rxjs throttle. It provides the desired functionality.

    • Rachael McQuater says:

      Hi Anders, thanks for calling that out! We actually ended up using a similar function from lodash for this. You can find details in the follow-up technical solution post mentioned at the end of this post!
