Scrollable Grid with Just-in-Time Data Loading – Part 4: Fetching Data with Apollo Client and GraphQL

Today I’ll explain how to use Apollo Client to fetch data in batches from a GraphQL endpoint and hook that data up to Infinite Loader. By the end of this post, you’ll have an (almost!) full-stack, just-in-time loading list.

How did we get here? In my last few posts, I’ve explained how to:

  1. Fetch data in batches with React Window’s Infinite Loader
  2. Store and restore user scroll position with React Window
  3. Render the data in windowed chunks with React Window and React Table

To defer hooking up the backend, the demo for those posts “fetches” data from a client-side random data generator. In my next post, I’ll talk about actually fetching the data from the database.

Setting Up the GraphQL Endpoint on the Server

To paginate data on the front end, we will need a server that supports pagination. For the purposes of this article, I’ll assume we already have a GraphQL API up and running. All we have to do is create an endpoint that supports incrementally fetching rows.

Cursor-Based Pagination

GraphQL docs recommend cursor-based pagination. A cursor is a unique identifier for a specific record. In cursor-based pagination, we can ask our GraphQL server for a page of “count” rows starting after a given cursor. In the response, we should get:

  • “Count” number of rows
  • An end cursor pointing to the last row returned
  • Some additional information about this chunk of rows (e.g., whether there are more rows after it)

Our data will be sorted by the cursor, so the cursor must correspond to a unique, sequential data point (e.g., an auto-incrementing ID or a timestamp). For now, we’ll use a randomly generated ID that maps to an incrementing index.
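To make this concrete, here’s a rough sketch of the kind of request and response we’re aiming for. The field names preview the schema we’ll define next, and the IDs are purely illustrative:


query {
  getRowsConnection(count: 2, startCursor: "row-37") {
    rows { id firstName lastName }
    pageInfo { endCursor hasNextRow }
    totalCount
  }
}

# Illustrative response: the two rows after "row-37", an end cursor pointing
# at the last row returned, and a flag telling us there is more data.
# {
#   "getRowsConnection": {
#     "rows": [{ "id": "row-38", ... }, { "id": "row-39", ... }],
#     "pageInfo": { "endCursor": "row-39", "hasNextRow": true },
#     "totalCount": 100
#   }
# }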

Sketching Out the GraphQL Schema

What would the GraphQL schema for cursor-based pagination look like? The top-level query would look like this:


getRowsConnection(count: Int!, startCursor: ID): RowsConnection!

Given a start cursor and a count, the server should return that many rows. We might also want some additional information, such as the total count of rows and whether we’ve reached the end of the list.

Instead of returning “count” number of rows directly, let’s return a RowsConnection type with “count” rows, the total row count, and information about the chunk of rows.


type RowsConnection {
  rows: [Row!]!
  pageInfo: PageInfo!
  totalCount: Int!
}

And since we’re returning rows about employees, each row will contain employee information.


type Row {
  id: ID!
  index: Int!
  firstName: String!
  lastName: String!
  suffix: String!
  job: String!
}

Right now, the only thing we need to know about the page as a whole is what chunk of data it got back and whether we’ve reached the end of the list. The PageInfo type will return a pointer to the start of our page (startCursor), a pointer to the end of the page (endCursor), and whether there is more data after the current page (hasNextRow).


type PageInfo {
  hasNextRow: Boolean!
  startCursor: ID!
  endCursor: ID!
}
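
These types all hang off the getRowsConnection field from earlier, which lives on the top-level Query type (a sketch; your server’s Query type may already have other fields):


type Query {
  getRowsConnection(count: Int!, startCursor: ID): RowsConnection!
}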

Implementing the Resolver

Next, let’s implement the resolver that responds to the getRowsConnection query and returns a result of type RowsConnection.

For now, the resolver will just use fake data. In my next post, we’ll be hooking up the resolver to actually fetch data from a database.

I’ll use the faker library to generate the fake data. I’m providing faker with a seed to ensure consistent results between queries.


import faker from "faker";

// Seed faker so every query sees the same 100 fake employee rows.
faker.seed(123);
const fakedOutRows = new Array(100).fill(true).map((_, i) => ({
  id: faker.random.uuid(),
  firstName: faker.name.firstName(),
  lastName: faker.name.lastName(),
  suffix: faker.name.suffix(),
  job: faker.name.jobDescriptor(),
  index: i,
}));

For now, the cursor can just be the ID on the employee row.
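
Since the cursor is just a row ID, the resolver also needs a way to get from a cursor back to the row (and its index). A minimal sketch is a lookup map keyed by ID; this is the rowIdToRow map used in the resolver below:


// Map each row's ID to the row itself so a cursor can be resolved to an index.
const rowIdToRow = Object.fromEntries(fakedOutRows.map((row) => [row.id, row]));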

We’ll use the start cursor argument to index into the rows and get the requested page of data. If this is the first time the client is querying for data, there won’t be a start cursor yet. In that case, we’ll just take a slice of length “count” from the start of the randomly generated rows.

In all subsequent queries, there will be a start cursor argument. In that case, we’ll look up the index corresponding to the start cursor and take a slice of length “count” starting just after that index (the cursor points at the last row the client already has). The end of the slice is the start index plus “count”. Either way, we now have up to “count” rows to return to the user.

The end cursor will be the ID of the last row being returned. If there is no more data after the current slice, the end cursor will just be the ID of the last available row.

This page has rows after it only if the slice ends before the end of the data.


const getRowsConnection: QueryResolvers.GetRowsConnectionResolver = async (
  parent,
  args,
  ctx
) => {
  // Start just after the row the cursor points to; with no cursor, start at the beginning.
  const startIndex = args.startCursor
    ? rowIdToRow[args.startCursor].index + 1
    : 0;
  const end = startIndex + args.count;
  // Index of the last row actually included in this page.
  const endIndex = Math.min(end, fakedOutRows.length) - 1;
  const rows = fakedOutRows.slice(startIndex, end);
  return {
    rows,
    pageInfo: {
      // There are more rows only if this page ends before the end of the data.
      hasNextRow: end < fakedOutRows.length,
      startCursor: fakedOutRows[startIndex].id,
      endCursor: fakedOutRows[endIndex].id,
    },
    totalCount: fakedOutRows.length,
  };
};
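
If you don’t already have the server scaffolding in place, a minimal sketch of wiring this resolver into apollo-server might look like the following. The schema.graphql path and the surrounding setup are assumptions; in the real project the server and the generated resolver types already exist:


import fs from "fs";
import path from "path";
import { ApolloServer } from "apollo-server";

// Load the SDL sketched above (assumed to live in schema.graphql next to this module).
const typeDefs = fs.readFileSync(path.join(__dirname, "schema.graphql"), "utf8");

const server = new ApolloServer({
  typeDefs,
  resolvers: {
    Query: { getRowsConnection },
  },
});

server.listen().then(({ url }) => {
  console.log(`GraphQL server ready at ${url}`);
});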

Just-in-Time Data Fetching with Apollo Client

Now that we’ve enabled server-side pagination, the next step is to query for and consume this paginated data on the client.

For starters, let’s make the GraphQL query that asks for “count” rows of data starting from startCursor.


query GetRowsConnection($count: Int!, $startCursor: ID) {
  getRowsConnection(count: $count, startCursor: $startCursor) {
    rows {
      id
      firstName
      lastName
      suffix
      job
      index
    }
    pageInfo {
      hasNextRow
      startCursor
      endCursor
    }
    totalCount
  }
}

We can invoke this query whenever the user needs to view a chunk of data that has not yet been loaded. By creating a hook, we can abstract the logic of what rows have been loaded and how to load more rows. The hook will be responsible for keeping track of the row state (which rows are loading, if we are currently loading more rows) and providing a callback to load more rows.

The hook’s loadMore callback will use Apollo to fetch data from our GraphQL endpoint. Apollo’s useQuery hook returns a result object, including data, the loading state, and the fetchMore function. The fetchMore function lets us provide the query and variables for a new request and define how to merge the new results with our previous results. We can use fetchMore to combine our old chunk of rows with a newly fetched chunk of rows.


export function useRows(args: { count: number }): UseRowsResponse {
  const { data, loading, fetchMore } = useQuery(GetRowsConnection.Document, {
    variables: {
      count: args.count,
    } as GetRowsConnection.Variables,
  });

  // While the first page is still in flight, there are no rows to return yet.
  if (loading && !data?.getRowsConnection) {
    return {
      loading: true,
      rows: [],
      loadMore: undefined,
      hasNextRow: false,
      totalCount: 0,
    };
  }

  // loadMore accepts the stop index Infinite Loader asks for, but for now it
  // simply fetches the next "count" rows after the current end cursor.
  const loadMore = async (stopIndex: number) => {
    await fetchMore({
      query: GetRowsConnection.Document,
      variables: {
        startCursor: data.getRowsConnection.pageInfo.endCursor,
        count: args.count,
      },
      updateQuery: (prev, { fetchMoreResult }) => {
        if (!fetchMoreResult) {
          return prev;
        }
        // Append the newly fetched rows and adopt the new page info.
        return immer.produce(prev, (draft: GetRowsConnection.Query) => {
          draft.getRowsConnection.rows.push(
            ...fetchMoreResult.getRowsConnection.rows
          );
          draft.getRowsConnection.pageInfo =
            fetchMoreResult.getRowsConnection.pageInfo;
        });
      },
    });
  };

  return {
    loading: false,
    rows: data.getRowsConnection.rows,
    loadMore,
    hasNextRow: data.getRowsConnection.pageInfo.hasNextRow,
    totalCount: data.getRowsConnection.totalCount,
  };
}

I’m using immer here to more easily produce an updated copy of the previous query result.

This hook returns the loadMore callback and the current state (what rows are loaded, whether or not there is more data, etc.). Infinite Loader can use the loadMore function to load more items, and the Grid component can use the row data when rendering grid content.
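
For reference, here’s one possible shape for the UseRowsResponse type, inferred from what the hook returns above (the row type is pulled from the generated query types, so the exact names depend on your codegen setup):


interface UseRowsResponse {
  loading: boolean;
  rows: GetRowsConnection.Query["getRowsConnection"]["rows"];
  // Undefined while the first page is still loading.
  loadMore: ((stopIndex: number) => Promise<void>) | undefined;
  hasNextRow: boolean;
  totalCount: number;
}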

Hooking the Data Up to Infinite Loader

As mentioned in the first post of this series, Infinite Loader is a higher-order component that lets you fetch row data in chunks. It’s responsible for the just-in-time data fetching.

The callback and state returned by the useRows hook can be used to set these Infinite Loader props:

  • isItemLoaded – A function that, given an index, returns whether or not the item at that index is loaded. An index is loaded if there exists a row at that index.
  • loadMoreItems – A callback that gets invoked when more data needs to be loaded. It takes a start and stop index and fetches the rows up to the stop index. Under the hood, it can just call the loadMore function returned by the useRows hook and request to load rows up until the stop index. These new rows will then be fetched from the server and combined with the existing rows. (See the wiring sketch after this list.)

  const loadMoreItems = React.useCallback(
    async (startIndex: number, stopIndex: number) => {
      if (!loading && loadMore) {
        await loadMore(stopIndex);
      }
    },
    // Recreate the callback when the loading state or loadMore function changes.
    [loading, loadMore]
  );
  • itemCount – The total number of rows that will be displayed in the list. We can get the row count directly from the useRows hook.
  • children – The render prop’s display component. (For more detail, see the first post in this series.) The display component (in our case, a Grid) will index into the rows returned from the useRows hook when rendering cell content.
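
Putting it all together, the wiring might look roughly like this. This is a sketch, not the series’ actual Grid: a plain FixedSizeList and an inline row renderer stand in for the Grid/Table setup from the earlier posts, and the component name, useRows import path, and count of 50 are illustrative assumptions:


import React from "react";
import InfiniteLoader from "react-window-infinite-loader";
import { FixedSizeList } from "react-window";
import { useRows } from "./useRows"; // hypothetical path to the hook above

export function EmployeeList() {
  const { loading, rows, loadMore, totalCount } = useRows({ count: 50 });

  // A row is loaded if we already have data for that index.
  const isItemLoaded = (index: number) => index < rows.length;

  // Ask the hook for more rows, but only if we're not already loading.
  const loadMoreItems = React.useCallback(
    async (startIndex: number, stopIndex: number) => {
      if (!loading && loadMore) {
        await loadMore(stopIndex);
      }
    },
    [loading, loadMore]
  );

  return (
    <InfiniteLoader
      isItemLoaded={isItemLoaded}
      itemCount={totalCount}
      loadMoreItems={loadMoreItems}
    >
      {({ onItemsRendered, ref }) => (
        <FixedSizeList
          height={400}
          width={600}
          itemCount={totalCount}
          itemSize={35}
          onItemsRendered={onItemsRendered}
          ref={ref}
        >
          {({ index, style }) => (
            <div style={style}>
              {isItemLoaded(index)
                ? `${rows[index].firstName} ${rows[index].lastName}`
                : "Loading…"}
            </div>
          )}
        </FixedSizeList>
      )}
    </InfiniteLoader>
  );
}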

What’s Next?

We’ve finally replaced our randomly generated client-side data with data coming from a GraphQL server. We now have a GraphQL server that supports pagination and can be used to just-in-time load data on the front end.

The final step is to have the GraphQL server query a database instead of randomly generated data. Once that is complete, we will have a full-stack implementation of a snappy scrollable grid with just-in-time data loading.
