Why GraphQL Connection types don’t work well with Apollo cache

Neele Barthel · Jun 4

Spoiler alert: this article won't provide a perfect solution, because there really isn't one (yet).

It will outline the risks and problems you might run into trying to keep Apollo cache in sync with what is on the server when querying and mutating GraphQL Connection types.

To completely understand what this article is about, you should be familiar with GraphQL Connections and Apollo in general.

Imagine a user dashboard where users can add, edit, and delete posters.

Let’s assume we have a GraphQL Connection called posters.

The connection allows us to use arguments such as first, last, before and after to query only a selection of the posters.

Especially for implementing pagination this can be very helpful.

However, it also comes with some caveats when trying to keep Apollo cache in sync with what is on the server.
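Before looking at those caveats, here is roughly what such a query might look like. This is a sketch: the operation name and the fields on node are assumptions, not taken from the actual schema.

```graphql
query Posters {
  posters(first: 50) {
    edges {
      cursor
      node {
        id
        name
      }
    }
    pageInfo {
      hasNextPage
      hasPreviousPage
    }
  }
}
```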

Querying this, Apollo will save the list of posters to the cache. The cache key of the posters object will include the arguments we passed to the query; in this case, posters({"first": 50}).

Updating an item

Since the mutation returns the updated poster object which was previously fetched by our initial posters query, the Apollo cache is able to update the previously fetched object saved in the cache.

This is because each poster has a unique cache ID, consisting of the poster's id and the object's __typename.
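As a rough sketch, Apollo's default normalization derives a cache ID along these lines (mirroring the behavior of defaultDataIdFromObject):

```javascript
// Sketch of how Apollo's default cache normalization derives an ID:
// the object's __typename concatenated with its id.
function dataIdFromObject(object) {
  return object.id != null ? `${object.__typename}:${object.id}` : null;
}

console.log(dataIdFromObject({ __typename: "Poster", id: "42" })); // "Poster:42"
```

Because every Poster normalizes to the same ID, writing the mutation result updates every query result that references it.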

As a result, all references to the just-updated object will be updated automatically. ✨

Creating an item

When creating a poster, it is obvious that the new poster is an addition to the existing posters.

Therefore, Apollo will save the new poster object itself to the cache.

However, it won't know that we want the new poster object to now be part of the posters({"first": 50}) object we queried earlier.

As a result, the list of posters we fetched earlier won't be updated automatically, and our cache will be out of sync with the data on the server.

To stay in sync between the server and our local Apollo cache, we can try a few different approaches.

1. Modify the possible payload of the mutation

Instead of only returning the recently created poster object in the mutation's payload, we could also return all the posters we fetched earlier and update what is stored in posters({"first": 50}).

Our mutation could be a posterCreate mutation with an additional payload. As you might be able to tell, this comes with the caveat of having to define in the mutation's payload the exact same arguments that we defined in the actual query.
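A sketch of what such a mutation could look like; all names here (posterCreate, PosterCreateInput) are assumptions rather than the article's actual schema:

```graphql
mutation PosterCreate($input: PosterCreateInput!, $first: Int) {
  posterCreate(input: $input) {
    poster {
      id
      name
    }
    # The payload re-exposes the connection, so the caller must pass
    # the same arguments (first: 50, etc.) that the original query used.
    posters(first: $first) {
      edges {
        cursor
        node {
          id
          name
        }
      }
      pageInfo {
        hasNextPage
        hasPreviousPage
      }
    }
  }
}
```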

In our case, this means that the mutation expects a whole lot of arguments that are not actually meant for the mutation itself but for its payload.

2. Update the cache manually

Updating the cache manually seems like a great idea at first.

We basically read our query from the cache, add the new element we got in the mutation's payload, and finally write it back to the cache.
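As an illustration, here is that flow expressed over plain objects — a sketch of what an Apollo update callback would do with cache.readQuery / cache.writeQuery; the function and field names are illustrative:

```javascript
// Simulates the readQuery -> modify -> writeQuery flow on plain data:
// append the new poster as an extra edge of the cached connection.
function addPosterToCachedResult(cachedResult, newPoster) {
  return {
    ...cachedResult,
    posters: {
      ...cachedResult.posters,
      edges: [
        ...cachedResult.posters.edges,
        // We never received a cursor for this node from the server,
        // so we are forced to make one up.
        { cursor: "made-up-cursor", node: newPoster },
      ],
    },
  };
}
```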

YAY, problem solved! Unfortunately, it is not as easy as the code example above suggests.

Not only do we need to manually update the cursor field on each added node with a value we don’t know, we’d also have to manually update hasNextPage and hasPreviousPage according to the page size.

This is because we only ever get back the poster object, but not in the context of the Connection.

Here are some examples that make it clear why local cache updates are often troublesome and insufficient when querying GraphQL Connection types.

What if the initial query already fetched 50 posters and saved them to the cache? The result will still be saved to the cache; however, we'll now have 51 items stored for the posters object that we queried with the first: 50 argument.

What if the initial query sorts the posters descending by name by default, and we want to add a poster named Apple? We'll end up with sorting inconsistencies, unless we loop over the existing results to figure out where the new poster should go before inserting it.
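That lookup could be sketched like this, assuming a descending sort on node.name:

```javascript
// Find the index at which a new edge keeps the list sorted descending
// by node.name, instead of blindly appending it at the end.
function sortedInsertIndex(edges, newNode) {
  const i = edges.findIndex((edge) => edge.node.name < newNode.name);
  return i === -1 ? edges.length : i;
}
```

Even with the right index, the surrounding cursor and pageInfo problems remain.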

What if our posters actually used pagination, our first query gave us back 49 items, and we then add 2 items? We'll have trouble handling pagination correctly. Each item needs its own cursor, we would need to check whether we now hold more than 50 items and change the hasNextPage field to true, and we would then need to store only the first of the two new items in the cache so we don't exceed our page size of 50.
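A sketch of that bookkeeping (the page size and field names mirror the example above):

```javascript
// After locally adding edges, drop everything beyond the page size and
// record that a further page now exists in our local cache.
function enforcePageSize(connection, pageSize) {
  if (connection.edges.length <= pageSize) return connection;
  return {
    ...connection,
    edges: connection.edges.slice(0, pageSize),
    pageInfo: { ...connection.pageInfo, hasNextPage: true },
  };
}
```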

Now, as soon as we try to query the –now existing– second page, we will realise that updating the cache manually isn’t going to work in all cases.

Since we never got the 50th poster back from the server, and we only added this item to the cache ourselves with a made-up cursor, the server won't be able to fetch our second page.

This will return an error because the cursor we saved locally does not exist on the server.

3. Refetch the initial query

When executing the mutation, Apollo allows you to use the refetchQueries prop on the Mutation component.

This will guarantee that, after the mutation has completed, we get the updated data from the server with the context of the updated Connection type.

This means that our previously created poster comes back from the server with correct cursor and pageInfo for the Connection.

However, refetching queries comes at the cost of an additional round trip and might cause your UI to fall back into some kind of loading state until the second HTTP request resolves.
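For illustration, the option might be wired up like this — a sketch, where the query text and variables are assumptions:

```javascript
// Sketch of the refetchQueries option handed to client.mutate (or the
// <Mutation refetchQueries={...}> prop); the query string is illustrative.
const POSTERS_QUERY = `
  query Posters($first: Int) {
    posters(first: $first) {
      edges { cursor node { id name } }
      pageInfo { hasNextPage hasPreviousPage }
    }
  }
`;

const mutateOptions = {
  // After the mutation resolves, Apollo re-runs this query against the
  // server, so cursors and pageInfo come back consistent with the server.
  refetchQueries: [{ query: POSTERS_QUERY, variables: { first: 50 } }],
};
```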

Deleting an item

Deleting posters comes with similar edge cases as creating posters does, though I'm not going to dive deeper into this in this post.

The @connection directive

Though Apollo has a solution for normalising the cache key, it is not of much help for most of our use cases.
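For reference, the directive is attached to the connection field in the query. With a sketch like the following, the results are stored in the cache under the stable key posters instead of posters({"first": 50}):

```graphql
query Posters {
  posters(first: 50) @connection(key: "posters") {
    edges {
      cursor
      node {
        id
        name
      }
    }
  }
}
```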

Let’s look at the previously mentioned possible solutions again with having the @connection directive in mind.

1. Modifying the possible payload of the mutation

Since we will always need to provide a first argument for our posters query, the directive would not spare us from passing the query arguments to the mutation if we queried for the posters payload.

2. Updating the cache manually

Using cache updates, we would still be stuck without correct cursors and would still have to set the pageInfo fields manually.

However, using a @connection directive, we could save our query results to the cache without a semantically incorrect key.

3. Refetching the initial query

Using a @connection directive when refetching the initial query will not directly enhance this option.

However, it is definitely a nice addition when dealing with a resource that will likely be paginated in multiple places with varying arguments for the query.

It will make it easier for you to find specific resources in your cache, because (unless specified otherwise) you can disregard the arguments that are otherwise part of the Apollo cache keys.

Conclusion

Updating the Apollo cache manually can lead to problems when used on GraphQL Connection types.

There isn't a perfect solution for this yet; decide on a case-by-case basis what's the best way to handle it for your use case. For example, there are cases where temporary local cache updates can make sense.

If you want to be 100% sure that you won’t end up with inconsistencies, do a refetch and accept the cost of an additional round trip.

Did I miss something obvious? Let me know and I'll be happy to update this article.

