How to Build Lightweight Caching in a React App

Daniel Pericich
8 min read · Aug 18, 2022


Photo by Ameer Basheer on Unsplash

Web development has greatly evolved over the last 30 years. Speed and ease of use are always relative, but the jump from modem connections to geolocation-driven CDNs shows how drastically our capabilities have grown. There are many ways to speed up page load times and request/response cycles, including lazy loading, fragmenting requests into sequential calls, caching and client-side logic, among many others.

As static HTML pages evolved with JavaScript, jQuery and React, it became clear that we could provide a quicker and cleaner experience by loading data from an initial HTTP call and then manipulating it on the client, rather than making a server call for every user interaction. There are drawbacks to this approach and cases where it definitely isn’t ideal, but for the most part, client-side updates are a solid approach.

I was recently working on a project where it made sense to architect the front end in a pattern that I like to call lightweight front end caching. In this article I’d like to examine what I mean by this term, explore how to implement it, and finally go over the criteria that would lead you to use or avoid this pattern.

What is Caching?

Caching is a method of storing data in temporary, more accessible locations to decrease response times and server load. To get a better idea of what this means, let’s start with a real-world physical parallel.

Say that your family has a copy of the local phonebook sitting in your kitchen. You like to order takeout from the living room, but you have to walk to the kitchen, open the phonebook, find the number and enter it into your phone every time you want to make a call. For an extra wrinkle, your sister also likes to order food, but doesn’t like pizza. Sometimes when you go to get a phone number she is already there, and you have to wait for her to finish before you can use the phonebook.

Figure 1. Phonebook takeout figure.

There has to be a better way to do this, right? The answer is yes, and this is where caching comes into play. Two issues arise from our phonebook example: first, it takes time to physically go to the phonebook and search for the number; second, you may have to wait your turn while your sister is using it.

In web development we see the same situation when clients request data from a server. For every HTTP request, we have to wait for the request to travel to the server, be processed, and for a response to be sent back to our client. We may have to wait even longer if many clients are trying to get responses from the server concurrently.

Figure 2. Client server interaction diagram.

If we know there is a single page of the phonebook we always go to, and we know the number won’t change very often, why don’t we just keep a copy of it closer to us?

Figure 3. Caching example for phonebook page.

From this diagram we can see that the new, more local copy of the pizza page solves both of our issues. Not only do we remove the trip from the living room to the kitchen to look up the number (our HTTP traffic and server processing), but we no longer have to worry about whether the phonebook is currently in use.

This is caching: storing copies of a resource locally to decrease both the time to reach the resource and the load on the server handling requests. With this basic understanding of caching between a client and server, let’s now look at how to implement a lightweight caching system.

What do I mean by Lightweight?

We’ve gone over normal caching, but what do I mean by “lightweight” caching? Usually when we cache resources, we still store the items on a server to be returned when requested. This can come in the form of a CDN, which stores static assets on a server geographically closer to the user, both to decrease the time packets spend traveling between server and client and to decrease processing by having a finished static item ready to send.

Our other common source of caching is in-memory stores like Redis, which use key-value pairs to serve data quickly without repeating expensive processing on the main server. Both are great for serving larger, static assets to our apps, but both require extra setup and maintenance.

If we are already building a React or Next.js app, wouldn’t it be nice to have a form of caching without introducing new overhead? This is where lightweight caching comes in. A key requirement for any caching is that the data is static, or at least rarely changing, so that the resources we store don’t become outdated soon after we store them.

Lightweight caching consists of calling the server a single time on page load, then storing the response separately from the mutable data used to produce the page. We create a new immutable state variable holding the initial data to act as a cached resource for our app, and reference it when creating a second, mutable state variable that drives the logic of our client-side app. This architecture removes the need to reach out to a server, CDN or caching database whenever we need a fresh copy of our data.
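As a minimal sketch of the idea, assuming illustrative names (`makeRecordCache` and `fetchFn` are not from the article's code), the pattern boils down to fetching once and handing out mutable copies of a frozen original:

```javascript
// Minimal sketch of lightweight caching; names here are illustrative.
function makeRecordCache(fetchFn) {
  let originalRecords = null; // the immutable "cached" copy, set once

  return {
    load() {
      if (originalRecords === null) {
        // First call: hit the data source exactly once per page load.
        originalRecords = Object.freeze(fetchFn());
      }
      // Hand back a shallow mutable copy to drive the UI.
      return [...originalRecords];
    },
  };
}
```

In a React component the equivalent would be running the fetch once in a mount effect and keeping the frozen original alongside the mutable display copy in state.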

Implementing our Lightweight React Caching

This sounds great, but how does it work in real life? Say we have an index table that displays local pizza restaurants. The table shows each restaurant’s name, location, price and open and close times:

Figure 4. Pizza restaurant table.

From talking with business owners, we’ve identified that the only filter option our clients care about is the relative price. Because of this, we have a select element that lets us pick the price point, from inexpensive (“$”) to most expensive (“$$$$”).

A common way to handle something like this would be to add the price as a query param on calls back to the server. Every time the user wants to filter by a different price tier, they would select a new option, our client would request a new set of filtered records from the server, wait for the records to be gathered, receive them in a response and render the results. That seems like a lot of extra work.
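For comparison, the server-driven alternative rebuilds the request URL on every selection; a sketch of that approach (the endpoint path is hypothetical):

```javascript
// Sketch of the server-round-trip alternative: every filter change
// becomes a new request URL. The /api/restaurants path is made up.
function buildFilterUrl(base, price) {
  const url = new URL(base);
  if (price) {
    url.searchParams.set("price", price);
  }
  return url.toString();
}

// Each selection change would then trigger fetch(buildFilterUrl(...))
// and a wait for the server's filtered response before re-rendering.
```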

To implement this in our React app we need a few setup steps. First, we need to set up the state variables that support this pattern. This is what our state will look like:

Figure 5. Setting our table’s component state.

Here we see three pieces of state. The originalRecords will serve as our immutable “cached” records. On initial page load we set both originalRecords and displayRecords from our HTTP response:

Figure 6. Component’s onload state call and setting.

At this point our records match, though originalRecords is immutable while displayRecords is mutable and will be passed to our PizzaTable component to create the table. The other piece of state is lastHTTPCall, a Date object we use to determine whether the records are stale and need to be refetched.
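Since the original code appears only as screenshots, here is a plain-JavaScript sketch of that state shape and the initial load. In the React component these would be useState hooks set from the HTTP response; plain objects are used here for clarity:

```javascript
// Sketch of the component's three pieces of state.
function createInitialState() {
  return {
    originalRecords: [], // immutable "cached" copy of the server response
    displayRecords: [],  // mutable copy that drives the table
    lastHTTPCall: null,  // Date of the last fetch, for staleness checks
  };
}

// On initial page load, set both record lists from the HTTP response
// and stamp the time of the call.
function applyResponse(state, records, now = new Date()) {
  return {
    originalRecords: Object.freeze(records.map((r) => ({ ...r }))),
    displayRecords: records.map((r) => ({ ...r })),
    lastHTTPCall: now,
  };
}
```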

With the state configured, we can handle rendering the data and filtering by price. You can reference the code in the repo here (Github Repo Here) to see how I render the table component. What’s more important is the filter operation. Again, this lightweight caching only works with single-field manipulation; if we allowed filtering on many fields at once, we would get unpredictable records as filters were applied and removed.

I have placed an onChange event handler on our select element with the following method:

Figure 7. Filter function for changing display records by requested prices.

In this filterByPrice method, we accept an event object that tells us which price point the user wants to see. Next, we check whether the data is fresh and make a call for updated records if it is stale. If the user clears the filter, we reset displayRecords to the originalRecords; otherwise we filter originalRecords and set new displayRecords.
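A plain-JavaScript version of that handler logic might look like the following. The five-minute freshness window and the `isStale` helper are assumptions for the sketch, not from the article's repo:

```javascript
const STALE_AFTER_MS = 5 * 60 * 1000; // assumed freshness window

// True when the cached records are old enough to warrant a refetch.
function isStale(lastHTTPCall, now = new Date()) {
  return lastHTTPCall === null || now - lastHTTPCall > STALE_AFTER_MS;
}

// Core of filterByPrice: an empty selection resets to the cached
// originals; otherwise filter the immutable copy by price tier.
function filterByPrice(originalRecords, price) {
  if (!price) {
    return [...originalRecords];
  }
  return originalRecords.filter((r) => r.price === price);
}
```

In the component, the onChange handler would first check isStale (refetching if needed), then call something like setDisplayRecords(filterByPrice(originalRecords, event.target.value)).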

Here the immutable originalRecords copy comes in handy. Instead of making a call to the server every time we want to filter our records, we can access the cached state value already on the page. This saves both time and server load, which matters for heavily accessed pages.

For certain use cases, lightweight caching is a great way to let features function with only one server call.

Why We Shouldn’t Use this Approach

As always, no technology is a one-size-fits-all solution. There are certain criteria you must examine when determining whether lightweight caching will work for your app. You should not use this pattern if:

  1. The data changes often, especially within seconds of your initial call.
  2. The data set is hundreds of records or more, large enough to strain the client’s memory.
  3. You have complex operations that would be handled and returned more quickly by SQL.
  4. You have many operations acting on the data, which leads to extra code to build and maintain (this works well with a single filter, not composite operations).
  5. The data needs to be exact for your functions (e.g. checkout prices that may change frequently).

Again, this is lightweight caching and should not be used for more strenuous purposes.

Conclusion

I enjoyed figuring out a way to handle calls for my simple application. While not everything needs to be optimized, and over-engineering is certainly a real issue, it’s fun to push the boundaries of what we expect in software engineering design. I hope that even if you never use this pattern, this was a good opportunity to ask how you could do more with less in your site designs and solutions.

