Let me walk you through a simple user journey. A user visits a website,
abcd.com/homepage, for the first time. It takes some time for the web page to load and show its content. The user waits through this loading time and then proceeds with their actions.
Some time later, they visit
abcd.com/homepage again. This time the page seems to load much faster than it did on the first try. Did the internet connection magically speed up in the meantime, or is it something else?
It might be caching.
So what is caching?
In very simple terms,
caching refers to the process of storing copies of files or data in a location known as a cache.
A cache is generally a temporary location where this copy of the data is stored and refreshed periodically.
As mentioned above, the
cache is the temporary location where copies of data or files are stored. It is memory reserved for storing temporary files or data from apps, servers, websites, or browsers, so that they load faster when requested.
The primary use case for keeping data in a
cache is easier and faster access to data that has already been retrieved once.
In this sense, API caching is the process of storing frequently used or slow-to-retrieve responses from an API in order to improve performance and reduce the load on the API server. When a client requests the same resource again, instead of fetching it from the server, the response is retrieved from the cache. This can significantly reduce network latency and improve the overall user experience.
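As a rough sketch of this idea, the snippet below caches API responses by URL. Everything here is illustrative: `fetchFromServer` is a hypothetical stand-in for a real (slow) network call, and the `Map` stands in for whatever cache store you use.

```javascript
// Minimal sketch of API response caching, keyed by URL.
const responseCache = new Map();

let serverCalls = 0; // counts how often we actually "hit the network"

function fetchFromServer(url) {
  // Hypothetical stand-in for a slow network request to the API server.
  serverCalls += 1;
  return { url, body: `response for ${url}` };
}

function cachedFetch(url) {
  // Serve from the cache when we have seen this URL before...
  if (responseCache.has(url)) {
    return responseCache.get(url);
  }
  // ...otherwise go to the server and remember the response for next time.
  const response = fetchFromServer(url);
  responseCache.set(url, response);
  return response;
}
```

The first call for a URL reaches the server; every repeat call for the same URL is answered from the cache, which is exactly the latency saving described above.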
Process of Caching
In order to understand the process of caching, you need a basic understanding of what happens behind the scenes once you've entered a URL in your browser.
The following process then occurs:
- The browser does a DNS lookup, querying a DNS server to find the IP address that the web address points to.
- This is followed by an HTTP request to the server, asking for a copy of the website.
- If the server accepts the request, it sends a “200 OK” response back to the client.
- The server then starts sending the website’s files to the browser in small chunks called “data packets”.
- The browser assembles the data packets it receives and renders them as a complete web page for the user.
This may look very simple at first glance, but it isn’t. It is actually a fairly complex and often time-consuming process for both the client (browser) and the server to carry out for every single request.
As mentioned earlier, the 5-step process above takes time, and for websites dealing with large amounts of data and users, like
flipkart.com, this adds up to a significant amount of time given the volume of traffic they handle.
If a user had to go through the full 5-step process for every request, each request would take noticeably longer to process, which in turn would make a significant dent in the user experience, potentially resulting in revenue loss for these companies.
This is where the concept of caching kicks in.
These companies (and many others, in fact a majority of them) use website caching to speed up data communication on a user’s future visits: they save the contents of the visited page in a temporary memory location, called the cache.
Once caching is implemented, the 5-step process changes a little. Let’s see how:
- A user makes a web page request through their browser for an asset from the origin server. This asset is a web address; for this example, let’s use
https://www.flipkart.com.
- Upon this request, the browser, CDN, or server cache first checks whether a copy of the requested web page (
https://www.flipkart.com) already exists.
- Depending on the result of this check, there are two possible scenarios.
Scenario 1: Cache Hit
- Suppose a copy of the requested web page (
https://www.flipkart.com) is stored in a cache. In that case, it results in a cache hit, and the asset contained in the cache is delivered to the user.
Scenario 2: Cache Miss
- If no copy of the requested web page (
https://www.flipkart.com) is found in the cache, it results in a cache miss, and the browser has to make a new request to the origin server.
- Once the web page is cached, the browser keeps delivering the cached version from where it’s stored until the cache is cleared or expires.
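The two scenarios above can be sketched as a single lookup function. This is purely illustrative: the `Map` stands in for the cache, and the template string stands in for actually fetching the page from the origin server.

```javascript
// Sketch of the cache hit / cache miss decision for a page request.
const pageCache = new Map();

function requestPage(url) {
  // Scenario 1: cache hit. The stored copy is delivered directly.
  if (pageCache.has(url)) {
    return { html: pageCache.get(url), scenario: "cache hit" };
  }
  // Scenario 2: cache miss. Fetch from the origin server, then cache the
  // copy so future requests become cache hits.
  const html = `<html>contents of ${url}</html>`; // stand-in for origin fetch
  pageCache.set(url, html);
  return { html, scenario: "cache miss" };
}
```

The first request for a URL is a miss that populates the cache; every subsequent request for the same URL is a hit.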
Where to Cache
Now that we know what caching is, the pertinent question arises:
Where should I cache my website’s contents?
This is an important decision, and not an easy question to answer; the answer varies depending on the website’s use case.
In essence, though, there are two different places where caching can happen:
- Server side caching
- Client side caching
Both have their pros and cons – but the end goal of both is to help make the websites load faster for the end users.
Server Side Caching
As the name suggests, server side caching is when you cache the data in your backend servers. It is the temporary storing of web files and data on the origin server for reuse. This can be done using in-memory caching, which stores the response in memory on the server, or disk caching, which stores the response on disk.
When a user first requests a resource, the website follows the normal process of requesting information about the resource from the origin server.
After serving the response back to the user, the server saves a copy of the web page.
On subsequent visits, the origin server sends back the already-cached web page (if nothing has changed) without reconstructing or regenerating the contents from the database.
This process helps avoid repeatedly making expensive database operations to serve up the same content to many different clients.
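The server-side flow above can be sketched as a small in-memory cache with an expiry time. `queryDatabase` here is a hypothetical stand-in for an expensive database operation, and the 60-second TTL is an arbitrary choice for illustration.

```javascript
// Sketch of server-side, in-memory caching with an expiry (TTL).
const TTL_MS = 60 * 1000; // keep cached pages fresh for 60 seconds
const serverCache = new Map(); // url -> { page, cachedAt }
let dbQueries = 0;

function queryDatabase(url) {
  // Hypothetical stand-in for an expensive DB read + page generation.
  dbQueries += 1;
  return `page built from DB rows for ${url}`;
}

function handleRequest(url, now = Date.now()) {
  const entry = serverCache.get(url);
  // Reuse the cached copy while it is still fresh.
  if (entry && now - entry.cachedAt < TTL_MS) {
    return entry.page;
  }
  // Expired or absent: regenerate from the database and re-cache.
  const page = queryDatabase(url);
  serverCache.set(url, { page, cachedAt: now });
  return page;
}
```

Many clients requesting the same page within the TTL window trigger only one database query between them, which is exactly the load reduction described above.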
Server side caching can be done in many ways. Let’s see what those can be.
Database Caching
A database cache supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data. The cache itself can live in a number of areas including your database, application or as a standalone layer.
The three most common types of database caches are the following:
- Database Integrated Caches – Databases like Amazon Aurora offer an integrated cache that is managed within the database engine and has built-in write-through capabilities. When the underlying data changes in a database table, the database updates its cache automatically.
- Local Caches – A local cache stores your frequently used data within your application. This not only speeds up data retrieval but also removes the network traffic associated with retrieving it, making access faster than in other caching architectures.
- Remote Caches – With remote caches, the cached data is stored on dedicated remote servers using a key-value store such as Redis or Memcached.
The advantage of database caching is that it can be used to cache responses for multiple clients, reduce the load on the server, and provide high availability for users. It also has the advantage that it can be easily scaled horizontally as traffic increases. However, it also has the disadvantage that it can be more complex to set up and maintain than other caching methods.
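The write-through behaviour mentioned for database-integrated caches can be sketched like this. Both stores are plain in-memory maps here, purely for illustration; in a real system the "database" would be the primary data store.

```javascript
// Sketch of a write-through cache: every write goes to both the
// "database" and the cache, so reads never see stale cached data.
const db = new Map();    // stand-in for the primary database
const cache = new Map(); // stand-in for the database's integrated cache

function write(key, value) {
  db.set(key, value);    // write to the primary store...
  cache.set(key, value); // ...and update the cache in the same operation
}

function read(key) {
  // Reads are served from the cache; fall back to the DB on a miss.
  if (cache.has(key)) return cache.get(key);
  const value = db.get(key);
  cache.set(key, value);
  return value;
}
```

Because the cache is updated at write time, a read immediately after an update returns the new value rather than a stale copy.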
Content Delivery Caching
Content Delivery Networks, or CDNs as they are popularly called, comprise groups of servers geographically distributed around the world to provide quicker content delivery to web visitors.
CDNs can do this by caching the content of the website on their network.
Once the content of your website passes through your registered CDN provider for the first time, the servers store a copy of the website.
When a user revisits the website, the CDN locates the nearest servers to the user’s location and delivers the stored copy of the website.
A good example of a CDN is
Cloudflare, which powers a lot of internet traffic today. Sometimes, when a cached copy of the website is unavailable and the origin website is down, Cloudflare shows an error page saying exactly that.
API Gateway Caching
Another way to cache on the backend is to use your API gateway (if you’re using one) for caching needs. API gateway caching involves storing the response in the API gateway’s cache, a layer between the client and the API server. The advantage of API gateway caching is that it can reduce the load on the API server, improve the overall performance of the application, and provide high availability for users. It also has the advantage that it can be easily configured and managed by developers.
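A minimal sketch of the gateway-as-cache idea, with `upstreamApi` as a hypothetical stand-in for the real API server behind the gateway. Caching only GET requests is a deliberate simplification: reads are safe to reuse, while writes must always pass through.

```javascript
// Sketch of caching at an API gateway layer between clients and the API.
const gatewayCache = new Map();
let upstreamHits = 0;

function upstreamApi(method, path) {
  // Hypothetical stand-in for the upstream API server.
  upstreamHits += 1;
  return { status: 200, body: `data for ${method} ${path}` };
}

function gateway(method, path) {
  const key = `${method} ${path}`;
  // Only GETs are safe to answer from the cache; writes always pass through.
  if (method === "GET" && gatewayCache.has(key)) {
    return gatewayCache.get(key);
  }
  const response = upstreamApi(method, path);
  if (method === "GET") gatewayCache.set(key, response);
  return response;
}
```

Repeated GETs for the same path are absorbed by the gateway, so the API server only sees the first one.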
Client Side Caching
As the name itself suggests, client side caching involves caching website contents on the client’s local machine. Client side caching, or browser caching as it is commonly called, is the process where a copy of a web page is stored in the browser’s memory instead of in a cache on the server.
Most modern browsers (Chrome, Edge, Safari, Firefox, et al.) maintain a browser cache on any device they are installed on. This allows them to store HTML pages, images, CSS files, and almost all other types of multimedia files from a website.
Since client side caching relies on web browsers, which reserve only limited memory for caching, it can hold only a limited amount of data.
Also, since this cache lives on the client’s machine, whenever the user clears it (manually or otherwise), the cached data is removed and the browser behaves as if it is visiting the website for the very first time. So, in a way, it depends on the user’s choice whether the data is retained or cleared.
The advantage of client-side caching is that it reduces the number of requests made to the server, improving the overall performance of the application.
There are several ways to cache API responses, each with their own pros and cons.
Browser Request Caching
This is the most common type of caching in use today.
It’s built into the HTTP protocol standard and lets the webmaster or developer control how often the browser requests a new copy of files from the server.
When it comes to browser request caching, most of the control happens through HTTP headers.
For applications with UI and API interactions, many API responses carry headers that pertain to the caching mechanism.
These headers can be used to control and maintain the caching behaviour of the APIs:
- Expires – This header defines the expiration time for the cached content in the browser cache.
- Pragma (no-cache) – This legacy header instructs the browser that the content in the response should never be cached.
- ETag – The entity tag is a hash value that identifies which version of a cached web page or file is shown to the user upon request.
- If-Modified-Since – This request header asks the server to send data only if the currently cached data has been modified since the specified date.
- Last-Modified – This response header identifies when the cached data was most recently changed; it is what the If-Modified-Since header is compared against.
- Cache-Control – The Cache-Control header is one of the most important headers used to control or modify browser caching behaviour.
I wrote about the Cache-Control header some time ago. You can read about that here.
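The revalidation headers above can be sketched as follows. `computeEtag` is a deliberately simplified stand-in for a real content hash (servers typically use a strong digest of the response body or file metadata).

```javascript
// Sketch of how ETag and If-None-Match drive cache revalidation.
function computeEtag(body) {
  // Toy rolling hash, standing in for a real content digest.
  let hash = 0;
  for (const ch of body) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return `"${hash.toString(16)}"`;
}

function serve(body, requestHeaders = {}) {
  const etag = computeEtag(body);
  // If the client's cached copy matches the current version, answer
  // 304 Not Modified with no body: the browser reuses its cached copy.
  if (requestHeaders["If-None-Match"] === etag) {
    return { status: 304, headers: { ETag: etag } };
  }
  // Otherwise send the full response along with the caching headers.
  return {
    status: 200,
    headers: { ETag: etag, "Cache-Control": "max-age=60" },
    body,
  };
}
```

A 304 response saves the cost of re-sending the body: the client presents the ETag it cached earlier, and the server only confirms whether that version is still current.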
AJAX requests use the XMLHttpRequest object to fetch data in XML, or any other format, and display it in real time without the need to make another full request to the server.
A service worker is also one of the most important aspects of client side caching. Service workers have complete control over client side caches and allow for complex cache logic outside the limitations of the browser cache. The performance improvements can be astounding, with Google itself reporting a 39% reduction in First Contentful Paint on its properties.
Order of Caching
At a high level, a web browser follows the caching order below when it requests a resource:
- Service Worker Cache
- Browser Cache
- Server side cache.
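This lookup order can be sketched as a simple fallthrough chain. The three maps below are pre-seeded purely for illustration; in reality each layer is a separate system (service worker Cache API, browser HTTP cache, server-side cache).

```javascript
// Sketch of the caching lookup order: each layer is consulted in turn,
// and the first one holding the resource answers. The origin server is
// the last resort when every cache misses.
const serviceWorkerCache = new Map([["/app.js", "sw copy"]]);
const browserCache = new Map([["/logo.png", "browser copy"]]);
const serverSideCache = new Map([["/home", "server copy"]]);

function resolve(url) {
  const layers = [
    ["service worker cache", serviceWorkerCache],
    ["browser cache", browserCache],
    ["server-side cache", serverSideCache],
  ];
  for (const [name, cache] of layers) {
    if (cache.has(url)) return { source: name, body: cache.get(url) };
  }
  // Every cache missed: fetch a fresh copy from the origin.
  return { source: "origin server", body: `fresh copy of ${url}` };
}
```

A resource present in an earlier layer never reaches the later ones, which is why service worker caches can shield both the browser cache and the server entirely.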
Caching is a huge and very important topic, and a single article is not enough to cover it fully. What I’ve tried to share in this article is my understanding of where a cache can live. There is a whole other discussion about which approach to take for caching data, and that is outside the scope of this article. I hope this helps someone understand the concept and how caching is actually done at the various storage layers.