Data Caching
Data Caching functions powered by Redis.
Xano provides a data caching service powered by Redis that allows you to temporarily store data in memory for high-performance data retrieval and storage purposes. This is great for storing temporary data that needs to be quickly generated and accessed for a period of time.
Learn more about how to set up caching and best practices
1. For API Queries that take a long time to process

Imagine that your front-end or your users want to generate a complex daily report from tens or even hundreds of thousands of records in your Xano Database. Because complex calculations may be executing across a large data set, the query can take a while to process, longer than you or your users want to wait.

Instead of querying the database directly every time a user wants this information, Xano can store the data in memory. You can store things in the data cache ahead of time using something like a Background Task (also called a Cache Worker), or you can store the result after the first time a user requests the data.

Example Use Case: Your boss wants to generate a daily report on how their million products are doing, and they don't have time to wait around. So you generate the report at the end of each day using a nightly Background Task and use Data Caching to store the report in memory so it's ready for the boss and anyone else who needs access to it.
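To make the pattern concrete, here is a minimal sketch in Python of "check the cache, fall back to the slow query, then cache the result with a TTL." It uses a plain in-memory dictionary to stand in for Xano's Redis-backed cache, and `generate_report` is a hypothetical placeholder for the slow report query, not a real Xano function.

```python
import time

# In-memory stand-in for the data cache: key -> (value, expires_at).
_cache = {}

def cache_set(key, value, ttl_seconds):
    """Store a value along with the time at which it expires."""
    _cache[key] = (value, time.time() + ttl_seconds)

def cache_get(key):
    """Return the cached value, or None if it is missing or expired."""
    entry = _cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:
        del _cache[key]  # expired, so drop it and let the caller regenerate
        return None
    return value

def generate_report():
    # Hypothetical stand-in for the slow query across a large data set.
    return {"total_products": 1_000_000, "status": "ok"}

def get_daily_report():
    # Serve from the cache when possible; otherwise run the slow query
    # once and cache the result for 24 hours (86,400 seconds).
    report = cache_get("daily_report")
    if report is None:
        report = generate_report()
        cache_set("daily_report", report, ttl_seconds=86400)
    return report
```

The first call pays the full cost of `generate_report`; every call after that within the TTL is a fast in-memory lookup, which is exactly the trade a nightly Background Task pre-populating the cache would make on your behalf.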
📕 How-to Guide: Storing an API Response in Memory
2. For Rate Limiting Purposes

Rate limiting is used to limit the consumption of a resource by a user of any given application. As you may know, we rate limit our Explore (Free) Plan so users don't consume too many resources. Many apps use rate limiting to prevent DDoS attacks, to keep servers stable and consistent, or simply for cost control.
The most common way to set this up is the bucketing method: each request to your API is like a drop in a bucket, and the API can be used only until the bucket is filled. Once full, it stops accepting requests until the bucket empties.
Here's how it typically works:
Define the size of the bucket and increment the count every time the API is hit.
Get the current count in the bucket to make sure it hasn't exceeded the max size.
Set the bucket's TTL (Time to Live, a.k.a. expiration time).
Data Caching has a unique feature that lets you set an expiration time (TTL), so the bucket count expires automatically.
Example Use Case: Imagine you were building an app like Twitter and wanted to limit each user to posting 10 tweets per 20 seconds so you don't overload the server. You would set up your Post Tweet API endpoint to add a "drop" to the Data Cache (the bucket) every time the endpoint is hit, up to 10 drops, and have that bucket empty (expire) every 20 seconds.
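The bucket logic above can be sketched in a few lines of Python. This is a conceptual in-memory model of the method, not Xano's actual cache functions; the names `allow_request`, `MAX_DROPS`, and `WINDOW` are all illustrative, and the `now` parameter exists only to make the time window easy to demonstrate.

```python
import time

# Per-user buckets: user_id -> (drop_count, window_expires_at).
_buckets = {}

MAX_DROPS = 10  # bucket size: 10 tweets...
WINDOW = 20     # ...per 20-second window (the bucket's TTL)

def allow_request(user_id, now=None):
    """Return True if this request fits in the bucket, False if rate-limited."""
    now = time.time() if now is None else now
    count, expires_at = _buckets.get(user_id, (0, 0))
    if now >= expires_at:
        # The TTL has passed: the bucket has emptied, so start a fresh window.
        count, expires_at = 0, now + WINDOW
    if count >= MAX_DROPS:
        return False  # bucket is full; reject until the window expires
    _buckets[user_id] = (count + 1, expires_at)
    return True
```

In a real deployment the bucket would live in the shared data cache (with the TTL handling the reset) rather than in process memory, so every API instance sees the same count.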
Scale plans have up to 100MB of storage in cache. If you require more than 100MB of cache, please send us a message.
Through the cache functions you use, you can manually set how long data is stored, or you can let older data be evicted automatically once the cache reaches the 100MB limit.
NOTE: As mentioned above, Data Caching is temporary storage, so never store something that can't be recovered. If permanent storage is required, the database can and should be used instead, though performance may not be as good.