Caching is how you make a Django application feel instant without rewriting everything. The right caching strategy turns a 400ms page load into a 15ms one, reduces database queries by orders of magnitude, and lets your infrastructure handle significantly more traffic on the same hardware. This guide covers the caching patterns I use in production Django projects: backend configuration, per-view caching, template fragment caching, low-level cache operations, cache invalidation strategies, and the mistakes that cause stale data or cache stampedes. For broader performance optimization, see the Performance hub.
The hardest part of caching is not setting it up. It is knowing when to invalidate. Every caching pattern in this guide includes its invalidation story, because a cache without a clear expiration strategy is a bug waiting to surface.
Cache backend configuration
Redis is the recommended production cache backend. It supports atomic operations, expiration, and works as both a cache and a Celery broker:
```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        # The /1 path in the URL selects Redis database 1
        'LOCATION': os.environ.get('REDIS_URL', 'redis://127.0.0.1:6379/1'),
        'KEY_PREFIX': 'prodjango',
        'TIMEOUT': 300,  # Default 5-minute expiration
    }
}
```
Use a different Redis database number for the cache than for your Celery broker (the /1 path in the LOCATION URL selects database 1) so you can flush one without wiping the other. The KEY_PREFIX prevents key collisions if multiple projects share a Redis instance.
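If Celery shares the same Redis instance, the split might look like the sketch below. The REDIS_BASE name and database numbers are illustrative, and CELERY_BROKER_URL assumes the uppercase Django-style Celery settings; adjust to your setup:

```python
# settings.py -- one Redis instance split by database number (illustrative)
REDIS_BASE = 'redis://127.0.0.1:6379'

CELERY_BROKER_URL = f'{REDIS_BASE}/0'  # Celery broker lives on db 0

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': f'{REDIS_BASE}/1',  # cache lives on db 1
        'KEY_PREFIX': 'prodjango',
    }
}
# Flushing db 1 clears the cache without touching queued Celery tasks.
```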
Per-view caching
The simplest caching pattern caches the entire HTTP response:
```python
from django.shortcuts import render
from django.views.decorators.cache import cache_page

from catalog.models import Product

@cache_page(60 * 15)  # Cache the full response for 15 minutes
def product_list(request):
    products = Product.objects.filter(is_active=True)
    return render(request, 'catalog/product_list.html', {'products': products})
```
Per-view caching works well for pages that change infrequently and are the same for all users. It does not work for pages with user-specific content or CSRF tokens in forms.
For class-based views:
```python
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from django.views.generic import ListView

from catalog.models import Product

@method_decorator(cache_page(60 * 15), name='dispatch')
class ProductListView(ListView):
    model = Product
```
Template fragment caching
When a page has both dynamic and static sections, cache the expensive fragments:
```django
{% load cache %}
{% cache 900 sidebar %}
  <aside>
    {% for category in categories %}
      <a href="{{ category.get_absolute_url }}">{{ category.name }} ({{ category.product_count }})</a>
    {% endfor %}
  </aside>
{% endcache %}
```
The first argument is the timeout in seconds. The second is the cache key fragment name. You can add variable arguments to create per-user or per-language cache keys:
```django
{% cache 900 sidebar request.user.id %}
```
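To invalidate a fragment before its timeout, Django's make_template_fragment_key rebuilds the key the {% cache %} tag uses, including any vary-on arguments. A sketch, assuming a cached-per-user sidebar and an illustrative user variable:

```python
from django.core.cache import cache
from django.core.cache.utils import make_template_fragment_key

# Matches the key produced by {% cache 900 sidebar request.user.id %}
key = make_template_fragment_key('sidebar', [user.id])
cache.delete(key)
```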
Low-level cache API
For maximum control, use Django’s cache API directly:
```python
from django.core.cache import cache

from catalog.models import Product

def get_featured_products():
    key = 'featured_products_v2'
    products = cache.get(key)
    if products is None:
        products = list(
            Product.objects.filter(is_featured=True)
            .select_related('category')
            .values('id', 'name', 'slug', 'price', 'category__name')[:12]
        )
        cache.set(key, products, timeout=60 * 30)
    return products
```
Convert querysets to lists before caching. Caching a lazy queryset serializes the SQL, not the results, and re-executes the query on every cache hit. This is a common and expensive mistake.
Cache invalidation strategies
Time-based expiration is the simplest. Set a TTL and accept that data may be stale for that duration. Works well for content that changes infrequently.
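The mechanism behind time-based expiration is simple enough to sketch in plain Python. This is an in-memory stand-in for a real backend, with an injectable clock so the behavior is easy to see; the class name and API are illustrative:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-key time-based expiration."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, timeout):
        self._store[key] = (value, self._clock() + timeout)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value
```

Real backends like Redis do the eviction server-side, but the contract is the same: a read after the timeout behaves exactly like a miss.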
Signal-based invalidation clears cache entries when the underlying data changes:
```python
from django.core.cache import cache
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

from catalog.models import Product

@receiver([post_save, post_delete], sender=Product)
def clear_product_cache(sender, **kwargs):
    cache.delete('featured_products_v2')
    cache.delete('product_count')
```
Versioned keys avoid the need for explicit deletion. Increment a version number when data changes, making old cache keys naturally expire:
```python
from django.core.cache import cache

def get_cache_version():
    version = cache.get('product_catalog_version')
    if version is None:
        version = 1
        cache.set('product_catalog_version', version, timeout=None)  # never expires
    return version

def get_products():
    version = get_cache_version()
    key = f'products_v{version}'
    # refresh_product_cache() rebuilds the product list and stores it under key
    return cache.get(key) or refresh_product_cache(key)
```
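The piece the snippet above leaves implicit is the bump itself. A minimal sketch with a plain dict standing in for the cache backend; bump_cache_version and the key names are illustrative, and the bump would typically be called from a post_save/post_delete signal:

```python
cache = {}  # stand-in for a real cache backend

VERSION_KEY = 'product_catalog_version'

def get_cache_version():
    return cache.setdefault(VERSION_KEY, 1)

def bump_cache_version():
    # Old 'products_v<n>' keys are simply never read again
    # and fall out of the cache on their own TTL.
    cache[VERSION_KEY] = get_cache_version() + 1

def product_key():
    return f'products_v{get_cache_version()}'
```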
Middleware caching
Django’s cache middleware caches all anonymous GET requests site-wide:
```python
MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',      # must be first
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',   # must be last
]

CACHE_MIDDLEWARE_SECONDS = 600
CACHE_MIDDLEWARE_KEY_PREFIX = 'site'
```
This is powerful but blunt. It works best for content-heavy sites with few logged-in users.
Avoiding cache stampedes
When a popular cache key expires, multiple requests simultaneously try to rebuild it. This “stampede” can overload the database.
Use cache locking to let one request rebuild while others wait:
```python
import time

from django.core.cache import cache

def get_with_lock(key, builder, timeout=300, lock_timeout=30):
    value = cache.get(key)
    if value is not None:
        return value
    lock_key = f'lock:{key}'
    # add() is atomic: only one process acquires the lock
    if cache.add(lock_key, '1', lock_timeout):
        try:
            value = builder()
            cache.set(key, value, timeout)
            return value
        finally:
            cache.delete(lock_key)
    # Another process is rebuilding; poll briefly for its result
    for _ in range(10):
        time.sleep(0.1)
        value = cache.get(key)
        if value is not None:
            return value
    # Give up waiting and build it ourselves rather than fail
    return builder()
```
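To see the lock in action without a running Redis, here is the same function exercised against a hypothetical in-memory cache that exposes the four operations it relies on. Everything here is illustrative; in production the cache object comes from django.core.cache:

```python
import time

class FakeCache:
    """In-memory stand-in for the subset of Django's cache API used here."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, timeout=None):
        self._store[key] = value

    def add(self, key, value, timeout=None):
        # Atomic in real backends; a plain check suffices single-threaded
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)

cache = FakeCache()

# get_with_lock from above, unchanged
def get_with_lock(key, builder, timeout=300, lock_timeout=30):
    value = cache.get(key)
    if value is not None:
        return value
    lock_key = f'lock:{key}'
    if cache.add(lock_key, '1', lock_timeout):
        try:
            value = builder()
            cache.set(key, value, timeout)
            return value
        finally:
            cache.delete(lock_key)
    for _ in range(10):
        time.sleep(0.1)
        value = cache.get(key)
        if value is not None:
            return value
    return builder()
```

Calling it twice with a builder that counts its invocations shows the point: the expensive work runs once, and the second call is served from the cache.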
Frequently asked questions
When should I add caching to a Django project? When database query time or view rendering time becomes a measurable bottleneck. Do not cache prematurely. Profile first, cache second. Premature caching adds complexity and makes debugging harder.
Should I cache database queries or rendered templates? Cache at the highest level that makes sense. Caching a full rendered HTML response is more efficient than caching individual query results. But if different pages share the same expensive query, caching the query result avoids redundant database hits.
How do I cache authenticated pages? Vary the cache key by user or session. The Vary: Cookie header tells Django’s cache middleware to create per-user cache entries. Be aware this significantly reduces cache hit rates.
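As a sketch, Django ships a vary_on_cookie decorator that adds the header for you; the dashboard view name is illustrative:

```python
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_cookie

@cache_page(60 * 5)
@vary_on_cookie  # one cache entry per session cookie, i.e. per user
def dashboard(request):
    ...
```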
What about using Memcached instead of Redis? Memcached is simpler and slightly faster for basic key-value operations. Redis offers more features: data structures, persistence, pub/sub. If you already run Redis for Celery, use it for caching too. Running one less service is worth more than marginal speed differences.