
Django's caching framework


May 14, 2021 Django



The fundamental trade-off of dynamic websites is, well, that they're dynamic. Each time a user requests a page, the Web server performs all sorts of calculations - from database queries to template rendering to business logic - to create the page that the site's visitor sees. From a processing-overhead perspective, this is a lot more expensive than a standard arrangement of reading a file off a filesystem.

For most Web applications, this overhead isn't a big deal. Most Web applications aren't washingtonpost.com or slashdot.org; they're small and medium-sized sites with moderate traffic. For medium- to high-traffic sites, however, it's essential to cut as much overhead as possible.

That's where caching comes in.

To cache something is to save the result of an expensive calculation so that you don't have to perform the calculation next time. Here's some pseudocode explaining how this would work for a dynamically generated Web page:

given a URL, try finding that page in the cache
if the page is in the cache:
    return the cached page
else:
    generate the page
    save the generated page in the cache (for next time)
    return the generated page

Django comes with a robust cache system that lets you save dynamic pages so they don't have to be calculated for each request. For convenience, Django offers different levels of cache granularity: you can cache the output of specific views, you can cache only the pieces that are difficult to produce, or you can cache your entire site.

Django also works well with "downstream" caches, such as Squid and browser-based caches. These are the types of caches that you don't directly control but to which you can provide hints (via HTTP headers) about which parts of your site should be cached, and how.

See also: the cache framework's design philosophy explains a few of the framework's design decisions.

Setting up the cache

The cache system requires a small amount of setup. Namely, you have to tell it where your cached data should live - whether in a database, on the filesystem, or directly in memory. This is an important decision that affects your cache's performance; some cache types are faster than others.

Your cache preferences go in the CACHES setting in your settings file. Here's an explanation of all available values for CACHES.

Memcached

Memcached is the fastest, most efficient type of cache supported natively by Django. It is an entirely memory-based cache server, originally developed to handle high loads at LiveJournal.com and subsequently open-sourced by Danga Interactive. Sites such as Facebook and Wikipedia use it to reduce database access and dramatically increase site performance.

Memcached runs as a daemon and is allotted a specified amount of RAM. All it does is provide a fast interface for adding, retrieving, and deleting data in the cache. All data is stored directly in memory, so there's no overhead of database or filesystem usage.

After installing Memcached itself, you'll need to install a Memcached binding. There are several Python Memcached bindings available; the two most common are python-memcached and pylibmc.

To use Memcached with Django:

  • Set BACKEND to django.core.cache.backends.memcached.MemcachedCache or django.core.cache.backends.memcached.PyLibMCCache (depending on your chosen memcached binding)
  • Set LOCATION to an ip:port value, where ip is the IP address of the Memcached daemon and port is the port on which Memcached is running, or to a unix:path value, where path is the path to a Memcached Unix socket file.

In this example, Memcached is running on localhost (127.0.0.1) port 11211, using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

In this example, Memcached is available through a local Unix socket file /tmp/memcached.sock using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}

When using the pylibmc binding, do not include the unix:/ prefix:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': '/tmp/memcached.sock',
    }
}

One excellent feature of Memcached is its ability to share a cache over multiple servers. This means you can run Memcached daemons on multiple machines, and the program will treat the group of machines as a single cache, without the need to duplicate cache values on each machine. To take advantage of this feature, include all server addresses in LOCATION, either as a semicolon- or comma-delimited string, or as a list.

In this example, the cache is shared between Memcached instances with IP addresses 172.19.26.240 and 172.19.26.242 and both running on port 11211:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11211',
        ]
    }
}

In the following example, the cache is shared over Memcached instances running on the IP addresses 172.19.26.240 (port 11211), 172.19.26.242 (port 11212), and 172.19.26.244 (port 11213):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11212',
            '172.19.26.244:11213',
        ]
    }
}

A final point about Memcached is that memory-based caching has a disadvantage: because the cached data is stored in memory, the data will be lost if your server crashes. Clearly, memory isn't intended for permanent data storage, so don't rely on memory-based caching as your only data storage. Without a doubt, none of the Django caching backends should be used for permanent storage - they're all intended to be a solution for caching, not storage - but we point this out here because memory-based caching is particularly temporary.

Database cache

Django can store its cached data in your database. This works best if you've got a fast, well-indexed database server.

To use a database table as a cache backend:

  • Set BACKEND to django.core.cache.backends.db.DatabaseCache
  • Set LOCATION to tablename, the name of the database table. This name can be whatever you want, as long as it's a valid table name that's not already being used in your database.

In this example, the cache table's name is my_cache_table:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}

Create a cached table

Before using the database cache, you must create the cache table with this command:

python manage.py createcachetable

This creates a table in your database in the same format as Django's database cache system expects. The name of the table is taken from LOCATION.

If you are using multiple database caches, createcachetable creates one table for each cache.

If you are using multiple databases, createcachetable observes the allow_migrate() method of your database routers (see below).

Like migrate, createcachetable won't touch an existing table. It will only create missing tables.

To print the SQL that would be run, rather than run it, use the createcachetable --dry-run option.

Multiple databases

If you use database caching with multiple databases, you'll also need to set up routing instructions for your database cache table. For the purposes of routing, the database cache table appears as a model named CacheEntry, in an application named django_cache. This model won't appear in the models cache, but the model details can be used for routing purposes.

For example, the following router directs all cache reads to cache_replica and all writes to cache_primary. The cache tables will only sync to cache_primary:

class CacheRouter:
    """A router to control all database cache operations"""

    def db_for_read(self, model, **hints):
        "All cache read operations go to the replica"
        if model._meta.app_label == 'django_cache':
            return 'cache_replica'
        return None

    def db_for_write(self, model, **hints):
        "All cache write operations go to primary"
        if model._meta.app_label == 'django_cache':
            return 'cache_primary'
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        "Only install the cache model on primary"
        if app_label == 'django_cache':
            return db == 'cache_primary'
        return None

If you don't specify routing directions for the database cache model, the cache backend will use the default database.

Of course, if you don't use the database cache backend, you don't have to worry about providing routing instructions for the database cache model.

File system cache

The file-based backend serializes and stores each cache value as a separate file. To use this backend, set BACKEND to "django.core.cache.backends.filebased.FileBasedCache" and LOCATION to a suitable directory. For example, to store cached data in /var/tmp/django_cache, use this setting:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
    }
}

If you're using Windows, place the drive letter at the beginning of the path, as follows:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': 'c:/foo/bar',
    }
}

The directory path should be absolute - that is, it should start at the root of the file system. It doesn't matter if you add a slash at the end of the setting.

Make sure the directory pointed to by this setting exists and is readable and writable by the system user under which your Web server runs. Continuing the above example, if your server runs as the user apache, make sure the directory /var/tmp/django_cache exists and is readable and writable by the user apache.

Local memory cache

This is the default cache if another is not specified in your settings file. If you want the speed advantages of in-memory caching but don't have the capability of running Memcached, consider the local-memory cache backend. This cache is per-process (see below) and thread-safe. To use it, set BACKEND to "django.core.cache.backends.locmem.LocMemCache". For example:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}

The cache LOCATION is used to identify individual memory stores. If you only have one locmem cache, you can omit the LOCATION; however, if you have more than one local memory cache, you will need to assign a name to at least one of them in order to keep them separate.

The cache uses a least-recently-used (LRU) culling strategy.
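The LRU idea can be sketched with a small, self-contained class (a toy model of the eviction behavior, not Django's actual locmem implementation):

```python
from collections import OrderedDict

# Minimal LRU store sketch: when capacity is exceeded, the entry that
# was used least recently is evicted first.
class LRUCache:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(max_entries=2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')     # touching 'a' makes 'b' the least recently used key
cache.set('c', 3)  # capacity exceeded: 'b' is evicted
```

After the last call, 'b' is gone while 'a' and 'c' survive, because reading 'a' refreshed its position in the usage order.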

Note that each process will have its own private cache instance, which means no cross-process caching is possible. This also means the local memory cache isn't particularly memory-efficient, so it's probably not a good choice for production environments. It's nice for development.

Dummy cache (for development)

Finally, Django comes with a "dummy" cache that doesn't actually cache - it just implements the cache interface without doing anything.

This is useful if you have a production site that uses heavy-duty caching in various places but a development/test environment where you don't want to cache and don't want to have to change your code to special-case the latter. To activate dummy caching, set BACKEND like so:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}

Use a custom cache backend

While Django includes support for a number of cache backends out-of-the-box, sometimes you might want to use a customized cache backend. To use an external cache backend with Django, use the Python import path as the BACKEND of the CACHES setting, like so:

CACHES = {
    'default': {
        'BACKEND': 'path.to.backend',
    }
}

If you're building your own backend, you can use the standard cache backends as reference implementations. You'll find the code in the django/core/cache/backends/ directory of the Django source.

Note: Without a really compelling reason, such as a host that doesn't support them, you should stick to the cache backends included with Django. They've been well-tested and are well-documented.

Cache parameters

Each cache backend can be given additional arguments to control caching behavior. These arguments are provided as additional keys in the CACHES setting. Valid arguments are as follows:

  • TIMEOUT: The default timeout, in seconds, to use for the cache. This argument defaults to 300 seconds (5 minutes). You can set TIMEOUT to None so that, by default, cache keys never expire. A value of 0 causes keys to immediately expire (effectively "don't cache").
  • OPTIONS: Any options that should be passed to the cache backend. The list of valid options will vary with each backend, and cache backends backed by a third-party library will pass their options directly to the underlying cache library. Cache backends that implement their own culling strategy (i.e., the locmem, filesystem and database backends) will honor the following options:
      • MAX_ENTRIES: The maximum number of entries allowed in the cache before old values are deleted. This argument defaults to 300.
      • CULL_FREQUENCY: The fraction of entries that are culled when MAX_ENTRIES is reached. The actual ratio is 1 / CULL_FREQUENCY, so set CULL_FREQUENCY to 2 to cull half the entries when MAX_ENTRIES is reached. This argument should be an integer and defaults to 3. A value of 0 for CULL_FREQUENCY means that the entire cache will be dumped when MAX_ENTRIES is reached. On some backends (database in particular) this makes culling much faster at the expense of more cache misses.
    The Memcached backends pass the contents of OPTIONS as keyword arguments to the client constructors, allowing for more advanced control of client behavior. For example usage, see below.
  • KEY_PREFIX: A string that will be automatically included (prepended by default) in all cache keys used by the Django server. See the cache documentation for more information.
  • VERSION: The default version number for cache keys generated by the Django server. See the cache documentation for more information.
  • KEY_FUNCTION: A string containing a dotted path to a function that defines how to compose a prefix, version and key into a final cache key. See the cache documentation for more information.
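The interaction of MAX_ENTRIES and CULL_FREQUENCY can be sketched in plain Python. This is a toy model of the rule described above, not Django's actual implementation:

```python
# Toy model of the MAX_ENTRIES / CULL_FREQUENCY culling rule: once the
# cache reaches max_entries, 1 / cull_frequency of the entries are
# removed; a cull_frequency of 0 dumps the whole cache instead.
def cull(entries, max_entries, cull_frequency):
    if len(entries) < max_entries:
        return entries  # still under capacity, nothing to do
    if cull_frequency == 0:
        return {}  # dump the entire cache
    # Remove every cull_frequency-th key, i.e. 1/cull_frequency of all
    # entries.
    doomed = list(entries)[::cull_frequency]
    for key in doomed:
        del entries[key]
    return entries

entries = {f'key{i}': i for i in range(300)}
remaining = cull(entries, max_entries=300, cull_frequency=3)
# With the default CULL_FREQUENCY of 3, one third of the 300 entries
# is removed, leaving 200.
```

With `cull_frequency=0`, the same call would return an empty dict, matching the "dump everything" behavior described above.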

In this example, a filesystem backend is being configured with a timeout of 60 seconds and a maximum capacity of 1000 items:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
        'TIMEOUT': 60,
        'OPTIONS': {
            'MAX_ENTRIES': 1000
        }
    }
}

Here's an example configuration for a python-memcached based backend with an object size limit of 2MB:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'server_max_value_length': 1024 * 1024 * 2,
        }
    }
}

Here's an example configuration for a pylibmc based backend that enables the binary protocol, SASL authentication, and the ketama behavior mode:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'binary': True,
            'username': 'user',
            'password': 'pass',
            'behaviors': {
                'ketama': True,
            }
        }
    }
}

The per-site cache

Once the cache is set up, the simplest way to use caching is to cache your entire site. You'll need to add 'django.middleware.cache.UpdateCacheMiddleware' and 'django.middleware.cache.FetchFromCacheMiddleware' to your MIDDLEWARE setting, as in this example:

MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',
]

Attention

No, that's not a typo: the "update" middleware must be first in the list, and the "fetch" middleware must be last. The details are a bit obscure, but see Order of MIDDLEWARE below if you'd like the full story.

Then, add the following required settings to the Django settings file:

  • CACHE_MIDDLEWARE_ALIAS - the cache alias used for storage.
  • CACHE_MIDDLEWARE_SECONDS - the number of seconds each page should be cached.
  • CACHE_MIDDLEWARE_KEY_PREFIX - if the cache is shared across multiple sites using the same Django installation, set this to the name of the site, or some other string that is unique to this Django instance, to prevent key collisions. Use an empty string if you don't care.

FetchFromCacheMiddleware caches GET and HEAD responses with status 200, where the request and response headers allow. Responses to requests for the same URL with different query parameters are considered to be unique pages and are cached separately. This middleware expects that a HEAD request is answered with the same response headers as the corresponding GET request; in which case it can return a cached GET response for the HEAD request.
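The per-URL keying described above can be sketched as follows. This is a hypothetical simplification: the real middleware's cache key also varies on response headers, the key prefix, and the active language:

```python
import hashlib

# Hypothetical sketch of per-URL cache keying: the full path, including
# the query string, is hashed into the key, so the same URL with
# different query parameters gets a different cache entry. A HEAD
# request reuses the GET entry, mirroring the behavior described above.
def page_cache_key(method, full_path):
    method = 'GET' if method == 'HEAD' else method
    url_hash = hashlib.md5(full_path.encode()).hexdigest()
    return f'views.cache.{method}.{url_hash}'

k_page1 = page_cache_key('GET', '/articles/?page=1')
k_page2 = page_cache_key('GET', '/articles/?page=2')
k_head = page_cache_key('HEAD', '/articles/?page=1')
```

`k_page1` and `k_page2` differ (different query strings are distinct pages), while `k_head` equals `k_page1` (a HEAD request can be served from the cached GET response).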

Additionally, UpdateCacheMiddleware automatically sets a few headers in each HttpResponse:

  • Sets the Expires header to the current date/time plus the defined CACHE_MIDDLEWARE_SECONDS.
  • Sets the Cache-Control header to give a max age for the page - again, from the CACHE_MIDDLEWARE_SECONDS setting.
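Computing these two header values looks roughly like this (a self-contained sketch, assuming a hypothetical CACHE_MIDDLEWARE_SECONDS of 600; the real middleware uses Django's own HTTP-date helpers):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# Hypothetical setting value for illustration.
CACHE_MIDDLEWARE_SECONDS = 600

# Expires: current date/time plus CACHE_MIDDLEWARE_SECONDS, formatted
# as an HTTP-date (always expressed in GMT).
expires_at = datetime.now(timezone.utc) + timedelta(
    seconds=CACHE_MIDDLEWARE_SECONDS)

headers = {
    'Expires': format_datetime(expires_at, usegmt=True),
    # Cache-Control: max age for the page, from the same setting.
    'Cache-Control': f'max-age={CACHE_MIDDLEWARE_SECONDS}',
}
```

Downstream caches and browsers read these headers to decide how long they may serve their own copy of the page without revalidating.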

For more information about middleware, see Middleware.

If a view sets its own cache expiry time (i.e. it has a max-age section in its Cache-Control header), the page will be cached until that expiry time, rather than CACHE_MIDDLEWARE_SECONDS. Using the decorators in django.views.decorators.cache, you can easily set a view's expiry time (using the cache_control() decorator) or disable caching for a view (using the never_cache() decorator). See the "Using other headers" section for more on these decorators.

If USE_I18N is set to True, the generated cache key will include the name of the active language - see also How Django discovers language preference. This allows you to easily cache multilingual sites without having to create the cache key yourself.

Cache keys also include the active language when USE_L10N is set to True, and the current time zone when USE_TZ is set to True.

The per-view cache

django.views.decorators.cache.cache_page()

A more granular approach to using the caching framework is to cache the output of a single view. django.views.decorators.cache defines a cache_page decorator that automatically caches the response of the view for you:

from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def my_view(request):
    ...

cache_page takes a single argument: the cache timeout, in seconds. In the above example, the result of the my_view() view will be cached for 15 minutes. (Note that we've written it as 60 * 15 for the purpose of readability; 60 * 15 will be evaluated to 900 - that is, 15 minutes multiplied by 60 seconds per minute.)

As with the per-site cache, the per-view cache is keyed off of the URL. If multiple URLs point at the same view, each URL will be cached separately. Continuing the my_view example, if your URLconf looks like this:

urlpatterns = [
    path('foo/<int:code>/', my_view),
]

then requests to /foo/1/ and /foo/23/ will be cached separately, as you may expect. But once a particular URL (e.g., /foo/23/) has been requested, subsequent requests to that URL will use the cache.

cache_page can also take an optional keyword argument, cache, which directs the decorator to use a specific cache (from your CACHES setting) when caching view results. By default, the default cache will be used, but you can specify any cache you want:

@cache_page(60 * 15, cache="special_cache")
def my_view(request):
    ...

You can also override the cache prefix on a per-view basis. cache_page takes an optional keyword argument, key_prefix, which works in the same way as the CACHE_MIDDLEWARE_KEY_PREFIX setting for the middleware. It can be used like this:

@cache_page(60 * 15, key_prefix="site1")
def my_view(request):
    ...

The key_prefix and cache arguments may be specified together. The key_prefix argument and the KEY_PREFIX specified under CACHES will be concatenated.

Specifying per-view cache in the URLconf

The examples in the previous section have hard-coded the fact that the view is cached, because cache_page alters the my_view function in place. This approach couples your view to the cache system, which is not ideal for several reasons. For instance, you might want to reuse the view functions on another, cache-less site, or you might want to distribute the views to people who might want to use them without being cached. The solution to these problems is to specify the per-view cache in the URLconf rather than next to the view functions themselves.

You can do so by wrapping the view function with cache_page when you refer to it in the URLconf. Here's the old URLconf from earlier:

urlpatterns = [
    path('foo/<int:code>/', my_view),
]

Here's the same thing, with my_view wrapped in cache_page:

from django.views.decorators.cache import cache_page

urlpatterns = [
    path('foo/<int:code>/', cache_page(60 * 15)(my_view)),
]

Template fragment caching

If you're after even more control, you can also cache template fragments using the cache template tag. To give your template access to this tag, put {% load cache %} near the top of your template.

The {% cache %} template tag caches the contents of the block for a given amount of time. It takes at least two arguments: the cache timeout (in seconds), and the name to give the cache fragment. The fragment is cached forever if the timeout is None. The name will be taken as is; don't use a variable. For example:

{% load cache %}
{% cache 500 sidebar %}
    .. sidebar ..
{% endcache %}

Sometimes you might want to cache multiple copies of a fragment depending on some dynamic data that appears inside the fragment. For example, you might want a separate cached copy of the sidebar used in the previous example for every user of your site. Do this by passing one or more additional arguments, which may be variables with or without filters, to the template tag to uniquely identify the cache fragment:

{% load cache %}
{% cache 500 sidebar request.user.username %}
    .. sidebar for logged in user ..
{% endcache %}

If USE_I18N is set to True, the per-site middleware cache will respect the active language. For the cache template tag, you could use one of the translation-specific variables available in templates to achieve the same result:

{% load i18n %}
{% load cache %}

{% get_current_language as LANGUAGE_CODE %}

{% cache 600 welcome LANGUAGE_CODE %}
    {% trans "Welcome to example.com" %}
{% endcache %}

Cache timeouts can be template variables, as long as the variables resolve to integer values. For example, if the template variable my_timeout is set to the value 600, then the following two examples are equivalent:

{% cache 600 sidebar %} ... {% endcache %}
{% cache my_timeout sidebar %} ... {% endcache %}

This feature helps avoid duplication in the template. You can set a timeout in a variable in one location and then reuse the value.

By default, the cache tag will try to use the cache called "template_fragments". If no such cache exists, it will fall back to using the default cache. You may select an alternate cache backend to use with the using keyword argument, which must be the last argument to the tag.

{% cache 300 local-thing ...  using="localcache" %}

Specifying an unconfigured cache name is considered an error.

django.core.cache.utils.make_template_fragment_key(fragment_name, vary_on=None)

If you want to obtain the cache key used for a cached fragment, you can use make_template_fragment_key. fragment_name is the same as the second argument to the cache template tag; vary_on is a list of all the additional arguments passed to the tag. This function can be useful for invalidating or overwriting a cached item, for example:

>>> from django.core.cache import cache
>>> from django.core.cache.utils import make_template_fragment_key
# cache key for {% cache 500 sidebar username %}
>>> key = make_template_fragment_key('sidebar', [username])
>>> cache.delete(key) # invalidates cached template fragment

The low-level cache API

Sometimes, caching an entire rendered page doesn't gain you very much and is, in fact, inconvenient overkill.

Perhaps, for instance, your site includes a view whose results depend on several expensive queries, the results of which change at different intervals. In this case, it would not be ideal to use the full-page caching that the per-site or per-view cache strategies offer, because you wouldn't want to cache the entire result (since some of the data changes often), but you'd still want to cache the results that rarely change.

For cases like this, Django exposes a low-level cache API. You can use this API to store objects in the cache with any level of granularity you like. You can cache any Python object that can be pickled safely: strings, dictionaries, lists of model objects, and so forth. (Most common Python objects can be pickled; refer to the Python documentation for more information about pickling.)

Accessing the cache

django.core.cache.caches

You can access the caches configured in the CACHES setting through a dict-like object: django.core.cache.caches. Repeated requests for the same alias in the same thread will return the same object.

>>> from django.core.cache import caches
>>> cache1 = caches['myalias']
>>> cache2 = caches['myalias']
>>> cache1 is cache2
True

If the named key does not exist, InvalidCacheBackendError will be raised.

To provide thread-safety, a different instance of the cache backend will be returned for each thread.

django.core.cache.cache

As a shortcut, the default cache is available as django.core.cache.cache:

>>> from django.core.cache import cache

This object is equivalent to caches['default'].

Basic usage

The basic interface is:

cache.set(key, value, timeout=DEFAULT_TIMEOUT, version=None)
>>> cache.set('my_key', 'hello, world!', 30)
cache.get(key, default=None, version=None)
>>> cache.get('my_key')
'hello, world!'

key should be a str, and value can be any picklable Python object.

The timeout argument is optional and defaults to the timeout argument of the appropriate backend in the CACHES setting (explained above). It's the number of seconds the value should be stored in the cache. Passing in None for timeout will cache the value forever. A timeout of 0 won't cache the value.

If the object does not exist in the cache, cache.get() returns None:

>>> # Wait 30 seconds for 'my_key' to expire...
>>> cache.get('my_key')
None

We advise against storing the literal value None in the cache, because you won't be able to distinguish between your stored None value and a cache miss signified by a return value of None.
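When you do need to store None, the usual workaround is a sentinel object as the default. Here is a sketch using a dict-backed stand-in cache so the example is self-contained (the DictCache class is hypothetical; any object with get(key, default) works the same way):

```python
# Sketch of the sentinel idiom for telling a stored None apart from a
# cache miss. DictCache is a hypothetical stand-in for a real cache.
class DictCache:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

cache = DictCache()
cache.set('maybe_none', None)

MISS = object()  # unique sentinel that can never equal a stored value

value = cache.get('maybe_none', MISS)
stored = value is not MISS                    # True: key exists, value is None
missing = cache.get('absent', MISS) is MISS   # True: genuine cache miss
```

Because `MISS` is a fresh object, `value is not MISS` is True exactly when the key was present, even though the stored value itself is None.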

cache.get() can take a default argument. This specifies which value to return if the object doesn't exist in the cache:

>>> cache.get('my_key', 'has expired')
'has expired'
cache.add(key, value, timeout=DEFAULT_TIMEOUT, version=None)

To add a key only if it doesn't already exist, use the add() method. It takes the same parameters as set(), but it will not attempt to update the cache if the specified key is already present:

>>> cache.set('add_key', 'Initial value')
>>> cache.add('add_key', 'New value')
>>> cache.get('add_key')
'Initial value'

If you need to know whether add() stored a value in the cache, you can check the return value. It will return True if the value was stored, False otherwise.

cache.get_or_set(key, default, timeout=DEFAULT_TIMEOUT, version=None)

If you want to get a key's value, or set a value if the key isn't in the cache, use the get_or_set() method. It takes the same parameters as get(), but the default is set as the new cache value for that key, rather than returned:

>>> cache.get('my_new_key')  # returns None
>>> cache.get_or_set('my_new_key', 'my new value', 100)
'my new value'

You can also pass any callable as the default:

>>> import datetime
>>> cache.get_or_set('some-timestamp-key', datetime.datetime.now)
datetime.datetime(2014, 12, 11, 0, 15, 49, 457920)
cache.get_many(keys, version=None)

There's also a get_many() interface that only hits the cache once. get_many() returns a dictionary with all the keys you asked for that actually exist in the cache (and haven't expired):

>>> cache.set('a', 1)
>>> cache.set('b', 2)
>>> cache.set('c', 3)
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}
cache.set_many(dict, timeout)

To set multiple values more efficiently, use set_many() and pass a dictionary of key-value pairs:

>>> cache.set_many({'a': 1, 'b': 2, 'c': 3})
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}

Like cache.set(), set_many() takes an optional timeout parameter.

On supported backends (memcached), set_many() returns a list of keys that failed to be inserted.

cache.delete(key, version=None)

You can delete keys explicitly with delete() to clear the cache for a particular object:

>>> cache.delete('a')
cache.delete_many(keys, version=None)

If you want to clear a bunch of keys at once, delete_many() can take a list of keys to be cleared:

>>> cache.delete_many(['a', 'b', 'c'])
cache.clear()

Finally, if you want to delete all the keys in the cache, use cache.clear(). Be careful with this; clear() will remove everything from the cache, not just the keys set by your application.

>>> cache.clear()
cache.touch(key, timeout=DEFAULT_TIMEOUT, version=None)

cache.touch() sets a new expiration time for the key. For example, to update the key to expire 10 seconds from now:

>>> cache.touch('a', 10)
True

As with other methods, the timeout argument is optional and defaults to the TIMEOUT option of the appropriate backend in the CACHES setting.

touch() returns True if the key was successfully touched, False otherwise.

cache.incr(key, delta=1, version=None)
cache.decr(key, delta=1, version=None)

You can also increment or decrement a key that already exists using the incr() or decr() methods, respectively. By default, the existing cache value will be incremented or decremented by 1. Other increment/decrement values can be specified by providing an argument to the increment/decrement call. A ValueError will be raised if you attempt to increment or decrement a nonexistent cache key:

>>> cache.set('num', 1)
>>> cache.incr('num')
2
>>> cache.incr('num', 10)
12
>>> cache.decr('num')
11
>>> cache.decr('num', 5)
6

Note: incr()/decr() methods are not guaranteed to be atomic. On those backends that support atomic increment/decrement (most notably, the memcached backend), increment and decrement operations will be atomic. However, if the backend doesn't natively provide an increment/decrement operation, it will be implemented using a two-step retrieve/update.
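The two-step retrieve/update fallback looks roughly like this (a self-contained sketch using a hypothetical dict-backed cache; it reproduces the ValueError behavior, and the gap between the get and the set is exactly where a concurrent writer can race):

```python
# Sketch of the non-atomic two-step increment described above. Another
# process writing between step 1 and step 2 would have its update lost.
class DictCache:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def incr(self, key, delta=1):
        value = self.get(key)          # step 1: retrieve
        if value is None:
            raise ValueError(f'Key {key!r} not found')
        self.set(key, value + delta)   # step 2: update (not atomic!)
        return value + delta

cache = DictCache()
cache.set('num', 1)
cache.incr('num')      # 1 -> 2
cache.incr('num', 10)  # 2 -> 12
```

Backends with a native atomic increment (such as memcached) perform the whole operation server-side in one step, which is why they don't have this race.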

cache.close()

You can close the connection to your cache with close() if implemented by the cache backend:

>>> cache.close()

Note: For caches that don't implement close methods, it is a no-op.

Cache key prefix

If you are sharing a cache instance between servers, or between your production and development environments, it's possible for data cached by one server to be used by another server. If the format of cached data is different between servers, this can lead to some very hard-to-diagnose problems.

To prevent this, Django provides the ability to prefix all cache keys used by a server. When a particular cache key is saved or retrieved, Django will automatically prefix the cache key with the value of the KEY_PREFIX cache setting.

By ensuring that each Django instance has a different KEY_PREFIX, you can ensure that cached values do not conflict.

Cache versioning

When you make changes to running code that uses cached values, you may need to purge any existing cached values. The easiest way to do this is to flush the entire cache, but this can lead to the loss of cache values that are still valid and useful.

Django provides a better way to target individual cache values. Django's cache framework has a system-wide version identifier, specified using the VERSION cache setting. The value of this setting is automatically combined with the cache prefix and the user-provided cache key to obtain the final cache key.

By default, any key request will automatically include the site's default cache key version. However, the primitive cache functions all include a version argument, so you can specify a particular cache key version to set or get. For example:

>>> # Set version 2 of a cache key
>>> cache.set('my_key', 'hello world!', version=2)
>>> # Get the default version (assuming version=1)
>>> cache.get('my_key')
None
>>> # Get version 2 of the same key
>>> cache.get('my_key', version=2)
'hello world!'

The version of a specific key can be incremented and decremented using the incr_version() and decr_version() methods. This enables specific keys to be bumped to a new version, leaving other keys unaffected. Continuing our previous example:

>>> # Increment the version of 'my_key'
>>> cache.incr_version('my_key')
>>> # The default version still isn't available
>>> cache.get('my_key')
None
>>> # Version 2 isn't available, either
>>> cache.get('my_key', version=2)
None
>>> # But version 3 *is* available
>>> cache.get('my_key', version=3)
'hello world!'

Cache key conversion

As described in the previous two sections, the cache key provided by a user is not used verbatim: it is combined with the cache prefix and key version to provide a final cache key. By default, the three parts are joined using colons to produce the final string:

def make_key(key, key_prefix, version):
    return '%s:%s:%s' % (key_prefix, version, key)

You can provide custom key functionality if you want to combine parts in different ways, or if you want to do other processing of the final key (for example, to get a hash summary of the key portion).

The KEY_FUNCTION cache setting specifies a dotted path to a function matching the prototype of make_key() above. If provided, this custom key function will be used instead of the default key combining function.
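As an illustration, a custom key function might hash the user-provided part of the key so that even very long keys stay short and free of problem characters. This is a sketch following the make_key() prototype above; the choice of SHA-1 is an assumption, not something Django prescribes:

```python
import hashlib

def make_key(key, key_prefix, version):
    # Hash the user-provided key so the final key stays short and
    # contains no spaces or control characters.
    digest = hashlib.sha1(key.encode()).hexdigest()
    return '%s:%s:%s' % (key_prefix, version, digest)
```

Pointing the KEY_FUNCTION setting at the dotted path of such a function makes Django use it for every key it builds.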

Cache key warning

Memcached, the most commonly-used production cache backend, does not allow cache keys longer than 250 characters or containing whitespace or control characters, and using such keys will cause an exception. To encourage cache-portable code and minimize unpleasant surprises, the other built-in cache backends issue a warning (django.core.cache.backends.base.CacheKeyWarning) if a key is used that would cause an error on memcached.

If you are using a production backend that can accept a wider range of keys (a custom backend, or one of the non-memcached built-in backends), and want to use this wider range without warnings, you can silence CacheKeyWarning with this code in the management module of one of your INSTALLED_APPS:

import warnings

from django.core.cache import CacheKeyWarning

warnings.simplefilter("ignore", CacheKeyWarning)

If you want to instead provide custom key validation logic for one of the built-in backends, you can subclass it, override just the validate_key method, and follow the instructions for using a custom cache backend. For instance, to do this for the locmem backend, put this code in a module:

from django.core.cache.backends.locmem import LocMemCache

class CustomLocMemCache(LocMemCache):
    def validate_key(self, key):
        """Custom validation, raising exceptions or warnings as needed."""
        ...

...and use the dotted Python path to this class in the BACKEND portion of your CACHES setting.
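As a standalone sketch of the kind of check such a validate_key override might perform, the function below mirrors memcached's documented limits (250-character maximum, no whitespace or control characters). The function name and warning messages are illustrative, not Django's actual implementation:

```python
import warnings

MEMCACHE_MAX_KEY_LENGTH = 250  # memcached's documented key-length limit

def validate_key(key):
    """Warn when a key would not be portable to memcached."""
    if len(key) > MEMCACHE_MAX_KEY_LENGTH:
        warnings.warn('Cache key is longer than %d characters: %r'
                      % (MEMCACHE_MAX_KEY_LENGTH, key))
    if any(ord(ch) < 33 or ord(ch) == 127 for ch in key):
        warnings.warn('Cache key contains spaces or control characters: %r' % key)
```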

Downstream cache

So far, this document has focused on caching your own data. But another type of caching is also relevant to Web development: caching performed by "downstream" caches. These are systems that cache pages for users even before the request reaches your site.

Here are some examples of downstream caching:

  • Your ISP may cache certain pages, so if you request a page from https://example.com/, your ISP would send you the page without having to access example.com directly. The maintainers of example.com have no knowledge of this caching; the ISP sits between example.com and your web browser, handling all caching transparently.
  • Your Django site may sit behind a proxy cache, such as the Squid Web proxy cache (http://www.squid-cache.org/), that caches pages for performance. In this case, each request would first be handled by the proxy, and would be passed to your application only if needed.
  • Your web browser caches pages, too. If a page sends out the appropriate headers, your browser will use its locally cached copy for subsequent requests to that page, without even contacting the page again to see whether it has changed.

Downstream caching is a nice efficiency boost, but there's a danger to it: many pages' contents differ based on authentication and a host of other variables, and caching systems that blindly save pages based purely on URLs could expose incorrect or sensitive data to subsequent visitors to those pages.

For example, if you operate a Web email system, the contents of the "inbox" page depend on which user is logged in. If an ISP blindly cached your site, the first user who logged in through that ISP would have their user-specific inbox page cached for subsequent visitors to the site. That's not cool.

Fortunately, HTTP provides a solution to this problem. A number of HTTP headers exist to instruct downstream caches to differ their cache contents depending on designated variables, and to tell caching mechanisms not to cache particular pages. We'll look at some of these headers in the sections that follow.

Using the Vary header

The Vary header defines which request headers a cache mechanism should take into account when building its cache key. For example, if the contents of a web page depend on a user's language preference, the page is said to "vary on language."

By default, Django's cache system creates its cache keys using the requested fully-qualified URL, for example "https://www.example.com/stories/2005/?order_by=author". This means every request to that URL will use the same cached version, regardless of user-agent differences such as cookies or language preferences. However, if this page produces different content based on some difference in request headers, such as a cookie, or a language, or a user agent, you'll need to use the Vary header to tell caching mechanisms that the page output depends on those things.

To do this in Django, use the handy django.views.decorators.vary.vary_on_headers() view decorator, as follows:

from django.views.decorators.vary import vary_on_headers

@vary_on_headers('User-Agent')
def my_view(request):
    ...

In this case, a caching mechanism (such as Django's own cache middleware) will cache a separate version of the page for each unique user agent.

The advantage of using the vary_on_headers decorator rather than manually setting the Vary header (using something like response['Vary'] = 'user-agent') is that the decorator adds to the Vary header (which may already exist), rather than setting it from scratch and potentially overriding anything already in it.

You can pass multiple headers to vary_on_headers():

@vary_on_headers('User-Agent', 'Cookie')
def my_view(request):
    ...

This tells downstream caches to vary on both, which means each combination of user agent and cookie will get its own cache value. For example, a request with the user agent Mozilla and the cookie value foo=bar will be considered different from a request with the user agent Mozilla and the cookie value foo=ham.
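The effect of varying on multiple headers can be sketched as a cache key built from the URL plus the value of every header named in Vary. This is a simplified illustration of what a downstream cache does; real caches use their own key schemes, and the function name here is hypothetical:

```python
import hashlib

def downstream_cache_key(url, request_headers, vary_headers):
    """Build a cache key from the URL and each varied header's value."""
    parts = [url]
    for name in vary_headers:
        # A missing header contributes an empty value, so requests that
        # differ only in an absent header still share one key.
        parts.append('%s=%s' % (name.lower(), request_headers.get(name.lower(), '')))
    return hashlib.md5('|'.join(parts).encode()).hexdigest()
```

With Vary: User-Agent, Cookie, two requests to the same URL whose cookies differ produce different keys, while identical requests share one key.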

Because varying on cookie is so common, there is a django.views.decorators.vary.vary_on_cookie() decorator. These two views are equivalent:

@vary_on_cookie
def my_view(request):
    ...

@vary_on_headers('Cookie')
def my_view(request):
    ...

The headers you pass to vary_on_headers are not case sensitive; "User-Agent" is the same thing as "user-agent".

You can also use a helper function, django.utils.cache.patch_vary_headers(), directly. This function sets, or adds to, the Vary header. For example:

from django.shortcuts import render
from django.utils.cache import patch_vary_headers

def my_view(request):
    ...
    response = render(request, 'template_name', context)
    patch_vary_headers(response, ['Cookie'])
    return response

patch_vary_headers takes an HttpResponse instance as its first argument and a list/tuple of case-insensitive header names as its second argument.

For more information about the Vary header, see the official Vary specification.

Controlling cache: Using other headers

Other issues with the cache are the privacy of the data and where the data should be stored in the cascading cache.

Users usually face two kinds of caches: their own browser cache (a private cache) and their provider's cache (a public cache). A public cache is used by multiple users and controlled by someone else. This poses problems with sensitive data: you don't want, say, your bank account number stored in a public cache. So Web applications need a way to tell caches which data is private and which is public.

The solution is to indicate that a page's cache should be "private." To do this in Django, use the cache_control() view decorator. Example:

from django.views.decorators.cache import cache_control

@cache_control(private=True)
def my_view(request):
    ...

The decorator is responsible for sending appropriate HTTP headers in the background.

Note that the cache control settings "private" and "public" are mutually exclusive. The decorator ensures that the "public" directive is removed if "private" should be set (and vice versa). An example use of the two directives would be a blog site that offers both private and public entries. Public entries may be cached on any shared cache. The following code uses patch_cache_control(), the manual way to modify the Cache-Control header (it is called internally by the cache_control() decorator):

from django.utils.cache import patch_cache_control
from django.views.decorators.vary import vary_on_cookie

@vary_on_cookie
def list_blog_entries_view(request):
    if request.user.is_anonymous:
        response = render_only_public_entries()
        patch_cache_control(response, public=True)
    else:
        response = render_private_and_public_entries(request.user)
        patch_cache_control(response, private=True)

    return response

You can control downstream caches in other ways as well (see RFC 7234 for details on HTTP caching). For example, even if you don't use Django's server-side cache framework, you can still tell clients to cache a view for a certain amount of time with the max-age directive:

from django.views.decorators.cache import cache_control

@cache_control(max_age=3600)
def my_view(request):
    ...

(If you do use the caching middleware, it already sets the max-age with the value of the CACHE_MIDDLEWARE_SECONDS setting. In that case, the custom max_age from the cache_control() decorator will take precedence, and the header values will be merged correctly.)

Any valid Cache-Control response directive is valid in cache_control(). Here are some more examples:

  • no_transform=True
  • must_revalidate=True
  • stale_while_revalidate=num_seconds

The full list of known directives can be found in the IANA registry (note that not all of them apply to responses).
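The mapping from keyword arguments to header directives can be sketched as follows: underscores become hyphens, and True-valued directives are emitted bare, while others carry a value. This is a simplified illustration of the convention, not Django's actual implementation, and the function name is hypothetical:

```python
def build_cache_control(**directives):
    """Render keyword directives as a Cache-Control header value."""
    parts = []
    for name, value in directives.items():
        token = name.replace('_', '-')
        # True means a bare directive ('no-transform'); anything else
        # becomes 'name=value' ('max-age=3600').
        parts.append(token if value is True else '%s=%s' % (token, value))
    return ', '.join(parts)
```

So cache_control(no_transform=True, max_age=3600) would correspond to a header of "no-transform, max-age=3600".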

If you want to use headers to disable caching altogether, the never_cache() view decorator adds headers to ensure the response won't be cached by browsers or other caches. Example:

from django.views.decorators.cache import never_cache

@never_cache
def myview(request):
    ...

MIDDLEWARE

If you use caching middleware, it's important to put each piece in the right place within the MIDDLEWARE setting. That's because the cache middleware needs to know which headers to vary the cache storage by. Middleware always adds something to the Vary response header when it can.

UpdateCacheMiddleware runs during the response phase, where middleware is run in reverse order, so an item at the top of the list runs last during the response phase. Thus, you need to make sure that UpdateCacheMiddleware appears before any other middleware that might add something to the Vary header. The following middleware modules do so:

  • SessionMiddleware adds Cookie
  • GZipMiddleware adds Accept-Encoding
  • LocaleMiddleware adds Accept-Language

FetchFromCacheMiddleware, on the other hand, runs during the request phase, where middleware is applied first-to-last, so an item at the top of the list runs first during the request phase. FetchFromCacheMiddleware also needs to run after other middleware updates the Vary header, so FetchFromCacheMiddleware must be after any item that does so.
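Putting these ordering rules together, a MIDDLEWARE setting honoring both constraints might look like the sketch below. Only the relative positions of the two cache middleware classes matter here; the other entries are the usual Django middleware mentioned above:

```python
MIDDLEWARE = [
    # First in the list: runs *last* in the response phase, after the
    # middleware below have added their headers to Vary.
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',  # adds Cookie
    'django.middleware.gzip.GZipMiddleware',                 # adds Accept-Encoding
    'django.middleware.locale.LocaleMiddleware',             # adds Accept-Language
    'django.middleware.common.CommonMiddleware',
    # Last in the list: runs after the Vary-updating middleware in the
    # request phase.
    'django.middleware.cache.FetchFromCacheMiddleware',
]
```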

For more information: https://docs.djangoproject.com/en/3.0/