A simple, secure and fast image resize server with nginx

Recently, I needed a simple, secure and elegant solution to resize more than a million images into several dimensions.

Just resize, different sizes, secure, fast.

Those images are meant to serve as static files for an e-commerce application.

So what options do we have?

  • resize the images ahead of time with a periodic cron job (since we are getting new images every day)
  • resize the images on the fly on first request

Because of its flexibility, the decision was made to go with the second option.

This means the first user who requests a resized version actually starts the transformation job.

All subsequent requests get the resized version delivered directly from disk by the HTTP server.

I’ve researched a lot on Google, Stack Overflow and GitHub to find a useful solution.

The list of candidates was long.

There is also a pretty long article from the authors of imgproxy with comparisons of different solutions.

However, none of those solutions did exactly what I wanted:

  • resize an image only once and then persist it to disk (or to something like S3 later)
  • serve the results with correct http caching headers

Most solutions either do full on-the-fly conversion without any persistence to disk, or have limited cache sizes, as most nginx-based solutions do.

In the end I created my own little configuration based on nginx's ngx_http_image_filter_module and try_files, which does the job now. Thanks to all authors of nginx-based solutions for inspiration and input.

    server {
        # the "http2" parameter belongs on a TLS listener; on a plain
        # port 80 listener browsers would not be able to speak h2c
        listen EXTERNAL_IP:80;
        server_name YOUR_DOMAIN_COM;

        # please change the regex to allow more dimensions
        # note: inside [...] the | is literal, so [3|4|6][0][0] would also
        # match "|00" - spell out the allowed dimensions instead
        location ~* ^/resize/([346]00|-)_([346]00|-)/(.+)$ {
            # strip the query string so URL parameters cannot be used
            # to start the transformation again (variables are not
            # expanded in the rewrite regex, so match everything with ^)
            if ($args != "") {
                rewrite ^ $uri? last;
            }

            # set base dir for resize folder
            root BASE_FOLDER/resize/;

            # pass request to internal resize server via try_files
            # check local existence, if not pass
            try_files /$1_$2/$3 /$1_$2/$3/ @resize;
        }

        location @resize {
            # use this as dns resolver
            resolver 127.0.0.1;

            # pass to backend
            proxy_pass http://127.0.0.1:8091$uri;

            # store result in filesystem
            proxy_store BASE_FOLDER$uri; # $uri already starts with /resize/

            # add permission
            proxy_store_access user:rw group:rw;
        }
    }

    # A second server, which will do the actual resize and is bound to 127.0.0.1
    server {
        listen 127.0.0.1:8091;
        server_name localhost;

        location ~* ^/resize/([346]00|-)_([346]00|-)/(.+)$ {

            # set base dir for resize folder
            alias BASE_FOLDER/source/$3;

            # image filter resize itself
            image_filter resize $1 $2;

            # max image size
            image_filter_buffer 10M;

            # progressive jpegs
            image_filter_interlace on;

            # image quality
            image_filter_jpeg_quality 80;

            # error page
            error_page 415 = /empty;
        }

        location = /empty {
            empty_gif;
        }
    }

Basically, it's one external-facing proxy server in nginx and an internal one which does the actual resize job. The result from the internal server is persisted on disk.

Replace all CAPITAL words with your current configuration.

Add this to the enabled sites in nginx, restart, and you can use the following URL structure:

http://yourdomain.com/resize/600_600/image.jpeg

This results in the following directory structure:

    ├── resize
    │   ├── 600_300
    │   │   └── image.jpeg
    │   ├── 600_400
    │   │   └── image.jpeg
    │   └── 600_600
    │       └── image.jpeg
    └── source
        └── image.jpeg
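One of the goals above was serving the results with correct HTTP caching headers. They are not part of the configuration shown; a minimal way to add them would be an expires directive in the external /resize/ location (the 30-day lifetime is an assumption, pick whatever fits your setup):

    # inside the external resize location: a resized image never
    # changes for a given URL, so cache it aggressively
    expires 30d;
    add_header Cache-Control "public";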

Why is it secure and fast?

  • Only certain dimensions are allowed (via the regex); an attacker cannot trigger arbitrary resize variations (they get a 404)
  • URL parameters appended to the image URL cannot be used to start the transformation again (the query string is stripped)
  • Serves only local files (or if you like S3 via http call), no remote loading
  • You can apply all nginx security features (SSL, rate limiting etc.)
  • You can apply all nginx caching header features
  • You can apply all nginx scaling, load-balancing and reverse proxy features
  • The image process itself is bound to localhost, no direct interaction
  • You can put this server behind CDN and add a HTTP Header Check in nginx (and set Auth Header in CDN config) for scaling and even more security
  • The amount of code is VERY limited :)
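The CDN header check mentioned in the list could be sketched like this; the header name X-Cdn-Auth and the token value are made-up placeholders, not part of the original setup:

    # inside the external server block: reject requests that do not
    # carry the secret header configured in your CDN
    if ($http_x_cdn_auth != "CHANGE_ME_SECRET_TOKEN") {
        return 403;
    }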

I decided against any remote-loading capabilities in nginx (you find a lot of examples in the nginx solution links above) due to risk, stability and maintainability concerns. In the end, these are two separate processes.

Thanks to the nice solution from Pawel Miech, this can be done easily in parallel, asynchronously and securely in Python 3 (Making 1 million requests with python-aiohttp).

The only remaining problem may be the very first transformation (cache stampede), which can bring down the system. You could mitigate this easily with a lock and a placeholder file.
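One possible sketch of such a mitigation uses nginx's limit_req module to throttle parallel requests for the same URI so they do not all hit the resize backend at once (the zone name, size and rate below are assumptions, not part of the original setup):

    # in the http block: one request-rate state slot per requested URI
    limit_req_zone $uri zone=resize_lock:10m rate=1r/s;

    # in the @resize location: queue bursts for the same URI
    limit_req zone=resize_lock burst=20;

Note that this only throttles; a real lock with a placeholder file would need an external component.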

Change the regex to allow more dimensions.
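For example, to additionally allow 200, 800 and 1200 pixel variants, the capture groups could be widened like this (purely illustrative, adjust to your needs):

    location ~* ^/resize/([23468]00|1200|-)_([23468]00|1200|-)/(.+)$ {
        # ... same body as above
    }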

With a current nginx you will also get WebP support, which can be pretty nice for your Chrome-based users.
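ngx_http_image_filter_module handles WebP since nginx 1.11.6; when the source image is WebP, the output quality can be tuned in the internal server, analogous to the JPEG setting (the value 80 is just an assumption):

    # inside the internal resize location, next to image_filter_jpeg_quality
    image_filter_webp_quality 80;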
