How to Run NGINX Inside Docker (for Easy Auto-Scaling)


One of the most common Docker workloads is containerizing web servers like NGINX and Apache to run a high-performance content-delivery fleet that's easy to auto-scale and manage. We'll show you how to set it up with NGINX.

Setting Up NGINX Inside Docker

Docker is a containerization platform, used to package up your application and all of its code into one easily manageable container image. The process of doing this is pretty similar to how you’d go about setting up a new server—the container is a blank slate, so you’ll need to install dependencies, build your code, copy over the build artifacts, and copy over any configuration. Luckily, you don’t have to automate that much. NGINX already has a publicly available Docker container, which you can use as the starting point for your application.

Of course, depending on the application you’re containerizing, this can be a bit more involved. If you’re deploying a CMS like WordPress, you’ll likely need to have an external database, as containers aren’t designed to be persistent. A good place to start for WordPress in particular would be WordPress’s Docker container.
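As a rough sketch of what that separation looks like, you could run WordPress against a database in its own container (the names and credentials below are illustrative placeholders, not a production setup):

```shell
# Create a shared network so the containers can talk to each other
docker network create wp-net

# Start a MySQL container to act as the external database
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=wordpress \
  mysql:8

# Start WordPress, pointed at the database container by name
docker run -d --name wp --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme \
  wordpress
```

Because the database lives in its own container (ideally backed by a volume for its data), the WordPress container itself stays disposable.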

For the purposes of having something a little more involved than a simple Hello World webpage, we’ll create a new project directory and initialize a basic Vue.js application. Your configuration will be different depending on the content you’re serving, but the general idea is the same.
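If you want to follow along, the scaffolding looks roughly like this (assuming Node.js and npm are installed; my-app is a placeholder project name):

```shell
# Scaffold a new Vue project (follow the interactive prompts)
npm init vue@latest

cd my-app
npm install

# Produce a production build in the dist/ folder
npm run build
```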


At the root of your project, create a new file named simply Dockerfile, with no extension. This will act as the build configuration. The container starts as a blank slate, including only the applications and dependencies that ship with the base image. You'll need to copy over your application's code; if you're just serving static content, this is easy, but if you're working with server-side applications like WordPress, you might need to install additional dependencies.

The following config is pretty basic. Because this is a Node application, we need to run npm run build to get a distribution-ready build. We can handle this all in the Dockerfile by setting up a two-stage container build:

FROM node:latest as build-stage
WORKDIR /src
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build

FROM nginx as production-stage
RUN mkdir /src
COPY --from=build-stage /src/dist /src
COPY nginx.conf /etc/nginx/nginx.conf

The first line's FROM command pulls the node image from Docker Hub and starts a new build stage called build-stage. WORKDIR sets /src as the working directory, and the next line copies over package.json (and package-lock.json, if you have one). Then it runs npm install, copies over the app's code, and starts the build process. If your application needs to be built from source, you'll want to do something similar to this.

The next stage pulls the nginx image to serve as the production deployment. It makes a /src directory, then copies the /src/dist folder of build artifacts from the build-stage image into the production container's /src folder. Finally, it copies over an NGINX config file.

You'll also want to make a new file called .dockerignore, to tell Docker to ignore node_modules as well as any artifacts from local builds.
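A minimal .dockerignore for a setup like this might look like the following (assuming a standard Vue build that outputs to dist/):

```
node_modules
dist
.git
```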


The Dockerfile references an nginx.conf, which you’ll also need to create. If you’re running a more complex configuration with multiple configs in /sites-available, you might want to create a new folder for your NGINX configuration, and copy that over.

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
  worker_connections  1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  access_log  /var/log/nginx/access.log  main;

  sendfile        on;
  keepalive_timeout  65;

  server {
    listen       80;
    server_name  localhost;

    location / {
      root   /src;
      index  index.html;
      try_files $uri $uri/ /index.html;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
      root   /usr/share/nginx/html;
    }
  }
}
This is just an HTTP web server. The simplest way to set up HTTPS would be to run Let's Encrypt's certbot locally and copy the certificates from /etc/letsencrypt/live/ into the production container. These certs are valid for 90 days, so you'll need to renew them regularly. You can automate this as part of the container build process.
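As a hedged sketch, an HTTPS server block might look something like this once the certs have been copied into the image (the cert paths and example.com server name here are placeholders, not part of the config above):

```
server {
  listen       443 ssl;
  server_name  example.com;

  # Paths assume the certs were copied into the image at build time
  ssl_certificate      /etc/nginx/certs/fullchain.pem;
  ssl_certificate_key  /etc/nginx/certs/privkey.pem;

  location / {
    root   /src;
    index  index.html;
    try_files $uri $uri/ /index.html;
  }
}
```

You'd also add a corresponding COPY line to the Dockerfile to place the cert files at those paths.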

Once everything is in order, you can run the Docker build:

docker build . -t my-app

This will build the container as my-app, after which you're free to tag it and push it to a container registry for eventual deployment to a service like ECS. You should, of course, test it locally first with docker run, binding localhost:8080 to port 80 of the NGINX instance:

docker run -d -p 8080:80 my-app

Once you have a built image, deploying it in production is fairly simple. You can read our guide to setting up an auto-scaling container deployment on AWS ECS to learn more, or read our guide on setting up a CI/CD pipeline with containers to handle automated builds and deployments.