Configuring MinIO Part 1: Docker with Nginx Reverse Proxy

January 18th, 2024|Guides, Techy Stuff|

Recently I have been looking at options for storing PC backups. Currently I use the rather excellent Mac app, Arq, to back up to a local server (via a network share). I also have the software set to do a secondary backup via SFTP to a cloud server.

The limitation with this second backup’s use of SFTP is that it doesn’t allow for any routine processing of files on the server itself. For cold storage this is fine. However, when I need to validate that my backups are correct, which is scheduled to occur every month or two, comparing the checksum of a cloud copy against the local copy has to be performed on my local PC, meaning the entire online backup has to be downloaded. This adds significant time to verifying backups and also eats into data allowances where the destination meters bandwidth.

MinIO is a free, open-source blob storage solution that implements S3-compatible storage (think Amazon AWS S3). Among other capabilities such as versioning and retention policies, the product allows checksums for files in a data store to be validated from the server itself. When comparing a backed-up file in the cloud to the local copy, only the checksums have to travel over the network, not the entire file. For my use case this is perfect: using MinIO over SFTP avoids time spent waiting for files to download and keeps bandwidth to a minimum whenever I do monthly checks.
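
As a sketch of the idea, here is roughly how such a check might look using the MinIO Client (mc). The alias, bucket, and file names are hypothetical, and note that the S3 ETag only equals the file’s MD5 for objects uploaded in a single part, so treat this as illustrative rather than a universal verification method.

Bash
```shell
# Checksum of the local copy (hypothetical file name)
md5sum backup-2024-01.tar.gz

# Ask the server for the object's ETag -- no download required.
# 'myminio' is a hypothetical mc alias pointing at your MinIO server.
mc stat myminio/backups/backup-2024-01.tar.gz | grep -i etag

# For single-part uploads the two values should match.
```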

Before moving on, it’s worth noting there are alternative free products out there. SeaweedFS and Ceph both came up as common alternatives. I settled on MinIO as it appeared to strike a good balance between maturity and enterprise viability (I’m not a business, but a product that is proven business-ready is still desirable) while reportedly being less complex than Ceph. SeaweedFS makes some claims around higher performance, which may or may not be true, but for hourly or daily backups performance sits lower on my requirements list, and I won’t be dealing with vast amounts of data anyway.

Lastly, I use Nginx as a reverse proxy for multiple products, so I decided to adopt it here too. I also decided to use Docker for the application setup, ideally to keep that aspect contained and simple. MinIO supports installation directly on the OS, and most information out there assumes that setup type; Docker is supported and viable, however.

Alongside the above scope concerning Nginx and Docker, the one other significant consideration is that I will be running MinIO in a Single-Node Single-Drive (SNSD) configuration. This mode is not recommended for users needing high availability of the data, but I only have the one VPS plan I can use, and the data I am saving are backups; I keep at least two other copies of the data, so a failure of my VPS is not catastrophic. MinIO supports deployment in clusters across systems and with multiple drives, but that is outside the scope of this article. For my purposes I have a VPS with a single 4TB partition.

Prerequisites

  • Server has gone through initial setup.
  • Nginx
  • Docker and Docker Compose
  • Two subdomain addresses, both mapped to your server. For example:
    • minio.mysite.com (S3 Access)
    • minioconsole.mysite.com (Web Console)

Debian is assumed to be the server OS; however, almost all steps will work on other Linux distributions. You may need to use a different package manager for any installs along the way.

Setup a Storage Location

As a non-root user, create a folder to store the backups. I will use /data/minio. The -p below will cause the parent ‘data’ folder to be created if it doesn’t exist. Note that creating a directory under / typically requires root, so use sudo and then hand ownership back to your non-root user.

Bash
sudo mkdir -p /data/minio
sudo chown "$USER":"$USER" /data/minio

Create the MinIO Docker

Create a folder for your MinIO docker instance. In my case I’ll just create this in the home directory for my non-root user.

Bash
mkdir ~/minio

Navigate to this folder and then create a docker-compose.yaml file.

Bash
nano docker-compose.yaml

Supply the following configuration.

YAML
version: "3.7"

services:
  minio:
    image: quay.io/minio/minio
    command: server --console-address ":9001" /mnt/data
    environment:
      - MINIO_ROOT_USER=CHANGE-ME
      - MINIO_ROOT_PASSWORD=CHANGE-ME
    volumes:
      - type: bind
        source: /data/minio
        target: /mnt/data
    ports:
      # Bind to localhost so MinIO is only reachable via the Nginx reverse
      # proxy (ports published by Docker are opened outside of UFW's rules)
      - "127.0.0.1:9000:9000"
      - "127.0.0.1:9001:9001"

Note that it is critical that you set a strong password. The MinIO app does not support MFA, so initially your MinIO super user account will be secured by username and password only. In Part 2 I will detail how to use a third party OpenID provider to handle authentication with MFA and disable the root account. The MinIO team further recommends you use a unique user name that will be hard to guess to further protect the login.
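
One low-effort way to generate a hard-to-guess user name and a strong password, assuming openssl is available on your system:

Bash
```shell
# Random, hard-to-guess user name (24 hex characters)
openssl rand -hex 12

# Long random password (40 base64 characters)
openssl rand -base64 30
```

Paste the resulting values into MINIO_ROOT_USER and MINIO_ROOT_PASSWORD, and keep a copy somewhere safe such as a password manager.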

If you used a different data directory, change that as necessary by updating the source path.

Save the file when done. Note that port 9001 will be used for the console access, and 9000 for remote S3 access. Both will be behind the Nginx reverse proxy so this isn’t hugely important. If you do change the ports however, you will need to carry the changes through in later steps.

With the docker-compose.yaml file configured, it is time to launch the service.

Bash
docker compose up -d

If the service starts up and shows as running, move onto the next steps.
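
A couple of quick ways to confirm this: check the container state and logs, and probe MinIO’s documented unauthenticated liveness endpoint on the S3 port.

Bash
```shell
# Container state and recent logs
docker compose ps
docker compose logs --tail 20 minio

# MinIO exposes a liveness endpoint on the S3 port;
# an HTTP 200 response means the server process is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000/minio/health/live
```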

Nginx Reverse Proxy

You’ll notice in the prerequisites and the Docker configuration that there are two addresses to be used. The first, docker port 9000, is the interface your backup client communicates with for data transfers. The second, docker port 9001, is for the web interface. These will be mapped to the subdomains you’ve created already (I hope, otherwise go do that now), and both will be accessed over port 443.

If you don’t want to use two subdomains, there is the option to use a single domain and have a sub path redirect to the admin console. I won’t go over that here, but you can refer to the MinIO documentation on that configuration as well as further details on the two subdomain option.

Start out by creating the S3 services Nginx configuration file.

Bash
sudo nano /etc/nginx/sites-enabled/minio.mysite.com

Add the following config. Note that to proxy the traffic we use localhost and port 9000. Save the file when done.

Nginx
server {
    server_name minio.mysite.com;

    # Allow special characters in headers
    ignore_invalid_headers off;
    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;
    # Disable buffering
    proxy_buffering off;
    proxy_request_buffering off;


    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300;
        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;

        proxy_pass http://127.0.0.1:9000;
    }
}

The configuration above is important, especially for the S3 API access. Leaving this out and simply doing the proxy passthrough is likely to cause connection issues when you go to use the interface.

Now, repeat the above for the web console. First create an Nginx configuration file.

Bash
sudo nano /etc/nginx/sites-enabled/minioconsole.mysite.com

Add the following configuration. Note the use of port 9001 this time. Save the file when done.

Nginx
server {
    server_name minioconsole.mysite.com;

    # Allow special characters in headers
    ignore_invalid_headers off;
    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;
    # Disable buffering
    proxy_buffering off;
    proxy_request_buffering off;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;

        # This is necessary to pass the correct IP to be hashed
        real_ip_header X-Real-IP;

        proxy_connect_timeout 300;

        # To support websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        chunked_transfer_encoding off;

        proxy_pass http://127.0.0.1:9001;
    }
}

Confirm that all Nginx configuration is valid.

Bash
sudo nginx -t

Then restart the Nginx service.

Bash
sudo systemctl restart nginx

You could at this point have the above web services listen on port 80 and test that (add “listen 80; listen [::]:80;” to each config file and make sure your firewall accepts inbound traffic on port 80). Instead, we’re going to use Let’s Encrypt to serve our connections over https (port 443).

Enable HTTPS with Let’s Encrypt

If you do not already have it, install certbot.

Bash
sudo apt install certbot python3-certbot-nginx

Once that is done, generate the first certificate, answering any questions along the way.

Bash
sudo certbot --nginx -d minio.mysite.com --elliptic-curve=secp384r1

Repeat for the admin console.

Bash
sudo certbot --nginx -d minioconsole.mysite.com --elliptic-curve=secp384r1

This should have updated your two Nginx configuration files from earlier. Feel free to check those, and when you’re happy to move on, restart the Nginx service.
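
For reference, the lines certbot inserts into each server block typically look something like the following (the certificate paths will use your actual domain, and each line is tagged “managed by Certbot”):

Nginx
```nginx
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/minio.mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/minio.mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
```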

Bash
sudo systemctl restart nginx

There is one final step you may need to follow. If you used my initial Debian server setup guide, then you will have enabled the UFW firewall and at this point only port 22 (SSH) may be enabled. Update the UFW firewall rules using the following:

Bash
sudo ufw allow 'Nginx Full'
sudo ufw status

At minimum the following rules should be output:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)

If you are using a different firewall, then follow the steps there to enable port 443.

Note that the above rule (Nginx Full) will also allow traffic on port 80 through the firewall. You may choose to restrict UFW to 443 only; however, when Let’s Encrypt was set up, certbot configured a 301 redirect from http (port 80) to https (port 443) automatically, so leaving 80 open simply allows those redirects to work.

Browse to https://minioconsole.mysite.com and confirm you can access the login page. To log in, use the root user account supplied in the initial Docker configuration file.
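
If you prefer a quick command-line smoke test first, both endpoints can be probed with curl (substitute your own domains). The console should serve the login page, while the S3 endpoint should answer an unauthenticated request with an auth error rather than timing out, which confirms the proxy is passing traffic through to MinIO.

Bash
```shell
# Console: expect HTTP 200 (the login page)
curl -s -o /dev/null -w "%{http_code}\n" https://minioconsole.mysite.com

# S3 API: an unauthenticated request to the root is typically
# rejected with 403, proving the proxy chain is working
curl -s -o /dev/null -w "%{http_code}\n" https://minio.mysite.com
```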

Conclusion

MinIO should now be configured. You may be tempted to go ahead and create some user accounts and storage buckets, but I would recommend holding off. Instead, look to integrate the user accounts with an OpenID provider so that you can add multi-factor authentication to your setup and remove the need for a root login secured by username and password only. This will improve the security of your web-facing service. I will go over how to do this in Part 2.