With the increasing centralization of personal web content into huge, monolithic entities, the traditional personal site is becoming scarce. Self-hosting your own web content used to be the rule rather than the exception, and I think that’s a loss not yet fully realized. Those centralized services are no better at preventing stale links, historically the biggest annoyance with regular websites, and they’re often far, far worse about privacy-invading tracking.
This site started as a way to write about various projects of my own, particularly those for which there was very little useful information in the search results. I went self-hosted because I’ve never had wonderful experiences with shared hosting, and I wanted to own and control my content, which does not happen on modern social media. All of my content is stored locally and is fully agnostic of the platform it’s hosted on — WordPress just happens to be the option of choice currently.
To understand the configuration I share at the end, it helps to understand how I got there. However, if you’re only interested in that, you can just skip ahead now.
My History of Hosting
I have gone through multiple iterations of hosting and sites through several decades, beginning with the early personal homepage hosting provided by ISPs in the ‘90s, on to shared hosting with custom domains towards the early ‘00s, self-hosting on dedicated hardware for the next few years, and then to self-hosting on VPS instances through to the current day. The best that could be said about much of it is that it worked to make a site reachable on the internet. Unfortunately, that glosses over a large number of negatives.
By far the most commonly used option, shared hosting uses a single server or VPS instance to host sites for multiple customers. At a theoretical level, there should be no functional difference between this and any other kind of hosting, but reality reveals its warts.
It’s a virtual certainty in shared hosting that the server will be oversold, effectively guaranteeing that there’s no hardware headroom left if one or more sites receives a spike in traffic. Latency goes up, often to the point of making the site unreachable, and the site owner has no recourse other than to contact support and wait it out. In recent years, many shared hosts have been attempting (and failing) to address this with customized caching solutions and watchdog processes. Those are a disaster waiting to happen when it comes to data integrity.
Security is also questionable on shared hosting. Many hosts roll their own solutions, often exposing vulnerabilities due to insufficient file access control (e.g., an NFS share allowing lateral movement) and over-privileged execution of untrusted code (e.g., a root-owned cron job). It’s very difficult, if not impossible, to protect your own site from weaknesses in the hosting outside of your control (e.g., delayed availability of security patches).
Fortunately, I never encountered the security problems, but my frustration with the performance issues led to me swearing off shared hosting for good. I cannot justify paying for a service that does not deliver.
Self-Hosting
For many, many years the go-to setup for hosting something like this site was LAMP (Linux, Apache, MySQL, and PHP). As a result, much of the software ecosystem was developed with that expectation, as evidenced by the heavy use of Apache-specific .htaccess files for site-specific configuration edits. It still works, and it was the hosting stack behind the first iteration of this site, but I feel there are better options now.
The second iteration of this site (the one right before this at the time of posting) still used Apache, but I had placed it behind Nginx, letting Nginx act as the TLS terminator. This is generally considered a better hosting setup because it minimizes Apache’s involvement in the parts it’s worst at and allows for higher traffic capacity on the same hardware (a problem this site will probably never have).
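For reference, that arrangement looked roughly like the following. This is a minimal sketch, assuming Apache listening on a hypothetical local port 8080; the real configuration had considerably more to it.

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /usr/local/ssl/example.com.pem;
    ssl_certificate_key /usr/local/ssl/example.com.key;

    location / {
        # Nginx terminates TLS and proxies plain HTTP to Apache locally
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}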
All of these have been hosted on instances of various sizes at Digital Ocean, a VPS provider I’ve been very happy with.
The Next Iteration
I have an aversion to doing distro upgrades after having been burned multiple times. Instead, I just spin up a new instance with the version I want and migrate everything over. That was the original plan for the third iteration: spin up a new Ubuntu 18.04 LTS instance, install the Nginx + Apache stack, and migrate. Fortunately, I ran it by a friend who suggested skipping Apache entirely and letting Nginx serve PHP itself. The remainder of the stack is unchanged.
PHP under Nginx
One of the deficiencies of running Apache + mod_php is that all PHP executes as the same user, regardless of whether you have one or twenty sites running on the server. Without augmenting the access control somehow, this means a breach in one could potentially spread to all. I wanted to address this, particularly because I often host various projects in tandem with my normal content.
The interpreter of choice for Nginx installations is PHP-FPM, and it works very well. However, most tutorials fall short in its configuration and completely gloss over the pool configuration options, resulting in all sites executing under the same user and group.
PHP-FPM supports an arbitrary number of named pools, each of which may be configured with its own user/group pair. I dedicate one pool per domain or subdomain, giving its user and group control over the corresponding PHP files.
The pool configuration:
[poolname]
user = poolowner
group = poolgroup
listen = /run/php/php7.2-fpm-poolname.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
An individual site’s PHP executes as the configured user/group, inheriting that security scope, while the socket’s owner and group are set to those of the web server. Done correctly, the site’s PHP cannot break out of its web root, significantly mitigating the risk of many types of vulnerabilities, from path traversal to remote code execution.
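The pool only isolates anything if the filesystem permissions line up with it. A quick sketch of one way to do that, assuming the placeholder pool user/group from above and a web root of /var/www/example.com:

# Give the pool’s user and group ownership of the site files
sudo chown -R poolowner:poolgroup /var/www/example.com
# Let the web server read static assets through the group
sudo usermod -a -G poolgroup www-data
# Keep every other pool’s user out entirely
sudo chmod -R o-rwx /var/www/example.com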
Beyond this, it’s important to use a currently supported version of PHP (e.g., 7.2) to receive security patches. The two most common versions as of writing (5.6 and 7.0) recently reached end-of-life status.
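On Ubuntu 18.04 with the stock packages (an assumption; paths vary by distro), each pool lives in its own file and changes take effect on reload:

# One file per pool, e.g. /etc/php/7.2/fpm/pool.d/poolname.conf
sudo systemctl reload php7.2-fpm
# Confirm the pool’s socket was created
ls -l /run/php/php7.2-fpm-poolname.sock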
MySQL/MariaDB
For many Linux distros, MariaDB has replaced MySQL as the default, and for the purposes of this post I’m going to just refer to it as MySQL.
This is the easy one. Outside of extreme scaling needs, the stock configuration works just fine. The only important piece is that each site has its own database user(s) with the minimum required permissions, and that’s very easy to accomplish either through the command line or one of many GUI tools.
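As a sketch of what that looks like on the command line (all names and the password are placeholders; note that WordPress needs DDL privileges on its own database for core upgrades and plugin installs):

CREATE DATABASE exampledb;
CREATE USER 'exampleuser'@'localhost' IDENTIFIED BY 'change-this-password';
-- Scope the grant to this one database only
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX
    ON exampledb.* TO 'exampleuser'@'localhost';
FLUSH PRIVILEGES;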
Nginx
I quickly latched on to the idea of going with only Nginx because of its less crufty configuration syntax and built-in support for modern niceties like TLS 1.3 and current ciphers. It is entirely possible to use PHP-FPM with Apache, but the streamlined feel of Nginx is incredibly welcome. My only loss of functionality is .htaccess support: I have to port those directives into the site’s configuration file manually.
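As an example of that porting, WordPress’s stock .htaccess rewrite block:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

collapses to a single try_files directive in Nginx (the same one that appears in the full configuration below):

location / {
    try_files $uri $uri/ /index.php?$args;
}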
The configuration that follows accomplishes a few things:
- The non-www https URL is the canonical one, and all other requests are redirected to it.
- SSL certificate files generated by Let's Encrypt are loaded and used.
- HTTP/2 is supported, which allows supported browsers to load the site’s assets with less overhead. Because I have structured the site to host every asset locally, this is even more effective.
- TLS 1.0 and 1.1 are not supported, as their usage is below 1% (Google, October 2018) and they are on track to be deprecated. TLS 1.2 is currently the most common, while TLS 1.3 will only grow in usage.
- The cipher list is based on the current best practices.
- OCSP stapling is enabled to eliminate the need for a browser to perform an additional request to validate the certificate.
- Strict Transport Security (HSTS) is enabled with its most constraining settings, including all subdomains and opting into browser preloading, so no non-https request will ever be attempted.
- Environment configuration files (e.g., .user.ini and .htaccess) are blocked.
- Current cache files generated by the WordPress caching plugin WP Super Cache are checked for and served when present.
- JS and CSS files have the appropriate cache and compression headers set, likewise with other common media files.
- PHP files are passed on to the corresponding PHP-FPM pool.
- Feature Policy and Content Security Policy headers are set to reflect the content of this site. In my case, I block virtually everything since no assets are hosted outside of this domain.
While some of this is site-specific (e.g., the CSP header), most of it is generally applicable across a wide variety of sites, and configuring it this way will result in both increased security and better performance. I feel this is a good base point for a self-hosted WordPress site in 2019.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate /usr/local/ssl/example.com.pem;
    ssl_certificate_key /usr/local/ssl/example.com.key;
    ssl_dhparam /usr/local/ssl/example.com.dhparam;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 6h;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /usr/local/ssl/example.com.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    root /var/www/example.com/;
    index index.php index.html index.htm;

    ssl_certificate /usr/local/ssl/example.com.pem;
    ssl_certificate_key /usr/local/ssl/example.com.key;
    ssl_dhparam /usr/local/ssl/example.com.dhparam;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 6h;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /usr/local/ssl/example.com.pem;

    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log;

    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    client_max_body_size 300M;

    sendfile on;
    send_timeout 300s;

    location ~ /\.user\.ini {
        deny all;
    }

    location ~ /\.ht {
        deny all;
    }

    # Cache
    set $cache_uri $request_uri;

    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) {
        set $cache_uri 'null cache';
    }
    if ($query_string != "") {
        set $cache_uri 'null cache';
    }

    # Don't cache uris containing the following segments
    if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
        set $cache_uri 'null cache';
    }

    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
        set $cache_uri 'null cache';
    }

    # Set CSS and JS headers
    location ~* \.(?:js|css)$ {
        log_not_found off;
        gzip on;
        gzip_comp_level 6;
        gzip_min_length 1100;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_types text/css text/js text/javascript application/javascript application/x-javascript;
        expires 1y;
        add_header Vary "Accept-Encoding";
        add_header Cache-Control "public";
        add_header X-Content-Type-Options nosniff;
    }

    # Set image, audio, video, and font headers
    location ~* \.(?:png|jpg|jpeg|gif|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|ttf|ttc|otf|eot|woff|woff2)$ {
        log_not_found off;
        expires 7d;
        add_header Cache-Control "public";
        add_header X-Content-Type-Options nosniff;
    }

    location / {
        try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php7.2-fpm-poolname.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;

        add_header Vary "Accept-Encoding";
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Frame-Options sameorigin;
        add_header Expect-CT "max-age=86400, enforce";
        add_header Referrer-Policy "no-referrer-when-downgrade";
        add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; connect-src 'self'; font-src 'self' data:";
        add_header Feature-Policy "autoplay 'none'; camera 'none'; geolocation 'none'; microphone 'none'; midi 'none'; payment 'none'; vr 'none'; vibrate 'none'";
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
    }
}
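After any edit to a configuration like this, it’s worth validating before reloading, since a syntax error can take the site down. The standard commands:

sudo nginx -t
sudo systemctl reload nginx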
WordPress
There are only two must-haves on the WordPress side of things, the rest being entirely dependent on the site’s needs. The first is a page cache (WP Super Cache in my case), and the second is Wordfence, whose two strongest features are an IP blacklist and a web application firewall.
WP Super Cache is my choice of cache because of its relative simplicity. In my opinion, a page cache is all that is needed; I have seen no end of issues stemming from two other common cache layers, object caching and database caching, which often cause very sporadic and difficult-to-debug errors. A page cache’s sole purpose is to reduce serving overhead for effectively static content in the event of an unexpected surge in traffic.
WordPress has a reputation for insecurity in some circles, but that perception greatly benefits from additional context. The core of WordPress is largely secure, only occasionally experiencing a vulnerability of any consequence (e.g., Unauthenticated Page/Post Content Modification via REST API). Where it gains the reputation is in its incredibly large install base that makes a vulnerability immediately more exploitable and in its community plugin and theme ecosystem.
The overwhelming majority of WordPress site compromises, however, are due to vulnerabilities in third-party plugins and themes. This is where Wordfence shines. Its WAF provides 0-day protection that often arrives quicker than official patches, and its IP blacklist blocks the most common offenders before requests even reach potentially vulnerable code. Wordfence also provides two-factor authentication, which effectively nullifies all brute force attempts.
Recap
Combining all of these pieces results in a very secure WordPress installation for less than the cost of shared or managed hosting, and it’s also the fastest I’ve seen WordPress perform on any server, which greatly helps SEO. It builds on the infosec concept of defense in depth, layering the security features of the setup to maximize protection:
- The first layer of protection is the Nginx TLS termination, offering protection against man-in-the-middle and eavesdropping attacks for both the server and visitor.
- At the highest PHP-related level, on the edge where the request comes in, the WAF and IP blacklist filter out known attacks and attackers. Only requests that pass this layer will be processed by the WordPress environment.
- Once the request reaches WordPress, the isolation of PHP-FPM limits the damage any filesystem-related vulnerability can achieve, while the MySQL permission constraints keep any SQL injection vulnerability from touching other sites’ databases.
- In the response back to the visitor, the Content Security Policy and Feature Policy headers enable the browser to restrict JavaScript and other execution to only what the site explicitly says it needs, greatly minimizing the damage an exploited cross-site scripting vulnerability could achieve.
A secure site is one that’s able to endure the trials of time and remain online, preserving its corner of the web.