Running applications on TrueNAS Scale

All of the applications that run on TrueNAS Scale run unencrypted by default. When you only access them from the same network, and you control access to that network, the risk is reduced. Nevertheless, I prefer the communication to and from my services to be encrypted.

This fits the layered-security and defense-in-depth strategies.

Creating the PKI

When creating a Public Key Infrastructure (PKI) there are different solutions. DigiCert has a nice series of white papers on what a PKI is and how to use one. However, the scalable solution is a bit overkill for just one server with a load of services to secure.

By Hand

Previously I created a nice private PKI using this blog post; however, maintaining the infrastructure soon turns into a hassle. Keeping the certificate data secure is also challenging. With all the manual maintenance, one could easily decide to just say "fuck it" and use the X.509 DNS subject alternative names to create your own wildcard certificate. Understandable, but not really best practice anymore.

For now I create certificates with an expiration of one year.

Hardware Security Module

Using a Hardware Security Module (HSM) is the best solution, but also the most expensive: the HSM from Yubico starts at € 650. Really too much for me. And keep in mind, I am securing my home lab, not services for a multi-million-Euro company. Best to be realistic and see how I can get a more pragmatic solution in place.

Using an application

I found a nice solution that helped me a lot with certificate and key management: XCA. The application is open source and intended exactly for this purpose. I find it a nice alternative even for smaller organizations with a limited set of servers and/or services to protect.

Storing the data

The application supports local and remote databases. For my home lab, storing the data in an SQLite database is perfectly fine. However, I do not want to store the DB in a cloud storage solution such as iCloud Drive or Dropbox, but I do want to be able to manage the PKI from both my iMac and my MacBook Pro.

The simple solution is to store the DB on a USB stick. The overall cheapest option, for sure: create a nice encrypted DMG, store that on the USB stick and mount it when needed. Workable, but I would like a hardware-encrypted solution, and one that is affordable.

There are also hardware-encrypted USB drives out there. I settled for the Kingston IronKey Vault Privacy 50 Series. The 16 GB version is less than € 50 including VAT, and it is USB-C: perfect for this purpose.

While storing the DB on this stick is fine, if the stick is lost and/or broken I would have to start over. Having a backup stick is the solution, so I bought myself another IronKey. This one is a bit different; it has its own keypad: the Kingston IronKey Keypad 200 Series. This one I will carry in my tech pouch.

Now the working process is simply a matter of copying the DB over to the other drive whenever I have made changes. Perhaps I could automate the sync process?

Setting up the PKI

In order to secure the services, I need to decide a couple of things: what DNS name do I use? And is it sufficient to use just a Root Certificate Authority with service-specific TLS certificates, or do I need an intermediate certificate as well?

I have chosen to simply set up a Root CA with server certificates for the services and servers. The Root CA is valid for 10 years, and the server certificates are valid for one year.
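For reference, the same hierarchy can be sketched with plain OpenSSL (XCA does all of this through its GUI); the key sizes, file names and the example hostname are illustrative:

```shell
# Root CA, valid 10 years.
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
  -keyout rootCA.key -out rootCA.crt -subj "/CN=Home Lab Root CA"

# Key and certificate signing request for one service.
openssl req -newkey rsa:2048 -nodes \
  -keyout service.key -out service.csr -subj "/CN=truenas.localdomain"

# Sign with the Root CA, valid one year, with a SAN for the DNS name.
printf 'subjectAltName=DNS:truenas.localdomain\n' > san.cnf
openssl x509 -req -in service.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 365 -sha256 -extfile san.cnf -out service.crt
```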

I have chosen to use the domain name 'localdomain'. This is a non-publicly-resolvable domain and provides another layer of obfuscation and 'security'. Even a shoulder glance at my screens does not give away any sensitive information, since there is no way anyone would be able to resolve this to the correct internal IP address, let alone understand the architecture of the home lab.

Securing the services

There are two approaches to securing services: secure every service using the service itself, or place a reverse proxy in front of the services. With a reverse proxy, the TLS handling takes place at the proxy, which saves you the hassle of having to configure each and every service. Also, when a service moves to another server or port, you can simply change this in one location for all services.

While searching online I came across Nginx Proxy Manager (NPM). It looks very promising and is less work than doing it yourself with a plain Nginx instance.

Nginx Proxy Manager

As mentioned, I came across this service. Let's try to set it up. Should not be too hard…

Running NPM as a service on TrueNAS

This is a really easy way to get a reverse proxy working for TrueNAS Scale applications. Simply choose it from the catalog and deploy. Easy does it. Adding proxy hosts for any Scale application is straightforward. However, reverse proxying to a service that runs in a VM on the same server does not really work, even though I have set up the networking on the host in such a way that I can access the NAS perfectly. For some reason NPM is not able to communicate with any of the VMs. I understand up to a level what Kubernetes (K8s) is, but I do not have enough knowledge of the topic to experiment with the K8s implementation in TrueNAS. For me the fastest solution is to set up another VM running Ubuntu 22.04 LTS and run NPM from there.

Setting up a VM for running NPM

Setting up a VM is pretty easy. However, NPM is only available as a Docker image, which means that in the VM I first have to set up Docker and then run NPM inside it. Fine, Docker in a VM it is. Anyway, from within the VM I am able to connect to both the VM-based services and the Scale applications. Pretty sweet setup.
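Inside the VM, the deployment boils down to roughly the Compose file from the NPM quick-start documentation (ports 80/443 for proxied traffic, 81 for the admin UI; the volume paths are a sketch):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # proxied HTTP
      - '443:443'  # proxied HTTPS
      - '81:81'    # NPM admin interface
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

After `docker compose up -d`, the admin UI is reachable on port 81 of the VM.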

Creating the Proxy Hosts

Creating a certificate is pretty simple: export the .PEM with certificate and key in one file, then extract the certificate from this file so you end up with two files, one key and one certificate. Upload the certificate and key, create the proxy host, and lastly add a DNS rewrite for the service to AdGuard Home. Presto, it works.
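The split itself can be done with two OpenSSL one-liners; in this sketch a throwaway combined PEM stands in for the XCA export, and all file names are illustrative:

```shell
# Throwaway combined PEM (in practice this is the file exported from
# XCA, holding both the private key and the certificate).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=docs.localdomain" -keyout key.tmp -out crt.tmp
cat key.tmp crt.tmp > combined.pem

# Extract the private key and the certificate into separate files,
# ready for upload to NPM.
openssl pkey -in combined.pem -out svc.key
openssl x509 -in combined.pem -out svc.crt
```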

One small caveat: on the Mac it may help to run sudo killall -HUP mDNSResponder. This command resets the local resolver cache and ensures that Safari and other browsers properly resolve the new DNS name for the service. Otherwise you may end up with "unable to find" error messages.

Challenge with Paperless-ngx

With Paperless-ngx the setup works; however, when you log in you are faced with a CSRF error. This is simply solved by adding two environment variables to the setup of the application:


In my case I gave them the values:

This did the trick, and now this service is protected with a TLS certificate.
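The variable names and values are not reproduced above; for reference, and assuming the documented Paperless-ngx options for running behind a TLS reverse proxy, a typical pair looks like this (the hostname is illustrative):

```
# Hypothetical values; both variables are documented by Paperless-ngx
# for CSRF handling behind a proxy.
PAPERLESS_URL=https://paperless.localdomain
PAPERLESS_CSRF_TRUSTED_ORIGINS=https://paperless.localdomain
```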

Bonus NPM Config

Normally you manage NPM on port 81. Is it possible to add a proxy host pointing to port 81 so that the management of this service is also encrypted? Yes it is, and it is pretty easy: the same approach as for any other service. Create a certificate, deploy, and ready.

All services running on the server are now protected with TLS.

Adding support for a Certificate Revocation List

This is a really easy part of the setup. Using the XCA application, create the CRL and export it as a crl.der file. Next, set up a static-files service on the NAS; this runs on a custom port like any other service. Share the "host" folder as an SMB share. Now every time I (re)create the CRL, I just have to copy it onto the share and presto, service updated.
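If the static-files app happens to be nginx-based, the relevant part of its configuration boils down to something like this sketch (the port, the path and the folder name are my assumptions):

```nginx
server {
    # Custom port, like any other service on the NAS (illustrative).
    listen 8080;

    location = /crl.der {
        root /srv/static;                   # the folder shared over SMB
        default_type application/pkix-crl;  # media type for a DER CRL
    }
}
```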

Thanks to NPM, I simply create another proxy host for this CRL site, add a DNS rewrite to AdGuard Home, and presto: the CRL is available under a DNS name.

Like magic.

Providing chain of trust on my devices

Due to my devices being managed by an MDM, providing this chain of trust is pretty simple: just create a profile with Apple Configurator and push it to all devices, and now my self-signed root certificate and the certificates derived from it are trusted.


All in all a nice setup. It just works, and with simple DNS rewrites in my AdGuard Home service the services are accessible over TLS-encrypted connections. And I only have to "open" a limited set of "servers" in Twingate.

Now all access to internal services is handled by NPM; there is no longer any direct HTTP access. Since these applications and VMs run on the same device, the traffic does not even leave the server. And all my services are available using DNS names. No more direct IP connections needed.

Next steps

Next I would like to use Let's Encrypt certificates and let them handle the management of the certificates. However, at this moment my public-facing domain does not have any references to my internal network, and I would like to keep it that way.

In the future I might look into this tool-set to provide some automation of certificate creation and such. For now this works, and there is enough other stuff I want to learn and do.